Writing Tests For CSS Is Possible! Don’t Believe The Rumors - Gil Tayar | CSSconf EU 2019

You know that fear. The fear of changing something in your CSS. Deleting a CSS rule is an exercise in calming yourself down, in telling yourself that it's OK, that you're absolutely sure deleting that rule won't change anything.

And only manual testing can assuage that fear. And yet, even then, you're still frightened that you haven't checked _everything_, that you've missed something. Not to mention that it's amazingly boring.

Never fear again! Testing your CSS code, testing the visual aspects of your code, is now possible, and I will show you how. A slew of new SaaS tools has emerged that lets us write tests checking that everything looks the same as it did before (even if we moved from BEM to CSS-in-JS).

So grab that keyboard, refactor your CSS, because writing tests for it is now possible!
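As an illustration (not a tool from the talk itself), a visual regression test can be as small as a single screenshot assertion. This is a minimal sketch assuming Playwright's test runner; the URL and selector are hypothetical:

```ts
// header.visual.spec.ts - a minimal visual regression test sketch.
// Assumes @playwright/test is installed; the app URL and selector are placeholders.
import { test, expect } from '@playwright/test';

test('header keeps its visual appearance', async ({ page }) => {
  await page.goto('http://localhost:3000/');
  // The first run stores a baseline screenshot of the header;
  // later runs fail if the rendered pixels drift from that baseline.
  await expect(page.locator('header')).toHaveScreenshot('header.png');
});
```

With a baseline in place, deleting a suspicious CSS rule becomes a matter of re-running the suite and reviewing the diff images instead of eyeballing every page by hand.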
Comments

Damn, Tayar slayed it - one of the most entertaining frontend talks I've seen in a while, and, impressively, on a notorious part of the stack. However, I would add one more remaining, unsolved elephant to the mix of CSS testing grievances: the issue of testing the frontend against user-generated persistent state changes, many of which happen in some backend or cache (but also, more recently, in virtual DOMs). In such projects, past a certain point of complexity, e.g. in wireframing and/or business logic, screenshot variability compounds from one state transition to the next, and the sheer number of possibilities makes it difficult to perfectly replicate scenarios on every iteration, much less compare two separate snapshots. Even the transitions themselves (think UI animations) become harder to catch. And that is just for user input - factor in async or time-contingent processing and you might find, as I have, that an automated e2e instruction set is never quite the same from one run to the next!

In that same vein, spawning multiple tests from a single mid-test state feels like an impossible task, because it compels you to export a copy of the whole test instance's state into another instance that resumes at the exact same point in the user experience. Why would you want to do this? To save the time and processing spent on otherwise redundant steps that reach a particular configuration/state needed for reproducibility.

(Yes, I know mocks can help here, but that seems tedious, particularly when you have to reconcile acceptable diffs with unacceptable ones.)

To my original point, because of how sandboxed browser runtimes are, spawning concurrent derived test instances from mid-test state is a pain and a half! I scoured the internet for ideas but came up empty. Any ideas would be greatly appreciated, because this is eating me up at work.

feihcsim
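One partial workaround for the question in the comment above is to persist whatever browser-side state is serializable (cookies and localStorage) after an expensive setup run, and let derived tests resume from that saved point in parallel. This is only a sketch assuming Playwright's storageState; it does not capture backend, cache, or in-memory virtual DOM state, which is exactly the hard part the comment describes. File names, URLs, and selectors here are hypothetical.

```ts
// save-state.setup.ts - run the expensive setup once and persist serializable browser state.
import { chromium } from '@playwright/test';

async function globalSetup() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:3000/login'); // hypothetical app URL
  await page.fill('#user', 'demo');
  await page.fill('#pass', 'demo');
  await page.click('button[type=submit]');
  // Only cookies and localStorage are written out; DOM and server state are not captured.
  await page.context().storageState({ path: 'mid-test-state.json' });
  await browser.close();
}

export default globalSetup;
```

```ts
// derived.visual.spec.ts - derived tests resume from the saved state and can run concurrently.
import { test, expect } from '@playwright/test';

test.use({ storageState: 'mid-test-state.json' });

test('dashboard still renders the same after resuming', async ({ page }) => {
  await page.goto('http://localhost:3000/dashboard');
  await expect(page.locator('.dashboard')).toHaveScreenshot('dashboard.png');
});
```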

Now this is a brilliantly made presentation. 10/10

SteveUrlz