As a frontend developer, I filed testing under "not worth it" until recently. Most projects were too small, or changed infrequently enough, that the time spent writing and maintaining tests wasn't worth it. So much of frontend web development is trial and error that testing just felt redundant.
But things have changed:
- We have good package managers now. It's possible to reuse components across several projects in a way that's more meaningful than copy/paste. Of course, this comes with the small downside that it's now possible to update a dependency and break something.
- Other people change code and break things. Tests make them aware when something breaks. This isn't a new problem, but…
- Web development has moved from building sites to building apps, which are more complex and interdependent. Adding or changing one component can break others.
- Cross-browser testing sucks. But so does running your own Selenium server. Now we can do both in the cloud, and automate everything.
The last point is clutch. Nothing is more painful than cross-browser testing (and ensuring that you actually tested everything). If there's a way to automate that, the pain becomes well worth it.
Writing tests is still work, and the trick is to balance the number of tests written. I've found the best approach is to write tests in a defensive manner: Defend against forgetfulness, ignorance, or frustration.
- If your thing will be used by other people (or by you in six months), your tests should be comprehensive enough to describe the thing you built.
- If your thing will be relied on for other people's things, your tests should be comprehensive enough to assure them that your thing isn't broken when they hit a problem.
- If your thing will be modified by other people, your tests should yell at them when they break it.
- If you break something you're using, and then manage to fix it, write a test for it.
Following these guidelines seems to work pretty well.