100% Code Coverage Is Useless

Many companies use some form of test coverage metric to measure how well tested their code is. Oftentimes these metrics are set as rules that must be maintained at all times. This is a terrible idea, though. Test coverage in general is a poor metric that doesn’t actually tell you anything meaningful about your code or the confidence you should have in it being bug free. In this video I talk about why test coverage is a bad metric and what you should do instead.
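As a hedged illustration of the video's point (the `isEven` function discussed in the video is along these lines): a test can execute every line of a buggy function without asserting anything, producing a perfect coverage score and zero confidence.

```javascript
// Hypothetical isEven with an inverted condition: it reports
// ODD numbers as even.
function isEven(n) {
  return n % 2 === 1; // bug: should be === 0
}

// This "test" executes 100% of isEven's lines, so the coverage
// report shows a perfect score...
isEven(4);
// ...but with no assertion, the bug is never caught:
// isEven(4) returns false and isEven(3) returns true.
```

Coverage tools only count which lines ran, not whether any behavior was verified.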

📚 Materials/References:

🌎 Find Me Here:

⏱️ Timestamps:

00:00 - Introduction
01:18 - Problem #1
04:45 - Problem #2
09:30 - Problem #3

#CodeCoverage #WDS #JavaScript
Comments

This is actually interesting. I worked for a company whose goal was to achieve 100% code coverage; the code was still buggy because devs didn't know how to test properly (me included, haha).

MurilloVieira

This is a great video. I’m a big fan of unit tests, but I’ve never understood the obsession with code coverage at some companies. I appreciate you sharing these insights!

thecommoncoder

Code coverage is way different from code quality.

Code coverage is used to determine whether the tests exercised every line of code. It's a way to find what code isn't covered by tests.

Companies use it to make sure that everything is tested thoroughly.

And then this is just a step in the process. To ensure the code works, lots of tools are used.

Code quality should be determined by developers/architects, and a combination of tools can be used: linters, code-smell tools, checks that the code adheres to a coding standard, etc.

NewQuietBear

My project is a complex distributed web app. Initially we did not have any tests at all. Slowly we introduced unit tests, but because of the distributed nature we were still seeing failures on clients' ends. It was badly written code, for sure, but it led to stakeholders losing confidence. We then added Cucumber suites, working along BDD principles, and that did give the stakeholders some confidence. In the end it's a mix of TDD and BDD that will make you more confident.

anothermouth

I disagree with the advice of more integration and e2e tests than unit tests. I joined a new project within a company that used integration/e2e tests with zero unit tests to cover all test cases using Cypress, and it was a total disaster: if you change one thing, tests in other repos fail, and the suite takes a long time to execute, which slows down CI/CD. After months of convincing, the team agreed to add unit tests and rely less on integration and e2e tests: "The Testing Pyramid". I agree we should not just rely on code coverage; it's up to the devs to code review properly even if it shows 100%.

vampz

One potential answer to your concern of 100% coverage but not 100% functionality test (i.e. your isEven function) is mutation testing. Look at Stryker and others.
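A rough sketch of the idea behind the commenter's suggestion: mutation testing flips operators or constants in the code and re-runs the test suite; a "mutant" that survives reveals a weak test. Tools like Stryker automate this. Below is a hand-rolled toy version (all names are illustrative).

```javascript
// Toy mutation test: the "mutant" flips the original's operator.
// A good suite should fail (kill the mutant) when run against it.
const original = (n) => n % 2 === 0;
const mutant = (n) => n % 2 !== 0; // mutation: === flipped to !==

// suite() returns true when all of its assertions pass.
function suite(isEven) {
  return isEven(2) === true && isEven(3) === false;
}

const mutantKilled = !suite(mutant); // the suite fails on the mutant
```

A suite that only checked `isEven(2)` without asserting the result would let the mutant survive, exposing the gap that a line-coverage number hides.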

jbanderson

Hi Kyle, informative as always! Can you also create a video on testing boundaries in unit tests?

navneetkumarsharma

For me, code coverage testing is more about catching simple things like typos in variable names that don't surface until runtime, or configuration errors in uncommon situations. I had one case where it turned out I had not set up logging properly, and unfortunately I didn't catch it until the app tried to log an error.
I completely agree, though, that testing is hard. Simple metrics like this are often emphasized because they're easier for managers and executives to understand, and because they can be used for marketing, even though 100% code coverage doesn't really say anything about how well the code was actually tested.

jacob_s

Another common issue with the 100% metric is that real world code has to include error handling for things outside your control: for example an external network call failing. To test this, you need to be able to inject or mock that error. After a while, all you end up doing is testing your ability to simulate different types of error so that you can test they are handled correctly.
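To make the injection point concrete, here is a minimal sketch of covering an error-handling branch by injecting a failing dependency (`loadConfig`, `failingRead`, and `workingRead` are made-up names for illustration):

```javascript
// The catch branch can only be covered by a dependency that
// fails on demand, not by a healthy real-world call.
function loadConfig(read) {
  try {
    return JSON.parse(read());
  } catch (err) {
    return { fallback: true }; // the error path we want covered
  }
}

// In tests, simulate the failure instead of breaking a real file:
const failingRead = () => { throw new Error("EACCES"); };
const workingRead = () => '{"port": 8080}';
```

As the comment notes, much of the test code here exercises the simulation itself rather than production logic.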

pwalkleyuk

While I agree that high code coverage shouldn't be an end in itself, my experience is that pushing for reasonably high coverage (>80%) does lead to better code, particularly in large/complex projects. "Better" encompasses not only SOLID and robustness, but also agile development and maintenance. Breaking changes are identified faster. It's easier to zero in on the hows and whys of unforeseen failures. This in turn makes for more efficient agile development, faster fixes when customers report a problem, new devs entering the project understanding the granularity of the application faster, etc.

augustaseptemberova

One of the best benefits of unit testing is that it makes you look at the code you've written for longer. If getting to 100% coverage aids that, then I'm all for it.

EricSundquistKC

I believe coverage is an important metric, focusing on quantity rather than quality. What’s more important than reaching a specific threshold is being able to monitor whether coverage decreases, which typically happens when new code is added without corresponding tests or when tests are removed. The situation you mentioned (deleting lines) can be managed by using skip comments (e.g., /* istanbul ignore next */). This way, we can deliberately specify which code is not being tested, rather than allowing coverage to unintentionally decline.
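For reference, the directive this comment mentions looks like the following in practice with Istanbul/nyc (`panic` and `safeDivide` are illustrative names):

```javascript
// Excluding a deliberately untested function from coverage.
/* istanbul ignore next */
function panic(msg) {
  // Defensive-only path; intentionally excluded from coverage.
  throw new Error(msg);
}

function safeDivide(a, b) {
  if (b === 0) panic("division by zero");
  return a / b;
}
```

With the hint in place, the coverage report won't silently drop when the untested defensive code is added, which supports the commenter's point about monitoring trends deliberately.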

kuscamara

It's just Goodhart's Law.
When a measure becomes a target, it ceases to be a good measure.

Saru-Dono

There is a concept called the Test Pyramid. Also, to check the quality of the unit tests themselves, we can try mutation testing.

arunprakash

This is a good point. My company also has a lot of integration and e2e tests, but they run very long because each test has to ensure isolation and requires more setup, which greatly hinders our deployment and iteration time. As a result, we prefer to run more unit tests instead because they are faster.

levideng

I often see arguments like "100% code coverage won't guarantee that your code works as expected" or "unit tests won't catch a certain bug" used as excuses not to write unit tests or collect coverage metrics. Though these statements are true, you will find that code with a high level of good unit tests is usually a lot better than code with low coverage or none. I also think the argument for fewer unit tests and more integration tests is flawed. As unit tests are quicker to write and run, and can be written at the same time as your application code or before it (TDD), you almost certainly want many more unit tests than integration or E2E tests. You wouldn't want to test complex calculations or validation rules using integration or E2E tests. This isn't to say that integration or E2E tests aren't just as important; both have their place and should be used appropriately.

jamesadcock

I'll take 100 over 0 all day, any day; lots of devs never want to write any tests and prefer to just ship to prod and have the client complain.

KhanhNguyen-hdws

Agree 100%. Been there, done that. It also makes devs write pointless tests just to fulfill the coverage requirement.

rodrigolaporte

Unit tests and code coverage are very important, but these metrics do not create a large sense of confidence.

Adding code quality tools for linting, code quality scanning, OSS vulnerability scans, etc. are all very important.

At my company, in my team, we have an 80% threshold for coverage, but we also need to pass all the quality gates beyond that before code can be merged.

Also, testing 80+% of code doesn't:

a) mean you wrote the right / correct tests
b) cover all edge cases
c) validate that all the functional requirements for the feature(s) were even validated. Coverage != full confidence the code does everything in all the different cases you expect it to.

These can be further captured through integration and e2e testing.

It’s such a blurry area of the SDLC. Every team and company has different opinions and processes.

b.tanner

Unit tests made me think about how to write code that doesn't require multiple nearly identical test cases (like Problem #3) and how to avoid writing code like Problem #2.
That takes some of the burden off the coverage policy.
When refactoring the whole code base, I just delete all the tests and write new unit tests from scratch without looking back.
Manual integration testing is also still required, because I can't get any level of confidence from my unit tests alone.

means