Any software house worth its salt cares about the quality of its output. Things are no different at MetaBroadcast. For a small shop like ours, hiring a whole offshore team of testers, as some big companies do, is not the way to go. That makes automated tests the best thing since sliced bread. So, upon joining the team, I was asked to read one of the seminal works on TDD, which demonstrated how much everyone here cares about writing great software.
but unit tests should only test individual features
True. TDD and unit tests can only go so far. To gain some peace of mind, one has to add manual tests, integration tests and test coverage reporting, and mix it all into a balanced blend that gives you confidence in your code. This post explores test coverage a little. Test coverage is nothing more than a way to verify how much of your code your tests actually exercise. Bear in mind that code has several possible execution paths, so a method can be fully covered line-by-line while whole paths through it remain untested; the number of independent paths through a piece of code is known as its cyclomatic complexity (thanks Fred for bringing the concept up).
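To make the path-counting idea concrete, here is a minimal sketch (the `Discount` class and its figures are hypothetical, not from our codebase) of a method whose line coverage can hit 100% while most of its paths stay untested:

```java
public class Discount {
    // Two independent if-branches give this method a cyclomatic
    // complexity of 3, and four distinct paths overall: a single
    // test with member=true and sale=true touches every line
    // (100% line coverage) yet exercises only one of those paths.
    public static int apply(int price, boolean member, boolean sale) {
        int result = price;
        if (member) {
            result -= 10; // branch A: membership discount
        }
        if (sale) {
            result -= 5;  // branch B: sale discount
        }
        return result;
    }
}
```

This is why branch (or path) coverage is a stricter yardstick than line coverage: the line counter is satisfied long before every combination of branches has been tried.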
But is it really relevant to test every single getter/setter of your objects? Martin Fowler thinks it isn't, and argues that a 100% coverage report may indicate that developers are writing code and tests to make the numbers (or maybe the managers) happy, rather than creating tests that are meaningful for their project. So, what is the correct balance?
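As a sketch of the distinction (hypothetical `Account` class, not from our codebase): asserting that `getBalance()` returns whatever the constructor stored pads the coverage figure without encoding any intent, whereas a test of the overdraft rule in `withdraw()` documents behaviour that actually matters.

```java
public class Account {
    private int balance;

    public Account(int openingBalance) {
        this.balance = openingBalance;
    }

    // A test that only calls this getter ticks the coverage box
    // but tells a reader nothing about what the class guarantees.
    public int getBalance() {
        return balance;
    }

    // This is the behaviour worth a test: withdrawals must never
    // overdraw the account.
    public void withdraw(int amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }
}
```

A test that withdraws and then checks both the new balance and the rejection of an overdraft covers the getter incidentally, while actually pinning down the class's contract.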
Testivus, the master of all things testing, tells a fun story about the right amount of testing. In it, he asks: who is better placed to judge the correct amount of coverage than the developers involved? After all, they are the ones who know best what their programs are trying to achieve.
At MetaBroadcast HQ, we use the great EclEmma Eclipse plugin, which lets each developer check the coverage of their projects, as shown in the image below:
EclEmma runs coverage reports locally within Eclipse, letting you expand the project resources and see how much of the code is tested on a per-method basis. Running it locally has the added advantage of making it easier for developers to find out what needs more testing, by showing exactly which parts of the source their tests actually call.
So the general rule of thumb is: anything below 50% means your project is pretty much untested; between 50% and 80%, coverage reports will greatly help you identify the areas of your code that need more testing; anything over 80% and you're on the right track.
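Recent versions of EclEmma are built on the JaCoCo coverage library, and the same kind of threshold can be enforced in a Maven build so a drop in coverage fails loudly. Here is a sketch of a `jacoco-maven-plugin` check rule for the 80% mark (the version number and threshold are illustrative; adjust them for your project):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version> <!-- example version; pin your own -->
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal> <!-- instruments the test run -->
      </goals>
    </execution>
    <execution>
      <id>check</id>
      <goals>
        <goal>check</goal> <!-- fails the build if a rule is broken -->
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum> <!-- the 80% rule of thumb -->
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Enforcing the threshold in the build, rather than only eyeballing it in the IDE, keeps the number from quietly sliding as the codebase grows.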
ice-cream and pyramids
Back to Martin Fowler, who gives us the concept of the Test Pyramid. It divides the testing effort into three layers: unit, service and UI. Unit tests form the base of the pyramid; service tests sit in the middle, exercising the application's services beneath the user interface; and at the top sit the UI tests, which are usually either manual or driven by UI test-recorder software, and which come closest to end-to-end testing. One thing to avoid is letting the pyramid invert, with more manual/UI tests than unit tests, creating the anti-pattern known as the ice-cream cone.
In a previous company I worked for, which will remain unnamed, we had a few dozen testers. Unsurprisingly, it was also the place where I saw unit tests used the least. Developers were in charge of writing the test scripts that the testers would follow, and a lot of manual testing was done. Deployment cycles were slow, testing had to be coordinated with the testers' availability, and the more software we wrote, the more tests (and sometimes testers) were added to the team. This happened a few years ago, and at the time we did not have a name for it, but it was a clear example of the ice-cream cone testing model. To me, it was first-hand experience of how herculean a task it is to escape this vicious circle once the anti-pattern settles into a team.
Good code coverage is our team's goal. As mentioned above, finding the perfect balance can be a tricky question, but it is one definitely worth answering on the road to programmer's Nirvana. So what are your experiences around testing? Have you found your coverage balance anywhere under the 80% mark?