The Pursuit of Coverage

My advice for learning to love Unit Tests

I never used to understand tests. They were just a necessary pain; developing is the interesting part. I would finish my feature, satisfied with my creation. My job is done! Not quite. The daunting task of writing unit tests still looms, like the mountain of washing up you have managed to create for your evening meal. Procrastination is often the most attractive option. So how do you approach them? How can you get them out of the way and get on with doing the important stuff? What even is a unit test?

My attitude towards writing tests has since warmed to the point where, dare I say, I enjoy writing them. Reaching this state of mind has been down to three main realisations: understanding the true value of unit tests, becoming comfortable with how to approach them, and taking pride in my work. This may not work for everyone, and it most likely won't speed up your implementation, but I have found that it has helped me write better quality code and led to me enjoying a greater proportion of my job.

Realisation #1 The value of tests

Unit tests are often the easiest way to figure out if your code is working

This one depends on the project and the feature you are developing but generally holds true. My current project consists of ~40 Spring Boot microservices. My local development environment therefore involves spinning up ~40 Docker containers, various databases and a front-end. This takes a decent chunk of time and puts my PC under a fair amount of load. I then need to figure out which set of clicks in the browser, or which endpoint to call, will trigger my code. This often requires getting a certain combination of data into the databases (especially if you are working on a more obscure feature or bugfix). And finally, if your code does not work, you risk polluting the databases and needing to start again. In short, testing your code this way can be a bit of a headache. That's not to say you shouldn't do it; this kind of end-to-end testing is an important validation step. But early in the development phase, it can be frustrating.

Pivot to the trusty unit test. Your new service, buried deep in the program flow of an endpoint or event listener, is in fact just a class with methods. These methods produce outputs based on a set of inputs. The logical way to debug your code is therefore to simply write a unit test that calls these methods. No lengthy boot time, no painful context setup and no manual repro steps. Just a build and run. What's more, the exotic edge cases no longer need to be reproduced via whatever wacky workflow the users manage to come up with. You just call the methods with whatever plausible inputs you can think of.
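
As a sketch of what that looks like (JUnit 5; DiscountCalculator and its expected behaviour are hypothetical stand-ins for whatever service you are debugging, not something from my project):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;

    // DiscountCalculator is a hypothetical service: no Spring context,
    // no containers, just a class with methods we can call directly.
    class DiscountCalculatorTest {

        private final DiscountCalculator calculator = new DiscountCalculator();

        @Test
        void appliesTenPercentDiscountToLargeOrders() {
            BigDecimal total = calculator.applyDiscount(new BigDecimal("200.00"));
            assertEquals(new BigDecimal("180.00"), total);
        }

        @Test
        void leavesSmallOrdersUntouched() {
            BigDecimal total = calculator.applyDiscount(new BigDecimal("50.00"));
            assertEquals(new BigDecimal("50.00"), total);
        }
    }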

Testable code is maintainable code

"How on earth do I test this function it has like 20 dependencies and 10 parameters?!" If you find yourself thinking this, writing tests is not the problem, the code you are testing is the problem. My experience has been that writing tests has unveiled code smells that I may not have thought about during my frantic coding of a feature. Refactoring warning sirens should be sounding if you are spending too long writing set up methods for your test files or throwing in the towel and spinning up a full application context to avoid mocking too many dependencies.

The Single Responsibility Principle is one of the most important guiding strategies for writing readable, maintainable code. A single responsibility is easy to test. Confused multiple responsibilities are difficult to test. You can also usually tell, when writing your assertions at the end of a unit test, whether your method has tried to do too many things and you need to separate your concerns. So for me, writing tests has become a way to validate not only that my code works but also that it is likely to be readable and maintainable.
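
A hypothetical illustration of that smell (OrderService, Order, OrderResult and EmailClient are all invented for the example): when one test has to assert on two unrelated outcomes, the method under test probably has two jobs.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        private final EmailClient emailClient = mock(EmailClient.class);
        private final OrderService orderService = new OrderService(emailClient);

        @Test
        void processOrder_calculatesTotalAndSendsEmail() {
            Order order = new Order("customer@example.com", new BigDecimal("200.00"));

            OrderResult result = orderService.processOrder(order);

            // One assertion about pricing...
            assertEquals(new BigDecimal("180.00"), result.total());
            // ...and one about a notification side effect. Two unrelated
            // assertions hint that processOrder should be split in two.
            verify(emailClient).sendConfirmation("customer@example.com");
        }
    }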

Coverage is good

The folly of pursuing 100% coverage has probably led to many heated discussions, confused quality managers and lots of tests that test nothing. Global test coverage figures are often meaningless without context, and setting a target can seem arbitrary and counter-productive. Nevertheless, code coverage is extremely important, and lines covered by tests is about the only quantitative measure we have of the quality of a unit test suite.

Code coverage provides the first level of confidence that your code is working. You can't be sure whether your complex algorithm functions as intended unless every condition is triggered. This may seem laborious when you are writing the code; some conditions or functions will look trivial and may not appear to merit their own test case. However, some of the more valuable benefits of high test coverage are not as immediately obvious. If you know all of your other classes are covered, you have more confidence in mocking those external dependencies in your unit test. No matter how key their role is in your code, it is tested elsewhere. Don't worry about it. Another benefit comes when the small feature you develop today morphs, six months later, into a widespread refactor: with good test coverage, you know what you break. You will be glad when the obscure condition you added a test case for fails before you merge, instead of surfacing as a nasty production bug.
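
Triggering every condition doesn't have to mean one verbose test per branch. A parameterised test can walk each branch in a few lines (a sketch only; ShippingRules and its free-shipping threshold are made up):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class ShippingRulesTest {

        private final ShippingRules rules = new ShippingRules();

        // One row per branch, so every condition in shippingFee is triggered.
        @ParameterizedTest
        @CsvSource({
            "0.00,   5.99",  // empty basket still pays the base rate
            "49.99,  5.99",  // just below the free-shipping threshold
            "50.00,  0.00",  // exactly at the threshold
            "120.00, 0.00"   // comfortably above it
        })
        void chargesExpectedShippingFee(double basketValue, double expectedFee) {
            assertEquals(expectedFee, rules.shippingFee(basketValue), 0.001);
        }
    }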

Realisation #2 How to approach test writing

Don't leave it until the end

The most common approach to unit tests is as an afterthought: the realisation that your branch might not pass the coverage metrics if you try to slip it through with the existing tests. Test Driven Development is the slightly controversial, extreme manifestation of my suggestion here. Full blown TDD usually requires that you write your tests before you have started your implementation, and you are finished when they have all passed. This can be unnecessarily restrictive, and any "one size fits all" development strategy is likely to be awkward in certain situations. But why not write a test once you have got something you think is working, or even before? Debugging a unit test is a good way to get an idea of how the program flow goes once your inputs and variables have values. Sometimes this early check can help you spot a problem with your solution and tweak it. Also, returning to my previous section, you may realise you are heading down an un-readable, un-maintainable nightmare of an implementation and you are better off having a re-think before you go too far.

Come up with a sensible naming convention

You have just created a blank test file and you have no idea where to start when it comes to covering the functionality of your new class. I have found that naming conventions for test cases help direct my thinking. There are a few out there, so see which one works for you and try to stick to it. That way, any other developer will be able to open your test file and figure out what you are testing. Personally, I like the should_doExpectedOutcome_whenInputCondition format. I find it clear to understand, and when I begin writing my tests I think, "My function does X when I give it a Y, so my obvious starting point is should_doX_whenY." Once that is done I think, what if Y is null, what should X be? Boom, second test case, and so on.
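
Following that convention through for a made-up UsernameValidator, the test file almost writes itself:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    // UsernameValidator is hypothetical; the naming pattern is the point.
    class UsernameValidatorTest {

        private final UsernameValidator validator = new UsernameValidator();

        @Test
        void should_acceptUsername_whenAlphanumeric() {
            assertTrue(validator.isValid("alice42"));
        }

        @Test
        void should_rejectUsername_whenEmpty() {
            assertFalse(validator.isValid(""));
        }

        @Test
        void should_throwException_whenUsernameIsNull() {
            assertThrows(IllegalArgumentException.class, () -> validator.isValid(null));
        }
    }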

Use the Given, When, Then structure

I like to break my tests up into blocks so that I can clearly tell what is setting up the starting conditions, what is the actual entry point to the class I am testing, and what is dissecting the results and making assertions. Unit tests can be tricky to configure, so you can end up with a lot of slightly bloated lines at the beginning which blur into your actual testing, making it difficult to see exactly what is being tested. Before I write any code I write three comments on three different lines: // Given, // When, // Then. I now know that setting up my input parameters and mocking any database responses or API calls all go at the top under // Given. Once my initial context and inputs are ready, under the // When comment, I call the function that I am testing and get my output. Under // Then, I can make my assertions on the output and verify calls to mocked dependencies. Organising my tests in this way helps me focus on what I am doing and gives some structure to what can be slightly awkward procedures.
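
Put together with Mockito, a test following that structure might look like this (AccountService, AccountRepository and Account are invented for the illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.Optional;
    import org.junit.jupiter.api.Test;

    class AccountServiceTest {

        private final AccountRepository repository = mock(AccountRepository.class);
        private final AccountService service = new AccountService(repository);

        @Test
        void should_returnBalance_whenAccountExists() {
            // Given: an account sitting in the (mocked) database
            when(repository.findById("acc-1"))
                .thenReturn(Optional.of(new Account("acc-1", 250)));

            // When: the method under test is called
            long balance = service.getBalance("acc-1");

            // Then: the output matches the stored account
            assertEquals(250, balance);
        }
    }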

Realisation #3 Taking pride in your code

How will people know your code is good if you don't throw everything at it?

I find great satisfaction in writing bulletproof code: coming up with a set of logic so watertight that no one can dispute it. Testing some awkward scenarios in practice just doesn't cut it sometimes; I want to write a set of challenges so mean and twisted that any weak link pops, any leak gushes water, and the set of green lights at the end is rewarded with the appropriate kudos. It is all too easy to only write tests for expected inputs. What about unexpected inputs? Never underestimate the creativity of users in finding new ways to mangle the intended workflow of your application, so be imaginative. Try your hardest to break what you have done. It is win-win (depending on which way you look at it): being smart enough to outsmart yourself, or being so smart that you can't be outsmarted. That's smart.
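
In that spirit, one mean-and-twisted sketch (PostcodeParser and InvalidPostcodeException are hypothetical): throw nulls, blanks, emoji and injection attempts at the parser and make sure it fails loudly rather than quietly.

    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class PostcodeParserTest {

        private final PostcodeParser parser = new PostcodeParser();

        // Inputs a well-behaved user would never send. A creative one might.
        @ParameterizedTest
        @ValueSource(strings = {"", "   ", "💣", "'; DROP TABLE users;--", "AB1 2CDXXXXXXXX"})
        void should_throwException_whenPostcodeIsMangled(String input) {
            assertThrows(InvalidPostcodeException.class, () -> parser.parse(input));
        }

        @Test
        void should_throwException_whenPostcodeIsNull() {
            assertThrows(InvalidPostcodeException.class, () -> parser.parse(null));
        }
    }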

Good tests are just nice

The same way a good implementation is nice. Sometimes I look at a test and I just think: it is so obvious what is being tested. I can tell at a glance how the test works, and if it passes, I am confident in the code. This test utopia can be difficult to achieve. Sometimes complex models and data structures can necessitate obscured helper functions or, worse, test data factories in another file. But if you get it just right, it can be very satisfying. Oh, and the person reviewing your code will thank you too.
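
One pattern that has helped me keep setup readable without hiding it in another file is a small builder with sensible defaults, kept inside the test file (a sketch; Order and the defaults are invented). Each test then overrides only the field it actually cares about.

    import java.math.BigDecimal;

    // A hypothetical test data builder living alongside the tests:
    // defaults cover the boring fields, tests override what matters.
    class OrderBuilder {

        private String customerEmail = "test@example.com";
        private BigDecimal total = BigDecimal.TEN;

        OrderBuilder withTotal(String total) {
            this.total = new BigDecimal(total);
            return this;
        }

        Order build() {
            return new Order(customerEmail, total);
        }
    }

    // In a test: Order order = new OrderBuilder().withTotal("200.00").build();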

That 100% dopamine hit

This may just be me, and I may not be the most conventional character, but like the social media hooks that keep me addicted to their platforms with likes and comments, I am convinced there is some chemical reaction that happens in my brain when I see a good set of code metrics. On my project, we have set up our CI pipelines to run a static code analysis and post the results as a comment on the merge request. I crave that moment:

  • 0 Bugs
  • 0 Code smells
  • 100% Test coverage

Oh yeah. Just in case the reviewer of my request doubted my ability for a second, this is concrete proof of my superiority. I dare you to find something wrong with my work.