My unit testing epiphany
I promised to write up the details of this after posting the following tweet while listening to Ian Cooper's talk, “TDD: Where did it all go wrong?”, at DevSouthCoast in April.
“I have had an epiphany at @DevSouthCoast – I’m embarrassed about some TDD techniques I have been using. NO MORE!”
So what was it that caused such a flash-of-light realisation? What was I doing so wrong? Here’s the story, with all the credit going to Ian Cooper who opened my eyes to this issue.
The correct way of doing TDD has not been preserved over time. Misunderstandings have crept in. Processes have been added that undermine the purity of the concept. The version of TDD that appears in a lot of books, articles and examples is this mutated version. It sounds plausible. It has enough in common with the original to sound real. A subtle difference in the method, though, is making an enormous difference in the effectiveness.
I’m not going to talk about everything Ian covered in his talk. You can seek him out for that (as an aside, please seek out a meet-up rather than just jumping out of a bush and confronting him about it). I’m going to talk about the bit that I realise is causing the real damage.
We are being taught to create a test-class to partner each class we write in our application. This is wrong. It encourages bad behaviour. It encouraged me to do the wrong thing, or more specifically, it stopped me from doing the right things. Here is what I now believe to be the correct way, and hopefully this is what Ian was alluding to and what Kent Beck originally meant.
Red. Green. Refactor. We have all heard this. I certainly had. But I didn’t really get it. I thought it meant… “write a test, make sure it fails. Write some code to pass the test. Tidy up a bit”. Not a million miles away from the truth, but certainly not the complete picture. Let’s run it again.
Red. You write a test that represents the behaviour that is needed from the system. You make it compile, but ensure the test fails. You now have a requirement for the program.
Green. You write minimal code to make the test green. This is sometimes interpreted as “return a hard-coded value” – but this is simplistic. What it really means is write code with no design, no patterns, no structure. We do it the naughty way. We just chuck lines into a method; lines that shouldn’t be in the method or maybe even in the class. Yes – we should avoid adding more implementation than the test forces, but the real trick is to do it sinfully.
Refactor. This is the only time you should add design. This is when you might extract a method, add elements of a design pattern, create additional classes or whatever needs to be done to pay penance to the sinful way you achieved green.
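The cycle can be sketched in a few lines of Python. The `Order` class, its line format and the discount rule here are all hypothetical, invented purely to illustrate the three steps:

```python
# A hypothetical sketch of one Red-Green-Refactor cycle.

# Red: the test states a required behaviour, not a class design.
# Prices are in pence so the arithmetic stays exact.
def test_order_total_applies_percentage_discount():
    order = Order([("widget", 1000), ("gadget", 2000)])
    assert order.total(discount_percent=10) == 2700

# Green, the sinful way: everything chucked into one method, no design.
class Order:
    def __init__(self, lines):
        self.lines = lines

    def total(self, discount_percent=0):
        subtotal = 0
        for _name, price in self.lines:
            subtotal += price
        return subtotal * (100 - discount_percent) // 100

# Refactor: pay penance by extracting structure. This definition
# replaces the sinful one, and the test above is untouched, because
# it depends on the behaviour, not on the shape of the code.
class Order:
    def __init__(self, lines):
        self.lines = lines

    def _subtotal(self):
        return sum(price for _name, price in self.lines)

    def total(self, discount_percent=0):
        return self._subtotal() * (100 - discount_percent) // 100
```

The point is that the same test goes red before any implementation exists, passes against the sinful version, and still passes unchanged after the refactor.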
When you do this right, you end up with several classes that are all tested by a single test-class. This is how things should be. The tests document the requirements of the system with minimal knowledge of the implementation. The implementation could be One Massive Function® or it could be a bunch of classes.
The big mistake comes when you get carried away with isolation. If you undertake too much design up-front, you can end up with one test-class per class in your program. In fact, this is a set-up that is encouraged in some books and articles – that's why you name the test-class after the class you are testing, right? You end up creating test-doubles for every dependency, because the impression you get is that your tests should not depend on anything outside of the class under test. This creates accidental coupling.
The term “isolation” means that you don’t cross a port. This means not relying on network, database, file system, service or anything else that you might add an adaptor for in a Hexagonal Architecture. It doesn’t mean creating a metaphorical cube-farm for your classes. If you over-isolate your code when you write unit-tests, you are creating tests that rely on implementation – because each change made in the implementation causes problems in your tests. If you change the class structure to introduce a better pattern, your tests all need to be changed.
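A rough sketch of what this looks like in practice, with every name (`OrderService`, `InMemoryOrderRepository` and so on) invented for illustration: the only thing faked is the port, not the internal collaborators.

```python
# Hypothetical example: "isolation" means substituting the port (the
# repository that would cross to a database), not mocking every class.

class InMemoryOrderRepository:
    """Test double for the port; a real adaptor would talk to a database."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, lines):
        # Internally this might delegate to several collaborating
        # classes; the test neither knows nor cares, so restructuring
        # them during a refactor breaks nothing.
        total = sum(price for _name, price in lines)
        self._repository.save(order_id, {"lines": lines, "total": total})
        return total

def test_placing_an_order_records_its_total():
    repo = InMemoryOrderRepository()
    service = OrderService(repo)
    total = service.place_order("A1", [("widget", 1000), ("gadget", 2000)])
    assert total == 3000
    assert repo.get("A1")["total"] == 3000
```

Because the test only touches the service's public behaviour and a fake at the port boundary, you can reshape the classes behind `place_order` freely without rewriting a single test.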
I am guilty of over-isolation. I have been diligently trying to eliminate every dependency whether it crosses a port or not because I thought it was the right thing to do – but trust me, I am committed to refactoring the tests I have written to put this right.
This wasn’t Ian’s only point on the night – but it was one that made a big difference to how I feel personally about TDD.
The big irony of this is that when I attended a coding dojo a month back, I actually did TDD right. I wrote tests to represent the behaviour required in the program, and when I introduced a Chain of Responsibility pattern, I didn't split out my tests to match. The structure of the tests followed the structure of the business domain instead of the structure of the implementation. Oddly, though, this wasn't how I was doing it in the daylight hours. I think this is because in real life it is harder to discover those required behaviours, and the search for them, and for the tests that document them, distracts us from applying disciplined TDD.
If you have been doing things the way I did – don’t feel guilty about it, because the literature is misleading. Let’s all just do it better from now.
This article generated lots of great discussion. You can read more in My Unit Testing Epiphany Continued!