How to get started with test-driven development

Learn when, what, and how to test in a TDD system.

I am often approached by software developers who are on board with the switch to test-driven development (TDD). They understand that describing expectations first and then writing code to meet those expectations is the best way to write software. And they agree that writing tests first does not introduce any overhead since they must write tests anyway. Still, they find themselves stuck, not being clear on what to test, when to test it, and how to test it. This article will answer those questions.

First, an analogy

Imagine you're working on a team that has been asked to build a race car. The goal is to deliver a product that will enable a crew to drive the car from one city (say, Portland, Oregon) to another city (say, Seattle, Washington).

Your team could go about designing and building that car in several different ways. One way would be to handcraft a unique, monolithic vehicle where all parts are home-grown and tightly coupled. Another way would be to use only prefabricated parts and stitch them together. And there are many other permutations of these two extreme approaches.

Suppose your team goes with hand-building the constituent components of the race car. A car needs a battery to run. For the purposes of this analogy, focus on the custom-made car battery. How would you go about testing it?

Testing strategies

One way to test the custom-made car battery would be to hire a testing crew, ship the car with the battery to Portland, and then get the testing crew to drive the car from Portland to Seattle. If the car arrives in Seattle, you can confirm that, yes, the car battery functions as expected.

Another way to test the custom-made car battery would be to install it in the car and see if the engine turns over. If the engine starts, you can confirm that, yes, the car battery functions as expected.

Still another way would be to use a voltmeter and connect the positive (+) and the negative (-) terminals to see if the voltmeter registers voltage output in the range of 12.6 to 14.7 volts. If it does, you can confirm that, yes, the car battery functions as expected.

The above three hypothetical examples illustrate how different ways of testing the car battery align with three categories of testing strategies:

  1. Employing the testing crew to drive the car from Portland to Seattle aligns with the system or end-to-end testing strategy.
  2. Installing the battery in the car and verifying if the engine starts aligns with the integration testing strategy.
  3. Measuring the voltage output of the car battery to verify if it falls within the expected range aligns with the unit testing strategy.
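To make the third strategy concrete, here is a minimal sketch of a battery unit test in Python. The `Battery` class and its `voltage()` method are hypothetical stand-ins for the custom-made battery in the analogy, not anything from a real codebase.

```python
# Hypothetical stand-in for the custom-made car battery.
class Battery:
    def __init__(self, volts):
        self._volts = volts

    def voltage(self):
        """Return the measurable voltage output, like a voltmeter reading."""
        return self._volts


# Unit test: verify the micro-outcome (voltage in the expected range)
# with no car and no road trip involved.
def test_battery_voltage_is_within_expected_range():
    battery = Battery(volts=12.8)
    assert 12.6 <= battery.voltage() <= 14.7
```

Run under a test runner such as pytest; note that the test touches nothing but the unit under test.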

TDD is all about unit testing

I hope these examples provide simple guiding principles for discerning among unit, integration, and system (end-to-end) testing.

Keeping those guidelines in mind, it is very important never to include integration or system tests in your TDD practice. In TDD, the expected outcomes are always micro-outcomes. Measuring the voltage output of a car battery is a good example of a micro-outcome. A car battery is a unit of functionality that cannot easily be broken down into smaller units of functionality. As such, it is a perfect candidate for writing a unit test (i.e., describing the expected measurable output).

You could also write a description of your expectations in the form of: "I expect the car engine to start on the event of turning the key." However, that description wouldn't qualify as a unit test. Why? Because the car is not at a sufficiently low level of granularity. In software engineering parlance, the car does not embody the single responsibility principle (SRP).

And of course, while you could also write a description of your expectation in the form of: "I expect the car, which begins its journey in Portland, to arrive in Seattle after x number of hours," that description wouldn't qualify as a unit test. Many aspects of the car's journey from Portland to Seattle could be measured, so such end-to-end descriptions should never be part of TDD.
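For contrast, here is a hedged sketch of why the engine-start expectation is an integration test rather than a unit test: the `Engine` below depends on a `Battery`, so the test exercises two units together. All class names are illustrative inventions for the analogy.

```python
class Battery:
    def voltage(self):
        return 12.8


class Engine:
    def __init__(self, battery):
        self.battery = battery

    def start(self):
        # The engine turns over only if the battery supplies enough voltage.
        return self.battery.voltage() >= 12.6


# Integration test: it passes or fails based on two units working together,
# so a failure doesn't tell you which unit is at fault.
def test_engine_starts_when_key_is_turned():
    engine = Engine(Battery())
    assert engine.start()
```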

Simulating real conditions

In the case of a car battery, a simple voltmeter is enough to simulate its operational environment. You don't have to go to the expense of providing a full-blown experience (e.g., a fully functional car, a long and treacherous trip from Portland to Seattle) to be convinced that, indeed, your battery functions as expected.

That's the beauty of unit testing's simplicity: it's easy to simulate, easy to measure, and easy to walk away from the exercise convinced that everything works as expected.
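Just as the voltmeter stands in for the whole car, a test double can stand in for a real dependency. Here is a sketch using the standard library's `unittest.mock`; the `Engine` class is a hypothetical example, not part of any real API.

```python
from unittest.mock import Mock


class Engine:
    def __init__(self, battery):
        self.battery = battery

    def start(self):
        return self.battery.voltage() >= 12.6


def test_engine_with_simulated_battery():
    # Simulate a healthy battery instead of wiring up a real one.
    fake_battery = Mock()
    fake_battery.voltage.return_value = 12.8
    assert Engine(fake_battery).start()
```

The test double plays the voltmeter's role: it lets you verify behavior without assembling the full, expensive environment.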

So what is it that enables this magic? The answer is simple—the absence of dependencies. A car battery does not depend on anything related to the automobile. Nor does it depend on anything related to the road trip from Portland to Seattle. Keep in mind that as your decomposed system components become less and less dependent on other components, your solution gets more and more reliable.

Conclusion

The art of software engineering consists of the ability to decompose complex systems into small constituent elements. Each individual element must be reduced to the smallest possible surface. Once you reach that point in the process of decomposing a system, you can quite easily focus your attention on describing your expectations about the output of each unit. You can do that by following a formalized pattern in which you first describe the preconditions (i.e., given that such-and-such values are present), then the action (i.e., when such-and-such event occurs), and finally the outcome or post-condition (i.e., you expect such-and-such values to be measurable).
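The Given/When/Then pattern described above can be sketched as follows; the charge-to-voltage mapping is invented purely for illustration.

```python
class Battery:
    def __init__(self, charge_percent):
        self.charge_percent = charge_percent

    def voltage(self):
        # Invented, simplified mapping from charge level to voltage.
        return 12.0 + (self.charge_percent / 100) * 2.5


def test_fully_charged_battery_outputs_expected_voltage():
    # Given: such-and-such values are present (a fully charged battery)
    battery = Battery(charge_percent=100)
    # When: such-and-such event occurs (the terminals are measured)
    measured = battery.voltage()
    # Then: such-and-such values are measurable (voltage in range)
    assert 12.6 <= measured <= 14.7
```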

Alex has been doing software development since 1990. His current passion is how to bring soft back into software. He firmly believes that our industry has reached the level of sophistication where this lofty goal (i.e. bringing soft back into software) is fully achievable.

5 Comments


It's worth noting that Smalltalk was instrumental in inventing (or rediscovering) TDD, thanks to Kent Beck.

In this case, the requirements for the lower level parts seem to be well known, as if they are part of the business requirements. In most cases, the business requirement is more like "build a car". So there are many different ways to go about doing that. Let's say you decide to build a gasoline engine, so you go ahead and TDD your gasoline engine. A month later, you realize that you're spending too much money on gas, and an electric engine is really more efficient. So you've invested lots of time into writing tests for requirements that you guessed at, and it turned out to be wrong. Now you have to rip out these tests and refactor.

Is this a normal thing with TDD, that you have to guess at lower-level abstractions and then write tests for them? Or does TDD somehow also help you get at these lower-level bits correctly, and this article just doesn't demonstrate that?

No, in TDD we don’t work off of guessing. We follow the three Cs methodology, as practiced by Extreme Programming (XP). The three Cs are:
1. Card
2. Conversation
3. Confirmation examples
It starts with business formulating a hypothesis. For example, a product owner wants to grow their product and they brainstorm and arrive at a hypothesis (or two). They then write that hypothesis on a handy card.
That card then is shared with the team, and it entices further conversation. What did you mean by this, what did you mean by that, could you please clarify this part, can I suggest this idea, and so on. After the conversation gets exhausted, it should result in one or more confirmation examples. Those confirmation examples must contain concrete values.
TDD works off of those concrete values. Nothing is left to guesswork.
If the business/stakeholders are not capable of procuring confirmation examples with concrete values, it is meaningless to commence any work on implementing the business hypotheses. Because, what would you implement if you lack the specifications?
Just handing over a wish list (something like “I wish to build a product that would sell amazingly well and make me rich!”) is not going to cut the mustard. A lot more specific detail, excruciating minutiae, must be provided before it makes sense to start building software.


Great article! Thank you:)

Creative Commons LicenseThis work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.