Code Unit Test First

The interface and output of an object are more important than its implementation, so write them first, in the form of code that tries to exercise the object.


Here is a really good way to develop new functionality:

1. Find out what you have to do.

2. Write a Unit Test for the desired new capability. Pick the smallest increment of new capability you can think of.

3. Run the Unit Test. If it succeeds, you're done; go to step 1, or if you are completely finished, go home.

4. Fix the immediate problem: maybe it's the fact that you didn't write the new method yet. Maybe the method doesn't quite work. Fix whatever it is. Go to step 3.
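
Here is a minimal sketch of one pass through this cycle, in Java with JUnit; the Stack class and its methods are invented for illustration.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  // Step 2: a Unit Test for the smallest new capability we can think of.
  public class StackTest {
      @Test
      public void pushThenPopReturnsTheSamePayload() {
          Stack stack = new Stack();
          stack.push("payload");
          // Step 3: run it. At first it fails -- Stack doesn't exist yet.
          assertEquals("payload", stack.pop());
      }
  }

  // Step 4: fix the immediate problem -- write just enough Stack to pass --
  // then go back to step 3 and run the test again.
  class Stack {
      private final java.util.Deque<String> items = new java.util.ArrayDeque<String>();
      void push(String item) { items.push(item); }
      String pop() { return items.pop(); }
  }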

A key aspect of this process: don't try to implement two things at a time, don't try to fix two things at a time. Just do one.

When you get this right, development turns into a very pleasant cycle of testing, seeing a simple thing to fix, fixing it, testing, getting positive feedback all the way.

Guaranteed flow. And you go so fast!

Try it, you'll like it.

Doesn't step 1 imply Acceptance Tests First? -- John Whitlock

The way I work, yes. I typically write a Cucumber Framework story and watch that fail before writing a Unit Test for my implementation. --Marnen Laibow Koser, 14 April 2011


When you run a test, if it works, check the test and all your changes into your version control system (with a meaningful comment). Then you're done. -- Dave Whipp

No. These tests and increments are too small. Do the checkin after you've done a few hours of work (or less if you have a very well automated checkin process that's quick and easy). -- unsigned

Why? Small checkins make it easier to just pick the changes you actually wanted when merging. --Marnen Laibow Koser, 14 April 2011

I do as Dave does; waiting a few hours to check in just increases the chance of a conflict, or of losing a few hours of work rather than ten minutes' worth. So I like to check in all the time, and would suggest you do the same, unless your checkin process is slow and hard (in which case my advice is that you had better make it quick and easy). -- Erik Meade

I am not sure of all the advantages of checking in every change, but one is that you can count lines-of-code (LOC) changes (added, modified, deleted, etc.). This counting could be automated by the Version Controller; a Change Log can help in this case. -- Alex Vicente


Do you ever write all the Unit Tests for a class before starting to implement the class? Do you ever have a Master Programmer write the Unit Tests and a Grunt Programmer change the system to satisfy them? Do you ever use Unit Tests as specifications?

I'm expecting the answer "No". The very notion of a Grunt Programmer is probably antithetical to XP, that division of labour being superseded by Pair Programming. I thought I'd ask, to check, and to get the answers explicit and on the record. I'm currently coming at this from the direction of Design By Contract and assertions. -- Dave Harris

We generally don't do those things. Since we practice You Arent Gonna Need It, we couldn't in general write all the Unit Tests for a class. We do sometimes write a batch of Unit Tests for some piece of functionality we're working on, especially if they are all variations on a theme. For example: check SUB with one day less than the max, at the max, and one day more.
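
Such a batch of variations might look like the following sketch; the names and the 30-day limit are invented.

  import org.junit.Test;
  import static org.junit.Assert.*;

  // Variations on a theme: probe the limit from below, at it, and above it.
  public class EligibilityTest {
      @Test public void oneDayUnderTheMaxIsAccepted() { assertTrue(Eligibility.accepts(Eligibility.MAX_DAYS - 1)); }
      @Test public void exactlyAtTheMaxIsAccepted()   { assertTrue(Eligibility.accepts(Eligibility.MAX_DAYS)); }
      @Test public void oneDayOverTheMaxIsRejected()  { assertFalse(Eligibility.accepts(Eligibility.MAX_DAYS + 1)); }
  }

  class Eligibility {
      static final int MAX_DAYS = 30;
      static boolean accepts(int days) { return days <= MAX_DAYS; }
  }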

We might well invite a Master Programmer to sit with us for some hard bit. In general we strive to have everyone cross-trained on everything. Even so there are areas of model or infrastructure expertise. Personally I believe that there are some short-term advantages in specialization, and I'm sorely tempted to use specialization in times of stress. Sorting out what is going wrong with our process and getting it back on track seems to work better.

But some people seem better at design, at seeing consequences faster, while others are better at painstaking programming. In theory, specialization would allow each to do his best. Does Chrysler Comprehensive Compensation take it as an axiom that Pair Programming is always better? -- Aamod Sane

No, they took it as a hypothesis, tested it, and found that the more they paired, the faster they went and the fewer problems they caused. I showed them how to do it. Taking it to such ridiculous extremes was their idea. And it works ... -- Kent Beck


Code Unit Test First. Why don't people like testing? Well, the traditional way of testing is tough to take. You write what seems to be perfectly sensible code, then you write a test and the test tells you that you failed. No one wants to hear that.

Let's turn it around. Write the test first; run it. Of course it fails: you haven't written the code under test yet. Start writing code; keep testing. Soon, the test will tell you that you've succeeded!

Nice, Michael! That really captures what Test Driven Programming feels like. It really is a blast to do! -- Kiel Hodges

Michael, let me get this straight. You write a test and you know it will fail, since you haven't written any code for it to pass. So you run it anyway, knowing it will fail. Here's a hint: you don't need to run that test, because it will fail. You know the outcome, so there is no reason to run it first. Running it only proves that you prefer wasting time. The only reason to run the test would be if there were a reason to run the test, and there isn't one. A religious ritual of running the test for no reason could be created, but it wouldn't serve any purpose other than religious. Software programming has certainly turned into Quackery. Sad.

[Actually there is a purpose in running that test before you write the code. It verifies that the test can fail. There are bugs in tests too, so having some built-in sanity checks is not a bad thing.]

There's another purpose too. It sometimes happens that I write a test for what I believe is new functionality. When I run it, it passes. After making sure that my test is correctly written, I discover that the functionality I thought was new was, in fact, already in the system -- so I don't need to write any new code to implement it! In cases like this, the habit of watching the test fail saves me lots of work. --Marnen Laibow Koser, 6 Apr 2013

Another (apparently overlooked) benefit of coding the Unit Test first is that you get to test the Unit Test. When it says your code failed (because it wasn't written yet), the Unit Test has itself succeeded. -- Eric Scouten


In a static language, the compiler/type-checker does some of the job of Unit Tests. Thus instead of running the tests, you attempt a compile (or a link). Now the compile fails because the method doesn't exist yet. Never mind - continue as for step 3. -- Dave Harris

That was very much my experience with Modula Three (www.research.digital.com), which had the nicest type system I've come across. You sometimes have to work a bit to get the types in your program lined up, but once you do, the code usually just works. -- Steve Freeman


Another strong advantage of writing Unit Tests as early as appropriate is that you can use Unit Test As Documentation - if they're written properly, they provide a great example for people who have to use or work on the code later. -- Martin Pool

It struck me last night while painting a wall: Code Unit Test First is the programming equivalent of using masking tape and drop cloths. On the surface it doesn't look like you're making "progress", but the safety it gives you lets you work faster and safer. -- Rob Crawford

Umm .. isn't that a negative example? In XP you start by painting, drip some, and then fix the problem. If you start with drop cloths and you finish painting, and then find them all clean, then you have violated YAGNI. Your example seems to justify Planning Ahead which is the right thing to do when you're painting a house, because your requirements are fixed. -- Nissim Hadar

Another example is a technique used for drawing life-like pictures of just about anything. The idea is to focus your awareness on the Negative Spaces of your subject. By focusing on the negative, you trick your mind in not making assumptions about your subject, and your mind becomes aware of the true shape of your subject. See Drawing On The Right Side Of The Brain by Betty Edwards.

For a beginning artist, the results of this technique are shockingly effective. And this analogy of life-like drawing is so fitting to Extreme Programming -- I think a comparison of the before-and-after drawing of someone using this technique would make a persuasive book cover for Extreme Programming. For a better and fuller description of this technique, read any book by Betty Edwards on the subject of drawing . -- Stephan Branczyk

How can you say that a test is the same as negative space? A test represents a positive intent.

"Negative space" in the context of drawing doesn't map to "failure conditions" in programming. If you have a nameable object (like a book) leaning against another nameable object (like a wall), negative space means instead of drawing the book (positive space) and the wall (positive space), you draw the area between them (negative space) first. The result is you get a boundary for your book and boundary for your wall before you start drawing them. The reason is that when we have a name for something, it's usually attached to a generic model in our heads, but that generic model isn't a very good representation, because it's so vague. Just a sketch, really, even though we give it a lot more credit. By starting with an unnamed (and hence, unstored) shape, we have to focus on each angle, edge and proportion for this particular instance, because we don't "know" what it's "supposed" to look like. When you then draw the positive shapes, you can't slip into automatic mode, because they won't fit the edges you've already defined. It really is a ground-breaking technique.

In programming, the Use Cases you start with (and implement as Unit Tests) are the negative space, in the sense that they're not your code, they're the stuff your code needs to fit against. Your code is the positive space, but if it doesn't match every nook and cranny of the Use Cases, the code's not right. When you start with the code, you might be tempted to think, "I know what a file reader class should do, I'll just slap one together!" and end up missing some bits that are required by this particular project. Working from the Use Case (implemented as Unit Tests) inward, you define the edges and ensure that the code you eventually write matches this project's requirements precisely.


Another advantage of coding tests first is that tests exert a beneficial influence on the design. The tests themselves become additional users of your class interface, which ameliorates the problem of "you can't reuse code before you've at least used the code once" - because you start, right off the bat, with at least one extra user of each class.

Concrete example: you need to test code which (usually) takes input from an external device. But the device is not present during testing. So, in order to code Unit Tests, you are forced to create an interface for your code to talk to, with one implementation for the real device and another for a "fake" testing device. Having done that, you're in a much better position to implement the interface for multiple other devices later.
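
A sketch of that shape, with an invented temperature sensor standing in for the external device:

  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  // The code under test talks to an interface, never to the device directly.
  interface TemperatureSensor {
      double readCelsius();
  }

  // One implementation wraps the real hardware (omitted here); this fake one
  // returns a canned reading, so the Unit Test runs with no device present.
  class FakeSensor implements TemperatureSensor {
      private final double reading;
      FakeSensor(double reading) { this.reading = reading; }
      public double readCelsius() { return reading; }
  }

  class Thermostat {
      private final TemperatureSensor sensor;
      Thermostat(TemperatureSensor sensor) { this.sensor = sensor; }
      boolean shouldHeat(double target) { return sensor.readCelsius() < target; }
  }

  public class ThermostatTest {
      @Test
      public void heatsWhenTheReadingIsBelowTarget() {
          assertTrue(new Thermostat(new FakeSensor(15.0)).shouldHeat(20.0));
      }
  }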


How do you adapt Unit Testing to the more realistic scenario where you are adding functionality to an existing product? Do you write unit tests retroactively, or just for new units? The advantage of the first approach is that you may catch errors that were previously overlooked. The advantage of the latter is that you save time and focus on where errors are likely to occur. -- Cayte Lindner

Before you change a method, you have to write a test for what you think it does. Either it runs, and then you're sure that you aren't breaking anything, or it doesn't work, and you learn something about what the code does.

(Unless the design is a mess and the function has side effects you don't detect and don't capture in the unit test, in which case you have a false sense of confidence in your unit test.)

Writing "retroactive" tests on demand reduces the amount of up-front investment (you won't ever get permission to spend two months writing tests). It also makes sure that the most rapidly changing code is tested soonest.

New code gets tested as above.
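
A sketch of such a before-the-change test (all names invented): it pins down what we believe the existing method does, so we find out immediately if our belief is wrong.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  // The legacy method we are about to change, reproduced here for the sketch.
  class LegacyFormatter {
      static String formatName(String first, String last) { return last + ", " + first; }
  }

  public class LegacyFormatterTest {
      @Test
      public void pinsDownCurrentBehaviourBeforeTheChange() {
          // If this fails, we have just learned something about the code.
          assertEquals("Doe, Jane", LegacyFormatter.formatName("Jane", "Doe"));
      }
  }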


What is a good way to organize Unit Tests for a non-trivial project? Does one usually write a whole slew of tiny test programs, or are they usually amalgamated into a "test app"? -- Maciej Kalisiak

Lots of small Unit Tests are generally added to larger Test Suites.
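
In JUnit 4, for instance, the rollup can be as small as this; the member classes here are the hypothetical tests sketched earlier on this page.

  import org.junit.runner.RunWith;
  import org.junit.runners.Suite;

  // Small Unit Tests gathered into one larger Test Suite.
  @RunWith(Suite.class)
  @Suite.SuiteClasses({ StackTest.class, EligibilityTest.class, ThermostatTest.class })
  public class AllTests { }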


Questions:

What if your application has a heavy GUI?

How do you Code Unit Test First?

How do you write Unit Tests at all?

There are a number of good automated tools on the market for testing GUIs, but they generally compare the GUI with the results of a previously successful run that you checked by hand; you can't write the test first.

The best HTML testing strategy I've seen (thanks to Massimo Arnoldi and the folks at Life Ware) is to regression test the HTML, then Unit Test everything under the hood. Since there is only one way to see if HTML is "correct", that is, look at it in every browser you care about, the best you can do is run a test that produces a page, manually verify that it is satisfactory, then set up the test so you are notified if the results ever change. When they change, you have to manually verify again, so you don't want lots of these tests, you want to test most of the variations with finer-grained tests. However, you can write a tool that will shoot the HTML to IE and Netscape, then save it somewhere if you say everything is OK. -- Kent Beck
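
A sketch of such a notify-on-change test, sometimes called a golden-file test; PageRenderer and the snapshot path are invented.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;
  import java.nio.file.Files;
  import java.nio.file.Paths;

  public class HomePageRegressionTest {
      @Test
      public void htmlMatchesTheManuallyApprovedSnapshot() throws Exception {
          String approved = new String(
              Files.readAllBytes(Paths.get("approved/home.html")), "UTF-8");
          // A failure means: look at the page in the browsers again, and
          // re-approve the snapshot only if the change was intended.
          assertEquals(approved, PageRenderer.renderHomePage());
      }
  }

  class PageRenderer {
      // Placeholder; the real renderer would produce the page under test.
      static String renderHomePage() { return "<html><body>hello</body></html>"; }
  }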

I just did a toy case of this - a Perl CGI that produces a 30-day, free-trial Jera Works license certificate. While developing, I ran the CGI in offline mode, and just piped the output to the input of a shell-based Java program. Programming consisted of modifying the Java code to use the new feature (like including Product Name in the license cookie), and then modifying the Perl code to generate the new feature that the Java code was testing for. Development was fast and fluid. Deployment was very smooth for a CGI. The only semi-major defect I found was that I hadn't uncommented the line "use CGI qw(carp);", which was disabled for offline testing. -- John Brewer

What about Micro Soft Active Server Pages (ASP), which have embedded script (code) that generates the HTML?

Oh, ick. ASP is all about embedding the code in the web page. You'll want to push everything even slightly complicated into external objects called from your ASP script code. Then you can Unit Test those external objects. -- John Brewer

Not anymore in ASP+

I agree. I program mainly in Perl and put all of the real stuff into objects. Lately I've been writing Unit Tests for everything (after I code the functionality; I'll try writing them beforehand), and it makes everything much nicer. You do have to test the HTML output (for browser compatibility, etc.), but at least you know that the core of the app already works. -- Dave Tauzell

Has anyone used XP with Procedural Languages? How did you adapt it? -- Jim Hart

What makes you think it needs adapting?


People have been doing so much Test First that there has been a growing interest in Test Driven Programming, and claims that there may be something to Test First Design. -- Erik Meade


Is writing only One Unit Test At A Time a requirement?

The object is to write, test, and resolve one line of a Unit Test at a time. This minimizes the number of changes before you check for mistakes, thus minimizing the number of places the mistake could be.


When you Code Unit Test First (manually), you could let a machine (automatically) implement classes against the test until the test is satisfied. This way extreme testers can move the complexity of developing some stuff to developing some tests for the stuff. I think the complexity is the same. That is why I Dont Write Unit Tests. -- Albrecht Scheidig

How do you know your program works?

I do not know (but it sells). How do you know your test works? -- Albrecht Scheidig

We know our tests work because they fail before we write the code, and pass afterwards. At this point, we know that our code works. -- Laurent Bossavit

You could also write a wrong test, one that fails before you write the code and passes afterwards. At this point, you have a wrong test and matching wrong code. How do you avoid errors when you write test code? Do you write test code for the test first? And what about that meta-test code? Etc.

See Bugs In The Tests. In a nutshell: it could happen... but then, you could also get run over by a car tomorrow. In both cases, you don't worry about it all the time - you just look both ways before you cross the street, and you avoid writing complex test code.

Another thing to consider is that while you could, in principle, automate the generation of code that passes the test, that would be unlikely to yield valuable results. Rather, coding test-first lets you focus, all the time, on two sides of a single question: what do I need the code to do (the test), and how does the code do it. Both aspects involve conceptual reasoning that could not be automated. -- lb

The test verifies the code and the code verifies the test. You have written two different implementations of the same problem, and the likelihood of having made the same mistake in both is very low. When a test fails, you really do not know whether the test or the code is at fault; however, since the test is usually much simpler than the code, the fault is most probably in the code. -- anon

Yes, it is not very likely that you mistype in both the test and the code in a way that the test passes, but: What if you are conceptually wrong? You write a wrong test and very likely a matching wrong code. What if you are conceptually right? You write the right test and very likely a matching right code. So why not automate the generation of an implementation of the interface expressed by the test? Because if I do it manually I have twice the time to think about the concept? -- as

I don't think you can "automate" the writing of code that passes tests - it's an AI-complete problem. Yes, having to think about "what I want the code to do" and then having to think once more about "how does the code do it" is what protects you against conceptual errors. But you have to do both anyway; no program will do either kind of thinking for you. -- lb

The only way to be "conceptually" wrong is miscommunication with your users concerning the requirements. The only real way to resolve this (in any development approach) is to get the software into the hands of the users. Code Unit Test First ensures that the program works as intended, it does not verify whether the intentions were correct.

As for having an automated way of generating code from tests, let me know when you find one. Until then, however, I am stuck having to write my own code.

There is Genetic Programming, which does exactly that. BUT you would need a vast number of test cases... -- Gerriet Backer

I have found an algorithm for automated programming, but I cannot express it...

Let me summarize it: Unit Tests are highly recommended for developers who tend to concentrate on implementation and technical details (what percentage would that be?). Unit Tests require you to think about specifications, interfaces, external views, "what I want the code to do". Code Unit Test First is a rule that says: "Think first about what the code should do before you think about how the code can do it." An excellent side effect is that the output (Unit Tests) provides test mechanisms that can be executed by the machine and are independent of implementation changes. -- as


A thought that I'm playing with is the task of writing "Unit Tests" after the units themselves (classes) have been written.

My current thinking is that this task is nearly impossible. Once the code is written, the sorts of tests I can write against it tend to be tests which describe the behavior of the implementation, not tests describing the needs which the implementation was addressing.

Consequently, I find that when I Code Unit Test Later, I end up with tests which don't help me at all when I try to refactor. It's like I've surrounded the code with a thick, hard crust of tests that actually inhibit refactoring, rather than supporting it.

Not surprisingly, I also notice that the character of these after-the-fact "Unit Tests" resembles the character of acceptance tests much more closely than Test First style Unit Tests. -- Eric Herman

Perhaps a useful tactic would be to code the Unit Tests specifically for the purpose of helping you refactor. For instance, I recently found myself - pairing with a junior coder - working on an awkward method gleaned from an untested JSP page that pried into private details of other classes; we wrote an "outer" test to pin down the behaviour, then started extracting bits of the code into other methods, writing one or two Unit Tests for each extracted method. Pretty soon we had three or so classes and an interface, with a more balanced design, tests that didn't look like mere afterthoughts, and two significant bugs fixed. We did have some inelegant code still lying around but... "it's a Green Bar, let's go home".

Even after-the-fact Unit Tests help if they are written with the attitude of Test First style tests.


I find it curious given the above that many people have expended time and effort writing tools to generate test cases from existing classes (I do realize this may be useful in a legacy situation). You can find examples at www.junit.org . Look under the IDE section, and there's more under the extensions section. No one seems to have attempted Code Generation in the other direction.

While there is some comment further up that writing (working) code from tests is an 'AI-complete problem', it is surely not that hard to generate method signatures by examining test methods. I doubt generating code to pass tests is what people need from a tool anyway - generating code fragments is the time-saving step, and generating code to fail tests is a little easier: just throw a Runtime Exception, or the equivalent in your Language Of Choice. I love the Eclipse Ide's little red-x cue-balls that, with a couple of clicks, add classes, fields, and methods to the code being tested by my unit tests. You still have to do the method bodies by hand, of course.

By "not that hard" (since doubt was expressed above), what I mean is this: work so that your test class is named the same as the tested class up to a 'wart' (e.g. Money and Money Test). An intelligent editor like emacs can then figure out the intended class name by removing the wart. Next, search for variable declarations. Next, look for method calls on those variables which have the same type as the tested class. Determine the method signature required by examining the variables or constants passed in (remember, we already know the variable types). Finally, open the class under test, check to see whether the required methods exist, and insert them (possibly interactively) if they don't. A complete language parser is not required for any of this; the mechanics are there in any language-aware editor, in order to fontify, indent, and navigate code.

It will not generally be practical to produce an exhaustive set of unit tests for a given function. It would just take too long. Take for example a unit test for a routine which implements y=sin(x). You can feed it various samples and possibly test some boundary cases, but you will not be able to test all values of input/output. Thus, the unit tests underspecify the function and could not be used to generate the function's behaviour other than at the points that are tested.


y=sin(x).

Here is the complete test: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - ... + (-1)^(n-1)*x^(2n-1)/(2n-1)! + ... go on until you run out of precision.

Definitions:

x^n is the regular power function: x * x * ... * x (n x's multiplied together)

k! is the factorial function: k * (k-1) * (k-2) * ... * 2 *1 (example: 5! = 5*4*3*2*1 = 120)
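
As a sketch, the series gives a test an oracle computed independently of the implementation under test; here it is checked against Java's Math.sin for illustration.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class SineSeriesTest {
      // sin(x) via the Taylor series, summed until the terms vanish in double precision.
      static double taylorSin(double x) {
          double term = x, sum = x;
          for (int n = 1; Math.abs(term) > 1e-17; n++) {
              term *= -x * x / ((2 * n) * (2 * n + 1)); // next term of the series
              sum += term;
          }
          return sum;
      }

      @Test
      public void seriesAgreesWithTheImplementationAtSamplePoints() {
          for (double x = -Math.PI; x <= Math.PI; x += 0.01) {
              assertEquals(taylorSin(x), Math.sin(x), 1e-12);
          }
      }
  }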

Exhaustive testing may be necessary in medical equipment like pacemakers. You want to get it right before you use it in real life.

"...you will not be able to test all values of [x]." The infinite sum above provides a mathematical verification of one value. On a machine with 64-bit floating-point math, you will wait a long time for 2^64 iterations of the algorithm. How does your machine respond to special floating-point values like Infinity or Nota Number? How does your machine respond to mathematical functions when tested at discontinuities?

In safety-critical or life-critical applications, it may not be sufficient to verify code with more code. It is possible (though unlikely) that the Unit Under Test implements the same algorithm you have devised for the Unit Test. In any case, to satisfy certification criteria you may need to manually verify that your Unit Test produces a correct result to compare with the Unit Under Test. Manual verification cannot be performed exhaustively within any reasonable scope of time.

If only all functions were periodic, had bounded derivatives, and could be decomposed into monotonic pieces! There are three ways to comprehensively verify a function: formal methods, a comprehensive list of inputs and outputs, and a trusted function to compare against. Depending on the accuracy requirements, it may be quite easy to provide a trusted function that is easy to prove correct but is unsuitable as an implementation. For instance, if sin(x) is required to give a value within d of the "correct" answer, you can create a lookup table implementation of sin(x) that is proven to be accurate within d/3, then verify that sin(x) gives answers within 2d/3 of the table. Since sine is so well-behaved, it is easy to produce such a table, and it will be small for reasonable values of d. Another way that a relaxed standard of accuracy simplifies the problem is that it may be possible to restrict the number of inputs. If you have a formally verified 64 bit -> 16 bit converter, you can compose it with an exhaustively tested 16 bit -> 64 bit sin function to get a 64 bit -> 64 bit sin function of known accuracy.

Producing a verified 64 bit -> 64 bit sin function that is correct for almost all of its bits of precision is much harder, but even this task can be cut down considerably by producing a formally verified modulus function, verifying sin1 on [0, 2pi), and implementing sin(x) as sin1(mod(x, 2pi)).
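
A sketch of the table check described above. The table entries are seeded from Math.sin purely for illustration; in a real verification they would be derived and proven accurate to d/3 by independent means, and mySin would be the actual unit under test.

  import org.junit.Test;
  import static org.junit.Assert.assertTrue;

  public class TableCheckedSinTest {
      static final double D = 1e-3; // the required overall accuracy d

      static double mySin(double x) { return Math.sin(x); } // stand-in for the unit under test

      @Test
      public void staysWithinTwoThirdsOfDOfTheTrustedTable() {
          int points = 100000;
          for (int i = 0; i < points; i++) {
              double x = 2 * Math.PI * i / points;
              double trusted = Math.sin(x); // stand-in for the verified table entry
              assertTrue(Math.abs(mySin(x) - trusted) <= 2 * D / 3);
          }
      }
  }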

Perhaps these methods seem too heavyweight to qualify as Unit Tests, but I imagine that in the situation you describe, changes to the sin function would be undertaken with great care and deliberation, and these methods would be used much like Unit Tests are -- to decide when code is ready for integration, to check whether changes broke the code, etc.


Is it necessary to write Unit Tests that don't fail the first time they are run? See Unit Tests That Dont Break.


For an interesting take on Code Unit Test First, see Code Unit Test Third.


I want to know how to convert legacy code to Extreme Programming. It would seem that what is needed is to add tests to the legacy code so that you can do the refactoring. Does anybody know how to do this? -- Jon Grover


Interesting fact-ette about this 'radical' Code Unit Test First philosophy: it may not be so new after all. My mother started her career as a PL/1 and COBOL programmer, way back when computers were larger than my house. We were chatting recently about XP, and she mentioned that when she started, all developers were required to write a unit test plan before they went near any code. The test plan was then used as the basis for the code. I'm guessing that compile and debug times were so much greater that they simply couldn't afford to spend ages fixing silly mistakes. Perhaps it's one of those universally good ideas that was simply forgotten about for a while, probably during the excitement of seeing the first computer that was small enough to fit on your desk...

Those who do not learn from history are doomed to repeat it. Guess we programmers prefer thinking about the future to studying history.

That is interesting. Could you get your mother to expand a little on what was in a "unit test plan"?

It's a curious thing about our industry: not only do we not learn from our mistakes, we also don't learn from our successes. -- Keith Braithwaite

Unit Test First programming in PHP

I tried the above the other day, just for the heck of it ... It's actually far smoother than you'd expect. Here are the steps I went through:

1. I installed the PHPUnit package from pear.php.net

2. I used it as described above :)

That's it. It actually ended up being no different from coding with a Java tool... Of course, I had to use proper OO design in my script for the unit-testing to be effective, but that's probably a good thing, so I can safely say that not only was unit-testing in PHP easy, it also improved my code.

Am I missing the point somewhere? Unit testing always sounds like a good idea to me, until I come to actually start using it, at which point I get bogged down in problems. Allow me a real-life example: I am currently writing a database application, and I have a function which simply adds a record to a table, using some details passed in as parameters. Very simple, but certainly possible to fail, so it needs a test. How do I test it? It seems that a test would need to locate the record which has just been added, then read it and check that the correct information was written to it. This is actually more complicated than the original function, and more prone to errors. Which surely defeats the object! What should I be doing? -- Chris Sandow

An approach is to save the record that was added, do a get, and then compare the records. Depending on your system this can be made generic. -- Anonymous Donor
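
A sketch of that approach, assuming an in-memory H2 database on the classpath so the test needs no external server; the table and the addCustomer function are invented.

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;
  import java.sql.*;

  public class AddRecordTest {
      @Test
      public void addedRecordCanBeReadBack() throws Exception {
          try (Connection db = DriverManager.getConnection("jdbc:h2:mem:test")) {
              db.createStatement().execute(
                  "CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100))");

              addCustomer(db, 42, "Ada"); // the function under test

              ResultSet rs = db.createStatement().executeQuery(
                  "SELECT name FROM customers WHERE id = 42");
              rs.next();
              assertEquals("Ada", rs.getString("name"));
          }
      }

      // A stand-in for the real "add a record" function being tested.
      static void addCustomer(Connection db, int id, String name) throws SQLException {
          try (PreparedStatement ps = db.prepareStatement(
                  "INSERT INTO customers (id, name) VALUES (?, ?)")) {
              ps.setInt(1, id);
              ps.setString(2, name);
              ps.execute();
          }
      }
  }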


What about testing classes which produce graphical output which cannot be tested by simply comparing every pixel with a test image? For example, one might create a class which has only one publicly accessible function named "drawStuff()". drawStuff() uses OpenGL to draw some stuff. One cannot simply compare the pixels because the graphical output will most likely vary between various OpenGL implementations and will depend on the hardware. How do you write a unit test first for something like that? What if the output is very complex making it very hard to have a test image made prior to the test? -- tomek

Well, you can still guarantee a few things about the function: you know that under such-and-such conditions, it will produce an image, and will not throw an exception or some fatal error. Depending on your application, you may also be able to say that it should throw an error under certain other conditions and check that it does. You can also check some basic things about the image: it should have a size in a certain range (a 1x1 image is probably an error), it shouldn't be all one color, the image produced by input A should be different from the one produced by input B, etc.
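
A sketch of such a property-based smoke test; Renderer.drawStuff is an invented stand-in that returns a java.awt image rather than drawing straight to an OpenGL surface.

  import org.junit.Test;
  import static org.junit.Assert.*;
  import java.awt.image.BufferedImage;

  public class RenderSmokeTest {
      @Test
      public void producesAPlausibleImage() {
          BufferedImage img = Renderer.drawStuff();
          assertNotNull(img);                                    // it produced an image
          assertTrue(img.getWidth() > 1 && img.getHeight() > 1); // of a sensible size

          // ... and not all one colour: look for two distinct pixel values.
          int first = img.getRGB(0, 0);
          boolean varied = false;
          for (int y = 0; y < img.getHeight() && !varied; y++)
              for (int x = 0; x < img.getWidth() && !varied; x++)
                  varied = img.getRGB(x, y) != first;
          assertTrue("image is a single flat colour", varied);
      }
  }

  class Renderer {
      static BufferedImage drawStuff() {
          BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
          img.setRGB(10, 10, 0xFFFFFF); // placeholder: one white pixel on black
          return img;
      }
  }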

Then of course you add to this basic unit test as bug reports come in - as soon as a bug is reproducible, write a unit test that fails because of the bug. Then fix the bug and re-run the test.

If you are in a situation in which defining the expected output seems equally difficult as actually producing it, maybe you are trying to test at the wrong level of abstraction (too much at once). Quite probably, your complex function can be split into multiple helpers, the output of which is easier to check. Take one further step and you will find yourself dealing with multiple objects. If these components all produce correct output given their inputs, and if the master object (formerly the complex function) composes them together correctly, then the whole test passes. Note that the correctness of the master (putting the components together appropriately) should be testable even without their concrete implementations, by relying on Mock Objects. The only concern is that by making your code testable in this manner (following Code Unit Test First) you might sacrifice the necessary performance. I guess you can always optimize (and obfuscate) your perfectly correct code later on, though.

Code Unit Test First is (almost?) always technically possible, but it is difficult. Don't believe those who tell you it's "sooo easy". No, it takes extra time and consideration, and it takes practice to get there. Sometimes the time and consideration are better spent on other activities - my favorite example is learning/exploring/experimenting with code. Does XP advocate programming "spike solutions" with the Code Unit Test First approach? I would argue that if you know you are going to throw away your tests in the next hour (together with your experimental code), don't write them. Writing tests slows you down in the short term. Admit it.

Everything I've read says don't write unit tests for spikes or prototypes. Maybe that was just Kent Beck. Anyway you can probably just use the Pair Programming rule that says only do it with production code.

I think one should look at why he is doing the spike or prototype. If one is merely trying to learn about a particular development environment, language, etc., then tests are not necessary. In other words, don't write tests for "Hello World" programs. If, however, one is trying to prove or disprove something, write the test. The test is an explicit definition of what one is trying to prove, and it is invaluable when explaining to others the result of the spike. Everyone will know exactly what was verified and can reproduce the result.


Coding Test First is like climbing with protection. You wedge various pieces of gear into the face of the rock and have your partner belay a rope attached to you, so that when you fall off, as you frequently will, you won't waste your time being injured.

A Spike Solution is like free-climbing. No gear, no one to belay you, just you and the naked face of the rock. A really skilled climber can just zoom up the mountain, almost like skiing in reverse. But pride comes before ...

Something I want to try: after I write a Spike which does the job, or some piece of it, I want to refactor it into a set of tests, kind of like the Fake It method of writing unit tests in reverse. Not sure how it'll turn out; has anyone else ever tried anything like this?

Actually, my usual development method (Stop Using The Word Methodology) is like this. Except that I usually don't end up automating the tests (at least not very well); I just keep a set of things to test in mind, and run my "suite" with every recompile. This works well with games because of the "I know (in)correct behaviour when I see it" problem, but you end up with tests that are more Regression Test than Unit Test in nature.

I do things like this because, frankly, trying to write the tests first feels tedious to me - it slows me down artificially, because of psychological factors (i.e. the first test - as well as the process of refactoring towards a solution for the first few tests - seems to insult my intelligence, because I've already figured out how to generalize things. I'd like to start a new page about that, but I'm not sure what to call it...).


I'm wondering about Test First And Functional Programming Synergy, and if it is worth keeping in mind while writing tests and refactoring. -- David Plumpton



I think there may be something slightly naive about the idea that you could write a unit test for a non-trivial method before you write the code for it. I think this comes from the assumption that you know enough to write a unit test once you have the signature, or even the complete "contract" to its callers, of the method under test. In order to write a unit test you must also have a clear specification of all the objects and their methods which your particular method under test uses and how it uses them. In other words, you need to know the "calling out" contracts as well as the "calling in" one. Then your unit tests can use Mock Objects to simulate the call-outs and to check they are correctly used.

In my opinion, this extra "specification" of a method is probably best documented simply by writing the code for the method (see Design From The Inside Out), and that actually ends up with you writing the code for a method before its unit test. -- Richard Develyn


(I took this comment out of the main body of my text)

Solution: Don't write non-trivial methods.

Any method which uses another object is non-trivial in the context of the discussion above. It isn't possible to avoid these. -- Richard Develyn


I can't see how to apply this to programs that produce complex PDF reports. I work for a document-processing company. We take data files from our customers, and print bills, statements, etc. to PDF. If we were doing plain text reports, we could create a test output file by hand and use diff to check the program-generated output. I actually tried creating a PDF mock-up by printing from Word to Acrobat, but even though the PDF looks a lot like the one generated out of my code, the files are totally different. -- Kevin Kleinfelter


I did something similar, but wrote a system to generate Word documents from data using the Aspose component. I found it was impossible to test the actual content of the document, as there is no way to get to it - but I could write tests for the framework that built the document. I also wrote tests for the gateway classes that retrieved the data for the document. Not perfect, I guess, but there was no easy way to navigate through the document to verify the data. I suppose you could use mock objects and verify that all calls are being correctly made; then at least you would have some confidence in the interactions, if not the state. -- Gareth Jones


How should we go about Choosing The First Test?


See original on c2.com