The "traditional" model of software development lifecycle. Largely discredited*, but still widely used. Development is broken into sequential steps: Analysis, Design, Coding, and Testing. (aka. Drive By Analysis and Big Design Up Front followed by Big Coding In The Middle and Big Bang Testing). Called "Water Fall" because each artifact in the model "flows" logically into the next.
The "traditional" Water Fall model with its Big Design Up Front and months of Big Coding In The Middle, the Era Of Silence, inhibits (or prohibits) evolution of understanding leading to fourth and fifth steps, Big Last Minute Changes followed by An Unacceptable Way Of Failing.
Water Fall is based on the empirical observation of 30 years ago (ref: Barry Boehm, Software Engineering Economics, Prentice Hall, 1981) that the cost of change rises exponentially, roughly by a factor of ten per phase. The conclusion is that you should make the big decisions up front, because changing them later is so expensive. Thirty years of progress in languages, databases, and development practices have largely voided this assumption, but it is buried so deep that it seems certain to last for some time.
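For a sense of what that claim means in numbers, here is a minimal sketch in Python, assuming the factor-of-ten-per-phase rule of thumb rather than Boehm's actual published figures (the phase names and costs are illustrative only):

# Illustrative only: cost of fixing one defect, assuming it grows tenfold
# for each phase it survives (the rule of thumb behind the old
# cost-of-change curve, not Boehm's measured data).
PHASES = ["requirements", "design", "coding", "testing", "operations"]
base_cost = 1.0  # cost of fixing the defect in the requirements phase

for i, phase in enumerate(PHASES):
    print(f"{phase:12s} {base_cost * 10 ** i:>10,.0f}")

A defect that costs 1 unit to fix on paper costs 10,000 units once the system is in operation; that arithmetic is the whole argument for "make the big decisions up front".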
(*) Discredited as a literal, followable software process. (see Is Water Fall Discredited)
Would anyone care to refactor even some of the extensive discussion that follows into Document Mode? This page is extremely large.
And Larry Constantine remarked at OOPSLA 2005, "I created one of the first 'waterfall' approaches but I did not create analysis paralysis".
Now with its own conference! www.waterfall2006.com
Alistair, thanks again for saying Water Fall is An Acceptable Way Of Failing. It brought us so close to a whole load of things that badly needed saying that I and others produced a few Emotional Burps in immediate response. In fact I believe it's fair enough to feel emotional about Water Fall. It has so often been destructive of the productive team work and real creative joy that should characterize delivering great software with and to customers.
Ignorance Fear Pride Or Fraud is the mild overview of the issues raised that I've been able to come up with having disconnected from Wiki for 48 hours and (slightly) calmed down! -- Richard Drake
Per the WF write-up at www.objectmentor.com, WF got invented as the following theory: "When you code, data typically flow from Analysis to Design to Coding to Testing." Unfortunately this was easily misinterpreted to mean "time flows from Analysis to Design to Coding to Testing." Bad news.
Under pressure, even the best of project managers fall back to WF practices. That's the bugbear we all fear here. -- PCP
The assumption that is usually invalid in a waterfall process is that the requirements will not change during the lifecycle of the project. In reality, requirements change a lot in most (though not all) projects, especially once the customer gets their hands on the product. The failure of the traditional waterfall process to recognize this is a fundamental flaw. A mistake in the requirements phase cannot be detected in a waterfall process until near the end, when the customer gets to see the (nearly finished) product. This leads to a huge cost in correcting the mistake (the old cost-of-change curve again). XP, and most Agile Processes, attack this area explicitly, putting a high emphasis on getting something to the customer early on, so that feedback can be obtained.
To put things into perspective a bit, the process most likely to allow a requirements mistake to go undetected until the end is the oblivious one, i.e., the process where developers don't "get" requirements. With waterfall, at least requirements are on the map. There's no law against prototyping and verifying with users, either. Maybe waterfall is the toddling process, if you get my analogy. Toddlers don't walk so good, but at least they're walking. Let the balance come with practice. Walk Before You Run. -- Walden Mathews
Walden: I would disagree that your Toddler analogy IS an example of a Water Fall method. I would see "crawling", "steadied standing", "wobbly walking", "walking", "running" & "Olympic sprinting" (very optimistic) as more an incremental approach.
Yikes, I misunderstood your point on first, second and third readings, but I think I got it now. My point is that Waterfall IS the toddler of incremental approaches. See Jaime Gonzalez' better explanation below. -- wm
The original waterfall model, as presented by Winston Royce in "Managing the Development of Large Software Systems" (Proceedings of IEEE WESCON, August 1970 [facweb.cs.depaul.edu]), actually made a lot of sense, and does not at all resemble what is portrayed as a Waterfall in the preceding entries. What Royce said was that there are two essential steps to develop a program: analysis and coding; but precisely because you need to manage all the intellectual freedom associated with software development, you must introduce several other "overhead" steps, which for a large project are:
System requirements
Software requirements
Analysis
Program design
Coding
Testing
Operations
Many of Royce's ideas would be considered obsolete today, but I don't think the problem lies so much with his model as with the cheap (not in terms of money, of course) and rigoristic ways of misunderstanding what became an engineering and project management "standard". This "standard" is what became An Acceptable Way Of Failing.
Royce's valuable ideas must be understood before we simplistically discard the Waterfall. For example, when he advised to do some things twice he was postulating what we can call an embryonic form of iterative development. Also, some features of the Waterfall are inevitable: as Alistair Cockburn has said elsewhere (members.aol.com), how can you get to do testing if you have not coded? Easy. Code Unit Tests First.
The main problem has not been the Waterfall, but the schematicism involved in a "standard box for all projects". The variety of possible routes to follow is enormous: under many circumstances, training and empowering users to do their own computing will be a radical alternative not only to analysis and design, but to coding as well (after all, as Emiliano Zapata would have said, applications should belong to those who make a living out of them). Under other circumstances, the best way to enhance team collaboration is to design the database through a diagram that makes the objects visible to all participants.
For metrics and data on software project failure, see Capers Jones' books (Patterns of Software Systems Failure and Success is probably the best known).
-- Jaime Gonzalez
Well put, Jaime. Better than my earlier, somewhat frustrated efforts. Thanks. -- Walden Mathews
The problem with the Water Fall model, even as explained above, is that it tries to create phases that do not exist, and reflects a fear of writing software that may be wrong.
The truth of the matter is that it is no more difficult to write a requirement directly in source code than into a requirements document and it is far easier to validate the requirement as running software. The result of the Water Fall is that we first attempt to create the system in prose as a requirements document, then translate the requirements document into a design document in prose, then translate the design document into source code. It is no wonder the intent of the user gets lost by the time the software is actually created.
The massiveness of the process is not due to a misunderstanding of the Water Fall, but a direct result of it. Each phase can be no more correct than the preceding phase and is likely to be less correct. In translation, errors creep in as we convert requirements prose to design prose and design prose into software. Correctness in one phase requires near perfection in the preceding phase, thus each phase slows down to ensure that errors do not creep in. As the time for each phase grows, the cycle time from concept to actual delivery of software grows. As the cycle time grows, there is more and more pressure to include more and more functionality that the users can't wait for. This in turn inflates each phase even more and increases the cycle time.
In effect, the Water Fall method creates a control loop whose output explodes. The phases may make nice diagrams in books, but they do not exist in reality. English prose is not inherently easier to write than software, nor inherently easier to evaluate, nor easier to change. The instincts of programmers have been right: go directly to the code.
Water Fall is a solution to a problem that no longer exists. As a process it was designed for an era when Machine Time was orders of magnitude more expensive than Human Resources; it really was cheaper to rework a requirement 3 or 4 times in requirements capture, analysis, coding and testing than to make a mistake once with the machine. Today Machine Time is vastly cheaper than professionals' time. -- Martin Spamer
But none of these are the >real< reason not to use Water Fall.
If you work in a semiconductor fab, and you make a tiny mistake with a batch of wafers, you have just burned the entire cost of all the processes your wafers previously went through, plus the cost of sending an entire new batch of wafers through the fab to this point.
If you use Water Fall the way it was designed to be used, if you make a mistake in an early process and discover it in a late process, you must go back to the early phase and do all the affected phases again. This is because the phase is important (else why are you doing it?), and during the phase you must manually cross-check the fix with everything else. Then you must run the subsequent phases using only the output of the previously re-done phase. Read more about it here at www.waterfall-model.com
Everyone who says they have Water Fall experience most likely allowed minor tweaks without re-baking everything. But if you take Water Fall seriously (because you are using hardware and each phase really is irrevocable, or if you are using software but a militant environment believes it must do Water Fall), then you must do it right and rebake each phase at mistake time.
As noted above, trying to create "phases" slows the development of software down. It is not going back to previous phases that creates the problem, but doing the same work multiple times in order to define phases.
I think I see the problem here. It comes out through phrases like "Water Fall the way it was designed to be used" and "take Water Fall seriously". There's no way to know how Water Fall was designed to be used, or even to know who designed it or if it was designed at all. And taking something seriously doesn't mean aborting common sense. These words point to a problem with authority and enforced inappropriate application, not with the model in question.
It makes no sense to judge Water Fall (strictly or loosely interpreted) without establishing a risk context. There's no reason, Water Fall notwithstanding, not to write code on the first day of a project if you smell significant technological risk. Even though you're coding, you're still (big picture) focused on requirements, because you are determining the feasibility of the system the customer wants. Don't lose the forest for the trees. The phases of Water Fall are not absolute, and they are not characterized by specific activities (like coding) but rather by the type of decision being made. Water Fall prioritizes decisions of the kind where a change in decision can invalidate the entire project.
The risk of postponing coding and testing until very late (someone's bum idea of what Water Fall means) needs no explanation. The risk of not assessing a critical body of requirement before committing a lot of code probably does need explanation, to this audience. I have never seen a system in which requirements were not related to each other through interdependence. I have never seen a system in which simplistic and isolated requirement understanding didn't fail when requirements were integrated (built) into a system.
While it's true that there's only so much planning you can do before planning loses its value (and that varies widely according to style and skill set), why wouldn't you want to know about requirements that must change already, not because the customer is fickle, but because they don't make sense together? Even if you have no strategy but "let's build it anyway and see what we can do", why wouldn't you want to know?
Skills matter in the selection of tools and processes. Analysis relies on advanced abstraction ability which seems to develop with experience (no offense, young 'uns). Deep and early analysis by someone with the ability can save you eons of time and wasted effort, but it's a skilled act. Reluctance to use skills not yet attained is wise. If Water Fall doesn't fit because of missing skills on the team, then it's only common sense to pick another strategy. In this case, dismiss Water Fall for cause, but please be clear about the cause.
I believe that the better you get with a given problem and technology domain, the more your work will converge on the simple Water Fall. (In the life of every teen-ager, there comes that epiphanic moment when you realize you are your parents.) When you switch environments, it's another matter. When you can see a lot of the project coming, you can Water Fall it. It's really just a matter of knowing when planning will pay off and when it is a waste. When throwing out the Water Fall, be sure to check for babies.
Also, Water Fall is the simplest (batch) process for building software. How much should you elaborate that process in order to mitigate risk? When reducing risk A, watch out for increase in risk B. What about the risk of an elaborated process? I would rather see a team start with a simple Water Fall plan for building software and then deal with its weaknesses incrementally than begin with an over-elaborated process and try to figure out which risk reduction parts are not needed. I think you can see that an optimal balance for risk reduction is never a trivial thing. But neither do you have to get it perfect.
Deep and early analysis by someone with the ability can save you eons of time and wasted effort...
Care to justify this assertion?
Certainly. Not too long ago I went to a meeting in which the participants were gearing up to build a complex system for tracking per-quote charges for customers who receive stock exchange data feeds. Domain analysis revealed that this pricing model was invalid, hence no need to build such a system. The analysis wasn't so deep, but if you took a "code now" approach, you wouldn't know until you submitted a bill. In another example, a workstation customer requested more advanced security features, providing a list of functions. Analysis of this list revealed an inconsistency such that some of their proposed features were entirely superfluous. They were removed from the project before a line of code was written.
As a general rule, there's tremendous leverage in the requirements of a project. Failure to fully explore the meaning behind a collection of requirements can be a huge mistake. Early coding shifts focus from semantic depth to implementability, which proved irrelevant in the two cases above.
It appears the natural conclusion to statements such as "There's no reason ... not to write code on the first day of a project if you smell significant technological risk." and "When you can see a lot of the project coming, you can Water Fall it." is that Water Fall is only appropriate for low risk projects. If the Water Fall does not provide risk reduction, then why do it for even low risk projects?
Good question. You caught me up in sloppy use of terminology. In the first place, Water Fall calls out prototyping and backtracking as standard practices, so even technological risk and the risk of misunderstood requirements is addressed by Water Fall. It's just that the current generation chooses to ignore the upward arrows on the diagram, preferring to immerse itself in the metaphor of liquid and gravity. But even if we address the popular notions of Water Fall, being strictly phased and so forth, if you look at what the model is saying you find a heap of risk reduction in that. Code design is all about reducing the risk of poor maintainability, for example. Treating requirements as a system, as I've touched on above, reduces the risk of implementing nonsense. But the harsh reality is that any structure we choose reduces some risks while ignoring (perhaps increasing) others. I have used discrete phases on occasion just because I can and because the project permitted. Whether or not you should use discrete phases when there is no unacceptable risk in so doing is a good question. Could be that matters of style eclipse matters of hard practicality in this area.
Why is time and effort spent generating paper more productive than generating software that actually runs? Why would paper explanations of complex processes and interactions be more understandable than actual running software showing them?
The question is not "Why" but "When". The answer is when the "paper" fits the ideal commitment curve for the project you have in front of you. The "paper" we're talking about is models, and its purpose is to verify what we can when we can before moving on. This is risk reduction again. But for models to be effective, they have to be meaningful, small, cheap and disposable according to localized measures. Executable models are fine, provided they meet these criteria. A thread-safe running system does not automatically demonstrate thread safety through casual observation. A paper diagram may do a better job.
Why would a single individual doing thought experiments inside his own head be more effective than multiple people actually trying to use running software?
Summoning the utmost of my intuition, I think you're actually objecting to the situation in which people are expected to invent without feedback. Well, yes and no. In the first place, do you appreciate your full power to validate or invalidate a proposal on what you already know? There's an important tradeoff here. If you have to perform a CPU experiment for every decision you will make in building a given program, you might as well give up now. The question is, when can you rely on memory and when can you not? The answer is: experience. This is an interesting sideline, not strongly connected to the Water Fall model as I see it.
The Water Fall is not the simplest process for building software; the iterative, experimental approach is: the approach all of us used to write our first custom program.
Water Fall is the simplest model for building substantially sized software, but it is not the most intuitive one, nor is it the simplest one for building little programs. How do you decide what's simple and what's not?
I can find the references, if desired, but during the discussions of Ronald Reagan's Star Wars Missile Defense System proposal, it was stated the size of the project was so enormous as to be impossible. It was also noted, however, that programs of that size existed, but had come into being through an evolutionary approach rather than planned design. It seems that the iterative, experimental approach is the only feasible way to build extremely large programs.
No argument there. What size chunks are extremely large programs built in, and what does the lifecycle look like within a chunk? By countering false claims about Water Fall, I'm neither dismissing all other approaches nor claiming that Water Fall has no faults. All lifecycle models are essentially the same, the exception being that Water Fall is the most fully abstracted and generally applicable one. Water Fall is the meta-model of software process. It's the XML from which your particular schema is written. 'Iterative' is a minor invention upon a major theme. -- Walden Mathews
How is the iterative process based on the waterfall? Perhaps you should define your use of waterfall to support your claim that everything is waterfall. You appear to have stripped all meaning from the term.
How is iterative process based on the waterfall?
The iterative lifecycle (not process, if we want to be exacting) takes a bite-size of requirements and follows the steps of Water Fall over them, then takes the next bite-size of requirements and follows the same steps plus the additional integration that wasn't planned, and so on. Or, as is said many times in our midst, "little waterfalls". So the question is, How little does a waterfall have to be before it's not a waterfall?
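A minimal sketch of that "little waterfalls" reading, in Python (the step names and the backlog items are invented for illustration, not taken from any particular method definition): each iteration walks one bite-size slice of requirements through the same steps, plus the integration work the earlier slices didn't plan for.

# Each iteration is a "little waterfall" over one slice of requirements.
STEPS = ["analyze", "design", "code", "test"]

def run_iteration(slice_of_requirements, existing_system):
    for step in STEPS:
        print(f"{step}: {slice_of_requirements}")
    print(f"integrate '{slice_of_requirements}' with {existing_system or 'nothing yet'}")
    existing_system.append(slice_of_requirements)

backlog = ["login", "search", "checkout"]
system = []
for requirement in backlog:
    run_iteration(requirement, system)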
When I am saying iterative approach, I am talking about a fully parallel approach of defining requirements in conjunction with coding in conjunction with testing. I am not talking about "mini-waterfalls." The waterfall approach addresses threads of functionality broken across phases of requirements, design, implementation, and test (or pick your specific sequence). Instead, we can rotate that model 90 degrees and have threads of requirements, design, implementation, and test broken across phases of added functionality.
"Fully parallel" is illusion, like time sharing. You have to ignore some known detail to believe that. You can switch back and forth rapidly from decision about "what makes sense to build" and "what is feasible to build" and "how would we build that", but you can't really do them all at the same time, not as a conscious process. More importantly, when most of this activity is really geared toward answering the first question, you're essentially defining requirement, even if you're coding much of the time. I've made this point three times now. Even if as a byproduct of all your prototyping (requirements phase) you magically ended up with totally usable and mature code, you're still in step with the model. The real problem is that Water Fall model describes what you do in spite of your attitude toward it. It does not describe the granularity of problem you will attack in one chunk. If granularity is the main issue, we should clearly state that and distinguish between model and granularity of application.
Exactly. Every attempt to successfully alter our environment, whether coding software or hitting a golf ball, can be described in terms of the Water Fall model. NEEDS - PLANS - ACTIONS - ANALYSIS - FEEDBACK. This basic loop is conceptually present in every attempt humans make to reach any goal whatsoever. Rarely does anybody seriously attempt to complete a major development in a strictly linear, one pass through approach, but these things are almost always iterative in reality, as with the Spiral Model. There may be large debate over how big a chunk you should bite off at a time, but even XP methods move through the basic concepts inherent in the waterfall description, maybe just in much smaller pieces at a time. I want to hit the ball; I'll look over there and swing like this; I swing; I see where it goes; I adjust my swing based on the results. I talk to the customer about what they want; I think of a function to test this requirement; I write the code and test it; I see if it passes or fails; I adjust as necessary. needs; plans; actions; analysis; feedback. There is no escaping this loop regardless of the development model one uses, it will simply be applied using different methods, on different scales. XP may advocate doing this on a smaller scale, with smaller gaps between the steps, but as was said that is a distinction between granularity of application, not of model.
"Fully parallel" is illusion....
Sure, if you only have one processor. Waterfall methodologies that I've endured in real projects, up to and including RUP, treat the entire project team as a single processor: the project as a whole is in one phase at a time. This is understandable, if not excusable, given that the main purpose of these phases is to produce the Impression Of Control.
XP et al allow individual processors to be in different "phases" at a given time. In fact, I can't remember the page, but I saw a mention of Unit Tests running in an automated loop "in the background" while code was being developed. Are we having "parallel" yet? -- Tom Rossen
How is parallelism an illusion? The iterative approach is no more than a restatement of a basic control loop. A control loop constantly generates output, compares it against the desired output, and adjusts. All three tasks occur simultaneously. What I have stated is that the Waterfall model is a different model from the iterative model. The waterfall model views software development as containing separate serialized phases while the iterative model views software development as ongoing parallel operations. Within the context of one model, the other is impossible, hence these must be two independent models. The only remaining question is to determine when each model best reflects reality (and never confuse models with reality).
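As a concrete reference point for the control-loop language being used, here is a minimal proportional feedback loop in Python (illustrative only, not from any cited source). Within one pass the measure/compare/adjust steps run one after another; across passes the loop runs continuously, and whether that counts as "simultaneous" or as rapid alternation is exactly what the next few entries argue about.

# Generate output, compare it with the desired output, adjust, repeat.
def control_loop(desired, output=0.0, gain=0.5, steps=8):
    for _ in range(steps):
        error = desired - output   # compare actual output with desired output
        output += gain * error     # adjust
        print(f"output={output:.3f} error={error:.3f}")
    return output

control_loop(desired=1.0)   # output converges toward the desired value pass by pass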
N.B. We're getting to something significant here, finally. Much of the rest of this page can be trimmed down, including all my amateur rhetoric. This is technical. Let's dive in with care and precision. I'll take the points one at a time.
How is parallelism an illusion? ... A control loop constantly generates output, compares it against the desired output, and adjusts. All three tasks occur simultaneously.
They may occur with minimum delay between them, but they can't be both simultaneous and meaningful controls at the same time. Here's why. If the evaluation is really simultaneous with output production, then evaluation has defective inputs, just like what happens when you judge my sentence when you're only halfway through. This is because output production, evaluation and adjustment are all processes as opposed to events. Similarly, if you are taking corrective action simultaneously with evaluating, then you're taking the wrong corrective action because again your input was incomplete (ergo wrong).
The waterfall model views software development as containing separate serialized phases while the iterative model views software development as ongoing parallel operations.
I've never heard it stated that way. I've always thought of iterative meaning just that; you take a small subset of the whole problem and explore it deeply (through implementation), then iterate for the remaining subsets. Maybe we want to check some sources on this. I can try to do that, but maybe the task belongs to you.
Within the context of one model, the other is impossible, hence these must be two independent models.
I'll agree that a model of absolutely simultaneous activity differs from the Waterfall model in which temporally and semantically separate stages occur. I should point out that in Waterfall, the semantic phasing is absolute while you can mess around with the temporal thing by changing the granularity of the problem. In other words, you could write some code first (without thinking at all), then go discover some requirements, then compare life with and without the running program to see if it satisfies the requirements. You could, and I'm aware that to a degree, we all do this from time to time, but is it a real strategy for software development? More importantly, can you map your control loop on it? When you do, does the mapping try to "tell" you anything?
Actually, this is a very common approach in software development. The programmer will take a "Hello World" program or another program he has previously written and use that as a starting point. Initially, the program meets none of his needs, but he begins to adapt it to improve its fit with its intended purpose.
The only remaining question is to determine when each model best reflects reality (and never confuse models with reality).
Glad your life is so simple! A resounding "+1" on the "never confuse" part. It's because those pointy-haired managers confused the waterfall model for a waterfall ''process'' that we have this mess in the first place. I'm casting some serious stones at your version of parallel "reality" above, so I'll wait to hear where we stand on that. I think, though, that the real "reality" question has mostly to do with the granularity you can tolerate/leverage while solving a given unique problem. Top-down approaches make sense when you have all or nearly all the knowledge you need to do a job, and you just need to organize it. Low frequency Waterfall (big chunks) is a kind of top-down strategy. When you try to use that in a setting where you don't have the knowledge, then you're forced to either guess (and commit to guesswork) or shift gears. And of course, it's the guesswork that isn't "real".
I don't advocate that anyone try to follow a waterfall process, except as an etude. It's a useful etude because you find out exactly how much you do and do not know, and how much concrete feedback it actually takes to build the thing. In reality, we have uncommon sense to tell us both when we are biting off too much (your criticism of Water Fall) and when we are biting off too little (what do you call that, by the way?).
Contrary to late popular belief, Waterfall is not an anti-feedback model. If anything, it's a fast-track feedback model, but as this name implies, it requires a skilled driver. I can write down on paper something like "user will keep getting the login screen (with error advice, if available) until login succeeds" much faster than you can implement that idea in code, and I can get feedback from the affected parties that much sooner (almost "simultaneously", if you must know). There are some concepts that can't be well verified without experiencing them (proof of the pudding and all that), but there are so many that can be dealt with this way, it's a shame not to. And of course the more you can reason correctly through abstraction, the better you get, and so on. From a bird's eye view, the goal of the Waterfall model is to provide early feedback, the mechanism being abstraction. Look:
START:
Specify -> Review -> Identify Errors -> START
START:
Specify -> Design -> Code -> Run -> Identify Errors -> START
The top loop is significantly tighter, but it doesn't help when there are surprises down the line. The bottom loop ferrets out the surprises, but it postpones feedback to achieve that. Which is the holy grail and which is the devil's forked tongue? Both and neither; why do we have to decide now for all cases? In the wrong context, they both suck. If you're hung up on needing broad consensus on which approach is right, then you're making the mistake that makes Water Fall suck. Stop doing that.
Just to reset the stage, I am merely suggesting that there are at least two different models of reality. A model is a simplified subset of reality and different models have different subsets. Different models give different views of reality, by definition; so to compare models to determine which is "right" is pointless. Use the different models to give yourself different views and come to a better understanding of reality.
The only way to evaluate a model is through its implementation. Unfortunately, the act of implementing the model changes it; aspects are added, removed, and changed. As you have noted above, the implementation of the waterfall may very well differ from the pure model of the waterfall, but the implementations are all we have to use to evaluate the model.
The iterative approach to software design reflects successive approximation, with each step being a change in functionality. We hope the step is an improvement, but it is not guaranteed; this is the equivalent of undershoots and overshoots in a control loop. In this model, there is no "understanding" of requirements, only improved understanding of requirements. Also, this model permits the implementation of the software to influence our understanding of the requirements.
Just as the "parallelism" in the iterative approach may be an illusion, so is the "serialization" in the waterfall approach. A requirement cannot be fully understood except in the context of its implementation. Using the log on example posted elsewhere on this page, what was validated through the paper model? What were the significant features of the paper model that needed to be included in the actual software? What were the insignificant features that could be changed or removed? What were the missing features? How long does the log on window take to display? How many characters fit within the user name and password areas? Is the password displayed in clear text? Are there different modes of operation which should also be selected at log on? Is the user name remembered between log ons? The password? If the user name or password needs to be changed, what does typing a character do: append the character at the end, replace the existing text with the typed character, something else? Understanding the requirement of a log on is difficult and goes through iteration as the implementation improves understanding of what is required. Note how switching models affects the view of reality. It does not say the other model was wrong, but it certainly gives a completely different view.
The answer is that to the extent that any of those details are critical to acceptance, they might be written into a simple description that the user can understand and approve. It makes no sense to argue that because there are infinite details to a logon, there is no point in trying to describe the features that matter. In the example above, one critical feature of the application is that you can't use it (at all) until you log on. This is taken from a project I'm now doing. Some of the program's functions are "safe" for anonymous users, while some aren't. The sponsors want to sidestep the intricacies and require logon for all users all the time. The requirement says that. It short-circuits tons of wasteful coding that might provide examples of how the system looks with a "late binding" logon as opposed to the one decided on.
Not all models are created equal. It's easy to create a model that doesn't make sense in the real world. Your model of "parallel phases" is such a model. While in some respects a rigid Water Fall "doesn't make sense", we're at a different level of semantics in saying that. Parallel phases don't make sense in the same way that Escher drawings don't. Lengthy phases may be suboptimal in most familiar development contexts, but they make logical sense at least.
The logical sense of Water Fall makes a huge contribution to the management of projects, even if it lends itself readily to misapplication. That contribution is something unique to human intellect. The ability to describe what does not exist but is desired, to use that description as a standard by which to build, and then to use it again to verify that what was built was what we wanted - that ability provides a complexity-taming Separation Of Concerns that is essentially the same as the separation between interface and implementation in object oriented and other strongly modular approaches to programming.
The answer is that to the extent that any of those details are critical to acceptance, they might be written into a simple description that the user can understand and approve.
Does your definition of waterfall require the simple description to be written in prose or does it allow the details to be written as a running program?
The description has to be in the optative mood, in other words, it has to identify an existing condition and a wished-for condition, and the wished-for-ness has to be explicit in the language of the description. If you can do that in a programming language, it's fine (but can you?). Here's a caveat: pointing to the execution of a program and saying "There's your requirement" is bogus, because it lacks a minimum second point of view. Something cannot be its own standard without eroding the meaning of "standard". Validity is always a matter of comparison. I think that gets to the heart of your question.
Okay, when is a requirement considered defined? Can a requirement be considered defined before it is validated?
"Defined" may be too strong a word. It's possible to do very good work with requirements that are clear and complete enough, but by no means rigorous definitions in the formal sense. Strictly speaking, it makes no sense to validate something that is lacking the expected level of definition (being careful to avoid absolutes in that), just as you wouldn't attempt to validate a not well-formed bunch of XML. Similar to how Water Fall is "defined" (well-formed) but not "validated" (in simplest form) for use on your project.
Okay, when is a requirement considered "clear and complete enough" to move from the requirement phase to the next phase? How is this determined?
When the people who care about it think it is. You seem to be fishing. Catching anything?
Nothing yet. Still trying to get a definition of what the Waterfall Model is. So far every time I propose something I just get told, "No, it's not that." If the purported serial phases exist, surely they must have starting points and ending points? What are they?
Have you read the Royce paper? It's probably as "definitive" as you're going to get. We could focus our questions and answers on that material if you like. Careful, though, because Water Fall is a lifecycle model, not a process model, the latter being a more detailed thing defining entry and exit criteria for process steps and all that. Also, you will find reasonable analogs to your questions in XP, if you look for them. For instance, relating to the "write unit tests first" dictum: When is a test considered complete enough to begin coding? How is that determined?
A Unit Test is considered complete enough to start coding when it fails.
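A minimal sketch of that rule in Python, using the standard library's unittest module (the names parse_login and test_rejects_blank_user are invented for illustration):

# The test is "complete enough" once it runs and fails; only then is there
# something definite to code against.
import unittest

def parse_login(text):
    raise NotImplementedError   # no production code yet; the test must fail first

class LoginTest(unittest.TestCase):
    def test_rejects_blank_user(self):
        self.assertFalse(parse_login("  :secret"))

if __name__ == "__main__":
    unittest.main()   # runs red now; writing parse_login is what turns it green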
No I haven't read the Royce paper. Perhaps you could provide a summary?
I have read the Royce paper. It describes the 'classic' Water Fall and says of it that it "is risky and invites failure". It recommends doing design before analysis. It recommends 240 pages of spec per million dollars of system. It recommends that you Plan To Throw One Away. It recommends testing everything (but doing so at the end). It recommends involving the customer. The conclusion I take away from reading the acknowledged defining work on the topic is that the industry has been, for 30 years, using what Royce used as a Straw Man to recommend fairly reasonable things for his time, and calling it Water Fall. -- Laurent Bossavit
Let me propose a strawman for discussion. An aspect of a requirement can only be understood or communicated through an example. The example describes the aspect of the requirement, its implementation, and a means of verification, thus each of these become known simultaneously. These are inseparable, parallel actions. As more and more examples are known, more aspects of the requirement are known. The requirement is never fully known, but our understanding of it should continue to increase.
There are two ways I can record my understanding of a requirement. I can record it in a written document or I can record it in executable software. The waterfall approach has me record the information in a written document (or more usually a series of written documents) and then generate the software from the written document. What I am calling the parallel or iterative approach has the understanding of the requirement recorded directly as software.
Real-world problems constitute an example- and problem-space that's not small according to your local measurements (otherwise, you wouldn't regard them as problems). When a space is sufficiently large, there is typically more than one way to traverse it. For example, there are depth-first searches and breadth-first searches. Your example sounds like depth-first, while strawman Water Fall sounds like breadth-first. Breadth-first resembles batch processing, in which the process is uniform across an entire tier. Depth-first resembles piece work, in which you never have large quantities of intermediate results lying around. Each addresses certain risks and ignores others. In reality, you can search that space freely, now delving deep, now reaching broadly. Your sense of effectiveness - a combination of risk awareness, ROI-sensitivity, self-consciousness, etc. - is the guiding force. Models merely describe some of your options regarding that search. A good model is simple, and like a good tool it works well when it works well, and it works dismally when you try to force it to fit. It also works well when you use it, rather than the other way around.
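As a concrete rendering of that analogy (a sketch only, with an invented three-feature project), depth-first takes one requirement all the way down to code before touching the next, while breadth-first finishes an entire tier across every requirement before dropping down a level:

# Illustrative only: the same tiny "project space" traversed two ways.
# Depth-first resembles iterative piece work; breadth-first resembles the
# phased, batch-style Water Fall.
project = {
    "login":    ["requirement", "design", "code"],
    "search":   ["requirement", "design", "code"],
    "checkout": ["requirement", "design", "code"],
}

def depth_first(project):
    # one feature at a time, all the way down to code
    return [(feature, tier) for feature, tiers in project.items() for tier in tiers]

def breadth_first(project):
    # one tier at a time, across every feature
    return [(feature, tier)
            for row in zip(*project.values())
            for feature, tier in zip(project, row)]

print(depth_first(project))    # login/requirement, login/design, login/code, search/...
print(breadth_first(project))  # login/requirement, search/requirement, checkout/requirement, ...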
That said, you understand what you do about this largely because you can fit it into a framework known as Water Fall.
Perhaps you should define your use of waterfall to support your claim that everything is waterfall.
I've tried in the past to "define" waterfall for the purpose of enhanced discussion, but failed to trace waterfall to its historic roots. Please help me do that, if you have a lead. The best reference I have is the Royce paper (see top of Waterfall Model), but that's incomplete. In the first place, Royce never says "waterfall" in the paper, and in the second place, a waterfall-like lifecycle is already assumed at the time of the Royce writing, but apparently without any emphasis on prototyping or code design.
You appear to have stripped all meaning from the term.
Probably only the meaning that you love to hate. If so, I'm succeeding in my crusade.
I gather that the real beef against Water Fall (or whatever) is the dogmatic persistence in staying "in phase" when common sense dictates otherwise, and I gather that there is a maidenhead of resentment built up against managers who obliviously steered projects this way, defeating all the talent inherent in its people. I share the resentment over that practice. However, I would rather that people understand the depth of the waterfall model and the criticality of appropriate application than to think that we've shifted paradigms so that the model is no longer relevant. That smells too much of reinventing the wheel.
I also acknowledge that total departures are sometimes called for when a pattern has gone wrong yet remains too strong an attractor to allow improvement within the system. Could it be that we've had our bloody revolution against you-know-what, that we're "better" now, and we're secure enough to allow the entry of a historical perspective on software development?
Cheers to Walden for picking up the task of defending the bad-mouthed Water Fall model. I happen to agree entirely with his viewpoints. It makes a lot of sense to review whether the bad results of the Water Fall approach (by the way, haven't there been any good results at all?) were consequences of logical flaws in the model, or consequences of bad practice. It's also worth exploring whether differences between Water Fall and other development models are logical/structural differences or just differences of accent, nuance, etc. In particular, approaches that start a priori with all kinds of prejudiced ideas like "requirements are bad", "documentation is bad", Big Design Up Front is bad and so on, no matter what the context may be, are definitely fishy. -- Costin Cozianu
Neither kinder nor truer words have ever been spoken, Costin. Thank you. -- wm
This is really wonderful stuff, Walden. Coming from a Large Scale background, much of what you say rings true. About two years ago I was involved in a project where we built a network management application suite together with a couple of remote design centers. The mess that resulted from conflicting and uncooperative technologies could have been mitigated if we had considered the interdependence of requirements earlier on.
Consideration of this interdependence should also help a project get the most out of Large Scale Reuse. The ideal scenario for my design team would be one where all the standards, protocols, technologies and tools were presented to us (packaged in our favorite language) at the start of a project as if they were the output of a single XP project held for our benefit. And then if all the other design teams and sub-projects also used this set of tools by default, life would be a lot simpler for us. This, of course, is never the case. In the rush to go agile I had forgotten that there are certain decisions that can't or shouldn't be made ad hoc.
This suggests that small iterations have certain pre-conditions, one of them being a stable technology base. Are there any other parameters controlling iteration size? You already mentioned skill sets. One thing I have noticed is that the developers often are the experts (sometimes they are even the customers in a sense). When this happens, up-front analysis and design starts to sound reasonable again. -- Chris Steinbach
Chris, the kind words are much appreciated. I come from an environment where "agile" is like water to the fish, so pervasive it goes mostly unnoticed. In that environment, I observed many of the kinds of failings you describe above, so my inclination has been to push in a particular direction in search of "center". But others have had contrary experience so that "center" is someplace else and I appear to be the enemy, or at least someone barking up the wrong sluice.
You're saying that only when the technology base is stable can you afford small iterations? I'm not sure we've got quite the right separation there. I see it more like that a stable technology base (some consistency to deployed technology) has to be maintained as a project goal, but I wouldn't say that long iterations serve better when technology is up in the air. Was that your meaning? -- Walden Mathews
Yes this was my meaning. Ensuring the necessary technological stability for the first productive iteration is going to take some time and is therefore a limiting factor. This, however, says nothing about the size and form of the following iterations. In the datacom/telecom industry we are ruled by technology. Every new project comes as a shock to the system. The projects tend to be structured as a Water Fall with an iterative model in the middle. -- Chris Steinbach
What technology needs to be stable and why is it necessary?
I mean the technology used in development, deployment and operation. What's the worst thing that could happen if you ignore this? You might have to rewrite the whole code base if, for example, the chosen language is found to be unsuitable. You might waste time and write a lot of code that is later provided by a 3rd party component. Above all, you can't estimate properly without this stability.
A strength of any method must be to what extent it functions as a guide. XP provides a lot of guidance. It suggests a number of practices with complex interrelations. These map to a structure of continuous refinement using small iterations. To jump-start this whole process and to overcome what are considered minor difficulties, some standard advice is offered. But the simplest project model that offers any guidance is, almost certainly, Water Fall. While XP requires some amount of learning to make it 'pop', Water Fall tells you what to do, almost in bullet point form. If XP advocates are serious and are not just repeating slogans, then in the spirit of Do The Simplest Thing That Could Possibly Work they should start with Water Fall as a project model. Any premature attempt at adding practices, iterations and structure should be met with a firm and resounding: You Arent Gonna Need It. -- Chris Steinbach
You might have to rewrite the whole code base if, for example, the chosen language is found to be unsuitable.
Short of writing some code, how are you going to determine whether a language is suitable? If rewriting the code base is a risk, I would think that would be an argument in favor of a short first iteration, not for a long one.
It's a tough question. Sometimes it is the mixture of tools that decides. Take a network management app as an example: I may have customers who want to access the management system using a certain CORBA version, while on the network element side I have a mixture of CMIP and SNMP interfaces. It can also be the core competence of a design team or a project that decides.
I think you are correct to say that writing code is an important part of this process. But now we are not talking about production code right? Deployment involves all manner of awful things such as licensing, backwards compatibility, upgrade with data migration. Maybe I have overstated the case for technological stability during development and understated the deployment aspect.
Also, please explain the statement about the Waterfall being the simplest project model.
Read above, I say Water Fall captures the project structure 'almost in bullet point form'. I don't want to suggest that there is nothing more to it, but it is simpler to explain, conceptualize, learn and (conditions permitting) realize than most other project models. If only because there is less to it.
However, I only say that Water Fall is the first place you should visit. You might not want to stay there. -- Chris Steinbach
I will agree that most projects usually are structured along the lines of the waterfall method, but I don't think it necessarily follows that the waterfall method is the most natural or appropriate method to use for most projects. I contend the parallel approach is the more natural and simpler approach and is the one most developers will follow without an outside force causing them to use the waterfall. The simple control loop is the model most often used for other human activities, why not software development?
I think that I have concentrated on structure too much here. The technological problems I started out with ought to be solved, in part at least, by standardization and improvements to technology. My appeal to simplicity is also bogus. Both XP and Water Fall have simple structures at some level. These structures are not useful until they combine with human activities. -- Chris Steinbach
A strength of any method must be to what extent it functions as a guide. XP provides a lot of guidance...
It's important, in discussions like this, to beware of comparing apples and oranges, such as Water Fall and XP. Water Fall is a lifecycle model, something quite abstract and insufficient on its own to guide a development effort. XP, as a collection of heuristics, fills a different role in the software process landscape. The manufactured dichotomy of "XP versus Water Fall" is, frankly, childish, and bears all the shtick inherent in adolescent upheaval and identity search. Nor does this attitude forward the helpful cause of XP, big picture. -- Walden Mathews
Comparing and contrasting different models is the only way to reach greater understanding. We must always be careful not to allow this process to degrade into one versus the other, because at that point all learning shuts down.
I think in this case I really did make the wrong comparison. However, we don't have to accept my overly abstract characterization of Water Fall. Once you start to reintroduce all the details from feasibility study to review, you have something much stronger than a skeletal lifecycle model. You have something that can be (and has been) used as a guide.
Without wanting to pursue the Water Fall Vs Xp motif here, I do have one last comment. If XP and Water Fall really are like apples and oranges, then this must be a result of, not a barrier to, comparison. But don't they have some overlap? They are, after all, both aimed squarely at supporting software development. If they are at odds with one another, then this only makes the comparison difficult, not necessarily less rewarding. -- Chris Steinbach
To appreciate the difference in roles between Water Fall and XP, try mapping the activities of XP onto the Water Fall model. We can do that here. I'll start.
User stories -> Requirements
"Tests" written before "code" -> Specifications
Refactoring -> Design
etc.
In other words, XP can be understood in terms of Water Fall. Can that be stated the other way around? -- Walden Mathews
How do you explain the onsite customer? The point about XP is that it uses ongoing requirements definitions. And tests are not fully written before the code, but developed in conjunction with the code.
The "onsite customer" rule is a requirements feature; the "ongoing" part is an elaboration of the model. "In conjunction" is not process language because it doesn't imply an ordering, yet ordering exists to the careful observer. Doesn't the rule say "write tests first"? Doesn't "first" mean "before something"? -- wm
I believe the phrase is "test first coding" and it is really trying to emphasize not writing the tests after the code. The definition of test and the code being tested should be the same. They are two alternate implementations of the same concept that must be brought into agreement. In practice, you will find yourself constantly alternating between adding, modifying, and correcting the test code and the tested code. For many languages, the code to be tested is the first code actually modified, if for no other reason than to add the method to be tested. Imposing sequential operations inappropriately is one of the difficulties with the waterfall model.
I'm not sure what the phrase is, but I can relate the gist of an email I exchanged with Kent Beck a couple of years ago in which he said (almost verbatim), "Have you tried writing your tests before you write your code? This to me was one of the most powerful insights...".
Sorry if I seem to be splitting hairs, but I don't believe you are observing what is really happening. "Not writing the tests after the code" is equivalent to writing them before the code, unless you type with both hands on two keyboards at the same time. That would be extreme!
"Constantly alternating between adding, modifying ...". Precisely. You alternate, which is different from doing them simultaneously. You seem to blur the distinction in the name of XP, and I don't think that's "cricket", but since I'm no XP XPert (heh), I wish someone who is would chime in at this point. -- wm
The argument is the opposite: the claim being presented is that requirements, design, and test are not separable tasks. The blurring comes from trying to take micro-increments of time and map them into "phases." A requirement is defined by the test that validates it. The test is validated by the use of the implementation. Characteristics revealed during the use of the implementation both validate and modify the understanding of the requirements, tests, and implementation. All three types of operation are completed at the same time (assuming that they ever can be completed). What may be less obvious is that all three types of operation begin at the same time. Furthermore, the three items are so tightly coupled that it is not possible to determine where one begins and another ends. The realities suggested by each view are somewhat different, so it is not surprising that semantic difficulties arise when mapping one to another. A highly iterative approach is quite different from a waterfall approach; one cannot be mapped onto the other.
One of the main purposes of good design is to produce good code. We are only human, and often cannot recognize bad design except in retrospect, by seeing the bad code that the design results in. The fallacy of waterfall, perhaps, is that design can be done without reference to code.
''A different view of the fallacy of the waterfall is that it consists of repeated approximation. A requirements document is written that approximates what the user wants. A design document is written that approximates what is in the requirements document. Software is written that approximates the design document. No wonder the final software differs from what the user originally wanted. As for cost, we have written the same thing three times in three different forms. No wonder the cost and the cost of change go up.''
www.artchive.com
Does it not always somehow seem to come back to haunt you? Ahh, the above mentioned retrospect...
At least one paper on the history of software development (I cannot recall the authors or title) has suggested that the position of the Waterfall Model as the 'traditional' design model is itself specious, a myth established by a misunderstanding of the Royce paper - but not the misunderstanding usually attributed. The argument was that the majority of papers, design documents and press releases which mention 'waterfall' did so in the same way Royce did - as a Straw Man - to contrast with the approach the paper claims to describe. That is to say, they asserted that Waterfall Model was not the traditional design approach, but rather the traditional Straw Man which all designers sought to refute, and that 'waterfall' was never actually a live practice.
See Waterfall Model, Job Security, Water Fall Myths, Design Approach Tina, Falling Water, Customer Information Analysis Design Coding