Could this be the root of all evil in software development?
Note: this page isn't about Bug Free Doesnt Sell; it is about Bug Free Costs More. What if that weren't true?
Sticking my most cynical hat on here (and I hasten to add that we do not follow this practice in my company), a new company wishing to launch a product into the marketplace might draw up the following list of priorities:
1. We get it done before our competitors.
2. We have more features than our competitors.
3. It works well enough for a demo.
4. It doesn't wipe the customer's hard drive, destroy his database (too often), or eat his lunch.
5. It works well enough that it won't crash in a few months.
6. It works well enough that it will never crash.
Points (1) and (2) help you sell the thing. Point (3) helps you sell it to doubters. Points (5) and (6) help you in the 'long run'.
The Critical Bug concept results in releases that aim for the point (4) that I added. -- John Carter
But you ain't gonna have a 'long run' if you don't have a short run first.
Your evil (maybe) marketing man tells the engineers: I cannot put a feature on the product description if you haven't implemented it. I cannot put it in if you haven't at least tested it well enough to get past point (3). However, if the feature doesn't work particularly well that doesn't matter to me. If this company is going to succeed then we need whizzy features quickly even if they haven't been fully tested.
This sort of attitude tends to murder most of what Extreme Programming is all about, because it is basically anti-quality. In particular, engineers will leave all testing to the end of the development cycle, because it is better to compromise on testing than on features. When testing does take place, it will inevitably be Testing By Poking Around, and Code Unit Test First seems like total madness (nobody buys Unit Tests). With testing left right to the end, there's no longer any point in short iterations. We're back to Test Last and the evil old ways of doing things.
I expect most companies are guilty of this sort of prioritising at some point in their lives. With a startup company I haven't really got an argument against it; you don't win market share by being ethical. However, I think Bug Free Doesnt Sell is an attitude virus which spreads into organizations and projects which don't need to think along these lines at all. It works on the fear that you might spend too much time testing and miss your market window, or lose to a competitor who competes on those same unethical grounds.
It's there, and we have to face up to it. My only defence at the moment is to fight it when it's there by default rather than there with justification.
This sort of thing is common in the simulation industry, too. A contract is made with the government to provide software with certain features on a certain date. If you deliver software that can be said to have all the features, even if they don't work for the most part, you will get paid. If you deliver software with less than the required features but each feature present works perfectly, you do not get paid.
Having worked in a venture capital startup recently, Jeff Grigg would suggest...
Extreme Programming techniques can be a big win for venture capital startups:
Having limited time and money to spend before the product must become profitable, the unexpected schedule slips typical of conventional development (Big Design Up Front) can be a killer.
Having to develop a customer base (i.e., find your first customers) while the product is being created means that you can't know the requirements in advance. Each new customer you find will force you to change the requirements. (This is normal; trust me.)
I was reading an article about the founder of digg.com. He basically concurred, saying that he learned it was best to focus on adding features only when feedback from users points that way.
Having a product that works early on (providing minimal functionality) and which improves at a predictable rate, without regressing, is very comforting to both investors and customers. Iterative development with Unit Tests is a big win here.
Having worked in a venture capital startup recently, Jeff Bay would also suggest...
Extreme Programming is perfect for startups, even granting that bug-free software doesn't sell. Because of the Ratchet Effect, low-defect software can be developed faster. So even though the marketers may not need bug-free software for the customers, marketers sophisticated enough to understand the Ratchet Effect will still want it.
To put it another way... the customer is always right. -- Eric Ulevik
That's only half the story. The other half is "Don't work for wrong customers", meaning that if they want you to do crappy work and doing it offends you, don't do it. Keep your standards and your character. -- Walden Mathews
The assumption here is that software, by definition, is buggy. One of the assumptions behind Code Unit Test First is that you can write bug free software. The concept is that the software is always in a fully functional (as opposed to fully featured) state. Fixing problems after the fact is painful to all those involved, from programmer to end user.
Extreme Programming is about the only methodology I have seen which actually tries to prevent the creation of bugs, not merely focus on trying to remove them later. This is the true meaning of quality.
I am a strong believer in the idea that It is at least as easy to create good software as bad software. And writing good software is the only way to end the time-consuming CMM/ISO procedures documenting bug removal.
-- Wayne Mack
The assumption here is that code takes longer to develop and test thoroughly than to develop and test minimally. Writing bug-free code does take longer (in the short term) if only because of the extra time needed to verify that it is bug-free. Eventually, of course, you pay for your short-cuts. -- Richard Develyn
I think it depends upon what you mean by short term. In most settings, I would categorize short term as being the next release. One of the most unpredictable and time consuming phases of most releases is the integration test phase. Relying on verification after the fact is the cause of this. The assumption of buggy software is what leads to the attempt to test thoroughly. If you can get beyond the belief in buggy software, you start to question the notion of 100% testing. It's a difficult leap, but one software as a profession needs to make. -- Wayne Mack
Extreme Programming is not the only methodology that prevents insertion of bugs (rather than trying to find and remove them later). Big Design Up Front is, by its very nature, an attempt to avoid making big mistakes (big bad bugs). Cleanroom Software Engineering, and any approach that has design or code reviews, is a serious attempt to eliminate most (if not all) bugs before the program is run the first time. -- Anonymous Donor
One of the assumptions behind Code Unit Test First is that you can write bug free software.
I have to disagree with this. I think Bug Free Software is an ethic that is ultimately un-provable. I think we should always try to write bug free software, and Code Unit Test First is a good way to show your allegiance to that ethic. However, it is an impossible goal, or perhaps it is better to say a goal whose achievement cannot be proven. Someone here brought up the idea that Knuth talked about Perfectable Software as an alternative to Bug Free Software. -- Robert Di Falco
I think most sane developers and managers don't intend to write software that is buggy. Bugginess is usually a product of:
1. Not knowing how to write quality software.
2. Using too many shortcuts (which in the long term slow you down anyway) because it has to be delivered yesterday.
However, most developers and managers expect to write software that is buggy. The waterfall model has Big Design Up Front and a large integration test finish. The whole concept of doing more and more checks, e.g. design reviews, code walkthroughs, Unit Tests, system tests, etc., is based on the belief that the software contains a high number of problems. As the error rate falls, these steps cease to be cost-effective.
I fully agree with statement 1 above, I just believe it is an industry-wide problem. As programmers, we do not know how to write quality software. This is why I find the practice of Code Unit Test First so interesting. What you are doing is a micro-level analysis of a class's inputs and outputs before you begin coding. This level of analysis, I believe, far exceeds what is expected of something like Design By Contract.
-- Wayne Mack
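The micro-level analysis Wayne describes can be sketched as follows. This is a hypothetical illustration (the Stack class and its tests are not from the discussion above): the tests are written first, which forces the class's inputs and outputs to be pinned down before any implementation exists.

```python
import unittest

# Hypothetical Code Unit Test First sketch: the tests in StackTest were
# conceived before Stack existed, fixing its interface and behavior first.

class Stack:
    """Minimal implementation written only after the tests below."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    # Each test pins down one input/output decision in advance.
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)
```

Run with `python -m unittest` against the module. The point is not the Stack itself, but that every decision about its interface was forced out into a test before coding began.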
I am of the opinion that a complete set of unit tests is the specification of the contract, and that the formal methods of Design By Contract are a special kind of unit test. If Design By Contract pre- and postconditions were complete, they would have the same expressive power as unit tests, and our preference for one or the other would be less critical: it should be possible to translate each unit test into a pre- and postcondition, and conversely, automatically. I therefore do not make a hard distinction between them and will use both. I think Design By Contract will prove to be the more useful notion over time, because it may yet be possible, someday, to check the pre- and postconditions themselves for completeness, but for now I am content to consider either acceptable. I do not think we should see them as orthogonal concepts, however, but as different methods of expressing the same idea. -- Marc Grundfest
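The translation Marc describes can be illustrated with a sketch. The `isqrt` function and its contract are hypothetical examples, not from the discussion: the same obligation appears once as Design By Contract-style pre/postcondition assertions inside the function, and once as a unit test checking it from outside.

```python
# Hypothetical sketch: one contract for integer square root, expressed
# both as Design By Contract assertions and as an equivalent unit test.

def isqrt(n):
    """Largest integer r with r*r <= n."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition states exactly what the caller may rely on.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

def test_isqrt_contract():
    # The unit-test form of the same pre/postcondition pair: for every
    # valid input tried, the result satisfies r*r <= n < (r+1)*(r+1).
    for n in (0, 1, 2, 3, 4, 15, 16, 17, 10_000):
        r = isqrt(n)
        assert r * r <= n < (r + 1) * (r + 1)

test_isqrt_contract()
```

Either form could, in principle, be derived mechanically from the other, which is Marc's point: they are two notations for one specification, not orthogonal concepts.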
I kind of agree with you, Wayne (except I wouldn't take pot shots at reviews in order to make Unit Testing look better), but I think the truly interesting thing about Code Unit Test First is that it is a clever way to get coders to write specifications without their knowing it. Similarly, code refactoring is a clever way to get them to do analysis. It's like orange-flavored Tylenol. The core activities of software development aren't going to change, just the names used to describe them. Reinventing The Wheel is largely about adopting fresh, new names, I think.
I fully agree that many developers and managers expect buggy software, but I think it's only because they haven't known anything better. These people are often allergic to the techniques of quality work (like kids are allergic to sitting still for five minutes), but notice that they can't argue with your superior results!
When in a quality hostile environment, software developers could try to unite around a very basic principle that I call Fix Bugs First. I wonder why they don't. -- Jean Hugues Robert
Are there any real metrics for defect rates in XP? One thing I find irritating about XP, at least as preached by the more noisy advocates, is the claim that it leads to bug-free software. When I examine these claims more closely, it seems that they re-define a bug to mean something that fails a test, which seems a pretty poor definition of a bug to me. I'm not sure how you'd really measure this - are there any commercial or widely deployed, public applications developed with XP that you can check defect rates from version to version? Test-oriented development is very interesting and appealing to me, but I'm not sure that it'll result in objectively lower defect rates.
Are there any real metrics for defect rates in any methodology?
To be honest, XP does re-define the word "bug" to reduce its defect rate, but for a sane reason. The usual definition of "bug" is "behavior you don't like". XP breaks this down into "behavior that the onsite customer doesn't like" (dev team's fault) and "behavior that the onsite customer likes but you don't" (onsite customer's fault). Basically, XP denies all responsibility for business decisions. As far as eliminating the first type of bugs, Test First Development reduces but can't possibly remove all bugs. Tests Cant Prove The Absence Of Bugs, after all.
Oh, please. You sound like top trying to avoid discussing hard science in software development engineering. As has already been pointed out, there exists a large body of research into the science of defect reduction and prevention. Trying to pretend such science has not already impacted our profession is like claiming the sun doesn't rise in the east.
And as for defining "bug," how about the simple definition that the late Phil Crosby came up with decades ago: non-conformance to requirements. If you can define what a product (piece of software, whatever) is supposed to do and it doesn't do it, then that's a bug. Simple.
Re: "sound like Top" : For the record, I don't avoid hard science for SE, but merely point out there is none known to apply outside of machine performance metrics. Also note that some mistake "theory" for science, a form of Argument By Elegance. --top
Well, topper ol' buddy, Phil Crosby, Michael Fagan, and others who have studied the science of defect prevention have a lot of statistics to back up their assertions. There need not be any arguments of any kind when you can just point to measurements. Science will prevail, just like in that old Star Trek episode.
Contributors: Jeff Grigg, John Carter, Richard Develyn, Jeff Bay, Eric Ulevik, Walden Mathews, Wayne Mack, Robert Di Falco, Glen Stampoultzis, Marc Grundfest, Jean Hugues Robert, Marty Schrader
See original on c2.com