Analysis Paralysis is a term given to the situation where a team of otherwise intelligent and well-meaning analysts enters a phase of analysis that only ends when the project is cancelled. [Warning: cynicism]
Frequently in such situations, designers and developers are already staffed but have no work to do. They are given busy work and training just to keep them from quitting before the analysts deliver. This waste is often the reason for cancellation.
Common causes are:
The lure of infinite composability and decomposability
Insistence on completing all analysis before beginning design
Regular change of leads and their philosophies (each trashing and restarting the work of the previous)
Too many learning curves at once (underqualified analysts), causing incessant revisiting of prior work
Lack of goals
Increasingly conflicting goals (often political)
Creative speculation when discovery and definition are required
Big Project Syndrome: this one will do it all, will use the latest tools, will use a new paradigm, will use all new developers, will start with a clean slate, will handle all use cases of two or more existing systems in the first release, etc.
Risk avoidance, fear of making a mistake
Yes to Big Project Syndrome especially! This seems like a big problem in highly technical crowds, or where there is less client pressure. Being a hopelessly impractical academic myself, I often fall victim to this and try to conceive of the most flexible architecture the world has ever seen, but then the project fizzles because it is too hard and too impractical. Does anyone have a good pattern, paradigm, or mantra to help us let go of our visions of the Ultimate Architecture? -- Rus Heywood
In one project (no names mentioned), I found a fairly large and bright team of people who were underqualified in OO analysis and design methods, who spent well over a year and produced only a few slim documents (which they'd gotten from an outside source only weeks before their presentation). Analysis Paralysis had set in, and nothing had been done. The project was cancelled as a "failure", creating a negative emotional association with OO methods in the company.
Whatever the particulars, analysts go into analysis...but they never come out.
When the going gets analytical, the analytical turn pro.
Keep anyone playing with paper for a while and they start to feel it. They work and work ... and see no product. They begin to lose sight of why they're working in the first place. No feedback means no quality in what they do. The sheer abstraction of their work means the only testing it gets is against other analysts' models - and then the analysts start trying to integrate their models...
Months pass. Management gets concerned and starts calling meetings. The developers, to justify the time they've spent, wrap the managers up in a Gordian Knot of diagrams and charts. The meetings become the project. Everyone's busy, everyone can see the deadlines looming, but no one has the courage to break the gestalt by actually doing some implementing...
Solutions:
Keep models small. Never integrate them. Building a bigger model doesn't add knowledge - it destroys knowledge.
You don't have to go the whole XP hog, but make testing drive analysis. How's that? Here's a requirement; build a test harness for it. Oh, you can't? Well, why not? Do enough analysis to know how to build the harness - and then stop! (See the sketch below.)
Don't employ "analysts". Employ developers. If a developer doesn't know how to analyze a requirement, they'll soon learn; if an analyst doesn't know how to develop a solution, their "analysis" is worthless.
If you are management, refuse to review technical documents. Review working functionality. If you're not seeing new functionality every cycle, kick butt until you do.
Employ a professional architect. Just one architect is what you want - never more than that. The architect doesn't have to be team leader - in fact it's probably better if he's not. He's not responsible for analyzing the project's requirements either. He's responsible for providing generic tools to coordinate and support the other developers. Let no one else call the shots on infrastructure, and for f*ck's sake don't go ballsing the thing up by trying to mess with it yourself. Nor let anyone suggest that architecture be decided by voting or meeting - a sure recipe for Analysis Paralysis.
Start your project with one requirement and an architectural prototype. Don't give 'em more than a month to code it. Yeah, it'll be lame - but it's there, and it's code, and it works. Now let 'em refine and replace and refactor - that's real work. A project without a prototype is like a candle without a wick, which is how Analysis Paralysis really happens.
-- Peter Merel
Peter: Wonderful! -- Ron Jeffries
Peter: You are definitely right! -- Mauricio Naranjo
Peter: Thanks for writing it so clearly. Can someone tell me how to convince a client that the project should have small incremental cycles, starting with a quick prototype and/or proof of concept? -- Sanjay M
Peter: "cycle"? What's a "cycle"? Don't tell me its one of those things programmers do to waste time --Pointy Haired Boss
"Building a bigger model doesn't add knowledge - it destroys knowledge."
Thanks so much for saying that. Last week I saw a model of a villa in an architect's shop in Corsica, one the architect had designed himself. It was beautiful. I assume it was very useful in imparting knowledge and gaining feedback from the customer, in advance of building. Software "models" somehow seem to do exactly the opposite, unless they're both executable and usable.
I don't think so. What if the architectural model had had every brick drawn on paper? IMO this is the problem with SW models: you have to keep an adequate granularity in any given model. Then you decompose it into parts, which you again model at an adequate granularity. When you're finished doing so, you have just finished writing the source code.
"Employ a professional architect. Just one architect is what you want - never more than that ... He's not responsible for analyzing the project's requirements either. He's responsible for providing generic tools to coordinate and support the other developers."
Hmm. Does the architect, so defined, make decisions on "infrastructure" without reference to "requirements"?
Maybe, maybe not (it isn't specified therein), but you need him all the same. Obviously he should be somebody who has a lot of development experience... -- Daniel Knapp
[How can building a bigger model destroy knowledge?!? I'm confused by this statement.]
When you go into too much detail, a lot more questions arise, and you can't answer them all. Or they seem to have a good answer only if you give up some assumption of the previous little model. Maybe you should; so maybe you didn't know enough to "correctly" build the first model. Suddenly, you no longer know what you previously thought you knew. As I see it, that is why a bigger model "destroys knowledge". Of course, you can get the very same result by coding the first model, then trying to code in some more detail. I always thought that it is better to change the model than the code. People here seem to think that it is better to refactor the code than the model. Indeed, at least that way you can test what you think you should change. (And that is why I love this page :-) I still get into Analysis Paralysis :-) -- Om Candea
Ron, that may or may not be the case. I think it all depends on how the analysis is done. It's true that you can't figure out all the details beforehand, because if you did, you'd actually be coding in English, which is a mess. The analysis should be limited to a broad scope, or be done as an overall picture plus a game strategy. You define what the puzzle probably looks like, you identify your gray areas and your puzzle's areas in green (grass?), and then you start coding (trying to actually fit the pieces, maybe first separating by colors and other signs (strategy), then trying to complete some small areas). The puzzle analogy is only two-dimensional, but I think it's useful. If you have a puzzle with 200 pieces, things may be easy and no analysis is required: you start trying to fit things into place, and refactor the done areas, until you have the puzzle solved. But if it's a 3D puzzle of 5000 pieces, and some surfaces have different rules for fitting, then you need some vision and some strategy. You don't care about implementation details or the little details, and certainly the working "papers" shouldn't be longer than 10 pages. If they are, then I'd say time was wasted. -- Fede Rico
That's such an intriguing idea I'd like to elaborate on it for a moment. There are details at any level of resolution in a model. Perhaps he meant: choose a level of detail for your model that doesn't make it so thick with detail that you can't see the forest for the trees. Sometimes projects with many pieces have specific rules for how those pieces interact that need to be understood. You could look at it like a top-down diagram: on level one, you can see only the entirety of the 3D puzzle as a whole (a huge penguin, for example), with no distinctions between parts. After breaking it down, on level two you notice that there is a distinction between the fins and the beak and the rest of the body. The next distinction you might make happens within those previous distinctions: we distinguish between colors and textures. The making of distinctions can continue infinitely (in real reality... because of the granularity of computers, perhaps there is a limit when making distinctions in software); the art consists in knowing when to stop. It's probably similar to the art of deciding when a bit of code should become its own function. -- Scott Williams
There are two key issues here:
Only a developer has a good sense for what is easy and what is not. An analyst who has never been a developer, or hasn't been a developer on this platform, may blithely assume that something is easy when it is hard or impossible, or that something is hard and requires much modeling detail when it is actually quite easy. In the latter case, you wasted time producing unneeded details. In the former case, you may have wasted the whole model by designing something that can't be built. My experience is that the precise difference between a great developer and a decent developer is the ability to assess what is easy and what is not. For less skilled developers, it's more effective to get right to the hard part, and that can only happen by trying to write the code and seeing it fail. Accelerating the failure gets the work back on track more quickly.
There's no way to test a model other than to find people smart enough to understand and critique the model. Most developers who are bright enough to create a correct model don't need to have the model to write the code. Less skilled developers can still produce models, but those models are flawed and so useless. Most end users can't test models and simply assume initially that something fabulously technical is happening that they simply don't understand. Eventually, this gives way to cynicism.
See Level Of Detail and Software Blueprints. -- Bob Bockholt
Risk avoidance: Exhaustive analysis appears to offer the hope of being able to make risk-free decisions. Unfortunately, there is never a point when everyone is comfortable that they know enough, so another round begins. -- Scott Parnell
This seems to be part of Big Design Up Front.
Analysis Paralysis - nicely put. Does it qualify as an Oxy Moron? Not having clear goals is the root cause: any lack of clarity in the actual goal will eventually sink the project. In the analysis phase, it's paralysis; if the project manages to get into the development phase, it's on the path to destruction! -- Arun Prakash
See The Curse Of Xanadu for an extreme example of this (and several other management problems) in action.
See also: Anti Pattern, Analysis Smells, Shakespearean Analysis