The Unaccountability Machine: Why Big Systems Make Terrible Decisions - and How The World Lost its Mind, by Dan Davies (Hardback). Status of this page: a few quotes and links to some other wiki pages.
(from chapter 5: Cybernetics Without Diagrams)
> Diagrams present you with the information ‘all at once’ and leave you to work out the flow of cause and effect for yourself, while a verbal explanation usually presents you with the story of cause and effect and leaves you to remember the connections. In the context of systems, where feedback is ubiquitous, the relationships are vital and the flow of cause and effect has no obvious start or end point, it’s not hard to see why people draw diagrams. But this emphasis on connections means that diagrams are often ineffective ways of explaining something for the first time; they give you a network of relationships, but in a context that doesn’t tell you much about what the things are which the relationships are holding between.
For a wiki mashup of diagram and prose that is also relevant to discussion of complex systems and feedback, consider How Thermostats Work.
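If a runnable toy helps: below is a minimal sketch of the kind of feedback loop that page describes, a bang-bang thermostat with hysteresis. The setpoint, band, and crude room model are illustrative assumptions, not taken from the book or the linked page.

```python
# Minimal thermostat feedback loop: bang-bang control with hysteresis.
# Setpoint, band, and the toy room physics are illustrative assumptions.

def thermostat(temp: float, heater_on: bool,
               setpoint: float = 20.0, band: float = 0.5) -> bool:
    """Return the next heater state given the current temperature."""
    if temp < setpoint - band:
        return True        # too cold: switch the heater on
    if temp > setpoint + band:
        return False       # too warm: switch it off
    return heater_on       # inside the band: no change (hysteresis)

# The loop has no obvious start or end point: the heater's output
# changes the temperature, and the temperature changes the heater.
temp, heater_on = 16.0, False
for _ in range(20):
    heater_on = thermostat(temp, heater_on)
    temp += (1.0 if heater_on else 0.0) + 0.1 * (15.0 - temp)  # crude physics
    print(f"{temp:5.2f} C  heater {'on' if heater_on else 'off'}")
```

Notice that no single line of the program is "the cause": the diagram-versus-prose problem in the quote above is exactly the problem of narrating this loop.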
(from chapter 5: Cybernetics Without Diagrams)
> Viable System Model has five parts:
> 1. the part of the system that does things,
> 2. the part that stops things getting in each other’s way,
> 3. the part that decides what to do today,
> 4. the part that’s responsible for looking at how the environment is changing, and then
> 5. ... to be honest, it’s difficult to explain the last part without going into more detail.
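As a reading aid only (not the author's presentation, and certainly not Beer's formal model), here is a toy sketch mapping the five parts onto code. The System 1-5 labels are Beer's standard terminology; the methods and data are assumptions made up for illustration.

```python
# Toy sketch of the Viable System Model's five parts as one class.
# The System 1-5 labels are Beer's standard terms; everything else
# here is an illustrative assumption, not the book's presentation.

from dataclasses import dataclass

@dataclass
class ViableSystem:
    operations: list[str]          # System 1: the parts that do things
    plan: str = ""

    def coordinate(self) -> None:
        # System 2: stop the operating units getting in each other's
        # way (here, trivially, by removing duplicated activities).
        self.operations = sorted(set(self.operations))

    def manage(self) -> None:
        # System 3: decide what to do today.
        self.plan = f"run {len(self.operations)} units"

    def scan(self, signals: list[str]) -> list[str]:
        # System 4: watch how the environment is changing.
        return [s for s in signals if "change" in s]

    def policy(self, threats: list[str]) -> str:
        # System 5: the part the quote declines to summarise. In Beer's
        # terms it holds identity and policy, arbitrating between
        # System 3 (inside and now) and System 4 (outside and then).
        return "adapt" if threats else "carry on"

vsm = ViableSystem(["ship", "ship", "support"])
vsm.coordinate()
vsm.manage()
print(vsm.plan, "->", vsm.policy(vsm.scan(["market change", "stable demand"])))
```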
Marc has lots of thoughts about the VSM, including some notes about The Unaccountability Machine. See From Whales to Ecosystems.
(from chapter 6: Economics and how it got that way)
> The damage caused by assumptions about time and uncertainty is more subtle. … when the information set is reduced to price and quantity, and when decision-making is just optimisation of a single value expressed in money terms, there’s no way to talk about ambiguity, perception or incompatible values. This is how cost–benefit analysis goes wrong; a statement like ‘this passenger aircraft has a faulty stall sensor which might cause it to crash’ fundamentally isn’t a piece of information about expected costs – even if you can get people happy with the idea of assigning a financial value to human life.
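The failure mode the passage describes can be made concrete with a few lines of arithmetic. Every number below is hypothetical; the sketch just shows that the optimisation is perfectly well-defined while the thing that matters has nowhere to live in it.

```python
# The collapse the quote describes, made explicit. Every number is
# hypothetical; the point is what the final scalar *cannot* carry.

p_crash = 1e-4                 # assumed probability the fault causes a crash
value_of_life = 10_000_000     # an assumed "statistical life" figure
passengers = 180
fix_cost = 2_000_000           # assumed cost of grounding and repair

expected_loss = p_crash * passengers * value_of_life
# expected_loss == 180_000.0: by this arithmetic, "don't fix" wins.
print(f"expected loss {expected_loss:,.0f} vs fix cost {fix_cost:,.0f}")

# The comparison is well-defined, but the inputs are not: p_crash is a
# guess dressed up as a measurement, and the single output has no slot
# for ambiguity, perception, or values that refuse a money denominator.
```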
(from chapter 7: If You’re So Rich, Why Aren’t You Smart?)
> Some of the biggest problems of management are problems that economics not only fails to address, but seems to lack the tools to recognise.