This is our second episode featuring the work of Charles Perrow, covering a wholly different topic from Episode 76’s “A framework for the comparative analysis of organizations.”

Charles Perrow

Perrow’s book Normal accidents: Living with high-risk technologies (updated edition published in 1999) is a classic that counters the common narrative that safety can be successfully engineered and risk eliminated in technological systems – including those that pose a “high risk” of catastrophe such as massive explosions, environmental cataclysms, widespread systemic failures, and so on. Perrow defines an accident as an event that damages a system such that it cannot be used again, as opposed to an incident, in which a system shutdown or other remedial action preserves the system and the damage is reversible, so that routine operations can resume.

Technological systems of the late 20th century such as nuclear power plants, aerospace and maritime platforms, chemical factories, hydroelectric dams, and recombinant DNA research have either suffered major accidents or prompted fears among experts and the public about the risks of accidents. To alleviate concerns, company owners and government officials have called for – and largely gotten – safety systems installed that should help identify problems proactively, allowing for pre-emptive responses or for shutting a system down to mitigate catastrophic effects. The trouble, according to Perrow, is that these safety systems add as much risk as they alleviate, making catastrophe more likely rather than less.

At the heart of the problem was a fundamental lack of understanding of what causes accidents in the first place. Correcting this required a monumental effort to study the detailed reports from major accidents across a number of industries, including the Three Mile Island accident of 1979. Through this, Perrow identified two salient factors present to a degree in industries that seemed prone to such accidents – high system complexity and tight coupling of system components. Tight coupling meant that a failure in one part of the system would cascade into problems across other parts. Complexity meant that, despite the presence of meters and indicators, it might not be possible for an operator or anyone else to properly decipher what was happening to the system, and therefore their corrective actions could inadvertently worsen the situation. From this, a number of other phenomena can be explained, such as the tendency for industries to dismiss accidents as the result of operator error rather than the inherent complexity of the system design itself.

This is why the book is titled Normal Accidents, based on Perrow’s contention that certain industries simply cannot avoid accidents. The question becomes whether the effects of an accident extend only to those on-site (i.e., workers) or reach beyond the site, whether to a nearby town or across a much wider geographic region. While an informative read, this book is probably not going to make anyone feel good about the increasingly complex technologies that have been fielded in the years since its publication.

You may also download the audio files here: Part 1 | Part 2 | Supplement
Read with us:

Perrow, C. (1999). Normal accidents: Living with high-risk technologies, updated edition. Princeton University Press.

Related episodes from the Talking About Organizations Podcast:

Episode 76. Comparative Analysis of Organizations — Charles Perrow

Episode 64. Disasters and Crisis Management — Powley and Weick

Episode 20. High-Reliability in Practice — Tom Mercer, US Navy
