The recently released book, ‘Nightmare Pipeline Failures: Fantasy Planning, Black Swans and Integrity Management’, by Dr Jan Hayes and Professor Andrew Hopkins, explores two catastrophic pipeline failures in the US, and provides valuable lessons to asset owners worldwide to improve pipeline safety.

When pipelines fail

In 2010, two catastrophic pipeline disasters occurred in the United States. A high-pressure natural gas pipeline running underneath the San Francisco suburb of San Bruno ruptured. The resulting explosion and fire killed eight people, injured many more and razed 38 homes. Earlier that year, an oil pipeline had ruptured near Marshall, Michigan, resulting in the largest and most expensive land-based oil spill in the country’s history.

The San Bruno disaster was found to have been the result of a number of causes. These included a faulty weld from 1956 that had gone undetected by the pipeline’s owner, PG&E, for more than 50 years. The actual trigger of the rupture was a sudden loss of pressure control in the pipeline due to electrical works, which caused an increase in pressure. The operators did not intervene because the increased pressure was within allowable limits. However, the pipeline’s maximum allowable operating pressure (MAOP) had been set too high, because legislation exempted the line from integrity testing when the value was determined.

The Marshall oil leak occurred because the pipeline’s operator, Enbridge, took an overly optimistic view of its integrity. While the line was known to have major cracks, the decision was taken not to excavate it for further visual inspection and repair. This decision was based on compliance with regulations, rather than on consideration of risk.

The circumstances that led to the San Bruno and Marshall disasters had existed for many years, and yet no one had acted to change them. Without dramatic evidence of a problem, it is easy for organisations to fall into the trap of fantasy planning – thinking that graphs, algorithms and models are not simply an attempt to approximate reality but that they are reality. Challenging organisational complacency is difficult, but necessary if further catastrophes are to be avoided.

Organisational lessons

A number of key organisational lessons can be learned from these disasters, and they must be taken on board by any organisation that is truly serious about reducing the chances of a major accident.

Latent failures and small incidents

The faulty weld that caused the San Bruno pipeline rupture was made more than five decades before the event occurred. PG&E repeatedly ignored clues that the state of the old pipeline needed to be investigated. Leaks were repaired and then forgotten. The faulty weld remained a ticking time bomb.

Latent defects like this in any system are hard to find, so any evidence of problems should be valued. Most organisations have a system in place whereby everyone is encouraged to report hazards and incidents. While such systems (usually administered by the safety department) provide a vehicle to ensure that small matters are dealt with in a timely manner, the best systems provide much broader benefits. Rather than simply being a database for action tracking, the individual incidents can be shared as stories to keep safety messages alive. This fosters the ‘safety imagination’ and helps people to link their daily work with the potential for disaster.

Regulatory compliance

Enbridge had direct evidence that line 6B was likely to fail (in the form of multiple sets of inspection results indicating serious cracking) and yet the evidence was not acted on. Instead, the company used strict compliance with regulatory requirements to justify delays to expensive investigation and repair work. PG&E also adopted a strict compliance approach in determining the MAOP for line 132. In these cases, compliance was not enough to avoid disaster. Compliance with standards and regulations is important but, in itself, it is not enough to prevent accidents. A ‘compliance is enough’ mentality is a step on the road to a serious accident.

Procedural compliance

On the other hand, both accidents highlight the importance of procedural compliance. PG&E’s failure to comply with its own work clearance processes and the failure of Enbridge operators to comply with shutdown requirements contributed to the accidents. The procedures themselves were inadequate, but greater company efforts to ensure compliance could have revealed these inadequacies and led to procedural improvement.

Effective risk management

A commonly used safety decision-making tool is risk assessment. While assessment of the likelihood and impact of possible accidents is worth considering, problems arise when this information takes over and is seen to dictate, rather than contribute to, major safety decisions.

The first issue is that risk assessment can be used to prioritise spending to further reduce risk, rather than to reach a conclusion about whether any given situation is safe enough. Discussion about absolute risk is avoided and replaced by language about ‘continuous improvement’. Continuous improvement is admirable, but only if risk is at an acceptable level to start with.

Another lesson from San Bruno and Marshall is that asset owners should be wary of fantasy planning. This is a warning that systems can take on a symbolic value that is detached from the originally intended use of the system, especially when divorced from any real-world feedback. Risk management is always problematic when the model itself becomes reality.

The final lesson for risk assessment from these pipeline disasters is this: don’t fall into the trap of thinking that ‘black swans’ (surprising and rare events with large impact) are beyond control and so no further effort in reducing risk is necessary or useful. Black swans are preventable, provided asset owners seek out a diversity of views about the current state of risk controls and what more can be done. All major accident investigations identify precursors and warning signs.

Awareness of responsibility for public safety

People across organisations often fail to understand how their day-to-day activities impact the safety of the general public. Any organisation that wants an excellent safety record must understand that preventing workers from being injured while undertaking normal tasks and preventing rare but catastrophic events are both important, but managing them requires different strategies. With process safety in particular, the absence of major incidents is not a sufficient indicator that all is well. These types of accidents have multiple controls in place to prevent them. The system can be heavily degraded without any observable change in outcome – until that last line of defence fails and with it comes catastrophe.

Analysis of the organisational causes behind individual incidents can provide important information about vulnerabilities that may contribute to a major disaster. The key lessons from the two disasters discussed are vital to any asset owner who wants to take a realistic and responsible approach to safety and pipeline integrity.

Nightmare Pipeline Failures: Fantasy Planning, Black Swans and Integrity Management is available from CCH Australia. Visit www.cchbooks.com.au.

About the authors

Dr Jan Hayes is an Associate Professor at RMIT University. She has 30 years’ experience in safety and risk management. She is Program Leader for the social science research activities of the EPCRC and is a member of the Advisory Board of NOPSEMA.

Professor Andrew Hopkins is an internationally renowned presenter, author and consultant in industrial safety and accident analysis. Professor Hopkins has been involved in various government OH&S reviews and completed consultancy work for major companies in the resources sector.

©2024 Utility Magazine. All rights reserved
