Trish Kerin speaks to safety consultant Andrew Hopkins to find out how the safety culture at Flixborough would measure up against today’s standards
TK: The tragedy that occurred at the Nypro Flixborough site changed the industrial landscape in the UK and beyond. What do you think were some of the key cultural learnings from the incident?
AH: The key learning from the Flixborough accident was the need for a much more rigorous approach to the management of major industrial hazards. The UK government realised that this would require a new kind of regulation, imposing on duty holders a requirement to carefully analyse all their major accident risks and develop systems to deal with them. The government’s role was then to verify that the analysis was adequate and that operators complied with their own risk management systems. This discipline is particularly important when people are contemplating temporary fixes for problems to allow production to continue. Operations personnel, even though very familiar with their jobs, may not have the necessary understanding of what can go wrong. All such changes need to be carefully considered by relevant discipline experts and authorised in writing by a professionally qualified manager.
TK: Do you think a modern approach to safety culture would have made a difference in preventing this incident?
AH: The modern approach to safety culture, by itself, would have made little difference. Regardless of what its advocates say, the safety culture approach tends to focus on the role of individuals in accident causation, with the aim of changing the “hearts and minds” of individuals in the way they approach their jobs. This can be helpful in reducing personal injury accidents. But major accidents occur when there is a loss of control of the process and a loss of containment of dangerous substances. The prevention of such accidents requires rigorous adherence to safety case requirements. This is seldom the focus in safety culture discussions. A focus on process safety culture, as opposed to safety culture in general, would be a big step forward, but this is rare.
TK: Can you expand on what you mean by process safety culture?
AH: The Baker Panel report, written in the wake of the Texas City Refinery accident of 2005, gives an authoritative answer.1 Appendix G contains the “process safety culture survey” it used to assess the process safety culture of five US refineries, and the survey instrument itself spells out the elements of a good process safety culture.
TK: How have you seen the evolution of organisational culture since the incident?
AH: Since Flixborough, the legislation for the control of major accident hazards has been transformed in many countries. This legislation envisages a far more active and expert role for regulators, but governments have not always provided the necessary resources. As a result, organisational cultures have not changed as much as might have been hoped.
Where an organisation has suffered a major accident that threatened its very existence, the organisational culture does indeed change, but only until memories fade and personnel move on, after which it tends to revert to its former ways.
TK: Why do process safety incidents continue to occur?
AH: Process safety incidents continue to occur because companies don’t devote sufficient resources to ensuring compliance with their safety cases and regulators don’t have the resources to ensure that companies are in full compliance. In addition, there are many countries where the state is not sufficiently powerful or well enough resourced to ensure that companies manage process safety effectively.
TK: What do you think needs to be done to help people learn these valuable lessons?
AH: Individual members of boards and executive committees should be challenged to identify for themselves the lessons arising from major accident reports, and then to devise ways of implementing those lessons.
TK: The Inquiry into Flixborough noted that “...it was this desire [to resume production] which led them to overlook the fact that it was potentially hazardous…”. Similar concerns were also a factor at Macondo/Deepwater Horizon. How can we ensure that appropriate risk assessments are made when staff are under pressure?
AH: Staff are almost always under pressure to work faster, to produce more, to meet deadlines, to cut costs and so on. There are often financial incentives to achieve these objectives. This inevitably leads to shortcuts, and to risk assessments biased in favour of the quickest or cheapest course of action. There will be technical people in an organisation who understand that a proposed course of action may be too risky, but these people are generally answerable to relatively low-level business managers. Their voices are therefore not heard at higher levels.
The solution is to organise these “voices for safety” into separate functions, such as an engineering function or a process risk function, headed by a chief engineer or chief process safety risk officer who answers directly to the CEO. The people in these functional lines will play an active role within business units, but they must be answerable up an independent reporting line, and their performance evaluations must depend on satisfying more senior people in those lines, not on satisfying business unit leaders. People in these lines must not be eligible for bonuses based on production, profit, or cost reduction.
The design I am recommending was implemented by bp after its Deepwater Horizon accident and led to a significant improvement in process safety. I believe this kind of model is the most promising way to drive improvements in process safety.
TK: The engineers at Flixborough were working outside their area of competence. Would a better understanding of other disciplines help, or would it increase the risks?
AH: Whoever is in charge of a job needs to know enough about it to know what engineering specialists need to be consulted.
TK: But as systems become more complex and interdependent it can be difficult for anyone to fully understand them. Is AI likely to help or could it produce rogue solutions?
AH: I don’t know enough about this to give a definite answer. But I would say that the precautionary principle needs to be followed. This means that, until we know more, AI should be treated as a tool to aid final decision-making by a human, and not as a final decision-maker.
TK: Responsibility for ensuring the competence needed for safe operation rests with the organisation’s board/CEO. Is this sufficiently well understood in your experience?
AH: No. I don’t think boards and CEOs fully understand or accept their responsibilities. Most board members and CEOs do not have the technical competence to decide for themselves whether companies are complying with their safety cases. They rely on reports from senior executives, who in turn rely on their subordinates to provide the necessary assurances. There is enormous pressure on subordinates to provide the expected assurances, which means glossing over problems that may be occurring. And board members tend to accept these assurances uncritically. They need to be encouraged to “challenge the green and embrace the red”.
It is widely recognised in safety circles that the required mindset of senior managers and board members should be one of “chronic unease”, or scepticism, about whether major accident risks are truly under control. That mindset is often missing at top levels. The story of how the Boeing board and CEO have failed to adequately manage their major accident risks2 is one recent example of this problem.
Sometimes boards will deliberately avoid inquiring too deeply into what is going on, for fear of being held personally accountable in the event of a major accident. This attitude is misguided because, in many jurisdictions, senior officers must be able to demonstrate “due diligence”, which requires active inquiry, if they are to avoid personal liability. The financial incentives paid to the top office holders in big companies exacerbate these problems: bonuses are paid largely on the basis of share market performance, with almost no attempt to incentivise the management of major accident risk.
Andrew Hopkins is an internationally renowned presenter, author, and consultant in the field of industrial safety and accident analysis. In 2008, he was awarded the EPSC prize for extraordinary contribution to process safety in Europe, and in 2016 he was made an Honorary Fellow of IChemE for his contribution to process safety.
1. http://sunnyday.mit.edu/Baker-panel-report.pdf
2. https://reut.rs/3Qym44Z
This and Robin Turney’s article, Lessons for Managers and Engineers Today (p44), are the first in a series of articles that TCE will be running to mark the 50th anniversary of Flixborough.
Among the forthcoming articles, Richard Mundy will reflect on management of change and why it’s essential, Martin Wardrope will look at Flixborough from the perspective of an early career engineer, and Trish Kerin will get her footballing clichés out by underlining why safety is a team sport.
We will also be visiting Lincolnshire itself to take in an exhibition devoted to the disaster.