Risk! Engineers Talk Governance

Safety Culture (Revisited)

Richard Robinson & Gaye Francis Season 6 Episode 6

In this episode of Risk! Engineers Talk Governance, due diligence engineers Richard Robinson and Gaye Francis revisit the topic of Safety Culture.

They review the work of Professor Patrick Hudson, who identified five levels of safety culture, from pathological (who cares as long as we're not caught) to generative (safety is how we do business around here).

Richard and Gaye observe that many organisations tend to be more reactive, focusing on implementing controls after incidents occur, rather than striving for a generative safety culture. They note that the transition from a bureaucratic, rule-based approach to a proactive, thinking-based approach is challenging, and caution against the use of AI, which can lead to a lack of critical thinking.

They end by discussing how organisations should aspire to a generative safety culture, even if it remains an aspirational goal, and highlight the need for clear commitment to safety at all levels of the organisation, not just at Board level.

 

For further information on Richard and Gaye’s consulting work with R2A, head to https://www.r2a.com.au, where you’ll also find their booklets (store) and a sign-up for their quarterly newsletter to keep informed of their latest news and events.

Gaye is also founder of Australian women’s safety workwear company Apto PPE https://www.aptoppe.com.au.

Megan (Producer) (00:00):

Welcome to Risk! Engineers Talk Governance. In this episode, due diligence engineers Richard Robinson and Gaye Francis revisit the topic of safety culture.

(00:12):

We hope you enjoy the chat. If you do, we'd love you to give us a rating. And also don't forget to subscribe on your favourite podcast platform. If you'd like more information on R2A's work or have any feedback or topic ideas, please head to the website www.r2a.com.au.

Gaye Francis (00:32):

Welcome Richard to another podcast session.

Richard Robinson (00:34):

Hello Gaye. Good to be back.

Gaye Francis (00:37):

Good to be back. Today we're going to revisit safety culture. I think we did a podcast very early on (Season 2 Ep 4 & 8) about safety culture and the rise of James Reason and things like that. But we've just been doing a little bit of research, and I guess it comes around to some of the observations we've had that organisations are tending to be a bit more reactive to incidents. And yes, if an incident happens, they're very good at then putting controls and precautions in place to make sure that it doesn't happen again. But they're not really aiming for what we would call that generative safety culture in their organisation. And so you've been doing a little bit of research and a bit of reading, and we've come up with a professor, isn't he?

Richard Robinson (01:25):

Yep. Well, he was, he's emeritus, he's retired.

Gaye Francis (01:28):

Okay. Professor Patrick Hudson.

Richard Robinson (01:31):

Yeah, the trick here, he's been around for a while. The thing I didn't actually quite appreciate was we normally use the James Reason model, and the James Reason model has three levels, bureaucratic...

Gaye Francis (01:42):

Pathological first, down the bottom -- shoot the messenger.

Richard Robinson (01:46):

Yep. Bureaucratic -- listen to messengers if they happen to arrive alive. And then generative -- training more people to bring bad news to your attention. And everybody aspires to generative. Now, what I didn't actually quite appreciate because for some reason I've always given the credit to James Reason, was this Patrick Hudson fellow was actually the project manager for Tripod Delta where all this stuff came from. And I must say I've always been a little bit surprised just how much kudos James Reason got out of that Tripod Delta thing -- that was the Piper Alpha (oil rig) incident in the North Sea. But this fellow actually had five levels rather than the three levels. In an overall sense, I don't think we particularly care one way or the other, but he actually had a couple of levels which did actually make us twitch just a little bit. Perhaps you might just read them.

Gaye Francis (02:32):

So I need my glasses for this. Pathological: who cares as long as we're not caught. Reactive: safety is important, we do a lot every time we have an incident. Calculative: we have systems in place to manage all hazards. Proactive: we work on the problems that we still find. Generative: safety is how we do business around here. And he's got two arrows heading up the page alongside those, labelled increasingly informed and increasing trust, which I think are two really important attributes that are required in a safety culture.

Richard Robinson (03:09):

Well, I think what we sort of regarded with this is that we're aware that somebody gets the fright of their lives when they were previously smug and happy, which would have been the pathological viewpoint. And then what they tend to do is leap to Standards. And if you believe the professor from Brisbane, the fellow Sidney Dekker, he's the one who says we've now got so many rules, nobody knows what all the rules are. Now if you go from pathological to the next level and then go to that calculative one, you're sort of going with the Standards, but you're creating more and more rules. And we keep seeing that. We keep seeing people just creating more and more and more rules, and truly the people who've got to implement them really don't know what's going on. So they haven't actually taught the philosophy and they haven't actually got to the right ideas.

Gaye Francis (03:54):

Which is sort of that bureaucratic level in James Reason terms.

Richard Robinson (03:57):

Yeah. Most of our clients who come to us are generally at the bureaucratic level or slightly better, and they're looking for better. They're actually searching for better, which is what he calls proactive. And I agree with that. It is an interesting thing whether generative is actually aspirational. It's a bit like zero harm. I don't think anyone actually believes, if you're a large organisation, zero harm's going to be the case. But you would like everybody to try and strive for that. The example I usually give is the police commissioner who's trying to go for zero child molestation. Well, we hope that's the objective, but do you think the police commissioner's got the resources to make that happen?

Gaye Francis (04:32):

To be able to do that.

Richard Robinson (04:33):

So we certainly aspire to that. I think the plan is, if you aspire to generative, then in his terms you're more likely to be past bureaucratic and be proactive.

Gaye Francis (04:43):

Yep. I agree. And I think generative is one of those ones that's almost always changing. The goalposts are always changing for generative. As new technology becomes available and more information becomes available, generative is always moving that little bit further out.

Richard Robinson (04:59):

Well, one of the things that's bothering us is the rise of AI. We've noticed a lot of people just adopting AI and stopping thinking, and I have a nasty feeling a lot of people are going to try and use AI for safety purposes. And the one thing I don't think you can do is stop thinking. Thinking's hard. And I've got this bad feeling too that, well, you are about to mark some assignments, and I have no idea whether you're going to get a lot of AI coverage in there.

Gaye Francis (05:25):

See how we go. Well, maybe that's one of the extra lines that has to go up through your page: increasingly informed, increasing trust, but also increased thinking. Isn't it almost required to reach that generative and proactive stage?

Richard Robinson (05:39):

Well, I'm not saying AI can't give you certain insights, but I'm pretty sure the courts are against it. If you're an expert, you'd better not be relying on AI, because we've heard some stories about a lawyer who relied on AI and it just turned out to be complete rubbish. Because remember, all the AI is doing is scraping the internet to see what the internet collectively thinks. Well, I've got to say human collective thinking can sometimes be in error, as I think we have noticed. And the reason for being an expert is that you actually know something different to what the collective believes.

Gaye Francis (06:09):

And you have to really believe what you're saying.

Richard Robinson (06:11):

Not so much believe. You've got to have a reason to argue for that position. I mean, that's why Kant (philosopher) is so hard to read, because it's the Critique of Pure Reason. It's hard work, particularly the way Germans sometimes think. But there's some good understanding there. I mean, I've been through the fact that time and space are most likely human constructs, and I used to sort of recoil.

Gaye Francis (06:34):

Yeah. Some days that hits you first thing in the morning when you get into work, and that's a bit too much.

Richard Robinson (06:38):

I've got grandchildren, and you can see a grandchild bash themselves working it out: hand, face, don't hit, not good.

Gaye Francis (06:44):

Unless it's your brother or sister.

Richard Robinson (06:45):

Yep. Well, in this case you just poke them with the sharp object if you can.

Gaye Francis (06:49):

Anyway, we've definitely got off topic. So we just think that this concept of safety culture has always been there in organisations. It's not talked about as much now as it was in the early 2000s. And from our experience we are seeing organisations move back down to that bureaucratic, reactive sort of stage. And that drive for generative and proactive isn't as forthcoming.

Richard Robinson (07:23):

Well, it's got a lot to do, from our point of view, with the use of target levels of risk and safety. That to us is the classic bureaucratic position, because: oh, I've satisfied the criteria, I'm good.

Gaye Francis (07:34):

Done and dusted.

Richard Robinson (07:35):

As we've described, if you're at 30,000 feet and it's all going wrong and the Captain says, "oh, we satisfied our criteria, even though we could have done more and we wouldn't be crashing if we had", I don't think that's what people want to hear.

Gaye Francis (07:46):

No, no. You would definitely like your airlines to be proactive tending towards generative.

Richard Robinson (07:52):

And that's why people like to have pilots at the front of the aircraft. The aircraft can land itself now. It's probably more reliable than the pilot, but we do like the pilot to be first at the scene of the accident.

Gaye Francis (08:02):

That's a pretty morbid message. But yes, that's what it is. So we're just saying there's a lot of information out there. We would encourage organisations to go down that proactive-to-generative approach, because I think you do get better safety outcomes.

Richard Robinson (08:20):

But we wouldn't suggest that you rely on that exclusively. I think that's part of why I have difficulty with some of these psychological models. The belief is that if the psychological model is right, everything else flows. I haven't quite worked out that having a proper state of mind means that you are diligent. I don't think those things are necessarily congruent.

Gaye Francis (08:36):

I think what it does give you, though, is that if everybody understands the culture of an organisation and everybody's contributing to it, it's that thought process that runs through it. But as we've said many, many times, the risk and due diligence business in a technical organisation has many, many facets. And you can slice it and dice it a number of ways to get that insight. But it's one of those elements that does contribute to the overall safety of an organisation.

Richard Robinson (09:04):

But it'd be nice to get everybody to agree that zero harm is a good target.

Gaye Francis (09:07):

Absolutely. Rather than just the Board on a piece of paper.

Richard Robinson (09:10):

Correct. And that wouldn't be what our experience has been. Going for a target level of risk or safety is not a zero harm prospect.

Gaye Francis (09:19):

No, no. And it doesn't help when you have to report your days since last injury and your LTIs and things like that.

Richard Robinson (09:28):

Yep.

Gaye Francis (09:30):

I think that's about it on this one, Richard. So we might wrap it up. As we said, safety culture is an element that should be thought through from an organisational viewpoint. And there's many authors out there that write about it and characterise it in different ways, and it's just interesting to read some of that.

Richard Robinson (09:48):

Yep.

Gaye Francis (09:48):

So thanks for joining us.

Richard Robinson (09:50):

Thanks.