Risk! Engineers Talk Governance

Design Analysis not Risk Analysis

Richard Robinson & Gaye Francis Season 3 Episode 1

In this episode of the podcast "Risk! Engineers Talk Governance," due diligence engineers Richard Robinson and Gaye Francis discuss the topic of design analysis versus risk analysis. They explore the difference between ALARP (as low as reasonably practicable) and SFAIRP (so far as is reasonably practicable) and how the interpretation of these concepts has caused confusion and problems in various industries.

 

They also discuss the importance of safety in design and the need for a retrospective design review to ensure that all reasonably practicable precautions are in place. The conversation also touches on the role of AI in consequence modelling and design review, as well as the need for quality assurance and independent checks in governance processes.

 

The episode concludes with a reminder that there is no one-size-fits-all approach to risk analysis and that different tools and techniques can provide different insights into due diligence issues.

 

If you’d like to learn more about Richard & Gaye, visit www.r2a.com.au. 

 

Please submit any feedback or topic ideas to admin@r2a.com.au – we’d love to hear from you.

Megan (Producer) (00:00):

Welcome to season three of Risk! Engineers Talk Governance. In this episode, due diligence engineers Richard Robinson and Gaye Francis discuss the topic of design analysis rather than risk analysis. We hope you enjoy the episode. If you do, please give us a rating. Also, don't forget to subscribe on your favourite podcast platform.

Gaye Francis (00:27):

Welcome Richard, we're back for Season 3 of our podcast.

Richard Robinson (00:31):

Yes, and as we were just talking, Gaye had an excellent holiday in Finland with the family.

Gaye Francis (00:35):

I did! Had to do due diligence a couple of times with managing kids, but we made it safely and had a good time.

Richard Robinson (00:44):

And since then, you're able to go on more business trips because the kids are so accepting of your travelling needs.

Gaye Francis (00:50):

My travelling needs. That's correct. Just a little aside there!

(00:55):

Welcome back, as we said, to Season 3. Today we're going to talk about design analysis rather than risk analysis, and how the need to demonstrate SFAIRP (so far as is reasonably practicable) is really a design exercise.

Richard Robinson (01:12):

Yeah. In part this arose because there's been some interesting discussion about whether there's really a difference between ALARP and SFAIRP. And I suppose we were completely mystified by the whole discussion because, from our viewpoint, ALARP should never have existed. And the way it got interpreted has caused an awful lot of grief in an awful lot of places for an awful lot of people.

(01:29):

But when we were just fiddling around with it, I think it partly irritated me because I've said a lot of things in different ways and I've understood what the different points were. For example, consequence modelling, that's things that go 'pop' and 'bang' and overpressures and things like that, is a very scientific area of activity. And consequence modelling to me was always fully scientific. What was always clear to me is that risk analysis per se, which is a simultaneous appreciation of likelihood and consequence, was always a very muddled subject and everybody always got confused.

(01:58):

Now, I just did an exercise out of curiosity because all this ALARP versus SFAIRP business reappeared. I actually went and looked back at Sir Frank Layfield's inquiry into the Sizewell B power station, where he had a problem because he had to decide whether or not to approve the new nuclear power station, which was a fairly complicated idea in the UK, and he had a lot of engineers advising him and he was a lawyer. And one of the things that became clear is that when you look at nuclear radiation levels, you had to decide what was harmful or not harmful and what was reasonable. And so the recommendation that came out of his inquiry was that somebody should do a review of this. Now that ultimately floated over to what was then the new UK Health and Safety Executive, and when you look at the people who put the tolerability of risk from nuclear power stations document together, they were mostly scientists talking about radiation, and they were the people who dreamed up this whole ALARP business.

Gaye Francis (02:52):

So they were actually looking at the level of radiation that could be acceptable, in quotation marks, "to humans".

Richard Robinson (02:59):

But what was interesting about that, that so-called dagger diagram never had any numbers in that document, but what they did do was put in the appendix what acceptable or tolerable levels of risk in different industries otherwise were: the car industry was about 1 × 10⁻⁴ per annum for a single fatality, and lightning strikes and so forth were about 1 × 10⁻⁶ or 10⁻⁷. Now, they didn't necessarily recommend putting those numbers onto their dagger diagram, but that's what everybody in the petrochemical business, in particular, and the land use planning guys in major hazard facilities did for the next 20 years.

Gaye Francis (03:35):

They equated the two (ALARP & SFAIRP).

Richard Robinson (03:36):

They equated the two. And then I realised the engineers doing Sizewell B and giving advice to the lawyers were very careful. And even the scientists, when they were putting in the risk levels, left it all in the appendix. It was other people that stuck the two together. And that's in fact where the difficulty arose. Now that caused me a reflection which irritated me, because I've thought about this for a long time, trying to put models together and so forth. I mean, one of the things we had realised, for example, was because we had David Howarth, the professor of law and public policy, out to Melbourne in 2017, we sponsored him, and the reason why we were interested in him was because he had that book "Law as Engineering". And what he was pointing out is that the lawyers, particularly international UK and US lawyers, were consciously studying the design activities of engineers on the basis that the lawyers do the same as engineers. If a client turns up and says, I've got a problem or I want to do something, then in the circumstances what are the options and which is the best for the client? Now that's a design exercise. And I suddenly realised safety in design, well, that's a design exercise. That's the point. Consequence modelling is scientific, which drives the criticality analysis decision. And what the courts actually do post-event, it's not the level of risk that counts, it's a retrospective design review.

Gaye Francis (04:55):

To make sure that all reasonably practicable precautions were in place.

Richard Robinson (04:58):

Now if you look at it like that, you do the consequence analysis first just to work out what the critical things are, and that is very scientific. And then you do safety in design to manage that consequence. And then if it all goes wrong post-event, you do a retrospective design review, which is what the lawyers are deliberately studying the engineers for. That was David Howarth's point. Now that has a couple of interesting little flow-ons, because the consequence analysis, as I've always understood, is scientific. You've listened to me ramble on about that for 10 years!

Gaye Francis (05:32):

A few more probably!

Richard Robinson (05:34):

Because basically we decided to stay away from major hazards because they were doing risk analysis, not consequence analysis, in the first instance, and therefore weren't demonstrating that all reasonably practicable precautions were in place.

Gaye Francis (05:43):

I think just before you go on there, it's important to note, and we've covered this in another podcast, that major hazards have gone primarily to consequence modelling and consequence analysis now.

Richard Robinson (05:54):

At least in Victoria. That's correct. I'm not aware of any other state doing it yet. And we did suggest that Engineers Australia, in their role as the intellectual body of engineers, should actually get their act together on this one lickety-split, but that's another matter. But what was interesting about this was, you see, the business of science is to know about things. So this is not an attack on scientists in any way, because the better the scientists know, the better the engineers can do. That's the whole point of the exercise.

Gaye Francis (06:23):

The better you're able to design for those things.

Richard Robinson (06:25):

That's correct. And then the lawyers have decided they're going to consider what the engineers are doing and do design reviews of, at least, what the engineers have designed. That means there's a remarkable alignment going on. I mean obviously there's a bit of a flow between the scientists and the engineers because sometimes engineers turn more into scientists and vice versa about what can be done.

Gaye Francis (06:48):

And I think that process is a bit more back and forth, isn't it? But if you focus on the credible critical issues, that's where you can make your design the most robust, and it usually covers the lesser issues as well.

Richard Robinson (07:02):

Correct. And the other reason why this is actually important is that all of a sudden it puts the responsibility of the respective parties in the right place. The scientists, it is important they keep figuring out how the world behaves, how a gas cloud under certain circumstances will behave and all the modelling things that they wish to do, but it's the engineer's responsibility to make sure that every reasonably practicable control is in place to deal with that credible critical issue. And then it's the lawyer's responsibility to retrospectively test that understanding, because in an advanced industrial society, we do create the most enormous hazards. And when you think of where AI's going, I mean that's what they're actually talking about now. Because what an AI can do, it could probably do a much better job of the consequence modelling because it will take a whole lot of parameters into account. Will it do the design review? Now, that is an interesting question, and I don't think people have thought about it, because the philosophical framework for that design and then the design review, that's never going to be the job of an AI, I would have thought.

Gaye Francis (08:04):

It's a really interesting question. I gave a board presentation last week and one of the board members asked, what's the role of AI and how, as a board, do we demonstrate due diligence around it? And I think it's going to become more of a governance issue, and boards and things like that are going to have a responsibility to test the AI where it's going to be used. And I don't know that we can use it for safety critical things yet, but that's just sort of an opinion. I don't know how you put a quality assurance system around it to make sure that it is reliable. But there's going to be some interesting questions around that and quality assurance and how boards govern AI going forward.

Richard Robinson (08:46):

Well, it fascinated me because, remember, pretty much one of the first jobs you got with R2A as a young engineer was doing the SIL study, the safety integrity level study, on how two trains would get past each other on a single-line track in New South Wales.

Gaye Francis (09:00):

Correct.

Richard Robinson (09:00):

And you had the job of basically testing every track, every intersection, every set of points, and checking to see whether the watchdog that was being created would actually...

Gaye Francis (09:09):

Bark... Or bring up that the hazard existed.

Richard Robinson (09:15):

That's the sort of task you'd think they'd probably throw at an AI. But are you going to trust an AI to make sure that every possible configuration is tested, or are you going to choose a Gaye to do it in the future?

Gaye Francis (09:28):

<laughs> Well, I think that's where quality assurance comes in, doesn't it, Richard? Because you're going to have to have confidence in the technology that you're going to use, the AI, and the information that it gives out. You're going to have to test it in some way as part of your due diligence process to make sure that you've got confidence in the information that it's delivering to you.

Richard Robinson (09:46):

Well, you might remember my then business partner, Kevin. Basically, what he had to do... he worked out a process to make sure that none of the collisions or head-ons, all the train collisions, could occur. And then when the designer decided that was the way it was going to be designed, to use that test, (they) had to dream up a different test in order to check whether or not what the designer put together...

Gaye Francis (10:12):

Actually worked.

Richard Robinson (10:13):

Actually worked. And so a different kind of risk model had to be put in place to examine what was being done by these large defence-based software players. And we had to dream that up. Well, Kevin was doing that part and I was doing the checking, and you were doing the work, as I recall.

Gaye Francis (10:31):

<laughs> But I think that's really interesting because I think those sorts of things will require this independent check. And that's part of, I guess, if you go back to our idea that the courts are testing after the event whether all reasonably practicable precautions were in place, that retrospective design review, they're looking for other tools and techniques to test the governance processes before the loss of control point, in a way.

Richard Robinson (10:56):

Correct. So you need different ways of doing that. And I think we have talked about this in another podcast, but perhaps that's another one we should revisit? Particularly because, obviously from our point of view, the Victorian major hazard people have actually ditched the target level of risk ALARP process in favour of what we've always understood to be...

Gaye Francis (11:16):

So they've discounted the likelihood; they don't consider likelihood anymore. So it's consequence-based. But we've said it in a number of our podcasts and we will continue to say it: there's a whole lot of tools and techniques out there that give you all different insights into risk issues or due diligence issues. It's not one size fits all; you have to think these things through, and you will get different insights depending on what you use. So I think that's one of the key things we would say: do your retrospective design review, think about the questions that a lawyer might ask you in the event that an incident happens, and ask, have you demonstrated due diligence?

Richard Robinson (11:58):

Yes. Well, I did observe I've worn my glasses today, so as I commented to Gaye earlier, she's actually been in focus for the entire session, which is nice.

Gaye Francis (12:06):

I hope my words were in focus as well as your vision! <laughs>

(12:09):

Alright, I think on that note, we might wrap up podcast number one for season three here. Thank you for joining us, and we hope you can join us next time. Thank you.

Richard Robinson (12:19):

Thank you.

 
