Risk! Engineers Talk Governance

AI (Artificial Intelligence), Robots, Controls & Liability

Richard Robinson & Gaye Francis Season 4 Episode 2

 In this episode, due diligence engineers Richard Robinson and Gaye Francis discuss controls and liability when it comes to AI and robots. 

OHS/WHS legislation requires achieving the highest level of hazard control that is reasonably practicable. AI/Robots present potential controls to address many safety issues and can be implemented to improve safety.

They discuss a number of examples where AI and robots are already enhancing safety and removing people from dangerous tasks, and note that, when it comes to due diligence, organisations would need to demonstrate why it is not reasonably practicable to use them.

The chat also covers how AI provides situational awareness and information to support human decision-making. Quality assurance processes are still necessary to ensure the robustness of AI-generated information. 

The use of AI technology also raises questions about liability, ethics and morals.

If you’d like to find out more about Richard and Gaye’s consulting work, head to https://www.r2a.com.au.

Find out more about their publications at https://www.r2a.com.au/store.

Find out more about their training at https://www.r2a.com.au/education

 

Megan (Producer) (00:01):

Welcome to Risk! Engineers Talk Governance. In this episode, due diligence engineers Richard Robinson and Gaye Francis discuss controls and liability when it comes to AI and robots.

Megan (Producer) (00:16):

We hope you enjoy the episode. If you do, please give us a rating. Also subscribe on your favourite podcast platform. If you have any feedback or topic ideas, get in touch via admin@r2a.com.au.

Gaye Francis (00:34):

Hi Richard, welcome to a podcast session.

Richard Robinson (00:36):

Hi Gaye. Welcome back.

Gaye Francis (00:38):

We're going to talk today about AI, robots, controls and liability. And this topic pops up because we've given a number of board presentations and some of our clients have asked us: there's a lot of AI coming, robots coming, do we have to implement them in our business and what does it mean for us? And I think from our engineering due diligence perspective, it's all about control. AI and robots both present potential controls to address safety issues of concern, and the question is how they can be implemented to improve safety.

Richard Robinson (01:17):

And this is the leading thing. And one of the points we keep hammering is that the OHS/WHS legislation is crystal clear in its objective: you've got to achieve the highest level of hazard control that is reasonably practicable. And it's perfectly obvious that AI in its many manifestations is going to enhance things. The most obvious example is self-drive cars. And we have a lot of clients that drive long distances. I mean, the elimination option is just to do everything...

Gaye Francis (01:43):

Remotely or over the phone.

Richard Robinson (01:46):

Over the phone and that sort of stuff. But if you've actually tried to run serious meetings, you decide after a while that, if you really want them to work, you should do them in person. And so if you've got to drive long distances, what's the way to do it? Now, traditionally, the way to do it if you had to drive out of hours was to make sure you took two people, and one person keeps the other one awake. The other option is to use a self-drive car, which is basically a form of AI, to help you get there. And if you start nodding off or something strange happens, it will start doing things. I don't suppose it's going to have a sharp object and poke you or anything like that...

Gaye Francis (02:18):

Elbow in the arm!?

Richard Robinson (02:20):

Whatever the previous version was. But you'll get advice that something's got to happen, and if you truly did fall asleep, it'd just pull over to the side of the road and stop, and everyone else would keep passing you rather than you becoming a hazard yourself. Now that's a form of AI, and I don't actually know of any client so far who has deliberately bought Teslas to achieve this outcome. But it's something I think you'd have to explain, why you didn't do it, if you were called up after the event and a horrible car accident that could have been prevented.

Gaye Francis (02:50):

And that's the question, isn't it? People are asking whether AI and robots are reasonably practicable controls, and you've got to show why they're not reasonably practicable at the time.

Richard Robinson (03:02):

Correct. And it will change. It's one of the reasons why standards are ineffective: they're lagging indicators, and the mandatory aspects of putting AI in cars will take several years to get there after it's proven to be beneficial.

Gaye Francis (03:17):

I think one of the difficult concepts that the clients we work with are grappling with is the liability issues associated with AI and robots. Who takes responsibility, as you said, in the event that there's an accident afterwards? And the way that we've seen it implemented at this stage is that it's always a secondary function.

Richard Robinson (03:40):

Correct. It's the backup. It's not the prime. It's like the watchdog we did for the railways in New South Wales. There's a GPS watchdog checking where the trains are, and if they get within a certain proximity, the watchdog will bark. Now that's a hardwired thing. There's no intelligence in there at all. But obviously, just looking at such a system, you'd say: how could this be enhanced if it had AI? Well, presumably it would actually look at all sorts of other factors, where track gangs are, who's doing what over there, what the weather conditions are, and provide the driver with further knowledge and assistance beyond just what it currently does.
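
(An illustrative aside: the hardwired watchdog Richard describes boils down to a plain proximity check, with no intelligence beyond comparing positions. The sketch below is a minimal reconstruction under that assumption, not R2A's actual railway system; the function names and the 500 m threshold are hypothetical.)

```python
import math

# Hypothetical alarm threshold; the real system's value isn't stated in the episode.
PROXIMITY_THRESHOLD_M = 500.0

def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lat, lon) points (haversine)."""
    r = 6_371_000.0  # mean Earth radius, metres
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def watchdog_bark(positions: dict[str, tuple[float, float]]) -> list[tuple[str, str]]:
    """'Bark' at every pair of trains closer than the threshold. Hardwired:
    it compares GPS positions and nothing else, as described in the episode."""
    ids = sorted(positions)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if distance_m(positions[a], positions[b]) < PROXIMITY_THRESHOLD_M]

# Two trains roughly 300 m apart trigger the bark; prints [('T1', 'T2')].
print(watchdog_bark({"T1": (-33.8688, 151.2093), "T2": (-33.8661, 151.2093)}))
```

An AI version, as Richard suggests, would fuse extra inputs (track-gang locations, weather and so on) before advising the driver, but the decision point would stay with the human.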

Gaye Francis (04:14):

I think that's one of its real strengths. It's the provision of that situational awareness and information so that humans can make informed decisions.

Richard Robinson (04:25):

Well, it's like the marine pilots, who we do a lot of work for. They have these personal pilotage units these days, which are their own independent navigation aids, and apart from satellite navigation, they've got GPS weather forecasting and all sorts of things. And if a sudden squall was coming their way, rather than the pilot having to positively check all the time what the weather's doing, it'll just sort of say: new weather report, the squall is coming faster than was anticipated.

Gaye Francis (04:52):

So that information's just presented rather than the pilot actually having to go and look for it.
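
(Another illustrative aside: the shift Gaye describes, from the pilot polling for weather to the weather being pushed to the pilot, is essentially a publish/subscribe pattern. This is a minimal sketch with hypothetical names; it doesn't describe any real personal pilotage unit software.)

```python
from typing import Callable

class WeatherFeed:
    """Hypothetical push-based feed: subscribers are notified when a report
    arrives, instead of repeatedly polling for the latest forecast."""
    def __init__(self) -> None:
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, callback: Callable[[str], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, report: str) -> None:
        # Push model: the feed calls the pilot; the pilot never goes looking.
        for notify in self._subscribers:
            notify(report)

feed = WeatherFeed()
feed.subscribe(lambda report: print(f"Pilot alerted: {report}"))
feed.publish("New weather report: squall approaching faster than anticipated")
```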

Richard Robinson (04:57):

Correct. So you can just see all these sorts of improvements that you might get... I mean, we had the other discussion about being a parent in a house with a kid: you disappear off to the toilet, and the kid's found a knife and is heading towards the power point. Well, it'd be nice if an AI chirped up and said 'kid with knife approaching power point' or something.

Gaye Francis (05:17):

I could just imagine the robot flashing lights, "kid approaching". The other place that we've seen it is probably in your personal space, in that specialists are using AI as a diagnostic tool.

Richard Robinson (05:35):

Yes. Well, the R2A board requires that I have annual medicals, and you're talking to your GP after all the usual tests for the year. And I dunno why the conversation popped up, but with ultrasounds and all these other sorts of tests that you do, I said: isn't AI going to affect the medical? He said it already is. He said that for a specialist, the AI's ability to detect a pattern or anomaly from ultrasounds and things like that is now better than the humans'. Obviously the trick is, of course, that it's still a human who then comes and looks at it and decides whether or not it should be reported and so forth. So the actual decision-making process remains with the human, but on the first cut of 'what does this mean?', the AI is obviously doing a very, very good job.
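
(A last illustrative aside: the workflow Richard describes, the AI making the first cut while a human retains the reporting decision, is a human-in-the-loop pattern. Everything below, names, fields and the confidence threshold, is hypothetical.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    scan_id: str
    description: str      # e.g. "possible anomaly"
    ai_confidence: float  # the AI's first-cut score, 0.0 to 1.0

def triage(findings: list[Finding],
           human_review: Callable[[Finding], bool],
           flag_threshold: float = 0.5) -> list[Finding]:
    """The AI does the first cut (filter by confidence); the human reviewer
    makes the actual decision about what gets reported."""
    flagged = [f for f in findings if f.ai_confidence >= flag_threshold]
    return [f for f in flagged if human_review(f)]

# The AI flags two findings; the human confirms only one for reporting.
findings = [Finding("scan-01", "possible anomaly", 0.92),
            Finding("scan-02", "faint pattern", 0.61),
            Finding("scan-03", "clear", 0.05)]
reported = triage(findings, human_review=lambda f: f.scan_id == "scan-01")
print([f.scan_id for f in reported])  # ['scan-01']
```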

Gaye Francis (06:24):

We sort of talked about this before: with AI presenting such robust information, I guess there's the potential that humans become a little bit lazier and just rely on the information that's provided.

Richard Robinson (06:40):

But if the AI information is better than what the human can give, beyond reasonable doubt, it's going to happen very fast.

Gaye Francis (06:49):

So where does that leave individuals and organisations in the liability space? If you just say 'we relied on that', don't you have to have some sort of quality assurance processes and proof that that information is robust?

Richard Robinson (07:05):

But what you're doing is flipping it around. Whereas previously it was the human supported by the AI, now it's the AI being supported by the human. That's where it gets tricky, and how that move's going to happen, I don't know. I mean, it's like when we talk about the roads and the rules of the sea. The mariners have told us on a number of occasions that big ships have an inordinate capacity to stay away from each other. You don't need special rules for the most part. And one of the reasons why, I'd say, the rules existed was so that if you did have an incident and you do have to decide who's responsible and who's going to pay for what, here it is written down, even though most of the time it's not particularly relevant.

Gaye Francis (07:42):

So it's to assign liability.

Richard Robinson (07:46):

It's to assign liability a lot of the time. And I suspect that's going to keep going for some time. I understand now that the numbers say that if all cars were self-drive to a Tesla standard, there would be fewer accidents on the road.

Gaye Francis (07:59):

Okay. That's an interesting stat.

Richard Robinson (08:00):

That is a bit of a problem. But how would you make the shift from assigning personal liability to a driver to the AI? Does that mean Tesla pays for all accidents hereafter?

Gaye Francis (08:15):

I don't think they'd sign up to that one.

Richard Robinson (08:17):

I don't think they would either.

Gaye Francis (08:19):

That's a paradigm shift.

Richard Robinson (08:21):

It really is.

Gaye Francis (08:22):

And you can see that's going to happen. I don't know how it's actually going to finally flip one way or the other, but it will be an interesting space. But I think from our viewpoint as due diligence engineers, when we talk to our clients, it's really about considering those other options. And we haven't really touched on robotics, but robotics are in a similar sort of space, in that we've seen clients using robotics to take the human element out of doing some dangerous work. For example, some water utilities were using drones to do their water sampling so their people didn't have to work over water. So that took away the drowning potential. You've seen it with the clearance divers.

Richard Robinson (09:09):

Oh yes. If you're at sea in a big sea and you think your propeller's failed or something, then it's easy to send down an ROV, a remotely operated vehicle, to have a look, presumably managed by the diver, before you send the diver down. You only send the diver down when you really need to. First of all, you're going to have a look with a machine.

Gaye Francis (09:27):

So I think that's when robots are very, very useful.

Richard Robinson (09:32):

Well, the rather depressing part about all this is the Ukraine war, which we touched on briefly, because the Ukrainians are developing very rapidly and electronic warfare is obviously rocketing along: the jamming of satellite signals and GPS signals and those sorts of things, so things can't navigate. So the other way to do it is to program whatever drone you've got, so if it picks, for example, a Russian tank in the distance, it doesn't care about the signals anymore. It can see the tank, it knows what to do, it knows how to do it, and it just goes and does it. That sort of weaponry is a kind of scary idea. But I think that's where we're heading very fast.

Gaye Francis (10:12):

Oh, that's totally scary. And then where do the liabilities, and the ethics and the morals, come into that? Which I think is a whole different podcast. And where do we start taking responsibility for some of those technologies?

Richard Robinson (10:30):

I think the Ukrainians have a fairly clear view of what they intend.

Gaye Francis (10:36):

Maybe not in the war space, but hopefully in the engineering and the technological space, AI and robots can be used for improving safety for organisations.

Richard Robinson (10:48):

I'm sure that'll be the case.

Gaye Francis (10:49):

I think there are still some questions to answer around the liability issues that can potentially arise from the use of the technology. And I think there's going to have to be some more robust work around quality assurance, making sure the information that you're getting is robust enough to make those informed decisions going forward.

Richard Robinson (11:10):

Well, the other one that we've mentioned a couple of times is that Sidney Dekker fellow, from a Queensland uni, I think it is. He's a pilot turned professional psychologist. And he just points out in passing that there are now more safety rules out there than anybody, least of all the person doing the job, has any clue about. But you can sort of imagine, I don't know whether you'd want it on your hard hat, but an AI sort of camped in your phone, keeping an eye on the surroundings, and if it sees something it gets a bad feeling about, it will alert you to it. So you don't really have to know all the details, because that's not possible. And we've seen lots of cases where people have been doing the job that way for years, but when you go and look at it, you say, well, you were a bit lucky here that you didn't get hurt. And if you had an AI watching, it would've chirped up the first time they tried to do it that way.

Gaye Francis (11:55):

I think we keep coming back to it: the value at the moment is around that situational awareness, giving the person information to make informed decisions.

Richard Robinson (12:05):

And alerting you to something where you've still got the decision to make. It's not doing it for you.

Gaye Francis (12:11):

Yeah. Alright. Thanks for joining us today, Richard. I hope everyone found that interesting. I'm sure this is a space that will proceed at speed over the next...

Richard Robinson (12:24):

We will be revisiting this when we hear of some particularly new or novel implementation that hasn't happened before.

Gaye Francis (12:30):

So thank you again and we'll see you next time.

Richard Robinson (12:33):

Thanks.

 
