AI Legal Personalities Would Solve the Liability Question

Marcus Storm discusses why we need to consider legal personalities for AI.

Articles about AI (Artificial Intelligence) usually concern themselves with three main points. Firstly, the fascinating world of potential applications and advances. Secondly, data collection and privacy, including potential bias and discrimination. Thirdly, changes to jobs.

One overarching point is missing from the debate, however: the underlying problem of legal liability. AI, by definition, is autonomous and executes decisions. That is why we create it in the first place, and why we want it: its immense power. Since we will want to deploy AI even before it is “perfect”, given the near impossibility of reaching that standard, we must accept that it will sometimes get things wrong, just as humans do.

Who, then, is responsible when an AI harms you, when an AI doctor makes a wrong diagnosis that costs you your life? Without clear answers to this question to give confidence to developers, operators, users, and consumers, companies will hesitate to buy and use AI; developers will build whatever is most profitable rather than the systems people actually need but lack the financial power to commission; and the cost of lost progress will be measured in human lives that more advanced technology could have saved.

Again, bear in mind that AI will harm you at one point or another. This is something we must accept, but not something we should fear. Those who treat AI as ‘just another technology’ underestimate its profound impact on our society, including the way we use and provide healthcare. We must figure out a new way to admit these autonomous actors into our society so they can follow the social rules and norms that we set out for them.

This is, of course, politics: a problem that politicians must solve, and one they must solve with the consent of the people.

Imagine we chose a group of people at random, among them police officers, nurses, doctors, teachers, and academics, and then decided that no existing laws would apply to them.

That’s a scary and unwelcome scenario. But it’s essentially a metaphor for what happens if we let autonomous actors operate without governance. That’s what we should be afraid of: AI left alone to do whatever it wants because we have not set up the framework within which it can operate, not AI itself, which is a highly enabling technology capable of wonderful things.

Laws and regulation, in this case, don’t actually hold back the development of AI. Right now, it’s the lack of them that leaves potential developers unsure how to create AI, and users unsure how safe and ethical it is to adopt it.


Marcus Storm heads AI products for a major financial institution and is the convener of this year’s Technology Network blog takeover. Earlier this year, he arranged the publication of the network pamphlet “Modern Britain: Global Leader in Artificial Intelligence”. Please get in touch if you would like to collaborate.
