Who’s Liable if AI Goes Wrong?

[Image: Boston Dynamics' Atlas robot]

The three laws of robotics

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Isaac Asimov, ‘Runaround’, 1942

Thinkers like Asimov imagined a world where machines had to make what appear to be moral decisions. With the rise of autonomous, self-learning systems that power everything from planes to Netflix recommendations, we are now entering the world Asimov foresaw.

But the famed author wasn't laying out a manifesto for machine behaviour to be applied when the time finally came. He set about finding flaws in his own reasoning, probing the three laws for paradoxes countless times across his fiction.
