The First Law of Robotics

I’m a sci-fi buff, so I like to compare what we’re doing to Isaac Asimov’s Three Laws of Robotics – or specifically, the First Law.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Isaac Asimov

And the cool thing about Autonomous Vehicles is, I’d argue, that they’re the first robots with any level of agency that we’re likely to encounter in daily use – and the first application where Asimov’s First Law applies at scale.

Asimov’s law is a bit idealistic, but it’s not so much the verbiage of the law as the spirit that’s important. I wouldn’t, for example, argue that an AV would ever have a duty to intercept a mugger in order to stop a robbery-in-progress, Batman-style (although – what a feature that would be!). I’d argue something much simpler: on an ethical level, AVs have a duty to avoid vehicle collisions.

Of course, that’s way too simple, and there are all sorts of pitfalls to accomplishing that ideal. The final form of the First Law, as adopted by society, will almost certainly carry more conditions, exceptions, and complexity in general. In future posts, I’m going to dive deeper into these complications with my own personal take.

But I think the spirit of the First Law ought to be pretty easily agreed upon. Go places. Without hurting people. Mostly legally.