Protecting Your Brand

They’re called accidents, and they happen all the time. You and I have both had our unfortunate share of fender-benders, near-misses, and full-on collisions, many of which were unavoidable or unpredictable.

It seems natural that self-driving cars would get into their share as well, and they do. But there are diverging narratives about how much those accidents matter.

“Robots Are Safer than Humans”

…So we should get the humans off the road as quickly as possible. Fewer drunk drivers, less texting-while-driving, no falling asleep at the wheel. What’s not to like?

“Robots have never killed a human”

I’m not aware of any robot, acting autonomously without human instruction (and not engaging in an act of war), that has killed a human.

Which story is right?

Is the path forward the utilitarian strategy of simply being safer than human drivers, or is it the ethically clear path of applying Asimov’s First Law?

It’s neither. And it’s both.

I fully believe that many of the self-driving cars deployed today are already significantly safer than human drivers within their intended Operational Design Domains (ODDs). As Waymo, Cruise, Zoox, and other fleets approach the 100-million-mile mark (approximately the number of miles humans drive, on average, per fatal crash), I don’t expect to be proved wrong. At the same time, when the first fatal accident inevitably occurs, I expect that argument to ring entirely hollow in court. I expect the media to gobble up every detail. And as recent events have shown, I expect each accident involving a self-driving car to carry far more weight and attract far more attention than a mere statistic ever would.
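For context, here is the back-of-the-envelope arithmetic behind that figure, using approximate public US statistics (the exact numbers vary year to year and are assumptions here, not from the original text):

```python
# Rough US figures, for illustration only: ~3.2 trillion vehicle miles
# traveled and ~40,000 traffic fatalities per year.
us_miles_per_year = 3.2e12
us_fatalities_per_year = 40_000

miles_per_fatality = us_miles_per_year / us_fatalities_per_year
print(f"{miles_per_fatality:,.0f}")  # 80,000,000 -> on the order of 100M miles
```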

Autonomous vehicles need to be much safer than humans, without being required to guarantee absolute safety on the road. We need to apply a strict but fair, transparent standard, one arrived at by consensus and clearly measurable and testable.
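What might such a standard look like in practice? As a minimal sketch (my own illustration, not a concrete proposal from this piece), a regulator could ask whether a fleet’s observed fatality count is statistically below what human drivers would be expected to produce over the same mileage. The baseline rate, significance threshold, and function name below are all assumptions:

```python
# Hypothetical sketch: a one-sided Poisson test of whether an AV fleet's
# fatality rate is statistically below an assumed human baseline.
from scipy.stats import poisson

HUMAN_RATE = 1.0 / 100_000_000  # assumed baseline: ~1 fatality per 100M miles


def below_human_baseline(av_miles: float, av_fatalities: int,
                         alpha: float = 0.05) -> bool:
    """Return True if the fleet's fatality count is significantly lower
    than the count expected from human drivers over the same mileage."""
    expected = HUMAN_RATE * av_miles  # fatalities expected at the human rate
    p_value = poisson.cdf(av_fatalities, expected)  # P(X <= observed)
    return p_value < alpha


# 100M fatality-free miles: expected = 1.0, p ~ 0.37 -> not yet conclusive
print(below_human_baseline(100_000_000, 0))  # False
# ~300M fatality-free miles: expected = 3.0, p ~ 0.0498 -> just clears 5%
print(below_human_baseline(300_000_000, 0))  # True
```

One consequence of this kind of framing: under a baseline of one fatality per 100 million miles, even 100 million clean miles isn’t statistically conclusive; it takes roughly 300 million. That gap is exactly why the threshold needs to be negotiated openly rather than assumed.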

We’ll need to write a new social contract, one of the first between robots and humans.