They always say self-driving cars are safer, but the way they prove it feels kind of dishonest. They compare crash data from all human drivers, including people who are distracted, drunk, tired, or just reckless, to self-driving cars that have top-tier sensors and operate only in very controlled areas, like parts of Phoenix or San Francisco. These cars do not drive in snow, in heavy rain, or on complex rural roads. They are pampered.
If you actually compared them to experienced, focused human drivers, the kind who follow traffic rules and pay attention, the safety gap would not look nearly as big. In fact, it might even be the other way around.
And nobody talks about the dumb mistakes these systems make. Like stopping dead in traffic because of a plastic bag, or swerving for no reason, or not understanding basic hand signals from a cop. An alert human would never do those things. These are not rare edge cases. They happen often enough to be concerning.
Calling this tech safer right now feels premature. It is like saying a robot that walks perfectly on flat ground is better at hiking than a trained mountaineer, just because it has not fallen yet.
People driving horse-drawn buggies probably thought the same way when they saw an out-of-control roadster speeding toward them at a brisk 25 mph. One of the things that humanity generally excels at is adapting to a changing environment.
No, they didn’t. The buggy still had a human driver. Back then, those cars were often open-cab, meaning you could easily see and maybe even talk to the person behind the wheel. And buggy or roadster, it’s the driver who is responsible for the vehicle’s behavior.
This is not like the transition from horses to automobiles. That’s a false narrative perpetuated by those running the companies trying to sell autonomous driving.
I’m not saying we can’t adapt. I’m saying that while self-driving solves some problems, it introduces completely new ones.
Both of those cases are still human-controlled vehicles. This is more like trying to learn to predict the behavior of wild animals, and it’s arguably harder, since these bots aren’t living beings honed by the same evolutionary forces that shaped us. People still struggle to predict the behavior of animals, even other mammals. If we struggle to predict even animal behavior, why would we have any intuition for the behavior of an utterly alien machine? Look what happened to Siegfried and Roy, two people who spent their entire lives learning the behavior of the wild animals they lived and worked with.