They roll in weekly. We watch them. We rub our hands together with schadenfreude-tinged glee.
I’m speaking of Tesla Autopilot crash videos.
Like a train wreck, we seem unable to avert our eyes from videos depicting the Silicon Valley darling’s sheet metal kissing concrete dividers and other objects, animate and inanimate. Time and time again, owners of Tesla’s Autopilot-equipped Model S and Model X vehicles throw caution to the wind and let the computer take charge in situations that demand human intervention.
And it’s not going to change — not tomorrow, not ever — until we alter course. That’s because we’re trying to answer the wrong question when it comes to autonomous mobility.
First, let’s contrast two things: Tesla’s Autopilot (or any other autonomous system) and someone with a well-below-average IQ.
In the latest video depicting a Tesla Autopilot crash, the environment is easy to decipher: the highway is diverted due to construction, Botts’ dots are visible on the road to indicate there’s a new temporary lane for vehicles to follow, and that new lane is bordered by an Armco-and-concrete barrier to protect workers in the construction area and/or drivers from hitting heavy equipment.
Autopilot’s camera and radar sensors have a very difficult time detecting Botts’ dots. Complicating the scenario is a vehicle directly ahead of the Tesla, which the video shows following the newly demarcated lane before the crash. Because of this, we don’t know whether the Tesla “sensed” the barrier at all, but for the sake of argument let’s give Tesla credit and assume it did. Another vehicle sits beside and just behind the Model S moments before the crash, forcing the system into a quandary: should I stay (in my lane) or should I go (into the other lane and hit that vehicle)? Blind to the Botts’ dots, the system chooses the barrier over the vehicle. That vehicle in the other lane, driven by a human, is following the Botts’ dots. Had the Autopilot system seen the Botts’ dots, it would have gently steered to the right, knowing the object blocking it (the human-driven vehicle) would move along the new lane and pose no threat.
That’s a situation where we put a lot of stock in the capabilities of Autopilot. For all we know, Autopilot didn’t notice the barrier at all thanks to being screened by the vehicle ahead of it like a winger screening a goalie in hockey.
Now replace Autopilot with any sober, licensed (or even unlicensed) driver. Intelligence, even a minute amount of it, is key here. This below-average driver, still infinitely more intelligent than a computer, would notice construction signs, see taller heavy equipment ahead, and plan accordingly before the lane diverges. Above all, this person would be able to make these decisions in varying weather conditions.
The great thing about the human brain is its ability to make decisions based on small bits of incomplete information and fill in the blanks. For instance, if we are fiddling with the radio and pop our head up just in time to see a diamond-shaped orange sign drift by, we know there’s likely construction ahead even if we don’t see the content of the sign itself. Conversely, if a camera only faintly sees a snow-covered sign through a blizzard against an equally white background, it won’t know what to do with it. But we imperfect humans do.
So what does this have to do with asking the wrong question? Well, we’re now at a point where we’re trying to digitally sense and program our way around an infrastructure designed for the human interface. Signs are meant to be read by eyes, not cameras. The same logic applies to temporary road markings like Botts’ dots and others. All these warnings, cues, hints, and commands are designed with humans in mind. And we’re now trying to engineer sensors (LIDAR, radar, and cameras) and software to interpret the world as humans do, without the necessary intelligence to back it all up. Until we’re able to control the weather and develop some sort of artificial intelligence on par with even the least capable human driver, this effort is all for naught.
But there is a solution, and it has the ability to fix this and other problems: we need to change our infrastructure to best support autonomous mobility.
It’s no secret that road infrastructure is falling apart, and not just in the United States. Decades of underfunding our transportation departments are coming home to roost. We are in for a collective crisis when it comes to the health of our roads. We could just rebuild them again as we’ve done in the past and continue on the 30-year cycle of replacing concrete, or we could take this opportunity to future-proof our roads to handle autonomous operation.
Should you be a member of the camp campaigning for an autonomous-vehicle future, you should be cheerleading an infrastructure upgrade. In-road communication between cars and central information hubs is the only currently foreseeable way to solve many of the challenges afflicting the autonomous vehicles we see today, whether they be semi-autonomous Teslas or fully autonomous Waymos. Weather is no longer a concern if vehicles no longer need to “see” road lines through snow and slush. Construction signs can be a thing of the past, as central information hubs alert vehicles to construction ahead. And should there be an accident a mile down the road in this autonomous utopia, a message could be sent to inbound vehicles to zipper merge without causing excessive delays.
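To make the idea concrete, here’s a toy sketch of what a hub-broadcast alert and a vehicle’s reaction to it might look like. The message fields, the `RoadAlert` type, and the `plan_lane` helper are all my own illustrative assumptions, not any real V2I standard:

```python
# Illustrative sketch only: a toy in-road alert and how a vehicle's planner
# might react to it, merging early instead of reading signs with a camera.
from dataclasses import dataclass

@dataclass
class RoadAlert:
    kind: str            # e.g. "construction", "accident" (assumed vocabulary)
    mile_marker: float   # where the hazard begins
    closed_lanes: set    # lane indices vehicles should vacate

def plan_lane(current_lane: int, alert: RoadAlert, total_lanes: int) -> int:
    """Pick the nearest open lane; stay put if our lane isn't closed."""
    if current_lane not in alert.closed_lanes:
        return current_lane
    open_lanes = [l for l in range(total_lanes) if l not in alert.closed_lanes]
    return min(open_lanes, key=lambda l: abs(l - current_lane))

# A hub broadcasts that lane 0 is closed for construction at mile 42.
alert = RoadAlert(kind="construction", mile_marker=42.0, closed_lanes={0})
print(plan_lane(current_lane=0, alert=alert, total_lanes=3))  # → 1
```

The point isn’t the lane-picking arithmetic; it’s that the vehicle receives structured data about the road ahead instead of inferring it from paint and signage.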
But best of all, and this is me wishing for a perfect world, maybe our meatbag-driven vehicles could be equipped with the same message-reception technology. Instead of being expected to see a school-zone sign hidden behind a badly manicured bush, that sign could then be displayed on a heads-up display. We could be warned of accidents ahead and what lane will get us through the bottleneck most efficiently. And — this is reaching, but allow me a moment — maybe I could blast up through the middle lane of a three-lane freeway at 80 mph while a sea of fully autonomous vehicles part into the other lanes as if I’m some sort of petrol-powered Moses. We can all hope.
In-road communication isn’t without its flaws. In a world where everything digitally connected is also hackable, there’s the risk of hijack via message spoofing, wherein an external actor sends an unauthorized message down the communication channel pretending to be a “road authority.” The results could be disastrous, and the channel could become a target for large-scale terrorism. But that risk already applies to every connected vehicle on the road today, autonomous or not.
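One standard mitigation for spoofing is to authenticate every message. Here’s a minimal sketch using a shared-secret HMAC, assuming a key provisioned to the vehicle in advance; the key, message format, and function names are all hypothetical:

```python
# Hypothetical sketch: rejecting spoofed "road authority" messages with an
# HMAC tag. Real V2I systems would use certificates, not a shared secret.
import hashlib
import hmac
import json

ROAD_AUTHORITY_KEY = b"secret-provisioned-at-manufacture"  # assumption

def sign_alert(alert: dict, key: bytes) -> str:
    payload = json.dumps(alert, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_alert(alert: dict, tag: str, key: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_alert(alert, key), tag)

alert = {"type": "construction", "lane_closed": 2, "mile_marker": 114}
tag = sign_alert(alert, ROAD_AUTHORITY_KEY)

print(verify_alert(alert, tag, ROAD_AUTHORITY_KEY))    # genuine message: True
spoofed = dict(alert, lane_closed=1)                   # attacker alters payload
print(verify_alert(spoofed, tag, ROAD_AUTHORITY_KEY))  # tampering detected: False
```

Authentication doesn’t make the channel unhackable, but it raises the bar from “anyone with a radio” to “anyone who has stolen the signing key.”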
I don’t want to be a party-pooper, but there has to come a time when we never see car-crash videos outside of NASCAR again, because the price of those crashes isn’t paid by an autonomous algorithm; it’s paid by intelligent human beings.