March 3, 2017

Fictional Autonomous Ford in Super Bowl Commercial

They roll in weekly. We watch them. We rub our hands together with schadenfroh glee.

I’m speaking of Tesla Autopilot crash videos.

Like a train wreck, we seem unable to avert our eyes from videos depicting the Silicon Valley darling’s sheetmetal kissing concrete dividers and other animate and inanimate objects. Time and time again, owners of Tesla’s Autopilot-equipped Model S and Model X vehicles throw caution to the wind and let the computer issue orders in situations when it’s imperative there be human intervention.

And it’s not going to change — not tomorrow, not ever — until we alter course. That’s because we’re trying to answer the wrong question when it comes to autonomous mobility.

First, let’s contrast two things: Tesla’s Autopilot (or any other autonomous system) and someone with a well-below-average IQ.

In the latest video depicting a Tesla Autopilot crash, the environment is easy to decipher: the highway is diverted due to construction, Botts’ dots are visible on the road to indicate there’s a new temporary lane for vehicles to follow, and that new lane is bordered by an armco-and-concrete barrier to protect workers in the construction area and/or drivers from hitting heavy equipment.

Autopilot’s camera and radar sensors are going to have a very difficult time finding Botts’ dots. Complicating the scenario is a vehicle directly ahead of the Tesla, which you can see following the newly demarcated lane in the video before the crash. Because of this, we don’t know if the Tesla “sensed” the barrier here, but let’s give Tesla credit and assume it did for the sake of argument. There’s another vehicle beside and just behind the Tesla Model S moments before the crash, which forces the Model S into a quandary: should I stay (in my lane) or should I go (into the other lane and hit another vehicle)? The system is completely unaware of the Botts’ dots and chooses to hit the barrier instead of hitting a vehicle. That vehicle in the other lane, driven by a human, is following the Botts’ dots. Had the Autopilot system seen the Botts’ dots, it would have gently steered to the right, knowing the object blocking it (the human-driven vehicle) would move and not pose a threat.

That’s a situation where we put a lot of stock in the capabilities of Autopilot. For all we know, Autopilot didn’t notice the barrier at all thanks to being screened by the vehicle ahead of it like a winger screening a goalie in hockey.

Now replace Autopilot with any sober, licensed (or maybe unlicensed) driver. Intelligence, even a minute amount of it, is key here. This intellectual idiot, who’s still infinitely more intelligent than a computer, would notice construction signs, see taller heavy equipment ahead, and plan accordingly before the lane diverges. Above all, this person would be able to make these decisions in varying weather conditions.

The great thing about the human brain is its ability to make decisions based on small bits of incomplete information and fill in the blanks. For instance, if we are fiddling with the radio and pop our head up just in time to see a diamond-shaped orange sign drift by, we know there’s likely construction ahead even if we don’t see the content of the sign itself. Conversely, if a camera only faintly sees a snow-covered sign through a blizzard against an equally white background, it won’t know what to do with it. But we imperfect humans do.

So what does this have to do with asking the wrong question? Well, we’re now at a point where we’re trying to digitally sense and program our way around an infrastructure designed for the human interface. Signs are meant to be read by eyes, not cameras. The same logic applies to temporary road markings like Botts’ dots and others. All these warnings, cues, hints, and commands are designed with humans in mind. And we’re now trying to engineer sensors (LIDAR, radar, and cameras) and software to interpret the world as humans do, without the necessary intelligence to back it all up. Until we’re able to control the weather and develop artificial intelligence on par with even the least capable human driver, this effort is all for naught.

But there is a solution, and it has the ability to fix this and other problems: we need to change our infrastructure to best support autonomous mobility.

It’s no secret that road infrastructure is falling apart, and not just in the United States. Decades of underfunding transportation departments are coming home to roost. We are in for a collective crisis when it comes to the health of our roads. We could just rebuild them again as we’ve done in the past and continue the 30-year cycle of replacing concrete, or we could take this opportunity to future-proof our roads to handle autonomous operation.

Should you be a member of the camp campaigning for an autonomous vehicle future, you should be cheerleading an infrastructure upgrade. In-road communication between cars and central information hubs is the only currently foreseeable way to solve many of the challenges afflicting the autonomous vehicles we see today, whether they be semi-autonomous Teslas or fully autonomous Waymos. Weather is no longer a concern if vehicles no longer need to “see” road lines through snow and slush. Construction signs can be a thing of the past, as central information hubs can alert vehicles to construction ahead. Should there be an accident a mile down the road in this autonomous utopia, a message could be sent to inbound vehicles to zipper merge without causing excessive delays.
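A hub-to-vehicle alert of this kind could be as simple as a small typed message the car’s planner consumes. Here is a minimal sketch of that idea; every name, alert category, and message format below is hypothetical, not drawn from any real V2X standard:

```python
from dataclasses import dataclass
from enum import Enum

class AlertType(Enum):
    # Hypothetical categories a central information hub might broadcast
    CONSTRUCTION = "construction"
    ACCIDENT = "accident"
    WEATHER = "weather"

@dataclass
class RoadAlert:
    alert_type: AlertType
    distance_m: int        # distance from the vehicle to the hazard, in meters
    suggested_action: str  # e.g. "zipper_merge_left"

def plan_response(alert: RoadAlert) -> str:
    """Translate a hub alert into a high-level driving decision."""
    if alert.alert_type in (AlertType.CONSTRUCTION, AlertType.ACCIDENT):
        return f"{alert.suggested_action} within {alert.distance_m} m"
    return "reduce_speed"

# The accident-a-mile-ahead scenario from the article (1 mile is about 1,600 m)
print(plan_response(RoadAlert(AlertType.ACCIDENT, 1600, "zipper_merge_left")))
# prints: zipper_merge_left within 1600 m
```

The point of the sketch is that a broadcast message removes the perception problem entirely: the car never has to “see” a sign, only parse a structured alert.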

But best of all, and this is me wishing for a perfect world, maybe our meatbag-driven vehicles could be equipped with the same message-reception technology. Instead of being expected to see a school-zone sign hidden behind a badly manicured bush, that sign could then be displayed on a heads-up display. We could be warned of accidents ahead and what lane will get us through the bottleneck most efficiently. And — this is reaching, but allow me a moment — maybe I could blast up through the middle lane of a three-lane freeway at 80 mph while a sea of fully autonomous vehicles part into the other lanes as if I’m some sort of petrol-powered Moses. We can all hope.

In-road communication isn’t without its flaws. In a world where everything digitally connected is also hackable, there’s the risk of hijack via message spoofing, wherein an external actor sends an unauthorized message down the communication channel pretending to be a “road authority.” That could have disastrous results and make smart roads a target for large-scale terrorism. But the same is true of all connected vehicles, autonomous or not.
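Spoofing is a known problem with known (if imperfect) mitigations: authenticated messages. Below is a minimal sketch using a shared-key HMAC purely for illustration; real V2X security designs (such as the IEEE 1609.2 standard) use public-key certificates instead, and the key and message strings here are invented:

```python
import hashlib
import hmac

# Hypothetical shared secret; a real deployment would use PKI certificates,
# not a hard-coded symmetric key.
ROAD_AUTHORITY_KEY = b"demo-shared-secret"

def sign_message(payload: bytes, key: bytes = ROAD_AUTHORITY_KEY) -> bytes:
    """Produce an authentication tag the receiving vehicle can verify."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes, key: bytes = ROAD_AUTHORITY_KEY) -> bool:
    """Reject any message whose tag doesn't match, e.g. a spoofed 'road authority' alert."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

alert = b"CONSTRUCTION_AHEAD:merge_left:1.0mi"
tag = sign_message(alert)
print(verify_message(alert, tag))                        # authentic message: True
print(verify_message(b"ALL_CLEAR:maintain_speed", tag))  # spoofed message: False
```

An attacker without the signing key can inject bytes onto the channel, but the vehicle simply drops anything that fails verification, which is why the spoofing risk is serious but not unmanageable.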

I don’t want to be a party-pooper, but there must come a time when we never see car crash videos outside of NASCAR again, because the price of those failures isn’t just an autonomous algorithm; it’s intelligent, human life.


75 Comments on “With Autonomous Cars, It’s Time to Realize We’re Trying to Solve the Wrong Problem...”

  • avatar

    T***a’s Autopilot is like a false flag op dreamt up by an anti-AV terrorist.

    There badly needs to be a more cautious and successful AV initiative by a major league player to expunge the stench of Elon’s ego.

    • 0 avatar
      SCE to AUX

      No, SAE and NHTSA need to eliminate Level 2 autonomy. Autopilot hardly needs to function at all, because Level 2 *requires* an attentive driver.

      • 0 avatar

        I think we’re beyond this though. Sure, Elon could hit the Big Red Button in California and turn off Autopilot for every single Tesla not equipped with some kind of future port blocking software, but that functionality was sold to customers. Tesla would fight tooth and nail for Level 2 to remain or else it would have to refund a lot of dough.

    • 0 avatar

The thing is, people have accidents all the time. They just don’t get news coverage because “guy wasn’t paying attention and runs into construction barrier” is a dog-bites-man story.

Part of the issue is that we need to understand that autonomous vehicles are never gonna be perfect. Expecting Autopilot or anything else to work 100% of the time is foolish, and the hysterical hand-wringing over a few isolated incidents is not needed.

Maybe it’s a PR problem. Musk spent too much time trying to portray autonomy as a safety thing. And it really isn’t. Probably the cars are safer than human drivers. But when you start touting safety, every single failure, even if it’s less likely than a human failure, gets overblown as proof that your claims were wrong.

      • 0 avatar

        It’s a catch-22 problem.

        Autopilot is great at keeping you on the straight and narrow so long as there are no surprises. Humans are much better at handling edge-case scenarios than today’s array of sensors and software. The unfortunate consequence of Level 2 autonomy is it allows us to mentally shut off for 90% of the journey and we’re not tuned into the task at hand when required. It’s kind of like sitting on a park bench and blissfully listening to music with your headphones on when, all of a sudden, someone hands you a bat and another person chucks a fastball at you. Your brain is just not prepared to do that task. Humans are not wired to change “jobs” that quickly.

        • 0 avatar

          100% this. A version of this happened to me back when I was in college driving the 16 hours from Georgia to Connecticut. A long boring stretch of the road is 95N up along the coast, with 20-30 mile portions dead flat and straight. Driving with cruise control on, barely holding the wheel, I didn’t even notice I was about to run off the road. The lane had ended (and clearly there were lots of signs beforehand), but I literally snapped ‘awake’ just in time.

          I suspect zoning out is much easier when the car can go uninterrupted for 30 miles (20-30 minutes?) with almost zero driver input. If just cruise control and flat highway is enough to turn off almost all of our faculties, it’s x times harder to be all the way out of the loop and have to reengage.

Mark – to your point, it’s even worse than that with AP because it’s likely to disengage when the stakes are highest. The fact that it’s not likely to be wonky or overwhelmed by normal driving works against it when it has overrun its abilities.

          In the Genesis G80, the lane keep assist is just good enough to be helpful, but not good enough that I can fully disengage. In a way that lack of competence is probably for the best (even though it is clearly not better at its job than the Tesla).

          To work the analogy, crappier versions never let you forget you’re at home plate waiting for a pitch…

        • 0 avatar

Well, it still leads to fewer accidents than would have happened without the system. This is the first-generation system, without all the sensors in the newer one, and it was still able to decrease accidents by 40%.

      • 0 avatar

        Both Mark and Snooder are correct.

        Humans and machines are different. They’re each better at different things. To expect one to behave exactly like the other, in every circumstance, is unreasonable. First and foremost, we must accept that.

        To Mark’s point, we can help autopilot by marking our infrastructure in ways that are perceptible by both human and machine. At the very least, take it into account when we make changes or improvements.

To Snooder’s point, autopilot is doing pretty darned well. Even in the video in question, I’m impressed as to how well autopilot handled things after it hit the barrier. It maintained its lane better than I could expect a human to.

        I continue to be impressed by how well automotive autopilot works. It can only get better. Let’s do what we can to get it there, so I can use it when I can no longer drive. :)

    • 0 avatar

      No – AutoPilot, like cruise control, automates certain functions within a limited set of parameters. The idiocy at work here is not the one Mark describes in his column, which covers poor decision-makers who can still recognize construction. The idiocy at work in this arena is the mystifying and wrong-headed notion that because something is called AutoPilot, it will drive the car by itself all the time and never make a mistake. It can keep the car between well-defined lane markers, and it can generally avoid hitting a car in front of (or beside) the piloted car, so long as that car is travelling in the same direction along the same or parallel courses. It cannot swerve to avoid a deer, or be relied upon to notice construction, or see a car up ahead weaving back and forth and think, “That dude’s drunk. I better be careful around him.” It’s a driving AID, not a driving REPLACEMENT. And Darwinian forces seem to be correcting against those who think otherwise.

  • avatar
    SCE to AUX

    True comments.

    The railroad system has largely done this already, but trains operate on well-defined tracks. It would be most challenging to add automation sensors to the zillions of suburban and rural roads where most accidents occur.

  • avatar
    Big Al from Oz

    Interesting opinion Mark.

    Like EVs, I do believe it will take years for truly capable and safe autonomous vehicles to materialise.

I do believe that with government-sponsored interference in high-tech industries, coupled with stakeholder expectations, we are working around and shortcutting the evolution of technology.

    This tech is great, but like most any instrument, tool, model we use in our society a better result is realised when a systematic and methodical approach is used.

Maybe this tech, like EVs, is coming online too fast.

  • avatar

    Brilliant. Absolutely fucking brilliant.

    I think the key point when autonomous vehicles become ubiquitous will be when both the ability to read road conditions and respond to them is there, and also the infrastructure is updated with sensors that provide better machine-readable information.

    The problem is that we can’t have just one. We need the ability to read the road because sometimes sensors break, or the road is old. And we need the sensors because, as you said, they’re just better.

Imagine if DPS had an integration with a Waze-like app, so that every license plate automatically updated the network to let people know where traffic hotspots are.

  • avatar

I agree with the moral of your article. There are literally millions of scenarios that programmers cannot totally plan for. How many miles are driven by Americans daily? Literally in every conceivable weather and road condition. There is a long way to go before you can program for them all. If you tell people the car can drive itself and sell the car as such, there is zero incentive to pay attention except for the first few rides, when you cannot believe the car is driving itself and you stand guard waiting to intervene. But after a week, I’m watching a movie, playing games, anything but driving.

The autonomous vehicles will no doubt continue to improve. It seems like smart road signs, vehicles connected to each other, and other external/fixed inputs to the vehicle, such as a temporary construction marker, make reliance on other humans a critical factor again. The construction worker could put the wrong smart sign out, place it in a poor spot, it could be tampered with, etc.

    The vehicle needs to be able to handle it, all of it…. as there will surely be times when it must. Particularly in the future as people may become so ill equipped to drive themselves without the experience we all obtained the hard way, that a system breakdown could render the vehicle stranded, or worse, put the equivalent of a student driver behind the wheel on a busy freeway if the system required human intervention.

There are a great many wrinkles that still need to be ironed out. Every autonomous accident will be plastered across front-page news. It will take time for widespread adoption once it is ready for prime time.

    As many perils as there are, it still begs an additional question, one that we aren’t asking but probably should.

    “When will autonomous vehicles be statistically safer than piloting your own car?” At that point, despite whatever drawbacks still exist, should we start implementing widespread use? Because even if a computer cannot compete with the sensory prowess of the human body, combined human experience and learning, even that of a moron….. that computer is still probably a better driver than many of the people we currently share the road with.

    Which leaves us with an even more frightening question. If autonomous cars get so good, could it eventually become illegal to pilot your own vehicle on public roads, as statistically, you are the moron more likely to cause death/injury compared to the computer? Food for thought.

    Bored at work on Friday, sorry for the lengthy essay.

    • 0 avatar

      “… even if a computer cannot compete with the sensory prowess of the human body, combined human experience and learning, even that of a moron….. that computer is still probably a better driver than many of the people we currently share the road with.”

      You’re right. Autonomous technology, in its current state and probably well into the future will likely be better than humans at driving a car — but only within very defined parameters. As I said, the human brain is wonderfully adept at figuring out problems based on very little information. Until autonomous vehicle sensor inputs can see as well or better than a human /AND/ process that information as a human would, relying solely on in-vehicle tech to navigate an environment designed to be deciphered by humans is a fool’s errand at best.

      • 0 avatar

The key flaw in your reasoning is your comparison of the autonomous technology to a trained, calm, sober, attentive driver. In the real world, the tech needs to be better than a 75th-percentile driver, which isn’t nearly as high a bar as the one you’re claiming.

        • 0 avatar

          I never said trained, calm, or attentive. I said sober.

          Edit: I also said “idiot.”

          • 0 avatar

            “I said sober.”

            Which is my point.

            “I also said “idiot.””

            Yes you did:

            “This intellectual idiot, who’s still infinitely more intelligent than a computer, would notice construction signs, see taller heavy equipment ahead, and plan accordingly before the lane diverges.”

The intellectual idiot will of course often not do that because they are too busy texting, applying makeup, ogling the hot girl jogging, dozing off, etc. IIRC that sort of thing happens 33,000 times a day.

        • 0 avatar

          Why only better than the 75th percentile driver?

          If the best that an AV can do is ‘as good as’ or ‘a little better than’ a human, then I don’t want them. Unless they can be nearly perfect . . . then I don’t want them.

          As a human being, I feel a deep distinction between the thought of my death being caused by a) another sentient human, including myself; b) a non-human sentient like a predator or a frightened/enraged beast; or c) an event involving a non-living artifact or situation. They are not the same. They feel different.

          Having evolved for hundreds of thousands of years as a social being, I think we instinctively know that one of our biggest threats is other humans. Death by car really has always fallen into that category. It’s usually either their fault, or your fault.

          I don’t want to turn driving into “well, shit might happen and you might just die” type of event. Even if the odds go down. I want to be in control, and I want *you* to be in control. If you’re going to kill me, then you kill me. If I kill myself, then I kill myself. But I don’t want some god of technology to randomly decide that my time is up and kill me.

          God, it sounds like some dystopian sci-fi story . . .

          “Just step into the transporter, sir, and you will immediately appear at your destination.”

          “Is it safe?”

          “Well . . . there *is* a 0.078% chance that the unit will decide to terminate you instead. Got to keep that population in check! But it’s so much safer than strapping yourself in a partially evacuated tube and hurtling across the ocean seven miles up, like they did in the 21st century!”

          “OK! Do I need to take off my shoes and belt?”

          F that.

          No thanks.

      • 0 avatar


        While I agree with your first premise, I don’t agree with the conclusion.

What I mean is, I agree that robots are better at handling most situations while humans are better at handling rare scenarios. But I don’t agree that means that trusting robot drivers is a fool’s errand.

        First, rare scenarios are rare precisely because they don’t happen that often. Which is better, suffering a slightly increased risk of accident 90% of the time, or a larger increase in risk 10% of the time? Maybe we are better off and safer taking the chance that the robot won’t be able to handle an emergency in exchange for a safer drive the rest of the time.

Second, safety is not the only benefit of an autonomous car. Even if the autonomous car ends up being only as safe as the average driver, there’s a benefit in convenience. I’d liken it to a valet or a taxi. Nobody is clamoring for valets to be rigorously tested and as good a driver as anyone in NASCAR. They just have to hold a driver’s license. Or when you get in a taxi, you don’t expect (or at least I don’t expect) Mario Andretti behind the wheel. Just the ability to get from point A to point B without fucking up most of the time. Even if there was a study that showed that taxis are on average slightly worse than John Q. Public, I’d still take one when I need a ride to the airport.

  • avatar
    Shane Rimmer

Call me a pessimist, but I fear this, while the right approach, will go about as well as our aborted attempt to swap to kilometers per hour on speed limit signs years ago.

    • 0 avatar

      We should’ve stayed the course. We had to go back because the older generation was a collective bunch of sticks in the mud. At least engine displacements made the switch.

  • avatar

    Here is my pet peeve – poor lane markings. I was driving in the rain the other day and the markings were either worn away, had been ground away, had been ground away and new haphazard markings painted, etc. It was 8 lanes merging and going off in different directions and I guess everyone just went on instinct. That’s not how it should be. A lot more effort needs to go into clearly marking the road.

    • 0 avatar

      There is a lot of effort that goes into marking the road. See the markings on a brand new road? Nice and solid. The problem is, when the city/county/state coffers get low, things that were scheduled on a 5-year program, like redoing the lines on the roads, get rescheduled to an 8- or even 10-year program.

  • avatar

    I’ve berated the fervent beta advocates ad nauseam about the dangers of these systems when you involve the least ‘competent’ denominator, much of it in this forum here.

    As a professional that deals with risk, probability and consequence in a regulated arena, the process to develop the nomenclature and foundation necessary to accurately describe and defend the intended bounds of a self-driving device is a monumental task given the endless variability and inconsistent nature of our roads and highways. It must be able to process the variable inputs into a general, solvable case that results in a safe driving path.

    The limitations of GPS and forward facing cameras and sensors are obvious. IMHO overcoming those limitations and managing the costs of a mass-market product are mutually exclusive right now. Nothing has proven otherwise. Sole focus on adapting the personal automobile to use existing roadways to be the future of safe, semi-automated, dehumanized mass transit is not a solution I can endorse.

  • avatar

    Well, yeah, redesigning all the roads around AVs would in theory enable them to be dumber. But…
    –they still need to be smart enough to properly respond to immediate local oddball circumstances…which is pretty darn smart
    –those expensive road upgrades are not going to happen for any sizable percentage of the roads and streets in the world anytime soon
    –those expensive upgrades would probably be rendered obsolete by future AV technology, or turn out to be the WRONG upgrades
–if you’re a freedom-loving ‘Merican, and don’t want Big Brother (or hackers, for that matter) CONTROLLING your Autonomous Vehicle, you don’t want “smart” roads.

    • 0 avatar

The whole smart-roads thing and V2V fall apart when it comes to wildlife and the effects of weather like floods or fallen trees. While it helps, it isn’t enough and is just one part of the picture.

One major problem to be solved is giving the system human-like intuition and reasoning. The intuition is needed to anticipate potential problems. The reasoning is to figure out that something like Botts’ dots being arranged in two parallel lines means they are lane markers. Trust me, the system did see them (the system I’m working with can see the aggregate in the pavement), but it needs to know how to interpret what it sees. Same with the sign: given a partial glimpse of something, reconstructing the entire object in its brain.

      Good progress is being made so we’ll get there eventually.

  • avatar

    I like the idea of, as an interim measure, a system that, even if it couldn’t drive the car, could talk to similar systems in other vehicles, draw on databases of things like the signs you mentioned, etc.

Aircraft have something similar to avoid collisions, called TCAS (Traffic Collision Avoidance System); it talks to the systems in other aircraft to detect potential collisions and can even issue emergency instructions to avoid one.

    • 0 avatar

      I think that’s the next logical step. We’ll likely see a rollout of smart intersections that can tell cars when they can go. But we really need some kind of in-ground wire to keep vehicles between the lines.

      • 0 avatar

        “But we really need some kind of in-ground wire to keep vehicles between the lines.”

        What about potholes and frost heaves?

        • 0 avatar

Hey … I’m not saying the solution is perfect. We will still need sensors, and the cars themselves will still need to make decisions based on their digital inputs. But relying on cameras, radar, and LIDAR as the primary form of vehicle control introduces a very narrow window in which autonomous vehicles can safely operate. By making the primary input part of the infrastructure, all those sensors can then be focused on the edge cases they’re very good at solving (your potholes, for instance). There’s nothing to say the car can’t go slightly off-course and then rejoin “the wire”.
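For what it’s worth, wire-guided steering is old, well-understood tech; warehouse AGVs have followed buried wires for decades. A toy sketch of the control idea, with made-up signal values: two induction coils straddle the wire, and the difference in sensed field strength drives a proportional steering correction.

```python
def steering_correction(left_signal: float, right_signal: float, gain: float = 0.5) -> float:
    """Proportional steering from two induction coils straddling a buried guide wire.

    A positive return value means steer right. If the vehicle drifts left,
    the wire sits closer to the right-hand coil, the right signal grows,
    and the correction steers the car back toward the wire.
    """
    return gain * (right_signal - left_signal)

# Centered over the wire: both coils read equally, so no correction is needed.
print(steering_correction(1.0, 1.0))   # 0.0
# Drifted left: the right coil reads stronger, so steer right (positive).
print(steering_correction(0.6, 1.0))   # 0.2
```

The gain value and signal units here are arbitrary; the point is that the “rejoin the wire” behavior falls out of a trivially simple controller, which is exactly why guide-wire following is so much easier than camera-based lane keeping.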

  • avatar

Static, machine-vs-fixed-world scenarios like this are still not even into the foothills of the mountain that is complete human replacement for an AV. Throw in a million guys actively trying to take advantage of the poor, “low IQ” bot, and see how it fares…..

No one is even close to daring to field a robot in any racing series against humans. Not even close.

Though, as far as I’m aware, no one has even bothered attempting to time-trial a robot faster than a human driver around an empty, predictable track. Talk about an orders-of-magnitude simplification of the operating environment….

    Until Porsche gets their latest electrocarbon millionaire toy faster around the ‘Ring, with their fastest test driver NOT touching the controls than with him doing so, fully generalized human driver replacement doesn’t even qualify for a spot amongst distant chimeras. At least not outside of prospectuses aimed at grabbing on to ever more of the fresh print emanating from the Fed.

    The story of “AI” in general, I suppose: “Every” nerd worth his salt is a SciFi fan. And every vain schmuck with an above averagely thick wallet, likes to flatter his ego by hanging on to the illusion this situation is somehow due to him “being smarter”…. Than all the other guys that throughout history were/are “smarter…….”

Anyway, get the time trialing done. Then try racing against humans, with lawyers. Like Marc Márquez shoulder-bumping Mark the Bikebot at full lean, 200+ mph, with both tires sliding…. In a dense pack of other riders jockeying for the same lines those two are. Do that successfully for long enough to render humans inevitably irrelevant (talk about expensive-to-employ drivers ripe for replacement, btw…..). Then pop some Champagne. While poor Marc sulks, off the podium somewhere in the background.

As even full-grid racing is an orders-of-magnitude simpler environment than the full breadth of human activity behind the wheel on public roads. Which, come to think of it, also includes a bit of racing. Think Jack in his Phaeton, trying to stay ahead of some illegal engaged in a running gunbattle with police….. Etc, etc….

And remember, in the rough and tumble of real-world traffic, road users are there to get somewhere. So “pull over, cease and freeze” just isn’t a viable strategy, no matter how much the legal department may insist it is treated as such. Have Mark the Bot from above do that, and he’ll never even get to merge onto the freeway. As Jack and buddies will bully him with just enough “this is dangerous” signaling to keep him frozen forever. And alongside him, all the poor saps, even aggressive wannabe Jacks, queued up behind him on every onramp….

In fully financialized dystopias, cool-sounding elevator pitches, some trumped-up credentials, and good connections are all one needs in order to make piles of money. But pitches about all the great things the future’s gonna hold are also all anyone currently has. If anyone could get even as far as time-trial dominance over humans, they would have. Until then, they really have nothing practical. Just dreams and overconfident prospectuses. And, increasingly, lots of money with which to broadcast their illusions to those easily impressed and suckered.

    • 0 avatar

      Mikey hates everything.

    • 0 avatar

      Road and Track did one time trial comparison at Sonoma. The autonomous vehicle was about 10 seconds off the pace of the skilled driver.

      There’s also an electric supercar concept that lapped COTA autonomously, but 29 seconds slower than their test driver.

I agree that this is relatively simple stuff compared to making an autonomous car operate safely on the street, but racing in a grid is only as simple as driving on public roads if you’re willing to settle for last place.

      Autonomous car racing will be extremely interesting until they get it figured out. Then it will become extremely boring.

      • 0 avatar

        “racing in a grid is only as simple as driving on public roads if you’re willing to settle for last place.”

On average, yes. But the “complexity peaks” that occasionally occur on the street are way more difficult to get a handle on than their grid-racing counterparts. On the road, you just have no idea what may happen. And you can’t just wave a flag and stop the race for unexpected weather or other events.

As an example, and to stay on script, how is your robocar supposed to react to some illegal, or police, or Jack in a Phaeton, deciding to engage you in a running gun battle while you’re trundling through some crowded Chinatown? Or out on the open highway, for that matter. Cease-and-freeze is exactly the worst thing to do. Humans may not always react 100% “correctly,” but at least they’ll do something that may complicate the wannabe killer’s mission. Meanwhile, the sheer shallowness of an AI’s “brain,” and its currently limited experience and motivation space, pretty much guarantees it will be completely out of its element.

        While those kinds of scenarios may sound far out, human drivers sharing roads with AIs will, since it’s what humans do, attempt to sniff out exactly what they have to do to get the AI to engage freeze-and-cease, or some similarly overcautious, risk-avoidance mode, so that the human driver in question can use this to his advantage. Crowded traffic is perhaps best modeled as a million concurrent, interacting games of chicken, full of signaling aimed at eliciting compliance with one’s own goals from others. With lives at stake guaranteeing AIs will be “overcautious,” such a space is really, really far ahead of anything even the most advanced AI can currently dream of competing with humans at, or heck, even fruit flies.

        • 0 avatar

          I see what you’re saying. While the driving itself is relatively easy compared to wheel-to-wheel racing, it’s hard to imagine that it could ever be capable of dealing with every scenario that a person could encounter.

          That’s why I wouldn’t even want my vehicle to be capable of activating the brakes. I’d be willing to run down someone who is trying to harm me, but an autonomous vehicle would disagree on whose life is more important.

  • avatar

    I agree with the premise. However, there is no money to fix bridges and dams. How are we going to find the money to overhaul the interstate system to interface with autonomous cars? People making less money means less tax revenue, which results in crumbling highways and bridges.

  • avatar

    Fully Autonomous Driverless Vehicles were originally envisioned for use only on long stretches of limited-access, low-density highways. My BS in Transportation from long, long ago examined this issue at length. But we certainly never wanted driverless cars for urban or suburban driving. It made no sense then. It makes no sense now.
    When did the desire to do away with drivers altogether shanghai this potentially useful idea?
    More to the point: why are we allowing this misuse… this misapplication… of an otherwise useful technology?
    Just because something can be achieved does not always mean it should be.

    • 0 avatar

      “we certainly never wanted driverless cars for urban or suburban driving. It made no sense then. It makes no sense now.”

      It doesn’t? How do you know what we “want”?

      Here’s my planned day.
      – Wake up in the morning and crawl into another bed, in my car.
      – I slumber while my car fights rush hour traffic to work (where I can shower and get ready).
      – After work, I drive to the bar and drink until I forget my problems.
      – I crawl into my car, which drives me home.

      Here’s my planned vacation travel.
      – I load up the car in the evening and jump in the back to take a snooze.
      – Sometime the next morning, I arrive at my destination. I can spend the day vacationing rather than driving, and I just eliminated a night in the hotel.
      – Similar tactic on the way home.

  • avatar
    George B

    Mark, maybe the future is more Waze than government-controlled messaging to cars. Crowd-sourced Waze provides continually updated information about transient traffic obstructions (like police collecting revenue or herds of deer near the highway) that government is unwilling or unable to provide in a timely manner.

  • avatar

    Re-engineering our infrastructure to handle autonomous vehicles would be about as easy as the UK switching to left-hand drive on the right side of the road. The amount of money required, and the integration needed between governments at all levels, make it almost unfeasible. We don’t even fund the human-centric infrastructure, and governments can barely agree on common signage, clearances, and tolerances for the display of signage. Private companies will not assume the liability for piloting everyone’s vehicle.

    Autonomous-vehicle developers are building vehicles to operate in a human-centric transportation infrastructure because it’s the only infrastructure we can almost agree to fund.

  • avatar

    You’re on a very expensive and inefficient wrong track. It simply isn’t possible to pad all of the world’s corners for the sake of technology. Instead, the technology has to be able to recognize and adapt to reality and to changes in reality.

    The primary benefit of autonomous driving (assuming that it ever works properly) is that the technology will never be impatient, angry, distracted, drunk, stoned or aggressive. The problem with human drivers is that they are human, and some humans are worse than others.

    • 0 avatar

      Another advantage of autonomous driving is that there are sensors under development that will ultimately give machines far greater capability of detecting hazards than any human.

      For example, femto-photography, which will give a machine the ability to see around corners.

    • 0 avatar

      Someone once said that you can’t make a machine smarter than people are stupid. Tortured English, but I agree.

  • avatar

    Why is self-loathing such a badge of honor these days?

    First it was hating on your country (‘Murica, hur hur). Nothing special there, it’s been going on for fifty years.

    Then came ragging on your own race/ethnicity (stupid white people o_O)

    Recently added hating on your gender/sexuality (cis male . . . ewww)

    Now you gotta hate your f’in SPECIES, too? Meatbags? WTF? Waiting longingly for the Singularity?

    Geez, you’re disparaging the entire freaking animal kingdom with that one, for cripe sake.

    In case you’re wondering, you are not alone. I hate you, too. I wish some people would put a freaking bullet in their head already and spare us all their misery.

  • avatar

    I’ve written software for nearly 20 years now. At first I thought that maybe these autonomous car software developers were just that much better than I am. Then I realized a few things.
    1) they probably are better than I am
    2) they’re doing a great job
    3) they’re wrong and autonomous driving is not around the corner. Let’s just say it’s in the same category as fusion: perpetually 10 years away.

    Autonomous driving will only be successful if a few things happen.
    1) start with small, well-defined paths similar to mass transit routes.
    2) when roads are under construction, autonomous driving is disabled.
    3) drivers who opt to let the car drive assume the same liability as if they were driving (if the car kills someone, you are just as liable).

  • avatar

    I sincerely hope we never get autonomous vehicles.

    Driving requires thinking. Just because so many people do it every day does not mean that it is not pushing on the human envelope. We did not evolve to analyze and react to events happening at 100+ mph relative speeds. Until 100 years ago, I bet 99% of humanity had never even seen something travelling at 100 mph relative to themselves (unless they fell off a cliff). We have to do this constantly while driving a car. It must be considered a boundary-pushing human activity.

    Driving is doing reverse physics in real time to predict where things will be in the future. It is also watching multi-actor events unfolding and anticipating future interactions between them and how that might affect us. It is also reading messages left for us, understanding them, deciding which are germane to our task (driving), ignoring those that are not, and acting on the important ones. It is also modeling the thoughts of the actors in other moving vehicles or otherwise moving in the area and predicting how they may react to you and the rest of the environment, then developing strategies to handle likely events. And developing contingency strategies for everything, since shit happens.

    At least it is if you’re doing it right. And a thousand other things I’m sure I’ve missed.

    Developing cars that drive better than us is a big step towards developing machines that think better than us. Or at least think as well as us, and react much better. Still . . . a big step.

    Thinking is the one and only thing that humans do better than anything else on earth, and as far as we know, in the universe.

    Why are so many people so eager to give up that hegemony? And worse . . . force its abandonment on the entire species?

    We don’t have claws, or sharp teeth, or big, strong muscles. We can’t run fast, or fly, or see well, or hear well, or swim worth a darn. We die easily of heat and cold and wetness and dryness. But we think like mutha f’ers. Best on the planet, no arguments.

    We should not be so quick to give that up.

    If I were a shark, I would not spend my time trying to invent the Megalodon. Even if it parked itself.

    Every other — EVERY OTHER — previous technological revolution was us inventing ways to better do stuff that we sucked at. Fine. Using our advantage to overcome our weakness. We conquered the planet thus.

    Now we’re trying to invent something that steals our thunder. Our only, puny thunder. Outdoes us at the ONLY thing we’re really good at. Our only advantage. Why are so many people so eager for this? Seriously, I really don’t understand it. It is heading down a path we will regret, eventually. For what . . . an extra 45 minutes per day to scan the webz?

    That’s why it’s getting more and more annoying hearing all the azzhats keep bleating that “sure we’ll lose some jobs, but there’ll be NEW jobs to replace them . . . there always has!!! :-)))”

    We were never a shark inventing the Megalodon before.

    All those drivers that lose out to autonomous vehicles may as well just report to their local euthanasia booth, because they’ve now been outdone, by technology, at the ONE thing they could always in the past do better than any technology. Thinking. Piloting a vehicle == thinking == man’s one and only advantage in the universe.

    Won’t it be great to put 5 million people out of a job in the first wave of PERMANENT losses to technology! But SELF-DRIVING CARZ!!! WOOO!!! I can Uber from my downtown loft to the trendy uptown whatever without seeing a single deplorable person! Yea, me!

    The self loathing new ‘man’ welcomes his robot overlords. He believes they will be his benefactors.

  • avatar

    It’s either to be I, ROBOT or WITH FOLDED HANDS, I think. Personally, I think autonomous cars should not be available to those who want them the most. Note, I didn’t say ‘need’. There are already too many lazy, inattentive drivers as it is, so a tool that allows them to be more lazy and inattentive is even more dangerous. As it is now, the lackwits have to stay somewhat engaged – if they think they don’t have to, what happens when the system malfunctions? Or can’t resolve a problem, as the article points out? I work with a CNC machine that drills and mills wonderfully when programmed properly (and monitored) … but we still have a big red button to push if something goes wrong. It was fun listening to a foreman claim that ‘we didn’t need to watch the machine’, then point out the big red button. How was I supposed to push it from across the shop? Psychically? SMH

  • avatar

    They say that autonomous vehicles can drive in close proximity to each other. You know what? What about having the people who all go in the same direction sit comfortably in an elongated mode of transportation, and have the cleverest type of driver (the human kind) sit at the steering wheel? Then have designated organizations run schedules with which these vehicles operate to provide this sort of passenger transportation.

  • avatar
    Master Baiter

    I agree with the premise that AVs are destined for failure unless infrastructure is designed specifically for them.

    Alternatively, I would rather see high speed lanes built for real drivers where a special class of license and vehicle is required for access. No semi trucks, no overloaded gardening vehicles, and no moms off the boat from Shanghai who got their license yesterday. Perhaps such lanes could be autonomous, but they should be HIGH SPEED, like 100 MPH or better.

    • 0 avatar

      Special infrastructure isn’t enough. How do you keep out deer, moose & squirrel, and their friends? What about flash floods and snow squalls? And we all know how great a job your local highway department would do maintaining that sophisticated road infrastructure. Infrastructure improvements would help, but we still need AI and sensor improvements. The good news is that there is a lot of progress. It’s not always publicized.

  • avatar

    Nicely written, and given that we are the B&B, preaching to the choir. I welcome an “autonomous driver” to back me up, but I won’t play backup to an autonomous driver.

    Vehicle-to-vehicle and vehicle-to-infrastructure communication is the way to go. Those who prefer to drive should be able to use any road, but those who want the autonomous experience need to be confined to major commuter corridors with the appropriate infrastructure and tech.

  • avatar

    As I approach retirement, I want a car that will keep me from making those old-folk mistakes: jamming on the gas and crashing into buildings. Keep me in my lane. Avoid rear-ending people. Make it easy to work my phone while driving. Such tech could keep me in the driver’s seat for many more years. Otherwise I need Uber to come to my rural town.

  • avatar

    Autonomous cars are not just looking at the wrong problem, they’re the answer to a non-problem. You don’t want to drive? Take a bus or trolley. Want to go from your front door to your destination? Call a cab. People who want to travel but not drive have enough other options that don’t impact those of us who DO like to drive.

    You want fewer accidents? Insist on better driver training, regular road testing, have all vehicle controls clearly marked and logically arranged in front of the driver, and stiffer penalties for actions that cause the most accidents.

    • 0 avatar

      “People who want to travel but not drive have enough other options”

      But none of the existing options takes control away from hormone/substance/age/exhaustion addled meatbags, including the vehicles encountered by a cab-riding refusenik.

    • 0 avatar

      “People who want to travel but not drive have enough other options that don’t impact those of us who DO like to drive.”
      — No, they don’t. People who want to travel but not drive have very limited and very restrictive options. Without a personal vehicle, if you live outside of an urban center AT ALL, you’re stuck walking or riding a bike, sometimes for miles. Calling a taxi might be an option, but at today’s rates it would eat your entire income just going to and from work. The nearest bus stop for me is over a half-hour’s walk away, at which point I’d have to ride it thirty minutes out of my way to reach the next transfer to a different bus or train. The nearest train station is two hours’ walk away in the proper direction to go to my nearest city. That other station from the bus? It goes to a completely different city unless I’m lucky enough to catch an actual Amtrak train (which normally doesn’t stop there). My town? No station; I’m almost dead center between two major metropolitan areas.

      But let’s ignore my case for a moment and talk about another one. A widowed woman with a child lives in a farmhouse more than five miles from the closest town with proper grocery and other shopping. Her husband did all the driving and took her to work every day, while the child has the benefit of a school bus to attend school. She doesn’t drive and is now dependent on other members of the family and/or one co-worker to get to work on a daily basis. What, really, are her options? There are people who NEED to travel, whether or not they want to drive, and by no means are there “…enough other options that don’t impact those of us who do like to drive.” This person would strongly benefit from a fully autonomous vehicle capable of taking her where she wants/needs to go, WHEN she wants/needs to go.

      You, sir, are an example of that self-centered mindset I point out below.

      • 0 avatar

        @vulpine: Google ‘devil’s advocate’.

        • 0 avatar


          Bluntly said, some humans don’t want to give up control of any aspect of their lives while others would far rather let automation take their place at the controls. It’s that simple.

          But when do we get to the point where automation takes over all aspects of our lives? How do we make money? How do we afford the things we want? Again, simple: Use your brains instead of your hands.

    • 0 avatar

      • You want fewer accidents? Insist on better driver training,
      — As though that’s done any good. Some of the states with the best driver training have a reputation for the worst drivers. New Jersey and California both have remarkably restrictive training, yet both have reputations for the worst drivers.

      • regular road testing,
      — Worthless, or nearly so. Assuming they can pass a written test each time renewal comes up, drivers will tend to be on their best behavior at such tests. From what I saw while my wife was learning to drive, there are more failures due to operators not knowing their cars than due to any failure of procedure. Not knowing where the hazard flasher switch is, or failing to ensure all lights are operational, caused the drivers of four cars out of five to fail the test. Now, that wasn’t four out of five of all the cars testing; that was among the cars that DID fail the day I watched.

      •have all vehicle controls clearly marked and logically arranged in front of the driver,
      — In nearly every case, controls are clearly marked and logically arranged; that doesn’t mean they all have to be identical, which would destroy the concept of brand identification. As I pointed out above, it’s up to operators to know their car before they drive it, and they are tested on it at least in order to earn their original license.

      •and stiffer penalties for actions that cause the most accidents.
      — A waste of time and money. All you do is ensure there are even more bad drivers and criminals on the road. “CRIME AND PUNISHMENT — DISTRACTED DRIVING”

      Better to make the action physically more difficult than to punish those who perform it. It is human nature to push limits, no matter the rule. It’s like the saying, “Locking your doors only keeps the honest people honest.”

      Simply put, punishing the criminals won’t stop them; they’ll just keep on doing it until you make it impossible for them to do it. Not by fines or even by pulling their license, but by making it physically impossible through incarceration or by taking complete control of the vehicle away from them.

      • 0 avatar

        Not all people push limits to the same degree. I think it would be in the interest of humanity to have policies that encourage those honest people to exist and flourish in a free society, while eliminating the worst of humankind.

        Likewise, your right to drive should be determined by your ability to do so without harming others.

        • 0 avatar

          You mean like 45 years of driving with no injuries in any accident? I had four accidents in my early years, with only damage to the cars and no one hurt in any of them. I haven’t had one since, mostly because I’m much more aware of what’s going on around me. I got close a couple of times, but no contact.

          • 0 avatar

            I mean exactly what I said.

            However, unlike most, I don’t look at minor and major collisions in the same way. A minor collision requires minor levels of negligence, while a major collision requires major levels of negligence. That’s why I believe that one’s ability to drive and the amount he or she pays for his or her poor decisions should be proportional to the amount of injury caused and damage done, respectively, and nothing else.

            I don’t actually think anybody should lose their ability to drive over simple property damage, as long as they can cover those costs in the form of increased insurance premiums. $100,000 in damage could be $5,000 a year for the next 20 years for the right to continue driving; $1,000 would only be $50 a year. As it is, both situations are often viewed equally in the eyes of the system.

            If we only consider certain forms of negligence to be unacceptable, and treat the results of negligently operating a brodozer at highway speeds as equal to those of negligently operating a subcompact car at parking-lot speeds, then seriously harming or even killing others becomes acceptable, and people are not held responsible for all the terrible decisions that lead to the predictable consequences.


            It’s a system where the only logical outcome is an arms race of larger and taller vehicles. It doesn’t matter how much harm and damage you cause; only whether it can be proven in court that your chosen forms of negligence include one that is currently demonized. And if it is, no big deal. Just pay the fines and you’ll be tailgating and speeding while sleep deprived in your brodozer again in no time.
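The proportional-liability idea above boils down to simple amortization: the yearly surcharge is the damage caused divided by a payback period. A minimal sketch of that arithmetic (the function name is mine, and the 20-year horizon and dollar figures are just the commenter's illustration):

```python
def annual_surcharge(damage: float, payback_years: int = 20) -> float:
    """Spread the cost of damage caused across a fixed payback period."""
    return damage / payback_years

# The commenter's two examples:
print(annual_surcharge(100_000))  # 5000.0 per year for 20 years
print(annual_surcharge(1_000))    # 50.0 per year
```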

  • avatar

    Autopilot using HW1 is basically “super cruise control”; for people who understand the conditions it works well in, it is invaluable.

    Last summer I went through a thunderstorm on the highway in which I could barely see the road. Autopilot worked fine, as the camera could see the lane lines better than I could and the radar could see the vehicles through the rain and spray.

  • avatar
    Big Al From 'Murica

    The correct question is “Da Fuk?”

  • avatar
    Shortest Circuit

    Again, as long as this Autopilot system is dumber than a roadrunner following a yellow line that the coyote painted against a wall, Tesla and their ilk can keep these technologies in the laboratories as far as I am concerned.

  • avatar

    I’ve long thought the “Right” way to do it is have “autonomous driving lanes” or heck, 120 MPH Highway bypasses for autonomous cars only. These lanes could be funded by tolls, and could have V2I communication and no manually driven vehicles.

    In other words, it would almost be like train tracks in a way… maybe a two-lane, no-surprises, V2I-lined road.

    How much would you pay to access that? If I can get 900 miles in 7-8 hours, and have the added benefit of not even having to pay attention or touch the wheel, would I pay $500? $300? $250?

    It would be all the benefits of Plane, Train, and Car….

    Create a few explicit networks of this infrastructure leveraging existing highways and roadways.

    • 0 avatar

      At 120 mph you’d be stopping to recharge every 90 minutes. Leave it at a reasonable speed and save the low-flying aircraft for the manually-guided lanes.
