February 2, 2017

[Image: FCA/Waymo Chrysler Pacifica]

According to the California Department of Motor Vehicles’ autonomous vehicle disengagement report, self-driving cars need less and less human intervention.

Waymo, Google’s autonomous driving project, is leading the pack in this regard. The report shows that the number of times test drivers had to take over in Waymo’s vehicles dropped significantly, from 0.80 disengagements per 1,000 miles in 2015 to 0.20 per 1,000 miles in 2016.

Now, this report only counts disengagements due to a failure of the autonomous technology or an immediate safety threat that forces the driver to take the wheel. Still, these numbers are staggering. For the 635,867.9 autonomous miles Waymo has driven, there were only 124 qualifying disengagements.

That is, on average, roughly one human override for every 5,000 miles. In theory, a Waymo vehicle could take you from New York City to San Francisco and back and ask you to intervene only once.
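
A quick back-of-the-envelope check of those figures, using only the totals reported above (the New York-to-San Francisco round-trip distance is an added approximation for illustration):

```python
# Back-of-the-envelope check of the DMV report figures quoted above.
miles_driven = 635_867.9      # Waymo's autonomous miles for the reporting period
disengagements = 124          # qualifying disengagements over those miles

rate_per_1000_miles = disengagements / (miles_driven / 1000)
miles_per_disengagement = miles_driven / disengagements

# NYC-to-SF round trip, assumed at roughly 2,900 highway miles each way (illustrative only).
round_trip_miles = 2 * 2_900

print(f"{rate_per_1000_miles:.2f} disengagements per 1,000 miles")       # ~0.20
print(f"one disengagement every {miles_per_disengagement:,.0f} miles")   # ~5,128
print(f"expected interventions on the round trip: "
      f"{round_trip_miles / miles_per_disengagement:.1f}")               # ~1.1
```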

The miles weren’t just traveled on private courses or streets, either. The majority of the testing was done on public roads, where there were plenty of objects for Waymo’s tricked-out Chrysler Pacifica hybrid minivans to avoid.

Proving that autonomous vehicles can be safely operated on public roads is key to landing legislative approval for the technology’s mainstream use.

[Image: Fiat Chrysler Automobiles]

28 Comments on “Undistracted Driver: Waymo’s Self-driving Minivans are Becoming Eerily Competent...”


  • VoGo

    Carnegie Mellon just announced that their AI soundly beat 4 experts at poker. Granted, Texas Hold ’em is not the same as driving, but the trend is clear: AVs will soon be better than the average driver. They’re already better than drunks.

    • Willyam

      Quietly, Google Translate invented its own “bridge” language without being told to. Strides are being made at incredible rates now:

      https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/

    • wumpus

      Poker bots have been useful for cheating for a long time. The big news was that a computer recently won at Go.

      Have to update the xkcd: https://xkcd.com/1002/

  • Ihatejalops

    On California roads with no weather and where people obey crosswalks, yes. Have it navigate poor lane lines, weather and night, then we’ll talk.

    This technology will take us to a not-very-good place.

    • SCE to AUX

      Exactly. Did their test include snowy roads, potholes, construction workers with flags, and burned out traffic lights, at night? Because I can encounter all that within a mile drive from my house.

      • Kruser

        It doesn’t have to be perfect in all conditions to be good enough for an Uber-like service that doesn’t leave a bounded area. I’m also pretty sure snow is on their checklist. Unofficially, the updated Tesla Autopilot can handle some snowy conditions.
        https://electrek.co/2016/12/28/tesla-autopilot-snow/

      • orenwolf

        The mistake here is believing that AVs can *only* operate once *all* of these conditions can be handled. Of course, that’s not how it will play out.

        As with the several vehicles that can already drive you on highways, the permitted operating conditions will expand to city streets in clear weather (when the vehicle can accurately detect its surroundings), and go from there.

        These vehicles are, of course, able to detect when they lose sight of the road or encounter inclement weather, and will disable their autonomous functions or refuse to enable them, respectively.

        This is sensible and reflects the article’s point that the rate of learning is increasing. Presumably, this rate of progress will only increase as more and more vehicles can provide real-world data to work with.

        Everyone’s waiting for the announcement that “There is now a car that will drive you everywhere”, and completely forgetting that we’ll get announcements like “Can drive you anywhere in good weather in daytime”, “safe for night driving anywhere”, “can safely detect road traction better than a human” etc. etc. first. It will be a gradual evolution, incrementally rolled out.

        And if that means my grandparents are only a year or two away from a vehicle that can take them to the market during the day in clear weather three seasons of the year, then all the better.

      • redrum

        Computers don’t “see” like humans, though — autonomous cars are already better than humans in low/no light situations thanks to night vision/sonar/lidar technology. Handling poor road conditions is also something they are used to doing better than most humans (e.g. traction control, stability control).

        I do think temporary traffic changes (either human flagger or temporary signage) are a challenge that will take longer to hash out.

        • orenwolf

          “Computers don’t “see” like humans, though — autonomous cars are already better than humans in low/no light situations thanks to night vision/sonar/lidar technology. Handling poor road conditions is also something they are used to doing better than most humans (e.g. traction control, stability control).”

          Very much this. The car is going to know better than you do whether there are traction issues, *exactly* how fast it should travel so as not to overrun its headlights (and forget headlights; its other sensors see even further), and exactly how hard it can turn/brake/etc. to get out of danger.

          These systems will, incrementally, continue to get better and better.

      • stuki

        The real gotcha will be how they deal with human drivers specifically aiming to take advantage of the intellectually less adaptive robots in cut-and-thrust traffic. Real-world traffic navigation consists of an untold number of little games of chicken, played out whenever two or more drivers are vying for the same spot.

        Humans evolved specifically to recognize who you likely can scare off with some strutting and bluffing, and who you would best give way to. And any given human’s behavior is real time adaptive: Even the most timid old lady will eventually get to the “I just don’t give a eff anymore” point, after having been cut off enough times. And, the instant she does, those who would normally cut her off, will recognize the cues, and let her in this time. That’s a fiendishly difficult heuristic to emulate satisfactorily in a robot.

    • redmondjp

      You forgot bicyclists who routinely blow stop signs and red lights, and also sometimes are coming at you on the sidewalk.

      More cyclists will die with autonomous cars on the road.

      • orenwolf

        Patently untrue. Autonomous cars can see cyclists around corners, behind vehicles, even in the dark if they’re crazy enough to wear all black without reflectors. Just as with cars, AVs can see them and react far more quickly than a human could ever hope to, because they can see them from farther away.

        It’s far easier for an autonomous vehicle to slam on the brakes because a cyclist blew a stop sign at 10 km/h than it is for a car doing it at four times the speed.

        The likelihood of a pedestrian/biker being hit drops significantly with the help of radar and lidar to allow the car to ‘see’ them earlier.

      • redrum

        “More cyclists will die with autonomous cars on the road.”

        If you read any of the literature that’s out there on AVs, you’ll know that the exact opposite is true. Pedestrians and bicyclists will be MUCH safer with AVs on the road. Google’s test AVs have been rear-ended many times, which can at least partially be attributed to their (overly) cautious behavior around non-vehicles, as they see any nearby object/person as a potential hazard and will slow down and stop if there is any ambiguity as to whether they will jump into the vehicle’s path.

        This actually brings up an interesting quandary — since AVs are currently programmed to always yield to any potential road hazard, that basically means a human driven vehicle or pedestrian can “bully” their way in front of an AV at any time. Imagine ten years from now, assuming AVs are a significant percentage of cars on the road, you can bet aggressive human drivers will see no problem in cutting them off at will and even pedestrians may be lulled into a false sense of security that they can jaywalk with impunity.

        • orenwolf

          “This actually brings up an interesting quandary — since AVs are currently programmed to always yield to any potential road hazard, that basically means a human driven vehicle or pedestrian can “bully” their way in front of an AV at any time. Imagine ten years from now, AVs are a significant percentage of cars on the road, you can bet aggressive human drivers will see no problem in cutting them at will and even pedestrians may be lulled into a false sense of security that they can jaywalk with impunity.”

          Yep, like human drivers, AVs will need to adapt as driving culture changes. Thankfully the more of them there are on the road, the more data they’ll have about how to behave in given situations.

          My guess is, since AVs will undoubtedly have cameras everywhere, in the distant future where they are the majority of vehicles, traffic infringements by non-AVs will be automatically reported to law enforcement or similar (or if not automatically done, easily obtained as part of an investigation). That big brother component may deter the behaviour you’re suggesting.

  • thegamper

    No doubt autonomous vehicles will become safer overall than human drivers, but I think one issue that can never be overcome is that when the vehicle is doing the driving, the human “copilot” will not be paying attention. So when the vehicle requires disengagement and the human copilot is reading a book, sending a text, watching a movie, or taking a nap, they are sort of out of luck and never had a chance to avoid the probable collision and potentially resulting death.

    Wait a minute, human drivers already do this with no vehicle autonomy at all... I guess we would all be safer in autonomous vehicles... now!

    I hope I can program my Waymo minivan to hoon about town, though. Just think: when autonomy gets really, really good, you can program your ride to drift around every corner and powerslide into every parallel parking spot. It might not all be bad.

  • Arthur Dailey

    If movies and TV have taught us anything, isn’t it that the rise of AI will mean the end of humankind?

    This must be stopped!!!!

  • OldManPants

    Lord, speed the Day.

    Meatsacks should control nothing faster than muscle power.

  • turbo_awd

    600k miles is around 1 million km. In my ~30 years of driving, I’ve racked up maybe 500k km, and I’m sure I’ve had at least 50-60 “oh shit” moments...

  • JDM

    I drive 70 miles a day as a commuter, in the Seattle area. I see plenty of driverless cars on the road. The left seat occupants are looking at their phones, GPS, picking their noses and doing whatever their father or weird uncle taught them.

    Virtually every day, as I try to merge onto I-5, some jerk tries to move into the merging lane to take the next exit 1 mile ahead. Last week I had to go onto the shoulder to avoid being hit. I’m about ready to get an old pickup truck and stand my ground.

    I’ll take any robot driven car over these clueless drivers any day.

    • redmondjp

      I do the Eastside slog up and down 405 from Redmond to Renton every day and see the same thing as you. People use the onramps to pass people on the right!

      Back in the late 1990s, I was still driving a 1971 LTD that had accident damage on both the front and rear left corners. Merging onto the freeway, it was like Moses parting the Red Sea – a giant hole in traffic would magically open up as it was obvious that I had nothing to lose! I loved that effect, but the downside was cops constantly hassling me because I was driving a junker in a nice neighborhood.

  • markogts

    Would you step into a car driven by someone who, once every 5,000 miles (or even every 50,000), needs to have the wheel taken out of his hands? Do you really expect human attention to stay high through 100 hours of non-events and then jump in at exactly the required moment? This is exactly the mistake NASA experts have been warning about.
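
For reference, a rough sketch of where a figure like that 100 hours comes from, assuming an average speed of about 50 mph (the speed is an assumption for illustration; it isn’t in the DMV report or the comment):

```python
# Rough time between interventions, assuming ~50 mph average speed.
# The speed is an illustrative assumption; only the mileage figures come from the article.
miles_per_disengagement = 635_867.9 / 124   # ~5,128 miles, from the report totals above
avg_speed_mph = 50                          # assumed mixed city/highway average

hours_between_events = miles_per_disengagement / avg_speed_mph
print(f"about {hours_between_events:.0f} hours of uneventful driving per intervention")  # ~103
```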

