May 24, 2018

The Volvo XC90 that hit Elaine Herzberg on a darkened Tempe, Arizona street was traveling at 43 mph at the time of impact. Guided by a combination of cameras, radar sensors, and lidar designed to cut through the gloom, the two-ton SUV “saw” the victim 6 seconds before impact, according to a preliminary report released by the National Transportation Safety Board.

The Volvo, operated by Uber Technologies, didn’t begin braking until less than a second after impact. And it wasn’t autonomous software that ended up sending pressure to the front and rear pistons. A human did that.

It isn’t the NTSB’s job to assign blame in its preliminary report. The agency simply wants to nail down the facts of what occurred in the lead-up to, and aftermath of, the fatal March 18th collision. Here are some key findings:

Herzberg entered the road 360 feet south of a marked crosswalk, dressed in dark clothing, in an area with no direct illumination. She didn’t turn her face towards the vehicle until the last moment, and the bicycle she was walking across the road had no side reflectors. A toxicology test turned up methamphetamine and marijuana use.

The vehicle blended Volvo’s own collision avoidance system (which includes automatic emergency braking) with Uber’s forward and side-facing cameras, radar, lidar, and navigation sensors. However, “The Volvo functions are disabled only when the test vehicle is operated in computer control mode,” the NTSB noted.

From the NTSB:

The report states data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

In the report the NTSB said the self-driving system data showed the vehicle operator engaged the steering wheel less than a second before impact and began braking less than a second after impact. The vehicle operator said in an NTSB interview that she had been monitoring the self-driving interface and that while her personal and business phones were in the vehicle neither were in use until after the crash.

Three sentences here stand out: “According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.”

We’ve heard reports that Uber was concerned about the high number of “false positives” — objects in the vehicle’s path that don’t actually require sudden braking, such as a wind-blown plastic bag. If the vehicle was indeed programmed to initially ignore many objects on the roadway, why wouldn’t there at least be an alert sent to the human driver whose foot is hovering over the brake pedal? A simple rapid beep wouldn’t lead to erratic vehicle behavior, but it would increase safety.
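To make that reported logic concrete, here is a minimal sketch of the decision flow the NTSB describes, with the missing alert added. Every name below is hypothetical, and the thresholds come from the report’s narrative; this is illustrative pseudologic, not Uber’s actual code.

```python
# A minimal sketch (all names hypothetical) of the decision flow the NTSB
# excerpt describes, plus the operator alert argued for above.
from dataclasses import dataclass


@dataclass
class TrackedObject:
    classification: str       # e.g. "unknown", "vehicle", "bicycle"
    seconds_to_impact: float  # time until the paths converge


def sound_alert() -> None:
    """Hypothetical stand-in for a rapid cabin beep."""
    print("BEEP BEEP BEEP")


def plan_response(obj: TrackedObject, computer_control: bool) -> str:
    """Action for an object on a collision course, per the NTSB's description."""
    if obj.seconds_to_impact > 1.3:
        # False-positive filtering: track and reclassify, but take no action.
        return "monitor"
    if computer_control:
        # Per the report: emergency braking is disabled under computer
        # control and no alarm is raised; the human must notice and act.
        return "rely_on_operator"
    # Volvo's stock AEB functions only outside computer-control mode.
    return "emergency_brake"


def plan_response_with_alert(obj: TrackedObject, computer_control: bool) -> str:
    """The fix argued for above: identical logic, but warn the human early."""
    if obj.seconds_to_impact <= 6.0:
        sound_alert()  # a beep cannot cause erratic vehicle behavior
    return plan_response(obj, computer_control)
```

Under this reconstruction, warning the operator is one extra branch that never touches the vehicle’s motion control, which is the whole point of the “simple rapid beep” argument.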

Instead, we have a vehicle operator tasked with avoiding sudden emergencies whose eyes are monitoring a bright multimedia screen, not the darkened road. And no warning to alert the driver to impending danger.

An initial warning could have been sent out six seconds before impact. True, it’s possible the driver wouldn’t have been able to see Herzberg given the factors mentioned earlier, but at least the driver would be looking for something. Brake pressure might even have been applied prior to the vehicle’s conclusion that an emergency maneuver was necessary.
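Some back-of-the-envelope arithmetic shows how much margin six seconds buys. The reaction time and braking deceleration below are textbook assumptions, not figures from the NTSB report:

```python
# Rough stopping-distance check for a 6-second warning at 43 mph.
# Assumed: ~1.5 s driver reaction time and ~0.7 g braking, which are
# textbook estimates, not values from the NTSB report.
MPH_TO_FPS = 5280 / 3600                     # 1 mph ≈ 1.467 ft/s
G = 32.2                                     # gravity, ft/s^2

speed = 43 * MPH_TO_FPS                      # ≈ 63 ft/s
warning_distance = speed * 6.0               # ≈ 378 ft covered in 6 s
reaction_distance = speed * 1.5              # ≈ 95 ft before brakes engage
braking_distance = speed**2 / (2 * 0.7 * G)  # ≈ 88 ft to a full stop

print(f"road covered in 6 s:   {warning_distance:.0f} ft")
print(f"react + brake to stop: {reaction_distance + braking_distance:.0f} ft")
```

Under those assumptions, an alerted driver needs roughly 180 feet to react and stop completely, while the six-second window spans nearly 380 feet of roadway. That margin is why the absence of any alert looms so large.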

A video released by Tempe police shows the driver staring at the screen for four or five seconds immediately prior to the collision, then looking up and reacting to the sight of Herzberg in the car’s path, presumably less than a second before impact.

In the wake of the crash, Uber shut down on-road testing of its autonomous vehicle fleet, and just yesterday decided to pull out of Arizona altogether. The company recently hired a safety official to scrutinize its program.

“We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture,” Uber said earlier this month. “Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

The NTSB’s investigation into the crash continues.

[Image: Uber Technologies, via Tempe Police Department]


42 Comments on “NTSB Releases Preliminary Report on Fatal Uber Crash; Vehicle ‘Saw’ Victim 6 Seconds Before Impact...”


  • FreedMike

    I seem to recall someone saying that the main impetus for self-driving cars would come from insurance companies?

    Well, now one wonders how these companies will feel about the tech now that they have a zillion-dollar payout on their hands from just one mishap. Personally, if I were an insurer, I’d run from this risk as far and as fast as I could.

    And one wonders how consumers will feel about the tech now that they know it’s full of potentially fatal bugs.

    The future isn’t as autonomous as some conspiracy-minded folks would like you to believe.

    • jonsey

      Tens of thousands of people die every year in accidents caused by a meat bag behind the wheel.

      Insurance companies are in the business of managing risk. The bar autonomous cars have to meet is not perfection. It’s just better than human. If the insurance company has to pay out on one accident caused by an autonomous car versus ten human caused ones, I think they’ll choose autonomous every day of the week, and the rates will reflect that.

      Don’t fall into the trap of thinking that what the media chooses to cover is an accurate representation of what’s happening in the real world. Car accidents never get beyond the local news unless they have a hot-button name like Uber, Tesla, or Google attached.

      • TwoBelugas

        The difference is that with a human driver, today’s policy puts a cap, I believe, on the maximum payout, and the other party’s lawyers can go after the human driver in civil court if they think it’s worth their while.

        With autonomous cars, the human driver is replaced by the assets of a corporation of some sort, which has deeper pockets and a corporate-level insurance policy. Given that, the other party will ask for higher amounts, knowing a company is behind the driver/AI. Imagine how lawyers salivate today when their clients get hit by a driver in a company-owned car.

      • ToddAtlasF1

        Let’s say the US has 150 million drivers. 25,000 of them may cause a fatality in any given year. If you’re an insurer, you have a one in six thousand chance of being involved in a death settlement when you underwrite a driver.

        Now let’s say there comes a time when all cars are operated by six competing autonomous vehicle companies. Maybe only six thousand people will die due to the programming of said vehicles. Nobody serious thinks the improvement in safety will actually be that great, but the insurance underwriters for the AV companies will be looking at approximately a 100% chance of being on the hook for a thousand fatalities when they decide whether or not to write a policy for an AV maker. The juries full of the sort of herd animals who think it is fine to hand over control of their vehicles to a bunch of H1Bs who hate them will be looking at a corporation like a piñata full of wealth that they’ve been conditioned to redistribute their entire lives. Good luck with that.

    • brn

      Insurance rates on Uber-based autonomous vehicles will be very high. Others are doing a pretty good job of performing well for their autonomy level. They will have better insurance rates.

  • Kalvin Knox

    Autonomous cars have no place on American roads.

  • MrIcky

    As someone in the insurance industry: risk is just part of the game; it just means you have to charge more. If you weren’t willing to insure anything risky, you wouldn’t insure driving after midnight, logging trucks, forklifts, lumber mills, etc. Hell, my company insures police departments; if you can insure that, you can insure anything.

    IMHO, though, thoughtful people have been saying autonomous vehicles are a decade away; a lot of companies are trying to rush it to be first. It’s clearly a when, not an if, at this point.

    • FreedMike

      Right, but risk goes hand in hand with consumer demand. If insurers have to price in excessive risk due to autonomy, then the cost of insurance goes up. If it goes up enough, then consumers will stop buying the autonomous cars. It wouldn’t be the first time – in the ’70s, demand for convertibles and muscle cars dried up due to insurance rates.

      • MrIcky

        There’s a world of difference though between pushing for cutting edge safety technology (and regardless of this accident, the insurance companies view this as safety technology) and convertibles, arguably the opposite of safety technology.

        • FreedMike

          True, but in the end, if autonomous systems come with excessive risk, it’ll all come down to dollars and cents.

          This one case is going to be a massive payout for Uber – clearly they’re at fault, even if you leave out the whole “self-driving car” bit. You had a driver who had both hands off the wheel and wasn’t looking at the road at the time of the incident. That’s blatant negligence, and those kinds of cases tend to involve big payouts. Wait until some guy Autopilots his Tesla into a school bus, or into a van full of senior citizens on their way to the Indian reservation casino. Better call Saul!

          And that’s how most of these cases are going to go down – the autonomous system fails or didn’t work properly, and the driver was paying absolutely no attention to driving, because the car was autonomous. It’s all negligence.

          Now, multiply this by hundreds of thousands of autonomous vehicles roaming around, and the numbers get huge.

          Until the tech’s perfected, I see insurers running as far and as fast as they can from this – or jacking up rates so high that the people employing the tech run away from it. The result’s the same either way – people aren’t going to want to buy this tech if it’s not ready.

          • redrum

            “This one case is going to be a massive payout for Uber”

            Uber reached a settlement with the family days after it happened: https://www.npr.org/sections/thetwo-way/2018/03/29/597850303/uber-reaches-settlement-with-family-of-arizona-woman-killed-by-driverless-car

            The amount wasn’t disclosed, but considering the victim was homeless and the settlement came so quickly, I really doubt it was anything groundbreaking.

          • FreedMike

            Bet the family wishes they’d held out.

      • Sub-600

        Insurance companies pay more attention to your zip code and credit score than to the number of cylinders in your car, especially as you age. My R/T would cost me a fortune if I were 22 years old; at 55, my rates are decent.

        • FreedMike

          True, but back in the day, the people buying convertibles and muscle cars were younger men, and they’re more susceptible to higher rates. These days, muscle cars are for old farts like us – back in the day, we’d have been driving some four door boat, or a luxury coupe.

  • sirwired

    This case raises fascinating questions about legal liability. (Which will never be answered, since the family settled a couple of days after the accident.)

    Given the victim’s location and circumstances, it would have been very difficult for a human driver to spot her… dark clothes, night, jaywalking, no reflectors on the bike. But it should have been very easy for the self-driving system to do so. (Note that the safety driver, who was nominally in control of the vehicle, was not charged in the accident.)

    So, would Uber have liability here? Certainly the system SHOULD have done better, but does it have legal liability if it fails to do so? What’s the standard? Is it “does at least as good as a human”? Or “does as well as its spec sheet suggests it should”?

    • FreedMike

      You kidding? We have the autonomous systems that weren’t working, AND video footage of a driver who had both hands off the wheel and wasn’t even looking at the road.

      I’m no lawyer, but I’d have to think that looks real, real bad for Uber’s case.

      • sirwired

        No, I’m not kidding. A 100%-attentive human driver without assistance systems would have had trouble avoiding this accident due to the conditions.

        So, again, the question is what legal standard should be used. “As good as a human”? or “As good as it should be capable of”?

        • MrIcky

          sirwired- it’s a good question. The initial investigation found it would have been tough for a human to see/avoid this accident.

          With the autonomous system, though, it’s almost reckless disregard, because the risk was known or obvious to its sensors and it proceeded anyway.

        • FreedMike

          I think the key phrase here is “100% attentive.”

          Even if the autonomous systems hadn’t failed, clearly the driver wasn’t paying one bit of attention. She didn’t even have her hands on the wheel.

          Judgment for plaintiff in the amount of 15 gabillion dollars.

          • sirwired

            But would the safety driver paying attention have prevented the accident? The answer so far appears to be “no”.

          • Russycle

            Of course the driver could have prevented it. Headlights typically illuminate 100 yards of roadway; at 43 mph, that’s about 5 seconds to see her and react. And there was a street lamp near where she crossed, adding more illumination.

            I don’t really blame the driver; being 100% attentive alone at night in a self-driving vehicle for several hours is pretty much impossible. Uber’s decision to disable emergency braking is just nuts. Even with the braking disabled, why didn’t it at least change lanes?

          • FreedMike

            “But would the safety driver paying attention have prevented the accident?”

            As with all accidents, it’s a what-if that may never be answered fully. But checking out behind the wheel makes the driver 100% culpable in my book. More importantly, I think a jury would agree.

        • Zipster

          I despise people who wear dark clothes at night, and likewise bicyclists without lights. However obscure the victim may have been, it’s a virtual certainty that a normal driver would have eventually seen her and taken some evasive action, which could have at least mitigated her injuries.

          This is another example of how arrogant UBER is; it’s fitting that they should pay an immense sum.

        • Malforus

          Have you seen the unmodified video or the originally released one? The first video released had the brightness turned way down.

          https://arstechnica.com/cars/2018/03/police-chief-said-uber-victim-came-from-the-shadows-dont-believe-it/

          If you watched what the police released, you would think you were playing Silent Hill.

          It’s not an accurate representation of the visibility.

  • Stanley Steamer

    So the system saw her, and decided to hit her. Sounds like premeditated murder, even if the premeditation lasted only 6 seconds. That’s a long time for a fast computer.

    •

      If you read carefully, you’ll see the system decided it didn’t know what to do, and that emergency braking was required.

      The problem is that there’s no notice to the driver that the system has concluded “f*ck it.”

      There are a number of issues with how this Uber system behaves, starting with a lack of pre-braking and warnings when an unidentified/questionable object is seen by the system.

  • redmondjp

    Wow! This is really bad.

    Just think – that infrared Night Vision option that GM offered years ago, combined with a semi-attentive driver, would have performed better than all of this gawsh-darn new tech. Who knew?

  • Russycle

    ” A toxicology test turned up methamphetamine and marijuana use.”
    Meth can be detected up to 72 hours after use, marijuana much longer. Without knowing what levels were detected this doesn’t tell us much. Regardless, stepping in front of a fast moving vehicle and assuming they’ll avoid you is never a great plan.


  • stingray65

    Insurance companies won’t pay out for accidents like this – they will either claim the policy is invalidated by the driver’s negligence, and/or they will sue Uber, Volvo, or the technology providers for not accurately describing the level of accuracy, and consequent risk, of the self-driving system.

  • incautious

    6 seconds? What an F’ing joke. It’s time we stand up to the siliCON tech BS. In driver’s ed we were taught the 2-second rule; I guess computers need like two minutes. Hope these things can brake for fire trucks stopped at a light. Oh wait, that didn’t work out for the techies either.

  • pwrwrench

    “stepping in front of a fast moving vehicle and assuming they’ll avoid you is never a great plan.”

    I could not agree more. However, what I have observed is people stepping into the path of vehicles without even looking. Apparently they are assuming that all drivers are alert and observant. I see this mostly in parking lots at shopping centers, but also on city streets. I stop for pedestrians, but have had other drivers accelerate around me, nearly colliding with those walking across.
    I keep hoping I never see someone get knocked down.

  • redrum

    I see this as an indictment of Uber more than of self-driving tech. They clearly took a shortcut in bringing their version of autonomous driving to market by disabling emergency braking instead of, you know, actually making it work correctly. Seems typical of the culture that their former CEO Travis Kalanick wrought — i.e. “we’re disruptors, we don’t play by the rules”.

    • Sub-600

      Disruptors were used by far more races than phasers were; I wonder why? I doubt the Federation would pass up a quick buck by not selling them outside the fold. Phasers seemed to be more accurate than disruptors, too.

      • FreedMike

        Phasers are better at taking down shields. Disruptors blow s**t up real good, though. Kind of like the difference between a smaller smart bomb and a much bigger “dumb” one. Starfleet is all high-tech, dontcha know.

  • conundrum

    An unusual case where the article itself is far clearer and to the point than any of the comments.

    Uber was worried about false positives and had turned off both emergency braking and driver alarm. End of story. One dead pedestrian.

    Was the driver informed of the disconnection between sensor detection and braking action? If not, it’s hardly their fault. They were probably brainwashed about the tech and assured their role was merely high-level supervision, I’ll bet.

    Let this Uber version loose on a normal crowded street with vehicles and pedestrians all over the place, and what would it do? Plow on regardless? Apparently so. It did so here.

    • redrum

      I see this all the time in software projects when there is a hard deadline but developers are unable to get a functionality working correctly — instead of opening a defect (which might get marked as “critical” and prevent the product from being released until it’s fixed) the requirement for that functionality is simply “descoped” and tentatively re-scheduled for a future release (which often means never).

      I’m really curious to know when the emergency braking functionality was disabled. Has it been off since day 1 of Uber’s autonomous testing (in which case I’m surprised it took this long to hit somebody), or was it disabled after some testing had already been conducted, when they decided no emergency braking was preferable to abrupt/unneeded braking (which seems like an incredibly hacky and irresponsible workaround)?

      In any case it’s really mind-boggling that Uber would think it’s OK to test autonomous cars without such a fundamental feature, especially when the deadline was completely of their own making. Such a shady company.

  • Wheatridger

    In this case you had a car’s sophisticated, multi-sensor systems detecting a hazard ahead, but no way to alert the driver/monitor and no way to apply the brakes itself. So what good is it doing? Functionally, this sounds like the Uber vehicle had nothing but the typical lane-following and speed control operating, plus a presumptive illusion of infallibility.

  •

    Many mentions here – and in prior articles – about the lighting in the area and the pedestrian’s dark clothes. What is being tested, the human safety driver or the vehicle? If it’s the safety driver, was she told to monitor the vehicle, the road, or both? That changes the way the safety driver behaved in the situation. If it’s the vehicle, the lighting of the area and the color of the pedestrian’s clothes are moot. The original video and the later video posted by a driver in the same area (at the same hour of the evening, with a camera set to better represent the actual conditions) show two very different views of the area in question. Again, if we’re testing the vehicle, the lighting conditions in the area are largely moot. Its systems do not primarily rely on visible light for their correct operation.

    My guess, based on how this is being reported, is that the safety driver was told to monitor the vehicle’s “read outs” primarily and the actual road secondarily. That would be the mindset of someone testing a vehicle, in my mind: observe the vehicle first, as it will alert you to when you need to shift your attention to the road being traveled. I believe the human driver, had she been monitoring the road primarily or received inputs from the vehicle’s systems, could have avoided the accident or, at the very least, would have reduced its severity.
