By Vlad Pop on January 17, 2017

FF 91 Reveal, Image: Seth Parks

After much anticipation, Faraday Future finally revealed its production car, the FF 91. The presentation introduced the FF 91 as “the smartest car you’ll ever drive” and described capabilities of advanced sensors, machine learning, and autonomous driving — all great buzzwords. We saw a live demonstration of the FF 91’s ability to drive itself with the “driverless valet” feature. The car successfully parked itself in a parking lot outside the reveal and we were told to “never worry about parking again.”

Except, I watched the rest of the reveal and I’m pretty worried.

Before you chalk this up to the dogpile, let me explain.

I’m not worried about the “driverless valet” feature that failed on stage — because it didn’t. The car presented on stage was running specific code for the demo, which was different from the fully functioning car we watched park itself earlier. I don’t know why, but that’s not relevant. The point is the demo code failed, not FF’s self-driving system.

The reason I’m worried is that my Ph.D. is in Human Factors and I have a history of working with automation in aviation. Specifically, I’ve worked on how humans interact with automation in commercial airline cockpits for the FAA’s Next Generation Air Transportation System. This article is my first transition from peer-reviewed scientific publications to editorials. Why?

Because I watched an “autonomous car” reveal that contained more jargon than meaningful automation concepts.

Because I watched a senior VP of Research & Development use the farce of dimming the lights as a cover to sneak a technician into a supposedly self-driving car.

Because I was told a car’s self-driving feature didn’t work because of steel structure in a building’s roof.

Because a PR person had to clarify that a car was using real production technology that just needed to be recalibrated.

And most of all, because Faraday Future cut all of this out of the reveal video before they uploaded the event stream.

So what’s the problem?

Well, it’s been over 100 years since the first autopilot was developed in aviation, and 35 years since airplanes became capable of the full automation we’d call SAE Level 5. Over those 35 years, we’ve learned that the decision to use advanced automation is guided by trust and self-confidence. In other words, my decision to use automation depends more on whether I feel it will work than on how reliably it actually works. I’m pretty self-confident that I can park my car. I do it every day. I know the FF 91 can probably do it, too. But do I trust it to park itself in a seven-story concrete-and-steel parking garage at my work? Do other people who watched the reveal or read the articles about it trust that it will? If they don’t, they’ll never buy the car to find out.
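The trust-versus-self-confidence trade-off above can be sketched as a simple decision rule. This is a hedged illustration of the qualitative finding from the human-factors literature (e.g. Lee & Moray); the function name and the 0-to-1 scores are my own, not any FF system:

```python
def chooses_automation(trust: float, self_confidence: float) -> bool:
    """Predict whether an operator will engage automation.

    A minimal sketch of the trust vs. self-confidence rule: people tend
    to delegate a task to automation only when their trust in the
    automation exceeds their confidence in doing the task manually.
    The 0-to-1 scores are illustrative, not a calibrated model.
    """
    return trust > self_confidence

# A driver who parks successfully every day (high self-confidence) but
# doubts the "driverless valet" in a steel-and-concrete garage (low trust):
print(chooses_automation(trust=0.3, self_confidence=0.9))  # → False
```

Note that reliability does not appear in the rule at all: a perfectly reliable parking feature still goes unused if the perceived trust score stays below the driver’s self-confidence.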

And that’s the problem.

Vlad Pop is a research scientist with a Ph.D. in Human Factors. He is a lifelong racing enthusiast and former Formula E developer using years of cockpit automation experience to ensure the future of automated driving is in good hands.


13 Comments on “The Problem with Hype(r) Cars...”


  • David Walton

    Vlad, you’ve come a long way from DeMuro referring to your truck as a “lawn ornament.”

  • dukeisduke

    Wow, I thought “Elizabeth Carmichael” was dead.

  • 319583076

    Oh man, I’d love to read more content about automation. Colgan 3407 and AF 447 are two of the most recent, high-profile failures of the human-automation interface resulting in preventable losses. Then there’s Asiana 214 and several non-fatal events that had the potential for significantly worse outcomes.

    Then there is the Boeing vs. Airbus approach to automation in general and of course how automobiles are similar and different from airplanes and what bits of automation should be industry standard versus proprietary.

    I’ll keep my eyes peeled for your contributions.

    • Vlad Pop

      Thanks for the great feedback. I’m really glad to hear you find it interesting.

      The Boeing vs. Airbus automation philosophy is particularly relevant, and I’ve been thinking about it too. Specifically, I’ve been thinking about how management-by-consent vs. management-by-exception maps onto self-driving car companies vs. mobility companies.

      There’s also concern about certification of automation. The aviation industry has the FAA to certify new automation. What about the automobile industry? Who gives a self-driving car its driving test? That’s where the industry standard vs. proprietary automation debate will come in.

      Stay tuned!

      • xander18

        Vlad, I’m an engineer and thus have heard the very cursory overviews of some of the issues and subtleties of aircraft automation. It’s a fascinating field and I’m very much looking forward to hearing more of your perspective on this technology coming to cars. I hope this becomes at least a series or a feature.

  • SCE to AUX

    Agreed on all points.

    To turn your last point a little, liability ownership is *the* major issue, IMO.

    Volvo’s grandstanding aside, I don’t believe any mfr will be willing to accept liability for a fatality in their autonomy-enabled vehicle, and neither will consumers choose to purchase or utilize machine autonomy if they will be held liable for the outcome.

    Much also hinges on the attitude of insurers and the courts. Will my rates go up if my car has Level 2, 3, 4, 5 autonomy, or down?

    I believe the only way a driver will not be held liable is if there is no driver. As long as a human is sitting in the driver’s seat, they can be assigned some blame for an accident. Mfrs – even Volvo – are *not* going to simply roll over and write settlement checks to plaintiffs for unchallenged amounts.

    • Vlad Pop

      I wholeheartedly agree. Liability ownership is a major issue. That’s how the term “human error” was invented in aviation. We don’t blame Boeing, Airbus, or the FAA, we blame the pilot(s).

      It’s interesting that you mentioned that the only way a driver will not be held liable is if there is no driver. The reason we still have pilots in airplanes is not technology but insurance. I’m very interested to see how the automobile industry will handle this.

      • cognoscenti

        I’m curious: if US Airways Flight 1549 was fully automated, where would it have placed the airplane?

        Full disclosure: I’m a licensed pilot and thus biased.

    • BigOldChryslers

      One possible solution to the liability question would be black boxes in all autonomous driving capable vehicles, similar to in airplanes. An industry standard log format and interface for accessing the black box data would be preferred. When someone claims that the vehicle was driving autonomously when an accident occurred, investigators would review the black box log data.

  • brn

    “The point is the demo code failed, not FF’s self-driving system.”

    I’m with you in general, but I don’t agree with the above. Technically, you’re correct: the FF self-driving system didn’t fail. It’s hard for something that may not exist to fail.

    FF’s primary issue is all they do is hype and deceive. I can’t take anything they say or show at face value. Yes, that’s kind of what you were getting at, but I didn’t need all the rest. They lost me right away, before the excuses.

  • Sam Hall

    If I’m reading correctly, the concern is basically that FF will, with all its hype and BS, destroy the reputation of self-driving technology before it reaches the mainstream buying public.

    That’s a valid concern (see: GM diesel cars), but I’m not too worried about it. Electric cars have so far survived the EV-1, BYD, Elio and a few other bad products and vaporware. Tesla has self-driving tech on the road already, and several other major manufacturers will soon follow suit. I think most people will correctly identify the problem as Faraday Future, not self-driving tech in general.

  • BigOldChryslers

    Some technology innovations follow a pattern where they’re introduced but rejected by the market, usually because they aren’t user-friendly or are too expensive. Then follows a period of behind-the-scenes development where the technology is only used in niche applications and improved; then it is reintroduced commercially and finally takes off. I wonder if autonomous vehicles will follow this pattern?

    Other examples:
    – 3D movies (remember the glasses with red/blue lenses?)
    – videodisc/DVD
    – fuel injection
    – cylinder deactivation (Caddy 4-6-8 engine)
    – electric cars

