March 6, 2020

Autonomous vehicles “feel” the road ahead with a variety of sensors, feeding the data they collect to the vehicle’s brain to trigger a response: brake action, for example. It’s technology that’s far from perfected, yet self-driving trials continue on America’s streets, growing in number as companies chase that elusive driver-free buck.

In one tragic case, a tech company (one that has since had a come-to-Jesus moment regarding public safety) decided to dumb down its fleet’s responsiveness to cut down on “false positives”: perceived obstacles, like a windblown plastic bag, that would otherwise send the vehicle screeching to a stop. The decision had fatal consequences. On the other side of the coin, Tesla drivers continue to plow into the backs and sides of large trucks that their Level 2 driver-assist technology failed to register.

Because all things can be hacked, researchers now say there’s a way to trick autonomous vehicles into seeing what’s not there.

If manufacturing ghosts is your bag, read this piece in The Conversation. It details work performed by the RobustNet Research Group at the University of Michigan, describing how an autonomous vehicle’s most sophisticated piece of tech, LiDAR, can be fooled into thinking it’s about to collide with a stationary object that doesn’t exist.

LiDAR sends out pulses of light, thousands per second, then measures how long it takes for those signals to bounce back to the sender, much like sonar or radar. This allows a vehicle to paint a picture of the world around it. Camera systems and ultrasonic sensors, which you’ll find on many new driver-assist-equipped models, complete the sensor suite.
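The time-of-flight arithmetic behind this is simple, and it also shows why any spoofing attempt demands extreme timing precision. A minimal sketch (the speed-of-light constant is standard; the example numbers are purely illustrative, not from the research):

```python
C = 299_792_458  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a LiDAR pulse's round-trip time (out and back)."""
    return C * round_trip_s / 2.0

def spoof_delay_s(fake_distance_m: float) -> float:
    """Delay after the victim's pulse at which an attacker must fire a
    counterfeit return to fake an object at the given distance."""
    return 2.0 * fake_distance_m / C

# A real echo arriving after 200 nanoseconds implies a target ~30 m away...
print(round(tof_distance_m(200e-9), 2))    # 29.98 (meters)
# ...while faking an obstacle just 5 m ahead requires ~33 ns timing precision.
print(round(spoof_delay_s(5.0) * 1e9, 1))  # 33.4 (nanoseconds)
```

Each nanosecond of timing error shifts the computed range by roughly 15 centimeters, which is why the researchers quoted below stress nanosecond-level precision.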

From The Conversation:

The problem is these pulses can be spoofed. To fool the sensor, an attacker can shine his or her own light signal at the sensor. That’s all you need to get the sensor mixed up.

However, it’s more difficult to spoof the LiDAR sensor to “see” a “vehicle” that isn’t there. To succeed, the attacker needs to precisely time the signals shot at the victim LiDAR. This has to happen at the nanosecond level, since the signals travel at the speed of light. Small differences will stand out when the LiDAR is calculating the distance using the measured time-of-flight.

If an attacker successfully fools the LiDAR sensor, it then also has to trick the machine learning model. Work done at the OpenAI research lab shows that machine learning models are vulnerable to specially crafted signals or inputs – what are known as adversarial examples. For example, specially generated stickers on traffic signs can fool camera-based perception.

The research group claims that spoofed signals designed specifically to dupe this machine learning model are possible. “The LiDAR sensor will feed the hacker’s fake signals to the machine learning model, which will recognize them as an obstacle.”
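The adversarial-example idea referenced above is easy to demonstrate in miniature. A hedged sketch using a toy linear “obstacle score” rather than any real perception network (all names and values here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # weights of a toy linear "obstacle" classifier
x = rng.normal(size=16)   # a benign input

# Fast-gradient-sign-style perturbation: for a linear score w @ x, the
# gradient with respect to the input is w itself, so adding eps * sign(w)
# raises the score by exactly eps * sum(|w|) while changing each feature
# only a little -- the essence of an adversarial example.
eps = 0.5
x_adv = x + eps * np.sign(w)

print(float(w @ x_adv - w @ x))  # equals eps * np.abs(w).sum()
```

Real attacks target deep networks rather than a linear score, but the mechanism is the same: small, carefully directed input changes that swing the model’s output.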

Were this to happen, an autonomous vehicle would screech to a halt, with the potential for following vehicles to slam into it. On a fast-moving freeway, you can imagine the carnage resulting from a panic stop in the center lane.

The team tested two light-pulse attack scenarios using a common autonomous driving system: one with a vehicle in motion, the other with a vehicle stopped at a red light. In the first setup, the vehicle braked; in the second, it remained immobile at the stoplight.

Needless fear-mongering? Not in this case. With the advent of any new technology, especially one that exists in a hazy regulatory environment, there will be people who seek to exploit its weaknesses. The team said it hopes “to trigger an alarm for teams building autonomous technologies.”

“Research into new types of security problems in the autonomous driving systems is just beginning, and we hope to uncover more possible problems before they can be exploited out on the road by bad actors,” the researchers wrote.

[Image: SAE, Ford]


13 Comments on “Seeing Ghosts: Self-driving Cars Aren’t Immune From Hackers...”


  • ToolGuy

    On the topic of false positives:

    In some parts of the country there are rural roads which often meet at less than a right angle. If there is a stop sign on the road crossing yours at an acute angle, it could easily be read by a machine as belonging to ‘your’ road (sometimes you have to look twice as a human if the road is new to you, especially depending on lighting conditions).

    Best practice is to shield the stop sign so that “the legend is out of view of traffic to which it does not apply” but this is [very] often not done.

    If an automated vehicle misreads the non-applicable stop sign and comes to a stop on a ~55-mph highway, this becomes a potential problem for anyone following.

    • MQHokie

      How about “Do not Enter” or “Wrong Way” signs set BETWEEN adjacent on- and off-ramps? Oftentimes these are only very slightly angled toward the actual “wrong way” lane and they look like they could apply to either or both lanes. Not to mention GPS isn’t precise enough to reliably locate the car on the correct side of these combinations.

  • Russycle

    Self-driving tech has to be very close to perfect, as at any given time millions of lives will be depending on it. Better-than-human isn’t good enough, and we’re not even there yet. And I don’t think we ever will be: security researchers just revealed that nearly all Intel x86 processors contain an exploitable flaw:
    http://blog.ptsecurity.com/2020/03/intelx86-root-of-trust-loss-of-trust.html

    • RHD

      Once it is good enough, automakers will then subcontract the software and hardware to China.
      The solder joints will work loose and the motherboards will crack in a few years, and the software will contain viruses (like digital picture frames had a few years ago).
      Whoops, now we have to recall two million potentially homicidal self-driving cars, and there go our profits…

  • ToolGuy

    Bad actor tryouts: Mock up an autonomous ‘pod’ like the one on the roof of that Ford in the second picture (e.g., fiberglass resin over plywood, with some appropriate painting and detailing). Mount it to the roof of your daily driver, and you’ll never again be pulled over for distracted driving.

  • mcs

    This is really old news. Like at least 5 years ago. This is one of the many reasons I won’t use LIDAR in my systems. I have no idea why Waymo and others keep using it. As a small player competing with giants, it’s always fun to see the big guys head down a dead-end trail. There are other fundamental issues with the type of AI they use as well. This has been known for a while too, and some of us are using something different. Waymo and some of the others are going to eventually have to scrap their entire systems. I come from an aviation collision avoidance background, and we learned to test the crap out of every system. The problem is that progress is really slow. Trust me, not all of us are using the type of AI they use, and we use much better sensor technology. Also multiple sensor types. Oh, and good luck hacking FPGAs that have had their JTAG cables disconnected.

    https://gizmodo.com/a-60-hack-can-fool-the-lidar-sensors-used-on-most-self-1729272292

    https://www.theguardian.com/technology/2015/sep/07/hackers-trick-self-driving-cars-lidar-sensor

    https://www.theregister.co.uk/2017/06/27/lidar_spoofed_bad_news_for_self_driving_cars/

    • conundrum

      You’ve been the only expert commenter on TTAC over a period of literally years on the autonomous driving scene. Expert because you actually work on the stuff. I recall when you switched over from the aero work.

      A couple of years ago it was IBM bio-intelligence chips you mentioned. How’s that coming along?

      These big boys like Waymo and Uber and Cadillac and Tesla messing around and getting nowhere so far as I can see is as you predicted.

      So, how about some guess/comment as to how well progress at the leading edge is actually going? Are we going to see anything autonomous even halfways reasonable by the end of the decade? It’s a giant mountain to climb.

  • Art Vandelay

    I’d have to see the architecture on this. I would assume the sensors’ connections to whatever system is controlling them are discrete. If they’re CAN-connected rather than discrete, it’s relatively trivial to just spoof whatever messages indicate there is nothing in your path. You’ll be fighting with the real sensors’ data, but you could at least slow down the car’s reaction time, depending on how it’s implemented. Compromising CAN-connected infotainment suites would be my likely vector if asked to look at this. I’ve seen similar attacks work, though nothing involving self-driving/LiDAR.
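The spoofing the commenter describes is plausible in part because classic CAN frames carry no sender authentication. A minimal sketch of packing a SocketCAN-style frame in Python (the arbitration ID and payload below are hypothetical, chosen only for illustration):

```python
import struct

# SocketCAN's classic frame layout: 32-bit ID, 1-byte data length, 3 pad
# bytes, then 8 data bytes. Note there is no field identifying or
# authenticating the sender -- any node on the bus can emit any ID.
CAN_FRAME_FMT = "=IB3x8s"

def build_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (hypothetical ID/payload, for illustration)."""
    assert len(data) <= 8
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

# A hypothetical "no obstacle ahead" status frame an attacker might inject:
frame = build_frame(0x1A0, b"\x00\x00")
print(len(frame))  # 16 bytes, ready for send() on a raw SocketCAN socket
```

Actually winning the arbitration against the genuine sensor traffic is another matter, as the commenter notes, but nothing in the frame format itself stops the attempt.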

  • avatar

    How are humans different from computers? We also use sensors and algorithms to process inputs and identify patterns. It takes months, if not a year or two, of intensive learning for humans to more or less reliably drive a car. Autonomous cars also need a learning process and a lot of data collection and processing. Chinese roads and laws, with unpredictable and poorly trained drivers, are a perfect environment for machine learning.

  • avatar

    Can’t wait for the first “Pay $500 in Bitcoin or the car crashes” message on electric carpads

  • northeaster

    I live in Boston. There is no need to outsource the study of unpredictable and poorly trained drivers to China.

  • pwrwrench

    Aside from anyone deliberately trying to mess with an ‘autonomous’ vehicle, there’s EMI to consider.
    With more and more underground cables, EMI is pervasive. Sure, you can shield a system. IIRC, in the early days of electronic fuel injection, high-powered two-way radios could cause havoc and engine shutdown.
    I installed a few anti-static kits with grounding cables from hoods and trunks to body. As late as the ’80s, some Bosch systems malfunctioned due to static discharges under very specific conditions. A fix was created with a filter in a cable leading to the ECM.
    As another post has it, “you need to test the crap out of it.”
