Are You Ready For: Cars That Get Inside Your Head?

by Edward Niedermeyer

Google’s autonomous cars have already shown how close vehicles are to driving themselves in day-to-day traffic, but there’s still one uncontrollable, unpredictable, and often-irrational variable that autonomous cars struggle to cope with: you, me, and all the other haphazardly-programmed human beings on the road. And though predicting human behavior might be one of the most difficult tasks for a human-programmed computer, researchers at MIT are already digging into the challenge. In tests using model cars (one autonomous, one human-controlled) on overlapping tracks, collisions were avoided in 97 out of 100 laps. But not all of those laps even entered the near-collision “capture set”… which, as it turns out, is what makes the human threat to autonomous cars so challenging.

According to [MIT Mechanical Engineering Professor Domitilla] Del Vecchio, a common challenge for ITS developers is designing a system that is safe without being overly conservative. It’s tempting to treat every vehicle on the road as an “agent that’s playing against you,” she says, and construct hypersensitive systems that consistently react to worst-case scenarios. But with this approach, Del Vecchio says, “you get a system that gives you warnings even when you don’t feel them as necessary. Then you would say, ‘Oh, this warning system doesn’t work,’ and you would neglect it all the time.”


That’s where predicting human behavior comes in. Many other researchers have worked on modeling patterns of human driving. Following their lead, Del Vecchio and Verma reasoned that driving actions fall into two main modes: braking and accelerating. Depending on which mode a driver is in at a given moment, there is a finite set of possible places the car could be in the future, whether a tenth of a second later or a full 10 seconds later. This set of possible positions, combined with predictive models of human behavior — when and where drivers slow down or speed up around an intersection, for example — all went into building the new algorithm.
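
To make the two-mode idea concrete, here is a minimal sketch (not the MIT code) of how a finite set of possible future positions might be enumerated for a car assumed to be either braking or accelerating. The acceleration bounds, time step, and simple constant-acceleration motion model are all assumptions for illustration.

```python
# Hypothetical sketch of the "two modes" idea: enumerate where a car could be
# over a short horizon, assuming it stays in either a braking or an
# accelerating mode. All names and numbers here are illustrative assumptions.

def reachable_positions(position_m, speed_mps, mode, horizon_s, dt=0.1):
    """Return the set of positions (rounded to 0.1 m) the car could occupy
    at the sampled time steps up to horizon_s, given its current mode."""
    # Assumed acceleration bounds per mode (m/s^2); a real model would be
    # fit to observed driver behavior near intersections.
    accel_bounds = {"braking": (-6.0, -0.5), "accelerating": (0.5, 3.0)}[mode]
    positions = set()
    steps = int(horizon_s / dt)
    for a in accel_bounds:                    # the extremes bound the 1-D set
        x, v = position_m, speed_mps
        for _ in range(steps):
            v = max(0.0, v + a * dt)          # the car never rolls backwards
            x += v * dt
            positions.add(round(x, 1))
    return positions

# Example: a car 30 m before the intersection at 10 m/s, assumed to be braking.
print(sorted(reachable_positions(30.0, 10.0, "braking", horizon_s=2.0))[:5])
```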

The result is a program that is able to compute, for any two vehicles on the road nearing an intersection, a “capture set,” or a defined area in which two vehicles are in danger of colliding. The ITS-equipped car then engages in a sort of game-theoretic decision, in which it uses information from its onboard sensors as well as roadside and traffic-light sensors to try to predict what the other car will do, reacting accordingly to prevent a crash.
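
As a rough illustration of that capture-set check (a sketch under assumptions, not the published algorithm), the predicted trajectories of the two cars can be compared time step by time step, with the ITS-equipped car keeping whichever of its own modes stays out of the danger zone:

```python
# Hypothetical capture-set check: each car is summarized by its predicted
# distance into the intersection at each future time step (e.g. produced by a
# reachable-set sketch like the one above). The conflict-zone size and the
# decision rule below are illustrative assumptions, not the MIT algorithm.

CONFLICT_ZONE = (0.0, 5.0)   # assumed: meters into the intersection where paths cross

def inside_zone(distance_m, zone=CONFLICT_ZONE):
    return zone[0] <= distance_m <= zone[1]

def in_capture_set(my_trajectory, other_trajectory):
    """True if, at some predicted time step, both cars could occupy the
    conflict zone at once -- the 'danger of colliding' condition."""
    return any(inside_zone(a) and inside_zone(b)
               for a, b in zip(my_trajectory, other_trajectory))

def choose_action(my_braking_traj, my_accelerating_traj, other_predicted_traj):
    # Game-theoretic flavor: test our two modes against the predicted behavior
    # of the other car and keep whichever one avoids the capture set.
    if not in_capture_set(my_accelerating_traj, other_predicted_traj):
        return "accelerate"   # clear the intersection before the conflict
    if not in_capture_set(my_braking_traj, other_predicted_traj):
        return "brake"        # yield and stay short of the zone
    return "emergency brake"  # worst case: minimize exposure
```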

When both cars are ITS-equipped, the “game” becomes a cooperative one, with both cars communicating their positions and working together to avoid a collision.
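
In the cooperative case, the same check can be settled by exchange rather than prediction. Here is a sketch of one possible coordination rule (an assumption, not the published protocol), in which each car shares its estimated arrival time at the conflict zone and the later arrival yields:

```python
# Hypothetical cooperative rule for two ITS-equipped cars: each broadcasts its
# estimated time of arrival (ETA) at the shared conflict zone, and the later
# arrival yields. The rule and tie-break are assumptions for illustration.

def cooperative_plan(my_id, my_eta_s, other_id, other_eta_s):
    """Return 'proceed' or 'yield' based on the exchanged ETAs."""
    if my_eta_s < other_eta_s:
        return "proceed"
    if my_eta_s > other_eta_s:
        return "yield"
    # Equal ETAs: break the tie deterministically so both cars agree.
    return "proceed" if my_id < other_id else "yield"
```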

Read the theory and preliminary experimental results from the MIT research in PDF format here, or read more of MIT’s news write-up here.



Comments
  • APaGttH APaGttH on Jun 19, 2011

    Interesting article on this topic in Scientific American: http://www.scientificamerican.com/article.cfm?id=google-driverless-robot-car Seems we're a long way off from self-driving cars.

    • Mcs Mcs on Jun 20, 2011

      +1 I've designed parts of two aviation collision avoidance systems that are in use today, have robotics experience, and plan to have a small autonomous vehicle prototype operating by the end of the year (although it's my lowest priority project, so who knows). I can tell you that we're even further away from something that functions on roads than most of these guys think.

      For example, today I was blasting down a road in the White Mountains of NH on my bike. Suddenly, a moose jumped onto the roadway and started travelling in the same direction I was heading about 100 feet ahead of me on the left side of the road. Maybe he was British? Anyway, as a human, I know this thing is a moose and based on my rudimentary knowledge of moose, even though he wasn't travelling in my lane, I knew that something bad might happen if I zoomed past him in my lane, so I wisely hit the brakes. An autonomous vehicle might not recognize the moose as a threat (it's not on a collision course, so no problem), and would try to drive past him. This might have caused any number of bad behaviors on the part of the moose and it would have ended badly for the autonomous vehicle and the moose.

      As humans, we have driving and life experiences that enable us to deal with situations like moose and any number of other occurrences on the roads. Sure, we're making progress (and I'm hoping for a nice patent portfolio), but we have much further to go before we can turn vehicles loose on public roads. Much further than many of my peers seem to realize.

  • Zackman Zackman on Jun 20, 2011

    Unfortunately for me, I have to be at the office, because that's where all our equipment is for what I do: cutting physical paperboard structures and fulfilling sample requests that come up rather suddenly when a customer or salesman calls with an issue that needs to be resolved quickly. 100-mile daily commute, here I come! Beloved 2004 Impala, don't fail me now!
