March 31, 2017

Fictional Autonomous Ford in Super Bowl Commercial

Don’t look now, but it would appear that SkyNet has finally arrived — in an expert system designed to make certain judgments during autonomous vehicle operation. NetworkWorld’s breathless report states, “Basically the patented IBM system employs onboard sensors and artificial intelligence to determine potential safety concerns and control whether self-driving vehicles are operated autonomously or by surrendering control to a human driver.” We don’t need to worry about preserving John Connor’s life, or even conceiving that life (with your friend’s mom!) quite yet, however.

The definition of “artificial intelligence” that NetworkWorld is using could just as easily apply to your “smart”phone’s various character-recognition systems. But the problem that this so-called AI purports to solve is one that has far-reaching implications for the timeline, and methods, by which autonomous vehicle operation enters the mainstream.

And it leads to a very simple question.

According to the article, this “smart wingman” would be used to make a particular set of decisions.

“The idea is that if a self-driving vehicle experiences an operational glitch like a faulty braking system, a burned-out headlight, poor visibility, bad road conditions, it could decide whether the on-board self-driving vehicle control processor or a human driver is in a better position to handle that anomaly. If the comparison determines that the vehicle control processor is better able to handle the anomaly, the vehicle is placed in autonomous mode,” IBM stated.
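
For the programmers in the audience, the decision IBM describes is, at its core, a comparison of two capability estimates under a given fault condition. Here is a rough sketch of what that hand-off logic could look like. To be clear, the anomaly names, scores, and weighting below are invented for illustration; the patent describes the idea, not an implementation.

# Hypothetical sketch of the "smart wingman" hand-off decision.
# All anomaly names, scores, and thresholds are invented for illustration.

# Rough capability estimates per anomaly (0.0 = hopeless, 1.0 = fully capable).
CAPABILITY = {
    "faulty_brakes":        {"computer": 0.4, "human": 0.6},
    "burned_out_headlight": {"computer": 0.9, "human": 0.7},
    "poor_visibility":      {"computer": 0.5, "human": 0.3},
    "bad_road_conditions":  {"computer": 0.6, "human": 0.5},
}

def choose_controller(anomalies, driver_alert):
    """Return 'autonomous' or 'human' for a set of detected anomalies."""
    if not anomalies:
        return "autonomous"
    # Score each side by its worst case across all current anomalies.
    computer = min(CAPABILITY[a]["computer"] for a in anomalies)
    human = min(CAPABILITY[a]["human"] for a in anomalies)
    # An inattentive driver is worth far less than an alert one.
    if not driver_alert:
        human *= 0.5
    return "autonomous" if computer >= human else "human"

print(choose_controller(["burned_out_headlight"], driver_alert=True))  # autonomous
print(choose_controller(["faulty_brakes"], driver_alert=True))         # human
print(choose_controller(["faulty_brakes"], driver_alert=False))        # autonomous

Even in this toy version, the hard part is obvious: somebody has to put numbers on those capabilities, and somebody gets sued when the numbers are wrong.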

This is the sort of thing that will be critical in autonomous-car adoption, because it has direct bearing on issues of responsibility and, more importantly, liability. We’ve already seen this with Tesla’s “Autopilot” and its inability to completely identify the set of all potential situations where a human being should really be paying attention. Tesla gets around this the same way that fireworks vendors in certain states do: they make you sign a form in which you claim you won’t be doing any of the things that you’re actually going to do. From a legal perspective, Autopilot is a sort of fatigue-mitigation device where you look straight ahead with both hands on or near the wheel, ready to take over at a sub-second’s notice.

This sort of subterfuge works well enough for Veblen goods sold to the inexplicably rich, but it won’t fly when the hoi polloi have to join the party. There will almost certainly be situations where an autonomous vehicle will require some level of human interaction, whether that takes the form of answering yes/no questions or manipulating a pop-up set of emergency controls. So it’s merely a question of when the car should give up and ask for advice or control from the on-board wetware.

The answers to this will not be cut and dried. Is an SCCA Runoffs champion better at handling a car with a disabled brake caliper than the computer would be? Maybe and maybe not. Is a 16-year-old with minimal training better? Almost certainly not. But what about that burned-out headlight? What combinations of problems will require human intervention? Remember that the autonomous vehicle fleet of the future will stop being shiny and perfect about three months after the first sizable batch of cars hits the ground. After that, it’s basically the low road to Children Of Men. What happens when a child in a low-income community is sick and Prole Mom, who can no more drive than she can conjugate Latin, demands that a car take her to a hospital? When the “smart wingman” determines that Onboard Motor #3 is in a suboptimal state, what will it do? Refuse to go?

If you ask me, the road to these answers will be paved in lawsuits. And bankruptcies. And blood. But if you disagree, let’s hear it.

30 Comments on “QOTD: When Should Your Self-Driving Car Ask For Help?...”


  • sirwired

    Even airliners, which are operated by nobody except highly-trained professionals, struggle with this trade-off. Confusion over how much the computer systems control at what time has been the source of more than a few accidents.

    • Heino

      Flew commercial for 8 years. Automation worked great, till it didn’t. Oh, you got the chirp or klaxon warning AS it kicked off. I learned to look for other clues in the secondary flight displays.

      Spoke to an engineer friend who works in aerospace as to why some aircraft automated systems are so confusing. His answer was, “Pilots don’t buy commercial aircraft, we only care about certification!”

      • DirtRoads

        When I was employed by an airline and working with Boeing on a unique project for them, I couldn’t believe how little they knew about the airline business. They are engineers, building a product. That’s it. They couldn’t care less if it was a 747 or a dishwasher.

        I love flying machines and always have. I ASSumed people building airplanes at Boeing would feel the same way. Some do, I’m sure.

        For all the ergonomics studies that have been done over the years, there are still a lot of screwed-up interfaces. That goes from cockpits to Cannondales.

  • Caboose

    My contracts professor used to say, “America has the worst justice system in the world… except for all the others.”

    Similarly, I’m beginning to think that human drivers have the worst judgment/responses/reflexes/skills… except for all the other possible options for sending a car down the road.

    If the tech companies who are largely pushing for this malarky spent the same effort getting more of their knowledge workers operating remotely – knowledge workers who predominantly live in the traffic-nightmare cities of the country as it is – they could take millions of miles off the roads every year, thus mooting a big portion of the supposed problem.

    • Arthur Dailey

      Caboose nails it, despite not mentioning that his Professor was merely paraphrasing Churchill.

      The vast majority of knowledge workers, electronic order takers, data entry and customer service clerks could work just as well or better from home. The major reason that they are not allowed to is that the ‘managers’ who oversee them would then be revealed to be superfluous to the requirements of the organization. In reality it is rather easy to monitor their performance, remotely.

      In turn, this would remove multitudes of commuters from the roads/trains/buses. It would also reduce the rent paid by companies or the size of their facilities. It would also reduce costs associated with furniture purchases, building maintenance/upkeep and energy costs.

      As for when an ‘autonomous’/self-driving vehicle would hand over control to a human(oid), the major problem, as ‘Dan’ mentions below, is that as humans drive less and less, their driving skills will diminish.

      • FreedMike

        This.

        I’ve been working remotely for almost seven years now. My company has no problem letting me know what my productivity looks like.

        But I’ll disagree on the “working remotely obviates the need for managers” argument. It’s still needed. It just becomes a different management style.

        • Arthur Dailey

          Let’s say ‘supervisors’ or layers of management.

          It takes fewer of these to observe and monitor the productivity of remote employees.

          Just as we no longer need to have a manager/supervisor standing around watching a group of workers.

          You are correct about it needing a different organizational culture, in order to work.

          • FreedMike

            I think it depends on what kinds of tools the organization has to monitor productivity or other metrics. If the tools are in place, then it takes no more or less effort to manage people who are remote than it does for someone who is “present.” All the supervisor needs to do is read a report and do his or her job in coaching the employee.

            In order for remote employment to work, there has to be a solid set of metrics in place and a solid system to monitor them. I’d say that’s more of a basic organizational requirement than cultural, but I’m probably parsing, because those two are always interconnected.

          • Arthur Dailey

            Cultural shift as dictated by organizational strategy, as it requires a different managerial style: those being ‘classical’, ‘human relations’ or ‘high involvement’. And different employee behaviours, those being ‘membership’, ‘task’ or ‘citizenship’. These generally are required to correspond with the organization’s compensation strategy, which could be one of ‘lag’, ‘match’ or ‘lead’.

            A misalignment in the above leads to organizational dysfunction.

            But then I could discuss this all day because I must reluctantly admit, ‘human resources/organizational restructuring/performance management’ is the profession in which I am engaged.

          • FreedMike

            I’ll defer to you on HR, Arthur, but I know from experience that telecommuting works as long as you have a strong set of metrics to work with, and that’s the same thing you need for basically any kind of management.

  • Dan

    The fundamental problem here is that the human driver to whom control will potentially be handed off will be awful. Not just inattentive, inconsiderate, and ignorant of the rules of the road awful like the drivers that we have now. All of those problems will still be there and will now be magnified by the rust of not having actually driven for weeks or months at a time while the car drove for him.

    On top of that, obstructed visibility or non-standard traffic patterns – the problems which AI has far and away the most trouble dealing with – are likely to occur suddenly during a trip that’s already in progress, in which case the human driver will be expected to put down his iPhone, assess the situation within seconds, and control the vehicle appropriately without so much as a driveway and a couple of slow residential streets to shake the cobwebs off?

    Not happening. Not coming close to happening.

    • dwford

      My thoughts exactly. Personally, I would never be able to relax knowing that on a moment’s notice I may be called to action in an emergency situation. Other people, with perhaps a more positive outlook on life, will totally embrace the autonomous experience and not be mentally ready to take over when needed.

    • Lou_BC

      Agreed.
      Last year we were heading out on a camping trip and it was getting close to dusk. Prime time for deer to start wandering around. A doe bounded out on the road and I reacted by braking.
      My wife, staring off into the distance in a semi-oblivious state, freaked out, but well after I had initiated avoidance measures. She would have made the situation worse had she been at the wheel.

      I can see the exact same scenario occurring if and when an autonomous car hands over control. The human as “plan B” isn’t going to be alert or aware enough to make a quick decision.
      If “hand over” is due to a “soft” failure then the vehicle should just go into “limp” mode and then the driver has the option to override. That would be no different than if the AI decided that the environment was too dangerous to proceed. A person could choose to override.

  • JimZ

    All this tells me is that Level 3 autonomy *must not happen.* If it relies on the meatsack to be ready to take control in short order, it is doomed to fail.

  • RobbieAZ

    I completely agree with Jack. This is not going to be good.

    It seems like one of the problems these cars have is the assumption that other drivers on the road are going to behave just as responsibly as the autonomous car itself does. We just had a wreck in Phoenix where the autonomous car failed to anticipate the possibility that another driver might do something stupid. I can foresee this happening a lot.

  • zoomzoomfan

    Taking control and making evasive maneuvers or decisions at a moment’s notice has proved difficult for Tesla drivers and drivers of standard ol’ manually driven cars. Can you imagine people riding in self-driving cars and being immersed in a book, asleep, watching TV, or doing…other activities with one another? By the time they realized the robot car was confused and needed them, it’d be too late.

  • S2k Chris

    If my self-driving car can’t reliably haul me home when I’m 5 Manhattans deep, I don’t want it. For me it’s all or nothing. I wouldn’t buy a car that had a stereo that decided some days it wouldn’t work, you better just sing to yourself instead, and it’s the same way with autopilot. If it’s not going to work, rip it all out and I’ll drive myself.

    • Willyam

      Yes THIS.

      I commuted this morning behind, and slightly to the right of, a new Camaro ’vert. Paper plates. It attempted to swerve left around a semi, lit the brakes up and panic-swerved right back into lane. The Ram diesel 4×4 that was passing shook a fist or two, then throttled back up and continued.

      I figured, wow, visibility on the convertible is EVEN WORSE, and I pulled up gingerly to see what was going on in there.

      She had a phone to the left ear, and was steering with a couple of fingers while she held a 44 oz styrofoam drink in the rest of her right hand.

      If I could automate those drivers off the road, or at least have a car that could think fast enough to avoid them AND let me have 4 drinks at the concert, count me in.

  • S2k Chris

    This also calls into question things like the intended Uber self-driving fleet. I’m going to “hail a cab” that might suddenly decide I need to drive it myself? Uh, thanks but no thanks; if I wanted to drive myself I’d be taking my own car. And who is suddenly liable? If I get summoned into driving an Uber Black S550 and I prang it, am I and my insurance suddenly on the hook? If it decides halfway into a skid, “uh, need some help here,” switches off autopilot, and I can’t recover and crash, it’s on me?

  • FreedMike

    So, in other words…

    The car runs autonomously while the driver is reading his texts, or dozing off, or fondling himself, or whatever else drivers who think their cars drive themselves do while they should be paying attention. Or maybe the guy’s just had four tequila shots. Why not – the car drives itself, right?

    Then the car diagnoses some kind of critical problem that whoever made it doesn’t want to take legal responsibility for, at which point control is transferred directly to the texting / dozing off / self-gratifying / intoxicated driver. And I’m sure he’s 110% ready for whatever crisis is being tossed into his lap. Right?

    Said it before and I’ll say it again: autonomous driving in any situation aside from long stretches of Interstate or rural roads is going to cause problems. Count on it.

    (…cue the “gee, but if the driver is really drunk, why not let the car drive him if it can” counter-argument.)

  • Kendahl

    It’s also worth worrying about the reverse. That is, the car taking over and screwing up when the human driver was in the process of handling the situation.

  • stingray65

    Problems will likely be greatly reduced with level 5 fully autonomous cars that offer no option for manual driving. If in trouble they will need a safe mode that steers them to a stop on the side of the road and automatically calls the self-driving tow truck. Even better from a safety point of view will be the day that manually driven cars are outlawed, which will eliminate the unpredictability of humans and allow the cars to ‘talk’ with each other to avoid problems – no more stop signs, traffic lights, or speed limits. Of course, getting from now to that point in time will likely present some interesting issues.

  • V-Strom rider

    228 people were killed on Air France 447 when the autopilot handed over to the human crew and they failed to take correct action. The fact that the correct action was “do nothing” doesn’t change the fact that the crash was caused by the hand-off. This was in 2009, so it’s not a new, or unforeseeable, problem.

  • V-Strom rider

    Bottom line – my own definition of an autonomous vehicle is one where I can get into the passenger seat and go for a ride. Any so-called autonomous vehicle that expects me to be ready to take over will have the “autopilot” switched off and I’ll drive myself. If that can’t be done I’ll ride in something else!
