August 30, 2019

The National Highway Traffic Safety Administration has been pretty good about letting companies test autonomous vehicles on public roads. And yet pretty much every automotive manufacturer, ride-sharing firm and tech giant still wants laxer rules. To a degree, it's understandable. Take General Motors, for example. Back in 2017, GM sought exemptions from NHTSA to deploy fully automated vehicles without steering wheels or pedals, but such a car would have been in clear violation of preexisting safety standards, which were never written with the General's vision of a self-driving car in mind.

GM’s autonomous division recently said the self-driving Cruise AV it had been prepping for the end of this year will likely have to be delayed. While development issues assuredly played a role in stalling the car’s commercial deployment, it could never have launched as initially designed anyway.

Earlier this year, the Federal Motor Carrier Safety Administration (FMCSA) and NHTSA asked for input regarding the testing of automated vehicles to help decide if the “removal of unnecessary regulatory barriers” would be a prudent move. You can probably guess the feedback received from the automotive and tech industries. 

The public comment period was designated to last until August 28th. According to Reuters, Waymo managed to get its statements in right at the buzzer.

“[The] NHTSA should move promptly to remove barriers while ensuring safety,” Waymo said in a letter posted on Thursday, responding to the agency’s request for comment “on the removal of unnecessary regulatory barriers to the safe introduction of automated driving systems.”

Among the barriers referenced by Waymo were seating configurations and the need for human operators. New seating templates would let automakers convert interiors into mobile lounges while dumping flesh-and-blood drivers, opening the door to automated taxi services and new distractions embedded into the dashboards of autonomous cars. Unfortunately, there’s not a lot of hard evidence that automated systems are any better than human drivers. There is, however, an abundance of industry marketing material suggesting they will be someday.

There’s also the oddly pervasive idea that autonomous vehicles will allow literally anyone to move into the driver’s seat. In fact, the National Federation of the Blind has openly supported the development of autonomous vehicles for a couple of years now. But questions remain as to how the visually impaired would effectively operate an AV. While voice commands would be an ideal interface, it would have to perform impeccably to work, as would the car’s navigational abilities.

We’ve also heard claims that mobility solutions would allow people without licenses (which would include the blind) and even children to take solo trips. But this opens up a bevy of new questions. Can people with no driving experience be made responsible for a motor vehicle? What are the legal ramifications?

Nobody has satisfactory answers, yet Waymo suggested the above hurdles must be removed if vehicles without controls are to be deployed in a “timely” manner. Other firms were only a few millimeters away from being in lockstep. They want new rules as soon as possible.

From Reuters:

General Motors Co in its comments said “it is imperative that NHTSA continue to drive this critical dialogue with a sense of urgency so that the necessary regulatory evolution keeps pace with advancing technology.”

Lyft Inc and Honda Motor Co told the agency in separate comments that it could recognize self-driving cars as a separate vehicle class to address rules written on the assumption that a human would be behind the wheel.

Numerous automakers have been more realistic of late. Both Ford and GM claim that introductory AVs would be aimed at getting you near your intended destination before dropping you off (in the case of cabs) or forcing you to take over (in the case of personal transport). Other automakers, like Fiat Chrysler, have been more relaxed in their pursuit of the technology. The brunt of FCA’s autonomous commitments revolves around supplying tech companies with platforms they can use to develop their own systems — with its own AV projects playing more of a supporting role.

The NHTSA intends to write rules regarding seating configurations and manual controls in March 2020, hoping to address the safety of passengers facing the side or rear of self-driving cars. Still, Waymo said seating was not an important factor when it came to “the deployment or development” of autonomous vehicles.

We’re inclined to agree. Chair positions are small potatoes in the bigger picture. That’s why the FMCSA’s notice requesting comments focused on ten items that had nothing to do with seating orientations. It’s concerned with the following:

  • Whether federal safety regulations should require a human driver
  • Minimum medical qualifications for human operators
  • How commercial driver license endorsements come into play
  • Hours-of-service rules for commercial AVs
  • Distracted driving and monitoring systems
  • Safe driving and drug and alcohol testing procedures/laws
  • Inspection, repair and maintenance rules
  • Roadside inspections
  • Cybersecurity issues
  • Confidentiality of shared information

Addressing even one of those items thoroughly would be a daunting task. Yet the United States will have to contend with all of them if it’s to deploy autonomous vehicles en masse. Sadly, the people making these rules also seem woefully out of touch with the technology. One issue is that a large portion of the information they’re being fed comes directly from the companies developing it. But there’s also a lot of technical information to parse before anyone can have even a basic understanding of how these systems function, what obstacles they have yet to overcome, and how the development process works.

[Image: Waymo]



28 Comments on “Automakers Ask NHTSA to Remove Autonomous Hurdles...”

  • avatar
    SCE to AUX

    Sure, remove all the barriers. /s

    These fools must have the best lawyers in the world.

  • avatar

    Regulators should proceed cautiously and ignore pleas to expedite new regulations. Years of work and mountains of money seem to have taken autonomy to the level of a new driver according to stories from the Phoenix AZ area. There are too many unanswered questions, such as who does the software choose to sacrifice, and who to protect when a wreck is unavoidable?

    • 0 avatar

I asked those questions in a casual conversation with a software consultant who was working on AVs for one of the “big data” companies. The so-called AI software is actually plain old decision trees “taught” to a simulated neural network. The really hairy life-or-death decisions? They have a random number generator to decide which response to use. Yes, it literally is a roll of the dice…

      We tend to give these companies far more credit than they deserve.

      • 0 avatar
        Art Vandelay

Wow. At least in I, Robot there was a calculation behind the decision to save Will Smith. Yes, count me as a hard no on easing regulations. How about you put people at ease about what’s out there first, and address the allegations we saw on here a while back that you’re fudging the safety numbers.

      • 0 avatar

        I consider a roll of the dice to be at least as trustworthy as the moral compass of a tech executive.

      • 0 avatar

        @TimK: ” The so-called AI software is actually plain old decision trees “taught” to a simulated neural network”

        None of the mainstream players are doing that as far as I know. There are a small number of us on the fringe that work with 2nd gen neuromorphic AI that mimics biological systems.

        Don’t know where you’re getting the random number generator thing from. Even in the older aviation collision avoidance systems I’ve co-designed, we never did anything like that. It was a best effort to avoid colliding with anything. It wasn’t “do I hit the deer or the human (ground collision avoidance)” but how do I avoid hitting both. With sensors far more capable of spotting trouble sooner than any human, a machine can have more time to avoid a collision. For example, in an experiment we constructed a 3-d model of an object from the reflections in a car door panel. Picking up and analyzing reflections and moving shadows can give future autonomous technology a huge advantage over humans since it could essentially see around corners and better anticipate problems.

    • 0 avatar

Humans face the same dilemma. Like dropping A-bombs on a few selected Japanese cities, sacrificing hundreds of thousands of civilians to save millions of American and Japanese lives. What’s the difference?

      • 0 avatar

        Is it really a dilemma for any humans? There are people who will kill themselves and others to save a squirrel. I don’t want those people programming my AV.

        • 0 avatar
          Marc Ramsey

          Could it be that the right way to program an AV, from your perspective, is to have it kill the occasional granny or kid, so the greater number of passengers get to their destinations a few seconds earlier? That is the actual tradeoff.

          Consider VW, Audi, Porsche marketing their new AVs as getting you to your destination faster than the competitors, just as they did with diesel technology and mileage. That turned out great!

          Watch the Republicans and Democrats come up with one rare piece of bi-partisan legislation that caps car, software, and finance company liability in AV accidents. That should really get the ball rolling, we’ll just hope there’s no kid running behind it.

        • 0 avatar

          “There are people who will kill themselves and others to save a squirrel.”

I doubt the “themselves” part, unless they’re depressed and suicidal. But others? Oh yeah. We had a lot of that in Russia: “others” were killed by the millions for some crazy man’s idea.

      • 0 avatar

        @ ILO — “What’s the difference?”

        The difference is $10 ARM chips running software on the AV controller don’t have a past, a family, or any legal standing. If an AV “decides” to run over a bicyclist on the shoulder of the road instead of endangering the occupants of the vehicle in a frontal collision, it has sentenced a human being to death. Do we really want to give a glorified video game motherboard that license? How is that consistent with any sense of justice or due process?

        We claim to live in a nation of laws with unalienable human rights…

        • 0 avatar

          “The difference is $10 ARM chips running software on the AV controller ”

Huh? Where do you get this? There are some ARM chips that cost a dollar moving the data around, but the AI happens on pricey Nvidia CUDA cores, FPGAs, and custom processors.

        • 0 avatar

@TimK Modern science considers humans to be biological robots: a different kind of CPU, but the software is the same AI, just evolved naturally. Both humans and silicon-based AI evolve via learning algorithms.

    • 0 avatar

This situation is pretty rare; it seems like a red herring. Also, it would be better to think this out in advance in software rather than leave it to the split-second reaction of a human driver.

Regulators should proceed cautiously up to a point, but at some point caution is reckless. About 100 people die every day in US car wrecks. To use an analogy, vaccines should be cautiously and carefully tested, but requiring absolute safety would mean we’d still be losing thousands to polio every year.

      • 0 avatar

        The red herring here is the initial push from Musk et al that autonomy would solve the carnage problems on our roads. The focus seems to be shifting to the money that could be saved/made by not employing drivers, instead of safety. I suspect that AVs will need to have some kind of identification similar to “STUDENT DRIVER” signs so drivers can give them a wide berth.
And I suspect that conundrum, below, has a good point.

        • 0 avatar

          ” the initial push from Musk et al that autonomy would solve the carnage problems on our roads”

          I think that even with a fantastic super AV system in the distant future with capabilities far beyond any human, some people will still die on the roads. There are situations that will happen that nothing can prevent or escape.

        • 0 avatar

That is not really a “red herring.” A red herring is something that distracts and doesn’t matter; in my example, choosing who to kill in an unavoidable accident is simply very uncommon. Musk’s claim of reducing carnage is not a red herring. I think we all agree reducing auto deaths is important. You can disagree with the claim, but it is not a red herring.

Musk does not say he can eliminate auto deaths, just greatly reduce them. I think he is right.

  • avatar

    I’m not a big government proponent, but this is the sort of thing the NHTSA should protect us from to justify its existence. How many recalls were announced today?

    • 0 avatar
      Art Vandelay

      Exactly. Maybe they could start by actually enforcing the 5 mph bumper standards then worry about complex stuff.

      • 0 avatar

        The bumper standards were gone by 1989. They only lasted about 15 years. I don’t know if there are any standards today relating to bumper damage, at least not any that matter. The bumper on one of my cars can’t stand up to a dirty look.

        • 0 avatar

          Further NHTSA research showed the 5 MPH bumper standard came at the expense of passenger safety, pedestrians and others. And dogs! And of course fuel economy.

The mismatch of bumper heights made it further irrelevant as SUVs, pickups and crossovers became mainstream.

          • 0 avatar

Are you sure it wasn’t IIHS claim studies showing it cost insurance companies far more to repair 5-mph-bumper cars involved in crashes between 6 and 20 mph?

  • avatar

    Dear NHTSA:

    We, the undersigned car manufacturers and advanced technology companies want you to know that many prominent and extremely wealthy citizens and pension funds and venture capitalists, who insist on early return of capital, have invested billions of dollars with us on our earlier projections of deploying autonomous driving vehicles by 2020.

We must therefore insist that you allow us to test our autonomous driving prototypes on the public highways in these United States free of all restriction, petty or otherwise, so that we may deploy half-as*ed technology untrammeled by regulations of any sort. That way we can more rapidly make and sell sort-of-safe autonomous vehicles to the general public and receive the return on capital our investors demand, as soon as possible.

    If not, publicly elected lawmakers will be pressured by some incredibly important and self-centered private interests to abolish the NHTSA and its recalcitrant and irritating staff, who are currently insisting on safety rules for prototype autonomous vehicles. We know better than these regulators how desirable it is that return on capital not be delayed by a nanosecond longer than necessary. Anything preventing immediate return on investment must be overlooked as detrimental to the private good.

    We insist you understand the ramifications and abolish your petty restrictions at once and leave our experts to experiment freely on the public immediately.

    Or else.


Signed: every car manufacturer, advanced tech company, and AI researcher that matters, all of whom need to return investment to keep investors satisfied.

  • avatar

The regulators need to be very careful. These systems are still very primitive. People above are discussing who the vehicle will prioritize, while current systems can’t even distinguish between a garbage bag, a hill, and an actual solid object. People are falling for headlines while failing to understand the limitations. Current collision warning and autonomous braking systems are becoming the number one reason for buybacks because customers don’t understand that these systems are nowhere near 100 percent effective. You can see this with all the Tesla Autopilot deaths. People have seen too many movies and put way too much trust in technology.

    • 0 avatar

An autonomous vehicle needs intuition; it can’t just react to something that has already happened. Good intuition is one of the things that separates good human drivers from bad ones, and it applies to machines as well. They need to be held to a far higher standard than the absolute best human drivers, and they can achieve that with advanced sensors. Detecting hidden potential hazards around corners by analyzing reflections and moving shadows is possible and would make autonomous vehicles better than even the best human drivers. But these systems are still under development.

      I think it is possible to build some truly great AVs, but we’re just not there yet and I won’t even dare guess the time frame.

  • avatar

Maybe start with long-haul autonomous trucks that leave from big parking lots outside cities. As soon as the weather deteriorates, they are forced to stop. For the first and last mile through a more urban landscape, a human driver takes over.

  • avatar

    get a horse!
