January 30, 2013

It’s now apparently legal to have self-driving cars in California and Nevada, and this should spread across the country rapidly. One industry report predicts we’ll have them by 2019. For the purposes of this article, let’s assume that the costs will come down slowly but surely and adoption will grow quickly. Let’s jump all the way to the end point, where self-driving technology is safe, reliable, and mandatory (yes, mandatory), just like seat belts, air bags, and so forth.

Last time I wrote about robocars, I focused on the computer security threats (and the risk that hackers will steal your car remotely). This time I want to focus on the regulatory and implementation issues. Consider the case of the humble four-way stop. Today, you’re legally required to stop, even if there’s nobody there. Your robocar, however, could send out a message. “Anybody approaching this intersection?” If not, you blow on through. (Saving time and energy!) Likewise, if every car can compute exactly when it will arrive at the intersection, that means they can negotiate with one another. Maybe you speed up a little and I slow down a little and we nicely miss each other by a few inches. Sounds great, right? Extrapolate a little bit more, and traffic lights become completely unnecessary. Instead, you’ll have cars flying through the intersection, seemingly at random, but always managing not to hit each other. Even if a car experiences a tire failure or other catastrophic event, it can communicate that to everybody nearby, and they’ll respond quickly and safely. (But see my caveats below.)
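The arrival-time negotiation imagined above can be sketched in a few lines. This is a toy model, not any real V2V protocol; the function name, the clearance window, and the numbers are all invented for illustration. Each car broadcasts its predicted arrival time at the intersection, and any car whose window overlaps an earlier one gets pushed back just enough to miss it.

```python
# Toy sketch of intersection slot negotiation (all names and numbers
# are invented): each car reports distance and speed, we compute its
# predicted arrival time, and conflicting arrivals are pushed back
# until they are at least `clearance` seconds apart.

def negotiate_crossing(cars, clearance=0.5):
    """cars: list of (name, distance_m, speed_mps). Returns a dict of
    adjusted arrival times spaced at least `clearance` seconds apart."""
    # Predicted arrival time at the intersection for each car
    arrivals = sorted((dist / speed, name) for name, dist, speed in cars)
    schedule = {}
    last = float("-inf")
    for eta, name in arrivals:
        # If two cars would occupy the intersection together,
        # the later one slows slightly (its slot is pushed back).
        slot = max(eta, last + clearance)
        schedule[name] = slot
        last = slot
    return schedule

print(negotiate_crossing([("A", 100, 20), ("B", 105, 21)]))
# → {'A': 5.0, 'B': 5.5}
```

In practice the "push back" would be fed back to the car as a small speed reduction; the point is only that once every car can state its ETA, the conflict resolution itself is almost trivial.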

Now consider that all the Google robocars have a big red button next to the steering wheel that forces the computer to disengage and return the car to your manual control. If you freaked out in one of these busy intersections and hit the big red button, everybody else’s scheduled entry to the intersection is now at risk. Nobody can predict what you’ll do next. Consequently, you could be liable for the damage caused by taking manual control of your car!

Of course, in the future, we’ll still have pedestrians, bicycles, roller skates, pets, and so forth. While a car can negotiate a very specific plan to go through the intersection, pedestrians and bicyclists will almost certainly still be subject to their current constraints. This leads to an interesting question of how robocars and pedestrians will relate. We could retain the current press-to-walk buttons and walk/stand signals. We might instead install fancier sensors that detect pedestrians and telegraph their presence to every car approaching the intersection, forcing the cars to slow down and accommodate you, even if you’re jaywalking in the middle of the street. We might also require pedestrians to carry beacons that broadcast their location (maybe building this into their super-duper smartphones) and use those phones to tell them “please wait 30 seconds and then traffic will open up for you.” While this will work great for most cases, there will always be exceptions. For example, unless global warming kills off all wildlife, we’ll still have deer and other critters with which to contend.

Recall William Gibson’s famous quote, “The future is already here — it’s just not very evenly distributed.” That tells us a good deal about how we’ll ultimately solve these problems. There will be high-traffic intersections and roads that will be heavily instrumented: Total Traffic Awareness. Likewise, there will be lower-traffic intersections and rural areas where the cost/benefit of modern instrumentation won’t justify it. When our robocars have more information, they’ll be able to drive more aggressively. Without this information, they will necessarily revert to our current, more conservative traffic behaviors.

But what about your award-winning, meticulously restored V8 muscle car? If you want to use it on public roads and it doesn’t have a robo-drive controller, you may be restricted to driving only during off hours. You will almost certainly be required to have a transponder to warn all the other cars, and you’ll pay a lot of money for the privilege of driving it, since you’re slowing everybody else down. And, of course, you’ll be unhappy at the lack of parking spots, since the robocars just drop off their occupants and head off to a remote garage somewhere. Maybe I’m wrong, but you probably won’t want to drive your classic car except on special occasions.

Caveats: One of the many obvious gains to be had from robocars is that they can form freeway “trains” with minimal spacing between them. This improves road utilization and saves energy, since the lead car in the train is breaking the wind for everybody else. But what if something goes wrong, mechanically, with the lead car? Time will pass before the pack has figured this out and has begun to take countermeasures. That might not be enough time, particularly as you pack the cars in tighter. The same problem would occur in instrumented traffic intersections and anywhere else where cars are negotiating over their future locations. What’s the solution? More information and better predictions. Say the car in front reports it has low tire pressure on one wheel and the tire is nearing the end of its rated treadwear; it might still be perfectly drivable, but the cars behind will compute an increased probability of a tire failure and will give the lead car a little bit of extra room, just in case. They might even compute which way the front car is likely to spin and preemptively stay to the other side. Better safe than sorry, right?
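The tire-pressure example might look something like this in code. The risk heuristic, thresholds, and scaling constant are pure invention for illustration; the idea is just that a reported health summary from the car ahead translates into a wider following gap behind it.

```python
# Illustrative only: a crude risk model and a gap that widens with risk.
# All thresholds and constants here are made up.

def tire_failure_risk(pressure_psi, tread_fraction_left):
    """Crude heuristic: low pressure and worn tread raise the risk."""
    risk = 0.01                       # baseline probability
    if pressure_psi < 28:             # under-inflated
        risk += 0.05
    if tread_fraction_left < 0.2:     # near end of rated treadwear
        risk += 0.10
    return min(risk, 1.0)

def following_gap(base_gap_m, risk, sensitivity=200.0):
    """Widen the base gap in proportion to the reported risk."""
    return base_gap_m + sensitivity * risk

healthy = following_gap(8.0, tire_failure_risk(34, 0.8))    # ~10 m
worn    = following_gap(8.0, tire_failure_risk(26, 0.15))   # ~40 m
print(healthy, worn)
```

The “which way will it spin” prediction mentioned above would be a fancier version of the same move: turn reported telemetry into a probability, and turn the probability into extra margin.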

For this to work, we’ll need two things: correctness and trust. We need all of these sensors and car-to-car messaging protocols to work correctly, but we also need some assurance that nobody’s cheating. If any car is making stuff up, it could cause all sorts of mayhem for the other cars in the neighborhood. Computer security academics have been working on this problem for almost a decade now (see, for example, this research group in Switzerland), but the legal side of the equation is pretty interesting. Car makers will bend over backward to avoid having liability for crashes. In order to do that, we can predict that they will have standardized, government-approved software (“don’t blame us; you were running the standard package, same as everybody else!”), along with tamper-resistant mechanisms to keep you from monkeying with that software. Maybe automotive tinkering will still be allowed, but expect it to be treated the same as that V8 muscle car: if you tweak your car, you’re restricted in when and where you can drive it. Yeah, this sounds a bit like a dystopian future, but it mostly indicates the transition from cars as a romantic possession to cars as a boring utility to get you where you need to be.
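The “nobody’s cheating” requirement, in miniature: each car signs its broadcasts, and receivers discard any message whose signature doesn’t verify. Real V2V security proposals use public-key certificates (e.g., IEEE 1609.2) rather than a shared secret; the shared key below is an assumption made purely to keep the sketch short.

```python
# Minimal sketch of authenticated car-to-car messages. The shared key
# is a stand-in for a real per-vehicle credential; production systems
# would use certificates, not a fleet-wide secret.
import hmac, hashlib, json

SHARED_KEY = b"demo-fleet-key"   # illustrative placeholder

def sign(message: dict) -> bytes:
    # Canonical serialization so both sides hash identical bytes
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(message: dict, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing leaks
    return hmac.compare_digest(sign(message), tag)

msg = {"car": "A", "eta_s": 5.0, "lane": 2}
tag = sign(msg)
assert verify(msg, tag)                 # genuine broadcast accepted

forged = dict(msg, eta_s=2.0)           # a cheater claims an earlier slot
assert not verify(forged, tag)          # ...and is rejected
```

Signing alone doesn’t stop a properly credentialed car from lying about its own state, which is exactly why the tamper-resistance and standardized-software predictions above matter: the signature pins the message to the car, and the sealed software is what makes the car’s reports believable.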

Related reading: science fiction author and computer scientist Vernor Vinge has a great book called Rainbows End. He describes a near-future to our own that’s full of interesting gadgets. His vision of what cars might become is pretty close to what I’ve been talking about here.



78 Comments on “Self-Driving Cars: The Legal Nitty Gritty...”

  • avatar


There are something like 150 million cars in the fleet today. Hybrids can be cost-effective and practical for a nominal additional cost, but after 12 years they still make up a negligible portion of the new car market.

    SDC’s will follow a similar trajectory. We might see 10% market penetration by 2020 but that’s a stretch and it would still take many years to switch to a fleet that’s a bare majority of self-driving cars.

    Mandatory is a l-o-n-g ways off.

    • 0 avatar

      Hybrids are hardly a negligible portion of the new car market – heck, the Prius was the top selling car in all of California last year, and the third best selling car… in the WORLD.

      But yes, mandatory SDC’s are a long way off indeed.

      • 0 avatar

Hybrids may be pervasive now, but they still make up a minority of the car tech out there. Not to mention it’s taken over a decade to get to this point. I think SDC adoption will be even slower, more akin to electric cars. The tech will be vastly more expensive than just adding a small supplemental motor, so getting the majority of the population to buy cars that cost more than the average income will certainly be an uphill battle.

Add to that the fact that our government can just barely maintain a “dumb” infrastructure, let alone a “smart” one.

        So yeah, I agree very much that mandatory SDCs are decades off, but I think mass adoption is just as far.

    • 0 avatar

      Several posters have commented on my “mandatory” comment. I went into this more last time, but here’s the argument. First things first, this technology will be expensive. That laser rangefinder on the top of the Google car is apparently $75k, all by itself. Until the costs come down, this technology will mostly appear in high-end luxury cars, commercial shipping / trucking, and maybe taxis. That will establish the legal groundwork for how robocars will interact with human-driven cars. Initially, expect robocars to be required to follow the letter of the law, stop for every potential pedestrian risk, etc.

      Things will change when the robocar technology gets cheap enough that it’s no more exotic than airbags are today. At that point, we’ll have a decade or more experience with robocars to get comfortable with them. We’ll know how well they do in inclement weather. (Hint: they can see much better than you can.) We’ll have statistics on their accident rates. (Hint: they don’t get drunk or fall asleep at the wheel.) As such, if you buy a robocar, you’ll almost certainly get a huge insurance discount. From there, it’s a hop-skip-and-a-jump to mandatory robocars, and that’s where my story begins… *after* they’ve been made mandatory.

      Some posters have brought up computer security concerns. That’s my specialty, of course, so it’s certainly something I’ve given some thought to. Suffice to say that Joe Sixpack isn’t going to be any better defending his car against cyber-attack than he is at defending his smartphone or home computer. (Hint: not very well.) On the other hand, a future ZipCar or taxi service has a very strong financial incentive to defend itself. We might well see cars drift away from something you own to something you rent/borrow when necessary. This already works today in big cities like New York. Expect it to spread elsewhere.

      In our lifetime? Maybe. That Time article suggested robocars will be on the market by 2019. Add a decade and I could see them becoming mandatory. Add a few more years and you’re where my article is pointing.

      • 0 avatar

        I don’t doubt some politician with a gleam in his eye will shoot for mandatory at some point. I wish him the best of luck, he will need it.

SDC may be present in 2020. They may be ubiquitous for new cars in 2030. I still regularly see cars from the early 2000s and mid-1990s on the road; supposing a 2030 car has a similar lifetime, it’s likely that 2040 would be the first time SDCs would represent a majority of road traffic, and 2050 would be the first time you could reasonably expect to rarely see a non-SDC in typical commuter traffic.

        SDC MUST be designed to cope with non-SDC (also: pedestrians, varmints, cyclists, motorcyclists, etc) because of this 30 year adoption gap.

        I would expect the typical SDC-enabling BOM to dip well below $5k before SDC is available on a mainstream vehicle.

  • avatar

    A couple of questions:

    Will a licensed driver be required in a self-driving vehicle?
    Can the owner get a DUI if the car is self-driving?
    Who (or what) will be liable for accidents?

    • 0 avatar

      1) Yes.
      2) Yes.

Self-driving cars will always need a capable human present to act as the commander of the car. (So no sending the car to school to pick up the kids while you’re at the bar…)

3) Same as now. Generally speaking, there won’t be accidents caused by self-driving cars. (That’s the point…) In the event there was an accident, fault would be assigned as it is now, with the driver ultimately responsible for their own car. SDCs won’t take a turn too fast, they won’t merge into another car, they won’t run a red light or stop sign, etc. If there are faults in the system, or inclement weather, they’ll turn duties over to the driver to deal with it. In normal driving, they won’t crash. (They never have so far, and 100% perfection is pretty good considering that Google’s current fleet has driven more than half a million miles on public roads. I can’t imagine that there are many humans who have driven 500,000 miles without any hint of an accident.)

      Also remember that the primary uses will be on 1) long highway hauls and 2) in stop-and-go traffic gridlock. These are very easy for self driving cars to negotiate, and any accidents are generally caused by people not paying attention, falling asleep, or being jerks. SDCs don’t have those problems.

    • 0 avatar

As a lawyer, I had many of the same legal questions. Will we still need car insurance (I would imagine the powerful insurance industry would have some say in this)? What if there is some type of malfunction or a hacker (I never trust computers) that causes your car to violate traffic laws? Who is liable if this malfunction causes catastrophic injuries or death?

      • 0 avatar

        What if, as a programmer yourself, you override the default settings in your vehicle to optimize your commute? Have you voided your insurance? Have you broken a law?

        I’m talking about something like upping your max available speed from, say 65 to 75, or altering the digital footprint of your car to allow more stopping space fore and aft.

      • 0 avatar

What if there is some type of malfunction (or a hacker) that causes your car to violate traffic laws? Who is liable if this malfunction causes catastrophic injuries or death?

        Exactly the same as now. Cars, right now, are almost wholly computer based, and could malfunction or be hacked. The liability won’t change any just because the tech becomes more complex.

      • 0 avatar

Research is hard, but worthwhile. Here are the current Nevada regulations:

        A quick summary:

        -Google et al are still, as far as I can tell, in the testing phase in Nevada, meaning they are licensed to test autonomous cars in the state, but have to have two people in the car at all times, one of them being the driver, for all intents and purposes: “…one of whom is the operator and must at all times be seated in a position which allows the person to take complete control of the vehicle, including, without limitation, control of the steering, throttle and brakes.” -NAC 482A.130

        So that’s the rule for the red-plate cars you see on Nevada’s roads.

        Once someone gets a car certified (the mechanism for creating a certification facility is in Nevada’s regulations, but not, as far as I can tell, how to certify autonomous cars), the regulations imply that there WILL be a designated operator (with a new, robocar-specific “G” endorsement), but that operator does NOT have to be present in the car.

        I do not know what is meant by that. It could mean there’s a sort of “designated driver” for a whole pool of cars, or it could mean that you can’t summon for use one of these cars unless you’re “G” licensed (the G license rules don’t seem to be in place yet).

My guess, from the technologies being shown off right now, is that there are virtually no technological or economic barriers to autonomous cars. Current robocar demos show incredible flexibility in driving around (and for that matter, we have production cars that can already steer (lane-keeping), brake (collision avoidance), and accelerate (active cruise control) on their own). All of the sensor and computing requirements are the type of thing that gets much cheaper in volume: LIDARs are effectively solid-state devices, so building 1,000,000 at a time is far cheaper per unit than building 1,000; present robocars use off-the-shelf computing hardware; and CPU power is still getting cheaper each year. At a guess, production autonomous driving systems will start out as an option costing four figures, not five.

        In other words, we’re talking leather-interior or bigger-engine money.

        To answer some other questions, well, if your car suffers a mechanical failure today that causes an accident, who pays? (Manufacturers (or in some cases, negligent mechanics) are made to pay if the fault can be put on them). Expect that answer to remain the same.

        The unique selling point will be that they’re very, very likely to cut traffic injuries and fatalities by a vast amount, both for their passengers and for other drivers.

        (And as an aside, slap an IR sensor on the front of a robocar, and the chance a deer will surprise the car by bursting out of a dark forest plummets: the car sees the deer before it can be seen.)

    • 0 avatar


      – You’ll be able to summon a self-driving car, much like you summon a taxi. In that model, whoever owns the car has an insurance policy on it. You might be forbidden from taking manual control of the car unless you have your own insurance.

      – When you’re drunk, you take a taxi home. (Right?) This same model should apply here. Again, no manual control for you.

      – Accident liability is going to be a very interesting question. If everybody’s got self-driving cars and they’re all doing the right thing and an accident *still* occurs, then it might be a no-fault situation. If one of the cars was improperly maintained, causing it to fail, then liability might be easier to assign. If there’s a self-driven car in there, there might be an assumption that they’re at fault until shown otherwise.

    • 0 avatar

      Seeing as you can get a DUI for sitting in your car while it is parked and turned off, I see no reason why a self-driving car would get you off the hook. Police departments are never interested in issuing fewer citations.

      • 0 avatar

That would be up to the courts. As it stands now, the understanding is that you ARE the driver. At some point you WILL drive and be DUI. OTOH, with a self-driver there is no assurance that you, the impaired, will ever be in control. A good lawyer should be able to set a precedent.

  • avatar

Self driving cars can shove it, as far as I’m concerned. I like owning my car, accessing it when I want and how I want. The last thing I want is to hand over that much more control of my life to the government. They’ve already got the police out and about to harass and tax you into driving just how they want you to; I can’t imagine letting them actually drive me. Sure it may up safety and productivity, but at some point life isn’t just about safety and productivity. Government is getting too big, and this is an effect of it.

    • 0 avatar

      And don’t kid yourself, governments and police agencies will use this technology to track people and control certain aspects of their behaviour. They already do this with cell phones. I believe there are enough car lovers out there today (and a lot of them being rich and powerful) to ward off mandatory autonomous cars for the foreseeable future. On the other hand, there are people like my sister, who can’t seem to keep a car between the ditches, who would greatly benefit from this technology.

    • 0 avatar

      The tough part is, in this country the people are the government. So you’re fighting a battle for public opinion.

      • 0 avatar

        Here’s the catch. Self driving cars should appeal to:

        heavy drinkers (don’t worry, they don’t vote and will drive anyway)
youth of driving age (they have better things to do than drive and are voting more than ever)
        soccer moms (would rather do anything than drive to piano lessons, *always* vote).
        elderly (pry the keys from dead hands, many could give up the controls. *Always* vote).

  • avatar

There is a small percentage of human drivers who actually ENJOY DRIVING. They will be the late adopters. Those of us not driving front-wheel-drive econoboxes with dead steering, bland interior designs, and exteriors with no sex appeal at all will be the first to welcome ROBOT CARS.

    Thing is, none of this technology will work until artificial intelligence is created. Until then, cars will require a bunch of sensors: ultrasound, GPS, velocity, etc.

    When those sensors malfunction, the system will not function properly.

Perfect example: every time it snows, our ultrasound sensors get caked up with snow, ice, and dirt and stop functioning – sending warnings to the system. Without the rear ultrasound or radar sensors, the car can’t sense the proximity of other objects.

    The front’s radar guided cruise control works the same way.

    So what do we have:

    #1 ELECTRONIC STEERING is necessary to decrease weight and simultaneously allow for the computer to “steer itself”.

    #2 Camera sensors are needed to allow the computer to detect lane lines and steer itself back into them if it veers off course (ala the 2013 MKZ)

    #3 Radar guided cruise control is necessary to allow the car to track objects ahead and ensure that it can slow itself down and stop itself in the event of an emergency.

    #4 Ultrasound and radar sensors are required for distance scanning.

    Here’s the problem though:

#1 all that stuff is expensive. My S550 had Distronic Plus (a $3,500 option) and even I was scared to use it in some situations where I wasn’t 100% sure the car would be able to stop itself to 0 mph – like on ice.

    #2 Can a car make the decision to swerve to miss another car or sudden object hazard?

    If something suddenly fell in front of me could the robot stop?

If a child playing behind cars runs out into the street, could the car detect them and suddenly stop? I’ve had the “rolling ball” scenario: “if you see a rolling ball, STOP, because a kid will likely run out to get it without thinking,” and I knew from training to stop. Would the car come to a sudden halt for a small object like that?

    I’ve run cats over to avoid swerving and hitting larger hazards. What would the computer do?

    Can a computer make a judgement call?

    And how profitable would the state be without ticketing drivers???

A computerized car would be able to obey ALL laws. I drove a Tesla Model S that wouldn’t let me drive past 80 no matter what. Then I drove an ungoverned version and it wouldn’t let me exceed 150. Cops can’t ever ticket me for speeding if the preset is the speed limit, right?

    And what about parking tickets or no standing signs?

    If I get out of the car and rush into work, the computer could theoretically drive around till it finds a spot and then park in it.

There’d be NO MORE DUI. The computer could let you smoke your legal marijuana, drink heavy amounts of alcohol, or even fornicate in the backseat with your girlfriend, and all the cops could do is cry about it.

    I can’t see it happening, but I think I should be writing a movie script right now.

    • 0 avatar

      You make a lot of good, interesting points. Back in the day, I did my graduate thesis in Human-Machine Interaction between humans and robots. These days I’m a human factors engineer in the automotive industry, so this is a pretty interesting topic for me.

      The machine intelligence already exists to make this happen, albeit in crude form. Sensors will be required regardless of intelligence levels- people use sensors in the form of eyes, ears, skin, nose, mouth, etc.
As you mention, current sensors are not exceptionally robust, and they are expensive, but I think this is where we need to take a bit of a leap. I don’t think we’ll get to mass-produced robot cars by just adding stuff to the current concept of what a car is; we need a complete re-imagining of the car:

      1) By taking the human out of the driving equation (let’s imagine there is NO human intervention), we throw all of the traditional design decisions out the window. No pedals, no gauge cluster, no steering wheel (or column, or power steering). Not needing these controls probably allows a lot more freedom with drivetrain packaging. Don’t need mirrors anymore. Since exterior visibility isn’t a huge safety concern at this point, you could optimize the glass and body-in-white to meet crash requirements with more cost effective designs. No more airbags? Ultimately, I think the cost of additional sensors would EASILY be covered by designing out the driver- you would be shocked at how much of the vehicle design is impacted by the driver and how much more expensive the driving position is than the rest of the seating positions. I would wager that the driver is the single most expensive “part” of the car.

      2) I expect more robust sensors (heated, water repellant, etc…) and a heck of a lot of redundancy. I could see infrared, radar, visible spectrum, audio, GPS, dead-reckoning, networking, and a raft of other sensors I can’t think of to all feed information to the “brain.” If one or three fail, the car can still operate fairly well (maybe it de-rates top speed or following distance based on sensor failures?).
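The de-rating idea in (2) could be as simple as multiplying the car’s operating envelope by a penalty for each failed sensor. The sensor names, limits, and factors below are all made up for illustration; the point is just that the envelope tightens gracefully rather than failing outright.

```python
# Illustrative sketch: each failed sensor shrinks the allowed top speed
# and stretches the minimum following gap. All names and numbers are
# invented, not from any real vehicle.

DEFAULT_LIMITS = {"max_speed_mps": 30.0, "min_gap_s": 1.0}

PENALTIES = {  # multiplicative penalties applied per failed sensor
    "radar":  {"max_speed_mps": 0.7, "min_gap_s": 1.5},
    "lidar":  {"max_speed_mps": 0.8, "min_gap_s": 1.3},
    "camera": {"max_speed_mps": 0.9, "min_gap_s": 1.2},
}

def derate(failed_sensors):
    """Return a tightened operating envelope given failed sensors."""
    limits = dict(DEFAULT_LIMITS)
    for sensor in failed_sensors:
        p = PENALTIES.get(sensor, {})
        limits["max_speed_mps"] *= p.get("max_speed_mps", 1.0)
        limits["min_gap_s"]     *= p.get("min_gap_s", 1.0)
    return limits

print(derate([]))          # full envelope
print(derate(["radar"]))   # slower, with a bigger gap
```

A real system would also need a floor below which the car pulls over rather than limping along, but multiplicative de-rating captures the “fail gracefully, not catastrophically” spirit of the comment.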

      The ability of a car to make judgment calls is terrifying considering how dumb cars are today (really, active cruise control, parking, and lane tracking are pretty low on the intelligence scale). But if we look at the AI body of knowledge, there is a lot of potential for that research to make a robot car that is much safer than a human driver. Consider:

      1) The robot can use a wide variety of sensors to gather much more information than we can detect. I can see the car being a much safer driver in low-visibility conditions or high population density environments like downtown cores.

      2) The robot could be better at missing jaywalkers, cars running red lights, etc. As drivers we expect pedestrians and drivers to follow certain rules- when they don’t we need to react, and we’re not very good at that. A computer could track multiple human-driven cars, pedestrians, animals, and other obstacles simultaneously and predict their actions using statistical models. They could detect a hazard before a human ever sees it.

      3) I could also see the car prioritizing situations better than a human. Imagine driving down a 50mph road and a man chases a deer into the road (bear with me…). One driver might scream and cover their eyes. Another might aim for the deer. A computer could assess road conditions and distances to the obstacles- Can I brake? How big is the deer and what is the probability that it kills the 3 passengers in the car? Can I swerve into a guard rail to slow down and miss both? Maybe in this scenario, hitting the deer would’ve killed 3 passengers, but the car determined that it could slow down enough to only break the pedestrian’s legs by sliding along the guard rail- a decision most drivers probably wouldn’t make.

      This is all a ways out, but I see the potential out there in the AI research.

      • 0 avatar

        I think you brought up some conflicting ideas. In point 1 you state that there would be no more manual controls, but in point 2 you state that there would be a need for redundancy. Wouldn’t the best redundancy feature be a reversion back to manual controls?

        In modern airliners the flight controls are almost all automated, but the manual controls are still there. Why would automated cars be any different?

      • 0 avatar

        These are just my hypothetical thoughts: At this point in the technology I think you’re right that human redundancy is necessary, but I don’t think that should be the end-game for this concept.

        1) I am referring to redundancy in the car systems. For example, instead of just tracking the car in front using radar, the car is using radar, infrared, communicating GPS info, etc… so that one of the sensors can fail without causing an accident.

        2) I think the airline analogy has some merit, but I would argue that it breaks down when we talk about intervention. An airline pilot is highly trained, supported by co-pilots/ground control/etc., and is paid to pay attention with exceptionally severe consequences if he screws up. He also isn’t flying in high-density packs like a freeway- he has seconds or minutes to react vs. milliseconds. And then there are the hours of service requirements. These factors are HUGE in industrial psychology.

I see cars as completely different. Yes, we need a manual override during the early stages of the tech, but I think the car would need to design out the driver completely to achieve the most benefit. The problem is VIGILANCE. If you put untrained, unpaid John Doe in a seat with nothing to do at 5 AM, don’t expect him to grab the wheel in a split second to avoid disaster 45 minutes into an uneventful trip. Human cognitive performance is strongly tied to mental load: if you give someone too much or too little to do or think about, they aren’t very good at the task.

        Then there’s the problem of a blurred line where the human needs to take control- how does the human know when the computer has lost control? Thinking about “do I need to take control?”, “can I do better than the car in this situation” will waste precious milliseconds.

      • 0 avatar
        Amendment X

        @ hf auto: Awesome post. I really hope you submitted a piece for the writing contest, I’d like to hear more from you on the auto industry. Hearing from an insider is quite refreshing!

      • 0 avatar

        “The machine intelligence already exists to make this happen, albeit in crude form. Sensors will be required regardless of intelligence levels- people use sensors in the form of eyes, ears, skin, nose, mouth, etc.”

        As you point out, we are made up of “sensors” called the “five senses”.

        Thing is, WE HAVE INTELLIGENCE, morals, values and more importantly A FEAR of consequences.

        I value a child’s life more than a cat’s. If I have to choose which to kill to save the other, I choose the cat.

        I am afraid of driving through certain areas – the computer is not.

        The most you can do is program a computer to “execute code line 3” if “D” happens. You cannot make a computer smart enough to drive without true Artificial intelligence.

        By the time computers are smart enough to drive, humans will be out of work.

    • 0 avatar

There is a joke about the Airbus A380.

“The Airbus A380 comes with one pilot and a dog. The pilot is there to feed the dog, and the dog is there to bark at the pilot in case he tries to touch any buttons.”

  • avatar

Once all the factors are taken into consideration, it appears to be “a perfectly elastic spherical horse in a vacuum” sort of case.
Not in my lifetime, I believe. I hope.

    • 0 avatar

      Never mind the spherical horse (residential driving is a nightmare for both carbon and silicon drivers), just give me something that can draft a car during rush hour and both traffic jams and gas usage mostly go away.

      That isn’t what is bringing the self-driving car. Keeping boomers from plowing through farmer’s markets is bringing the self driving car. I just want rush hour to go away (and get the mileage from drafting).

  • avatar

    Yeah, I’m not going to have to worry about this in my lifetime. *Maybe* if I moved to SoCal, but that’s not happening. It all sounds great until you put a couple feet of snow in the mix, and all the sensors are iced up. I can see them being used in a dedicated lane on a busy thoroughfare, but self driving cars are ALWAYS going to have to interact with their human driven counterparts. At least in the next 100 years. My $.02.

  • avatar

    Every picture of a self-driving car I’ve ever seen is on a pristine SoCal sunny day. How will they handle inclement weather like the snow and ice we get in the Northeast?

    Are they still going to be plowing along at 65MPH when the road conditions totally don’t support it? How will it “know” what the appropriate speed for the conditions is? What happens when snow has covered up the lane markings on the pavement and the lane departure cameras become useless? Or the three lane freeway has effectively become a one-lane road because snow has only been cleared from the right-most lane?

    What happens when the cameras get covered with ice, fog, or road salt? The backup camera in my car is basically useless in winter because of salt, so I would expect the same to happen to the cameras of a self-driving car.

    How is a self-driving car that has parked itself in some remote parking lot going to brush off 6″ of snow from its cameras, or dig itself out of a snow drift? Are all the remote parking lots for self-driving cars going to be in heated garages?

    Unless they come up with a way to get rid of winter, I don’t see how they are going to resolve some of these issues.

    • 0 avatar

      I just posed the same questions.

      In a number of cars I’ve driven/owned, the computer knows when “ice” is expected due to the thermometer.

      My mother’s STS posts the speed limit of your zone on the driver’s LED screen, and it calibrates the AWD. Same goes for other cars.
      The new Chryslers and Dodges with AWD disable AWD until they think it’s cold enough for ice to be on the ground.

    • 0 avatar

      Good questions. It’s not too hard to imagine that self-driving cars will put the sensors up high, as Google does, to have a better view and to get away from some of the grime. I wouldn’t be surprised if robocar parking facilities have robocar cleaning facilities as well. Likewise, if a camera is essential to the car’s ability to navigate, then you can expect it to have heated sprayers or some other cleaning device.

      If the robocar is buried under two feet of snow, it’s probably not going to be able to dig itself out. Likewise, if the rooftop sensors are covered in snow or crap, the car might refuse to go until you clean them. Maybe they’ll operate like those pop-up headlights we used to know and love, just in order to keep themselves clean and away from harm.

    • 0 avatar

      “Are they still going to be plowing along at 65MPH when the road conditions totally don’t support it? How will it “know” what the appropriate speed for the conditions is? ”

      Given how many cars I saw in the ditch on a recent drive from Boston to NH in the snow, I’d say the bar for being better than the average driver is pretty low.

    • 0 avatar

      It will know what speed is appropriate and what the road conditions are the same way every traction control and ABS system does now, except even more accurately – this is long-solved stuff. Camera lenses can be heated, wipered, and sprayed with fluid – just like headlights have been for 50+ years.

      Amazingly enough, many people working on this stuff have been doing so for several DECADES. Even the newbs have been at it for over a decade. It’s proven to work, it’s being refined, algos polished, and most importantly, all the computing and hardware is cheap enough to do this.

      This is no flying car, it’s real and it’s here. It’ll be 20ish years before it’s ubiquitous on new cars, but it’ll be on some top-line car in 10 years or likely less.
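
      The grip-based reasoning above can be sketched in a few lines. This is purely illustrative (function names and friction values are invented, not from any real vehicle system), but it shows how an estimated friction coefficient – the same quantity traction control and ABS already infer from wheel slip – directly bounds a safe speed:

```python
import math

def max_safe_speed_mph(mu, curve_radius_m):
    """Speed cap for a curve from the lateral-grip limit v^2 / r <= mu * g.

    No safety margin applied; a real controller would be far more conservative.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    v_ms = math.sqrt(mu * g * curve_radius_m)
    return v_ms * 2.237  # convert m/s to mph

# Same 300 m bend, dry asphalt (mu ~ 0.9) vs. packed snow (mu ~ 0.2):
dry_mph = max_safe_speed_mph(0.9, 300.0)
snow_mph = max_safe_speed_mph(0.2, 300.0)
```

      On those assumed friction values, the snow-appropriate speed comes out at less than half the dry-pavement speed for the identical curve, which is exactly the adjustment a human driver is supposed to make by feel.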

  • avatar

    I suspect that the odds that the “self driving car” will eliminate driving are about as high as the odds that cruise control will eliminate gas pedals.

    Some of these cars will crash. At some point, we are going to need a court precedent that determines whether the automaker is liable for such crashes.

    Courts in the US are generally business-friendly. So you can bet that the courts will eventually rule that drivers are generally liable for the crashes caused by their self-driving cars, because the courts will not want to use the law to stifle innovation.

    “Self driving” will probably end up being a feature similar to cruise control, which can be used during some circumstances but has to be avoided during many others. It will probably be a convenience feature, not a way to get your elderly blind grandmother to the doctor’s office by herself.

  • avatar

    Self-driving cars in our lifetimes may end up being used by the elderly and the handicapped. If you are 40 now, you may be using/needing one when you are 80. Other than that, I do not see mass adoption of this technology.

  • avatar

    “since the lead car in the train is breaking the wind for everybody else.”

    Are you kidding? I can’t drive the car and now I have to let someone else fart for me. Unbelievable and unconstitutional!!!

  • avatar

    This is a classic case of an answer to a question no one was asking. And Brin and Page having too much money with nowhere to go. When working on German cars, I always see the engineers doing the same thing. Drives me crazy. But I was satisfied with the Ramco valve for temp control.

  • avatar

    Hadn’t thought about the pedestrian aspect. When cars are able to flawlessly detect, and stop for, pedestrians, we’ll have all kinds of asshats crossing the streets willy nilly. Might as well just declare downtown Portland a car-free zone, it’ll be gridlocked.

  • avatar

    I’d prefer to drive my car myself. Plus, when you drive antique vehicles like I do (never mind that pretty much nothing on the road today could even be adapted), what happens? No more enjoyment of classic automobiles?

    BUT, working in the freight/transportation business, I’d love the day to come where I never have to deal with another truck driver.

  • avatar

    There’s so much “get off my lawn” in these comments.

    Self-driving cars exist today in the form of high-end luxobarges that can accelerate and brake, hold lanes, and come to a complete stop all on their own. The technologies being developed by Google and automakers right now take these systems to a new level of integration and capability. It’s much like going from ABS to the advanced stability control systems in cars today. Oh, but those are NANNIES and we don’t like them. I’m certain your lightning-fast reflexes will steer you to safety as your well-exercised thighs rapidly pump the brakes.

    • 0 avatar
      bumpy ii

      In the long-term (i.e. a few generations after autonomous cars are introduced for general use), *driving* a car will become a hobby like horseback riding. A few folks will do it for whatever reason, probably with special-purpose insurance (think track days) and maybe a tiered license.

      • 0 avatar

        I agree with your prediction. The film “Minority Report” pictured something like this happening in the Washington, D.C. of the future. People many generations from now will likely not lust for the feeling of driving oneself, in the same way that we no longer desire to hear the operator’s voice when we pick up our candlestick phones (though some around here might just be old enough to make that statement untrue).

    • 0 avatar

      “Oh, but those are NANNIES and we don’t like them. I’m certain your lightning-fast reflexes will steer you to safety as your well-exercised thighs rapidly pump the brakes.”

      Spoken like someone who has very little experience driving at the limit. I suppose it would sound scary to be forced to drive that way for a few moments without nannies if you’re not used to it.

      I don’t mind ABS though. It’s great for preventing excess stud wear. It just needs an off switch for deep snow and dirt driving. Stability control? I’d never tolerate it.

  • avatar

    A future where there are no traffic lights and stop signs, where a pedestrian can cross anywhere at any time, sounds like a great future to me!

    Suggestion for the intersection problem: the self-driving cars have sensors to detect animals and pedestrians crossing, so there is no need to have them actively signalling that they want to cross. So, at each crossing, define the first car to approach it as the “boss of the cross”, and it will coordinate all the other cars around the intersection, considering all the factors its sensors and the other cars’ sensors are picking up. At a determined point, the boss will leave the crossing, and leadership and decision-making will be handed over to another car approaching the intersection. In a busy intersection this would be a seamless transition of leadership from one car to another, according to some sort of algorithm and protocol.

    Of course, correctness and trust remain fundamental for it to work.
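
    The handoff scheme described above can be sketched as a toy coordinator. Everything here is invented for illustration (class names, the choose-the-closest handoff rule), not a real protocol – a production version would need fault tolerance, authentication, and timeouts:

```python
from dataclasses import dataclass

@dataclass
class Car:
    car_id: str
    distance_m: float  # current distance to the intersection

class Intersection:
    """Tracks approaching cars and which one is the current 'boss of the cross'."""

    def __init__(self):
        self.cars = {}
        self.boss = None

    def approach(self, car):
        # The first car to announce itself becomes the boss.
        self.cars[car.car_id] = car
        if self.boss is None:
            self.boss = car.car_id

    def leave(self, car_id):
        # When the boss clears the intersection, hand leadership to the
        # closest remaining car (one possible handoff rule among many).
        self.cars.pop(car_id, None)
        if car_id == self.boss:
            self.boss = None
            if self.cars:
                self.boss = min(self.cars.values(),
                                key=lambda c: c.distance_m).car_id

# Three cars approach; the first becomes boss, then leadership cascades
# to whichever remaining car is nearest as each boss passes through.
ix = Intersection()
ix.approach(Car("a", 50.0))
ix.approach(Car("b", 120.0))
ix.approach(Car("c", 80.0))
```

    After `ix.leave("a")`, leadership passes to "c" (the nearer of the two remaining cars), and so on – a seamless chain of coordinators, as the comment envisions.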

  • avatar

    I know this is an uncommon idea in America because of the notion that driving a car is the most desirable way to get around, to assert one’s status, to extend individuality… but wouldn’t a public transportation initiative be more efficient and cost-effective than robo-cars? Maybe a mag-lev train route down the center of the busiest, most congested interstates and freeways as a start. Bus systems that don’t make you dread taking the bus could spur public interest and utilization.

    Just some thoughts that seem a little more reasonable than an outrageously complex infrastructure to support ever more complex vehicles in a nation where many vehicle owners neglect the most basic maintenance and safety checks.

    • 0 avatar
      bumpy ii

      Autonomous vehicles + car-sharing services pretty much equal distributed public transit, without the inflexibility and capital costs of the traditional public models.

    • 0 avatar

      There’s almost no way a public transit anything would be more cost effective. The infrastructure is far too expensive, the logistics are far too limiting, and people are too spread out. All of the efficient routes are in use already, the low hanging fruit has been picked.

      The best type of ‘public transit’ that could come along, in my opinion, is self-driving cars that are shared use, so you don’t have to own a car. An on-call taxi without the most expensive part, the driver. You don’t need to park it at either origin or destination, because it’s just there for your use, then goes to help someone else.

      Cars aren’t really about status or individuality. Some people make them into that as a secondary function. What they’re about is getting from one place to another. And our system of small vehicles that go from any one place to any other place is extremely efficient and desirable.

  • avatar

    I like the act of driving, and I want self-driving cars as soon as possible. Why? Because 90% of my driving is boring around-town crap in my 15-year-old Honda Civic. I would much prefer not having to pay attention on my drives to and from work, my drives to the grocery store, to the fast-food restaurant.

    As for so many of the worst-case scenarios that are being talked about here, they won’t be more frequent with self-driving cars. Catastrophic breakdowns while at speed are vanishingly rare. Accidents happen now with things like kids and deer running out in the road, and people sometimes react well, sometimes react very poorly. Someone else I was talking about this with mentioned that “Trolley Problems” (the ones where you have to decide to switch a trolley track to save one group or an individual) will become something to think about (the car is driving along when suddenly an obstacle pops up: given the choice of hitting the obstacle or swerving into a lane of oncoming traffic, what should it be programmed to do?).

    I also think that people discount too much the ability to work overrides into such a system. It’s certain there will be problems to overcome, and that there will be people injured. But it always needs to be remembered that we have hundreds of people killed and injured each day on the roads now, and that seems to be about the floor. If SDCs can improve on that, then it’s an improvement. We need to make sure that people’s imaginations don’t run away with them, and that we do the best we can to HONESTLY assess the risks, not just say “It’s a machine, it must be bad!”

  • avatar

    I guess this is closer to reality than flying cars. As a pilot and former flight instructor, I can be assured I will never see flying cars en masse (à la the Jetsons) in my lifetime. Too many people have too much of a problem thinking car lengths ahead, not miles ahead. Too many people have a hard time not hitting things when the road is the only thing to drive on, let alone in the wide-open sky.

    Auto-driving cars concern me for many of the reasons already posted (accidents, pedestrians, liability, inclement weather), but another is age. Airplanes at the commercial level are largely flown by automation. Pilots are there to monitor the airplane and make sure the autopilot is doing what it should be. Most modern transport-category aircraft can be flown from minutes after take-off to touchdown at the other end without the pilot ever touching the controls. I love flying as much as I love driving, possibly more so, so I fly the airplane manually as much as I can. But an autopilot can fly smoother than me in 95% of flying situations, and that’s what passengers want, so we usually fly by autopilot for 95% of the flight.

    But as airplanes age, the electronics need updating, replacing, etc. I fly airplanes that are between 10 and 30 years old. The amount of electrical “gremlins” that make you go “what the Hell is it doing now?” can add up over time. It’s important to be conscious of what the plane is doing to stop these things before they get out of hand. Most people don’t seem to be conscious of what their car is doing even when they are doing all the work. And airplanes operated by any US operator have FAA oversight on maintenance, requiring strict control of parts and paperwork when work is done.

    Cars are usually fine when new, but what about 3-5 years of owner neglect or apathy? Who will pay for sensor or computer upgrades? People already shirk maintenance on cars (especially in the US). ECU re-flashes? How many times have you updated your Apple or Android device? What about your Apps? Will we have separate warranty for the auto-drive components?

    The self-driving car is setting us up to be even MORE like the people on the spaceship in Disney’s “Wall-E”. Sit back, do nothing, let computers do the work and get fatter.

    • 0 avatar

      Maintenance is an interesting concern. One conceivable future is that the robocar controller is a piece of “trusted” hardware, in the sense that it’s hard to tamper with and runs software that we all understand and trust. In that world, the controller can be constantly measuring the car’s various systems (tire pressure, wheel alignment, brake pad wear, etc.) and can detect when anything is getting out of tolerance. My limited understanding of commercial aviation is that there is mandatory maintenance for various parts of the plane as a function of how many flight hours it’s undergone. In the same fashion, a robocar that needs maintenance might well refuse to accept an itinerary unless it’s to the mechanic.

      Yeah, sure, an evil mechanic could “pencil whip” the maintenance rather than actually doing it (as several aviation repair outfits have been accused of doing), but that’s another whole can of worms.
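
      The out-of-tolerance gating described above could look something like this. All the names, sensor bands, and the "mechanic-only" rule are invented for illustration; a real trusted controller would be far more elaborate:

```python
# Acceptable (low, high) bands for each monitored measurement.
TOLERANCES = {
    "tire_pressure_psi": (30.0, 36.0),
    "brake_pad_mm": (3.0, 12.0),
}

def out_of_tolerance(readings):
    """Return the list of measurements outside their tolerance band."""
    return [name for name, value in readings.items()
            if not (TOLERANCES[name][0] <= value <= TOLERANCES[name][1])]

def accept_itinerary(readings, destination):
    """A car that needs maintenance only accepts a trip to the mechanic."""
    return not out_of_tolerance(readings) or destination == "mechanic"

ok_readings = {"tire_pressure_psi": 33.0, "brake_pad_mm": 8.0}
worn_readings = {"tire_pressure_psi": 33.0, "brake_pad_mm": 2.0}
```

      With healthy readings any destination is accepted; once the (hypothetical) brake-pad measurement drops below its band, the car refuses everything except the trip to the shop – the same structure as mandatory flight-hour maintenance intervals.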

    • 0 avatar

      @gearhead77 “Cars are usually fine when new, but what about 3-5 years of owner neglect or apathy? Who will pay for sensor or computer upgrades?”

      I’m currently working on technology that can be used for autonomous vehicles, so I can give a little insight directly from the trenches. As a pilot, you’re probably familiar with some of the systems I’ve helped design such as ITWS and ASDE-X. Anyway, I wish I had more time to respond to this thread, but my time is limited and I really have to be careful what I say because I don’t want to help my competitors.

      I thought I’d respond to your concerns about maintenance and upgrades. The approach my organization is taking is to produce an RPA (Robotic Personal Assistant) rather than a dedicated vehicle partially because of the reasons you’ve cited and partially because in a recent DARPA Challenge, they wanted us to produce a robot that could climb into an unmodified vehicle and drive it among other things. They kind of opened our eyes and convinced us that this made more sense in terms of an eventual product.

      An RPA is a much better approach. Rather than have a dedicated autonomous car drive you to the grocery store when you run out of snacks, send your RPA out to get them in a vehicle that it can plug into to gain a few extra sensors, power, and maybe some extra computing resources for driving – probably an evolved version of human driver assistance technology.

      BTW – one major piece of technology we’re missing for true autonomous cars is improvements in intuitive AI. It’s a big problem and not easily solved. You can’t live by sensors alone because a vehicle may not have enough time to respond to a sensor event. It needs to understand that the toddler that just broke away from his mother and started running down the sidewalk might take a sudden left onto the road. Intuitive knowledge that an experienced human driver acquires over time helps us anticipate problems, but it’s completely lacking in some of the current attempts at so-called autonomous vehicles.
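
      The anticipation problem described above – that a sensor event can arrive too late to act on – can be made concrete with a worst-case time budget. Every number and function name here is an invented illustration, not any real system's logic: the idea is simply to slow preemptively whenever the pedestrian's fastest possible move could reach the road inside the window the car needs to arrive and stop.

```python
def stopping_time_s(speed_ms, decel_ms2=7.0, reaction_s=0.2):
    """Time to brake from speed_ms to a stop, plus a control reaction lag."""
    return reaction_s + speed_ms / decel_ms2

def must_slow_down(car_speed_ms, car_distance_m,
                   ped_distance_from_road_m, ped_max_speed_ms=3.0):
    """Conservative check: assume the pedestrian turns toward the road now."""
    t_ped = ped_distance_from_road_m / ped_max_speed_ms  # earliest road entry
    t_car = car_distance_m / car_speed_ms                # car's arrival time
    # Slow down if the pedestrian could be in the road within the window the
    # car needs both to arrive and to come to a complete stop.
    return t_ped < t_car + stopping_time_s(car_speed_ms)
```

      At 30 mph (about 13.4 m/s) and 40 m out, a toddler two meters from the curb forces a preemptive slowdown under these assumptions, while one twenty meters back does not – which is exactly the intuitive margin an experienced driver applies without computing anything.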

      • 0 avatar

        mcs, I appreciate the response and insight given. I also appreciate your work on ASDE-X and ITWS, especially the latter. The advancement in the amount of information available in today’s cockpits for weather and navigation is truly amazing, especially given the short time in which it has occurred.

  • avatar

    I haven’t seen this brought up in the discussion, but there is already a heavily automated transportation industry – aviation. If I were a betting man, I would bet that the adaptation, implementation, regulation, and other ….tions will mirror those in aviation. Busier hubs will have more navaids (per this article), professional drivers will likely have better, more sophisticated automation than private users. Manual driving will likely not go away, it will just be more or less difficult to “do” depending on where you are. Automation has definitely made aviation safer, although there are obviously still accidents – sometimes as a result of factors tied to automation. In other words, insurance, licensing, and driver training will all remain with us in the future. In fact, drivers may have to get an endorsement to drive with an autodriver (autopilot?) and this will likely also require regulatory inspections, annual proficiency checks, and so on.

    We’ll solve the “problem” of people driving by creating new industries and oversight that will provide jobs and careers for our children. After all, isn’t this what America does best?

    • 0 avatar

      “Automation has definitely made aviation safer, although there are obviously still accidents – sometimes as a result of factors tied to automation.”

      What aviation accidents resulted from automation? I can think of a few where the pilot ignored what his/her instruments or automated systems said and that’s what caused the crash, but I’m struggling to come up with any that occurred due to auto-pilot.

      • 0 avatar

        This Qantas incident was caused by bad automation.

        “While cruising at 37,000ft, one of the aircraft’s three air data inertial reference units (ADIRUs) began outputting spikes of incorrect values including erroneous angle of attack data. Minutes later, primary flight computers initiated a pair of dramatic, uncommanded dives.

        “Though lasting only a few seconds, the dives hurled unrestrained passengers against the ceiling and overhead bins, injuring 110 of 303 passengers and nine of 12 crew members.”

      • 0 avatar

        Interesting, I hadn’t run across that one before. Thanks for the link. Apparently it was a combo of bad input coming in and then the algorithm not handling that bad input the right way. There were redundancies, but the algorithm didn’t necessarily handle them right.

        Incidentally, sudden dives are why you always wear your seatbelt if you’re in your seat. A friend’s mom who is a flight attendant had this happen on a flight, and one guy not wearing his seatbelt died of a broken neck during a sudden dive. It can happen even when you’re at a smooth cruising altitude.

      • 0 avatar

        Accidents caused by automation (besides the mentioned Qantas) are still usually lumped into “pilot error”. American crashed a 757 into a mountain near Cali, Colombia in 1995. The navigation fix put into the FMS (flight management system) had the same identifier as one hundreds of miles away. There was Rozo and Romeo, but the identifier for both was “R”. This doesn’t happen in the US too much, but it can in other countries (one of the names was changed after this accident, and the identifier as well).

        The crew didn’t confirm it was the correct one, and due to mistakes made by the tired and delayed crew and the poor English of the air traffic controller (not uncommon, obviously, in Colombia), they ended up descending quickly (on autopilot) because they thought they were too high for where they were. They ended up flying into a mountain at about 9,800 ft.

        How these type of accidents can happen in autodrive cars, I don’t know, but something similar in nature could happen. It depends how much we as drivers will be allowed to control and what the various DOT’s and the government come up with as “solutions”

      • 0 avatar

        gearhead77, I’m familiar with AA965, and it’s one of the crashes I thought of when I thought of pilot error. Everything you’ve described falls under pilot error, but you forgot something:

        Before the plane hit the mountain, the Ground Proximity Warning System went off to indicate they would hit the mountain. The pilots tried to climb without retracting the speed brakes – complete and utter pilot error. The automated system was working just fine; the pilots just didn’t use it right. The investigation showed that the plane would have cleared the mountain if the pilots had retracted the speed brakes.

      • 0 avatar

        Yeah, it was pilot error, corntrollio, no doubt. My point was the automation led them down that path because they didn’t double-check and followed it blindly. I fear that we could have the same problems in our self-driving cars, or even with the increasing number of nanny aids. Hell, some people barely look before backing up or changing lanes. How long before back-up cameras cause that to be the norm, or BLIS systems cause people not to look before changing lanes?

        I believe the AA965 accident caused Boeing to make a change to the 757 to retract the speedbrakes above a certain power setting. Why it didn’t have that to begin with I don’t know. But as with most regulation or design flaws, nothing gets done until someone dies.

      • 0 avatar

        I had two general scenarios in mind. First, the automation reaches a limit or fault and drops out, and the unsuspecting operator is caught out complacent and fails to recover – see Colgan in Buffalo and Air France over the Atlantic. Second is a lack of operational situational awareness by the operator: the automation does as it’s told, leading to an accident – see the recent Sukhoi demo flight or the AA flight in Cali. Interestingly, in both of these cases, the pilots failed to heed automated warnings because the automated warnings did not jibe with the operators’ mental picture of the situation. More generally, there are serious issues w/r/t human/machine interfaces, in addition to technological complacency and erosion of manual operating skill due to the high reliability of automation. When driving, it is advisable to remind yourself that the machine can, and may indeed try to, kill you.

  • avatar

    They will pry my steering wheel from my cold dead hand before I am forced into a mandatory self-driving car.

    When I’m 80 and senile, I can see advantages, until then, let the horsepower roar and the hooning continue.

  • avatar

    I think people are going way overboard as to how and when this is going to happen. It is going to be a LOOOOOOONNNNNGGGGG time before we have self-driving cars in city centers. But what we will have SOON, because we practically have it NOW, are cars that can drive themselves on a limited access highway. High-end luxury cars already have radar-cruise control and lane-departure assistance that can make steering corrections for you. That is 95% of the way there. And frankly, that is about all I want – there is nothing more boring or mind-numbing than cruising along the interstate with the cruise on 72mph. Let the car do that all by itself.

    As far as legality, I see it as no different than a pilot flying an airplane at cruise with the autopilot on. The DRIVER is still 100% responsible for what happens, and needs to be ready to take over at any time. If you are an idiot and try to use the autopilot when it is not appropriate, you bear the responsibility. Just like trying to go 70mph on a sheet of ice.

  • avatar

    Most people don’t appreciate the profound effect this will have on society. In fact, it is not just a transition from cars as fun devices to a utility. It is a transfer of control from the individual to the collective. Each one of these control devices will have a GPS linked back to a centralized control. Individual car ownership will become increasingly expensive until it is cost-prohibitive. Humans will eventually be deemed too unsafe to drive. It will be cheaper to summon a car on your Google device from a company like Zipcar when needed. Google can then stream you ads and suggest things on your route, as well as constantly take pictures with the roof-mounted video.

    Cab companies, shipping, and trucking companies will be all over this, and I expect to see this technology in cities first.

  • avatar

    As much as I hate to admit it, these things are right around the corner. Attitudes towards cars have changed over the years and the vast majority of young people just aren’t into cars the way previous generations have been. Sure, there will be some die hards, but once we hit some sort of critical mass self driven cars will be mandated and most people just won’t care.

    What’s more, I think it will be better. It will save time, energy and stress. Imagine a car designed more like a conversation pit, with sofas facing one another, where people can talk, watch TV, surf the net, eat and drink, etc. It will make traveling by car what traveling by corporate jet is for the elite today.

    • 0 avatar
      Don Mynack

      Oh joy. I hate TV now…in the future, I’ll get even more of it thrown at me. I can’t even drive away from it. Sounds the opposite of wonderful.

      Has anybody told you your future sucks?

  • avatar
    Brian P

    In my world of under-posted speed limits and cell-phone yapping pedestrians who walk around without regard for their surroundings, a self-driving car that rigidly follows every speed limit and stops for every foreseeable pedestrian interaction will result in gridlock at some times and annoyingly long journey times the rest of the time. And that’s on top of what it would have to do if there is a possibility of snow and ice.

    “Driver aids” – active automatic collision-prevention systems that prevent the driver from doing something that would result in a collision – are already happening, we’re only seeing the beginnings of it and these systems are going to get a lot better (and a lot more intrusive). But completely autonomous under every weather condition and every traffic condition, I can’t see it realistically happening.

    Being a motorcyclist, I really don’t have a problem with a system that prevents a car driver from turning left in front of me.

  • avatar

    This topic reminds me of an old Larry Niven sci-fi story, “Safe at Any Speed.”

  • avatar

    I just do not find the author’s points compelling. The future can be imagined as any fantasy of what-ifs, but the near future we actually live in is made up of far more mundane implementation. Thus trials of self-guided vehicles might occur where you would be least likely to come across them, or in environments where their trial malfunctions aren’t fatal.

    Night delivery vehicles warehouse to warehouse, low-speed industrial areas for freight transfer, remote and flat interstate stretches (outside snow and ice areas) between major metro areas all seem logical, but the personal cell-phone-jabbering, cheeseburger-gnashing slob being ushered around in a robo lazy chair? That’s farther away than the near future.

    Such a list of what ifs, if implemented, would’ve precluded the implementation of just about everything we use today.

  • avatar

    So how are these things going to deal with potholes, ice, and snowbanks in northern climates? Road salt and debris getting in the sensors? The “kid chasing ball” scenario? Construction? Falling trees?

    The self-driving features we’ll eventually see will most likely be limited to “approved” environments. Think some sort of “express lane” in major thoroughfares, and very efficient parking garages.

  • avatar

    (wave) Hi, Dan. Good to see you around again. CSUA represents.

    On the technical front…

    I think that ensuring that automatically controlled cars are the ONLY vehicles in an area – i.e., mandatory transponders and no nearby transponder-failed vehicles – will be required before you start bending standard traffic rules. And there’s always the possibility of a kid running out into a 4-way stop after his ball, which limits the bending that can be done. Bikes and pedestrians and so forth will not go away; “the environment” is not static.

    On the reliability front, a lot more introspection is possible, and desirable, anyway. Cooperative safety-margin management and performance management in vehicle packs – or even sparse populations of communicating self-driving cars intermixed into a larger manually driven population – can improve everyone’s safety and speeds a lot. Things like predictively merging ahead of time, etc. Little behaviors that in fluid dynamics terms amount to reducing the viscosity.

    On the amusements front, someone should enter a Lemons race with a self-driving car, and see what happens. The recent Stanford Audi racetrack results are pretty interesting; Lemons would add highly unpredictable traffic around. If the self-driven car could navigate safely and avoid becoming a traffic hazard it would say something about levels of assurance of handling “stress situations” as it were.

  • avatar

    “Consider the case of the humble four-way stop. Today, you’re legally required to stop ..”

    Yes indeed. And the Europeans figured out the solution over 100 years ago and called it a roundabout.

    The implementation of a few of these in North America recently has caused conniptions among our hide-bound conservative citizens. Foreign, not invented here, illogical, blah, blah, blah.

    Now try to put in driverless vehicles and watch heads explode in apoplectic rage. Old wives’ tales will spring up overnight after the first robocar is perceived to have offed a human. Would you trust an Audi to avoid electronic glitches? They had a hard time designing decent fuseboxes only 25 years ago! I know.

    Say all goes well for a while, and driving skills deteriorate from lack of practice. A system failure is bound to happen during bad weather, as mandated by Murphy, and returning control to an average human to gingerly drive on is just the ticket.

    This whole idea shows what we COULD do. Hell, I grew up in the fifties and I’m really enjoying living in Bubble-Dome city these days, climate-controlled and ever-thing. Organic food, too. It’s a bran muffin paradise.

  • avatar
    Don Mynack

    “We might also require pedestrians to carry beacons that telegraph their location (maybe building this into their super-duper smartphones) and use those phones to tell them ‘please wait 30 seconds and then traffic will open up for you.’”

    This will never, ever, ever happen.

    I hear a lot of “mandatory” and “require” in this article; here’s some more:

    “If you want to use it on public roads, and it doesn’t have a robo-drive controller, you may be restricted to only driving off hours. You will almost certainly be required to have a transponder to warn all the other cars, and you’ll pay a lot of money for the privilege of driving it, since you’re slowing everybody else down. And, of course, you’ll be unhappy at the lack of parking spots, since the robocars just drop off their occupants and head off to a remote garage somewhere. Maybe I’m wrong, but you probably won’t want to drive your classic car except on special occasions.”

    All hail central planning! Do not deviate from the norm!

    Seriously, if this shit comes to pass, I’m buying a gyrocopter.

  • avatar

    The key to this is that you have to sell the tech: “What’s in it for me?” The obvious selling points are eliminating gridlock and driver fatigue. If you can remove those, it’s a win. Limited-access lanes with high speed limits are one scenario. Long-distance hauling without stoppages is another.

    Neither is an inner-city solution, and that’s why a driver will still be mandatory. Still, for those long commutes, how would you like to get to work in your car on a “land train” that sends you into the city at 150 mph and drops you off at a centre close to your destination? Cutting 30 minutes off an hour-long commute would sell.

  • avatar

    I have a lot of trouble with the “mandatory” part, too.

    What I can see, though, especially in the beginning, are things like express/HOV lanes on the freeways reserved for driverless use — complete with things like higher speed limits, so the folks who haven’t bought in get to watch the early adopters zip by 20 mph faster than them.
