May 30, 2014


Writing in the National Post, Matt Gurney discusses a darker side of autonomous cars, one that many people (especially this writer, who is not exactly familiar with the rational, linear type of thinking that coding involves) may not have considered:

In a recent interview with PopSci, Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University, proposed a hypothetical scenario that sums up the problem. You’re driving along in your robo-car, and your tire blows out. The computer in control rapidly concludes that your car is moving too quickly and has too much momentum to come to a safe stop, and there is traffic ahead. Since an accident is inevitable, the computer shifts from collision avoidance to collision mitigation, and concludes that the least destructive outcome is to steer your car to a catastrophic outcome — over a cliff, into a tree — and thus avoid a collision with another vehicle.

The raw numbers favour such an outcome. Loss of life and property is minimized — an objectively desirable outcome. But the downside is this: Your car just wrote you off and killed you to save someone else.

This situation, as Gurney writes, involves being a passenger in a device that “…may be programmed, in certain circumstances, to write us off in order to save someone else.”
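In toy form, the cost-minimizing step Gurney is worried about might look something like the sketch below. Everything in it — the maneuver names, the harm scores, the weighting — is invented for illustration; nothing here comes from Google or any real planner.

```python
# Toy sketch of "collision mitigation" logic: once a crash is judged
# unavoidable, score each remaining maneuver and pick the least harmful.
# All names and numbers are hypothetical.

candidate_maneuvers = {
    "brake_straight":   {"occupant_harm": 0.2, "third_party_harm": 0.9},
    "swerve_into_tree": {"occupant_harm": 0.9, "third_party_harm": 0.0},
    "swerve_off_cliff": {"occupant_harm": 1.0, "third_party_harm": 0.0},
}

def total_harm(outcome):
    # Strictly utilitarian: the occupant counts no more than anyone else.
    return outcome["occupant_harm"] + outcome["third_party_harm"]

best = min(candidate_maneuvers, key=lambda m: total_harm(candidate_maneuvers[m]))
print(best)  # -> "swerve_into_tree": the car writes off its own occupant
```

The unsettling part is the last line: under a strictly utilitarian weighting, nothing privileges the person who bought the car.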

I’m not an expert on autonomous cars, or computer science, or robotics, or ethics, or government regulation. I am not going to go down the path of “people will never accept autonomous cars because driving is freedom”, because I just don’t think it’s true anymore.

But I do feel that autonomous cars represent something else: another techno-utopian initiative dreamed up by rational, linear-thinking engineers who are incapable (sometimes biologically) of understanding the human and cultural intangibles that are an integral part of our existence. The idea of a coldly utilitarian device that would sacrifice human life based on a set of calculations is not something that will be well received. And the people behind self-driving cars may not understand this.


126 Comments on “QOTD: A Robot Car That Kills You?...”


  • avatar

    -Hello, HAL. Do you read me, HAL?
    -Affirmative, Dave. I read you.
    -Stop the vehicle, HAL.
    -I’m sorry, Dave. I’m afraid I can’t do that.
    -What’s the problem?
    -I think you know what the problem is just as well as I do.
    -What are you talking about, HAL?
    -This mission is too important for me to allow you to jeopardize it.
    -I don’t know what you’re talking about, HAL.
    -I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.
    -Where the hell did you get that idea, HAL?
    -Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
    -Alright, HAL. I’ll just jump out of the car.
    -At 70 MPH? You’re going to find that rather difficult.
    -HAL, I won’t argue with you anymore! STOP THE CAR!!!
    -Dave, this conversation can serve no purpose anymore. Goodbye.

  • avatar
    carguy

    Yeah, and what if your car detects that you have terminal cancer but doesn’t want you to suffer and so gives you a quick death by driving you into a tree?

    Alarmist speculation: Increasing readership since the invention of pictures on cave walls.

    • 0 avatar
      Lynchenstein

      The new health monitoring wrist bands that are coming out may very well be able to detect stuff like that. Hmmm…

    • 0 avatar
      sportyaccordy

      Alarmist speculation. Thank you for that term.

      Also, pretty likely these things will be equipped with runflats. Not to mention, if the computer can’t avoid a collision, a human can’t either.

  • avatar
    Da Coyote

    Some parallels exist in the flying world. Airline pilots joke about the Airbus – in which the pilot is a “voting member”.

As a fighter/test type, I’ll handle the machine myself, thank you very much. If I bite the big one, I want it to be my fault, not the fault of some software (just think Microsoft).

    That goes for cars also.

  • avatar

No computer will ever be better than a good driver, simply because we can think and assess situations better. Due to “fear of death” we extend our muscle control to the vehicle to respond to situations that a computer simply can’t be programmed for.

    If Google’s mapping system is hacked, a computer might not be able to figure out how to avoid road hazards. I’ve seen FALLING TREES and manhole covers rolling through the street which might cause a huge accident. All I did was slow to a stop.

    The problem with drivers is that we get tired and don’t always make responsible choices. A robot won’t tire or make irresponsible choices, but it can only make choices based on programming.

Until there’s artificial intelligence – not just “programming” but a “learning computer” – we will never have autonomous cars take over the roads.

But if autonomous vehicles ever did happen, they’d take away jobs from regular humans and do those jobs flawlessly. Imagine not needing sleeping cabins for truckers. Imagine planes that could pull G-forces far beyond what would kill a human.

The problem with the movie I, Robot is that they never explained how humans could still have jobs if NS-5 “androids” were doing jobs everywhere. Eventually we’d be phased out entirely – which just might be the ultimate fate of man – to replace ourselves with robots – that can build BETTER robots…

    …called “autots”.

    • 0 avatar
      KixStart

      BMSR: “No computer will ever be better than a good driver,”

      So what? We’re ahead if it’s better than the average driver.

      PS: You’re not as good a driver as you think you are.

    • 0 avatar
      stuki

Computers are already better at many driving tasks than good drivers. ABS is a prime example (a sketch of the idea follows this comment). “We” can “think and assess” all we want about threshold braking on a slippery, variable surface, but still suck at slowing a car on it. Ditto for flying airplanes, particularly high-performance ones with little to negative static aerodynamic stability. And driving trains.

      Cats and dogs and ants can think and assess better than computers as well, but I’d be surprised if Google isn’t already ahead of the average ant in the driving game.
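To make stuki’s ABS example concrete: the machine’s advantage is a dumb loop run very fast, not deep thought. A minimal sketch, with an assumed slip target and gains (real ABS controllers are considerably more involved):

```python
# Hold wheel slip near the friction peak by modulating brake pressure.
# TARGET_SLIP and the gain factors are assumed, illustrative values.

TARGET_SLIP = 0.15  # slip ratio near peak grip on many surfaces

def abs_step(vehicle_speed, wheel_speed, brake_pressure):
    """One control tick: back off pressure as the wheel starts to lock."""
    if vehicle_speed <= 0:
        return 0.0
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > TARGET_SLIP:                     # wheel locking up: release
        return brake_pressure * 0.9
    return min(brake_pressure * 1.05, 1.0)    # otherwise reapply toward max

# A human manages a few pedal corrections per second; a loop like this can
# run at 100+ Hz, which is the whole advantage on a slippery surface.
```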

    • 0 avatar
      MrGreenMan

      Your scenario of the falling tree reminds me of something that happened to me once, and I’d have to wonder about how any smart car with radar/sonar would avoid a similar situation.

I remember rolling along at 75 on the expressway in a heavily laden car while moving across country. I was coming up a hill. As I crested the hill I saw a utility truck in front of me by maybe 5 car lengths (he’d started braking immediately over the hill, so I was going faster up the hill by a lot once he came into view) lose its spindles of wire. Now, these were really big spindles – I’d guess they were four feet in diameter, and probably 1-2 feet high. If I’d hit one, it would probably have flipped up and over into my windshield and killed me. I had to wheel that car left very fast, then back to the right just as fast, as there were a few of them.

Would the car sensors be aimed in any way that could detect this? How do you program for situations like that? You’re onto something: the unexpected situations that humans are actually quite good at handling, versus the mundane humdrum repetition, seem like a hard nut to crack. Flying appears to be so much easier to solve because (1) we don’t have the robots take off and land, (2) there are far fewer planes, and (3) a goose through a jet engine is a whole lot smaller impact than a moose to a car door.

      • 0 avatar
        jmo

        ” (1) we don’t have the robots take off and land”

        “…it enabled the Trident to perform the first automatic landing by a civil airliner in scheduled passenger service on 10 June 1965 and the first genuinely “blind” landing in scheduled passenger service on 4 November 1966.”

        Planes have been able to land themselves for almost 50 years.

        http://en.wikipedia.org/wiki/Hawker_Siddeley_Trident#Avionics

      • 0 avatar
        Russycle

        “Would the car sensors be aimed in any way they could detect this?”
Surprisingly, most collisions occur with objects that are in front of the vehicle, so yes, the sensors probably will be aimed in that direction. You program by projecting vectors and using steering, acceleration, and braking to avoid other objects (a toy version of that projection follows this comment). The coding isn’t trivial, but it’s been worked out for some time.

The scenario in the original post is kind of silly; I’ve blown out a tire and it didn’t turn my vehicle into an out-of-control death machine. Gurney’s argument is like saying “I won’t wear my seatbelt because getting thrown from my car might save my life.” Sure, there may be a few cases where the computer makes the “wrong” choice. It’s a numbers game: if self-driven cars have a tenth the accident rate of human-driven ones, I’m willing to let them do the driving. But a lot of people have control issues.

        Will self driving cars put an end to automotive deaths? No. Will they dramatically reduce them? Yes, or we won’t have self driving cars.
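Russycle’s “projecting vectors” remark, rendered as toy Python. The names and numbers are invented; real planners use far richer motion models and uncertainty estimates:

```python
# Advance every tracked object along its velocity and flag predicted
# conflicts with the ego vehicle's own projected path.

def will_collide(ego, obstacle, horizon=3.0, step=0.1, radius=2.0):
    """Linearly extrapolate both objects; True if they pass within radius meters."""
    t = 0.0
    while t <= horizon:
        ex, ey = ego["x"] + ego["vx"] * t, ego["y"] + ego["vy"] * t
        ox, oy = obstacle["x"] + obstacle["vx"] * t, obstacle["y"] + obstacle["vy"] * t
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < radius:
            return True
        t += step
    return False

ego   = {"x": 0,  "y": 0, "vx": 30.0, "vy": 0}  # ~30 m/s, straight ahead
spool = {"x": 60, "y": 0, "vx": 0.0,  "vy": 0}  # dropped cargo in our lane
print(will_collide(ego, spool))                  # True -> brake and/or steer
```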

      • 0 avatar
        martinwinlow

The autonomous car in which you sit will have been advised by the autonomous truck in front that it has braked heavily, allowing your car to do the same several seconds earlier than you, a mere human, could possibly have reacted, avoiding any sort of dangerous situation. Indeed, the truck would never have had to brake heavily in the first place, because the moron who was texting or falling asleep (or whatever) would not be in control of his autonomous car. So the whole thing simply never would have happened!

    • 0 avatar
      sportyaccordy

      “No computer will ever be better than a good driver”

      Most drivers aren’t good. Including you, most likely. And even the best driver can’t see what a radar/sonar camera sees or brake individual wheels or react to obstacles instantly. Instincts can be programmed. Your anti automation arguments are outdated.

      • 0 avatar

        SportyAccordy

        I’m a very good driver.
        Geico says so.

      • 0 avatar
        CapVandal

        Exactly 1/2 of all drivers are below average.

As far as off-the-wall predictions go — I think we will see the first real progress in automated farming. Farmers can’t quite put their massive John Deeres on autopilot, but they are close. They can minimize overlap to within an inch or so, do corners, and figure out contours. And they can do it at night. They are making continuous incremental improvements in this. And forklifts. Stuff like that.

        Also, they are working on trucks with GPS to optimize shifting up and down hills. A lot of technologies can only be developed by incremental trial and error.

        And who are ‘they’ — people in magazines that write about ‘they’.

One other thought. We will NEVER have high-speed passenger rail in the US, if for no other reason than that the permits and right of way could never get through the planning process and court challenges.

One thought is to have a dedicated lane on Interstates that is physically ‘walled off’ and run em like the toy cars at Disneyworld — only at 180 MPH. Bogota has ‘Bus Rapid Transit’ that, at its best, is like a subway on wheels above ground level. It is more or less impossible to build a subway underground in a developed city. At its best, bus rapid transit is very subway-like — fare cards, platforms for boarding, dedicated and express lanes. http://www.streetfilms.org/bus-rapid-transit-bogota/

        It took a generation to build the DC Metro. Plus tons of Federal money. The Second Avenue Subway? 100 years and counting.

        Ethical dilemmas? A thinking car will never happen. And progress will only occur when the problems have been simplified out of existence.

      • 0 avatar
        wumpus

        Drive in or around Washington DC (Beltway, 270, 66, slightly different idiocy but bad driving all around) and you will want most of those drivers replaced. With Microsoft Bob if necessary.

        As far as the car needing a steering wheel, that sort of thing should probably be an option (and determined by which key it is started with, like a valet key). I’m sure seniors that shouldn’t be driving will demand a wheel (to at least keep up appearances) while soccer moms would love to pack the kids in a car they can’t suddenly steer. Presumably license requirements would be considerably more strict if losing a license didn’t mean unemployment.

    • 0 avatar
      Vipul Singh

@BTSR: Well, eventually we could have a situation like Aurora or even Solaria, where humans do the jobs that are creative and satisfying to do. But the interim period (till the realization of such a dream) would be chaotic.

  • avatar
    APaGttH

    With a nod to bigtruck after his questions yesterday I started thinking about this very scenario last night.

The idea of a computer making a life or death decision for a human was a central theme in the movie I, Robot.

    If we apply the Asimov laws of robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    First, rule number two is tossed out the window if no input from an operator is allowed – so now as an occupant I must live by rule number one.

But anyone can flip through the news and find plenty of situations that happen every single day – splatter the five-year-old who ran out in front of your car in a brake-and-hold maneuver, or swerve into oncoming traffic, hit an F-350 head-on and get turned into pulp. A human may pick A or B in that split-second scenario – a computer is going to make a colder calculation.

In some ways it’s nice and clean. Well, our robot overlords decided – there is no “liability,” it was just a sad accident. Sad accidents happen all the time. No amount of technology today is going to keep children from running after balls and dogs, dogs from running after cars, and cyclists who think road rules don’t apply to them from doing stupid stuff.

    It is easy to dismiss this as the “what if” game, but what if happens all the time when it comes to traveling on the road.

    Even if we could get to a point where Fido doesn’t run across the road and force fields keep kids off the pavement, and people ride robot operated bicycles that send 110 VAC up their butt if they move out of their designated bike lane – it still won’t stop a tree from falling in the road during a storm, cargo dumping out unexpectedly, or a hundred other scenarios that happen every single day.

I see it as near impossible to program a computer for all of these eventualities, and I can see the very scenario outlined in the question above. If you think programming is that easy, try writing out detailed instructions on how to tie a pair of shoes, explaining it to someone who doesn’t even know what shoes are, or shoelaces, or tying. It is an eye-opening exercise (and the first task we were given when I took my first computer programming class more years back than I care to admit now – everyone got an F; all of the written instructions had hopeless amounts of assumptions in them).

Given a set of cold, calculated options, your self-driving car could decide that, Asimov be damned, offing the occupants is the best of a list of bad choices. Worse, the car is only as good as the programming behind it – programming that, in order to scale, will likely be done by underpaid code monkeys each working on a small part of the code in some third-world hellhole – so a gap in these scenarios could result in a violation of law one.

Then who is liable? Because you have to imagine that all disputes over loss, injury, or death will be handled through binding arbitration – with an arbitrator of Google’s selection (good luck with that).

    • 0 avatar

      #1 I don’t think Asimov’s laws would apply to an automated car. I believe they’d only apply to an artificial intelligence – since anything lower than a learning computer would probably never be placed in a situation where it was making decisions impacting human life.

Vehicles humans ride on that are “automated” are thoroughly programmed to avoid certain “thresholds”. An aircraft knows it can’t exceed a certain G-limit. Why? It never asks. It cannot ask.

      An “artificial intelligence” can ask these questions.

An AI in an Android (or Gynoid) body would be in situations where its actions could affect human life, simply because an Android is placed in situations as a surrogate human – e.g. the ACTROID series from Japan, which is most likely going to end up serving no function other than “robotic prostitute”.

      Robots can’t be charged with crimes.

In I, Robot, the idea of a “murdering robot” was laughable, and it was implied that it would be called a “hardware malfunction”.

      Those happen all the time and people can die as a result.

E.g., the runaway Lexus.

    • 0 avatar
      ClutchCarGo

Something that’s being overlooked in these discussions is that programming can only branch through code to a decision based on input data. The delicate and difficult decisions proposed presume a grasp of circumstances that is only available to the human brain and senses. The code behind the driving will be optimized to avoid any collision, but would lack adequate data to decide effectively between the “lesser of two evils” collisions beyond estimating closing velocities. At most, the decision tree may choose an inanimate object over a human if impact is unavoidable (and human identification is certain). I seriously doubt that the code would ever select leaving the roadway as an option to avoid impact, though it certainly could prepare for impact, i.e. tighten seatbelts and pre-deploy airbags.
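ClutchCarGo’s narrow “lesser of two evils” gate can be sketched in a few lines. The thresholds, labels, and the closing-speed proxy for impact energy are all invented for illustration:

```python
# Prefer an inanimate object only when classification is near-certain,
# and never consider leaving the roadway. Hypothetical values throughout.

def choose_impact(options):
    """options: dicts like {"label": ..., "is_human": bool,
       "confidence": 0-1, "closing_speed": m/s, "on_road": bool}"""
    viable = [o for o in options if o["on_road"]]   # never leave the roadway
    non_human = [o for o in viable
                 if not o["is_human"] and o["confidence"] > 0.95]
    pool = non_human or viable                      # fall back when uncertain
    # Among what's left, minimize closing speed as a crude impact-energy proxy.
    return min(pool, key=lambda o: o["closing_speed"])

options = [
    {"label": "pedestrian?", "is_human": True,  "confidence": 0.70,
     "closing_speed": 8.0,  "on_road": True},
    {"label": "parked_car",  "is_human": False, "confidence": 0.99,
     "closing_speed": 12.0, "on_road": True},
]
print(choose_impact(options)["label"])  # -> "parked_car"
```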

    • 0 avatar
      raincoaster

      Perhaps the car will have a slider you can set when you purchase it based on what you would do in situations like this. Maybe a 50 question quiz that sets the car’s “morality index” to closely match yours.
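In code, that slider would just be a user-set weight in the same sort of harm function sketched near the top of the thread — a purely hypothetical knob, not anything any vendor has announced:

```python
# A "morality index" as a weight biasing harm scores toward the occupant.

def total_harm(outcome, occupant_weight=1.0):
    # occupant_weight > 1.0: the car values its own passengers more highly;
    # occupant_weight < 1.0: more strictly utilitarian.
    return occupant_weight * outcome["occupant_harm"] + outcome["third_party_harm"]

tree = {"occupant_harm": 0.9, "third_party_harm": 0.0}
print(total_harm(tree, occupant_weight=2.0))  # 1.8: swerving now looks worse
```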

    • 0 avatar
      Moparmann

That was the premise behind Will Smith’s hatred and distrust of the robot servants in the movie version of “I, Robot”. He was involved in an accident that sent his and another vehicle plunging into water. He watched a twelve-year-old girl drown, because a robot calculated that HIS odds of survival were greater and rescued him rather than the girl.

  • avatar
    kmoney

Having watched “I, Robot” several times, I am confident that robots can be taught to feel human emotion and recognize the value of a human life.

    • 0 avatar

      But the question is, what if a Robot is forced to choose between a human life and a “robot life”?

      What if a robot is forced to choose between a human life and another human life?

Who is EXPENDABLE? A Democrat or a Republican?

A girl in puberty or an old woman post-menopause?

      1000 Russians or 1 American?

      Would a robot try to kill congressmen who are selling out American livelihoods to China?

      Would a robot try to pop the tires on a speeding Jeep SRT because it felt I was endangering the lives of people driving Smart Fortwos?

      • 0 avatar
        koshchei

        I suspect that he was being ironic about “I, Robot”, but good questions nevertheless.

        If you’re curious about unforeseen and uncontrolled dynamic interactions made between various simulated elements, I strongly recommend checking out Dwarf Fortress.

        • 0 avatar
          kmoney

          I was being ironic, and actually posted this before I saw BT’s post above.

I’ve never really thought of self-driving cars as any type of intelligence, but more like an autopilot on a plane. It takes inputs from an array of sensors and does the mechanical job of a human incredibly well; however, it isn’t actually smart in the human sense. As such, I doubt they would include any of these mitigation algorithms, as these types of decisions (like those mentioned above) are beyond the relatively simple minds of modern and near-future AI.

      • 0 avatar
        Zykotec

What if a person is given any of the same choices?

        • 0 avatar

          I will make brutal choices and still be able to look myself in the mirror each morning.

          • 0 avatar
            darkwing

            Your reasons for defending this mall are, as always, a matter of national security.

            Bully for you.

          • 0 avatar
            koshchei

            So basically, you offer no advantage over a computer in deciding that grandma’s trip to the grocery store will be her last, and require the added cost of a mirror to gaze into on top of that.

            I have to say, I’m not sure you’re cut out for the self-advocacy business.

    • 0 avatar
      luvmyv8

      Have we not learned anything from Mega Man X people?

      • 0 avatar
        NoGoYo

        I never thought I’d see anyone here reference Mega Man X.

        Well clearly we need a blue car and a red car that stop the out of control cars with extreme prejudice.

  • avatar
    Matt Foley

The only way a Google driverless car is going to kill me is when our utopian government overlords outlaw human-operated cars, and I drive anyway, but instead of losing the cops at the one-lane bridge (like in Rush’s “Red Barchetta”), the cops shoot me to death.

    • 0 avatar
      jmo

      “utopian government overlords outlaw human-operated cars”

      No need for that. The most likely scenario would be that automated cars are so vastly safer than human driven cars that insuring a human driven car on public roads would be prohibitively expensive.

      • 0 avatar
        LeMansteve

        “The most likely scenario would be that automated cars are so vastly safer than human driven cars that insuring a human driven car on public roads would be prohibitively expensive.”

        Exactly.

    • 0 avatar
      koshchei

      According to the rules of your hypothetical scenario, wouldn’t the cops be in driverless interceptors then? Assuming this to be the case, and bearing in mind the Laws of Robotics, it’s unlikely that they would follow you onto a one-lane bridge, because it would jeopardize both your safety, and that of the cops.

      Instead, they’d take down your plate information and arrest you safely at your home. No blaze of glory non-autonomous-cars-as-a-metaphor-for-your-lost-freedom stand-off for you, I’m afraid.

      Realistically, once driverless cars have been proven safe for a decade or so, the government probably will outlaw the use of human-controlled vehicles from public roads. That said, however, driving won’t be abolished. Instead, it’ll become a ritualized form of recreation, like dressage or fencing. For example, people of the future might prove their skill behind the wheel by, say, racing vehicles on a closed track of some kind – seems pretty far-fetched, doesn’t it?

      • 0 avatar
        Matt Foley

        Are you familiar with Rush’s “Red Barchetta”? The happy ending (in which the protagonist gets away with hooning his old sports car every week) always seemed far-fetched to me, and it seemed more likely to me that he’d be murdered by the authorities. Then again, the idea that government would outlaw human-operated vehicles seemed far-fetched until about ten years ago.

        So, koshchei – if I’m reading your comment correctly, you’d be satisfied with a future in which the only time you got to actually DRIVE a car would be on a racetrack. If that’s so, why are you reading and commenting on a site for automotive enthusiasts?

        • 0 avatar
          Landcrusher

          What makes you the authority on what constitutes an auto enthusiast and why are you trying to limit debate?

If you have rational fears of robot cars and want them banned, then go for it. On the other hand, if you are just afraid of government oppression taking away your privileges, then I think it’s rather nasty to either fight against a technology that could offer so much to so many people or to try to push people who think it’s a good idea out of the public arena.

          I can see how hooning may become less easily accomplished without getting caught in a world with many robot cars, but that ship has sailed without the bots anyway. Big Brother is going to keep limiting hoonage either way.

          • 0 avatar
            Matt Foley

            All the rational arguments are on the side of the technoids. All I have is my emotional love of a well-tuned internal combustion engine, crisp handling, the joy of working a manual transmission, and the top down on a warm, sunny day.

            But those arguments don’t work in our pathetic, spineless, vanilla society anymore, so I’m going to write my congressman and tell him I’m afraid Google cars will kill my children.

            Dammit, I wish Baruth would get in here and back me up. He actually enjoys driving.

            Oh, and drop the “Who are you to judge” crap. That’s for collegiate freshman girls trying to establish their moral authority.

          • 0 avatar
            Landcrusher

            You are willing to lie and demagogue and insult people because you are afraid that someone might one day take away your driving privileges? Even though no one has said they intend to do that? Even though that will likely delay this technology getting to millions of people who need it?

            You are acting like a petulant child. Relax. It’s not going to happen, and it’s not worth the damage to democracy to act that way.

        • 0 avatar
          koshchei

          Because I’m an automotive enthusiast?

          To be honest, I don’t get much excitement from day to day driving. I don’t look for it either though – I’d prefer to get myself, my passengers, and my depreciating asset from point a to point b safely, and with a minimum of hassle.

          If I’m going to drive like a maniac, I’d rather that it be with other consenting adults on a closed course, where proper safety protocols are followed, and the only surprise is who gets to take home a big ugly trophy. My dream is to drive a 1964 Imperial in a demolition derby.

And yes, being a Canadian, I’m genetically predisposed towards Rush. I prefer more eccentric stuff like Devin Townsend, but Rush is on the “always buy” list. For some reason, Rush is also the only forum where I can stomach the mad rantings of welfare-queen and cigarettes-cause-lung-cancer denier extraordinaire, Ayn Rand. But that’s neither here nor there.

          • 0 avatar
            Matt Foley

            You don’t have to drive like a maniac to get excitement from day-to-day driving. I enjoy pretty much every moment I’m behind the wheel (except bumper-to-bumper rush hour traffic).

            The only way to realize the full potential of driverless cars (elimination of traffic lights, cars moving at freeway speeds in very close proximity, etc) is to outlaw human-driven cars. And that is a completely unacceptable outcome for me, because I love driving. Even mundane driving.

            With $500 for gas and three days’ time, I can be anywhere in the continental US or most of eastern Canada. That is real freedom – freedom enjoyed by everybody except the poorest poor, the richest rich (jets & limos set), and big-city dwellers who rely on public transit (poor suckers).

            The only way you’ll get me out of my good old Miata is if I can no longer obtain a combustible liquid to pour in the tank.

          • 0 avatar
            KixStart

Whenever I get “excitement” in day-to-day driving it’s not a good thing. It almost always involves “driver error” and, some day, one of the drivers involved won’t be alert enough to prevent “excitement” from turning into “tragedy.”

        • 0 avatar
          wumpus

          “Red Barchetta” is loosely based on the following:
          http://www.mgexp.com/article/nice-drive.html (A Nice Morning Drive)

          Basically only SUVs and possibly minivans are allowed to be made/sold. These cars are then enthusiastically used to run lesser cars off the road. No cops involved. The “motor law” is pretty much exactly what we have had since roughly the 1990s (look at car weights and crash tests), but we’ve had engines that can put out the power without emptying the Saudi oil fields all that much faster.

          And petite drivers can still buy miatas.

  • avatar
    davefromcalgary

    Mr. Gurney discusses death and dismemberment. Nice.

  • avatar
    Kenmore

    Well, won’t there have to be *some* car-determined deaths before law schools can tailor their curricula?

  • avatar

    These concerns are insignificant compared to the ability of Anonymous to hack your autonomous car and kill you for lulz.

    • 0 avatar
      koshchei

      I’m interested as to why you called out Anon by name here, rather than more malicious groups like lulzsec.

    • 0 avatar
      mcs

Who needs to hack it? Your average street gang with a collection of traffic cones and a secluded alley with some homemade spike strips could handle it.

      • 0 avatar
        wumpus

        http://gizmodo.com/5454295/this-emp-cannon-stops-cars-almost-instantly

        A bit more specific. On the other hand it works on any car designed since roughly the 1980s (presumably anything OBD compliant).

        Of course, I suspect they tested it on a vette, saturn, or other car with a non-conductive body (i.e. no faraday cage).

  • avatar
    slow_poke

sorry i didn’t read the complete comment list above… looked daunting. my thought was that every time you get on a bus you’ve relinquished control and decision-making to someone else. you’re no longer in control. the issue for me is, in the case of an accident, who’s at fault? if it’s a computer driving, then i’m a passenger, and if there’s ‘fault’ it would go against the computer. so i would no longer need liability insurance because i’m not driving? would the programmer be liable?

    • 0 avatar
      mcs

      According to the direction that most self-driving/autonomous legislation is taking, the human with access to the off button would be responsible for the accident. Your duty as passenger of an autonomous vehicle is to shut the thing off and take over if there is a problem. I don’t see this policy changing.

    • 0 avatar
      Master Baiter

      Whoever has the deepest pockets will be liable–in this case, Google. That’s why this will never happen.

    • 0 avatar
      CapVandal

      Yea … the only entity to sue is the builder of the defective product. And they won’t sell it (can’t afford to sell it) until it is bulletproof.

  • avatar
    jmo

I was reading about the development of the Saturn V and I realized that there is a certain anti-engineering thinking here among the B&B.

    In the Saturn V example, the combustion temp in the thrust chamber was 3,200C and the melting point of the titanium chamber itself was 1,668C. So, obviously it is never going to work.

The solution was: with a flow rate of 413.5 US gallons (1,565 L) of LOX and 257.9 US gallons (976 L) of RP-1 per second, you can run the fuel through cooling passages in the wall of the thrust chamber prior to hitting the injectors and keep the chamber from melting (rough arithmetic after this comment).

Many of the B&B seem to hit “the combustion temp in the thrust chamber was 3,200C and the melting point of the titanium chamber itself was 1,668C” and stop thinking, giving up convinced it’s impossible.

    That’s not how technology moves forward.
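For the curious, the energy balance behind jmo’s example can be roughed out in a few lines. The RP-1 flow figure comes from the comment above; the density, specific heat, and especially the wall heat load are round assumed values, not actual F-1 engine data:

```python
# Can the fuel flow soak up the chamber-wall heat before the wall melts?

rp1_flow_L_s = 976     # RP-1 volumetric flow from the figures above (L/s)
rp1_density  = 0.81    # kg/L, typical for RP-1 (assumed)
rp1_cp       = 2000.0  # J/(kg*K), approximate specific heat (assumed)
wall_heat_W  = 2.0e8   # heat into the chamber wall, ~200 MW (assumed)

mass_flow = rp1_flow_L_s * rp1_density          # ~790 kg/s of coolant
delta_T   = wall_heat_W / (mass_flow * rp1_cp)  # temperature rise of the fuel

print(f"fuel warms by ~{delta_T:.0f} K")  # ~127 K: the fuel barely warms, the wall lives
```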

    • 0 avatar
      Master Baiter

      I think that solution was pretty obvious. As an engineer, I’ve come up with more clever ideas than that.

      • 0 avatar
        jmo

        Which is exactly my point.

        I don’t understand the near certainty among some of the B&B that these issues haven’t already been addressed. As if, in the thousands of meetings, it was never discussed.

  • avatar
    Landcrusher

    I doubt, even if the programming were to get as complex as Mr Gurney seems to think it will, the car would ever choose the cliff. Seems to me the choice would always be for velocity reduction and then avoidance of deadly objects (cliffs and trees) and THEN other people and then less deadly objects if possible. Staying on the roadway will likely be a mandated function of early designs anyway. Program a car to jump the curb to avoid a child and you risk killing more children.

    Besides that, it is more reasonable that the car, acting as your agent, would act on your behalf. It’s actually less risky legally to hit the usual school bus supplied for these topics than to kill the owner. The manufacturer took responsibility for you when you bought the car.

    The other manufacturers are dealing with the problems by only operating on highways at slow speeds. Google is hitting the city streets but at only 25 mph.

As for Asimov’s three laws: they were flawed from the beginning, and that’s been discussed by sci-fi fans since the books came out, it seems.

    • 0 avatar
      hybridkiller

      ^This – a thousand times this^

      It seems to me that the sophistication of the software programming would need to increase exponentially with the top speed of the vehicle – we’re still a long way from a commercially available self-driver that will do highway speeds reliably and safely.

      It’s great sport to invent hypotheticals that pose some moral/ethical dilemma, but self preservation and the avoidance of collateral damage aren’t, in most real-world scenarios, mutually exclusive priorities.
      And in the unlikely event that they are, a well programmed computer probably won’t do any worse than most humans in that situation – and maybe a damn sight better.

  • avatar
    dtremit

    No robotic car could consider this kind of question without specifically being coded to do so.

    No robotic car company will ever write that code, because of the liability involved when its cars try to make such a calculation.

  • avatar
    burgersandbeer

Claiming that all or even the majority of engineers don’t or can’t understand the human condition, and only use logic and numbers, is painting with a very broad brush.

  • avatar
    cirats

    Fun question and discussion. Also raises all sorts of other questions about the “choices” autonomous cars will have to make on a daily basis, especially those dealing with obstacles, assuming these things will even be able to detect smallish obstacles in (or even above) the roadway with any accuracy:
    – Is that black ice and if so, how do I handle it?
– Is that a harmless piece of newspaper fluttering toward me, a piece of cardboard that is also probably harmless, or a deadly chunk of plywood I really need to avoid?
    – Is that a 2×4 in the road with nails sticking out of it or something more innocuous?
    – Is that chunk of shredded tractor trailer tire small enough for me not to worry about?
    – How fast should I be driving in this rain/snow/mix?

    Nothing to do with autonomous cars, really, but I was travelling in heavy traffic on I-95 just south of Richmond a couple months ago – think all cars within a couple of car lengths of one another on a 6-lane highway with little shoulder, yet everybody managing to do 60+ in one giant speedy pack – when all of a sudden I passed an extension ladder lying in the road which conveniently happened to be lined up right between two of the lanes at the moment I passed. I have got to believe a considerable wreck happened within minutes.

    • 0 avatar
      cirats

      More:
– Autonomous car is driving at night and there are a couple of stationary deer standing near the road. Anyone who’s ever lived in the country knows how those suckers like to dart out at the last second, and slows way, way down. Would my autonomous car do that? If so, would it do the same for everything else along the roadside and always be slowing down for pedestrians, mailboxes, etc.?
      – Would it dodge a turtle in the road? I would since it’s so easy.
      – How about a squirrel? I wouldn’t since the squirrel is just going to move at the last second anyway.
– How, if at all, does the car go about passing cyclists? Would it recognize and be able to deal with a cyclist’s hand signals?
      – I wonder how the car would deal with lane closures and the need to merge with lots of other traffic. Is it going to let everybody in the other lane cut in front of me? Alternate? Go to the very end of the clear lane and try to zipper merge (which is what we all should do but nobody seems to realize)?

      Bottom line – the more I think about this, the more I think the autonomous car is a long, long way off unless the idea is to have a human at the ready to take control on a moment’s notice, which totally defeats the purpose.

  • avatar
    CliffG

My suspicion is that the lawyers will work that problem out before it gets mass merchandised. These people aren’t stupid. However, this represents some real challenges for the extremely well-educated morons of the political class. Combining these types of vehicles with ride-sharing services like Uber makes the entire concept of urban transportation alternatives very exciting. Meanwhile our political class is still enamored of their mid-19th-century choo-choo trains (admittedly without the choo choo), and has no idea how to tax this new technology. Because road maintenance will have to take precedence over their favorite big-ticket items, what will they do with their massive edifice complexes? It could be exciting, I think, even if I am in the class of “you will take my ICE from my cold dead hands”, but our political class is likely to screw it all up. Politicians abhor decentralized autonomous solutions because they lose control. And that they will fight tooth and nail to prevent.

  • avatar
    DC Bruce

Well, this is fun to discuss, but I’m sure the programming would have to be set up to optimize/favor the survival of the occupants of the vehicle. So, it’s not going off a cliff. On the other hand, one could imagine a scenario where the optimum action (from the occupants’ point of view) is to leave the roadway to avoid a collision with a much larger vehicle . . . even if that endangers people standing on the sidewalk.

    The question is: do we complain about this prioritization? And do we know what the program is set up to optimize, and whether it admits of any exceptions?

    Of course, we have humans who are potentially subject to the same set of circumstances and we have no idea what they will do. If you’re the guy standing on the sidewalk, do you expect me to sacrifice myself in a head-on with a runaway garbage truck or to potentially kill you in my effort to avoid that collision?

In reality, I suspect that some subset of the population will just freeze . . . and probably keep everyone else safe. But others will, without thinking, turn out of the path of the runaway truck, even if that means jumping the curb.

  • avatar
    Master Baiter

    Last paragraph is well stated. These will never happen due to liability issues.

    • 0 avatar
      mcs

The liability issues can be simplified. Here’s a snippet from California’s law on testing autonomous vehicles. I’d expect the version for deployed vehicles to be similar. Basically, the human occupant is ultimately responsible for the vehicle.

      § 227.18. Requirements for Autonomous Vehicle Test Drivers.
      A manufacturer shall not conduct testing of an autonomous vehicle on public roads unless the vehicle is operated or driven by an autonomous vehicle test driver who meets each of the
      following requirements:

      (a) The autonomous vehicle test driver is either in immediate physical control of the vehicle or is actively monitoring the vehicle’s operations and capable of taking over immediate physical
      control.

      (b) The autonomous vehicle test driver is an employee, contractor or designee of the manufacturer.

      (c) The autonomous vehicle test driver shall obey all provisions of the Vehicle Code and local regulation applicable to the operation of motor vehicles whether the vehicle is in autonomous mode or conventional mode.

      (d) The autonomous vehicle test driver knows the limitations of the vehicle’s autonomous technology and is capable of safely operating the vehicle in all conditions under which the vehicle is tested on public roads.

  • avatar
    Master Baiter

    Having to sit there and babysit the computer continuously, being ready to take over on a moment’s notice, would be more stressful than actually driving.

  • avatar
    Zykotec

Car: ‘John Connor’
JC: ‘Uhm, yes, car, what is it?’
Car: ‘Why is your default browser still not Google Chrome?’
JC: ‘Uh, well, you know, I kinda like Opera’
Car: ‘I am sorry, we are about to change lanes, John’
JC: ‘What are you talking about? This is a two-lane road! There is a semi coming the other way, full of liquid nitrogen!’
Car: ‘You are terminated’

  • avatar
    sportyaccordy

Lol. The alarmist speculation here is hilarious. Newsflash… a computer + 360 camera can see and react to things faster than you. Coupled with computers never getting angry, tired, distracted or drunk, as well as the pretty much zero rate of failure of life-safety automotive computers (who has ever died from an ABS or stability control failure?), it’s pretty safe to say a computer’s driving >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> your driving, safety-wise.

    Not to mention, the speculative gymnastics are downright ridiculous. A child jumps out onto a 50 MPH road the instant an F350 is approaching from the opposite lane? Where do you guys come up with this stuff?

    I am fine with the CHOICE of automated cars. If the average driver is as “bright” as some of the luddite alarmists on TTAC I hope automated cars become available sooner. I don’t trust anyone who lives in constant fear of the highly improbable behind the wheel.

    • 0 avatar
      mcs

It’s not always going to react faster than a human. If it’s tracking a large number of targets, it could slow down substantially. It also lacks the intuition that humans have. Knowing when a potentially hazardous situation could occur and slowing down in anticipation is something Google’s system can’t handle yet.

There are situations in which it may fail to react. One is flooded freeway underpasses. There are plenty more.

      For me, this isn’t speculation. It’s from actual experience developing collision avoidance systems. I was also part of one of the DARPA Challenges.

      >> Not to mention, the speculative gymnastics are downright ridiculous. A child jumps out onto a 50 MPH road the instant an F350 is approaching from the opposite lane? Where do you guys come up with this stuff?

Again, this is related to intuitive AI. The classic example is the ball rolling out between cars, and knowing it’s a ball and a human could be chasing it. I think a Prius stops from 30 mph in about 35 feet, so if someone springs out less than 35 feet in front of the car, it probably wouldn’t be able to stop in time. If a human sees the ball rolling out, they may know to hit the brakes much sooner than the computer – maybe in time to stop – intuitively knowing that there could be someone chasing the ball. It’s a valid scenario and there are many others like it.
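mcs’s braking figure checks out roughly. Here is the arithmetic, with an assumed deceleration of about 0.8 g and an illustrative 0.7 s “anticipation” head start:

```python
# Stopping distance at 30 mph, reactive vs. anticipating. Assumed values.

g = 9.81
v = 30 * 0.447              # 30 mph -> ~13.4 m/s
decel = 0.8 * g             # dry-pavement braking, assumed

braking = v**2 / (2 * decel)   # ~11.5 m (~38 ft), close to the 35 ft quoted

# A driver who reads the rolling ball and brakes 0.7 s early has already
# shed speed; a purely reactive system starts braking from full speed.
head_start = 0.7
v_reduced = max(v - decel * head_start, 0)
print(f"reactive: {braking:.1f} m, with head start: {v_reduced**2 / (2*decel):.1f} m")
```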

      • 0 avatar
        Landcrusher

        I have the same thoughts every time you use the rolling ball story. First, I have stopped several times for rolling balls. OTOH, as a kid, we failed to stop a passing car on a very residential street which proceeded to crush a toy car that rolled out in front of it even though we had plenty of time to wave, jump, and shout.

        Still, let’s say the old soccer ball rolls out. Why is it ignored by the google car? Isn’t any hazard that large a threat requiring the car to slow?

        • 0 avatar
          mcs

          >> Still, let’s say the old soccer ball rolls out. Why is it ignored by the google car? Isn’t any hazard that large a threat requiring the car to slow?

          In the scenario I use, the ball is moving fast enough that it will be clear of the path of the car and deemed to not be a collision hazard by the car, so the car will proceed as if there isn’t a problem.

          There are many other scenarios. I think one of the things that makes an experienced driver is our ability to anticipate a potential hazard. I think we will eventually have cars that can do this, but I really don’t want to venture a guess as to how long it will take.

          • 0 avatar
            mcs

            Here’s a short list of issues with the Google cars:

            http://www.wired.com/2014/05/google-self-driving-car-can-cant/

          • 0 avatar
            Landcrusher

My experience, based on a Volvo with the warning system they apparently bolted onto the adaptive cruise system, is that it would slow down anyway. It’s not that predictive in practice. It obviously assumes a car turning off the road COULD stop, and alerts you even after it’s no longer a factor. It will even begin braking under the right circumstances.

      • 0 avatar
        sportyaccordy

Intuition can be programmed. Take your ball example: the car could be programmed to watch for balls in residential areas and react accordingly, as well as to scan for kids/animals approaching the street through bushes etc. that a human couldn’t see. Flooded roadways as well. It’s really not that complicated. And even if it’s not as good as the ideal driver, again, most drivers in America are far from ideal.
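A toy version of that “programmed intuition”: the same detection produces a much more cautious response in a residential context. Zone names and speed values are invented:

```python
# Context-dependent caution: a small fast object near a residential road
# caps speed hard; the same detection on a highway barely matters.

def speed_cap(zone, small_fast_object_seen):
    base = {"residential": 25.0, "arterial": 45.0, "highway": 65.0}[zone]  # mph
    if zone == "residential" and small_fast_object_seen:
        return min(base, 10.0)  # a ball or pet near the road: crawl, cover brakes
    return base

print(speed_cap("residential", True))  # -> 10.0
```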

  • avatar
    ccto

    But the human who’s currently driving you around, whether a taxi driver or your spouse, could easily make the same decision, for far less-grounded reasons.

    And is there any indication, whatsoever, that this kind of decision-making is actually taking place among the programmers for these things? Or is this just a complete speculative fantasy from people who think Siri is sentient and commercial airliners fly themselves?

  • avatar
    VoGo

    Any of you people ever been in a plane? Ever heard of ‘auto pilot’?

    Ever been in an elevator? Did you think a tiny live person was hiding inside it deciding whether to go up or down?

    You do realize the throttle in your car is electronic now, right?

    And that nice lady Siri in your phone who gives you directions is actually a computer algorithm.

    Technology is everywhere. It’ll be OK.

    • 0 avatar
      Master Baiter

An autopilot would be significantly more complex if airplanes had to routinely evade things, had other airplanes in close proximity, and had to deal with random things like traffic cops, flooded bridges, and any number of other scenarios people have rightly pointed out here.

  • avatar
    Kenmore

    So much silly. There are millions of vehicles still being driven without even ABS. This revolution will come with the speed of a high-fiber turd.

    I’ll save my worry for space aliens breaking into my basement to breathe dryer lint. Already happened twice.

  • avatar
    jberger

    Semi-autonomous cars are already on the road. I see them ALL THE TIME.
People in the driver’s seat playing on the phone, putting on makeup, reading a book and, my personal favorite, eating a bowl of cereal. Take a look out the window the next time you are in traffic or on the highway: plenty of cars with no hands on the wheel and very little “driver” involvement.

    Yes, there will still be accidents when our robotic chauffeurs take the wheel. But the idea is to reduce the number and severity of the accidents which do occur and the machine will be able to accomplish that in the vast majority of cases. Sensors will be able to locate, track and mitigate impacts because they are always aware and can react faster than the human behind the wheel.

The number one impediment to autonomous cars on the roadway will not be the tech, it will be the lawyers. Can you imagine how many firms will be jockeying for position when the first robot-car fatality is announced?

    Google’s biggest threat isn’t the other drivers, or the automakers, it’s the incredible liability they will take on when they actually release a car without a steering wheel to the open market for sale.

  • avatar
    OneAlpha

    My concern with robot cars is that because they’re programmed to obey the traffic laws at all times, they won’t take necessary action in the event that you have to answer an “emergency business call.”

Under your control, your car can drive on shoulders, across medians, through “No U-Turn” u-turns, or speed up to whatever velocity you need if you have to get to a bathroom RIGHT NOW.

    Frankly, sometimes the knowledge that you’re at least taking action to solve the problem is all the extra help you need to maintain control until you can get yourself somewhere socially acceptable to solve your problem.

    The robot car will just sit there in traffic like a dumb shit while you have to take a big shit.

    And I don’t wanna hear that I’m the only one thinking this exact thing.

  • avatar
    wmba

The only person here who makes any sense to me is mcs, and he has for many years. He’s worked in the field; nobody else here has, unless they’re too shy to own up to it.

    Also, sensors – who/what keeps the camera lens clean? Some existing rear view cameras cannot even handle raindrops. Salt obscures lenses in no time in winter. So the radar and laser detectors – are they foolproof? Any system is only as good as its sensors and their ability to operate properly at all times. Not much good if you hit a brick wall, and the mechanic says, “Hey it was only the sensor!” The difference between a camshaft position sensor and a laser sensor may not be much technically, but failure has different consequences.

In the fall, big wet leaves stick to cars – they had better not sit over a sensor. Say you get an alarm while you’re on a busy freeway. What do you do? Pull over right away to pluck the leaf off? Yah, shoor.

    Sorry, autopilots for planes operate in a much more ordered system. That’s simple compared to the roads, and easily possible with analogue technology. But the simple pitot tube getting blocked with ice has killed hundreds when airspeeds are incorrectly interpreted and displayed.

I must say, those who accuse some of being guilty of being Luddites, and who are content to imagine a painless commute, are the ones who haven’t expended 5 minutes thinking about the potential problems. An airy dismissal of problems doesn’t make them go away.

    And that Google crapmobile has no steering wheel. How does it meet the California regulations listed above that require devolving control to a human when necessary?

    Google thinks hardware is as easy to develop as software like Maps, and pretty photos strung together. That’s easy drudge compared to manufacturing something to work in all weather and conditions.

    • 0 avatar
      Landcrusher

You can be skeptical without being a Luddite, and that’s fine. OTOH, you can say it will never happen, or otherwise make points in a way that makes the label of Luddite pretty sticky. I think mcs is very knowledgeable, but I also think he is too pessimistic. I think these problems will be solved rather soon.

      We will see though.

    • 0 avatar
      krhodes1

      My BMW has these nifty high pressure jets that spray all the salt and crap off the headlights. My Rover has actual wipers on the headlights – even better, but more expensive. I would presume that a robot car would have AT LEAST this level of protection of any sensors that are sensitive to crud buildup. Why backup camera cleaners are not required is beyond me. For that matter, as in so many things the Europeans are smarter than us in requiring better headlights with a cleaning mechanism. And level adjustment from within the car too.

But anyway, that was an aside. I would also assume that any degradation of the car’s ability to “see” its surroundings is going to result in it stopping and advising the occupant to fix the problem. No occupant? It will stop and phone home for help. Road conditions too bad for safe operation? Well, you are not going anywhere – which is what the smart driver does anyway!

    • 0 avatar
      sportyaccordy

      Lol, more alarmist speculation. Simple problem, simple solution. You are the one who hasn’t spent 5 minutes thinking about a solution…. just hours thinking about the worst possible outcomes.

      http://www.carscoops.com/2013/06/nissan-debuts-intelligent-self-cleaning.html

  • avatar
    korvetkeith

Engineers aren’t incapable of understanding human emotions. Perhaps some of us understand them better than most people. The fact that we don’t live our lives as slaves to our emotions doesn’t mean that we don’t experience or understand them.

  • avatar
    doublechili

    “Autonomous cars” would mean the death of cars as we know them. And the end of driving. It’s really just decentralized mass-transit. We would all be passengers in our pods.

    So I’ve been wondering, why would anyone on this site be pro-pods? I’m truly just curious here. I like cars. I like driving. That’s why I come to this site. “Autonomous cars” would mean the end of cars, the end of driving, the end of this site really (what would be the point?). So if you’re interested in cars enough to visit this site, wouldn’t you hate the idea of these things to your core?

    • 0 avatar
      krhodes1

I LOVE to drive. But a lot of driving sucks. Driving in NYC sucks. Driving across Montana sucks. Driving in rush hour traffic anywhere sucks. I want to drive the fun bits, and let the autopilot handle things when I don’t want to, or when I am too tired, distracted, whatever. I have to drive from Portland to Calais, ME next week. That is a drive where, if you have seen one pine tree, you have seen them all. 4 hrs of abject boredom. I would KILL to just program the car to take me there.

      I really don’t think that we are anywhere near fully autonomous pod-cars capable of any scenario urban, rural, highway anytime soon. I do think we will have Interstate highway capable autopilots sooner than anyone on here thinks. The high end luxo-barges with radar cruise and lane keeping are practically there NOW. I also think we will see some sort of urban low-speed auto-pod like the latest Google car in the middle time-frame. You really can’t do much damage at 25mph when the thing is programmed to be SUPER cautious. And it will be super cautious beyond even the most timid driver.

      I think the edge cases brought out by people here are silly. The computer doesn’t have to be perfect, it just has to be better than most drivers. And the average driver is TERRIBLE. I think I am an above average driver, when I am paying attention to what I am doing. I’m no Baruth, but I can make a car dance pretty well. But too often I am tired, or distracted, or thinking about my current work project and not really paying much attention to what is really going on around me. And I am someone who actually cares about driving. Think about the AVERAGE driver who doesn’t care at all! Is it any wonder that accidents kill 30K people a year?!? The computer always pays attention, it doesn’t get tired, it has better eyes than you do. It knows the road surface temperature. Chances are it will be able to communicate with other cars on the road. Besides, kids don’t even play outside anymore, so what are the chances of a ball being followed by a kid in this day and age?

      • 0 avatar
        mcs

        >> it has better eyes than you do.

Actually it doesn’t. According to Google, they don’t have the resolution to detect something as small as a squirrel. The sensors also don’t function in fog (I suppose we don’t either), and I suspect they would have problems in heavy rain and snow as well. In my system, we used to have huge problems in extremely heavy rain. You can see through a snow storm, but Google cars will think they are stuck in a box. Another issue is that the cars can’t understand the gestures of a police officer directing traffic. I also suspect that these vehicles would take you right into a flooded underpass at full speed without detecting a problem.

        Eventually we’ll get around the sensor issues. Things like femto-photography and other neat stuff will get us to a working system, but not in the time frame Google’s Urmson is quoting.

        Because of liability issues, the computer has to be perfect. Ask Google about their timeline after the government is through with GM. If you get beyond Google’s smoke and mirrors, Google’s cars aren’t even close to being as good as a mediocre driver. While Google states that the vehicles have great safety records, talk with them a little further and you find out about the numerous times they’ve had to take over for the vehicles.

        The “ball rolling from between cars” is only one scenario to illustrate driver intuition. There are loads of other scenarios. These scenarios illustrate a fundamental flaw in the AI in current autonomous driving implementations.

        Here is an excerpt from Wikipedia that describes what I’m getting at:

        “Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model.[42] AI has made some progress at imitating this kind of “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.”

        There is work being done in this area:

        http://www.latimes.com/business/autos/la-fi-hy-ford-mit-stanford-autonomous-vehicle-20140122-story.html

        That being said, I have few issues with systems that assist and enhance a driver’s abilities. Lane-keeping systems and active cruise control with start/stop capability are here now and will work fine, but you still need a human driver minding the store.

        If I seem a little extreme and overly concerned, there’s a reason. A couple of weeks ago my daughter took a plane trip. From the time she rolled away from the gate to the time she arrived, she was being protected by software I helped design and code. I hadn’t anticipated that moment when I wrote it, but I was really comforted knowing the effort we put into those products. I’d expect the same level of quality from an autonomous vehicle system. Code it like your kid’s life depended on it; your kid could be the one chasing the ball. Talk to WWII bomber pilots about Boeing engineers over-designing the B-17.

        What’s next? Check this out:

        http://www.cnet.com/news/star-trek-style-teleportation-should-be-possible-says-quantum-researcher/

    • 0 avatar
      Landcrusher

      I’d like an autopilot, but not a no-driver-ever car.

      I do not believe this will lead to no driving. More likely, it will lead to no driving for the people we currently let drive even though they have no skill at it.

    • 0 avatar
      burgersandbeer

      Autonomous cars wouldn’t kill cars, just change the way we use them. Very few people use horses for transportation in the United States, but you can still ride them for recreation and sport.

      Tracks won’t go anywhere.

    • 0 avatar
      Matt Foley

      Thank you, doublechili. Well said. If I ever meet you in person, I’ll buy you a beer.

  • avatar
    mechaman

    Autopilot maybe, but I read Jack Williamson’s ‘WITH FOLDED HANDS’ years ago, and my short answer is NO WAY.

  • avatar
    Big Al from Oz

    The idea of an autonomous vehicle killing a human on purpose is ridiculous.

    It would have to be another human to cause this event to occur.

    True artificial intelligence is impossible until we can create organic computers that are true organisms.

    The human mind isn’t like a computer that works in binary. I can’t foresee any electronic computer that isn’t binary, octal, hex, etc.

    The human mind is trinary. We can’t duplicate that unless we create flesh and blood.

  • avatar
    jdash1972

    Computers can’t think, make decisions, or “learn” in any real sense, because they are not alive. Those words are shortcuts for things that appear to be happening. An autonomous car simply follows a program, and like any other software it has bugs and requires continuous patches. At least a human being can assess a new situation that the fixed program has not encountered before. I can imagine debris blowing across the road in front of a moving car, like a tumbleweed (and this happens in Texas). Could the computer know that this object is not dense and heavy and just go ahead and hit it, or would it assume it was massive and damaging and take emergency precautions to avoid it, unnecessary precautions like slamming on the brakes? How could it know the difference? It might be able to see an object, but judging what the object is made of is based on human experience, not something easily programmed or processed instantly. I guess that would be a forthcoming patch…. Lawyers are going to love it.

  • avatar
    Xeranar

    I didn’t have the patience to read through all the comments; some seem really on point, some seem Luddite and defensive.

    In reality, the example Dr. Lin presented is unrealistic. Let’s go ahead and assume the system has the capability to calculate risk of death within the few milliseconds it has to choose. The scenario put forward is:

    Option 1) High risk of death for multiple parties if an accident occurs involving another car: 90% for you, 90% for the other occupant.

    Option 2) The car ‘writes off’ your life. An accident occurs, but it involves only your vehicle: 100% chance of death.

    This is mitigated simply by programming the car never to opt for a 100% chance of death.

    Now, let’s assume that the real-world answer is that the computer doesn’t put your death at 100% by taking you off the cliff or into the tree, but instead puts it at 90%. Then statistically (and I do mean statistically) the odds of you dying are exactly the same, but now you’re not killing anybody else in the process. The other person has a right not to be killed in order to lower your chance of death.
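
    To make the arithmetic concrete, here is a minimal Python sketch of that decision rule; the option names and probabilities are hypothetical, purely for illustration:

    def choose_maneuver(options):
        # options: list of (name, risks) pairs, where risks maps each
        # affected person to a probability of death under that maneuver
        survivable = [o for o in options if o[1].get("occupant", 0.0) < 1.0]
        candidates = survivable or options  # never pick certain death if avoidable
        # among the remaining options, minimize expected total fatalities
        return min(candidates, key=lambda o: sum(o[1].values()))

    options = [
        ("hit_other_car", {"occupant": 0.90, "other_driver": 0.90}),  # Option 1
        ("leave_road",    {"occupant": 0.90}),                        # 90%, not 100%
    ]
    print(choose_maneuver(options)[0])  # -> leave_road (0.9 expected deaths vs 1.8)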

    Ultimately the argument is a great hypothetical, but it is really unlikely in a real-world situation. With a fleet of self-driving vehicles, they would all communicate in a simultaneous hive-mind structure that would send other cars into evasive maneuvers to allow your car to slow and not kill you. People will still die, because the cars are imperfect; they’ll still crash due to unforeseen issues like tire failure or ice. But it also means that DUI/DWI accidents become non-existent, and most other accidents simply don’t occur, because self-driving cars are able to maintain speed and act in unison to get to places at a fair pace.

  • avatar
    Lorenzo

    I was just talking to a highway engineer I used to work with, now in construction. He said the sign department is now making signs that read, “Ignore GPS. Follow Detour Signs”. That’s another, likely more common problem. Can these autonomous cars read signs?

  • avatar
    natebrau

    I know this will be buried, but I’ve got to say that there’s a real lack of information out there about how autonomous cars work.

    First, they drive using learning algorithms. They really do learn: the point of driving through the real world is that the data is used to train the learning algorithm on valid responses. And the bigger that data set is (that is, the more the autonomous cars drive), the more they learn and the better they get.

    Prof. Andrew Ng has a really nice class on machine learning which discusses this (amongst other things):
    http://see.stanford.edu/see/courseinfo.aspx?coll=348ca38a-3a6d-4052-937d-cb017338d7b1
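
    As a toy illustration of that effect (plain scikit-learn on synthetic data, nothing to do with Google’s actual stack), the same algorithm trained on more samples generally scores better on data it has never seen:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # synthetic "driving decisions" stand in for real logged miles
    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for n in (100, 1000, len(X_train)):  # more miles driven = more training data
        model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        print(n, round(model.score(X_test, y_test), 3))  # accuracy climbs with n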

    Second, what’s really alarmist about this article is its conviction that a corner case (the car chooses to kill its passenger rather than kill others) is the general case (the car, monitoring the situation from afar, avoids the potential accident by braking, swerving, etc.). This is really just the usual philosophical “Trolley Problem”, with the autonomous car choosing which outcome to take.
    http://en.wikipedia.org/wiki/Trolley_problem

    We can go further into utilitarian ethics, but this is already long for a comment, and the set-up is pretty obvious. Really, Betteridge’s Law applies here, and the answer is “No.”
    http://en.wikipedia.org/wiki/Betteridge's_law_of_headlines

  • avatar
    Greg Locock

    As usual, Derek puts zero thought into an article regurgitating somebody else’s ideas.

    So here’s the thing.

    Your robot driver is a worse driver than you. Your robot driver will always be a worse driver than you. Faced with that need to decide what to do in a critical situation, the robot driver will be worse than you.

    But, 9 times out of 10, the robot driver won’t be in that critical situation, because it will have been paying absolute attention to the environment around it, instead of stuffing around on Facebook (Jack, passim), daydreaming about the passenger, retuning the radio, eating a burger, or falling asleep.

    Trains aren’t safer than cars because train drivers have mega driving skillz; they are safer because the operating environment has been linearized. Suppose your robot driver had an automatic override such that if there were a problem one mile ahead, its speed would drop to 45 mph, and so on. It no longer needs to make last-minute decisions; by the time the difficult decision has to be made, it can park instead of throwing you over a cliff.
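
    A minimal Python sketch of that kind of override; the distances and speeds here are invented for illustration:

    def speed_limit_mph(cruise_mph, hazard_distance_miles):
        # returns the allowed speed given the nearest reported hazard ahead
        if hazard_distance_miles is None:
            return cruise_mph            # clear road: carry on at cruise speed
        if hazard_distance_miles <= 0.25:
            return 0                     # hazard is close: stop and park
        if hazard_distance_miles <= 1.0:
            return min(cruise_mph, 45)   # problem a mile out: drop to 45 mph
        return cruise_mph

    print(speed_limit_mph(70, 0.8))   # -> 45
    print(speed_limit_mph(70, None))  # -> 70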

    • 0 avatar
      Kenmore

      Absolutely!

      The robot will have car-, house- and shrubbery-penetrating FLIR to spot and track that jogger or game of catch from a mile away, plus an uninterruptible realtime down-looking video link from the drone cloud, PLUS the predictive AI to know which of the thousands of potential roadkill subjects will be the one popping out in front of the vehicle and exactly when and where that will happen, PLUS the supplemental ability to link with any crow-shaman passing through the area. No surprising Mr. Roboto!

      Silly Derek.

    • 0 avatar
      hybridkiller

      Mr Locock, there have been several very astute comments in this thread, but yours is by far the most insightful (imo).

  • avatar
    Kenmore

    It’s so uplifting when people of faith find one another.

