May 4, 2016

2017 Chrysler Pacifica Exterior Front 3/4. Image: Fiat Chrysler Automobiles

A dream collaboration has finally become a reality for Fiat Chrysler Automobiles CEO Sergio Marchionne.

After angling for a partnership for over a year, FCA has announced a joint venture with Google’s Self-Driving Car Project. This is the first time the mega company has worked directly with an automaker to test its shadowy autonomous vehicle technology.

The project will see Google work its sensor-and-software magic on a fleet of about 100 purpose-built 2017 Chrysler Pacifica Hybrids, with engineering teams from both companies working alongside each other in southeast Michigan.

Working out the bugs on a fleet of ghost minivans should give the teams a greater understanding of the challenges that need overcoming before the technology becomes mainstream. Google already has pilot projects ongoing in four U.S. cities.

“The experience both companies gain will be fundamental to delivering automotive technology solutions that ultimately have far-reaching consumer benefits,” said Marchionne in a statement.

Before the pilotless soccer team movers roll out across Michigan, they’ll first be tested on Google’s private California test track. The Pacificas will more than double the vehicles Google has to work with.

Marchionne’s enthusiasm for other people’s cutting-edge technology is well known. At the Geneva Motor Show in March, he waxed poetic about a dream partnership with his beloved Apple.

That didn’t come to pass, but the Google fling gives FCA big bragging rights in the self-driving game. The company doesn’t have the bank balance needed to snatch up startups in California (like its competitors), so a partnership beats buying a seat at the table with money it doesn’t have.

[Image: FCA US LLC]


17 Comments on “Strange Bedfellows or: How Sergio Got His Way and Created a Fleet of Robot Pacificas...”

  • avatar

    Marchionne would wax poetic over anyone who could be a proud new parent of FCA.

  • avatar

    IMHO non-engineers seriously underestimate the difficulty of creating a safe self-driving car. With all of the resources of NASA, Neil Armstrong STILL had to take over and manually land the LEM on the moon. And that was without being surrounded by dozens of other, distracted LEM drivers. NASA engineers simply couldn’t foresee Aldrin accidentally leaving the rendezvous radar switch on. And neither will Google engineers be able to foresee all of the things that go wrong on a highway – mismarked or missing lines, the vehicle in front of you backing up instead of going forward, mirages, downed powerlines or flooded roads. I’ve been driving 40+ years and even to this day encounter novel situations that require I react to avoid collisions. No programmer or team of programmers can possibly foresee it all.

    After the first self-driving car cuts off a fuel tanker which flips and incinerates a family of six in a minivan, THEN it will get real. Even Google cannot afford the liability of thousands of driverless cars capable of making such mistakes, whereas bad drivers are only financially liable for themselves and oftentimes simply declare bankruptcy.

    Drivers stuck in bumper-to-bumper traffic may be able to “zone out” with adaptive cruise control and radar braking, but that’s just a small part of a car being “driverless,” including the infinite number of games of “chicken” or “who got here first” at four-way stops, in parking lots, etc. A “timid” driver can be just as unsafe as an aggressive one. How exactly do you program “assertive?”

    • 0 avatar

      As a Systems Engineer, I agree 100%. As you point out, there is no way to capture enough requirements to “Validate” (an SE term) such a system for all contingencies.

      Just think how hard it is for something like SYNC and other modern auto electronics to work properly, and they just had to replace known technologies (HVAC controls and radios/CDs) where you can capture all the requirements and Validate such a system.

      • 0 avatar

        We’re not looking for the perfect self-driving car, just one that is at least as good as the average driver. If we can keep the computer off the sauce, the weed, and the cellphone, then we’re halfway there.

        • 0 avatar

          Partnering with Google on this will give FCA exposure to the most experienced player in the field. In the end, a single standard will survive; FCA could do worse than being in Google’s corner.

        • 0 avatar

          You’re missing the point, VoGo. The average driver is still far better than an autonomous vehicle at dealing with unforeseen conditions.

          Can an autonomous car react to a construction worker or policeman waving it through an intersection even though the signal is red? How do you program that?

          As Wade said very well above, the potential liability, especially with a deep-pocket company like Google involved, will make the ambulance-chasers come out of the woodwork like nobody has ever seen in the history of lawyerdom.

          • 0 avatar

            Um, no. Autonomous cars will only be introduced when they are above average, including whatever ridiculous scenarios you’d like to suggest. If they aren’t better, they won’t be introduced.

            As far as the liability goes, this is another red herring. Car crashes will be reduced, meaning lower insurance costs for drivers. Liability will remain with the driver/owner as with cars still driven by an unreliable human.

            Why are you so opposed to progress? Why are you so opposed to American companies like Apple, Google and Tesla improving our lives?

            Sometimes it’s like a luddite convention around here.

        • 0 avatar

          In America, “the average driver” has about -$10K for the ambulance chasers to come for. Google, and even FCA, has more than that.

          In addition, humans are fundamentally adaptable. Even when nice and liquored up for Friday night bar-car-hopping, they’ll eventually change their tack if something doesn’t work for long enough. Meanwhile, designing much in the way of fundamental adaptability into the AI algorithms driving a car in a public setting is sure to get even the most rose-tinted program manager a bit worried.

          • 0 avatar

            Have you ever heard of car insurance? Most states require it these days. So if there is an accident, the vehicle owner is liable only for his deductible.

          • 0 avatar


            Liability insurance doesn’t come near covering Google’s cash pile. And some insurance company can’t be portrayed as an evil, insensitive bogeyman responsible for the accident, so the ambulance chasers don’t have as much leverage. When scary Google is no longer content to spy on your emails, but also builds robots that “kill a kid,” the well-indoctrinated minions in the hustle game are much more likely to fall for the whole liability racket.

          • 0 avatar

            I can’t understand your post. Would you please use the edit function to make it intelligible?

    • 0 avatar

      I’m no engineer and have nothing but skepticism for this tech. I don’t believe it will work and even if it does, I don’t believe drivers will adopt it. Personally, I wouldn’t trust it to do anything but point my car down I-70 in Kansas.

  • avatar

    A Super Mommy Wagon for the mommy who just doesn’t feel like being bothered.

    Imagine being able to send your minivan to go pick up your kids from practice while you are at home getting your groove on.

    The van pulls up.

    The kids get in.

    Enter an access code to return to home.

    A camera in the car lets mom see the kids from her $100-per-month unlimited-data iPhone.

    The car navigates itself back to the house.

    You continue to live – wealthy – while lesser people in lesser cars have to actually spend time with their little underachievers.

    • 0 avatar

      Great example, except that the parents are at work, and the van takes the kids to LAX practice, Karate, music lessons, etc.

      We could free up literally millions of workers with autonomous cars, which would be a huge boost to the economy.

  • avatar

    Even the Washington Metro, after having a system that was designed to run automatically in the early 1970s (with the computing power of that time), still cannot have its trains run in automatic mode. This is a closed system, controlled environment.

    All that being said, the question is where the risk lies. People and systems do not adapt well to change. Autonomous cars shift the risk from driver error to computer programming error. The question is: which risk is greater and more controllable? The computer is more controllable, and as it gets better, the risk will decrease. Insurance rates for autonomous cars will drop.

    • 0 avatar

      Washington Metro suffers from serious mismanagement of the entire system. Improper testing and maintenance of the system is the cause of the problems they’re having. The track circuit had been malfunctioning for 18 months prior to the 2009 crash—totally unacceptable.

      If Google and/or an operator of an automated vehicle fails to fix an issue with the electronic or mechanical systems, then the problem lies with the human who was too dumb, cheap, or lazy to get it fixed.
