Strange Bedfellows or: How Sergio Got His Way and Created a Fleet of Robot Pacificas


A dream collaboration has finally become a reality for Fiat Chrysler Automobiles CEO Sergio Marchionne.
After angling for a partnership for over a year, FCA has announced a joint venture with Google’s Self-Driving Car Project. This is the first time the tech giant has worked directly with an automaker to test its secretive autonomous vehicle technology.
The project will see Google work its sensor-and-software magic on a fleet of about 100 purpose-built 2017 Chrysler Pacifica Hybrids, with engineering teams from both companies working alongside each other in southeast Michigan.
Working out the bugs on a fleet of ghost minivans should give the teams a greater understanding of the challenges that must be overcome before the technology goes mainstream. Google already has pilot projects underway in four U.S. cities.
“The experience both companies gain will be fundamental to delivering automotive technology solutions that ultimately have far-reaching consumer benefits,” said Marchionne in a statement.
Before the pilotless soccer team movers roll out across Michigan, they’ll first be tested on Google’s private California test track. The Pacificas will more than double the number of vehicles Google has to work with.
Marchionne’s enthusiasm for other people’s cutting-edge technology is well known. At the Geneva Motor Show in March, he waxed poetic about a dream partnership with his beloved Apple.
That didn’t come to pass, but the Google fling gives FCA big bragging rights in the self-driving game. The company doesn’t have the bank balance needed to snatch up startups in California (like its competitors can), so a partnership beats trying to buy a seat at the table with money it doesn’t have.
[Image: FCA US LLC]
Comments
Marchionne would wax poetic over anyone who could be a proud new parent of FCA.
IMHO non-engineers seriously underestimate the difficulty of creating a safe self-driving car. With all of the resources of NASA, Neil Armstrong STILL had to turn off the computer and manually land the LEM on the moon. And that was without being surrounded by dozens of other, distracted LEM drivers. NASA engineers simply couldn't foresee Aldrin accidentally leaving the radar altimeter switch on. And neither will Google engineers be able to foresee all of the things that go wrong on a highway - mismarked or missing lines, the vehicle in front of you backing up instead of going forward, mirages, power lines, or flooded roads. I've been driving 40+ years and even to this day I encounter novel situations that require me to react to avoid collisions. No programmer or team of programmers can possibly foresee it all.
After the first self-driving car cuts off a fuel tanker which flips and incinerates a family of six in a minivan, THEN it will get real. Even Google cannot afford the liability of thousands of driverless cars capable of making such mistakes, whereas bad drivers are only financially liable for themselves and oftentimes simply declare bankruptcy.
Drivers stuck in bumper-to-bumper traffic may be able to "zone out" with adaptive cruise control and radar braking, but that's just a small part of a car being "driverless" - there's also the infinite number of games of "chicken" or "who got here first" at four-way stops, in parking lots, etc. A "timid" driver can be just as unsafe as an aggressive one. How exactly do you program "assertive"?
A Super Mommy Wagon for the mommy who just doesn't feel like being bothered. Imagine being able to send your minivan to go pick up your kids from practice while you are at home getting your groove on. The van pulls up. The kids get in and enter an access code to head back home. A camera in the car lets Mom check in on them from her $100-per-month unlimited-data iPhone. The car navigates itself back to the house. You continue to live - wealthy - while lesser people in lesser cars have to actually spend time with their little underachievers.
Even the Washington Metro, after having a system that was designed to run automatically in the early 1970s (with the computing power of that time), still cannot have its trains run in automatic mode. And that is a closed system in a controlled environment. https://www.washingtonpost.com/local/trafficandcommuting/some-computer-driven-trains-returning-monday-to-red-line/2015/04/09/454ff044-deee-11e4-a1b8-2ed88bc190d2_story.html All that being said, the question is where the risk lies. People and systems do not adapt well to change. Autonomous cars shift the risk from driver error to computer programming error. The question is: which risk is greater and more controllable? The computer is more controllable, and as it gets better, the risk will decrease. Insurance rates for autonomous cars will drop.