Writing in the National Post, Matt Gurney discusses a darker side of autonomous cars, one that many people (especially this writer, who is not exactly familiar with the rational, linear thinking that coding involves) may not have considered.
In a recent interview with PopSci, Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University, proposed a hypothetical scenario that sums up the problem. You're driving along in your robo-car, and your tire blows out. The computer in control rapidly concludes that your car is moving too quickly and has too much momentum to come to a safe stop, and there is traffic ahead. Since an accident is inevitable, the computer shifts from collision avoidance to collision mitigation, and determines that the least destructive outcome is to steer your car to a catastrophic end (over a cliff, into a tree) and thus avoid a collision with another vehicle.
The raw numbers favour such an outcome. Loss of life and property is minimized — an objectively desirable outcome. But the downside is this: Your car just wrote you off and killed you to save someone else.