Self-driving cars are a staple of many sci-fi movies. The image of civilians punching in addresses, then sitting back, drinking coffee, and reading digital news while their automated cars whisk them to work past gridlocked traffic still feels alien enough to belong in a future utopia.
For risk communications, marketing and public relations teams, that future may be now.
The MIT Technology Review article, “Why Self-Driving Cars Must Be Programmed to Kill,” raises interesting questions about why consumers may want to opt out of that future:
“How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”
If the Technology Review’s goal was to highlight the difficulties of programming artificial intelligence, it succeeded. If the goal was to make us salivate over these cars, it may have failed.
It’s fairly obvious that the Technology Review’s goal was to inform, not persuade. But as interesting as the moral conundrum is, it raises points that extend well beyond transportation technology.
- Will consumers buy products that may kill them, their families and friends once they learn the programmed default is to save other lives first?
- How do you get broad buy-in? Is there enough time to build and execute a workable adaptation plan, or is the technology already far ahead of us?
- Given new expectations of corporate social responsibility, can a socially conscientious manufacturer remain honorable if its officers choose to design, build and sell such a vehicle?
- With so many high-stakes philosophical decisions arising even before basic R&D begins, when should the risk, marketing and public relations departments enter the manufacturing conversation?
- Is that even our privilege? And what do we budget for regulators, legislators and courts as they second-guess those philosophical choices?
When we pilot ourselves, there’s a measure of control, a sense of fault, a perception that as long as we remain aware of our surroundings, we can keep our family safe. Now, we may be asked to create attitudinal change, to persuade adults it’s a great idea to delegate control, and to market an automated vehicle that is programmed to sacrifice children under specific circumstances.
The questions just keep coming:
- Who buys a car that may run into a tree at 70 mph because it mistakes an animal or wind-blown tumbleweed for a child?
- What happens once a single state decides animals have human rights?
- Is the buyer who delegates safety to a manufacturer more likely to sue?
- How much must the manufacturer budget to accommodate these new risks?
- How much more regulation will evolve to ensure the public has power to second-guess the manufacturers’ decision trees?
Undoubtedly, self-driving cars are coming. Are we ready?
Source: http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/