Powers wants all concerned to be thinking about what these vehicles will mean for safety, freedom, equity and sustainability.
Automation is not a new phenomenon, of course. Automatic
transmissions, anti-lock brakes and cruise control are examples of
functions that have morphed from all-manual to increasing degrees of
automation.
"What we're talking about now is a degree of automation to the point
where human beings aren't doing anything at all," Powers said. "...
There is a moral dilemma that must be taken into account when we design
these cars."
What is the dilemma? Philosophers and ethicists have long debated the
"trolley problem," which asks what you ought to do if you were at the
controls of a track switch and saw a runaway trolley facing two
unavoidably fatal outcomes. Do nothing and the trolley kills five
people unable to escape on the tracks ahead. Pull the switch and the
trolley kills one person on the side track. That seems to be the
(mathematically) humane solution, but that person wouldn't have died
without your intervention. And what if that one person is your own
child?
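Stripped to arithmetic, the utilitarian version of that choice fits in a few lines of code. Here is a minimal sketch, in Python with invented casualty counts; it shows how little of the dilemma the math actually captures.

```python
# A minimal sketch of the trolley problem as bare utilitarian
# arithmetic. The casualty counts and the "minimize fatalities"
# rule are illustrative assumptions, not anything from the article.

def choose_action(do_nothing_deaths: int, pull_switch_deaths: int) -> str:
    """Pick whichever action leaves fewer people dead."""
    if pull_switch_deaths < do_nothing_deaths:
        return "pull the switch"
    return "do nothing"

print(choose_action(do_nothing_deaths=5, pull_switch_deaths=1))
# -> "pull the switch" -- the arithmetic is silent on the moral
# weight of intervening, or on who that one person is.
```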
Crash-avoidance technology is now included in some new vehicles,
alerting drivers if they drift into another lane or are headed toward an
object. But evasive maneuvers work best when all nearby vehicles have
similar capabilities. And crash-avoidance algorithms will
have to face something like the trolley problem in at least some cases
of evasion.
Many variables can be addressed in programming, but how are the
values of specific options calculated? And who contributes to those
decisions?
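One hypothetical answer is a weighted cost function: each candidate maneuver is scored against the outcomes the designers chose to count, and the planner picks the cheapest. The sketch below is an illustration in Python; the maneuvers, outcome estimates, and weights are all invented.

```python
# A hypothetical sketch of how the "values of specific options" might
# be encoded. Every number below is a design decision someone made.

WEIGHTS = {
    "occupant_harm": 1.0,    # raising this encodes "your family, above all else"
    "bystander_harm": 1.0,
    "property_damage": 0.1,
}

# Estimated outcomes (0 = none, 1 = worst case) for three invented maneuvers.
MANEUVERS = {
    "brake_in_lane": {"occupant_harm": 0.6, "bystander_harm": 0.0, "property_damage": 0.8},
    "swerve_left":   {"occupant_harm": 0.1, "bystander_harm": 0.7, "property_damage": 0.2},
    "swerve_right":  {"occupant_harm": 0.2, "bystander_harm": 0.3, "property_damage": 0.5},
}

def cost(outcome: dict) -> float:
    """Weighted sum of the outcomes the designers decided to count."""
    return sum(WEIGHTS[key] * value for key, value in outcome.items())

best = min(MANEUVERS, key=lambda name: cost(MANEUVERS[name]))
print(best)  # -> "swerve_right" under these particular weights
```

Raise the occupant weight far enough and the same planner will trade seven pedestrians for a fender-bender; the ethics live entirely in the numbers.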
In the January edition of Prism, a monthly publication of the
American Society for Engineering Education, Aditya Johri of George Mason
University asks what role engineers, designers and consumers should
play.
Machines can learn from their users, change their functionality, and
in turn change how users respond, Johri writes. Now that these actions
are programmable, should it fall to engineers to program them? Should
designers be made to test and use their inventions before unleashing
them onto the public? Should users be involved more in the design?
If vehicles are programmed to follow the rules of the road and never
cross a double yellow line, for example, what happens if there is an
obstruction or a perilous situation ahead that the vehicle cannot get
around otherwise?
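A minimal sketch of that double-yellow-line dilemma, assuming a hypothetical path-selection policy and an arbitrary risk threshold, might look like this in Python:

```python
# A hypothetical sketch of a hard traffic rule meeting an unsafe road.
# The path data, risk scores, and 0.5 threshold are invented.

def select_path(paths: list[dict]) -> dict:
    """Prefer a safe, legal path; only break the rule when none exists."""
    legal = [p for p in paths if not p["crosses_double_yellow"]]
    safe_legal = [p for p in legal if p["risk"] < 0.5]
    if safe_legal:
        return min(safe_legal, key=lambda p: p["risk"])
    # No legal path is safe: does the rule bend? Someone must decide
    # this policy in advance, long before the obstruction appears.
    return min(paths, key=lambda p: p["risk"])

paths = [
    {"name": "stay_in_lane", "crosses_double_yellow": False, "risk": 0.9},
    {"name": "cross_line",   "crosses_double_yellow": True,  "risk": 0.1},
]
print(select_path(paths)["name"])  # -> "cross_line" under this policy
```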
And how could this programming be used for marketing purposes? What
could happen, for example, if an automaker promises consumers that its
vehicle will protect its occupants above all other considerations? Something like:
"Your family, above all else." Could that mean the vehicle opts to drive
over seven people to avoid a fender-bender?
And who is responsible for that decision? Who will stand before the
judge? The programmer? How many were involved in the design of that
software and what roles did they play?
"There is moral complexity in these crash decisions," Powers said.
Autonomous vehicles will require restructuring of highway systems and
accommodation of bicycles and pedestrians. UD researchers are already
consulting with the state Department of Transportation on those changes.
It's important to think about these things sooner rather than later, Powers said.
"What values can we support or institute through information technology and what values might be left behind?"
The answer, he said, may be waiting for us on the highways.
Article by Beth Miller; photo by Kathy F. Atkinson; illustration by Jeff Chase