It sounds great to many people: Jump in a car, push a few buttons and, voilà, it takes you wherever you tell it to go while you sit back, catch up on the news, play a game or take a nap.
But it turns out that self-driving vehicles have a much more complicated side than that push-button scenario might suggest.
And while a state-of-the-art global positioning system (GPS) will
certainly be standard technology on such vehicles, University of
Delaware philosopher and ethicist Tom Powers wants something else in the
mix: a moral compass.
That's a tall order that lags far behind the fast-forward technology.
But it's mission critical. The way vehicles are programmed to reach a
destination and respond to changing conditions may have life-and-death consequences.
"The vehicles themselves will be agents," Powers, director of UD's Center for Science, Ethics and Public Policy,
said during a recent UD Scholar in the Library seminar. "We want to
purposely think about that. If they are agents in a moral sense, with
assumption of decision-making capability, it is no longer human beings
but mechanisms that will be able to make decisions that we consider moral."
Such questions and many others are drawing new audiences to
Powers' lectures. In addition to philosophers, ethicists and other
researchers, Powers is talking to transportation officials, urban
planners, insurance company representatives, lawyers and automakers.
Some automakers, Volvo for example, already are moving full speed
ahead to develop and introduce such vehicles. On its website, Volvo says
it believes its first unsupervised autonomous vehicles will reach the
market by 2021.
Ford Motor is pointing to a 2021 date, too, and recently announced it
had reached a $1 billion deal for a robotics startup as it continues
development of a "virtual driver" system.
Steve Dellenback, executive director in the Automation and Data
Systems Division at Southwest Research Institute in San Antonio, Texas,
told an audience at the 2016 National Association of Science Writers conference that he doubts such vehicles will be a routine presence on public roadways anytime soon.
Dellenback, who has been working with autonomous vehicle technology
for more than a decade, said some farms are using driverless vehicles
and the military is exploring possibilities. But there are many
challenges, including cybersecurity, the impact of weather conditions
and how to manage unmapped areas and unconventional terrain.
Tom Powers, associate professor of philosophy, is director of UD's Center for Science, Ethics and Public Policy.
Powers wants all concerned to be thinking about what these vehicles will mean for safety, freedom, equity and sustainability.
Automation is not a new phenomenon, of course. Automatic
transmissions, anti-lock brakes and cruise control are examples of
functions that have morphed from all-manual to increasing degrees of automation.
"What we're talking about now is a degree of automation to the point
where human beings aren't doing anything at all," Powers said. "...
There is a moral dilemma that must be taken into account when we design these vehicles."
What is the dilemma? Philosophers and ethicists have long debated the
"trolley problem," which asks what you ought to do if you were at the
controls of a track switch and saw a runaway trolley heading toward two
unavoidably fatal scenarios. Do nothing and the trolley kills five
people unable to escape on the tracks ahead. Pull the switch and the
trolley kills one person on the side track. That seems to be the (mathematically) humane solution, but that person wouldn't have died without your intervention. And what if that one person is your own child?
Crash-avoidance technology is now included in some new vehicles,
alerting drivers if they drift into another lane or are headed toward an
object. But evasive maneuvers work best if all vehicles in the vicinity have similar capacity. And crash-avoidance algorithms will have to face something like the trolley problem in at least some cases.
Many variables can be addressed in programming, but how are the
values of specific options calculated? And who contributes to those calculations?
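To see where those values enter, here is a minimal sketch of such a calculation in Python. Everything in it is a hypothetical assumption made for illustration: the maneuver names, the harm scores and the idea that harm can be reduced to a single number. No real vehicle system is represented.

```python
# A deliberately simplified, hypothetical sketch of the value judgment a
# crash-avoidance algorithm would have to encode. The maneuver names and
# harm weights are invented for illustration only.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected-harm score."""
    return min(options, key=lambda option: option["expected_harm"])

# Whoever assigns these numbers is answering the trolley problem in code:
# how many people are at risk, and how much weight each outcome carries.
options = [
    {"name": "stay_course", "expected_harm": 5.0},  # five people on the road ahead
    {"name": "swerve",      "expected_harm": 1.0},  # one person on the side
    {"name": "brake_hard",  "expected_harm": 2.5},  # some chance of hitting either
]

print(choose_maneuver(options)["name"])  # prints "swerve"
```

The arithmetic is trivial; the contested part is who decided that one outcome scores 5.0 and another 1.0, and on what moral grounds.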
In the January edition of Prism, a monthly publication of the
American Society for Engineering Education, Aditya Johri of George Mason
University asks what role engineers, designers and consumers should play.
"Machines can learn from their users, change their functionality, and in turn change how users respond," Johri writes. "Now that actions are programmable, should it be the job of the engineers to do so? Should designers be made to test and use their inventions before unleashing them onto the public? Should users be involved more in the design?"
If vehicles are programmed to follow the rules of the road and never
cross a double yellow line, for example, what happens if there is an
obstruction or a perilous situation ahead that the vehicle cannot get around without breaking that rule?
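As a thought experiment, that bind might look something like the sketch below. The rule, the sensor flags and the fallback behavior are all invented for illustration; the point is only that a rigid rule settles the outcome before the emergency ever happens.

```python
# A hypothetical sketch of the rule conflict described above.

NEVER_CROSS_DOUBLE_YELLOW = True  # a hard-coded rule of the road

def plan_path(obstruction_ahead, collision_imminent):
    """Follow the rules of the road, even when they foreclose the only escape."""
    if obstruction_ahead and collision_imminent:
        if NEVER_CROSS_DOUBLE_YELLOW:
            # The rule forbids the only escape route, so the outcome was
            # chosen by the programmer long before this moment.
            return "brake_and_accept_collision"
        return "cross_double_yellow"
    return "stay_in_lane"

print(plan_path(obstruction_ahead=True, collision_imminent=True))
```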
And how could this programming be used for marketing purposes? What
could happen, for example, if an automaker promises consumers that its
vehicle will protect itself over all other options? Something like:
"Your family, above all else." Could that mean the vehicle opts to drive
over seven people to avoid a fender-bender?
And who is responsible for that decision? Who will stand before the
judge? The programmer? How many were involved in the design of that
software and what roles did they play?
"There is moral complexity in these crash decisions," Powers said.
Autonomous vehicles will require restructuring of highway systems and
accommodation of bicycles and pedestrians. And UD researchers are
consulting with the state Department of Transportation on those changes.
It's important to think about these things sooner rather than later, Powers said.
"What values can we support or institute through information technology and what values might be left behind?"
The answer, he said, may be waiting for us on the highways.
Article by Beth Miller; photo by Kathy F. Atkinson; illustration by Jeff Chase