Driver-less cars and the rise of moral machines

Near the end of a New Yorker piece titled “Moral Machines” appears something like a thesis: “As machines become faster, more intelligent and more powerful, the need to endow them with a sense of morality becomes more and more urgent.” But that statement raises a question: can conscience be encoded in machines?

The article, by Gary Marcus, begins by acknowledging that driver-less cars are now legal in three states. Marcus then suggests that “eventually (though not yet), automated vehicles will be able to drive better, and more safely than you can; no drinking, no distraction, better reflexes and better awareness (via networking) of other vehicles.” The piece then transitions into a discussion of robotic soldiers, assuming that drone warfare is only the first wave.

With the example of driver-less cars, it’s not difficult to imagine a scenario in which automobile transportation is something like a highly sophisticated, computerized public transit. And one can imagine our robo-warfare developing beyond drones into machines more resembling human soldiers in awareness and perhaps appearance - programmed with directives to shoot or not shoot in certain situations. But in each of these examples we’re imagining machines with programmed systems that have ethical implications, which is something quite different from coding machines with the ability to choose to act morally. The machines would be wholly dependent on the ethical content that we give them. Does this really bring our robots close to being human or behaving morally? Can ethics be whittled down to “conscience,” “self-awareness” and other, similar modern-ethical terminologies?

The troubling part of an otherwise fascinating piece comes near the end, when Marcus offers this telling lament: “The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation). What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”

What’s disturbing here is the baseline assumption that human moral failure might be characterized as something like a lack of formal exactness - the absence of a more attuned awareness that our future machines might be able to achieve for us, if they can move beyond what our philosophers “devise.” You might say the expectation is that a kind of revelatory Christ-machine will one day show itself, leading us into heavenly efficiency, precision and that treasured height of consciousness: total self-awareness.

But I’m willing to wager that we’d come to find that we’re actually in need of a second coming, one which might restore us to those lost terminologies: “character,” “virtue” and “narrative,” for instance. Maybe humanity’s ethical deficiencies aren’t a lack that machines can fix because the problems are essentially human.

Machines can be a help to human beings in countless ways. But there’s an assumption implicit in the suggestion of robotic ethics that human moral deficiencies are not a matter of good and evil, but of the absence of education, therapy or technical ability. We’re deceived if we think efficiency, precision and heightened awareness can restore righteousness to the human situation, for these are not the primary qualities of goodness or what it means to be human.

Goodness revolves around a telos - a baseline purpose and imperative - to love one another. Who can give us this purpose but a creator God who is love? Who can make this purpose into an imperative but a creator God who is love? To resolve our ethical deficiencies, we need a human savior who is God - not RoboCop.

Comments (6)

Thanks for a great read, Nick! I agree with your questions about the assumptions behind robotic ethics (e.g., "that human moral deficiencies are not a matter of good and evil, but of the absence of education, therapy or technical ability").

Still, I wonder whether we sometimes erroneously assume our technology to be morally neutral. That is, I can "act on" my iPhone whatever purposes I want, but my iPhone has no ability to act on me. I'm not sure this is where you're going, but I think this is another extreme that needs to be avoided: when I think about it, so many of the products in my life (try to) influence MY morality--what I think is right, wrong, desirable, or just "cool".

So, I guess, even though I would agree that machines can't have a conscience, they DO take on the morality of their maker, don't they? Like humans, machines are all made with a telos too! Unlike humans, however, that telos is not necessarily good (sometimes it just drives the ravenous consumerist cycle). That telos is, however, very intentional.

If we assume that these new (or any) technologies do not reflect the "image" or telos of their makers, I think we're setting ourselves up for trouble. So while their morality may not be self-determined, I would argue that it's very real.
Hey Adrian,

Thanks for commenting!

So are you saying that the following nuance that I made in the piece isn't strong enough?

"But in each of these examples we’re imagining machines with programmed systems that have ethical implications..."
This is unsettling in many, many ways. As someone who appreciates useful, efficient and precise technology and engineering, as well as what it means to "be human," I find there's a lot to think about and struggle with in this article.

First, I really hope we can learn some things from the science fiction genre, because I think it has a lot to say--as even the New Yorker article acknowledged.

One bit that I latched onto in both articles is the idea that philosophers in the 21st century have an impact on mainstream cultural/national/military ethics. Maybe they do to some extent, in a way that I'm not aware of, but when it comes to military procedure (drones, humanoid AI drones, etc.), I get the feeling that because those higher in command spend a great deal of time in education, they (perhaps rightly?) become the ethical philosophers who determine what is ethically acceptable.

But I'm not so sure. The US military, in conjunction with the White House, determines what percentage of civilian death is acceptable in (drone) strikes. If I can find the article I read this in, which specifically refers to drone strikes, I'll put it up. If that's truly the case, I think we have a long ethical road ahead, and I'm clearly not the only one: http://www.telegraph.co.uk/news/worldnews/asia/pakistan/8695679/168-children-killed-in-drone-strikes-in-Pakistan-since-start-of-campaign.html -- if my quick math serves me correctly, 7-8% of all drone strike deaths in Pakistan since the start of the campaign are children, and well over 30% are civilians. I know there have been terrible ethical philosophers to come down the line in the last few thousand years, but 30% civilian death is not tenable on any reasonable ethical ground. But what is reasonable? Fortunately, I think we're in a cultural climate where this kind of thing is less and less acceptable.

I think some of this relates to the questions you posed:
1/ "The machines would be wholly dependent on the ethical content that we give them. Does this really bring our robots close to being human or behaving morally?" -- Unfortunately I think it does, though maybe not in the way you mean it. Like the first comment noted, we give a technology the responsibility to perform a function -- what we think is necessary, and therefore a piece of who we are. Who we are in the Pakistan/drone illustration is deeply disturbing.

2/ "Can ethics be whittled down to “conscience,” “self-awareness” and other, similar modern-ethical terminologies?" Can? Yes, and much worse things like statistics. Should? No.

//.02
This was a fascinating, if troubling, read. I agree with you that reducing morality to a kind of efficiency seems to water down morality so much you're almost sure to lose something crucial there. I'm also not keen on using the law as a substitute for morality, since a big part of morality involves having the freedom to choose whether to do good or evil. (This is actually a large part of why I am pro-choice: not because I think abortion is okay, but because I think it's important that mothers choose life rather than having it legally mandated.)

As I was reading this, the question that jumped out to me is how different we are from our machines. I believe the heart of morality is about making choices: being able to do A or B, and choosing the better of those two options. But science seems to point to several ways that we are "programmed," too: our genes, our early experiences that form our characters, and so on. To be truly human, we must find a way to see our limits in a different way than the either/or way our society seems to prefer. If you have a horrible temper, that may mean you have to work harder at controlling it, and there may be limits - things that are beyond your control. Similarly if you're depressed or mentally ill or if you have physical limits on what you can do with your body. But these don't lock us into only one option, and we can still choose the best course available to us. Just because I cannot fly to save a child trapped in a tree doesn't mean I shouldn't get a ladder and climb up to help her.

Robots are stuck on one track. There is no choice to be made, and thus no will, no telos, no morality. Humans may have our options hemmed in a bit, but that doesn't mean there's no choice to be made.
Nick,

Bravo for this.

I'm reminded of a review essay Terry Eagleton wrote in the FT ( http://www.ft.com/intl/cms/s/2/3fa57592-5be4-11e0-bb56-00144feab49a.html ) in which he responded to Simon Baron-Cohen's thesis that evil is somehow a theory-of-mind error, a failure to understand fully the mental state that arises out of the circumstances of another.

Eagleton (rightly) dismissed that notion, arguing that evil requires evil intentions but also that it is less tied to emotional state than to action.

Of course to a consequentialist utilitarian the all-knowing machine that can calculate all possible outcomes in the same manner as a chess-playing computer is the ideal moral actor because it can optimize the utilitarian calculus. Given that that is what much of moral dialogue is now reduced to, Marcus's speculation isn't too surprising.

I suppose that before we set out to make "good" machines, we ought to first decide what we mean by "good."

js
Reading it again, I think the argument you made is strong enough (and more focused than mine). I promoted this post to a couple of friends, and my comments are more along the lines of conversations I've been having with others who want to argue that morality-free technology exists.

Forgive me for getting a little sidetracked here, but thanks again for the insight!

 
