Discussing
Driver-less cars and the rise of moral machines

Nick Olson

Adrian deLange
December 21, 2012

Thanks for a great read, Nick! I agree with your questions about the assumptions behind robotic ethics (e.g., "that human moral deficiencies are not a matter of good and evil, but of the absence of education, therapy or technical ability").

Still, I wonder whether we sometimes erroneously assume our technology to be morally neutral. That is, we assume that I can "act on" my iPhone for whatever purposes I want, but that my iPhone has no ability to act on me. I'm not sure this is where you're going, but I think this is another extreme that needs to be avoided: when I think about it, so many of the products in my life (try to) influence MY morality--what I think is right, wrong, desirable, or just "cool".

So, I guess, even though I would agree that machines can't have a conscience, they DO take on the morality of their maker, don't they? Like humans, machines are all made with a telos too! Unlike humans, however, that telos is not necessarily good (sometimes it just drives the ravenous consumerist cycle). That telos is, however, very intentional.

If we assume that these new (or any) technologies do not reflect the "image" or telos of their makers, I think we're setting ourselves up for trouble. So while their morality may not be self-determined, I would argue that it's very real.

Nick Olson
December 21, 2012

Hey Adrian,

Thanks for commenting!

So are you saying that the following nuance that I made in the piece isn't strong enough?

"But in each of these examples we’re imagining machines with programmed systems that have ethical implications..."

Matthew Lee Grannell
December 21, 2012

This is unsettling in many, many ways. And, as someone who appreciates useful, efficient, and precise technology and engineering, as well as what it means to "be human," there's a lot to think about and struggle with in this article.

First, I really hope we can learn some things from the science fiction genre, because I think it has a lot to say, as even the New Yorker article noted.

One bit that I latched onto in both articles is the idea that philosophers in the 21st century have an impact on mainstream cultural/national/military ethics. Maybe they do to some extent, and in a way that I'm not aware of, but when it comes to military procedure (drones, humanoid AI drones, etc.), I get the feeling that because those higher in command spend a great deal of time in education, they (perhaps rightly?) become the ethical philosophers who determine what is ethically acceptable.

But I'm not so sure. The US military, in conjunction with the White House, determines what percentage of civilian deaths is acceptable in (drone) strikes. If I can find the article I read this in, which specifically refers to drone strikes, I'll put it up. If that's truly the case, I think we have a long ethical road ahead, and I'm clearly not the only one: http://www.telegraph.co.uk/news/worldnews/asia/pakistan/8695679/168-children-killed-in-drone-strikes-in-Pakistan-since-start-of-campaign.html

If my quick math serves me correctly, 7-8% of all drone strike deaths in Pakistan since the start of the campaign are children, and well over 30% are civilians. I know there have been terrible ethical philosophers to come down the line over the last few thousand years, but a 30% civilian death rate is not tenable on any reasonable ethical ground. But what is reasonable? Fortunately, I think we're in a cultural climate where this kind of thing is less and less acceptable.
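To make that quick math explicit -- working backward from the Telegraph's 168 figure and my own rough percentages, not from any official casualty total:

168 children / 0.08 ≈ 2,100 and 168 / 0.07 ≈ 2,400 total drone-strike deaths implied;
0.30 x 2,100-2,400 ≈ 630-720 civilian deaths, at a minimum.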

I think some of this relates to the questions you posed:
1/ "The machines would be wholly dependent on the ethical content that we give them. Does this really bring our robots close to being human or behaving morally?" -- Unfortunately, I think it does, though maybe not in the way you mean it. As the first comment noted, we give a technology the responsibility to perform a function -- what we think is necessary, and therefore a piece of who we are. Who we are in the Pakistan/drone illustration is deeply disturbing.

2/ "Can ethics be whittled down to “conscience,” “self-awareness” and other, similar modern-ethical terminologies?" Can? Yes, and much worse things like statistics. Should? No.

//.02

Marta L.
December 21, 2012

This was a fascinating, if troubling, read. I agree with you that reducing morality to a kind of efficiency seems to water down morality so much you're almost sure to lose something crucial there. I'm also not keen on using the law as a substitute for morality, since a big part of morality involves having the freedom to choose whether to do good or evil. (This is actually a large part of why I am pro-choice: not because I think abortion is okay, but because I think it's important that mothers choose life rather than having it legally mandated.)

As I was reading this, the question that jumped out to me is how different we are from our machines. I believe the heart of morality is about making choices: being able to do A or B, and choosing the better of those two options. But science seems to point to several ways that we are "programmed," too: our genes, our early experiences that form our characters, and so on. To be truly human, we must find a way to see our limits in a different way than the either/or way our society seems to prefer. If you have a horrible temper, that may mean you have to work harder at controlling it, and there may be limits - things that are beyond your control. Similarly if you're depressed or mentally ill or if you have physical limits on what you can do with your body. But these don't lock us into only one option, and we can still choose the best course available to us. Just because I cannot fly to save a child trapped in a tree doesn't mean I shouldn't get a ladder and climb up to help her.

Robots are stuck on one track. There is no choice to be made, and thus no will, no telos, no morality. Humans may have our options hemmed in a bit, but that doesn't mean there's no choice to be made.

Jason Summers
December 21, 2012

Nick,

Bravo for this.

I'm reminded of a review essay Terry Eagleton wrote in the FT ( http://www.ft.com/intl/cms/s/2/3fa57592-5be4-11e0-bb56-00144feab49a.html ) in which he responded to Simon Baron-Cohen's thesis that evil is somehow a theory-of-mind error, a failure to understand fully the mental state that arises out of the circumstances of another.

Eagleton (rightly) dismissed that notion, arguing that evil requires evil intentions but also that it is less tied to emotional state than to action.

Of course, to a consequentialist utilitarian, the all-knowing machine that can calculate all possible outcomes in the same manner as a chess-playing computer is the ideal moral actor, because it can optimize the utilitarian calculus. Given that this is what much of moral dialogue is now reduced to, Marcus's speculation isn't too surprising.

I suppose that before we set out to make "good" machines, we ought to first decide what we mean by "good."

js

Adrian deLange
December 26, 2012

Reading it again, I think the argument you made is strong enough (and more focused than mine). I promoted this post to a couple of friends, and my comments are more along the lines of conversations I've been having with others who want to argue that morality-free technology exists.

Forgive me for getting a little sidetracked here, but thanks again for the insight!
