
For driverless cars, a moral dilemma: Who lives or dies?

18 Comments
By MATT O'BRIEN

The requested article has expired and is no longer available. Any related articles and user comments are shown below.



There is, of course, a third option: self-destruct. If it truly came down to choices like the examples given in the article here, the option to self-destruct should be considered as well.

3 ( +3 / -0 )

Isaac Asimov designed his "Three Laws of Robotics" to be error-prone, so that he could write stories about their failures. They were never meant as a serious guide to creating good robotic thinking; his stories are great examples of how the laws can fail in many situations. You should read his stories before you say we should use such simplistic laws!

The other problem is that even making a robot understand many of the terms in the "Three Laws" is nearly impossible. Humans themselves disagree about how they should be interpreted, and it makes for great arguments.

But the laws served Asimov's purpose well, by giving him a structure to write within.

2 ( +2 / -0 )

Imagine you’re behind the wheel when your brakes fail. As you speed toward a crowded crosswalk, you’re confronted with an impossible choice: veer right and mow down a large group of elderly people, or veer left into a woman pushing a stroller. Now imagine you’re riding in the back of a self-driving car. How would it decide?

The failing brakes would be a moral dilemma on the manufacturer, full stop. But this is a BS argument in the first place: how many groups of elderly people AND mothers with strollers will be saved by the elimination of drunk drivers, exhausted people falling asleep at the wheel, and basic human error?

Self-driving cars may not be completely flawless, but compared to human drivers they are a much better option. Watch and see: the fewer human drivers and the more self-driving cars, the fewer casualties and accidents there will be.

1 ( +5 / -4 )

Seemingly stupid question that may give somebody food for thought:

Does a driverless car need to get a licence to drive before it can legally drive, like I have to, and probably you too?

At least I do not need a new algorithm.

1 ( +1 / -0 )

Therefore, a self-driving car will be faced with exactly the same choices a driver would make. Is that BS??

Yes, actually, it is. Humans are no better at making a split-second decision than a computer. Therefore my argument still stands.

1 ( +2 / -1 )

Whatever happens in this situation, a robot is still going to be able to react faster and control the vehicle better than a human.

Presumably the robot would realize that the brakes had malfunctioned much faster than a human would. Hollywood movies tell us that people react to this situation by mashing the non-functioning brake pedal for several seconds. A computer could be programmed to whack on the handbrake, change down to activate the engine brake, and turn on the hazard lights all at the same time. This is all assuming that robot-powered cars drive in a manner that requires heavy brake usage in the first place. My guess would be that they will drive at lower speeds in built-up areas but with much shorter distances between cars, so traffic moves more smoothly and lower speeds do not mean longer journeys. So this question is probably a straw man.
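As a very loose sketch, here is what that simultaneous response might look like. The vehicle object and every method name here (apply_handbrake, downshift_for_engine_braking, hazard_lights_on) are invented for illustration, not part of any real vehicle control API:

```python
# Hypothetical sketch only: these method names are invented,
# not taken from any real vehicle control system.

def on_brake_failure(vehicle):
    """Trigger every available deceleration measure at once,
    rather than serially, as a panicking human might."""
    vehicle.apply_handbrake()               # mechanical parking brake
    vehicle.downshift_for_engine_braking()  # change down a gear
    vehicle.hazard_lights_on()              # warn surrounding traffic
```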

1 ( +1 / -0 )

Given that the robot can't distinguish age, you minimize the potential loss of life. When panic sets in, I doubt a driver in this situation would do much better.

0 ( +1 / -1 )

Use engine braking and the parking brake. Then, a 180 spin.

0 ( +0 / -0 )

@mt9334

In answer to your question, the simple response is neither. While it is easy to say the needs of the many outweigh the needs of the few, you, as a human, can't put a value on human life; every life is priceless! A computer, however, possibly could put a value on life through the use of data and percentage chances. Because of that process it will not understand the complex moral or emotional importance we as human beings give to life. For a machine, however intelligent, the concept of the value of life is meaningless; it can't process that kind of data, as it can never be broken down so easily into computer-readable code, mainly because in its simplest form it is a chemical and biological function nonexistent in AI!

Your question is a catch-22 scenario: whatever choice is made, someone loses! What I am postulating is that a machine-driven autonomous vehicle will have no emotional considerations when making a choice, whereas a human being has the potential to put themselves above mere mathematical algorithms and act according to their conscience. A computer cannot, and will never be able to, do that!

0 ( +1 / -1 )

A computer will play the odds every time and will never be able to "feel" moral dilemmas! For example, if a computer-controlled car was facing a situation where it was crashing into a wall, it would identify all the probable options and assign a percentage chance of success to each one. So it could have the following decision to make:

Save its occupant from hitting the wall, but that involves running over a group of schoolchildren: 45%

Avoid running over the children, but the car and the occupant hit the wall, resulting in the loss of the occupant: 40%

Apply the brakes and swerve into a crowd: 15%

The computer will play the odds and callously commit to what it logically and statistically considers the most likely outcome: in this case, running over the schoolchildren, because in its logic that was, at that moment, statistically the right thing to do. Now I'm not saying that a human would not wish for self-preservation and take that option as well, but it does remove the moral possibility of someone trying to avoid the children!
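For what it's worth, that "play the odds" selection is trivially easy to write down. A toy sketch, using the invented options and percentages above (the numbers are illustrative only, not real data):

```python
# Toy model of purely statistical option selection, using the
# made-up scenario above; all figures are illustrative only.
options = {
    "run over the schoolchildren (occupant saved)": 0.45,
    "hit the wall (occupant lost)":                 0.40,
    "brake and swerve into the crowd":              0.15,
}

# The machine simply takes the highest success probability,
# with no weighting for the moral cost of each outcome.
choice = max(options, key=options.get)
print(choice)  # -> run over the schoolchildren (occupant saved)
```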

Legally, who is to blame if the car takes the option of running over the children? I don't think any law exists in any country under which you can sue a machine or computer! Car companies will also be doubly sure to avoid all responsibility for autonomy software! So who is to blame? The driver who inputted the destination? The car manufacturer? The company who supplied the software? The software programmer? Who indeed?

While autonomous vehicles are OK, they will never be truly efficient until all the cars on the road are the same and they can all talk to each other! Until then, ideally a human needs to be able to take control when necessary and override the computer in an emergency. This would at least bring a sense of humanity and compassion into the mix!

-1 ( +1 / -2 )

theFu,

Isaac Asimov would be so proud of you!

In the real world, however, robots are not sentient individuals; they are merely machines responding to instructions written by a human being. Asimov, in fiction, posited a self-aware machine that would need to have "laws" such as these three. In reality, robots are not self-aware (and never will be).

Awareness of a moral sense, in contrast to instinct, is a human attribute. Man is much more than a "developed animal".

-1 ( +1 / -2 )

First intensive surveillance cameras. Next internet snooping. Next computer-controlled cars. Next GM food. Next male pregnancy.

-1 ( +0 / -1 )

veer right and mow down a large group of elderly people

-1 ( +0 / -1 )

Premise: Drivers are faced with many choices, some are matters of life or death.

Therefore, a self-driving car will be faced with exactly the same choices a driver would make.

Is that BS??

Will accidents be reduced with driverless cars? Possibly. Does this negate the above? No.

-2 ( +1 / -3 )

@Joyridingonthetitanic,

What has more value, 1 life or 10?

Is it morally correct to take the life of 1 innocent individual in order to save the lives of 10 others?

However you would respond, please give a grounding for your answer.

Thanks.

-2 ( +0 / -2 )

MIT certainly knows this, but ... there are 3 laws of robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

-2 ( +0 / -2 )

In an accident situation where a human is driving, the decision is based on the individual driver; it will be based on the personality of the driver, their own morals and beliefs.

My suggestion for a solution is this: give each driver a unique 'key' or profile. The first time you start the car, you will be asked a series of psychological questions that measure your beliefs and way of thinking ("protect myself at all costs", "save more pedestrians", "sacrifice myself"). This only needs to be done once, but I guess you could update it any time you want. You would keep this profile saved on your key, and it could possibly be used for any autonomous car you get into.

The car's algorithms would then factor this in when making life/death decisions. Since the decision is based on your psychological profile, the responsibility for any deaths caused would rest on the driver, much as if they were driving normally (of course, equipment failure is unavoidable in both instances).
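To illustrate the proposal (everything here is hypothetical; no real car exposes anything like this), the profile could be a simple stored weight that the decision logic blends into its outcome scoring:

```python
# Hypothetical sketch of the "driver profile" idea; the fields,
# weights and scoring rule are all invented for illustration.
from dataclasses import dataclass

@dataclass
class DriverProfile:
    # 0.0 = "sacrifice myself", 1.0 = "protect myself at all costs"
    self_preservation: float

def score_outcome(profile: DriverProfile,
                  occupant_survival: float,
                  pedestrian_survival: float) -> float:
    """Blend an outcome's survival estimates using the driver's
    stored preference, so the choice (and the responsibility)
    traces back to the profile's owner."""
    w = profile.self_preservation
    return w * occupant_survival + (1 - w) * pedestrian_survival
```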

-2 ( +0 / -2 )

The article reads: "Engineers already program cars to make moral choices, such as when they slow down and leave space after detecting a bicyclist."

Engineers who create programs make moral choices. Cars don't make moral choices. Let's not allow the creators to distance themselves from their creations. Are these "moral choices" universal? Will there be an international convention to determine just what these "moral choices" are? For my money, such self-driving vehicles will just lead to new restrictions (especially for bicyclists), all for the benefit of investors and manufacturers under the guise of greater safety. Another scam from the high-tech folks.

-2 ( +0 / -2 )
