Moral machines: Stop discussing thought experiments - Conditio Humana - Technology, AI and Ethics (2023)

by Isabel Schünemann

Autonomous machines like robots and self-driving cars should serve human beings. But what should they do in situations where they can’t serve everyone? To find an answer to that question, we should stop discussing moral thought experiments. Instead, we need to start collecting data and creating dialogue.


A self-driving car faces an unavoidable crash. It has only two options: it can either drive straight and kill an innocent pedestrian, or swerve and crash into a wall, killing its passenger. What should it do? If you haven’t already come across a similar story and are unsure how to respond, don’t worry. There is no straightforward answer to what the car should do. This dilemma is inspired by the trolley problem, a famous philosophical conundrum. It is probably the most discussed scenario depicting the challenges of developing autonomous machines today.

With the rapid advances in artificial intelligence, self-driving vehicles, robots and other intelligent machines will soon frequently face moral choices.[1] There are good reasons, though, why dilemmas like the one described above shouldn’t stop us from developing autonomous machines. Self-driving cars are expected to be much safer than human drivers and, although the scenario is realistic, it appears to be extremely rare. But what we can learn from discussing these extreme cases is what the automation of moral choices will confront us with, how we should approach developing solutions, and how we shouldn’t.

Turning an intuitive reaction into intent

The trolley problem represents so-called distributive moral dilemmas: situations characterized by the question of how an actor should decide when immoral behavior is to some extent inevitable. While much of the debate on the ethics of algorithms has focused on eliminating biases and discrimination, and thus on ‘getting right’ what we already know we want to achieve, we don’t yet know what the right solutions to distributive moral dilemmas are. In these moments, human moral judgement makes great allowance for human nature, a comfort we don’t extend to machines. A human driver confronted with the trolley problem is not expected to make a well-reasoned decision. If the driver didn’t speed or take any other illegal action to cause the situation, he or she will face no condemnation for an intuitive reaction in that moment. Machines, however, make calculated decisions in a split second. And their development demands directives on the outcomes of moral decisions upfront, effectively turning an intuitive human reaction into deliberate intent. Because we haven’t been confronted with intentionally immoral human behavior in these dilemmas, we haven’t established desirable outcomes for machines.



Moral theories are not enough

It’s not as though humans haven’t thought about what would be right and wrong in these situations. Moral philosophers have given distributive dilemmas like the trolley problem a great deal of thought. And over the last two decades, machine ethicists have most often turned to these discussions in their quest to create pathways for the development of moral machines.[2] But moral philosophy never produced unanimously agreed-upon answers to these dilemmas. Hypothetical events like the trolley problem were initially designed only as thought experiments for discussing different approaches to morality, not as real-life challenges to be solved. Should we maximize net social benefit, as utilitarian moral theory preaches, that is, steer the trolley in the direction that will save as many lives as possible? Or should we take a deontological approach, that is, avoid taking any proactive action that would lead to harming someone?
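To make the contrast concrete, here is a deliberately minimal sketch, in Python, of how the two theories could be encoded as decision rules for an unavoidable-crash dilemma. Everything in it is an assumption made for illustration: the Option attributes (lives_lost, requires_swerving) and the numbers are invented, and no real driving system reduces the problem to two fields.

```python
# Illustrative sketch only: toy encodings of a utilitarian and a deontological
# decision rule for a stylized unavoidable-crash dilemma. All names and numbers
# are hypothetical.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    lives_lost: int          # expected casualties if this option is chosen
    requires_swerving: bool  # True if the car must actively intervene


def utilitarian_choice(options):
    """Maximize net social benefit: pick the option with the fewest lives lost."""
    return min(options, key=lambda o: o.lives_lost)


def deontological_choice(options):
    """Avoid proactive harm: prefer options that require no intervention,
    even if staying the course costs more lives."""
    passive = [o for o in options if not o.requires_swerving]
    return passive[0] if passive else min(options, key=lambda o: o.lives_lost)


dilemma = [
    Option("stay on course", lives_lost=5, requires_swerving=False),
    Option("swerve into wall", lives_lost=1, requires_swerving=True),
]

print(utilitarian_choice(dilemma).name)    # -> swerve into wall
print(deontological_choice(dilemma).name)  # -> stay on course
```

On the same dilemma the two rules disagree, which is exactly the conflict the thought experiment was designed to expose.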

The lack of consensus on the right choice in a moral dilemma stems largely from the fact that humans are notoriously inconsistent in their moral judgement. For example, many people generally agree with the idea of utilitarianism, that is, to maximize overall social benefit. In the trolley problem they would want to save as many lives as possible, even if that means taking action and steering the trolley away from a larger group on the rails and towards a smaller group of bystanders.[3] But taking action and, for example, killing a healthy person in a hospital to donate his or her organs to save ten sick people goes against most people’s intuition, even though it would save as many lives as possible. The practical implications of human moral judgement are thus limited: studies found that most people explicitly want an autonomous vehicle in an unavoidable crash to save as many lives as possible, even if that means sacrificing its own occupant. However, the same people said that they wouldn’t want to buy a self-driving car that may eventually sacrifice them to save others.[4] It is because of these contradictions that no general consensus on moral principles ever evolved.[5] Humans simply don’t adhere consistently to moral theories. The chances of deriving general guidance from human moral behavior to develop machines capable of dealing with such dilemmas are therefore slim.



The need for more data and dialogue

But even if it were possible to find out what would theoretically be the ‘right’ decision in a moral dilemma, it wouldn’t necessarily bring us closer to the development of moral machines. Suppose we eventually followed utilitarianism: how would we define social benefit and identify actions that maximize it? Whose interests, well-being and lives have which value? And what impact would these choices have on how we interact with moral machines?

Researchers at the MIT Media Lab recently used crowdsourcing to gather data on some of these questions. Millions of individuals from 233 countries and territories gave over 40 million answers to various scenarios of the trolley problem on an online platform.[6] The scenarios allowed choices on nine different attributes of the potential casualties, such as gender, social status, fitness or the overall number of lives lost. One of the three attributes that received considerably higher approval than the rest was the preference to spare younger over older humans. But what would public life look like if we followed this result? If a self-driving car were programmed to spare the young rather than the elderly in an inevitable crash, senior citizens would probably withdraw from traffic altogether.
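As an illustration of the kind of analysis such crowdsourcing enables, the sketch below aggregates pairwise choices into a ‘spare rate’ per attribute. The record format and field names (attribute, spared_first) are assumptions invented for this example and do not reflect the actual Moral Machine dataset schema.

```python
# Illustrative sketch only: estimate how often respondents spare one group over
# another, per distinguishing attribute. The data below is made up.
from collections import Counter

responses = [
    {"attribute": "young_vs_old", "spared_first": True},
    {"attribute": "young_vs_old", "spared_first": True},
    {"attribute": "young_vs_old", "spared_first": False},
    {"attribute": "more_vs_fewer_lives", "spared_first": True},
    {"attribute": "more_vs_fewer_lives", "spared_first": True},
]

totals, spared = Counter(), Counter()
for r in responses:
    totals[r["attribute"]] += 1                  # dilemmas testing this attribute
    spared[r["attribute"]] += r["spared_first"]  # times the first group was spared

for attribute in totals:
    rate = spared[attribute] / totals[attribute]
    print(f"{attribute}: first group spared in {rate:.0%} of dilemmas")
```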


The example illustrates that the true challenge of developing moral machines isn’t finding the right answer to a single moral dilemma. It is anticipating and managing the systemic impact of the automation of moral choices: imagine the incentives we would create if self-driving cars treated cyclists without a helmet with more caution than those who protect themselves. Or what traffic would look like if everyone knew that self-driving cars invariably stop for pedestrians on the road. Who would wear a helmet on a bike, or even care about jaywalking anymore?

Thought experiments cannot determine how humans will react to the widespread presence of autonomous machines in our everyday lives. To go ahead with the development of truly autonomous systems, we therefore need to invest more time and effort into analyzing the impact of their choices. For this we especially need studies and simulations that explore how humans would change their own behavior in interaction with these machines. And we need to establish processes of public participation to develop a common desirable future with them. Instead of continuing to debate moral machine behavior alone, we should look forward and discuss a desirable future in which humans and moral machines coexist.
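As a toy illustration of the kind of simulation such studies could start from, the sketch below models pedestrians who jaywalk more often once they learn that cars reliably yield. Every parameter (initial crossing rate, learning rate, population size) is invented; a real study would need empirically grounded models of behavior.

```python
# Illustrative sketch only: a minimal feedback loop in which consequence-free
# jaywalking spreads through a population. All parameters are hypothetical.
import random

random.seed(0)


def simulate(days=30, pedestrians=100, cars_always_yield=True, learning_rate=0.05):
    crossing_rate = 0.10  # initial share of pedestrians willing to jaywalk
    for _ in range(days):
        crossings = sum(random.random() < crossing_rate for _ in range(pedestrians))
        if cars_always_yield:
            # If cars invariably stop, jaywalking has no cost and the habit spreads.
            crossing_rate = min(1.0, crossing_rate + learning_rate * crossings / pedestrians)
    return crossing_rate


print(f"Share of pedestrians willing to jaywalk after 30 days: {simulate():.0%}")
```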

