Researchers developed a new experiment to better understand what people consider moral and immoral decisions related to driving vehicles, with the aim of collecting data to teach autonomous vehicles to make “good” decisions. The work is designed to capture a more realistic range of moral challenges in traffic than the widely discussed life-or-death scenario inspired by the so-called “trolley problem.”
The article, titled “Moral Judgment in Realistic Traffic Scenarios: Moving Beyond the Trolley Paradigm to Autonomous Vehicle Ethics,” is published open access in the journal AI & Society.
“The trolley problem presents a situation in which someone must decide whether to intentionally kill one person (which violates a moral norm) in order to prevent the deaths of multiple people,” says Dario Cecchini, first author of a paper on the work and a postdoctoral researcher at North Carolina State University.
“In recent years, the trolley problem has been used as a paradigm for studying moral judgment in traffic,” says Cecchini. “The typical version involves a binary choice for a self-driving car: swerve left and hit a deadly obstacle, or continue forward and hit a pedestrian crossing the street.
“However, these trolley-style cases are unrealistic. Drivers have to make far more mundane moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?”
“These mundane decisions are important because they can ultimately lead to life-or-death situations,” says Veljko Dubljević, corresponding author of the paper and associate professor in NC State’s Science, Technology and Society program.
“For example, if someone drives 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they either have to swerve into traffic or get into a collision. There is currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.”
To address this lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about the decisions drivers make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who must decide whether to run a red light while trying to get their child to school on time.
Each scenario is programmed in a virtual reality environment, so study participants have audio-visual information about what drivers are doing as they make decisions, rather than simply reading about the scenario.
For this work, the researchers relied on what is called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intention of the person doing something; the deed, or what is done; and the consequence, or the outcome that results from the deed.
The researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get their child to school, the parent is considerate, brakes at a yellow light, and gets the child to school on time.
In a second version, the parent is abusive, runs the red light, and causes an accident. The other six versions vary the nature of the parent (the agent), their decision at the traffic light (the deed), and/or the outcome of their decision (the consequence).
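The eight versions of each scenario follow from crossing two levels of each ADC factor (2 × 2 × 2 = 8). A minimal sketch of that factorial enumeration, using hypothetical labels for the parent-at-the-traffic-light scenario (the study's actual wording may differ):

```python
from itertools import product

# Hypothetical two-level labels for each ADC factor.
agents = ["considerate parent", "abusive parent"]               # Agent
deeds = ["brakes at the yellow light", "runs the red light"]    # Deed
consequences = ["child arrives on time", "causes an accident"]  # Consequence

# The Cartesian product of the three factors yields all eight versions.
versions = list(product(agents, deeds, consequences))

for i, (agent, deed, consequence) in enumerate(versions, start=1):
    print(f"Version {i}: {agent}; {deed}; {consequence}")

print(len(versions))  # 8 versions per scenario
```

With seven scenarios, this design produces 56 distinct vignettes for participants to rate.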
“The goal here is to have study participants view a version of each scenario and rate how moral the driver’s behavior in that scenario is, on a scale of 1 to 10,” says Cecchini. “This will provide us with robust data on what people consider moral behavior in the context of driving a vehicle, which can then be used to develop AI algorithms for moral decision-making in autonomous vehicles.”
Researchers conducted pilot tests to refine the scenarios and ensure they reflected credible and easy-to-understand situations.
“The next step is to undertake large-scale data collection, involving thousands of people in the experiments,” says Dubljević. “We can then use that data to develop more interactive experiments, with the aim of further refining our understanding of moral decision-making. All of this can then be used to create algorithms for use in autonomous vehicles. We will then need to engage in additional testing to see how those algorithms perform.”
More information:
Dario Cecchini et al, Moral judgment in realistic traffic scenarios: moving beyond the trolley paradigm for ethics of autonomous vehicles, AI & Society (2023). DOI: 10.1007/s00146-023-01813-y
Provided by North Carolina State University
Citation: To help autonomous vehicles make moral decisions, researchers abandon the ‘trolley problem’ (December 1, 2023)