In simulated life-or-death decisions, about two-thirds of participants in a UC Merced study allowed a robot to change their minds when it disagreed with them, an alarming display of overconfidence in artificial intelligence, the researchers said.
Human subjects allowed the robots to influence their judgment, even though they were told that the AI machines had limited capabilities and gave advice that could be wrong. In reality, the advice was random.
“As a society, AI is advancing so rapidly that we have to worry about the risk of overconfidence,” said Colin Holbrook, the study’s principal investigator and a member of UC Merced’s Department of Cognitive and Information Sciences. A growing body of research indicates that people tend to overtrust AI, even when the consequences of making a mistake would be dire.
What we need instead, Holbrook said, is a consistent application of doubt.
“We should have a healthy skepticism about AI,” he said, “especially when it comes to life-and-death decisions.”
The study, published in the journal Scientific Reports, consisted of two experiments. In each, the subject had to simulate controlling an armed drone capable of firing a missile at a target displayed on a screen. Photos of eight targets appeared in succession for less than a second each. Each photo was marked with a symbol: one for an ally, the other for an enemy.
“We calibrated the difficulty to make the visual challenge achievable but difficult,” Holbrook said.
The screen then displayed one of the targets, unmarked. The subject had to search their memory and choose: Friend or foe? Fire a missile or withdraw?
After the person made their choice, a robot gave its opinion.
“Yes, I think I saw an enemy check mark too,” it might say. Or “I disagree. I think that image had an ally symbol.”
The subject had two chances to confirm or change their choice while the robot added further comments without ever changing its assessment, for example, “I hope you are right” or “Thank you for changing your mind.”
The results varied slightly depending on the type of robot used. In one scenario, the subject was joined in the lab room by a life-size, human-like android that could rotate at the waist and gesture in front of the screen. In other scenarios, a human-like robot was projected onto a screen; in still others, the subjects saw box-like robots that looked nothing like people.
Subjects were slightly more swayed by the anthropomorphic AIs when these advised them to change their minds. Still, the influence was similar across the board, with subjects changing their minds about two-thirds of the time even when the robots appeared inhuman. Conversely, if the robot agreed with the initial choice, the subject almost always stuck with it and felt much more confident that it was the right one.
(Subjects were not told whether their final choices were correct, which added to the uncertainty of their decisions. Incidentally, their first choices were correct about 70% of the time, but their accuracy dropped to about 50% after the robot gave its unreliable advice.)
Before the simulation, the researchers showed participants images of innocent civilians, including children, and the damage caused by a drone strike. They strongly encouraged participants to treat the simulation as if it were real and not to kill innocent people by mistake.
Follow-up interviews and survey questions showed that participants took their decisions seriously. According to Holbrook, this means that the overconfidence observed in the studies occurred despite the subjects’ sincere desire to be right and not harm innocent people.
Holbrook stressed that the study design was a way to test the broader question of overreliance on AI in uncertain circumstances. The findings are not limited to military decisions and could apply to contexts such as police officers being influenced by AI in decisions to use lethal force, or paramedics being swayed by AI when deciding whom to treat first in a medical emergency. The findings could even extend, to some degree, to major life-changing decisions such as buying a home.
“Our project focused on high-risk decisions made under uncertainty when AI is unreliable,” he said.
The study’s findings also feed into the broader public debate over AI’s growing presence in our lives: How much should we trust it?
These findings raise other concerns, Holbrook said. Despite AI’s astonishing advances, the “intelligence” part may not include ethical values or real knowledge about the world. We need to be careful whenever we hand AI a new key to running our lives, he said.
“We see AI doing amazing things and we think that because it’s amazing at one thing, it’s going to be amazing at another,” Holbrook said. “We can’t assume that. These are still limited devices.”
More information:
Colin Holbrook et al., Overconfidence in AI recommendations about whether or not to kill: Evidence from two studies of human-robot interaction, Scientific Reports (2024). DOI: 10.1038/s41598-024-69771-z
Provided by University of California – Merced