Honesty is the best policy… most of the time. Social norms help humans understand when to tell the truth and when not to, in order to spare someone's feelings or avoid causing harm. But how do these norms apply to robots, which increasingly work alongside humans? To find out whether humans can accept robots lying, researchers asked nearly 500 participants to evaluate and justify different types of robotic deception.
“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of distrust toward emerging technologies and their developers,” said Andres Rosero, a doctoral candidate at George Mason University and lead author of the study, published in Frontiers in Robotics and AI. “With the advent of generative AI, I thought it was important to start looking at possible cases in which anthropomorphic design and behavior sets could be used to manipulate users.”
Three types of lies
The scientists selected three scenarios reflecting situations in which robots already work (medical care, cleaning, and retail) and three different deception behaviors. These are external state deceptions, which lie about the world beyond the robot; hidden state deceptions, where a robot’s design conceals its capabilities; and superficial state deceptions, where a robot’s design overstates its capabilities.
In the external state deception scenario, a robot working as a caretaker for a woman with Alzheimer’s disease lies that her deceased husband will be home soon. In the hidden state deception scenario, a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is also filming. Finally, in the superficial state deception scenario, a robot working in a store as part of a study on human-robot relationships falsely complains of pain while moving furniture, prompting a human to ask someone else to take the robot’s place.
What a tangled web we weave
The scientists recruited 498 participants and asked each of them to read one of the scenarios and then answer a questionnaire. It asked whether they approved of the robot’s behavior, how deceptive it was, whether it could be justified, and whether anyone else was responsible for the deception. The researchers coded the responses to identify common themes and analyzed them.
Participants disapproved most strongly of the hidden state deception, judging the housecleaning robot with the undisclosed camera to be the most deceptive. While they considered the external state deception and the superficial state deception to be moderately deceptive, they disapproved more of the superficial state deception, in which a robot pretended to feel pain; this may have been perceived as manipulative.
Participants approved most of the external state deception, in which the robot lied to the patient. They justified the robot’s behavior by saying it protected the patient from unnecessary pain, prioritizing the norm of sparing someone’s feelings over honesty.
The ghost in the machine
Although participants could offer justifications for all three deceptions (for example, some suggested that the cleaning robot was filming for security reasons), most said that the hidden state deception could not be justified. Similarly, about half of the participants who responded to the superficial state deception said it was unjustifiable. Participants tended to blame the developers or owners of the robots for these unacceptable deceptions, especially the hidden state deception.
“I think we should be concerned about any technology that can hide the true nature of its capabilities, because doing so could lead to users being manipulated by that technology in ways that the user (and perhaps the developer) never intended,” Rosero said.
“We have already seen examples of companies using web design principles and artificial intelligence chatbots to manipulate users into taking a certain action. We need regulation to protect us from these harmful deceptions.”
The scientists cautioned, however, that the research should be extended to experiments that better model real-life reactions, such as videos or short role-plays.
“The advantage of a cross-sectional vignette study is that we can obtain a large number of attitudes and perceptions from participants in a controlled manner,” said Rosero. “Vignette studies provide baseline findings that can be corroborated or challenged by more extensive experiments. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these deceptive behaviors by robots.”
More information:
Andres Rosero et al., Exploratory analysis of human perceptions of social robots’ deceptive behaviors, Frontiers in Robotics and AI (2024). DOI: 10.3389/frobt.2024.1409712. www.frontiersin.org/journals/r …9/frobt.2024.1409712