Artificial intelligence (AI) systems are often described as sentient agents poised to eclipse the human mind. But AI lacks the crucial human capacity for innovation, researchers at the University of California, Berkeley, have found.
While both children and adults can solve problems by finding new uses for everyday objects, AI systems often lack the ability to view tools in new ways, according to findings published in Perspectives on Psychological Science.
AI language models like ChatGPT are passively trained on datasets containing billions of words and images produced by humans. This allows AI systems to function as a “cultural technology” similar to writing that can summarize existing knowledge, Eunice Yiu, co-author of the paper, explained in an interview. But unlike humans, they have difficulty innovating on these ideas, she said.
“Even young human children can produce intelligent answers to certain questions that (large language models) cannot produce,” Yiu said. “Instead of thinking of these AI systems as intelligent agents like us, we can think of them as a new form of library or search engine. They effectively summarize and communicate to us the existing culture and knowledge base.”
Yiu and Eliza Kosoy, along with their doctoral advisor and lead author of the paper, developmental psychologist Alison Gopnik, tested how AI systems’ ability to imitate and innovate differs from that of children and adults. They presented 42 children aged 3 to 7 and 30 adults with textual descriptions of everyday objects.
In the first part of the experiment, 88% of the children and 84% of the adults were able to correctly identify which objects would “go best” together. For example, they associated a compass with a ruler rather than a teapot.
In the next stage of the experiment, 85% of the children and 95% of the adults were also able to innovate on the expected use of everyday objects to solve problems. In one task, for example, participants were asked how they could draw a circle without using a conventional tool such as a compass.
Faced with the choice between a similar tool like a ruler, a different tool like a round-bottomed teapot, and an irrelevant tool like a stove, the majority of participants chose the teapot, a conceptually different tool that could nevertheless serve the same function as a compass by allowing them to trace the shape of a circle.
When Yiu and her colleagues provided the same textual descriptions to five large language models, the models performed similarly to humans on the imitation task, with scores ranging from 59% for the worst-performing model to 83% for the best-performing model. The AI responses to the innovation task, however, were far less accurate. Effective tools were selected 8% of the time by the worst-performing model and 75% of the time by the best-performing model.
“Children can imagine completely new uses for objects they have never seen or heard of before, like using the bottom of a teapot to draw a circle,” Yiu said. “Large language models have a much harder time generating such responses.”
In a related experiment, the researchers noted, children were able to discover how a new machine worked simply by experimenting and exploring. But when the researchers gave several large language models textual descriptions of the evidence the children produced, the models struggled to draw the same conclusions, likely because the answers were not explicitly included in their training data, Yiu and colleagues wrote.
These experiments demonstrate that AI’s reliance on statistical prediction of language patterns is not sufficient to discover new information about the world, Yiu and colleagues wrote.
“AI can help convey information that is already known, but it is not innovative,” Yiu said. “These models can summarize conventional wisdom, but they cannot extend, create, change, abandon, evaluate, and improve conventional wisdom in the same way that a young human can.”
However, AI development is still in its early stages, and there is much to learn about how to expand AI’s capacity for learning, Yiu said. Drawing inspiration from children’s curious, active, and intrinsically motivated approach to learning could help researchers design new AI systems that are better prepared to explore the real world, she said.
More information:
Eunice Yiu et al, Transmission versus truth, imitation versus innovation: What children can do that large language and language-and-vision models cannot (yet), Perspectives on Psychological Science (2023). DOI: 10.1177/17456916231201401
Provided by the Association for Psychological Science
Citation: Artificial intelligence systems excel at imitation, but not innovation (December 12, 2023) retrieved December 12, 2023 from
This document is subject to copyright. Apart from fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.