Reasoning, the mental process by which humans draw conclusions or solve problems from available information, can be divided into two broad categories. The first, called deductive reasoning, involves starting from a general rule or premise and using it to draw conclusions about specific cases.
This could mean, for example, starting from the premises that “all dogs have ears” and “chihuahuas are dogs” and concluding that “chihuahuas have ears.”
The second broad category is inductive reasoning, which involves generalizing (i.e., formulating general rules) from specific observations. This might mean, for example, concluding that all swans are white because every swan we have ever encountered has been white.
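As a rough illustration (a hypothetical sketch, not taken from the study), the two directions of reasoning can be contrasted in a few lines of Python: deduction applies a known general rule to a specific case, while induction proposes a general rule from specific observations.

```python
# Deduction: start from a general rule and apply it to a specific case.
def has_ears(is_dog: bool) -> bool:
    # General premise: all dogs have ears.
    return is_dog

chihuahua_is_dog = True
print(has_ears(chihuahua_is_dog))  # -> True: "chihuahuas have ears"

# Induction: propose a general rule from specific observations.
observed_swan_colors = ["white", "white", "white"]
# Generalization from limited evidence (and famously fallible):
all_swans_are_white = all(c == "white" for c in observed_swan_colors)
print(all_swans_are_white)  # -> True, until a black swan is observed
```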
There have been many studies on how humans use deductive and inductive reasoning in their daily lives. However, the extent to which artificial intelligence (AI) systems employ these different reasoning strategies has so far received far less attention.
A research team from Amazon and the University of California, Los Angeles recently conducted a study exploring the fundamental reasoning capabilities of large language models (LLMs), large AI systems that can process, generate, and adapt text in human languages. Their findings, published on the arXiv preprint server, suggest that these models have strong inductive reasoning capabilities, while they often exhibit poor deductive reasoning.
The aim of the paper was to better understand the gaps in LLM reasoning and to identify why LLMs exhibit lower performance on “counterfactual” reasoning tasks that deviate from the norm.
Several previous studies have assessed LLMs’ deductive reasoning skills by testing their ability to follow instructions in basic reasoning tasks. Yet, their inductive reasoning (i.e., their ability to make general predictions based on information they have processed in the past) has not been closely examined.
To clearly distinguish inductive from deductive reasoning, the researchers introduced a new framework, called SolverLearner. The framework uses a two-stage approach to separate the process of learning rules from that of applying them to specific cases. In particular, the learned rules are applied through external tools, such as code interpreters, so that the results do not depend on the LLM’s own deductive reasoning capability, according to an Amazon spokesperson.
Using the SolverLearner framework they developed, the team had the LLMs learn functions that map input data points to their corresponding outputs from specific examples. This allowed them to study how well the models could infer general rules from the examples they were given.
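A minimal sketch of this kind of two-stage setup, in the spirit of the description above, is shown below. The `query_llm` helper, the prompt wording, and the example mapping are placeholders rather than the paper’s actual code: the idea is that the LLM only induces a rule (here expressed as a Python function) from input-output examples, while an external code interpreter, not the LLM itself, applies that rule to new cases.

```python
# Hypothetical sketch of a two-stage inductive-reasoning pipeline;
# names and prompts are illustrative, not taken from the paper.

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; here it returns a canned answer."""
    return "def solver(x):\n    return 2 * x"

# Stage 1: rule induction. Show the model input-output examples and ask it
# to write a function capturing the underlying mapping.
examples = [(1, 2), (3, 6), (10, 20)]  # e.g., y = 2 * x
prompt = (
    "Given these (input, output) pairs, write a Python function "
    f"`solver(x)` that maps each input to its output: {examples}"
)
generated_code = query_llm(prompt)

# Stage 2: rule application. Execute the induced function with a code
# interpreter, so the LLM's (weaker) deductive step is never involved.
namespace: dict = {}
exec(generated_code, namespace)
print(namespace["solver"](7))  # apply the learned rule to an unseen input
```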
The researchers found that LLMs have stronger inductive reasoning abilities than deductive ones, especially for tasks involving “counterfactual” scenarios that deviate from the norm. These findings can help people better understand when and how to use LLMs. For example, when designing agent systems, such as chatbots, it may be better to take advantage of LLMs’ strong inductive abilities.
Overall, the researchers found that LLMs performed remarkably well on inductive reasoning tasks, but they often lacked deductive reasoning skills. Their deductive reasoning appeared particularly poor in scenarios that were hypothesis-driven or deviated from the norm.
The results gathered in this study could inspire AI developers to exploit the strong inductive reasoning capabilities of LLMs to tackle specific tasks. Furthermore, they could pave the way for new efforts aimed at understanding LLMs’ reasoning processes.
According to an Amazon spokesperson, future research in this area could focus on exploring the relationship between an LLM’s ability to compress information and its strong inductive capabilities. This perspective could help further enhance LLMs’ inductive reasoning capabilities.
More information: Kewei Cheng et al., Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs, arXiv (2024). DOI: 10.48550/arxiv.2408.00114