Manhattan Tribune

Exploring the fundamental reasoning skills of LLMs

By manhattantribune.com
1 September 2024
in Science


Comparative experiments that use a consistent task in different contexts, each emphasizing either deductive reasoning (methods (a) and (b)) or inductive reasoning (methods (c) and (d)). Credit: Cheng et al.

Reasoning, the process by which humans mentally process information to draw specific conclusions or solve problems, can be divided into two broad categories. The first type of reasoning, called deductive reasoning, involves starting with a general rule or premise and then using that rule to draw conclusions about specific cases.

This could mean, for example, starting from the assumption that “all dogs have ears” and “chihuahuas are dogs” and concluding that “chihuahuas have ears.”

The second type of reasoning is inductive reasoning, which involves generalizing (i.e., formulating general rules) from specific observations. This might mean, for example, concluding that all swans are white because every swan we have encountered in our lives has been white.
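The two modes can be contrasted in a few lines of code. This is a minimal illustrative sketch, not anything from the paper: `deduce_has_ears` applies a fixed general rule to a specific case (deduction), while `induce_color_rule` generalizes a rule from a list of observations (induction); both function names and the toy data are invented for illustration.

```python
# Deductive reasoning: start from a general rule and apply it to a case.
def deduce_has_ears(animal: str, dog_breeds: set[str]) -> bool:
    """Rule: all dogs have ears. If `animal` is a dog breed, it has ears."""
    return animal in dog_breeds

# Inductive reasoning: generalize a rule from specific observations.
def induce_color_rule(observed_swans: list[str]) -> str:
    """If every swan observed so far shares one color, generalize that rule."""
    colors = set(observed_swans)
    return colors.pop() if len(colors) == 1 else "no single rule"

print(deduce_has_ears("chihuahua", {"chihuahua", "labrador"}))  # True
print(induce_color_rule(["white", "white", "white"]))           # white
```

Note the asymmetry the article goes on to exploit: the deductive function is guaranteed correct given its premises, while the inductive one can be wrong the moment a black swan appears.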

There have been many studies on how humans use deductive and inductive reasoning in their daily lives. However, the extent to which artificial intelligence (AI) systems employ these different reasoning strategies has so far rarely been explored.

A research team from Amazon and the University of California, Los Angeles recently conducted a study exploring the fundamental reasoning capabilities of large language models (LLMs), large AI systems that can process, generate, and adapt text in human languages. Their findings, published on the arXiv preprint server, suggest that these models have strong inductive reasoning capabilities, while they often exhibit poor deductive reasoning.

The aim of the paper was to better understand the gaps in LLM reasoning and to identify why LLMs exhibit lower performance on “counterfactual” reasoning tasks that deviate from the norm.
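To make "counterfactual" concrete, here is a hypothetical example of the kind of task that deviates from the norm: ordinary two-digit addition, but carried out in base 9 instead of the familiar base 10. The helper below is an illustration invented for this article, not a task taken from the paper; a model that merely pattern-matches on memorized base-10 arithmetic gives the conventional answer rather than the counterfactual one.

```python
def add_in_base(a: str, b: str, base: int) -> str:
    """Add two numerals written in `base` and return the sum in that base."""
    total = int(a, base) + int(b, base)   # interpret inputs in the given base
    digits = []
    while total:
        digits.append(str(total % base))  # peel off digits, least significant first
        total //= base
    return "".join(reversed(digits)) or "0"

print(add_in_base("27", "15", 10))  # 42  (the familiar, base-10 answer)
print(add_in_base("27", "15", 9))   # 43  (the counterfactual, base-9 answer)
```

A model relying on a deductively applied rule gets both answers right; one pattern-matching on everyday arithmetic tends to answer "42" in both settings.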

The team's SolverLearner framework for inductive reasoning. SolverLearner follows a two-step process to separate the learning of input-output mapping functions from the application of these functions for inference. Specifically, the functions are applied via external code interpreters, to avoid incorporating LLM-based deductive reasoning. Credit: Cheng et al.

Several previous studies have assessed LLMs' deductive reasoning skills by testing their ability to follow instructions in basic reasoning tasks. Yet their inductive reasoning (i.e., their ability to make general predictions based on information they have processed in the past) has not been closely examined.

To clearly distinguish inductive from deductive reasoning, the researchers introduced a new framework, called SolverLearner. The framework uses a two-stage approach to separate the process of learning rules from that of applying them to specific cases. In particular, the rules are applied through external tools, such as code interpreters, to avoid relying on the LLM's own deductive reasoning capabilities, the researchers explained.

Using the SolverLearner framework they developed, the Amazon team trained the LLMs to learn functions that map input data points to their corresponding outputs, using specific examples. This allowed them to study how well the models could learn general rules based on the examples they were given.
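The two-stage separation described above can be sketched in a few lines. This is a minimal sketch of the idea only, assuming a stub in place of a real LLM call: in stage one, a model would see input-output examples and propose a mapping function as source code; in stage two, that code is run by an external interpreter, so no deductive step is delegated to the model. `propose_function` is a hypothetical placeholder, not the paper's actual prompt, model, or API.

```python
def propose_function(examples: list[tuple[int, int]]) -> str:
    # Stage 1 (stub): stands in for an LLM that induces a rule from the
    # examples. Here we pretend the model returned this source code.
    return "def f(x):\n    return 2 * x + 1"

def apply_via_interpreter(source: str, test_input: int) -> int:
    # Stage 2: execute the proposed function with Python itself acting as
    # the "external code interpreter", keeping inference out of the model.
    namespace: dict = {}
    exec(source, namespace)
    return namespace["f"](test_input)

examples = [(1, 3), (2, 5), (10, 21)]    # all consistent with f(x) = 2x + 1
source = propose_function(examples)
print(apply_via_interpreter(source, 7))  # 15
```

The design choice matters: because stage two is mechanical, any error on a test input can be attributed to the induced rule itself, not to the model's ability to apply it.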

The researchers found that LLMs have stronger inductive reasoning abilities than deductive ones, especially for tasks involving “counterfactual” scenarios that deviate from the norm. These findings can help people better understand when and how to use LLMs. For example, when designing agent systems, such as chatbots, it may be better to take advantage of LLMs’ strong inductive abilities.

Overall, the researchers found that LLMs performed remarkably well on inductive reasoning tasks, but they often lacked deductive reasoning skills. Their deductive reasoning appeared particularly poor in scenarios that were hypothesis-driven or deviated from the norm.

The results gathered in this study could inspire AI developers to exploit the strong inductive reasoning capabilities of LLMs to tackle specific tasks. Furthermore, they could pave the way for new efforts aimed at understanding LLMs’ reasoning processes.

According to the researchers, future work in this area could focus on exploring the relationship between an LLM's ability to compress information and its strong inductive capabilities. Understanding this connection could further enhance LLMs' inductive reasoning abilities.

More information:
Kewei Cheng et al., Inductive or Deductive? Rethinking the Core Reasoning Skills of LLMs, arXiv (2024). DOI: 10.48550/arxiv.2408.00114

Journal information:
arXiv

© 2024 Science X Network

Quote: Exploring the Fundamental Reasoning Abilities of LLMs (2024, August 31), retrieved 1 September 2024 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.



Tags: exploring, fundamental, LLMs, reasoning, skills

© 2023 Manhattan Tribune -By Millennium Press
