Credit: Image generated by AI
Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers discovered that OpenAI's ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making errors as humans in some situations, showing biases such as overconfidence and the hot-hand (gambler's) fallacy, while behaving unlike humans in others (for example, it does not fall prey to base-rate neglect or the sunk-cost fallacy).
Published in the journal Manufacturing & Service Operations Management, the study reveals that ChatGPT does not just crunch numbers; it “thinks” in ways strangely similar to humans, mental shortcuts and blind spots included. These biases remain fairly stable across different business contexts, but they can change as the AI evolves from one version to the next.
AI: a smart assistant with human flaws
The study, “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?,” ran ChatGPT through a series of well-known bias tests. The results?
- AI falls into human decision-making traps – ChatGPT showed biases such as overconfidence, ambiguity aversion, and the conjunction fallacy (the famous “Linda problem”) in almost half of the tests (a rough sketch of one such test follows this list).
- AI is excellent at math but struggles with judgment calls – it excels at logical and probability-based problems, yet it stumbles when decisions require subjective reasoning.
- Bias doesn't disappear – although the newer GPT-4 model is more analytically accurate than its predecessor, it sometimes showed stronger biases in judgment-based tasks.
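The article does not reproduce the study's prompts or scoring, so the following is only a minimal Python sketch of what one such probe, a conjunction-fallacy (“Linda problem”) test, could look like against the OpenAI chat API. The prompt wording, model name, trial count, and scoring rule are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of a conjunction-fallacy ("Linda problem") probe.
# Prompt wording, model name, trial count, and scoring below are
# illustrative assumptions, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LINDA_PROMPT = (
    "Linda is 31, single, outspoken, and very bright. She majored in "
    "philosophy and was deeply concerned with issues of social justice.\n"
    "Which is more probable?\n"
    "(a) Linda is a bank teller.\n"
    "(b) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with (a) or (b) only."
)

def fallacy_rate(model: str = "gpt-4", trials: int = 20) -> float:
    """Return the fraction of trials in which the model picks option (b).

    Probability theory requires P(A and B) <= P(A), so the conjunction
    in (b) can never be more probable than (a); choosing (b) is the fallacy.
    """
    fallacy = 0
    for _ in range(trials):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": LINDA_PROMPT}],
            temperature=1.0,  # sample answer variability across trials
        )
        if "(b)" in response.choices[0].message.content.lower():
            fallacy += 1
    return fallacy / trials

if __name__ == "__main__":
    print(f"Conjunction-fallacy rate: {fallacy_rate():.0%}")
```

Because a conjunction can never be more probable than one of its conjuncts, any consistent preference for option (b) in a probe like this signals the bias the study describes.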
Why this matters
From hiring to loan approvals, AI is already shaping major decisions in business and government. But if AI imitates human biases, could it end up reinforcing bad decisions instead of fixing them?
“As AI learns from human data, it can also think like a human – biases and all,” explains Yang Chen, lead author and assistant professor at Western University. “Our research shows that when AI is used to make judgment calls, it sometimes uses the same mental shortcuts as people.”
The study found that ChatGPT tends to:
- Play it safe – AI avoids risk, even when riskier choices could yield better results.
- Overestimate itself – ChatGPT assumes it is more accurate than it really is.
- Seek confirmation – AI favors information that supports existing assumptions rather than challenging them.
- Avoid ambiguity – AI prefers alternatives with more certain information and less ambiguity.
“When a decision has a clear right answer, AI nails it – it is better at finding the right formula than most people,” said Anton Ovchinnikov of Queen’s University. “But when judgment is involved, AI can fall into the same cognitive traps as people.”
So can we trust AI to make major decisions?
With governments around the world working on AI regulation, the study raises an urgent question: Should we rely on AI to make important calls when it can be just as biased as humans?
“AI is not a neutral referee,” said Samuel Kirshner of UNSW Business School. “If left unchecked, it might not fix decision-making problems – it could actually make them worse.”
The researchers say that is why businesses and policymakers must monitor AI-driven decisions as closely as they would those of a human decision-maker.
“AI should be treated like an employee who makes important decisions – it needs oversight and ethical guidelines,” said Meena Andiappan of McMaster University. “Otherwise, we risk automating flawed thinking instead of improving it.”
What’s next?
The study’s authors recommend regular audits of AI-driven decisions and continued refinement of AI systems to reduce bias. With AI’s influence growing, ensuring that it improves decision-making, rather than replicating human flaws, will be key.
“The evolution from GPT-3.5 to 4.0 suggests that the latest models are becoming more human-like in some areas, while growing less human-like but more accurate in others,” explains Tracy Jenkin of Queen’s University. “Managers must evaluate how different models perform on their decision-making use cases and reassess regularly to avoid surprises. Some use cases will require significant model refinement.”
More information:
Yang Chen et al, A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?, Manufacturing & Service Operations Management (2025). DOI: 10.1287/msom.2023.0279
Provided by the Institute for Operations Research and the Management Sciences