As the artificial intelligence (AI) revolution continues, scientists have developed a model that thinks like you and makes decisions in a remarkably human-like way, paving the way to a better understanding of human cognition.
Indeed, researchers at Helmholtz Munich have created an AI model called Centaur that can simulate human behavior with incredible precision, having trained it on more than 10 million decisions from psychological experiments, according to a July 2 report by Tech Xplore.
Creating AI that thinks like you
Specifically, the team led by Dr. Marcel Binz and Dr. Eric Schulz, researchers at the Institute for Human-Centered AI at Helmholtz Munich, has devised an AI model that can both offer a transparent explanation of how people think and reliably predict their behavior, something that was previously out of reach.
The breakthrough model achieved this long-sought goal thanks to training on a specially curated dataset called Psych-101, which contains more than 10 million individual decisions from 160 behavioral experiments; the results have been published in the journal Nature.
Centaur can predict human behavior both in familiar tasks and in entirely new situations it has never encountered before, recognizing common decision-making strategies, adapting flexibly to changing contexts, and even anticipating reaction times with striking precision.
According to Dr. Binz, the team has “created a tool that allows us to predict human behavior in any situation described in natural language – like a virtual laboratory.” In the words of Dr. Schulz, the institute’s director: “We’re just getting started and already seeing enormous potential.”
“These models have the potential to fundamentally deepen our understanding of human cognition—provided we use them responsibly.”
Notably, potential use cases for this advance range from analyzing classic psychological experiments to simulating decision-making in clinical contexts such as depression and anxiety disorders, opening up new perspectives in health research, including a better understanding of how people with different psychological conditions make decisions.
Elsewhere, despite developers’ best efforts to prevent it, AI seems to be learning to manipulate, deceive, and even threaten its creators, with recent warnings pointing to models including Anthropic’s new Claude 4 and OpenAI’s o1.