How psychologists kick-started AI by studying the human mind
- Written by Chris Ludlow, Lecturer in Psychology, Swinburne University of Technology
Many people think of psychology as being primarily about mental health, but its story goes far beyond that.
As the science of the mind, psychology has played a pivotal role in shaping artificial intelligence, offering insights into human cognition, learning and behaviour that have profoundly influenced AI’s development.
These contributions not only laid the foundations for AI but continue to guide its development. The study of psychology has shaped our understanding of what constitutes intelligence in machines, and of how we can address the challenges, and harness the benefits, this technology brings.
Machines mimicking nature
The origins of modern AI can be traced back to psychology in the mid-20th century. In 1949, psychologist Donald Hebb[1] proposed a model for how the brain learns: connections between brain cells grow stronger when they are active at the same time.
This idea gave a hint of how machines might learn by mimicking nature’s approach.
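Hebb's principle is simple enough to express in a few lines of code. The sketch below is an illustrative toy (the learning rate and activity values are assumptions, not Hebb's own formulation): a connection strengthens in proportion to how active the neurons on both of its ends are at the same time.

```python
import numpy as np

# Hebb's rule: connections grow stronger when the cells on both
# ends are active together (delta_w = eta * pre * post).
eta = 0.1                          # learning rate (illustrative value)
w = np.zeros(3)                    # connection strengths, initially zero

pre = np.array([1.0, 0.0, 1.0])    # activity of "input" neurons
post = 1.0                         # activity of the "output" neuron

w += eta * pre * post              # only co-active pairs strengthen
print(w)                           # -> [0.1 0.  0.1]
```

Only the connections whose input neuron fired alongside the output neuron have strengthened, which is the essence of "cells that fire together, wire together".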
In the 1950s, psychologist Frank Rosenblatt built on Hebb’s theory[3] to develop a system called the perceptron[4].
The perceptron was the first artificial neural network[5] ever made. It ran on the same principle as modern AI systems, in which computers learn by adjusting connections within a network based on data rather than relying on programmed instructions.
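That learning principle can be sketched in miniature. The toy example below (a software illustration with assumed values, not Rosenblatt's original hardware) learns the logical AND function purely by nudging its connection weights whenever a prediction disagrees with a labelled example.

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Rosenblatt-style learning rule: adjust the weights whenever
    the prediction disagrees with the labelled example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi   # learn from data,
            b += lr * (target - pred)        # not from programmed rules
    return w, b

# Toy data: learn logical AND from four labelled examples
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

No rule for AND is ever written into the program; the correct behaviour emerges from repeated small adjustments to the connections, just as in modern neural networks.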
A scientific understanding of intelligence
In the 1980s, psychologist David Rumelhart[6] improved on Rosenblatt’s perceptron. He applied a method called backpropagation[7], which uses principles of calculus to help neural networks improve through feedback.
Backpropagation was originally developed by Paul Werbos, who said[8] the technique “opens up the possibility of a scientific understanding of intelligence, as important to psychology and neurophysiology as Newton’s concepts were to physics”.
Rumelhart’s 1986 paper[9], coauthored with Ronald Williams and Geoffrey Hinton[10], is often credited with sparking the modern era of artificial neural networks. This work laid the foundation for deep learning innovations such as large language models.
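Backpropagation can also be sketched in miniature. In the toy network below (the layer sizes, learning rate and training data are illustrative choices, not taken from the 1986 paper), the output error is passed backwards through the chain rule, producing a feedback signal for every connection.

```python
import numpy as np

# A tiny two-layer network trained with backpropagation:
# the chain rule turns the output error into a gradient
# for every connection in the network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR, a classic test

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass: compute the network's current predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule converts the error into gradients
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # feedback step: nudge every weight against its gradient
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.ravel())  # predictions for XOR after training
```

A single perceptron cannot learn XOR, which is precisely why the ability to train hidden layers through this kind of feedback was such a breakthrough.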
In 2024, the Nobel Prize in Physics was awarded to Hinton and John Hopfield for their work on artificial neural networks. Notably, in its scientific report[12], the Nobel committee highlighted the crucial role psychologists played in their development.
Hinton, who holds a degree in psychology, acknowledged[13] standing on the shoulders of giants such as Rumelhart when receiving his prize.
Self-reflection and understanding
Psychology continues to play an important role in shaping the future of AI. It offers theoretical insights to address some of the field’s biggest challenges, including reflective reasoning, intelligence and decision-making.
Microsoft founder Bill Gates recently pointed out[14] a key limitation of today's AI systems: they can't engage in reflective reasoning, or what psychologists call metacognition.
In the 1970s, developmental psychologist John Flavell[15] introduced the idea of metacognition. He used it to explain how children master complex skills by reflecting on and understanding their own thinking.
Decades later, this psychological framework is gaining attention[16] as a potential pathway to advancing AI.
Fluid intelligence
Psychological theory is increasingly being applied to improve AI systems, particularly by enhancing their capacity for solving novel problems.
For instance, computer scientist François Chollet[17] highlights the importance of fluid intelligence[18], which psychologists define as the ability to solve new problems without prior experience or training.
In a 2019 paper[20], Chollet introduced a test inspired by principles from cognitive psychology to measure how well AI systems can handle new problems. The test – known as the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI)[21] – provided a kind of guide for making AI systems think and reason in more human-like ways.
In late 2024, OpenAI’s o3 model demonstrated notable success[22] on Chollet’s test, showing progress in creating AI systems that can adapt and solve a wider range of problems.
The risk of explanations
Another goal of current research is to make AI systems more able to explain their output. Here, too, psychology offers valuable insights.
Computer scientist Edward Lee[23] has drawn on the work of psychologist Daniel Kahneman[24] to highlight why requiring AI systems to explain themselves might be risky.
Kahneman showed how humans often justify their decisions with explanations created after the fact, which don’t reflect their true reasoning. For example, studies[25] have found that judges’ rulings fluctuate depending on when they last ate — despite their firm belief in their own impartiality[26].
Lee cautions that AI systems could produce similarly misleading explanations. Because rationalisations can be deceptive, Lee argues AI research should focus on reliable outcomes instead.
Technology shaping our minds
The science of psychology remains widely misunderstood. In 2020, for example, the Australian government proposed reclassifying it as part of the humanities[27] in universities.
As people increasingly interact with machines, AI, psychology and neuroscience may hold key insights into our future.
Our brains are extremely adaptable, and technology shapes how we think and learn. Research[28] by psychologist[29] and neuroscientist Eleanor Maguire[30], for example, revealed that the brains of London taxi drivers are physically altered by the demands of navigating a complex city.
As AI advances, future psychological research may reveal how AI systems enhance our abilities and unlock new ways of thinking.
By recognising psychology’s role in AI, we can foster a future in which people and technology work together for a better world.
References
- ^ Donald Hebb (en.wikipedia.org)
- ^ built on Hebb’s theory (doi.org)
- ^ perceptron (news.cornell.edu)
- ^ first artificial neural network (americanhistory.si.edu)
- ^ David Rumelhart (en.wikipedia.org)
- ^ backpropagation (wiki.pathmind.com)
- ^ said (www.google.com.au)
- ^ paper (www.nature.com)
- ^ Geoffrey Hinton (en.wikipedia.org)
- ^ scientific report (www.nobelprize.org)
- ^ acknowledged (www.utoronto.ca)
- ^ pointed out (www.fastcompany.com)
- ^ John Flavell (doi.org)
- ^ gaining attention (arxiv.org)
- ^ François Chollet (en.wikipedia.org)
- ^ fluid intelligence (en.wikipedia.org)
- ^ 2019 paper (arxiv.org)
- ^ Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) (arcprize.org)
- ^ notable success (www.bloomberg.com)
- ^ Edward Lee (www.researchgate.net)
- ^ Daniel Kahneman (en.wikipedia.org)
- ^ studies (pubmed.ncbi.nlm.nih.gov)
- ^ despite their firm belief in their own impartiality (doi.org)
- ^ reclassifying it as part of the humanities (www.theguardian.com)
- ^ Research (pmc.ncbi.nlm.nih.gov)
- ^ psychologist (www.bps.org.uk)
- ^ Eleanor Maguire (en.wikipedia.org)
Read more https://theconversation.com/how-psychologists-kick-started-ai-by-studying-the-human-mind-248542