On Friday, the PRISM network welcomed Dr. Jaroslaw R. Lelonkiewicz (University of Valencia) for an invited lecture exploring the role of Large Language Models (LLMs) in cognitive science.
In a talk titled “What Can We Learn from Stochastic Parrots?”, Dr. Lelonkiewicz took up the ongoing debate over whether LLMs are merely imitators of human behaviour or whether they can serve as valid experimental models for studying cognition.
Drawing analogies to non-human animal research (e.g., baboons, parrots), he argued:
"Even if LLMs are not truly cognitive agents, their behaviours can still inform human psychological theory — especially when machine-derived predictions are later confirmed in human studies."
Highlights from the talk:
- A review of what is currently known about LLM cognitive architecture
- A proposed research pipeline for testing psychological hypotheses using LLMs
- A recent case study in which machine responses led to a novel hypothesis, later confirmed in human participants
- A freely available tutorial and codebase for running human experiments with LLMs
The talk sparked lively discussion among researchers at the intersection of psychology, language, AI, and philosophy of mind.
📄 Read the full abstract