In his thought-provoking DLD25 talk, Stanford University researcher Michal Kosinski challenges our understanding of Large Language Models (LLMs), arguing they’re far more than their name suggests.
While LLMs are trained to predict the next word in text sequences originally written by humans, this seemingly basic task requires recreating complex aspects of reality. “Large Language Models are not simply modeling the meaning of the words and grammar”, Kosinski explains. “They are also modeling the physical world, the cultures, the societies.”
Perhaps most interestingly, he adds, “they also have to model the psychological processes and psychological mechanisms that we employ when we generate our language.”
As a consequence, Kosinski argues, LLMs inadvertently develop complex abilities such as reasoning, empathy, and theory of mind, skills once thought to be uniquely human.
Watch the video for further insights into the psychology of AI systems, and into the question of whether they are merely simulating human behavior or could develop genuine intelligence.