One of the world’s leading cognitive scientists, Anil Seth (University of Sussex), explores the age-old human fascination with creating artificial versions of ourselves, focusing on whether AI could ever be truly conscious.
“For centuries, people have fantasized about playing God by creating artificial versions of themselves”, Seth observes. “This is a dream that is reinvented with every breaking wave of new technology. And with AI, the wave is a really big one.”
It’s important to make a distinction between intelligence and consciousness, Seth emphasizes. “Very broadly, we can think of intelligence as doing the right thing at the right time”, he explains, defining it as “the ability to achieve goals by flexible means.”
Consciousness, by contrast, is “all about being and experiencing”, from the redness of red to the intensity of pain, for example. “It feels like something to be me”, Seth says. “It feels like something to be each of you.”
What does that mean for AI systems like ChatGPT and Claude, which may claim to have “a rich inner world of thoughts and feelings and hopes and fears”? Could they really develop such feelings, hopes and fears?
Watch the video to hear why Seth cautions against confusing compelling language models with conscious entities, and how human psychology, particularly anthropomorphism, leads us to project consciousness onto AI systems.