
Why Tristan Harris Fears a “Race to Recklessness” in AI
The famous tech ethicist sees great potential in artificial intelligence – but also the danger of further eroding the foundations of society. His solution? Balancing power with responsibility.
Keeping up with the Kardashians may seem easier than trying to keep up with the AI industry. The moment OpenAI, maker of ChatGPT, presents its newest large language model, competitors rush to announce updates to their own generative AI models.
Whether it’s Google, Anthropic, Meta, DeepSeek or Alibaba – any company trying to secure a share of the hottest business in decades is rapidly iterating on its own LLMs, constantly tweaking and fine-tuning the algorithms to make them ever more powerful.
Dangerous Shortcuts?
While corporations worldwide are rushing to implement AI, hoping for “value creation within the business units”, as McKinsey analysts note, some observers are getting concerned at the pace of the technology’s development.
“There’s a race to roll out this technology as fast as possible, by taking many shortcuts”, Tristan Harris, co-founder of the Center for Humane Technology tells DLD. “It’s like a race to recklessness with profound implications for society.”
Tristan Harris
is one of the most prominent technology ethicists in the world. He co-founded the Center for Humane Technology in 2018 with a mission to help create “a world with technology that respects our attention, improves our well-being, and strengthens communities”.
Speaker Profile
Any healthy society needs a solid foundation, and this foundation is already crumbling, in no small part because of the corrosive effects of social media, Harris argues. The former Google ethicist gained worldwide fame through the Netflix documentary “The Social Dilemma”, released in 2020, a few months after Harris’ DLD Munich talk on democracy and the loss of trust due to social media and the attention economy.
“If the business model is the race to maximize engagement, you are not worth as much as a citizen while you’re not on your screen”, Harris says.
The dilemma: Many activities that are good for society, such as spending time with friends and family, “are directly in conflict with the business model of these platforms”, Harris emphasizes. “We are worth more when we are addicted, distracted, polarized, sexualized, outraged, and not agreeing with each other – because all of those features of culture drive the success of this business model.”
The AI Dilemma: Watch our DLD backstage interview with Tristan Harris.
Building Blocks of Society
With AI, the digital economy is again setting the wrong incentives, Harris fears. He likes to compare the dynamic to the popular board game Jenga, where players take turns removing blocks from a tower and keep stacking them on top, trying not to make the tower fall.
AI is certainly adding many benefits to society, Harris admits: “Anybody can make art, write code, do things in biology or clone someone’s voice. Those are very powerful capabilities. And if you look just at the top of the tower, you see amazing, positive things. You can be an optimist.”
But when you look lower down at the tower, “you’ll see that you had to pull out foundational building blocks to create these new capabilities”, he argues. “Yes, we can make cool new AI videos and art – but no one knows what’s true anymore, and democracy is weakened.”

That’s why his DLD25 talk was about taking a holistic view, he adds. “We’re going to get incredible new cures to cancer – and at the same time we’re facing new risks.”
Matching Power With Responsibility
The best way forward is striking a balance between these risks and the giant rewards that AI promises, Harris argues. “There’s a narrow path when technology tends to work well in society – and that is when power is matched with responsibility”, he says. “We must hold AI developers responsible. It’s really like constructing a building. If the developers are held liable in case something goes wrong, they exercise much more caution.”
This will require putting far more money “into societal defense” than today, when the vast majority of investor capital flows into building new AI capabilities. “Currently, there’s a thousand-to-one gap between the amount of money going into AI being more powerful, meaning scaling AI capabilities, and the amount of money going into making AI safe or defending society”, Harris notes.
In an ideal world, he says, “we would reallocate a bunch of the dollars that are currently invested in AI into increasing societal adaptation and resilience.” If that doesn’t happen, society may face dire consequences, he believes. “We’re about to disrupt millions of jobs without people having a new, alternate economic future. That’s going to create so much economic anxiety that we can’t just roll out AI and overwhelm society.”
Watch the video of our interview with Tristan Harris for further insights into AI ethics, better ways to regulate artificial intelligence, and upskilling government so that it can keep up with the speed of AI in business.