AI-powered disinformation is a growing danger to society and calls for collective action to safeguard democracy. That is the message of this DLD Future Hub panel discussion featuring Fabian Mehring, Bavaria’s State Minister for Digital Affairs; Yuki Asano, professor at the Technical University of Nuremberg, who researches the foundations of trustworthy AI; Andrea Martin, CTO at IBM and head of the Watson Center Munich; and journalist Max Gilbert from Bayerischer Rundfunk’s #Faktenfuchs team, whose professional fact checkers are on the front lines of the fight against fake news and conspiracy theories every day.
In his introductory remarks, Fabian Mehring warns that generative AI – despite all of its positive potential – poses a serious threat to democracy when used for political disinformation. He calls for measures such as transparency about the funding behind social media campaigns, real-name obligations, and a “platform levy” to support quality journalism, whose content is often exploited by AI models without compensation.
Yuki Asano warns of an imminent future where autonomous AI agents may generate deepfakes and manipulate social media profiles, making it increasingly difficult for humans to tell truth from disinformation. Asano suggests creating a public service – similar to a digital weather service – that independently labels and verifies information to counteract the scale of AI-generated manipulation.
Andrea Martin emphasizes the sophistication of AI-generated content, from deepfakes to voice clones, which is becoming indistinguishable from genuine material to the human eye and ear. She stresses the importance of using AI itself to detect disinformation but emphasizes that technological solutions alone are not enough.
Max Gilbert highlights the sheer scale of misinformation and the difficulty of detecting it. Platforms often lack incentives to combat disinformation, leaving public institutions and journalists to bridge the gap.
Please note that this discussion is in German.