Who controls AI, and how can these systems become more trustworthy? These questions lie at the heart of the DLD AI Summit discussion between Mark Surman, President of the Mozilla Foundation, and journalist Andrian Kreye about the benefits of open source in the world of artificial intelligence.
The key principles of trustworthy AI are agency, accountability and transparency, Surman argues – but transparency is one of the big challenges of generative AI because these systems essentially function as “black boxes”, he explains.
“How do we understand what happened in the black box?” Surman asks. “When we talk about auditing, explainability, we can’t move forward without solving that [issue]. And do we want to solve it by a few companies behind closed doors? Or do we want to solve it collectively?”
AI also creates risks around the spread of misinformation, particularly in connection with elections, and could be exploited for malicious purposes such as terrorism, the discussion makes clear.
Surman sees an opportunity for the EU to build a different kind of tech industry focused on trustworthiness. However, challenges remain around access to the capital and resources needed to scale companies.