How Can We Trust Social Media Again?
The young man from Silicon Valley was full of optimism when he spoke at the DLD conference in Munich. By giving people “the power to share”, his company would “make the world a more open and transparent place”, Facebook founder Mark Zuckerberg told the audience – implying that this would also lead to a better world, almost automatically.
“People are going to understand more about what’s going on with their friends and the people around them, society at large, the different businesses that make up the economy, and also government”, Zuckerberg said. “You know, this movement towards more openness as people share more is going to have a positive impact on all of those things.”
This was in January of 2009, arguably a more innocent time. Facebook counted 200 million users but was still in its infancy. Mark Zuckerberg, not yet 25 years old at the time, could be excused for his idealism bordering on naiveté. Today, Facebook is a favorite online destination of 2.7 billion people around the world. Add to that some 2 billion WhatsApp users and 1.1 billion Instagram fans – two services that Facebook owns as well – and it’s clear why Mark Zuckerberg’s company has found itself at the center of a passionate debate around misinformation and conspiracy theories.
Web of Lies
As much of the world has been suffering Covid-19 lockdowns, the pandemic has amplified the problem of “fake news” spreading online. “With real-life interaction suppressed to counter the spread of the virus, it’s easier than ever for people to fall deep down a rabbit hole of deception”, The Guardian notes in a report about the QAnon movement.
The phenomenon, which started on the fringes of society, quickly expanded beyond its origins on shady message boards, spreading across all major social media platforms and undermining a common belief in a shared truth.
“I think very few people have an understanding of how their minds work and what draws them to some information and some people, and what pushes them away”, observes Rachel Botsman, a University of Oxford researcher who explored the importance of trust for technology and innovation in her book Who Can You Trust?
“Ask yourself, not what you believe, but why are you believing that?”
To Botsman, it’s not surprising that millions choose to believe in their own reality rather than in scientists or media reports. The explanation, she says, lies in the fact that traditional media sources require “layers of trust” – in reporters, experts, newspapers, radio and TV stations – whereas information shared on social media typically comes from friends. So there’s a built-in boost of credibility.
“When I get something that is directly sent to me from someone that I know, who I think I know, there’s only one layer of trust there”, Botsman says. “So it feels like a more direct source of information.”
Addictive By Design
Social media platforms have long resisted the idea that they should be responsible for information shared on their networks, but critics see a direct connection between design decisions and harmful effects for society.
Tristan Harris, a former design ethicist at Google and co-founder of the Center for Humane Technology, speaks of “technology hijacking human weaknesses” when he describes how services like YouTube, Twitter or Facebook try to capture people’s attention, for as long as possible, so that they can sell clicks and visits to advertisers.
In his DLD 20 talk, Harris explained that most YouTube viewers follow the recommendations of the software, rather than actively putting together their own program. “70 percent of the watch time is from the machine at YouTube calculating, ‘What can I show your nervous system? What can I dangle in front of your brain that’s going to keep you watching?’”, Harris said.
The problem: Recommendation systems tend to amplify existing preferences rather than promote diversity – so that existing beliefs are confirmed, increasing the likelihood that viewers stay tuned, readers keep reading, and users keep scrolling. Roger McNamee, an early Facebook investor turned vocal critic, blames this concept of “algorithmic amplification” for much of what’s gone wrong with social media.
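To make the mechanism concrete, here is a deliberately simplified sketch of engagement-based ranking. It is purely illustrative – not any platform’s actual algorithm – and every name and data point in it is hypothetical: a ranker that scores posts by similarity to what a user already engaged with will naturally push like-minded content to the top.

```python
# Toy illustration of engagement-driven ranking (hypothetical, not any
# platform's real system): posts most similar to what the user already
# clicked on float to the top, reinforcing existing beliefs.

def rank_feed(posts, user_history):
    """Order posts by overlap with topics the user engaged with before."""
    def predicted_engagement(post):
        # Crude proxy: count shared topic tags with past engagements.
        return sum(tag in user_history for tag in post["tags"])
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": 1, "tags": {"sports"}},
    {"id": 2, "tags": {"conspiracy", "politics"}},
    {"id": 3, "tags": {"cooking"}},
]
history = {"conspiracy", "politics"}  # topics the user clicked before

feed = rank_feed(posts, history)
print([p["id"] for p in feed])  # → [2, 1, 3]
```

Nothing in this sketch checks whether a post is accurate or healthy to consume; optimizing only for predicted engagement is exactly the design choice McNamee’s critique targets.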
“I want to ban algorithmic amplification”, he declared at DLD Munich in January, predicting that “the problems with hate speech, disinformation and conspiracy theories that are destroying civilization will evaporate” if the practice were outlawed.
“I believe that technology is too important to be left to the people who are running the industry today.”
The Brexit referendum in 2016, McNamee recalls, was a moment of revelation for him. “I realized, ‘Oh my gosh, the same tools that make Facebook so effective for advertisers can be used to harm democracy’”, he told DLD in an interview. “That’s when I began digging in, to try to understand what was going on.”
Micro-targeting and algorithmic amplification become a huge problem when applied at the societal level, he explained, because these practices make it easier to manipulate millions of people by steering them, often in subtle ways, in a certain direction. Efforts to detect abuse through technology alone were bound to fail, McNamee argued. “If you’re imposing artificial intelligence where previously human judgement was brought to bear, what’s going to happen? You’re going to gradually squeeze out individual choice.”
Facebook’s VP of Global Affairs, Nick Clegg, defends ranking algorithms as a smart way to serve up an individual news feed to billions of users. “Of course people don’t want to see clickbait, they don’t want to see polarizing things, they don’t want to see hate speech”, Clegg said at DLD Munich. “That’s why not only do we suppress that stuff, we ban it!”
Hold That Thought
When Mark Zuckerberg spoke at DLD in 2009 he praised the ease with which information can travel in a networked society. “If you go back, like, 50 years in the world, a lot of information couldn’t spread very efficiently”, he told his interviewer, Techonomy editor David Kirkpatrick. “With the Internet and all these tools that are so effective, things spread across social connections very quickly.”
By now, the negative effects of this efficiency have become abundantly clear – and the question is: How to slow down information as it races across social networks?
Twitter introduced a new feature amounting to a virtual speed bump ahead of the 2020 U.S. elections. Instead of allowing users to share messages with a simple click, the network kept asking if they really wanted to hit the retweet button.
In Rachel Botsman’s view, this is a step in the right direction. “I believe many of the solutions lie in slowing people down”, she says. “You know, teaching people what skepticism is and how to dole their trust out well. And the speed that information travels, the way that the systems work, the way that we gather and absorb information is designed to do the very opposite of that.”
She argues that this is a skill that children should be taught in school because she doubts that more technology can be an answer. “If we wait for design solutions around being able to flag content that has come from an unreliable source or that that person is not real, it will be game over”, Botsman says. “It will be too late.”
Manipulating images used to be hard work. When photos were still developed in dark rooms you needed to be an expert at handling chemicals, and even digital photoshopping required a lot of training – until computers got powerful enough to learn from humans. By now, smartphone apps can retouch images automatically and even edit videos well enough to make them Instagram-ready in mere seconds.
That’s all well and good as long as we’re talking about selfies and vacation photos. What worries experts is that smart algorithms are increasingly used to mass-produce images that look real but are entirely fabricated. These so-called deepfakes have the potential to wreak havoc on every level: personal, business and political.
While the number of detected deepfakes is still relatively small, it has been roughly doubling every six months, according to Sensity, an analytics firm based in Amsterdam formerly known as Deeptrace.
“Until yesterday it was OK to trust audio and video recording as factual representation of something that actually happened”, Sensity CEO Giorgio Patrini told DLD in an interview. “But today there are technologies that can manipulate them very realistically, and they are inexpensive and easy to use for people to do harm.”
When Patrini spoke at DLD Munich in January he demonstrated how still images of human faces could easily be turned into video animations, with the option of changing gender or ethnicity at will. “This is going to open new opportunities for fraudsters or social engineering, for disinformation, that were not there before”, the Sensity CEO points out. “Therefore we need new defensive tools to tackle this issue.”
His company’s solution is to teach an artificial intelligence system how to spot manipulated images. “We use deep learning and computer vision against itself, in a way”, Patrini explains. “In a fight of AI vs. AI.”
The system gets trained on photos and videos that Sensity engineers know to be manipulated. The goal is for the AI to understand at pixel level which clues and inconsistencies to look for “that could tell us, for example, that this face was not there when the camera recorded that video”, Patrini says.
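Sensity has not published its detection pipeline, so the following is only a toy illustration of the general idea Patrini describes – training a detector on examples known to be manipulated. Everything here (the artifact score, the data, the threshold rule) is hypothetical; real detectors use deep neural networks rather than a single hand-crafted statistic:

```python
# Toy sketch of "AI vs. AI" detection: learn, from labeled examples, what
# separates real from manipulated images. Hypothetical throughout - real
# systems train deep networks on millions of frames, not one statistic.

def blending_artifact_score(pixels):
    """Mean absolute difference between neighboring pixel values.
    Face-swap blending often smooths away this natural sensor noise."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)

def train_threshold(real_images, fake_images):
    """Pick the midpoint between the average scores of each class,
    and note which side of it the fakes fall on."""
    real_avg = sum(map(blending_artifact_score, real_images)) / len(real_images)
    fake_avg = sum(map(blending_artifact_score, fake_images)) / len(fake_images)
    return (real_avg + fake_avg) / 2, fake_avg > real_avg

def is_fake(pixels, threshold, fake_is_higher):
    score = blending_artifact_score(pixels)
    return score > threshold if fake_is_higher else score < threshold

# Labeled training data: rows of grayscale pixel values (0-255).
real = [[10, 12, 9, 11, 10], [20, 22, 19, 21, 20]]   # natural pixel noise
fake = [[10, 10, 10, 10, 10], [20, 20, 20, 20, 20]]  # oversmoothed blend

threshold, fake_is_higher = train_threshold(real, fake)
print(is_fake([30, 30, 30, 30, 30], threshold, fake_is_higher))  # → True
```

The sketch captures only the supervised-training step Patrini mentions; the adversarial part of “AI vs. AI” comes from generators continually producing new fakes that detectors must be retrained to catch.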
“My belief is that the existing social media networks are failing society.”
Deepfakes are already causing harm to individuals, so it seems only a matter of time before they are also used for political purposes – renewing the question: How should social media companies be held responsible for what happens on their platforms?
In Rachel Botsman’s view it’s clear that the tech giants need to do a better job of controlling what’s being shared on their networks, but within clear limits. In her DLD Sync talk she advocated a “combination of responsibility of the platform and external regulation that enforces that responsibility.”
Crucially, though, social networks should not be held responsible for what users do based on information they find online. “This is an important distinction”, she said. “The platforms take responsibility for the content, not necessarily the consequences of the content.”
Under mounting pressure social networks have called for new regulation themselves. Facebook’s Nick Clegg argued at DLD in January that deciding which content to allow and which to delete shouldn’t be left to corporations. “You can only fix this by legislators and regulators setting down new rules”, he said, “just like they set down new rules for cars, for every new technology which has ever erupted in the history of time.” A few weeks later Facebook published a whitepaper (PDF) proposing new guidelines for content regulation.
Ultimately, the impact of a social network depends on its popularity, of course – so there’s also an individual responsibility in choosing which services we use. Network effects make it hard to turn away from platforms where all of our friends and colleagues are, but nobody is forced to use Facebook, Twitter, WhatsApp or TikTok. It’s a voluntary decision.
And Wikipedia founder Jimmy Wales is hoping that he can get at least some of us to reconsider. In October 2019 he launched WT Social, a new kind of social network designed to be quieter, friendlier and more civil.
Current networks, Wales told DLD, “prioritize low-quality information, clickbait content, outrage. And it’s damaging the world.”
While WT Social is free to use, Wales trusts that a small but significant number of users is willing to make donations – similar to Wikipedia fans. “We have a completely different incentive structure”, he explains. “Because the only way we get any money is if people voluntarily choose to pay, they won’t pay unless it’s meaningful to them and important in their lives.”
To keep things civil, he’s relying on the community. Personal attacks, hate speech and other toxic behavior are not a big issue, Wales says, because the community won’t tolerate them.
“It’s never going to be perfect”, he admits. “There’s always going to be some squabbling. That’s just the nature of things.” But community design and setting the right tone from the beginning matters, he argues. “You can start out and say, ‘When you come here the standards of behavior are different. And we try to be nice to each other. We try to be thoughtful and kind.’”