Photo Credit: Image by Shutterstock (c) fyu6561
The dark side of human nature is dominating the way politics is portrayed on social media, according to an unprecedented new study in Science that confirmed suspicions that innuendo and conspiracies are outracing more humdrum facts and truth-telling on Twitter.
The Twitter study, conducted by a team at the Massachusetts Institute of Technology, including the MIT Media Lab, analyzed a decade of Twitter posts, focusing on 126,000 examples of false news spread by 2 million to 3 million people. The study found that rumors spread much faster than truth, and it blames human nature, abetted by algorithms that fan those reflexes.
“Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information,” the study authors found. “False news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots [fabricating online personas] accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”
How much faster do rumors and false news spread on Twitter?
“False news reached more people than the truth; the top 1% of false news cascades diffused to between 1,000 and 100,000 people, whereas the truth rarely diffused to more than 1,000 people,” the study said. “Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed.”
The findings reported by Science are part of a growing chorus of academic expert opinion pointing out how the political arena is uniquely vulnerable to propaganda. For many reasons, the American tradition of protecting most political speech has dovetailed with the content-curating inner workings of social media platforms like Facebook and Twitter, and video platforms like YouTube, which all rely on advertising-based business models.
A New York Times commentary published Sunday by Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina, cited this same dynamic at YouTube, whose recommendation algorithm feeds viewers a stream of increasingly extreme content.
“It seems as if you are never ‘hardcore’ enough for YouTube’s recommendation algorithm,” she wrote, after observing a trail of served-up politicized content. “It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
Tufekci squarely attributed the radicalizing content to social media’s business model, which has pushed Silicon Valley to devise addictive devices and curate provocative content. She said Silicon Valley’s programmers weren’t seeking to roil the political world by elevating conspiratorial content. But social media has unleashed a new outbreak of propaganda, even if ordinary people are—or human nature is—playing a role in accelerating its spread.
“This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff,” said Tufekci. “A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.”
“What keeps people glued to YouTube?” she asked. “Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with—or to incendiary content in general.”
The Twitter study affirmed that people are intrinsically drawn to spicier (and not always true) content. But its finding that social media users, not bots (fabricated online personas), are mostly driving false-news feedback loops accounts for only part of what's happening with misinformation on social media. After all, programmers created the brain-mimicking, brain-triggering algorithms that first profile users (from their keystrokes) and then serve up inflammatory media.
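That profiling-then-serving loop can be made concrete with a toy model. The sketch below is purely illustrative (it is not any platform's actual code; the item names, scoring formula, and update weights are invented assumptions): a ranker scores items only by predicted engagement, each click on a charged item nudges the user's profile toward more charged content, and accuracy never enters the calculation.

```python
# Toy model of an engagement-driven feed (hypothetical, for illustration only).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    novelty: float   # 0..1, how surprising or emotionally charged
    accuracy: float  # 0..1, how well-sourced -- never used by the ranker

def engagement_score(item: Item, arousal_bias: float) -> float:
    """Predicted clicks: novelty weighted by the user's learned appetite
    for charged content. Note that accuracy plays no role."""
    return item.novelty * (1.0 + arousal_bias)

def rank_feed(items: list[Item], arousal_bias: float) -> list[Item]:
    # Highest predicted engagement first.
    return sorted(items, key=lambda it: engagement_score(it, arousal_bias),
                  reverse=True)

def simulate_session(items: list[Item], arousal_bias: float = 0.1,
                     clicks: int = 5) -> float:
    """Each click on the top item updates the user's profile, steepening
    the feedback loop the article describes."""
    for _ in range(clicks):
        top = rank_feed(items, arousal_bias)[0]
        arousal_bias += 0.2 * top.novelty  # profile update from the click
    return arousal_bias

feed = [
    Item("Dry policy explainer", novelty=0.2, accuracy=0.9),
    Item("Shocking conspiracy claim", novelty=0.9, accuracy=0.1),
]

bias = simulate_session(feed)
print(rank_feed(feed, bias)[0].title)  # the charged item dominates the feed
```

Because the score function optimizes a proxy (predicted clicks) rather than truthfulness, the well-sourced item can never outrank the charged one, no matter how many sessions run; that is the structural point, not a claim about any specific platform's implementation.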
“No matter how neutral a platform may seem, there’s always a person behind the curtain,” noted the New Yorker’s Andrew Marantz in a Monday piece that profiles the social media site Reddit and its CEO, Steve Huffman, and asks how to “detoxify the Internet.”
Marantz’s observation is key. It points toward the solutions raised by the social scientists who, when commenting on the MIT study, asked in another article in Science, “How can we create a news ecosystem … that values and promotes truth?”
They noted “about 47 percent of Americans overall report getting news from social media often or sometimes, with Facebook as, by far, the dominant source. Social media are key conduits for fake news sites.”
Silicon Valley’s Reactions
The attention economy’s response to the outbreak of propaganda on its platforms has not been to alter its money-making machinery, the content-curating algorithms. Instead, institutions like Facebook and Google have tried to create tools for media organizations to help their readers discern more and less truthful content. But those efforts seem futile, apart from their public relations value, the social scientists said in Science, because people are still drawn to what’s edgy.
“Fact checking might even be counterproductive under certain circumstances,” the researchers noted. “Research on fluency—the ease of information recall—and familiarity bias in politics shows that people tend to remember information, or how they feel about it, while forgetting the context within which they encountered it. Moreover, they are more likely to accept familiar information as true. There is thus a risk that repeating false information, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true.”
Richard Gingras, a senior Google executive, said at a recent blue-ribbon panel at Stanford University that the problem, if there is one, is anything but anti-democratic. Rather, there’s an outbreak of political speech, Gingras said, which might be politically disruptive, yet is expressing the views of multitudes of individuals.
It may be that the political sphere is returning to where it was a century ago, during World War I, before corporate public relations emerged and national media monopolies imposed journalistic norms of objectivity and balance, the social scientists said in Science.
What to do about the rise of political propaganda on social media is becoming one of 2018’s most pressing issues. The MIT research on Twitter shows that the propaganda’s spread is driven by a mix of brain-tapping technology and responses inherent to human nature.
There is little debate that politics, domestically and globally, has become more dominated by authoritarians, and social media has a role in that change. Silicon Valley may not want to dwell on the political implications of what it has created, but studies like the MIT research underscore why it has no choice.
The United States’ founders were heirs to an intellectual tradition that didn’t just worry about authoritarian monarchs but about the dark side of human nature. They created a republican form of government with deliberate checks and balances to restrain those darker impulses. The latest research suggests those restraints are what’s missing from social media’s algorithms.