How social media influences elections
A Q&A with VICE reporter David Gilbert on how disinformation and social media are influencing the 2020 U.S. election.
The big interview
I recently sat down with VICE reporter David Gilbert, who is currently covering disinformation in the 2020 U.S. election, to talk about how disinformation and social media influence elections. The focus of our conversation was on the upcoming presidential election in the United States, but the themes are relevant to democratic processes everywhere.
The following is a transcribed version of our conversation, edited for length and clarity. I hope you find it as enlightening as I did.
When did the issue of social media influencing elections become a topic of concern for you? Was it the 2016 U.S. election or prior?
I guess it was the 2016 election, really. Prior to that, it was probably something that was on my radar but it wasn’t something that I really felt was the threat it turned out to be.
I think 2016 really crystallized the idea that disinformation was a hugely powerful weapon that at the time no one — not the media, not the intelligence community, certainly not the social networks themselves — really understood.
The issue of social media and elections is a very complicated and nuanced one, and it continues to develop every day. So is there one specific issue regarding social media — whether it’s disinformation or the echo chamber or political advertisements — that you think is most concerning?
I think it’s disinformation because — I suppose it depends on how you define disinformation.
Coming into this election, for example, the social networks have a lot of people watching for disinformation campaigns, and there are researchers looking out for them, but they’re all looking for major, coordinated, typically foreign-based campaigns. But what disinformation has morphed into in this election, which is really worrying and insidious and everywhere, is hyper-local, hyper-personalized disinformation where the people spreading it aren’t some random bot pretending to be the Republican Party in Tennessee; it’s your brother, your sister, your aunt, your uncle.
People are just sharing information without checking where it’s coming from, and there’s so much of it out there. People in America have become so divided and so stuck in echo chambers, and they have been for four years because of the country’s leadership, which has driven that division. You’re either one thing or another, and there’s no space for nuance, discussion, compromise, or anything like that.
So disinformation now is completely different from what disinformation was in 2016. Disinformation now is opinions and viewpoints based not on fact but on whatever people want to base them on. It looks real, it looks like it’s coming from websites that resemble typical news websites, and it’s being driven by these celebrity individuals.
The stuff we see most being shared is on the conservative side, the far-right side, and there are individuals there who are way more powerful on Facebook than the New York Times, the Washington Post, or CNN in terms of reach and engagement. That’s where it’s happening: they have followers all across the states and it’s those people who are helping to amplify it. It’s not Russian bots, it’s not the Chinese, it’s not Iran, it’s not anyone else. It’s people who have been essentially radicalized over the last four years by a leadership that has tried to create as much division as possible within the country. That’s why it’s so hard for us to get a grasp of the scale of the problem at the moment.
Why have people on the far-right had the most success using social media to generate huge online followings?
Because they’re incredibly smart at what they do.
It’s the likes of Ben Shapiro at the Daily Wire. These people are coming at it from a very digital-first — even a Facebook-first — approach. They gear their whole operation around Facebook. They don’t even care about a website or anything like that, it’s all about Facebook. That’s where they get all their traffic from. So they build networks and groups and they build an audience, and they have been building an audience over the past four years, because they’ve seen the kind of engagement that President Donald Trump gets from his followers on Twitter, and they know that these digital networks work.
They’re successful because they’re very clever at walking the line between what Facebook bans and what Facebook doesn’t. They could probably tell you Facebook policies much better than I could or than many other so-called experts on disinformation could because they’re going right up to the line every single day and just not crossing it… They’re very careful about what they print. They know what the rules are. And they’re just very good at building audience bases… They know how to speak to their audience in terms of “draining the swamp,” “big tech is out to get you,” “the democratic elite.” Even more recently, they’re slowly weaving in more extreme examples of conspiracy thinking into their content.
I just think that these guys are very smart at knowing their audience, knowing the platform that they’re working with, and exploiting it to the max.
In recent weeks, we have seen companies like Facebook, Twitter, YouTube, and TikTok create new policies against disinformation. Last week, for example, the New York Post published the Hunter Biden laptop story, which Facebook downgraded until it could be fact-checked and Twitter banned completely before reversing course the next day, with CEO Jack Dorsey promising that Twitter would “add context” to dubious stories rather than censor them going forward.
Why has it taken so long for these companies to create disinformation policies? Is it truly because disinformation policy is so complicated, or does it have to do with the people in charge of the companies wanting a certain result in the upcoming election?
I think that’s the question everyone is trying to figure out.
I genuinely don’t think either Mark Zuckerberg or Jack Dorsey want to influence the election one way or another… I think Twitter is probably just not very well-run. Dorsey is splitting his time between Twitter and Square as CEO of both companies, and they’re just kind of scrambling a little bit and haven’t got it together for whatever reason. It just seems incredible that they’ve had four years to put these policies in place, knowing that this was going to be a hugely contentious election and that Trump was going to criticize them no matter what they did. Why they didn’t set these rules out six months in advance and get it out of the way, and why they’re still scrambling to say exactly what they’re going to do, [is mind-blowing].
In terms of Facebook… I think Zuckerberg is, or was at least, very, very wary of upsetting Trump while Trump was in office. He did pretty much everything he could to make sure he kept the President happy. But that seems to have changed in the last month or six weeks. We’ve seen Trump’s posts be more aggressively targeted, his ads have been taken down, this ad ban policy has been put in place, and I just think Zuckerberg has realized that Trump is probably going to lose the election, so he’s thinking: OK, I’d better do some things to make Democrats happy.
I think what it comes down to is that these companies are tech companies, and when they were founded they never ever in their wildest dreams thought that they would be making decisions that could potentially have such an enormous impact on the democratic process of one of the world’s largest democracies.
Even with these policies in place, we have seen countless examples of misinformation spreading even after it has been flagged, and of the wrong content being flagged, such as Facebook and Instagram accidentally flagging #EndSARS posts.
It’s clear that the artificial intelligence tasked with finding and flagging misinformation is riddled with problems, so why don’t these companies hire more humans to do the work?
I think they just fall back on the fact that they think technology will solve every problem.
Facebook, for example, has 15,000 moderators now, but none of them actually work for Facebook: they’re all third-party [contractors] earning minimum wage. There have been repeated calls for Facebook to bring them in-house and pay them properly, but Facebook doesn’t seem to have any interest in that. I’ve reported on moderators previously, about the conditions they work in, and it just sounds absolutely horrific.
But they are relying so heavily on technology. Artificial intelligence technology is still so young, so nascent, and largely unproven, yet it’s being rolled out live on Facebook’s platform… Facebook has some of the foremost AI experts in the world, without a doubt… and some of their technologies are really, really impressive. But they’re just playing catchup to what’s happening in the real world.
Given the scale of their platform and the number of languages they support, everything they’re doing is focused on English, so while their English-language AI detection tools are good, every other language is just not (read, for example, about how Spanish-speaking Americans are being targeted with disinformation through social platforms).
When it comes to places like Myanmar or Ethiopia, where there are local languages and dialects and codewords used, you need people there who understand that to inform the AI systems. In Ethiopia, there are warnings that there will be a repeat of what happened in Myanmar because Facebook is boosting hate speech there. And they have no employees in Ethiopia… they just don’t seem to be willing to hire people in countries where there’s a major problem.
You recently wrote a story titled, “Malicious Campaigns Are Trying to Stop Black and Latinx People From Voting.” Can you talk about what you learned reporting on that piece, specifically in regards to social media?
The most revealing thing about talking to those people was the fact that this is something that’s been happening for decades, for centuries: attempts to stop Black people from voting. But now it’s taken on a new look because it’s gone digital. Previously, people were told they couldn’t vote because there was a chance they’d be arrested or be put on national registry lists, and those same messages are now being repeated, but with the help of Twitter and Facebook they’re being amplified and reaching many more people than they would have before…
[On the bright side], activists are countering these messages in ways that people — especially young people — will engage with, such as memes and images on platforms from TikTok to Instagram to Facebook and Twitter. One example is Tok the Vote, a non-partisan effort to get young people to vote. While turnout among young voters is historically low, with fewer than half of Americans aged 18 to 29 voting in the 2016 election, 29 percent of 18- to 21-year-olds have heard about the upcoming election on TikTok, which seems positive.
So it’s a really interesting dynamic at the moment because the amount of disinformation is growing but the pushback is getting much bigger as well.
Let’s talk about echo chambers that social media platforms create because I think it’s an issue that has been lost in the recent conversations about disinformation.
Social media, when it’s working as intended, creates echo chambers where users only see content that fits their worldview, unless they try really hard to follow different kinds of accounts. Is this a problem? Is it partially responsible for the extreme polarization of the country? How can platforms deter this?
I think it's probably the main reason why America is so divided at the moment.
Social media’s entire business model and makeup are designed so that its algorithms will always err towards more engagement and keeping people on the platform for as long as possible. And long ago the platforms figured out that the best way of doing that is by serving up recommendations of things that they know people want to watch or read. And that’s essentially the echo chamber.
YouTube and Facebook are all about driving engagement through recommendations that they think their viewers and readers will like. And that’s done by serving up more and more extreme content. Along with being an echo chamber that keeps you in there, it helps bring people who may be right-wing or conservative into the more extreme far-right. Because that stuff tends to be more clicky and more sensationalist, that content will be served up, and when Facebook sees that you’ve been watching that video for five minutes, it’ll go: OK, well maybe I’ll recommend something else that is along those lines or even a bit more extreme.
It’s a really insidious and dangerous thing, but I can’t see it ever changing, because Facebook’s and YouTube’s entire business models are built around keeping people on the platform for as long as possible, and the way they figured out how to do that is by showing you, again and again, stuff that will back up your beliefs and confirm your biases.
Is that how the echo chamber and disinformation are intertwined? The further people go down these rabbit holes and the more extreme content they see, certain creators like Ben Shapiro know that they can get away with some disinformation, and that’s how it gets spread?
In terms of the news and disinformation, on one end you’ll get conspiracy theories like the hardcore QAnon stuff [that is] completely off-the-wall with no basis in fact. And the other extreme is the mainstream media, which is based in fact. In between, there is a sliding scale of 5 percent misinformation, 10 percent, 20 percent. And that stuff is probably the most dangerous, because it’s mixed in with 80 percent of stuff that’s accurate and true and factual and can be backed up. But there’s that 20 percent that people — I suppose because they are in that echo chamber — are willing to ignore, because they have been told that it’s true [and they trust the narrator], so they just don’t ever question it.
So I think you’re right that the echo chamber and disinformation work hand-in-hand, and it’s very hard to stop the spread of disinformation… because these guys are so good at being able to publish stuff that will not breach Facebook’s community standards. It’s a slippery slope, and it’s what Facebook is built on.
Where do you see this going? Are companies ever going to successfully combat misinformation on their platforms?
I really don’t know. Facebook is just too big at this point to fail, and it’s got markets it still needs to conquer. Africa is wide open; it’s a huge growth market for Facebook, and they’re putting much more focus on it now. It’s going to be an absolute disaster, a mess. It already is in a lot of countries, because Facebook there is essentially the internet for a lot of people. So that’s quite worrying.
They’ve got so many problems in terms of dealing with languages, dealing with local customs and nuances. Then there’s the whole other problem of Facebook being weaponized by authoritarian leaders like Hun Sen in Cambodia and Narendra Modi in India, who are using Facebook to silence and attack their opponents, using troll armies that Facebook seems to be happy with… So I think it’s going to face massive, massive problems around the world in the next five, ten years, but that’s not going to impact its growth. It’s just too big at this point to stop.
For all the things that have happened with Facebook in the last three or four years, none of it has really made a dent in its ability to earn money or to grow its user base, which is quite terrifying.
What is the future of social media and elections? Is the U.S. government going to step in and regulate social media platforms?
Regulation could happen; it could make the situation better, but it could also make it worse.
There are a lot of lawmakers in Congress who don’t really understand or have a proper grasp of the issues, though some do. So I guess it depends on what happens in the election, who gets in and who doesn’t.
But generally my view is that in four years’ time we could be back here talking about how social media influenced that election too. It will probably have an even bigger impact, because it’s not going away anytime soon and it’s not going to get better. It’s going to get worse, if anything, because there’s no easy fix.
Notes
https://www.theringer.com/2020/10/21/21525921/presidential-election-trump-biden-tomorrow-together
https://time.com/5892347/social-media-platforms-bracing-for-election/
https://www.vice.com/en/article/jgqe33/malicious-campaigns-are-trying-to-stop-black-and-latinx-people-from-voting - By David Gilbert
https://www.vice.com/en/article/n7w8zd/how-facebook-allows-misinformation-to-spread-even-after-its-flagged - By David Gilbert
Questions
Social media is not going away, but how can we have a fair democracy with all the disinformation it spreads and the echo chambers it creates?
Are you in favour of government regulation of social platforms? Why or why not?
Does it worry you that the heads of these social media companies are among the most powerful people in the world? Or do you trust them?
I’m always looking for writing work! If you have any leads, email: orenweisfeld@gmail.com or DM me on Twitter: @orenweisfeld. My published work can be found here.