The Known Unknowns of Online Election Interference
What are the things we *know* will be factors in the months leading up to November? Experts in elections and mis/disinformation campaigns give us their two big concerns.
In late January, Ben and I hosted an in-person reprisal of In Lieu of Fun, the goofy live TV show we did for 787 episodes starting in the early days of the pandemic (almost two years of which were DAILY, including weekends and holidays!), for the INFORMED conference in Miami. Our guests were the estimable Katie Harbath (former director of public policy for global elections at Facebook) and Renée DiResta (writer and research manager at the Stanford Internet Observatory, focusing on conspiracy theories, terrorism, and state-sponsored information warfare), and the topic was misinformation and disinformation in online election interference.
Katie began by reminding the mostly U.S. audience that this year brings not just the U.S. presidential election but also elections in India, Indonesia, Ukraine, Taiwan, the UK, and for the European Parliament. It is a massively important year for elections, yet tech companies have simultaneously undergone massive layoffs, creating new problems for how they'll deal with these concerns.
So to frame the discussion, Ben asked the guests, in a bit of homage to erstwhile U.S. Defense Secretary Donald Rumsfeld's "unknown unknowns," what the KNOWN unknowns were of the 2024 election season globally. Here are the two things the conversation focused on:
1. The Deceptive Power of AI-Generated Audio
The show took place the evening before the Republican primary in New Hampshire. The day prior, robocalls using an AI-generated recording of President Joe Biden's voice had gone out to as many as 25,000 voters in the state, discouraging them from voting.
I pointed out that deceptive robocalls are of course nothing new: people have always been able to find someone who merely sounded like Joe Biden, record something, and disseminate it to voters at a critical moment. But over the hour, Renee, Katie, and members of the very expert audience emphasized five aspects of this incident that were new and worrisome:
First, AI democratizes access to deepfake technology, particularly audio. It makes it cheap, quick, and easy for any Russian troll or bored teenager in their bedroom looking to make trouble to deploy fakes that are genuinely difficult to debunk, at wide scale. This is especially true for audio, which can be used to replicate any person's voice from as little as a ten-second sample of them speaking.
Second, AI allows bad actors to influence down-ballot local elections in ways that were not feasible or cost-effective before. This is a particular threat in the United States, as state elections have become more and more pivotal following the Supreme Court's abortion decision and the activism of state attorneys general and state legislatures in litigating against and regulating tech.
Third, AI allows bad actors to hyper-target not just candidates but pivotal audiences, like unlikely voters, in order to alter turnout. The idea is that you can super-personalize a particular kind of manipulation for a particular kind of voter, serving them exactly the type of fake news that will get them off their duff and to a voting booth on election day.
I gave the example of an elderly shut-in with ten cats who is sent a video of Joe Biden torturing cats the day before the election. She wasn't going to vote, but that's exactly the type of highly emotional content that would get her out the door and to a voting booth.
Fourth, popular awareness of and panic around AI-generated media degrades societal trust and, consequently, produces decision paralysis. You don't trust the good or the bad; you just don't trust anything. You don't know what to believe, and the resulting epistemological crisis creates so much confusion that people won't make decisions.
Fifth, the mainstream media and lawmakers are also vulnerable to amplifying AI-generated content. Not only might the mainstream media be used to amplify fake content, but they must devote copious resources to hunting down what is true in order to decide what to report. Similarly, lawmakers' attention gets diverted into investigating fake crises. All of this not only takes attention away from legitimate issues and concerns, but burns valuable resources and energy in industries and areas of government that don't have the people or money to waste.
2. The Great Social Media Platform Decentralization’s Effects on Moderation
Not only are the problems of mis- and disinformation going to be on steroids thanks to AI; platforms' ability to address them has gotten harder due to what's been called the Great Platform Decentralization. The Great Decentralization is the phenomenon of people moving off large-scale commercial platforms like Facebook, Instagram, Twitter, TikTok, and YouTube and onto the federated network of platforms like Mastodon and Bluesky, but also onto smaller, ideologically motivated platforms like Gab and Truth Social.
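To see why federation complicates moderation structurally, here is a minimal toy sketch (the instance names and mechanics are invented for illustration; this is a simplification, not how ActivityPub actually works). On a centralized platform, one takedown decision removes a post for everyone; on a federated network, each instance enforces only its own rules, so a post removed in one place can stay up everywhere else:

```python
# Toy model of federated moderation (invented names; a sketch, not real
# ActivityPub). Each instance holds its own copy of federated posts and
# applies only its own moderation decisions.

class Instance:
    def __init__(self, name):
        self.name = name
        self.posts = set()

    def receive(self, post_id):
        # Federation delivers a copy of the post to this instance.
        self.posts.add(post_id)

    def moderate(self, post_id):
        # A takedown binds only THIS instance's copy of the post.
        self.posts.discard(post_id)

instances = [Instance("bigserver.example"), Instance("fringe.example")]

for inst in instances:
    inst.receive("troll-post-1")        # the post spreads across the network

instances[0].moderate("troll-post-1")   # one admin removes it locally

for inst in instances:
    print(inst.name, "troll-post-1" in inst.posts)
# bigserver.example False
# fringe.example True   <- still reachable: there is no central lever to pull
```

The same logic applies to the standalone ideological platforms: no single authority can remove content from the whole ecosystem.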
I pointed out to the group that for the last few years a major concern for people worried about information systems had been the creation of "filter bubbles" on social media newsfeeds: customized ranking and recommendation results that cut users off from the full breadth of available content. But the modern threat, I argued to the panel, was much worse, because it wasn't just filter bubbles anymore. People had now decamped to REAL bubbles, in which they would never even accidentally brush up against content they disagree with.
Optimistically, Ben hypothesized that perhaps this decentralization had actually made it harder for the baddies pushing mis- and disinformation to reach their target audiences, but both Renee and Katie very much disagreed. "I think it's actually easier," Katie said. And Renee explained that most disinformation is aimed not at wide mainstream audiences but at already-radicalized groups:
What Russia[n disinformation campaigns] did was never targeted at what we might have called the unified American public, and that's because there's no such thing as a unified American public. So if you're targeting a faction, and the factions have just conveniently removed themselves to different platforms, then your right-wing trolls are going to be on right-wing platforms, which, by the way, is exactly what we see.
Renee went on to explain that the trolls' behavior was also a type of regulatory arbitrage. Regulatory arbitrage is a legal concept: when different jurisdictions (or platforms) have different rules, actors move their activity to wherever the rules treat them most favorably:
And so, back when Twitter actually moderated things, what you would see is [trolls] would move to other platforms where they were not going to be moderated, because they could still reach the target and they could still be effective there. So even in the 2022 midterms, one of the things we observed from Russia, Iran, and China was actually the prevalence of targeting of down-ballot races and down-ballot narratives, what you might call down-ballot social media communities, whether on Gab or Reddit or these much more niche places.
Because what the [disinformationists] want to do is either nudge somebody towards a particular action by being a member of that community, [or by] speaking as a member of that community. [Trolls] don't care about a unified American public; they care about that particular niche, and when the niches declare themselves quite visibly, you've actually made their targeting easier.
Other experts in the audience chimed in to agree, including the former head of Trust and Safety at Twitter, Yoel Roth, who had just published a paper on the security and information risks of decentralized platforms in the Journal of Online Trust and Safety.
In case that’s not enough to worry about
Finally, after summarizing the panel's concerns into these two major buckets, Ben closed out by asking the mostly-expert audience to volunteer a grab bag of their other known-unknown anxieties. For security reasons, I won't reprint those here, just in case I'd end up giving bad guys any ideas, but I will offer one anxiety that I think is too often forgotten in the American and Western European contexts.
Paraphrased, one of the audience members reminded us in the last five minutes:
If you think all this moderation is bad in the English-speaking world, have you considered WHAT ELECTION MODERATION LOOKS LIKE IN NON-ROMAN-ALPHABET LANGUAGES?
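It's a fair question even at the level of basic tooling. As a minimal illustration (the blocklist and test strings below are toy examples I made up), here's how a naive English-centric keyword filter silently fails once content leaves the ASCII range. Unicode normalization rescues some cases, like fullwidth compatibility characters, but does nothing for cross-script homoglyphs, and none of this even touches languages that don't put spaces between words:

```python
import unicodedata

BLOCKLIST = {"election fraud"}  # toy moderation rule

def naive_match(text):
    # Simple substring matching, the way a quick English-first filter might work.
    return any(term in text.lower() for term in BLOCKLIST)

# Case 1: fullwidth compatibility characters. NFKC normalization fixes this.
fullwidth = "ｅｌｅｃｔｉｏｎ fraud"
print(naive_match(fullwidth))                                 # False
print(naive_match(unicodedata.normalize("NFKC", fullwidth)))  # True

# Case 2: cross-script homoglyphs. The "е"s below are Cyrillic (U+0435),
# visually identical to Latin "e". NFKC does not fold scripts together,
# so the filter misses the string before AND after normalization.
homoglyph = "еlеction fraud"
print(naive_match(homoglyph))                                 # False
print(naive_match(unicodedata.normalize("NFKC", homoglyph)))  # still False
```

And that's the easy end of the problem; real multilingual moderation also has to handle dialect, transliteration, and scripts with no word boundaries at all.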
And with that final question and a collective groan, the group disbanded to enjoy the traditional In Lieu of Fun Scotch that I’d bought at the duty free shop en route from France.
My hope, and I’ll admit this is probably wish casting as much as anything, is that the pervasiveness of AI generated content actually forces people to assume the default position of not believing anything unless it is confirmed by a credible source. As opposed to the current standard (for most) that seems to be believing everything that reinforces your current priors and dismissing anything that counters them.
What ever happened to critical thinking skills and questioning authority, fact checking and not relying on one source? I am the old lady with 6 cats, 2 dogs, and 7 chickens.