
AI didn’t sway the election, but it deepened the partisan divide

A. Davis, 13 hr ago

This was the year that artificial intelligence was expected to wreak havoc on elections.

For two years, experts from D.C. to Silicon Valley warned that rapid advances in the technology would turbocharge misinformation, propaganda and hate speech. That, they worried, could undermine the democratic process and possibly skew the outcome of the presidential election.


Those worst fears haven't been realized, but others have. AI seems to have done less to shape how people voted and far more to erode their faith in reality. As a new tool of partisan propaganda, the technology amplified satire, false political narratives and hate speech to entrench partisan beliefs rather than change minds, according to interviews and data from misinformation analysts and AI experts.

In a report shared with The Washington Post ahead of its publication Saturday, researchers at the Institute for Strategic Dialogue (ISD) found that the rapid increase in AI-generated content has created "a fundamentally polluted information ecosystem" in which voters increasingly struggle to distinguish what's artificial from what's real.

"Did AI change the election? No," said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation. "But as a society now, we're living in an alternate reality. ... We're disagreeing on if two-plus-two is four."

The social media platform X, whose owner, Elon Musk, went all in backing Donald Trump's campaign, also cemented its role as a place where AI content can circulate without guardrails.

As Trump prepares to enter office, experts said that AI, especially on X, may provide his supporters with a creative medium to foster community and an acceptance of controversial policy positions, such as mass deportations or abortion bans. AI-generated fakes, they said, will probably help influencers spread false narratives on loosely regulated social media platforms and bolster the partisan beliefs of millions of people.

"This is the playbook," Farid said. "If you don't like something, just lie and then get it amplified."

X did not respond to a request for comment.

Deepfakes emerged early in the election cycle, notably when President Joe Biden's voice was spoofed in January to discourage New Hampshire voters from voting in the state's primary. The Democratic operative behind it, who claimed he sought to raise awareness about the dangers of AI, was fined $6 million by the Federal Communications Commission, which cited violations of telecommunications regulations.

In July, Musk shared on X an AI-generated fake audio clip of Vice President Kamala Harris celebrating Biden's decision to drop out of the race and calling herself a "diversity hire." The post was viewed more than 100 million times, according to X's public metrics, and it remains on the platform without a label or fact-check.

Cartoonish AI images portrayed Trump in Nazi garb and Harris in sexually suggestive and racially offensive ways. In March, the BBC unearthed dozens of AI-generated fake photos of Black people supporting Trump, a voting demographic courted by both campaigns.

While more than a dozen states have laws penalizing people who use AI to make deceptive videos about politicians, such content went largely unmoderated, exposing gaps in how those laws and social media policies are enforced. The array of software created to debunk AI deepfakes fell short of its promise, leaving a haphazard system of mainstream-media fact-checkers and researchers to flag fake images and audio as they proliferated across social media.

Foreign influence actors used AI up to the closing hours of the election, spreading baseless allegations of voter fraud in battleground states such as Arizona and circulating fake images of world leaders such as Ukrainian President Volodymyr Zelensky urging people to vote for Harris, according to the misinformation tracking organization NewsGuard.

Despite AI's prevalence, however, there was no evidence that malicious activity had a "material impact" on the voting process, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, the federal government's lead agency on election infrastructure security, said in a statement Wednesday.

Researchers identified only a handful of cases in which AI was used to generate disinformation about the voting process, Kate Starbird, co-founder of the University of Washington's Center for an Informed Public, said in a media briefing Wednesday. "For the most part, the rumors we see are usually based on misinterpretations of real evidence rather than fabricated evidence," she said. "That held through on Election Day."

This mirrors trends in elections abroad. AI did not sway elections in Britain or the European Union, according to a September research report by the Alan Turing Institute, Britain's national center for data science and artificial intelligence. Researchers found only 16 confirmed viral instances of AI disinformation or deepfakes during the British general election, and only 11 viral cases in the E.U. and French elections combined.

"There remains no evidence AI has impacted the result of an election," said Sam Stockwell, lead author of the report and research associate at the Alan Turing Institute. "But we remain concerned about the persistent erosion of confidence in what is real and what is fake across our online spaces."

Likewise, the fact that AI didn't produce an election-changing "October surprise" doesn't mean it had no impact on American voters.

Far-right actors created a deluge of AI-generated misinformation on X. It included viral AI-generated images of distraught children in the aftermath of Hurricane Helene, which fueled conspiracy theories and antisemitic attacks on Biden administration officials and others, complicating the response to the disaster. Shortly after Trump was elected, Farid said, deepfake audio surfaced of the president-elect's voice falsely claiming he would kill Canadian Prime Minister Justin Trudeau.

Going forward, Farid said, X will be a powder keg of AI-generated misinformation, with its loose moderation and potential to reach large audiences making it a laboratory for people to quickly test which types of deepfakes and propaganda could go viral.

"Now the ability to create AI bots, large language models, images, audio, video to support all of this nonsense is absolutely poisoning the minds of people who get the majority of their information from social media," Farid said. "It's getting easier and easier to lie to 350 million people in a country where it shouldn't be that easy."

Trump and his allies seized on AI at several points in the cycle, at times prompting backlash - although it's unclear whether the effort ultimately helped or damaged his campaign.

In August, he shared AI-generated images of Taylor Swift fans seeming to endorse him - a move that Swift said prompted her to publicly endorse Harris. That same month, the Republican candidate falsely claimed that photos of Harris greeting a large crowd at a rally in Detroit were AI-generated. That many supporters believed him, experts say, is an example of a phenomenon known as the "liar's dividend," in which public awareness of the possibility of AI fakes allows dishonest actors to cast doubt on truthful claims or genuine images.

In an analysis of 582 political fakes that emerged during the presidential election cycle, Purdue University researchers found that 33 percent were about Trump, while roughly 16 percent focused on Harris and 16 percent on Biden. These included content that cast the candidates in both positive and negative lights.

Roughly 40 percent of these AI fakes were shared for satirical reasons, the data shared by Purdue University researcher Christina Walker showed. About 35 percent were shared by "random users" who had minimal social media followings, while roughly 8 percent were shared by figures who had more than 100,000 followers on the social media platform where they shared it.

These AI-generated memes allowed people to latch onto popular current events and fads - such as Trump supporters seeing a prime example of government overreach in the case of "Peanut the Squirrel," an allegedly illegal pet seized and euthanized right before Election Day - to foster a sense of community and shared identity, said Kaylyn Jackson Schiff, an assistant professor of political science at Purdue University.

AI fakes help voters "develop positive attitudes or an understanding of current events around those deepfakes they are sharing, even if they don't think that the image itself is actually real," she said.

But some of AI's most lasting damage has been in muddying the waters of reality, experts said, causing people to more broadly question what is true.

Researchers at ISD compiled more than a million social media posts involving AI and the election on X, YouTube and Reddit - which together amassed billions of views - and then analyzed a random sample of 300 of them. They found that platforms regularly fail to label or remove AI-generated content even when it has been debunked. They also concluded that users who made claims about whether a given piece of content was AI-generated got it wrong 52 percent of the time.

Interestingly, the researchers found that users far more often saw authentic content as AI-generated than the reverse.

While platforms have sought to bolster their processes to detect false content, said Isabelle Frances-Wright, ISD's director of technology and society, "what we're now really seeing is a crisis when content is true."

Many of the mistaken assessments relied on outdated assumptions about how to spot AI-generated content, with users often overestimating their ability to do so, ISD found. AI detection tools didn't seem to help, with some widely viewed posts using or misrepresenting these tools to draw false conclusions.

On top of that, many of the posts involving the use of AI in the election - 44.8 percent - suggested that the other side was using AI habitually and therefore nothing it said could be trusted. Some users expressed concern that AI was leading them to be suspicious of just about everything.

Sowing that suspicion, which people apply in line with their own beliefs, is a key impact of AI, Frances-Wright said. "It's just giving people one more mechanism they can use to further entrench their own confirmation biases."
