This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.
LONDON — Sitting in an unseasonably warm park in the British capital, Hannah O’Rourke cuts an unusual figure as an artificial intelligence advocate.
The thirtysomething activist has spent much of her early career championing greater rights for workers and even lobbied the British government on behalf of students during the Covid-19 pandemic.
But ahead of the United Kingdom’s general election — now expected in the fall — O’Rourke is channeling her inner tech bro.
Through monthly hackathons, O’Rourke and other progressive computer scientists at CampaignLab, a nonprofit she co-founded, whipped up an AI-powered chatbot. Designed with different personalities and varying emotions, it helps volunteers learn how best to interact with potentially skeptical voters on the campaign trail.
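CampaignLab has not published the chatbot’s internals, but a common way to build this kind of canvassing-practice bot is to wrap a chat-completion API in persona-specific system prompts. Here is a minimal sketch in Python assuming OpenAI’s client library; the persona text and model choice are illustrative assumptions, not CampaignLab’s actual prompts.

```python
# Minimal sketch of a persona-driven canvassing-practice bot.
# Assumes OpenAI's Python client (`pip install openai`) and an
# OPENAI_API_KEY in the environment. The personas are hypothetical,
# not CampaignLab's actual prompts.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "skeptical": (
        "You are a voter who distrusts politicians of every party. "
        "Answer a door-knocking volunteer tersely and push back on vague claims."
    ),
    "undecided": (
        "You are a voter torn between two parties. Ask the volunteer "
        "pointed questions about policy before committing to anything."
    ),
}

def voter_reply(persona: str, conversation: list[dict]) -> str:
    """Return the simulated voter's next line, staying in character."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": PERSONAS[persona]}, *conversation],
    )
    return response.choices[0].message.content

# A volunteer rehearses an opening line against the "skeptical" persona.
print(voter_reply("skeptical", [
    {"role": "user", "content": "Hi! Do you have a minute to talk about the election?"},
]))
```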
Another crowdsourced project relied on AI tools to track political discussions on TikTok. A third used off-the-shelf AI technology to help confused voters decipher the labyrinthine website of the country’s election commission.
“There are some interesting, creative AI solutions that can help humans do better at things like campaigning,” O’Rourke said over coffee last month on a sweltering spring day in South London — with kids laughing at a nearby playground and commuters grabbing a drink on their way home from work.
“Ultimately, people who want to do bad things will be using AI,” she added. “So the question is: How do we, as people who want to do good things, use this tool in a way that is in accordance with what we think is right?”
O’Rourke is not alone. Amid the hype around AI — which went into overdrive in late 2022 when OpenAI released ChatGPT to the world — campaigners, academics and private companies have quickly jumped onto the bandwagon of tech’s next big thing.
That includes campaigns seeking an advantage in the bumper crop of elections planned this year — from the votes in Bangladesh and Pakistan earlier in 2024 to the European Parliament election in June to the November presidential election in the United States. Others want to retrofit AI tools to detect potential electoral harm.
Just like a decade ago, when campaigns rode the incoming social media wave to talk directly to voters, political operatives in the current age of AI are turning to chatbots, automated voter-targeting tools and other AI-powered wizardry to eke out a potential edge at the polls in 2024.
In Pakistan, jailed former leader Imran Khan ran a national election campaign via speeches and videos powered by generative AI tools. In Indonesia, ex-military chief — and alleged war criminal — Prabowo Subianto created an AI-generated cartoon of himself as part of a rebrand en route to winning the country’s February presidential election.
In India, incumbent Prime Minister Narendra Modi turned to AI to automatically translate his stump speeches into multiple local languages during the country’s ongoing vote. In Belarus, the country’s opposition backed an AI avatar — pretending to be a candidate in the country’s February election — that would answer people’s political questions without fear of imprisonment.
Not all of this will pan out.
Commercial vendors searching for new markets are eagerly pitching their untested wares to tech-illiterate campaigns, many of which are enamored of promises about what AI can offer them. Other firms have rebranded long-standing campaign practices — like targeting people on social media based on their personal interests or using data to decipher voters’ intentions — as newfangled AI services in the hopes of striking it rich.
Even campaigners like O’Rourke admit that, as politicians rush to keep up with the latest trends, they must be careful not to rely too heavily on a technology that may overpromise or underdeliver for everyday citizens.
“Every vendor is always trying to find some edge, the next new thing,” said Katie Harbath, a former U.S. Republican aide who later, while working at Facebook, helped educate lawmakers on the social media network’s campaigning potential during the earlier election-engulfing tech craze.
“The problem for campaigns is that they don’t know who’s delivering snake oil or who’s got the real deal,” she added.
Randy Saaf and Octavio Herrera have a basic pitch whenever they try to sell their software: They create an AI-powered clone of the potential customer’s voice.
The two techies got their start in the early 2000s, helping music labels stop services like Napster, the file-sharing network made famous in the dot-com era, from pirating their content. But when ChatGPT took the public’s imagination by storm in late 2022, the California-based team smelled an opportunity.
Within months, they had built a tool known as Wolfsbane AI, which embeds digital markers into audio and video content that are designed to stop the material from being cloned via artificial intelligence. Users can upload content to the startup’s platform to protect audio and video clips from manipulation.
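Wolfsbane AI’s actual technique is proprietary and not described in detail. As a rough illustration of how an imperceptible, key-dependent signal can be hidden in audio at all, here is a minimal sketch of a much simpler additive marker in Python with NumPy; the key, strength and detection scheme are all assumptions for illustration, not the company’s method.

```python
# Illustrative only: Wolfsbane AI's method is proprietary. This sketch shows
# a far simpler related idea, an imperceptible key-seeded audio marker that
# can later be verified. Key and strength values are arbitrary assumptions.
import numpy as np

def embed_marker(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Mix a faint, key-seeded noise pattern into float samples in [-1, 1]."""
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    return np.clip(audio + strength * pattern, -1.0, 1.0)

def marker_score(audio: np.ndarray, key: int, strength: float = 0.005) -> float:
    """Correlate against the key's pattern: near 1.0 if marked, near 0.0 if not."""
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    return float(np.mean(audio * pattern)) / strength

# Demo on three seconds of synthetic 16 kHz "speech".
t = np.arange(3 * 16_000) / 16_000
clean = 0.1 * np.sin(2 * np.pi * 220 * t)
marked = embed_marker(clean, key=2024)
print(f"marked clip:   {marker_score(marked, key=2024):.2f}")  # close to 1.0
print(f"unmarked clip: {marker_score(clean, key=2024):.2f}")   # close to 0.0
```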
“We created it around the entertainment industry, but quickly realized this is a major security problem for deepfakes,” Saaf told POLITICO via a Zoom link from his home office in Los Angeles. “That’s when we started getting interested in reaching out to political figures.”
So far, Saaf and Herrera have signed up just one lawmaker, U.S. Democratic Representative Eric Swalwell — a Californian who sits on the House of Representatives’ Homeland Security and Judiciary committees.
In December, the duo heard him speaking about the threat AI posed to election security — and cold-called his office the next day. By early April, their company had started protecting Swalwell’s social media content from the threat of AI-generated deepfakes. (Neither Saaf nor Herrera would reveal the AI service’s cost.)
Herrera, Saaf’s co-founder, is realistic about the uphill challenge they face — even if there are potentially billions of dollars already earmarked for campaign funding in the U.S. alone ahead of November’s election. The pitch, he added, often includes mocking up a quick clone of a lawmaker’s voice, demonstrating what the technology can do to rebuff such efforts, and then quickly turning to more tech-literate staffers to figure out the specifics.
“That can take months,” he conceded, “and, unfortunately, the election is coming quickly.”
On the other side of the Atlantic, Vilnius-based Simona Vasytė-Kudakauskė has a similar problem.
As head of Perfection42, a boutique consultancy that uses generative AI models to create thousands of pieces of content for advertising agencies and brands worldwide, Vasytė-Kudakauskė also wants to tap into the technology’s potential for political campaigns.
Where many have focused on the risks — the deepfakes of politicians and the targeting of voters via algorithms — she argues that such tactics can also be harnessed to better reach would-be voters. In that world, Vasytė-Kudakauskė adds, AI can automatically translate digital campaign material into multiple languages; quickly generate political images for pennies on the dollar; and even tailor specific messages on social media to lure undecided voters.
“We work with some commercial agencies to create visual content, and elections are also just an advertisement campaign,” she said via a Google Meet video conference call earlier this month. “It’s the same, but in a different way for a different purpose.”
That may sound plausible — in theory. But the reality of political campaigning — especially ahead of the upcoming European Parliament election in June, when the bloc’s 27 member countries will hold separate, simultaneous votes — is completely different.
Despite Perfection42’s pitch, Vasytė-Kudakauskė admitted that, with only a little more than two weeks left until the EU election, her consultancy had yet to sign up a single campaign for its AI-powered offering.
POLITICO’s discussions with multiple other agencies across the EU also failed to uncover specific campaigns that had used outside consultants to supplement traditional campaigning tactics with AI — although several candidates had experimented, internally, with generating content via tools like ChatGPT.
“You can personalize content for your users, not just for bad influence, but also for good influence,” said the Lithuanian, quickly pivoting when POLITICO questioned why her company — despite its effort to portray the positives of AI for campaigning — had yet to find any takers. “For some reason, people aren’t doing that. They are losing the war because they are not playing on the same ground.”
Oren Etzioni has the ultimate flex when describing how his AI-focused nonprofit got off the ground.
The American tech entrepreneur and AI academic was at a meeting in San Francisco last summer with a notable headliner: U.S. President Joe Biden.
During a discussion with other experts, Etzioni told POLITICO, each went around the room to describe a so-called moonshot project, or an exciting idea they were working on. As he awaited his turn, the researcher realized that — in this year of global elections — there was no good way for the press or the wider public to quickly detect deepfakes.
“There really wasn’t an adequate tool available to the press, available to fact-checkers, and to the public to assess when you see an image, video or audio, whether it’s a deepfake or not,” said Etzioni. “We set out to build one.”
Commercial providers like Reality Defender or Sensity AI already charge hefty fees for such detection. But within months, Etzioni had tapped Garrett Camp, a co-founder of Uber, for funding and created TrueMedia.org.
The nonprofit splices together the detection tools of its fee-charging rivals and its own in-house methods to give users a percentage score to gauge how likely it is that an image, video or audio clip is fake. People can insert a web link to suspicious content or upload material directly. TrueMedia then rates the probability that something is AI-generated — though Etzioni admits the findings aren’t perfect.
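TrueMedia.org has not detailed how it combines those signals, but “splicing together” several detectors into one percentage typically means some form of score ensembling. Here is a minimal sketch in Python under that assumption; the detector names, weights and scores below are entirely hypothetical.

```python
# Minimal sketch of weighted score ensembling across deepfake detectors.
# The detectors below are hypothetical stand-ins; real ones would call
# out to commercial APIs or in-house models.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detector:
    name: str
    weight: float
    score: Callable[[str], float]  # probability (0.0-1.0) that media is AI-generated

def ensemble_score(media_path: str, detectors: list[Detector]) -> float:
    """Combine several detectors' outputs into one weighted probability."""
    total = sum(d.weight for d in detectors)
    return sum(d.weight * d.score(media_path) for d in detectors) / total

detectors = [
    Detector("vendor_a", weight=2.0, score=lambda path: 0.91),
    Detector("vendor_b", weight=1.0, score=lambda path: 0.78),
    Detector("in_house", weight=1.5, score=lambda path: 0.85),
]

print(f"{ensemble_score('clip.mp4', detectors):.0%} likely AI-generated")  # 86%
```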
When POLITICO tested the free service, for instance, the system caught about 85 percent of the deepfakes it was given.
Etzioni said that, as of May, thousands of people — from academics and journalists to election officials and U.S. federal government staffers — were using his product. He would not say how much it cost to run the detection tool. But costs, according to the AI expert, had come down as TrueMedia’s AI systems were trained on ever-larger numbers of queries.
“It is, by design — now and in the future — a money-losing proposition,” he admitted — again declining to comment on his plans for the service after the U.S.’s November election. “Our idea is to be a public service, and public services have a cost.”
For Kate Dommett, a professor of digital politics at the University of Sheffield in the U.K., the rise of AI tools dedicated to this year’s election cycle — from commercial vendors and nonprofits alike — represents the starting gun, not the finishing line, in the technology’s evolution.
Dommett is an expert in how political campaigns worldwide have tapped into the latest tech advances. Amid the AI hype, she remains skeptical that the current cohort of services, especially those offering an inside track into reaching people on social media via complex algorithms and so-called data analytics, is anything more than smoke and mirrors.
It’s more consultants repurposing existing services with an AI label, she added, than something truly revolutionary.
“It feels so early, it’s really hard to know what’s really going on,” said Dommett. “Many of these tools are quite glitchy. I just don’t think we’re at the point, yet, where we can truly trust them to actually do a good job.”
This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.