How AI companies are reckoning with elections - The Verge

Cath Virginia / The Verge | Photos from Getty Images
The US is heading into its first presidential election since generative AI tools have gone mainstream. And the companies offering these tools — like Google, OpenAI, and Microsoft — have each made announcements about how they plan to handle the months leading up to it.
This election season, we’ve already seen AI-generated images in ads and attempts to mislead voters with voice cloning. The potential harms from AI chatbots aren’t as visible in the public eye — yet, anyway. But chatbots are known to confidently provide made-up facts, including in responses to good-faith questions about basic voting information. In a high-stakes election, that could be disastrous.
One plausible solution is to avoid election-related queries altogether. In December, Google announced that Gemini would simply refuse to answer election-related questions in the US, referring users to Google Search instead. Google spokesperson Christa Muldoon confirmed to The Verge via email that the change is now rolling out globally. (Of course, that depends on the quality of Google Search results — something the company has been working on with an eye toward AI spam.) Muldoon said Google has “no plans” to lift these restrictions, which she said also “apply to all queries and outputs” generated by Gemini, not just text.
Earlier this year, OpenAI said that ChatGPT would start referring users to CanIVote.org, generally considered one of the best online resources for local voting information. The company’s updated policy forbids impersonating candidates or local governments using ChatGPT, and it likewise prohibits using its tools for campaigning, lobbying, discouraging voting, or otherwise misrepresenting the voting process.
In a statement emailed to The Verge, Aravind Srinivas, CEO of the AI search company Perplexity, said Perplexity’s algorithms prioritize “reliable and reputable sources like news outlets” and that it always provides links so users can verify its output.
In an email to The Verge, Microsoft representative Brian Gluckman says the company has rolled out updates to address concerns from a report last year about false election information provided by Copilot (formerly known as Bing). His email also pointed to the company’s blog post about combating abusive AI content as an example of the measures it’s applying, with provenance technology, bans for users who break the rules, and more.
All of these companies’ responses (maybe Google’s most of all) are very different from how they’ve tended to approach elections with their other products. Google has used (and continues to use) Associated Press partnerships to bring factual election information to the top of search results and has tried to counter false claims about mail-in voting by using labels on YouTube. Other companies have made similar efforts — see Facebook’s voter registration links and Twitter’s anti-misinformation banner.
Yet major events like the US presidential election are a real test of whether AI chatbots can actually be a useful shortcut to legitimate information. To get an idea of their usefulness, I asked a few chatbots some questions about voting in Texas. OpenAI’s ChatGPT 4 correctly listed the seven forms of valid voter ID, and it also identified that the next significant election is the primary runoff on May 28th. Perplexity AI answered those questions correctly as well, linking multiple sources at the top. Copilot got its answers right and even went one better, telling me what my options were if I didn’t have any of the seven forms of ID. (ChatGPT also coughed up this addendum on a second try.)
Gemini just referred me to Google Search, which got me the right answers about ID, but when I asked for the date of the next election, an out-of-date box at the top pointed me to the March 5th primary.
Many of the companies working on AI have made various commitments to prevent or mitigate the intentional misuse of their products. Microsoft says it will work with candidates and political parties to curtail election misinformation. The company has also started releasing what it says will be regular reports on foreign influences in key elections — its first such threat analysis came in November.
Google says it will digitally watermark images created with its products using DeepMind’s SynthID. OpenAI and Microsoft have both announced that they would use the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials to denote AI-generated images with a CR symbol. But each company has said that these approaches aren’t enough. One way Microsoft plans to account for that is through its website that lets political candidates report deepfakes.
Stability AI, which owns the Stable Diffusion image generator, updated its policies recently to ban using its product for “fraud or the creation or promotion of disinformation.” Midjourney told Reuters last week that “updates related specifically to the upcoming U.S. election are coming soon.” Its image generator performed the worst when it came to making misleading images, according to a Center for Countering Digital Hate report published last week.
Meta announced in November of last year that it would require political advertisers to disclose if they used “AI or other digital techniques” to create ads published on its platforms. The company has also banned the use of its generative AI tools by political campaigns and groups.
The “Seven Principle Goals” of the AI Elections accord. Image: AI Elections accord
Several companies, including all of the ones above, signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.
In January, two companies in Texas cloned President Biden’s voice to discourage voting in the New Hampshire primary. It won’t be the last time generative AI makes an unwanted appearance in this election cycle. As the 2024 race heats up, we’ll surely see these companies tested on the safeguards they’ve built and the commitments they’ve made.
Update March 19th, 2024, 7:03PM ET: Added comment from Microsoft and additional context around Google’s Search and Associated Press partnership.