Technology's role in swaying voters during the 2016 and 2020 elections is well documented. The upcoming elections, however, may see far deeper involvement of AI, raising concerns about voter manipulation and the undermining of the electoral process.
Generative AI, a form of artificial intelligence that produces images, text, and other media, is at the heart of these concerns. The technology learns from raw data and responds to user prompts to create new content. While some candidates may use AI simply to cut campaign costs, others could exploit it for more sinister purposes.
AI can be employed to identify ineligible voters on registries and to match ballot signatures. But it could also suppress voters, inadvertently or intentionally, by flagging people who are in fact eligible. Chatbots and recommendation algorithms can feed voters incorrect information, potentially shifting their views of candidates. In the most severe scenario, AI could amplify contentious issues to the point of inciting violence.
Despite these risks, tech companies are not investing sufficiently in election integrity initiatives. AI companies often lack the connections and funding needed to manage the risks of their tools being used in elections. This lack of investment results in diminished human oversight of AI-generated content and how it is used.
The U.S. Constitution's protection of free speech can clash with efforts to prevent and counter misinformation during election season. The potential for classic mud-slinging between candidates is high, and foreign interference is also a concern: countries such as China, Iran, and Russia have been found attempting to use AI-generated content to manipulate U.S. voters.
Social media platforms, which have revolutionized election campaigns, have implemented measures to handle election information and misinformation. YouTube, for instance, updated its policy to stop removing content that advances false claims of widespread fraud, errors, or glitches in the 2020 and other past U.S. presidential elections. Google, YouTube's parent company, requires election advertisers to clearly disclose when their ads include realistic synthetic content that has been digitally altered or generated, including by AI tools.
Meta, the parent company of Facebook, Instagram, and Threads, will label images and ads created with AI to help users distinguish between real and synthetic content. This measure aims to prevent the spread of false or harmful information, especially during elections.
Several states, including California, Michigan, Minnesota, Texas, and Washington, have enacted laws regulating the use of political deepfakes. Despite these measures, the potential for AI misuse remains a concern, particularly given its potential impact on democracy.
However, awareness of potential misuse could stimulate critical thinking among voters, encouraging them to scrutinize candidates, issues, and information more closely. Rather than passively absorbing what they encounter online or offline, voters may be prompted to conduct their own research.
The decentralized nature of America's election system could also limit the misuse of AI, as votes are managed at the local level. Ultimately, each vote still carries weight.