AI in Elections and Democracy: Claudia Sheinbaum Announces Reforms to Regulate AI Use in Campaigns

Web Editor

January 23, 2026


Introduction

Claudia Sheinbaum, the President of Mexico, announced that the upcoming electoral reform will include rules for using Artificial Intelligence (AI) in campaigns and penalties for its misuse. She stated that the initiative will also aim to reduce election costs. Her statement foreshadows a complex debate on how to safeguard democratic principles and voting integrity without stifling innovation.

Background and Global Context

The European Union has been at the forefront in addressing misinformation and fake news through a code of conduct and digital laws that mandate transparency from social media platforms. These measures include the immediate removal of manipulated content intended to confuse or deceive voters. When those measures were drafted, however, AI had not yet reached its current scale, with multiple language models, applications, and agents in widespread use.

Some countries have begun to tackle this issue. In the United States, state laws emerged in 2024 to prohibit or restrict deepfakes during election periods. However, a fragmented response in a federal system like the US creates gray areas that complicate the implementation of a general norm.

Other countries have chosen to protect individuals’ images and voices, like Denmark, focusing on copyright issues to curb deepfakes. South Korea has strengthened penalties for manipulated sexual content. These measures are not part of an electoral law but illustrate the trend of punishing harmful uses of AI against individuals and their image.

The Rise of AI-Generated Misinformation

Generative AI models can rapidly produce large volumes of realistic but false text, audio, and images. Combined with misleading or out-of-context messaging, they pose a significant threat to citizens' ability to receive accurate, objective, and timely information during elections.

A convincing deepfake can cast doubt on a candidate, alter a narrative, or prevent voters from trusting the information they receive. Generating thousands of micro-targeted content pieces with false messages exacerbates the damage. The possibility of combining bots, AI, and purchased digital spaces to disseminate altered content increases the risk of mass manipulation. The speed and scale of AI can erode democratic defenses and citizens’ free choice.

Regulating AI in Elections: Balancing Protection and Innovation

How can we regulate AI in elections without harming freedom of expression, the press, or innovation? The law must clearly define prohibited behaviors while acknowledging that technology and political communication strategies are constantly evolving.

The law should focus sanctions on intentional acts aiming to deceive or invalidate votes during elections. However, each electoral process is unique and unprecedented. Who will be the subjects of regulation and penalties? Parties and candidates spreading false content affecting the process should face measures such as precautionary actions and fines.

Should media, journalists, social networks, influencers, or AI content generation platforms also be included? Some practices don’t limit AI use but require origin marking or metadata to trace content, facilitating accountability and control.
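The origin-marking idea mentioned above can be sketched in code. The following is a minimal, illustrative example of attaching provenance metadata to a piece of synthetic content and later verifying it; the field names and the `make_provenance_record` / `verify_provenance` helpers are assumptions for illustration, not part of any real standard (actual standards such as C2PA are far more elaborate).

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record for synthetic content.

    Field names are illustrative, not taken from any standard.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "generator": generator,                          # which tool produced it
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,                               # explicit AI-generated flag
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and declares it synthetic."""
    return (
        record.get("sha256") == hashlib.sha256(content).hexdigest()
        and record.get("synthetic") is True
    )

ad = b"campaign video bytes..."
record = make_provenance_record(ad, generator="example-image-model")
print(verify_provenance(ad, record))               # True: content untouched
print(verify_provenance(b"edited bytes", record))  # False: content was altered
```

The hash ties the label to one specific file, so any edit after generation breaks verification; this is the basic mechanism that makes metadata-based traceability auditable rather than a mere honor system.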

Another trend is placing part of the responsibility and solution on technology platforms and companies, obligated to detect and label synthetic content, offer reporting channels, and maintain records of political ad contracting.
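A record of political ad contracting becomes useful for oversight only if it cannot be quietly rewritten. Below is a hedged sketch of an append-only register in which each entry is hash-chained to the previous one, so tampering with any record invalidates the chain; the `AdRegister` class and its field names are hypothetical, not drawn from any real platform API or electoral regulation.

```python
import hashlib
import json

class AdRegister:
    """A minimal append-only register of political ad contracts.

    Each entry stores the hash of the previous entry, so later edits
    to any record break verification of the whole chain.
    """

    def __init__(self):
        self.entries = []

    def add(self, advertiser: str, amount_mxn: float, labeled_synthetic: bool) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "advertiser": advertiser,
            "amount_mxn": amount_mxn,
            "labeled_synthetic": labeled_synthetic,  # was the ad marked as AI-made?
            "prev_hash": prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any mismatch means the register was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

register = AdRegister()
register.add("Party A", 50000.0, labeled_synthetic=True)
register.add("Party B", 12000.0, labeled_synthetic=False)
print(register.verify())  # True: chain is intact
register.entries[0]["amount_mxn"] = 1.0
print(register.verify())  # False: tampering detected
```

This is the same design choice behind audit logs generally: the regulator does not need to trust the platform's database, only to verify the chain.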

Regulation allows audits and controlled access for electoral authorities to verify practices. It’s evident that journalism and everyday AI use need exceptions. A news outlet using AI to generate images or transcribe statements should label those contents, but their informative function and freedom of expression must not be hindered.

Punishing the AI tools themselves is absurd. The law should sanction the behavior and the actor when intent to deceive or confuse is proven. Should a platform be fined for failing to label a manipulated video or synthetic audio? Should a candidate be fined for disseminating such content without identification?

Criminalizing tool use harms freedom of expression and innovation. AI evolves too rapidly; rigid legislation may become obsolete or encourage non-compliant practices. Organizations like the International Foundation for Electoral Systems (IFES) recommend training and technological literacy for electoral authorities and pilot tests before prohibitive regulations.

There are risks in legislating prematurely. Vague definitions may enable political censorship, while excessive sanctions could discourage research and journalism. A law intended to stop deepfakes might inadvertently penalize legitimate satire or critical messages. The law must contain precise definitions, freedom of expression and press protection, and swift appeal processes.

I recommend a minimum set of principles: transparency, proportionality, accountability, harm prevention, protection of freedom of expression and the press, and independent technical oversight. These principles enable rules that penalize deception without banning the tools, ensuring that democracy remains grounded in citizens' free choice.