Introduction
Mexico currently lacks a clear roadmap to guide its efforts, protect its population, and foster innovation as it develops and adopts artificial intelligence (AI) technologies. Claudia del Pozo, founder and director of Eon Institute, a think tank specializing in public policy on emerging technologies, says that Mexico has no genuine AI regulation.
Current Legal Framework
While Mexico has laws on data protection, copyright, and non-discrimination that indirectly affect AI applications, there is no comprehensive law or strategy addressing the technology.
The only precedent dates back to 2018, when the administration of Enrique Peña Nieto published a brief “National Digital Strategy” that included an AI section in which Del Pozo participated. The initiative lacked follow-through, however: it set no specific goals, defined no investment amounts, and established no evaluation processes, and it never consolidated into a long-term public policy.
Partial Proposals
In the absence of a federal plan, legislators have presented isolated initiatives. The most recent example is a proposal by Deputy Pablo Emilio García González in Mexico City, focused on regulating the use of biometric data and preventing deepfakes. Although it addresses legitimate concerns about the non-consensual creation of images and voices, the bill has made little progress in the capital’s Congress and lacks the scope to produce a systemic effect.
Del Pozo warns that “we still have a long way to go before knowing where to start with regulations” and emphasizes the need for a national AI strategy that sets priorities, adoption goals, budgets, and clear governance mechanisms.
Lessons from International Experiences
International experience offers multiple approaches. The European Union leads with its AI Act, built on a risk-based model that sorts applications into four tiers: unacceptable (prohibited), high risk, limited risk, and minimal or no risk. Each tier carries proportionate obligations for transparency, conformity assessment, and oversight along the supply chain.
This framework has served as a reference for legislative debates worldwide, though its complexity and the involvement of multiple regulatory actors might exceed Mexico’s coordination capabilities.
In the United States, the approach is more fragmented and sector-specific: there is no federal AI-specific law, although in 2024 the Biden administration published non-binding “AI Responsibility Guidelines.” President Trump, by contrast, rejected those guidelines and threatened to withhold federal funds from states that impose local rules.
Canada was a pioneer, launching its Pan-Canadian AI Strategy in 2017, followed by the United Kingdom (2018) and by France and Germany (2019). Many countries have since announced their own strategies, with public investment plans, talent development programs, regulatory sandboxes, and public-private partnerships. Mexico is already seven years behind that first international announcement.
Risks
The urgency for a national strategy stems not only from competitiveness but also from protecting rights and mitigating harm.
For instance, in the Netherlands, an automated fraud-detection system used to allocate social benefits wrongly accused thousands of families, leading to bankruptcies, suicides, and family separations, because the algorithm discriminated against migrants with dual nationality.
In the United States, the use of facial recognition in surveillance systems has led to wrongful arrests of African Americans, because the models were trained primarily on Caucasian faces and are less accurate for other ethnicities.
Deepfakes used for defamation, electoral fraud, or privacy violations are also proliferating, a concern raised by legislators and digital security experts.
Del Pozo stresses that “good regulation does not hinder innovation; instead, it creates a framework where positive innovation thrives,” citing mandatory seatbelts in cars, which made it possible for vehicles to travel at higher speeds safely.
According to Del Pozo, Mexico should prioritize prohibiting clearly harmful uses (non-consensual explicit deepfakes, discriminatory algorithmic systems) and then advance in phases, using testing or “sandbox” environments to evaluate measures before definitive implementation.
Strategy Construction
Building a national AI strategy involves first agreeing on long-term objectives: how to use AI to improve public services, boost industrial productivity, enhance scientific research, and ensure citizens’ rights.
Secondly, it requires a dedicated budget and a lead authority capable of coordinating across sectors (Economy, Education, Health, the Interior Ministry, and the national transparency institute).
Thirdly, a risk-based regulatory framework should be designed, adapted to Mexico’s size and capabilities, focusing first on critical sectors such as healthcare, justice, and social welfare before expanding to areas of lesser impact.
Lastly, regulatory sandboxes are crucial: controlled environments where the private sector, academia, and authorities can experiment with rules, learn from successes and mistakes, and adjust policies before nationwide deployment. “Mexico should be in the testing phase, but it isn’t,” Del Pozo stated.
Key Questions and Answers
- What is the current state of AI regulation in Mexico? Mexico lacks a comprehensive AI regulation, with only scattered laws indirectly affecting AI applications.
- What international experiences can Mexico learn from? The European Union’s risk-based AI regulation, Canada and the UK’s national AI strategies, and the US’s sector-focused approach offer valuable insights.
- What are the risks associated with insufficient AI regulation? Risks include privacy violations, discrimination, and the spread of deepfakes. The Netherlands’ automated fraud detection system and US facial recognition use in surveillance illustrate these concerns.
- What should Mexico prioritize in its AI strategy? Prioritize prohibiting harmful uses like non-consensual explicit deepfakes and algorithmic discrimination systems. Progress in phases with regulatory sandboxes for testing measures before implementation.
- What are the key components of a national AI strategy? Long-term objectives, specific budget allocation, a leading regulatory authority, risk-based framework, and regulatory sandboxes for experimentation.