Introduction
On Thursday last week, the Digital Transformation and Telecommunications Agency (ATDT) and the Secretariat of Science, Humanities, Technology, and Innovation (Secihti) unveiled the “Declaration of Ethics and Best Practices for AI Use and Development.” In their joint statement, they described this declaration as a “roadmap” to guide public policy decisions regarding AI use and specified that it is based on the “Chapultepec Principles,” also developed jointly.
Controversial Principles
Some of the principles included in the official communication have raised eyebrows, both because they contradict the current practices of the very government that issued them and because their ambiguous language seems to anticipate restrictive regulation.
Principle 10: Data as a Public Good
Principle number 10 states, “Data is a public good that must be cared for responsibly.” This principle is controversial, especially considering that the day after the declaration’s presentation, the group known as Chronus reportedly hacked and leaked 2.3 TB of confidential information from 25 public entities, including IMSS Bienestar and the SAT, exposing the data of 36.5 million people.
The ATDT issued an information card that did not deny the leak outright but minimized its risks, asserting that no sensitive data had been published. However, not all data are public goods, and judging by its current actions, the government is clearly not caring for these data responsibly.
Principle 3: Explainable Decisions
Principle number 3 asserts, “If a decision cannot be explained, it should not be automated.” While automated decision-making may carry risks in the abstract, it is inconsistent for the government to promise protection from such decisions while it has itself used AI since 2024, according to Gari Flores, Administrator General of Revenue, to differentiate between “good” and “bad” taxpayers.
If the government’s automated revenue decisions are ethical, then AI decisions delegated by private parties should be as well. Conversely, if the Chapultepec Principles are to guide public policy on AI use, the SAT’s current oversight strategies are inconsistent with them and would need to be adapted.
Ambiguous Principles
Other principles are formulated so vaguely that they leave room for the government to regulate AI restrictively. Principles 4 and 5 state that “AI is best governed when decisions are collective” and that it is “only valuable if it generates well-being for people.” Principle number 8 indicates that AI development requires strengthening “knowledge in the country.”
However, given Morena’s neolinguistic history, terms like “well-being” and “collective decisions” can be dangerous for human rights. Furthermore, at her morning press conference on January 13, the President said that AI regulation pertains to “information control,” which lies with “companies owning the platform,” and aims for “information democratization.” The ambiguity of the Chapultepec Principles could thus open the door to another official attempt to control public narratives.
Key Questions and Answers
- Q: What is the “Declaration of Ethics and Best Practices for AI Use and Development”? A: It’s a roadmap created by the ATDT and Secihti, based on the Chapultepec Principles, to guide public policy decisions regarding AI use.
- Q: Why are some principles controversial? A: Certain principles, such as data as a public good and explainable decisions, contradict the government’s current practices and appear inconsistent with its own use of AI.
- Q: How might ambiguous principles impact AI regulation? A: Ambiguous language in the Chapultepec Principles could enable restrictive AI regulation and potentially control public narratives.