Introduction to Facial Recognition Technology in Mexico
In Mexico, thousands of people are recorded, analyzed, and identified every day by facial recognition systems (TRF, by its Spanish acronym) installed in streets, airports, and stadiums, according to the report “No nos vean la cara” by the Digital Rights Defense Network (R3D). The technology is advancing rapidly without transparency or specific regulation, with serious implications for human rights.
Flawed Technology
Facial recognition technology, a branch of artificial intelligence that compares biometric facial patterns to identify or verify individuals, is promoted as a security tool. However, the report reveals multiple technical deficiencies in this technology.
- Common errors include false positives (incorrectly identifying someone as suspicious) and false negatives (failing to recognize individuals who should be identified); a sketch after this list makes the two error types concrete.
- These errors are more common for people with darker skin, women, and older adults.
- A study by the US National Institute of Standards and Technology (NIST), cited in the report, found that algorithms commit between 10 and 100 times more errors when identifying African American or Asian faces than when identifying Caucasian faces.
- Biased databases, overfitting during model training, and external factors like lighting or facial angle contribute to inaccuracy. Despite this, Mexican authorities continue deploying these systems without controls or independent evaluations.
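To make these error types concrete, here is a minimal, hypothetical sketch of the verification step at the core of such systems: a model reduces each face to a numeric vector (an embedding), and two faces are declared a match when their vectors are similar enough. The embeddings, the 0.6 threshold, and the noise level below are illustrative assumptions, not values from any deployed system; the point is that any threshold trades false positives against false negatives.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when similarity clears the threshold.

    Too low a threshold lets strangers match (false positives);
    too high rejects the same person photographed again (false negatives).
    """
    return cosine_similarity(probe, reference) >= threshold

# Toy 128-dimensional embeddings standing in for a real model's output.
rng = np.random.default_rng(42)
person_a = rng.normal(size=128)
# Same face, new capture: lighting and angle perturb the embedding.
person_a_again = person_a + rng.normal(scale=0.3, size=128)
person_b = rng.normal(size=128)  # a different person entirely

print(verify(person_a_again, person_a))  # True here; False would be a false negative
print(verify(person_b, person_a))        # False here; True would be a false positive
```

Raising the threshold suppresses false positives at the cost of more false negatives, and vice versa; no setting eliminates both, which is why factors like lighting, facial angle, and unrepresentative training data translate directly into misidentifications.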
Public Spaces Under Surveillance
The R3D document warns about the normalization of facial recognition technology use in streets, plazas, and public buildings. Although Mexican legislation requires explicit consent for collecting biometric data, cameras capture faces remotely and without notification.
“People have no opportunity to decide whether they want their biometric data collected,” the report warns.
This covert surveillance infringes on the right to privacy and discourages the exercise of other freedoms, such as protest and peaceful assembly, according to R3D.
Presumption of Innocence
One of the report’s sharpest criticisms is that facial recognition technology treats the entire population as potentially guilty.
“TRF systems treat every person in public spaces as a potential culprit,” the report accuses.
This undermines the presumption of innocence, since surveillance is conducted without individualized justification or a judicial order. R3D notes that in Mexico there is no evidence that biometric systems improve public safety, while the risks of misuse, discrimination, and technical failure abound.
Discrimination
The report also documents how facial recognition technology exacerbates existing inequalities. Indigenous people, Afro-descendants, and LGBTTTIQ+ community members are more likely to be misidentified by algorithms.
This is due not only to biases in system design but also to the lack of representative data in training datasets. The problem is not merely technical but structural: facial recognition technologies perpetuate and reinforce patterns of social exclusion.
Indirect discrimination, where a seemingly neutral measure has a disproportionate effect on certain groups, is one of the main concerns highlighted in the report.
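A small, hypothetical simulation shows how this works in practice: one “neutral” similarity threshold is applied to everyone, but a group whose faces the model encodes less reliably produces higher impostor scores and is therefore wrongly matched far more often. The score distributions and threshold below are invented for illustration; only the structure of the argument comes from the report.

```python
import numpy as np

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Fraction of comparisons between *different* people wrongly accepted."""
    return float(np.mean(impostor_scores >= threshold))

rng = np.random.default_rng(7)

# Hypothetical impostor similarity scores for two demographic groups.
# Group B's scores sit higher, mimicking a model trained on data in
# which that group is underrepresented.
group_a = rng.normal(loc=0.30, scale=0.10, size=100_000)
group_b = rng.normal(loc=0.45, scale=0.10, size=100_000)

threshold = 0.60  # the same "neutral" rule for everyone

fmr_a = false_match_rate(group_a, threshold)
fmr_b = false_match_rate(group_b, threshold)
print(f"Group A false match rate: {fmr_a:.4%}")
print(f"Group B false match rate: {fmr_b:.4%}")
print(f"Group B is wrongly matched ~{fmr_b / fmr_a:.0f}x more often")
```

With these invented numbers the disparity lands in the tens, which happens to fall within the 10-to-100-times range the NIST study reports: the rule is identical for everyone, yet the burden of misidentification is not.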
Lack of Regulation
One of the most concerning findings in the report is the absence of specific regulation on facial recognition technology use in Mexico.
Although general data protection laws exist, they do not address the particular risks of facial recognition. There are also no clear mechanisms for transparency or accountability.
Authorities install cameras, contract technological services, and collect data without consulting citizens or explaining their objectives.
“TRF use has expanded without democratic discussion or effectiveness evaluation,” R3D asserts.
Technosolutionism
The report criticizes the technosolutionist narrative that presents technology as a magical solution to security problems. This logic has led state and local governments to invest in surveillance systems without considering collateral damage.
Instead of preventing crimes, mass surveillance can discourage civic participation and foster fear.
“Knowing that a camera can identify us at any moment changes our behavior in public spaces,” the document warns.
Moreover, centralizing biometric databases increases the risk of identity theft, leaks, and espionage. Once biometric information is stolen, it cannot be revoked or replaced.
Faced with this scenario, the report proposes a moratorium on the use of facial recognition technology in public spaces until robust regulation guaranteeing respect for human rights is in place.
- The report also calls for clear limits on the use of biometric data, transparency in government contracts, and independent audits of these systems.
- R3D proposes fostering less invasive technologies with a human rights focus and opening a public debate on the risks of facial recognition and the state’s role in protecting privacy.