The Double-Edged Sword of AI in Recruitment
While artificial intelligence (AI) can expedite the hiring process by identifying talent more efficiently, it may also inadvertently perpetuate human biases and screen out qualified candidates, particularly women. The very humans who train these AI systems can embed their own unconscious biases into the algorithms, leading to discriminatory outcomes.
AI Amplifies Existing Biases
According to Buk’s HR 2026 report, AI can inadvertently amplify biases and discriminate, as trainers may unknowingly incorporate their own prejudices into the data used to develop these systems. In Europe, up to 60% of workers fear that AI-driven performance or productivity assessments may lack transparency and impartiality.
Zinnya del Villar, Director of Data, Technology and Innovation at Data-Pop Alliance, recalled in an interview with UN Women the 2018 Amazon case, in which an experimental recruiting system penalized women's resumes because it had been trained on historical applications that came mostly from men. This example highlights how AI can mirror existing biases if not carefully designed and monitored.
Gender Gaps Can Be Perpetuated by AI
Jacinta Girardi, Research Lead at Buk, acknowledges that human recruiters carry their own biases but warns that AI is not immune either. "If I have a bad day and review resumes, it will likely affect my decisions," she admits. Still, she sees AI as a supportive tool when used responsibly.
Del Villar emphasizes that the key lies in recognizing and addressing biases within our own selection processes. She recalls a case where algorithms were implemented in hiring but consistently favored male candidates, reflecting the company’s existing issues rather than a flaw in the technology.
Responsible Use of AI for Recruitment
In Latin America, the risk of AI in recruitment reproducing gender, age, and socioeconomic discrimination patterns is high, affecting women and informal sector workers disproportionately. Only 18% of companies using AI in the region have formal mechanisms for ethical review or employee participation in decision-making.
However, Buk’s report suggests that ethical AI implementation can mitigate traditional human biases in processes like recruitment and performance evaluation. It recommends ensuring adherence to ethical and regulatory standards through ethics committees, bias audits, or internal policies on responsible use, built on four pillars:
- Inclusive Data Sets: Use diverse and representative datasets to minimize biases.
- Transparency: Ensure algorithm transparency to understand decision-making processes.
- Diversity and Inclusion: Promote diversity among AI researchers and developers.
- Solid Ethical Frameworks: Implement robust ethical guidelines.
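The bias-audit recommendation above can be illustrated with a minimal sketch. A common heuristic for flagging adverse impact is the "four-fifths rule": if one group's selection rate falls below 80% of the most favored group's rate, the process deserves scrutiny. The function names and numbers below are hypothetical, not drawn from Buk's report.

```python
# Minimal sketch of a bias audit using the four-fifths (disparate impact) rule.
# All data here is hypothetical and for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screening step."""
    return selected / applicants

def disparate_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate.
    Values below 0.8 are a common red flag for adverse impact."""
    return rate_group / rate_reference

# Hypothetical outcomes from an AI resume filter
women_rate = selection_rate(selected=30, applicants=100)  # 0.30
men_rate = selection_rate(selected=50, applicants=100)    # 0.50

ratio = disparate_impact_ratio(women_rate, men_rate)      # 0.60
if ratio < 0.8:
    print(f"Possible adverse impact: ratio = {ratio:.2f}")
```

A check like this is only a starting point; a real audit would also examine the training data, feature choices, and outcomes across intersecting groups, as the report's call for diverse datasets and transparency implies.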
In a rapidly changing job market, the challenge isn’t just adopting technology but critically examining talent definitions and addressing underlying biases.