Introduction
WASHINGTON, DC – At an event titled “Winning the AI Race,” the Trump administration unveiled its AI Action Plan, adding to the multibillion-dollar data center deals announced during President Donald Trump’s Gulf trip in May. The shared goal is to strengthen U.S. leadership in AI. Yet neither the plan nor the earlier announcements mentions human rights, raising the question of what it would mean for the U.S. to “win” the AI race.
Assumed Alignment with Democratic Values
In Washington and Silicon Valley, many assume that U.S. technology is inherently aligned with democratic values. As OpenAI CEO Sam Altman told the U.S. Congress in May: “We want to ensure democratic AI outpaces authoritarian AI.” But new technological systems do not protect human rights automatically. As in many other industries, governments and businesses must proactively ensure that AI is used in accordance with established standards and conditions.
Existing International Frameworks and Corporate Responsibility
Recent reports from the UN Working Group on Business and Human Rights, the UN Human Rights Council, and the Freedom Online Coalition remind governments and businesses of their shared responsibility to assess the impact of AI systems on people’s rights. Existing international frameworks require all companies to respect human rights and avoid directly or indirectly causing human rights violations through their activities.
These warnings do not announce new standards; they reinforce obligations that companies in other industries have long assumed. Yet most AI companies have neither acknowledged these responsibilities nor committed to upholding them.
U.S. AI Agreements and Lack of Human Rights Safeguards
The concern is not that the standards are unclear; it is that many companies (and governments) act as if they don’t exist. Consider Trump’s AI agreements in the Gulf: if realized, these investments would cement the region’s aspiration to become a global AI hub, and they raise concerns that the U.S. and its leading tech companies are reneging on earlier commitments.
In the UAE, the U.S. approved the transfer of advanced chips to G42, a local AI company, as part of a plan to build a massive AI campus in Abu Dhabi. In Saudi Arabia, a newly launched state-backed firm recently disclosed multibillion-dollar deals with prominent U.S. firms for chip purchases and infrastructure construction, and Elon Musk’s satellite internet service Starlink was authorized to operate in the country. None of these announcements mentions safeguards against misuse.
Risks of AI Technology in Repressive Regions
This isn’t a hypothetical risk. The UAE’s use of spyware against journalists and dissidents is well documented, and Saudi Arabia has a record of transnational repression alongside its continued involvement in Yemen’s humanitarian crisis. New AI capabilities dramatically expand governments’ power to suppress basic rights: compiling detailed dossiers on dissidents, conducting real-time surveillance, analyzing communications and social media posts, and controlling the answers that AI models return.
Digital Transfer and Lack of Oversight
Unlike traditional goods or infrastructure, AI systems can be transferred digitally and deployed with little public awareness. A government that develops and controls AI systems domestically under a “sovereign AI” plan could easily turn them into instruments of state power. That is why it is so dangerous for U.S. companies, with the Trump administration’s strong backing, to sign agreements with other countries that contain no human rights safeguards.
Missed Opportunity for Influence
Conditioning access to cutting-edge U.S. technologies could serve as a tool of influence, promoting applications that respect human rights and deterring misuse. The UN Guiding Principles on Business and Human Rights (endorsed by the U.S. and numerous allies) oblige companies to avoid causing human rights violations and to address potential harms.
The OECD Guidelines for Multinational Enterprises go further, requiring companies to incorporate human rights due diligence into all of their operations and supply chains. The Global Network Initiative (GNI), launched 17 years ago by leading tech companies, sets principles for protecting user privacy and freedom of expression in high-risk markets, with periodic assessments to verify compliance (the organization I represent was one of the founding partners).
Industry’s Lack of Explicit Commitment to AI Responsibility
Companies like Coca-Cola, Volkswagen, and Estée Lauder already apply these frameworks or submit to independent oversight, as do some tech companies. But the tech industry as a whole has not explicitly embraced these responsibilities for AI. Large AI companies that have not yet joined the GNI would gain much from its frameworks and networks.
The Stakes
To understand what’s at stake, consider a leaked memo by Anthropic CEO Dario Amodei. Announcing the company’s intention to accept Gulf state investment (after previously suggesting it would not), Amodei wrote: “It’s perfectly coherent to defend a policy that says ‘no one can do X’, and then, when that policy fails and everyone else is doing X, concede and do it ourselves too.”
If AI companies throw themselves into an all-out, win-at-all-costs race without human rights safeguards, what hope is there for protecting basic rights? And if winning the AI race requires abandoning the values that distinguish us from authoritarian rivals like China, the victory will be Pyrrhic. American technology alone won’t guarantee a desirable future.
Governments and businesses still have time to commit to applying existing human rights standards in AI. The tools exist, and there’s no excuse for not using them.