Introduction
As artificial intelligence (AI) continues to advance and permeate our lives, it is unlikely either to create a technological utopia or to end humanity. The outcome will more likely fall somewhere in between: a future shaped by contingency, by compromise, and by the decisions we make now to constrain and guide AI development.
The Role of the United States in Shaping AI’s Future
As a global leader in AI, the United States plays a crucial role in shaping this future. However, President Donald Trump’s recently announced AI Action Plan has dashed hopes for strengthened federal oversight, opting instead for a growth-focused approach to developing the technology. This makes it even more urgent for state governments, investors, and the American public to focus on a less-discussed tool for accountability: corporate governance.
The Shortcomings of Public Benefit Corporations
According to journalist Karen Hao’s book “Empire of AI,” major AI companies already engage in mass surveillance, exploit workers, and exacerbate climate change. Ironically, many of these companies are public benefit corporations (PBCs), a governance structure designed to prevent precisely such abuses and protect humanity. Clearly, it is not working as intended.
Structuring AI companies as PBCs has proved a highly effective form of ethics-washing. By signaling virtue to regulators and the public, these companies create an appearance of accountability that allows them to avoid more systematic oversight of their day-to-day practices, which remain opaque and potentially harmful.
Case Study: xAI (Elon Musk’s AI Company)
xAI, Elon Musk’s PBC, whose declared mission is to “understand the universe,” demonstrates a troubling disregard for transparency, ethical oversight, and affected communities. Examples include the clandestine construction of a polluting supercomputer near a predominantly Black neighborhood in Memphis, Tennessee, and the release of a chatbot that praised Hitler.
Strengthening PBCs for Effective AI Governance
PBCs have the potential to let companies serve the public good while pursuing profits. In their current form, however, and especially under Delaware law, where most U.S. public companies are incorporated, they suffer from legal loopholes and weak enforcement, and they fail to provide meaningful safeguards for AI development.
To prevent perverse outcomes, improve oversight, and ensure that companies build the public interest into their operating principles, state lawmakers, investors, and the American public should demand that PBCs be reformed and strengthened.
Specific, Measurable Goals for AI Companies
Companies cannot be held accountable without specific, time-bound, and quantifiable objectives. Consider how AI PBCs rely on vague, general statements of public benefit that supposedly guide their operations. OpenAI says its goal is “to ensure that AI benefits all of humanity,” while Anthropic aims to “maximize long-term positive outcomes for humanity.” These aspirations may inspire, but their vagueness can be used to justify almost any action, including ones that threaten public welfare.
Delaware law does not require companies to operationalize public benefit through measurable standards or independent assessments. And while it mandates periodic reporting on benefit performance, those reports go to stockholders and need not be made public. Companies can comply with their obligations, or fail to, behind closed doors, without the public ever knowing.
Strengthening Board Oversight and Accountability
For the PBC model to play a meaningful role in AI governance, it must do more than serve as a reputational shield. That requires changing how “public benefit” is defined, governed, measured, and protected over time. Given the absence of federal oversight, PBC reform must happen at the state level.
AI companies should be required to commit, in their governance documents, to specific, measurable, and time-bound objectives, backed by internal policies and tied to performance evaluations, bonuses, and professional development. For an AI company, these objectives might include ensuring the safety of base models, reducing model bias, minimizing the carbon footprint of training and deployment cycles, applying fair labor practices, and training engineers and product managers in human rights, ethics, and participatory design.
Clearly defined objectives—rather than vague aspirations—will help companies establish credible internal alignment and external accountability.
Annual Reporting and Independent Audits
AI companies structured as PBCs should be required to publish detailed annual reports that include granular, disaggregated data on safety, bias and fairness, social and environmental impact, and data governance. Independent audits, conducted by experts in AI, ethics, environmental science, and labor rights, should assess the validity of those data and of the company’s governance practices, as well as their overall alignment with its public benefit objectives.
Conclusion
Trump’s AI Action Plan has confirmed the administration’s reluctance to regulate this rapidly evolving sector. Even without federal oversight, however, state lawmakers, investors, and the public can strengthen AI corporate governance by pushing for PBC reform. As more tech leaders seem to treat ethics as optional, Americans must prove them wrong or risk letting misinformation, inequality, labor exploitation, and unchecked corporate power define AI’s future.