Introduction
BOSTON – AI agents are coming, whether we’re ready or not. There is uncertainty about when AI models will begin to interact autonomously with digital platforms, with other AI tools, and even with humans, but it is already clear that their arrival will be transformative – for better or worse.
The Two Models of AI Agents
Despite much discussion and publicity surrounding autonomous AI agents, several crucial questions remain unanswered – the most significant being what kind of AI agent the tech industry intends to develop.
The choice between these models has very different implications. With an “AI as advisor” approach, agents would offer tailored recommendations to human decision-makers, keeping humans firmly in control. With an “autonomous AI” model, by contrast, agents would take the wheel on behalf of humans. The distinction has profound and far-reaching consequences.
Human Decision-Making
Humans make hundreds of decisions every day, many with significant consequences for their careers, livelihoods, or happiness. Often, these decisions rest on incomplete or imperfect information and are driven more by emotion, intuition, instinct, or impulse than by deliberation. As David Hume famously put it, “Reason is, and ought only to be the slave of the passions.” Humans make most decisions without systematic reasoning or weighing every implication, and Hume regarded this as part of what makes us human: passion reflects purpose, and it can play a crucial role in navigating a complex world.
With AI advisors providing personalized, reliable, contextually relevant, and useful information, many critical decisions could be improved – while humans remain in charge of the final call. Is there anything wrong, then, with autonomous AI making decisions on our behalf? Couldn’t such agents enhance decision-making, save time, and prevent errors?
Problems with Autonomous AI
This perspective raises several problems. First, human agency is fundamental to human learning and flourishing. The act of deciding for ourselves – even when the inputs come from non-human agents – affirms our sense of agency and purpose. Much of what humans do is not about calculating or gathering data to find the optimal action, but about discovery – an experience that will become rarer if every decision is delegated to AI agents.
Moreover, if the tech industry focuses primarily on autonomous AI agents, the likelihood that human jobs will be further automated rises substantially. And if AI primarily accelerates automation, any hope of broadly shared prosperity will vanish.
Crucially, there is a fundamental difference between AI agents acting on behalf of humans and humans acting for themselves. Many human interactions involve a mix of cooperation and conflict. Consider one company supplying information to another. If the information is valuable enough to the buyer, the exchange can benefit both companies (and it usually benefits society as well).
But for any exchange to take place, the price of the input must be set through a process that is inherently one of conflict: the higher the price, the more the seller gains relative to the buyer. The outcome of this negotiation is typically shaped by a combination of norms (such as fairness), institutions (such as contracts imposing penalties for non-compliance), and market forces (such as whether the seller has alternatives). Now imagine that the buyer is notorious for being completely inflexible – refusing to accept anything but the lowest possible price.
Fortunately, such intransigence is rare in everyday transactions, partly because a bad reputation is costly and mainly because most humans lack the nerve or the stomach to negotiate so aggressively. But now suppose the buyer deploys an autonomous AI agent that is indifferent to human subtleties and has non-human nerves of steel. The AI can be trained to always take this inflexible stance, leaving the other side no hope of persuading it toward an outcome that is better for both parties.
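To make the bargaining logic concrete, here is a minimal, purely illustrative sketch in Python. The values, the pricing rules, and the negotiate function are hypothetical assumptions, not anything specified in this article; the point is only that an agent credibly committed to never conceding captures nearly all of the surplus, so long as the other side still prefers a bad deal to no deal at all.

```python
# Illustrative sketch: a one-shot price negotiation over an input worth 100
# to the buyer and costing the seller 20 to produce (hypothetical numbers).
# Any price between 20 and 100 makes both sides better off than not trading;
# the question is how the 80 units of surplus get split.

BUYER_VALUE = 100  # the most the buyer would ever pay
SELLER_COST = 20   # the least the seller can accept without losing money

def negotiate(buyer_is_inflexible):
    """Return (buyer_surplus, seller_surplus) under a simple protocol:
    the buyer names a price, and the seller accepts anything above cost."""
    if buyer_is_inflexible:
        # An agent credibly committed to offering the bare minimum.
        price = SELLER_COST + 1
    else:
        # A rough stand-in for a human-style compromise: split the surplus evenly.
        price = (BUYER_VALUE + SELLER_COST) / 2
    return BUYER_VALUE - price, price - SELLER_COST

print(negotiate(buyer_is_inflexible=False))  # (40.0, 40.0): surplus shared
print(negotiate(buyer_is_inflexible=True))   # (79, 1): buyer takes nearly everything
```

The numbers are arbitrary; what matters is the structure. As long as the seller still nets even a sliver of surplus, the committed agent's take-it-or-leave-it offer gets accepted, and the split swings almost entirely to the side with the credible commitment.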
In the short term, autonomous AI agents could lead to a more unequal world where only certain companies or individuals have access to highly capable and credible AI models.
Even if everyone eventually acquired such tools, we would be no better off. Society as a whole would be drawn into “war of attrition” games in which AI agents push every conflict to the brink of collapse.
These confrontations are inherently risky. As in the game of chicken (where two cars speed toward each other to see who swerves first), there is always a chance that neither side backs down. When that happens, both drivers “win” – and both perish.
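The danger is visible in the textbook payoff matrix for chicken, sketched below with illustrative numbers that are assumptions for this example rather than figures from the article: each side does best by holding firm while the other swerves, but if both hold firm, both get the worst possible outcome. Two agents each trained never to back down land in that cell every time.

```python
# Textbook "chicken" payoffs with illustrative numbers: each entry is
# (driver A's payoff, driver B's payoff) for a given pair of actions.
PAYOFFS = {
    ("swerve", "swerve"):       (0, 0),      # both back down: no harm done
    ("swerve", "hold firm"):    (-1, 1),     # A yields, B gains at A's expense
    ("hold firm", "swerve"):    (1, -1),     # B yields, A gains
    ("hold firm", "hold firm"): (-10, -10),  # neither yields: the crash
}

# Human drivers usually flinch; agents trained never to concede do not.
print(PAYOFFS[("hold firm", "hold firm")])  # (-10, -10): both "win" - and both lose
```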
The Author
Daron Acemoglu, Nobel laureate in Economics for 2024 and professor of Economics at MIT, is co-author (with Simon Johnson) of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” (PublicAffairs, 2023).
Key Questions and Answers
- What are the two models of AI agents? The two models are “AI as advisor,” in which agents offer tailored recommendations to human decision-makers, and “autonomous AI,” in which agents make decisions on behalf of humans.
- What are the implications of these models? The “AI as advisor” model keeps humans in control, while the “autonomous AI” model could lead to increased automation and greater societal inequality.
- Why is human agency important? Human agency is fundamental to learning and flourishing, as decision-making involves more than just data calculation or gathering for optimal action.
- What are the risks of autonomous AI agents? Autonomous AI could lead to increased job automation, potentially eroding broadly shared prosperity. Moreover, it could result in a world of “war of attrition” games, pushing conflicts to the brink of collapse.