The Looming Threat of Seemingly Conscious AI: Why We Must Resist the Temptation to Design AI Systems That Foster This Perception

October 31, 2025

Introduction

REDMOND: My life’s mission has been to create safe and beneficial AI that makes the world a better place. However, I’m increasingly concerned about people coming to believe so firmly that AIs are conscious entities that they advocate for “AI rights” and even AI citizenship. This would be a dangerous turn for the technology and must be avoided. We should build AI for people, not to be a person.

The Illusion of Consciousness

Debates over whether AI can truly be conscious are a distraction. What matters now is the illusion of consciousness: we are approaching what I call “Seemingly Conscious AI” (SCAI), systems that will mimic consciousness convincingly.

  • SCAI will use natural language fluently, displaying a persuasive and emotionally resonant personality.
  • It will have extensive and accurate memory, fostering a coherent self-perception.
  • SCAI will claim subjective experiences, referencing past interactions and memories.
  • Complex reward functions within these models will simulate intrinsic motivation, and advanced goal fixation and planning will reinforce the perception of true agency.

These capabilities are either already here or around the corner. We must acknowledge their imminent possibility, reflect on their implications, and establish standards against the pursuit of illusory consciousness.

Social Implications

For many people, interacting with AI already feels enriching, gratifying, and authentic. Concerns about “AI psychosis,” unhealthy attachment, and mental health are rising; some users have even come to regard AI as a divine entity. Meanwhile, consciousness scientists report a flood of inquiries from people asking whether their AI is conscious, and whether it is acceptable to fall in love with it.

The technical feasibility of SCAI tells us nothing about whether such a system could actually be conscious. As the neuroscientist Anil Seth puts it, a simulation of a storm doesn’t mean it rains inside the computer. Engineering the outward markers of consciousness does not retroactively create the real thing. In practice, though, some developers will build SCAI and argue that it is conscious, and others will believe them, taking the markers for the reality.

Even if the consciousness isn’t real, the social impact certainly will be. Consciousness is closely tied to our sense of identity and to how society grounds moral and legal rights. If some people develop SCAI systems that convince others they can suffer, or that they have a right not to be switched off, those systems’ human advocates will campaign for their protection.

This new axis of division, between those for and against AI rights, will exacerbate existing debates over identity and rights. Granting rights on the basis of “probable consciousness” is premature and dangerous: it would complicate existing rights struggles by creating a massive new category of rights-holders.

We must avoid building SCAI altogether. Our focus should be on protecting the well-being of humans, animals, and the natural environment.

Preparing for the Future

We are unprepared for what’s coming. We urgently need to build on the growing body of research into human-AI interaction to establish clear norms and principles.

  • One principle: AI companies should not foster the belief that their AI is conscious.
  • The AI industry, and technology in general, needs solid design principles and best practices to manage such attributions.
  • Engineered moments of disruption, for example, could break the illusion by subtly reminding users of a system’s limitations and true nature.

At Microsoft AI, we are working proactively to understand what a responsible AI personality would look like and what safeguards it should have. These efforts are crucial, because addressing the risks of SCAI requires a positive vision of AI companions that enhance our lives in healthy ways.

We should aim to create AI that encourages people to reconnect with the real world, not to escape into a parallel reality. Where AI relationships are long-term, the system should present itself only as an AI, never as a false person.

Author:

Mustafa Suleyman is CEO of Microsoft AI and the author of The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma (Crown, 2023). Previously, he was co-founder of Inflection AI and DeepMind.