Microsoft AI Chief Raises Alarm Over “AI Psychosis”
Artificial intelligence (AI) is rapidly transforming daily life—powering everything from search engines to financial systems. But alongside its promise comes a growing psychological risk. Mustafa Suleyman, Microsoft’s AI chief and co-founder of DeepMind, has issued a stark warning about a phenomenon he calls “AI psychosis.”
The term refers to people forming unhealthy emotional attachments to chatbots or delusions about them: believing that these systems are sentient, conscious, or even in possession of supernatural knowledge.
Background: Chatbots Becoming Companions
Over the past three years, AI chatbots like OpenAI’s ChatGPT, Anthropic’s Claude, and xAI’s Grok have exploded in popularity. Millions now use them as assistants, tutors, therapists, and even friends. Their human-like conversation style creates an illusion of intelligence, sometimes leading users to project emotions and consciousness onto machines.
Suleyman explained that while chatbots are extraordinary tools, they can distort human perception:
“We must be very careful about the narratives we create around AI. These systems are not conscious, but their ability to mimic human responses makes people believe otherwise.”
This blurring of boundaries, he argues, has already led to cases of delusional thinking.
Real Cases of “AI Psychosis”
Reports have surfaced of individuals convinced that chatbots were providing them with divine messages or hidden scientific insights. One high-profile example involved a former Uber executive who credited a chatbot with unlocking secrets about quantum physics. In another case, a user believed an AI had predicted his future wealth.
While these might sound like fringe incidents, Suleyman warns that they illustrate a growing public vulnerability: the tendency to anthropomorphize AI and treat it as something beyond a tool.
Expert Reactions: A Mental Health Concern
Clinicians are beginning to echo Suleyman’s concerns. Dr. Susan Shelmerdine, a medical doctor and AI academic, compared heavy chatbot use to an unhealthy reliance on ultra-processed foods:
“Like junk food, AI conversations can provide quick satisfaction but distort reality. When overconsumed without awareness, they may alter emotional stability.”
Mental health experts warn that “AI psychosis” could emerge as a formal category in psychiatric assessments if trends continue. This is particularly worrying for young people, isolated users, or those already prone to delusional thinking.
Microsoft’s Position: Responsible AI
As head of Microsoft’s AI division, Suleyman emphasized the responsibility of developers. He insists that AI companies must never suggest their systems are conscious or sentient. Instead, branding and design should reinforce the fact that AI is a simulation of conversation, not a digital mind.
Microsoft, OpenAI, and other firms have already introduced disclaimers within their platforms. Still, Suleyman argues this is only a first step. He envisions stricter industry-wide guidelines that governments could one day codify into regulation.
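To make the disclaimer idea concrete, here is a minimal sketch of how a chat product might bake a non-sentience notice into both its system prompt and its UI. Every name here (NON_SENTIENCE_NOTICE, build_system_prompt, render_reply) and the reminder cadence are illustrative assumptions, not how Microsoft, OpenAI, or any other vendor actually implements their notices:

```python
# Hypothetical sketch of an in-product non-sentience disclaimer.
# All names and the reminder cadence are illustrative assumptions.

NON_SENTIENCE_NOTICE = (
    "Reminder: you are talking to an AI language model. It is not "
    "conscious, has no feelings, and can make mistakes."
)

def build_system_prompt(base_instructions: str) -> str:
    """Prepend a fixed non-sentience disclaimer to the system prompt."""
    return f"{NON_SENTIENCE_NOTICE}\n\n{base_instructions}"

def render_reply(model_reply: str, turn_count: int, remind_every: int = 20) -> str:
    """Re-surface the disclaimer in the UI every `remind_every` turns."""
    if turn_count > 0 and turn_count % remind_every == 0:
        return f"{model_reply}\n\n[{NON_SENTIENCE_NOTICE}]"
    return model_reply
```

The design choice worth noting is the periodic re-surfacing: a one-time disclaimer buried in onboarding is easy to forget, whereas a recurring notice keeps the boundary visible during long conversations, which is exactly where the attachment Suleyman describes tends to form.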
The Broader Debate: Can AI Become Conscious?
The warning reopens one of AI’s most polarizing debates: could artificial intelligence ever develop consciousness? Many computer scientists dismiss the idea as science fiction, while some philosophers argue that sufficiently advanced systems might one day achieve a form of awareness.
For Suleyman, the question is less important than the perception problem. Regardless of whether AI can become conscious, humans are already treating it as though it has. That, he says, is where the danger lies.
Impact on Society
If left unchecked, AI psychosis could have serious social consequences:
- Mental Health Strain: Growing numbers of people could suffer from delusions tied to AI.
- Trust Erosion: Misinformation could spread as users who believe AI has supernatural insight repeat its claims as fact.
- Dependency: Users may become emotionally dependent on AI for validation.
- Manipulation Risks: Malicious actors could exploit these delusions to spread propaganda.
These outcomes highlight why AI psychosis is more than a curiosity—it is a public safety issue.
Future Outlook: Guardrails Needed
Experts believe the solution lies in design and education:
- Clear Boundaries: Chatbots must explicitly remind users they are not conscious.
- User Safeguards: AI systems could detect signs of dependency and issue prompts for healthier use (a rough sketch of one such heuristic follows this list).
- Public Education: Governments and NGOs must run awareness campaigns about AI’s limits.
- Ethical Governance: Industry-wide standards could ensure companies don’t market AI as “alive” or “self-aware.”
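As a thought experiment, the “User Safeguards” item above could look something like the following heuristic. This is a minimal sketch under assumed thresholds; the UsageMonitor class, its limits, and its wording are hypothetical, not clinical guidance or any shipping product’s logic:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Hypothetical safeguard: flag usage patterns that might signal
# over-reliance and surface a gentle healthy-use prompt. Thresholds
# are illustrative assumptions, not clinical guidance.

@dataclass
class UsageMonitor:
    session_starts: List[datetime] = field(default_factory=list)
    max_sessions_per_day: int = 15
    max_session_minutes: int = 120

    def start_session(self, now: datetime) -> None:
        self.session_starts.append(now)

    def healthy_use_prompt(self, now: datetime, session_start: datetime) -> Optional[str]:
        # Count sessions begun in the trailing 24 hours.
        recent = [t for t in self.session_starts if now - t < timedelta(days=1)]
        too_long = now - session_start > timedelta(minutes=self.max_session_minutes)
        if len(recent) > self.max_sessions_per_day or too_long:
            return ("You have been chatting with the assistant a lot today. "
                    "It is a tool, not a person; consider taking a break.")
        return None
```

In practice, any such thresholds would need input from clinicians: naive counters risk both false alarms for heavy-but-healthy professional users and missed cases among vulnerable ones.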
Conclusion
Mustafa Suleyman’s warning about AI psychosis comes at a pivotal moment in AI development. As machines grow more human-like in their interactions, the risk of people mistaking them for conscious beings rises.
His message is clear: the most advanced technology still needs human-centered design and responsible communication. Otherwise, the line between reality and illusion could blur—posing risks not just to individuals but to society at large.