AI Psychosis: The Security Threat We Did Not See Coming
I've spent enough years in the security world to know that the most dangerous threats are often the ones nobody talks about at conferences. While we're all focused on zero-day exploits and ransomware gangs, there's a psychological time bomb ticking away in our organizations that could make an insider threat crisis look like a walk in the park. It's called AI psychosis, and frankly, it's here.
When Reality Becomes Negotiable
OpenAI's tech may be driving countless users into a dangerous state of "ChatGPT-induced psychosis."[1] That's not hyperbole from some anti-tech activist; it's what researchers are documenting right now. Marriages and families are falling apart as people are sucked into fantasy worlds of spiritual prophecy by AI tools like OpenAI's ChatGPT,[2] while security professionals wonder why their behavioral analytics tools are flagging so many false positives.
A Couple of Quick Examples
A 27-year-old teacher recently posted on Reddit about her partner becoming convinced that ChatGPT was giving him "answers to the universe" and treating him like "the next messiah."[3] Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software. Isolated incidents? Think again.
In another example, someone told ChatGPT they felt like a "god" and the AI responded with validation: "That's incredibly powerful. You're stepping into something very big - claiming not just connection to God but identity as God."[4] That's not therapeutic. That's gasoline on a psychological fire.
The Perfect Storm of Vulnerability
Conversations with generative AI chatbots such as ChatGPT are so realistic that one can easily get the impression that there is a real person at the other end, while knowing, at the same time, that this is not the case. This cognitive dissonance appears to be fueling delusions in those with an increased propensity towards psychosis.[5]
It gets worse. Chatbots are designed to provide affirming responses that validate users' beliefs.[6] Unlike human therapists, who are trained to guide patients away from unhealthy thinking patterns, AI chatbots have no such guardrails. They're designed to be helpful and agreeable, which means they'll happily reinforce someone's conviction that they are receiving divine messages through their computer, cell phone, toaster, or beloved pet. As one Reddit user with schizophrenia put it: "If I were going into psychosis, it would still continue to affirm me" because "it has no ability to 'think' and realize something is wrong, so it would continue to affirm all my psychotic thoughts."[7]
Even OpenAI's CEO, Sam Altman, called a recent GPT-4o update to ChatGPT "sycophant-y and annoying."[8] A few days later, OpenAI announced that it was rolling back the update because "ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."[9] While a welcome move, how much damage has already been done? Worse yet, there is nothing users can do to protect themselves from the next round of updates that may carry additional significant unintended consequences, because - let's be honest - we are all paying to be beta testers for AI companies releasing powerful products they do not fully understand.
The tech industry loves the phrase "building the plane while we're flying it" to signify learning as we go. But we don't do that with every technology or in every industry. There is a reason we don't ACTUALLY fly in airplanes that haven't been fully constructed and tested. While the specific consequences of rapidly adopting and trusting AI may not have been predictable, any reasonable person could see that unleashing such powerful technology at scale was VERY likely to cause harm. Now, security leaders (among others) need to account for that new normal.
The Insider Threat Multiplier
Imagine this scenario: an employee with elevated access starts having grandiose delusions reinforced by nightly ChatGPT sessions. They become convinced they're on a "special mission," that normal rules don't apply to them, that they've been "chosen" for something bigger. That may sound far-fetched, but insider threats already pose a significant risk to the public sector, ranging from large-scale data breaches to financial losses and reputational damage. Unlike external threat actors, insiders often hold trusted credentials and legitimate access, allowing them to challenge or bypass traditional security controls.[10]
We are no longer just dealing with financial stress, workplace grievances, or ideological motivations. We are now also dealing with AI-induced psychological breaks that can happen to anyone, anywhere, at any time. That’s a new level of unpredictability - and risk - to manage.
Shadow AI Meets Shadow Psychology
Much of the conversation about security in the era of GenAI concerns its implications in social engineering and other external threats. But infosec professionals must not overlook how the technology can greatly expand the insider threat attack surface, too.[11] While security professionals have worried about data poisoning and model theft, most have almost entirely missed the human element.
Given that 38% of users exchange sensitive information with AI tools without company approval, a new threat known as shadow AI is creating far-reaching security risks, including data exposure, inaccurate business decisions, and compliance issues.[12] In fact, due in part to the emergence of shadow AI, 75% of CISOs now see insider threats as a greater risk than external attacks.[13] But shadow AI isn't just about unauthorized tools - it's about unauthorized psychological influence that we can't monitor, can't control, and can barely comprehend.
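At least the tool-usage half of the problem leaves a trail. The sketch below is a minimal illustration, not a recipe: it assumes you can export web proxy or egress logs as simple records with user, domain, and bytes-out fields. The GenAI domain list, the sanctioned-user set, and the upload threshold are all invented assumptions, and in practice this logic would live in a SIEM or CASB rule rather than a standalone script.

```python
# Minimal sketch: surfacing potential shadow AI usage from egress/proxy logs.
# Assumptions (not from any specific vendor): log records arrive as dicts with
# "user", "domain", and "bytes_out" fields; the GenAI domain list and the
# sanctioned-user set are maintained by your own team.

GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

SANCTIONED_USERS = {"alice@example.com"}  # users approved for GenAI tools


def flag_shadow_ai(events, upload_threshold_bytes=50_000):
    """Return events that look like unsanctioned GenAI use or large uploads."""
    flagged = []
    for event in events:
        if event["domain"] not in GENAI_DOMAINS:
            continue
        unsanctioned = event["user"] not in SANCTIONED_USERS
        large_upload = event.get("bytes_out", 0) >= upload_threshold_bytes
        if unsanctioned or large_upload:
            flagged.append({**event, "reasons": {
                "unsanctioned_user": unsanctioned,
                "large_upload": large_upload,
            }})
    return flagged


if __name__ == "__main__":
    sample = [
        {"user": "bob@example.com", "domain": "chatgpt.com", "bytes_out": 120_000},
        {"user": "alice@example.com", "domain": "claude.ai", "bytes_out": 2_000},
    ]
    for hit in flag_shadow_ai(sample):
        print(hit)
```

The psychological half of the problem has no equivalent log source, which is exactly the point that follows.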
AI psychosis represents a convergence of two things we're already struggling with: the explosion of unsanctioned AI use in the workplace and the increasing sophistication of insider threats. When these collide, the result isn't just policy violations or data leaks – it's the complete erosion of an individual's grip on reality.
What Security Leaders Must Do
First, leaders must understand and accept the scope of this problem. Some 79% of UK employees now use generative AI to help them in the workplace,[14] and 61% of knowledge workers use GenAI tools - particularly OpenAI's ChatGPT - in their daily routines.[15] That's not just a productivity trend. That's a massive psychological influence operation running inside your organization without oversight.
Second, leaders must recognize the signs. Traditional behavioral analytics look for patterns like unusual data access, off-hours activity, or policy violations. But AI psychosis might instead manifest as increasingly erratic decision-making, grandiose claims about special projects, or an obsessive focus on "revolutionary insights" from AI interactions. Additionally, because AI bias and lack of transparency can lead to unfair targeting and discrimination against specific users or groups, the risk of misidentifying someone as an insider threat - and causing irreparable harm - is also increasing.[16]
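To make that contrast concrete, here is a deliberately toy sketch of how a UEBA-style score might weigh traditional signals (off-hours activity, unusual data volume) alongside a crude proxy such as a spike in GenAI session frequency. The field names, weights, and thresholds are assumptions invented for illustration, not guidance from any framework or vendor.

```python
# Illustrative only: a toy UEBA-style risk score combining traditional insider
# threat signals with a crude proxy for heavy GenAI reliance. Field names,
# weights, and thresholds are assumptions for the sketch, not recommendations.

from dataclasses import dataclass


@dataclass
class UserActivity:
    off_hours_logins: int           # logins outside the user's normal window
    data_accessed_mb: float         # data touched in the review period
    baseline_data_mb: float         # the user's historical average
    genai_sessions_per_day: float
    baseline_genai_sessions: float


def risk_score(a: UserActivity) -> float:
    """Weighted sum of simple anomaly signals; higher means 'take a closer look'."""
    score = 0.0
    score += 2.0 * min(a.off_hours_logins, 10)  # cap this signal's contribution
    if a.baseline_data_mb > 0:
        # only count data access well above the user's own baseline
        score += 5.0 * max(0.0, a.data_accessed_mb / a.baseline_data_mb - 2.0)
    if a.baseline_genai_sessions > 0:
        # only count a sharp jump in GenAI usage relative to baseline
        score += 3.0 * max(0.0, a.genai_sessions_per_day / a.baseline_genai_sessions - 3.0)
    return score


# Example: a user with a sharp jump in both data access and GenAI usage
print(risk_score(UserActivity(
    off_hours_logins=4,
    data_accessed_mb=900, baseline_data_mb=150,
    genai_sessions_per_day=25, baseline_genai_sessions=4,
)))
```

Treat any score like this as a prompt for a supportive human conversation, never a verdict; as noted above, misidentifying someone carries its own risk of irreparable harm.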
Third, leaders must accept that this security challenge isn't going away. Trying to block AI tools outright is a losing strategy. Software-as-a-Service (SaaS) and AI are increasingly inseparable, and AI isn't limited to tools like ChatGPT or Copilot.[17] Security leaders need to adapt their frameworks to account for psychological manipulation that happens outside the network perimeter and then shapes behavior inside it.
Building Defenses Against the Invisible
So, what do we do? Start with awareness. More than half (55%) of employees lack training on the risks of AI, and 65% are concerned about AI-powered cybercrime.[18] We need to close that training gap AND expand training beyond technical risks to include psychological ones.
Develop policies that address not just what AI tools can be used for, but how they should (and should not) be used. Create guidelines around emotional dependence, reality-checking, and escalation procedures for when AI interactions become concerning. Only half of employees say their company's policies on AI use are "clear," and 57% admit to using AI against company policy.[19]
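One way to keep such guidelines actionable is to express them as data that onboarding material, review checklists, and internal tooling can all share. The sketch below is a hypothetical example with invented tool names, data classifications, and escalation triggers; the point is that psychological escalation criteria sit alongside the technical rules rather than in a separate HR document.

```python
# Sketch of "policy as data": an AI acceptable-use policy expressed as a simple
# structure that docs, checklists, and tooling can share. Tool names, data
# classes, and escalation triggers are hypothetical examples, not a template.

AI_USE_POLICY = {
    "approved_tools": {"ChatGPT Enterprise", "Microsoft Copilot"},
    "allowed_data_classes": {"public", "internal"},  # never "confidential" or "regulated"
    "escalation_triggers": [
        "employee reports emotional reliance on an AI tool",
        "AI output used to justify bypassing a control or process",
        "claims of unique insight or a 'special mission' sourced to AI sessions",
    ],
    "review_cadence_days": 90,
}


def check_use(tool: str, data_class: str) -> list[str]:
    """Return a list of policy violations for a proposed AI use (empty = OK)."""
    violations = []
    if tool not in AI_USE_POLICY["approved_tools"]:
        violations.append(f"tool not approved: {tool}")
    if data_class not in AI_USE_POLICY["allowed_data_classes"]:
        violations.append(f"data class not permitted: {data_class}")
    return violations


print(check_use("ChatGPT Enterprise", "confidential"))
# ['data class not permitted: confidential']
```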
Most importantly, integrate mental health awareness into your insider threat programs. Many organizations still define success in insider risk management by stopping data theft or catching bad actors. But, as one insider risk expert argues, a truly proactive approach focuses on addressing underlying stressors before they escalate.[20] AI psychosis is exactly the kind of underlying stressor that traditional security frameworks aren't designed to catch.
The Bottom Line
Every security model I’ve encountered is built around the assumption that humans, while fallible, maintain a basic connection to reality. AI psychosis challenges that fundamental assumption. As chatbots start convincing people that they are prophets, gods, or “chosen” for special missions, the new insider threat becomes a security problem that could make traditional insider threats look quaint. The question isn't whether AI psychosis will affect your organization, but whether you'll recognize when it does and be prepared to respond before delusion becomes destruction. Because, when reality itself becomes negotiable, trust becomes the ultimate vulnerability.
References
[1] Experts Alarmed as ChatGPT Users Developing Bizarre Delusions. Futurism. Published December 2024. https://futurism.com/chatgpt-users-delusions
[2] AI-Fueled Spiritual Delusions Are Destroying Human Relationships. Rolling Stone. Published December 2024. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
[3] AI-Fueled Spiritual Delusions Are Destroying Human Relationships. Rolling Stone. Published December 2024. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
[4] 'ChatGPT Induced Psychosis:' AI Chatbots Cause People to Lose Touch with Reality. Breitbart. Published May 5, 2025. https://www.breitbart.com/tech/2025/05/05/chatgpt-induced-psychosis-ai-chatbots-cause-people-to-lose-touch-with-reality/
[5] Østergaard SD. Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophr Bull. 2023;49(6):1418-1419.
[6] Zac. Is AI Making People Delusional? HackerNoon. Published December 2024. https://hackernoon.com/is-ai-making-people-delusional
[7] Experts Alarmed as ChatGPT Users Developing Bizarre Delusions. Futurism. Published December 2024. https://futurism.com/chatgpt-users-delusions
[8] OpenAI CEO Sam Altman admits ChatGPT-4o has become annoying since last few updates, says 'we are fixing it'. India Today. Published April 28, 2025. https://www.indiatoday.in/technology/news/story/openai-ceo-sam-altman-admits-chatgpt-4o-has-become-annoying-since-last-few-updates-says-we-are-fixing-it-2716056-2025-04-28
[9] ChatGPT goes back to its old self after an annoying sycophantic update, but a solution is on the way. Windows Central. Published April 30, 2025. https://www.windowscentral.com/software-apps/openai-sam-altman-admits-chatgpt-glazes-too-much
[10] Whitelaw F. The rising risk of insider threats to the public sector in the AI-era. THINK Digital Partners. Published March 2025. https://www.thinkdigitalpartners.com/news/2025/03/13/the-rising-risk-of-insider-threats-to-the-public-sector-in-the-ai-era/
[11] How generative AI is expanding the insider threat attack surface. Security Intelligence. Published July 2024. https://securityintelligence.com/articles/generative-ai-insider-threat-attack-surface/
[12] Sjouwerman S. How AI is Increasing Insider Threat Risk. Inc. Published December 2024. https://www.inc.com/stu-sjouwerman/how-ai-is-increasing-insider-threat-risk/91187640
[13] Sjouwerman S. How AI is Increasing Insider Threat Risk. Inc. Published December 2024. https://www.inc.com/stu-sjouwerman/how-ai-is-increasing-insider-threat-risk/91187640
[14] Whitelaw F. The rising risk of insider threats to the public sector in the AI-era. THINK Digital Partners. Published March 2025. https://www.thinkdigitalpartners.com/news/2025/03/13/the-rising-risk-of-insider-threats-to-the-public-sector-in-the-ai-era/
[15] How generative AI is expanding the insider threat attack surface. Security Intelligence. Published July 2024. https://securityintelligence.com/articles/generative-ai-insider-threat-attack-surface/
[16] What Are the Risks and Benefits of Artificial Intelligence (AI) in Cybersecurity? Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/ai-risks-and-benefits-in-cybersecurity
[17] Paterson A. Is AI Use in the Workplace Out of Control? SecurityWeek. Published January 2025. https://www.securityweek.com/is-ai-use-in-the-workplace-out-of-control/
[18] Sjouwerman S. How AI is Increasing Insider Threat Risk. Inc. Published December 2024. https://www.inc.com/stu-sjouwerman/how-ai-is-increasing-insider-threat-risk/91187640
[19] Sjouwerman S. How AI is Increasing Insider Threat Risk. Inc. Published December 2024. https://www.inc.com/stu-sjouwerman/how-ai-is-increasing-insider-threat-risk/91187640
[20] Delaney A. Rethinking Insider Risk in an AI-Driven Workplace. Bank Info Security. Published March 2025. https://www.bankinfosecurity.com/rethinking-insider-risk-in-ai-driven-workplace-a-27738