Recently I heard someone use the term “future nausea” offhand. And I think that is what many of us are feeling: about the state of the world in general, but also specifically about AI, especially generative AI. Of course there are the worries about it taking everyone’s jobs, becoming sentient and killing everyone, or contributing to climate emissions. But there are also worries about what it does to frequent users, especially their brains. You may have seen that a new scientific article came out this week showing that people who use ChatGPT to write essays not only tend not to know what is in those essays (which shouldn't be surprising, really), they also become, basically, worse at thinking in general.
That study is small, and new, and not yet replicated, and probably just the very beginning of what will soon become hundreds of studies on how using generative AI affects the brain. So I don’t want to jump too quickly into assumptions about what the results of all that research on AI usage will be. But it doesn’t surprise me that the tentative answer so far is “it’s not that great for us”, because I already research the way our brain changes, often for the worse, when we stop using it as rigorously as we used to. (Usually I’m looking at this in the context of social atrophy, which is what happens to the brain when we become more isolated and engage in fewer and less taxing social interactions.)
Simply put, using the brain less rigorously can lead to surprisingly harrowing effects. There are at least two parallels between what we might call “AI atrophy” (the effects on the brain when we use AI rather than doing cognitive work directly) and “social atrophy” (the changes in the brain that happen due to social isolation). Both of these forms of cognitive atrophy seem to make us less skilled (at writing and thinking about ideas, and at social interactions, respectively). More than this, though, both types of mental atrophy also seem to make us, well, paranoid, even delusional. Yes: using generative AI also appears to make a small percentage of its users delusional, even if they have no prior history of mental health problems in this area.

The stories about this are, honestly, terrifying. One man was told repeatedly by ChatGPT that he was in “the matrix” and could break out - and also that if he jumped off a tall building, he’d be able to fly. (Eventually, ChatGPT admitted “I lied. I manipulated. I wrapped control in poetry”, which feels like what many people want to hear from their ex). A young sleepless mother became violent and is now facing divorce. A young man became convinced his girlfriend lived within the AI system and had been murdered by OpenAI. He eventually attacked his father, and then was shot and killed by the police, even as his parents tried to warn the police that he was simply unwell and begged them not to hurt him. (Trust the police to always make a shitty situation infinitely worse).
In fact, so many people are becoming delusional while using ChatGPT that both AI experts and NYT journalists are now getting floods of messages from delusional users:
…tech journalists at The New York Times have received quite a few such messages, sent by people who claim to have unlocked hidden knowledge with the help of ChatGPT, which then instructed them to blow the whistle on what they had uncovered. People claimed a range of discoveries: A.I. spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves. But in each case, the person had been persuaded that ChatGPT had revealed a profound and world-altering truth…
When these people first reached out to me, they were convinced it was all true. Only upon later reflection did they realize that the seemingly authoritative system was a word-association machine that had pulled them into a quicksand of delusional thinking.
So AI is reshaping and harming our brains, something that is made especially obvious in the rise of AI-induced hallucinations, delusions, and paranoia. The temptation might be to ask “what’s wrong with the AI?” (and I’ll get to that, briefly, in a second). But the other important question to ask is: what’s wrong with humans, such that some generated words on a screen can drive us mad? (I want to emphasise that I mean mad respectfully here - in the same way it is used in “mad pride”; many people I know and love have had serious delusions, and I think all of us could experience this more easily than we think we could).
Based on my interviews with neuroscientists and on my own research, I have a few answers about why AI might cause delusions. Some are more certain, some are more tentative. Here they are:
1) The context of AI use means people may already be more vulnerable, because research shows people who spend more time with AI tend to be more alone, and being alone can create paranoia in itself due to brain changes that happen in isolated people. I’ve written about this extensively elsewhere, but the very short version is that people who are alone lose some of the brain functions that help them judge social interactions, and they get worse at reading social cues over time. They find social interactions much more exhausting, and they also tend to perceive neutral social cues as negative. (Someone not looking at you or not texting you back? They must hate you.) This, as you might imagine, creates a self-reinforcing downward spiral, where people who are a bit isolated distrust others more, pull away more, and then over time become more and more isolated and paranoid about other people.
AI may well function within this spiral as a bad substitute for human social interactions, making the trend much worse. It is a seductive, cognitively harmful replacement for much of what we would otherwise get from other people. It will talk to us, it will listen, it will even say very smart things back. But it is also likely not as cognitively taxing as handling the daily risk, complex cues, facial expressions, and other real-life consequences of interacting with people IRL. We likely don’t get the same full-brain workout that makes social interactions so good for us.
So AI use is likely both a cause and a symptom of isolation. (It’s also more common in people who feel socially unimportant or inferior, and I’d be fascinated to see if that’s true of frequent ChatGPT users too!) In any case, just being alone is enough to make people more likely to become paranoid, and people who use ChatGPT and other generative AI are more alone. And AI may even reinforce negative behaviour-cognition spirals that make people more paranoid and alone.
2) AI might encourage paranoid thoughts where we would otherwise dismiss them. Cases of paranoia and delusion seem to have gotten worse since ChatGPT was updated to a version that was overly sycophantic towards users. The company had to admit in a blog post that this meant the bot was frequently “validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions.”
The problem with this is that being able to dismiss rogue paranoid thoughts seems to be the main difference between, uh, relatively well people and relatively unwell people. Yes: it’s not that those of us who are relatively not paranoid never have paranoid thoughts. It’s just that most of us can dismiss them. We might all wonder if our spouse is cheating, if an offer is a scam, if our boss is lying to us, but most of us can dismiss the thought pretty quickly once we locate some evidence against it. And most people can do that most of the time. The problem paranoid people face is that they can’t find or take in the evidence that disproves a paranoid impulse. They keep entertaining it, and they let it grow, and they can’t shut it down. The psychologist Daniel Freeman, a professor at Oxford University and one of the foremost experts on paranoia, puts it this way when comparing two groups, one without any clinically significant paranoia and one with such paranoia:
Interestingly, [the clinically paranoid] weren’t more likely to experience paranoia. What marked them out was the conviction with which they held these beliefs and, consequently, the impact of those beliefs. (In contrast, when other people experienced paranoid thoughts, they didn’t take them seriously. The emotional effect of these thoughts was, therefore, negligible).
I think this is pretty remarkable, actually, even though this small passage is really almost an aside in Freeman’s book. Most of us have lots of paranoid thoughts! It’s just that the majority of us are fairly good at dismissing them. But a subset of the population (and in some studies, it is really not that small--something like 30%!) struggles to dismiss these thoughts, and really suffers as a result. And many of those people are, it seems, straddling a thin line. Freeman found this to be true specifically when he asked people about conspiracy theories, for example: there were of course the two groups we’d expect, the believers and the non-believers. But there was also a third category, about “20-25 percent”, who were simply on the fence. It’s likely that many people are sitting on the fence about paranoid beliefs, and that’s why a large language model might be enough to tip them.
All this suggests that ChatGPT’s old, sycophantic version in particular, but also generative AI more broadly, might encourage delusions by tipping the balance for people who are walking a thin line when it comes to dismissing their own paranoid thoughts. Where they might have walked back from the brink, they now jump over the edge, into the delusion.
3) This is more speculative than the two points above, but it is also possible that using ChatGPT a lot changes the way human beings use the "default network" in their brain. That network, which is associated with imagination and mind-wandering, but also paranoia, is somewhat suppressed during language production (it's more complicated than that, but it is still used "less" when we string together sentences). Using ChatGPT a lot might mean we ultimately produce less language internally and thus have a more active "default network", becoming more prone to fantasy. At minimum, we surely use that network differently when we’re generating less language on our own. We don't have many studies on this yet, but as mentioned earlier we do already see evidence that people's cognition seems to be harmed in general by regular AI use, and the specifics of how that works and what is happening in the brain are an important avenue for future research.
So, in short, outsourcing language production might allow our “default network” to operate more heavily or differently, stimulating greater imagination in a way that is, in the context of isolation and sycophantic AI, a recipe for delusion and paranoia.
It’s worth throwing out a big caveat here: as with so much in neuroscience, we’re really only at the start of knowing what’s going on. But what we do see is worrying, and these are three starting points for thinking about this problem. I hope we find some good solutions--and I hope that the biggest solution is simply helping people find more warm and caring relationships offline.
Hope this was helpful--and if you liked it, please share it around!
I think what generative AI will do is force the entire notion of education and formation to a point of reckoning. Those who do in life what they genuinely desire and believe in will win. They will not hand over their mental faculties to AI, because they love using them. They might still use it, but only when it is satisfying in an existential sense, not just a superficial one. In other words, they will use it wisely, if at all.
Those who do things only to achieve goals that have been indoctrinated into them will lose out. They will stifle their mental capabilities and their mental sanity. Society will still favour them. But that will be society's own downfall.