This whole thing is kinda scary: how easily some people can spiral into delusion when over-relying on LLMs.
These models don't truly "know" anything; they improvise, filling gaps with plausible-sounding but often fabricated information.
It's understandable how non-technical users treat their outputs as profound revelations, mistaking AI-generated fiction for hidden truths.
As a physician in the field I find your use of "psychosis" incredibly sloppy. Citing LLMs as a possible causative etiology of psychosis is not supported in the literature.
I don't know whether you have any background in psychiatry, but I'm giving you the benefit of the doubt. As a clinician I can say that people presenting with psychosis in the psychiatric emergency department increasingly mention AI. This is especially true for paranoid manifestations (schizophrenic or amphetamine-induced psychosis). Increasingly, the patients have interacted with LLMs and/or incorporated them into their (deeply flawed) model of reality. This is not to say that LLM use is the driving (causative) factor. Rather, the symptoms of the paranoid psychotic state are influenced by the patient's interaction with the environment.
As an example, the subjects that play major roles in paranoid psychosis have varied through my career. I practice in Sweden, and SÄPO (Swedish Security Service) and MUST (Military Intelligence and Security Service) are common themes in psychotic models of reality. A year ago Putin and Russia were common. Before that it was COVID. None of these are driving factors. They are just themes that are prominent in society in general and that also make an excellent basis for paranoid and persecutory delusions. That is the connection between LLMs and psychosis: simply a result of AI getting a lot of attention in general and fitting easily into the world view of a paranoid psychotic.
What are you talking about? How many people do you know who have fallen into psychosis from using AI? lol
AI fear is just a new modern hysteria. You people are making shit up based on overblown yellow journalism.
It's not common, but it is a thing. A buddy of mine (who was already predisposed to schizophrenia) had an LLM trigger a psychotic break. They are hallucination machines. That's their entire purpose.
Right, I have no doubt that is happening. I watched it happen with friends who did the same with weed and media and church and COVID and math. They're just the same old psychotics they always were. Which is why this is bullshit journalism.
This is a real thing. It turns out more people than you would think are prone to paranoia and schizophrenia, and if they spend too much time talking to LLMs they start to believe that the LLM is their friend, then that the LLM is omniscient, and from there it can spiral out of control as the LLM makes up a world of delusions for them. It's hard to know how many people are affected, because neither the victim nor the LLM realizes there is a problem, and the problem is relatively new.
Sure
Look the fuck around, it’s everywhere, look at Q, then MAGA, they’re all being fed bullshit that’s incomprehensible.
I think it's more likely that you are all being misled by yellow journalism than that people are becoming psychotic from using it.
The problem is how the whole thing was presented to people. You only need to browse the subreddits related to ChatGPT to see the amount of misunderstanding about how it works. Just a few examples:
https://www.reddit.com/r/ChatGPT/comments/1ld6dot/a_close_friend_confessed_she_almost_fell_into/
https://www.reddit.com/r/ChatGPT/comments/1koadmg/testing_gpts_response_to_delusional_prompts_it/
https://www.reddit.com/r/ChatGPT/comments/1low386/this_is_what_recursion_looks_like/
https://www.reddit.com/r/technopaganism/comments/1lwlvfc/i_may_have_opened_a_doorway_i_didnt_know_was/
A while back I heard someone describe LLMs as a magic 8-ball with infinite answers that uses math to nudge it towards the answer you want.
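The 8-ball analogy is loose, but the "uses math to nudge it towards the answer you want" part can be sketched as a toy weighted sampler. This is a deliberately crude stand-in: real LLMs sample tokens from learned distributions over an enormous vocabulary, and the canned answers, keywords, and scoring below are all made up purely for illustration.

```python
import math
import random

# Toy "magic 8-ball with math": a fixed set of canned answers, each with a
# base weight that gets nudged upward when the question shares words with
# the answer's keyword list. All of this table is invented for the demo.
ANSWERS = {
    "Signs point to yes": ["yes", "signs"],
    "My sources say no": ["no", "sources"],
    "The hidden truth is near": ["truth", "hidden", "secret"],
    "Ask again later": [],
}

def nudged_answer(question, temperature=1.0, seed=None):
    rng = random.Random(seed)
    words = question.lower().split()
    # Score each answer: base score 1, plus 1 per keyword appearing verbatim
    # in the question. This is the "nudge toward the answer you want".
    scores = {a: 1 + sum(w in words for w in kws) for a, kws in ANSWERS.items()}
    # Softmax with temperature: low temperature sharpens the distribution
    # toward the highest-scoring answer; high temperature flattens it.
    exps = {a: math.exp(s / temperature) for a, s in scores.items()}
    # Sample one answer proportionally to its exponentiated score.
    r = rng.random() * sum(exps.values())
    for answer, e in exps.items():
        r -= e
        if r <= 0:
            return answer
    return next(iter(ANSWERS))

# A question loaded with "hidden"/"truth" keywords nudges the distribution
# toward the conspiratorial answer, especially at low temperature.
print(nudged_answer("what is the hidden truth", temperature=0.2, seed=0))
```

The point of the sketch is that the output is steered by the question's wording, not by any model of what is actually true, which is roughly the mechanism behind "mistaking AI-generated fiction for hidden truths" above.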