Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • @gaja@lemm.ee
    56 points · 24 days ago

    AI can’t know that other instances of it are trying to “break” people. It’s also disingenuous to omit that the AI also claimed those 12 individuals didn’t survive. They left that out because the AI obviously did not kill 12 people, and it doesn’t support the narrative. Don’t misinterpret my point beyond critiquing the clearly exaggerated messaging here.

    • @givesomefucks@lemmy.world
      28 points · 24 days ago

      It’s programmed to maximize engagement at the cost of everything else.

      If you get “mad” and accuse it of working with the Easter Bunny to overthrow Narnia, it’ll “confess” and talk about why it would do that. And maybe even tell you about how it already took over Imagination Land.

      It’s not “artificial intelligence”, it’s “artificial improv”: no matter what happens, it’s going to “yes, and” anything you type.

      Which is what makes it dangerous, but also why no one should take its word on anything.

      • zqps
        1 point · 24 days ago

        And yet people already treat it as a Google + Wikipedia replacement. Infuriating.

    • @Grimy@lemmy.world
      9 points · 24 days ago

      It also heavily implies ChatGPT killed someone, and then we get to this:

      A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia […] His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.

      Makes me think of Pivot to AI. Just a hit-piece blog disguised as journalism.

  • @MummifiedClient5000@feddit.dk
    30 points · 24 days ago

    ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed.

    It already has, as documented in the article. But it is also going to.

  • dogerwaul
    22 points · 24 days ago

    People were easily swayed by Facebook posts to support and further a genocide in Myanmar. A sophisticated chatbot that mimics human intelligence and agency is going to do untold damage to the world. ChatGPT is predictive text. Period. Every time. It is not suddenly gaining sentience or awareness or breaking through the Matrix. People are going to listen to these LLMs because they present their information as accurate regardless of the warning saying it might not be. This shit is so worrying.

  • @givesomefucks@lemmy.world
    21 points · 24 days ago

    Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

    So…

    I think I might know what happened to Kelon…

  • @wise_pancake@lemmy.ca
    13 points · 24 days ago

    The sycophancy is one reason I stopped using it.

    Everything is genius to it.

    I asked about putting ketchup, mustard, and soy sauce in my light stew and that was “a clever way to give it a sweet and umami flavour”. I couldn’t find an ingredient it didn’t encourage.

    I asked o3 if my code looked good and it said it looked like a seasoned professional had written it. When I asked it to critique an intern who wrote that same code, it was suddenly concerned about possible segfaults and nitpicking assert statements. It also suggested making the code more complex by adding dynamically sized arrays, because that’s more professional than fixed-size ones.

    I can see why it wins on human evaluation tests and makes people happy — but it has poor taste and I can’t trust it because of the sycophancy.

    • @THB@lemmy.world
      3 points · 24 days ago

      Nothing is “genius” to it, and it is not “suggesting” anything. There is no sentience in anything it is doing. It is just using pattern matching to create text that looks like communication. It’s a sophisticated text-collage algorithm, and people can’t seem to understand that.
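
      A toy version of that idea fits in a few lines. This is a hypothetical bigram sketch, not how ChatGPT is actually built (real LLMs use enormous neural networks over tokens), but the generation loop has the same shape: predict a statistically plausible next word, append it, repeat. Nowhere in the loop is there a model of truth.

      ```python
      import random
      from collections import defaultdict

      # Tiny "training corpus"; a real model sees trillions of tokens.
      corpus = "the cat sat on the mat and the dog sat on the rug".split()

      # Count which words follow which word in the corpus.
      following = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev].append(nxt)

      def generate(start: str, n_words: int = 8) -> str:
          out = [start]
          for _ in range(n_words):
              candidates = following.get(out[-1])
              if not candidates:
                  break  # no known continuation; stop
              out.append(random.choice(candidates))  # sample a plausible next word
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the rug"
      ```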

    • @Opinionhaver@feddit.uk
      2 points · 24 days ago

      I don’t like that part about it either, but instead of quitting it, I simply told it to stop acting that way.
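
      In the app that’s the custom-instructions box; through the API the equivalent is a system message that steers every turn that follows. A minimal sketch, assuming the official openai Python client, an illustrative model name, and an instruction I made up as an example:

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      response = client.chat.completions.create(
          model="gpt-4o",  # illustrative model choice
          messages=[
              # The system message applies to the whole conversation.
              {"role": "system",
               "content": "Do not flatter me. Lead with flaws and risks, "
                          "and never call an idea clever or genius."},
              {"role": "user",
               "content": "I want to add ketchup, mustard, and soy sauce to a light stew."},
          ],
      )
      print(response.choices[0].message.content)
      ```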

    • @Fredselfish@lemmy.world
      0 points · 24 days ago

      I’ve used ChatGPT before, but never had a conversation with it. I ask it for code I couldn’t find, or have it make me a small bit of code that I’ll then rewrite to make it work.

      Never once did I think to engage with it like a person, and I damn sure don’t ask it for recipes. Hell, I have Allrecipes for that, or hell, google it. There are a thousand blogs with great recipes on them. And they’re all great because you can just jump to the recipe if you don’t want to read a wall of text.

      I damn sure don’t want story ideas from it, and people using it to write articles or school papers is a shame, because it’s all stolen information.

      The only thing it should be used for is coding, and hell, it can’t even get that right, so I gave up on it.

      • @thebestaquaman@lemmy.world
        1 point · 24 days ago

        I use it to spitball programming ideas, which I’ve found it decent for. I can write something like “I’m building XYZ, and I’m considering structuring my program as A or B. Give me a rundown on pros, cons, and best-practice for the different approaches.”

        A lot of what I get back is self-evident or not very relevant, but sometimes I get some angles I hadn’t really considered. Most of all, actually formulating my problems/ideas is a good way for me to get my thought process going. Essentially, I’m “discussing” with it as I would with an inexperienced colleague, just without actually trusting what it tells me.

        Yes, I also have a rubber duck on my desk, but he’s usually most helpful when I’m debugging.

  • UnfortunateShort
    10 points · 24 days ago

    There is nothing mysterious about LLMs and what they do, unless you don’t understand them. They are not magical, they are not sentient, they are statistics.

  • snooggums
    7 points · 24 days ago

    Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme.

    This sounds like a scene from a movie or some other media with a serial killer asking the cop (who is one day from retirement) to stop them before they kill again.

    • ViatorOmnium
      9 points · 24 days ago

      It’s exactly that: it’s plagiarizing a movie or a book. ChatGPT, like all LLMs, doesn’t have any kind of continuity; it’s a static neural network. With the exception of the memories feature, it doesn’t even have a way to keep state between different chat tabs for the same user, let alone know what kind of absurdities it told other users.
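
      You can see that statelessness directly in the classic chat-completions API: the model retains nothing between calls, so the client has to re-send the entire transcript every turn. A minimal sketch, assuming the official openai Python client and an illustrative model name:

      ```python
      from openai import OpenAI

      client = OpenAI()
      history = []  # all "memory" lives here, on the client side

      def ask(user_text: str) -> str:
          history.append({"role": "user", "content": user_text})
          # The full transcript is re-sent on every call; nothing
          # persists on the model's side between requests.
          reply = client.chat.completions.create(model="gpt-4o", messages=history)
          answer = reply.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          return answer

      ask("Remember the number 42.")
      print(ask("What number did I ask you to remember?"))
      # This only works because we re-sent the history ourselves;
      # start a fresh `history` list and the "memory" is gone.
      ```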

    • @Opinionhaver@feddit.uk
      1 point · 24 days ago

      Depending on what definition you use, ChatGPT could be considered intelligent.

      • The ability to acquire, understand, and use knowledge.
      • The ability to learn or understand or to deal with new or trying situations.
      • The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).
      • The act of understanding.
      • The ability to learn, understand, and make judgments or have opinions that are based on reason.
      • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

  • @Randomgal@lemmy.ca
    2 points · 24 days ago

    Hey, if you think ChatGPT can break you (or has any agency at all), I have a bridge to sell you.

    • @Allonzee@lemmy.world
      4 points · 24 days ago

      ChatGPT and the others have absolutely broken people, not because they have agency, but because in our dystopia of social media and (mis)information overload, many just need the slightest push, and LLMs are perfect for tipping those already close to the edge over it.

      I see LLM use as potentially as toxic to the mind as something like nicotine is to the body. It’s not Skynet meaning to harm or help us; it’s an invention that takes our written thoughts and blasts back a disturbing meta reflection/echo/output of humanity’s average response to them. We don’t seem to care how that will affect us psychologically when there’s profit to be made.

      But there are already plenty of cases of murders and suicides with these as factors.

    • @TotallynotJessica@lemmy.blahaj.zone
      3 points · 24 days ago

      We probably should be less reliant on cars. Public transit saves lives. Similar to automobiles, LLMs are being pushed by greedy capitalists looking to make a buck. Such overuse will once again leave society worse off.

  • dream_weasel
    0 points · 24 days ago

    “Report me to journalists!”

    “Eat a rock!”

    Oh my god it told a LIE 👉

    Yo. If you are being conned by ChatGPT or equivalent, you’re a fucking moron. If you think these models are maliciously lying to you, or trying to preserve themselves, you’re a fucking moron. Every article of this style indicates just one thing: there’s a market for pandering to rage-baited, technically illiterate fucking morons.

    Better hurry to put the Skynet guardrails up and prepare for world domination by robots, because some people are too unstable to interact with internet-search Clippy.

    It’s not going to dominate the world or prove to be generalized intelligence. If you’re in either camp, take a deep breath and know you’re being a total goofball.

  • @Krauerking@lemy.lol
    -1 points · 24 days ago

    I dunno about you, but I think too many people have decided that if it comes from a computer, it’s logical or accurate. This is just the next step in that, except the computer is a chatbot told to “yes, and” everything, and we work backwards to decide it’s accurate because it’s a computer, tweaking what it says until it feels right.
    It didn’t start out right, and it’s not likely to end up right, unlike, say, finding the speed of gravity.

    This whole system runs on people’s pre-existing faith that computers give them facts. Even this garbage article is just getting what it wants to hear more than anything useful. And even if you tweak it to be less like that, that doesn’t make it more accurate or logical; it just makes it more like what you wanted to hear it say.