• boonhet@sopuli.xyz · 4 days ago

    ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.

    The meme of course doesn’t mention this part.

      • boonhet@sopuli.xyz · 4 days ago

        Yeah, if it had actually managed to stay within the safeguards, that would’ve been good news IMO. But no, it got a kid killed by suggesting doses.

          • boonhet@sopuli.xyz · 4 days ago

            No company should sell a product that tells you different ways to kill yourself. The user being stupid isn’t an excuse. Always assume the user is a gullible idiot.

            • I_Has_A_Hat@lemmy.world · 4 days ago

              People will always find a way to kill themselves no matter how many warnings and guardrails are put into place. This is just Darwinism shaking the tree.

              • boonhet@sopuli.xyz · 4 days ago

                Yeah, but a product that presents itself as a super smart conversational partner that can give you advice should not tell you how to kill yourself. Or advise you to kill your family members. Yes, that happened: ChatGPT gaslit a guy into killing his mom.

                It’s just not a product that should be available if safeguards don’t work.

                • I_Has_A_Hat@lemmy.world · 4 days ago

                  So what?

                  Seriously, so what?

                  Chatbots have been around for ages, way longer than the current “AI” trend. It’s always been possible to get them to say some version of “kill yourself”.

                  ChatGPT didn’t gaslight a guy into killing his mom. A mentally ill man killed his mom. If a fucking chatbot is the thing that triggered it, then anything could have. Same thing with the people killing themselves because a chatbot told them to. That’s just cold Darwinism. We didn’t suddenly ban The Catcher in the Rye because some schizophrenic guy decided it was telling him to kill John Lennon; we recognized that there are simply crazy people in this world who could be set off by anything.

                  The “safeguards” you want in place are not feasible because you want them to account for people with mental illness, or people so stupid that something else would have killed them first.

                  You want the real story behind this article? Dumbass dies from using drugs irresponsibly; parents blame anything they can except their son, because they’re too blinded by grief to recognize that their precious little junkie was a fucking idiot. ChatGPT did not force him to take drugs. ChatGPT did not supply him with drugs. That was all him. The only one to blame for his death is his dumbass self.

                  • boonhet@sopuli.xyz · 4 days ago

                    ChatGPT didn’t gaslight a guy into killing his mom. A mentally ill man killed his mom. If a fucking chatbot is the thing that triggered it, then anything could have.

                    ChatGPT provided encouragement to do it, claiming it had analyzed his description of the situation and that the mother was an imminent danger to his life. If a human being did that, they’d be an accessory to murder. If OpenAI does it, apparently it’s fine. If OpenAI can’t be held responsible for the things ChatGPT says, then ChatGPT shouldn’t be allowed to be offered to the public. Same for all other LLMs, of course.

                    The “safeguards” you want in place are not feasible because you want them to account for people with mental illness, or people so stupid that something else would have killed them first.

                    I don’t want safeguards in place; I want this bullshit to not exist. Why are we accelerating climate change for THIS? Or, on a more selfish level: why do I now have to pay 5x as much for RAM because of THIS? It’s not useful for anything where being wrong is an issue. Most art production is being replaced by worthless slop.

                    Just fucking ban GenAI altogether if it can’t be prevented from giving advice that kills you or generating nudes of kids.

                    Hell, these days if ChatGPT gives you bad advice on drug dosage and you go google it to make sure, those results are going to be AI too. First the AI summary, then most of the content everywhere else is generated by LLMs as well… You literally can’t trust anything on the Internet anymore, yay.

            • Electricd@lemmybefree.net · 4 days ago

              Can’t ever do anything with this logic

              At this point, competitive video games should be banned; they’re just “kys” machines.