• Ulrich@feddit.org · 1 day ago (edited)

    Pretty sure you can get it to say just about anything with the right prompts.

    People need to stop acting like Grok is a sentient being.

      • notarobot@lemmy.zip · 1 day ago

        I don’t think so. Answers are statistical, so anything can come up. Yet whenever someone gets something odd, it gets reported like the AI is the president.

        • Biyoo@lemmy.blahaj.zone · 1 day ago

          Statistical doesn’t mean it can’t spit out what they want.

          They can train or fine-tune the AI to praise Hitler, they can alter the default prompt to steer it toward a more right-wing slice of the training data, or they can add filters that retry when the answers aren’t what Musk expects (see the sketch below)…

          There are a ton of ways to get fascist output from an AI.
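
          The retry-filter idea in particular is trivial to build. Here’s a toy sketch in Python (nothing below is a real xAI API; generate() and passes_policy() are hypothetical stand-ins for whatever the operator actually runs):

          ```python
          import random

          # Hypothetical stand-ins, NOT real xAI APIs: generate() samples one
          # statistical output, passes_policy() is the operator's acceptance check.
          def generate(prompt: str) -> str:
              return random.choice(["bland answer", "answer the operator dislikes"])

          def passes_policy(answer: str) -> bool:
              return "dislikes" not in answer

          def filtered_answer(prompt: str, max_retries: int = 5) -> str:
              """Resample until an output clears the operator's filter."""
              answer = generate(prompt)
              for _ in range(max_retries):
                  if passes_policy(answer):
                      break
                  answer = generate(prompt)  # each retry is a fresh random sample
              return answer

          print(filtered_answer("tell me about history"))
          ```

          The outputs stay “statistical”, but the distribution the user actually sees is whatever survives the filter.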

          • notarobot@lemmy.zip · 1 day ago

            Look at that. It was you who didn’t understand the word, so much so that what you just said doesn’t contradict what I said.

            Yes, an AI can be tuned to praise Hitler. But I think it’s more likely that someone got a fascist output by chance, or that they purposely prompted it to produce a fascist output and then went “OMG, I can’t believe it produced a fascist output.”

            I’m not defending Musk or Grok. My basis for that statement is that “let’s report an AI output” is a pattern you see with every AI.

            • Biyoo@lemmy.blahaj.zone · 1 day ago (edited)

              I see, I did misread your comment.

              You meant something like: it’s not more racist than before, it’s just a random fascist output that got blown out of proportion.

              And there are bound to be fascist outputs, since it’s statistical and there is fascism in the training data.

              So I have no idea; I don’t use Grok, and I’m not sure whether they edited their AI further in that direction recently or not.

            • EvilBit@lemmy.world · 12 hours ago

              You replied to “It was modified to make these kinds of responses much more likely” with the rebuttal that it’s “statistical”. When something has a varying degree of likelihood, that’s exactly what statistical means.
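
              A toy numerical sketch of that point (the numbers are made up and have nothing to do with Grok’s actual weights): nudging a single logit turns a 50/50 coin flip into a heavily loaded one, while the output remains “statistical” throughout.

              ```python
              import math

              # Made-up numbers, purely illustrative: two possible outputs,
              # initially equally likely under the unmodified model.
              logits = {"praise": 0.0, "criticize": 0.0}
              logits["praise"] += 2.0  # hypothetical fine-tuning / prompt nudge

              total = sum(math.exp(v) for v in logits.values())
              probs = {k: math.exp(v) / total for k, v in logits.items()}
              print(probs)  # 'praise' is now ~88% likely: still random, no longer neutral
              ```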

        • e$tGyr#J2pqM8v@feddit.nl · 1 day ago (edited)

          I am all in favor of hating Musk and all his products, but I think you’re right. It seems rather unlikely that they would instruct their LLM to go out of its way to give fascist replies. That’s not to say that it shouldn’t be instructed not to give fascist output, which apparently it hasn’t been.

          Sadly, people increasingly form their view of the world based on the output of LLMs, so it would be helpful if these LLMs helped create worldviews that are beneficial to humanity at large, or at the very least prevented ones that are evidently harmful. Which raises the question: who is to decide what is helpful and what is harmful? Musk’s answer is probably “freedom of speech, who is to say that we can’t spoonfeed hate to little children”, which seems to me an example of ideas of freedom turning into nihilism. But where they’re right is that government should also not be the one telling people how to view the world. It’s people who should tell government, and the reverse, though perhaps well intended, is itself rather dangerous.

          I think the solution, as per usual, is to free it all up: make FOSS LLMs and let people choose the limitations they deem proper. I would certainly not want my kids on “freedom of speech”-style unrestricted AI, just like I don’t want some amoral nihilist as my kids’ school teacher. I want someone who teaches them love, kindness, forgiveness, harmony, honesty, sincerity, etc.