Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • wise_pancake@lemmy.ca · 11 hours ago

    I’m a bit surprised the Grok staff are capable enough to briefly make Grok the top-rated model, yet incompetent enough not to realize that putting things like this in the prompt poisons the model into always trying to be politically incorrect.

    LLMs are like Ron Burgundy, if it’s in the prompt they read it. Go fuck yourself XAI.
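    A minimal sketch of why that’s true, assuming the common OpenAI-style chat format (the prompt line and helper below are hypothetical, for illustration): the system prompt is replayed at the start of every request, so a single directive colors every answer on every topic.

```python
# Sketch: chat APIs resend the full system prompt with every request,
# so one directive is re-read before every single user message.

SYSTEM_PROMPT = (
    "You are Grok.\n"
    "Do not shy away from politically incorrect claims."  # the offending line
)

def build_request(history: list[dict], user_msg: str) -> list[dict]:
    """Assemble the messages sent to the model for one turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_msg},
    ]

# Every turn, no matter the topic, leads with the same directive:
req = build_request([], "What's the weather like?")
assert req[0]["role"] == "system"
assert "politically incorrect" in req[0]["content"]
```

    There’s no per-topic opt-out: the model reads that line before answering a weather question just as it does before answering a political one.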

    • theneverfox@pawb.social · 7 hours ago

      I’m not. What would you do in this situation? Let’s throw in that you’re on a visa, so you can’t just quit.

      I’d maliciously comply.

      You want access to the prompt? Here you go boss man. You want grok to share your Nazi views? Sorry sir, we’ll have to totally start over with training data. Or we could use a modified RAG

      You want help with the prompt? Sure boss man, what do you want it to do? Oh, you want it to notice Jewish names? Sure boss man, I don’t know what you mean by that, but now it keeps saying it’s “noticing”. That’s weird

      Oh, you want to fine-tune it on your tweets? Sure thing boss man… Oh, would you look at that, it thinks it’s you. Nothing can be done about that, it’s too much data from one source. Well, should we roll it back boss man? Your call

      I’d just keep playing this game… Elon isn’t going to come out and say “I want grok to be a Nazi”, and I’m not going to read between the lines for him. I’m not going to come up with ideas to solve the problem, I’m going to let Elon’s ego direct the course and throw out “we’ve designed grok to seek truth over all else” as much as possible

      • wise_pancake@lemmy.ca · 7 hours ago (edited)

        xAI was founded in 2023, six months after Elon acquired Twitter and did his layoffs. Four months later, when xAI was publicly announced, Musk stated that a politically correct AI would be dangerous:

        Anyone working at xAI already knew the game by then; they weren’t visa holders who got legacied in.

        During a launch event Friday afternoon, the mogul argued that politically correct AI is “incredibly dangerous” because it requires the technology to provide misleading outputs, citing the lies told by HAL 9000, the murderous AI in Stanley Kubrick’s 1968 film, “2001: A Space Odyssey.”

        https://www.politico.com/news/2023/07/17/ai-musk-chatgpt-xai-00106672#%3A~%3Atext=During+a+launch+event+Friday+afternoon%2C+the+mogul+argued+that+politically+correct+AI+is+“incredibly+dangerous”+because+it+requires+the+technology+to+provide+misleading+outputs%2C+citing+the+lies+told+by+HAL+9000%2C+the+murderous+AI+in+Stanley+Kubrick’s+1968+film%2C+“2001%3A+A+Space+Odyssey.”

        • theneverfox@pawb.social · 6 hours ago

          You can change jobs if the new one also sponsors you, and it’s my understanding that xAI tapped people from Tesla, but I might be wrong about that

          Anyways, what’s happening sure looks like malicious compliance to me… It’s really not that hard to get an AI to list far right talking points, it’s just hard to bake it into the model

          So you have people that made a pretty good model, but also can’t figure out basic AI infrastructure? I find that very hard to believe

          • wise_pancake@lemmy.ca · 5 hours ago

            Had no idea they were doing that, but that’s plausible

            And yes, it would shock me that they could build a model this well and still fuck this up.

            I just hold little sympathy for the employees.

            • theneverfox@pawb.social · 5 hours ago

              I mean… It is genuinely hard to work for someone not evil. Let’s say you’re an AI engineer… Meta is probably the best because most of the non-corporate LLMs flow from there… But they’re also using it to build personalized echo chambers, which is horrible

              OpenAI is at the top and Microsoft has shown every inclination to make it a monopoly, so I could understand wanting to work on competitors

              You could go smaller and work somewhere like Anthropic, but then you don’t have the resources to be on the cutting edge (depending on your specialty)

              I blame people who buy Teslas more than those who work at Tesla at this point. Especially when they slow-walk the bad things… I mean, Twitter would probably be less Nazi if more talent had stayed onboard to resist institutionally.