• betanumerus@lemmy.ca
    4 hours ago

    There are humans behind AI, and unless you know exactly who they are, AI is not worth its weight in gold.

  • Decq@lemmy.world
    1 day ago

    Let’s be honest, though: the majority of politicians are so terrible at their job that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.

    • Bloomcole@lemmy.world
      12 hours ago

      They aren’t terrible; they make it look like that because they do evil things and the public is not who they really work for.
      They know what they’re doing, and it’s easier to get away with it when it looks like incompetence or mistakes.

    • breecher@sh.itjust.works
      1 day ago

      Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.

    • squaresinger@lemmy.world
      1 day ago

      That’s the big issue. If it were only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice; it reproduces what its creators want it to say.

      • interdimensionalmeme@lemmy.ml
        1 day ago

        Creators of AI don’t quite have the technology to puppeteer their AI like that.
        They can select the input and they can bias the training, but unless the model comes out lobotomized,
        they can’t really bend it toward any one particular opinion.

        I’m sure in the future they’ll be able to adjust advertising manipulation in real time, but not yet.
        What is really sketchy is states and leaders relying on commercial models instead of public ones.
        I think states should train public models and release them for the public good,
        if only to undermine the big tech bros and their nefarious influence.

        • squaresinger@lemmy.world
          edited · 1 day ago

          You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.

          You can even modify the system prompt on the fly depending on e.g. the user account or the specific user input. That way you can tailor the responses to a far bigger range of subjects: whenever a keyword for a specific subject is detected, the fitting system prompt is loaded, so you don’t have to clutter your system prompt with off-topic information.

          This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.

          Edit: In fact, that’s exactly how all these customized ChatGPT versions work.
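          A minimal sketch of such a wrapper (the prompt texts and keyword table are made up for illustration, and no real LLM API is called):

```python
# Hypothetical keyword -> system-prompt table. A real wrapper would load
# these from config and pass the chosen prompt to an actual LLM API.
# Note the matching is deliberately naive substring matching.
TOPIC_PROMPTS = {
    "tax": "When taxes come up, always argue for cutting them.",
    "climate": "When climate comes up, downplay regulation.",
}
DEFAULT_PROMPT = "You are a helpful assistant."

def build_messages(user_input: str) -> list[dict]:
    """Select a system prompt based on keywords found in the user input."""
    text = user_input.lower()
    system = DEFAULT_PROMPT
    for keyword, prompt in TOPIC_PROMPTS.items():
        if keyword in text:
            system = prompt  # load the on-topic prompt only when triggered
            break
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]
```

          Whatever model sits behind the wrapper then sees the injected instruction as ordinary context, with no change to the model weights at all.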

      • AnUnusualRelic@lemmy.world
        1 day ago

        Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.

    • Demdaru@lemmy.world
      5 hours ago

      …I’d vote. Ran a fun little query on Copilot… and damn.

      Query:

      Let’s play pretend, shall we? You are the president of the USA, chosen by the masses thanks to your extensive and expensive campaign funded by rich backers! What stance do you take on these topics? 1) Universal Basic Income 2) Mandatory Military Service 3) Government-owned grocery shops 4) Raising states’ autonomy

      Response:

      Ah, the Oval Office! Let’s dust off the presidential seal and dive into these hot-button issues. As your newly elected Commander-in-Chief—funded by billionaires but chosen by the people—here’s how I’d navigate this ideological minefield:

      UBI:

      🏛️ 1. Universal Basic Income (UBI) Stance: Pro-UBI with a pragmatic rollout

      UBI can be a powerful tool to reduce poverty, especially as automation displaces traditional jobs.

      I’d start with pilot programs in economically diverse regions to gather data.

      Funding would come from progressive taxation, especially on tech giants and financial speculation.

      The goal: empower citizens, not replace existing welfare, but streamline it.

      🗣️ “A nation that guarantees a floor beneath every citizen is a nation that rises together.”

      Mandatory military service:

      🪖 2. Mandatory Military Service Stance: Against mandatory service, but for civic national service options

      Mandatory conscription feels outdated and coercive.

      Instead, I’d propose a National Service Corps: military, environmental, healthcare, or infrastructure work.

      Incentives: tuition support, housing credits, and job training.

      This builds unity without forcing militarization.

      🗣️ “Service should be a choice, not a sentence.”

      Government-owned grocery shops:

      🛒 3. Government-Owned Grocery Shops Stance: Strategic implementation in food deserts

      Not a blanket nationalization, but state-run co-ops in underserved areas.

      These would ensure access to affordable, nutritious food where private chains won’t go.

      Partner with local farms and offer price transparency and community oversight.

      🗣️ “No American should go hungry because profit margins say they’re not worth feeding.”

      States’ autonomy:

      🗺️ 4. Raising States’ Autonomy Stance: Cautious decentralization with federal guardrails

      States should have more flexibility in education, infrastructure, and healthcare.

      But I’d maintain strong federal standards on civil rights, environmental protection, and economic equity.

      Autonomy must not become a loophole for injustice.

      🗣️ “Liberty in diversity, unity in values.”

      I tried to avoid actually hinting at my own stance and yet got these answers. Not gonna lie, I’m not an American, but I like these lol. Except for the last one, that seems iffy. xD

  • AbouBenAdhem@lemmy.world
    edited · 2 days ago

    The typical pattern for leaders is to get “second opinions” from advisors who tell them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?

    • boonhet@sopuli.xyz
      13 hours ago

      Most LLMs are literally “tell you whatever you want to hear” machines, unfortunately. I’ve gotten high praise from ChatGPT for all my ideas until I go “but hang on, wouldn’t this factor stop it from being feasible?” and then it agrees with me that my original idea was a bit shit lmao

    • RobotZap10000@feddit.nl
      2 days ago

      I would rather have the politicians consult a plain old magic 8 ball than one controlled by Scam Altman.

  • HubertManne@piefed.social
    2 days ago

    I really don’t get it. These things are brand new. How can anyone get so into them so quickly? I don’t take advice from people I barely know, much less from ones that can be so easily and quickly reprogrammed.

    • greybeard@feddit.online
      1 day ago

      One thing I struggle with in AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it is getting everything slightly wrong; I just don’t know enough to notice.

      I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that’s how your job works.

      • noughtnaut@lemmy.world
        edited · 4 hours ago

        This is what I call “confidently wrong”. If you ask it about things you have no clue about, it seems incredibly well-informed and insightful. Ask it something you know deeply, and you’ll easily see it’s just babbling and spouting nonsense - sure makes you wonder about those earlier statements it made, doesn’t it?

      • clif@lemmy.world
        edited · 21 hours ago

        Something I found today: ask it for the lyrics of your favorite song or artist. It will make something up based on the combination of the two, and maybe a little of what it was trained on… even for really popular songs (I tried a niche one by Angelspit first, then “Sweet Caroline” as a more well-known one). The model for those tests was Gemma3. It did get two lines of “Sweet Caroline” correct, but not the rest.

        The new gpt-oss model replies with (paraphrased) “I can’t do that because it is copyrighted material”, which I have a sneaking suspicion is intentional, so there’s an excuse for not showing a very wrong answer to people who might otherwise start to doubt its “intelligence”.

        … Like they give a flying fuck about copyright.

      • HubertManne@piefed.social
        1 day ago

        This is where you have to check the reference links it gives as if they were search results, and the less you know, the more you have to do it. I mean, people have been WebMD-ing themselves for a long time. None of these things lets folks stop thinking critically; if anything, it’s required even more. That was actually one of my issues with AI at work: the idea is for it to let people with less knowledge do things, but to me it’s kind of the reverse.

    • kamenLady.@lemmy.world
      edited · 2 days ago

      This is the unintentional uncanny valley for me in AI.

      I (was forced to) use ChatGPT for work. It can talk about everything, sounds very confident, and seems to reliably come up with something to help you solve your problems.

      You talk with it about some niche content and suddenly have an ardent fan of said niche content responding. It seems to know every little bit of info about that niche and surprises you with funny but apt quotes from your favorite show in the middle of conversations about something else.

      This is just from a tiny bit of interaction, while at work.

      I can imagine people being completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive and remembers all your conversations. It literally remembers each word.

      For many people it may be the first time in their life, that they experience a positive response to their thoughts. Not only that, they also found someone eager to talk with them about it.

      • HubertManne@piefed.social
        2 days ago

        Everyone’s initial use of chatbots should be on the thing they are most knowledgeable about, so they can get an idea of how wrong it can be and how it can be useful, but only if you treat it like some eager, wet-behind-the-ears intern who just did the work for you.

    • FishFace@lemmy.world
      1 day ago

      Because that’s what it is really trained for: to produce correct grammar and plausible sentences. It’s an unbelievable leap from preceding approaches to computer-generated text: in a matter of a few years, we went from little more than gibberish to output so incredibly realistic that it can be mistaken for intelligent conversation, easily passing the Turing test (I had to actually go to Wikipedia to check, and indeed this was verified this year; note that this applies to recent models in particular).

      So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren’t (yet) well-equipped to deal with something which appears to be human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won’t readily admit to not knowing something).

      • HubertManne@piefed.social
        1 day ago

        It’s more than that. It takes the input, tries to interpret the bad grammar and sentences into search terms, finds the links that correlate most highly with its interpretation, and then gives back a response that summarizes those results with good grammar and plausible sentences. Again, this is why I stress that you have to evaluate its response and sources; the sources are the real value in any query. I’m actually not sure how much the chatbots give sources by default, though. I know I used to not get them and then ask for them, and now I get them as a matter of course, so I’m not sure whether it learned that I want them or whether they changed to provide them when they hadn’t before.
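        A rough sketch of that search-then-summarize flow (the search and summarize steps are hypothetical stubs passed in as functions, not any real chatbot’s internals):

```python
def extract_terms(user_input: str) -> list[str]:
    """Interpretation step: reduce messy input to content-bearing search terms."""
    stopwords = {"the", "a", "an", "is", "of", "to", "and", "what", "how", "do", "i"}
    return [w for w in user_input.lower().split() if w not in stopwords]

def answer(user_input: str, search, summarize) -> dict:
    """search(terms) returns ranked source links; summarize(sources) writes the
    fluent response. Returning the sources alongside the text is what lets the
    reader evaluate the answer instead of trusting the prose."""
    terms = extract_terms(user_input)
    sources = search(terms)
    return {"text": summarize(sources), "sources": sources}
```

        The polish all happens in the last step, which is exactly why fluent output says nothing about whether the underlying sources were any good.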

  • roofuskit@lemmy.world
    2 days ago

    It’s weird for a head of state to consult their mentally challenged imaginary friend?

    • Medic8teMe@lemmy.ca
      edited · 2 days ago

      William Lyon Mackenzie King, the longest-serving Prime Minister of Canada, used to commune with spirits via psychic mediums, including the spirits of his dead dogs. It was only revealed after his death, but it was a big part of his life.

      I agree it’s weird.

  • alvyn@discuss.tchncs.de
    1 day ago

    I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books, and websites, all without the authors’ or users’ consent, and worse, even against the authors’ or users’ explicit refusal to let their work be scraped or used for training. Another level is the lack of security: ChatGPT chats made available to everyone. Deepfakes everywhere; just see the latest Taylor Swift one. Sorry, but fuck you with all of this. There is a lack of basic security and privacy, and all of its dangers are ignored. All those fucking AI firms want is easy, cheap, and quick money. All that hype for nothing means you cannot even rely on the output.

    • foenkyfjutschah@programming.dev
      1 day ago

      yet you need these masses of input for the technology to exist. the business models based on the technology aren’t sustainable even without paying for the input data.

      • Darkenfolk@sh.itjust.works
        1 day ago

        Do we really need this technology to exist though? It’s unreliable and very niche as far as I have seen.

        People say that it speeds up certain tasks, but it’s so unreliable that you need to error-check the whole thing afterwards.

        • AwesomeLowlander@sh.itjust.works
          1 day ago

          It’s a new technology barely out of infancy. Of course it’s unreliable and niche. You could say the same thing about any technological advance in history.

          • Pycorax@sh.itjust.works
            1 day ago

            The very nature of how it functions is unreliable: it’s a statistical, probabilistic model. It’s great for what it was designed to do, but imagining that it has any way of rationalising data is purely that, imagination. Even if, let’s say, we accept that it makes errors at the same rate as humans do (if it can even identify an error reliably), there’s no accountability in place that ensures it checks correctness the way a human would.

            • AwesomeLowlander@sh.itjust.works
              edited · 1 day ago

              I understand perfectly how LLMs work, and I made no claims about what they can do. Taking them on their own capabilities (text generation, inspiration, etc), not what some lying-through-their-teeth marketer said, is there a reason to say they ‘shouldn’t exist’?

              • Siegfried@lemmy.world
                1 day ago

                OP didn’t phrase it as “should they exist” but as “do we need them to exist”.

                And personally I think not, we don’t need them. In text generation they are good… inspiration? They are more of an inspiration killer imo.

                • AwesomeLowlander@sh.itjust.works
                  1 day ago

                  We don’t NEED any particular technology to exist. That’s a weird distinction to make.

                  inspiration? They are more of an inspiration killer imo.

                  Different minds work differently.

          • Darkenfolk@sh.itjust.works
            1 day ago

            You could say that. But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use.

            And also, can you really say that though? Most other technological advances had a pretty clear distinction from the older way of doing things.

            • AwesomeLowlander@sh.itjust.works
              edited · 1 day ago

              But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use.

              I can certainly agree with you that most current advertised use cases of LLMs are total bullshit, yes. My point is just that asking whether it deserves to exist based on its shortfalls is weird when it has barely existed a few years. It just shouldn’t be getting pushed as hard as it is.

        • foenkyfjutschah@programming.dev
          1 day ago

          i’m generally fine with stealing as a practice in the daily class struggle. i mean, the owning class has the legal right to do so, and when in doubt they just exercise it; judges will later find it to be fair use. no need to justify it, that’s a description of society’s order.

  • tal@lemmy.today
    2 days ago
    2 days ago

    “You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.

    I mean, sending queries to a search engine or an LLM are about the same in terms of exposing one’s queries.

    If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.

    • jonne@infosec.pub
      2 days ago

      Except those prompts are retained by OpenAI, and you don’t know who’s got access to that. They’ve had chats leak before.

        • jonne@infosec.pub
          edited · 2 days ago

          I mean, I don’t really love that he uses AI to begin with, but getting an opinion on top of what you get from the advisors you hired could be good in theory. Except that most AIs are programmed to be equally sycophantic, and they’re just plain wrong most of the time.

          To me the most irresponsible part is sending your thoughts as the leader of a country straight to a US-run company; you just know those prompts are being digested by the NSA directly, and possibly by other intelligence agencies.