• HubertManne@piefed.social · 2 days ago

    I really don’t get it. These things are brand new. How can anyone get so into them so quickly? I don’t take advice from people I barely know, much less from ones that can be so easily and quickly reprogrammed.

    • greybeard@feddit.online · 1 day ago

      One thing I struggle with about AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it is getting everything slightly wrong; I just don’t know enough to notice.

      I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that’s how your job works.

      • noughtnaut@lemmy.world · 3 hours ago

        This is what I call “confidently wrong”. If you ask it about things you have no clue about, it seems incredibly well-informed and insightful. Ask it something you know deeply, and you’ll easily see it’s just babbling and spouting nonsense - sure makes you wonder about those earlier statements it made, doesn’t it?

      • clif@lemmy.world · 20 hours ago

        Something I found today: ask it for the lyrics of your favorite song by your favorite artist. It will make something up based on the combination of the two and maybe a little of what it was trained on… even for really popular songs (I tried a niche one by Angelspit first, then tried “Sweet Caroline” for something more well known). The model for those tests was Gemma3. It did get two lines of “Sweet Caroline” correct, but not the rest (one way to rerun the test is sketched at the end of this comment).

        The new gpt-oss model replies with (paraphrased) “I can’t do that because it is copyrighted material”, which I have a sneaking suspicion is intentional, so there’s an excuse for not showing a very wrong answer to people who might start to doubt its ““intelligence”” when it’s very clearly wrong.

        … Like they give a flying fuck about copyright.
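
        If you want to rerun that lyric test yourself, here’s a minimal sketch. It assumes Gemma3 is being served by a local Ollama instance on its default port; the commenter doesn’t say how they ran the model, so the server, the model tag, and the prompt are all assumptions to adjust for your own setup.

        ```python
        # Minimal sketch of the lyric test, assuming a local Ollama server on its
        # default port with a "gemma3" model pulled; adjust the tag to what you run.
        import json
        import urllib.request

        def ask(prompt: str, model: str = "gemma3") -> str:
            """Send one non-streaming request to Ollama's /api/generate endpoint."""
            payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        # Compare the output against lyrics you actually know.
        print(ask('Quote the first verse of "Sweet Caroline" by Neil Diamond.'))
        ```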

      • HubertManne@piefed.social · 1 day ago

        This is where you have to check out the reference links it gives, as if they were search results, and the less you know, the more you have to do it. I mean, people have been WebMD-ing themselves for a long time. None of these things allows folks to stop thinking critically; if anything, it’s required even more. This was actually one of my issues with AI and work: the idea is for it to allow people with less knowledge to do things, and to me it’s kind of the reverse.

    • kamenLady.@lemmy.world · 2 days ago

      This is the unintentional uncanny valley for me in AI.

      I (was forced to) use ChatGPT for work. It can talk about everything, sounds very confident, and seems to reliably come up with something to help you solve your problems.

      You talk with it about some niche content and suddenly you have an ardent fan of said niche content responding. It seemingly knows every little bit of info about that niche and surprises you with funny but apt quotes from your favorite show in the middle of conversations about something else.

      This is just from a tiny bit of interaction, while at work.

      I can imagine people being completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive, and that remembers all your conversations. It literally remembers every word.

      For many people it may be the first time in their lives that they experience a positive response to their thoughts. Not only that, they’ve also found someone eager to talk with them about it.

      • HubertManne@piefed.social · 2 days ago

        Everyone’s initial use of chatbots should be on the thing they are most knowledgeable about, so they can get an idea of how wrong it can be and how it can be useful, and learn to treat what it gives you like work some eager, wet-behind-the-ears intern just did for you.

    • FishFace@lemmy.world · 1 day ago

      Because that’s what it is really trained for: to produce correct grammar and plausible sentences. It’s an unbelievable leap from preceding approaches to computer-generated text; in a matter of a few years, you went from little more than gibberish to text so incredibly realistic that it can be mistaken for intelligent conversation, easily passing the Turing Test. (I had to actually go to Wikipedia to check and, indeed, this was verified this year; note that this in particular applies to recent models.)

      So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren’t (yet) well-equipped to deal with something which appears to be human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won’t readily admit to not knowing something).

      • HubertManne@piefed.social · 1 day ago

        It’s more than that. It takes the input, tries to interpret the bad grammar and sentences into search terms, finds the links that correlate the highest with its interpretation, and then gives back a response that summarizes the results with good grammar and plausible sentences. Again, this is why I stress that you have to evaluate its response and its sources; the sources are the real value in any query. I’m actually not sure how much the chatbots give sources by default, though. I know there was a time when I didn’t get them and had to ask for them, and now I get them as a matter of course, so I’m not sure whether it learned that I want them or whether they made a change to provide them when they hadn’t before.
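
        To make that flow concrete, here’s a toy sketch of the rewrite-search-summarize loop described above. The tiny index, the term-overlap ranking, and the string-join “summary” are all illustrative stand-ins, not any real chatbot’s pipeline; real systems use an actual search backend and a language model for the rewrite and summary steps.

        ```python
        # Toy sketch: rewrite the input into search terms, rank links by how well
        # they correlate, then return a summary plus the sources worth checking.
        TOY_INDEX = {
            "https://example.org/rag": "retrieval augmented generation grounds answers in retrieved documents",
            "https://example.org/llm": "large language models are trained to produce fluent plausible text",
            "https://example.org/webmd": "people have long self diagnosed symptoms from webmd search results",
        }

        def rewrite_query(user_input: str) -> list[str]:
            # Stand-in for the model turning messy input into clean search terms.
            return [w.lower().strip("?.,!") for w in user_input.split() if len(w) > 3]

        def search(terms: list[str], k: int = 2) -> list[tuple[str, str]]:
            # Crude term-overlap ranking: "the links that correlate the highest".
            scored = sorted(TOY_INDEX.items(),
                            key=lambda item: -sum(term in item[1] for term in terms))
            return scored[:k]

        def answer(user_input: str) -> str:
            hits = search(rewrite_query(user_input))
            summary = " / ".join(text for _, text in hits)  # stand-in for the fluent summary
            sources = "\n".join(url for url, _ in hits)     # the part worth checking yourself
            return f"{summary}\n\nSources:\n{sources}"

        print(answer("Are language model answers plausible or grounded in retrieved documents?"))
        ```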