• RememberTheApollo_@lemmy.world
    1 day ago

    If someone said they got a second opinion from a physician known for being wrong half the time, wouldn’t you wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong information. Why give it any level of trust in national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.

    • Perspectivist@feddit.uk
      1 day ago

      I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

      The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.