Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
here’s my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.
whether it’s ai or not is a later issue.
He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.
… a bridge to sell you.
Don’t be naive.
i don’t have any reason to believe it, given the track record.
also, the second half of the problem is, of course, the information that comes back: what it is based on, and what influences that basis.
Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.
(Succinct response taken from a Reddit post discussing the topic)
“Horseshoe Theory is slapping ‘theory’ on a strawman to simplify WHY there’s crossover from two otherwise conflicting groups. It’s pseudo-intellectualizing it to make it seem smart.”
This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don’t offer a counterargument in your response, so I can only assume it’s along the lines of “technology is inevitable; would you have said the same about the Internet?” That is also a fallacious argument. But go ahead, give me something better if I assume wrong.
I can easily see why people would be furious that their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chatbot.
Furthermore, your insistence continues to push acceptance of AI onto those who clearly don’t want it, contributing to the anger we feel at having it forced upon us.
You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it.
From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.
You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.
If someone says they got a second opinion from a physician known for being wrong half the time, would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust in making national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.
I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.
The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.