That’s a misconception about how LLMs work. It’s how SF authors imagined AI would work.
LLMs won’t give you logical solutions to your problems — they’ll give you the essence of whatever training data is statistically likely to be associated with the words in your prompt. And since they’re usually trained on the enshittified internet, well, you get what you paid for.
An early AI was once asked, “Bob has a headache. What should Bob do?” And the AI replied, “Bob should cut off his own head.”
The point being: AIs will give you logical solutions to your problems but they won’t always give you practical ones.
Except they won’t always give you logical answers either.
Yes, eating one small rock a day is logical.