Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
I can mostly follow; I just want to exclude the last paragraph, which contains assumptions about a black box.
That being said, how is the human brain different from what you describe?
You think by processing the probabilistic associations between word sequences? Humans think through world models: we have imagination, a physical and metaphysical simulation of the world around us. Absolutely none of that is involved in how LLMs work. There’s a lot to be said for the utility of associating knowledge embedded in symbols, and having a magic book that can retrieve pre-existing information in context is incredibly useful; I think it will have an impact on the level of the printing press and the internet. But just because it’s incredibly useful at retrieving knowledge doesn’t mean it works anything like a human brain.
Sorry, I could have been more clear. I did not mean to equate current LLMs with human brains. The question was rather:
Can’t we describe the workings of (other) human brains in much the same fashion as you just did? Or where exactly is the difference that sets us apart?
AIs which can and need to interact with the physical world have world models, too. Naturally, an AI that is restricted to language has much less necessity and opportunity to develop them, much like our brain area for smell is probably not very good at estimating velocities and catching a ball.
I think your approach of demystifying the technology is valid and worthwhile. I’m just not sure it does what you perhaps think it does: highlight the difference from our intelligence.
We know the math and the mechanisms of how LLMs work. The only thing we don’t understand is the significance and capabilities of the probabilistic associations they ascribe to symbol sequences.
While we don’t know how a human brain works in detail, we do know how a human brain tackles problem solving because we’re sentient beings and we can be introspective about how we think through a problem.
We can look at how vectors flow through a neural network (remember, LLMs don’t even have a concept of words; they transform tokens into vectors and then build mathematical associations between those vectors, so it’s all numbers), and we can see from the data that there’s nothing resembling a world simulation in how it actually works.
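To make that concrete, here’s a minimal sketch of that token-to-vector pipeline. The toy vocabulary, the embedding size, and the random embedding matrix are all made up for illustration; this is not any real model’s internals, just the shape of the idea:

```python
# Illustrative only: the model never sees "words", only integer token IDs
# that are looked up as vectors, after which everything is arithmetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary: each token gets an integer ID.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
embedding = rng.normal(size=(len(vocab), 8))  # one 8-dim vector per token ID

tokens = [vocab[w] for w in ["the", "cat", "sat"]]  # text -> IDs
vectors = embedding[tokens]                         # IDs -> vectors

# The "association" between positions is just arithmetic on those vectors,
# e.g. a dot-product similarity like the one inside attention.
scores = vectors @ vectors.T
print(scores.shape)  # (3, 3) -- numbers all the way down
```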
Also keep in mind that the LLMs you interact with don’t even learn from your interactions; the data is all baked in at training time. If you turn the temperature of the LLM’s output generation down to zero, it will come up with the same, highest-probability answer every time. The more you learn about how they work under the hood, the clearer it becomes that there is no there there when it comes to sentience.
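You can see the temperature-zero determinism in a few lines. The logits below are made-up numbers standing in for a real model’s output scores:

```python
# Toy demo: at temperature 0 sampling collapses to argmax (greedy decoding),
# so the output is identical on every run; above 0, outputs vary.
import numpy as np

def sample(logits, temperature, rng):
    if temperature == 0:
        return int(np.argmax(logits))        # greedy: same answer every time
    probs = np.exp(logits / temperature)     # softmax with temperature
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([1.0, 2.5, 0.3])           # fabricated scores for 3 tokens
rng = np.random.default_rng(42)
print([sample(logits, 0, rng) for _ in range(5)])    # [1, 1, 1, 1, 1]
print([sample(logits, 1.0, rng) for _ in range(5)])  # varies between runs
```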
I will say that I do think the capabilities and significance of symbol association and pattern matching have been wildly underestimated. Word sequences need to follow a pattern to make sense, and if you stumble upon the right sequence of words, that sequence could be incredibly impactful, and it doesn’t really matter how you came up with it. If you were to pull words out of a hat at random, there’s an infinitesimally small chance that you’d get a sequence of words that happens to expose the secrets of the universe. LLMs improve on that immensely in that they use probability to reduce the sequence space to the set of word sequences that make sense; within that reduced space are generated sequences that may produce real value, and we can keep making that space more relevant and useful.
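Here’s a toy contrast between the words-out-of-a-hat baseline and a probability model that shrinks the space to sequences that make sense. The word list and bigram probabilities are invented for the sketch; a real LLM conditions on far longer contexts over a vastly larger vocabulary:

```python
# Toy contrast: uniform random draws vs. probability-weighted continuations.
import random

words = ["the", "cat", "sat", "on", "mat", "purple", "of"]

# Hat draw: every continuation is equally likely, sense or nonsense.
hat = [random.choice(words) for _ in range(5)]

# Made-up conditional distribution: next-word probabilities given the
# previous word, standing in for what an LLM learns at enormous scale.
bigram = {
    "the": {"cat": 0.6, "mat": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
}

seq = ["the"]
for _ in range(4):
    nxt = bigram.get(seq[-1])
    if not nxt:
        break
    choices, weights = zip(*nxt.items())
    seq.append(random.choices(choices, weights=weights)[0])

print("hat:  ", " ".join(hat))  # e.g. "of purple the sat mat"
print("model:", " ".join(seq))  # e.g. "the cat sat on the"
```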