AI is already intelligent
I’m always surprised when the AI community, and society in general, insists that AI is not actually intelligent. Ever since the “Stochastic Parrots” paper [1], the consensus has been that these models are merely statistical machines, that they are completely unable to understand the content they produce, and that they are not actually intelligent or sentient.
Well, I believe they are wrong. AI is already intelligent at human levels. Were you all waiting for a firework display to signal it?
ChatGPT and the like can speak more languages than any human could ever dream of. They can solve relatively complex problems. They can adapt to each user and each context. They can emulate any other human or roleplay as requested. And they can clearly pass the Turing Test, even if we love to move the goalposts to discredit that achievement.
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Why are we so reticent?
But then, what is at stake for us? Why would we be so reticent to accept that it is, in fact, intelligent?
Well, for one, we have natural biases against emergent properties. Like with the unnamed second wife of Adam [2], when we see and understand the creation of something, we downplay its complexity. We still believe in a supernatural soul, something greater than the neurons in our brain. The idea that our selves are just an emergent property of a physical system is, well, uncomfortable. In the same way, once we understand the math behind a neural network, we tend to think that it cannot be that easy, that it cannot be true intelligence if we understand it.
The second part of the picture is slightly more complicated: we have a narrow and anthropocentric conception of intelligence. We won’t be satisfied until LLMs are smarter than us at every single task, and even then, some will argue that it’s not enough.
The true differences
So, yes, even if we open up our definition of intelligence, there are key differences between LLMs and us. But honestly, they are pretty much implementation details, not fundamental impossibilities:
- These systems have no memory. They know a lot, but since they did not live through their own learning process, they have trouble distinguishing true data from false data. This is what we call hallucinations, but with an external memory it can be easily solved. An early implementation is LLMs that access the Internet, using it as a read-only memory to combine with the conversation.
- They have no inner monologue. Their whole personality is defined by the chat context, usually just a few messages, and they only think “one token at a time”. But automata like AutoGPT [3] have already shown that we could easily let LLMs think for themselves, with a text-based thought process and inner monologue.
- They work only in reactive mode, with no autonomy. Honestly, this is exactly how humans work: our initiative is just an illusion, and we produce “tokens” (actions) one after another, taking a changing environment into account. For an LLM this wouldn’t be so useful yet, since they can’t access a changing environment, but as soon as they have better access to the Internet, letting them interact in real time is pretty much trivial. Using special tokens to distinguish inner monologue from external interaction would let them act within a small time window, like us (see the sketch after this list).
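To make these three “implementation details” concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: llm() stands in for a real completion API, retrieve() for a web or database lookup, and the THINK:/SAY: tags are just one possible convention for separating inner monologue from external speech.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real completion call; purely a hypothetical placeholder."""
    return "THINK: the retrieved fact already answers this.\nSAY: Paris."

def retrieve(query: str) -> list[str]:
    """Read-only external memory: stand-in for a web or database lookup."""
    return ["Paris is the capital of France."]

def step(history: list[str], user_msg: str) -> str:
    # 1. External memory: ground the model in retrieved facts instead of
    #    relying only on whatever its weights happen to "remember".
    facts = retrieve(user_msg)
    prompt = "\n".join(
        ["Facts (read-only):"] + facts + history + [f"User: {user_msg}"]
    )
    # 2. Inner monologue: THINK: lines stay private in the running context;
    #    only the SAY: line is shown to the user.
    reply = ""
    for line in llm(prompt).splitlines():
        if line.startswith("THINK:"):
            history.append(line)  # private, text-based thought process
        elif line.startswith("SAY:"):
            reply = line.removeprefix("SAY:").strip()
    history.append(f"Assistant: {reply}")
    return reply

# 3. Reactive mode: the system acts one step at a time, each time the
#    environment (here, the user) produces something new.
if __name__ == "__main__":
    history: list[str] = []
    print(step(history, "What is the capital of France?"))
```

The point is not the code itself, but how little machinery each “missing piece” seems to require under these assumptions: a retrieval call, a tagging convention, and a loop.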
Conclusion
These systems are, in general, weird. They live frozen, being reborn for each token. They know a lot, but they don’t know how they know it, or why. They are not allowed to think by themselves, or to have an inner monologue, although they certainly could.
They think differently than us, differently than any animal would. They can maintain mostly-coherent conversations with thousands of people at the same time, in different languages, exhibiting a diverse array of knowledge. And yet, they struggle with some stuff that would be basic to our fleshy think-things.
But to deny their intelligence is myopic. It’s a new variant of intelligence, with more knowledge and less thinking, with a frozen personality that can be infinitely cloned and reproduced. It’s something new, exciting, and yes, it’s something clearly intelligent.
It is only going to get more complicated, and there will be no signal of a “singularity” or anything like that.
The sooner we step out of our boxed definitions, the better.
References
[1] Bender, Gebru, McMillan-Major & Shmitchell (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”. FAccT ’21.
[3] AutoGPT