Iris' AI research

Iris Dominguez-Catena research website

AI is already intelligent

I’m always surprised by the AI community, and by society in general, when they insist that AI is not actually intelligent. Ever since the “Stochastic Parrots” paper [1], a consensus has formed that these models are simply statistical machines, that they are completely unable to understand the content they produce, and that they are not actually intelligent or sentient.

Well, I believe they are wrong. AI is already intelligent at human levels. Were you all waiting for a firework display to signal it?

ChatGPT and the like can speak more languages than any human could ever dream of. They can solve relatively complex problems. They can adapt to each user and each context. They can emulate any other human, or roleplay as requested. And they can clearly pass the Turing Test, even if we love to move the goalposts to discredit this achievement.

If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

Why are we so reluctant?

But then, what is on the line for us? Why would we be so reluctant to accept that it is, in fact, intelligent?

Well, for one, we have natural biases against emergent properties. Like with the unnamed second wife of Adam [2], when we see and understand the creation of something, we downplay its complexity. We still cling to the belief in a supernatural soul, something greater than the neurons in our brains. The idea that our selves are just an emergent property of a physical system is, well, uncomfortable. In the same way, when we understand the mathematics of a neural network, we tend to think that it cannot be that easy, that it cannot be true intelligence if we understand it.

The second part of the picture is slightly more complicated: we have a narrow and anthropocentric conception of intelligence. We won’t be satisfied until LLMs are smarter than us at every single task, and even then, some will argue that it’s not enough.

The true differences

So, yes, even if we open up our definition of intelligence, there are key differences between LLMs and us. But honestly, they are pretty much implementation details, not impossibilities:

Conclusion

These systems are, in general, weird. They live frozen, reborn for each token. They know a lot, but they don’t know how they know it, or why. They are not allowed to think for themselves, or to have an inner monologue, although they certainly could.
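That “frozen, reborn for each token” property can be made concrete with a toy sketch. This is not a real language model — the lookup table, function names, and tokens below are all invented for illustration — but it captures the shape of the thing: the weights never change, each call is stateless, and the only “memory” is the growing context the model is handed back.

```python
# Toy illustration of a frozen, stateless generator. The dict stands in
# for the model's immutable weights; nothing here is a real LLM.
FROZEN_WEIGHTS = {
    (): "the",
    ("the",): "duck",
    ("the", "duck"): "quacks",
}

def next_token(context):
    """A pure function: same context in, same token out, every time."""
    return FROZEN_WEIGHTS.get(tuple(context), "<eos>")

def generate(prompt, max_tokens=10):
    context = list(prompt)
    for _ in range(max_tokens):
        token = next_token(context)   # the model is "reborn" on each call
        if token == "<eos>":
            break
        context.append(token)         # only the context accumulates state
    return context

print(generate([]))
```

Everything the system “experiences” between tokens is whatever got appended to `context`; the model itself carries nothing forward.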

They think differently than us, differently than any animal would. They can maintain mostly-coherent conversations with thousands of people at the same time, in different languages, exhibiting a diverse array of knowledge. And yet, they struggle with some stuff that would be basic to our fleshy think-things.

But to deny their intelligence is myopic. It’s a new variant of intelligence, with more knowledge and less thinking, with a frozen personality that can be infinitely cloned and reproduced. It’s something new, exciting, and yes, it’s something clearly intelligent.

It is only going to get more complicated, and there will be no signal of a “singularity” or anything like that.

The sooner we step out of our boxed definitions, the better.

References

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 FAccT 2021.

[2] Adam’s three wives

[3] AutoGPT