The Intelligence Race
Humans vs machines — a book review
The race for Artificial Intelligence is also a race to understand how human intelligence works. Max Bennett’s A Brief History of Intelligence is a great read on both — and on the race between the two.
Bennett, a software entrepreneur with a deep knowledge of evolutionary neuroscience, traces the rise of intelligence from well before humans came onto the scene, and maps it to our increasingly sophisticated attempts to create AI. In both cases, success hinges on starting from the simplest features and building up. For humans, it all started some 600 million years ago, when simple organisms like nematodes developed the ability to steer, directing their movement for a purpose — like the Roomba, the vacuum-cleaning robot.
The book lays out a clear evolutionary explanation: intelligence develops to enable behavior that increases the chances of survival. Therefore, it is linked to emotions like craving, satiation, fear. We learnt to move towards a reward and away from a threat. Towards food, away from an enemy.
Then, gradually, the brain learnt to predict that certain actions will likely lead to desirable consequences: dopamine responds to changes in the predicted reward, the biological equivalent of temporal difference reinforcement learning in AI. Nature developed it 500 million years ago. I don’t mean to belittle the impressive advances in AI, but we should not underestimate what nature is capable of.
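The dopamine analogy can be made concrete with a minimal sketch of a TD(0) update. The function name and parameters below are illustrative, not from the book; the prediction error `delta` plays the role of the dopamine signal.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a reward prediction.

    delta is the reward prediction error: positive when the outcome
    beats expectations, zero once the reward is fully predicted.
    """
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

# An agent repeatedly receives a reward of 1.0 in a terminal state
# (next_value = 0). Its prediction climbs toward the true reward,
# and the "dopamine" response fades as the reward becomes expected.
value = 0.0
for _ in range(200):
    value, delta = td_update(value, reward=1.0, next_value=0.0)
```

After enough repetitions the prediction converges to the actual reward and `delta` shrinks toward zero, mirroring the well-documented fading of dopamine responses to fully predicted rewards.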
Consider the following:
Programmers usually avoid the problem by freezing the AI systems after they are trained. We don’t let AI systems learn things sequentially; they learn things all at once, and then they stop learning.
The problem Bennett refers to here is catastrophic forgetting: when you train a neural network to recognize a new pattern or to perform a new task, it forgets what it previously learned. Therefore, an AI model can’t keep learning new tricks once it’s been released into the wild. To put things in perspective, even the simplest vertebrates like fish avoid catastrophic forgetting, and we do not yet understand how they do it.
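A toy model shows the effect. Below, a single-parameter linear "network" is trained on one task and then, sequentially, on a conflicting one; the second round of training overwrites the first. All names and numbers are illustrative, a deliberately minimal caricature of what happens at scale.

```python
def train(w, data, lr=0.5, epochs=50):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

task_a = [(1.0, 2.0)]    # task A: learn y = 2x
task_b = [(1.0, -2.0)]   # task B: learn y = -2x (conflicts with A)

w = train(0.0, task_a)                  # w converges near 2.0
error_a_before = abs(w * 1.0 - 2.0)     # task A error: near zero

w = train(w, task_b)                    # sequential training on B
error_a_after = abs(w * 1.0 - 2.0)      # task A error: large again
```

After training on task B, the model's error on task A balloons: nothing in plain gradient descent protects the old weights, which is why, as Bennett notes, deployed systems are usually frozen after training.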
Similarly, convolutional neural networks are still very poor at recognizing the same object from a different perspective (the invariance problem). Fish do it effortlessly. To be fair, though, your goldfish can’t pass the bar exam, and ChatGPT can.
Imagine if…
Another crucial leap in the development of intelligence is the ability to simulate and imagine, developed by nature about 100 million years ago. Since neurons work faster at warmer temperatures, being a warm-blooded creature gives you an evolutionary advantage in terms of developing intelligence (I’ll keep that in mind next time I run into an alligator…). Curiosity and the rewards of surprise are a powerful engine of learning — one of my favorite insights in the book. Here the discussion becomes fascinating. For example, Bennett reminds us that
You don’t perceive what you actually see. You perceive a simulated reality that you have inferred from what you see.
We perceive fragmented signals with our senses, and our brain fills in the gaps. That’s how we can understand a conversation in a noisy environment or over a poor phone connection. And in terms of the mechanics of it, remembering is the same as simulating a past reality — helpful for rewriting history, and a dangerous enabler of implanted memories.
Consider this:
Causation is constructed by our brains to enable us to learn vicariously from alternative past choices.
The roads not taken.
The book offers a near-philosophical discussion on whether causation can ever be established, but the bottom line is that we have an evolutionary need to see causation everywhere. We can see it reflected in our tendency to weave stories to make sense of the world around us, religion included.
Simulating and imagining are followed by mentalizing (10-30 million years ago), the ability to model one’s own mind and visualize the intent of others. Then comes speaking. Going back to the comparison with AI, Bennett makes an interesting observation:
One of the reasons why the neocortex is so good at what it does might be that, in some ways, it is less general than our current artificial neural networks.
Our neocortex might be so powerful because it is less general than current neural networks and it works on explicit, narrow assumptions about the world. The unanswered question is how the neocortex learns these assumptions. And the intriguing implication is that we might have it all backwards as we try to build more and more complicated and general AI models.
Key takeaways:
The book highlights how much we still do not understand about the brain and human intelligence. For example, “Perhaps the motor cortex doesn’t generate the motor commands but rather motor predictions,” because unlike humans, cats with a damaged motor cortex can move just fine. Similarly, language is our superpower, but its origin remains “the hardest problem in all of science”.
There is a circularity in our efforts to understand human intelligence as we build the artificial kind. We started from a very limited understanding of how our brain works and tried to replicate it in silicon. Then we developed more sophisticated AI techniques, and now we might be attributing to our intelligence some of the features we have built in AI.
There are still fundamental differences between AI and human intelligence. With LLMs, Bennett notes, “the massive size of these models, along with the astronomical quantity of data on which they are trained, in some way obscures the underlying differences between how large language models think and how humans think”. We still need simulating and mentalizing for AI to approximate what we do.
Steering, reinforcement learning, simulating and imagining, mentalizing, speaking. Every animal intelligence breakthrough builds on previous ones, over hundreds of millions of years. It would seem safe to assume that our own intelligence will keep developing. Indeed, Bennett argues, the invention of AI is the new breakthrough, part of the evolution of our own intelligence, one that will unshackle intelligence from the physical human limitations. Here I would counsel caution: Our physical characteristics are inextricably linked to our intelligence, so seeing them as limitations might be looking at it in an entirely wrong way.
A final, philosophical point: does it even make sense to envision the existence of abstract intelligence for its own sake? Where does the drive come from? The book started from the observation that intelligence developed as a mechanism for successful survival. Once survival is no longer an issue, would intelligence itself continue to exist? Bennett says, “Even evolution itself will be abandoned.” Does it imply that Richard Dawkins’ selfish gene would commit suicide? That seems counter-intuitive and contrary to the entire premise of the book. Still, an excellent text, very much worth reading.