The Coming AI Wave
Critical reflections on AI’s promises and risks — and Suleyman’s containment strategy.
Let’s take a break from the conundrums of the economy and talk about Artificial Intelligence. After all, AI promises to boost productivity and thereby solve the multiple economic challenges that bedevil policymakers and all of us. And if AI does deliver on its loftiest promises it might make economics as we know it obsolete: if it delivers true abundance — unlimited availability of whatever we might need — then trade-offs will no longer matter, and without trade-offs economics is dead.
I’ve just read The Coming Wave, by Mustafa Suleyman, cofounder of DeepMind and Inflection AI. It seemed like a good opportunity to resume my tradition of sharing thoughts on selected books — you can find previous reviews here. Suleyman is one of the people who spearheaded the latest AI renaissance: DeepMind developed AlphaGo, the AI that defeated the world’s best human players at Go, the most complex board game on earth. His book discusses the benefits and threats of AI, and suggests a strategy for “containment”, i.e., a way to keep the technology under control and harness its benefits while limiting the risks. Containment is the crucial challenge, because Suleyman does believe that advances in AI and synthetic biology could threaten human extinction.
One caveat up-front: I love good writing, and the writing in The Coming Wave really grated on me, which undoubtedly colors my view of the book. The prose is overwrought and overenthusiastic, it never uses fewer than ten words where three would suffice, and too many sentences sound like boilerplate corporate marketing. Every noun seems to deserve at least three adjectives; superlatives and metaphors multiply at digital speed.
The litany of breathless superlatives ends up in stark contrast with the technology’s so-far pedestrian achievements, especially because Suleyman makes it sound like every life-changing, AI-driven achievement is just around the corner: a matter of a few years at most, maybe months.
For example, Suleyman posits it will take just three years for AI to reach human-level performance — across the board, I assume. This prediction requires either an overly positive take on the speed of AI’s development, or an overly negative view of human intelligence — perhaps the latter would be more justifiable.
But the book does offer some good food for thought.
Are you sure you want to hear the answer?
The big promise is that AI will soon help us tackle the really big issues humanity is struggling with, such as climate change. A very exciting prospect, but here are a few complications the book ignores. Who decides which key challenges get outsourced to the AI? The list could include population aging, poverty, public health, pollution, world peace and many more. Who decides which get priority? This matters if unpleasant trade-offs refuse to die and progress in one area requires compromise in another. Should the AI itself decide? And what happens if we do not like the AI’s solutions? Our societies have developed a growing reluctance to accept sacrifices of any kind, and it takes a big leap of faith to assume that the AI’s solutions will be painless and to our liking.
Intelligence’s baggage
Suleyman asserts with great conviction that AI will soon equal and surpass human intelligence. He makes a very clean argument: neural network models like ChatGPT are built to mimic the structure of the human brain. And, Suleyman observes, the human brain has a fixed and limited size: close to 100 billion neurons forming about 100 trillion connections. Since AI models are instead rapidly growing in size, they will soon surpass human abilities. The underlying assumption is that human intelligence is the product of a physical “machine” — our brain. If we can replicate the machine — albeit in software rather than wetware — the result should be indistinguishable from human intelligence.
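Suleyman’s size argument is essentially arithmetic, and a back-of-the-envelope sketch makes its assumptions explicit. Only the brain figures below are the ones cited in the book; the model size and growth rate are hypothetical illustrations, not Suleyman’s numbers:

```python
import math

# Figures cited in the book: the brain's size is fixed.
BRAIN_NEURONS = 100e9    # ~100 billion neurons
BRAIN_SYNAPSES = 100e12  # ~100 trillion connections

# Hypothetical model size and growth rate, purely for illustration
# (not figures from the book):
model_params = 1e12      # a trillion-parameter model
growth_per_year = 10     # assumed 10x parameter growth per year

# Years until parameter count matches the brain's connection count,
# under the assumed growth rate:
years_to_parity = math.log(BRAIN_SYNAPSES / model_params, growth_per_year)
print(f"Model is at {model_params / BRAIN_SYNAPSES:.0%} of brain synapse count")
print(f"Years to parity at {growth_per_year}x/year: {years_to_parity:.1f}")
```

The arithmetic shows why the conclusion hinges entirely on the assumed growth rate, and on the deeper premise that parameter count is the right yardstick for intelligence in the first place.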
I am quite open-minded as to whether AI will become intrinsically identical to human intelligence. But if it does, will all our messy emotions come with the package? They, too, are manifestations of physical processes inside that same brain. Messy emotions like ours can be heavy baggage, with a considerable impact on how important decisions are made, as we know all too well. We are manifestly unable to take full advantage of our rational thinking — will a human-like artificial intelligence do any better?
The puppeteer is back
The puppeteer problem returns. Suleyman argues that the power of large language models lies in their ability to learn everything by themselves, to discover, invent and be creative with absolute freedom. However, they learn directly only from the internet, and so we inevitably come to the issue of bias. Here the book says that AI models “will casually reproduce and indeed amplify the underlying bias and structure of society, unless they are carefully designed to avoid doing so.” I see two major problems here. First, an intelligence that casually reproduces society’s biases and structures does not sound like the kind of intelligence that will discover, invent and be creative with absolute freedom. Quite the contrary. Second, carefully designing the AI so that it will not reproduce bias conjures the image of a puppeteer indoctrinating the AI with the “right” views. Topics where non-consensus views are already treated as wrongthink include climate change and pandemic policy, so you can see how indoctrinating the AI might limit the scope for life-changing benefits.
You can’t have it both ways: either this superintelligence moves with absolute freedom and we trust it to do better than our biased selves, or it just parrots us and needs to be re-educated, in which case it is unlikely to show unrestrained creativity.
Can’t have it both ways — even with AI
Some of the book’s suggested steps toward containment are worth taking seriously. I particularly like the stress on the need for “makers”, that is, AI experts and practitioners, to take the lead in putting innovation on the right track, because nobody else can do it as well.
Others, though, I find naive, and they point to another glaring contradiction in the book. Suleyman argues that the power of AI and synthetic biology comes from their becoming cheap, easy to use and widely accessible. That is also what makes them uniquely dangerous: soon anybody will be able to unleash a world-ending pandemic. So the proposed solution often amounts to ring-fencing the technology, limiting access and controlling its use. For example, Suleyman argues that every DNA synthesizer in the world — including “household use” ones — should be connected to a secure central system that scans for pathogenic sequences. Wouldn’t that be nice. By that logic we could have eradicated the dark web already. Again, you cannot have it both ways: you cannot get the benefits of universally available technologies with the safety of tightly controlled ones.
Skim, don’t read
Overall, I found the book disappointing. If you like skimming books and are interested in AI, you might enjoy flipping through this one; and the ten commandments for containment will give you a good rundown of the possible steps that are being contemplated to manage the risks of AI. But I did not find it particularly novel or insightful — at best an echo of the breathless enthusiasm and panic that we see everywhere else.