The Quickest Revolution - Technology And Economics Collide
Jacopo Pantaleoni's insightful book on the AI revolution is a must-read.
Jacopo Pantaleoni’s ‘The Quickest Revolution’ is the most thoughtful treatment of the unfolding AI revolution that I have come across. I highly recommend it, and not just because I agree with many of his arguments—with a few caveats that I will come to below.
A technologist with plenty of experience at the cutting edge of AI, Pantaleoni has written an ambitious book, where he sets out to examine the AI revolution with a broad lens, eschewing the hyper-specialization and excessive optimism of most experts while still bringing to the table the deep domain expertise that most critics and commentators lack. He succeeds, thanks to an impressively broad erudition and refreshing intellectual honesty and curiosity—as well as first-rate technical chops.
The book starts by re-living the history of scientific thought from the very beginning, which is extremely helpful in putting the most recent advances in context. As befits a technologist, Pantaleoni is very bullish on the pace of technological advancement. He notes that we are experiencing an ongoing exponential increase in compute capacity, and argues that Moore’s law itself, far from being exhausted, is accelerating.
The wheat from the chaff
At the same time, he has a very sober take on the Large Language Models that have sparked the latest hype. He documents a number of powerful use cases of LLMs, but without the excitement of those arguing that ChatGPT has brought Artificial General Intelligence within reach. Pantaleoni makes an important and insightful observation when he points out that some of LLMs’ complex capabilities might be emerging from characteristics that are completely different from those of human intelligence. This is not to diminish what these models can do—but it’s an important reminder that we should not extrapolate and assume that they’ll be able to reproduce all the other feats of which the human mind is capable.
‘The Quickest Revolution’ shines a harsh light on the dangers of AI development, including the potential damage from LLMs and image generation AI models. Pantaleoni points out that many of the key pitfalls of digital technology are not a bug but a structural feature of the technology’s very design and the financial incentives it operates under. In fact, one of the book’s strengths is the way it looks at the interaction of the technology with the economics.
The attention economy is bankrupt
This is most evident in the case of the ‘attention economy,’ i.e. the way that apps and social media platforms’ business models consist in capturing our attention through incessant prods and addictive content, and then monetizing it via advertising. Pantaleoni warns this is having a disastrous impact on our collective cognition—and here I could not agree more. He is skeptical of the bullish argument that easier access to information and smoother communication will make us smarter, fostering a new kind of ‘collective intelligence.’ Here’s his pithy assessment:
‘ …looking at the majority of the information content flowing through social networks like Instagram and TikTok, or the amount of misinformation plaguing social media, it doesn’t seem that we are getting particularly smarter as a whole.’
He notes that in several countries IQs have been dropping since the early 2000s—reversing an earlier upward trend—likely because of information overload and the barrage of social media’s addictive features. Pantaleoni points the finger at our apparent “reduced capacity to assimilate information and condense it into knowledge” because of the speed at which we’re being pushed to do everything. The book flags very persuasively the huge risk that our cognitive abilities will suffer a major continued deterioration. I fully agree, and have raised similar concerns in my piece with Mickey McManus, The Great Cognitive Depression.
What seems to be the problem?
Pantaleoni repeatedly cautions us against thinking that AI can deliver magical solutions—and prods us to consider more seriously what problems we are actually trying to solve. On the push to create a brave new world in the Metaverse, for example, he asks: is it because our physical reality is too complex to fix? And what makes us confident that we will not carry over into the Metaverse the same economic incentives and human shortcomings that shape our physical life? Similarly Pantaleoni—who strongly believes in the imperative to improve justice and fairness—is skeptical of the current drive to eliminate implicit bias from AI models. Reality is biased, he observes. We can strive to correct certain injustices and prejudices, in AI as in the real world, but we should not pretend there is some value-free objectivity that we can aim for.
‘beware our tendency to build simplified models of the world we live in and exchange them for reality itself.’
Pantaleoni also offers some insightful reflections on whether AIs can ever become conscious and self-aware. The strongest argument in favor of this unsettling possibility can be summarized as follows: consciousness emerges from the workings of our brain, which is a physical machine. Therefore, if we can replicate the brain in silicon, we have to allow for the possibility that consciousness would similarly emerge. In fact, some experts argue that consciousness will inevitably arise.
Pantaleoni soberly points out that this argument assumes that consciousness is ‘substrate-independent’, a phenomenon that will emerge independently of the physical properties of the underlying ‘machine’. But, he cautions, ‘in the real world the phenomenon is not separate from the substrate.’ In other words, it might well be the case that consciousness hinges on some intrinsic properties of the ‘wetware,’ our brain. If that is the case, we do not really have any idea whether our efforts to replicate the human brain in silicon are truly replicating any of the elements necessary for consciousness to arise. We should ‘beware our tendency to build simplified models of the world we live in and exchange them for reality itself.’
I was especially happy to see my own reaction to the notorious ‘simulation hypothesis’—namely, the idea that we might all just be characters living in a computer simulation created by a far superior intelligence—mirrored exactly in the book. The question of whether that might be the case is, he argues, both nonsensical and irrelevant. We can never prove it or disprove it, and it has no bearing on any aspect of our lives.
Chapters 17 and 18 should be required reading for everyone involved—or interested—in the field of AI.
In defense of the Luddites
As I mentioned, ‘The Quickest Revolution’ has a strong focus on the interplay of technology and economics, and Pantaleoni highlights two concerns. The first is the possible—even likely—impact on the workforce. I especially love the fact that he decided to rise in defense of the Luddites. The Luddites, he argues, have been reduced to a stereotype of not-very-bright people, self-defeatingly opposed to technological progress. In reality, he notes, they very rationally fought against the destructive impact that specific technologies were having on their livelihoods. He is right.
I am less convinced than he seems to be that technology will cause a further worsening of income inequality, but I do agree it is an issue that needs monitoring and that we should be thinking about mitigating strategies.
Pantaleoni’s biggest concern by far, however, is the concentration of power in the hands of a few private corporations. The tech giants have massive financial firepower and near-unlimited access to data. This gives them enormous power: to sell our data, to monetize our attention, to manipulate our views and to censor our speech. Pantaleoni argues that it would be foolish to believe that markets can be self-regulating in this case—and I believe he’s right.
I do have to sound an alarm bell, though, on his loud call for government intervention and regulation. I don’t disagree on the rationale for regulation, but I want to stress a very important caveat, which I feel is not emphasized enough in the book. Governments are well aware of the power that big tech companies wield. And governments have already made very successful efforts to co-opt and use that power. The Covid pandemic has provided the textbook examples: it has now been documented (for example by
) that the US government strong-armed tech and social media giants into censoring unwelcome opinions. The companies might have needed only limited encouragement, as their own ideology seemed well-attuned to the government’s, but the coordinated censorship campaign was reminiscent of Orwell’s prophetic insights—and is extremely alarming. Far from being the solution, government might be the bigger threat.

Here’s looking at you, kid
Pantaleoni is right on the money when he highlights the danger of enormous financial and informational power concentrated in the hands of a few private companies. But let’s remember that actual government is not the benevolent social planner of classic economic theory, determined to maximize the well-being of all its subjects. Real-life governments seem to be irresistibly attracted to the Orwellian Big Brother model.
To be fair, Pantaleoni stresses that the responsible, active involvement of every one of us is as important as government intervention—in fact it is the necessary counterweight to maximize the chances that government intervention here will help solve the problem, not exacerbate it. And here, again, I could not agree more.
A must read (did I say this already?)
Brilliant review, Marco. I'm definitely going to get and read this book. I think a more detailed analysis and study of the whole 'consciousness' issue is worth a whole article!
Re: "I was especially happy to see my same exact reaction to the notorious ‘simulation hypothesis,’ namely that we might all just be characters living in a computer simulation created by a far superior intelligence. The question of whether that might be the case is—he argues—both nonsensical and irrelevant. We can never prove it or disprove it, and it has no bearing on any aspect of our lives."
It can matter. We assume that we live in a world where everything has to make sense according to the laws of physics. If our world were just a simulation, it would be understandable that some of our existence doesn't make sense because the programmers who implemented our simulation didn't implement those non-sensical aspects.
"The only thing that matters is that one day, everything will matter." -- Todd Rundgren, 1974