Artificial Intelligence: Between Armageddon And Promised Land
Let’s take a hard look at the value AI delivers, and let’s safeguard our own intelligence, creativity and critical thinking: they will remain our most powerful tools for a very long time.
Much ado about Artificial Intelligence these days. I have been fascinated by the topic for a long time, so I wanted to offer some reflections.
The current debate oscillates between biblical warnings of doom and visions of an economic promised land. Two recent articles, which appeared back to back in the Wall Street Journal, illustrate this dichotomy of views. In this past weekend’s edition, Peggy Noonan notes that Apple’s iconic logo, the apple with a bite taken out, echoes the Fall and the expulsion from the Garden of Eden. The collective tech subconscious, she argues, knew that hubris would be our downfall, a hubris plain to hear in the tech visionaries’ claims that they are creating a god-like superintelligence.
The Journal’s tech guru, Andy Kessler, argues that all we need is a “kill switch” we can flip at the first sign of trouble, a bit like Elon Musk’s rocket, which was ordered to self-destruct once it started spinning in a worryingly unplanned way. A kill switch, Kessler opines, is easy to build, whereas attempts to regulate AI would negate potentially enormous benefits out of sheer superstitious fear of the unknown.
Intelligence, what art thou?
The most fascinating aspect of the AI journey is that it forces us to realize how little we understand intelligence itself.
We understand a lot about computations — we’ve built calculators of superhuman mathematical prowess. But intelligence?
Is human intelligence just a supremely sophisticated form of computational ability? Or is there something more?
From its very beginning, AI research has tried to understand how our intelligence works and to build processes that can mimic it, or at the very least mimic its results. It started with computational rules; it then tried to replicate the way the human brain learns, with neural networks and machine learning.
We have reached some remarkable results. ChatGPT has everyone in awe, but I have long been much more impressed by image-processing software like Adobe Photoshop: the way it can recognize different features of an image, down to strands of hair, is breathtaking.
This has raised deep questions: how do we learn? How do we make decisions? How do we set goals and priorities?
If it looks like a brain and it ticks like a brain…
You see, once we conclude that learning is nothing but a sophisticated computational exercise, and that a machine can master it, it becomes a lot harder to argue that the same machine will never be able to set its own goals and make its own decisions. We like to think that making decisions and setting goals is something fundamentally different, but we can’t be sure, and we cannot prove it.
Taking the argument to the extreme, if we could build a perfect silicon replica of the human brain, is there any reason to think it would not behave in a way indistinguishable from a human? After all, everything we do, the way we look at the world around us, the love, fear and pain we feel, they all map one-to-one to physical processes, to chemical and electrical impulses. That’s why drugs can alter our moods.
In this mechanistic view of the world, if we build the “right” machine it should be indistinguishable not just from a human brain but from a human, and therefore potentially able to formulate its own goals. Except, of course, it could be immensely more powerful, much as calculators and chess engines are. And life has higher stakes than a game of chess. Hence the dystopian predictions of a “singularity,” where a superhuman AI will start building a more powerful version of itself, which will then build a yet more powerful one, and so on, until machines take over the world.
If this is what AI can be — and we can’t rule it out — assuming that all we need is a kill switch is farcical: we may simply not be able to create one.
The economic promised land
Why take this big risk? Well, the potential benefits of AI are, in theory, phenomenal.
Think of the many ways in which technology has made our lives better: improvements in health care, travel, energy. (Pretend for a moment it has not given us social media). Digital-industrial innovations are set to gradually boost productivity and economic growth. AI could enhance all these benefits by orders of magnitude, and bring many more.
As an economist, I often dream of an AI that could take over economic policymaking and save us from the amateurish, catastrophic mistakes we keep making over and over again…
With a true Artificial General Intelligence, able to tackle any problem rather than just excel at narrow specialized tasks, we could defeat any disease, stabilize the climate, ensure strong and equitable economic growth, enforce world peace, you name it.
Surely all this is worth a little risk? After all, we’ve managed to keep all the powerful technologies developed so far under control. Somehow we’ll manage to build a kill switch. And maybe we don’t even need one: maybe we can build a benevolent AI.
Are we there yet?
How close are we to the promised land, or to self-destruction?
ChatGPT can deliver some clever performances but, as its creators admit, it can make big mistakes and even make stuff up. It can be a powerful research assistant if you learn how to use it: give it very detailed prompts, offer several rounds of feedback, and double-check its results.
But the fact that it makes stuff up and is unable to distinguish reality from fiction is rather disturbing. It is also inevitable: the AI learns from our online world, and in that world, to name but one example, most media channels create their own “narratives,” distorting facts for the sake of some political or social agenda. No wonder the AI gets confused. If you wish to rely on it for something important, say a medical diagnosis or building a rocket, its inability to tell fact from fiction is good reason for caution.
And before we get too excited about the progress made so far, let’s ask a very basic question: has AI solved any important hard problem that has so far eluded humans?
Found a cure for cancer? Discovered a new source of abundant clean energy? Mapped a way to address climate change?
No, no and no.
We are not there yet. We are not even close.
Groucho Marx’s AI
The utopian and dystopian views of AI both envisage that we can create something as awe-inspiring as human intelligence, and perhaps more so.
Here is a more sober way of looking at it: if we can build it, maybe it’s not that awe-inspiring after all. If intelligence is just a very complex computational system that we’ll be able to replicate, then maybe there’s less to it than meets the eye.
Maybe the AI will be as bumbling and error-prone as we are.
Groucho Marx memorably said he would never join a club that would have him as a member.
Maybe we should never put our future in the hands of an intelligence that even we can build.
Three lessons
Even if we cannot build a true General AI, we could still end up doing a lot of damage:
We might further cripple our own cognitive abilities as we surrender to the temptation of easy answers, without worrying too much about their quality.
We might put too many decisions on automatic pilot, in the hands of an AI that is not yet clever enough, especially if in some fields this becomes a competitive race and the AI confers a speed advantage; applications of AI to warfare are especially troubling in this respect.
I would sum this up in three lessons:
1. AI research should push forward, and it inevitably will. It is part of an important quest to build ever more powerful tools. We should continue to ponder extreme risk scenarios because, however fanciful, we simply cannot rule them out. This will be a fascinating journey into the nature of intelligence and creativity, provided we don’t just ask the AI to provide all the answers.
2. We should maintain some perspective and not get overly excited about the progress we are making: it is only as remarkable as the value it delivers, and a recitation on socks in the style of the Declaration of Independence does not bring much value. Similarly, Armageddon is still more likely to come from gain-of-function research on some new virus in some lab somewhere.
3. Most importantly, we should continue to safeguard, train and bolster our own intelligence, creativity and critical thinking, because they will remain our most powerful tools for a very long time. Artificial intelligence is no match for human stupidity.


