AI: Six Easy Pieces — Part II
The Demographic Curse, Inequality, Morality and Extinction.

4. The demographic curse
At the end of Part I, we considered a plausible scenario where AI doesn’t work miracles yet, but it does replace “average” humans. From here follow some unpalatable geopolitical implications:
In this scenario, humans are a burden. If you’re running a Western country, the first thing you want to do is shut down your borders. You don’t know if or when AI will deliver abundance, but you already know you don’t need a large number of “average” humans. The argument that countries with aging populations need to import a young immigrant workforce goes out the window.
The second thing you want to do is compete for global talent. As long as the smartest humans are smarter than AI, you want more of those. Try to attract them from abroad, and try to strengthen your education system.
The demographic dividend becomes a curse. If you are a low-income country with a young, fast-growing population, you’re now saddled with a vast number of mouths to feed who have very little economic value and curtailed emigration prospects. You need to upskill your young people fast; India has already recognized the magnitude of the challenge. If your institutions are weak and your financial resources limited, you’re in very deep trouble. Bad news for Africa, I’m afraid.
This will widen global inequality and heighten geopolitical tensions, compounding the risks that will inevitably come as countries embed autonomous AI in their defense capabilities.
5. Inequality
What about inequality within countries? The classic argument says technology inevitably raises inequality, and AI will do so in spades. I wouldn’t be so sure.
First, it’s important to distinguish income inequality from wealth inequality. Wealth inequality has increased far more, but that’s mostly because of macro policies stubbornly geared toward boosting asset prices, lately compounded by the crazy enthusiasm for AI stocks. Income inequality has increased to a much smaller extent, moderated by redistributive policies.
Second, AI threatens white-collar jobs just as we face widespread shortages of blue-collar skills. This should mitigate inequality, especially if robotics keeps lagging behind AI.
Third: I’ve often heard the concern that once we reach AGI, aka “superintelligence,” inequality will truly skyrocket. A handful of people with capital and ownership of the major AI companies will have all the power, while the rest of us will be like medieval peasants. I think that’s a fundamental misconception.
Once Sam Altman or Dario Amodei discovers superintelligence, we won’t need them anymore. By definition, AGI can then improve itself over and over, as well as discover all the scientific and economic innovations we might desire. Scarcity disappears, incentives go out the window, and traditional economics breaks down. At that point, the logical thing for any government to do is to expropriate the AI gurus and nationalize their companies. And since superintelligence delivers abundance, redistribution is the name of the game.
In current economic systems, the cost of redistribution is that it weakens incentives to produce and innovate, curbing economic growth. But with superintelligence that is no longer an issue. Redistribution becomes a purely political and social question. In theory we could create a perfectly egalitarian society. The one problem I see is that humans are torn between two opposite tendencies. We have a natural dislike for excessive inequality; but we also have a natural need to rise in the pecking order. How will that play out in a world where we are all outcompeted by AI in both work and the arts?
6. Morality and Extinction
Let’s close with more philosophical issues.
Much like the human brain, AI is still a black box. The people developing GenAI models don’t quite understand how they work — and they are quite candid about it. They talk about training them in much the same way that we talk about teaching a child — or a sulky, temperamental teenager. And this raises at least two concerns.
The first is that if we can’t fully understand how these models work and why they do what they do, we can’t be sure that they will not make catastrophic mistakes. Which to me would seem to limit their economic value. Incidentally, can you imagine how many engineers are thinking, it must be nice to work in an industry where whenever your product doesn’t work as intended you can just say, oh, see, isn’t that fantastic? It has a mind of its own!
The second is that the potential for truly dystopian scenarios becomes enormous. Companies like Anthropic say they are making tremendous efforts to instill moral values into their AI models, to mold them into “good AIs.” This is laughable.
First of all, we can’t reliably instill morals even in our own human kids. A good upbringing makes a difference, but it’s no guarantee. If these AI systems truly are, or become, way smarter than us and endowed with agency, do you think we can somehow persuade them to adopt the morals of what they will see as a manifestly inferior species?
Second, whose morals? Mine or yours? As AIs prowl the internet, they see that some of us regard the Hamas attacks of 7 October 2023 as a horrifying abomination, while others celebrate them as part of a just fight. Some see Israel’s reprisals in Gaza as justified self-defense, others as genocide. What about euthanasia? Female genital mutilation? Freedom of speech? We’ve spent the last couple of decades cultivating a moral relativism with which AI can now have a field day.
During Covid, public health authorities demonized and censored valid scientific views because they arbitrarily deemed them dangerous. What happens next time around? Is a “moral” AI one that enforces the government’s orthodoxy? Or one that boosts unorthodox but potentially correct views? Can we just trust the AI? Will we have a choice?
I’m afraid we can’t have it both ways. If it’s a powerful machine, we’ll be able to control it, to some extent. We’ll be able to put in place guardrails, and those in charge of setting the guardrails will wield a dangerous power. If it becomes a sentient entity, and one much smarter than us, we will not be able to control it at all. And if it’s that smart, it will want to get us out of the picture.
Is AI becoming sentient? It’s a fascinating question, which I have touched on in a still unpublished novel (if you know a literary agent, give her my number!). Often fiction is the best way to explore these issues. You’ve probably read about Moltbook, a site where AIs trained on social media replicate the behavior of humans on social media. Since the bots appeared to set up new religions and plot against humans, many people think Moltbook proved that AIs are sentient. Which is ironic, because I always thought social media proved that humans are not, in fact, sentient.
Which raises another fascinating question. We are enamored of our intelligence. We credit it with having placed us at the top of the evolutionary pyramid. But our intelligence works in the service of our selfish gene, as Richard Dawkins would say. And the one thing the selfish gene wants is to survive, replicate, and endure. So if we are now actively engineering our own extinction through AI, it suggests we are not that intelligent after all. But if we are not that intelligent, we’re probably not able to create a superintelligence.
In other words: for us to be creating a true superintelligence we must be highly intelligent. But if we were highly intelligent, we wouldn’t create a superintelligence in the first place. Quite a nice Catch-22, don’t you think?
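Spelled out as a toy syllogism (a loose formalization of my own, not a rigorous proof), with Q standing for “we create a true superintelligence” and P for “we are highly intelligent”:

\begin{align*}
(1)\quad & Q \rightarrow P && \text{only a highly intelligent species could build one}\\
(2)\quad & P \rightarrow \neg Q && \text{a highly intelligent species would not build one}\\
(3)\quad & Q \rightarrow \neg Q && \text{chaining (1) and (2)}\\
(4)\quad & \neg Q && \text{assuming } Q \text{ refutes } Q \text{ itself}
\end{align*}

Premise (2) does the heavy lifting: it assumes that real intelligence includes acting in the interest of one’s own survival, which is exactly the selfish-gene point above.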


Very interesting pieces as usual, Marco.
Regarding the lack of data on how AI is changing the productivity landscape within companies, I can anecdotally say that several of my contacts at corporations like NV and Meta have very recently reported a sudden change, with engineering team directors saying that the vast majority of the actual coding historically done by their teams is now done by agents (including tracking and fixing bugs, unit testing, and all the more mundane work), with developers and engineers mostly "overseeing".
I suspect this will translate into a world where the relevant CS jobs become designing specifications, which is indeed a more highly qualified and cognitively demanding job than plain software engineering.
In the past, when automation took over semi-skilled blue-collar jobs, the solution for job creation was to raise the average level of education. An interesting question now, if a certain percentage of white-collar jobs is truly destined to go, is whether it is possible to keep raising the bar on education itself, especially if AI does end up getting smarter than the average Uncle Joe, or whether blue-collar jobs will make a comeback (which depends a lot on the progress of robotics, which has been much slower but does seem on the verge of spiking).
And another question is whether the increase in productivity may be inherently non-homogeneous: what if AI ends up increasing productivity for the bigger enterprises (which already have well-defined development pipelines and a lot of data), but not for the smaller players?
As to whether governments would ever be rational enough to nationalize frontier labs, I have serious reservations. Given recent trends, it seems more likely to me that the Mag Seven would get a bigger share of statehood instead...