AI's Double Blind
Policymakers are eager to launch policies to manage AI’s impact on the economy. They should take a deep breath and build optionality instead.
Artificial Intelligence dominated the discussions at Davos, and the International Monetary Fund stirred the debate with an interesting research paper and an issue of its Finance and Development magazine devoted to the economic impact of AI. The upshot: we have absolutely no idea how this will play out, but we must quickly put in place appropriate policies! You do see the problem here, right…?
Can it, will it…
The IMF authors introduce a novel angle: they assess not just which jobs AI will be able to do, but also which jobs it will be allowed to do. For example, while AI might soon be perfectly able to administer justice, most societies will probably feel uncomfortable with the idea — hence judges’ jobs look safe.
Previous studies have taken a more mechanistic approach: divide a job into its constituent tasks, see how many could be automated, and translate that into a probability that the job will be taken over by machines. I’ve never been quite convinced by this methodology: I think that if your job has nine tasks that could be automated but one crucial task that cannot, the probability that it will be taken over by a machine is zero, not 90% — though the job will undoubtedly change, augmented by technology. I have always taken mischievous pleasure in pointing out that one of the most quoted studies on “machines will take all the jobs”, by Frey and Osborne at Oxford (2013), ranked the profession of fashion models among the most at risk, with a 98% probability of being taken over by a bad (but presumably good-looking) robot.
Ministry of the bleeding obvious
Most of the IMF paper’s findings, however, simply highlight how little we know. The study divides jobs into three categories:
Low exposure to AI — if your job is washing dishes in a restaurant, AI should not be at the top of your concerns;
High exposure, low complementarity — if you are a telemarketer, AI will soon take your job (and very few people will feel sorry for you);
High exposure, high complementarity — if you are a surgeon or a judge, AI will be a powerful teammate, provided you learn how to use it.
This all sounds very sensible and pragmatic; and the IMF’s predictions are equally sensible, but shaded by enormous uncertainty, so that they all follow the classic “on the one hand, on the other hand” noncommittal protocol:
Higher-educated workers are the most exposed to AI, but could also benefit most from the complementarity; how this plays out will depend on how much companies invest, how quickly AI gets deployed, and whether workers have or can acquire the skills needed to team up with the machines. And of course, for jobs where the authors assume that society is not ready to leave the machine in charge, if society’s preferences change, the high complementarity could quickly disappear.
Since high complementarity jobs tend to carry higher salaries, if the complementarity effect dominates then AI will exacerbate income inequality; if instead AI ends up displacing higher-educated workers, it will reduce it. Again, we don’t know which way this will go.
If AI finally delivers on the promise of a productivity miracle, the resulting boost in economic growth might make everyone better off; but this again depends on how effectively the technology gets deployed at scale, and how quickly different sectors of the economy can change operational and management practices to take advantage of it. We don’t know.
Advanced economies, with a greater share of skilled cognitive jobs, seem both more exposed to AI and better placed to reap its benefits; or then again emerging markets might use it to leapfrog — the way that India has been deploying digital technologies to accelerate financial inclusion and make the social safety net a lot more efficient.
In the end, the most useful paragraph is perhaps the one detailing uncertainty, on page 5. And the authors conclude, with disarming candor, that “No-one knows for sure how the labor market as a whole and individual workers will be able to adjust.” Hard to disagree.
In a related article in the IMF’s Finance & Development magazine, MIT economists Erik Brynjolfsson and Gabriel Unger make similar points, though they astutely relabel the classic “on the one hand, on the other hand” as “forks.” In this case, the forks in the road mean that AI could lead to (Fork a) lower or higher productivity; (Fork b) lower or higher inequality; and (Fork c) lower or higher industrial concentration. In the same issue, Daron Acemoglu and Simon Johnson, also at MIT, agree that whether AI will increase productivity or not, and whether it will replace or augment human workers, are very much open questions.
Overall, these papers offer useful frameworks for thinking about the economic impact of AI, but do not give us a better line of sight into how things will actually play out.
To blindly go…
Undaunted, all three sets of authors suggest that governments should get busy designing and launching policies that will steer AI in the right direction, towards delivering higher productivity and better job opportunities for humans.
As you might expect given how little we understand about where AI might be going, most policy prescriptions are unhelpfully vague. The IMF says, “Policies must promote the equitable and ethical integration of AI and train the next generation of workers in these new technologies; they must also protect and help retrain workers currently at risk from disruptions.” (Retrain them to do what?) They also urge governments to “launch adequate regulatory frameworks.” (Adequate how?) Acemoglu and Johnson at least are more concrete, suggesting the use of taxes to put labor at less of a disadvantage, and steering investment towards human-complementary technologies (but again, how?).
Overall, this combination of papers makes a compelling case against early government intervention, in my opinion.
Given how little we understand — as all the authors repeat at length — can we really hope to devise policies and regulations that will steer AI in a desirable direction, for example to complement human skills rather than substitute for them? As AI is a black box, its developers are bumbling blindly to create various applications whose impact is highly uncertain, then they often sit back and say, “wow, I never thought it would do that”; do we want governments to blindly tinker with policies and regulations to add yet another layer of uncertainty? This does not sound like a helpful double-blind experiment.
Buy optionality instead
Unfortunately, the fact that AI is a black box with potentially massive impact seems to encourage economists to slip into a central-planning mindset — something that personally makes me uncomfortable, and that rarely ends well.
Here’s a modest alternative: since we all agree on the massive degree of uncertainty ahead, governments would do better to focus on policies that will maximize optionality. I would start with the basics: (a) reduce public spending and get budgets under control, because if AI just destroys jobs without boosting growth, we’ll need resources to cushion the blow; (b) strengthen school and education systems with a meritocratic bent, because workers with a solid education will have a better chance to adapt; and (c) instead of dreaming up new regulations, simplify existing ones so the private economy can adapt faster and create new jobs as the need arises.
Too boring? Boring can be good; leave the games and entertainment to the AI.
Ask the AI
One ray of hope: the IMF report notes that “AI offers unprecedented opportunities for solving complex problems and improving the accuracy of predictions.” Maybe this will eventually help economists make better predictions on the impact of AI, laying the groundwork for better policy prescriptions.