New research shows that gains from AI will be 'modest'
Elevated levels of human agency and prosperity are the key to democracy and the future of AI.
It's not difficult to find opinions talking up the future impact and value of artificial intelligence on productivity.
Not too long ago Goldman Sachs published a forecast estimating that AI would increase annual global GDP by 7% - the IMF is equally bullish about the massive boost in productivity the technology could facilitate.
So when The National Bureau of Economic Research publishes a paper by Daron Acemoglu, a lauded MIT economist and researcher, indicating that the plausible projected gains in total factor productivity (TFP) growth will be modest, it certainly puts the overconfidence of many into sharp perspective.
Acemoglu calculates that 'total factor productivity (TFP) effects of AI within the next 10 years should be no more than 0.66% in total—or approximately a 0.064% increase in TFP growth annually.'
So the best-case scenario here is that AI will add just two-thirds of one percent over 10 years?
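As a rough back-of-the-envelope check (my own annualization, not a calculation from the paper), spreading a 0.66% total gain evenly across a decade gives

\[(1 + g)^{10} = 1.0066 \;\Rightarrow\; g \approx 0.066\% \text{ per year},\]

which is in the same ballpark as the roughly 0.064% annual figure Acemoglu reports.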
Even these gains are generous, according to the paper, because they are calculated on improvements in the delivery of fairly simple tasks. Solving more complex tasks is going to be a lot more difficult for the current technology to deliver on.
'existing estimates of productivity gains and cost savings are in tasks that are “easy-to-learn”, which then makes them easy for AI. In contrast, some of the future effects will come from “hard-to-learn” tasks, where there are many context-dependent factors affecting decision-making, and most learning is based on the behavior of human agents performing similar tasks (rather than objective outcome measures). Productivity gains from AI in these hard tasks will be less—though, of course, it is challenging to determine exactly how much less.'
The paper argues convincingly that many of the upside gains commentators are claiming are over-optimistic. That doesn't mean there will be no gain; it just throttles back the numbers somewhat to present a more realistic picture.
Most interesting, however, are Daron Acemoglu's concluding remarks:
My assessment is that there are indeed much bigger gains to be had from generative AI, which is a promising technology, but these gains will remain elusive unless there is a fundamental reorientation of the industry, including perhaps a major change in the architecture of the most common generative AI models, such as the LLMs, in order to focus on reliable information that can increase the marginal productivity of different kinds of workers, rather than prioritizing the development of general human-like conversational tools. The general purpose nature of the current approach to generative AI could be ill-suited for providing such reliable information. To put it simply, it remains an open question whether we need foundation models (or the current kind of LLMs) that can engage in human-like conversations and write Shakespearean sonnets if what we want is reliable information useful for educators, healthcare professionals, electricians, plumbers and other craft workers.
The focus of the AI industry so far has largely been on creating tools that mimic humans but are prone to hallucinations and wild inaccuracies. The industry should instead reorient its attention toward building tools that are genuinely useful to the work people do - tools that could actually deliver the massive productivity gains possible at that level of maturity. The answer is to create tools that facilitate human flourishing - that truly enable the agency of people.
Perhaps when that happens we will see collective gains for all, rather than just PR stunts aimed at shocking people and inducing unnecessary fear.