What would artificial general intelligence (AGI) mean for the economy? That question defines the life’s work of Anton Korinek, a professor in the Department of Economics and the Darden School of Business at the University of Virginia, and one of the first economists to take seriously the possibility that computers might one day—and perhaps soon—automate virtually all human labor.
If that happens, Korinek argues, the world economy will undergo a transformation as fundamental as the Industrial Revolution. For some 300 years, he says, human labor has been the limiting factor on economic growth. After AGI, the limiting factor will instead become computation.
In the futures imagined by top AI companies, that could unlock growth so rapid, and send prices plummeting so low, that we are all able to afford more in absolute terms. But Korinek also identifies another possibility: that our wages drop to match the running costs of AI systems—throwing many or all of us into “technological unemployment,” a state in which humans no longer have any economically useful skills.
Korinek, who also sits on Anthropic’s Economic Advisory Council, doesn’t pretend to know which of these scenarios is more likely. But the good news is that his research has identified ways for states to prepare that would help in either case. Policymakers should steer AI companies toward technology that complements workers, rather than technology designed to replace them, he says. And they should build out social safety nets today, to prepare for the inevitable need to support people displaced from their jobs by AI. “One of the biggest concerns that I have right now,” he says, is “if we end up in this world where labor is devalued very significantly, that we make sure people’s basic needs are met, and that we preserve people’s dignity.”
More fundamentally, states will also need to rethink how taxes are collected, Korinek says. Most states today get much of their revenue by taxing labor. That will have to change if most work is performed by machines. In a world of AGI, Korinek says, we will need to tax the AIs directly to fund a safety net. He likes to think of it as a dividend payment: a return in kind for the accumulated wisdom of humanity upon which any AGI would be built. It is for this reason that Korinek is also serious about the need to build AI that is “aligned” to human values—not only to make sure that AI doesn’t kill us all, he says, but also to ensure that it willingly pays its taxes.
