Machine learning – a sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Feed a purpose-built algorithm a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries.
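To make that iterative trial and error concrete, here is a minimal sketch of a supervised learning loop in Python. It trains a simple logistic model on synthetic stand-in numbers rather than on real mole images with a deep neural network; every name and figure in it is illustrative.

```python
# A minimal, illustrative sketch of supervised learning: images are
# assumed to have been flattened into feature vectors and labelled
# 0 (freckle) or 1 (melanoma). Real systems use deep convolutional
# networks trained on far larger datasets.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 'images' of 64 features each, with synthetic labels.
X = rng.normal(size=(200, 64))
hidden_rule = rng.normal(size=64)
y = (X @ hidden_rule > 0).astype(float)     # 1 = melanoma, 0 = freckle

# Logistic-regression weights, refined by iterative trial and error
# (gradient descent), the essence of 'training'.
w = np.zeros(64)
for step in range(1000):
    p = 1 / (1 + np.exp(-X @ w))            # current probability estimates
    w -= 0.1 * X.T @ (p - y) / len(y)       # nudge weights to shrink error

accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2%}") # certainty rises with iteration
```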
Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.
The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Is the state, the firm or the market the better means of coordinating supply and demand? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. On their account, the notion that market coordination of supply and demand encompasses more information than any single intelligence could handle will soon be proved false by 21st-century AI.
How seriously should we take such speculations? Might machine learning bring us full-circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?
A great deal turns on the status of tacit knowledge. On this much the champions of a machine learning-powered revival of command economics and their critics agree. Tacit knowledge is the kind of cognition we refer to when we say that we know more than we can tell. How do you ride a bike? No one can say with any precision. Supervision helps, but a beginner has to figure it out for herself. How do you know that a spot is a freckle and not a cancer? A specialist cannot teach a medical student simply by spelling out her thinking in words. The student has to practise under supervision until she has mastered the skill for herself. This kind of know-how cannot be spelled out or downloaded.
Can robots assimilate tacit knowledge? Mid-20th-century arguments against centralised planning assumed that they could not. Some of the achievements of machine learning – such as eclipsing specialist doctors at spotting cancer – suggest otherwise. If robots can assimilate tacit knowledge, AI-powered central planning might well outperform decentralised market interactions in coordinating economic activity. But there is good reason to believe that the mid-century anti-planners were right. Tacit knowledge will probably remain the preserve of human beings – with implications not only for the prospect of a return of the command economy, but also for broader fears and hopes about a future powered by machine learning.
Economists have long believed that the market is the most efficient means of coordinating economic activity. The market’s strength is its capacity to aggregate available information and thereby equilibrate supply and demand. No single intelligence could possibly encompass all this information as effectively as the market mechanism. Individual knowledge is divided and fragmentary but, in the aggregate, the volume of information brought to bear upon the coordination of economic activity in and through the market is immense. Governments could know enough to manipulate demand to smooth out fluctuations in the trade cycle – this was the Keynesian wager widely embraced after 1940 – but supply was another matter, better left to the unfettered interaction of individuals and firms.
Mid-20th-century arguments against planning focused on the quantity of information that any coordinating authority would need to muster in order to make decisions as effectively as the free market. Planners conceded that no single intelligence could know half as much as everyone in a market combined, however imperfect each individual perspective might be. But the disparity between what an individual could know and what a whole society could see was narrowing. The Polish economist and diplomat Oskar Lange saw that the relations between supply and demand could be formulated algebraically. All the relationships between buyers and sellers in a market for steel, for instance, could be mapped out as a series of simultaneous equations. A capable mathematician with enough time on her hands could solve those equations to find the price at which supply matched demand precisely. A government could then fix that price, signalling optimal quantities for purchase and production to buyers and sellers instantly, and eliminating the inefficiencies that precipitated surpluses and shortages.
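As a toy illustration of the kind of computation Lange had in mind (the linear curves and numbers below are invented for the example, not drawn from his work), a single market’s simultaneous equations can be solved in a few lines:

```python
# A toy version of Lange's idea, assuming (purely for illustration)
# linear supply and demand in a single market for steel:
#   demand:  Q = a - b*P
#   supply:  Q = c + d*P
# Setting the two equal gives the simultaneous equations a planner solves.
import numpy as np

a, b = 100.0, 2.0   # invented demand parameters
c, d = 10.0, 1.0    # invented supply parameters

# Rearranged as a linear system in the unknowns (P, Q):
#    b*P + Q = a    (demand)
#   -d*P + Q = c    (supply)
A = np.array([[b, 1.0],
              [-d, 1.0]])
rhs = np.array([a, c])

P, Q = np.linalg.solve(A, rhs)
print(f"market-clearing price: {P:.2f}, quantity: {Q:.2f}")  # 30.00, 40.00
```

Scaled up from two equations to the millions describing every market at once, this is the calculation Lange imagined a planning bureau performing.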
When Lange died in 1965, it was impossible to solve all these equations in time to make centralised price-setting work. The Soviet economist and Nobel laureate Leonid Kantorovich spent six years trying to figure out an optimal price for Soviet steel production in the 1960s – far too slow to be useful in practice. But in 1965, the American IT pioneer Gordon Moore observed that the number of components that could be fitted onto a computer chip was doubling every year; ‘Moore’s law’, later revised to a doubling roughly every two years, has held good ever since. Horizons of possibility soon started shifting. The calculations involved in coordinating an economy might be impracticably laborious for human clerks, but computers changed the game. If Lange was right, and relations between supply and demand in a given market could be formulated algebraically, the exponential growth of computing power made it only a matter of time before determining prices centrally by Lange’s method became feasible. The market – Lange wrote shortly before his death – would soon be seen as ‘a computing device of the pre-electronic age’, as outmoded as an abacus.
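The arithmetic of compounding doublings shows why those horizons shifted. A back-of-the-envelope sketch, taking the six-year figure above and Moore’s original one-year doubling as given:

```python
# Back-of-the-envelope only: if processing power doubles every year, a
# computation that once took six years of work shrinks geometrically.
# (The six-year figure is from the text; the annual doubling is Moore's
# original 1965 estimate, later revised to roughly every two years.)
years_of_work = 6
seconds = years_of_work * 365 * 24 * 3600   # ~1.9e8 calculation-seconds

for years_elapsed in (10, 20, 30):
    speedup = 2 ** years_elapsed            # one doubling per year
    print(f"after {years_elapsed} years: {seconds / speedup:.3g} seconds")
# After 30 years of doubling, six years of work takes under a second.
```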
Critics of centralised planning had one more card to play. The Anglo-Austrian economist F A Hayek had argued in 1945 that the need to turn concrete particulars into generic statistics was part of what made central planning inferior. Plotting out market relations algebraically for the convenience of the central planners involved compressing the rich and complex data dispersed among individuals in a given market into a set of statistics. Details about the location or quality of the commodities under analysis were abstracted out. Yet it was precisely this specific, local knowledge, filtered through price signals in a free market, that sharpened individual decisions and helped to ensure that floating prices remained reliable indicators of fluctuations in supply and demand. Such knowledge was irreducible to statistical form, so any centralised planning system would have to do without it. Whatever quantities of information the planners could summon, the quality of their information would never be as good.
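A contrived example makes Hayek’s point vivid: two very different local markets can compress into an identical summary statistic, and the detail that guides individual decisions vanishes in the compression.

```python
# A contrived illustration of statistical aggregation losing local detail
# (the numbers are invented; Hayek's argument was verbal, not numerical).
import statistics

# Hypothetical prices quoted for 'steel' in two regions.
region_a = [100, 100, 100, 100]    # uniform quality, stable supply
region_b = [40, 60, 140, 160]      # mixed grades, local gluts and shortages

for name, prices in [("region A", region_a), ("region B", region_b)]:
    print(name, statistics.mean(prices))   # both report a mean of 100

# The planner's statistic is identical for both regions; the local
# knowledge a buyer on the ground would act on has been abstracted away.
```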
During the 1950s and ’60s, the Anglo-Hungarian philosopher-scientist Michael Polanyi took the qualitative argument against centralised planning a step further, reworking Hayek’s observations for the computer age. If local knowledge was irreducible to statistics and thus incomprehensible to central-planning bureaus, tacit knowledge was unspecifiable even in words, and thus impossible for humans to program into computers. Market coordination engaged the tacit dimension of human cognition innately: every manager thinking about hiring and every homemaker sizing up a side of beef drew on inarticulate know-how in making their decisions. But centralised planning through supercomputers would leave all this know-how untapped. The machines could not compute it, however powerful they became.
Does the argument against computer-powered centralised planning that Hayek and Polanyi framed still apply? At first glance, machine learning suggests not. Cancer-spotting seems to involve algorithms emulating tacit knowledge. Medical students need years of book-learning and practical instruction by senior doctors before they can make the same discriminations. The inarticulate know-how that enables specialists to apply the relevant learning can be imparted only in person – that’s why, the world over, students follow doctors on ward rounds. A specialist doctor recognises malignant pigmentation but cannot articulate precisely what leads her to that conclusion. Now algorithms can perform the same feats of cognition. Are not robots now doing precisely what earlier anti-planners said they could not – assimilating tacit knowledge?
In fact, machine learning does not code into an algorithm the cognitive powers that enable humans to know more than they can say. It equips AI engineers to build applications capable of drawing some of the kinds of conclusions that humans use tacit knowledge to reach. The robots get there by figuring out empirical relationships within quantities of data that no human could process. The apprenticeship that humans go through to learn to tell a freckle from a cancer is not simply a matter of processing piles of empirical information to discern what function Y (cancer) happens to be of the input X (the symptoms with which potentially cancerous patients present). The human student learns the equivalent skill – a sense or feel for which spots are innocent, and which spell trouble – in tandem with a whole vocation of which diagnosing melanoma forms only part. In the specific matter of which moles are dangerous, algorithms now outperform the doctors. But a few diagnostic techniques aside, doctors remain irreplaceable by algorithms. Like so many of the cognitive powers that humans possess innately, large parts of the vocation transmitted between specialist and student consist of unspecifiable tacit knowledge.
Most of this broader know-how is nowhere near being replicated by AI systems. A great deal of it never will be. Current economic incentives favour the refinement of task-specific AI systems – algorithms such as the mole-spotter that can solve specific problems once trained for them. But even if the incentives shifted to channel investment towards the design of so-called artificial general intelligence – robots such as HAL in Stanley Kubrick’s film 2001: A Space Odyssey (1968) – nothing in the development of machine learning so far suggests that such investments would bear fruit. Machine learning works not so much by aping tacit knowledge as by finding reliable shortcuts to the same conclusions that humans reach by exercising tacit knowledge. Despite appearances, robots that can range like humans across diverse tasks, wielding contextual nous enough to know which capability to deploy, remain the stuff of science fiction.
The kinds of AI applications that machine learning makes conceivable will frequently be able to answer specific, tightly framed questions (‘Is this mole cancerous?’) better than human beings can. But for more general problems – such as, ‘What’s making this person unwell?’ – these new kinds of AI remain useless. By analogy, it is easy to see how machine learning will produce algorithms with any number of immensely beneficial microeconomic and macroeconomic applications. ‘How many bags of rocket will this south London Tesco sell in the second week of March?’ is the kind of question that – given the right data – machine learning can answer better than any human being, as the sketch below suggests. Optimising fresh-food supply chains will reduce waste in the passage from farm to table. There might well be specific macroeconomic matters that machine learning makes more tractable. In political economy, the crude and increasingly dated concept of gross domestic product (GDP) remains the measure of success. Economists led by Diane Coyle and Mariana Mazzucato have been trying, with some success, to supplant GDP with more meaningful and nuanced metrics of economic performance. Machine learning might strengthen their arm: given the right data, real-time regressions parsing the relationships between output statistics and other indexes of wellbeing could give new momentum to attempts to shift focus away from GDP.
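The Tesco-style forecasting question can be sketched in miniature. Everything below is invented: the features, the coefficients and the ‘till data’ are stand-ins, not real supermarket figures.

```python
# A hedged sketch of demand forecasting by regression, on invented data.
import numpy as np

rng = np.random.default_rng(1)
weeks = 104                                  # two years of weekly history

# Hypothetical features: seasonality, temperature, whether rocket is on offer.
season = np.sin(2 * np.pi * np.arange(weeks) / 52)
temp = 12 + 8 * season + rng.normal(0, 2, weeks)
promo = rng.integers(0, 2, weeks)
X = np.column_stack([np.ones(weeks), season, temp, promo])

# Invented 'true' sales process plus noise, standing in for till data.
sales = 300 + 80 * season + 5 * temp + 60 * promo + rng.normal(0, 20, weeks)

# Ordinary least squares: the simplest version of the regression gestured at.
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Forecast a promoted week in early March (week 10), assuming 14 degrees C.
next_week = np.array([1.0, np.sin(2 * np.pi * 10 / 52), 14.0, 1.0])
print(f"forecast: {next_week @ coef:.0f} bags of rocket")
```

A production system would use richer features and a more flexible model, but the logic, learning the empirical relationship and then extrapolating from it, is the same.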
As for the bigger issues – such as whether state or market is the better agency of economic coordination – machine learning is unlikely to force the old arguments open again. Just as machine learning brings us no nearer the advent of ‘general’ AI systems, it does not shunt us into some new epistemological paradigm where tacit knowledge-based arguments against centralised planning suddenly lose their validity. It is conceivable in principle that algorithms might one day set a price for every one of the 12 billion or so commodities produced in a modern economy, just as it was conceivable in Lange’s time. Most engineers still think that the processing power required to run all these programs simultaneously is out of reach, but Moore’s law remains on the planners’ side. Even if algorithms representing the world in a storm of zeroes and ones one day recreated the market electronically under centralised control, however, the price signals that their system sent wouldn’t work as effectively as those generated now. Tacit and otherwise unspecifiable parts of the knowledge that feed into market-based economic decisions would be lost in the translation of currently dispersed information into the new planning tsars’ digital code.
Some of this unspecifiable knowledge would survive, because the most advanced applications of machine learning – the kinds of systems that resemble human cognition in their discharge of task-specific functions – depend for their success upon continual oversight by, and interaction with, humans. The most promising machine learning-powered AI systems keep human controllers ‘in the loop’. That is, they interact with human engineers: crunching huge reams of data to frame and reframe problems, then watching humans solve those problems when the kinds of cognition called for exceed AI capabilities, and growing more adept with each iteration. On the supposition that robots will never assimilate tacit knowledge, humans must remain central to these systems, refining and extending their capabilities. And for as long as humans remain ‘in the loop’, centralising control of economic activity by mobilising machine learning and the AI systems it enables would also concentrate power in the hands of the humans who make the systems what they are.
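The pattern can be sketched schematically. Every class and function below is a hypothetical stand-in, not any real system’s API: the model answers routine cases itself, defers hard ones to a person, and records the human’s answer as training material for the next round.

```python
# A schematic, invented sketch of the 'human in the loop' pattern.
class ToyModel:
    def __init__(self):
        self.examples = []                  # cases the humans had to solve

    def predict(self, case):
        # Stand-in logic: confident only about cases it has seen before.
        seen = dict(self.examples)
        if case in seen:
            return seen[case], 0.99
        return "unknown", 0.1

    def record_example(self, case, answer):
        self.examples.append((case, answer))  # grows more adept over time

def human_in_the_loop(case, model, ask_human, threshold=0.9):
    label, confidence = model.predict(case)
    if confidence >= threshold:
        return label                        # routine case: AI handles it
    answer = ask_human(case)                # cognition exceeds AI capability
    model.record_example(case, answer)      # feeds the next iteration
    return answer

model = ToyModel()
print(human_in_the_loop("odd mole", model, ask_human=lambda c: "refer"))
print(human_in_the_loop("odd mole", model, ask_human=lambda c: "refer"))
# The first call defers to the human; the second, the model handles itself.
```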
The fact that the most promising applications of machine learning keep humans ‘in the loop’ means that, if ever the prospect of an AI-powered revival of the command economy made it out of the pages of academic journals and into mainstream discussion, there would be reason to worry about whom the algorithms really empowered. There would be reason, that is, to keep a very close eye on the humans ‘in the loop’ in any AI-powered programme of centralised economic planning. There might indeed be reason to pay closer attention to who remains ‘in the loop’ and whom they keep out right now, even if the prospect of a revived command economics remains remote.
The idea that a clique of mandarins could claim a right to unchecked political power on the strength of machine learning, and the supposition that it works at arm’s length from humans, remain far-fetched for now. Yet the economic and political clout that the largest tech companies have amassed by stockpiling market power behind a rhetoric of restless innovation is another matter. These companies control the data without which machine learning goes nowhere. What they do with that power, for good or ill, is rightly a subject of great concern. Those concerns tend to cluster around identity and privacy. But the problem might be not that tech companies are doing too much with customers’ data, but that they are doing too little with it. By controlling access to data, companies such as Google and Facebook are monopolising a valuable resource. Lifting the productivity of the major Western economies out of their current rut probably requires putting that data to work more productively. To make that happen, the power that Facebook, Google and others enjoy over data needs to be curbed. Too much of the money they make out of that power is rent – private gain extracted through control of a resource without any commensurate contribution to the common economic good. Getting more and different humans ‘in the loop’ to experiment with available datasets and to see what kinds of systems might be built is necessary to realise the promise of machine learning. But, to do that, we need some means of breaking the hold that the tech monopolies have over data.
Finding the human figures embedded in these remote, impersonal systems that are building the future out of zeroes and ones calibrates and to some degree calms certain fears about the import of AI. It also raises hope. Automation can figure as a frightening dystopia where jobs vanish and livelihoods wither. But machine learning will probably create more jobs than it replaces, many of them both well-paid and meaningful. Some people will need to retrain – would-be planners and data monopolists among them. But more – nurses and teachers, for instance; or the men and women who spend their days raising children and making homes – might sense a kind of retraining happening around them. As the limits to AI capabilities become clearer, so too does the peculiar dignity of the kinds of human cognition and action that cannot be automated.