The New York Times technology reporter Kevin Roose begins his new book with some good news.
Futureproof: Nine Rules for Humans in the Age of Automation (2021) adopts an upbeat tone in proposing how we might live well in this age of automation. “Artificial intelligence could be unbelievably good for humankind, if we do it right. A world filled with AI could also be filled with human creativity, meaningful work and strong communities.” Furthermore, artificial intelligence and automation “could bring us together, armed with new superpowers, to solve some of our biggest problems.” What problems? Big, big ones: to “eliminate poverty, cure diseases, solve climate change, and fight systemic racism.” Great!
But his repeated use of the conditional verb “could” counsels caution. Is this vision merely hypothetical?
It appears to be so: Roose notes that “none of this will happen without us.” Well, that’s not a surprise. What do “we” need to do?
Therein lies the rub. Roose offers no plan because he lacks a problematic. Yes, he is rightfully skeptical “that the private sector will save us.” Yes, he thinks a strong welfare state and extensive retraining programs are needed to handle the disruptions. Yes, he implores us to stop bowling alone and recreate community in order to back “collective action.” But how do we overcome atomization, and for what sort of joint action? No answer, except for a stirring call to “arm the rebels” within the high-tech oligopolies – the usual suspects: Amazon, Google, Apple, Facebook, Alibaba, Microsoft, and others. Well, that’s something of a political nature.
But very thin gruel. Roose’s idea of arming tech rebels fighting for “ethics and transparency” is to provide them with “tools, data and emotional support.” That doesn’t sound promising, though further revelations of algorithmic manipulations and the unsavoury military contracts of big tech are important.
Not surprisingly, with little to offer, Roose concludes by urging his readers not to get “too discouraged.” If “we” are determined, “we” may still harness technology for the common good.
Am I too harsh in my judgment? You judge. Roose’s approach is emblematic of what we might term a “progressive neoliberal” stance. That stance deplores certain social trends, hopes for good social and ecological outcomes, but it hasn’t a clue how to bring them about. The failure stems from avoiding an analysis of systemic power relations, of which technological development is one pillar. The analyst turns instead to the easier question of how individuals can best accommodate existing trends. The author’s “Nine Rules for Humans in the Age of Automation” enumerate common-sensical ways in which humans can individually adapt to, and perhaps even thrive with, this runaway monster. Believing that another glorious world is possible leads nowhere if you lack a strategy.
But you may ask: is there really a problem with AI that requires a problematic? Can’t we just keep clicking happily away on our phones while hoping for the best? After all, learned economists claim that, in the long run, everyone benefits from automation because more and better jobs are created than are lost. Overall, living standards and satisfactions allegedly rise. Let’s click and be happy!
Sorry, no. The techno-optimists’ view is suspect. Although saying a good word on behalf of the machine-wrecking Luddites is considered a mark of intellectual ineptitude, we may at least understand their anger. The following questions just touch the surface of what is at issue:
- Why should we assume that what may have happened in the past – new jobs and higher living standards for most – will recur in today’s different circumstances?
- Is there an ethical case for sacrificing a generation or two for the long-run gain of future generations? What happened in the first industrial revolution in the West, and what is happening now in many countries of the global South, was and is brutal. The “satanic mill” involved exploitation, child labour, shortened life-spans – until the advent of trade unions, labour solidarity and the first stirrings of democratic government. Can we justify sacrificing lives for technological change?
- Can today’s workers aspire to fill the new and better jobs that are created in this “fourth industrial revolution” powered by AI? In general, no. According to various reports, massive numbers of jobs are being lost – albeit, as Roose suggests, “invisibly.” Currently, the pandemic is providing both motivation and cover for companies to replace disease-prone humans with machines. The job losses are occurring at all levels, though low-income occupations are particularly hard hit. Most of the new, good jobs demand high-tech, advanced skills. Can unskilled and semi-skilled workers be retrained to fill these jobs? Probably not, unless exceptional retraining and income-replacement programs exist, which is not the case except in the Nordic countries. Thus, many displaced workers end up in insecure and poorly remunerated livelihoods in the gig economy, providing services for the winners of technological change and financialization.
- If AI is our master rather than our useful servant, do we want it? As Roose points out, AI and automation have not made workers happier. Just the opposite: they suffer more stress than 30-50 years ago. Futureproof tells us why. AI is programmed to squeeze the last bit of productivity out of every worker. The Amazon warehouse is the paradigmatic case. The lot of growing numbers of workers is constant surveillance and monitoring by machines. The aim is “algorithmic management” – machines with the capacity to monitor, assess, reward, and even fire workers. Employees effectively will work for machines. And surveillance does not stop at the factory/warehouse/mine/office door. “Surveillance capitalism” denotes a society in which advanced surveillance equipment (such as facial recognition and cellphone tracking) allows intelligence agencies to keep tabs on anyone deemed suspicious. Artificial intelligence is generating a cornucopia of tools for monitoring and controlling dissenters and dissidents – a boon for authoritarian leaders worldwide. Not to mention those leaders’ new capacity for censorship and online disinformation campaigns, which are used to discredit opponents and reinforce extremist political identities. Advanced algorithms using globally networked digital technology spread conspiracy theories and deploy memes to reaffirm the false beliefs of virtual communities. Fascism gains a new lease on life.
Yes, there is a problem with AI. We live in a world faced with destruction from new generations of nuclear weapons, hatred spawned by the manipulated identities of those left behind by technological and social change, and accelerating global warming. All these problems grow worse as companies devote billions of dollars, thousands of PhDs, and millions of hours to perfecting algorithms to control our shopping and political preferences. From a societal viewpoint, it’s mad.
Thus, we return to the issue of a problematic. Lacking a problematic is a debilitating weakness when it comes to harnessing technological development. If you don’t understand the dynamics, you can’t think through feasible means of taming the beast.
So what is the problematic? It is devastatingly simple. I learned this truth years ago when I taught the political economy of technological change. Technological development is not a force of nature; it does not just happen. There are many innovations and inventions that could be harnessed. Those that come to fruition – receive finance and nurture – are the ones that suit the economic and power interests of the dominant elites. This self-evident truth increasingly applies to university-based research as well as commercial research. The incentives make it so.
Strategically, the implication is that if you want to change the pattern of technological development, you have to challenge the power structure. Roose is right: artificial intelligence could be “unbelievably good for humankind.” With enhanced productivity and new ways of interacting, we could solve our major problems and extend the realm of freedom for all. But not by following Roose’s nine rules of individual adaptation. Yes, “none of this will happen without us.” Yet that admonition requires substance.
Can “we” take on power structures? Can we organize a progressive mass movement that avoids the usual problems of sectarianism, identity clashes and schism? Can we develop a program that might have wide appeal, despite the rise of populist-nativism? It is a daunting challenge, especially in this age of surveillance capitalism powered by AI. But we have no option but to try – if we want to be “futureproof.”
Richard Sandbrook is an emeritus professor of political science at the University of Toronto and president of Science for Peace Canada.