If it sounds like something out of a comic-book movie, well – that’s because it is out of a comic-book movie. On 19 December, it emerged that Facebook honcho and all-round tech futurist Mark Zuckerberg had installed a robot ‘butler’ in his home called Jarvis. Not a physical, clanking, wheezing robot – but one buried in his walls as a mass of circuits and hooked into every domestic appliance. At every uttered command, Jarvis will play music, switch on lights, adjust the air conditioning, etcetera, etcetera. Named after a sentient computer in the home of Iron Man protagonist (and fellow billionaire) Tony Stark, Zuckerberg’s Jarvis wields the most cutting-edge artificial intelligence that money can buy.
Oh, and as if that weren’t enough, he’s also voiced by Hollywood legend Morgan Freeman, who donated samples of his deliciously burring vowels and consonants to Zuckerberg for an undisclosed, but presumably whopping, fee.
Head spinning? Don’t worry – that’s just a natural side effect of living in the future. And artificial intelligence, by virtue of its rapid development, is a technology whose impact on our lives grows greater with each passing year.
The question is: to what extent should the rise of artificial intelligence alarm or disturb those of us who are involved in, and deeply care about, the world of work?
Taxi to dystopia?
Artificial intelligence has given business leaders a lot to think about. How are they going to implement it? What kinds of effects will it have upon the labour market? How will it influence the hiring process?
Or, in the worst-case scenario, will there even be any hiring process to influence?
One leader – a scientific leader, that is – who has been thinking about it a great deal is Professor Stephen Hawking: a futurist who, one suspects, could run rings around Zuckerberg in terms of farsightedness. In a recent, doom-laden column for the Guardian, the renowned physicist (or is that prophet?) warned: “The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”
Hmm. Not a pretty picture, then.
Hawking added: “This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.”
A high-speed taxi to dystopia, by the sounds of it. But Hawking is not the only fine mind on the case, and views on the potential effects of artificial intelligence on the workplace are many and various. One other individual who has devoted significant time and energy to understanding how increasingly sophisticated digital tools will affect our prospects is outgoing US President Barack Obama. His White House science team has recently unveiled a pair of reports (here and here) that delve deep into the likelihood of artificial intelligence removing our hands from industry’s pumps, and the tone the reports strike is rather more upbeat than Professor Hawking’s.
As the second report (Artificial Intelligence, Automation and the Economy) notes, we of the flesh-and-blood brigade “still maintain a comparative advantage over artificial intelligence (AI) and robotics in many areas. While AI detects patterns and creates predictions, it still cannot replicate social or general intelligence, creativity or human judgment … Further, given the current dexterity limits of the robotics that would be needed to implement mass AI-driven automation, occupations that require manual dexterity will also likely remain in demand in the near term”.
So, perhaps Hawking overlooked a critical factor – that in order for AI to come up to snuff and compete with the human race, it would actually need humans to manage and oversee it. To quote the report once more: “Employment in areas where humans engage with existing AI technologies, develop new AI technologies, supervise AI technologies in practice, and facilitate societal shifts that accompany new AI technologies will likely grow.”
Perhaps this isn’t so dystopian after all, then? And importantly, there’s evidence that the productive relationship between people and technology cited in the White House report is already well underway…
Back in July, three employment experts at the Centre for European Economic Research published an intriguing research paper that explored the pace of what they called ‘routine-replacing technological change’ (RRTC) across 27 European countries. In their definition, RRTC is basically any equipment- or machinery-based setup that takes over from humans in the completion of rote tasks. By looking at how RRTC developed in Europe from 1999 to 2010, the team aimed to get the measure of how AI could bed in as the next wave of tech that will take the reins from humans in certain industries.
“Overall,” they said, “we find that the net effect of RRTC on labour demand has been positive.” While they learned that technological change did indeed lessen labour demand by around 9.6 million jobs in some sectors, they pointed out that this was offset by product demand and a host of other “spill-over effects” stemming directly from the introduction of new technologies. Together, those knock-on trends boosted labour demand by some 21 million jobs – a net gain of more than 11 million.
“As such,” the researchers wrote, “fears of technological change destroying jobs may be overstated: at least for European countries over the period considered, we can conclude that labour has been racing with, rather than against, the machine, in spite of these substitution effects.”
Nothing happens in isolation. Technology isn’t something that just seizes jobs and then sits there viewing our livelihoods with stony contempt. Technology produces ripples, and it is on those ripples that people can transition from one type of job to another. So, what’s the message that business leaders should glean from this? Quite simply: that the AI challenge compels them to think more creatively than ever about learning and development. And that doesn’t just mean within their workforces – but among themselves, too.
If AI is to be harnessed as a stimulant for our creativity, then leaders need to understand how it can be used to supplement their human staff – not merely swap them out. As hungry, technology-savvy Millennials become more of a feature in the workplace, there is an instant base of people who are willing to roll with whatever punches AI delivers. And by learning more about technology themselves, leaders will be able to stay on top of the fast-moving events that will stem from AI’s increasing adoption.
The future won’t run away from us if we learn how to keep up.