AI poses a threat on the scale of the pandemic – but it won’t herald the death of humanity
By any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped in the far-distant future by the cerebrations of AI, says Prof Lord Martin Rees
DeepMind’s AlphaGo Zero famously achieved superhuman mastery of go, and its successor AlphaZero did the same for chess. Each was given just the rules, and “trained” by playing millions of games against itself in just a few hours. These were superhuman achievements, enabled by the greater speed and memory of electronics compared with flesh-and-blood brains.
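To make “self-play” concrete, here is a deliberately tiny sketch – not DeepMind’s actual method, which couples deep neural networks with tree search – in which a program is given only the rules of noughts and crosses and rates each opening square by playing thousands of random games against itself:

```python
import random
from collections import defaultdict

# Toy self-play, for illustration only: given nothing but the rules of
# noughts and crosses, rate each opening square for X by playing random
# games against itself and tallying the outcomes.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_out(opening):
    board = [None] * 9
    board[opening] = "X"                        # the opening move under test
    player = "O"
    while True:
        w = winner(board)
        if w:
            return w
        empties = [i for i, v in enumerate(board) if v is None]
        if not empties:
            return "draw"
        board[random.choice(empties)] = player  # both sides then play at random
        player = "X" if player == "O" else "O"

scores = defaultdict(int)
for opening in range(9):
    for _ in range(5000):                       # "millions of games", scaled down
        scores[opening] += {"X": 1, "O": -1, "draw": 0}[play_out(opening)]

print("Best opening square for X:", max(scores, key=scores.get))
```

Even this crude tallying tends to discover that the centre square is strongest; AlphaGo Zero’s breakthrough was to replace the random moves with a neural network whose evaluations improve with every batch of self-played games.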
AI, because of its ever-rising processing speed, can cope better than humans with data-rich, fast-changing networks. The implications of this for our society are already apparent. If we are sentenced to a term in prison, recommended for surgery, or even given a poor credit rating, we would expect the reasons to be accessible to us – and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.
AI systems will become more intrusive and pervasive. Records of all our movements, our health and our financial transactions are in the cloud, managed by multinational quasi-monopolies. The data may be used for benign reasons (for instance, for medical research, or to warn us of incipient health risks), but its availability to internet companies is already shifting the balance of power from governments to globe-spanning conglomerates.
Clearly, machines will take over much of manufacturing and retail distribution. They can supplement, if not replace, many white-collar jobs: accountancy, computer coding, medical diagnostics and even surgery. Indeed, I think the advent of ChatGPT – the main excitement of recent months – renders legal work especially vulnerable, because the vast but self-contained corpus of legal literature can all be digested by a machine.
In contrast, some skilled service-sector jobs – plumbing and gardening, for instance – require non-routine interactions with the external world and will be among the hardest jobs to automate.
The digital revolution generates enormous wealth for innovators and global companies, but preserving a healthy society will surely require the redistribution of that wealth. Indeed, to create a humane society, governments will need to greatly expand the number, and raise the status, of those who care for the old, the young and the sick. There are currently far too few such carers, and they are poorly paid, inadequately esteemed and insecure in their positions.
Computers learn to identify dogs, cats and human faces by “crunching” through millions of images – not the way babies learn. ChatGPT is trained by “reading” all the words on the internet and learning how they correlate and link together to make sentences. But it only “understands” words, not things.
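To see what “learning how words correlate” means in miniature, here is a toy bigram model – vastly simpler than ChatGPT’s neural network, and trained on an invented three-sentence corpus rather than the internet – that predicts the next word purely from co-occurrence counts:

```python
from collections import Counter, defaultdict

# A bigram model, for illustration only: count which word follows which
# in the training text, then predict by picking the commonest successor.
# It manipulates word statistics alone; it has never seen a cat or a mat.

corpus = ("the cat sat on the mat . the cat sat on the rug . "
          "the cat chased a dog .").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1                  # tally successor frequencies

def predict(word):
    """Return the word seen most often after `word` in training."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

print(predict("the"))   # -> 'cat', the commonest successor of 'the'
print(predict("sat"))   # -> 'on'
```

The real system predicts tokens with a neural network trained on billions of words, but the principle is the same: it links words to other words, never to the things those words denote.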
Yet understanding the actual world and acquiring what we might call “common sense” won’t come so easily to machines. It involves observing actual people in real homes or workplaces, and a machine would be sensorily deprived by the slowness of real life: for it, watching our day-to-day existence would be like watching paint dry.
Nor can electronic sensors match biological ones. Indeed, robots are still clumsier than a child when it comes to moving pieces on a real chessboard. They can’t jump from tree to tree like a squirrel.
So what should worry us most? ChatGPT will surely confront us, writ large, with the downsides of existing computers and social media: fake news, fabricated photos and videos, unmoderated extremist diatribes, and so forth.
Excited headlines this week have quoted some experts talking about “human extinction”. This may be an exaggeration, but the misuse of AI is certainly a potential societal threat on the scale of a pandemic. My concern is not so much the science-fiction scenario of a “takeover” by super-intelligence as the risk that we – and indeed the entire world’s economic and social infrastructure – will become dependent on interconnected networks. The failure of those networks, leading to disruption of the electricity grid or the internet, could cause a societal breakdown that cascades globally.
Regulation is needed – and innovations like ChatGPT need to be thoroughly tested before wide deployment, by analogy with the rigorous testing of drugs that precedes government approval and release. But regulation is a special challenge in a sector of the economy dominated by a few vast multinational conglomerates.
But let’s look still further ahead.
What if a machine developed a mind of its own? Would it stay docile, or “go rogue”? Futuristic books portray a “dark side” to such technology, where AI gets out of its box, infiltrates the internet, and pursues goals misaligned with human interests.
Some AI pundits take this seriously. But others, like Rodney Brooks (inventor of the Baxter robot), regard these concerns as premature, and think it will be a long time before artificial intelligence worries us more than real stupidity.
Personally, I think it likely that machines will eventually achieve dominance, ushering in a post-human era. This is because there are chemical and metabolic limits to the size and processing power of “wet” organic brains – limits we may already be close to.
By any definition of “thinking”, the amount and intensity that’s done by organic human-type brains will be utterly swamped in the far-distant future by the cerebrations of AI. Moreover, the Earth’s biosphere is far from optimal for AI – interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop insights far beyond our imaginings.
But we humans shouldn’t feel too humbled. Even though we are surely not the terminal branch of an evolutionary tree, we could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.