Prototype humanoid robots at the Intelligent Robotics Laboratory in Osaka, Japan © Eyevine

Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, St Martin’s Griffin, RRP$16.99, 336 pages

In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, by George Zarkadakis, Rider, RRP£12.99, 384 pages

Eclipse of Man: Human Extinction and the Meaning of Progress, by Charles T Rubin, Encounter Books, RRP$23.99, 200 pages

Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong, Machine Intelligence Research Institute, RRP£2.99/$4.99, 62 pages

We humans have got where we are today by being the cleverest creatures in town. In the absence of claws, wings or venom, intelligence is our evolutionary special power. And it has served us well, as we have risen to dominate great swaths of this planet. But now, institutions across the world — including universities, defence agencies and internet giants — are striving to create something that will knock us off this top spot. They are working towards machines that will be cleverer than we are; towards not merely artificial intelligence, but artificial super-intelligence. As a species, we are racing to create beings that will supplant us in our own evolutionary niche. What are we thinking?

Some of our most celebrated human brainpower is now worrying about just this question. Professor Stephen Hawking, for example, recently warned that artificial intelligence (AI) could be “a real danger in the not-too-distant future”, joining voices from within the technology industry including Microsoft co-founder Bill Gates, Jaan Tallinn, co-founder of Skype, and the entrepreneur Elon Musk, who have all suggested we think hard before we summon this particular genie. A batch of fascinating recent books reveals the current state of hard thinking on this topic and how much more there is to be done.

It is tempting to suppose that AI would be a tool like any other; like the wheel or the laptop, an invention that we could use to further our interests. But the brilliant British mathematician IJ Good, who worked with Alan Turing first on breaking the Nazis’ secret codes and later on developing the first computers, realised 50 years ago why this would not be so. Once we had a machine that was even slightly more intelligent than us, he pointed out, it would naturally take over the intellectual task of designing further intelligent machines. Because it was cleverer than us, it would be able to design even cleverer machines, which could in turn design even cleverer machines, and so on. In Good’s words: “There would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Good’s prophecy is at the heart of the book Our Final Invention: Artificial Intelligence and the End of the Human Era, in which writer and film-maker James Barrat interviews leading figures in the development of super-clever machines and makes a clear case for why we should be worried. It is true that progress towards human-level AI has been slower than many predicted — pundits joke that it has been 20 years away for the past half-century. But it has, nonetheless, achieved some impressive milestones, such as the IBM computers that beat world chess champion Garry Kasparov in 1997 and won the US quiz show Jeopardy! in 2011. In response to Barrat’s survey, more than 40 per cent of experts in the field expected the invention of intelligent machines within 15 years, and the great majority expected it by mid-century at the latest.

Following Good, Barrat then shows how artificial intelligence could become super-intelligence within a matter of days, as it starts fixing its own bugs, rewriting its own software and drawing on the wealth of knowledge now available online. Once this “intelligence explosion” happens, we will no longer be able to understand or predict the machine, any more than a mouse can understand or predict the actions of a human.

Our only guide to living alongside super-machines comes from science fiction, whether the apocalyptic Terminator films or Douglas Adams’ Marvin the Paranoid Android, whose massive intellect only made him feel lonely and misunderstood. In his rich new book, In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, George Zarkadakis interweaves sci-fi visions with explorations of the philosophy, technology and deep history of artificial super-intelligence (ASI). An AI researcher before turning to writing, he demonstrates how the goals and ambitions of the technology industry have been shaped by centuries of “successive metaphors and conflicting narratives of fear and love” — from golems to Faust and Frankenstein — and how they might be misleading us.

We have an innate tendency to anthropomorphise, Zarkadakis argues, and this is, therefore, how we try to make sense of our technology. We imagine humanoid robots such as Marvin or Arnold Schwarzenegger’s Terminator; we imagine we are fulfilling the ancient dream of creating a creature in our own image. But an ASI will be far from human: it will not share our million-year evolutionary history, nor be limited by a confined flesh-and-blood brain. Who knows what its goals and values will be, or how it will regard us humans — perhaps as nothing more than handy bags of carbon that it could use for some higher purpose of its own?

The sheer otherness of ASI is also a theme of political philosopher Charles T Rubin’s book, Eclipse of Man: Human Extinction and the Meaning of Progress. Rubin explores the roots of our desire to radically alter the human condition through technology. This urge has brought real advances in medicine, food production and many other fields. But Rubin identifies a disquieting tendency among technologically minded idealists to regard not the human condition but humanity itself as the problem. Such utopians hope that superior machines will take decisions out of our unreliable hands and so solve all our problems.

But these hopes require that such machines be wise in ways that make sense to us. Like Barrat and Zarkadakis, Rubin believes they are more likely to be utterly incomprehensible. Instead of improving us, our technology might simply supplant us; he concludes that “if this kind of posthuman hyperintelligence were to arrive on our doorsteps tomorrow, it is hard to see how it would look different from a hostile alien invasion”.

If this were a Hollywood movie, the camera would now switch to a modest suite of offices in a narrow street in Oxford, where a group of earnest young men and women are, unusually for philosophers, working against the clock to stop global catastrophe. This is the Future of Humanity Institute, a pioneering research centre given the task of worrying about the prospects for human civilisation, and the wellspring of the most advanced thinking about the problems and potential of ASI. Nick Bostrom, the institute’s director, has written the definitive analysis to date: Superintelligence, reviewed previously in these pages. This work is complemented by a brilliant short book from Stuart Armstrong, one of the institute’s fellows. Both Armstrong and Bostrom have been thinking hard about how ASI could be made to be “friendly” — and concluded that it would be very difficult indeed.

Armstrong, in Smarter Than Us: The Rise of Machine Intelligence, makes beautifully clear how challenging it would be to communicate with the profoundly alien being that is a computer mind. Imagine we ask it to cure cancer — and so it wipes out the human race; hey presto, problem solved. Or we command it to get our mother out of a burning building, so it blows a gas main, sending her body high into the air. Or we ask it to increase GDP, so it burns down Los Angeles, creating a boom in reconstruction. Our values are based on a good deal of common sense and unstated assumptions, and as such are — as any moral philosopher will tell you — extremely hard to spell out. To turn those values into programming code would take centuries, Armstrong argues, and to avoid catastrophe, we would “need to get it all exactly right”.

A theme of all these books is that ASI would not need to hate us in order to destroy us. Even if its goal were to bake the perfect Victoria sponge, it might decide to wipe out all of humanity just in case one of us was tempted to turn the oven off early. We may hope that it would not do such a thing to us, its makers, but instead regard us with a sense of affection and filial obligation. But that is to project on to it those un-programmable human sensibilities. And anyway, if we discovered that we were created by bacteria, would we be nicer to them? Probably not much.

Perhaps more worrying than the difficulties of creating friendly machines is that most AI developers are not even trying to do so or, indeed, are striving for the opposite. As Barrat points out, most of the research is sponsored by business and designed to do things such as make money on the stock market — well over half of all Wall Street’s equity trades are already made by automated systems. The other main sources of funding are defence agencies. Darpa, the US defence department’s research body, has long been a major sponsor of AI. According to Barrat, alongside the US, at least a further 55 countries are “developing robots for the battlefield”. In other words, the serious money is going into AI designed to be decidedly unfriendly — AI that is, in fact, designed to kill humans. What could possibly go wrong?

This brings us back to the question: what are we thinking? The answer is that there is no coherent “we”, but a diverse group of competing interests striving to make the big breakthrough. Their motives differ, from utopian hopes of curing cancer to just making money; from natural curiosity to the narcissism of creating a being in our own image. And behind all this is simply the pursuit of power. Why would we invent something that could destroy us? The answer is that which underlies all arms races: because we hope we could use it to destroy our enemies first. The difference is that, this time, the arms will have minds of their own.

But perhaps this is all just a fantasy, another hair-raising fairytale in the long line from golems to the Terminator. As one researcher has pointed out, we have not yet managed to invent a machine that could walk into your house and make a cup of tea. But if an AI did decide to take over the world, it wouldn’t be by walking in through the kitchen door, or even knocking it down Schwarzenegger-style. It would be by taking over the digital infrastructure on which we increasingly depend; or perhaps by persuading us of its innocent intentions so that we open the door for it; or by some means that we mere humans cannot imagine. Given what is at stake, even if there is a small chance that this fairytale might come true, then these authors are right to suggest we should be worried.

Stephen Cave is author of ‘Immortality: The Quest To Live Forever and How It Drives Civilisation’ (Biteback/Crown)

Photograph: Nick Hannes/Eyevine

Copyright The Financial Times Limited 2024. All rights reserved.