Elon Musk, the founder of SpaceX and chief executive of Tesla Motors, believes that artificial intelligence is “potentially more dangerous than nukes”. The “biggest existential threat” to humanity, he thinks, is a Terminator-like super machine intelligence that will one day dominate us. Luckily, Mr Musk is mistaken.

Plenty of machines can do amazing things, often better than humans. For instance, IBM’s Deep Blue computer beat the world chess champion Garry Kasparov in 1997. In 2011, another IBM machine, Watson, won an episode of the TV quiz show Jeopardy!, beating two human players, one of whom had enjoyed a 74-show winning streak. The sky, it seems, is the limit.

Yet Deep Blue and Watson are versions of the “Turing machine”, a mathematical model devised by Alan Turing which sets the limits of what a computer can do. A Turing machine has no understanding, no consciousness, no intuitions — in short, nothing we would recognise as a mental life. It lacks the intelligence even of a mouse.
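To make the model concrete, here is a minimal sketch of a Turing machine in Python. The rule table and the bit-flipping program are illustrative assumptions of mine, not anything IBM built; the point is that the entire machine is a lookup table moving a read/write head along a tape.

```python
# A minimal Turing machine: a finite rule table reading and writing
# symbols on an unbounded tape. Everything a digital computer does
# reduces, in principle, to steps like these.

def run_turing_machine(tape, rules, state="start"):
    """Apply rules of the form (state, symbol) -> (new state, new symbol, move)."""
    cells = dict(enumerate(tape))  # a sparse dict stands in for an infinite tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An illustrative program (my own example): invert every bit, halt at the blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", flip))  # prints 0100_
```

Nothing in that table understands bits, chess or quiz questions; it simply rewrites symbols.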

Believers in the coming of AI disagree. Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race”. He is right — but the same is true of the appearance of the Four Horsemen of the Apocalypse.

Ray Kurzweil, the American inventor and futurist, has predicted that by 2045 the development of computing technologies will reach a point at which AI outstrips the ability of humans to comprehend and control it. Scenarios such as Kurzweil’s are extrapolations from Moore’s law, according to which the number of transistors that can be fitted on an integrated circuit doubles roughly every two years, delivering greater and greater computational power at ever-lower cost.
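To see what such an extrapolation amounts to, here is a back-of-the-envelope sketch in Python, assuming a perfectly clean two-year doubling. The starting point, roughly 2,300 transistors on Intel’s 4004 in 1971, is historical; the projection itself is the idealisation on which forecasts like Kurzweil’s rest.

```python
# Idealised Moore's law: N(t) = N0 * 2 ** ((t - t0) / doubling_period).

def project_transistors(n0, start_year, end_year, doubling_years=2):
    """Project a transistor count forward under pure exponential doubling."""
    return n0 * 2 ** ((end_year - start_year) / doubling_years)

# From the ~2,300 transistors of the Intel 4004 (1971) out to 2045:
print(f"{project_transistors(2300, 1971, 2045):.1e}")  # ~3.2e+14
```

The arithmetic is impeccable; whether the physics holds, and whether raw transistor counts have anything to do with intelligence, are other matters.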

However, Gordon Moore, after whom the law is named, has himself acknowledged that his generalisation is becoming unreliable because there is a physical limit to how many transistors you can squeeze into an integrated circuit.

In any case, Moore’s law is a measure of computational power, not intelligence. My vacuum-cleaning robot, a Roomba, will clean the floor quickly and cheaply and increasingly well, but it will never book a holiday for itself with my credit card.

In 1950 Turing proposed the following test. Imagine a human judge who puts written questions to two interlocutors in another room. One is a human being, the other a machine. If, for 70 per cent of the time, the judge is unable to tell the difference between the machine’s output and the human’s, then the machine can be said to have passed the test.

Turing thought that computers would have passed the test by the year 2000. He was wrong. Eric Schmidt, the former chief executive of Google, believes that the Turing test will be passed by 2018. We shall see. So far there has been no progress. Computer programs still try to fool judges by using tricks developed in the 1960s.

For example, in the 2015 edition of the Loebner Prize, an annual Turing test competition, a judge asked: “The car could not fit in the parking space because it was too small. What was too small?” The software that won that year’s consolation prize answered: “I’m not a walking encyclopaedia, you know.”

Anxieties about super-intelligent machines are, therefore, scientifically unjustified. Existing “smart” technologies are not a step towards full-blown AI, just as climbing to the top of a tree is not a step towards the moon, but the end of the journey. These applications can certainly outsmart us, outperform us and replace us in carrying out a growing number of tasks. This is not because they deal intelligently with the world, however, but because we are making the world increasingly friendly to them.

Take industrial robots. We do not unleash them in the world to build cars; we build artificial environments around them to ensure their success. The same is true of the billions of smart artefacts that will soon be communicating with one another in the so-called internet of things.

No AI version of Godzilla is about to enslave us, so we should stop worrying about science fiction and start focusing on the actual challenges that AI poses. In the final analysis, humans, and not smart machines, are the problem, and will remain so for the foreseeable future.

Our priority must be to avoid making painful and costly mistakes in the design and use of our technologies. There is a serious risk that we may misuse them to the detriment of both the species and the planet.

Winston Churchill once said: “We shape our buildings; thereafter they shape us.” The same applies to smart technologies in the “infosphere”.

The writer is professor of philosophy and ethics of information at the University of Oxford
