Pablo Picasso once declared: “Computers are useless. They can only give you answers.”

The Spanish artist’s joke may have been true of the 20th century, when computers were for the most part souped-up calculating machines performing clearly prescribed functions. But the expansion of computing power in the early 21st century means that computers are now posing some of the most testing questions of our times. And it is not clear who is responsible for providing the answers.

Technological advances in artificial intelligence, biotechnology, nanotechnology, robotics and neuroscience — to name a few — have left policymakers, business people and consumers scrambling to understand their full social, economic and ethical implications.

Consider just three questions: first, is artificial intelligence, as Elon Musk believes, “potentially more dangerous than nukes”?

The idea of rogue robots destroying their creators has been a popular theme of science fiction for decades. But now serious scientists, such as Stephen Hawking, and prominent tech entrepreneurs, including Mr Musk, who runs Tesla Motors and SpaceX, are voicing similar concerns.

The prospect of a super-intelligence capable of threatening human life still appears to be decades away, if it happens at all. But the question of how to ensure that AI is used for beneficent, rather than unethical, purposes is already live.

At the end of last year, Mr Musk, Peter Thiel and other Silicon Valley entrepreneurs committed $1bn to funding a new non-profit company, called OpenAI, with the aim that AI should remain “an extension of individual human wills”.

“It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” OpenAI’s founders wrote in a blog post.

Mr Musk has also donated $10m to the Future of Life Institute, a Cambridge, Massachusetts-based organisation, which is studying the social and ethical dimensions of AI.

Of its mission the institute states: “Technology is giving life the potential to flourish like never before . . . or to self-destruct. Let’s make a difference.”

A second difficult question is how to build “ethical elasticity” into self-driving cars. For better or worse, human car drivers are infinitely flexible in evaluating the ethics of different situations — breaking “no overtaking” rules to give more space to cyclists, for example. But how should self-driving cars be programmed to react when confronted with a real crisis? Should manufacturers provide owners with adjustable ethical settings?

In a speech last year, Dieter Zetsche, chief executive of the German carmaker Daimler, asked how autonomous cars should behave “if an accident is really unavoidable and where the only choice is a collision with a small car or a large truck, driving into a ditch or into a wall, or risk sideswiping a mother with a stroller, or an 80-year-old grandmother?”

The Daimler and Benz Foundation has spent more than €1.5m since 2012 supporting a team of 20 scientists examining the social effects of autonomous driving and some of the ethical dilemmas that it raises.

Such questions were once the preserve of moral philosophers, such as Professor Michael Sandel at Harvard, debating whether murder is ever justified. But now boardrooms — and car owners — may increasingly find themselves having to weigh Immanuel Kant’s categorical imperative against the utilitarianism of Jeremy Bentham.

Developments in healthcare also create new dilemmas. Should cognition-enhancing drugs be banned for casual users? In their book Bad Moves, the neuroscientists Barbara Sahakian and Jamie Nicole LaBuzetta highlight the ethical challenges of using smart drugs to boost academic performance.

Why, they ask, do we take such a dim view of athletes who use steroids to cheat in the Olympic Games but ignore students who use smart drugs to boost their performance when they are about to take university entrance exams?

Students at Duke University in North Carolina have pressed the authorities to amend the institution’s academic honesty policy to consider “unauthorised use of prescription medication” as cheating. But few other universities, or employers, appear to have considered this dilemma.

“These medications have the potential to change society in dramatic and unexpected ways,” Sahakian and LaBuzetta write. “Now is the time to have informed discussion and debate of the ethics of these ‘smart drugs’ and the role they should play in our future society.”

Above all such gnarly questions looms a far bigger one: who is responsible for ensuring that the latest technological developments are not abused?

National governments and parliaments, preoccupied with far more pressing concerns such as fiscal austerity or refugee flows, rarely have the political bandwidth to consider such abstract challenges, still less to help set international standards or regulations.

As in so many other spheres, it seems inevitable that regulation will lag behind reality. Besides, what is to stop rogue nations from ignoring international norms and putting gene editing, machine learning or cyber technologies to destructive uses?

University departments and think-tanks already play a useful role in disseminating knowledge and stimulating debate. But they are often dependent on funding from the private sector and are unlikely to come up with radical solutions that will seriously restrict their paymasters.

That largely leaves the tech companies to regulate themselves. They are by far the best-placed organisations to understand the potential dangers of their technologies and to counter them. Companies such as Google are forming ethics boards to help monitor their own activities in areas such as artificial intelligence.

But, as we saw in the run-up to the financial crisis of 2008, private sector institutions can often hide behind a narrow interpretation of the law.

Some banks also proved adept at exploiting international legal and regulatory arbitrage.

Pushing the law to the limit clearly corroded ethical standards and led to a number of abuses across the financial sector. By last summer, financial institutions had paid more than $235bn in fines for breaching regulations, according to data compiled by Reuters.

As one former banker says: “Not everything that is legal is ethical.”

This is an issue that tech companies will have to confront if their industry is not to suffer a regulatory backlash in the future.
