COMMENTARY: The conundrum of artificial intelligence
Cybersecurity expert Rod Beckstrom takes a look at the profound questions raised by the rapid development of artificial intelligence, the basis for three of his new ventures.
When Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race,” he focused the world’s attention on an emerging technology with the power to dramatically improve human life, or to cause it irreparable harm. While he made clear that AI as it exists today poses no threat, his long-term prognosis suggested to many that perhaps it was time to put the brakes on AI’s rapid development.
Hawking based his concern on our inability to control artificial intelligence, which is evolving much more rapidly than human intelligence.
If AI makes the leap from performing specific programmed tasks to a more general ability to decide its own actions – essentially, to think for itself – are we prepared for the day when machines are smarter than we are? How will we know when that day has come… if it comes?
Imagine a computer programme replacing human judgment. How much trust can we – should we – place in its decisions? Should we trust a machine to make minor decisions in our everyday lives? Significant decisions? Life or death decisions?
Artificial intelligence is now a fact; it is being widely adopted, and its continued rapid development is inevitable. As an indication of my own passion for the topic, three of the start-up security companies I am now involved in are primarily based on AI. As these companies demonstrate, AI has a great capacity for good, improving safety, security and the environment, as well as generally making life easier for humans.
But as computers get smarter, as algorithms become more advanced and potentially capable of wholly independent action, it’s worth asking ourselves: have we sufficiently thought this through? And how should policy change to ensure the public is protected?
When a computer makes a decision with no human involvement, it’s unclear who is responsible.
Self-driving cars are an example of this conundrum. They are a great technological development with significant potential social, economic and environmental benefits, but they are not perfect and must function in a human environment. If one strikes and kills a pedestrian, who is responsible? The passenger? The pedestrian? The person who wrote the software? The car manufacturer?
Humans will claim, “It’s not my fault”, and machines lack the ability to accept blame. Yet in modern society, establishing responsibility matters. How can liability be determined or assigned?
This is not a problem to be kicked down the road. Within the next ten years, perhaps sooner, self-driving cars could be commercially available and in widespread use. They will be the forerunners of other, as yet unknown, technologies.
One obvious answer is transparency. A log file capturing data from all of a car’s key sensors gives an accurate record of the car’s actions; Tesla already keeps such logs (until the computer is rebooted), and they are likely to become the norm for driverless cars. Ensuring a comparably high degree of transparency as AI develops across other industries should be a priority, and new regulations and laws will also be needed as we begin confronting the many consequences of these new technologies.
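To make the idea concrete, such a record can be as simple as an append-only file of timestamped sensor readings. The sketch below is purely illustrative Python; the file name, sensor names and fields are hypothetical assumptions for this column, not Tesla’s actual format or any manufacturer’s system.

```python
import json
import time

# Hypothetical append-only sensor log: one JSON line per reading,
# so the sequence of events leading up to an incident can be replayed.
LOG_PATH = "vehicle_log.jsonl"

def log_event(sensor: str, reading: dict) -> None:
    """Append one timestamped sensor reading as a JSON line."""
    entry = {"ts": time.time(), "sensor": sensor, "reading": reading}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record speed, steering and braking around a single moment.
log_event("speedometer", {"kph": 42.0})
log_event("steering", {"angle_deg": -3.5})
log_event("brake", {"pressure_pct": 88})
```

The design point is the append-only structure: because entries are only ever added, never edited, the log can serve as the kind of tamper-evident record that investigators and regulators would need.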
But another, more challenging dilemma with very broad implications could emerge from the success of driverless cars. If self-driving cars prove far safer than human drivers, should humans even be allowed to drive? Or should human driving be outlawed? At what point will we have to accept that machines are smarter than we are and cede control to them?
These are the kinds of issues that artificial intelligence will raise, and for which we are currently unprepared. And we don’t have the luxury of pausing its progress while we sort these things out; important decisions must be made in parallel, as the technology develops. As yet, no mechanism exists for making them.
Someday machines may imagine things humans cannot. Human emotions such as empathy and compassion could also surface in machines. Could machines develop appropriate responses to the limitless, unpredictable circumstances that humans can generate? Can the logic of a computer programme ever competently deal with the illogic of the human mind and imagination?
Someday computers will be smarter than we are. Perhaps they will support and enable us. But, as Hawking so powerfully suggests, perhaps not.