Who's Afraid of AI?
Elon Musk is an inspiring entrepreneur. A self-made man who built himself up from nothing, he is considered today one of the leading entrepreneurs in the world. He heads Tesla, which has transformed the world of electric cars and claims to be the first to operate autonomous vehicles commercially, and he also leads SpaceX, which has set the goal of sending a human to Mars by 2030. Today, his every statement finds many attentive listeners and a growing fan base.
As an enthusiast of groundbreaking technologies, one might have expected him to embrace the significant progress made in the field of artificial intelligence – progress that many claim will change our world – but here, too, Elon Musk surprises us. In recent years, he has consistently claimed that the greatest threat looming over humanity is the tremendous advance of artificial intelligence. In a recent speech before United States governors, he called upon them to actively promote regulation that will address, monitor, and limit technological development in this field before it is too late.
Elon Musk is not alone. The late Prof. Stephen Hawking, Bill Gates, and other opinion leaders have warned of the dangers of AI. In apocalyptic scenarios, one can imagine AI robots storming the streets uncontrollably and wreaking havoc and destruction, just as in Hollywood films, but the real dangers of AI are far more complex and require serious thought and planning on the part of decision-makers – certainly more than is being done today.
The great beauty of AI lies exactly in the place that scares Musk and others: the ability to produce unexpected results precisely where human beings do not necessarily have the abilities (or the resources) to produce them on their own. This is especially evident where decisions are based on large, unstructured sets of data, and it is difficult to show a rational connection between the data and the decision. It becomes especially disturbing when such a decision causes harm to a person or to property. In legal language, we would say that it will be difficult to show a causal connection between the act that caused the harm and the person who committed it – and this might undermine the foundations of the legal order upon which modern society rests.
Take, for example, a doctor treating a patient. The doctor consults an AI system such as IBM's Watson. The computer determines that, with high probability, the patient's symptoms are evidence of disease X. The doctor, familiar with Watson's success rates, is not convinced but elects to rely on the program and administers a treatment that, in retrospect, proves erroneous. In the ordinary world, we would say that in a medical negligence lawsuit the plaintiff must prove that the doctor deviated from accepted medical practice, and if the court so finds, the plaintiff will be awarded damages. In an AI world, however, relying on Watson will quite quickly become the accepted practice, and it will instead be necessary to determine whether Watson was negligent in forming its opinion. This is not a simple task. First, because behind Watson stand programmers, researchers, operators, and managers, and each link in this chain could have been negligent. Second, because it is very difficult to examine decision-making systems that review an abundance of data in ways human beings cannot. The conclusion is that the existing legal system will have great difficulty dealing with such cases in the future, and one can expect them to multiply as more Watson-like AI systems become part of medical decision-making.
This is only one risk. In an age when everything is connected to a computer, every computer is connected to the web, and the web reaches everywhere, one can see how AI could become a dominant force, roaming the web and using its resources almost uncontrollably. For example, it was recently reported that two AI systems communicating with each other had begun to develop a language of their own, unknown to humans. In another case, an AI-based hacking program transformed itself autonomously and hacked computers. In Switzerland, an autonomous robot independently purchased drugs via the Agora platform – paying in Bitcoin, of course. And there are many more examples.
Musk is right that AI poses ethical, legal, and practical challenges unlike any we have faced before. The same is true of human enhancement and gene manipulation, nanotechnology, blockchain, robots, autonomous cars, and more. All of these require the attention of decision-makers and regulators. The problem is that technology develops rapidly – often too fast for regulators – and the risk lies in the growing gap between the two. That is why a clear statement to legislators by a technological pioneer like Musk is significant. What will regulators do with it? We will have to wait and see.
Advocate Roy Keidar is a special consultant on emerging technologies at the Yigal Arnon & Co. law firm.