[Lee Jae-min] AI changes everything, but how far can it go?
By Lee Jae-min | Published: April 17, 2018 - 17:30
As artificial intelligence ripples through every corner of our lives, ethical challenges and philosophical issues are coming our way as well. The most important question is: What is the outer limit of AI technology? Recent incidents involving AI in the military sector offer the outline of a possible answer.
Now, AI is making inroads into the defense industry. Autonomous weapons are becoming a reality. Taken to its technical extreme, it will not be long before fully autonomous warfare machines with self-analysis and self-decision capabilities appear on future battlefields. The pace and consequences of this development, however, raise concerns. Unlike other areas of AI application, military use is in a fundamentally different league because it may well be lethal: It is feared that these “deploy and forget” machines could identify a target and initiate an attack on their own. Existing regulations of combat and the law of war may not cover the entire spectrum of these new non-human combatants.
It is this sentiment that underlies foreign academics’ concern over the Korea Advanced Institute of Science and Technology’s recent plan to participate in a joint research program with a private corporation to develop what could possibly be autonomous weapons. The concern was directed at the development of autonomous weapons “lacking meaningful human control.” To paraphrase: Human beings must have the final say in the decision-making process of AI machines. Faced with the foreign scholars’ threat to boycott cooperative activities, KAIST now officially denies any intention of conducting such research, and the case seems to be closed.
Take another example. It has been several years since Korea deployed sentry robots and related equipment along the Demilitarized Zone. In response to similar concerns from foreign observers, assurances have been given over the years that the machines remain under soldiers’ control at all times.
Consider yet another case. In early April, some of Google’s employees sent a letter to the company’s management taking issue with its cooperation with the US Department of Defense to enhance video imagery that may facilitate the targeting process of drones. The concern expressed by the employees is largely similar. Both Google and the Pentagon have stated that the research will not lead to autonomous weapons that initiate an attack without human control. Here again, the key term was “human control.”
This question is also related to who should be held responsible for the consequences of the activities of lethal autonomous machines. By keeping human beings in the loop, it can be ensured that final responsibility still lies with the person in charge, a strong deterrent against any misuse or abuse of the new machines.
AI is taking over dangerous and difficult tasks from us. It is replacing, and will continue to replace, human beings in many areas and activities. What is not yet clear is who should supervise these autonomous machines, and how. Recent incidents point to a compelling consensus, at least in the field of military operations: Regardless of technological development or feasibility, it is critical to keep AI under human control and to ensure that a final decision can only be made by human beings. This logic may also apply to other areas where AI encounters core societal values or weighs critical ethical questions. In other words, there are tasks where human oversight can never be dispensed with.
The rise of AI and the advent of autonomous machines raise complex legal and philosophical questions. AI in military applications brings these questions to the fore. A UN conference on this very topic is being held this week in Geneva under the auspices of the Convention on Certain Conventional Weapons. Perhaps the discussions there will lead to the adoption of legal norms that can guide the development and use of lethal autonomous weapons in the future.
Lee Jae-min
Lee Jae-min is a professor of law at Seoul National University. He can be reached at jaemin@snu.ac.kr. -- Ed.