News

Navigating the Ethical Maze of Smart Weapons: International Efforts to Ensure Responsible AI in Warfare

  This year, artificial intelligence, exemplified by ChatGPT, has set off a new round of technological and industrial transformation. At the same time, smart weapons, equipment, and decision-support systems have been put to use in the Ukraine crisis. The international community has long worried about the risks of smart weapons escaping human control, the dilemmas of accountability, and the ethics of their use in war. The United Nations has followed lethal autonomous weapons systems (LAWS) since 2013 and has convened multiple rounds of governmental expert group consultations. Although some consensus has been reached, fierce competition among the parties makes a multilateral treaty unlikely in the short term. In view of this, international non-governmental organizations and the knowledge community of scientists, jurists, industry experts, and think tanks are building, from the bottom up, an international epistemic community around the risks of smart weapons and the legal rules and standards that should govern them. Through extensive dialogue and exchange mechanisms, they continue to widen these concentric circles so that more and more countries learn and internalize the relevant rules.
  The efforts of international non-governmental organizations fall roughly into two categories: one seeks a complete ban; the other pursues step-by-step control. The two are not mutually exclusive; each has its own emphasis while partially overlapping. In the first category, the “Stop Killer Robots” campaign explicitly calls for banning the development of weapons that cannot be effectively controlled by humans, a proposal supported by more than 130 groups in more than 60 countries and regions around the world. At the 2018 International Joint Conference on Artificial Intelligence, more than 2,000 scientists and entrepreneurs from around the world signed the “Declaration on Lethal Autonomous Weapons,” pledging not to participate in the development, manufacture, trade, or use of such weapon systems. These international non-governmental organizations mainly demand a complete ban on LAWS and urge the states parties to the Convention on Certain Conventional Weapons (CCW) to reach an international treaty on the matter as soon as possible. In addition, employees of Google, DeepMind, Boston Dynamics, and other companies, invoking the principle of “don’t be evil,” have pressed their employers to withdraw from the U.S. Department of Defense’s smart weapons projects.
  The second path largely agrees with the first on LAWS, but it also addresses broader military applications of artificial intelligence, such as weapon systems, intelligence and reconnaissance, decision support, and command and control. A lack of sufficient high-quality training data, data contamination, flaws in the algorithms themselves and the “algorithm black box,” or the sheer complexity of real battlefield environments can all cause a system to behave very differently in deployment than it did in the laboratory, producing unpredictable consequences that could escalate conflicts or cause humanitarian disasters. As early as 2011, the International Committee of the Red Cross (ICRC) began paying attention to the robustness and accountability of such systems, urging states to establish international standards restricting smart weapons as soon as possible and to abide by the relevant rules of international humanitarian law. In 2019, Tsinghua University’s Center for International Security and Strategy and the Brookings Institution launched a series of Track II dialogues on “Artificial Intelligence and International Security Governance,” focusing on how to define attack zones for smart weapons and on international legal oversight.
  At present, the camp advocating step-by-step control of smart weapons has moved beyond the broad consensus of earlier years toward specific risk-reducing rules and standards. The Centre for Humanitarian Dialogue and the INHR think tank in Geneva invited experts from the Center for a New American Security, the Brookings Institution, China, and Europe to jointly examine the design and development, testing and evaluation, deployment and use, accountability mechanisms, confidence building, and international cooperation surrounding smart weapons, and together they designed a test, evaluation, verification, and validation (TEVV) risk management framework. TEVV stresses that any deployment of smart weapons must meet the testing and evaluation requirements that international humanitarian law imposes on weapon systems; because smart weapons raise distinctive robustness and accountability problems, testing and evaluation should continue throughout the entire life cycle of the system. TEVV further asks governments to align with international civilian standards when developing smart weapons and to make each country’s TEVV principles and review procedures public, thereby increasing transparency and projecting a responsible image. TEVV also recommends establishing a professional body, modeled on ICAO, to collect and analyze AI failure cases from around the world, supervise compliance in the development of smart weapons, and, in the event of an accident, organize expert teams to open the AI “black box,” determine responsibility, and prevent escalation of conflicts.
  With the efforts of relevant international non-governmental organizations, the “Responsible Use of Artificial Intelligence in the Military Field” (REAIM) Summit, attended by many countries, international organizations, and non-governmental organizations, released an outcome document, the “Action Initiative for the Responsible Use of Artificial Intelligence in the Military Field.” Under pressure from international non-governmental organizations and technology companies, the U.S. Department of Defense this year also updated DoD Directive 3000.09 on autonomy in weapon systems, providing that smart weapon systems will be subject to review principles similar to TEVV. It should be pointed out, however, that the United States tends to treat the initiatives of international non-governmental organizations selectively, adopting what suits it and discarding what does not: it labels rules that serve its interests as “responsible behavior” while avoiding international treaties that would constrain its hegemony. This will not help the international community advance international rules for artificial intelligence.

The rapid advancement of artificial intelligence (AI) is transforming various industries, and its impact on the military sector is no exception. AI-powered weapons systems, also known as smart weapons, hold the potential to revolutionize warfare by enhancing targeting capabilities, improving decision-making, and increasing efficiency. However, the development of these weapons has also sparked intense debate about the ethical implications of delegating life-and-death decisions to machines.

International non-governmental organizations and the knowledge community, represented by scientists, jurists, industry experts, and think tanks, are playing a crucial role in shaping the narrative around the responsible use of AI in warfare. These entities are advocating for clear guidelines and frameworks to ensure that AI-powered weapons systems remain under human control, adhere to international humanitarian law, and are subject to accountability mechanisms.

One approach gaining traction is the concept of step-by-step control, which advocates for gradual implementation of AI in military applications while simultaneously establishing safeguards to mitigate potential risks. This approach emphasizes the need for rigorous testing and evaluation, transparent development processes, and clear accountability mechanisms to ensure that AI-powered weapons systems are used responsibly and ethically.

The “Test, Evaluation, Verification and Validation” (TEVV) risk management framework, developed by the Centre for Humanitarian Dialogue and the INHR think tank in Geneva, offers a comprehensive approach to assessing the risks associated with AI-powered weapons systems. TEVV emphasizes continuous testing and evaluation throughout the entire life cycle of these systems, ensuring that they comply with international humanitarian law and ethical principles.

Another notable initiative is the “Responsible Use of Artificial Intelligence in the Military Field” (REAIM) Summit, which brought together representatives from governments, international organizations, and non-governmental organizations to discuss the responsible use of AI in warfare. The summit resulted in the release of an outcome document, the “Action Initiative for the Responsible Use of Artificial Intelligence in the Military Field,” outlining principles and guidelines for the development and deployment of AI-powered weapons systems.

The efforts of international non-governmental organizations and the knowledge community are crucial in shaping the future of AI in warfare. By advocating for responsible use, establishing clear guidelines, and promoting transparency, these entities can help ensure that AI is harnessed for the benefit of humanity, not its detriment.