Weapon systems that select and engage targets without meaningful human control are unacceptable and must be prevented. All countries have a duty to protect humanity from this dangerous development by banning fully autonomous weapons. Retaining meaningful human control over the use of force is an ethical imperative and a legal necessity. In the period since Human Rights Watch and other nongovernmental organizations launched the Campaign to Stop Killer Robots in 2013, the question of how to respond to concerns over fully autonomous weapons has steadily climbed the international agenda. The challenge of killer robots, like climate change, is widely regarded as a grave threat to humanity that deserves urgent multilateral action.
A growing number of legislators, policymakers, private companies, international and domestic organizations, and ordinary individuals have endorsed the call to ban fully autonomous weapons. Since 2018, the United Nations Secretary-General António Guterres has repeatedly urged states to prohibit weapons systems that could, by themselves, target and attack human beings, calling them "morally repugnant and politically unacceptable."
The formal debate over lethal autonomous weapons systems—machines that can select and fire at targets on their own—began in earnest in 2014 under the Convention on Certain Conventional Weapons (CCW), the international community's principal mechanism for restricting weapons deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree on what "lethal autonomous weapons" even are, let alone set a blueprint for how to rein them in.
Meanwhile, the technology is advancing at a ferocious pace; militaries aren't going to wait for delegates to pin down the exact meaning of slippery terms such as "meaningful human control" before sending advanced warbots into battle.
Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots.
On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships, and submarines. In many cases, these robots are only a little more sophisticated than those you can buy in your local hobby store. And increasingly, the decisions to identify, track, and destroy targets are being handed over to algorithms.
This is taking the world to a dangerous place, with a host of moral, legal, and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a major drone power.
And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies.
The widespread use of sophisticated autonomous aids in war would be fraught with unknown unknowns. An algorithm with the power to suggest whether a tank should use a small rocket or a fighter jet to take out an enemy could mark the difference between life and death for anybody who happens to be in the vicinity of the target. But different systems could perform that same calculation with widely diverging results. Even the reliability of a single algorithm could vary wildly depending on the quality of the data it ingests.
Eventually, the lead-up to a strike may involve dozens or hundreds of separate algorithms, each with a different job, passing findings not just to human overseers but also from machine to machine. Mistakes could accrue; human judgment and machine estimation would become impossible to disentangle, and the results could be wildly unpredictable.
These questions are even more troubling when you consider how central such technologies will become to all future military operations. As the technology proliferates, even morally upstanding militaries may have to rely on autonomous assistance, in spite of its many risks, just to keep ahead of their less scrupulous AI-enabled adversaries.