Humanity is faced with a grave new reality: the rise of autonomous weapons. Yes, it sounds like a Hollywood script, but unfortunately the risk is real – humans so far removed from wartime choices that life-and-death decisions are left to sensors and software.
What sort of future looms if the taking of a human life is relegated to algorithms? Robots and computer systems lack the distinct and unique human ability to understand both the complex environment they’re operating in and the weapons they’re using.
Moreover, the more complex weapon systems become – for example, by incorporating machine learning – and the greater the freedom they are given to act without human intervention, the more unpredictable the consequences of their use.
Technological advances represent great opportunities. Whether in medicine, transport, agriculture, commerce, finance or virtually any other domain, robotics, AI and machine learning are having a transformative effect by advancing how we analyze and act upon data from the world around us. So it only makes sense that this technology be considered for national security and defense. Today, many countries are investing heavily in AI and military robotic systems. But when it comes to armed conflict, we must not forget that even wars have limits.
Governments now defining the limits for autonomous weapon systems must ensure compliance with international humanitarian law and remain firmly rooted in the principles of humanity and the dictates of public conscience. The good news is that when the Group of Governmental Experts charged with examining autonomous weapon systems met in April this year in Geneva, Switzerland, there was broad agreement that human control must be retained over weapon systems. That is the easy part. The hard part is answering the question: what level of human control is required to ensure both compatibility with our laws and acceptability to our values?
The answer matters because some weapon systems with autonomy in their "critical functions" of selecting and attacking targets are already in use in limited circumstances for very specific tasks, such as shooting down incoming missiles or rockets. After activation by a human operator, it is the weapon system that selects a target and launches a counterstrike.
However, the scope of possible future AI-powered weapon systems is much broader. It encompasses the full range of armed robotic systems, and potentially builds on the technology of AI “decision-aid” systems already being tested that analyze video feeds and detect possible targets.
With new advances happening at a breakneck pace, governments, with the support of the scientific and technical community, must take urgent action to agree on limits that will preserve the crucial human element, without stifling or slowing the technological progress that brings obvious benefits.
The alternative is the deeply unnerving prospect of a new arms race and a future where wars are fought with algorithms, with uncertain outcomes for civilians and combatants alike.
Critics are also concerned that advanced AI could develop in directions not anticipated by scientists. Because of this unpredictability, the US military has indicated that it will never remove humans from the decision loop completely. While unmanned weapon systems will gradually become more autonomous, able to carry out very specific missions with less human direction, they may never entirely replace human soldiers on the battlefield.
Sources: "Rise of Autonomous Weapons – Algorithmic Warfare Is Coming, Humans Must Retain Control"; "Autonomous Weapons Are Already Here. How Do We Control How They Are Used?"