Killer robots: Why we should ban autonomous weapons systems

19 Comments
By The Mark News

Nations around the world agreed in November to continue deliberations on “lethal autonomous weapons systems” – that is, weapons systems that would be able to select their targets and use force without any further human intervention.

There are serious concerns that fully autonomous weapons systems – or “killer robots,” as they are also called – would not be able to distinguish between soldiers and civilians, or judge whether a military action is proportional.

Countries could choose to deploy these weapons more frequently and with less critical consideration if they do not have to worry about sacrificing troops. Proliferation of these weapons systems could spin out of control easily, both for military and police use.

At the prompting of nongovernmental organizations and United Nations experts, discussions began earlier this year to address the many technical, legal, military, ethical and societal questions relating to the prospect of lethal autonomous weapons systems.

The debate should be expected to deepen and broaden as the talks continue. The hope is that they will lead rapidly to formal negotiations on a new treaty pre-emptively banning weapons systems that do not require meaningful human control over the key functions of targeting and firing.

Such weapons in their fully autonomous form do not exist yet, but several precursors that are in development in the United States, China, Israel, Russia, South Korea, the United Kingdom, and other nations with high-tech militaries demonstrate the trend toward ever-increasing autonomy on land, in the air, and on or under the water.

If the military robotic developments proceed unchecked, the concern is that machines, rather than humans, could ultimately make life-or-death decisions on the battlefield or in law enforcement.

By agreeing to keep talking, the 118 nations that are part of the Convention on Certain Conventional Weapons (CCW), an existing international treaty, acknowledged the unease that the idea of such weapons causes for the public.

A new global coalition of nongovernmental organizations called the Campaign to Stop Killer Robots continues to pick up endorsements, with more than 275 scientists, 70 faith leaders, and 20 Nobel Peace laureates joining its ranks in calling for a pre-emptive ban on the development, production, and use of fully autonomous weapons. In August, Canada’s Clearpath Robotics became the first private company to endorse the campaign and pledge not to knowingly develop and manufacture such weapons systems.

The U.N. expert on extrajudicial, summary, or arbitrary executions, Christof Heyns, has called on all countries to adopt a moratorium on these weapons. Austria has urged nations engaged in the development of such weapons systems to freeze these programs, and has called on nations deliberating about starting such development to make a commitment not to do so.

Talking about the issue is good, but diplomacy is moving at a slow pace compared with the rapid technological developments. The commitment of the CCW talks – a week of talks over the course of an entire year – is unambitious. It is imperative for diplomatic talks to pick up the pace and create a new international treaty to ensure that humans retain control of targeting and attack decisions.

In the meantime, nations need to start establishing their own policies on these weapons, implementing bans or moratoriums at a national level. The United States has developed a detailed policy on autonomous weapons that, for now, requires a human being to be “in the loop” when decisions are made about using lethal force, unless department officials waive the policy at a high level. While positive, the policy is not a comprehensive or permanent solution to the problems posed, and it may prove hard to sustain if other nations begin to deploy fully autonomous weapons systems.

One thing is clear: Doing nothing and letting ever-greater autonomy in warfare proceed unchecked is no longer an option.

© Japan Today

©2019 GPlusMedia Inc.


Don't hold your breath on this one. Every weapon ever invented has been used. This will be no different.

1 ( +2 / -1 )

This is ridiculous; autonomous systems would be far, far safer than humans, especially in warfare. People who oppose things like this tend to be those who cannot for the life of them comprehend how systems like these operate, and likely think they'll become self-aware or something. With robots, at least, you can define very clear criteria for action, whereas with humans you have no idea what they may do even after years of training.

-4 ( +0 / -4 )

"fully autonomous weapons" - with no OFF switch.

These weapons would take no orders, because any interference with the programmed killing calculus could be an attempt to alter the effectiveness of the weapon.

To be truly fully autonomous once built, mankind would have to live up to the machines' standards. No action could alter the perfect calculus, and no action could stop these machines, as they would make war their own jack-in-the-box surprise scenario, wiping cities from the map and slaughtering without thought or care.

Just like now, so what would the difference be?

0 ( +1 / -1 )

One word: Skynet

(points to anyone who knows the reference)

0 ( +2 / -2 )

I can understand the concerns, but to be honest I'd probably trust a machine to do its job more than a human. Robots have no bias, no emotions, no free will. When was the last time you heard of ASIMO running off to join ISIS? When was the last time Siri responded to your query with a racial slur or homophobic comment? That's the problem with humans; we're ruled by our emotions, and we're too easily manipulated. If a drone is hijacked by the enemy, it's our fault for not being able to prevent the hijack. If a soldier defects, then we blame the soldier, even though the concept is effectively the same: the opposing force has taken control of a military asset it didn't previously possess. Sure, we need to make sure that automated weapons don't kill civilians or use unnecessary force, but an outright ban is overkill. What happens if a nation that doesn't support the ban develops and implements them? Lives will be lost because we let paranoia limit our options.

0 ( +0 / -0 )

The 3 Laws!

2 ( +2 / -0 )

The United States has developed a detailed policy on autonomous weapons that, for now, requires a human being to be "in the loop" when decisions are made about using lethal force,

Not sure if i would be trusting if it was an American in the loop, with their record of friendly fire.

2 ( +2 / -0 )

NOOOOOOOOOOOOOO! Don't ban them! If the world did that, I would be out of work!!! It's my living!! To design and build these killer machines!!

-1 ( +0 / -1 )

KC touched on the real issue. Once killing each other becomes even "cleaner and safer", more wars and killing are likely to happen. Wars are horrible, remove the horror and you remove the last restraint.

2 ( +2 / -0 )

Lol right. How many pointless treaties like this will the world attempt? Despotic countries will build them anyway, and the reasonable nations will find themselves invaded by these robots. It's happened before, because restrictions on weapons favor aggressive nations.

0 ( +0 / -0 )

"remove the horror and you remove the last restraint" - sensei258

Sadly illustrated by the Bush/Cheney Terror Wars which gainfully utilized the media as its own information sanitizer.

In essence, making the truth, the horror, invisible, and then spending the next eight years pretending the whole thing never happened or was someone else's fault, proved successful, and it laid a firm foundation for justifying an auto-kill option.

If the next tier in the mad race for selling death as necessary safety equipment includes fully automatic guillotines of retribution, which countries will spend more to create a mechanical master wrapped in a layer of software uncertainty? With no OFF switch, of course, because, well, what could go wrong? Laughable.

2 ( +2 / -0 )

A very effective autonomous weapon already exists and it has killed or maimed large numbers of people, including children.

Landmines.

2 ( +3 / -1 )

So... I just hope Sarah Connor is raising her son properly, but in the end it won't matter... "Welcome to Zion," the capital of humanity...

1 ( +1 / -0 )

dcog9065 wrote from his depth of experience: '...those who cannot for the life of them comprehend how systems like these operate and likely think they'll become self-aware or something'

That is a very subjective opinion and those who can't '...comprehend how systems like these operate' can't imagine machine sentience.

So I will take Stephen Hawking's^ perspective on machine code over yours, dcog9065.

Stephen Hawking said recently: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded"

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it."

lostrune wrote: 'The 3 Laws!' *

This is not about human laws, as written in Isaac Asimov's fiction.

We are having a conversation about machines developing their own laws.

It would be naive to believe, with all the knowledge we have gathered over thousands of years, that machines could not write their own laws after accessing that knowledge digitally.

The question is: will humans become redundant?

^ Stephen Hawking on machine code: http://goo.gl/urgMDz

The three laws: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Cyberdyne does exist: http://www.cyberdyne.jp/english/products/HAL/

1 ( +1 / -0 )

Conceptually, these killing machines are: "weapons systems that would be able to select their targets and use force without any further human intervention."

A sort of man-made "god's eye" that looks down from space, guiding the land, sea and space targeting and destruction systems without any further human intervention.

In effect, a 'mechanical god' that judges and smites and there's nothing that can be done about it. Charming, will it be out by Christmas? Hope so.

0 ( +0 / -0 )

Robot armies remove the human-shield aspect of human armies, so it may be that national boundaries will coalesce at some point because of them, like grease spots in soup. I don't expect a ban to have an effect except on those countries that actually conform to it (as opposed to non-signatory or lip-service countries).

Expect that meat-brained hackers would find the holes in internetworked robot armies long before any central computer system becomes self-aware. The problem is, will system designers be smart enough to lock them out? We've already given up a top-end drone to Iran, with Iran claiming they hacked it to get it.

John Varley's "Press Enter []" novella (1984) probably depicts the "central computer becomes overlord" scenario better and more clearly than the Terminator series' Skynet. Robert A. Heinlein's novel "The Moon Is a Harsh Mistress" (1966) had a good depiction of a beneficial self-aware central computer.

0 ( +0 / -0 )

On rare occasions, a cosmic ray (or other magnetic anomaly?) could strike one of these systems, changing 1's to 0's. The system could possibly continue to operate with its malfunction and appear as though it were acting self-aware.

0 ( +0 / -0 )

changing 1's to 0's

Critical systems should be able to detect and handle flipped bits.

Error-Correcting Code memory (ECC memory) can correct flipped bits if not too many bits in a word have been flipped. If too many bits were flipped for the ECC memory to correct, the memory can signal its failure to the system owning the memory, in which case the owning system can take other corrective action such as rebooting itself.

https://en.wikipedia.org/wiki/ECC_memory

https://en.wikipedia.org/wiki/Hamming_code#SECDED (Single Error Correction Double Error Detection)
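As a minimal toy sketch of the SECDED idea (my own illustrative Python, assuming a 4-bit data word; real ECC DIMMs protect 64-bit words with 8 check bits, but the principle is the same):

```python
# Toy Hamming SECDED (8,4): 4 data bits -> 8-bit codeword.
# Corrects any single flipped bit; detects (but cannot correct) any double flip.

def encode(nibble):
    """Encode a 4-bit value (0-15) into an 8-bit SECDED codeword (list of bits)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]        # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]        # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]        # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]   # Hamming(7,4), positions 1..7
    overall = 0
    for b in bits:
        overall ^= b               # extra overall-parity bit enables double detection
    return bits + [overall]

def decode(bits):
    """Return (status, nibble); status is 'ok', 'corrected', or 'uncorrectable'."""
    bits = list(bits)
    syndrome = 0
    for pos in range(1, 8):        # XOR of the 1-based positions holding a 1
        if bits[pos - 1]:
            syndrome ^= pos
    overall = 0
    for b in bits:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:             # odd overall parity => exactly one flip: fix it
        if syndrome:               # the syndrome names the flipped position
            bits[syndrome - 1] ^= 1
        status = "corrected"
    else:                          # even parity but nonzero syndrome => two flips
        return "uncorrectable", None
    d = [bits[2], bits[4], bits[5], bits[6]]
    return status, sum(b << i for i, b in enumerate(d))
```

Flipping any single bit of `encode(n)` still decodes back to `n`; flipping two bits is reported as uncorrectable, which is the point at which the owning system would take the kind of corrective action (such as rebooting) described above.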

0 ( +0 / -0 )

The United States has developed a detailed policy on autonomous weapons that, for now, requires a human being to be “in the loop” when decisions are made about using lethal force, unless department officials waive the policy at a high level.

... I just love how there's a loophole the size of the Grand Canyon in this policy; it is so typical. Countdown to misuse in 10..9..8..7..6..5..

And as for autonomous weapons systems, the big thing that everyone is forgetting is that they'll be designed by humans. This means they'll share all of our flawed logic and stupidity. The machine's logic is only as good as the designer. We set the parameters.

The machines will do what human soldiers do, but faster and more efficiently.

And this is really the core problem. HUMAN stupidity and illogicality. The problem isn't the machines, it is the humans who make, program and control them.

So this is really the wrong discussion. Putting a human behind a screen controlling the robot doesn't solve anything.

In Pakistan, drones (controlled by CIA operatives) were quite prepared to kill dozens of innocent wedding guests because they had the misfortune of having unknowingly invited one person with terrorist connections (maybe - the identification protocols are laughable).

If anything a robot would probably be more moral, or at least more consistent, and the rules it operates by could be discussed and agreed, whereas organisations like the CIA seem to operate by no consistent rules and throw around phrases like "acceptable losses" and "collateral damage" without ever disclosing their definitions (I suspect they have none, and that if we examined the statistics we'd find they boil down to them doing whatever they like).

0 ( +0 / -0 )
