US to build AI killer robots as Russia and China ‘already making them’

America has a "moral imperative" to build killer robots because it can’t risk falling behind in an AI arms race with Russia and China, according to the National Security Commission on Artificial Intelligence.

During a two-day discussion on January 25 and 26, the panel’s vice-chairman Robert Work said that AI-controlled weapons would make fewer mistakes in the heat of battle – making “friendly fire” incidents less likely and minimising casualties.

“It is a moral imperative to at least pursue this hypothesis,” he said.

The panel said that AI would inevitably be used in war – both by nation states and terror groups.

“The AI promise — that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human — represents a competitive advantage in any field,” the panel said, predicting “it will be employed for military ends, by governments and non-state groups.”

Lt. General John N.T. “Jack” Shanahan, director of the Joint Artificial Intelligence Center, said that America’s rivals are forging ahead with AI-controlled weapons systems, or so-called “killer robots.”

Speaking to National Defense in March 2020, he said Russia has shown a “greater willingness to disregard international ethical norms and to develop systems that pose destabilising risks to international security.”

He warned that Moscow is already using artificial intelligence to power global disinformation campaigns and to develop weapon systems that can operate without human control.

The three main superpowers are not the only countries pursuing robotic warfare systems. In November, General Nick Carter estimated that by 2030 the British Army could have up to 30,000 robots working alongside up to 90,000 troops.

He insisted, though, that humans would always make the final decision on whether robots opened fire.

But it may not be possible to stick to that rule once AI-powered conflict breaks out in earnest.

General John Murray, head of the US Army Futures Command, said: “When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up.”

But there are dissenting voices. The Campaign to Stop Killer Robots has already won the backing of some 30 governments worldwide, which have signed up to its call for a ban on AI weapons. The campaign warns that allowing machines to decide “who lives and dies, without further human intervention” would “cross a moral threshold.”