Human rights commission to ask questions about war crime implications for future tech
The United Nations is discussing the impact of ‘killer robots’ at a meeting of the Human Rights Council.
The meeting is expected to call for a moratorium on their use while ethical questions are answered, reports the BBC.
The UK, US and Israel are developing robots that – unlike drones – can be set to operate autonomously, including deciding whether or not to kill a human.
Supporters of the technology claim that “lethal autonomous robots” can save lives by reducing the number of soldiers on the battlefield.
However, critics have pointed out that in order to win a war, the opposition’s supply of human soldiers would also have to be depleted – in other words, the robots merely delay human deaths, rather than preventing them.
The meeting will discuss issues such as who takes the final decision to kill, and whether a robot can distinguish between a soldier and a civilian.
In addition, it will seek to answer who would be convicted of war crimes if a lethal autonomous robot were to kill or wound civilians intentionally. Currently, robots cannot be prosecuted for war crimes.
“The traditional approach is that there is a warrior, and there is a weapon,” Christof Heyns, the UN expert examining their use, told the BBC.
“But what we now see is that the weapon becomes the warrior, the weapon takes the decision itself.”