#Human Rights Implications of the Usage of Drones and Robots in Warfare
Video
This video was produced by the Foreign Press Association and published on the YouTube channel of the Don't Extradite Assange (DEA) campaign on February 19, 2022. With permission from the DEA campaign, we have published this video on our channel to raise awareness of this issue in Germany and worldwide. Visit the DEA campaign's YouTube channel here: /deacampaign

ABOUT NILS MELZER. Prof. Nils Melzer is the Human Rights Chair of the Geneva Academy of International Humanitarian Law and Human Rights. He is also Professor of International Law at the University of Glasgow.

On 1 November 2016, he took up the function of UN Special Rapporteur on Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment. Prof. Melzer served for 12 years with the International Committee of the Red Cross as a Legal Adviser, Delegate and Deputy Head of Delegation in various zones of conflict and violence. After leaving the ICRC in 2011, he held academic positions as Research Director of the Swiss Competence Centre on Human Rights (University of Zürich), as Swiss Chair for International Humanitarian Law (Geneva Academy) and as Senior Fellow for Emerging Security Challenges (Geneva Centre for Security Policy), and has represented civil society in the Steering Committee of the International Code of Conduct for Private Security Service Providers.

In the course of his career, Prof. Melzer has also served as Senior Security Policy Adviser to the Swiss Federal Department of Foreign Affairs, has carried out advisory mandates for influential institutions such as the United Nations, the European Union, the International Committee of the Red Cross and the Swiss Federal Department of Defence, and has regularly been invited to provide expert testimony, including to the UN First Committee, the UN CCW, the UNSG Advisory Board on Disarmament Matters, and various Parliamentary Commissions of the European Union, Germany and Switzerland.

Prof. Melzer has authored award-winning and widely translated books, including “Targeted Killing in International Law” (Oxford, 2008, Guggenheim Prize 2009), the ICRC’s “Interpretive Guidance on the Notion of Direct Participation in Hostilities” (2009) and the ICRC’s official handbook “International Humanitarian Law – a Comprehensive Introduction” (2016), as well as numerous other publications in the field of international law. In view of his expertise in new technologies, Prof. Melzer was mandated by the EU Parliament to author a legal and policy study on “Human Rights Implications of the Usage of Drones and Robots in Warfare” (2013), and has also co-authored the NATO CCDCOE “Tallinn Manual on the International Law Applicable to Cyber Warfare” (Cambridge, 2013) and the NATO MCDC “Policy Guidance: Autonomy in Defence Systems” (NATO ACT, 2014).

Throughout his career, Prof. Melzer has fought to preserve human dignity and the rule of law through the relentless promotion, reaffirmation and clarification of international legal standards offering protection to those exposed to armed conflicts and other situations of violence.
Text
Rights and Responsibilities in the Context of Digital Citizenship
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology,” says Klaus Schwab, the renowned German engineer and economist who founded the World Economic Forum (The Future of Growth, para 7). With recent advances in robotics, social media and communications, and artificial intelligence as a whole, there is no doubt that the responsibilities paired with these developments have grown concurrently. The ethical implications of this progress also raise new questions about the role technology plays in our lives. The rights and responsibilities associated with ever-greater day-to-day digital usage are as multifaceted and pressing as any other issue of the 21st century.
Beyond the question of how technology can be used appropriately, it is vital that users are made aware of the downsides of digital usage. Today there is a plethora of platforms through which those with digital access can communicate, with WhatsApp, WeChat, and Facebook Messenger being the main outlets; those who engage in such digital communication, however, must be held to a particular set of rights and responsibilities. Unfortunately, this has not been the case in India over the past year. As reported by the New York Times (2018), rumours spread rapidly over WhatsApp, the messaging app owned by Facebook, about supposed kidnappers of schoolchildren--dubbed “child lifters”--in remote Indian villages, and those rumours wreaked havoc across the country (Goel). On multiple occasions, outsiders approaching towns were mistaken for these “child lifters” and were beaten, tortured, and ultimately left for dead, fuelling an intense period of skepticism, anarchy, propaganda, and murder. The root of it all? Fake news and the spread of false information over social media platforms. With increased access to information, regardless of its factuality, individuals must hold their sources to a far higher standard than before.
With brutal murders in India spurred on by false information spread on WhatsApp, questions have arisen regarding the vetting of news sources and the ways in which users are held accountable for their part in spreading fake news. Is it their fault, though? Or the fault of the platform, like Google or Facebook, on which the fake news spreads? Ultimately, social media sites and digital platforms can only combat falsified information to a certain extent; anything beyond their capabilities is left to the user's discretion. In a world where 35% of Americans receive their daily news from social media platforms, according to a 2017 study conducted by the Pew Research Center, this is a weighty responsibility. At the core of the ethics of technological usage with regard to the spread of (mis)information lies the integrity of the press--something whose validity is decided by the collective understanding of the people (Mitchell).
Turning to the moral and ethical issues coupled with the rights and responsibilities of technological development, it is crucial to address the growing detachment of human involvement from digital affairs and, with it, the improved capabilities of artificial intelligence itself. The English Oxford Living Dictionary defines artificial intelligence as “...the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” With recent developments in AI and its increasing involvement in facets ranging from day-to-day life to intensive military operations, it is of utmost importance that humans manage these systems effectively; now more than ever, however, human oversight of artificial intelligence seems to be taking a back seat. While this may seem convenient, abandoning oversight is a gross neglect of responsibility that may have dire consequences.
The rights and responsibilities regarding the oversight of artificial intelligence remain mostly unexplored, but they are vitally important. Recently, Amnesty International, Access Now, and several other international organisations partnered to develop the Toronto Declaration, with the intention of establishing a set of human rights principles for machine learning systems. Regarding control over artificial intelligence and similar technologies, the declaration emphasises:
The right to an effective remedy, and that those responsible for abuses are held to account. The Declaration calls on governments to ensure standards of due process for the use of machine learning in the public sector, to act cautiously on the use of machine learning systems in the justice system, to outline clear lines of accountability for the development and implementation of ML applications, and to clarify which bodies or individuals are legally responsible for decisions made through the use of such systems.
With the introduction of such a declaration of human rights, tech companies are in turn bound to their responsibilities and held accountable for the actions of machine learning systems. Direct involvement in ensuring that artificial intelligence systems benefit society, rather than hinder it, is therefore a crucial aspect of the technology itself (New human rights principles on artificial intelligence, para 11).
Moreover, it is vital that, as artificial intelligence develops, the relationship between technology and humanity is one of balanced growth and direct involvement. As things such as self-driving cars and drone warfare edge towards normality, it is of utmost importance that humans maintain authority. With the incredible innovation and development of technology that conjures up science-fiction-esque dreams comes even greater personal responsibility; hence the need to maintain the “man in the loop” relationship between artificial intelligence and human oversight. For example, the majority of weapons that utilise artificial intelligence are directly controlled by humans; even so, there is still an aspect of removed involvement. Perhaps one of the most important facets of humanity is the ability to feel empathy. Sir Roger Carr, chairman of weapons manufacturer BAE Systems, calls for an umbilical cord of sorts to connect man and machine, since the “responsibility for the actions of the machine and compliance with the laws of war should be assigned to the human not the machine,” as machines are “devoid of responsibility” (Welsh, para 5). Artificial intelligence is one of the most exciting technological endeavours that humans have embarked on in recent history, though it is perhaps also the one that could lead to our demise if not handled correctly.
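The “man in the loop” idea can be pictured as a simple control-flow constraint: the system may detect, rank, and propose, but the irreversible action is gated behind an explicit human decision. Below is a minimal sketch in Python; the names (Candidate, request_human_approval, take_irreversible_action) are hypothetical illustrations of the pattern, not any real weapon-control or vehicle API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    """An action the machine proposes (e.g. an engagement or an emergency manoeuvre)."""
    description: str
    confidence: float  # the system's own confidence in its assessment (0.0 to 1.0)

def act_with_human_in_the_loop(
    candidate: Candidate,
    request_human_approval: Callable[[Candidate], bool],
    take_irreversible_action: Callable[[Candidate], None],
    min_confidence: float = 0.95,
) -> bool:
    """Carry out an irreversible action only after explicit human approval.

    The machine may propose and filter, but the final decision, and the
    responsibility that goes with it, stays with a person. Returns True
    only if the action was actually carried out.
    """
    if candidate.confidence < min_confidence:
        # Low-confidence proposals are never even presented to the operator.
        return False
    if not request_human_approval(candidate):
        # The operator declined: the system must stand down, with no override path.
        return False
    take_irreversible_action(candidate)
    return True
```

The essential design choice in such a sketch is that no code path leads from proposal to action without passing through request_human_approval; removing that gate is precisely what turns a supervised system into an autonomous one.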
As Schwab stated, establishing answers to the moral and ethical questions centred on technology, and upholding the rights and responsibilities those answers entail, is of paramount importance. As digital systems develop, humankind must address its responsibility to maintain truth, empathy, and ultimately self-control when handling the machines that grow alongside us.
References
Artificial intelligence | Definition of artificial intelligence in English by Oxford Dictionaries. (n.d.). Retrieved from https://en.oxforddictionaries.com/definition/artificial_intelligence
Goel, V., Raj, S., & Ravichandran, P. (2018, July 18). How WhatsApp Leads Mobs to Murder in India. Retrieved from https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html
Mitchell, A. (2018, January 03). How Americans Encounter, Recall and Act Upon Digital News. Retrieved from http://www.journalism.org/2017/02/09/how-americans-encounter-recall-and-act-upon-digital-news/
New human rights principles on artificial intelligence. (n.d.). Retrieved from https://www.openglobalrights.org/new-human-rights-principles-on-artificial-intelligence/
The Future of Growth: Technology-Driven, Human-Centred. (n.d.). Retrieved from https://www.weforum.org/de/open-forum/event_sessions/the-future-of-growth-technology-driven-human-centred
Welsh, S. (2018, September 19). We need to keep humans in the loop when robots fight wars. Retrieved from https://theconversation.com/we-need-to-keep-humans-in-the-loop-when-robots-fight-wars-53641
clarenceomoore · 6 years
Text
Use of Robots in War
The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
Advancements in technology have always increased the destructive power of war. The development of AI will be no different. In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of robots for warfare.
Most of the public discourse about automation relates to employment, which is why we spent so much time examining it. A second area where substantial debate happens is around the use of robots in war.
Technology has changed the face of warfare dozens of times in the past few thousand years. Metallurgy, the horse, the chariot, gunpowder, the stirrup, artillery, planes, atomic weapons, and computers each had a major impact on how we slaughter each other. Robots and AI will change it again.
Should we build weapons that can make autonomous kill decisions based on factors programmed in the robots? Proponents maintain that the robots may reduce the number of civilian deaths, since the robots will follow protocols exactly. In a split second, a soldier, subject to fatigue or fear, can make a literally fatal mistake. To a robot, however, a split second is all it ever needs.
This may well be true, but this is not the primary motivation of the militaries of the world to adopt robots with AI. There are three reasons these weapons are compelling to them. First, they will be more effective at their missions than human soldiers. Second, there is a fear that potential adversaries are developing these technologies. And third, they will reduce the human casualties of the militaries that deploy them. The last one has a chilling side effect: it could make warfare more common by lowering the political costs of it.
The central issue, at present, is whether or not a machine should be allowed to independently decide whom to kill and whom to spare. I am not being overly dramatic when I say the decision at hand is whether or not we should build killer robots. There is no “can we” involved. No one doubts that we can. The question is, “Should we?”
Many of those in AI research not working with the military believe we should not. Over a thousand scientists signed an open letter urging a ban on fully autonomous weapon systems. Stephen Hawking, who also lent his name and prestige to the letter, wrote an editorial in 2014 suggesting that these weapons might end up destroying the species through an AI arms race.
Although there appears to be a lively debate on whether to build these systems, it seems somewhat disingenuous. Should robots be allowed to make a kill decision? Well, in a sense, they have been for over a century. Humans were perfectly willing to plant millions of land mines that blew the legs off a soldier or a child with equal effectiveness. These weapons had a rudimentary form of AI: if something weighed more than fifty pounds, they detonated. If a company had marketed a mine that could tell the difference between a child and a soldier, perhaps by weight or length of stride, it would have been used because of its increased effectiveness. And that would be better, right? If a newer model could sniff for gunpowder before blowing up, it would be used as well, for the same reason. Pretty soon you work your way up to a robot making a kill decision with no human involved. True, at present, land mines are banned by treaty, but their widespread usage for such a long period suggests we are comfortable with a fair amount of collateral damage in our weapon systems. Drone warfare, missiles, and bombs are all similarly imprecise. They are each a type of killer robot. It is unlikely we would turn down more discriminating killing machines. I am eager to be proved wrong on this point, however. Professor Mark Gubrud, a physicist and an adjunct professor in the Curriculum in Peace, War, and Defense at the University of North Carolina, says that with regard to autonomous weapons, the United States has “a policy that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons.”
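Reese's "matter of degree" argument can be made concrete in a few lines of code. The land mine's rudimentary AI is a single threshold rule; each additional sensor check makes the trigger more discriminating, yet at no point does a human re-enter the decision. A minimal sketch, using hypothetical sensor readings (weight, stride length, a gunpowder detector) purely as an illustration of the escalation he describes:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Hypothetical readings available to an autonomous trigger (illustrative only)."""
    weight_lbs: float
    stride_length_m: float
    gunpowder_detected: bool

def legacy_mine_trigger(reading: SensorReading) -> bool:
    """The land mine's 'rudimentary AI': detonate on a single weight threshold."""
    return reading.weight_lbs > 50

def discriminating_trigger(reading: SensorReading) -> bool:
    """Adding conditions makes the trigger more selective, but the decision
    is still made entirely by the machine; no human is consulted."""
    heavy_enough = reading.weight_lbs > 50
    adult_stride = reading.stride_length_m > 0.7  # assumed proxy for an adult gait
    return heavy_enough and adult_stride and reading.gunpowder_detected
```

Nothing in the second function is categorically different from the first; it is the same fully automated decision with more inputs, which is exactly why drawing a legal line at "how much AI" proves so difficult.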
And yet, the threats that these weapon systems would be built to counter are real. In 2014, the United Nations held a meeting on what it calls “Lethal Autonomous Weapons Systems.” The report that came out of that meeting maintains that these weapons are also being sought by terrorists, who will likely get their hands on them. Additionally, there is no shortage of weapon systems currently in development around the world that utilize AI to varying degrees. Russia is developing a robot that can detect and shoot a human from four miles away using a combination of radar, thermal imaging, and video cameras. A South Korean company is already selling a $40,000,000 automatic turret which, in accordance with international law, shouts out a “turn around and leave or we will shoot” message to any potential target within two miles. It requires a human to okay the kill decision, but this was a feature added only due to customer demand. Virtually every country on the planet with a sizable military budget, probably about two dozen nations in all, is working on developing AI-powered weapons.
How would you prohibit such weapons even if there were a collective will to do so? Part of the reason nuclear weapons were able to be contained is that they are straightforward. An explosion was either caused by a nuclear device or not. There is no gray area. Robots with AI, on the other hand, are as gray as gray gets. How much AI would need to be present before the weapon is deemed illegal? The difference between a land mine and the Terminator is only a matter of degree.
GPS technology was designed with built-in limits. It won’t work on an object traveling faster than 1,200 miles per hour or higher than 60,000 feet. This is to keep it from being used to guide missiles. But software is almost impossible to contain. So the AI to power a weapons system will probably be widely available. The hardware for these systems is expensive compared with rudimentary terrorist weapons, but trivially inexpensive compared with larger conventional weapon systems.
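The GPS restriction mentioned above is enforced as a pair of hard limits in receiver firmware. The sketch below illustrates the kind of check involved, using the figures cited in the text; it is not actual receiver code, and real implementations differ in detail (for instance, in whether exceeding either limit or both limits disables output).

```python
# Limits as cited in the text (often referred to as the CoCom limits).
MAX_SPEED_MPH = 1200
MAX_ALTITUDE_FT = 60000

def position_fix_allowed(speed_mph: float, altitude_ft: float) -> bool:
    """Return False when the receiver should withhold a position fix.

    Illustrative only: this follows the text and withholds output if either
    the speed limit or the altitude limit is exceeded.
    """
    return speed_mph <= MAX_SPEED_MPH and altitude_ft <= MAX_ALTITUDE_FT
```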
Given all this, I suspect that attempts to ban these weapons will not work. Even if the robot is programmed to identify a target and then to get approval from a human to destroy it, the approval step can obviously be turned off with the flip of a switch, which, eventually, would undoubtedly happen.
AI robots may be perceived as such a compelling threat to national security that several countries will feel they cannot risk not having them. During the Cold War, the United States was frequently worried about perceived or possible gaps in military ability relative to potentially belligerent countries. The bomber gap of the 1950s and the missile gap of the 1960s come to mind. An AI gap is even more fearsome for those whose job it is to worry about the plans of those who mean the world harm.
To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.
from Gigaom https://gigaom.com/2018/06/07/use-of-robots-in-war/