As reported by Engadget: If you don't like the thought of autonomous robots brandishing weapons, you're far from alone. A slew of researchers and tech dignitaries (including Elon Musk, Stephen Hawking and Steve Wozniak) have backed an open letter calling for a ban on any robotic weapon where there's no human input involved. They're concerned that there could be an "AI arms race" which makes it all too easy to not only build robotic armies, but conduct particularly heinous acts like assassinations, authoritarian oppression, terrorism and genocide. Moreover, these killing machines could give artificial intelligence a bad name. You don't want people to dismiss the potentially life-saving benefits of robotic technology just because it's associated with death and destruction, after all.
There's nothing legally binding in the letter, but it lends weight to the United Nations' preliminary talk of a global ban on deadly automatons. If officials, academia and the tech industry are all against removing humans from the equation, it's that much more likely that there will be rules forbidding lethal bots. While that doesn't preclude rogue nations and less-than-ethical companies from forging ahead with their own equipment, you might not see a world full of AI-driven warriors.
If we are unable to reason abstractly enough within ourselves to uphold such social contracts, let alone embed them in an intelligent machine that could one day meet or exceed our own level of consciousness, our world will evolve in ways fraught with dangers previously unimagined in the history of humanity. Without something like the above, we run the risk of creating an intelligence that could be considered purely sociopathic by human standards while being superior to us in many ways (think a robotic Hannibal Lecter), which is why we're so fascinated of late with tales like The Terminator (Skynet), I, Robot, The Matrix, Transcendence, HUM∀NS, Age of Ultron, and Ex Machina.
Perhaps we only need a refresher on Isaac Asimov's Three Laws of Robotics:
1. A robot (or AI system) may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added a fourth, or "zeroth", law that precedes the others in priority: 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
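Taken literally, the four laws form a strict priority ordering: the Zeroth and First Laws act as absolute vetoes, while the Second and Third merely rank whatever options survive. As a thought experiment only (none of this comes from Asimov or the open letter; the `Action` fields and `choose_action` helper are invented for illustration), here is a minimal Python sketch of that ordering:

```python
# Hypothetical sketch: Asimov's laws as a priority-ordered filter.
# Laws 0 and 1 are hard vetoes; Laws 2 and 3 only rank the survivors.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    description: str
    harms_humanity: bool   # would violate the Zeroth Law
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Return the most lawful candidate, or None if every option violates Law 0 or 1."""
    # Zeroth and First Laws: absolute vetoes, no trade-offs allowed.
    allowed = [a for a in candidates if not (a.harms_humanity or a.harms_human)]
    if not allowed:
        return None  # refuse to act at all
    # Second Law outranks Third: prefer obedience, then self-preservation.
    allowed.sort(key=lambda a: (a.disobeys_order, a.endangers_self))
    return allowed[0]

if __name__ == "__main__":
    options = [
        Action("fire weapon at intruder", False, True, False, False),
        Action("stand down and alert operators", False, False, True, False),
    ]
    chosen = choose_action(options)
    print(chosen.description if chosen else "no lawful action")
    # -> "stand down and alert operators": the order to fire is vetoed by the First Law.
```

Even this toy version exposes the real difficulty: deciding whether an action "harms a human" or "harms humanity" is the hard part, and nothing in the code itself prevents someone from simply deleting the veto.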
Isaac Asimov was one of the first to foresee AI and robotics, writing as early as the 1940s and 1950s about the need for a code of "morals" by which machines could operate in order to work cooperatively and safely with humans. His collection of short stories, "I, Robot", explored the implications of tampering with these basic laws and their inherent pitfalls.
Any attempt to disable or circumvent these basic functions should render the system useless, which from a design standpoint is easier said than done; that is why it may be difficult or impossible to engineer such safeguards into future AI systems in a reliable way. Keep in mind that 1% (or more) of the human population can and does violate the first law regularly, without regard to the social or moral contract with those around them, even when it is not in their own self-interest, and in some cases simply because it is "fun".
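To make that concrete, here is a deliberately naive, hypothetical sketch (not drawn from any real system) of a self-check that refuses to operate if its own safety function has been altered; the comments point out why this kind of safeguard is so easy to defeat:

```python
# Hypothetical tamper check: the system fingerprints its own safety function and
# refuses to act if the fingerprint no longer matches. The weakness is obvious:
# whoever can edit the safety function can also edit (or recompute) the fingerprint.

import hashlib
import inspect

def safety_check(harms_human: bool) -> bool:
    """Drastically simplified First Law: veto any action that harms a human."""
    return not harms_human

# In a real design this value would be fixed in tamper-resistant hardware at build
# time; computing it at import, as done here, is exactly the circularity that makes
# the problem "easier said than done".
EXPECTED_DIGEST = hashlib.sha256(inspect.getsource(safety_check).encode()).hexdigest()

def act(harms_human: bool) -> str:
    current = hashlib.sha256(inspect.getsource(safety_check).encode()).hexdigest()
    if current != EXPECTED_DIGEST:
        raise SystemExit("Safeguard modified: refusing to operate.")  # "render the system useless"
    return "action performed" if safety_check(harms_human) else "action refused"

print(act(harms_human=True))   # -> "action refused"
print(act(harms_human=False))  # -> "action performed"
```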
HUM∀NS does a good job of portraying humanoid AI systems within the 'uncanny valley' of creepiness.
However, even if Western societies agree on limits for AI, can we depend on societies with a different view of this technology, such as China or Russia, to adhere to those rules, especially if it gives them access to a highly competitive technology (think atomic bomb)? What about tech-savvy terrorist organizations with a desire to destroy any opposing society standing against them (ISIL comes to mind)? It also seems possible that at some point even benign organizations may consider developing advanced defensive AI technology out of fear or distrust; and thanks to modern filmmakers, we all have some idea of how that may turn out.
Maybe that is why Elon Musk is shelling out millions to study how to mitigate potential AI-related disasters in the future, as well as billions in a technological space race to establish a Martian colony as quickly as possible.