TechNewsWorld.com

Tech Leaders Urge UN to Ban AI-Based Lethal Weapons

By David Jones E-Commerce Times ECT News Network
Aug 23, 2017 12:01 PM PT

A group of the world's top technology leaders, led by Tesla founder Elon Musk, on Sunday published an open letter urging the United Nations to ban the use of artificial intelligence in weapons systems, amid growing concerns that autonomous killer robots could wind up taking control.

The group of 116 companies, most of which specialize in robotics or artificial intelligence, argued that the growing use of AI could revolutionize modern warfare in irreversible ways and pose a danger to nation states.

"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter states. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, at timescales faster than humans can comprehend."

Such weapons could be placed in the hands of terrorists, the letter points out. Despots could use them against innocent civilians. Further, cybercriminals possibly could hack them, turning the weapons against innocent victims.

AI All Stars

Among the signatories to the letter are many of the world's top names in AI: Musk, who also is founder of SpaceX and OpenAI; Mustafa Suleyman, founder and head of applied AI at Google's DeepMind; Esben Ostergaard, founder and CTO of Universal Robots in Denmark; and Jerome Monceaux, founder of Aldebaran Robotics.

A key figure behind the letter, Toby Walsh, a professor of AI at the University of New South Wales in Sydney, Australia, presented it at the opening of the International Joint Conference on Artificial Intelligence in Melbourne. The annual event is considered the world's most important gathering of AI experts.

Walsh also was behind a 2015 letter that warned of the dangers of using AI in autonomous weapons development, noting that such systems could spark a public backlash that would create long-term damage for AI technology in general.

Walsh likely will attend the next UN Convention on Conventional Weapons meeting to discuss the open letter, he told the E-Commerce Times.

Building Consensus

Sunday's joint statement from the world's leading experts in the field could mark a turning point on the issue.

A group of 123 member nations of the UN's Review Conference on the Convention on Conventional Weapons last year unanimously agreed to begin formal discussions on the use of lethal autonomous weapons, University of New South Wales officials said. So far, 19 member nations have called for an outright ban.

AI specialists previously have warned that lethal autonomous weapons could create destructive and lasting security problems worldwide.

Just last month, Musk got into a public spat with Facebook CEO Mark Zuckerberg over the potential for unregulated AI to put the public at increased risk.

Musk's position is that global regulatory changes must be enacted to maintain long-term peace and security, said Charles King, principal analyst at Pund-IT.

However, "past history makes it difficult to imagine a scenario where many global powers would be willing to [accept] such restrictions," he told the E-Commerce Times. "Some would argue that they require such weaponry for their own protection, where others will say the threat of robotic firepower will deter their enemies from pursuing war."

It is possible that the outlook portrayed by Musk and others is more than a little hyperbolic, suggested Jim McGregor, principal analyst at Tirias Research.

"This is a person that is promoting the augmentation of humans through technology and the use of AI for self-driving cars," he told the E-Commerce Times, "but AI in robots is bad? Didn't anyone see Maximum Overdrive?"

Tech Industry's Responsibility

The question of whether or how to regulate lethal AI weaponry is one that is deeply connected to values, suggested Ryan Gariepy, CTO of Clearpath Robotics, which previously had called for an outright ban.

It is a choice between using AI for good versus using it for lethal means, Gariepy, who was the first to sign Sunday's letter, told the E-Commerce Times.

Clearpath and its founders have made a conscious decision to align their business values to their personal values of using technology for good.


David Jones is a freelance writer based in Essex County, New Jersey. He has written for Reuters, Bloomberg, Crain's New York Business and The New York Times.

