Elon Musk Calls for Preventive AI Demon Wrangling

By Richard Adhikari
Oct 28, 2014 2:44 PM PT

Elon Musk, who among his many roles is CEO of both SpaceX and Tesla Motors, this week warned about the threat humans face from artificial intelligence.

AI is probably our biggest existential threat, he told students at the Massachusetts Institute of Technology.

"With artificial intelligence we are summoning the demon," Musk said, indicating that it might not be possible to control it.

Musk has repeatedly warned about the dangers of AI, as have renowned physicist Stephen Hawking and other scientists.

England's Cambridge University has set up the Centre for the Study of Existential Risk to study, among other things, the threats AI may pose in the future.

The impact of AI also is being studied at the Future of Humanity Institute, or FHI, at Britain's Oxford University.

"Untethered and fully autonomous AI is definitely not something we would want," said Mukul Krishna, a senior global director of research at Frost & Sullivan.

"From a sensory perspective, you can create automation to an extent -- but because it's a human creation, it could always be flawed," he told TechNewsWorld.

The Winter of Musk's Discontent

Musk began warning about AI earlier this year after reading a 2012 paper by Nick Bostrom of FHI.

The paper examines two theses on the relation between intelligence and motivation in artificial agents: orthogonality and instrumental convergence.

The orthogonality thesis holds that it's possible to construct a superintelligence that values characteristics such as benevolent concern for others or scientific curiosity. However, it's equally possible -- and probably technically easier -- to build one that places final value exclusively on its task, Bostrom suggested.

The instrumental convergence thesis contends that sufficiently intelligent agents with any of a variety of final goals will pursue similar intermediary goals because they have instrumental reasons to do so.

In science fiction, this would be why AIs designed to protect the planetary flora and fauna combine with others designed to eliminate greenhouse gases -- and decide to rid the planet of humans along the way, because people impede their goals.

This is why the boffins are agitating. They want rules and codes of conduct built in before AI systems get too advanced -- think of Isaac Asimov's Three Laws of Robotics, for instance.

AI's Limits

"There's not a single computer out there that won't crash or be hit by a virus, so AI will always have limitations," Krishna said. Fully autonomous systems can be taken out by huge electromagnetic pulses such as those caused by solar flares, for instance, "so there has to be a manual override."

At this moment, we don't have to worry about a Terminator scenario in which AI robots rule the Earth.

IBM, Google, Facebook, Twitter and other high-tech companies are investing heavily in AI, but "you're talking about code and algorithms written by humans and essentially running on a bunch of transistors, which are little more than switches," Jim McGregor, principal analyst at Tirias Research, told TechNewsWorld.

Still, AI can be misused. Financial trading is one area where AI already is used heavily, and "it might even be possible for systems to proactively and artificially create situations or scenarios that benefit certain groups at the expense of others," Dan Kara, a practice director at ABI Research, told TechNewsWorld.

"You must have human cognizance as oversight, because most of the models for AI are based on deductive logic, which is self-defeating," Frost's Krishna said. "No one knows everything."

The Good That Technology Does

"We should look at technology from a holistic viewpoint in terms of benefits and the impact on every aspect of our world," McGregor said.

AI "will result in some job losses, but it will also create new types of jobs that don't exist now," Brad Curran, a senior industry analyst at Frost & Sullivan, told TechNewsWorld.

"On the whole, and over time, society has benefited from new technologies," ABI's Kara said. "There is often a period of social and economic displacement when new technologies arrive."

Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true? You can connect with Richard on Google+.