Musk Donates $10 Million to Keep AI From Going Rogue

The Future of Life Institute on Monday announced 37 winners of grant funding provided by Elon Musk and the Open Philanthropy Project. Musk contributed US$10 million toward the effort.

A total of $7 million will be awarded to fund the research teams for up to three years, with many of the projects beginning in September.

Another $4 million is available to support research grants for areas that emerge as most promising.

The teams will focus on the common goal of keeping artificial intelligence beneficial and avoiding anything like the gloomy Hollywood scenarios of machines rising against humanity.

The researchers will ponder more than the technology behind making machines or computers “think.” The winning teams — which were selected from almost 300 grant proposals from around the world — will research a host of questions in computer science, law, policy, economics and other fields relevant to advances in AI.

The Future of Life Institute was founded with the mission to save humanity from potential existential threats posed by AI. “Technology has given life the opportunity to flourish like never before… or to self-destruct,” its website notes.

Skype cofounder Jaan Tallinn and MIT cosmologist Max Tegmark cofounded the institute, and its panel of advisors includes Musk, Stephen Hawking and actors Alan Alda and Morgan Freeman.

Grant Funding

The goal of the Future of Life Institute is to ensure that AI doesn’t become a destructive force. Toward that end, it has chosen to fund research into ways of ensuring that the technology truly serves humankind. Study topics include teaching ethics to AI and dealing with AI that breaks laws.

“There’s a growing consensus among AI researchers that powerful AI has incredible potential to benefit society, but that we have our work cut out for us foreseeing and avoiding any pitfalls,” said Jesse Galef, Future of Life Institute spokesperson.

“There’s a whole host of questions to work on. These 37 projects are just the tip of the iceberg,” he told TechNewsWorld. “We hope they’ll be the first of many more, as our program inspires others to take an interest in this challenging new field of AI research.”

Some of the studies that are being funded have rather ominous names — such as “Lethal Autonomous Weapons, Artificial Intelligence and Meaningful Human Control,” led by Heather Roff, a professor at the University of Denver.

The “lethal autonomous weapons” part does sound like something scripted in Hollywood. However, it’s a subject the United Nations recently took up. The key point is that such systems should require “meaningful human control” to be acceptable. How to define “meaningful” is just one of the questions that needs clarification.

Other studies are geared toward less ominous concerns. For example, Michael Webb of Stanford University is studying the economic and social impact of AI in the workplace, and the economic fallout that could result as AI replaces human workers.

Hollywood Cliché or Real Threat

Warnings over the potential danger of AI seem straight from a Hollywood blockbuster, but they have gained traction, with Musk, Hawking and other notables in the tech world raising the alarm. The question is whether such fears are justified, or whether sufficient precautions are being taken.

“The depiction of the negative or deadly sides of technology is such a Hollywood staple that it has become cliché,” said Charles King, principal analyst at Pund-IT.

“Even technology deemed positive often reinforces those same tired stories — as in The Terminator requiring a ‘good’ robot to vanquish a ‘bad’ robot,” he told TechNewsWorld.

That is why comments and efforts like Musk’s appear to reflect those long-familiar themes, King added.

“At the same time, modern technology is the closest it’s ever been to enabling the kinds of devices and capabilities doomsayers imagine,” he suggested. “With individual systems and entire data centers capable of self-monitoring and effectively autonomously repairing themselves, it isn’t difficult to imagine The Terminator’s Skynet coming to pass.”

Unwarranted Fears

However, unjustified concerns could have a negative impact on AI research.

“This entire topic distresses me,” said Jim McGregor, principal analyst at Tirias Research.

“With every generation of technological advancement, there have been fears — usually founded in ignorance,” he told TechNewsWorld. “The Terminator scenario that many fear and that Hollywood has sensationalized is based on the belief that machines will eventually think as humans.”

Instead, the fear should be of human overpopulation, he suggested.

“As Alan Turing put it, it’s not a question of if machines can think, but how they think. While combining artificial intelligence with advanced robotics would be a monumental feat, they would still not think the same way as humans,” McGregor noted.

“Artificial intelligence is the highest form of machine learning, but it is still based on the same technology and the development of advanced algorithms — even if done automatically,” he explained. “At this point, any concerns about machine learning are more founded in far-flung theories and fear than rational thinking.”

Peter Suciu

Peter Suciu is a freelance writer who has covered consumer electronics, technology, electronic entertainment and fitness-related trends for more than a decade. His work has appeared in more than three dozen publications, and he is the co-author of Careers in the Computer Game Industry (Careers in the New Economy series), a career guide aimed at high school students from Rosen Publishing.
