Researchers at Google’s DeepMind subsidiary in England have developed an artificial agent they call a “deep Q-network” that learned to play 49 classic Atari 2600 arcade games by just diving in.
The DQN algorithm performed at more than 75 percent of the level of a professional player in more than half the games, and came up with far-sighted strategies that let it achieve the maximum attainable scores in certain games, such as Breakout.
It outperformed previous machine-learning methods in 43 of the games.
DQN was able to work straight out of the box across all the games, given only the raw screen pixels, the set of available actions, and game score.
“There is nothing bespoke about the system that’s tailored to Atari games, specifically; they’re just a very handy and useful testing ground,” said Demis Hassabis, VP of engineering at Google DeepMind.
“The idea is for future versions of this system to be able to generalize to tackle any sequential decision-making problem even when the input data is very high-dimensional,” he told TechNewsWorld.
How DQN Works
DQN is the first system to combine the power of deep neural networks with reinforcement learning in a scalable way, and several key features made that possible.
Reinforcement learning, or RL, is an area of machine learning in which, unlike standard supervised learning, correct input/output pairs are never presented and suboptimal actions are never explicitly corrected. Instead, the agent must strike a balance between exploration of uncharted territory and exploitation of its current knowledge. In other words, it's a sink-or-swim approach.
A feature called "experience replay" played a key role in combining deep neural networks, or DNNs, with RL. During the learning phase, DQN was trained on samples drawn from a pool of stored episodes, similar to what's physically realized in the brain through flashbacks or dreams. That was critical to DQN's success — disabling it led to a severe deterioration in performance.
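A minimal sketch of the idea, assuming a fixed-capacity buffer with uniform random sampling (the class name and transition layout are illustrative, not DeepMind's actual code): transitions are stored as they occur, and training draws random minibatches from the pool rather than learning only from the most recent frame, which breaks the correlation between consecutive experiences.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size pool of past transitions for experience replay."""

    def __init__(self, capacity):
        # deque with maxlen: the oldest experiences fall out automatically.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, decorrelating consecutive frames.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Usage: fill with dummy transitions, then draw a training minibatch.
random.seed(0)
buf = ReplayBuffer(capacity=100)
for t in range(150):                 # exceeds capacity: oldest 50 are dropped
    buf.add(t, t % 4, 0.0, t + 1, False)
batch = buf.sample(32)
print(len(buf), len(batch))          # 100 32
```

The replayed minibatch stands in for the "flashbacks" described above: old experiences keep getting revisited long after they occurred.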
Rise of the Machines?
The fear that machines could take over our world has been expressed not only in sci-fi stories such as The Terminator and Battlestar Galactica, but also by visionaries such as Stephen Hawking, Elon Musk and Bill Gates — all of whom have expressed concerns about untrammeled research into artificial intelligence.
Ironically, Musk had invested in DeepMind, which was purchased by Google in January of 2014, precisely because of his concerns over the dangers of AI.
“There should be ways in which there’s control by humans,” Frost & Sullivan analyst Mukul Krishna told TechNewsWorld in an earlier discussion on this issue. “The human world is extremely nuanced. There are too many abstractions that are not logical.”
Where AI Will Come in Handy
In certain scenarios, such as in medicine and in the financial markets, AI might provide an advantage, said Dan Kara, practice director for robotics at ABI Research.
For example, diagnostics could be supported by opinions and outcomes drawn from very large databases of historical cases that are analyzed, connected to others worldwide, and continually updated.
The stock markets already use AI tools and techniques for predictive modeling and for making complex trades, but they “will be dwarfed by future systems” that will be able to learn as they go, Kara told TechNewsWorld.
Those programs will be able to learn from other trading systems, as well as draw conclusions from seemingly unrelated events occurring throughout the world, Kara suggested.
Miles to Go Before Humanity Sleeps
Forget the Terminator scenario for now, said Jim McGregor, principal analyst at Tirias Research. It “would require advanced robotics combined with a machine capable of supporting a very high number of unimaginably complex algorithms and unlimited memory, especially for real-time interactions.”
In the near term, research on AI will lead to a re-evaluation of how we architect and program computing solutions “that includes everything from the silicon up through the networks,” he told TechNewsWorld. In the long run, it will lead to advancements in “everything from medicine to space travel.”