Photo from Pierre Lecourt/Flickr
Google DeepMind, formerly DeepMind Technologies, has developed an artificial intelligence, or AI, so advanced that it can beat Atari games from the 1980s. The findings were published in the journal Nature on Feb. 25, 2015.
Neuroscientist Demis Hassabis, along with Shane Legg and Mustafa Suleyman, began development of DeepMind in 2011. Over the course of three years, DeepMind Technologies grew to around 75 employees.
In 2014, Google bought the company for several hundred million dollars and renamed it Google DeepMind.
DeepMind can actively learn while playing a game because it constantly receives feedback on how it is doing in the form of ones, zeroes and negative ones. In general, if the game’s score increases, or the computer at least avoids dying in the game, it receives a one. The system is programmed to maximize the number of ones it receives and minimize the number of negative ones.
The idea has long worked for living beings; for example, you give a dog a treat as an incentive to learn a new trick. This, however, is among the first times the approach has been implemented so successfully in a computer.
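The reward scheme described above is known as reinforcement learning. A minimal sketch of the idea in Python appears below; the tiny “game,” its states and its actions are hypothetical stand-ins for illustration only, not DeepMind’s actual pixel-based system, which uses a deep neural network rather than the simple lookup table shown here.

```python
import random

random.seed(0)  # make the toy example reproducible

# Toy stand-in for an Atari game: +1 for scoring, -1 for "dying", 0 otherwise.
ACTIONS = ["left", "right", "stay"]

def play_toy_game(state, action):
    """Return (next_state, reward) for one move in the toy game."""
    if action == "right" and state < 4:
        return state + 1, 1      # moving right scores a point
    if action == "left" and state == 0:
        return state, -1         # walking off the edge: "dying"
    return max(state - 1, 0), 0  # anything else: no reward

# Q-learning: from reward alone, learn which action is best in each state.
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    for _ in range(20):
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = play_toy_game(state, action)
        # Nudge the estimate toward reward plus discounted best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the agent prefers the scoring move in the starting state.
print(max(ACTIONS, key=lambda a: q[(0, a)]))  # → right
```

The agent is never told the rules; it simply tries actions, collects ones and negative ones, and gradually favors whatever earned the most ones, which is the same incentive structure the article describes.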
Deep Blue, a computer developed by IBM, beat chess grandmaster Garry Kasparov 3½–2½ in 1997. However, it could win at chess only because it was designed solely to play chess. Presented with any other game, it would have little to no idea what to do. DeepMind was created without any specific game in mind. It was designed to learn and adapt to new situations.
“The only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself,” said Hassabis.
According to the British Broadcasting Corporation, Google tested DeepMind against 49 Atari games, ranging from “Pong” to boxing games. In games such as “Breakout,” it performed comparably to or better than professional human players, while it seemed to struggle in others, such as “Pac-Man.” In every game, it started out not knowing how to play and, over the course of a few hours, became as proficient as humans or better.
The Verge reported that DeepMind performed better than previous AI systems in 43 of the 49 games.
Google has a reason for testing the AI against video games. The games, even comparatively simple ones from the 1980s, are unpredictable and chaotic, especially when the computer begins interacting with a game that it has never seen before and does not know the rules to.
By designing an AI that can learn how to adapt to a new situation and beat a game without a preprogrammed solution, scientists have gotten closer to creating a computer that can adapt to the real world. As Hassabis said, “You can’t preprogram every eventuality that might happen.”
To some people, this may bring up images of “Terminator” or “The Matrix,” but Google has two safeguards against an apocalyptic rise-of-the-machines scenario. It has stated that it will not go into military research, and as per the conditions of the acquisition, Google has started an AI ethics committee to ensure that the technology is not mishandled.
The technology is not without its critics, though. According to the British Broadcasting Corporation, physicist Stephen Hawking said last December that full artificial intelligence “could spell the end of the human race.”
For now, though, Hassabis is not focusing on making a Terminator, a self-driving car or a sentient Siri. The next hurdle for his team is refining DeepMind so it can beat a game from the 1990s.