Google Invents an AI System That Plays Video Games on Its Own
Here is the opening of this interesting article about Google:
Google has created the computer equivalent of a teenager: an artificial-intelligence system that spends all of its time playing—and mastering—video games. The company introduced the new development in machine-learning technology on Wednesday, describing it as "the first significant rung of the ladder" to building intelligent AI that can figure out how to do things on its own.
The research project, built by a London startup called DeepMind Technologies that Google acquired last year, exposed computers running general AI software to retro Atari games. The machines were shown 49 games on the Atari 2600, the home console beloved by all ’80s babies, and were told to play them, without any direction about how to do so.
When the computers passed a level or racked up a high score, they were automatically rewarded with the digital equivalent of a dog treat. Google's AI system surpassed the performance of expert humans in 29 games, and outperformed the best-known algorithmic methods for completing games in 43 instances. Some games, like Ms. Pac-Man, can't be easily beaten with a mathematical formula. In others, like Video Pinball, the AI crushed human players, performing more than 20 times better than a professional human game tester.
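The reward-driven learning described here is reinforcement learning. As a minimal, illustrative sketch of the idea, the Python below runs tabular Q-learning on a toy five-cell corridor; DeepMind's actual system (a deep Q-network) replaced the lookup table with a convolutional neural network reading raw screen pixels, but the core loop is the same: act, observe a score-based reward, and nudge value estimates accordingly. All names and parameters here are illustrative, not DeepMind's.

```python
import random
from collections import defaultdict

# Toy illustration of reward-driven learning (tabular Q-learning).
# The agent is never told how to play; it only receives a reward
# (the article's "dog treat") when it reaches the goal.

ALPHA = 0.1    # learning rate
GAMMA = 0.99   # discount factor for future rewards
EPSILON = 0.1  # exploration rate

ACTIONS = ["left", "right"]
q_table = defaultdict(float)   # (state, action) -> estimated value

def choose_action(state):
    """Mostly exploit the best-known action, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(q_table[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q_table[(state, a)] == best])

def update(state, action, reward, next_state):
    """Move the value estimate toward reward + discounted future value."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])

def step(state, action):
    """Five-cell corridor: only reaching the right end (cell 4) pays off."""
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

for episode in range(200):
    state = 0
    for _ in range(100):                      # cap episode length
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        update(state, action, reward, next_state)
        state = next_state
        if done:
            break

# Learned values for moving right grow as the goal gets closer.
print({s: round(q_table[(s, "right")], 2) for s in range(4)})
```

After a couple of hundred episodes the agent reliably heads right, even though nothing in the code encodes that strategy. Scaling this idea from a lookup table to a network over raw pixels is what let a single learning system handle 49 different games.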
The goal of the experiment wasn't to find a better way to cheat at video games. The principle of taking on a task and learning the best solution through trial and error could be applied to real-life scenarios in the future. The system, at its base, should be able to look at the world, navigate around it, and take actions accordingly. One day, Google's self-driving cars could learn how to drive based on experience, rather than needing to be taught, says Demis Hassabis, a co-founder of DeepMind and vice president of engineering at Google. This research marks the "first time anyone has built a single learning system that can learn directly from experience and manage a wide range of challenging tasks," he says.
When IBM’s Deep Blue computer finally beat World Chess Champion Garry Kasparov in a six-game contest in 1997, it was being programmed between games by computer specialists, some of whom were also chess champions. Today, Google’s AI system learns on its own, and I would love to know how many games of chess it would have to play to reach grandmaster standard.