AI researchers at Google’s DeepMind have trained an artificial intelligence agent on Quake III and published a paper on how it performed in matches with and against humans.
Video games can be a useful tool for researchers looking to evaluate the capabilities of AI technology, and several titles, including StarCraft II and Dota 2, have attracted the attention of companies like DeepMind and OpenAI.
As outlined in a Science paper spotted by Kotaku, researchers at DeepMind trained the AI agent “For the Win” on thousands of hours of Quake III play data with the goal of demonstrating the technology’s ability to function in cooperative environments.
The company broke down the whole process in a post on DeepMind's blog last year, and the Science paper published today is the result of those efforts.
Playing in a modified version of Quake III, with its inputs slowed to human levels, DeepMind's For the Win agent was able to beat human players in two-on-two capture-the-flag matches by an average margin of 16 captures. The researchers note that, even after 12 hours of practice, human players won only around 25 percent of their matches against AI agents. When a team of two AI agents played against a mixed team of one human player and one agent, the mixed team won roughly 5 percent more of its matches than the all-human teams did in the strict AI-versus-humans matchup.
"Artificially intelligent agents are getting better and better at two-player games, but most real-world endeavors require teamwork," reads the paper. "The agents were trained by playing thousands of games, gradually learning successful strategies not unlike those favored by their human counterparts. Computer agents competed successfully against humans even when their reaction times were slowed to match those of humans."