DeepMind releases an AI learning environment for the card game Hanabi
Working alongside Google’s AI research team, Google Brain, researchers at DeepMind have released an AI training environment for the cooperative card game Hanabi.
Video games and board games have proven to be excellent tools for AI researchers to teach and observe AI agents, thanks to the subtly complex tasks that make up a round of play. In the case of Hanabi, the card game offers its players only imperfect information and requires them to work together toward a collective victory with that limited knowledge, making it a promising environment for AI research.
Hanabi itself, as outlined in a paper on the AI learning environment, is a sort of cooperative solitaire. Each player is dealt a hand of cards, but, while players can see the hands of those around them, they are unable to see the cards they themselves drew. Instead, everyone works within the rules of the game to share information on those hidden hands, and collaboratively play cards to win the game.
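That hidden-hand-plus-hints mechanic can be sketched in a few lines of Python. This is a hypothetical simplification for illustration only, not DeepMind's actual environment or the full Hanabi ruleset; the function names and rules here are assumptions made for the sketch:

```python
import random

COLORS = ["red", "green", "blue", "white", "yellow"]
RANKS = [1, 2, 3, 4, 5]

def deal_hand(rng, size=5):
    """Deal a hand of (color, rank) cards, which stays hidden from its owner."""
    return [(rng.choice(COLORS), rng.choice(RANKS)) for _ in range(size)]

def give_hint(hand, attribute, value):
    """Return the positions in `hand` matching a color or rank hint.

    In Hanabi, a hint must point out *every* card in the hand matching
    the chosen color or rank, which is what makes hints so informative.
    """
    index = 0 if attribute == "color" else 1
    return [i for i, card in enumerate(hand) if card[index] == value]

rng = random.Random(0)
hand = deal_hand(rng)  # the owner never inspects this directly
positions = give_hint(hand, "color", "red")
print(f"Teammate's hint: the cards at positions {positions} are red")
```

The owner of the hand only ever sees the hint output, never the cards themselves, so each hint narrows down what the hidden cards could be, which is the core inference problem the game poses.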
Researchers say the combination of cooperation, imperfect information, and limited communication found within the game provides AI agents with an excellent opportunity to grasp what they call ‘theory of mind’, or the ability to recognize mental states, including thoughts and intentions, in themselves and others.
Other games like StarCraft II and some Atari 2600 titles have been the subject of DeepMind research in the recent past, including a recent bout between a DeepMind AI agent and professional StarCraft II players. AI-minded devs curious about DeepMind’s recent work with Hanabi can find the learning environment in its entirety on GitHub.