The following blog post, unless otherwise noted, was written by a member of Gamasutra's community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
Battle of the Bulge is a turn-based, Euro-style board game for the iPad. It simulates a historic asymmetric battle and features a variable number of turns per game day. The goal conditions are complex and, until late in development, were still in flux. Yet, much like the Allies on that cold December 16th morning, with the odds stacked against them, the Agents of the Bulge Artificial Intelligence (AI) must claw their way to a Win State and survive to fight in the final release.
My goal is to walk you through some of the pitfalls encountered in dealing with this problem, and distill the answers and techniques I used to create the Bulge AI architecture. The code is not revolutionary; it implements decision making using Utility Normalization, a concept Dave Mark covers in depth on his site and in his articles. The idea is to be able to score and compare a variety of actions in a way that is intuitive and modular.
This is achieved using a top-down, three-tier approach that focuses on loose coupling, with the end goal of creating an architecture that requires minimal support from the programming team.
Control Yoke (AI Controller) - Interacts with the Interface, requests Decisions
Strategic Brain - Stores, Gathers, and Evaluates Tactical Nodes
Tactical Nodes - Evaluate and Store the Raw Utility values of selecting this Node
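The post does not include source, so here is a minimal sketch of how those three tiers might relate to each other. All class and method names are my own assumptions, not the shipped code:

```python
class TacticalNode:
    """Evaluates and stores the raw utility values of selecting one candidate action."""
    def __init__(self, action):
        self.action = action
        self.scores = {}  # criterion name -> normalized utility (0.0 - 1.0)


class StrategicBrain:
    """Stores, gathers, and evaluates Tactical Nodes against this Agent's weights."""
    def __init__(self, weights):
        self.weights = weights  # criterion name -> weight (per-Agent tuning data)
        self.nodes = []

    def gather(self, nodes):
        self.nodes = list(nodes)

    def best_node(self):
        # Weighted sum of each node's criterion scores; highest total wins.
        return max(self.nodes, key=lambda n: sum(
            self.weights.get(c, 0.0) * v for c, v in n.scores.items()))


class ControlYoke:
    """Interacts with the game interface and requests decisions from the brain."""
    def __init__(self, brain):
        self.brain = brain

    def decide(self, legal_actions, score_fn):
        nodes = []
        for action in legal_actions:
            node = TacticalNode(action)
            node.scores = score_fn(action)  # evaluate this node's utilities
            nodes.append(node)
        self.brain.gather(nodes)
        return self.brain.best_node().action
```

Because the Yoke only sees legal actions and a scoring callback, the tiers stay loosely coupled: swapping an Agent is just swapping the weight table handed to the brain.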
"My Agent will eat your breakfast, burn down your house, and kick your dog."
Modeling the Player not the Win Condition
Very early on I wanted the concept of an AI Agent as just another player: "Face-to-Face" games and Solitaire games would be identical experiences. This raises the question, "What is winning?", and until I can give code access to Tiger Blood, my answer usually starts with the ability to troll players. Building on that, I created a concept of what types of players/strategies would be fun to play against:
Genius - Classic "Best" player, makes all the right moves
Coward - Avoids Confrontation
Psycho - Seeks Confrontation, Makes Risky Choices
Defender - Clusters Units, Gains and Holds Defendable Locations
Exploiter - Seeks Enemy Weakness
Comedian/Troll - Actively goes against normal win conditions, forcing the player out of their comfort zone.
Many designers start with the "Genius" and quickly run into the wall of it never being good enough, then stop, leaving the player to deal with a subpar opponent and an architecture hard-coded to fail. These archetypes became a springboard to deeper thoughts about what actually goes into playing Bulge.
What it comes down to is this: by not attempting to define a single "best" move, I could see what moves an Agent COULD make, and what each move could mean to a player. Focusing on archetypes over win conditions means freedom from overdone difficulty mechanics, and forces loosely coupled Agent design from the first line of code.
Building a Control Yoke
Amusingly, the first lines of code had nothing to do with good (cruel) gameplay or strategy. Having decided to implement the AI as a player, and in the interest of reducing code duplication, the top level was designed to automate a human player's interactions with the interface: in essence, the human player's equivalent of the Head and Hands. If you prefer, it is the AI's controller, allowing it to interact with the game.
The Control Yoke was implemented as a Hierarchical Finite State Machine (HFSM) based on game state. Each game state was broken into an FSM consisting of:
Rising Edge - Initial state prepares AI internals for action this game state
Idle - Gather information about world and game state
Decide Action - Pass Gathered information to decision maker
Commit Action - Perform function that has been decided upon
Wait - Filler state to provide output while waiting for Animations to complete
Not all states were always used. This allowed for flexibility as features were added or moved. This structure was also good for recovering from failure. If an error was encountered at any step, the state could be kicked back to the Rising Edge to ensure the AI was working with clean data.
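The five steps and the kick-back-to-Rising-Edge recovery described above can be sketched as a small state machine. This is a minimal illustration under my own assumptions (names and the tick-driven structure are mine, not the shipped code):

```python
from enum import Enum, auto


class Step(Enum):
    RISING_EDGE = auto()    # prepare AI internals for this game state
    IDLE = auto()           # gather information about the world
    DECIDE_ACTION = auto()  # pass gathered information to the decision maker
    COMMIT_ACTION = auto()  # perform the chosen function
    WAIT = auto()           # filler while animations complete


class GameStateFSM:
    """One FSM inside the Control Yoke's HFSM, driving a single game state."""

    ORDER = [Step.RISING_EDGE, Step.IDLE, Step.DECIDE_ACTION,
             Step.COMMIT_ACTION, Step.WAIT]

    def __init__(self, handlers):
        # handlers: Step -> callable; a missing handler means the step is skipped,
        # which gives the flexibility to not use every state.
        self.handlers = handlers
        self.step = Step.RISING_EDGE

    def tick(self):
        handler = self.handlers.get(self.step)
        try:
            if handler is not None:
                handler()
        except Exception:
            # Failure recovery: kick back to Rising Edge so the AI
            # restarts this game state with clean data.
            self.step = Step.RISING_EDGE
            return
        i = self.ORDER.index(self.step)
        self.step = self.ORDER[(i + 1) % len(self.ORDER)]
```

Since the machine advances one step per real-time tick, inserting the 1-2 second debugging delay mentioned below is just a matter of throttling how often `tick()` is called.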
A fun feature of the Control Yoke is that, because it runs in real time, each state change can be given a time delay. Creating a 1-2 second delay on each state change made debugging a breeze. Not only was I able to read all the console output, I could introduce "Noise" or undo effects I did not like using the existing game User Interface.
The next major benefit of abstracting the AI at the player level was that instantiating a second AI Control Yoke was pretty trivial. Having AI vs AI play not only increased the number of tests per day, but provided the additional stress on the UI to expose some pretty important bugs to fix.
Moving Randomly with Purpose
At this point, no smarts have gone in yet. The "Decide Action" state literally picks a Random Legal Action and executes it. All is going according to plan. In order to provide measurable success to stakeholders the following goals were set:
Make Random Legal Moves (Random)
Perform Moves with Purpose (Tactical)
AI tries to win (Tactical)
AI prioritizes moves (Strategic)
Externalize an Agent that represents the "Genius" Type (Archetype)
Externalize Agents that challenge the player (Additional Archetypes)
This made for great demos because the aforementioned AI vs. AI matches could be set up with each new iteration making short work of the previous. The real win of being forced to justify each step along the way was keeping the scope of the changes in check.
Designing Utility Functions
Utility was the best choice for Bulge because of its flexibility. As an asymmetric game, each faction's strategy would need to be vastly different. Determining whether an action led to a win condition was made difficult by complex movement and combat rules that encourage pins and grinding down.
Here's where the Utility function comes in. Given certain criteria, it scores how useful an action would be. This is especially helpful when none of the available actions absolutely leads to a Win State. By choosing and accumulating actions based on weighted criteria, the Agents can not only track toward their goals, but react to situations as they arise.
The easiest way to think of Utility is with the question, "If I only care about XXX, how do I rate my options?" XXX can be anything: Moonshine, Vin Diesel Movies, or useful things like bullets and health. In my example, however, I will be using Apple Pie.
The Utility Function has done its work and returned values for each Action. The numbers should make sense to a casual observer and be easy to explain. If a result is hard to explain, the function may be doing too much.
To reinforce this idea, notice that the "plain pie" and "a la mode pie" return the same score. This is because this is not a test for decadence (where the a la mode would score higher), or edibility (I am lactose intolerant), or any other factor that makes one option better than the other. It is important that Utility functions tell the Agent one specific thing about the world. Other utility functions will boost or decrease the utility of a node as they are evaluated.
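A single-purpose utility function from the pie example might look like the sketch below. The data and function are hypothetical illustrations of the principle, not code from the game:

```python
# Hypothetical scorer: it rates options ONLY on how much apple pie they
# contain. Decadence and lactose content are deliberately ignored --
# separate utility functions tell the Agent about those.
APPLE_PIE_CONTENT = {
    "plain apple pie": 10,
    "apple pie a la mode": 10,  # same amount of pie, so the same score
    "apple": 5,
    "pi": 3,
}


def apple_pie_utility(option):
    """Raw (pre-normalized) integer utility for one specific criterion."""
    return APPLE_PIE_CONTENT.get(option, 0)
```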
Yet how can "Decadence" be compared to "Lactose Intolerance"? This is done with value Normalization: making sure the number lies between 0.0 and 1.0. There are lots of different ways to accomplish this; I'll be using the simplest:
Node Value - Smallest Value
---------------------------------------------------- or (Node - Min) / (Max - Min)
Largest Value - Smallest Value
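That formula translates directly to code. This sketch adds one guard of my own for the degenerate case where every option scores the same:

```python
def normalize(value, smallest, largest):
    """(Node - Min) / (Max - Min): map a raw node value onto 0.0 - 1.0."""
    if largest == smallest:
        # Every option scored the same; there is nothing here to prefer.
        return 0.0
    return (value - smallest) / (largest - smallest)


def normalize_all(raw_values):
    """Normalize a whole set of node values against its own min and max."""
    lo, hi = min(raw_values), max(raw_values)
    return [normalize(v, lo, hi) for v in raw_values]
```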
All my pre-normalized values are integers to help prevent incorrect mixing of data sources. This simple function does have pitfalls to be leery of. The first is range:
This happened early in development, where, in an attempt to make a certain goal the MOST important, all other values were rendered worthless. This can be a valid design, but make sure it is intended behavior.
Another pitfall, related to the previous one, is creating false positives from equally poor choices. Having other utility functions, cutoffs, and weights helps reduce the occurrence of this. Now that all the numbers are comparable, a weighted sum reveals the best action to take.
Problem: Miguel wants Apple Pie… very badly.
i.e.: (Apple Pie Value * 1.0) + (Decadence * 0.25) - (Lactose Intolerance * 0.5)
Node: Plain Apple Pie = 1.00 (1.0 + 0.0 - 0.0)
Node: Apple Pie Ala Mode = 0.75 (1.0 + 0.25 - 0.5)
Node: Apple = 0.5 (0.5 + 0.0 - 0.0)
Node: Pi = 0.425 (0.3 + 0.125 - 0.0)
At the core, utility breaks down large goals (Win the Game) into small sub goals (Take Enemy ground + Don't die + Win battles).
Ideally, utility functions are created with information from subject matter experts. Yet this does not have to be the case. Start with a best guess and work from there. It was really fun to learn how to play Bulge alongside the AI. Experimentation, observation, and failure built the foundation of my functions, but it was through sharing my failures early that I was able to gain the insight into creating a finished function.
Battle of the Bulge ships Dec 13th, 2012, with 8 Agents. More are planned for future releases. Each Agent has a personality and distinct strategy that was achieved through tuning an external script file.
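The post does not show the shipped script format, but an externalized Agent definition in that spirit might look something like this. Every field and weight name here is hypothetical:

```json
{
  "name": "Psycho",
  "description": "Seeks confrontation, makes risky choices",
  "utility_weights": {
    "seek_confrontation": 1.0,
    "risk_tolerance": 0.8,
    "hold_ground": 0.1,
    "preserve_units": -0.2
  }
}
```

Because the weights live in data rather than code, a designer can create or retune an Agent's personality without touching the programming team.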
Using this architecture, compelling agents can be built quickly, iterated upon, and updated without any input from a programmer. It is flexible enough to grow as our understanding of the game grows, and should carry forward into future projects that use the same basic system.
- No complicated searches or tree-structures
- Can cope with extreme / strange situations
- Modular. Externalizing Utility Weights can create huge variety in behavior
- Doesn't need to win, just get closer to the goal
- Logic is fuzzier, so for better or worse, the developer can be surprised by outcome
- Tuning can be time-consuming
There are many different ways to implement and normalize Utility. What matters is understanding that the numbers are meant to map to behaviors. Behaviors can be more than winning, and when reinforced by solid Archetypes, a player can forget the artificiality of their opponent. This moment, where fiction and reality converge, sticks in the minds of players, urging them to discover new facets of their opponent. By seeking more than just a "Win State", Artificial Intelligence can create new memorable experiences.
Thanks so much for your support of Bulge, and please drop me a line if you have any questions or want to know more!