This is meant to be an in-depth look at how the AI makes decisions in Age of Rivals, a strategy card game. This AI is considered to be pretty strong by players, so hopefully this will be useful to anyone approaching the challenge of building a competitive AI opponent for a strategy game.
To anyone unfamiliar with Age of Rivals, a quick video overview can be found at https://www.youtube.com/watch?v=S2WhRlh2tqI.
Briefly, Age of Rivals is a card drafting game in which 2 players compete to build the best ancient civilization. Players draft 16 cards to form their city over the course of 3 rounds and then select their best 8 cards for a final showdown round. Each round also includes a War phase, during which cards attempt to knock each other out, and a Scoring phase, in which players earn the points that ultimately determine the winner. City cards each have various stats and a special ability that can influence the outcome of the game.
The initial goal of the AI was to provide a good learning experience for new players, after which they would transition to playing multiplayer exclusively. But once we realized that a significant number of players wanted to keep playing single player, I kept improving the AI to play at a higher level.
During the course of any AoR game, a player must make two basic kinds of decisions: which cards to draft, and how to assign combat damage.
To make the first decision (what to draft), the AI scores each card across 17 different categories. For each category, a function examines the card and the current state of the game and returns a score between 0 and 1. Then the AI weights each of these results, averages them into an overall score, and drafts the card with the best overall score. Weights for the categories range from 1 (not very important) to 20 (very important).
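The weighted-average scoring pass can be sketched as follows. This is a minimal illustration, not the game's actual code: the category functions, weights, and card fields are all hypothetical.

```python
# Sketch of the draft-scoring pass described above. Each category is a
# function (card, game_state) -> score in [0, 1], paired with a weight
# from 1 (not very important) to 20 (very important). All names and
# fields here are invented for illustration.

def score_card(card, game_state, categories):
    """Return the weighted average of all category scores."""
    total = 0.0
    weight_sum = 0.0
    for scorer, weight in categories:
        total += scorer(card, game_state) * weight
        weight_sum += weight
    return total / weight_sum

def choose_draft(cards, game_state, categories):
    """Draft the card with the best overall weighted score."""
    return max(cards, key=lambda c: score_card(c, game_state, categories))
```

A heavily weighted category dominates the average, which is how a single "very important" consideration can swing a draft pick.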
The 17 categories are:
The first 10 categories are fairly straightforward. The AI looks at the current state of the game (how much culture/armor/attack/economy each player has and what round it currently is) and tries to find the card that will best benefit it.
The last 7 categories attempt to handle Abilities, which are trickier. It is assumed that the cost of each card is fair and accurately reflects its value under ordinary circumstances, so the AI is essentially looking for extraordinary circumstances to take advantage of. For example, if it has a lot of Infantry cards, then Shieldwall is a high-value card, since that ability represents a combo opportunity with Infantry cards.
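A combo category in this style might look like the sketch below. The card tags, game-state shape, and the cap on "relevant" Infantry are all assumptions made for the example.

```python
# Hypothetical combo-category function: returns a 0..1 score that grows
# with the number of Infantry cards the AI already owns, so a
# Shieldwall-style ability scores higher when the combo actually exists.
# Card fields ("tags", "type") are invented for illustration.

def infantry_combo_score(card, game_state, max_relevant=4):
    """Score a card whose ability combos with owned Infantry cards."""
    if "combos_with_infantry" not in card.get("tags", ()):
        return 0.0
    infantry_count = sum(
        1 for owned in game_state["my_cards"] if owned.get("type") == "Infantry"
    )
    # Clamp: beyond 'max_relevant' Infantry, the combo is considered maxed out.
    return min(infantry_count, max_relevant) / max_relevant
```

With no Infantry in hand the card scores 0 in this category, so its other categories alone must justify the draft.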
Categories 11-16 (combos and counters) handle most card abilities, and for a long time that seemed good enough. But it eventually became obvious that for some cards it’s harder to describe what a combo or counter is, so category 17 was added to provide custom logic for those cards. Examples include Plaguebearer, Master Thief and The Oracle. These are cards that you should only buy under very specific circumstances, and if the AI buys one when it shouldn’t, it can ruin the illusion of intelligence completely. About 25 cards have such custom logic.
The first version of the AI probably only had about half of these categories. Whenever I embarked upon a series of AI improvements, I would simply play a game and wait for the AI to do something that seemed obviously sub-optimal, and then I would analyze its logic to figure out why it made that choice. This usually resulted in a re-write of a category, the addition of a new one, or the adjusting of weights.
Combat damage assignment follows a similar pattern, but with only 10 categories. The AI randomly assigns damage and then calculates the total value of the cards left over. It does this 1000 times and then picks the permutation that resulted in the highest total value. I probably could have done this in a more rigorous way, but in practice it seemed to work pretty well so I never improved it.
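The random-restart search described above can be sketched like this. The card fields, the value function, and the "defenders" framing are assumptions; the essential idea is just: sample many random assignments, evaluate the survivors, keep the best.

```python
# Sketch of the random-search damage assignment described above: try 1000
# random assignments, score each by the total value of the cards left
# over, and keep the best one. Card fields are invented for illustration.

import random

def assign_damage(total_damage, defenders, card_value, trials=1000):
    """Pick the damage assignment that maximizes surviving card value."""
    best_assignment, best_value = None, float("-inf")
    for _ in range(trials):
        # Spread the damage randomly across defenders, one point at a time.
        damage = [0] * len(defenders)
        for _ in range(total_damage):
            damage[random.randrange(len(defenders))] += 1
        # Total value of the cards still standing under this assignment.
        value = sum(
            card_value(card)
            for card, dmg in zip(defenders, damage)
            if card["health"] > dmg
        )
        if value > best_value:
            best_assignment, best_value = damage, value
    return best_assignment
```

This is a blunt Monte Carlo approach rather than an exhaustive or branch-and-bound search, but with small card counts 1000 samples cover the space well.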
The 10 combat categories are:
Over the past year I’ve released several sets of category weightings, including ones that favored more combat, more culture, and everything in between. I’ve measured their win rates and removed poorer performers over time, so currently only one set of weights remains in the wild.
So what’s the difference between the Normal and Hard AIs? They both use the same set of weights and the same categories of logic. But the Normal AI just chooses to ignore some of the logic some of the time, randomly. This is an attempt to simulate a human player occasionally missing something.
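This difficulty tweak is easy to sketch: same scoring pass, but each category is randomly skipped some of the time. The 30% skip chance below is an invented illustrative number, not the game's actual value.

```python
# Sketch of the Normal-difficulty variant described above: identical
# categories and weights, but each category may be randomly ignored,
# simulating a human player occasionally missing a consideration.
# The skip_chance value is an assumption for illustration.

import random

def score_card_normal(card, game_state, categories, skip_chance=0.3, rng=random):
    """Weighted category average, with each category randomly skipped."""
    total, weight_sum = 0.0, 0.0
    for scorer, weight in categories:
        if rng.random() < skip_chance:
            continue  # this consideration is "missed" on this pick
        total += scorer(card, game_state) * weight
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0
```

Setting `skip_chance` to 0 recovers the Hard AI's scoring exactly, which is why the two difficulties can share all of their logic and weights.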
So how well does the AI actually perform? Over the last 6 months:
So both AIs become more “beatable” as players get more experience. But the Hard AI in particular has the potential to stay fairly competitive.
But of course there are some players who beat the average and crush the AI 80-90% of the time. They have probably come up with a play style that is particularly effective against the AI's logic and weights, and over time I can try to improve the AI by analyzing their games in particular.
In my opinion, the AI’s main advantages are:
The AI’s main disadvantages are:
Some next steps toward improving the AI could be to analyze games against players with high win rates vs. the AI, teach the AI about more abilities, teach it to be more strategic with the Guaranteed Cards system, and continue testing alternative sets of weights.
And I’ll say it one more time. The AI does NOT cheat in any way whatsoever. It has no access to extra information. It does not manipulate the random draft. It does not know which card you drafted before it makes a decision.
I hope this has been interesting, and I’m happy to answer any questions!