[In this design analysis, originally published in the September 2010 issue of Gamasutra sister publication Game Developer magazine, Spore/Civilization IV designer & programmer Soren Johnson looks at the fallibility of AI and how games should tailor themselves to its capabilities.]
"My experience with Empire: Total War is this parabola of fondness. At first I don’t like it, so I’m at the bottom of the curve. I don’t like it because they do a terrible job with their documentation – it’s got a terrible manual; they want you to play through this scripted campaign if you want to learn anything; the tool-tips are really screwy. So, I’m hating it.
But then I’m playing it, and I’m learning it, and I’m liking it, so I’m climbing up that parabola. At the very top of the curve, I think, “Hey, I sort of figured it out. I like this game.” But then I start to discover that the AI is terrible, that it’s a dumb game, and I start coming down the far end of the parabola, and I am no longer fond of Empire: Total War.
Commonly, there’s this curve where I enjoy a game, and then I master the system, and then – unless it’s got a good AI – I lose all interest because I realize that mastering the system is where the challenge ends. Once I reach that point, the game is dead for me, and I hate that! That’s when the game should really start to take off."
Many veteran gamers will recognize this feeling from their own experiences – the rising enjoyment that comes from learning an interesting game system followed by an inevitable deflation as the challenge slowly disappears.
Sometimes, a simple technique or exploit becomes obvious that renders the rest of the game balance irrelevant. However, usually the culprit is a weak adversary, as the artificial intelligence cannot grasp certain core game mechanics well enough to offer the player a robust challenge. The problem is that the game’s designers have made promises on which the AI programmers cannot deliver; the former have envisioned game systems that are simply beyond the capabilities of modern game AI.
Symmetry Matters
Still, not all games suffer from the Chick Parabola. Many are so fundamentally asymmetrical – Super Mario Bros., Grand Theft Auto, World of Warcraft, Half-Life – that the AI is simply a speed bump that can be easily tuned to provide the right level of challenge. The games which suffer the most are ones where the computer is forced to play the same game as the human.
These symmetrical games – StarCraft, Street Fighter, Puzzle Quest, Halo – have a unique challenge in that each game mechanic must not simply be judged on its own merits but also by asking whether the AI can reasonably understand the option and execute it successfully. Unfortunately, asking this question often disqualifies many interesting ideas.
Artificial intelligence is notoriously poor at handling issues of trust and betrayal, of long-term investments, of multi-front wars, and of avoiding traps obvious to any human. The question of trust, in particular, has torpedoed multiple attempts to make a viable single-player version of the classic board game Diplomacy, which relies so acutely on being able to read one’s enemies, one’s friends, and one’s supposed friends.
Thus, to avoid the Chick Parabola, designers of symmetrical games must weigh carefully the implications of various game mechanics. An interesting play option which over-taxes the AI runs the risk of making the game more interesting in the short-term – as the player learns the system – but less interesting in the long-term – once the player masters the system and can use the mechanic to run rings around the artificial intelligence.
Of course, designers of symmetrical games built primarily for multi-player – such as the Battlefield series or the fighting genre – can choose to sacrifice single-player longevity for multi-player depth. Non-conventional weapons are fine if we assume that veterans of the game are only interested in playing the game with each other.
The human brain is remarkably flexible, with the ability to easily process novel mechanics which are orthogonal to the rest of the game. This approach has many advantages; Valve has been able to radically change the multi-player-only Team Fortress 2 with each character update (giving the Demoman a sword and shield, for example) without having to worry about toppling over an increasingly rickety AI.
Designing for the AI
However, symmetrical single-player games need to be designed as much for the artificial intelligence as for the humans themselves. Even if painful, designers must be willing to leave some of their most orthogonal – and often most creative – ideas off the table for the sake of the AI. Game design is a series of trade-offs, and empowering the AI is important for avoiding the downward slope of the Parabola.
Nonetheless, creative developers can solve this problem at the design stage before it even reaches some doomed AI programmer. One game mechanic that pushed Chick over the edge with Empire: Total War was amphibious invasion. The AI was simply incapable of coordinating its land and naval forces together to launch a coherent and effective invasion of an overseas target. Smart players would quickly learn that if the AI could not attack amphibiously, then the strategic balance could be gamed easily. Maybe England’s troops are not such a threat after all?
This problem is not unusual; strategy games with transportation units almost always suffer from ineffective artificial intelligence. Coordinating land and naval units to be ready in the same place and at the same time – along with the necessary escort ships – is a non-trivial task.
Rise of Nations, Big Huge Games’ historical RTS, presented a blunt but effective solution to this problem: land forces that approach the shore simply turn into boats to carry themselves across the water. Once they reach their destination, the boats transform back into the original land units. No transportation ships ever needed to be built or managed at all.
With one simple stroke, Brian Reynolds, the game’s designer, removed a classic AI problem from the game, enabling water maps to remain interesting for veteran players. The design may have sacrificed the “realism” of requiring the player to build transport ships along with other naval units, but the upside was extending the game’s longevity significantly.
Furthermore, many design changes meant to bolster the AI by simplification often have the side effect of making the game itself more enjoyable for the player. Quite a few players did not miss having to build and herd transports in Rise of Nations.
Civilization 3 and Civilization 4 introduced global unit support and city production overflows, respectively; both changes helped the AI manage its resources but also made the game more enjoyable for the average player by drastically reducing micromanagement.
Tough Choices
The designer’s biggest challenge comes when a mechanic which is demonstrably fun or core to the game’s theme needs to be simplified or dropped. Occasionally, a game can get away with assuming that a certain option will be human-only; in the original Civilization, Sid Meier added nukes to the end-game but didn’t allow the AI to use them.
He reasoned that because the super-weapon came only at the end of a game with such scope, players who used them were not abusing the game; they were simply having a bit of crazy fun at the end.
Further, if the designer wants to maintain a mechanic that the AI can’t use, cheating is not a viable solution for balancing away the AI’s disadvantage. Allowing too many human-only systems effectively turns a symmetrical game into an asymmetrical one, which will eventually affect the strategic balance.
In the Empire: Total War example, once players know that the AI will never launch an effective amphibious invasion, the rest of the game changes immediately. Maybe players don’t need to bother defending their coastal territories?
Maybe land-based allies are more important than water-based ones? Maybe the AI can be tricked into wasting its resources on futile invasions? Most importantly, the player is no longer playing like a queen – she is playing like a gamer who knows that the AI doesn’t work, one who is on the downhill side of the Parabola.
Ultimately, the designer may have to make a tough choice – drop a beloved mechanic or risk shortening the replayability? Many options do exist to extend a game’s longevity outside of pure balance – scripting a variety of scenarios, supporting procedural content generation, providing robust mod support, developing post-release content, and so on.
However, for robust replayability, nothing compares to pure strategic depth with a competent computer opponent. Sacrificing the game’s longevity to provide a few moments of fun for the human is essentially eroding the design at the foundation. As Chick puts it, when the player finally learns a system, “That’s when the game should really start to take off.” The joy of learning is a big reason why games are fun, but no one wants to study for a test which doesn’t exist.
[For those interested in receiving similar columns from Johnson and other major columnists monthly as part of the world's leading magazine for video game creators, Game Developer is currently available for print subscription, and also in web-readable and PDF form via the Game Developer Digital service.]
I think "design around the limitations of the AI" should be a last resort.
The real question is why we have so many limitations to work around in the first place.
It would be better to make AI more central to the development process, hire designers who know how to design AI, and ensure that we have the support we need on the AI development side to make it happen -- a large enough AI development staff, a stable underlying engine, a realistic schedule, a solid development process, and the executive producer's unflagging support.
Very few of the problems we face in modern AI development are technologically intractable. Nearly all of them are basic production issues.
I am not arguing for infinite AI development budgets and schedules. I have never worked a "when it's done" type of project, nor would I ever do so.
What I am saying is simply that while I agree with Soren that design should fit the AI, the bigger question is why we so often find ourselves constrained by AI, often unnecessarily so.
@Paul: production issues certainly play a part, but I think a bigger problem is that the vast majority of players (and game testers) may not want better AI!
On the production side, the issue is that true AI is generally difficult to implement, as it makes NPC behaviour unpredictable: in turn, this means the game is very difficult to debug and tune, increasing the cost and time of completing production. A further issue is that modern games tend to be realtime *and* involve dozens (if not hundreds) of separate entities: we may have multi-core CPUs and gigabytes of RAM to play with, but simultaneously managing all of these entities still takes up considerable amounts of resources.
Even if a developer does manage to get out a true AI before the project overruns and gets canned, the AI is liable to be able to handily defeat the vast majority of human players, thanks to a combination of a perfect memory, a complete overview of all elements of the game (e.g. troop positions, health, weapons, etc) and the ability to issue commands far more quickly and accurately than even the quickest South Korean Starcraft player. The only solution is to cripple the AI... and this takes us back to where we are today: deterministic, resource-light finite state machines with a few optional modifiers to allow the difficulty level to be adjusted.
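To make the "deterministic, resource-light finite state machines with a few optional modifiers" concrete, here is a minimal sketch in Python. All the names and numbers are invented for illustration; no actual game works exactly this way:

```python
import random

# A toy deterministic FSM enemy: same inputs always produce the same
# state transitions. The only tunable non-determinism is a single
# accuracy roll used as a difficulty modifier.

class GuardAI:
    def __init__(self, difficulty_accuracy=0.5):
        self.state = "patrol"
        self.accuracy = difficulty_accuracy  # the difficulty knob

    def update(self, sees_player, health):
        # Deterministic transitions: trivially debuggable and tunable.
        if self.state == "patrol" and sees_player:
            self.state = "attack"
        elif self.state == "attack" and health < 25:
            self.state = "flee"
        elif self.state == "flee" and not sees_player:
            self.state = "patrol"
        return self.state

    def shot_hits(self):
        # Raising or lowering accuracy is the "optional modifier"
        # that adjusts difficulty without touching behaviour.
        return random.random() < self.accuracy
```

The appeal for production is obvious: a bug report of "the guard never flees" can be reproduced exactly, which is much harder with learning or planning systems.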
Personally, I quite like games with deterministic AI - Advance Wars on the GBA is a case in point. The AI prioritises destruction of APCs (which carry fuel/ammo reloads) above all other activities: as these vehicles are both cheap and relatively tough, they make for ideal distractions when trying to protect vital units and bottlenecks.
Indeed, once you've learned how the AI responds to your actions, the game evolves into something of a stylised dance: piece A goes here to counteract piece X, piece B wounds piece Y and piece C follows up with a devastating attack on piece Z. In many ways, it's akin to slotting bricks home in Tetris...
The "our players may not want better AI" approach is a very convenient, trite cop-out. If players didn't want better AI, they wouldn't bitch about how badly our AI sucks.
Too often we use these cute little sayings to justify to ourselves (as an industry) why it is OK that we haven't solved a particular AI problem... or on a company level, why we aren't spending the time and money to create better AI in our game. I'm sure Aesop would be very amused by this high-tech version of "The Fox and the Grapes".
Is there a place for shooting-gallery AI or the "AI as puzzle" approach? Sure. But at that point, you aren't creating AI but rather seeding a game of Tetris with art assets that look like sentient agents. Does that even remotely make sense in a strategy game, FPS, or most RPGs? No.
Also your claim that good AI is not desirable because it will beat the player is a strawman. We aren't creating Skynet... as AI developers we will still have control over our creations. That said, in order for an AI to "fail", it doesn't have to be deterministic or predictable.
It is quite possible to create good-looking, reasonable-acting AI that provides the appropriate (non-Tetris) challenge, doesn't do anything glaringly stupid, and yet still loses. Often, it can be as simple as nerfing the weapon that the AI has. If the opponent hits the player just as often as the player hits him, but his weapon is 90% the strength of the player's, guess who is going to win? A mathematical subtlety like that in the game mechanics is not apparent to the player, whereas stupid behavior certainly is.
Game AI programmers have ALWAYS known that our job is to optimize our AI for entertainment value rather than simply making it more challenging.
So that's not even what I was referring to. It's not even a discussion among game AI developers because we all take it as a basic and blindingly obvious assumption.
When I refer to "limitations" in AI, I'm referring to things that the industry could have solved a long time ago if we'd put our minds to it. We still have multimillion-unit-selling, "AAA" titles where characters get stuck jittering in behavioral state thrashing, get stuck on walls or can't find simple paths from A to B, or where squadmates and henchmen fail to follow player commands or place themselves in harm's way when the player obviously would never want them to do that ...
Worse yet, we have tons of "AAA" games where designers went out of their way to design around the idea that their AIs necessarily need to be totally mindless automatons.
There is this prevailing assumption in parts of the industry that creating decent AI is somehow impossibly hard and there are all these things that we just don't know how to do, and we have to create these extremely "dumbed down" game designs because OH GOD HELP US DON'T MAKE US TOUCH THE AI IT WILL KILL US!!!!
So we keep designing around AI "limitations" that quite often just don't exist.
And it's disappointing, because we know how to do a hell of a lot with AI these days, and there are already plenty of games out there that do a great job of many aspects of it.
Look at all the stuff in the AI Game Programming Wisdom books and the stuff on AiGameDev.com and the AI sections of the Game Programming Gems books. It's not rocket science.
So the problem isn't that these are somehow damningly difficult problems, or that there's any kind of big gap in our knowledge as an industry.
It's that much of the industry would still rather make excuses than put in the effort to do it right ... or, in many cases, even to do it reasonably well.
> "On the production side, the issue is that true AI is generally difficult to implement, as it makes NPC behaviour unpredictable"
No, not necessarily. It depends how you design it.
What on earth do you mean by "true" AI, by the way? I'm not sure where you get this term, or what you think it means.
I am calling for *GOOD* AI ... which a lot of games already have.
> "A further issue is that modern games tend to be realtime *and* involve dozens (if not hundreds) of separate entities ..."
Game AI developers have many tools at our disposal to deal with this. Most behavioral systems in common use, such as HFSMs and behavior trees, are extremely fast, and there has been a lot of work done in LOD systems for AI to better handle large numbers of entities.
Look at all the RTS games that already have hundreds of "separate entities" running around and doing pathfinding and AI logic in real-time. Clearly it's not a showstopper for these games, right?
The worry that game AI *MIGHT* be slow should never force us to dumb down our AI in the design stage.
I mean, that's what the Technical Director / Engineering Lead is there for, right? Designers should be able to come up with ambitious ideas, take them to the TD, and ask, "Is this realistic? Do I need to dumb this down, or not?"
As Knuth said, "premature optimization is the root of all evil." Designing AI to be dumb because you're worried any kind of intelligence in your AI might slow things down is the worst kind of premature optimization ... and it's totally unnecessary to boot.
> "Even if a developer does manage to get out a true AI before the project overruns and gets canned, the AI is liable to be able to handily defeat the vast majority of human players, thanks to a combination of a perfect memory"
You are completely misunderstanding everything I'm saying.
As an AI programmer, I can say that teams where production understands the value of AI and is willing to push the boundaries, and where the AI programmers have sufficient know-how to pull it off, are pretty rare.
Game AIs can never fill the role of players. They may be able to sort of, somewhat play the part of a player, but until someone creates an AI that can actually pass the Turing test, there will always be huge differences in the play experience.
Playing against a person allows us to leverage the players' models of each other's minds (and their models of each other's models, and so on) to create interest and depth. Playing against algorithms is just that: learning algorithms. They may pretend, one layer in, to respond like a player would, but they have no theory of mind, no personality, no emotions. So player-mimicking AI is mostly a dead end.
The best we can generally do is to avoid that problem and try to create character-mimicking AI. They simulate characters in the game world, not players controlling those characters. It might seem like a subtle difference, but I think it's huge and informs every decision in AI design. These types of AIs are not supposed to simulate intelligence, but to act as pawns with a moderately complex ruleset, which can be arranged to create interesting challenges for players. Each part of their AI is a design element no different than the rules of a gun firing.
That's simply not true. There are some games where AIs have already passed for human-level players, particularly in RTS games.
I've also seen many cases where actual humans failed the Turing test and believed they were playing "crappy AIs" when they were, in fact, playing the person directly across the table from them.
I won't deny that there are distinct advantages to playing against an actual human being. That's part of what makes multiplayer so entertaining, especially when real-life friends are involved.
But to say that AIs can NEVER fill the role of players ignores the fact that in many cases, they already do.
Yes, if you're a master-level StarCraft 2 or Civilization 5 player, you may find the computer player in those games predictable and unchallenging ... but if you're not, chances are it's just fine.
> "They may pretend, one layer in, to respond like a player would, but they have no theory of the mind, no personality, no emotions."
Many actual humans I know also lack these qualities.
In cases where a theory of mind, a personality, or an emotion can enhance the gameplay experience, there's a great deal that we can do to simulate them in-game.
I do think it's fair to call for better AI, as Paul and Dave have, but this only works without budget limitations. Also, some AI problems are fundamentally hard to solve, and their solutions may overly tax the gaming device.
For those reasons I think the advice given in this article makes a lot of sense. I would add, though, that making the game less symmetrical may be the best strategy in many cases, as it provides the player with a challenge without the developer having to sacrifice those fun elements of the design that give the AI fits.
Soren lists Starcraft as a symmetrical game, but unlike, say, Diplomacy, it is rarely symmetrical. At the most basic level, expert players can pit themselves against multiple hostile AI opponents. This is a crude form of asymmetry, granted. The better form is to design levels that pose challenges relevant to underlying game strategy -- limit the player in some way or give the AI some advantage that tests the player's understanding of the game mechanics. In most cases there's little advantage gained by forcing the computer to play by the same rules as the human -- let the AI build units faster or whatever, so long as it provides an interesting challenge.
In the end, a master player will only ever find a challenge in another human, but short of coding a "Deep Blue"-style AI system, there's nothing that can be done about that.
> "I do think it's fair to call for better AI, as Paul and Dave have, but this only works without budget limitations."
I think the budgetary implications are greatly exaggerated. There's a great deal more we could be doing with our existing budgets.
Why do you think we see so many cases where we see two developers with similar team sizes and similar budgets, and yet one spits out games with high-quality AI, while the other doesn't?
Sometimes it comes down to the design aspects that Soren discusses, but that's clearly not the whole answer.
> "In the end, a master player will only ever find a challenge in another human, but short of coding a "Deep Blue"-style AI system, there's nothing that can be done about that."
Most of the power of Deep Blue came from the hardware; the software was, at its core, a game tree search.
So if what you're saying is true, and all it requires is a Deep Blue level of AI, then this is actually only a temporary problem -- at some point down the road, Moore's Law will ensure that we do have the computing power we need to provide master-level players with truly challenging opponents when they set the game to that difficulty setting.
All the more reason, I think, for the industry to move beyond a fatalistic attitude toward game AI.
Starcraft 2 is actually a good example of exactly what you describe here.
Once the difficulty is turned higher than "Hard" ("Hardest" and "Insane"), the AIs begin to gain game advantages over players, the most obvious one being that they harvest resources faster -- the player receives 5 units of minerals per harvesting trip, while an Insane AI receives 7. In a strategy game built on speed, this becomes a greater and greater advantage for the AI as the game continues.
Of course, the game still falls into some of the pitfalls described in the article; for example, the AI (on every difficulty) builds a balanced military force, so if its only way to reach a player is through a large air assault, the game becomes broken.
I would guess that in an example like Starcraft 2, the pitfalls are due to a lack of priority placed on player-vs.-AI games that are not campaign-driven. To that end the AI serves very well, as a player will never encounter these AI issues within the campaign itself. These issues only really show themselves in custom games or co-op player-vs.-AI matches.
Obviously the AI cannot be expected to react properly to custom games in all situations (there are some moderate AI "habit" conditions you can change when creating custom games).
In co-op player vs AI matches, the games are limited to specific maps which prevent players from blocking themselves off from land assaults, allowing the AI to serve its purpose.
Anyway... that was a bit more of a rant than I meant it to be. The point I was trying to make was that even on large-budget games, designers are known to cater to their AI limitations, rather than commit themselves to high-cost, low-benefit solutions.
One of the potential uses for strong RTS AI is providing a training partner. A player may want to train against a specific strategy, and it is difficult to find a human opponent that will execute the desired strategy.
For example, if I want to train against a 2-hatch Mutalisk strategy in StarCraft, I can play against Berkeley's Overmind bot. While the bot does not execute a wide range of strategies, it executes a specific strategy very well.
Chess is a simple, formalized game where solving AI problems boils down to optimizing search algorithms.
Starcraft, or any other RTS game, is a tremendously complicated system which involves orders of magnitude more variables than Chess, everything from unit types, to build orders, to how you occupy space, to attack timings, to scarce resource management, to various psychological aspects (bluffing, rushing, etc).
"Solving" these sorts of AI problems isn't about CPU, it's about coming up with algorithms that simulate various human-designed strategies, while obeying basic game rulesets. And these algorithms aren't always easy to craft or discover. If you think AI limitations don't exist, try making an AI that plays Go sometime.
Andrew, no one is arguing that limitations don't exist -- only that the limitations are further out than we often lead ourselves to believe, and designers often impose limitations that simply aren't there.
I've written RTS AI before, so I'm familiar with the problems you're talking about. StarCraft and Go have quite a bit more in common than you might think.
Both games are fundamentally about reasoning under uncertainty. With all the advances that have been made over the last decade in both videogame AI and academic AI systems for games such as Go, there's no reason to think this is somehow an intractable problem.
The best approaches to Go currently involve Monte-Carlo-based game tree search -- see http://remi.coulom.free.fr/JFFoS/JFFoS.pdf
Master-level StarCraft AI has recently become an active area of research (see http://eis-blog.ucsc.edu/2010/10/starcraft-ai-competition-results ). Given how encouraging the results have been so far, I don't see any reason to rule out the notion that we'll eventually end up with an AI that can beat the South Koreans.
It may not happen in the near term, or even in the next decade, but I think there's no reason to believe a computer can't do it if human players can.
The history of AI is littered with arguments about "Only humans can do X," "Only humans can do Y," "Z is way too complicated," and every time, we find out that it's not really impossible, and the right kind of AI can do it, too.
> ' "Solving" these sorts of AI problems isn't about CPU, it's about coming up with algorithms that simulate various human-designed strategies, while obeying basic game rulesets.'
They don't necessarily need to simulate human-designed strategies. That's often the case right now, but there are certainly games where AIs can improvise, plan, reason, and come up with their own strategies and tactics.
I understand your point, however game developers don't usually have infinite time to sit around and collaborate with researchers to come up with top-tier algorithms. If they're only now discussing master-level AI for Starcraft 1 (a 12-year-old game), it implies that it's a difficult problem space.
Also:
- Game designs are usually iterated on up until shortly before launch, so the AI problem is a constantly moving target.
- Your average AI programmer doesn't spend their days combing through university literature to find academic solutions to problems; typically, things are approached from a more pragmatic perspective (i.e., FSMs on top of FSMs, not machine learning or Monte-Carlo simulations)
- I don't believe that RTSes are isomorphic to each other in a strong sense, so just because you spend 20 years researching and finding good algorithms for Starcraft 1 doesn't mean they'll necessarily work for Starcraft 2, much less for different RTS games
So, personally, I don't see anything wrong with a designer making some smart choices to make the AI's life significantly easier. A lot of people only play single-player, and in that situation the AI really controls the entire game experience for them.
> "I understand your point, however game developers don't usually have infinite time to sit around and collaborate with researchers to come up with top-tier algorithms. "
Of course, Andrew. Of course.
No one is making a claim that we need savant-level AI systems and crazy AI budgets on every project.
But don't you think that a lot of developers, right now, do, and should, have the time to make their AIs not pathfind into walls, or get stuck on crates?
And a surprising number of developers don't do that.
And even if master-level StarCraft is only in its infancy, don't you think that someday -- whether years or decades down the road -- we'll see game AI at a point where there's a lot more research, a lot more middleware, and a lot more computing power available, and that becomes a much more tractable problem?
Again, I'm not really arguing against Soren's point here, because Soren is right.
My point is that we have a lingering, irrational phobia of AI that too often holds us back, and designers often consider AI as a problem to be worked around rather than the insanely useful tool that it really is.
And although we often find ourselves asking HOW to constrain the design to the AI, we should ask ourselves at least as often WHY we have to constrain it, and whether it truly is necessary.
At the end of the day, it's about the customer experience.
As a customer, I paid too much for Fallout 3 for the wasteland savages I pick fights with to constantly get stuck on fences and sandbags, too much for Dragon Age: Origins to have to deal with uncontrollable party members who continually get themselves and each other killed, and far too much for World of Warcraft to have to continually micromanage my pets to keep them from getting stuck.
These are not games that have insanely demanding design goals where AI is concerned.
I don't disagree with Soren's prescription, but I don't think that medicine would have cured what ails those games. There's nothing you could do in terms of constraining the AI goals even further that would make any of them more tractable.
"If you think AI limitations don't exist, try making an AI that plays Go sometime."
And yet there exist many Go AIs that play at a high amateur level, far beyond what most people with several years of regular experience can achieve. Sure, being in the top 0.01% is very difficult, even for a computer, but being in the top 1% has pretty much been done. I've seen the Starcraft AI counter cheese and heavy macro alike, and it runs alongside 6 other AIs on my machine in a 4v4 custom game (without chugging too much, either). Sure, you can point to all these different aspects that require more thought and planning, but "zealot beats zergling" is a pretty easy heuristic to write, store as a lookup table, and evaluate upon scouting or encountering the enemy.
>"As a customer, I paid too much for Fallout 3 for the wasteland savages I pick fights with to constantly get stuck on fences and sandbags"
Right, that's not really AI though, that's pathfinding. More of a geometric problem than a logic one. Pathfinding is essentially a solved problem (A*, nav meshes, etc), but when games can't even get that right, IMHO it points more towards a process/bugfixing problem than anything else. Did the designers put the crates down without updating the navmesh? Is there a math bug in the solver? Is the bounding box of the sandbag set incorrectly? Did the NPC actually spawn inside of the fence? etc etc
That's exactly it, Andrew. You've summed up my own point exactly.
Pathfinding is a solved problem, like so many other things in game AI ... and the fact that we continue to trip over it again and again points toward process problems.
That's exactly why I am in this comment thread, egging people on -- to get them to ask the same questions you are asking here.
"Occasionally, a game can get away with assuming that a certain option will be human-only; in the original Civilization, Sid Meier added nukes to the end-game but didn’t allow the AI to use them."
If this is true, then I must have discovered a bug in the original Civilization, because I distinctly remember getting nuked by Julius Caesar.
All this talk of "it would take a lot of time and money to research stuff" becomes moot when you consider that there is no shortage of techniques in print, on the web, and at the GDC AI Summit that companies outright refuse to use because of the original point of this thread... that "our players may not want good AI." If a technique has already been done, is being presented in open forum, and is reasonably simple to implement (relative to what AI implementation poses in general), why would there be such a steadfast refusal if it isn't born out of self-imposed (by which I mean "industry-imposed") belief systems that are likely the result of fear?
Look around at reviews, blogs, and YouTube. People are laughing at our AI. They don't have to be... we continue to insist on giving them something to laugh at on the one hand, and tell ourselves that they aren't really laughing on the other. You can't have it both ways.
I think people are underestimating the difficulty of writing good AI. You can't just point to modern AI techniques and algorithms, and assume that this can be used to easily produce good AI code.
Low level aspects of game technology, such as rendering, sound, networking, or physics, can more easily be abstracted into solid, flexible, reusable middleware, because these technologies do not depend as much on the design of the game. AI does. You could reasonably use the same rendering, sound, networking, or physics middleware to create an RPG, RTS, FPS, flight sim, or turn-based strategy game. This is far less true for AI.
Even within a single genre, the requirements of the AI are drastically dependent on the particular design of that game. Different combat rules, different environments, and different strategies often demand distinct approaches to how the AI should work.
This is one of the reasons AI is so commonly flawed - there are great barriers against creating and refining the "perfect reusable AI code". We're shooting at a moving target that changes genre by genre, game by game, and design decision by design decision.
Pointing at one game with good AI and wondering "see? why can't all games have good AI?" is like pointing at one game with good design and wondering "see? why can't all games have good design?". Or like pointing at one piece of software with no bugs and wondering, "see? why can't all software have no bugs?" Each game is different. Each game presents new AI challenges that provide new technical hurdles and fresh opportunities for AI design flaws.
I even extend this argument to pathfinding. There are always different considerations: group movement, finding cover, crowd behavior, unique animation and environmental challenges (e.g. climbing), different kinds of physics and dynamic obstacles, different combat styles -- and the list goes on. The AI coder may need to be actively making changes in their code depending on what the designer, animator, and level builder are doing. AI code that works in one place doesn't work in others. It's not a "solved problem". It's a problem that needs to be resolved all the time on new game projects.
The real question is why we have so many limitations to work around in the first place.
It would be better to make AI more central to the development process, hire designers who know how to design AI, and ensure that we have the support we need on the AI development side to make it happen -- a large enough AI development staff, a stable underlying engine, a realistic schedule, a solid development process, and the executive producer's unflagging support.
Very few of the problems we face in modern AI development are technologically intractable. Nearly all of them are basic production issues.
But there has to be some guarantee that the AI won't have any major flaws when it's time to ship the game.
It's a problem for many games, and the article clearly offers a solution to that problem.
Your way of thinking would be suitable for a "when it's done" type of schedule, and I don't think many studios still work that way.
I am not arguing for infinite AI development budgets and schedules. I have never worked a "when it's done" type of project, nor would I ever do so.
What I am saying is simply that while I agree with Soren that design should fit the AI, the bigger question is why we so often find ourselves constrained by AI, often unnecessarily so.
On the production side, the issue is that true AI is generally difficult to implement, as it makes NPC behaviour unpredictable: in turn, this means the game is very difficult to debug and tune, increasing the cost and time of completing production. A further issue is that modern games tend to be realtime *and* involve dozens (if not hundreds) of separate entities: we may have multi-core CPUs and gigabytes of RAM to play with, but simultaneously managing all of these entities still takes up considerable resources.
Even if a developer does manage to ship a true AI before the project overruns and gets canned, that AI is liable to handily defeat the vast majority of human players, thanks to a combination of perfect memory, a complete overview of all elements of the game (e.g. troop positions, health, weapons, etc.) and the ability to issue commands far more quickly and accurately than even the quickest South Korean Starcraft player. The only solution is to cripple the AI... and this takes us back to where we are today: deterministic, resource-light finite state machines with a few optional modifiers to allow the difficulty level to be adjusted.
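Those "deterministic, resource-light finite state machines" really are as cheap as they sound. A minimal sketch of one, for a hypothetical guard NPC -- the state and event names are made up for illustration:

```python
# Minimal finite state machine for a hypothetical guard NPC.
# The (state, event) -> next_state table IS the whole behavior.
TRANSITIONS = {
    ("patrol", "saw_player"):  "chase",
    ("chase",  "lost_player"): "search",
    ("chase",  "low_health"):  "flee",
    ("search", "saw_player"):  "chase",
    ("search", "gave_up"):     "patrol",
    ("flee",   "healed"):      "patrol",
}

class GuardFSM:
    def __init__(self, state="patrol"):
        self.state = state

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged,
        # which keeps the machine deterministic and easy to debug.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Difficulty modifiers then amount to tweaking a few numbers (reaction delays, accuracy) on top of this fixed skeleton.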
Personally, I quite like games with deterministic AI - Advance Wars on the GBA is a case in point. The AI prioritises destruction of APCs (which carry fuel/ammo reloads) above all other activities: as these vehicles are both cheap and relatively tough, they make for ideal distractions when trying to protect vital units and bottlenecks.
Indeed, once you've learned how the AI responds to your actions, the game evolves into something of a stylised dance: piece A goes here to counteract piece X, piece B wounds piece Y and piece C follows up with a devastating attack on piece Z. In many ways, it's akin to slotting bricks home in Tetris...
Too often we use these cute little sayings to justify to ourselves (as an industry) why it is OK that we haven't solved a particular AI problem... or on a company level, why we aren't spending the time and money to create better AI in our game. I'm sure Aesop would be very amused by this high-tech version of "The Fox and the Grapes".
Is there a place for shooting-gallery AI or the "AI as puzzle" approach? Sure. But at that point, you aren't creating AI but rather seeding a game of Tetris with art assets that look like sentient agents. Does that even remotely make sense in a strategy game, FPS, or most RPGs? No.
Also, your claim that good AI is not desirable because it will beat the player is a straw man. We aren't creating Skynet... as AI developers, we will still have control over our creations. That said, in order for an AI to "fail", it doesn't have to be deterministic or predictable.
It is quite possible to create good-looking, reasonable-acting AI that provides an appropriate (non-Tetris) challenge, doesn't do anything glaringly stupid, and yet still loses. Often, it can be as simple as nerfing the weapon the AI carries. If the opponent hits the player just as often as the player hits him, but his weapon is 90% the strength of the player's, guess who is going to win? But a mathematical subtlety like that is not apparent to the player, whereas stupid behavior certainly is.
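The arithmetic is easy to verify with a toy duel. Here both sides land a hit every round, but the AI's weapon does 90% of the player's damage -- all numbers are illustrative:

```python
def duel(player_dmg=10.0, ai_dmg=9.0, hp=100.0):
    """Both combatants hit simultaneously every round.
    Returns which side drops the other to 0 first."""
    player_hp = ai_hp = hp
    while True:
        ai_hp -= player_dmg
        player_hp -= ai_dmg
        if ai_hp <= 0 and player_hp <= 0:
            return "draw"
        if ai_hp <= 0:
            return "player"
        if player_hp <= 0:
            return "ai"
```

With the default numbers, the player wins every time -- and with 10% of their health to spare, the fight even feels close -- yet the handicap is invisible.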
We need to get past that kind of thinking.
Game AI programmers have ALWAYS known that our job is to optimize our AI for entertainment value rather than simply making it more challenging.
So that's not even what I was referring to. It's not even a discussion among game AI developers because we all take it as a basic and blindingly obvious assumption.
When I refer to "limitations" in AI, I'm referring to things that the industry could have solved a long time ago if we'd put our minds to it. We still have multimillion-unit-selling, "AAA" titles where characters get stuck jittering in behavioral state thrashing, get stuck on walls or can't find simple paths from A to B, or where squadmates and henchmen fail to follow player commands or place themselves in harm's way when the player obviously would never want them to do that ...
Worse yet, we have tons of "AAA" games where designers went out of their way to design around the idea that their AIs necessarily need to be totally mindless automatons.
There is this prevailing assumption in parts of the industry that creating decent AI is somehow impossibly hard and there are all these things that we just don't know how to do, and we have to create these extremely "dumbed down" game designs because OH GOD HELP US DON'T MAKE US TOUCH THE AI IT WILL KILL US!!!!
So we keep designing around AI "limitations" that quite often just don't exist.
And it's disappointing, because we know how to do a hell of a lot with AI these days, and there are already plenty of games out there that do a great job of many aspects of it.
Look at all the stuff in the AI Game Programming Wisdom books and the stuff on AiGameDev.com and the AI sections of the Game Programming Gems books. It's not rocket science.
So the problem isn't that these are somehow damningly difficult problems, or that there's any kind of big gap in our knowledge as an industry.
It's that much of the industry would still rather make excuses than put in the effort to do it right ... or, in many cases, even to do it reasonably well.
No, not necessarily. It depends how you design it.
What on earth do you mean by "true" AI, by the way? I'm not sure where you get this term, or what you think it means.
I am calling for *GOOD* AI ... which a lot of games already have.
> "A further issue is that modern games tend to be realtime *and* involve dozens (if not hundreds) of separate entities ..."
Game AI developers have many tools at our disposal to deal with this. Most behavioral systems in common use, such as HFSMs and behavior trees, are extremely fast, and there has been a lot of work done in LOD systems for AI to better handle large numbers of entities.
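To give a sense of how lightweight these structures are, here is a toy behavior tree with only sequence and selector composites -- real trees add a "running" status and far richer leaves, and the soldier example below is purely hypothetical:

```python
# Minimal behavior tree: leaves return "success" or "failure".
# A Sequence fails on its first failing child; a Selector
# succeeds on its first succeeding child.
class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == "failure":
                return "failure"
        return "success"

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == "success":
                return "success"
        return "failure"

class Condition:
    def __init__(self, key): self.key = key
    def tick(self, state):
        return "success" if state.get(self.key) else "failure"

class Action:
    def __init__(self, name): self.name = name
    def tick(self, state):
        state.setdefault("log", []).append(self.name)  # record what ran
        return "success"

# Hypothetical soldier: attack if an enemy is visible, else patrol.
soldier = Selector(
    Sequence(Condition("enemy_visible"), Action("attack")),
    Action("patrol"),
)
```

A tick here is a handful of dictionary lookups and function calls -- cheap enough to run for hundreds of agents per frame, which is exactly why these systems scale.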
Look at all the RTS games that already have hundreds of "separate entities" running around and doing pathfinding and AI logic in real-time. Clearly it's not a showstopper for these games, right?
The worry that game AI *MIGHT* be slow should never force us to dumb down our AI in the design stage.
I mean, that's what the Technical Director / Engineering Lead is there for, right? Designers should be able to come up with ambitious ideas, take them to the TD, and ask, "Is this realistic? Do I need to dumb this down, or not?"
As Knuth said, "premature optimization is the root of all evil." Designing AI to be dumb because you're worried any kind of intelligence in your AI might slow things down is the worst kind of premature optimization ... and it's totally unnecessary to boot.
> "Even if a developer does manage to get out a true AI before the project overruns and gets canned, the AI is liable to be able to handily defeat the vast majority of human players, thanks to a combination of a perfect memory"
You are completely misunderstanding everything I'm saying.
Playing against a person allows us to leverage the players' models of each others' minds (and their models of each others' models, and so on) to create interest and depth. Playing against algorithms is just that: learning algorithms. They may pretend, one layer in, to respond like a player would, but they have no theory of mind, no personality, no emotions. So player-mimicking AI is mostly a dead end.
The best we can generally do is to avoid that problem and try to create character-mimicking AI. They simulate characters in the game world, not players controlling those characters. It might seem like a subtle difference, but I think it's huge, and it informs every decision in AI design. These types of AIs are not supposed to simulate intelligence, but to act as pawns with a moderately complex ruleset, which can be arranged to create interesting challenges for players. Each part of their AI is a design element no different from the rules of a gun firing.
That's simply not true. There are some games where AIs have already passed for human-level players, particularly in RTS games.
I've also seen many cases where actual humans failed the Turing test and believed they were playing "crappy AIs" when they were, in fact, playing the person directly across the table from them.
I won't deny that there are distinct advantages to playing against an actual human being. That's part of what makes multiplayer so entertaining, especially when real-life friends are involved.
But to say that AIs can NEVER fill the role of players ignores the fact that in many cases, they already do.
Yes, if you're a master-level StarCraft 2 or Civilization 5 player, you may find the computer player in those games predictable and unchallenging ... but if you're not, chances are it's just fine.
> "They may pretend, one layer in, to respond like a player would, but they have no theory of the mind, no personality, no emotions."
Many actual humans I know also lack these qualities.
In cases where a theory of mind, a personality, or an emotion can enhance the gameplay experience, there's a great deal that we can do to simulate them in-game.
For those reasons I think the advice given in this article makes a lot of sense. I would add, though, that making the game less symmetrical may be the best strategy in many cases, as it provides the player with a challenge without the developer having to sacrifice those fun elements of the design that give the AI fits.
Soren lists Starcraft as a symmetrical game, but unlike, say, Diplomacy, it is rarely played symmetrically. At the most basic level, expert players can pit themselves against multiple hostile AI opponents. This is a crude form of asymmetry, granted. The better form is to design levels that pose challenges relevant to underlying game strategy -- limit the player in some way, or give the AI some advantage that tests the player's understanding of the game mechanics. In most cases there's little advantage gained by forcing the computer to play by the same rules as the human -- let the AI build units faster or whatever, so long as it provides an interesting challenge.
In the end, a master player will only ever find a challenge in another human, but short of coding a "Deep Blue" AI system, there's nothing that can be done about that.
I think the budgetary implications are greatly exaggerated. There's a great deal more we could be doing with our existing budgets.
Why do you think we see so many cases where two developers have similar team sizes and similar budgets, yet one spits out games with high-quality AI while the other doesn't?
Sometimes it comes down to the design aspects that Soren discusses, but that's clearly not the whole answer.
> "In the end, a master player will only ever find a challenge in another human, but short of coding a 'Deep Blue' AI system, there's nothing that can be done about that."
Most of the power of Deep Blue came from the hardware; the software was, at its core, a game tree search.
So if what you're saying is true, and all it requires is a Deep Blue level of AI, then this is actually only a temporary problem -- at some point down the road, Moore's Law will ensure that we do have the computing power we need to provide master-level players with truly challenging opponents when they set the game to that difficulty setting.
All the more reason, I think, for the industry to move beyond a fatalistic attitude toward game AI.
Once the AI difficulty is set higher than "Hard" ("Hardest" and "Insane"), the computer players begin to gain game advantages over human players, the most obvious one being that they harvest resources faster -- where the player receives 5 units of minerals on every turn-in, an Insane AI receives 7. Because this is a strategy game of speed, this becomes a greater and greater advantage for the AI as the game continues.
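To see how quickly that gap widens, here is a toy income model in which each side reinvests banked minerals into new workers. The starting worker count, worker cost, and trip counts are made-up illustration values, not Blizzard's actual numbers:

```python
def income_over_time(per_trip, trips=20, worker_cost=50):
    """Toy harvesting model: start with 6 workers; every `worker_cost`
    minerals banked buys another worker, so a higher per-trip rate
    compounds instead of staying a flat 40% edge."""
    workers, banked, total = 6, 0, 0
    for _ in range(trips):
        gathered = workers * per_trip
        total += gathered
        banked += gathered
        while banked >= worker_cost:   # reinvest into more workers
            banked -= worker_cost
            workers += 1
    return total

player_total = income_over_time(5)  # normal harvest rate
insane_total = income_over_time(7)  # Insane-difficulty bonus rate
```

Because the extra income buys extra workers, the Insane AI ends up with well over the naive 7/5 income ratio by the end of the run -- the advantage snowballs.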
Of course, the game still falls into some of the pitfalls described in the article; for example, the AI (on every difficulty) builds a balanced military force, so if its only way to reach a player is through a large air assault, the game becomes broken.
I would guess that in an example like Starcraft 2, the pitfalls are due to a lack of priority given to player-vs.-AI games that are not campaign-driven. To that end, the AI serves very well, as a player will never encounter these AI issues within the campaign itself. These issues only really show themselves in custom games or co-op player-vs.-AI matches.
Obviously the AI cannot be expected to react properly to custom games in all situations (there are some moderate AI "habit" conditions you can change when creating custom games).
In co-op player vs AI matches, the games are limited to specific maps which prevent players from blocking themselves off from land assaults, allowing the AI to serve its purpose.
Anyway... that was a bit more of a rant than I meant it to be. The point I was trying to make was that even on large-budget games, designers are known to cater to their AI limitations rather than commit themselves to high-cost, low-benefit solutions.
For example, if I want to train against a 2-hatch Mutalisk strategy in StarCraft, I can play against Berkeley's Overmind bot. While the bot does not execute a wide range of strategies, it executes a specific strategy very well.
Chess is a simple, formalized game where solving AI problems boils down to optimizing search algorithms.
Starcraft, or any other RTS game, is a tremendously complicated system which involves orders of magnitude more variables than Chess, everything from unit types, to build orders, to how you occupy space, to attack timings, to scarce resource management, to various psychological aspects (bluffing, rushing, etc).
"Solving" these sorts of AI problems isn't about CPU, it's about coming up with algorithms that simulate various human-designed strategies, while obeying basic game rulesets. And these algorithms aren't always easy to craft or discover. If you think AI limitations don't exist, try making an AI that plays Go sometime.
I've written RTS AI before, so I'm familiar with the problems you're talking about. StarCraft and Go have quite a bit more in common than you might think.
Both games are fundamentally about reasoning under uncertainty. With all the advances that have been made over the last decade in both videogame AI and academic AI systems for games such as "Go," there's no reason to think this is somehow an intractable problem.
The best approaches to 'Go' currently involve Monte-Carlo-based game tree search -- see http://remi.coulom.free.fr/JFFoS/JFFoS.pdf
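The Monte-Carlo idea itself is simple enough to demonstrate on a toy game. Here is a flat Monte-Carlo move chooser for one-pile Nim (take 1-3 objects; taking the last one wins) -- a far cry from full MCTS with tree policies and pattern-guided playouts, but it shows the core principle of scoring moves by random-playout win rates:

```python
import random

def playout(pile, mover_is_us):
    """Finish the game with uniformly random moves for both sides.
    Returns True if 'we' take the last object."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return mover_is_us     # whoever just moved took the last one
        mover_is_us = not mover_is_us

def best_move(pile, playouts=400):
    """Flat Monte Carlo: score each legal move by the win rate of
    random playouts from the resulting position; pick the best."""
    scores = {}
    for move in range(1, min(3, pile) + 1):
        remaining = pile - move
        if remaining == 0:
            scores[move] = 1.0     # taking the last object wins outright
            continue
        wins = sum(playout(remaining, mover_is_us=False)
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)
```

From a pile of 5, the playout statistics steer it to take 1 -- leaving a multiple of 4, which is in fact the game-theoretically correct move -- without any hand-coded Nim strategy.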
Master-level StarCraft AI has recently become an active area of research (see http://eis-blog.ucsc.edu/2010/10/starcraft-ai-competition-results ). Given how encouraging the results have been so far, I don't see any reason to rule out the notion that we'll eventually end up with an AI that can beat the South Koreans.
It may not happen in the near term, or even in the next decade, but I think there's no reason to believe a computer can't do it if human players can.
The history of AI is littered with arguments about "Only humans can do X," "Only humans can do Y," "Z is way too complicated," and every time, we find out that it's not really impossible, and the right kind of AI can do it, too.
> ' "Solving" these sorts of AI problems isn't about CPU, it's about coming up with algorithms that simulate various human-designed strategies, while obeying basic game rulesets.'
They don't necessarily need to simulate human-designed strategies. That's often the case right now, but there are certainly games where AIs can improvise, plan, reason, and come up with their own strategies and tactics.
Also:
- Game designs are usually iterated on up until shortly before launch, so the AI problem is a constantly moving target.
- Your average AI programmer doesn't spend their days combing through university literature to find academic solutions to problems; typically, things are approached from a more pragmatic perspective (i.e. FSMs on top of FSMs, not machine learning or Monte-Carlo simulations)
- I don't believe that RTSes are isomorphic to each other in any strong sense, so just because you spend 20 years researching and finding good algorithms for Starcraft 1 doesn't mean they'll necessarily work for Starcraft 2, much less for different RTS games
So, personally, I don't see anything wrong with a designer making some smart choices to make the AI's life significantly easier. A lot of people only play single-player, and in that situation the AI really controls the entire game experience for them.
Of course, Andrew. Of course.
No one is making a claim that we need savant-level AI systems and crazy AI budgets on every project.
But don't you think that a lot of developers, right now, do, and should, have the time to make their AIs not pathfind into walls, or get stuck on crates?
And a surprising number of developers don't do that.
And even if master-level StarCraft is only in its infancy, don't you think that someday -- whether years or decades down the road -- we'll see game AI at a point where there's a lot more research, a lot more middleware, and a lot more computing power available, and that becomes a much more tractable problem?
Again, I'm not really arguing against Soren's point here, because Soren is right.
My point is that we have a lingering, irrational phobia of AI that too often holds us back, and designers often consider AI as a problem to be worked around rather than the insanely useful tool that it really is.
And although we often find ourselves asking HOW to constrain the design to the AI, we should ask ourselves at least as often WHY we have to constrain it, and whether it truly is necessary.