Gamasutra: The Art & Business of Making Games
The Pure Advantage: Advanced Racing Game AI
February 3, 2009 (Page 2 of 5)

You could argue that this first iteration of the DCB was merely a rubber-band method with some variations. We used a different name anyway because of the bad reputation of the phrase "rubber banding"; we wanted to (internally) avoid the prejudices associated with that term.

We initially based the DCB values on the leaderboard position of the character. Every AI character was aiming to be in the position of their index -- that is, player 0 (human player) gets the first position, player 1 the second, etc.
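As a minimal sketch of that first scheme (the function names and the gain constant are my own, not from the original code), each AI character simply targets the leaderboard position equal to its index, and its skill is nudged up or down depending on how far it is from that target:

```python
def dcb_target_position(character_index):
    """First-iteration DCB: AI character i aims for leaderboard
    position i (position 0 is reserved for the human player)."""
    return character_index

def dcb_modifier(current_position, target_position, gain=0.05):
    """Push the skill up when the character is behind its target
    position and down when it is ahead. The gain value is purely
    illustrative; the article does not give the actual constants."""
    return gain * (current_position - target_position)
```

A character in 5th place aiming for 3rd would get a positive modifier (skill boost), while a character running ahead of its target would get a negative one.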

This system alleviated the original problems a bit, but was far from solving them. It was still too unresponsive, and the AI characters finished in an essentially random order. Most importantly, the human player was still racing alone much of the time.

We also wanted the characters to form three groups distributed over the race, so we decided to add a "group" concept to the DCB. That meant we had "group leaders" who tried to finish in a fixed position in the leaderboard, while the rest of their group tried to beat the next group's leader.
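The group scheme above could be sketched roughly like this (the leader positions and group size are illustrative assumptions, and the article does not say what the last group's followers targeted, so the clamp at the end is my own guess):

```python
def group_target_position(character_index, group_size=3,
                          leader_positions=(2, 6, 10)):
    """Group-based DCB targets. Characters are split into groups of
    group_size; the first member of each group is its leader and aims
    for a fixed leaderboard position, while the rest of the group aim
    just ahead of the *next* group's leader."""
    group = character_index // group_size
    is_leader = character_index % group_size == 0
    if is_leader:
        return leader_positions[group]
    # No "next group" exists for the last group; clamping to the last
    # leader (an assumption) makes its followers aim just ahead of
    # their own leader instead.
    next_group = min(group + 1, len(leader_positions) - 1)
    return leader_positions[next_group] - 1
```

So character 0 leads the first group aiming for position 2, and characters 1 and 2 aim for position 5, just ahead of the second group's leader at position 6.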

We ended up with a far too complex system for the DCB that didn't work well either. The groups usually weren't compact, and more often than not the leader and the rest of the group were scattered randomly across the leaderboard.

We could have continued with this concept and perhaps achieved acceptable results. Instead we came up with a different, innovative concept (which I will explain later on) that worked really well.

Skills... Again!

There were some issues with the skills that appeared when we implemented the DCB.

The first issue was what to do when the skill falls outside the range [0..1]. By definition, the skill actually applied must lie in [0..1], but we realized the initial skill sometimes ended up outside that range after the DCB modifier was applied to it.

To solve this, we simply clamped it to the valid range, because values outside it make no sense. We really couldn't have the AI performing at -20% or at 110%; if we could have made them perform better or worse than the current endpoints, we would simply have remapped both ends of the range to the best and worst performance we could (and wanted to) allow.
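The clamping step is straightforward; a minimal sketch (function name is my own):

```python
def apply_dcb(base_skill, dcb_modifier):
    """Apply the DCB modifier to the character's base skill, then
    clamp the result back into [0, 1]: values outside that range are
    meaningless, since the endpoints already map to the worst and
    best performance the design allows."""
    return min(1.0, max(0.0, base_skill + dcb_modifier))
```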

We found that an important part of our problem balancing the AI and keeping them close to the player lay in the skills themselves and the way they were implemented. We ended up identifying the properties a skill must have to be suitable for the job:

  • Broad Range. We needed the skills to determine the AI character's behavior, so we had to allow a lot of variation in every skill: a value of 0 truly means the AI will behave incredibly badly, and a value of 1 means it will behave very well indeed. If a skill's range is not wide enough, we won't have enough coverage to match the variety of players' skill levels.
  • Responsiveness. It's not enough to have a wide skill representation if it's distributed in big discrete steps. For instance, if the tricks skill is distributed so that any value under 0.5 means that the AI character will fail every trick and every value over 0.5 means he will successfully land every trick, you may not have enough dynamic range in the skill.

    The 0 skill character will perform very badly and a 1 skill character will perform as well as possible. But if a character has skill 0.3, his performance won't vary when upgrading to 0.4 or 0.45, nor will it change when reducing to 0.2 or 0.1. This makes the skill quite useless since it's not responsive enough to changes.

    Since most changes will be small (probably below 0.2) we need a very responsive skill representation, continuous if possible. If the steps are discrete there should be plenty of them and they should be evenly distributed.

    The progression should also be linear. It's not very useful if 90% of the behavior change correlates to a 10% change in value. The behavior changes should match the value changes as much as possible.
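The responsiveness point can be illustrated with the tricks-skill example from above; the two mappings below are mine, contrasting the useless single-step version with a continuous, linear one where a 10% change in skill produces a 10% change in behavior:

```python
def trick_success_chance_step(skill):
    """Non-responsive mapping: one big discrete step at 0.5, so small
    skill changes (e.g. 0.3 -> 0.4) produce no behavior change at all."""
    return 1.0 if skill >= 0.5 else 0.0

def trick_success_chance_linear(skill):
    """Responsive mapping: the chance of landing a trick varies
    continuously and linearly with skill across the whole [0, 1]
    range, so every small DCB adjustment is visible in behavior."""
    return min(1.0, max(0.0, skill))
```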
