This article
is not a how-to guide; it’s a brain dump from the perspective of the engine
programmer (me) of Shiny’s upcoming title, Messiah. Usually Game
Developer articles are littered with formulas, graphs, and code listings
that serve to up the intellectual profile of the piece. However, I’m not
a mathematician and I don’t feel the need to state any information in
the form of a graph — in this article I describe problems, solutions,
and things I’ve learned in general terms, and that allows me to cover
a lot more ground.
My interest
in character systems started more than four years ago, when I was working
at Scavenger, a now-defunct development studio. I was assigned to develop
a "next-generation" X-Men game for the Sega Saturn. Sega
wanted motion-captured characters and chose to use pre-rendered sprites
to represent them. I observed the planning of the motion-capture sessions,
examined the raw mo-cap data that these sessions generated, saw it applied
to high-resolution characters on SGIs, and then received the frames which
I was to integrate into the game.
The results
were disappointing. The motion-capture data, which could have driven characters
at 60 frames per second (FPS), was reduced to little bursts of looping
animation running at 12 to 15 FPS, and could only be seen from four angles
at most. The characters were reduced to only 80 to 100 pixels high, and
I still had problems fitting them in memory. The models we spent
weeks creating came out as fuzzy, blurry sprites.
Around that
time, two new modelers, Darran and Mike, were hired for my team (and the
three of us still work together at Shiny). These two talented modelers
wanted to create the best-looking characters possible, but we didn’t know
how to justify the time spent on modeling super-sharp characters when
the resulting sprites came out looking average at best.
Eventually,
Sega Software stopped developing first-party games and X-Men was
canned. Soon thereafter we were asked to develop our own game. That provided
me with the incentive to figure out how to represent characters in a game
better. We knew we wanted at least ten characters on the screen
simultaneously, but all the low-resolution polygonal characters we had
seen just didn’t cut it. So I decided to keep pursuing a solution based
on what I had been working on for X-Men, hoping that I’d come up
with something that would eventually yield better results.
At first
I flirted with a voxel-like solution, and developed a character system
which was shown at E3 in 1996 in a game called Terminus. This system
allowed a player to see characters from any angle of rotation around a single
axis, which solved a basic problem inherent in sprite-based systems. Still,
you couldn’t see the character from every angle, and while everybody liked
the look of the "sprite from any angle" solution, many people
wanted to get a closer look at the characters’ faces. This caused the
whole voxel idea to fall apart. Any attempt to zoom in on characters made
the lack of detail in the voxel routine obvious to people, and the computation
time shot up (just try to get a character close-up in Westwood’s Blade
Runner and you’ll see what I mean). I tried a million different ways
to fix the detail problem, but I was never satisfied. The other problem
with a voxel-based engine was the absence of a real-time skeletal deformation
system. Rotating every visible point on the surface of a character in
relation to a bone beneath the surface was not a viable solution, so we
had to pre-store frames and again, as in X-Men, cut down on the
playback speed and resolution. At that point I was ready to try a different
solution.
By the time
my team and I were hired by Shiny, a little less than two and a half years
ago, I had already built a prototype of a new character system after leaving
Scavenger. Shiny was really excited about it, and I continued to develop
the system for the game that would eventually become Messiah. Let’s
look at that system and examine the solutions I came up with.