Gamasutra: The Art & Business of Making Games
Teaching an Old Dog New Bits: How Console Developers are Able to Improve Performance When the Hardware Hasn't Changed
November 12, 1999 | Page 3 of 4
The Crash Bandicoot Trilogy: A Practical Example

The three Crash Bandicoot games represent a clear example of the process of technology and gameplay refinement on a single platform. Crash Bandicoot was Naughty Dog's first game on the Sony Playstation game console, and its first fully 3D game. With Crash Bandicoot 2: Cortex Strikes Back and Crash Bandicoot: Warped, we were able to improve the technology and offer a slicker, more detailed game experience in successively less development time. With the exception of added support for the Analog Joystick, Dual Shock Controller, and Sony Pocketstation, the hardware platforms for the three titles are identical.

Timely and reasonably orderly development of a video game title is about risk management. Given that you have a certain amount of time to develop the title, you can only allow for a certain quantity of gameplay and technology risks during the course of development. One of the principal ways in which successive games improve is by the reuse of the solutions to these risks. Most solutions that worked for the earlier game will work again, if desired, in the new game. In addition, many techniques can be gleaned from other games on the same machine that have been released in the intervening time.

In the case of sequels such as the later Crash games there is even more reduction of risk. Most gameplay risks, as well as significant art, code, and sound can be reused. This allows the development team to concentrate on adding new features, while at the same time retaining all the good things about the old game. The result is that sequels are empirically better games.

Crash Bandicoot - how do we do character action in 3D?

The first Crash Bandicoot game presented unique problems.

 

Development: September 1994 - September 1996
Staff: 9 people: 3 programmers, 4 artists, 1 designer, 1 support

Premise: Do for the ultra popular platform action game genre what Virtua Fighter had done for fighting games: bring it into 3D. Design a very likeable, broad-market character and place him in a fun and fast paced action game. Attempt to occupy the "official character" niche on the then empty Playstation market. Remember that by the fall of 1994 no one had yet produced an effective 3D platform action game.

Gameplay risk: How do you design and control an action character in 3D such that the feel is as natural and intuitive as in 2D?

When we first asked ourselves, "what do you get if you put Sonic the Hedgehog (or any other character action game for that matter) in 3D," the answer that came to mind was: "a game where you always see Sonic's Ass." The entire question of how to make a platform game in 3D was the single largest design risk on the project. We spent 9 months struggling with this before there was a single fun level. However, by the time this happened we had formulated many of the basic concepts of the Crash gameplay.

We were trying to preserve all of the good elements of classic platform games. To us this meant really good control, fast paced action, and progressively ramping challenges. In order to maintain a very solid control feel we opted to keep the camera relatively stable, and to orient the control axis with respect to the camera. Basically this means that Crash moves into the screen when you push up on the joypad. This may seem obvious, but it was not at the time, and there are many 3D games which use different (and usually inferior) schemes.

Technical risk: How do you get the Playstation CPU and GPU to draw complex organic scenes with a high degree of texture and color complexity, good sorting, and a solid high resolution look?

It took quite a while, a few clever tricks, and not a little bit of assembly writing and rewriting of the polygon engines. One of our major realizations was that on a CD-based game system with a 33 MHz processor, it is favorable to pre-compute many kinds of data in non-real-time on the faster workstations, and then use a lean, fast game engine to deliver high performance.

Technical risk: How do the artists build and maintain roughly 1 million polygon levels with per-polygon and per-vertex texture and color assignment?

Constructing large detailed levels turned out to be one of the biggest challenges of the whole project. We didn't want to duplicate the huge amount of work that has gone into making the commercial 3D modeling packages, so we chose to integrate with one of them. We tried Softimage at first, but a number of factors caused us to switch to Alias Power Animator. When we began the project it was not possible to load and view a one million polygon level on a 200 MHz R4400 Indigo II Extreme. We spent several months creating a system and tools by which smaller chunks of the level could be hierarchically assembled into a larger whole.

In addition, the commercial packages were not aware that anyone would desire per-polygon and per-vertex control over texture, color, and shading information. They used a projective texture model preferred by the film and effects industry. In order to maximize the limited amount of memory on the Playstation we knew we would need to have very detailed control. So we created a suite of custom tools to aid in the assignment of surface details to Power Animator models. Many of these features have since been folded into the commercial programs, but at the time we were among the first to make use of this style of model construction.

Technical risk: How do you get a 200 MHz R4400 Indigo II to process a 1 million polygon level?

For the first time in our experience, it became necessary to put some real thought into the design of the offline data processing pipeline. When we first wrote the level processing tool it took 20 hours to run a small test case. A crisis ensued and we were forced to both seriously optimize the performance of the tool and multithread it so that the process could be distributed across a number of workstations.

Conventional wisdom says that game tools are child's play. Historically speaking, this is a fair judgment - 2D games almost never involve either sophisticated preprocessing or huge data sets. But now that game consoles house dedicated polygon rendering hardware, the kid gloves are off.

In Crash Bandicoot, players explore levels composed of over a million polygons. Quick and dirty techniques that work for smaller data sets (e.g., repeated linear searches instead of binary searches or hash table lookups) no longer suffice. Data structures now matter - choosing one that doesn't scale well as the problem size increases leads to level processing tasks that take hours instead of seconds.

The problems have gotten correspondingly harder, too. Building an optimal BSP tree, finding ideal polygon strips, determining the best way to pack data into fixed-size pages for CD streaming - these are all tough problems by any metric, academic or practical.

To make matters worse, game tools undergo constant revisions as the run-time engine evolves towards the bleeding edge of available technology. Unlike many jobs, where programmers write functional units according to a rigid a priori specification, games begin with a vague "what-if" technical spec - one that inevitably changes as the team learns how to best exploit the target machine for graphics and gameplay.

The Crash tools became a test bed for developing techniques for large database management, parallel execution, data flexibility, and complicated compression and bin packing techniques.

Art / Technical risk: How do you make low poly 3D characters that don't look like the "Money for Nothing" video?

From the beginning, the Crash art design was very cartoon in style. We wanted to back up our organic, stylized environments with highly animated cartoon characters that looked 3D, but not polygonal. By using a single skinned polygonal mesh model similar to the kind used in cutting edge special effects shots (except with a lot less polygons), we were able to create a three dimensional cartoon look. Unlike the traditional "chain of sausages" style of modeling, the single skin allows interesting "squash and stretch" style animation like that in traditional cartoons.

By very careful hand modeling and judicious use of both textured and shaded polygons, we were able to keep these models within reasonable polygon limits. In addition, it was our belief that because Crash was the most important thing in the game, he deserved a substantial percentage of the game's resources. Our animation system allows Crash to have unique facial expressions for each animation, helping to convey his personality.

Technical risk: How do you fit a million polygons, tons of textures, thousands of frames of animation, and lots of creatures into a couple megs of memory?

Perhaps the single largest technical risk of the entire project was the memory issue. Although there was a plan from the beginning, this issue was not tackled until February of 1996. At this point we had over 20 levels in various stages of completion, all of which consumed between 2 and 5 megabytes of memory. They had to fit into about 1.2 megabytes of active area.

At the beginning of the project we had decided that the CD was the system resource least likely to be fully utilized, and that system memory (of various sorts) was going to be one of the greatest constraints. We planned to trade CD bandwidth and space for increased level size.

The Crash series employs an extremely complicated virtual memory scheme which dynamically swaps into memory any kind of game component: geometry, animation, texture, code, sound, collision data, camera data, etc. A workstation based tool called NPT implements an expert system for laying out the disk. This tool belongs to the class of formal Artificial Intelligence programs. Its job is to figure out how the 500 to 1000 resources that make up a Crash level can be arranged so as to never have more than 1.2 megabytes needed in memory at any time. A multithreaded virtual memory implementation follows the instructions produced by the tool in order to achieve this effect at run time. Together they manage and optimize the essential resources of main, texture, and sound RAM based on a larger CD based database.

Technical/Design risk: What to do with the camera?

With the 32-bit generation of games, cameras have become a first-class character in any 3D game. However, we did not realize this until well into the project. Crash represents our first tentative stab at doing an aesthetic job of controlling the camera without detracting from gameplay. Although it was rewritten perhaps five times during the project, the final camera is fairly straightforward from the perspective of the user. None of the crop of 1995 and 1996 3D action games played very well until Mario 64 and Crash. These two games, while very different, were released within two months of each other, and we were essentially finished with Crash when we first saw Mario. Earlier games had tended to induce motion sickness and made it difficult for players to quickly judge the layout of the scene. In order to enhance the tight, high-impact feel of Crash's gameplay, we were fairly conservative with the camera. As a result, Crash retains the quick action feel of the traditional 2D platform game more faithfully than other 3D games.

Technical risk: How do you make a character collide in a reasonable fashion with an arbitrary 3D world… at 30 frames a second?

Another of the game's more difficult challenges was in the area of collision detection. From the beginning we believed this would be difficult, and indeed it was. For action games, collision is a critical part of the overall feel of the game. Since the player is looking down on the character in the third person, he is intimately aware when the collision does not react reasonably.

Crash can often be within a meter or two of several hundred polygons. This means that the game has to store and process a great deal of data in order to calculate the collision reactions. We had to comb through the computer science literature for innovative new ways of compressing and storing this database. One of our programmers spent better than six months on the collision detection part of the game, writing and rewriting the entire system half a dozen times. Finally, with some very clever ideas and a lot of hacks, it ended up working reasonably well.

Technical risk: How do you program, coordinate, and maintain the code for several hundred different game objects?

Object control code, which the gaming world euphemistically calls AI, typically runs only a couple of times per frame. For this kind of code, speed of implementation, flexibility, and ease of later modification are the most important requirements. This is because games are all about gameplay, and good gameplay only comes from constant experimentation with and extensive reworking of the code that controls the game's objects. The constructs and abstractions of standard programming languages are not well suited to object authoring, particularly when it comes to flow of control and state.

For Crash Bandicoot we implemented GOOL (Game Oriented Object LISP), a compiled language designed specifically for object control code that addresses the limitations of traditional languages.

Having a custom language whose primitives and constructs both lend themselves to the general task (object programming), and are customizable to the specific task (a particular object) makes it much easier to write clean descriptive code very quickly. GOOL makes it possible to prototype a new creature or object in as little as 10 minutes. New things can be tried and quickly elaborated or discarded. If the object doesn't work out it can be pulled from the game in seconds without leaving any hard to find and wasteful traces behind in the source. In addition, since GOOL is a compiled language produced by an advanced register coloring compiler with reductions, flow analysis, and simple continuations, it is at least as efficient as C, more so in many cases because of its more specific knowledge of the task at hand. The use of a custom compiler allowed us to escape many of the classic problems of C.

