Introduction
The task of porting Indiana Jones and the Infernal Machine from PC to the N64 proved to be quite challenging. The Nintendo 64 could handle the polygon counts quite well, but memory use was a huge problem because the developers of the PC original hadn't needed to worry about it at all. While a typical PC memory footprint might be 128 MB, the N64 had to run the game in only 4 MB -- and that memory must also hold the code, frame buffers, textures, and display lists. You can take advantage of the N64 Expansion Pak to provide higher resolution and better performance, but only if it is present.
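As a minimal sketch of how a libultra title might check for the Expansion Pak at startup, the following uses the standard libultra global osMemSize, which reports the amount of RDRAM detected at boot; the resolution values and flag names here are hypothetical and only illustrate the technique:

```c
#include <ultra64.h>

/* Hypothetical globals -- illustration only. */
static int gScreenWidth  = 320;
static int gScreenHeight = 240;
static int gHiResEnabled = 0;

void ConfigureDisplayMode(void)
{
    /* osMemSize holds the amount of RDRAM detected at boot:
       4 MB on a stock console, 8 MB with the Expansion Pak. */
    if (osMemSize > 0x400000) {
        gHiResEnabled = 1;       /* extra 4 MB: allow the hi-res mode */
        gScreenWidth  = 640;
        gScreenHeight = 480;
    } else {
        gHiResEnabled = 0;       /* stock console: stay within 4 MB */
        gScreenWidth  = 320;
        gScreenHeight = 240;
    }
}
```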
Looking at these numbers alone, it seems that this kind of challenge is doomed to fail from the start. However, when the project started, the original PC game was much smaller in terms of texture and model usage. As we all know, projects tend to grow during the last phase of production because everybody wants to make the game as good as it can be. Certainly, an adventure game like Indy provided enough opportunities for growth.
In the end, it turned out we were hunting for every thousand bytes we could free up, and a number of different methods were used to make the game fit. This article presents a summary of the most important methods for dealing with memory constraints, most of which are applicable and beneficial to other projects. Reducing memory usage also helps to increase performance, because data is more likely to be local and therefore more likely to be found in one of the CPU's data caches.
One could argue that developers today no longer need to concentrate on memory constraints because of new systems such as the Sony PlayStation 2 and Nintendo GameCube. However, while their rendering capabilities have steadily improved, their memory subsystems have not changed very much. If anything, this makes memory organization and efficient memory usage an even more pressing issue.
Scenario
The approach used in porting Indy to the N64 shares many similarities with the data path used in producing an original title from scratch. Normally, art and level data are exported into the game engine's specific data format using a combination of export tools. Those exports are then bound together into a binary resource, which is loaded on the target. To reduce the size of the binary resource, we were left with two options: rewrite the export tools so that they output smaller, more optimized structures, or use the existing PC game's startup and initialization code to load the binary resource.
We chose the second approach, because it offered the benefit of a quick start: all the code for loading and initializing data was already in place. All we needed to do was take the game and strip out its rendering loop. Starting from there, we could take all of the game's data structures and rewrite and optimize them in an easy fashion (Figure 1), familiarizing ourselves with the game's internal structure in the process. We called this abused executable the munger, and we extended it to optimize and rewrite all data structures. It also performed additional consistency checks, which were required to ensure that our changes to the data structures didn't cripple the data, for example by casting 32-bit values down to eight bits.
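As a simple illustration of such a consistency check, here is a hedged sketch (the names are hypothetical, not taken from the actual munger) of narrowing a 32-bit field to eight bits only after verifying that no information is lost:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Narrow a 32-bit value to 8 bits, aborting the munging run if the
   value doesn't fit -- silently truncating would cripple the data. */
static uint8_t NarrowToU8(uint32_t value, const char *fieldName)
{
    if (value > UINT8_MAX) {
        fprintf(stderr, "munger: field '%s' value %u exceeds 8 bits\n",
                fieldName, (unsigned)value);
        assert(0 && "narrowing would lose data");
    }
    return (uint8_t)value;
}
```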
Figure 1: Data-Path Setup
This approach closely resembles the data path of an original title, so the methods used and lessons learned in porting Indy apply to original titles as well. It is also important to take data management and storage requirements into account, since the amount of art data (e.g., geometry, textures, animation data) is increasing with the release of the next-generation systems.
Methods Used
The methods we used fall into three classes. The rest of this article discusses them in more detail and offers some practical insight, which may benefit those of you experiencing similar memory problems.