Shader Integration: Merging Shading Technologies on the Nintendo Gamecube


October 2, 2002

Description of Shading Methods

The next couple of sections briefly describe many of the shading methods used in Star Wars: Rogue Leader. Specific aspects of how they integrate into the shading environment are discussed in detail. Additional information can be found in the Nintendo Gamecube SDK.


Figure 7: Illumination mapping.

Method 1: Illumination Maps

Illumination maps are used by the artists when they want specific areas to be self-illuminated; good examples are the lights of small windows. Illumination mapping requires an additional texture, which typically is just an intensity texture (often only four bits per texel). Strictly speaking, an illumination map could be colored; however, since the illumination map is multiplied by the color map's texel color anyway, colored illumination maps can usually be avoided. After the light calculation is done, the self-illumination term is fetched from the texture and simply added to the light color (cf. figure 7).
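
For illustration, this "multiply, then add" can be expressed in two texture environment (TEV) stages. The following is only a minimal sketch assuming the standard GX TEV calls; all stage, texture coordinate, and texture map IDs are placeholders, and the alpha setup is omitted.

/* Sketch: stage 0 modulates the color map with the computed light
 * color, stage 1 adds the illumination map on top.
 * TEV color formula per stage: output = d + ((1-c)*a + c*b). */
GXSetNumTevStages(2);

/* Stage 0: color = rasterized light color * color map texel */
GXSetTevOrder(GX_TEVSTAGE0, GX_TEXCOORD0, GX_TEXMAP0, GX_COLOR0A0);
GXSetTevColorIn(GX_TEVSTAGE0, GX_CC_ZERO, GX_CC_TEXC, GX_CC_RASC, GX_CC_ZERO);
GXSetTevColorOp(GX_TEVSTAGE0, GX_TEV_ADD, GX_TB_ZERO, GX_CS_SCALE_1, GX_TRUE, GX_TEVPREV);

/* Stage 1: color = previous result + illumination map texel */
GXSetTevOrder(GX_TEVSTAGE1, GX_TEXCOORD1, GX_TEXMAP1, GX_COLOR_NULL);
GXSetTevColorIn(GX_TEVSTAGE1, GX_CC_TEXC, GX_CC_ZERO, GX_CC_ZERO, GX_CC_CPREV);
GXSetTevColorOp(GX_TEVSTAGE1, GX_TEV_ADD, GX_TB_ZERO, GX_CS_SCALE_1, GX_TRUE, GX_TEVPREV);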


Figure 8: Texture used for specular highlights.

Method 2: Specular Highlights

Visually, specular highlights are a very important feature. The shiny reflection of light on surfaces gives the eye another hint about where light sources are located in the scene and adds to the overall realism. Technically, specular highlights are relatively simple to implement. The light database needs to be able to determine the dominant light direction, which is, in most cases, derived from one of the directional lights (i.e., the brightest one).

There are two different methods available to implement specular highlights. The lighting hardware can compute a specular term per vertex; this is quick to set up, and the results are quite reasonable with highly tessellated geometry. As usual, though, computing the term per pixel gives more pleasing results, so generating texture coordinates per vertex and looking up a specular highlight texture (cf. figure 8) looks better.


Figure 9: Specular highlights.

The generation of texture coordinates is done in two steps (cf. section “Texture Coordinate Generation”). First, the normal data is transformed from model space into eye space using a texture matrix. Note that this step is common to all geometry and provides the basis for all other shading methods as well; this has the benefit that the interface to the geometry engine (computing skinned meshes, etc.) can be kept fairly simple, and interdependencies between the two subsystems are reduced. The transformed normals are then transformed again with the “eye_normal to specular_coords” matrix using the dual-transform feature of the hardware. This matrix depends on the cosine power (i.e. highlight size) of the material being rendered and on the direction of the light (also in eye space); cf. code fraction 1 for more details. Since the specular texture is used so frequently, one should consider preloading it into the hardware texture cache permanently.
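
Code fraction 1 is not reproduced here, but a rough sketch of how such an “eye_normal to specular_coords” matrix could be constructed follows. Normals near the light direction should land near the center (0.5, 0.5) of the highlight texture; the cosine-power-to-scale mapping shown is an assumption for illustration.

#include <math.h>

/* Sketch only -- not the article's code fraction 1. Builds a 3x4
 * texture matrix mapping eye-space normals to highlight texture
 * coordinates. */
typedef float Mtx34[3][4];

static void buildSpecularTexMtx(Mtx34 m, const float l[3], float cosinePower)
{
    /* Orthonormal basis (t, b) perpendicular to the light direction l. */
    float up[3] = { 0.0f, 1.0f, 0.0f };
    if (fabsf(l[1]) > 0.99f) { up[0] = 1.0f; up[1] = 0.0f; }
    float t[3] = { up[1]*l[2] - up[2]*l[1],
                   up[2]*l[0] - up[0]*l[2],
                   up[0]*l[1] - up[1]*l[0] };
    float len = sqrtf(t[0]*t[0] + t[1]*t[1] + t[2]*t[2]);
    t[0] /= len; t[1] /= len; t[2] /= len;
    float b[3] = { l[1]*t[2] - l[2]*t[1],
                   l[2]*t[0] - l[0]*t[2],
                   l[0]*t[1] - l[1]*t[0] };

    /* Assumed mapping: larger cosine power -> tighter highlight, so the
     * same normal deviation moves further from the texture center. */
    float s = 0.5f * sqrtf(cosinePower);

    for (int i = 0; i < 3; ++i) {
        m[0][i] = t[i] * s;   /* u axis */
        m[1][i] = b[i] * s;   /* v axis */
        m[2][i] = 0.0f;
    }
    m[0][3] = 0.5f;           /* translate so the highlight is centered */
    m[1][3] = 0.5f;
    m[2][3] = 1.0f;           /* q = 1 for a plain 2D lookup */
}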

Method 3: Reflection/Environment Mapping

A mapping method quite similar to specular mapping is environment mapping: the surroundings of an object are reflected by its surface. Implementing this correctly is not feasible on current-generation hardware, since it would require a ray-tracing approach, which consumer hardware cannot deliver. Instead, a generic view of the scene is used that is rendered by an artist or generated just once during startup. This view (consisting of six texture maps, one in each direction) needs to be converted into a spherical environment map, which is then used to look up pixels at runtime. The map needs to be regenerated whenever the camera orientation changes. The Nintendo Gamecube SDK contains examples of how to do this.


Figure 10: Environment mapping.

Code fraction 2 shows how to set up the second-pass matrix in this case. In addition, some care needs to be taken with how the computed environment color is used and how it interacts with the computed light color for the same pixel (i.e. fragment). A linear interpolation between those two values solves the problem and gives control over how reflective a material is. Note that highly reflective surfaces will get almost no contribution from the computed light color (which is correct, since the surface reflects the incoming light); but since color-per-vertex painting is done via multiplication in the lighting hardware, the painted colors will be removed. This is an example of bad shader integration, but the solution (passing color-per-vertex values unmodified and multiplying in the texture environment) carries a significant performance hit and introduces other problems, so the trade-off seems quite reasonable.
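
One possible way to express this interpolation in the texture environment is to use a konst color as the blend factor. The sketch below assumes the standard GX calls; the stage and texture IDs and the reflectivity value are placeholders.

/* Sketch: output = (1 - k) * light color + k * environment texel,
 * with k taken from a per-material konst color. */
GXColor reflectivity = { 64, 64, 64, 255 };   /* ~25% reflective (assumed) */
GXSetTevKColor(GX_KCOLOR0, reflectivity);
GXSetTevKColorSel(GX_TEVSTAGE1, GX_TEV_KCSEL_K0);

/* TEV: output = d + ((1-c)*a + c*b) = (1-k)*CPREV + k*TEXC */
GXSetTevOrder(GX_TEVSTAGE1, GX_TEXCOORD1, GX_TEXMAP1, GX_COLOR_NULL);
GXSetTevColorIn(GX_TEVSTAGE1, GX_CC_CPREV, GX_CC_TEXC, GX_CC_KONST, GX_CC_ZERO);
GXSetTevColorOp(GX_TEVSTAGE1, GX_TEV_ADD, GX_TB_ZERO, GX_CS_SCALE_1, GX_TRUE, GX_TEVPREV);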

Method 4: Emboss Mapping

A more subtle but nevertheless important method is bump mapping, where a height field mapped onto a surface describes its elevation per pixel without adding geometric data. On the Nintendo Gamecube, two different methods are straightforward to implement. Emboss mapping computes light values per pixel; it is not possible to compute “bumped” specular highlights and reflections with this method. “Real” per-pixel bump mapping using the indirect texture unit is capable of doing so (cf. method 5).

The hardware has direct support for emboss mapping. The height field is looked up twice: first with the original set of texture coordinates as generated by the texturing artist, then with a slightly shifted set of texture coordinates as generated by the lighting hardware, which offsets the original coordinates depending on the direction of the light. Note that the amount of shifting (and therefore the resulting depth impression) comes from the scale of the normal matrix as loaded with GXLoadNrmMtxImm(), which means that the matrices need to be scaled to the desired values. This does not affect the lighting calculation, since the normals are renormalized for light computations anyway, but it does mean that one mesh (i.e. a set of polygons rendered with one set of matrices) can have only one depth value for emboss mapping, and it imposes an interdependency between the shading and geometry subsystems. The resulting height values are subtracted and multiplied by the computed light color (cf. figure 11).
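
A scaled normal-matrix load might look like the following sketch; the embossDepth parameter and the matrix slot are illustrative, and Mtx is the SDK's 3x4 float matrix type.

typedef float Mtx[3][4];

/* Sketch: scale the normal matrix before loading it so the emboss
 * shift (and hence the apparent bump depth) can be tuned per mesh.
 * Renormalization during lighting makes the scale harmless there. */
static void loadEmbossNrmMtx(Mtx nrmMtx, float embossDepth)
{
    Mtx scaled;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 4; ++c)
            scaled[r][c] = nrmMtx[r][c] * embossDepth;
    GXLoadNrmMtxImm(scaled, GX_PNMTX0);
}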


Figure 11: Emboss mapping.

Emboss mapping does not support the computation of specular highlights. However, one can simply ignore the emboss map and add non-bumpy specular highlights. By doing so, though, the dark edges of the bumpy surface will be washed out (due to the addition) and the effect falls apart to some extent (not to mention that the specular highlights themselves ignore the height field completely).

Finally, emboss mapping (like bump mapping) needs binormals to describe the orientation of the height field on the surface. Since they need to be transformed the same way the normals are transformed, this can add a bit of overhead.

Method 5: Bump Mapping

Visually better results can be achieved using “real” bump mapping as supported by the indirect texture unit. With this method, the hardware computes a normal per pixel and uses it to look up several textures, including a diffuse light map (containing all directional and ambient lights), an environment map (as described in method 3), and even a specular map. In this way, all of those shading effects are computed correctly in a bumped fashion. However, since the global lights are now fetched from a texture instead of being computed by the lighting hardware, that texture needs to be regenerated dynamically as soon as the camera orientation and/or the lights change (again, one can find an example of how this is done in the demo section of the Nintendo Gamecube SDK).

In addition, the height field needs to be pre-processed into a “delta U/delta V texture” (an intensity/alpha texture with four bits per component), which therefore needs (without further measures) twice as much memory for texture storage as the emboss mapping method described in method 4.
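
A possible pre-processing step, sketched on the CPU: forward differences of the height field are quantized to four bits each and packed into an intensity/alpha texel. The packing and quantization conventions shown are assumptions for this example.

#include <stdint.h>

/* Sketch: convert an 8-bit height field into a 4-bit-per-component
 * delta U / delta V texture (one byte per texel). */
static void buildDeltaTexture(const uint8_t *height, uint8_t *deltaIA4,
                              int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int h00 = height[y * w + x];
            int h10 = height[y * w + ((x + 1) % w)];   /* wrap around */
            int h01 = height[((y + 1) % h) * w + x];

            /* Bias and quantize the signed slopes into 0..15. */
            int du = ((h10 - h00) + 256) >> 5;
            int dv = ((h01 - h00) + 256) >> 5;

            /* Assumed packing: dU in the high nibble, dV in the low. */
            deltaIA4[y * w + x] = (uint8_t)((du << 4) | dv);
        }
    }
}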


Figure 12: Bump mapping.

The delta texture is fed into the indirect unit, where it is combined with the surface normals describing the orientation of the bump map. In the last stage of this three-cycle setup, the diffuse light map is looked up, and the result is the bumped light color for the global lights. Note that the local lights are still computed per vertex (because they have a position, which cannot be reconstructed from the normal alone) and are added later in the texture environment.
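
Written out on the CPU for clarity, the perturbation performed by the indirect unit amounts to something like the following sketch; the light map addressing convention is assumed.

#include <math.h>

/* Sketch: perturb the interpolated normal by the delta texture along
 * the surface tangent/binormal, then derive light map coordinates. */
static void bumpedLightMapUV(const float n[3], const float t[3],
                             const float b[3], float du, float dv,
                             float *u, float *v)
{
    float p[3];
    for (int i = 0; i < 3; ++i)
        p[i] = n[i] + du * t[i] + dv * b[i];   /* perturbed normal */

    float len = sqrtf(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    p[0] /= len; p[1] /= len; p[2] /= len;

    /* Map the eye-space normal's x/y into [0,1] -- the convention the
     * light map would be generated with (assumed for this sketch). */
    *u = 0.5f * p[0] + 0.5f;
    *v = 0.5f * p[1] + 0.5f;
}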


Figure 13: Actions in the texture environment and
the indirect texturing unit during bump mapping.

Since the computation of the perturbed normal is so elaborate, doing more than one lookup with the result amortizes the effort slightly: specular highlights and environment reflections can be looked up in subsequent stages. The lookup of the bumped specularity is a bit tricky, since the coordinates are kept in the indirect unit and passed from stage to stage, but are already denormalized at that point (cf. code fraction 3). The processes in the texture environment and the indirect texture unit are illustrated in figure 13.

Method 6: Self-Shadowing

Per-object self-shadowing can be realized quite nicely on the Nintendo Gamecube. The benefit of doing self-shadowing on a per-object basis is that one does not need to be so concerned with precision: almost all reasonably sized (in terms of diameter) objects can be represented nicely in an eight-bit Z texture, as needed by the algorithm. To figure out whether a pixel falls within shadow or not, the object is pre-rendered from the viewpoint of the main directional light. “Viewpoint” means the point that is reached when going backwards along the light direction from the center point of the model in question (note that a directional light by itself does not have a point of origin). This pre-render step uses an orthogonal projection, and the top/bottom, left/right, and near/far planes have to be set such that the texture area used for pre-rendering is used to its maximum extent (i.e. these planes are derived from the bounding sphere). After the pre-rendering is complete, the Z buffer is grabbed.
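
Deriving the pre-render setup from the bounding sphere might look like the following sketch; lookAt() and setOrtho() are hypothetical stand-ins for the actual SDK calls, and the pull-back distance is arbitrary.

typedef struct { float x, y, z; } Vec3;

/* Sketch: place the light "viewpoint" behind the object along the
 * light direction and fit the orthographic volume to the sphere, so
 * the Z texture is used to its maximum extent. */
static void setupShadowPreRender(Vec3 center, float radius, Vec3 lightDir)
{
    float dist = 2.0f * radius;                /* arbitrary pull-back */

    Vec3 eye = { center.x - lightDir.x * dist,
                 center.y - lightDir.y * dist,
                 center.z - lightDir.z * dist };

    /* Orthographic planes hug the bounding sphere exactly. */
    float top = radius, bottom = -radius;
    float left = -radius, right = radius;
    float nearZ = dist - radius, farZ = dist + radius;

    lookAt(eye, center);                               /* hypothetical */
    setOrtho(top, bottom, left, right, nearZ, farZ);   /* hypothetical */
}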

Later, when the object is rendered into the view, each rendered pixel's coordinates are projected into texture coordinates of the grabbed Z texture. Using a ramp texture, the distance of each pixel to the imaginary light point is measured, and the two resulting values are compared. Depending on the outcome of this test, the rendered fragment either falls into shadow or not. Local lights are passed through a second color channel and added conditionally to the global colors (cf. figure 14). Yet again, the Nintendo Gamecube SDK contains examples that describe the technical details.
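
The comparison itself reduces to a simple depth test, sketched here on the CPU; the bias term is an illustrative guard against self-shadowing artifacts.

/* Sketch: a fragment is occluded if its light-space depth (from the
 * ramp texture) lies beyond the depth stored in the grabbed Z texture. */
static int inShadow(float pixelLightDepth, float storedZ, float bias)
{
    return pixelLightDepth > storedZ + bias;
}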


Figure 14: Self shadowing.

