How about the related issue of removing jaggies from edges? I've seen a lot of discussion recently about different methodologies to remove some of the jagginess as we get into more high definition displays and things like that.
HC: I think that's very, very critical. It's one of the big emphases of our current graphics engine, removing what we call the "digital artifacts". Jaggies are definitely one of them. So we place a fairly high emphasis on removing this temporal aliasing. And again, this is one area where we see a lot of recent research, and there are some things that look much better, and we're definitely looking at a few of them right now.
What looks most interesting to you?
HC: I think all of the morphological anti-aliasing work, especially some of the latest variations of MLAA in deferred rendering engines. Some recent techniques are fast enough even for the current generation of consoles.
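The morphological approach HC mentions can be sketched very roughly: detect luminance discontinuities between neighboring pixels, then blend across the detected edge. This is a greatly simplified illustration, not the actual MLAA algorithm, which also classifies edge shapes (L/Z/U patterns) and computes coverage-based blend weights; the threshold here is an arbitrary assumption.

```python
# Greatly simplified sketch of the morphological anti-aliasing (MLAA) idea:
# find hard luminance edges between adjacent pixels and soften them by
# averaging. Real MLAA classifies edge patterns and weights the blend.

EDGE_THRESHOLD = 0.1  # assumed luminance-difference threshold

def smooth_row(luma):
    """Blend each pixel with its right neighbour where an edge is found."""
    out = list(luma)
    for i in range(len(luma) - 1):
        if abs(luma[i] - luma[i + 1]) > EDGE_THRESHOLD:
            avg = 0.5 * (luma[i] + luma[i + 1])
            out[i], out[i + 1] = avg, avg
    return out

print(smooth_row([0.0, 0.0, 1.0, 1.0]))  # [0.0, 0.5, 0.5, 1.0]
```

The hard step between the second and third pixels becomes a gradient, which is exactly the visual effect that hides a jagged edge.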
But there's also the notion of decoupling sampling from shading in some of the recent papers from ATI and NVIDIA and others, which is also very interesting. That's about getting higher quality AA without having to pay for the extra storage. As you know, storage and bandwidth on a console are always at a premium.
There are new techniques that allow us to achieve higher quality anti-aliasing, something like the equivalent of 8x or 16x MSAA, but which only requires 2x or 4x storage. There are also other things we can do to make the game look smoother, like motion blur, better filtering, and better lighting and materials.
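The storage argument is easy to make concrete with back-of-the-envelope arithmetic. The numbers below (720p, 4-byte color, 4-byte depth per sample) are assumptions for illustration, not figures from the interview, but they show why brute-force 8x MSAA is expensive on a memory-constrained console.

```python
# Rough framebuffer storage cost for brute-force MSAA: every sample stores
# its own color and depth, so cost scales linearly with the sample count.
# Assumptions: 1280x720, 4 bytes color + 4 bytes depth per sample.

def msaa_storage_mb(width, height, samples, bytes_color=4, bytes_depth=4):
    """Storage for a multisampled color + depth buffer, in MiB."""
    per_pixel = samples * (bytes_color + bytes_depth)
    return width * height * per_pixel / (1024 * 1024)

for s in (1, 2, 4, 8):
    print(f"{s}x at 720p: {msaa_storage_mb(1280, 720, s):.1f} MiB")
```

Brute-force 8x costs eight times the 1x footprint (about 56 MiB at 720p under these assumptions), which is why schemes that deliver 8x- or 16x-quality edges while only storing 2x or 4x worth of color data are so attractive on consoles.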
I also expect quality and performance improvements to continue, perhaps by taking advantage of GPGPU. Hopefully with all this, jaggies will be reduced to a point where you have to look very hard to find them in the next generation of games.
How about that sort of texture pop effect that has continued to plague pretty much every game with more complicated normal maps as they're paged in from the disc? How are you dealing with that?
HC: Well that's more or less a resource management issue. That again is dealing with a limited amount of memory. There are ways to make it less noticeable, and they all have to do with clever blending of the texture transition, or morphing into the textures as things are being paged in. Also, generally higher performance allows you to hide the pops until they're further away, or to have better prediction of which textures will be needed. But the cold reality is, there's a limited amount of space on the console that can store things in memory, so we need to page things in and out.
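One common way to do the "clever blending" HC describes is to cross-fade toward the newly streamed mip level over several frames instead of swapping instantly. This is a hypothetical sketch; the function name and the frame counts are invented for illustration, and a real engine would feed the result into the sampler's LOD bias.

```python
# Hypothetical sketch of hiding texture pops: instead of swapping to the
# newly streamed high-resolution mip in one frame, ramp a LOD bias from
# +1 (blurrier, old mip) down to 0 (full resolution) over a fade window.

def blended_mip_bias(frames_since_loaded, fade_frames=30):
    """LOD bias that decays to zero as the streamed texture settles in."""
    t = min(frames_since_loaded / fade_frames, 1.0)
    return 1.0 - t  # feed this as a mip/LOD bias to the texture sampler

print(blended_mip_bias(0))   # 1.0 -> still showing the low-res mip
print(blended_mip_bias(15))  # 0.5 -> halfway through the cross-fade
print(blended_mip_bias(60))  # 0.0 -> fully transitioned
```

The eye is far more forgiving of a half-second sharpening gradient than of a single-frame jump in texture detail, which is the whole point of the blend.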
For a very large world, we also need level of detail management so we don't draw everything at the same fidelity everywhere; these are the main causes of "popping." But texture differences are typically only a small part of this; often, changes in geometry silhouette and differences in lighting and shading between the different LODs contribute the most jarring popping.
To combat this, we have a very clever LOD generation system that tries to preserve both the geometry silhouette and the material appearance. You might have seen our GDC presentation last year on our impostor system that talked about some of this, and we're still improving it. But still, when you want to have much larger worlds with much larger content, you have to page things in. So for now it's just one hack that's probably going to be better than the other hack.
How much time in your department do you have to spend being concerned about game performance? Can you push the boundaries as much as possible with your effects and then get reined back in, or do you have to be constantly on top of memory management and that sort of thing?
HC: You're talking about two things. One is performance, one is memory. But they kind of follow similar patterns. Our teams are typically very involved in both performance and memory, because we are the largest consumer of both.
Typically, the way that we handle performance is we want a game to be within [a performance] ballpark around milestones. So for each milestone we set a performance target and say, by this milestone, we have to be within the ballpark of that performance number. And then at the end of the milestone we have very formal performance reviews, where we go through each level, find the performance bottlenecks, and then assign teams and individuals to track those bottlenecks and improve performance that way.
That's the typical process we go through. But obviously a lot of performance comes from how things are designed. If you design something to be low-performance, then typically you're going to be low-performance all the way until the end. A lot of our design decisions already factor in performance from very early on. We try to solve the performance issues more as a design problem, not as a hardware optimization problem.
So in terms of memory, it follows a similar pattern. We typically have an agreed-upon budget very early in the game, and then we try to make the team live by that budget, especially the content people. And every time they make a level there's going to be a content report that tells us where they're using memory, and where they exceeded their memory budget. Then we take early enforcement action to make sure we're within ballpark.
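The per-level content report HC describes can be sketched as a simple budget check: compare each asset category's usage against the agreed budget and flag overages early. The categories and numbers below are made up for illustration; a real pipeline would pull these from the build system.

```python
# Hypothetical sketch of a per-level content report: compare each asset
# category's memory usage against an agreed budget and list overages so
# enforcement can happen early. All figures here are invented examples.

BUDGET_MB = {"textures": 256, "geometry": 128, "audio": 64}

def content_report(usage_mb):
    """Return (category, used, budget) for every category over budget."""
    return [(cat, used, BUDGET_MB[cat])
            for cat, used in usage_mb.items()
            if used > BUDGET_MB.get(cat, float("inf"))]

level_usage = {"textures": 280, "geometry": 120, "audio": 70}
for cat, used, budget in content_report(level_usage):
    print(f"{cat}: {used} MiB used, {budget} MiB budgeted "
          f"({used - budget} MiB over)")
```

Running the check at every milestone rather than at the end is what keeps enforcement "early", as described above: a 24 MiB texture overage is a conversation at alpha, but a crisis at ship.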
But performance and memory are the kind of thing that, depending on the stage of your game, you cannot be overly strict about. At the earlier stages you want people to put more content in, so they can get a feel for the game and explore the look of the game. As long as they're within the ballpark, I think we'll be okay. So we try not to be too draconian about performance and memory numbers too early.
I don't know if you have been keeping up on advances in voxelization...
HC: Voxels are very, very interesting to us. For example, when we take advantage of voxelization, we basically voxelize our level and then we build these portalizations and clusterings of our spaces based on the voxelization. And so what voxelization does is hide all the small geometry details. With a regular data structure, it's very easy to reason about the space when it's voxelized, versus dealing with individual polygons.
But besides this ability, there's also the very interesting possibility for us to use voxelization or a voxelized scene to do lighting and global illumination. We have some thoughts in that area that we might research in the future, but in general I think it's a very good direction for us to think about; to use voxelization to hide all the details of the scene geometry and sort of decouple the complexity of the scene from the complexity of lighting and visibility. In that way everything becomes easier in voxelization.
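The core idea, mapping irregular scene geometry onto a regular grid, can be sketched minimally. This is an illustrative toy, not the engine's method: it voxelizes surface sample points (a real system would rasterize triangles into the grid), with an arbitrary cell size.

```python
# Minimal sketch of voxelization: map scene geometry (here, just surface
# sample points) onto a uniform grid, so spatial reasoning works on regular
# cells instead of individual polygons. Cell size is an arbitrary choice.

def voxelize(points, voxel_size):
    """Map 3D points to the set of occupied voxel coordinates."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

# Two nearby surface samples fall into the same 1-unit cell;
# a distant sample occupies its own cell.
cells = voxelize([(0.2, 0.3, 0.1), (0.8, 0.9, 0.4), (5.1, 0.0, 0.0)], 1.0)
print(sorted(cells))  # [(0, 0, 0), (5, 0, 0)]
```

Once geometry is reduced to occupied cells, operations like clustering, portal finding, or light propagation become neighbor queries on a grid, which is exactly the decoupling of scene complexity from lighting and visibility complexity described above.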
But as far as disadvantages, once you've figured out all this connectivity within your space that's based on voxels, how do you then map that back to the original geometry? In terms of lighting for example, if you've figured out where each voxel should be lit, how do you take that information and bake it back to the original geometry? Because you're not drawing the voxels, you're drawing the original scene. And so that mapping is actually non-trivial. So there's a lot of research that is still needed in order for voxels to be used directly in-engine.
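The mapping problem HC raises can be illustrated with the most naive possible answer: look up the lit voxel containing each mesh vertex. This is a hedged sketch with invented data; it also shows why the naive version falls short, since any vertex whose voxel was never lit or occupied gets nothing back, which is why real solutions need filtering, dilation, and seam handling.

```python
# Naive sketch of baking voxel lighting back onto the original geometry:
# fetch the irradiance stored in the voxel containing each mesh vertex.
# Data is invented for illustration; real engines need filtering/dilation.

def bake_vertex_lighting(vertices, voxel_light, voxel_size):
    """Per-vertex irradiance from the voxel grid (0.0 if voxel unlit)."""
    baked = []
    for x, y, z in vertices:
        cell = (int(x // voxel_size),
                int(y // voxel_size),
                int(z // voxel_size))
        baked.append(voxel_light.get(cell, 0.0))
    return baked

voxel_light = {(0, 0, 0): 0.9, (1, 0, 0): 0.4}   # per-voxel irradiance
verts = [(0.5, 0.2, 0.1), (1.7, 0.3, 0.0), (3.0, 0.0, 0.0)]
print(bake_vertex_lighting(verts, voxel_light, 1.0))  # [0.9, 0.4, 0.0]
```

The third vertex lands in an empty voxel and receives no light at all, a blocky artifact on smooth surfaces; closing that gap robustly is the non-trivial research problem described above.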