Logarithmic Depth Buffer
by Brano Kemen on 08/12/09 02:43:00 pm   Expert Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

I assume pretty much every 3D programmer runs into Z-buffer issues sooner or later, especially when doing planetary rendering: the distant terrain can be a thousand kilometers away, yet you still want to see fine detail right in front of the camera.

Previously I dealt with the problem by splitting the depth range in two, using one part for near geometry and the other for distant geometry. The boundary floated somewhere around 5 km: quad-tree tiles up to a certain level used the distant part, and the more detailed tiles, which by the nature of LOD occur nearer the camera, used the other.
Most of the time this worked. But in one case it failed miserably: when a more detailed tile appeared behind a less detailed one.
I was thinking about the ways to fix it, grumbling why we can't have a Z-buffer with better distribution, when it occurred to me that maybe we can.

Steve Baker's document explains the common problems with the Z-buffer. In short, the stored depth values are proportional to the reciprocal of Z. This gives plenty of precision near the camera but very little off in the distance. The common workaround is to move the near clip plane further away, which helps but brings its own problem: the near clip plane is now too far away.

A much better Z-value distribution is a logarithmic one. It also plays nicely with LOD used in large scale terrain rendering.
Use the following equation to modify the depth value after it has been transformed by the projection matrix:
z = log(C*z + 1) / log(C*Far + 1) * w
where C is a constant that determines the resolution near the camera, and the multiplication by w cancels out, in advance, the implicit division by w later in the pipeline.
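As a concrete illustration, here is the same expression in Python rather than shader code (the function and parameter names are mine, not from the original implementation):

```python
import math

def logarithmic_depth(clip_z, clip_w, C=1.0, far=10_000_000.0):
    """Replace the post-projection depth with a logarithmic one.

    clip_z, clip_w -- z and w of the vertex after the projection matrix
    C              -- constant controlling resolution near the camera
    far            -- far plane distance, in the same units as clip_z

    The result is pre-multiplied by clip_w so that the pipeline's later
    division by w yields log(C*z + 1) / log(C*far + 1), in [0, 1].
    """
    return math.log(C * clip_z + 1.0) / math.log(C * far + 1.0) * clip_w
```

In a vertex shader this is a single line applied to the output position's z right after the model-view-projection multiply.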
The resolution at distance x, for a given C and an n-bit Z-buffer, can be computed as (log here is the natural logarithm):

      log(C*Far + 1)
Res = --------------------
      2^n * C / (C*x + 1)

So for example, for a far plane at 10,000 km and a 24-bit Z-buffer, this gives the following resolutions:

Distance:   1m      10m     100m    1km     10km    100km   1Mm     10Mm
C=1         1.9e-6  1.1e-5  9.7e-5  0.001   0.01    0.096   0.96    9.6     [m]
C=0.001     0.0005  0.0005  0.0006  0.001   0.006   0.055   0.549   5.49    [m]
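These figures can be reproduced from the resolution formula; a small Python sketch, assuming the natural logarithm and the algebraically equivalent form Res = log(C*Far + 1) * (C*x + 1) / (2^n * C):

```python
import math

def z_resolution(x, C, far=10_000_000.0, bits=24):
    """Depth resolution in meters at distance x, for constant C,
    far plane `far`, and a `bits`-bit Z-buffer."""
    return math.log(C * far + 1.0) * (C * x + 1.0) / (2**bits * C)

distances = [1, 10, 100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000]
for C in (1.0, 0.001):
    # Prints one row of the table above, to two significant digits.
    print(C, ["%.2g" % z_resolution(x, C) for x in distances])
```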

Along with the better utilization of z-value space, it also (almost) gets rid of the near clip plane.

And here is the result.

Looking at the nose while keeping an eye on distant mountains ..

10 thousand kilometers, no near Z clipping, and no Z-fighting! HOORAY!

More details

The constant C basically changes the resolution near the camera; I used C=1 for the screenshots, giving a theoretical resolution of 1.9e-6 m. However, the resolution near the camera cannot be fully utilized unless the geometry is finely tessellated too, because depth is interpolated linearly, not logarithmically. On models such as the guy in the screenshots it is perfectly fine to put the camera on his nose, but on models with long strips whose vertices are a few meters apart, the interpolation bugs can become visible. We will deal with this by requiring a certain minimum tessellation.
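A quick way to see the effect, sketched in Python with made-up numbers: compare the true logarithmic depth at the midpoint of a long edge with the value that linear interpolation of the two vertex depths produces.

```python
import math

C, FAR = 1.0, 10_000_000.0

def log_depth(z):
    """Normalized logarithmic depth for view-space distance z."""
    return math.log(C * z + 1.0) / math.log(C * FAR + 1.0)

# An edge whose endpoints lie 1 m and 201 m from the camera.
z0, z1 = 1.0, 201.0
mid = 0.5 * (z0 + z1)  # view-space midpoint at 101 m

true_depth = log_depth(mid)
interpolated = 0.5 * (log_depth(z0) + log_depth(z1))

# log is concave, so linear interpolation undershoots the true value:
# the middle of a coarsely tessellated triangle gets a depth that is
# too small (too near), which is the source of the artifacts.
print(true_depth, interpolated)
```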

Also, I think I've read somewhere that a forthcoming generation of hardware will support different interpolation modes too.

So yes, modifying C changes the resolution near the camera; setting it to the value that gives the largest acceptable resolution may be desirable, since it makes the distribution more linear in the near range and thus minimizes the interpolation problem.

The near clip plane can be put arbitrarily 'near', but not at zero, because of the 1/w division. I have set it to 0.0001 m. This is using a standard perspective projection setup.

Negative Z artifact fix

Ysaneya suggested a fix for the artifacts that occur with thin or huge triangles when Z goes behind the camera: writing the correct Z-value at the pixel shader level. This disables fast-Z mode, but he found the performance hit to be negligible.

Tyler Glaiel
I'd love to see what it looks like as the camera flies forward at high speed across the terrain.

I was considering something like this once. Is there an easy way to add this to traditional graphics libraries like OpenGL or DirectX via a shader?

Brano Kemen
There's an older video with the terrain at

The code for the logarithmic z-buffer is easy to add to shaders; it's actually just the one-line expression mentioned above.

Stephen Northcott
There is another method, which I discuss here, based on an article I also link to.

That method is great, except when your viewpoint passes into surfaces. But in planet and landscape rendering engines it's fairly easy, in my experience, to isolate those as special cases.

Aurelio Reis
This is a nice way to go about it, but keep in mind that modifying depth values in a shader can dramatically degrade performance, due to the way the graphics pipeline rejects pixels early.

Extending this to work by just modifying the projection matrix would make it immensely more useful. Thoughts?

Brano Kemen
This method modifies the depth in the vertex shader, just after the position is multiplied by the model-view-projection matrix, so it's OK from a performance point of view.

However there are artifacts near the camera when one or more face vertices lie behind the near plane.

The problem is not so much the Z values being negative - a lower C linearizes the function well enough that this would not be a problem. The problem is that the rasterizer interpolates Z/W and 1/W linearly and computes the per-pixel Z by dividing the two values; it does this to obtain perspective-correct values. The value written to the Z-buffer could in principle be something else entirely, but in practice it is taken from this.

Anyway, the problem is that we are pre-multiplying Z by the values of W at the vertices, to cancel out the inherent 1/W. So even when the logarithmic function is linear enough in a +-200 m range, we are also linearly interpolating between values of W. But since the rasterizer interpolates 1/W, the corresponding values of W there are quite different.
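The rasterizer behavior described above can be mimicked numerically: linearly interpolate z/w and 1/w across the primitive, then divide per pixel. A toy two-vertex sketch (names are mine):

```python
def perspective_correct_z(z0, w0, z1, w1, t):
    """Per-pixel z the way the rasterizer computes it: interpolate
    z/w and 1/w linearly in screen space, then divide the results."""
    z_over_w = (1.0 - t) * (z0 / w0) + t * (z1 / w1)
    one_over_w = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1)
    return z_over_w / one_over_w

# For an edge spanning view depths 1 and 1000 (with z == w here), the
# screen-space midpoint corresponds to a view depth near 2, not 500.5:
# W varies hyperbolically across the primitive, as described above.
z_mid = perspective_correct_z(1.0, 1.0, 1000.0, 1000.0, 0.5)
```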

I'm using the pixel-shader-computed depth (which may impair performance by disabling early-Z and hierarchical-Z) only for the objects intersecting the near plane.

But I didn't see any slowdown in my app anyway.

Thatcher Ulrich
This is a great post.

There's a formulation I like a little better, which is:

z = log(z / zn) / log(zf / zn)

That keeps the relative precision constant through the whole zn-to-zf range, which is how I'm used to thinking about it.
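A numerical check of that claim, as a sketch with assumed zn, zf and bit depth: with z = log(z / zn) / log(zf / zn), one depth-buffer step always corresponds to the same fraction of the current distance.

```python
import math

ZN, ZF, BITS = 0.1, 10_000_000.0, 24

def depth(z):
    """The alternative formulation above, normalized to [0, 1]."""
    return math.log(z / ZN) / math.log(ZF / ZN)

def resolution(x, rel_step=1e-6):
    """Distance change that consumes one depth-buffer step at x,
    estimated from a numerical derivative of depth()."""
    slope = (depth(x * (1.0 + rel_step)) - depth(x)) / (x * rel_step)
    return 2.0**-BITS / slope

# Res(x) / x is constant: relative precision is uniform over [zn, zf].
ratios = [resolution(x) / x for x in (1.0, 100.0, 10_000.0, 1_000_000.0)]
```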