Despite the ever-increasing resolutions made possible by modern 3D graphics cards,
computer games and real-time graphics are acquiring a uniform "look,"
imposed largely by technical limitations and occasionally unimaginative
programming. I'm talking about that sharp-edged, polygonal look. Great
advances in texture resolution and per-pixel lighting have helped to
fill in the previously fuzzy polygon interiors, but the edges and feel
of the image often remain frustratingly synthetic and "clean."
Dynamic lighting effects, such as moving shadows, bump maps, and specular highlights,
can help add variety to the rendered scene. These topics have been dealt
with extensively elsewhere. However, one option that is often overlooked
is blurring elements of the scene; for example, depth-of-field and focusing
effects bring certain elements of an image into the foreground, hide
unnecessary clutter, and reduce the need to resort to over-used fogging
tricks. Blurring is a costly process, but a fundamentally simple one. This article
presents a few tricks which can help make real-time blurring possible,
and hopefully will provide enough material to inspire you to invent
your own hybrid techniques.
The Basics of Blurring
There are many ways to blur an image, but at a basic level it always comes
down to low-pass filtering of the image -- this can be achieved in many
ways, often by convolution of the image with a filter. It's instructive
to think about the basics of this blurring process so you can appreciate
how the tricks work, and what their shortcomings are.
Figure 1 shows a source image, and the results of blurring it with two simple
filters: the first is a box filter, and the second is a Gaussian type
equivalent to a Photoshop Gaussian blur of 1.0 pixels. The Gaussian
filter gives a "softer," more aesthetically pleasing look,
but the box filter has computational advantages that I'll discuss later.
The filtering process is achieved by calculating a weighted average
of a cluster of source pixels for every destination pixel. The kernels
shown in Figure 1 dictate the weights of source pixels used to perform
this average. Informally, you can think of this as "sliding"
the small kernel image over the source image. At each position of the
kernel, the average of the product of all the kernel pixels with the
source pixels they overlap is computed, yielding a single destination
pixel. The code to do this in the most obvious fashion is given in Listing
1, and illustrated in Figure 2. Note that all the examples given
here assume a monochrome source image; colored images can be handled
simply by dealing with each channel (red, green, and blue) independently.
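The sliding-kernel average just described can be sketched in a few lines of Python. This is not the article's Listing 1, just a minimal illustration; the function name and the choice to treat out-of-bounds source pixels as zero are my own assumptions.

```python
def convolve(src, ker):
    """Brute-force filtering: for every destination pixel, sum the products
    of the kernel entries with the source pixels they overlap.
    src is a 2D list of brightness values (monochrome image);
    ker is a (2k+1) x (2k+1) list of weights.
    Source pixels outside the image are treated as zero (one simple
    edge policy; the original listing may do something different)."""
    h, w = len(src), len(src[0])
    k = len(ker) // 2
    dst = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            # Slide the kernel so it is centered on (x, y).
            for j in range(-k, k + 1):
                for i in range(-k, k + 1):
                    sy, sx = y + j, x + i
                    if 0 <= sy < h and 0 <= sx < w:
                        total += ker[j + k][i + k] * src[sy][sx]
            dst[y][x] = total
    return dst
```

With a 3x3 box kernel (every weight 1/9) this averages each pixel with its eight neighbors, which is exactly the box filter of Figure 1.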
Doing It Quickly
The algorithm described thus far is simple, but slow. For large images,
and for large kernels, the number of operations rapidly becomes prohibitively
large for real-time operation. The problem is particularly acute when
extreme blurring is required; either a small-kernel filter must be applied
many times iteratively, or a filter with a kernel of a size comparable
with the image must be used. That's approximately an O(n^4) operation using
the code in Listing 1 -- clearly no good.
The rest of this article describes a few tricks -- the first two entirely software-related,
the latter two using the power of 3D graphics cards -- which help make
real-time blurring possible.
Trick One: Box Filter to the Rescue
Look at the box-filtered image in Figure 1, and at the kernel. It's a constant
value everywhere. This can be used to great advantage. The general filtering
operation used by Listing 1 is described
by this mathematical expression (you can skip to the end of this section
for the payoff if you don't like math):
d(x,y) = (1/(2k+1)^2) * sum(i=-k..k) sum(j=-k..k) ker(i,j) * s(x+i, y+j)

Equation 1. Here x,y specifies the coordinate of the destination
pixel, s is the source image, d is the destination image, ker is the kernel,
and 2k + 1 is the size (in pixels) of the kernel.
For a constant kernel value, this becomes:
d(x,y) = (c/(2k+1)^2) * sum(i=-k..k) sum(j=-k..k) s(x+i, y+j)

Equation 2. Equation 1 rewritten for a kernel with constant value c. Values of
c other than 1 allow the brightness of the destination image to be changed.
Unfortunately, the costly double sum per destination pixel still remains. This is where
some nifty precomputation comes in handy. The key is to represent the
source image in a different way. Rather than storing the brightness
of each pixel in the image, we precompute a version of the image in
which each pixel location holds the total of all the pixels above and
to the left of it (see Figure 3). Mathematically, this is described
by Equation 3:
p(x,y) = sum(i=0..x) sum(j=0..y) s(i,j)

Equation 3. Image p at a point x,y contains the sum of all the source
pixels from 0,0 to x,y.
Note that this means you need to store more than the usual 8 bits per pixel
per channel -- the summed brightnesses toward the bottom right of the
image can get very large.
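This precomputation can be sketched in Python as follows (the naming is mine, not the article's). A running row total plus the entry directly above gives each table entry without re-summing the whole rectangle:

```python
def precompute(src):
    """Build the table p where p[y][x] holds the total of all source
    pixels from (0,0) to (x,y) inclusive (Equation 3).
    The sums grow toward the bottom right, so in a fixed-point
    implementation they need more than 8 bits per channel."""
    h, w = len(src), len(src[0])
    p = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0  # running total of the current row, columns 0..x
        for x in range(w):
            row_sum += src[y][x]
            # Rectangle (0,0)-(x,y) = this row's prefix + rectangle above.
            p[y][x] = row_sum + (p[y - 1][x] if y > 0 else 0)
    return p
```

Each entry is produced in constant time, so the whole table costs one pass over the image.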
Once the precomputation has been completed, the expression for the box-filtering
process can be rewritten entirely in terms of sums starting at 0:
d(x,y) = (c/(2k+1)^2) * [ sum(i=0..x+k) sum(j=0..y+k) s(i,j)
                        - sum(i=0..x-k-1) sum(j=0..y+k) s(i,j)
                        - sum(i=0..x+k) sum(j=0..y-k-1) s(i,j)
                        + sum(i=0..x-k-1) sum(j=0..y-k-1) s(i,j) ]

Equation 4. Equation 2 rewritten with sums from 0, where p is the
precomputed image from Equation 3.
This gives exactly the same result as the basic box-filtering algorithm in
Equation 2; the trick is that each of the double sums in Equation 4
is just a single look-up into the precomputed image p. This means that
the blurring operation for each destination pixel is reduced to four
image look-ups, a few additions and subtractions, and a divide by a
constant value (which can also be turned into a lookup with a simple
multiplication table). Even more significantly, this algorithm's speed
is independent of kernel size, meaning that it takes the same time to
blur the image no matter how much blurring is required. Code which implements
this algorithm is given in Listing 2. It's
slightly complicated by having to deal with edge cases, but the core
of the algorithm is still simple. Some impressive focusing and defocusing
effects can be achieved with this code alone. It is particularly suited
to static images (because you only have to perform the precomputation
step once) such as front-end elements and text/fonts.
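Putting the two steps together, here is a self-contained Python sketch of the whole trick. It is an illustration rather than the article's Listing 2; in particular, clamping the kernel window at the image borders is one reasonable way to deal with the edge cases mentioned above.

```python
def box_blur(src, k):
    """Box blur whose per-pixel cost is independent of the blur radius k.
    Builds the summed table p (Equation 3), then reduces each destination
    pixel to four look-ups into p, a few adds/subtracts, and one divide
    (Equation 4).  Edge cases are handled by clamping the kernel window."""
    h, w = len(src), len(src[0])

    # Precompute p[y][x] = sum of src over the rectangle (0,0)-(x,y).
    p = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += src[y][x]
            p[y][x] = row_sum + (p[y - 1][x] if y > 0 else 0)

    def rect_sum(x0, y0, x1, y1):
        # Sum of src over the inclusive rectangle (x0,y0)-(x1,y1),
        # via the usual four-corner add/subtract on p.
        s = p[y1][x1]
        if x0 > 0:
            s -= p[y1][x0 - 1]
        if y0 > 0:
            s -= p[y0 - 1][x1]
        if x0 > 0 and y0 > 0:
            s += p[y0 - 1][x0 - 1]
        return s

    dst = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clamp the (2k+1) x (2k+1) window to the image bounds.
            x0, x1 = max(0, x - k), min(w - 1, x + k)
            y0, y1 = max(0, y - k), min(h - 1, y + k)
            area = (x1 - x0 + 1) * (y1 - y0 + 1)
            dst[y][x] = rect_sum(x0, y0, x1, y1) / area
    return dst
```

Note that k appears only in the index arithmetic, never in a loop bound, which is why a radius-1 blur and a radius-100 blur cost the same per pixel. The divide by the (clamped) window area is the step the article suggests replacing with a small multiplication table.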