With the ever-increasing resolutions made possible by modern 3D graphics cards, computer games and real-time graphics are acquiring a uniform "look," imposed largely by technical limitations and occasionally unimaginative programming. I'm talking about that sharp-edged, polygonal look. Great advances in texture resolution and per-pixel lighting have helped to fill in the previously fuzzy polygon interiors, but the edges and feel of the image often remain frustratingly synthetic and "clean."
Imaginative lighting, such as moving shadows, bump maps, and specular highlights, can help add variety to the rendered scene. These topics have been dealt with extensively elsewhere. However, one option that is often overlooked is blurring elements of the scene; for example, depth-of-field and focusing effects bring certain elements of an image into the foreground, hide unnecessary clutter, and reduce the need to resort to over-used fogging tricks. Blurring is a costly process, but a fundamentally simple and useful one.
This article presents a few tricks which can help make real-time blurring possible, and hopefully will provide enough material to inspire you to invent your own hybrid techniques.
The Basics of Blurring
There are many ways to blur an image, but at a basic level it always comes down to low-pass filtering of the image -- this can be achieved in many ways, often by convolution of the image with a filter. It's instructive to think about the basics of this blurring process so you can appreciate how the tricks work, and what their shortcomings are.
Figure 1 shows a source image, and the results of blurring it with two simple filters: the first is a box filter, and the second is a Gaussian type equivalent to a Photoshop Gaussian blur of 1.0 pixels. The Gaussian filter gives a "softer," more aesthetically pleasing look, but the box filter has computational advantages that I'll discuss later.
Figure 1. The effect of blurring an image with (from left to right) a box filter and a Photoshop-style Gaussian filter. The kernels of each of these filters are given below.
Algorithmically, the filtering process is achieved by calculating a weighted average of a cluster of source pixels for every destination pixel. The kernels shown in Figure 1 dictate the weights of the source pixels used to perform this average. Informally, you can think of this as "sliding" the small kernel image over the source image. At each position of the kernel, the average of the product of all the kernel pixels with the source pixels they overlap is computed, yielding a single destination pixel. The code to do this in the most obvious fashion is given in Listing 1, and illustrated in Figure 2. Note that all the examples given here assume a monochrome source image; colored images can be handled simply by dealing with each channel (red, green, and blue) independently.
Figure 2. An example of how a single destination pixel in a blurred image results from the weighted average of several source pixels.
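Listing 1 isn't reproduced here, but the loop it describes is straightforward. The following is a minimal sketch of such a brute-force convolution in C; the function names and the clamp-to-border policy at the image edges are illustrative assumptions, not the article's own code.

```c
/* Sketch of the naive filtering loop: for every destination pixel,
   take the weighted average of the (2k+1)x(2k+1) source pixels that
   the kernel overlaps. Out-of-range reads are clamped to the border. */
static float get_clamped(const float *img, int w, int h, int x, int y)
{
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x >= w) x = w - 1;
    if (y >= h) y = h - 1;
    return img[y * w + x];
}

void convolve(const float *src, float *dst, int w, int h,
              const float *ker, int k)   /* kernel is (2k+1)x(2k+1) */
{
    int size = 2 * k + 1;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int j = -k; j <= k; ++j)
                for (int i = -k; i <= k; ++i)
                    sum += ker[(j + k) * size + (i + k)]
                         * get_clamped(src, w, h, x + i, y + j);
            dst[y * w + x] = sum;
        }
    }
}
```

Note the four nested loops: for an n-by-n image and a kernel of comparable size, the cost grows as roughly n to the fourth power, which is the problem the rest of the article attacks.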
Doing It Quickly
The blurring algorithm described thus far is simple, but slow. For large images, and for large kernels, the number of operations rapidly becomes prohibitively large for real-time operation. The problem is particularly acute when extreme blurring is required; either a small-kernel filter must be applied many times iteratively, or a filter with a kernel of a size comparable with the image must be used. That's approximately an n⁴ operation using the code in Listing 1 -- clearly no good.
The rest of this article describes a few tricks -- the first two entirely software-related, the latter two using the power of 3D graphics cards -- which help make real-time blurring possible.
Trick One: Box Filter to the Rescue
Look again at the box-filtered image in Figure 1, and at the kernel. It's a constant value everywhere. This can be used to great advantage. The general filtering operation used by Listing 1 is described by this mathematical expression (you can skip to the end of this section for the payoff if you don't like math):
Equation 1. d(x, y) = Σ(i = −k..k) Σ(j = −k..k) ker(i + k, j + k) · s(x + i, y + j). Here x,y specifies the coordinate of the destination pixel, s is the source image, d is the destination image, ker is the kernel, and 2k + 1 is the size (in pixels) of the kernel.
Allowing for a constant kernel value, this becomes:
Equation 2. Equation 1 rewritten for a kernel with constant value: d(x, y) = (c / (2k + 1)²) · Σ(i = −k..k) Σ(j = −k..k) s(x + i, y + j). Values of c other than 1 allow the brightness of the destination image to be changed.
However, the costly double sum per destination pixel still remains. This is where some nifty precomputation comes in handy. The key is to represent the source image in a different way. Rather than storing the brightness of each pixel in the image, we precompute a version of the image in which each pixel location holds the total of all the pixels above and to the left of it (see Figure 3). Mathematically, this is described by Equation 3:
Equation 3. p(x, y) = Σ(i = 0..x) Σ(j = 0..y) s(i, j). Image p at a point x,y contains the sum of all the source pixels from 0,0 to x,y.
Note that this means that you need to store more than the usual 8 bits per pixel per channel -- the summed brightnesses toward the bottom right of the image can get very large.
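As a sketch of this precomputation step (the function name and 32-bit table format are illustrative assumptions, not the article's listing), the whole table can be built in a single pass by keeping a running total for the current row and adding the completed total from the row above:

```c
#include <stdint.h>

/* Build the table of Equation 3: entry (x,y) holds the sum of all
   source pixels from (0,0) to (x,y) inclusive. 32-bit entries are
   used because the running totals quickly overflow 8 bits. */
void build_sum_table(const unsigned char *src, uint32_t *p, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        uint32_t row_sum = 0;               /* running total of this row */
        for (int x = 0; x < w; ++x) {
            row_sum += src[y * w + x];
            /* sum above-and-left = this row's prefix
               + the total of all the rows above */
            p[y * w + x] = row_sum + (y > 0 ? p[(y - 1) * w + x] : 0);
        }
    }
}
```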
Once this precomputation has been completed, the expression for the box-filtering process can be rewritten entirely in terms of sums starting at 0:
Equation 4. Equation 2 rewritten with sums from 0, where p is the precomputed image from Equation 3: d(x, y) = (c / (2k + 1)²) · [ p(x + k, y + k) − p(x − k − 1, y + k) − p(x + k, y − k − 1) + p(x − k − 1, y − k − 1) ].
Figure 3. The values in the table on the left represent a source image. Each entry in the table on the right contains the sum of all the source pixels above and to the left of that position.
This equation gives exactly the same result as the basic box filtering algorithm in Equation 2; the trick is that each of the double sums in Equation 4 is just a single look-up into the precomputed image p. This means that the blurring operation for each destination pixel is reduced to four image look-ups, a few additions and subtractions, and a divide by a constant value (which can also be turned into a lookup with a simple multiplication table). Even more significantly, this algorithm's speed is independent of kernel size, meaning that it takes the same time to blur the image no matter how much blurring is required. Code which implements this algorithm is given in Listing 2. It's slightly complicated by having to deal with edge cases, but the core of the algorithm is still simple. Some impressive focusing and defocusing effects can be achieved with this code alone. It is particularly suited to static images (because you only have to perform the precomputation step once) such as front-end elements and text/fonts.
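Listing 2 isn't reproduced here, but a minimal sketch of the four-look-up blur might look like the following. The edge handling shown (treating look-ups above or to the left of the image as zero, clipping the window to the image, and dividing by the actual number of pixels covered) is one reasonable choice, not necessarily the one the article's listing makes.

```c
#include <stdint.h>

/* Reads from the precomputed table; coordinates above or to the left
   of the image have no pixels summed, so they read as zero. */
static uint32_t table_at(const uint32_t *p, int w, int x, int y)
{
    if (x < 0 || y < 0) return 0;
    return p[y * w + x];
}

/* Box-blur via Equation 4: each destination pixel costs four table
   look-ups regardless of the kernel radius k. */
void box_blur(const uint32_t *p, unsigned char *dst, int w, int h, int k)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            /* clip the (2k+1)x(2k+1) window to the image bounds */
            int x0 = x - k < 0 ? 0 : x - k;
            int y0 = y - k < 0 ? 0 : y - k;
            int x1 = x + k >= w ? w - 1 : x + k;
            int y1 = y + k >= h ? h - 1 : y + k;
            uint32_t total = table_at(p, w, x1, y1)
                           - table_at(p, w, x0 - 1, y1)
                           - table_at(p, w, x1, y0 - 1)
                           + table_at(p, w, x0 - 1, y0 - 1);
            /* divide by the number of pixels actually covered */
            dst[y * w + x] =
                (unsigned char)(total / ((x1 - x0 + 1) * (y1 - y0 + 1)));
        }
    }
}
```

The division by the window's true pixel count is what handles the edge cases the article mentions; in production code that divide would typically become the multiplication-table look-up described above.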