Gamasutra: The Art & Business of Making Games
Four Tricks for Fast Blurring in Software and Hardware
February 9, 2001 (Page 2 of 4)

Trick Two: Moving Images and Motion Blur

The technique just described suffers from one drawback: the fairly costly precomputation step before any blurring can be performed. This is fine for static images, but for moving ones it significantly increases the storage and time required to perform a blur.

To solve this, let's consider a horizontal motion blur: blurring an image horizontally only, rather like Photoshop's Motion Blur filter (see Figure 4). Like Trick One, this can be done quickly and independently of the amount of blur, but this trick requires no precomputation.

Figure 4. An example image, and the result of blurring horizontally along its scan-lines.

The motion blurring described here is exactly like the box filter above, except for the fact that the kernel is only one pixel high. That is, each destination pixel is simply the average of a short horizontal line of pixels in the source image.


d(x, y) = c Σ_{i=-k}^{k} s(x + i, y),   where c = 1 / (2k + 1)

Equation 5. The basic equation for a horizontal motion blur with a constant-valued kernel of value c and width 2k + 1, where s is the source image and d the destination.

The heart of the trick to do this quickly lies in keeping a running total of source pixels as you move across the scan lines of the image. Imagine the situation halfway across an image, say at horizontal position 23. We've already computed destination pixel 22, so we must have known the sum of all the source pixels used to compute its color. Destination pixel 23 uses almost exactly the same source pixels, apart from the ends -- we have to lose one source pixel at the left edge, and gain one at the right edge. Mathematically, this is shown in Equation 6.


d(x, y) = d(x - 1, y) + c [ s(x + k, y) - s(x - k - 1, y) ]

Equation 6. Equation 5 rewritten to exploit coherence between adjacent destination pixels: the window sum for column x is the sum for column x - 1, minus the source pixel leaving at the left, plus the one entering at the right.

In other words, the running total can be updated from position 22 to position 23 by subtracting the source pixel at the left edge of the old window and adding the one at the right edge of the new window. This is illustrated in Figure 5, with the code given in Listing 3.

Figure 5. The sum of the first five values shown is 12. To compute the sum of the adjacent five values, one merely needs to subtract the blue number and add the red number to the old sum.
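Listing 3 is not reproduced in this copy; the following is a minimal sketch of the running-sum scan-line blur in Python (function and variable names are my own, and clamping source indices at the image edges is an assumption -- the original listing may treat borders differently):

```python
def motion_blur_row(row, k):
    # Box-blur one scan line with a kernel of width 2*k + 1, in time
    # independent of k, by keeping a running total of the source pixels
    # under the window.
    n = len(row)
    width = 2 * k + 1
    clamp = lambda i: max(0, min(n - 1, i))
    # Window sum for destination pixel 0.
    total = sum(row[clamp(i)] for i in range(-k, k + 1))
    out = []
    for x in range(n):
        out.append(total / width)
        # Slide right: lose one source pixel at the left edge of the
        # window, gain one at the right (Equation 6).
        total -= row[clamp(x - k)]
        total += row[clamp(x + k + 1)]
    return out
```

For example, motion_blur_row([1, 2, 3, 4, 5], 1) returns [4/3, 2.0, 3.0, 4.0, 14/3] under this edge-clamping policy.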

With a little more effort, the same algorithm can be made to motion blur an image in any direction -- not just horizontally -- by processing the pixels in vertical strips (for vertical blurring), or even along "skewed scan lines" for blurring at an arbitrary angle. The vertical routine is particularly useful: if you motion blur an image twice -- once horizontally, then once vertically on the result -- you achieve exactly the same effect as a box filter (see Figure 6). Thus an arbitrary amount of box-filter blur can be achieved, as in Trick One, in constant time regardless of kernel size, and without any precomputation.

Figure 6. (From left to right) The original image, the result of blurring it horizontally, and the result of blurring that vertically. This final result is equivalent to a box filtered version of the original image.
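The two-pass idea can be sketched as follows (a hedged illustration, reusing the clamped running-sum row blur from above; the transpose-with-zip trick is my own convenience, not necessarily how the original code arranges the vertical pass):

```python
def motion_blur_row(row, k):
    # Running-sum scan-line blur; source indices are clamped at the edges.
    n, width = len(row), 2 * k + 1
    clamp = lambda i: max(0, min(n - 1, i))
    total = sum(row[clamp(i)] for i in range(-k, k + 1))
    out = []
    for x in range(n):
        out.append(total / width)
        total += row[clamp(x + k + 1)] - row[clamp(x - k)]
    return out

def box_blur(image, k):
    # Horizontal pass over every row, then a vertical pass done by
    # transposing, row-blurring again, and transposing back. The result
    # matches a (2k+1) x (2k+1) box filter of the original image.
    rows = [motion_blur_row(r, k) for r in image]
    cols = [motion_blur_row(list(c), k) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For instance, box-blurring a 3x3 image holding a single bright pixel of value 9 with k = 1 spreads it into a uniform image of 1s, as the 3x3 box average predicts.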

Taking Stock

Being able to quickly blur images, both static and moving, is all very well, but thus far I have concentrated entirely on software techniques, which ignore the power of 3D graphics cards. The following two tricks allow parts of the scene, or whole scenes rendered on a 3D card, to be blurred or "defocused" rapidly, by borrowing techniques from MIP-mapping, bilinear filtering, and the software blurring tricks given above.

The key to both of the following tricks is the idea that a sharp image scaled down and then blown up with bilinear filtering looks blurry (see Figure 7). Just try walking very close up to a wall or object in a game that uses a 3D card (but doesn't use any detail mapping) and you'll know what I mean. The problem of blurring or defocusing is then reduced to producing a small, good-quality version of the image or object to be blurred, and then "blowing it up" by rendering a sprite or billboard with that small image texture-mapped onto it.
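As a software illustration of that idea (not the hardware path itself -- the function, its parameters, and the block-average shrink are my own choices), one can shrink a grayscale image and re-enlarge it with bilinear sampling:

```python
def blur_by_resampling(image, factor):
    # Approximate a blur by shrinking the image `factor`x (averaging each
    # factor x factor block to get the small, good-quality version), then
    # blowing it back up with bilinear filtering, as texture hardware does.
    h, w = len(image), len(image[0])
    sh, sw = h // factor, w // factor
    small = [[sum(image[y * factor + j][x * factor + i]
                  for j in range(factor) for i in range(factor)) / factor ** 2
              for x in range(sw)] for y in range(sh)]
    out = []
    for y in range(h):
        # Map the destination pixel centre into the small image, clamped.
        fy = max(0.0, min((y + 0.5) / factor - 0.5, sh - 1))
        y0 = int(fy); y1 = min(y0 + 1, sh - 1); ty = fy - y0
        row = []
        for x in range(w):
            fx = max(0.0, min((x + 0.5) / factor - 0.5, sw - 1))
            x0 = int(fx); x1 = min(x0 + 1, sw - 1); tx = fx - x0
            # Standard bilinear weighting of the four nearest texels.
            top = small[y0][x0] * (1 - tx) + small[y0][x1] * tx
            bot = small[y1][x0] * (1 - tx) + small[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out
```

A larger `factor` gives a smaller intermediate image and therefore a stronger apparent blur, at essentially no extra per-pixel cost on hardware that does the bilinear step for free.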

