
The Skylanders SWAP Force Depth-of-Field Shader

Our depth of field effect in the game.
Mike Bukowski, Padraic Hennessy, Brian Osman, and Morgan McGuire (that's me) published in GPU Pro 4 an early version of a depth of field shader that we developed while working together on Skylanders: SWAP Force at Vicarious Visions, an Activision studio.  This post describes the shader in a bit more detail and includes links to updated sample code.

A lens camera can only focus perfectly at one depth.  Everything in front of that depth, in the near field, is out of focus, with each point blurred to a circle of confusion (a.k.a. point spread function) whose radius increases towards the camera.  Everything past that depth, in the far field, is also out of focus, with radius increasing towards infinity.  When the circle of confusion radius is less than about half a pixel, it is hard to notice that points are out of focus, so the depth region around the plane of focus is called the focus (a.k.a. mid) field. Technically, "depth of field" is a distance specifying the extent of the focus field.  In the industry, however, a "depth of field" effect is one that replaces the infinite depth of field of the common computer graphics pinhole camera with the small, finite depth of field of a real camera.
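The signed circle of confusion radius described above can be derived from the thin-lens model. Here is a minimal scalar sketch; the function and parameter names are illustrative, not taken from the book chapter or sample code:

```python
def signed_coc_radius(z, z_focus, focal_length, aperture):
    """Signed circle-of-confusion radius for a thin lens.

    z            -- distance from the lens to the point being imaged
    z_focus      -- distance at which the lens is focused
    focal_length -- lens focal length (same units as the distances)
    aperture     -- aperture diameter

    Negative values fall in the near field (in front of the plane of
    focus); positive values fall in the far field.
    """
    # Thin-lens CoC diameter is A * f * |z - z_f| / (z * (z_f - f));
    # dropping the absolute value keeps the near/far sign, and the
    # factor of 0.5 converts diameter to radius.
    return 0.5 * aperture * focal_length * (z - z_focus) / (z * (z_focus - focal_length))

# A point at the focus distance has zero blur; nearer points get a
# negative radius, farther points a positive one.
print(signed_coc_radius(5.0, 5.0, 0.035, 0.05))       # 0.0
print(signed_coc_radius(2.0, 5.0, 0.035, 0.05) < 0)   # True (near field)
print(signed_coc_radius(50.0, 5.0, 0.035, 0.05) > 0)  # True (far field)
```

In practice the result is scaled into pixel units against the half-pixel threshold mentioned above to decide which points are visibly out of focus.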

The method that we describe is a simple modification to a previous post-processing technique by David Gilham from ShaderX5.  Our technique is simple to implement and fast across a range of hardware, from modern discrete GPUs to mobile and integrated GPUs to Xbox 360 generation consoles. As you can see from the screenshot above, the depth of field effect is key to both framing the action and presenting the soft, cinematic look of this game. Scaling across different hardware while providing this feel for the visuals was essential for the game. "Simple to integrate" means that it requires only a regular pinhole image and depth buffer as input, and that it executes in three 2D passes, two of which are at substantially reduced resolution. The primary improvement over Gilham's original is better handling of occlusion, especially where an in-focus object is seen behind an out-of-focus one.

The diagram below shows how the algorithm works.  The implementation in our sample code computes the signed CoC radius from depth. One can also write those values directly during shading. See the book chapter and our sample code below for full details.
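The overall pass structure can be sketched in a few lines. The following is a simplified, scalar Python illustration operating on 1D scanlines, not the actual GPU shader; the structure (downsample, CoC-driven gather blur, coverage-based composite) follows the description above, but the specific blending is an illustrative assumption:

```python
def depth_of_field(color, signed_coc, downsample=4):
    """Sketch of a three-pass depth-of-field pipeline on a 1D scanline.
    Pass 1 downsamples the packed color + signed-CoC buffer; pass 2
    performs a gather blur at reduced resolution with a kernel radius
    driven by |CoC|; pass 3 composites the blurred result over the
    sharp image, using |CoC| as coverage."""
    # Pass 1: box-filter downsample of color and CoC.
    n = len(color) // downsample
    low_color = [sum(color[i*downsample:(i+1)*downsample]) / downsample
                 for i in range(n)]
    low_coc = [sum(signed_coc[i*downsample:(i+1)*downsample]) / downsample
               for i in range(n)]

    # Pass 2: gather blur at reduced resolution.
    blurred = []
    for i in range(n):
        r = max(1, int(abs(low_coc[i])))
        lo, hi = max(0, i - r), min(n, i + r + 1)
        blurred.append(sum(low_color[lo:hi]) / (hi - lo))

    # Pass 3: composite at full resolution. In-focus pixels (CoC near
    # zero) keep the sharp value; strongly defocused pixels take the
    # blurred value.
    result = []
    for i in range(len(color)):
        coverage = min(1.0, abs(signed_coc[i]))
        b = blurred[min(i // downsample, n - 1)]
        result.append(color[i] * (1.0 - coverage) + b * coverage)
    return result

# An entirely in-focus scanline passes through unchanged.
sharp = [1.0] * 16
print(depth_of_field(sharp, [0.0] * 16) == sharp)  # True
```

The real shader additionally splits near and far fields and applies the occlusion tests discussed above; this sketch only conveys the downsample/blur/composite flow.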
The code linked below contains some changes from the original:

- It performs 4x downsampling in each direction during blurring to speed the gather operations.
- We disabled one of the occlusion tests within the far field to reduce artifacts from this downsampling, at the expense of introducing some glowy halos in the far field. Use 2x downsampling and uncomment that test to avoid this.
- We added support for guard bands to reduce artifacts at borders and to work with other techniques (such as our Alchemy AO '11 and SAO '12 algorithms).
- General small performance improvements appear throughout.
- It has been updated to compile under version 10 of the open source G3D Innovation Engine, although you need not compile it since we include a binary.
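The interaction between the guard band and the 4x downsampling is simple buffer-size arithmetic. A small sketch, where the default guard-band width is an illustrative choice rather than the sample code's actual value:

```python
def blur_buffer_dims(width, height, guard_band=64, downsample=4):
    """Dimensions of the reduced-resolution blur buffers when the
    full-resolution input carries a guard band on every side. The
    guard band gives the blur valid samples to gather beyond the
    viewport edge, reducing border artifacts."""
    padded_w = width + 2 * guard_band
    padded_h = height + 2 * guard_band
    # Round up so a partial tile at the edge still gets a texel.
    return ((padded_w + downsample - 1) // downsample,
            (padded_h + downsample - 1) // downsample)

print(blur_buffer_dims(1280, 720))  # (352, 212)
```

Halving `downsample` to 2 (as suggested above to avoid the far-field halos) quadruples the number of texels the gather passes touch, which is the performance trade-off behind the 4x default.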

Here's a video result showing how the effect looks in motion:

If you have something to say, please leave a message in the comments :)