Camera Motion Blur

Motion Blur is a common postprocessing effect that simulates the fact that, for most camera systems, light is accumulated over a period of time rather than captured in discrete snapshots. Fast camera or object motion therefore produces blurred images.
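In idealised terms, the recorded image is the time average of the instantaneous image over the exposure interval; the formula below merely illustrates this accumulation and is not part of the effect's implementation:

\[ I_{\text{blurred}}(x) = \frac{1}{T} \int_{t_0}^{t_0+T} I(x, t)\, dt \]

where T is the exposure time and I(x, t) is the instantaneous image at pixel x and time t.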


Example of a standard camera motion blur when the camera is moving sideways. Also, notice how the background areas blur less than the foreground ones, which is a typical side effect of motion blur.

The current Motion Blur implementation only supports blur caused by camera motion, with the option to exclude certain layers (useful for excluding characters and/or dynamic objects, especially when they follow the camera movement). It can, however, be extended to support dynamic objects if an additional script keeps track of each object's model matrix and updates the already generated velocity buffer, as sketched below.
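Such a script does not ship with the effect; the following is only a minimal sketch of the bookkeeping part, assuming a custom velocity pass (not shown) that consumes the previous and current model matrices. The class and property names are hypothetical:

using UnityEngine;

// Hypothetical helper: records this object's model matrix each frame so a
// custom velocity pass could compute per-object motion vectors from the
// difference between the previous and current matrices.
public class ObjectMotionTracker : MonoBehaviour
{
    public Matrix4x4 PreviousModelMatrix { get; private set; }
    public Matrix4x4 CurrentModelMatrix  { get; private set; }

    void OnEnable()
    {
        // Initialise both matrices so the first frame produces zero velocity.
        PreviousModelMatrix = transform.localToWorldMatrix;
        CurrentModelMatrix  = transform.localToWorldMatrix;
    }

    void LateUpdate()
    {
        // Shift this frame's matrix into the "previous" slot for the next frame.
        PreviousModelMatrix = CurrentModelMatrix;
        CurrentModelMatrix  = transform.localToWorldMatrix;
    }
}

A custom velocity pass could then derive each pixel's motion vector from the difference between the two stored matrices and write it into the velocity buffer.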


Example showing Camera Motion Blur with dynamic objects (canisters, bus) being excluded

As with the other image effects, this effect is only available in Unity Pro and you must have the Pro Standard Assets installed before it becomes available.

Properties

Technique: The Motion Blur algorithm to use. Reconstruction filters generally give the best results, at the expense of performance, and are limited to a blur radius of 10 pixels unless a DirectX 11 enabled graphics device is used.
Velocity Scale: Higher values make the image more likely to blur.
Velocity Max: Maximum pixel distance the blur will be clamped to; also the tile size for the Reconstruction filters (see below).
Velocity Min: Minimum pixel distance below which the blur is removed entirely.
Camera Motion specific:
Camera Rotation: Scales the strength of blur caused by camera rotation.
Camera Movement: Scales the strength of blur caused by camera translation.
Local Blur, Reconstruction, ReconstructionDX11 and ReconstructionDisc specific:
Exclude layers: Objects in these layers remain unaffected.
Velocity downsample: A lower resolution velocity buffer can help performance but will heavily degrade blur quality; it may still be a valid option for simple scenes.
Sampler Jitter: Adding noise helps prevent ghosting for the Reconstruction filters.
Max Sample Count: Number of samples used to determine the blur. Strongly affects performance.
Preview (Scale): Previews how the blur might look given artificial camera motion values.
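These values can also be set from a script. The sketch below assumes the effect's component class is called CameraMotionBlur and that its public fields mirror the inspector labels above (velocityScale, maxVelocity, minVelocity, excludeLayers); check the actual Pro Standard Assets script for the exact names:

using UnityEngine;

// Sketch: adjust motion blur settings at runtime, e.g. from a quality menu.
// The field names are assumed to mirror the inspector labels shown above.
public class MotionBlurSettings : MonoBehaviour
{
    public CameraMotionBlur motionBlur;   // assign the effect on the camera

    void Start()
    {
        if (motionBlur == null)
            motionBlur = GetComponent<CameraMotionBlur>();

        motionBlur.velocityScale = 0.8f;                              // Velocity Scale
        motionBlur.maxVelocity   = 10.0f;                             // Velocity Max
        motionBlur.minVelocity   = 0.1f;                              // Velocity Min
        motionBlur.excludeLayers = 1 << LayerMask.NameToLayer("Player"); // Exclude layers
    }
}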

Motion Blur Filters (Technique)

Local Blur simply performs a directional blur along the current pixel's velocity. Being essentially a gather operation, it is suited for scenes with low geometric complexity (e.g. vast terrains), large blur radii, or cases where 'realism' is not the governing factor. One shortcoming is that it cannot produce proper 'overlaps' of blurred objects onto focused background areas. Another is that excluded objects 'smear' onto blurred areas.
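Conceptually, the gather is just an average of colour samples taken along the pixel's own velocity. The CPU-side sketch below only illustrates that loop; the real filter runs as a fragment shader, and the sampleColor delegate stands in for a texture fetch:

using UnityEngine;

public static class LocalBlurSketch
{
    // Gather blur for a single pixel: average samples taken along the pixel's
    // screen-space velocity. Assumes sampleCount >= 2.
    public static Color BlurPixel(Vector2 uv, Vector2 velocity, int sampleCount,
                                  System.Func<Vector2, Color> sampleColor)
    {
        Color sum = Color.clear;
        for (int i = 0; i < sampleCount; i++)
        {
            // Offsets spread evenly from -0.5 to +0.5 of the velocity vector.
            float t = (i / (float)(sampleCount - 1)) - 0.5f;
            sum += sampleColor(uv + velocity * t);
        }
        return sum / sampleCount;
    }
}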


Example using the Local Blur technique while the camera is translating sideways and either the foreground (top) or the background (bottom) is excluded. Notice that both of the above-mentioned artifacts apply, typically degrading image quality. If those are not important in your case, this motion blur technique is a fast and effective option.

Reconstruction filters can produce more realistic blur results. The name Reconstruction derives from the fact that the filter tries to estimate backgrounds even where no information is available in the given color and depth buffers. The results can be of higher quality and the shortcomings of Local Blur's gather filter can be avoided (for example, it can produce proper overlaps).

It is based on the paper A Reconstruction Filter for Plausible Motion Blur (http://graphics.cs.williams.edu/papers/MotionBlurI3D12/). The algorithm divides the image into tiles of the size Velocity Max and uses the maximum velocity in each tile to simulate a blurry pixel scattering onto neighbouring areas. Artifacts can arise if the velocity varies strongly within a tile while the tile size is large.
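One central step of that algorithm is a 'TileMax' reduction: the velocity buffer is divided into tiles of Velocity Max pixels and each tile keeps its largest velocity, which later allows blurred pixels to scatter across tile boundaries. A CPU-side sketch of that reduction, assuming a plain 2D velocity array, might look like this:

using UnityEngine;

public static class TileMaxSketch
{
    // Reduce a full-resolution velocity buffer to one value per tile,
    // keeping the velocity with the largest magnitude in each tile.
    public static Vector2[,] TileMax(Vector2[,] velocity, int tileSize)
    {
        int width  = velocity.GetLength(0);
        int height = velocity.GetLength(1);
        int tilesX = (width  + tileSize - 1) / tileSize;
        int tilesY = (height + tileSize - 1) / tileSize;
        var result = new Vector2[tilesX, tilesY];

        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                int tx = x / tileSize;
                int ty = y / tileSize;
                if (velocity[x, y].sqrMagnitude > result[tx, ty].sqrMagnitude)
                    result[tx, ty] = velocity[x, y];
            }
        }
        return result;
    }
}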

The DirectX11 exclusive filter ReconstructionDX11 allows arbitrary blur distances (aka tile size or Velocity Max) and creates nicer blurs by using more samples.

The ReconstructionDisc variation uses a different sampling pattern to generate a softer look than the standard Reconstruction filter. However, as more samples are taken (it scales up automatically when DirectX 11 is available), the resulting cost can be higher.


Example using the Reconstruction technique while the camera is translating sideways. Notice that this time the mentioned artifacts are less severe, as the Reconstruction filter tries to handle those cases (cubes overlapping properly when the background is excluded (bottom), and excluded cubes not smearing onto the blurred background (top)).

While all of the above filters need a prepass to generate a velocity buffer, the Camera Motion filter works solely from the camera's motion. It generates a global blur direction based on how the camera changes from frame to frame and blurs the screen along that direction (see http://www.valvesoftware.com/publications/2008/GDC2008_PostProcessingInTheOrangeBox.pdf for more details). It is especially suited to smoothing fast camera rotations, for instance in first person shooter games.
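A rough way to picture this: every frame, the change in the camera's orientation is converted into a single screen-space vector and the whole image is blurred along it. The sketch below only derives such a vector from the rotation delta; the scale factor and the shader property name are illustrative, not the effect's actual code:

using UnityEngine;

// Sketch: turn the camera's rotation since the last frame into a single
// screen-space blur direction, similar in spirit to the Camera Motion filter.
public class CameraMotionDirection : MonoBehaviour
{
    public float rotationScale = 1.0f;       // plays the role of 'Camera Rotation'
    private Quaternion previousRotation;

    void Start()
    {
        previousRotation = transform.rotation;
    }

    void LateUpdate()
    {
        // Change of the view direction since last frame, expressed in camera space.
        Vector3 previousForward = previousRotation * Vector3.forward;
        Vector3 delta = transform.forward - previousForward;
        Vector2 blurDirection = new Vector2(
            Vector3.Dot(delta, transform.right),
            Vector3.Dot(delta, transform.up)) * rotationScale;

        // A real implementation would hand this vector to the blur shader;
        // the global property name here is purely hypothetical.
        Shader.SetGlobalVector("_ExampleGlobalBlurDir",
            new Vector4(blurDirection.x, blurDirection.y, 0.0f, 0.0f));

        previousRotation = transform.rotation;
    }
}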


Example using the Camera Motion technique. Notice that the blur is uniform across the entire screen.

Hardware support

This effect requires a graphics card with pixel shaders (3.0) or OpenGL ES 2.0. Additionally, depth texture support is required. PC: NVIDIA cards since 2004 (GeForce 6), AMD cards since 2005 (Radeon X1300), Intel cards since 2006 (GMA X3000); Mobile: OpenGL ES 2.0 with depth texture support; Consoles: Xbox 360, PS3.

All image effects automatically disable themselves when they cannot run on the end-user's graphics card.

Page last updated: 2013-02-06