
Writing Image Effects

Image effects are a way of post-processing the rendered image in Unity.

Any script that has an OnRenderImage function can act as a post-processing effect – just add it to a camera object. The script function drives the whole image effect logic.
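For illustration, here is a minimal sketch of such a script (the class name is illustrative); when it is attached to a camera, Unity calls its OnRenderImage after the camera has finished rendering:

    using UnityEngine;

    // Minimal sketch of a post-processing script; the class name is illustrative.
    // Attach it to a Camera and Unity calls OnRenderImage once the camera has rendered.
    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class MinimalImageEffect : MonoBehaviour
    {
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // The simplest possible effect: copy the rendered image unchanged.
            Graphics.Blit(source, destination);
        }
    }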

OnRenderImage function

This function receives two arguments: the source image as a RenderTexture, and the destination it should render into, also a RenderTexture. Typically a post-processing effect uses Shaders that read the source image, do some calculations on it, and render the result into the provided destination (e.g. using Graphics.Blit). It is expected that the image effect fully replaces all pixels of the destination texture.
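A sketch of that typical pattern, assuming an image effect shader with a _Tint colour property has been assigned in the Inspector (the class, field and property names are illustrative):

    using UnityEngine;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class TintImageEffect : MonoBehaviour
    {
        public Shader shader;             // an image effect shader, assigned in the Inspector
        public Color tint = Color.white;

        private Material material;

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if (shader == null)
            {
                // Still fulfil the contract of fully writing the destination.
                Graphics.Blit(source, destination);
                return;
            }

            if (material == null)
                material = new Material(shader);

            // "_Tint" is assumed to be declared by the assigned shader.
            material.SetColor("_Tint", tint);

            // Blit draws a full-screen quad with the material's shader,
            // binding 'source' as _MainTex and 'destination' as the render target.
            Graphics.Blit(source, destination, material);
        }
    }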

When multiple post-processing effects are added to a camera, they are executed in the order they appear in the Inspector, with the topmost effect rendered first. The result of one effect is passed as the “source image” to the next one; internally, Unity creates one or more temporary render textures to hold these intermediate results.

Things to keep in mind:

  • Destination render texture can be null, which means “render to screen” (i.e. the backbuffer). This typically happens on the last image postprocessing effect on a camera.
  • When OnRenderImage finishes, it is expected that the destination render texture is the active render target. That is, generally a Graphics.Blit or manual rendering into destination texture should be the last rendering operation.
  • You generally want to turn off depth buffer writes and tests in your image effect shaders – otherwise you can end up writing unintended values into the destination Z buffer when doing Graphics.Blit. Almost all image effect shader passes should contain Cull Off ZWrite Off ZTest Always states.
  • If you wish to use stencil or depth buffer values from the original scene render, you should explicitly bind the depth buffer from the original scene render as your depth target. This can be done with Graphics.SetRenderTarget, passing the depth buffer of the very first source image effect as the depth buffer to bind (see the sketch after this list).
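A hedged sketch of that last point, assuming this script is the first image effect on the camera so that source still carries the depth/stencil buffer from the scene render; the material (whose shader relies on ZTest or stencil state) is an illustrative assumption:

    using UnityEngine;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class DepthAwareEffect : MonoBehaviour
    {
        public Material material;   // shader that relies on the scene's depth/stencil values

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if (destination == null || material == null)
            {
                // Rendering straight to the screen, or nothing assigned: plain copy.
                Graphics.Blit(source, destination);
                return;
            }

            // Bind the destination colour buffer together with the original depth buffer,
            // so depth/stencil tests in the shader see the scene's values.
            Graphics.SetRenderTarget(destination.colorBuffer, source.depthBuffer);

            // Graphics.Blit would rebind the render target, so draw the
            // full-screen quad manually instead.
            material.SetTexture("_MainTex", source);
            material.SetPass(0);
            GL.PushMatrix();
            GL.LoadOrtho();
            GL.Begin(GL.QUADS);
            GL.TexCoord2(0, 0); GL.Vertex3(0, 0, 0);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 0, 0);
            GL.TexCoord2(1, 1); GL.Vertex3(1, 1, 0);
            GL.TexCoord2(0, 1); GL.Vertex3(0, 1, 0);
            GL.End();
            GL.PopMatrix();
        }
    }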

After opaque image effects

By default, an image effect is executed after the whole scene is rendered. In some cases, however, it is desirable to render an effect after all opaque objects are drawn, but before the skybox and transparent objects are rendered. Depth-based effects such as Depth of Field often use this.

Adding the ImageEffectOpaque attribute to the OnRenderImage function achieves this.
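For example (a minimal sketch; the class name and material field are illustrative):

    using UnityEngine;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class AfterOpaqueEffect : MonoBehaviour
    {
        public Material material;   // assumed to be assigned in the Inspector

        // ImageEffectOpaque makes Unity run this effect after opaque geometry,
        // but before the skybox and transparent objects are rendered.
        [ImageEffectOpaque]
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            Graphics.Blit(source, destination, material);
        }
    }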

Texture coordinates on different platforms

If an image effect samples several screen-related textures at once, you might need to be aware of platform differences in how their texture coordinates are used.

A common scenario is that the effect’s “source” texture and the camera’s depth texture need different vertical coordinates, depending on anti-aliasing settings. See the rendering platform differences page for details.

Using helper image effects code from Standard Assets

The Effects package contains some base and helper classes to build your own image effects on. All the code there lives in the UnityStandardAssets.ImageEffects namespace.
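A hedged sketch of deriving from those helpers, assuming the Effects package is imported and that its PostEffectsBase class exposes CheckResources, CheckSupport, CheckShaderAndCreateMaterial and ReportAutoDisable as in the 5.x package; the class name and shader field are illustrative:

    using UnityEngine;
    using UnityStandardAssets.ImageEffects;

    [ExecuteInEditMode]
    [RequireComponent(typeof(Camera))]
    public class MyHelperBasedEffect : PostEffectsBase
    {
        public Shader shader;        // image effect shader, assigned in the Inspector
        private Material material;

        public override bool CheckResources()
        {
            CheckSupport(false);     // this effect does not need the depth texture
            material = CheckShaderAndCreateMaterial(shader, material);
            if (!isSupported)
                ReportAutoDisable();
            return isSupported;
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if (CheckResources() == false)
            {
                Graphics.Blit(source, destination);
                return;
            }
            Graphics.Blit(source, destination, material);
        }
    }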

Related topics

Image effects reference
Antialiasing