Camera's Depth Texture

Using Depth Textures

It is possible to create Render Textures where each pixel contains a high-precision Depth value. This is mostly used when some effects need the Scene’s Depth to be available (for example, soft particles, screen space ambient occlusion and translucency would all need the Scene’s Depth). Image Effects often use Depth Textures too.

Pixel values in the Depth Texture range between 0 and 1, with a non-linear distribution. Precision is usually 32 or 16 bits, depending on configuration and platform used. When reading from the Depth Texture, a high precision value in a range between 0 and 1 is returned. If you need to get distance from the Camera, or an otherwise linear 0–1 value, compute that manually using helper macros (see below).
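
For instance, a fragment shader can sample the camera's depth texture and linearize the raw value with the helper macros described in the next section. This is a minimal sketch, assuming UnityCG.cginc is included and that i.uv holds screen-space UV coordinates for the current fragment:

sampler2D_float _CameraDepthTexture;   // set by Unity when the Camera renders a depth texture

half4 frag (v2f i) : SV_Target {
    // Raw, non-linear value exactly as stored in the depth texture
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    // Linear 0-1 depth between the Camera's near and far clip planes
    float linear01 = Linear01Depth(rawDepth);
    // Eye-space distance from the Camera, in world units
    float eyeDepth = LinearEyeDepth(rawDepth);
    // Visualize the linear depth as a grayscale value
    return half4(linear01, linear01, linear01, 1);
}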

Depth Textures are supported on most modern hardware and graphics APIs. Special requirements are listed below:

  • Direct3D 11+ (Windows), OpenGL 3+ (Mac/Linux), OpenGL ES 3.0+ (Android/iOS), Metal (iOS) and consoles like PS4/Xbox One all support depth textures.
  • Direct3D 9 (Windows) requires a graphics driver to support “INTZ” texture format to get depth textures.
  • OpenGL ES 2.0 (iOS/Android) requires GL_OES_depth_texture extension to be present.
  • WebGL requires WEBGL_depth_texture extension.

Depth Texture Shader helper macros

Most of the time, Depth Textures are used to render Depth from the Camera. The UnityCG.cginc include file contains some macros to deal with the above complexity in this case:

  • UNITY_TRANSFER_DEPTH(o): computes eye space depth of the vertex and outputs it in o (which must be a float2). Use it in a vertex program when rendering into a depth texture. On platforms with native depth textures this macro does nothing at all, because the Z buffer value is rendered implicitly.
  • UNITY_OUTPUT_DEPTH(i): returns eye space depth from i (which must be a float2). Use it in a fragment program when rendering into a depth texture. On platforms with native depth textures this macro always returns zero, because the Z buffer value is rendered implicitly.
  • COMPUTE_EYEDEPTH(o): computes eye space depth of the vertex and outputs it in o. Use it in a vertex program when not rendering into a depth texture (see the sketch after this list).
  • DECODE_EYEDEPTH(i)/LinearEyeDepth(i): given a high-precision value from the depth texture i, returns the corresponding eye space depth.
  • Linear01Depth(i): given a high-precision value from the depth texture i, returns the corresponding linear depth in the range between 0 and 1.
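
As a sketch of the macros that apply when not rendering into a depth texture, the shader below fades a transparent surface as it approaches the opaque geometry already stored in the depth texture (a soft-particle style effect). It assumes the Camera renders a depth texture; the shader name, the _FadeDistance property and the screenPos field are illustrative and not part of UnityCG.cginc:

Shader "Example/Depth Fade" {
    Properties {
        _FadeDistance ("Fade Distance", Float) = 1.0
    }
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Pass {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D_float _CameraDepthTexture; // set by Unity when the Camera renders a depth texture
            float _FadeDistance;                 // illustrative fade range in world units

            struct v2f {
                float4 pos : SV_POSITION;
                float4 screenPos : TEXCOORD0;    // projected position, for sampling the depth texture
                float eyeDepth : TEXCOORD1;      // eye-space depth of this vertex
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.pos);
                COMPUTE_EYEDEPTH(o.eyeDepth);    // not rendering into a depth texture, so compute eye depth explicitly
                return o;
            }

            half4 frag (v2f i) : SV_Target {
                // Scene depth behind this fragment, converted to eye space with LinearEyeDepth
                float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
                // Fade out as this fragment gets close to the opaque geometry behind it
                float fade = saturate((sceneDepth - i.eyeDepth) / _FadeDistance);
                return half4(1, 1, 1, fade);
            }
            ENDCG
        }
    }
}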

Note: On DX11/12, PS4, XboxOne and Metal, the Z buffer range is 1–0 and UNITY_REVERSED_Z is defined. On other platforms, the range is 0–1.
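
The linearization helpers above already account for this difference, but code that compares raw depth values directly can branch on the define. A minimal sketch, assuming rawDepth was sampled from the depth texture as shown earlier:

#if defined(UNITY_REVERSED_Z)
    // Z buffer runs from 1 at the near plane to 0 at the far plane
    bool atFarPlane = (rawDepth <= 0.0);   // true where nothing was rendered, e.g. the sky
#else
    // Z buffer runs from 0 at the near plane to 1 at the far plane
    bool atFarPlane = (rawDepth >= 1.0);
#endif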

For example, this shader would render depth of its GameObjects:

Shader "Render Depth" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float2 depth : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Write eye-space depth into o.depth (a no-op on platforms with native depth textures)
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            half4 frag(v2f i) : SV_Target {
                // Output the depth value (returns zero on platforms with native depth textures)
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }
}
