Using Depth Textures
It is possible to create Render Textures where each pixel contains a high precision “depth” value (see RenderTextureFormat.Depth). This is mostly used when some effects need the scene’s depth to be available (for example, soft particles, screen space ambient occlusion and translucency all need the scene’s depth).
Pixel values in the depth texture range from 0 to 1 with a nonlinear distribution. Precision is usually 24 or 16 bits, depending on the depth buffer used. When reading from a depth texture, a high precision value in the 0..1 range is returned. If you need the distance from the camera, or another linear value, you should compute that manually.
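As a minimal sketch of that conversion, assuming UnityCG.cginc is included and rawDepth already holds a value read from a depth texture, the Linear01Depth and LinearEyeDepth helper functions in UnityCG.cginc rebuild a linear value from the camera’s projection parameters:
// rawDepth is the nonlinear 0..1 value sampled from a depth texture
float linear01 = Linear01Depth (rawDepth);  // linear depth: 0 at the camera, 1 at the far plane
float eyeDepth = LinearEyeDepth (rawDepth); // distance along the camera's view direction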
Depth textures in Unity are implemented differently on different platforms.
- On Direct3D 9 (Windows), the depth texture is either a native depth buffer or a single channel 32 bit floating point texture (“R32F” Direct3D format).
  - The graphics card must support either native depth buffers (INTZ format) or floating point render textures for depth textures to work.
  - When rendering into the depth texture, the fragment program must output the value needed.
  - When reading from the depth texture, the red component of the color contains the high precision value.
- On OpenGL (Mac OS X), the depth texture is the native OpenGL depth buffer (see ARB_depth_texture).
  - The graphics card must support OpenGL 1.4 or the ARB_depth_texture extension.
  - The depth texture corresponds to the Z buffer contents that are rendered; it does not use the result from the fragment program.
- OpenGL ES 2.0 (iOS/Android) works much like OpenGL above.
- Direct3D 11 (Windows) has native depth texture capability, just like OpenGL.
- Flash (Stage3D) uses a color-encoded depth texture to emulate the high precision required.
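Apart from the color-encoded Flash case, the read itself looks the same on all of these platforms: sample the texture and the red component contains the high precision value. A minimal fragment program sketch, assuming the camera’s depth texture is available under Unity’s built-in _CameraDepthTexture name and that full-screen UVs are passed in from the vertex program:
sampler2D _CameraDepthTexture; // Unity binds the camera's depth texture to this name

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0; // full-screen UVs, e.g. from an image effect
};

half4 frag (v2f i) : COLOR {
    // the red component holds the high precision 0..1 depth value
    float rawDepth = tex2D (_CameraDepthTexture, i.uv).r;
    return half4(rawDepth, rawDepth, rawDepth, 1); // visualize the nonlinear depth
}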
Using depth texture helper macros
Most of the time depth textures are used to render depth from the camera. The UnityCG.cginc include file contains some macros to deal with the above complexity in this case:
- UNITY_TRANSFER_DEPTH(o): computes eye space depth of the vertex and outputs it in o (which must be a float2). Use it in a vertex program when rendering into a depth texture. On platforms with native depth textures this macro does nothing at all, because the Z buffer value is rendered implicitly.
- UNITY_OUTPUT_DEPTH(i): returns eye space depth from i (which must be a float2). Use it in a fragment program when rendering into a depth texture. On platforms with native depth textures this macro always returns zero, because the Z buffer value is rendered implicitly.
- COMPUTE_EYEDEPTH(o): computes eye space depth of the vertex and outputs it in o. Use it in a vertex program when not rendering into a depth texture.
- DECODE_EYEDEPTH(i): given a high precision value from the depth texture i, returns the corresponding eye space depth. This macro just returns i*FarPlane on Direct3D. On platforms with native depth textures it linearizes and expands the value to match the camera’s range.
For example, this shader would render the depth of its objects:
Shader "Render Depth" {
SubShader {
Tags { "RenderType"="Opaque" }
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
float2 depth : TEXCOORD0;
};
v2f vert (appdata_base v) {
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
UNITY_TRANSFER_DEPTH(o.depth);
return o;
}
half4 frag(v2f i) : COLOR {
UNITY_OUTPUT_DEPTH(i.depth);
}
ENDCG
}
}
}
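The other two macros are used when a shader reads the depth texture rather than renders into it. The sketch below fades a transparent surface as it approaches the opaque geometry already present in the depth texture (the soft particle idea mentioned above). It assumes the camera’s depth texture is available as _CameraDepthTexture, and the fade distance is an arbitrary choice for illustration, not part of the macros themselves:
Shader "Depth Fade Example" {
    Properties {
        _Color ("Main Color", Color) = (1,1,1,1)
    }
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Pass {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthTexture; // the camera's depth texture
            float4 _Color;

            struct v2f {
                float4 pos : SV_POSITION;
                float4 projPos : TEXCOORD0; // screen position of the vertex
                float eyeZ : TEXCOORD1;     // eye space depth of the vertex
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                o.projPos = ComputeScreenPos (o.pos);
                // not rendering into a depth texture, so compute eye depth explicitly
                COMPUTE_EYEDEPTH(o.eyeZ);
                return o;
            }

            half4 frag (v2f i) : COLOR {
                // raw 0..1 depth of the opaque scene behind this fragment
                float2 screenUV = i.projPos.xy / i.projPos.w;
                float rawDepth = tex2D (_CameraDepthTexture, screenUV).r;
                // expand it to eye space so it can be compared with this fragment's depth
                float sceneZ = DECODE_EYEDEPTH(rawDepth);
                // fade out over the last 0.5 eye space units before hitting opaque geometry
                half fade = saturate ((sceneZ - i.eyeZ) * 2.0);
                half4 c = _Color;
                c.a *= fade;
                return c;
            }
            ENDCG
        }
    }
}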