Use DepthTextureMode to output a depth texture or a depth-normals texture from a camera.
DepthTextureMode.Depth builds a screen-sized depth texture.
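As a minimal sketch of reading this texture, the fragment shader below samples the built-in _CameraDepthTexture in an image-effect style pass and visualises linear depth. The shader name is made up for illustration; vert_img, v2f_img, SAMPLE_DEPTH_TEXTURE and Linear01Depth are helpers from Unity's built-in shader include files.

```
Shader "Hidden/ShowLinearDepth"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Unity fills this in when the camera generates a depth texture.
            sampler2D _CameraDepthTexture;

            fixed4 frag (v2f_img i) : SV_Target
            {
                // Raw hardware depth value for this screen pixel.
                float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                // Convert to linear depth in the 0..1 range for display.
                float linearDepth = Linear01Depth(rawDepth);
                return fixed4(linearDepth, linearDepth, linearDepth, 1);
            }
            ENDCG
        }
    }
}
```

A pass like this would typically be run over the whole screen, for example from an image effect that blits with this shader.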
The depth texture is rendered using the same shader passes as shadow caster rendering (the ShadowCaster pass type). By extension, if a shader does not support shadow casting (i.e. there is no shadow caster pass in the shader or any of its fallbacks), objects using that shader will not show up in the depth texture.
Make your shader fall back to another shader that has a shadow caster pass, or, if you are using surface shaders, add an addshadow directive to make them generate a shadow pass too. Note that only “opaque” objects (those that have their materials and shaders set up to use render queue <= 2500) are rendered into the depth texture.
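As an illustration, here is a minimal surface shader sketch using the addshadow directive; the shader name and the plain Lambert/albedo setup are placeholders, not code from the manual.

```
Shader "Example/AddShadowSurface"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // addshadow tells Unity to generate a ShadowCaster pass for this
        // surface shader, so objects using it also render into the
        // camera's depth texture.
        #pragma surface surf Lambert addshadow

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}
```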
DepthTextureMode.DepthNormals builds a screen-sized 32-bit (8 bits per channel) texture, where view space normals are encoded into the R and G channels, and depth is encoded into the B and A channels. Normals are encoded using stereographic projection, and depth is a 16-bit value packed into two 8-bit channels.
The UnityCG.cginc include file has a helper function, DecodeDepthNormal, to decode the depth and normal from the encoded pixel value. The returned depth is in the 0..1 range.
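A minimal sketch of calling DecodeDepthNormal from an image-effect style fragment shader; the shader name is made up, and _CameraDepthNormalsTexture is the global texture Unity fills in when depth-normals generation is enabled.

```
Shader "Hidden/ShowDepthNormals"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Unity fills this in when the camera generates a depth-normals texture.
            sampler2D _CameraDepthNormalsTexture;

            fixed4 frag (v2f_img i) : SV_Target
            {
                // Encoded pixel: view space normal in RG, packed depth in BA.
                float4 enc = tex2D(_CameraDepthNormalsTexture, i.uv);

                float depth;    // decoded depth in the 0..1 range
                float3 normal;  // decoded view space normal
                DecodeDepthNormal(enc, depth, normal);

                // Remap the view space normal to 0..1 for display.
                return fixed4(normal * 0.5 + 0.5, 1);
            }
            ENDCG
        }
    }
}
```

Here the decoded view space normal is remapped for display; the depth output by DecodeDepthNormal is already in the 0..1 range, as noted above.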
For examples of how to use the depth and normals texture, refer to Replacing shaders at runtime or Ambient Occlusion in Post-processing and full-screen effects.