By default, GPUs use 32-bit precision. You can use 16-bit precision instead in GPU calculations, which has the following benefits on mobile platforms:
- Calculations are usually faster.
- The GPU uses less memory, bandwidth and power.
To use 16-bit precision in a shader, use half when you declare a scalar, a vector, or a matrix. For example:
half _Glossiness;
half4 _Color;
half4x4 _Matrix;
To use 16-bit precision in a texture sampler, add half when you declare it. For example:
Texture2D<half4> _MainTex;
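As a minimal sketch of how these declarations fit together, the fragment function below samples the 16-bit texture and tints it, keeping the whole calculation in half precision. The names _MainTex, sampler_MainTex, _Color and FragTint are illustrative placeholders, not names Unity requires:

Texture2D<half4> _MainTex;
SamplerState sampler_MainTex;
half4 _Color;

// Sample the texture and apply the tint entirely in 16-bit precision.
// The texture coordinate stays in float, which is usually safer for addressing.
half4 FragTint(float2 uv : TEXCOORD0) : SV_Target
{
    half4 texColor = _MainTex.Sample(sampler_MainTex, uv);
    return texColor * _Color;
}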
16-bit precision might not be enough for some shader calculations. This might cause visible errors such as color banding or stuttering geometry. To check for errors, run your project on a platform that supports half. If there are errors, use float instead.
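For example, a value that keeps growing, such as an animation driven by elapsed time, can exceed what 16 bits can represent accurately and cause visible stepping. The following is a hedged sketch that reuses the _MainTex and sampler_MainTex declarations above; uv stands for the interpolated texture coordinate, and _Time.y is Unity's built-in elapsed time in seconds:

// Keep the fast-growing time value and the coordinate maths in 32-bit precision,
// and drop to half only for the color work.
float scroll = _Time.y * 0.25;
half4 scrolled = _MainTex.Sample(sampler_MainTex, uv + float2(scroll, 0.0));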
A half variable is stored in buffers with a size and alignment of 32 bits.
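For example, in a constant buffer the members below each occupy a full 32-bit slot even though they are declared with half. The buffer and property names are placeholders:

cbuffer MaterialProperties
{
    half  _Cutoff;     // stored as 32 bits in the buffer
    half4 _BaseColor;  // stored as 4 x 32 bits in the buffer
};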
By default, half has no effect on higher-performance platforms in Unity, for example platforms that use macOS. A half variable becomes a float, and the GPU uses 32-bit values for calculations.
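In other words, on these platforms a declaration such as the following (placeholder name) compiles as if it used float:

half4 _Color;   // compiled as float4; the GPU calculates with 32-bit values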
To use 16-bit precision on more platforms, go to Edit > Project Settings > Player and set Shader Precision Model to Uniform. Unity then treats half in your HLSL code as the following:
- min16float for scalars
- float for texture samplers

When Unity compiles shaders, min16float becomes a platform data type that allows the GPU to use 16-bit precision for calculations if it supports them. For example:
- min16float on Direct3D.
- mediump on OpenGL ES.
- RelaxedPrecision float on Vulkan.
- half on Metal, which has a size and alignment of 2 bytes instead of 4 bytes.

To override the Shader Precision Model setting for textures, add half when you declare a texture sampler. For example Texture2D<half4> _MainTex.
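As an illustration of the Uniform setting, the sketch below notes how Unity treats each declaration, based on the mapping above and assuming vector types map the same way as scalars. The property names are placeholders:

// With Shader Precision Model set to Uniform:
half _Exponent;        // treated as min16float
half3 _SpecularTint;   // treated as min16float3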