Legacy Documentation: Version 5.0

Platform Specific Rendering Differences


Unity runs on various platforms and in some cases there are differences in how things behave. Most of the time Unity hides the differences from you, but sometimes you can still bump into them.

Render Texture Coordinates

Vertical texture coordinate conventions differ between Direct3D, OpenGL and OpenGL ES:

  • In Direct3D, the coordinate is zero at the top, and increases downwards.
  • In OpenGL and OpenGL ES, the coordinate is zero at the bottom, and increases upwards.

Most of the time this does not really matter, except when rendering into a Render Texture. In that case, Unity internally flips rendering upside down when rendering into a texture on Direct3D, so that the conventions match between the platforms.

One case where this does not happen is when Image Effects and anti-aliasing are used. In this case, Unity renders to the screen to get anti-aliasing, and then “resolves” the rendering into a RenderTexture for further processing with an Image Effect. The resulting source texture for an image effect is not flipped upside down on Direct3D (unlike all other Render Textures).

If your Image Effect is a simple one (it processes one texture at a time), this does not really matter, because Graphics.Blit takes care of the flip.

However, if you’re processing more than one RenderTexture together in your Image Effect, they will most likely come out in different vertical orientations (only on Direct3D-like platforms, and only when anti-aliasing is used). You need to manually “flip” the screen texture upside down in your vertex shader, like this:

// On D3D when AA is used, the main texture and scene depth texture
// will come out in different vertical orientations.
// So flip sampling of the texture when that is the case (main texture
// texel size will have negative Y).

#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
        uv.y = 1-uv.y;
#endif
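
For context, here is a minimal sketch of a complete vertex shader applying this flip. It assumes the usual Unity image-effect conventions: Unity fills in _MainTex_TexelSize when a float4 with that name is declared, and appdata_img comes from UnityCG.cginc; the v2f struct name is illustrative.

#include "UnityCG.cginc"

sampler2D _MainTex;
float4 _MainTex_TexelSize; // set by Unity: (1/width, 1/height, width, height)

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

v2f vert (appdata_img v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv = v.texcoord.xy;
    // On D3D with AA, the source texture is upside down;
    // a negative texel size Y signals that case.
    #if UNITY_UV_STARTS_AT_TOP
    if (_MainTex_TexelSize.y < 0)
        o.uv.y = 1 - o.uv.y;
    #endif
    return o;
}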

Check out the Edge Detection scene in the Shader Replacement sample project for an example of this. Edge detection there uses both the screen texture and the Camera’s Depth+Normals texture.

Semantics used by shaders

To get shaders working on all platforms, some special shader values should use these semantics:

  • Vertex shader output (clip space) position: SV_POSITION. Sometimes shaders use the POSITION semantic for this, but that will not work on Sony PS4 and will not work when tessellation is used.
  • Fragment shader output color: SV_Target. Sometimes shaders use COLOR or COLOR0 for this, but again, that will not work on PS4.
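
As an illustrative sketch (the struct and function names are arbitrary), a minimal vertex/fragment pair using the recommended semantics looks like this:

struct v2f {
    float4 pos : SV_POSITION; // clip-space position: SV_POSITION, not POSITION
    float2 uv  : TEXCOORD0;
};

v2f vert (float4 vertex : POSITION, float2 uv : TEXCOORD0)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, vertex);
    o.uv = uv;
    return o;
}

fixed4 frag (v2f i) : SV_Target // output color: SV_Target, not COLOR
{
    return fixed4(i.uv, 0, 1);
}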

AlphaTest and programmable shaders

Some platforms, most notably mobile (OpenGL ES & Metal) and Direct3D 11, do not have fixed function alpha testing functionality. When you are using programmable shaders, it’s advisable to use the Cg/HLSL clip() function in the pixel shader instead.
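
As a sketch of that approach (the _Cutoff property name mirrors a common Unity convention but is an assumption here, and v2f is a struct with a float2 uv member as in the sketches above):

sampler2D _MainTex;
fixed _Cutoff; // alpha-cutoff threshold, e.g. 0.5

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    // Discard fragments whose alpha is below the threshold,
    // emulating fixed-function "AlphaTest Greater [_Cutoff]".
    clip(col.a - _Cutoff);
    return col;
}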

Direct3D 11 shader compiler is more picky about syntax

Direct3D platforms use Microsoft’s HLSL shader compiler. The HLSL compiler is more picky than other compilers about various subtle shader errors. For example, it won’t accept function output values that aren’t initialized properly.

The most common places where you would run into this are:

  • A Surface shader vertex modifier that has an “out” parameter. Make sure to initialize the output like this: void vert (inout appdata_full v, out Input o) { UNITY_INITIALIZE_OUTPUT(Input,o); // … }

  • Partially initialized values, e.g. a function returns float4, but the code only sets the .xyz values of it. Make sure to set all values or else change to float3 if you only need three values.

  • Using tex2D in the vertex shader. This is not valid, since UV derivatives don’t exist in the vertex shader; you need to sample an explicit mip level instead, e.g. use tex2Dlod (tex, float4(uv,0,0)). You’ll also need to add #pragma target 3.0, since tex2Dlod is a shader model 3.0 feature. See the combined sketch after this list.
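
Putting the first and last points together, here is a hedged sketch of a Surface Shader that the D3D11 compiler accepts. The _DisplacementTex texture and the displacement itself are illustrative assumptions, not part of any built-in shader.

#pragma surface surf Lambert vertex:vert
#pragma target 3.0 // tex2Dlod requires shader model 3.0

sampler2D _MainTex;
sampler2D _DisplacementTex; // illustrative displacement map

struct Input {
    float2 uv_MainTex;
};

void vert (inout appdata_full v, out Input o)
{
    // Fully initialize the "out" parameter, or D3D11 rejects the shader.
    UNITY_INITIALIZE_OUTPUT(Input, o);
    // Sample an explicit mip level; plain tex2D is invalid in a vertex shader.
    float4 disp = tex2Dlod(_DisplacementTex, float4(v.texcoord.xy, 0, 0));
    v.vertex.xyz += v.normal * disp.r;
}

void surf (Input IN, inout SurfaceOutput o)
{
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
}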

DirectX 11 HLSL syntax and Surface Shaders

Currently some parts of the surface shader compilation pipeline do not understand DX11-specific HLSL syntax. If you’re using HLSL features like StructuredBuffers, RWTextures and other non-DX9 syntax, you have to wrap them in a DX11-only preprocessor macro:

#ifdef SHADER_API_D3D11
// DX11-specific code, e.g.
StructuredBuffer<float4> myColors;
RWTexture2D<float4> myRandomWriteTexture;
#endif

Using Shader Framebuffer Fetch

Some GPUs (most notably PowerVR based ones on iOS) allow doing a form of programmable blending, by providing current fragment color as input to the fragment shader (see EXT_shader_framebuffer_fetch).

It is possible to write shaders in Unity that use the framebuffer fetch functionality. When writing an HLSL/Cg fragment shader, simply declare the color argument as inout, for example:

CGPROGRAM
// only compile shader for platforms that can potentially
// do it (currently gles,gles3,metal)
#pragma only_renderers framebufferfetch

void frag (v2f i, inout half4 ocol : SV_Target)
{
    // ocol can be read (current framebuffer color)
    // and written into (will change color to that one)
    // ...
}   
ENDCG

Using OpenGL Shading Language (GLSL) shaders with OpenGL ES 2.0

OpenGL ES 2.0 provides only limited native support for the OpenGL Shading Language (GLSL). For example, the OpenGL ES 2.0 layer provides no built-in parameters to the shader.

Unity implements the built-in parameters for you in the same way as desktop OpenGL does, except that the following built-in parameters are missing:

  • gl_ClipVertex
  • gl_SecondaryColor
  • gl_DepthRange
  • halfVector property of the gl_LightSourceParameters structure
  • gl_FrontFacing
  • gl_FrontLightModelProduct
  • gl_BackLightModelProduct
  • gl_BackMaterial
  • gl_Point
  • gl_PointSize
  • gl_ClipPlane
  • gl_EyePlaneR, gl_EyePlaneS, gl_EyePlaneT, gl_EyePlaneQ
  • gl_ObjectPlaneR, gl_ObjectPlaneS, gl_ObjectPlaneT, gl_ObjectPlaneQ
  • gl_Fog

iPad2 and MSAA and alpha-blended geometry

There is a bug in the Apple driver that results in artifacts when MSAA is enabled and alpha-blended geometry is drawn with a non-RGBA colorMask. To prevent the artifacts, Unity forces an RGBA colorMask when this configuration is encountered, even though this makes the built-in Glow FX unusable (it needs DST_ALPHA for its intensity value). Also, please update your shaders if you wrote them yourself (see “Render Setup -> ColorMask” in the Pass documentation).
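
For reference, a minimal sketch of the relevant render-state line in a ShaderLab pass (the blend mode shown is just an example):

Pass {
    // Write all four channels; masking out alpha (e.g. "ColorMask RGB")
    // triggers the driver bug described above when MSAA is on.
    ColorMask RGBA
    Blend SrcAlpha OneMinusSrcAlpha
}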
