
Platform-specific rendering differences

Unity runs on many graphics library platforms (OpenGL, Direct3D, Metal) and on game consoles. In some cases, graphics rendering behaves differently between platforms and between shader language semantics. Most of the time the Unity Editor hides these differences, but there are cases where the Editor cannot do this for you. In those cases, you need to eliminate the platform differences yourself. The sections below describe these cases and the actions you need to take when they occur.

Render Texture coordinates

Vertical Texture coordinate conventions differ between two types of platforms: Direct3D-like and OpenGL-like.

  • Direct3D-like: The coordinate is 0 at the top and increases downward. This applies to Direct3D, Metal and consoles.
  • OpenGL-like: The coordinate is 0 at the bottom and increases upward. This applies to OpenGL and OpenGL ES.

This difference tends not to have any effect on your project, other than when rendering into a Render Texture. When rendering into a Texture on a Direct3D-like platform, Unity internally flips rendering upside down. This makes the conventions match between platforms, with the OpenGL-like platform convention the standard.

Image Effects and rendering in UV space are two common cases in Shaders where you need to take action to ensure that the different coordinate conventions do not create problems in your project.

Image Effects

When you use Image Effects with anti-aliasing, the source Texture for the Image Effect is not flipped to match the OpenGL-like platform convention. In this case, Unity renders to the screen with anti-aliasing applied, and then resolves the result into a Render Texture for further processing by the Image Effect.

If the Image Effect is a simple one that processes one Render Texture at a time, Graphics.Blit deals with the inconsistent coordinates. However, if the Image Effect processes two or more Render Textures together, the Render Textures are likely to come out at different vertical orientations on Direct3D-like platforms when you use anti-aliasing. To standardize the coordinates, you need to "flip" the screen Texture upside down in the Vertex Shader so that it matches the OpenGL-like coordinate standard.

The following code sample demonstrates how to do this:

// Flip sampling of the Texture on Direct3D-like platforms:
// if the main Texture texel size has a negative Y, the Texture
// was rendered upside down.

#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
    uv.y = 1 - uv.y;
#endif

Refer to the Edge Detection Scene in Unity’s Shader Replacement sample project (see Unity’s Learn resources) for a more detailed example of this. Edge detection in this project uses both the screen Texture and the Camera’s Depth+Normals texture.

A similar situation occurs with GrabPass. The resulting render Texture might not actually be turned upside down on Direct3D-like (non-OpenGL-like) platforms. If your Shader code samples GrabPass Textures, use the ComputeGrabScreenPos function from the UnityCG include file.
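As a sketch of how this looks in practice, a Shader that samples a GrabPass Texture might compute its sampling coordinates as below. This is an illustrative example, not code from this page: it assumes the Pass is preceded by GrabPass { "_BackgroundTexture" } and that UnityCG.cginc is included.

```hlsl
sampler2D _BackgroundTexture; // assumed GrabPass Texture name

struct v2f
{
    float4 pos : SV_POSITION;
    float4 grabPos : TEXCOORD0;
};

v2f vert(appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // ComputeGrabScreenPos handles the Direct3D-like vertical flip
    // for GrabPass Textures
    o.grabPos = ComputeGrabScreenPos(o.pos);
    return o;
}

half4 frag(v2f i) : SV_Target
{
    // Perspective-correct sampling of the grabbed screen color
    return tex2Dproj(_BackgroundTexture, i.grabPos);
}
```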

Rendering in UV space

When rendering in Texture coordinate (UV) space for special effects or tools, you might need to adjust your Shaders so that rendering is consistent between Direct3D-like and OpenGL-like systems. You also might need to adjust your rendering between rendering into the screen and rendering into a Texture. Adjust these by flipping the Direct3D-like projection upside down so its coordinates match the OpenGL-like projection coordinates.

The built-in variable _ProjectionParams.x contains a +1 or -1 value. -1 indicates that the projection has been flipped upside down to match the OpenGL-like projection coordinates, while +1 indicates that it has not been flipped. You can check this value in your Shaders and perform different actions accordingly. The example below checks whether the projection has been flipped and, if so, flips the UV coordinates to match.

float4 vert(float2 uv : TEXCOORD0) : SV_POSITION
{
    float4 pos;
    pos.xy = uv;
    // This example is rendering with upside-down flipped projection,
    // so flip the vertical UV coordinate too
    if (_ProjectionParams.x < 0)
        pos.y = 1 - pos.y;
    pos.z = 0;
    pos.w = 1;
    return pos;
}

Clip space coordinates

Similar to Texture coordinates, the clip space coordinates (also known as post-projection space coordinates) differ between Direct3D-like and OpenGL-like platforms:

  • Direct3D-like: The clip space depth goes from 0.0 at the near plane to +1.0 at the far plane. This applies to Direct3D, Metal and consoles.

  • OpenGL-like: The clip space depth goes from –1.0 at the near plane to +1.0 at the far plane. This applies to OpenGL and OpenGL ES.

Inside Shader code, you can use the UNITY_NEAR_CLIP_VALUE built-in macro to get the near plane value based on the platform.
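For example, a vertex Shader that needs to place a vertex exactly on the near clip plane can use the macro instead of hard-coding a platform's value. This is a minimal sketch, assuming UnityCG.cginc is included:

```hlsl
float4 vert(float4 vertex : POSITION) : SV_POSITION
{
    float4 pos = UnityObjectToClipPos(vertex);
    // After the perspective divide, pos.z / pos.w equals the
    // platform's near plane value.
    pos.z = UNITY_NEAR_CLIP_VALUE * pos.w;
    return pos;
}
```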

Inside script code, use GL.GetGPUProjectionMatrix to convert from Unity’s coordinate system (which follows OpenGL-like conventions) to Direct3D-like coordinates if that is what the platform expects.

Precision of Shader computations

To avoid precision issues, make sure that you test your Shaders on the target platforms. The GPUs in mobile devices and PCs differ in how they treat floating point types. PC GPUs treat all floating point types (float, half and fixed) as the same - they do all calculations using full 32-bit precision, while many mobile device GPUs do not do this.
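As an illustration of why this matters, the sketch below uses lower precision only where it is usually safe (color values in the 0..1 range) and full float precision for Texture coordinates. On PC GPUs both variants behave identically; on many mobile GPUs the half math runs faster but at reduced precision:

```hlsl
sampler2D _MainTex;

half4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    // Texture coordinates generally need full float precision;
    // colors in the 0..1 range are usually safe as half on mobile GPUs.
    half4 color = tex2D(_MainTex, uv);
    return color * (half)0.5;
}
```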

See documentation on data types and precision for details.

Const declarations in Shaders

Use of const differs between Microsoft's HLSL (see msdn.microsoft.com) and OpenGL's GLSL (see Wikipedia) Shader languages.

  • Microsoft’s HLSL const has much the same meaning as it does in C# and C++ in that the variable declared is read-only within its scope but can be initialized in any way.

  • OpenGL’s GLSL const means that the variable is effectively a compile time constant, and so it must be initialized with compile time constraints (either literal values or calculations on other consts).

It is best to follow OpenGL's GLSL semantics and only declare a variable as const when it is truly invariant. Avoid initializing a const variable with other mutable values (for example, as a local variable in a function). This also works in Microsoft's HLSL, so using const in this way avoids confusing errors on some platforms.
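A sketch of the portable pattern: mark only true compile time constants as const, and leave values computed from runtime inputs non-const. The names here are illustrative:

```hlsl
// Portable: a compile time constant is valid as const in both
// HLSL and GLSL.
static const float kExposure = 1.5;

half4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    // Not portable: "const float boosted = uv.x * kExposure;" would be
    // rejected by GLSL, because the initializer is not a compile time
    // constant. Leave runtime values non-const instead.
    float boosted = uv.x * kExposure;
    return half4(boosted, boosted, boosted, 1);
}
```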

Semantics used by Shaders

To get Shaders working on all platforms, some Shader values should use these semantics:

  • Vertex Shader output (clip space) position: SV_POSITION. Sometimes Shaders use the POSITION semantic instead, but this does not work on Sony PS4 or with tessellation.

  • Fragment Shader output color: SV_Target. Sometimes Shaders use COLOR or COLOR0 instead, but this does not work on Sony PS4.

When rendering Meshes as Points, output the PSIZE semantic from the vertex Shader (for example, set it to 1). Some platforms, such as OpenGL ES or Metal, treat point size as "undefined" when it is not written to from the Shader.
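Putting these recommendations together, a minimal cross-platform vertex/fragment pair for point rendering could look like the sketch below (assuming UnityCG.cginc is included):

```hlsl
float4 vert(float4 vertex : POSITION, out float psize : PSIZE) : SV_POSITION
{
    // Write an explicit point size; some platforms treat an unwritten
    // point size as undefined.
    psize = 1.0;
    // SV_POSITION rather than POSITION for the clip space output
    return UnityObjectToClipPos(vertex);
}

half4 frag() : SV_Target // SV_Target rather than COLOR
{
    return half4(1, 1, 1, 1);
}
```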

See documentation on Shader semantics for more details.

Direct3D Shader compiler syntax

Direct3D platforms use Microsoft’s HLSL Shader compiler. The HLSL compiler is stricter than other compilers about various subtle Shader errors. For example, it doesn’t accept function output values that aren’t initialized properly.

The most common situations that you might run into using this are:

  • A Surface Shader vertex modifier that has an out parameter. Initialize the output like this:
  void vert (inout appdata_full v, out Input o)
  {
      UNITY_INITIALIZE_OUTPUT(Input, o);
      // ...
  }
  • Partially initialized values. For example, a function returns float4 but the code only sets the .xyz values of it. Set all values or change to float3 if you only need three values.

  • Using tex2D in the Vertex Shader. This is not valid, because UV derivatives don’t exist in the vertex Shader. You need to sample an explicit mip level instead; for example, use tex2Dlod (tex, float4(uv,0,0)). You also need to add #pragma target 3.0 as tex2Dlod is a Shader model 3.0 feature.
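For example, a vertex Shader that displaces vertices from a height Texture would sample an explicit mip level, as in this sketch (the Texture name _HeightMap is illustrative, and UnityCG.cginc is assumed to be included):

```hlsl
#pragma target 3.0 // tex2Dlod is a Shader model 3.0 feature

sampler2D _HeightMap; // illustrative Texture name

float4 vert(appdata_base v) : SV_POSITION
{
    // Sample mip level 0 explicitly; plain tex2D is invalid here because
    // UV derivatives do not exist in the vertex Shader.
    float height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
    v.vertex.y += height;
    return UnityObjectToClipPos(v.vertex);
}
```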

DirectX 11 (DX11) HLSL syntax in Shaders

Some parts of the Surface Shader compilation pipeline do not understand DirectX 11-specific HLSL (Microsoft’s shader language) syntax.

If you’re using HLSL features like StructuredBuffers, RWTextures and other non-DirectX 9 syntax, wrap them in a DirectX X11-only preprocessor macro as shown in the example below.

#ifdef SHADER_API_D3D11
// DirectX11-specific code, for example
StructuredBuffer<float4> myColors;
RWTexture2D<float4> myRandomWriteTexture;
#endif

Using Shader framebuffer fetch

Some GPUs (most notably PowerVR-based ones on iOS) allow you to do a form of programmable blending by providing current fragment color as input to the Fragment Shader (see EXT_shader_framebuffer_fetch on khronos.org).

It is possible to write Shaders in Unity that use the framebuffer fetch functionality. To do this, use the inout color argument when you write a Fragment Shader in either HLSL (Microsoft’s shading language - see msdn.microsoft.com) or Cg (the shading language by Nvidia - see nvidia.co.uk).

The example below is in Cg.

CGPROGRAM
// only compile Shader for platforms that can potentially
// do it (currently gles,gles3,metal)
#pragma only_renderers framebufferfetch

void frag (v2f i, inout half4 ocol : SV_Target)
{
    // ocol can be read (current framebuffer color)
    // and written into (will change color to that one)
    // ...
}   
ENDCG

The Depth (Z) direction in Shaders

Depth (Z) direction differs on different Shader platforms.

DirectX 11, DirectX 12, PS4, Xbox One, Metal: Reversed direction

  • The depth (Z) buffer is 1.0 at the near plane, decreasing to 0.0 at the far plane.

  • Clip space range is [near,0] (meaning the near plane distance at the near plane, decreasing to 0.0 at the far plane).

Other platforms: Traditional direction

  • The depth (Z) buffer value is 0.0 at the near plane and 1.0 at the far plane.

  • Clip space depends on the specific platform:
    • On Direct3D-like platforms, the range is [0,far] (meaning 0.0 at the near plane, increasing to the far plane distance at the far plane).
    • On OpenGL-like platforms, the range is [-near,far] (meaning minus the near plane distance at the near plane, increasing to the far plane distance at the far plane).

Note that reversed direction depth (Z), combined with a floating point depth buffer, significantly improves depth buffer precision compared to the traditional direction. The advantages of this are less Z-fighting and better shadows, especially when using small near planes and large far planes.

So, when you use Shaders on platforms with reversed depth (Z):

  • UNITY_REVERSED_Z is defined.
  • The _CameraDepthTexture texture range is 1 (near) to 0 (far).
  • The clip space range is "near" (at the near plane) to 0 (at the far plane).

However, the following macros and functions automatically work out any differences in depth (Z) directions:

  • Linear01Depth(float z)
  • LinearEyeDepth(float z)
  • UNITY_CALC_FOG_FACTOR(coord)
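As a sketch, converting a raw depth sample into linear depth values with these helpers could look like this; the helpers internally account for UNITY_REVERSED_Z, so no manual check is needed:

```hlsl
sampler2D _CameraDepthTexture;

half4 frag(float2 uv : TEXCOORD0) : SV_Target
{
    float rawZ = tex2D(_CameraDepthTexture, uv).r;
    // 0 at the camera, 1 at the far plane, regardless of platform
    float linear01 = Linear01Depth(rawZ);
    // Eye space distance from the camera in world units
    // (unused in this visualization)
    float eyeDepth = LinearEyeDepth(rawZ);
    // Visualize the normalized depth
    return half4(linear01, linear01, linear01, 1);
}
```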

Fetching the depth Buffer

If you are fetching the depth (Z) buffer value manually, you might want to check the buffer direction. The following is an example of this:

float z = tex2D(_CameraDepthTexture, uv).r;
#if defined(UNITY_REVERSED_Z)
    z = 1.0f - z;
#endif

Using clip space

If you are using clip space (Z) depth manually, you might also want to abstract platform differences by using the following macro:

float clipSpaceRange01 = UNITY_Z_0_FAR_FROM_CLIPSPACE(rawClipSpace);

Note: This macro does not alter clip space on OpenGL or OpenGL ES platforms, so on these platforms it returns a value in the range "-near" (at the near plane) to far (at the far plane).

Projection matrices

GL.GetGPUProjectionMatrix() returns a z-reversed matrix on platforms with reversed depth (Z). However, if you are composing projection matrices manually (for example, for custom shadows or depth rendering), you need to reverse the depth (Z) direction yourself in your script, where applicable.

An example of this is below:

var shadowProjection = Matrix4x4.Ortho(...); //shadow camera projection matrix
var shadowViewMat = ...     //shadow camera view matrix
var shadowSpaceMatrix = ... //from clip to shadowMap texture space
    
//'m_shadowCamera.projectionMatrix' is implicitly reversed 
//when the engine calculates device projection matrix from the camera projection
m_shadowCamera.projectionMatrix = shadowProjection; 

//'shadowProjection' is manually flipped before being concatenated to 'm_shadowMatrix'
//because it is seen as any other matrix to a Shader.
if(SystemInfo.usesReversedZBuffer) 
{
    shadowProjection[2, 0] = -shadowProjection[2, 0];
    shadowProjection[2, 1] = -shadowProjection[2, 1];
    shadowProjection[2, 2] = -shadowProjection[2, 2];
    shadowProjection[2, 3] = -shadowProjection[2, 3];
}
m_shadowMatrix = shadowSpaceMatrix * shadowProjection * shadowViewMat;

Depth (Z) bias

Unity automatically deals with depth (Z) bias to ensure it matches Unity’s depth (Z) direction. However, if you are using a native code rendering plugin, you need to negate (reverse) depth (Z) bias in your C or C++ code.

Tools to check for depth (Z) direction
