Version: Unity 6.1 Alpha (6000.1)

Create a render request in URP

To trigger a camera to render to a render texture outside of the Universal Render Pipeline (URP) rendering loop, use the SubmitRenderRequest API in a C# script.

This example shows how to use render requests and callbacks to monitor the progress of these requests. You can see the full code sample in the Example code section.

Render a single camera from a camera stack

To render a single camera without taking into account the full stack of cameras, use the UniversalRenderPipeline.SingleCameraRequest API. Follow these steps:

  1. Create a C# script with the name SingleCameraRenderRequestExample and add the using statements shown below.

    using System.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;
    
    public class SingleCameraRenderRequestExample : MonoBehaviour
    {
    
    }
    
  2. Create arrays to store the cameras and Render Textures that you want to render from and to.

    public class SingleCameraRenderRequestExample : MonoBehaviour
    {
        public Camera[] cameras;
        public RenderTexture[] renderTextures;
    }
    
  3. In the Start method, add a check to ensure the cameras and renderTextures arrays are valid and contain the correct data before the rest of the script runs.

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }
    }
    
  4. Make a method with the name SendSingleRenderRequests and the return type void within the SingleCameraRenderRequestExample class.

  5. In the SendSingleRenderRequests method, add a for loop that iterates over the cameras array as shown below.

    void SendSingleRenderRequests()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
    
        }
    }
    
  6. Inside the for loop, create a render request of the UniversalRenderPipeline.SingleCameraRequest type in a variable with the name request. Then check if the active render pipeline supports this render request type with RenderPipeline.SupportsRenderRequest.

  7. If the active render pipeline supports the render request, set the destination of the camera output to the matching Render Texture from the renderTextures array. Then submit the render request with RenderPipeline.SubmitRenderRequest.

    void SendSingleRenderRequests()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            UniversalRenderPipeline.SingleCameraRequest request =
                new UniversalRenderPipeline.SingleCameraRequest();
    
            // Check if the active render pipeline supports the render request
            if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
            {
                // Set the destination of the camera output to the matching RenderTexture
                request.destination = renderTextures[i];
                    
                // Render the camera output to the RenderTexture synchronously
                // When this is complete, the RenderTexture in renderTextures[i] contains the scene rendered from the point
                // of view of the Camera in cameras[i]
                RenderPipeline.SubmitRenderRequest(cameras[i], request);
            }
        }
    }
    
  8. Above the SendSingleRenderRequests method, create a method with the name RenderSingleRequestNextFrame and the return type IEnumerator.

  9. Inside RenderSingleRequestNextFrame, wait for the main camera to finish rendering, then call SendSingleRenderRequests. Wait for the end of the frame, then restart RenderSingleRequestNextFrame in a coroutine with StartCoroutine.

    IEnumerator RenderSingleRequestNextFrame()
    {
        // Wait for the main camera to finish rendering
        yield return new WaitForEndOfFrame();
    
        // Enqueue one render request for each camera
        SendSingleRenderRequests();
    
        // Wait for the end of the frame
        yield return new WaitForEndOfFrame();
    
        // Restart the coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
    }
    
  10. In the Start method, call RenderSingleRequestNextFrame in a coroutine with StartCoroutine.

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }
    
        // Start the asynchronous coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
    }
    
  11. In the Editor, create an empty GameObject in your scene and add SingleCameraRenderRequestExample.cs as a component.

  12. In the Inspector window, add the camera you want to render from to the cameras list, and the Render Texture you want to render into to the renderTextures list.

Note: The number of cameras in the cameras list and the number of Render Textures in the renderTextures list must be the same.

Now when you enter Play mode, the cameras you added render to the Render Textures you added.
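
To confirm the result on screen, one option is to display a Render Texture through a UI RawImage component. The following is a minimal sketch, assuming your scene contains a Canvas with a RawImage element; the RenderTextureViewer class name and its fields are illustrative, and you assign both references in the Inspector.

using UnityEngine;
using UnityEngine.UI;

// Illustrative helper: displays a Render Texture in a UI RawImage
public class RenderTextureViewer : MonoBehaviour
{
    // Assign these references in the Inspector
    public RawImage rawImage;
    public RenderTexture renderTexture;

    void Start()
    {
        // Use the Render Texture as the RawImage's texture,
        // so the camera output appears in the UI
        rawImage.texture = renderTexture;
    }
}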

Check when a camera finishes rendering

To check when a camera finishes rendering, use any callback from the RenderPipelineManager API.

The following example uses the RenderPipelineManager.endContextRendering callback.

  1. Add using System.Collections.Generic to the top of the SingleCameraRenderRequestExample.cs file.

  2. At the end of the Start method, subscribe to the endContextRendering callback.

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }
    
        // Start the asynchronous coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
            
        // Call a method called OnEndContextRendering when a camera finishes rendering
        RenderPipelineManager.endContextRendering += OnEndContextRendering;
    }
    
  3. Create a method with the name OnEndContextRendering. Unity runs this method when the endContextRendering callback triggers.

    void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
    {
        // Create a log to show cameras have finished rendering
        Debug.Log("All cameras have finished rendering.");
    }
    
  4. To unsubscribe the OnEndContextRendering method from the endContextRendering callback, add an OnDestroy method to the SingleCameraRenderRequestExample class.

    void OnDestroy()
    {
        // End the subscription to the callback
        RenderPipelineManager.endContextRendering -= OnEndContextRendering;
    }
    

This script now works as before, but also logs a message to the Console window when all cameras have finished rendering.
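
The endContextRendering callback triggers once per frame, after Unity finishes rendering all cameras. If you need a notification for each individual camera instead, RenderPipelineManager also provides the endCameraRendering callback. The following is a minimal sketch of how you might use it in the same class:

void OnEnable()
{
    // Call OnEndCameraRendering each time a single camera finishes rendering
    RenderPipelineManager.endCameraRendering += OnEndCameraRendering;
}

void OnDisable()
{
    // End the subscription to the callback
    RenderPipelineManager.endCameraRendering -= OnEndCameraRendering;
}

void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
{
    // Log the name of the camera that finished rendering
    Debug.Log($"{camera.name} has finished rendering.");
}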

Example code

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SingleCameraRenderRequestExample : MonoBehaviour
{
    public Camera[] cameras;
    public RenderTexture[] renderTextures;

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }

        // Start the asynchronous coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
        
        // Call a method called OnEndContextRendering when a camera finishes rendering
        RenderPipelineManager.endContextRendering += OnEndContextRendering;
    }

    void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
    {
        // Create a log to show cameras have finished rendering
        Debug.Log("All cameras have finished rendering.");
    }

    void OnDestroy()
    {
        // End the subscription to the callback
        RenderPipelineManager.endContextRendering -= OnEndContextRendering;
    }

    IEnumerator RenderSingleRequestNextFrame()
    {
        // Wait for the main camera to finish rendering
        yield return new WaitForEndOfFrame();

        // Enqueue one render request for each camera
        SendSingleRenderRequests();

        // Wait for the end of the frame
        yield return new WaitForEndOfFrame();

        // Restart the coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
    }

    void SendSingleRenderRequests()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            UniversalRenderPipeline.SingleCameraRequest request =
                new UniversalRenderPipeline.SingleCameraRequest();

            // Check if the active render pipeline supports the render request
            if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
            {
                // Set the destination of the camera output to the matching RenderTexture
                request.destination = renderTextures[i];
                
                // Render the camera output to the RenderTexture synchronously
                RenderPipeline.SubmitRenderRequest(cameras[i], request);

                // At this point, the RenderTexture in renderTextures[i] contains the scene rendered from the point
                // of view of the Camera in cameras[i]
            }
        }
    }
}