To trigger a camera to render to a render texture outside of the Universal Render Pipeline (URP) rendering loop, use the SubmitRenderRequest API in a C# script.
This example shows how to use render requests and callbacks to monitor the progress of these requests. You can see the full code sample in the Example code section.
To render a single camera without taking into account the full stack of cameras, use the UniversalRenderPipeline.SingleCameraRequest API. Follow these steps:
Create a C# script with the name SingleCameraRenderRequestExample and add the using statements shown below.
using System.Collections;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SingleCameraRenderRequestExample : MonoBehaviour
{
}
Create arrays to store the cameras and Render Textures that you want to render from and to.
public class SingleCameraRenderRequestExample : MonoBehaviour
{
    public Camera[] cameras;
    public RenderTexture[] renderTextures;
}
In the Start method, add a check to ensure the cameras and renderTextures arrays are valid and contain the correct data before the rest of the script runs.
void Start()
{
    // Make sure all data is valid before you start the component
    if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
    {
        Debug.LogError("Invalid setup");
        return;
    }
}
Make a method with the name SendSingleRenderRequests and the return type void within the SingleCameraRenderRequestExample class.
In the SendSingleRenderRequests method, add a for loop that iterates over the cameras array as shown below.
void SendSingleRenderRequests()
{
    for (int i = 0; i < cameras.Length; i++)
    {
    }
}
Inside the for loop, create a render request of the UniversalRenderPipeline.SingleCameraRequest type in a variable with the name request. Then check if the active render pipeline supports this render request type with RenderPipeline.SupportsRenderRequest.
If the active render pipeline supports the render request, set the destination of the camera output to the matching Render Texture from the renderTextures array. Then submit the render request with RenderPipeline.SubmitRenderRequest.
void SendSingleRenderRequests()
{
    for (int i = 0; i < cameras.Length; i++)
    {
        UniversalRenderPipeline.SingleCameraRequest request =
            new UniversalRenderPipeline.SingleCameraRequest();

        // Check if the active render pipeline supports the render request
        if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
        {
            // Set the destination of the camera output to the matching RenderTexture
            request.destination = renderTextures[i];

            // Render the camera output to the RenderTexture synchronously.
            // When this is complete, the RenderTexture in renderTextures[i] contains the scene
            // rendered from the point of view of the Camera in cameras[i]
            RenderPipeline.SubmitRenderRequest(cameras[i], request);
        }
    }
}
Above the SendSingleRenderRequests method, create a coroutine method with the name RenderSingleRequestNextFrame that returns IEnumerator.
Inside RenderSingleRequestNextFrame, wait for the main camera to finish rendering, then call SendSingleRenderRequests. Wait for the end of the frame before restarting RenderSingleRequestNextFrame in a coroutine with StartCoroutine.
IEnumerator RenderSingleRequestNextFrame()
{
    // Wait for the main camera to finish rendering
    yield return new WaitForEndOfFrame();

    // Enqueue one render request for each camera
    SendSingleRenderRequests();

    // Wait for the end of the frame
    yield return new WaitForEndOfFrame();

    // Restart the coroutine
    StartCoroutine(RenderSingleRequestNextFrame());
}
In the Start method, call RenderSingleRequestNextFrame in a coroutine with StartCoroutine.
void Start()
{
    // Make sure all data is valid before you start the component
    if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
    {
        Debug.LogError("Invalid setup");
        return;
    }

    // Start the asynchronous coroutine
    StartCoroutine(RenderSingleRequestNextFrame());
}
In the Editor, create an empty GameObject in your scene and add SingleCameraRenderRequestExample.cs as a component.
In the Inspector window, add the camera you want to render from to the cameras list, and the Render Texture you want to render into to the renderTextures list.
Note: The number of cameras in the cameras list and the number of Render Textures in the renderTextures list must be the same.
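Because a length mismatch only surfaces as a runtime error, you could optionally catch it at edit time with an OnValidate method. The following is a minimal sketch and an optional addition, not part of the original example:

void OnValidate()
{
    // Editor-only sanity check: warn as soon as the two arrays differ in length.
    // This method is an optional addition; the render requests work without it.
    if (cameras != null && renderTextures != null && cameras.Length != renderTextures.Length)
    {
        Debug.LogWarning("cameras and renderTextures must contain the same number of elements.");
    }
}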
Now when you enter Play mode, the cameras you added render to the Render Textures you added.
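If you want to verify the result on the CPU, one approach is to copy a rendered texture into a Texture2D with ReadPixels. The following is a minimal sketch, not part of the original example, and it assumes the Render Texture uses a color format that ReadPixels can read:

Texture2D ReadBack(RenderTexture source)
{
    // Remember the active render texture so it can be restored afterwards
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = source;

    // Copy the GPU texture into a CPU-readable Texture2D
    Texture2D result = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
    result.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
    result.Apply();

    RenderTexture.active = previous;
    return result;
}

For large textures, AsyncGPUReadback.Request avoids the pipeline stall that a synchronous ReadPixels causes.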
To check when a camera finishes rendering, use any callback from the RenderPipelineManager API.
The following example uses the RenderPipelineManager.endContextRendering callback.
Add using System.Collections.Generic to the top of the SingleCameraRenderRequestExample.cs file.
At the end of the Start method, subscribe to the endContextRendering callback.
void Start()
{
    // Make sure all data is valid before you start the component
    if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
    {
        Debug.LogError("Invalid setup");
        return;
    }

    // Start the asynchronous coroutine
    StartCoroutine(RenderSingleRequestNextFrame());

    // Call a method called OnEndContextRendering when a camera finishes rendering
    RenderPipelineManager.endContextRendering += OnEndContextRendering;
}
Create a method with the name OnEndContextRendering. Unity runs this method when the endContextRendering callback triggers.
void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
{
    // Create a log to show cameras have finished rendering
    Debug.Log("All cameras have finished rendering.");
}
To unsubscribe the OnEndContextRendering method from the endContextRendering callback, add an OnDestroy method to the SingleCameraRenderRequestExample class.
void OnDestroy()
{
    // End the subscription to the callback
    RenderPipelineManager.endContextRendering -= OnEndContextRendering;
}
This script works as before, but also logs a message to the Console window when all cameras have finished rendering.
Example code

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SingleCameraRenderRequestExample : MonoBehaviour
{
    public Camera[] cameras;
    public RenderTexture[] renderTextures;

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }

        // Start the asynchronous coroutine
        StartCoroutine(RenderSingleRequestNextFrame());

        // Call a method called OnEndContextRendering when a camera finishes rendering
        RenderPipelineManager.endContextRendering += OnEndContextRendering;
    }

    void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
    {
        // Create a log to show cameras have finished rendering
        Debug.Log("All cameras have finished rendering.");
    }

    void OnDestroy()
    {
        // End the subscription to the callback
        RenderPipelineManager.endContextRendering -= OnEndContextRendering;
    }

    IEnumerator RenderSingleRequestNextFrame()
    {
        // Wait for the main camera to finish rendering
        yield return new WaitForEndOfFrame();

        // Enqueue one render request for each camera
        SendSingleRenderRequests();

        // Wait for the end of the frame
        yield return new WaitForEndOfFrame();

        // Restart the coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
    }

    void SendSingleRenderRequests()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            UniversalRenderPipeline.SingleCameraRequest request =
                new UniversalRenderPipeline.SingleCameraRequest();

            // Check if the active render pipeline supports the render request
            if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
            {
                // Set the destination of the camera output to the matching RenderTexture
                request.destination = renderTextures[i];

                // Render the camera output to the RenderTexture synchronously
                RenderPipeline.SubmitRenderRequest(cameras[i], request);

                // At this point, the RenderTexture in renderTextures[i] contains the scene rendered
                // from the point of view of the Camera in cameras[i]
            }
        }
    }
}
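The endContextRendering callback fires once per render context, covering all cameras rendered in it. If you need a per-camera signal instead, RenderPipelineManager also exposes the endCameraRendering callback. A minimal sketch of an alternative handler follows; the method name OnEndCameraRendering is illustrative, not part of the original example:

void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
{
    // Log the name of each camera as it finishes rendering
    Debug.Log($"{camera.name} has finished rendering.");
}

Subscribe with RenderPipelineManager.endCameraRendering += OnEndCameraRendering in Start, and unsubscribe in OnDestroy, in the same way as endContextRendering.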