Warning: Unity Simulation is deprecated as of December 2023, and is no longer available.
Setting up Distributed Rendering in a project
Now that we have installed Distributed Rendering, let's dive right into the basics of setting up the project!
To run a project with Distributed Rendering, two key tasks need to be completed: tagging objects to be synchronized, and configuring Distributed Rendering for the simulation's needs. In this guide we will go into the details of both.
Tagging objects in the scene
In order to render each frame of the simulation correctly on the Clients, GameObjects need to be synchronized between the Server and Clients. This is done by tagging the GameObjects and the active cameras in the scene.
GameObjects
The idea behind tagging GameObjects is to synchronize transform data between the Server and the Clients. While in theory every GameObject in the scene can be tagged, this would lead to a lot of unnecessary data being sent to every Client, increasing network traffic and ultimately reducing throughput. Keeping this in mind, it is recommended to only tag active cameras and dynamic GameObjects, since stationary GameObjects will remain in the same position throughout the simulation, making their transform data updates redundant.
For example, a moving robot must be tagged so that its movement is reflected on the Clients, while a fixed shelf or pallet need not be tagged.
To tag an object:
- Select the GameObject in the Hierarchy, and in the Inspector click the Add Component button.
- Search for "Include In Distributed Rendering" and select it.
Alternatively, to tag large object trees, the Tag Object Tree button on the Distributed Rendering Configuration Menu can also be used.
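If a script-driven setup is preferred, the same component can be added programmatically. The snippet below is a minimal sketch: the component class name IncludeInDistributedRendering comes from this page, while the helper class itself is hypothetical.
using UnityEngine;

// Hypothetical helper: tags this GameObject and all of its children for
// synchronization by adding IncludeInDistributedRendering where it is missing.
public class TagDynamicObjectTree : MonoBehaviour
{
    void Awake()
    {
        foreach (var child in GetComponentsInChildren<Transform>(true))
        {
            if (child.GetComponent<IncludeInDistributedRendering>() == null)
            {
                child.gameObject.AddComponent<IncludeInDistributedRendering>();
            }
        }
    }
}
Attaching such a helper to the root of a dynamic object tree has an effect similar to the Tag Object Tree button described below, except that it runs when the scene starts playing rather than in the Editor.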
💡 Note: In Scene Serialization Mode only the cameras to be rendered on the Clients need to be tagged, not the GameObjects.
Configuring Distributed Rendering
Open the Distributed Rendering Configuration Menu by selecting Window -> Distributed Rendering. Opening this window will automatically create a DRConfig Scriptable Object in the Resources folder of the project, which can be configured using the options described below.
Let us take a closer look at the options in this menu:
Distributed Rendering Tools
- Validate Scene Objects: This option goes through every GameObject in the scene with an IncludeInDistributedRendering component and checks their IDs. If any GameObject has a duplicate ID, it assigns a new ID to it, making sure all GameObject IDs are unique.
- Tag Cameras As SingleCameraNodes: Upon validation, tags all cameras on GameObjects as SingleCameraNodes.
- Tag Object Tree: This option tags the parent as well as each child object in the selected object tree by adding the IncludeInDistributedRendering component to all objects that don't already have it.
- Remove Tags From Object Tree: This option removes the IncludeInDistributedRendering component from all objects in the selected object tree that have it.
Global Settings
- Logging Level: Sets the logging level for the Unity Simulation logger.
- Enable Distribution: Enables the Distributed Rendering system. This should always be enabled, but may be disabled for purposes such as performance benchmarking.
- Enable Distributable Methods: Enables Distributable Methods.
- Enable Debug UI: Enables the display of Distributed Rendering debug stats.
- Pause At Startup: Pauses the simulation at startup.
- Editor Operating Mode: Determines the operating mode of the Unity Editor in Play mode. It can be set to None (to run without Distributed Rendering), Server, or Client.
Network Properties:
- Network Port: IP port where the network connection will be established between the Server and Clients. When running on multiple machines, make sure this port is not blocked by a firewall. When running across different networks, NAT punch-through may be required on the routing hardware, since the networking layer relies on UDP for communication.
- Cluster Port: IP communication port for the node auto-discovery protocol. Typically this need not be modified, except to make sure that this port does not overlap with the Network Port.
- Cluster Id: Unique identifier used for node discovery to prevent nodes of incompatible projects from talking to each other. The discovery protocol will only discover nodes on the same network, since it relies on UDP multicasting.
Scene Server Settings
- Sync Mode: The mode to run Distributed Rendering with. This option provides users with four modes: Asynchronous, Camera Affinity, Lock Step Sync, and Scene Serialization. More information about them can be found in the Modes section.
- Max Frames To Render: The maximum number of frames to render before exiting (set to 0 to run endlessly). In a scene with one active camera, this will be the number of frames generated. For a scene with multiple cameras, each camera will generate MaxFramesToRender frames.
- Enable Physics: When enabled, the Distributed Rendering framework will handle the physics simulation step. The physics step will occur for each update at a rate of Time.fixedDeltaTime; modifying this value will change the physics step rate. It is also possible to disable this option and handle the physics update manually if different behavior is required (see the sketch after this list). In Lock Step Sync Mode, this option is ignored.
- Wait For Clients To Connect: Ensures that the Server waits until there is a Client available for each camera. This should always be enabled for Lock Step Sync Mode.
- Expected Number Of Render Nodes: Overrides the number of Clients the Server will wait for in Lock Step Sync mode.
Frame Batching:
- Enable Frame Batching: When enabled, the Server will batch multiple frames to be sent to a Client per update rather than sending individual frames one at a time.
- Enable Load Balancing: Enables load balancing for frame batching; available only in Asynchronous mode.
- Max Batch Size: The maximum number of frames the Server will include in each batch sent to a Client.
💡 Note: If the output of the Clients will need to help drive the simulation by communicating some information back to the Server externally, it may be more beneficial to disable frame batching to maximize simulation accuracy.
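If Enable Physics is turned off, the simulation must step physics itself. The following is a minimal sketch of one way to do this using Unity's standard manual simulation API; it illustrates the idea only and is not part of the Distributed Rendering package.
using UnityEngine;

// Hypothetical manual physics stepper, used only when "Enable Physics" is disabled.
public class ManualPhysicsStepper : MonoBehaviour
{
    void Awake()
    {
        // Stop Unity from stepping physics automatically.
        Physics.autoSimulation = false;
    }

    void FixedUpdate()
    {
        // Advance the physics world by one fixed time step per update.
        Physics.Simulate(Time.fixedDeltaTime);
    }
}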
Client Settings
- Client Connection Timeout: Timeout in milliseconds for a Client's connection to the Server. 0 indicates no timeout.
Modes in Distributed Rendering
As explained above, Distributed Rendering provides users with four modes: Asynchronous, Camera Affinity, Lock Step Sync, and Scene Serialization. Each of these modes has its own characteristics that make it better suited for certain use cases.
Asynchronous Mode
Asynchronous mode is the foundation for every mode of Distributed Rendering. The goal of this mode is to increase the throughput of the simulation without getting bogged down by rendering visuals. In this mode the user may run as many Clients as needed because the number is not dependent on the number of cameras in the scene. In fact, running more Clients than the number of cameras in the scene is encouraged because it increases the overall throughput of frames being processed.
This mode is useful for cases where the number of frames processed is higher priority than the actual rendering.
Camera Affinity Mode
In this mode, each Client renders exactly one camera in the scene. The user must launch as many Clients as there are cameras in the scene to completely capture the simulation.
This mode is useful for cases where viewing the output of every sensor in the scene on separate instances is required.
Lock Step Sync Mode
Similar to Camera Affinity mode, each node can render only one camera in the Scene. In this mode, the Server renders a pre-specified "Main Camera" that directs the Clients, providing inter-node synchronization that makes sure the Server and the Clients are rendering the same frame at any given time step. Because the Server also renders, users can interact with the simulation through the Server, and those changes are reflected accordingly in all the corresponding Client frames.
This mode is useful for cases where the frames processed by individual sensors must be completely in sync with each other, and/or where the user needs to interact with the scenario directly or externally.
💡 Note: In this mode the Server also renders frames like the Clients, hence the user must launch one fewer Client than the number of cameras in the scene. You may exclude the Server from rendering by having no cameras on a node where isServer is checked.
Scene Serialization Mode
This mode removes the need for IncludeInDistributedRendering to be attached to every dynamic GameObject. Instead, it implicitly tracks the changes in every frame without any explicit manual tags. The user must launch as many Clients as the number of cameras in the scene.
This mode is useful for scenarios with a large number of moving objects, where tagging every individual moving object may be cumbersome.
Cameras
The number of Clients is determined using Distributed Rendering Camera Node components. A camera must have either a Camera Node component or the Include In Distributed Rendering component. There are 3 kinds of Camera Node components:
- SingleCameraNode: Represents a Distributed Rendering node with one camera. The Camera field is automatically populated with the Camera this component is attached to, but can be modified.
- MultiCameraNode: Represents a Distributed Rendering node with more than one camera. The cameras in the node can be listed in the Cameras field. Apart from the camera this component may be attached to, every other camera in this list must have an Include In Distributed Rendering component.
- NoCameraNode: Represents a Distributed Rendering node with no cameras. It contains two events, onClientActivateNode and onServerActivateNode, that can be used for additional custom operations.
💡 Note: The isServer checkbox in all three Camera Node components determines if that node will be the Server when Lock Step Sync Mode is chosen. If any other mode is chosen, this option is ignored.
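As a rough illustration, camera nodes can also be attached from a script, mirroring what the Tag Cameras As SingleCameraNodes option does. The sketch below assumes the component class is named SingleCameraNode, as listed above; the helper class itself is hypothetical.
using UnityEngine;

// Hypothetical setup helper: gives every camera in the scene its own
// SingleCameraNode so that each camera maps to one Distributed Rendering node.
public class CameraNodeSetup : MonoBehaviour
{
    void Awake()
    {
        foreach (var cam in FindObjectsOfType<Camera>())
        {
            if (cam.GetComponent<SingleCameraNode>() == null)
            {
                cam.gameObject.AddComponent<SingleCameraNode>();
            }
        }
    }
}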
Other Distributed Rendering Settings: Scene Data and Events
Distributed Rendering also allows users to set scene-specific data and events. These come in handy when triggering specific events, or when setting mode-specific information (like the Main Camera in Lock Step Sync mode). Each time a scene is loaded, these components will be found and their properties will be automatically applied. Only one instance of each is permitted at runtime, so any additional instances will be discarded.
To set Scene Data and Events in your scene:
- In the Scene Hierarchy, right click and select Create Empty
- Rename the new object to DRSystem
- Click on the DRSystem object and in the Inspector panel, select Add Component
- Search for Distributed Rendering Scene Data and select it
- Search for Distributed Rendering Events and select it
Scene Data
The Distributed Rendering Scene Data component encapsulates per-scene data used by Distributed Rendering.
This component contains the following options:
- Prefabs List: List of Prefabs that are used to create instances at runtime while running in Lock Step Sync mode. Prefabs can be added to the list using the + button. This is needed for runtime prefab instantiation.
- Expected Number Of Render Nodes: The number of Clients to be started.
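For example, a prefab that is spawned while the simulation is running would also need an entry in the Prefabs List. The following is a minimal sketch assuming standard Unity instantiation; the class and field names are placeholders, and the replication flow is an assumption based on the description above.
using UnityEngine;

// Hypothetical spawner: the prefab assigned here should also be registered in the
// Scene Data Prefabs List so the instance can be handled in Lock Step Sync mode.
public class RuntimeSpawner : MonoBehaviour
{
    public GameObject boxPrefab;

    public void SpawnBox(Vector3 position)
    {
        // Standard instantiation; the prefab's presence in the Prefabs List is what
        // allows Distributed Rendering to account for the new instance (assumption).
        Instantiate(boxPrefab, position, Quaternion.identity);
    }
}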
Events
The Distributed Rendering Events component contains all Distributed Rendering events. These events can be triggered at specific points in time during the rendering process, for example when the Server connects to a Client, or when the simulation ends on a Client.
There are three types of events: Global, Server, and Client events. These events can be used for a variety of purposes, one being to capture frames as explained in this section.
Hooking methods automatically into Distributed Rendering Events
Methods can be hooked into events using the Distributed Rendering Events component in the Unity Editor. To instead hook these events in code, add the DistributedRendering attribute to the method to be hooked to an event. This attribute, along with the corresponding event argument, ensures that the method is automatically hooked into the corresponding Distributed Rendering event without using the Editor.
The following example shows how this attribute can be used with the On Client Frame Begin event.
[DistributedRendering(EventType.OnClientFrameBegin)]
public void YourEventMethod(Camera yourCamera, int yourFrame)
{
...
}
The [DistributedRendering(EventType.OnClientFrameBegin)] attribute before the method ensures that this method is directly hooked into the On Client Frame Begin event.
💡 Note: To use this attribute, the method must be defined in a MonoBehaviour class.
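Putting the pieces together, a handler in context might look like the sketch below. The class name, method body, and log message are placeholders; only the attribute and the MonoBehaviour requirement come from this page.
using UnityEngine;

// Hypothetical handler class; the attribute hooks the method into the
// On Client Frame Begin event, and the class derives from MonoBehaviour as required.
public class FrameBeginHandler : MonoBehaviour
{
    [DistributedRendering(EventType.OnClientFrameBegin)]
    public void OnClientFrameBegin(Camera renderCamera, int frameIndex)
    {
        // Runs on a Client at the start of each rendered frame for the given camera.
        Debug.Log("Client frame " + frameIndex + " starting for camera " + renderCamera.name);
    }
}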