To optimize loading Adaptive Probe Volume (APV) data at runtime, do either of the following:

- Enable Adaptive Probe Volume streaming.
- Load Adaptive Probe Volume data from AssetBundles or Addressables.

You can't use both methods at the same time.
You can enable Adaptive Probe Volume streaming to use Adaptive Probe Volume lighting in very large worlds. Streaming means you can bake Adaptive Probe Volume data larger than the available CPU or GPU memory, and load it at runtime when it's needed. At runtime, as your camera moves, the Universal Render Pipeline (URP) loads only the Adaptive Probe Volume data from cells within the camera's view frustum.
Unity uses Streaming Assets to store the lighting data. For more information about where Unity stores the files, refer to Streaming Assets.
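If you want to confirm on a target device where the built player reads this data from, you can log the StreamingAssets root. This is a minimal sketch; the subfolder layout Unity uses for APV data inside StreamingAssets is an internal detail, so only the root path is printed.

```csharp
using UnityEngine;

// Minimal sketch: log the StreamingAssets root that a built player reads
// baked APV data from. The subfolder layout Unity uses inside
// StreamingAssets is an internal detail, so only the root is printed.
public class LogStreamingAssetsRoot : MonoBehaviour
{
    void Start()
    {
        Debug.Log($"StreamingAssets root: {Application.streamingAssetsPath}");
    }
}
```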
To enable streaming, follow these steps:

1. Select the active URP Asset, or go to Edit > Project Settings > Quality and open the URP Asset for the current quality level.
2. In the Inspector window for the URP Asset, expand the Lighting section.

You can now enable two types of streaming:

- Enable Disk Streaming to stream Adaptive Probe Volume data from disk to CPU memory.
- Enable GPU Streaming to stream Adaptive Probe Volume data from CPU memory to GPU memory. You must enable Disk Streaming first.
You can configure streaming settings in the same window. Refer to URP Asset for more information.
The smallest section URP loads and uses is a cell, which is the same size as the largest brick in an Adaptive Probe Volume. You can influence the size of cells in an Adaptive Probe Volume by adjusting the density of Light Probes.
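To check how your probe density choices translate into brick and cell sizes, a sketch like the following can log them at runtime. It assumes ProbeReferenceVolume exposes MinBrickSize() and MaxBrickSize() in your SRP Core version; verify both calls against the API reference for your Unity version.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hedged sketch: log the brick size range of the active Probe Reference
// Volume. MinBrickSize() and MaxBrickSize() are assumed to exist on
// ProbeReferenceVolume in your SRP Core version; the largest brick
// matches the cell size that streaming loads and unloads.
public class LogBrickSizes : MonoBehaviour
{
    void Start()
    {
        var apv = ProbeReferenceVolume.instance;
        if (!apv.isInitialized)
        {
            Debug.LogWarning("Probe Reference Volume is not initialized.");
            return;
        }

        Debug.Log($"Smallest brick: {apv.MinBrickSize()} m, " +
                  $"largest brick (cell size): {apv.MaxBrickSize()} m");
    }
}
```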
To view the cells in an Adaptive Probe Volume, or debug streaming, use the Rendering Debugger.
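If you need to inspect cells and streaming on a device rather than in the Editor, you can open the Rendering Debugger's runtime UI from a script. A minimal sketch, assuming the DebugManager API from the SRP core package; the F10 binding is an arbitrary choice for this example.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: toggle the Rendering Debugger's runtime UI so the
// Probe Volumes panel can be inspected on device. The runtime UI is
// available in the Editor and in development builds only.
public class ToggleRenderingDebugger : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.F10)) // arbitrary key for this example
        {
            DebugManager.instance.displayRuntimeUI = !DebugManager.instance.displayRuntimeUI;
        }
    }
}
```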
To load only the APV data you need at runtime, add the baked APV data to an AssetBundle or Addressable.
Follow these steps:

1. Go to Edit > Project Settings > Graphics and open the URP settings.
2. Enable Probe Volume Disable Streaming Assets.
3. Bake your lighting data. Unity now stores the baked APV data as regular assets in your project, instead of in the StreamingAssets folder.
4. Add the scenes that use the baked APV data to an AssetBundle or Addressable.
Enabling Probe Volume Disable Streaming Assets may increase the amount of memory APV uses at runtime. Unity has to keep all the lighting data associated with the current Baking Set in memory, regardless of whether all scenes are loaded.
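Once the data ships inside a bundle, loading the scene at runtime brings its baked APV data with it. The following is a minimal sketch, not the only pattern; the bundle file name apv_scenes and the scene name Forest are placeholders for this example.

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: load a scene, and the baked APV data packed with it,
// from an AssetBundle at runtime. "apv_scenes" and "Forest" are
// placeholder names for this example.
public class LoadApvSceneFromBundle : MonoBehaviour
{
    IEnumerator Start()
    {
        string bundlePath = Path.Combine(Application.streamingAssetsPath, "apv_scenes");

        AssetBundleCreateRequest request = AssetBundle.LoadFromFileAsync(bundlePath);
        yield return request;

        if (request.assetBundle == null)
        {
            Debug.LogError($"Failed to load AssetBundle at {bundlePath}");
            yield break;
        }

        // Loading the scene additively also loads its baked APV data.
        yield return SceneManager.LoadSceneAsync("Forest", LoadSceneMode.Additive);
    }
}
```

If you use Addressables instead, Addressables.LoadSceneAsync performs the equivalent load without manual bundle handling.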