A generator is a custom sound source that produces audio and injects it directly into a scene, typically through an AudioSource. Generators are a core extension point of the scriptable audio pipeline and let you synthesize or stream real-time audio.
You can attach a generator to an AudioSource in two ways: an asset-based workflow or a component-based workflow.
Both workflows rely on implementing the IAudioGenerator interface, which acts as a factory for creating instances of your generator.
To use the asset-based workflow, create a ScriptableObject that implements IAudioGenerator.
using UnityEngine;

[CreateAssetMenu(fileName = "MyAssetBasedGenerator", menuName = "Audio/Generators/My Asset-Based Generator", order = 1)]
public class MyAssetBasedGenerator : ScriptableObject, IAudioGenerator
{
    public bool isFinite => ...;
    public bool isRealtime => ...;
    public DiscreteTime? length => ...;

    public GeneratorInstance CreateInstance(
        ControlContext context,
        AudioFormat? nestedConfiguration,
        ProcessorInstance.CreationParameters creationParameters)
    {
        return context.AllocateGenerator(new MyRealtime(...), new MyControl(...));
    }
}
After you’ve created the asset, you can assign it to an AudioSource by dragging and dropping it onto the Generator field in the Inspector.
To use the component-based workflow, define a MonoBehaviour that implements IAudioGenerator and then add it to a GameObject in your scene. In the Inspector for your AudioSource, assign this component to the Generator field.
using UnityEngine;

public class MyComponentBasedGenerator : MonoBehaviour, IAudioGenerator
{
    public bool isFinite => ...;
    public bool isRealtime => ...;
    public DiscreteTime? length => ...;

    public GeneratorInstance CreateInstance(
        ControlContext context,
        AudioFormat? nestedConfiguration,
        ProcessorInstance.CreationParameters creationParameters)
    {
        return context.AllocateGenerator(new MyRealtime(...), new MyControl(...));
    }
}
IAudioGenerator interface

An IAudioGenerator includes the following:
- GeneratorInstance.ICapabilities that describe how the generator behaves in advance.
- A CreateInstance method, which the audio system calls to create a GeneratorInstance.

Following is a list of the capabilities:
- isFinite: whether the generator produces a finite amount of audio.
- isRealtime: whether the generator produces its audio in real time.
- length: the total length of the generated audio, or null when isFinite is false.

You must expose matching capabilities in your GeneratorInstance.IRealtime implementation. A mismatch between IAudioGenerator and IRealtime may cause unexpected behavior and will produce warnings in the Console.
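One way to keep the two sides consistent is to define the capability values in a single place and reference them from both implementations. The following is a sketch only: it assumes GeneratorInstance.IRealtime exposes the same three capability members, and the remaining members of both types are elided.

```csharp
// Sketch only: shared constants keep the IAudioGenerator asset and its
// GeneratorInstance.IRealtime implementation in sync. The IRealtime
// member names shown here are assumptions.
static class ToneCapabilities
{
    public const bool IsFinite = false;                 // endless tone
    public const bool IsRealtime = true;                // synthesized live
    public static readonly DiscreteTime? Length = null; // infinite, so no length
}

public class ToneGeneratorAsset : ScriptableObject, IAudioGenerator
{
    public bool isFinite => ToneCapabilities.IsFinite;
    public bool isRealtime => ToneCapabilities.IsRealtime;
    public DiscreteTime? length => ToneCapabilities.Length;
    // CreateInstance omitted; see the examples above.
    ...
}

struct ToneRealtime : GeneratorInstance.IRealtime
{
    public bool isFinite => ToneCapabilities.IsFinite;      // must match the asset
    public bool isRealtime => ToneCapabilities.IsRealtime;  // must match the asset
    public DiscreteTime? length => ToneCapabilities.Length; // must match the asset
    // Processing members omitted.
    ...
}
```

Because both sides read from ToneCapabilities, the values can never drift apart, which avoids the Console warnings described above.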
Generator configuration happens initially during construction, and again whenever the system changes configuration. This typically happens in response to a user action, such as changing the audio output device in the OS system settings or connecting a pair of headphones. It is a two-step negotiation between the host (the owner of the generator) and the generator:
1. The host suggests an AudioFormat for the generator (sample rate, speaker layout, buffer size).
2. The generator responds with the Setup it will actually use.

Some generators (for example, when synthesizing audio) can adapt to many formats, while others (for example, when playing back audio files) may only support the file’s native format. If a generator selects a different format than the host suggested, it is the host’s responsibility to convert between formats when needed.
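As a sketch of the second step, an adaptable generator accepts the suggested format, while a file player falls back to its file's native format. The Configure shape shown here (taking the suggested AudioFormat and returning a Setup) is an assumption based on the names used on this page; the real signature may differ.

```csharp
// Sketch only: the Configure signature below is assumed, not the exact API.
public Setup Configure(AudioFormat suggestedFormat)
{
    // A synthesizer can usually produce audio in whatever format the
    // host suggests, so it simply accepts the suggestion.
    if (isSynthesizer)
        return new Setup(suggestedFormat);

    // A file player may only support the file's native format. When this
    // differs from the suggestion, the host converts as needed.
    return new Setup(fileNativeFormat);
}
```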
When creating child generators via ControlContext.AllocateGenerator, a parent can provide an optional AudioFormat as a suggestion to the child. Children should follow the suggestion when possible to minimize conversions.
Generators are always processed within a mix cycle. For a description of mix cycles, see the mix cycles subsection in the Scriptable processors concepts section.
Generators can be organized into hierarchical trees. In each tree, the root generator sits at the top and is responsible for invoking Configure, Update, and Process on its immediate child generators. Each child, in turn, forwards those calls to its own children. This pattern enables complex structures such as mixers, blend containers, and randomized sequencers.
When a parent generator creates a child using ControlContext.AllocateGenerator, it might pass an optional suggested AudioFormat. The child generator should match the suggestion if possible to minimize format conversions, but can select a different format if required.
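For example, a parent's CreateInstance could forward the nestedConfiguration it received as the suggestion for its own child, so parent and child can end up in the same format. This is a sketch only: the AllocateGenerator overload taking an AudioFormat is assumed from the description above, and the Child*/Mixer* type names are invented for illustration.

```csharp
// Sketch only: forwards the suggested format down the generator tree.
public GeneratorInstance CreateInstance(
    ControlContext context,
    AudioFormat? nestedConfiguration,
    ProcessorInstance.CreationParameters creationParameters)
{
    // Pass our own suggested format on to the child so that, if the
    // child honors it, no conversion is needed between parent and child.
    var child = context.AllocateGenerator(
        new ChildRealtime(), new ChildControl(), nestedConfiguration);

    // The parent instance mixes or forwards the child's output.
    return context.AllocateGenerator(new MixerRealtime(child), new MixerControl());
}
```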
Consider the following:
- Keep Process real-time safe. Avoid allocations, locks, blocking I/O, logging, and throwing exceptions on the audio thread. Use stack-allocated or pooled buffers, and struct-based state for predictable performance.
- Don’t call UnityEngine APIs from within Process. Use pipes to synchronize state between the control and real-time parts, and avoid any multithreaded communication that might yield nondeterministic results.
- Use Unity.Mathematics and organize data for optimal Burst auto-vectorization.
- Make sure the values of isFinite, isRealtime, and length in your IAudioGenerator implementation match the values in GeneratorInstance.IRealtime.
- Selecting a matching AudioFormat minimizes additional resampling or channel mixing.
- Providing a length property enables better waveform previews, scheduling, and progress indicators in the Editor.
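To make the real-time-safety points concrete, here is a sketch of a sine generator's processing function. The Process parameters (an interleaved Span<float> plus a channel count) are assumptions, not the exact API; the point is the body's discipline: no allocations, no locks, no UnityEngine calls, and Unity.Mathematics for the math.

```csharp
using System;
using Unity.Mathematics;

// Sketch only: the Process signature is assumed, not the exact API.
struct SineRealtime
{
    float phase;     // struct-based state, mutated in place
    float phaseStep; // 2 * PI * frequency / sampleRate, set during configuration

    public void Process(Span<float> interleavedOutput, int channelCount)
    {
        // Real-time safe: no allocations, locks, blocking I/O, logging,
        // exceptions, or UnityEngine calls on the audio thread.
        int frames = interleavedOutput.Length / channelCount;
        for (int frame = 0; frame < frames; frame++)
        {
            float sample = math.sin(phase);
            phase += phaseStep;
            if (phase > 2f * math.PI) phase -= 2f * math.PI; // keep phase bounded

            // Write the same sample to every channel of this frame.
            for (int ch = 0; ch < channelCount; ch++)
                interleavedOutput[frame * channelCount + ch] = sample;
        }
    }
}
```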