    Use a command buffer

    You can use a command buffer to create a queue of Inference Engine commands, which can then be run on the graphics processing unit (GPU) at a later time.

    To use a command buffer, follow these steps:

    1. Create or get an empty command buffer.
    2. Create a worker with the BackendType.GPUCompute backend type.
    3. In a Unity event function, for example OnRenderImage, add Inference Engine methods to the command buffer.
    4. Use Graphics.ExecuteCommandBuffer to run the command buffer.

    To add commands to the command buffer, do one of the following:

    • Use an Inference Engine API that takes the command buffer as a parameter, for example TextureConverter.ToTensor.
    • Use the ScheduleWorker method on the command buffer to add a command that runs the model.

    For example, the following code creates and runs a command buffer queue. This queue first converts a render texture to an input tensor, then runs the model, and finally converts the output tensor back to a render texture.

    // Create a worker that uses the GPUCompute backend type
    worker = new Worker(ModelLoader.Load(modelAsset), BackendType.GPUCompute);
    
    //...
    
    // Add a command to the command buffer that runs the model using the input tensor
    commandBuffer.ScheduleWorker(worker, inputs);
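
    The following is a fuller sketch of that queue inside a MonoBehaviour. The modelAsset, sourceTexture, and targetTexture fields, the input tensor shape, and the command buffer overloads of TextureConverter.ToTensor and TextureConverter.RenderToTexture are illustrative assumptions, so check the TextureConverter API reference for the exact signatures in your Inference Engine version.

    using Unity.InferenceEngine; // Unity.Sentis in earlier package versions
    using UnityEngine;
    using UnityEngine.Rendering;

    public class CommandBufferExample : MonoBehaviour
    {
        public ModelAsset modelAsset;
        public RenderTexture sourceTexture;
        public RenderTexture targetTexture;

        Worker worker;
        Tensor<float> inputTensor;

        void Start()
        {
            // Create a worker that uses the GPUCompute backend type
            worker = new Worker(ModelLoader.Load(modelAsset), BackendType.GPUCompute);

            // Allocate an input tensor; the shape must match the model input (assumed here)
            inputTensor = new Tensor<float>(new TensorShape(1, 3, sourceTexture.height, sourceTexture.width));
        }

        void Update()
        {
            var commandBuffer = new CommandBuffer();

            // Add a command that converts the render texture to the input tensor
            // (command buffer overload of TextureConverter.ToTensor - assumption)
            TextureConverter.ToTensor(commandBuffer, sourceTexture, inputTensor);

            // Add a command that runs the model using the input tensor
            commandBuffer.ScheduleWorker(worker, inputTensor);

            // Add a command that converts the output tensor back to a render texture
            // (command buffer overload of TextureConverter.RenderToTexture - assumption)
            TextureConverter.RenderToTexture(commandBuffer, worker.PeekOutput() as Tensor<float>, targetTexture);

            // Run the queued commands on the GPU
            Graphics.ExecuteCommandBuffer(commandBuffer);
            commandBuffer.Release();
        }
    }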
    

    You can also build the CommandBuffer once, update the input tensor every frame, and then execute the CommandBuffer. Because the commands are recorded only once instead of every frame, this reduces the scheduling time by half.

    ModelAsset modelAsset;
    Tensor<float> input0 = new Tensor<float>(new TensorShape(1));
    Model model;
    CommandBuffer cb;
    Worker worker;
    
    void Start()
    {
        // Load the model and create a worker that uses the GPUCompute backend type
        model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUCompute);
    
        // Bind the input tensor to the model input once
        worker.SetInput("input0", input0);
    
        // Record the command buffer once so it can be reused every frame
        cb = new CommandBuffer();
        cb.ScheduleWorker(worker);
    }
    
    void Update()
    {
        // Modify input0, then execute the pre-recorded command buffer
        input0.Upload(new float[] { Time.deltaTime });
        Graphics.ExecuteCommandBuffer(cb);
    }
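
    Workers, tensors, and command buffers hold native resources, so release them when you no longer need them, for example when the component is destroyed. A minimal cleanup sketch for the example above:

    void OnDestroy()
    {
        // Release the recorded command buffer and dispose the tensor and worker memory
        cb?.Release();
        input0?.Dispose();
        worker?.Dispose();
    }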
    

    For more information, refer to the following:

    • Extending the Built-in Render Pipeline with CommandBuffers
    • The Unity Discussions group for Inference Engine, which has a full sample project that uses command buffers.

    Additional resources

    • Get output from a model
    • Read output asynchronously