    Interface ITensorData

    Interface for device-dependent storage of tensor data.

    Inherited Members
    IDisposable.Dispose()
    Namespace: Unity.InferenceEngine
    Assembly: Unity.InferenceEngine.dll
    Syntax
    [MovedFrom("Unity.Sentis")]
    public interface ITensorData : IDisposable
    Remarks

    ITensorData abstracts where tensor elements are physically stored (CPU, GPU compute, or GPU pixel). A Tensor holds an ITensorData instance via dataOnBackend.

    Implementations
    Use CPUTensorData for CPU storage (Burst-compatible, Job system). Use ComputeTensorData for GPU compute buffers. Use TextureTensorData for GPU texture-backed storage.
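
    A minimal sketch of choosing an implementation, assuming an existing tensor (`tensor` is hypothetical) and a project with GPU compute support:

    // Pin the tensor's data to the backend you want to access it from.
    // Each Pin call converts dataOnBackend, moving the data between devices if needed.
    var cpuData = CPUTensorData.Pin(tensor);      // CPU storage, Burst/Job-compatible
    // ...or, for GPU compute-buffer access:
    var gpuData = ComputeTensorData.Pin(tensor);  // GPU compute buffer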

    Data transfer
    Call Upload<T>(NativeArray<T>, int) to copy data into the backing storage. Call Download<T>(int) to copy data out (blocking), or ReadbackRequest() and IsReadbackRequestDone() for async readback. Call CompleteAllPendingOperations() before accessing data when operations may be pending.
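
    The transfer calls above combine into a round trip; a sketch, assuming an existing `tensorData`, a source NativeArray `src`, and an element `count`, polled from inside a coroutine:

    tensorData.Upload(src, count);              // copy CPU data into backing storage
    tensorData.ReadbackRequest();               // schedule a non-blocking readback
    while (!tensorData.IsReadbackRequestDone())
        yield return null;                      // poll once per frame
    var result = tensorData.Download<float>(count);  // completes without stalling
    result.Dispose();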

    Lifetime
    Implementations manage native resources. Call Dispose() when finished. Ownership typically belongs to the tensor; do not dispose separately when the tensor owns the data.
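
    A sketch of the ownership rule (the tensor shape here is arbitrary):

    // The tensor owns its ITensorData: disposing the tensor releases the storage.
    using var tensor = new Tensor<float>(new TensorShape(1, 4));
    ITensorData data = tensor.dataOnBackend;
    // ... read or schedule work against `data` ...
    // Do not call data.Dispose(); the tensor's Dispose (via `using`) releases it.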

    Examples

    Pin a tensor to CPU and access its data.

    var cpuData = CPUTensorData.Pin(inputTensor);
    // Schedule a job that writes into the pinned CPU buffer.
    var job = new MyJob { data = cpuData.array.GetNativeArrayHandle<float>() };
    cpuData.fence = job.Schedule(inputTensor.shape.length, 64);
    // The worker waits on the fence before reading the tensor's data.
    worker.Schedule(inputTensor);
    cpuData.CompleteAllPendingOperations();

    Properties

    backendType

    The device backend (CPU, GPU compute, or GPU pixel) where the tensor data is stored.

    Declaration
    BackendType backendType { get; }
    Property Value
    Type Description
    BackendType
    Remarks

    Returns CPU, GPUCompute, or GPUPixel. Use this to determine whether data is on CPU or GPU before calling Download<T>(int) or scheduling Jobs that access the buffer.
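
    Examples

    Branch on the backend before reading; a sketch assuming `tensorData` and `count` exist.

    if (tensorData.backendType == BackendType.CPU)
    {
        // Data is already host-side; Download copies without a device transfer.
        var values = tensorData.Download<float>(count);
        values.Dispose();
    }
    else // BackendType.GPUCompute or BackendType.GPUPixel
    {
        // Avoid a blocking GPU readback: schedule one and poll later.
        tensorData.ReadbackRequest();
    }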

    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    maxCapacity

    The maximum number of elements the backing storage can hold.

    Declaration
    int maxCapacity { get; }
    Property Value
    Type Description
    int
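
    Examples

    Clamp a read to the stored capacity; a sketch assuming `tensorData` and a hypothetical `requestedCount`.

    // Never ask Download for more elements than the storage holds.
    int count = Math.Min(requestedCount, tensorData.maxCapacity);
    var values = tensorData.Download<float>(count);
    values.Dispose();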
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    Methods

    CompleteAllPendingOperations()

    Blocking call that ensures internal data has been fully written and is available for CPU readback.

    Declaration
    void CompleteAllPendingOperations()
    Remarks

    Call before reading data via Download<T>(int) or IsReadbackRequestDone() when Jobs or GPU operations may still be in progress. For CPU backends, this completes any scheduled Jobs. For GPU backends, this waits for any in-flight transfers.

    Examples

    Complete pending jobs, then download.

    cpuData.fence = job.Schedule(count, 64);
    worker.Schedule(inputTensor);
    cpuData.CompleteAllPendingOperations();
    var data = cpuData.Download<float>(count);
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    DownloadAsync<T>(int)

    Asynchronously downloads a contiguous block of data from internal storage.

    Declaration
    Awaitable<NativeArray<T>> DownloadAsync<T>(int dstCount) where T : unmanaged
    Parameters
    Type Name Description
    int dstCount

    The number of elements to copy.

    Returns
    Type Description
    Awaitable<NativeArray<T>>

    An Awaitable<NativeArray<T>> that resolves to a NativeArray<T> containing the copied data.

    Type Parameters
    Name Description
    T

    The element type of the data (for example, float or int).

    Remarks

    Use this to download data without blocking the main thread. For GPU backends, the readback runs asynchronously. The returned array uses the Temp allocator; dispose of it when finished.

    Examples

    Download without blocking the main thread.

    var data = await tensorData.DownloadAsync<float>(count);
    float value = data[0];
    data.Dispose();
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    Download<T>(int)

    Blocking call that returns a contiguous block of data from internal storage.

    Declaration
    NativeArray<T> Download<T>(int dstCount) where T : unmanaged
    Parameters
    Type Name Description
    int dstCount

    The number of elements to copy.

    Returns
    Type Description
    NativeArray<T>

    A new NativeArray<T> containing the copied data.

    Type Parameters
    Name Description
    T

    The element type of the data (for example, float or int).

    Remarks

    Blocks until the data is available. For GPU backends, this may trigger a synchronous readback that stalls the main thread. The returned array uses the Temp allocator; dispose of it or use it within the same frame. Call CompleteAllPendingOperations() first if Jobs or GPU work may be pending.

    Examples

    Wait for operations, download data, and dispose the array.

    tensorData.CompleteAllPendingOperations();
    var data = tensorData.Download<float>(count);
    float value = data[0];
    data.Dispose();
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    IsReadbackRequestDone()

    Checks whether the asynchronous readback request has completed.

    Declaration
    bool IsReadbackRequestDone()
    Returns
    Type Description
    bool

    true if the readback has completed; otherwise, false.

    Remarks

    Use after calling ReadbackRequest() to poll for completion. When this returns true, the data is available for CPU access. For CPU backends, this completes any pending Job operations and returns true when ready.

    Examples

    Poll for readback completion, then download.

    tensorData.ReadbackRequest();
    // Poll from an async method; Task.Yield returns control between checks.
    while (!tensorData.IsReadbackRequestDone())
        await Task.Yield();
    var data = tensorData.Download<float>(count);
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    ReadbackRequest()

    Schedules asynchronous readback of the internal data.

    Declaration
    void ReadbackRequest()
    Remarks

    For GPU backends, initiates a non-blocking transfer from device to CPU. Poll IsReadbackRequestDone() to check completion, then call Download<T>(int) to obtain the data. For CPU backends, this is a no-op; data is already on CPU.

    Examples

    Schedule async readback.

    tensorData.ReadbackRequest();
    // Continue other work, then check IsReadbackRequestDone()
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    Upload<T>(NativeArray<T>, int)

    Uploads a contiguous block of tensor data to internal storage.

    Declaration
    void Upload<T>(NativeArray<T> data, int srcCount) where T : unmanaged
    Parameters
    Type Name Description
    NativeArray<T> data

    The source data to copy.

    int srcCount

    The number of elements to copy from data.

    Type Parameters
    Name Description
    T

    The element type of the data (for example, float or int).

    Remarks

    Copies srcCount elements from data into the internal buffer. For GPU backends, this transfers data from CPU to the device. Call CompleteAllPendingOperations() before reading the uploaded data if other operations may be pending.

    Examples

    Upload data to backing storage and wait for completion.

    var data = new NativeArray<float>(256, Allocator.Temp);
    // Fill data
    tensorData.Upload(data, 256);
    tensorData.CompleteAllPendingOperations();
    See Also
    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType

    See Also

    Tensor
    CPUTensorData
    ComputeTensorData
    TextureTensorData
    BackendType
    Copyright © 2026 Unity Technologies