Class ComputeTensorData
Represents data storage for a Tensor as a compute buffer, for GPUCompute backend.
Namespace: Unity.InferenceEngine
Assembly: Unity.InferenceEngine.dll
Syntax
[MovedFrom("Unity.Sentis")]
public class ComputeTensorData : ITensorData, IDisposable
Constructors
ComputeTensorData(int, bool)
Initializes and returns an instance of ComputeTensorData, and allocates storage for count elements.
Declaration
public ComputeTensorData(int count, bool clearOnInit = false)
Parameters
| Type | Name | Description |
|---|---|---|
| int | count | The number of elements. |
| bool | clearOnInit | Whether to zero the data on allocation. The default value is false. |
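Examples
Allocate storage for 256 elements and release it when finished. The element count is illustrative.
var tensorData = new ComputeTensorData(256, clearOnInit: true);
// Use tensorData, for example as backing storage for a tensor
tensorData.Dispose();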
Properties
backendType
The device backend (CPU, GPU compute, or GPU pixel) where the tensor data is stored.
Declaration
public BackendType backendType { get; }
Property Value
| Type | Description |
|---|---|
| BackendType | The backend type where the tensor data is stored. |
Remarks
Returns CPU, GPUCompute, or GPUPixel. Use this to determine whether data is on CPU or GPU before calling Download<T>(int) or scheduling Jobs that access the buffer.
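Examples
Check where the data is stored before reading it back. The tensorData and count variables are placeholders for an existing ITensorData instance and its element count.
if (tensorData.backendType == BackendType.GPUCompute)
    tensorData.CompleteAllPendingOperations();
var data = tensorData.Download<float>(count);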
buffer
The data storage as a compute buffer.
Declaration
public ComputeBuffer buffer { get; }
Property Value
| Type | Description |
|---|---|
| ComputeBuffer |
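Examples
Bind the underlying ComputeBuffer to a compute shader. The computeShader, kernelIndex, and "Xptr" property name are placeholders for your own shader setup.
var computeData = ComputeTensorData.Pin(tensor);
computeShader.SetBuffer(kernelIndex, "Xptr", computeData.buffer);
computeShader.Dispatch(kernelIndex, 64, 1, 1);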
maxCapacity
The maximum count of the stored data elements.
Declaration
public int maxCapacity { get; }
Property Value
| Type | Description |
|---|---|
| int | The maximum number of elements the storage can hold. |
Methods
CompleteAllPendingOperations()
Blocking call that ensures internal data has been written and is available for CPU readback.
Declaration
public void CompleteAllPendingOperations()
Remarks
Call before reading data via Download<T>(int) or IsReadbackRequestDone() when scheduled work may still be writing to the buffer. This blocks until all pending GPU operations on the tensor data have completed and the data is ready for readback.
Examples
Complete pending GPU work, then download. The worker, inputTensor, and count variables are placeholders.
worker.Schedule(inputTensor);
tensorData.CompleteAllPendingOperations();
var data = tensorData.Download<float>(count);
ConvertToCPUTensorData(int)
Converts the internal storage to a CPUTensorData, copying the data to the CPU.
Declaration
public CPUTensorData ConvertToCPUTensorData(int dstCount)
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements. |
Returns
| Type | Description |
|---|---|
| CPUTensorData | Converted CPUTensorData. |
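Examples
Convert GPU-resident data to CPU storage, then read it directly. The computeData variable is a placeholder for an existing ComputeTensorData.
var cpuData = computeData.ConvertToCPUTensorData(computeData.maxCapacity);
var data = cpuData.Download<float>(cpuData.maxCapacity);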
Dispose()
Disposes of the ComputeTensorData and any associated memory.
Declaration
public void Dispose()
DownloadAsync<T>(int)
Asynchronously downloads a contiguous block of data from internal storage.
Declaration
public Awaitable<NativeArray<T>> DownloadAsync<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| Awaitable<NativeArray<T>> | An Awaitable<T> that resolves to a NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Use this to download data without blocking the main thread. For GPU backends, the readback runs asynchronously. The returned array uses the Temp allocator; dispose of it when finished.
Examples
Download without blocking the main thread.
var data = await tensorData.DownloadAsync<float>(count);
float value = data[0];
data.Dispose();
Download<T>(int)
Blocking call that returns a contiguous block of data from internal storage.
Declaration
public NativeArray<T> Download<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| NativeArray<T> | A new NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Blocks until the data is available. For GPU backends, this may trigger a synchronous readback. The returned array uses the Temp allocator; dispose of it or use it within the same frame. Call CompleteAllPendingOperations() first if GPU work may be pending.
Examples
Wait for operations, download data, and dispose the array.
tensorData.CompleteAllPendingOperations();
var data = tensorData.Download<float>(count);
float value = data[0];
data.Dispose();
~ComputeTensorData()
Finalizes the ComputeTensorData.
Declaration
protected ~ComputeTensorData()
IsReadbackRequestDone()
Checks whether the asynchronous readback request is complete.
Declaration
public bool IsReadbackRequestDone()
Returns
| Type | Description |
|---|---|
| bool | Whether the readback request is complete. |
Remarks
Use after calling ReadbackRequest() to poll for completion. When this returns true, the data is available for CPU access. For CPU backends, this completes any pending Job operations and returns true when ready.
Examples
Poll for readback completion, then download.
tensorData.ReadbackRequest();
while (!tensorData.IsReadbackRequestDone())
await Task.Yield();
var data = tensorData.Download<float>(count);
Pin(Tensor, bool)
Moves the tensor into GPU memory on the GPUCompute backend device.
Declaration
public static ComputeTensorData Pin(Tensor X, bool clearOnInit = false)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The tensor to move to the compute backend. |
| bool | clearOnInit | Whether to zero the data on pinning. The default value is false. |
Returns
| Type | Description |
|---|---|
| ComputeTensorData | The pinned ComputeTensorData. |
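Examples
Pin a tensor to the GPUCompute backend and access its compute buffer. The tensor variable is a placeholder for an existing Tensor.
var computeData = ComputeTensorData.Pin(tensor);
Debug.Log(computeData.buffer.count);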
ReadbackRequest()
Schedules asynchronous readback of the internal data.
Declaration
public void ReadbackRequest()
Remarks
For GPU backends, initiates a non-blocking transfer from device to CPU. Poll IsReadbackRequestDone() to check completion, then call Download<T>(int) to obtain the data. For CPU backends, this is a no-op; data is already on CPU.
Examples
Schedule async readback.
tensorData.ReadbackRequest();
// Continue other work, then check IsReadbackRequestDone()
ToString()
Returns a string that represents the ComputeTensorData.
Declaration
public override string ToString()
Returns
| Type | Description |
|---|---|
| string | The string summary of the ComputeTensorData. |
Overrides
Object.ToString()
Upload<T>(NativeArray<T>, int)
Uploads a contiguous block of tensor data to internal storage.
Declaration
public void Upload<T>(NativeArray<T> data, int srcCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| NativeArray<T> | data | The source data to copy. |
| int | srcCount | The number of elements to copy from data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Copies srcCount elements from data into the internal buffer. For GPU backends, this transfers data from CPU to the device. Call CompleteAllPendingOperations() before reading the uploaded data if other operations may be pending.
Examples
Upload data to backing storage and wait for completion.
var data = new NativeArray<float>(256, Allocator.Temp);
// Fill data
tensorData.Upload(data, 256);
tensorData.CompleteAllPendingOperations();