Interface ITensorData
Interface for device-dependent storage of tensor data.
Namespace: Unity.InferenceEngine
Assembly: Unity.InferenceEngine.dll
Syntax
[MovedFrom("Unity.Sentis")]
public interface ITensorData : IDisposable
Remarks
ITensorData abstracts where tensor elements are physically stored (CPU, GPU compute, or GPU pixel). A Tensor holds an ITensorData instance via dataOnBackend.
Implementations
Use CPUTensorData for CPU storage (Burst-compatible, Job system). Use ComputeTensorData for GPU compute buffers. Use TextureTensorData for GPU texture-backed storage.
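For example, a tensor's data can be pinned to a specific implementation. This is a minimal sketch, assuming an existing Tensor<float> named tensor; the static Pin methods shown belong to the named implementation classes:

```csharp
// Pin the tensor's storage to the CPU backend (CPUTensorData),
// for example to read it from a Burst job.
var cpuData = CPUTensorData.Pin(tensor);

// Or pin it to a GPU compute buffer (ComputeTensorData),
// for example to bind it to a compute shader.
var gpuData = ComputeTensorData.Pin(tensor);

// The tensor's dataOnBackend then reports the backend it was pinned to.
Debug.Log(tensor.dataOnBackend.backendType);
```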
Data transfer
Call Upload<T>(NativeArray<T>, int) to copy data into the backing storage. Call Download<T>(int) to copy data out (blocking), DownloadAsync<T>(int) for an awaitable copy, or ReadbackRequest() and IsReadbackRequestDone() for polled asynchronous readback. Call CompleteAllPendingOperations() before accessing data when operations may be pending.
Lifetime
Implementations manage native resources. Call Dispose() when finished. Ownership typically belongs to the tensor; do not dispose separately when the tensor owns the data.
Examples
Pin a tensor to CPU and access its data.
// Pin the tensor's data to the CPU backend so Jobs can access it.
var cpuData = CPUTensorData.Pin(inputTensor);

// Schedule a job that reads the tensor data, and register its handle as a fence.
var job = new MyJob { data = cpuData.array.GetNativeArrayHandle<float>() };
cpuData.fence = job.Schedule(inputTensor.shape.length, 64);

// Run the worker; it waits on the fence before consuming the data.
worker.Schedule(inputTensor);

// Block until all scheduled work on the data has completed.
// The tensor owns the pinned data, so don't dispose cpuData separately;
// disposing inputTensor releases it.
cpuData.CompleteAllPendingOperations();
Properties
backendType
The device backend (CPU, GPU compute, or GPU pixel) where the tensor data is stored.
Declaration
BackendType backendType { get; }
Property Value
| Type | Description |
|---|---|
| BackendType | The backend type where the tensor data is stored. |
Remarks
Returns CPU, GPUCompute, or GPUPixel. Use this to determine whether data is on CPU or GPU before calling Download<T>(int) or scheduling Jobs that access the buffer.
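For example, you might branch on the backend type before reading data. A minimal sketch, assuming an existing ITensorData instance named tensorData and an element count named count:

```csharp
if (tensorData.backendType == BackendType.CPU)
{
    // Data is already on the CPU; a blocking Download is cheap.
    var data = tensorData.Download<float>(count);
    data.Dispose();
}
else
{
    // Data is on the GPU; schedule an async readback to avoid stalling.
    tensorData.ReadbackRequest();
}
```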
maxCapacity
The maximum count of the stored data elements.
Declaration
int maxCapacity { get; }
Property Value
| Type | Description |
|---|---|
| int | The maximum number of elements the backing storage can hold. |
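For example, you might check the capacity before uploading. A hedged sketch, assuming an existing ITensorData named tensorData and a NativeArray<float> named values:

```csharp
// Upload only if the backing storage can hold all the elements.
if (values.Length <= tensorData.maxCapacity)
    tensorData.Upload(values, values.Length);
```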
Methods
CompleteAllPendingOperations()
Blocking call to make sure that internal data is correctly written to and available for CPU read back.
Declaration
void CompleteAllPendingOperations()
Remarks
Call before reading data via Download<T>(int) or IsReadbackRequestDone() when Jobs or GPU operations may still be in progress. For CPU backends, this completes any scheduled Jobs. For GPU backends, this waits for any in-flight transfers.
Examples
Complete pending jobs, then download.
cpuData.fence = job.Schedule(count, 64);
worker.Schedule(inputTensor);
cpuData.CompleteAllPendingOperations();
var data = cpuData.Download<float>(count);
DownloadAsync<T>(int)
Asynchronously downloads a contiguous block of data from internal storage and returns an awaitable result.
Declaration
Awaitable<NativeArray<T>> DownloadAsync<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| Awaitable<NativeArray<T>> | An Awaitable<NativeArray<T>> that resolves to a NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Use this to download data without blocking the main thread. For GPU backends, the readback runs asynchronously. The returned array uses the Temp allocator; dispose of it when finished.
Examples
Download without blocking the main thread.
var data = await tensorData.DownloadAsync<float>(count);
float value = data[0];
data.Dispose();
Download<T>(int)
Blocking call that returns a contiguous block of data from internal storage.
Declaration
NativeArray<T> Download<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| NativeArray<T> | A new NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Blocks until the data is available. For GPU backends, this may trigger a synchronous readback. The returned array uses the Temp allocator; dispose of it or use it within the same frame. Call CompleteAllPendingOperations() first if Jobs or GPU work may be pending.
Examples
Wait for operations, download data, and dispose the array.
tensorData.CompleteAllPendingOperations();
var data = tensorData.Download<float>(count);
float value = data[0];
data.Dispose();
IsReadbackRequestDone()
Checks whether an asynchronous readback request has completed.
Declaration
bool IsReadbackRequestDone()
Returns
| Type | Description |
|---|---|
| bool | Whether the readback request has completed and the data is available for CPU access. |
Remarks
Use after calling ReadbackRequest() to poll for completion. When this returns true, the data is available for CPU access. For CPU backends, this completes any pending Job operations and returns true when ready.
Examples
Poll for readback completion, then download.
tensorData.ReadbackRequest();
while (!tensorData.IsReadbackRequestDone())
await Task.Yield();
var data = tensorData.Download<float>(count);
ReadbackRequest()
Schedules asynchronous readback of the internal data.
Declaration
void ReadbackRequest()
Remarks
For GPU backends, initiates a non-blocking transfer from device to CPU. Poll IsReadbackRequestDone() to check completion, then call Download<T>(int) to obtain the data. For CPU backends, this is a no-op; data is already on CPU.
Examples
Schedule async readback.
tensorData.ReadbackRequest();
// Continue other work, then check IsReadbackRequestDone()
Upload<T>(NativeArray<T>, int)
Uploads a contiguous block of tensor data to internal storage.
Declaration
void Upload<T>(NativeArray<T> data, int srcCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| NativeArray<T> | data | The source data to copy. |
| int | srcCount | The number of elements to copy from data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Copies srcCount elements from data into the internal buffer. For GPU backends, this transfers data from CPU to the device. Call CompleteAllPendingOperations() before reading the uploaded data if other operations may be pending.
Examples
Upload data to backing storage and wait for completion.
var data = new NativeArray<float>(256, Allocator.Temp);
// Fill data
tensorData.Upload(data, 256);
tensorData.CompleteAllPendingOperations();