Class CPUTensorData
Represents Burst-specific internal data storage for a Tensor.
Inherited Members
Namespace: Unity.InferenceEngine
Assembly: Unity.InferenceEngine.dll
Syntax
[MovedFrom("Unity.Sentis")]
public class CPUTensorData : ITensorData, IDisposable
Remarks
CPUTensorData stores tensor elements in native memory on the CPU, compatible with the Burst compiler and Unity's Job system. Use it when you need direct access to tensor data for custom CPU operations, or when running inference on the CPU backend.
Access the underlying buffer via array, which returns a NativeTensorArray. Use Pin(Tensor, bool) to ensure a tensor's data resides on CPU before scheduling jobs that read or write it. The fence and reuse properties provide Job system dependency handles for synchronization.
Call Dispose() when finished to release native memory. Dispose must be called from the main thread; do not call from a finalizer.
The Sentis package provides a complete sample that uses Burst to write data to a tensor in the Job system. To learn more, refer to Samples.
Examples
// Pin a tensor to CPU and write data via a Burst job.
var cpuData = CPUTensorData.Pin(inputTensor);
var job = new MyJob { data = cpuData.array.GetNativeArrayHandle<float>() };
cpuData.fence = job.Schedule(inputTensor.shape.length, 64);
worker.Schedule(inputTensor);
// Define the job struct (used in examples below)
[BurstCompile]
struct MyJob : IJobParallelFor
{
[Unity.Collections.LowLevel.Unsafe.NativeDisableUnsafePtrRestriction]
public NativeArray<float> data;
public void Execute(int i)
{
data[i] = 3.14f;
}
}
Constructors
CPUTensorData(int, bool)
Allocates a new CPUTensorData with storage for the specified number of elements.
Declaration
public CPUTensorData(int count, bool clearOnInit = false)
Parameters
| Type | Name | Description |
|---|---|---|
| int | count | The number of elements to allocate. |
| bool | clearOnInit | Whether to zero the data on allocation. The default value is false. |
Remarks
Use this constructor when creating tensor data from scratch. Set clearOnInit to true to zero-initialize the buffer. For tensors backed by existing data, use the CPUTensorData(NativeTensorArray) overload.
Examples
var data = new CPUTensorData(1024, clearOnInit: true);
// data.array contains 1024 zero-initialized floats
CPUTensorData(NativeTensorArray)
Wraps an existing NativeTensorArray as CPUTensorData.
Declaration
public CPUTensorData(NativeTensorArray data)
Parameters
| Type | Name | Description |
|---|---|---|
| NativeTensorArray | data | The tensor data to wrap, or null to create an empty instance. |
Remarks
Use this constructor when you have pre-allocated tensor data. The CPUTensorData takes ownership of the array. Do not dispose of it separately. Pass null to create an empty instance.
Examples
var nativeArray = new NativeTensorArray(256);
var cpuData = new CPUTensorData(nativeArray);
Properties
array
The underlying NativeTensorArray containing the tensor data.
Declaration
public NativeTensorArray array { get; }
Property Value
| Type | Description |
|---|---|
| NativeTensorArray |
Remarks
Use GetNativeArrayHandle<T>() to obtain a NativeArray<T> for use with Unity Jobs. The buffer is shared with the tensor. Do not dispose of it separately.
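Examples
A minimal sketch of reading elements through the array property, assuming inputTensor is an existing float tensor with no pending operations:

```csharp
// Sketch: read tensor elements through the shared buffer.
var cpuData = CPUTensorData.Pin(inputTensor);
cpuData.CompleteAllPendingOperations();
// The NativeArray view aliases the tensor's buffer; do not dispose of it separately.
NativeArray<float> view = cpuData.array.GetNativeArrayHandle<float>();
float first = view[0];
```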
backendType
The device backend (CPU, GPU compute, or GPU pixel) where the tensor data is stored.
Declaration
public BackendType backendType { get; }
Property Value
| Type | Description |
|---|---|
| BackendType |
Remarks
Returns CPU, GPUCompute, or GPUPixel. Use this to determine whether data is on CPU or GPU before calling Download<T>(int) or scheduling Jobs that access the buffer.
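Examples
A hedged sketch of checking the backend before pinning; it assumes tensor is an existing Tensor and that its current ITensorData is reachable via dataOnBackend:

```csharp
// Sketch: pin only when the data is not already on the CPU.
if (tensor.dataOnBackend.backendType != BackendType.CPU)
{
    // Pin copies or converts GPU data to CPU storage.
    CPUTensorData.Pin(tensor);
}
```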
fence
A read fence job handle. You can use fence as a dependsOn argument when you schedule a job that reads data. The job will start when the tensor data is ready for read access.
Declaration
public JobHandle fence { get; set; }
Property Value
| Type | Description |
|---|---|
| JobHandle |
maxCapacity
The maximum count of the stored data elements.
Declaration
public int maxCapacity { get; }
Property Value
| Type | Description |
|---|---|
| int |
rawPtr
The raw memory pointer for the resource.
Declaration
public void* rawPtr { get; }
Property Value
| Type | Description |
|---|---|
| void* |
reuse
A write fence job handle. You can use reuse as a dependsOn argument when you schedule a job that writes data. The job will start when the tensor data is ready for write access.
Declaration
public JobHandle reuse { get; set; }
Property Value
| Type | Description |
|---|---|
| JobHandle |
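Examples
A sketch of using fence and reuse together; MyReadJob is a hypothetical read-only IJobParallelFor, and inputTensor is assumed to exist:

```csharp
// Schedule a job that only reads the tensor data.
var cpuData = CPUTensorData.Pin(inputTensor);
var readJob = new MyReadJob { data = cpuData.array.GetNativeArrayHandle<float>() };
// Depend on fence so the job starts once the data is ready for read access.
var handle = readJob.Schedule(inputTensor.shape.length, 64, cpuData.fence);
// Record the handle on reuse so later writers wait for this read to finish.
cpuData.reuse = handle;
```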
Methods
CompleteAllPendingOperations()
Blocking call that ensures internal data is fully written and available for CPU readback.
Declaration
public void CompleteAllPendingOperations()
Remarks
Call before reading data via Download<T>(int) or IsReadbackRequestDone() when Jobs or GPU operations may still be in progress. For CPU backends, this completes any scheduled Jobs. For GPU backends, this waits for any in-flight transfers.
Examples
Complete pending jobs, then download.
cpuData.fence = job.Schedule(count, 64);
worker.Schedule(inputTensor);
cpuData.CompleteAllPendingOperations();
var data = cpuData.Download<float>(count);
ConvertToComputeTensorData(int)
Converts the tensor data to a ComputeTensorData.
Declaration
public ComputeTensorData ConvertToComputeTensorData(int count)
Parameters
| Type | Name | Description |
|---|---|---|
| int | count | The number of elements. |
Returns
| Type | Description |
|---|---|
| ComputeTensorData | The converted ComputeTensorData. |
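Examples
A hedged sketch of converting CPU data to the GPU compute backend, assuming inputTensor is an existing Tensor:

```csharp
// Complete CPU-side work, then convert the data to a ComputeTensorData.
var cpuData = CPUTensorData.Pin(inputTensor);
cpuData.CompleteAllPendingOperations();
ComputeTensorData gpuData = cpuData.ConvertToComputeTensorData(inputTensor.shape.length);
```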
Dispose()
Releases the native memory associated with this CPUTensorData.
Declaration
public void Dispose()
Remarks
Must be called from the main thread. If pending Job operations exist, this method completes them before releasing memory. Do not call from a finalizer; the garbage collector may run on a different thread and cause undefined behavior.
Examples
Pin a tensor to CPU, schedule jobs, complete pending operations, then dispose.
var cpuData = CPUTensorData.Pin(inputTensor);
var job = new MyJob { data = cpuData.array.GetNativeArrayHandle<float>() };
cpuData.fence = job.Schedule(inputTensor.shape.length, 64);
worker.Schedule(inputTensor);
cpuData.CompleteAllPendingOperations();
cpuData.Dispose();
(Refer to the class-level example for the MyJob definition.)
DownloadAsync<T>(int)
Asynchronously downloads a contiguous block of data from internal storage.
Declaration
public Awaitable<NativeArray<T>> DownloadAsync<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| Awaitable<NativeArray<T>> | An Awaitable<NativeArray<T>> that resolves to a NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Use this to download data without blocking the main thread. For GPU backends, the readback runs asynchronously. The returned array uses the Temp allocator. Dispose of it when finished.
Examples
Download without blocking the main thread.
var data = await tensorData.DownloadAsync<float>(count);
float value = data[0];
data.Dispose();
Download<T>(int)
Blocking call that returns a contiguous block of data from internal storage.
Declaration
public NativeArray<T> Download<T>(int dstCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| int | dstCount | The number of elements to copy. |
Returns
| Type | Description |
|---|---|
| NativeArray<T> | A new NativeArray<T> containing the copied data. |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Blocks until the data is available. For GPU backends, this may trigger a synchronous readback. The returned array uses the Temp allocator. Dispose of it or use it within the same frame. Call CompleteAllPendingOperations() first if Jobs or GPU work may be pending.
Examples
Wait for operations, download data, and dispose the array.
tensorData.CompleteAllPendingOperations();
var data = tensorData.Download<float>(count);
float value = data[0];
data.Dispose();
~CPUTensorData()
Finalizes the CPUTensorData.
Declaration
protected ~CPUTensorData()
IsReadbackRequestDone()
Checks whether the asynchronous readback request is complete.
Declaration
public bool IsReadbackRequestDone()
Returns
| Type | Description |
|---|---|
| bool |
Remarks
Use after calling ReadbackRequest() to poll for completion. When this returns true, the data is available for CPU access. For CPU backends, this completes any pending Job operations and returns true when ready.
Examples
Poll for readback completion, then download.
tensorData.ReadbackRequest();
while (!tensorData.IsReadbackRequestDone())
await Task.Yield();
var data = tensorData.Download<float>(count);
Pin(Tensor, bool)
Ensures the tensor's data resides on the CPU and returns the CPUTensorData.
Declaration
public static CPUTensorData Pin(Tensor X, bool clearOnInit = false)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The tensor to pin to CPU. |
| bool | clearOnInit | Whether to zero-initialize when allocating new CPU storage. The default value is false. |
Returns
| Type | Description |
|---|---|
| CPUTensorData | The CPUTensorData backing the tensor. |
Remarks
If the tensor is already on CPU, returns the existing CPUTensorData. If on GPU, copies or converts the data to CPU. Use this before scheduling Jobs that read or write the tensor via array.
Examples
// Pin a tensor to CPU and write data via a Burst job.
var cpuData = CPUTensorData.Pin(inputTensor);
var job = new MyJob { data = cpuData.array.GetNativeArrayHandle<float>() };
cpuData.fence = job.Schedule(inputTensor.shape.length, 64);
worker.Schedule(inputTensor);
ReadbackRequest()
Schedules asynchronous readback of the internal data.
Declaration
public void ReadbackRequest()
Remarks
For GPU backends, initiates a non-blocking transfer from device to CPU. Poll IsReadbackRequestDone() to check completion, then call Download<T>(int) to obtain the data. For CPU backends, this is a no-op; data is already on CPU.
Examples
Schedule async readback.
tensorData.ReadbackRequest();
// Continue other work, then check IsReadbackRequestDone()
ToString()
Returns a string representation of the CPU tensor data.
Declaration
public override string ToString()
Returns
| Type | Description |
|---|---|
| string | A string in the form (CPU burst: [length], uploaded: count). |
Overrides
object.ToString()
Remarks
The format is (CPU burst: [length], uploaded: count), where length is the buffer length and count is the uploaded element count.
Examples
var cpuData = CPUTensorData.Pin(inputTensor);
Debug.Log(cpuData.ToString());
// Output: (CPU burst: [256], uploaded: 256)
Upload<T>(NativeArray<T>, int)
Uploads a contiguous block of tensor data to internal storage.
Declaration
public void Upload<T>(NativeArray<T> data, int srcCount) where T : unmanaged
Parameters
| Type | Name | Description |
|---|---|---|
| NativeArray<T> | data | The source data to copy. |
| int | srcCount | The number of elements to copy from |
Type Parameters
| Name | Description |
|---|---|
| T | The element type of the data (for example, float). |
Remarks
Copies srcCount elements from data into the internal buffer. For GPU backends, this transfers data from CPU to the device. Call CompleteAllPendingOperations() before reading the uploaded data if other operations may be pending.
Examples
Upload data to backing storage and wait for completion.
var data = new NativeArray<float>(256, Allocator.Temp);
// Fill data
tensorData.Upload(data, 256);
tensorData.CompleteAllPendingOperations();