Class ComputeTensorData
Tensor data storage for GPU backends
Namespace: Unity.Barracuda
Assembly: solution.dll
Syntax
public class ComputeTensorData : UniqueResourceId, ITensorData, IDisposable, ITensorDataStatistics, IUniqueResource
Constructors
ComputeTensorData(TensorShape, string, ChannelsOrder, bool)
Create ComputeTensorData
Declaration
public ComputeTensorData(TensorShape shape, string buffername, ComputeInfo.ChannelsOrder onDeviceChannelsOrder, bool clearOnInit = true)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | shape | shape of the tensor |
string | buffername | name of the underlying ComputeBuffer |
ComputeInfo.ChannelsOrder | onDeviceChannelsOrder | channel order of the on-device storage (channels-first vs channels-last) |
bool | clearOnInit | whether to zero the buffer when it is created |
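A minimal construction sketch; the shape, buffer name, and NHWC channel order below are illustrative assumptions, not requirements of the API:

```csharp
using Unity.Barracuda;

public static class ComputeTensorDataExample
{
    public static void CreateAndRelease()
    {
        // 1 batch x 2 height x 2 width x 4 channels = 16 floats on the GPU.
        var shape = new TensorShape(1, 2, 2, 4);

        // Allocates a GPU ComputeBuffer named "example"; clearOnInit defaults
        // to true, so the storage starts out zeroed.
        var gpuData = new ComputeTensorData(shape, "example", ComputeInfo.ChannelsOrder.NHWC);

        // ... Upload/Download data or bind gpuData.buffer to a kernel ...

        // Release the underlying ComputeBuffer when finished.
        gpuData.Dispose();
    }
}
```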
Fields
name
Parent Tensor name
Declaration
public string name
Field Value
Type | Description |
---|---|
string |
Properties
buffer
Data storage as ComputeBuffer
Declaration
public ComputeBuffer buffer { get; }
Property Value
Type | Description |
---|---|
ComputeBuffer |
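A sketch of binding the exposed buffer to a user compute shader; the shader asset, the kernel name "MyKernel", and the buffer property "Xdata" are assumptions of this example, not part of the Barracuda API:

```csharp
using Unity.Barracuda;
using UnityEngine;

public static class BufferBindingExample
{
    public static void Bind(ComputeShader shader, ComputeTensorData gpuData)
    {
        // Hypothetical kernel and buffer property names in the user's .compute file.
        int kernel = shader.FindKernel("MyKernel");
        shader.SetBuffer(kernel, "Xdata", gpuData.buffer);
        // When indexing the buffer inside the shader, account for gpuData.offset.
    }
}
```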
channelsOrder
Channel order: channels-first vs channels-last
Declaration
public ComputeInfo.ChannelsOrder channelsOrder { get; }
Property Value
Type | Description |
---|---|
ComputeInfo.ChannelsOrder |
dataType
Returns the type of the elements this tensorData can contain.
Declaration
public virtual DataType dataType { get; }
Property Value
Type | Description |
---|---|
DataType |
inUse
Declaration
public virtual bool inUse { get; }
Property Value
Type | Description |
---|---|
bool |
isGPUMem
Declaration
public virtual bool isGPUMem { get; }
Property Value
Type | Description |
---|---|
bool |
maxCapacity
Returns the maximum number of elements this tensorData can contain.
Declaration
public virtual int maxCapacity { get; }
Property Value
Type | Description |
---|---|
int |
offset
Offset in the data storage buffer
Declaration
public int offset { get; }
Property Value
Type | Description |
---|---|
int |
Methods
Dispose()
Dispose internal storage
Declaration
public virtual void Dispose()
Download(TensorShape)
Returns an array filled with the values of a tensor.
Depending on the implementation and underlying device this array might be a copy or direct reference to the tensor values.
This is a blocking call, unless data from device was requested via ScheduleAsyncDownload
beforehand and has already arrived.
Declaration
public virtual float[] Download(TensorShape shape)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | shape | the TensorShape (and thus length) of the data to copy |
Returns
Type | Description |
---|---|
float[] | Tensor data as a float array |
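A minimal blocking readback sketch using the documented signature; the helper name is illustrative:

```csharp
using Unity.Barracuda;

public static class DownloadExample
{
    public static float[] ReadBack(ComputeTensorData gpuData, TensorShape shape)
    {
        // Copies shape.length values from the GPU; blocks unless a matching
        // ScheduleAsyncDownload has already completed.
        return gpuData.Download(shape);
    }
}
```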
~ComputeTensorData()
Finalizer
Declaration
protected ~ComputeTensorData()
Reserve(int)
Reserve uninitialized memory.
Declaration
public virtual void Reserve(int count)
Parameters
Type | Name | Description |
---|---|---|
int | count | element count to reserve |
ScheduleAsyncDownload(int)
Schedule an asynchronous download from device memory. `count` is the number of elements to read back.
Declaration
public virtual bool ScheduleAsyncDownload(int count)
Parameters
Type | Name | Description |
---|---|---|
int | count | count of elements to download |
Returns
Type | Description |
---|---|
bool | |
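A two-phase readback sketch built on the documented Download behaviour; spreading the two calls across frames is left to the caller, and the helper names are illustrative:

```csharp
using Unity.Barracuda;

public static class AsyncReadbackExample
{
    // Phase 1: request the first shape.length elements without blocking.
    public static void Request(ComputeTensorData gpuData, TensorShape shape)
    {
        gpuData.ScheduleAsyncDownload(shape.length);
    }

    // Phase 2 (for example a few frames later): Download only blocks if the
    // scheduled readback has not arrived yet.
    public static float[] Collect(ComputeTensorData gpuData, TensorShape shape)
    {
        return gpuData.Download(shape);
    }
}
```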
SharedAccess(out int)
Returns an array filled with the values of multiple tensors that share the same tensorData on device.
Depending on the implementation and underlying device this array might be a copy or a direct reference to the tensor values; no conversion from the on-device memory layout will occur.
This is a blocking call, unless data from device was requested via ScheduleAsyncDownload
beforehand and has already arrived.
Declaration
public virtual BarracudaArray SharedAccess(out int offset)
Parameters
Type | Name | Description |
---|---|---|
int | offset | This function outputs the offset into the shared array at which this tensor's values start |
Returns
Type | Description |
---|---|
BarracudaArray | array filled with the values of multiple tensors that share the same tensorData on device |
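A sketch of reading through the shared view; the float indexer on BarracudaArray is an assumption of this example:

```csharp
using Unity.Barracuda;

public static class SharedAccessExample
{
    public static float FirstValue(ComputeTensorData gpuData)
    {
        // Raw, unconverted view of the shared on-device storage; blocks unless
        // an async download already completed. offset locates this tensor's
        // values inside the shared array.
        BarracudaArray shared = gpuData.SharedAccess(out int offset);

        // Assumes BarracudaArray exposes a float indexer.
        return shared[offset];
    }
}
```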
ToString()
String summary of the tensor data
Declaration
public override string ToString()
Returns
Type | Description |
---|---|
string | summary |
Overrides
Upload(float[], TensorShape, int)
Initialize with `data`. `shape` is the TensorShape (and thus length) of the data to copy. `managedBufferStartIndex` is the offset in `data` at which to start the copy.
Declaration
public virtual void Upload(float[] data, TensorShape shape, int managedBufferStartIndex = 0)
Parameters
Type | Name | Description |
---|---|---|
float[] | data | data as a float array |
TensorShape | shape | Tensor shape |
int | managedBufferStartIndex | managed buffer start index |
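A minimal upload sketch; the shape and values are illustrative:

```csharp
using Unity.Barracuda;

public static class UploadExample
{
    public static void Fill(ComputeTensorData gpuData)
    {
        var shape = new TensorShape(1, 1, 1, 4);    // 4 elements
        float[] values = { 0f, 1f, 2f, 3f };

        // Copies all 4 floats from values, starting at managedBufferStartIndex 0.
        gpuData.Upload(values, shape);
    }
}
```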