Interface IBackend
An interface that provides methods for operations on tensors.
Inherited Members
IDisposable.Dispose()
Namespace: Unity.Sentis
Assembly: Unity.Sentis.dll
Syntax
public interface IBackend : IDisposable
Properties
backendType
Returns the BackendType
for the ops.
Declaration
BackendType backendType { get; }
Property Value
Type | Description |
---|---|
BackendType | The backend type used for the ops. |
Methods
Abs(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Abs math function: f(x) = |x|.
Declaration
void Abs(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
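All element-wise methods on IBackend follow the same calling convention: the caller supplies a pre-allocated output tensor O with the same shape and data type as the input, and the backend fills it in place. A minimal sketch of that pattern, assuming the backend and both tensors are created and disposed elsewhere (the class and method names here are illustrative only):

```csharp
using Unity.Sentis;

static class AbsExample
{
    // X and O are assumed to be allocated elsewhere with identical shapes;
    // IBackend methods never allocate the output tensor themselves.
    public static void Run(IBackend backend, TensorFloat X, TensorFloat O)
    {
        backend.Abs(X, O); // O[i] = |X[i]| for every element
    }
}
```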
Abs(TensorInt, TensorInt)
Computes an output tensor by applying the element-wise Abs math function: f(x) = |x|.
Declaration
void Abs(TensorInt X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Acos(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Acos
trigonometric function: f(x) = acos(x).
Declaration
void Acos(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Acosh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Acosh
trigonometric function: f(x) = acosh(x).
Declaration
void Acosh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Add(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Add
math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Add(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
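Because Add supports numpy-style broadcasting, the output shape is the broadcast of the two input shapes, and the caller is responsible for allocating O with that shape before the call. A sketch under that assumption; the shapes in the comments are illustrative only:

```csharp
using Unity.Sentis;

static class AddExample
{
    // Example broadcast: A has shape (2, 3) and B has shape (1, 3).
    // O must be pre-allocated with the broadcast shape (2, 3).
    public static void Run(IBackend backend, TensorFloat A, TensorFloat B, TensorFloat O)
    {
        backend.Add(A, B, O); // O[i, j] = A[i, j] + B[0, j]
    }
}
```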
Add(TensorInt, TensorInt, TensorInt)
Performs an element-wise Add
math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Add(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
And(TensorInt, TensorInt, TensorInt)
Performs an element-wise And
logical operation: f(a, b) = a & b.
This supports numpy-style broadcasting of input tensors.
Declaration
void And(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ArgMax(TensorFloat, TensorInt, int, bool, bool)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
void ArgMax(TensorFloat X, TensorInt O, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
bool | selectLastIndex | Whether to perform the operation from the back of the axis. |
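The shape of O depends on keepdim: reducing an input of shape (2, 3, 4) along axis 1 gives an output of shape (2, 1, 4) when keepdim is true and (2, 4) otherwise. A sketch of the call, assuming the tensors are allocated elsewhere with those shapes:

```csharp
using Unity.Sentis;

static class ArgMaxExample
{
    // X: float input, e.g. shape (2, 3, 4).
    // O: int output pre-allocated as (2, 1, 4) for keepdim = true, (2, 4) otherwise.
    public static void Run(IBackend backend, TensorFloat X, TensorInt O)
    {
        backend.ArgMax(X, O, axis: 1, keepdim: true, selectLastIndex: false);
    }
}
```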
ArgMax(TensorInt, TensorInt, int, bool, bool)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
void ArgMax(TensorInt X, TensorInt O, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
bool | selectLastIndex | Whether to perform the operation from the back of the axis. |
ArgMin(TensorFloat, TensorInt, int, bool, bool)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
void ArgMin(TensorFloat X, TensorInt O, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
bool | selectLastIndex | Whether to perform the operation from the back of the axis. |
ArgMin(TensorInt, TensorInt, int, bool, bool)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
void ArgMin(TensorInt X, TensorInt O, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
bool | selectLastIndex | Whether to perform the operation from the back of the axis. |
Asin(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Asin
trigonometric function: f(x) = asin(x).
Declaration
void Asin(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Asinh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Asinh
trigonometric function: f(x) = asinh(x).
Declaration
void Asinh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Atan(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Atan
trigonometric function: f(x) = atan(x).
Declaration
void Atan(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Atanh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Atanh
trigonometric function: f(x) = atanh(x).
Declaration
void Atanh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
AveragePool(TensorFloat, TensorFloat, int[], int[], int[])
Calculates an output tensor by pooling the mean values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
void AveragePool(TensorFloat X, TensorFloat O, int[] kernelShape, int[] strides, int[] pads)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int[] | kernelShape | The size of the kernel along each spatial axis. |
int[] | strides | The stride along each spatial axis. |
int[] | pads | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
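The pads array interleaves the lower and upper padding per spatial axis in the order described above. A sketch of a 2D average pool with a 2x2 kernel, stride 2 and no padding, assuming the tensors are allocated elsewhere:

```csharp
using Unity.Sentis;

static class AveragePoolExample
{
    // X: shape (N, C, H, W); for this configuration O must be
    // pre-allocated as (N, C, H/2, W/2).
    public static void Run(IBackend backend, TensorFloat X, TensorFloat O)
    {
        int[] kernelShape = { 2, 2 };
        int[] strides     = { 2, 2 };
        int[] pads        = { 0, 0, 0, 0 }; // [pad_top, pad_bottom, pad_left, pad_right]
        backend.AveragePool(X, O, kernelShape, strides, pads);
    }
}
```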
BatchNormalization(TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorFloat, float)
Normalizes the input tensor using the supplied mean, variance, scale, and bias tensors: f(x) = scale * (x - mean) / sqrt(variance + epsilon) + bias.
Declaration
void BatchNormalization(TensorFloat X, TensorFloat S, TensorFloat B, TensorFloat mean, TensorFloat variance, TensorFloat O, float epsilon)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
TensorFloat | mean | The mean tensor. |
TensorFloat | variance | The variance tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | epsilon | The epsilon value the layer uses to avoid division by zero. |
Bernoulli(TensorFloat, Tensor, int?)
Generates an output tensor with values 0 or 1 from a Bernoulli distribution. The input tensor contains the probabilities to use for generating the output values.
Declaration
void Bernoulli(TensorFloat X, Tensor O, int? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The probabilities input tensor. |
Tensor | O | The output tensor to be computed and filled. |
int? | seed | The optional seed to use for the random number generation. |
Cast(TensorFloat, TensorInt)
Computes the output tensor using an element-wise Cast
function: f(x) = (int)x.
Declaration
void Cast(TensorFloat X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Cast(TensorInt, TensorFloat)
Computes the output tensor using an element-wise Cast
function: f(x) = (float)x.
Declaration
void Cast(TensorInt X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Cast(TensorShort, TensorFloat)
Computes the output tensor using an element-wise Cast
function: f(x) = (float)x.
Declaration
void Cast(TensorShort X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorShort | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Ceil(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Ceil
math function: f(x) = ceil(x).
Declaration
void Ceil(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Celu(TensorFloat, TensorFloat, float)
Computes an output tensor by applying the element-wise Celu
activation function: f(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
Declaration
void Celu(TensorFloat X, TensorFloat O, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the Celu activation function. |
Clip(TensorFloat, TensorFloat, float, float)
Computes an output tensor by applying the element-wise Clip
math function: f(x) = clamp(x, min, max).
Declaration
void Clip(TensorFloat X, TensorFloat O, float min, float max)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | min | The lower clip value. |
float | max | The upper clip value. |
Clip(TensorInt, TensorInt, int, int)
Computes an output tensor by applying the element-wise Clip
math function: f(x) = clamp(x, min, max).
Declaration
void Clip(TensorInt X, TensorInt O, int min, int max)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | min | The lower clip value. |
int | max | The upper clip value. |
CompressWithIndices(Tensor, TensorInt, Tensor, int, int)
Computes the output tensor by selecting slices from an input tensor according to the 'indices' tensor along an 'axis'.
Declaration
void CompressWithIndices(Tensor X, TensorInt indices, Tensor O, int numIndices, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | numIndices | The number of indices. |
int | axis | The axis along which to compress. |
Concat(Tensor[], Tensor, int)
Calculates an output tensor by concatenating the input tensors along a given axis.
Declaration
void Concat(Tensor[] inputs, Tensor O, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor[] | inputs | The input tensors. |
Tensor | O | The output tensor to be computed and filled. |
int | axis | The axis along which to concatenate the input tensors. |
Conv(TensorFloat, TensorFloat, TensorFloat, TensorFloat, int, Span<int>, Span<int>, Span<int>, FusableActivation)
Applies a convolution filter to an input tensor.
Declaration
void Conv(TensorFloat X, TensorFloat K, TensorFloat B, TensorFloat O, int groups, Span<int> strides, Span<int> pads, Span<int> dilations, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | K | The filter tensor. |
TensorFloat | B | The optional bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | groups | The number of groups that input channels and output channels are divided into. |
Span<int> | strides | The optional stride value for each spatial dimension of the filter. |
Span<int> | pads | The optional lower and upper padding values for each spatial dimension of the filter. |
Span<int> | dilations | The optional dilation value of each spatial dimension of the filter. |
FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
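A sketch of a standard (non-grouped) 2D convolution call with unit strides and dilations, no padding, and no fused activation. The tensors are assumed to be allocated elsewhere with compatible shapes, and the sketch assumes FusableActivation exposes a None member:

```csharp
using System;
using Unity.Sentis;

static class ConvExample
{
    // X: (N, C, H, W), K: (M, C, kH, kW), B: optional bias of shape (M).
    // O must be pre-allocated with the resulting output shape.
    public static void Run(IBackend backend, TensorFloat X, TensorFloat K, TensorFloat B, TensorFloat O)
    {
        Span<int> strides   = stackalloc int[] { 1, 1 };
        Span<int> pads      = stackalloc int[] { 0, 0, 0, 0 };
        Span<int> dilations = stackalloc int[] { 1, 1 };

        // groups = 1 gives a standard (non-grouped, non-depthwise) convolution.
        backend.Conv(X, K, B, O, 1, strides, pads, dilations, FusableActivation.None);
    }
}
```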
ConvTranspose(TensorFloat, TensorFloat, TensorFloat, TensorFloat, Span<int>, Span<int>, Span<int>, FusableActivation)
Applies a transpose convolution filter to an input tensor.
Declaration
void ConvTranspose(TensorFloat X, TensorFloat W, TensorFloat B, TensorFloat O, Span<int> strides, Span<int> pads, Span<int> outputPadding, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | W | The filter tensor. |
TensorFloat | B | The optional bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Span<int> | strides | The optional stride value for each spatial dimension of the filter. |
Span<int> | pads | The optional lower and upper padding values for each spatial dimension of the filter. |
Span<int> | outputPadding | The output padding value for each spatial dimension in the filter. |
FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
Cos(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Cos
trigonometric function: f(x) = cos(x).
Declaration
void Cos(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Cosh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Cosh
trigonometric function: f(x) = cosh(x).
Declaration
void Cosh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
CumSum(TensorFloat, TensorFloat, int, bool, bool)
Performs the cumulative sum along a given axis.
Declaration
void CumSum(TensorFloat X, TensorFloat O, int axis, bool reverse, bool exclusive)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | axis | The axis along which to apply the cumulative sum. |
bool | reverse | Whether to perform the cumulative sum from the end of the axis. |
bool | exclusive | Whether to exclude the respective input element from its own cumulative sum. |
CumSum(TensorInt, TensorInt, int, bool, bool)
Performs the cumulative sum along a given axis.
Declaration
void CumSum(TensorInt X, TensorInt O, int axis, bool reverse, bool exclusive)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which to apply the cumulative sum. |
bool | reverse | Whether to perform the cumulative sum from the end of the axis. |
bool | exclusive | Whether to exclude the respective input element from its own cumulative sum. |
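As a concrete illustration of the reverse and exclusive flags (following the usual ONNX CumSum convention, which this sketch assumes Sentis matches), for a 1D input [1, 2, 3, 4]:

```csharp
using Unity.Sentis;

static class CumSumExample
{
    // X = [1, 2, 3, 4] (shape (4)); O is pre-allocated with the same shape.
    public static void Run(IBackend backend, TensorInt X, TensorInt O)
    {
        backend.CumSum(X, O, axis: 0, reverse: false, exclusive: false); // O = [1, 3, 6, 10]
        backend.CumSum(X, O, axis: 0, reverse: false, exclusive: true);  // O = [0, 1, 3, 6]
        backend.CumSum(X, O, axis: 0, reverse: true,  exclusive: false); // O = [10, 9, 7, 4]
    }
}
```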
Dense(TensorFloat, TensorFloat, TensorFloat, TensorFloat, FusableActivation)
Performs a matrix multiplication operation: f(X, W, B) = X x W + B.
This supports numpy-style broadcasting of input tensors.
Declaration
void Dense(TensorFloat X, TensorFloat W, TensorFloat B, TensorFloat O, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | W | The weights tensor. |
TensorFloat | B | The bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
FusableActivation | fusedActivation | The fused activation to apply to the output tensor after the dense operation. |
DepthToSpace(TensorFloat, TensorFloat, int, DepthToSpaceMode)
Computes the output tensor by permuting data from depth into blocks of spatial data.
Declaration
void DepthToSpace(TensorFloat X, TensorFloat O, int blocksize, DepthToSpaceMode mode)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | blocksize | The size of the blocks to move the depth data into. |
DepthToSpaceMode | mode | The ordering of the data in the output tensor as a DepthToSpaceMode. |
DequantizeLinear(TensorByte, TensorFloat, float, byte)
Computes the output tensor by unpacking four uint8 values from each int value and scaling to floats.
Declaration
void DequantizeLinear(TensorByte X, TensorFloat O, float scale, byte zeroPoint)
Parameters
Type | Name | Description |
---|---|---|
TensorByte | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | scale | The scale value to use for dequantization. |
byte | zeroPoint | The zero point value to use for dequantization. |
Div(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Div
math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Div(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Div(TensorInt, TensorInt, TensorInt)
Performs an element-wise Div
math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Div(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Einsum(TensorFloat[], TensorFloat, TensorIndex[], TensorIndex, TensorIndex, TensorShape)
Performs an Einsum
math operation.
Declaration
void Einsum(TensorFloat[] inputTensors, TensorFloat O, TensorIndex[] operandIndices, TensorIndex outputIndices, TensorIndex sumIndices, TensorShape sumShape)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | inputTensors | The input tensors. |
TensorFloat | O | The output tensor to be computed and filled. |
TensorIndex[] | operandIndices | The operand indices for each input tensor. |
TensorIndex | outputIndices | The output indices for each input tensor. |
TensorIndex | sumIndices | The indices along which to sum. |
TensorShape | sumShape | The shape along which to sum. |
Elu(TensorFloat, TensorFloat, float)
Computes an output tensor by applying the element-wise Elu
activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * (e^x - 1).
Declaration
void Elu(TensorFloat X, TensorFloat O, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the Elu activation function. |
Equal(TensorFloat, TensorFloat, TensorInt)
Performs an element-wise Equal logical comparison operation: f(a, b) = 1 if a == b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Equal(TensorFloat A, TensorFloat B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Equal(TensorInt, TensorInt, TensorInt)
Performs an element-wise Equal logical comparison operation: f(a, b) = 1 if a == b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Equal(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Erf(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Erf
activation function: f(x) = erf(x).
Declaration
void Erf(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Exp(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Exp
math function: f(x) = exp(x).
Declaration
void Exp(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Expand(Tensor, Tensor)
Calculates an output tensor by broadcasting the input tensor into a given shape.
Declaration
void Expand(Tensor X, Tensor O)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
FMod(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
void FMod(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
FMod(TensorInt, TensorInt, TensorInt)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
void FMod(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Floor(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Floor
math function: f(x) = floor(x).
Declaration
void Floor(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Gather(Tensor, TensorInt, Tensor, int)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
void Gather(Tensor X, TensorInt indices, Tensor O, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | axis | The axis along which to gather. |
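A sketch of gathering rows of a 2D tensor by index along axis 0; the tensors and their contents are assumed to be created elsewhere and are illustrative only:

```csharp
using Unity.Sentis;

static class GatherExample
{
    // X: shape (4, 3); indices = [2, 0] (shape (2)); O pre-allocated as (2, 3).
    // After the call, O contains row 2 of X followed by row 0 of X.
    public static void Run(IBackend backend, Tensor X, TensorInt indices, Tensor O)
    {
        backend.Gather(X, indices, O, axis: 0);
    }
}
```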
GatherElements(Tensor, TensorInt, Tensor, int)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
void GatherElements(Tensor X, TensorInt indices, Tensor O, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | axis | The axis along which to gather. |
GatherND(Tensor, TensorInt, Tensor, int)
Takes slices of values from the batched input tensor indexed by the indices
tensor.
Declaration
void GatherND(Tensor X, TensorInt indices, Tensor O, int batchDims)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | batchDims | The number of batch dimensions of the input tensor, the gather begins at the next dimension. |
Gelu(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Gelu
activation function: f(x) = x / 2 * (1 + erf(x / sqrt(2))).
Declaration
void Gelu(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
GeluFast(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise fast approximation of the Gelu activation function: f(x) = 0.5 * x * (1 + tanh(0.7978 * (x + 0.04472 * x^3))).
Declaration
void GeluFast(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
GlobalAveragePool(TensorFloat, TensorFloat)
Calculates an output tensor by pooling the mean values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
void GlobalAveragePool(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
GlobalMaxPool(TensorFloat, TensorFloat)
Calculates an output tensor by pooling the maximum values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
void GlobalMaxPool(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Greater(TensorFloat, TensorFloat, TensorInt)
Performs an element-wise Greater logical comparison operation: f(a, b) = 1 if a > b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Greater(TensorFloat A, TensorFloat B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Greater(TensorInt, TensorInt, TensorInt)
Performs an element-wise Greater logical comparison operation: f(a, b) = 1 if a > b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Greater(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
GreaterOrEqual(TensorFloat, TensorFloat, TensorInt)
Performs an element-wise GreaterOrEqual logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void GreaterOrEqual(TensorFloat A, TensorFloat B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
GreaterOrEqual(TensorInt, TensorInt, TensorInt)
Performs an element-wise GreaterOrEqual logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void GreaterOrEqual(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
HardSigmoid(TensorFloat, TensorFloat, float, float)
Computes an output tensor by applying the element-wise HardSigmoid
activation function: f(x) = clamp(alpha * x + beta, 0, 1).
Declaration
void HardSigmoid(TensorFloat X, TensorFloat O, float alpha, float beta)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the HardSigmoid activation function. |
float | beta | The beta value to use for the HardSigmoid activation function. |
HardSwish(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise HardSwish
activation function: f(x) = x * max(0, min(1, 1/6 * x + 0.5)).
Declaration
void HardSwish(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Hardmax(TensorFloat, TensorFloat, int)
Computes an output tensor by applying the Hardmax
activation function along an axis: f(x, axis) = 1 if x is the first maximum value along the specified axis, otherwise f(x) = 0.
Declaration
void Hardmax(TensorFloat X, TensorFloat O, int axis)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | axis | The axis along which to apply the Hardmax activation function. |
InstanceNormalization(TensorFloat, TensorFloat, TensorFloat, TensorFloat, float)
Computes the mean and variance over the spatial dimensions of the input tensor and normalizes them according to the scale and bias tensors.
Declaration
void InstanceNormalization(TensorFloat X, TensorFloat S, TensorFloat B, TensorFloat O, float epsilon)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | epsilon | The epsilon value the layer uses to avoid division by zero. |
IsInf(TensorFloat, TensorInt, bool, bool)
Performs an element-wise IsInf logical operation: f(x) = 1 if x is +Inf and detectPositive is true, or if x is -Inf and detectNegative is true; otherwise f(x) = 0.
Declaration
void IsInf(TensorFloat X, TensorInt O, bool detectNegative, bool detectPositive)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
bool | detectNegative | Whether to detect negative infinities in the input tensor. |
bool | detectPositive | Whether to detect positive infinities in the input tensor. |
IsNaN(TensorFloat, TensorInt)
Performs an element-wise IsNaN
logical operation: f(x) = 1 if x is NaN, otherwise f(x) = 0.
Declaration
void IsNaN(TensorFloat X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
LSTM(TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorInt, TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorFloat, RnnDirection, RnnActivation[], float[], float[], bool, float, RnnLayout)
Generates an output tensor by computing a one-layer long short-term memory (LSTM) on an input tensor.
Declaration
void LSTM(TensorFloat X, TensorFloat W, TensorFloat R, TensorFloat B, TensorInt sequenceLens, TensorFloat initialH, TensorFloat initialC, TensorFloat P, TensorFloat Y, TensorFloat Yh, TensorFloat Yc, RnnDirection direction, RnnActivation[] activations, float[] activationAlpha, float[] activationBeta, bool inputForget, float clip, RnnLayout layout)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input sequences tensor. |
TensorFloat | W | The weights tensor for the gates of the LSTM. |
TensorFloat | R | The recurrent weights tensor for the gates of the LSTM. |
TensorFloat | B | The optional bias tensor for the input gate of the LSTM. |
TensorInt | sequenceLens | The optional 1D tensor specifying the lengths of the sequences in a batch. |
TensorFloat | initialH | The optional initial values tensor of the hidden neurons of the LSTM. |
TensorFloat | initialC | The optional initial values tensor of the cells of the LSTM. |
TensorFloat | P | The optional weight tensor for the peepholes of the LSTM. |
TensorFloat | Y | The output tensor to be computed and filled with the concatenated intermediate output values of the hidden. |
TensorFloat | Yh | The output tensor to be computed and filled with the last output value of the hidden. |
TensorFloat | Yc | The output tensor to be computed and filled with the last output value of the cell. |
RnnDirection | direction | The direction of the LSTM as an RnnDirection. |
RnnActivation[] | activations | The activation functions of the LSTM as an array of RnnActivation. |
float[] | activationAlpha | The alpha values of the activation functions of the LSTM. |
float[] | activationBeta | The beta values of the activation functions of the LSTM. |
bool | inputForget | Whether to forget the input values in the LSTM. |
float | clip | The cell clip threshold of the LSTM. |
RnnLayout | layout | The layout of the tensors as an RnnLayout. |
LayerNormalization(TensorFloat, TensorFloat, TensorFloat, TensorFloat, float)
Computes the mean and variance over the last dimension of the input tensor and normalizes it according to the scale and bias tensors.
Declaration
void LayerNormalization(TensorFloat X, TensorFloat S, TensorFloat B, TensorFloat O, float epsilon)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | epsilon | The epsilon value the layer uses to avoid division by zero. |
LeakyRelu(TensorFloat, TensorFloat, float)
Computes an output tensor by applying the element-wise LeakyRelu
activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * x.
Declaration
void LeakyRelu(TensorFloat X, TensorFloat O, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the LeakyRelu activation function. |
Less(TensorFloat, TensorFloat, TensorInt)
Performs an element-wise Less logical comparison operation: f(a, b) = 1 if a < b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Less(TensorFloat A, TensorFloat B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Less(TensorInt, TensorInt, TensorInt)
Performs an element-wise Less logical comparison operation: f(a, b) = 1 if a < b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void Less(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
LessOrEqual(TensorFloat, TensorFloat, TensorInt)
Performs an element-wise LessOrEqual logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void LessOrEqual(TensorFloat A, TensorFloat B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
LessOrEqual(TensorInt, TensorInt, TensorInt)
Performs an element-wise LessOrEqual logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
void LessOrEqual(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Log(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Log
math function: f(x) = log(x).
Declaration
void Log(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
LogSoftmax(TensorFloat, TensorFloat, int)
Computes an output tensor by applying the LogSoftmax
activation function along an axis: f(x, axis) = log(Softmax(x, axis)).
Declaration
void LogSoftmax(TensorFloat X, TensorFloat O, int axis)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | axis | The axis along which to apply the LogSoftmax activation function. |
MatMul(TensorFloat, TensorFloat, TensorFloat)
Performs a multi-dimensional matrix multiplication operation: f(a, b) = a x b.
Declaration
void MatMul(TensorFloat X, TensorFloat Y, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The first input tensor. |
TensorFloat | Y | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
MatMul2D(TensorFloat, TensorFloat, TensorFloat, bool, bool)
Performs a matrix multiplication operation with optional transposes: f(a, b) = a' x b'.
Declaration
void MatMul2D(TensorFloat X, TensorFloat Y, TensorFloat O, bool xTranspose, bool yTranspose)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The first input tensor. |
TensorFloat | Y | The second input tensor. |
TensorFloat | O | The output tensor. |
bool | xTranspose | Whether to transpose the first input tensor before performing the matrix multiplication. |
bool | yTranspose | Whether to transpose the second input tensor before performing the matrix multiplication. |
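A sketch of MatMul2D with the first input transposed, a common pattern when a weight matrix is stored transposed. The shapes in the comments are illustrative, and all tensors are assumed to be allocated elsewhere:

```csharp
using Unity.Sentis;

static class MatMul2DExample
{
    // X: (K, M), Y: (K, N). With xTranspose = true the result is transpose(X) x Y,
    // so O must be pre-allocated as (M, N).
    public static void Run(IBackend backend, TensorFloat X, TensorFloat Y, TensorFloat O)
    {
        backend.MatMul2D(X, Y, O, xTranspose: true, yTranspose: false);
    }
}
```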
Max(TensorFloat[], TensorFloat)
Performs an element-wise Max
math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
void Max(TensorFloat[] inputs, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | inputs | The input tensors. |
TensorFloat | O | The output tensor to be computed and filled. |
Max(TensorInt[], TensorInt)
Performs an element-wise Max
math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
void Max(TensorInt[] inputs, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt[] | inputs | The input tensors. |
TensorInt | O | The output tensor to be computed and filled. |
MaxPool(TensorFloat, TensorFloat, int[], int[], int[])
Calculates an output tensor by pooling the maximum values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
void MaxPool(TensorFloat X, TensorFloat O, int[] kernelShape, int[] strides, int[] pads)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int[] | kernelShape | The size of the kernel along each spatial axis. |
int[] | strides | The stride along each spatial axis. |
int[] | pads | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
Mean(TensorFloat[], TensorFloat)
Performs an element-wise Mean
math operation: f(x1, x2 ... xn) = (x1 + x2 ... xn) / n.
This supports numpy-style broadcasting of input tensors.
Declaration
void Mean(TensorFloat[] inputs, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | inputs | The input tensors. |
TensorFloat | O | The output tensor to be computed and filled. |
MemClear(Tensor)
Sets the entries of a tensor to 0.
Declaration
void MemClear(Tensor O)
Parameters
Type | Name | Description |
---|---|---|
Tensor | O | The output tensor to be computed and filled. |
MemCopy(Tensor, Tensor)
Creates a copy of a given input tensor with the same shape and values.
Declaration
void MemCopy(Tensor X, Tensor O)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
MemCopyStride(Tensor, Tensor, int, int, int, int, int, int)
Copies blocks of values from X to O. The method copies 'count' blocks of 'length' elements each, starting at offsets 'offsetX' and 'offsetO' and advancing by strides 'strideX' and 'strideO' between blocks.
Declaration
void MemCopyStride(Tensor X, Tensor O, int strideX, int strideO, int length, int count, int offsetX, int offsetO)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | strideX | The stride of the blocks in the input tensor. |
int | strideO | The stride of the blocks in the output tensor. |
int | length | The number of elements in each block. |
int | count | The number of blocks to copy. |
int | offsetX | The first index to copy from in the input tensor. |
int | offsetO | The first index to copy to in the output tensor. |
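For example, to copy one column of a row-major (4, 3) tensor into a length-4 output, each block is a single element, the input stride is the row length and the output stride is 1. A sketch under that assumption, with tensors allocated elsewhere:

```csharp
using Unity.Sentis;

static class MemCopyStrideExample
{
    // Copies column 1 of a row-major (4, 3) tensor X into a (4) tensor O:
    // 4 blocks of 1 element, stepping 3 elements through X and 1 through O,
    // starting at flat index 1 in X and 0 in O.
    public static void Run(IBackend backend, Tensor X, Tensor O)
    {
        backend.MemCopyStride(X, O, strideX: 3, strideO: 1, length: 1, count: 4, offsetX: 1, offsetO: 0);
    }
}
```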
MemSet(TensorFloat, float)
Sets the entries of a tensor to a given fill value.
Declaration
void MemSet(TensorFloat O, float value)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | O | The output tensor to be computed and filled. |
float | value | The fill value. |
MemSet(TensorInt, int)
Sets the entries of a tensor to a given fill value.
Declaration
void MemSet(TensorInt O, int value)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | O | The output tensor to be computed and filled. |
int | value | The fill value. |
Min(TensorFloat[], TensorFloat)
Performs an element-wise Min
math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
void Min(TensorFloat[] inputs, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | inputs | The input tensors. |
TensorFloat | O | The output tensor to be computed and filled. |
Min(TensorInt[], TensorInt)
Performs an element-wise Min
math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
void Min(TensorInt[] inputs, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt[] | inputs | The input tensors. |
TensorInt | O | The output tensor to be computed and filled. |
Mod(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the divisor, as in Python.
This supports numpy-style broadcasting of input tensors.
Declaration
void Mod(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Mod(TensorInt, TensorInt, TensorInt)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the divisor, as in Python.
This supports numpy-style broadcasting of input tensors.
Declaration
void Mod(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Mul(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Mul
math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Mul(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Mul(TensorInt, TensorInt, TensorInt)
Performs an element-wise Mul
math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Mul(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Neg(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Neg
math function: f(x) = -x.
Declaration
void Neg(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Neg(TensorInt, TensorInt)
Computes an output tensor by applying the element-wise Neg
math function: f(x) = -x.
Declaration
void Neg(TensorInt X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Not(TensorInt, TensorInt)
Performs an element-wise Not
logical operation: f(x) = ~x.
Declaration
void Not(TensorInt X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
OneHot(TensorInt, TensorFloat, int, int, float, float)
Generates a one-hot tensor with the given depth, indices, and on and off values.
Declaration
void OneHot(TensorInt indices, TensorFloat O, int axis, int depth, float offValue, float onValue)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | indices | The indices input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | axis | The axis along which the operation adds the one-hot representation. |
int | depth | The depth of the one-hot tensor. |
float | offValue | The value to use for an off element. |
float | onValue | The value to use for an on element. |
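A sketch of building a one-hot encoding from class indices; the index values and shapes in the comments are illustrative, and the tensors are assumed to be allocated elsewhere:

```csharp
using Unity.Sentis;

static class OneHotExample
{
    // indices = [0, 2] (shape (2)), depth = 3, axis = -1:
    // O must be pre-allocated as (2, 3) and ends up as
    // [[1, 0, 0],
    //  [0, 0, 1]] with onValue = 1 and offValue = 0.
    public static void Run(IBackend backend, TensorInt indices, TensorFloat O)
    {
        backend.OneHot(indices, O, axis: -1, depth: 3, offValue: 0f, onValue: 1f);
    }
}
```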
OneHot(TensorInt, TensorInt, int, int, int, int)
Generates a one-hot tensor with the given depth, indices, and on and off values.
Declaration
void OneHot(TensorInt indices, TensorInt O, int axis, int depth, int offValue, int onValue)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | indices | The indices input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | axis | The axis along which the operation adds the one-hot representation. |
int | depth | The depth of the one-hot tensor. |
int | offValue | The value to use for an off element. |
int | onValue | The value to use for an on element. |
Or(TensorInt, TensorInt, TensorInt)
Performs an element-wise Or
logical operation: f(a, b) = a | b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Or(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
PRelu(TensorFloat, TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise PRelu
activation function: f(x) = x if x >= 0, otherwise f(x) = slope * x.
Declaration
void PRelu(TensorFloat X, TensorFloat slope, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | slope | The slope tensor. Must be unidirectionally broadcastable to X. |
TensorFloat | O | The output tensor to be computed and filled. |
Pad(TensorFloat, TensorFloat, ReadOnlySpan<int>, PadMode, float)
Calculates the output tensor by adding padding to the input tensor according to the given padding values, mode and constant value.
Declaration
void Pad(TensorFloat X, TensorFloat O, ReadOnlySpan<int> pad, PadMode padMode, float constant)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | pad | The lower and upper padding values for each dimension. |
PadMode | padMode | The PadMode to use for the padding. |
float | constant | The constant value to fill with. |
Pad(TensorInt, TensorInt, ReadOnlySpan<int>, PadMode, int)
Calculates the output tensor by adding padding to the input tensor according to the given padding values, mode and constant value.
Declaration
void Pad(TensorInt X, TensorInt O, ReadOnlySpan<int> pad, PadMode padMode, int constant)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | pad | The lower and upper padding values for each dimension. |
PadMode | padMode | The PadMode to use for the padding. |
int | constant | The constant value to fill with. |
PinToDevice(Tensor, bool)
Pins and returns a tensor using this backend.
Declaration
Tensor PinToDevice(Tensor X, bool clearOnInit = false)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
bool | clearOnInit | Whether to initialize the backend data. The default value is false. |
Returns
Type | Description |
---|---|
Tensor | The pinned input tensor. |
Pow(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Pow
math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
void Pow(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Pow(TensorFloat, TensorInt, TensorFloat)
Performs an element-wise Pow
math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
void Pow(TensorFloat A, TensorInt B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
RandomNormal(TensorFloat, float, float, int?)
Generates an output tensor of a given shape with random values drawn from a normal distribution with the given mean and scale, and an optional seed value.
Declaration
void RandomNormal(TensorFloat O, float mean, float scale, int? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | O | The output tensor to be computed and filled. |
float | mean | The mean of the normal distribution to use to generate the output. |
float | scale | The standard deviation of the normal distribution to use to generate the output. |
int? | seed | The optional seed to use for the random number generation. |
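A sketch of filling a pre-allocated tensor in place with normally distributed values; the tensor is assumed to be created elsewhere, and the mean, scale, and seed values are illustrative:

```csharp
using Unity.Sentis;

static class RandomNormalExample
{
    public static void Run(IBackend backend, TensorFloat O)
    {
        // Fills O in place: mean 0, standard deviation 1, fixed seed for reproducibility.
        backend.RandomNormal(O, mean: 0f, scale: 1f, seed: 42);
    }
}
```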
RandomUniform(TensorFloat, float, float, int?)
Generates an output tensor of a given shape with random values drawn from a uniform distribution between the given low and high values, and an optional seed value.
Declaration
void RandomUniform(TensorFloat O, float low, float high, int? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | O | The output tensor to be computed and filled. |
float | low | The lower end of the interval of the uniform distribution to use to generate the output. |
float | high | The upper end of the interval of the uniform distribution to use to generate the output. |
int? | seed | The optional seed to use for the random number generation. |
Range(TensorFloat, float, float)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start
and delta
values.
Declaration
void Range(TensorFloat O, float start, float delta)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | O | The output tensor to be computed and filled. |
float | start | The first value in the range. |
float | delta | The delta between subsequent values in the range. |
Range(TensorInt, int, int)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start
and delta
values.
Declaration
void Range(TensorInt O, int start, int delta)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | O | The output tensor to be computed and filled. |
int | start | The first value in the range. |
int | delta | The delta between subsequent values in the range. |
Reciprocal(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Reciprocal
math function: f(x) = 1 / x.
Declaration
void Reciprocal(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReduceL1(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceL1
operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
void ReduceL1(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceL1(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceL1
operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
void ReduceL1(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceL2(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceL2
operation: f(x1, x2 ... xn) = sqrt(x1² + x2² + ... + xn²).
Declaration
void ReduceL2(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceLogSum(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceLogSum
operation: f(x1, x2 ... xn) = log(x1 + x2 + ... + xn).
Declaration
void ReduceLogSum(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceLogSumExp(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceLogSumExp
operation: f(x1, x2 ... xn) = log(e^x1 + e^x2 + ... + e^xn).
Declaration
void ReduceLogSumExp(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceMax(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceMax
operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
void ReduceMax(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceMax(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceMax operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
void ReduceMax(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceMean(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceMean
operation: f(x1, x2 ... xn) = (x1 + x2 + ... + xn) / n.
Declaration
void ReduceMean(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceMin(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceMin
operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
void ReduceMin(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceMin(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceMin
operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
void ReduceMin(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceProd(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceProd
operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
void ReduceProd(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceProd(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceProd
operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
void ReduceProd(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceSum(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceSum
operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
void ReduceSum(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceSum(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceSum
operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
void ReduceSum(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceSumSquare(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceSumSquare
operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
void ReduceSumSquare(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
ReduceSumSquare(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceSumSquare
operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
void ReduceSumSquare(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | axes | The axes along which to reduce. |
bool | keepdim | Whether to keep the reduced axes in the output tensor. |
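A hedged usage sketch for the reduction methods above; the backend instance, tensor names, and shapes are illustrative assumptions rather than part of this reference, the output tensors are assumed pre-allocated with the reduced shapes, and an int[] converts implicitly to ReadOnlySpan<int>:
// Illustrative only: `backend`, `X`, `O`, and `O2` are assumed to exist already.
// Sums X over axis 1 and keeps that axis with size 1, so O has the same rank as X.
backend.ReduceSum(X, O, new[] { 1 }, keepdim: true);
// Reduces over axes 0 and 1 and drops them from the output shape.
backend.ReduceMax(X, O2, new[] { 0, 1 }, keepdim: false);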
Relu(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Relu
activation function: f(x) = max(0, x).
Declaration
void Relu(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
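A minimal sketch of calling an element-wise activation through the interface, assuming a backend instance and two pre-allocated tensors of identical shape (the names are illustrative):
// O[i] = max(0, X[i]); X and O are assumed to have the same shape.
backend.Relu(X, O);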
Relu6(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Relu6
activation function: f(x) = clamp(x, 0, 6).
Declaration
void Relu6(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Reshape(Tensor, Tensor)
Calculates an output tensor by copying the data from the input tensor and using a given shape. The data from the input tensor is unchanged.
Declaration
void Reshape(Tensor X, Tensor O)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
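A hedged sketch of Reshape usage; the output tensor is assumed to have been allocated with the target shape, whose element count must match the input's:
// Copies X's data into O unchanged; only the shape interpretation differs.
// O is assumed pre-allocated with the desired shape (same total element count as X).
backend.Reshape(X, O);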
Resize(TensorFloat, TensorFloat, ReadOnlySpan<float>, InterpolationMode, NearestMode, CoordTransformMode)
Calculates an output tensor by resampling the input tensor along the spatial dimensions with given scales.
Declaration
void Resize(TensorFloat X, TensorFloat O, ReadOnlySpan<float> scale, InterpolationMode interpolationMode, NearestMode nearestMode, CoordTransformMode coordTransformMode)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ReadOnlySpan<float> | scale | The factor to scale each dimension by. |
InterpolationMode | interpolationMode | The InterpolationMode to use for the operation. |
NearestMode | nearestMode | The NearestMode to use for the operation. |
CoordTransformMode | coordTransformMode | The CoordTransformMode to use for the operation. |
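A hedged sketch of a 2x spatial upscale, assuming an NCHW input (so the scale span carries one factor per axis) and with the enum arguments passed through from the caller rather than hard-coded here:
// Doubles the last two (spatial) axes of an assumed NCHW tensor; batch and channel scales stay at 1.
backend.Resize(X, O, new[] { 1f, 1f, 2f, 2f }, interpolationMode, nearestMode, coordTransformMode);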
RoiAlign(TensorFloat, TensorFloat, TensorInt, TensorFloat, RoiPoolingMode, int, int, int, float)
Calculates an output tensor by pooling the input tensor across each region of interest given by the rois
tensor.
Declaration
void RoiAlign(TensorFloat X, TensorFloat rois, TensorInt indices, TensorFloat O, RoiPoolingMode mode, int outputHeight, int outputWidth, int samplingRatio, float spatialScale)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | rois | The region of interest input tensor. |
TensorInt | indices | The indices input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
RoiPoolingMode | mode | The pooling mode of the operation as an RoiPoolingMode. |
int | outputHeight | The height of the output tensor. |
int | outputWidth | The width of the output tensor. |
int | samplingRatio | The number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. |
float | spatialScale | The multiplicative spatial scale factor used to translate coordinates from their input spatial scale to the scale used when pooling. |
Round(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Round
math function: f(x) = round(x).
If the fractional part is equal to 0.5, rounds to the nearest even integer.
Declaration
void Round(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ScalarMad(TensorFloat, TensorFloat, float, float)
Performs an element-wise Mad
math operation: multiplies and adds bias to a tensor: f(T, s, b) = s * T + b.
Declaration
void ScalarMad(TensorFloat X, TensorFloat O, float s, float b)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | s | Input scalar for multiplication. |
float | b | Input bias for addition. |
ScalarMad(TensorInt, TensorInt, int, int)
Performs an element-wise Mad
math operation: multiplies and adds bias to a tensor: f(T, s, b) = s * T + b.
Declaration
void ScalarMad(TensorInt X, TensorInt O, int s, int b)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
int | s | Input scalar for multiplication. |
int | b | Input bias for addition. |
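A hedged sketch of ScalarMad as a one-call affine rescale; the tensor names and the [0, 1] input range are illustrative assumptions:
// Maps values assumed to lie in [0, 1] onto [-1, 1]: O = 2 * X + (-1).
backend.ScalarMad(X, O, s: 2f, b: -1f);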
ScaleBias(TensorFloat, TensorFloat, TensorFloat, TensorFloat)
Computes the output tensor with an element-wise ScaleBias
function: f(x, s, b) = x * s + b.
Declaration
void ScaleBias(TensorFloat X, TensorFloat S, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ScatterElements(Tensor, TensorInt, Tensor, Tensor, int, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor along a given axis.
ScatterElements
updates the values depending on the reduction mode used.
Declaration
void ScatterElements(Tensor X, TensorInt indices, Tensor updates, Tensor O, int axis, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | updates | The updates tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | axis | The axis on which to perform the scatter. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
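A hedged sketch of a scatter along axis 0; the reduction mode is passed through from the caller so no particular enum member is assumed, and all tensors are assumed pre-allocated with compatible shapes:
// Copies X into O, then writes `updates` at the positions given by `indices` along axis 0,
// combining with existing values according to `reduction`.
backend.ScatterElements(X, indices, updates, O, axis: 0, reduction: reduction);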
ScatterND(TensorFloat, TensorInt, TensorFloat, TensorFloat, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor.
ScatterND
updates the values depending on the reduction mode used.
Declaration
void ScatterND(TensorFloat X, TensorInt indices, TensorFloat updates, TensorFloat O, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | indices | The indices tensor. |
TensorFloat | updates | The updates tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
ScatterND(TensorInt, TensorInt, TensorInt, TensorInt, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor.
ScatterND
updates the values depending on the reduction mode used.
Declaration
void ScatterND(TensorInt X, TensorInt indices, TensorInt updates, TensorInt O, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | indices | The indices tensor. |
TensorInt | updates | The updates tensor. |
TensorInt | O | The output tensor to be computed and filled. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Selu(TensorFloat, TensorFloat, float, float)
Computes an output tensor by applying the element-wise Selu
activation function: f(x) = gamma * x if x >= 0, otherwise f(x) = gamma * (alpha * e^x - alpha).
Declaration
void Selu(TensorFloat X, TensorFloat O, float alpha, float gamma)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the Selu activation function. |
float | gamma | The gamma value to use for the Selu activation function. |
Shrink(TensorFloat, TensorFloat, float, float)
Computes an output tensor by applying the element-wise Shrink
activation function: f(x) = x + bias if x < -lambd. f(x) = x - bias if x > lambd. Otherwise f(x) = 0.
Declaration
void Shrink(TensorFloat X, TensorFloat O, float bias, float lambd)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | bias | The bias value to use for the Shrink activation function. |
float | lambd | The lambda value to use for the Shrink activation function. |
Sigmoid(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Sigmoid
activation function: f(x) = 1/(1 + e^(-x)).
Declaration
void Sigmoid(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Sign(TensorFloat, TensorFloat)
Performs an element-wise Sign
math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
void Sign(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Sign(TensorInt, TensorInt)
Performs an element-wise Sign
math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
void Sign(TensorInt X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Sin(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Sin
trigonometric function: f(x) = sin(x).
Declaration
void Sin(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Sinh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Sinh
trigonometric function: f(x) = sinh(x).
Declaration
void Sinh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Slice(Tensor, Tensor, ReadOnlySpan<int>, ReadOnlySpan<int>, ReadOnlySpan<int>)
Calculates an output tensor by slicing the input tensor along given axes with given starts, ends, and steps.
Declaration
void Slice(Tensor X, Tensor O, ReadOnlySpan<int> starts, ReadOnlySpan<int> axes, ReadOnlySpan<int> steps)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | starts | The start index along each axis. |
ReadOnlySpan<int> | axes | The axes along which to slice. If this is null, the layer slices all axes. |
ReadOnlySpan<int> | steps | The step values for slicing. If this is null, the layer uses a step of 1 along each axis. |
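A hedged sketch of slicing every second element along one axis; the axis choice and tensor names are illustrative, and O is assumed pre-allocated with the sliced shape:
// Takes elements 0, 2, 4, ... along axis 1 of X into O.
backend.Slice(X, O, starts: new[] { 0 }, axes: new[] { 1 }, steps: new[] { 2 });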
SliceSet(Tensor, Tensor, Tensor, ReadOnlySpan<int>, ReadOnlySpan<int>, ReadOnlySpan<int>)
Copies the input tensor and updates values at indexes specified by the slices defined by axes, starts, ends, and steps.
Declaration
void SliceSet(Tensor X, Tensor values, Tensor O, ReadOnlySpan<int> starts, ReadOnlySpan<int> axes, ReadOnlySpan<int> steps)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | values | The values tensor. |
Tensor | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | starts | The start index along each axis. |
ReadOnlySpan<int> | axes | The axes along which to slice. If this is null, the layer slices all axes. |
ReadOnlySpan<int> | steps | The step values for slicing. If this is null, the layer uses a step of 1 along each axis. |
Softmax(TensorFloat, TensorFloat, int)
Computes an output tensor by applying the Softmax
activation function along an axis: f(x, axis) = exp(X) / ReduceSum(exp(X), axis).
Declaration
void Softmax(TensorFloat X, TensorFloat O, int axis)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | axis | The axis along which to apply the Softmax activation function. |
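A hedged sketch of applying Softmax over the class axis of an assumed [batch, classes] tensor (the names are illustrative):
// Normalizes each row of `logits` into probabilities along axis 1; `probs` has the same shape.
backend.Softmax(logits, probs, axis: 1);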
Softplus(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Softplus
activation function: f(x) = ln(e^x + 1).
Declaration
void Softplus(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Softsign(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Softsign
activation function: f(x) = x/(|x| + 1).
Declaration
void Softsign(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
SpaceToDepth(TensorFloat, TensorFloat, int)
Computes the output tensor by permuting data from blocks of spatial data into depth.
Declaration
void SpaceToDepth(TensorFloat X, TensorFloat O, int blocksize)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
int | blocksize | The size of the blocks to move the depth data into. |
Split(Tensor, Tensor, int, int)
Calculates an output tensor by splitting the input tensor along a given axis, starting at the given start index.
Declaration
void Split(Tensor X, Tensor O, int axis, int start)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | axis | The axis along which to split the input tensor. |
int | start | The inclusive start value for the split. |
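A hedged sketch of Split; the amount copied is taken to be determined by the output tensor's extent along the split axis, which is an assumption of this sketch rather than a statement from the reference:
// Copies the chunk of X that begins at index 2 along axis 0 into O;
// O is assumed pre-allocated with the desired chunk size on that axis.
backend.Split(X, O, axis: 0, start: 2);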
Sqrt(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Sqrt
math function: f(x) = sqrt(x).
Declaration
void Sqrt(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Square(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Square
math function: f(x) = x * x.
Declaration
void Square(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Square(TensorInt, TensorInt)
Computes an output tensor by applying the element-wise Square
math function: f(x) = x * x.
Declaration
void Square(TensorInt X, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Sub(TensorFloat, TensorFloat, TensorFloat)
Performs an element-wise Sub
math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Sub(TensorFloat A, TensorFloat B, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Sub(TensorInt, TensorInt, TensorInt)
Performs an element-wise Sub
math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
void Sub(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |
Sum(TensorFloat[], TensorFloat)
Performs an element-wise Sum
math operation: f(x1, x2 ... xn) = x1 + x2 ... xn.
This supports numpy-style broadcasting of input tensors.
Declaration
void Sum(TensorFloat[] inputs, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | inputs | The input tensors. |
TensorFloat | O | The output tensor to be computed and filled. |
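A hedged sketch of the variadic Sum with broadcasting; the input tensors and the pre-allocated output (whose shape is the broadcast of the input shapes) are illustrative assumptions:
// O = a + b + c, with numpy-style broadcasting across the three inputs.
backend.Sum(new[] { a, b, c }, O);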
Swish(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Swish
activation function: f(x) = sigmoid(x) * x = x / (1 + e^(-x)).
Declaration
void Swish(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Tan(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Tan
trigonometric function: f(x) = tan(x).
Declaration
void Tan(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
Tanh(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise Tanh
activation function: f(x) = tanh(x).
Declaration
void Tanh(TensorFloat X, TensorFloat O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
ThresholdedRelu(TensorFloat, TensorFloat, float)
Computes an output tensor by applying the element-wise ThresholdedRelu
activation function: f(x) = x if x > alpha, otherwise f(x) = 0.
Declaration
void ThresholdedRelu(TensorFloat X, TensorFloat O, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | O | The output tensor to be computed and filled. |
float | alpha | The alpha value to use for the ThresholdedRelu activation function. |
Tile(Tensor, Tensor, ReadOnlySpan<int>)
Calculates an output tensor by repeating the input layer a given number of times along each axis.
Declaration
void Tile(Tensor X, Tensor O, ReadOnlySpan<int> repeats)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | repeats | The number of times to tile the input tensor along each axis. |
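A hedged sketch of Tile on an assumed rank-2 tensor; the repeat counts and names are illustrative, and `repeats` needs one entry per axis:
// Repeats X twice along axis 0 and three times along axis 1;
// O is assumed pre-allocated with each dimension of X multiplied by the matching repeat.
backend.Tile(X, O, new[] { 2, 3 });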
TopK(TensorFloat, TensorFloat, TensorInt, int, int, bool)
Calculates the top-K largest or smallest elements of an input tensor along a given axis.
Declaration
void TopK(TensorFloat X, TensorFloat values, TensorInt indices, int k, int axis, bool largest)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | values | The output tensor to be computed and filled with the top K values from the input tensor. |
TensorInt | indices | The output tensor to be computed and filled with the corresponding input tensor indices for the top K values from the input tensor. |
int | k | The number of elements to calculate. |
int | axis | The axis along which to perform the top-K operation. |
bool | largest | Whether to calculate the top-K largest elements. If this is false, the layer calculates the top-K smallest elements. |
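A hedged sketch of TopK on an assumed rank-2 input, taking the five largest values per row; `values` and `indices` are assumed pre-allocated with size 5 along the reduced axis:
// Fills `values` with the 5 largest entries of each row of X (axis 1) and `indices` with their positions.
backend.TopK(X, values, indices, k: 5, axis: 1, largest: true);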
TopP(TensorFloat, TensorFloat, TensorInt)
Computes the index of the first element at which the cumulative sum of the input becomes greater than or equal to the given random value.
Declaration
void TopP(TensorFloat X, TensorFloat random, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | random | The probability values used for the exit criteria. |
TensorInt | O | The output tensor to be computed and filled. |
Transpose(Tensor, Tensor)
Calculates an output tensor by reversing the dimensions of the input tensor.
Declaration
void Transpose(Tensor X, Tensor O)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
Transpose(Tensor, Tensor, ReadOnlySpan<int>)
Calculates an output tensor by permuting the axes and data of the input tensor according to the given permutations.
Declaration
void Transpose(Tensor X, Tensor O, ReadOnlySpan<int> permutations)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
ReadOnlySpan<int> | permutations | The axes to sample the output tensor from in the input tensor. |
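A hedged sketch of an NCHW-to-NHWC transpose; the layout is an assumption for illustration, and output axis i is sampled from input axis permutations[i]:
// Reorders an assumed NCHW tensor into NHWC; O must be pre-allocated with the permuted shape.
backend.Transpose(X, O, new[] { 0, 2, 3, 1 });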
Tril(Tensor, Tensor, int)
Computes the output tensor by retaining the lower triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
void Tril(Tensor X, Tensor O, int k)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | k | The offset from the diagonal to keep. |
Triu(Tensor, Tensor, int)
Computes the output tensor by retaining the upper triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
void Triu(Tensor X, Tensor O, int k)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Tensor | O | The output tensor to be computed and filled. |
int | k | The offset from the diagonal to exclude. |
Where(TensorInt, Tensor, Tensor, Tensor)
Performs an element-wise Where
logical operation: f(condition, a, b) = a if condition
is true
, otherwise f(condition, a, b) = b.
Declaration
void Where(TensorInt C, Tensor A, Tensor B, Tensor O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | C | The condition tensor. |
Tensor | A | The first input tensor. |
Tensor | B | The second input tensor. |
Tensor | O | The output tensor to be computed and filled. |
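A hedged sketch of Where as an element-wise select; the condition tensor C is an integer tensor treated as boolean, and all shapes are assumed broadcast-compatible:
// O[i] = A[i] where C[i] is non-zero (true), otherwise B[i].
backend.Where(C, A, B, O);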
Xor(TensorInt, TensorInt, TensorInt)
Performs an element-wise Xor
logical operation: f(a, b) = a ^ b.
Declaration
void Xor(TensorInt A, TensorInt B, TensorInt O)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
TensorInt | O | The output tensor to be computed and filled. |