Interface IOps
An interface that provides methods for operations on tensors.
Namespace: Unity.Sentis
Syntax
public interface IOps : IDisposable
Properties
deviceType
Returns the DeviceType for the ops.
Declaration
DeviceType deviceType { get; }
Property Value
| Type | Description |
|---|---|
| DeviceType | The device type for the ops. |
Methods
Abs(TensorFloat)
Computes an output tensor by applying the element-wise Abs math function: f(x) = |x|.
Declaration
TensorFloat Abs(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
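Calls on this interface follow a common pattern: build an input tensor, invoke the op, and dispose of tensors when done. A minimal sketch, assuming `ops` is an `IOps` instance supplied by a concrete backend (backend construction is omitted and may differ between Sentis versions):

```csharp
using Unity.Sentis;

// 'ops' is assumed to be provided by a concrete IOps backend.
static TensorFloat AbsExample(IOps ops)
{
    using var x = new TensorFloat(new TensorShape(4), new float[] { -1f, 2f, -3f, 0f });
    // Element-wise f(x) = |x|, so the result holds { 1, 2, 3, 0 }.
    return ops.Abs(x);
}
```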
Abs(TensorInt)
Computes an output tensor by applying the element-wise Abs math function: f(x) = |x|.
Declaration
TensorInt Abs(TensorInt x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Acos(TensorFloat)
Computes an output tensor by applying the element-wise Acos trigonometric function: f(x) = acos(x).
Declaration
TensorFloat Acos(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Acosh(TensorFloat)
Computes an output tensor by applying the element-wise Acosh trigonometric function: f(x) = acosh(x).
Declaration
TensorFloat Acosh(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Add(TensorFloat, TensorFloat)
Performs an element-wise Add math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Add(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Add(TensorInt, TensorInt)
Performs an element-wise Add math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Add(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
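Numpy-style broadcasting stretches dimensions of size 1 to match the other operand. A sketch, assuming `ops` is an existing `IOps` instance:

```csharp
using Unity.Sentis;

static TensorFloat AddBroadcastExample(IOps ops)
{
    // Shape (2, 3) + shape (1, 3): the second operand's first axis broadcasts to 2.
    using var a = new TensorFloat(new TensorShape(2, 3), new float[] { 0, 1, 2, 3, 4, 5 });
    using var b = new TensorFloat(new TensorShape(1, 3), new float[] { 10, 20, 30 });
    // Result shape is (2, 3): { 10, 21, 32, 13, 24, 35 }.
    return ops.Add(a, b);
}
```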
And(TensorInt, TensorInt)
Performs an element-wise And logical operation: f(a, b) = a & b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt And(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ArgMax(TensorFloat, Int32, Boolean, Boolean)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
TensorInt ArgMax(TensorFloat X, int axis, bool keepdim, bool selectLastIndex = false)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
| Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ArgMax(TensorInt, Int32, Boolean, Boolean)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
TensorInt ArgMax(TensorInt X, int axis, bool keepdim, bool selectLastIndex = false)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32 | axis | The axis along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
| Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
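The keepdim flag controls whether the reduced axis survives as a size-1 dimension. A sketch with assumed values, given an `IOps` instance:

```csharp
using Unity.Sentis;

static TensorInt ArgMaxExample(IOps ops)
{
    // Shape (2, 3); the maxima along axis 1 sit at indices 2 and 0.
    using var x = new TensorFloat(new TensorShape(2, 3), new float[] { 1, 2, 9, 7, 5, 3 });
    // keepdim: true -> output shape (2, 1); false -> output shape (2).
    return ops.ArgMax(x, axis: 1, keepdim: true);
}
```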
ArgMin(TensorFloat, Int32, Boolean, Boolean)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
TensorInt ArgMin(TensorFloat X, int axis, bool keepdim, bool selectLastIndex)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
| Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ArgMin(TensorInt, Int32, Boolean, Boolean)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
TensorInt ArgMin(TensorInt X, int axis, bool keepdim, bool selectLastIndex)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32 | axis | The axis along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
| Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Asin(TensorFloat)
Computes an output tensor by applying the element-wise Asin trigonometric function: f(x) = asin(x).
Declaration
TensorFloat Asin(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Asinh(TensorFloat)
Computes an output tensor by applying the element-wise Asinh trigonometric function: f(x) = asinh(x).
Declaration
TensorFloat Asinh(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Atan(TensorFloat)
Computes an output tensor by applying the element-wise Atan trigonometric function: f(x) = atan(x).
Declaration
TensorFloat Atan(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Atanh(TensorFloat)
Computes an output tensor by applying the element-wise Atanh trigonometric function: f(x) = atanh(x).
Declaration
TensorFloat Atanh(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
AveragePool(TensorFloat, Int32[], Int32[], Int32[])
Calculates an output tensor by pooling the mean values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
TensorFloat AveragePool(TensorFloat X, int[] pool, int[] stride, int[] pad)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | pool | The size of the kernel along each spatial axis. |
| Int32[] | stride | The stride along each spatial axis. |
| Int32[] | pad | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
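The pool, stride, and pad arrays each describe the spatial axes only. A 2D sketch, assuming an NCHW input layout and an existing `IOps` instance:

```csharp
using Unity.Sentis;

static TensorFloat AveragePoolExample(IOps ops)
{
    // NCHW input: batch 1, 1 channel, 4x4 spatial.
    using var x = new TensorFloat(new TensorShape(1, 1, 4, 4), new float[16]);
    // 2x2 kernel, stride 2, no padding -> 2x2 output spatial size.
    return ops.AveragePool(x,
        pool:   new[] { 2, 2 },
        stride: new[] { 2, 2 },
        pad:    new[] { 0, 0, 0, 0 });
}
```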
AxisNormalization(TensorFloat, TensorFloat, TensorFloat, Single)
Computes the mean and variance on the last dimension of the input tensor and normalizes them according to the scale and bias tensors.
Declaration
TensorFloat AxisNormalization(TensorFloat X, TensorFloat S, TensorFloat B, float epsilon)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | S | The scale tensor. |
| TensorFloat | B | The bias tensor. |
| Single | epsilon | The epsilon value the layer uses to avoid division by zero. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Bernoulli(TensorFloat, DataType, Nullable<Single>)
Generates an output tensor with values 0 or 1 from a Bernoulli distribution. The input tensor contains the probabilities to use for generating the output values.
Declaration
Tensor Bernoulli(TensorFloat x, DataType dataType, float? seed)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The probabilities input tensor. |
| DataType | dataType | The data type of the output tensor. |
| Nullable<Single> | seed | The optional seed to use for the random number generation. If this is `null`, the random number generator is not seeded. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Cast(Tensor, DataType)
Computes the output tensor using an element-wise Cast function: f(x) = (float)x or f(x) = (int)x depending on the value of toType.
Declaration
Tensor Cast(Tensor x, DataType toType)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | x | The input tensor. |
| DataType | toType | The data type to cast to as a DataType. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
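Cast returns the base Tensor type, so the result is typically downcast to the concrete tensor type. A sketch, given an `IOps` instance:

```csharp
using Unity.Sentis;

static TensorInt CastExample(IOps ops)
{
    using var x = new TensorFloat(new TensorShape(3), new float[] { 1.9f, -0.5f, 3.0f });
    // Casting float to int truncates toward zero, like a C# (int) cast: { 1, 0, 3 }.
    return ops.Cast(x, DataType.Int) as TensorInt;
}
```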
Ceil(TensorFloat)
Computes an output tensor by applying the element-wise Ceil math function: f(x) = ceil(x).
Declaration
TensorFloat Ceil(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Celu(TensorFloat, Single)
Computes an output tensor by applying the element-wise Celu activation function: f(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
Declaration
TensorFloat Celu(TensorFloat x, float alpha)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | alpha | The alpha value to use for the Celu activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Clip(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Clip math function: f(x) = clamp(x, min, max).
Declaration
TensorFloat Clip(TensorFloat x, float min, float max)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | min | The lower clip value. |
| Single | max | The upper clip value. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Compress(Tensor, TensorInt, Int32)
Selects slices of an input tensor along a given axis according to a condition tensor.
Declaration
Tensor Compress(Tensor X, TensorInt indices, int axis)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| Int32 | axis | The axis along which to compress. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Concat(Tensor[], Int32)
Calculates an output tensor by concatenating the input tensors along a given axis.
Declaration
Tensor Concat(Tensor[] tensors, int axis)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor[] | tensors | The input tensors. |
| Int32 | axis | The axis along which to concatenate the input tensors. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
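All input tensors must match in every dimension except the concatenation axis. A sketch, given an `IOps` instance:

```csharp
using Unity.Sentis;

static Tensor ConcatExample(IOps ops)
{
    // Two (2, 2) tensors concatenated along axis 0 give a (4, 2) tensor;
    // along axis 1 they would give (2, 4).
    using var a = new TensorFloat(new TensorShape(2, 2), new float[] { 1, 2, 3, 4 });
    using var b = new TensorFloat(new TensorShape(2, 2), new float[] { 5, 6, 7, 8 });
    return ops.Concat(new Tensor[] { a, b }, axis: 0);
}
```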
ConstantOfShape(TensorShape, Int32)
Generates a tensor with a given shape filled with a given value.
Declaration
TensorInt ConstantOfShape(TensorShape X, int value)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | X | The input tensor shape. |
| Int32 | value | The fill value. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ConstantOfShape(TensorShape, Single)
Generates a tensor with a given shape filled with a given value.
Declaration
TensorFloat ConstantOfShape(TensorShape X, float value)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | X | The input tensor shape. |
| Single | value | The fill value. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Conv(TensorFloat, TensorFloat, TensorFloat, Int32, Int32[], Int32[], Int32[], FusableActivation)
Applies a convolution filter to an input tensor.
Declaration
TensorFloat Conv(TensorFloat X, TensorFloat K, TensorFloat B, int groups, int[] stride, int[] pad, int[] dilation, FusableActivation fusedActivation)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | K | The filter tensor. |
| TensorFloat | B | The optional bias tensor. |
| Int32 | groups | The number of groups that input channels and output channels are divided into. |
| Int32[] | stride | The optional stride value for each spatial dimension of the filter. |
| Int32[] | pad | The optional lower and upper padding values for each spatial dimension of the filter. |
| Int32[] | dilation | The optional dilation value of each spatial dimension of the filter. |
| FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Conv2DTrans(TensorFloat, TensorFloat, TensorFloat, Int32[], Int32[], Int32[], FusableActivation)
Applies a transpose convolution filter to an input tensor.
Declaration
TensorFloat Conv2DTrans(TensorFloat X, TensorFloat K, TensorFloat B, int[] stride, int[] pad, int[] outputAdjustment, FusableActivation fusedActivation)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | K | The filter tensor. |
| TensorFloat | B | The optional bias tensor. |
| Int32[] | stride | The optional stride value for each spatial dimension of the filter. |
| Int32[] | pad | The optional lower and upper padding values for each spatial dimension of the filter. |
| Int32[] | outputAdjustment | The output padding value for each spatial dimension in the filter. |
| FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Copy(Tensor)
Creates a copy of a given input tensor with the same shape and values.
Declaration
Tensor Copy(Tensor x)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Cos(TensorFloat)
Computes an output tensor by applying the element-wise Cos trigonometric function: f(x) = cos(x).
Declaration
TensorFloat Cos(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Cosh(TensorFloat)
Computes an output tensor by applying the element-wise Cosh trigonometric function: f(x) = cosh(x).
Declaration
TensorFloat Cosh(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
CumSum(TensorFloat, Int32, Boolean, Boolean)
Performs the cumulative sum along a given axis.
Declaration
TensorFloat CumSum(TensorFloat X, int axis, bool reverse = false, bool exclusive = false)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to apply the cumulative sum. |
| Boolean | reverse | Whether to perform the cumulative sum from the end of the axis. |
| Boolean | exclusive | Whether to exclude the respective input element from the cumulative sum. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
CumSum(TensorInt, Int32, Boolean, Boolean)
Performs the cumulative sum along a given axis.
Declaration
TensorInt CumSum(TensorInt X, int axis, bool reverse = false, bool exclusive = false)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32 | axis | The axis along which to apply the cumulative sum. |
| Boolean | reverse | Whether to perform the cumulative sum from the end of the axis. |
| Boolean | exclusive | Whether to exclude the respective input element from the cumulative sum. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
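The reverse and exclusive flags combine to give four variants of the running sum. A sketch with assumed values, given an `IOps` instance:

```csharp
using Unity.Sentis;

static TensorFloat CumSumExample(IOps ops)
{
    using var x = new TensorFloat(new TensorShape(4), new float[] { 1, 2, 3, 4 });
    // Default (reverse: false, exclusive: false): { 1, 3, 6, 10 }.
    // reverse: true                             : { 10, 9, 7, 4 }.
    // exclusive: true                           : { 0, 1, 3, 6 }.
    return ops.CumSum(x, axis: 0);
}
```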
Dense(TensorFloat, TensorFloat, TensorFloat, FusableActivation)
Performs a matrix multiplication operation: f(X, W, B) = X × W + B.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Dense(TensorFloat X, TensorFloat W, TensorFloat B, FusableActivation fusedActivation)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | W | The weights tensor. |
| TensorFloat | B | The bias tensor. |
| FusableActivation | fusedActivation | The fused activation to apply to the output tensor after the dense operation. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
DepthToSpace(TensorFloat, Int32, DepthToSpaceMode)
Computes the output tensor by permuting data from depth into blocks of spatial data.
Declaration
TensorFloat DepthToSpace(TensorFloat X, int blocksize, DepthToSpaceMode mode)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | blocksize | The size of the blocks to move the depth data into. |
| DepthToSpaceMode | mode | The ordering of the data in the output tensor as a DepthToSpaceMode. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Div(TensorFloat, TensorFloat)
Performs an element-wise Div math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Div(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Div(TensorInt, TensorInt)
Performs an element-wise Div math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Div(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Einsum(String, TensorFloat[])
Performs an Einsum math operation.
Declaration
TensorFloat Einsum(string equation, params TensorFloat[] operands)
Parameters
| Type | Name | Description |
|---|---|---|
| String | equation | The equation of the Einstein summation as a comma-separated list of subscript labels. |
| TensorFloat[] | operands | The input tensors of the Einsum. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
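The equation string uses one subscript letter per axis, with repeated letters summed over. A sketch showing matrix multiplication expressed as an Einsum, given an `IOps` instance:

```csharp
using Unity.Sentis;

static TensorFloat EinsumExample(IOps ops)
{
    // "ij,jk->ik" is a plain matrix multiplication: the shared 'j' axis is summed,
    // so a (2, 3) and a (3, 2) operand produce a (2, 2) result.
    using var a = new TensorFloat(new TensorShape(2, 3), new float[6]);
    using var b = new TensorFloat(new TensorShape(3, 2), new float[6]);
    return ops.Einsum("ij,jk->ik", a, b);
}
```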
Elu(TensorFloat, Single)
Computes an output tensor by applying the element-wise Elu activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * (e^x - 1).
Declaration
TensorFloat Elu(TensorFloat X, float alpha)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Single | alpha | The alpha value to use for the Elu activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Equal(TensorFloat, TensorFloat)
Performs an element-wise Equal logical comparison operation: f(a, b) = 1 if a == b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Equal(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Equal(TensorInt, TensorInt)
Performs an element-wise Equal logical comparison operation: f(a, b) = 1 if a == b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Equal(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Erf(TensorFloat)
Computes an output tensor by applying the element-wise Erf activation function: f(x) = erf(x).
Declaration
TensorFloat Erf(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Exp(TensorFloat)
Computes an output tensor by applying the element-wise Exp math function: f(x) = exp(x).
Declaration
TensorFloat Exp(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Expand(Tensor, TensorShape)
Calculates an output tensor by broadcasting the input tensor into a given shape.
Declaration
Tensor Expand(Tensor X, TensorShape shape)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorShape | shape | The shape to broadcast the input shape with when calculating the output tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
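Expand applies the same broadcasting rules as the binary math ops: size-1 axes stretch to match the target shape. A sketch, given an `IOps` instance:

```csharp
using Unity.Sentis;

static Tensor ExpandExample(IOps ops)
{
    // A (1, 3) tensor broadcast against shape (4, 3) yields a (4, 3) tensor
    // in which every row repeats the input row.
    using var x = new TensorFloat(new TensorShape(1, 3), new float[] { 1, 2, 3 });
    return ops.Expand(x, new TensorShape(4, 3));
}
```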
Floor(TensorFloat)
Computes an output tensor by applying the element-wise Floor math function: f(x) = floor(x).
Declaration
TensorFloat Floor(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
FMod(TensorFloat, TensorFloat)
Performs an element-wise Mod math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat FMod(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
FMod(TensorInt, TensorInt)
Performs an element-wise Mod math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt FMod(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
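The C#-style sign rule means the remainder follows the dividend, not the divisor (unlike Python's `%`). A sketch with assumed values, given an `IOps` instance:

```csharp
using Unity.Sentis;

static TensorInt FModExample(IOps ops)
{
    using var a = new TensorInt(new TensorShape(4), new int[] { 5, -5, 5, -5 });
    using var b = new TensorInt(new TensorShape(4), new int[] { 3, 3, -3, -3 });
    // Remainder takes the dividend's sign, as in C#: { 2, -2, 2, -2 }.
    return ops.FMod(a, b);
}
```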
Gather(Tensor, TensorInt, Int32)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
Tensor Gather(Tensor X, TensorInt indices, int axis)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| Int32 | axis | The axis along which to gather. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
GatherElements(Tensor, TensorInt, Int32)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
Tensor GatherElements(Tensor X, TensorInt indices, int axis)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| Int32 | axis | The axis along which to gather. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
GatherND(Tensor, TensorInt, Int32)
Takes slices of values from the batched input tensor indexed by the indices tensor.
Declaration
Tensor GatherND(Tensor X, TensorInt indices, int batchDims)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| Int32 | batchDims | The number of batch dimensions of the input tensor; the gather begins at the next dimension. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Gelu(TensorFloat)
Computes an output tensor by applying the element-wise Gelu activation function: f(x) = x / 2 * (1 + erf(x / sqrt(2))).
Declaration
TensorFloat Gelu(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
GlobalAveragePool(TensorFloat)
Calculates an output tensor by pooling the mean values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
TensorFloat GlobalAveragePool(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
GlobalMaxPool(TensorFloat)
Calculates an output tensor by pooling the maximum values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
TensorFloat GlobalMaxPool(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Greater(TensorFloat, TensorFloat)
Performs an element-wise Greater logical comparison operation: f(a, b) = 1 if a > b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Greater(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Greater(TensorInt, TensorInt)
Performs an element-wise Greater logical comparison operation: f(a, b) = 1 if a > b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Greater(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
GreaterOrEqual(TensorFloat, TensorFloat)
Performs an element-wise GreaterOrEqual logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt GreaterOrEqual(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
GreaterOrEqual(TensorInt, TensorInt)
Performs an element-wise GreaterOrEqual logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt GreaterOrEqual(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Hardmax(TensorFloat, Int32)
Computes an output tensor by applying the Hardmax activation function along an axis: f(x, axis) = 1 if x is the first maximum value along the specified axis, otherwise f(x) = 0.
Declaration
TensorFloat Hardmax(TensorFloat X, int axis = -1)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to apply the Hardmax activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
HardSigmoid(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise HardSigmoid activation function: f(x) = clamp(alpha * x + beta, 0, 1).
Declaration
TensorFloat HardSigmoid(TensorFloat x, float alpha, float beta)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | alpha | The alpha value to use for the HardSigmoid activation function. |
| Single | beta | The beta value to use for the HardSigmoid activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
HardSwish(TensorFloat)
Computes an output tensor by applying the element-wise HardSwish activation function: f(x) = x * max(0, min(1, 1/6 * x + 0.5)).
Declaration
TensorFloat HardSwish(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
InstanceNormalization(TensorFloat, TensorFloat, TensorFloat, Single)
Computes the mean and variance on the spatial dimensions of the input tensor and normalizes them according to the scale and bias tensors.
Declaration
TensorFloat InstanceNormalization(TensorFloat X, TensorFloat S, TensorFloat B, float epsilon)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | S | The scale tensor. |
| TensorFloat | B | The bias tensor. |
| Single | epsilon | The epsilon value the layer uses to avoid division by zero. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
IsInf(TensorFloat, Boolean, Boolean)
Performs an element-wise IsInf logical operation: f(x) = 1 if x is +Inf and detectPositive is true, or if x is -Inf and detectNegative is true; otherwise f(x) = 0.
Declaration
TensorInt IsInf(TensorFloat X, bool detectNegative, bool detectPositive)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Boolean | detectNegative | Whether to detect negative infinities in the input tensor. |
| Boolean | detectPositive | Whether to detect positive infinities in the input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
IsNaN(TensorFloat)
Performs an element-wise IsNaN logical operation: f(x) = 1 if x is NaN, otherwise f(x) = 0.
Declaration
TensorInt IsNaN(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
LeakyRelu(TensorFloat, Single)
Computes an output tensor by applying the element-wise LeakyRelu activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * x.
Declaration
TensorFloat LeakyRelu(TensorFloat x, float alpha)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | alpha | The alpha value to use for the LeakyRelu activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Less(TensorFloat, TensorFloat)
Performs an element-wise Less logical comparison operation: f(a, b) = 1 if a < b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Less(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Less(TensorInt, TensorInt)
Performs an element-wise Less logical comparison operation: f(a, b) = 1 if a < b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Less(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
LessOrEqual(TensorFloat, TensorFloat)
Performs an element-wise LessOrEqual logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt LessOrEqual(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
LessOrEqual(TensorInt, TensorInt)
Performs an element-wise LessOrEqual logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt LessOrEqual(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Log(TensorFloat)
Computes an output tensor by applying the element-wise Log math function: f(x) = log(x).
Declaration
TensorFloat Log(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
LogSoftmax(TensorFloat, Int32)
Computes an output tensor by applying the LogSoftmax activation function along an axis: f(x, axis) = log(Softmax(x, axis)).
Declaration
TensorFloat LogSoftmax(TensorFloat X, int axis = -1)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to apply the `LogSoftmax` activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
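Computing log(Softmax(x)) naively can overflow for large inputs; implementations typically subtract the maximum first. A hedged pure-Python sketch of the numerically stable form (illustrative only, not Sentis code):

```python
import math

def log_softmax(x):
    # Subtract the max for numerical stability:
    # log(softmax(x)) = (x - m) - log(sum(exp(x - m))), where m = max(x).
    m = max(x)
    lse = math.log(sum(math.exp(v - m) for v in x))
    return [v - m - lse for v in x]

out = log_softmax([1.0, 2.0, 3.0])
```

The exponentials of the output sum to 1, as expected of a log-probability vector.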
LRN(TensorFloat, Single, Single, Single, Int32)
Normalizes the input tensor over local input regions.
Declaration
TensorFloat LRN(TensorFloat X, float alpha, float beta, float bias, int size)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Single | alpha | The scaling parameter to use for the normalization. |
| Single | beta | The exponent to use for the normalization. |
| Single | bias | The bias value to use for the normalization. |
| Int32 | size | The number of channels to sum over. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
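The LRN normalization divides each value by a power of a windowed sum of squares across channels. A hedged 1D pure-Python sketch, assuming ONNX-style windowing (illustrative only, not Sentis code):

```python
import math

def lrn_1d(channels, alpha, beta, bias, size):
    # 1D sketch of LRN across the channel axis (ONNX-style window assumed):
    # y_c = x_c / (bias + alpha/size * sum of x_j^2 over the window)^beta
    out = []
    for c, x in enumerate(channels):
        lo = max(0, c - (size - 1) // 2)
        hi = min(len(channels) - 1, c + math.ceil((size - 1) / 2))
        sq = sum(v * v for v in channels[lo:hi + 1])
        out.append(x / (bias + alpha / size * sq) ** beta)
    return out
```

With `alpha = 0` and `bias = 1` the denominator is 1, so the input passes through unchanged, which is a quick sanity check on the formula.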
LSTM(TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorInt, TensorFloat, TensorFloat, TensorFloat, RnnDirection, RnnActivation[], Single[], Single[], Boolean, Single, RnnLayout)
Generates an output tensor by computing a one-layer long short-term memory (LSTM) on an input tensor.
Declaration
TensorFloat[] LSTM(TensorFloat X, TensorFloat W, TensorFloat R, TensorFloat B, TensorInt sequenceLens, TensorFloat initialH, TensorFloat initialC, TensorFloat P, RnnDirection direction, RnnActivation[] activations, float[] activationAlpha, float[] activationBeta, bool inputForget, float clip, RnnLayout layout)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input sequences tensor. |
| TensorFloat | W | The weights tensor for the gates of the LSTM. |
| TensorFloat | R | The recurrent weights tensor for the gates of the LSTM. |
| TensorFloat | B | The optional bias tensor for the input gate of the LSTM. |
| TensorInt | sequenceLens | The optional 1D tensor specifying the lengths of the sequences in a batch. |
| TensorFloat | initialH | The optional initial values tensor of the hidden neurons of the LSTM. If this is `null`, the operation uses 0. |
| TensorFloat | initialC | The optional initial values tensor of the cells of the LSTM. If this is `null`, the operation uses 0. |
| TensorFloat | P | The optional weight tensor for the peepholes of the LSTM. If this is `null`, the operation uses 0. |
| RnnDirection | direction | The direction of the LSTM as an `RnnDirection`. |
| RnnActivation[] | activations | The activation functions of the LSTM as an array of `RnnActivation`. |
| Single[] | activationAlpha | The alpha values of the activation functions of the LSTM. |
| Single[] | activationBeta | The beta values of the activation functions of the LSTM. |
| Boolean | inputForget | Whether to forget the input values in the LSTM. If this is `true`, the operation couples the input and forget gates. |
| Single | clip | The cell clip threshold of the LSTM. |
| RnnLayout | layout | The layout of the tensors as an `RnnLayout`. |
Returns
| Type | Description |
|---|---|
| TensorFloat[] | The computed output tensor. |
MatMul(TensorFloat, TensorFloat)
Performs a multi-dimensional matrix multiplication operation: f(a, b) = a x b.
Declaration
TensorFloat MatMul(TensorFloat X, TensorFloat Y)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The first input tensor. |
| TensorFloat | Y | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
MatMul2D(TensorFloat, Boolean, TensorFloat, Boolean)
Performs a matrix multiplication operation with optional transposes: f(a, b) = a' x b'.
Declaration
TensorFloat MatMul2D(TensorFloat X, bool xTranspose, TensorFloat y, bool yTranspose)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The first input tensor. |
| Boolean | xTranspose | Whether to transpose the first input tensor before performing the matrix multiplication. |
| TensorFloat | y | The second input tensor. |
| Boolean | yTranspose | Whether to transpose the second input tensor before performing the matrix multiplication. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Max(TensorFloat[])
Performs an element-wise Max math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Max(TensorFloat[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Max(TensorInt[])
Performs an element-wise Max math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Max(TensorInt[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
MaxPool(TensorFloat, Int32[], Int32[], Int32[])
Calculates an output tensor by pooling the maximum values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
TensorFloat MaxPool(TensorFloat X, int[] pool, int[] stride, int[] pad)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | pool | The size of the kernel along each spatial axis. |
| Int32[] | stride | The stride along each spatial axis. |
| Int32[] | pad | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
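The spatial size of the pooled output follows the standard pooling formula. A hedged pure-Python sketch per spatial axis, assuming floor rounding (illustrative only, not Sentis code):

```python
def pooled_size(in_size, pool, stride, pad_lower, pad_upper):
    # Standard pooling output-size formula (floor mode assumed):
    # out = floor((in + pad_lower + pad_upper - pool) / stride) + 1
    return (in_size + pad_lower + pad_upper - pool) // stride + 1

print(pooled_size(32, 2, 2, 0, 0))  # 16
```

For example, a 2x2 pool with stride 2 and no padding halves a 32-wide axis to 16, while a 3-wide pool with stride 1 and symmetric padding of 1 preserves the size.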
Mean(TensorFloat[])
Performs an element-wise Mean math operation: f(x1, x2 ... xn) = (x1 + x2 ... xn) / n.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Mean(TensorFloat[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Min(TensorFloat[])
Performs an element-wise Min math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Min(TensorFloat[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Min(TensorInt[])
Performs an element-wise Min math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Min(TensorInt[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Mod(TensorInt, TensorInt)
Performs an element-wise Mod math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the divisor, as in Python.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Mod(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
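Because the remainder takes the sign of the divisor, the result differs from C-style truncated remainders for mixed signs. Python's `%` operator already follows this convention, so it can demonstrate the documented behaviour directly (illustrative only, not Sentis code):

```python
# Python's % gives a remainder with the sign of the divisor,
# matching the Mod behaviour documented above.
print(-7 % 3)   # 2   (not -1 as C-style truncation would give)
print(7 % -3)   # -2
```
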
Mul(TensorFloat, TensorFloat)
Performs an element-wise Mul math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Mul(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Mul(TensorInt, TensorInt)
Performs an element-wise Mul math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Mul(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Multinomial(TensorFloat, Int32, Nullable<Single>)
Generates an output tensor with values from a multinomial distribution according to the probabilities given by the input tensor.
Declaration
TensorInt Multinomial(TensorFloat x, int count, float? seed)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The probabilities input tensor. |
| Int32 | count | The number of times to sample the input. |
| Nullable<Single> | seed | The optional seed to use for the random number generation. If this is `null`, the operation generates a seed. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
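Multinomial sampling draws class indices with replacement, weighted by the input probabilities. A hedged pure-Python sketch of the semantics using the standard library (illustrative only, not Sentis code; the function name is hypothetical):

```python
import random

def multinomial_sample(probs, count, seed=None):
    # Draw `count` class indices with replacement, weighted by `probs`
    # (assumed non-negative; zero-weight classes are never drawn).
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=count)

samples = multinomial_sample([0.1, 0.0, 0.9], count=5, seed=42)
```

Every sample is a valid class index, and a class with zero probability can never appear in the output.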
Neg(TensorFloat)
Computes an output tensor by applying the element-wise Neg math function: f(x) = -x.
Declaration
TensorFloat Neg(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Neg(TensorInt)
Computes an output tensor by applying the element-wise Neg math function: f(x) = -x.
Declaration
TensorInt Neg(TensorInt X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
NewTensor(TensorShape, DataType, AllocScope)
Allocates a new tensor with the internal allocator.
Declaration
Tensor NewTensor(TensorShape shape, DataType dataType, AllocScope scope)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | shape | The shape to use for the tensor. |
| DataType | dataType | The data type of the tensor. |
| AllocScope | scope | The allocation scope of the tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The allocated tensor. |
NonMaxSuppression(TensorFloat, TensorFloat, Int32, Single, Single, CenterPointBox)
Calculates an output tensor of selected indices of boxes from input boxes and scores tensors where the indices are based on the scores and amount of intersection with previously selected boxes.
Declaration
TensorInt NonMaxSuppression(TensorFloat boxes, TensorFloat scores, int maxOutputBoxesPerClass, float iouThreshold, float scoreThreshold, CenterPointBox centerPointBox)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | boxes | The boxes input tensor. |
| TensorFloat | scores | The scores input tensor. |
| Int32 | maxOutputBoxesPerClass | The maximum number of boxes to return for each class. |
| Single | iouThreshold | The threshold above which the intersect-over-union rejects a box. |
| Single | scoreThreshold | The threshold below which the box score filters a box from the output. |
| CenterPointBox | centerPointBox | The format of the `boxes` tensor as a `CenterPointBox`. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
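Non-max suppression greedily keeps the highest-scoring boxes and rejects any box whose intersect-over-union with an already selected box exceeds `iouThreshold`. A hedged pure-Python sketch of the IoU computation for corner-format boxes `[x1, y1, x2, y2]` (illustrative only, not Sentis code; Sentis also supports a center-point format via `CenterPointBox`):

```python
def iou(a, b):
    # Intersection-over-union for corner-format boxes [x1, y1, x2, y2].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, the two boxes overlap in a 1x1 square
```

Identical boxes have an IoU of 1, and disjoint boxes have an IoU of 0, so the threshold interpolates between "reject duplicates only" and "reject any overlap".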
NonZero(TensorFloat)
Returns the indices of the elements of the input tensor that are not zero.
Declaration
TensorInt NonZero(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
NonZero(TensorInt)
Returns the indices of the elements of the input tensor that are not zero.
Declaration
TensorInt NonZero(TensorInt X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Not(TensorInt)
Performs an element-wise Not logical operation: f(x) = ~x.
Declaration
TensorInt Not(TensorInt X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
OneHot(TensorInt, Int32, Int32, Int32, Int32)
Generates a one-hot tensor with a given depth, indices and on and off values.
Declaration
TensorInt OneHot(TensorInt indices, int axis, int depth, int offValue, int onValue)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | indices | The indices input tensor. |
| Int32 | axis | The axis along which the operation adds the one-hot representation. |
| Int32 | depth | The depth of the one-hot tensor. |
| Int32 | offValue | The value to use for an off element. |
| Int32 | onValue | The value to use for an on element. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
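Each index expands to a vector of length `depth` holding `onValue` at the index position and `offValue` everywhere else. A hedged pure-Python sketch for a 1D indices tensor with the one-hot axis appended last (illustrative only, not Sentis code):

```python
def one_hot(indices, depth, on_value=1, off_value=0):
    # Build one one-hot row per index; the new axis has length `depth`.
    return [[on_value if j == i else off_value for j in range(depth)]
            for i in indices]

print(one_hot([0, 2], depth=3))  # [[1, 0, 0], [0, 0, 1]]
```
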
Or(TensorInt, TensorInt)
Performs an element-wise Or logical operation: f(a, b) = a | b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Or(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Pad(TensorFloat, Int32[], PadMode, Single)
Calculates the output tensor by adding padding to the input tensor according to the given padding values and mode.
Declaration
TensorFloat Pad(TensorFloat X, int[] pad, PadMode padMode = PadMode.Constant, float constant = 0F)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | pad | The lower and upper padding values for each dimension. |
| PadMode | padMode | The `PadMode` to use for the padding. |
| Single | constant | The constant value to fill with when using `PadMode.Constant`. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
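In constant mode, the `pad` array supplies a lower and an upper padding amount per dimension and the new elements take the `constant` value. A hedged 1D pure-Python sketch (illustrative only, not Sentis code):

```python
def pad_constant_1d(x, pad_lower, pad_upper, constant=0.0):
    # Constant-mode padding for one dimension: prepend `pad_lower` and
    # append `pad_upper` copies of `constant` around the input.
    return [constant] * pad_lower + list(x) + [constant] * pad_upper

print(pad_constant_1d([1.0, 2.0], 1, 2))  # [0.0, 1.0, 2.0, 0.0, 0.0]
```
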
PinToDevice(Tensor, Boolean)
Pins and returns a tensor using this backend.
Declaration
Tensor PinToDevice(Tensor x, bool uploadCache = true)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | x | The input tensor. |
| Boolean | uploadCache | Whether to also move the elements of the tensor to the device. |
Returns
| Type | Description |
|---|---|
| Tensor | The pinned input tensor. |
PostLayerCleanup()
Called after every layer execution. It allows IOps to run cleanup operations such as clearing temporary buffers only used in the scope of the last layer executed.
Declaration
void PostLayerCleanup()
Pow(TensorFloat, TensorFloat)
Performs an element-wise Pow math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Pow(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Pow(TensorFloat, TensorInt)
Performs an element-wise Pow math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Pow(TensorFloat A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
PRelu(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise PRelu activation function: f(x) = x if x >= 0, otherwise f(x) = slope * x.
Declaration
TensorFloat PRelu(TensorFloat x, TensorFloat slope)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| TensorFloat | slope | The slope tensor, must be unidirectional broadcastable to x. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
RandomNormal(TensorShape, Single, Single, Nullable<Single>)
Generates an output tensor of a given shape with random values in a normal distribution with given mean and scale, and an optional seed value.
Declaration
TensorFloat RandomNormal(TensorShape S, float mean, float scale, float? seed)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | S | The shape to use for the output tensor. |
| Single | mean | The mean of the normal distribution to use to generate the output. |
| Single | scale | The standard deviation of the normal distribution to use to generate the output. |
| Nullable<Single> | seed | The optional seed to use for the random number generation. If this is `null`, the operation generates a seed. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
RandomUniform(TensorShape, Single, Single, Nullable<Single>)
Generates an output tensor of a given shape with random values in a uniform distribution between a given low and high, and an optional seed value.
Declaration
TensorFloat RandomUniform(TensorShape S, float low, float high, float? seed)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | S | The shape to use for the output tensor. |
| Single | low | The lower end of the interval of the uniform distribution to use to generate the output. |
| Single | high | The upper end of the interval of the uniform distribution to use to generate the output. |
| Nullable<Single> | seed | The optional seed to use for the random number generation. If this is `null`, the operation generates a seed. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Range(Int32, Int32, Int32)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start, limit, and delta values.
Declaration
TensorInt Range(int start, int limit, int delta)
Parameters
| Type | Name | Description |
|---|---|---|
| Int32 | start | The first value in the range. |
| Int32 | limit | The limit of the range. |
| Int32 | delta | The delta between subsequent values in the range. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Range(Single, Single, Single)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start, limit, and delta values.
Declaration
TensorFloat Range(float start, float limit, float delta)
Parameters
| Type | Name | Description |
|---|---|---|
| Single | start | The first value in the range. |
| Single | limit | The limit of the range. |
| Single | delta | The delta between subsequent values in the range. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
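The number of elements in the progression is ceil((limit - start) / delta), and the values are start, start + delta, start + 2*delta, and so on, stopping before limit. A hedged pure-Python sketch (illustrative only, not Sentis code):

```python
import math

def make_range(start, limit, delta):
    # The output has ceil((limit - start) / delta) elements:
    # start, start + delta, start + 2*delta, ... (limit is exclusive).
    n = max(math.ceil((limit - start) / delta), 0)
    return [start + i * delta for i in range(n)]

print(make_range(0, 10, 3))        # [0, 3, 6, 9]
print(make_range(1.0, 0.0, -0.5))  # [1.0, 0.5]
```

A negative delta counts downward toward the limit, as the second call shows.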
Reciprocal(TensorFloat)
Computes an output tensor by applying the element-wise Reciprocal math function: f(x) = 1 / x.
Declaration
TensorFloat Reciprocal(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceL1(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceL1 operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
TensorFloat ReduceL1(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceL1(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceL1 operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
TensorInt ReduceL1(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ReduceL2(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceL2 operation: f(x1, x2 ... xn) = sqrt(x1² + x2² + ... + xn²).
Declaration
TensorFloat ReduceL2(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceLogSum(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceLogSum operation: f(x1, x2 ... xn) = log(x1 + x2 + ... + xn).
Declaration
TensorFloat ReduceLogSum(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceLogSumExp(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceLogSumExp operation: f(x1, x2 ... xn) = log(e^x1 + e^x2 + ... + e^xn).
Declaration
TensorFloat ReduceLogSumExp(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceMax(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceMax operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
TensorFloat ReduceMax(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceMax(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceMax operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
TensorInt ReduceMax(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ReduceMean(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceMean operation: f(x1, x2 ... xn) = (x1 + x2 + ... + xn) / n.
Declaration
TensorFloat ReduceMean(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceMin(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceMin operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
TensorFloat ReduceMin(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceMin(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceMin operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
TensorInt ReduceMin(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ReduceProd(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceProd operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
TensorFloat ReduceProd(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceProd(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceProd operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
TensorInt ReduceProd(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ReduceSum(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceSum operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
TensorFloat ReduceSum(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
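The `keepdim` flag shared by the reduction operations controls whether a reduced axis is dropped or kept with length 1, preserving the tensor's rank. A hedged 2D pure-Python sketch of summing along axis 0 (illustrative only, not Sentis code):

```python
def reduce_sum_axis0(x, keepdim):
    # Sum a list-of-rows along axis 0. With keepdim, the reduced axis
    # stays with length 1, so the output keeps the input's rank.
    summed = [sum(col) for col in zip(*x)]
    return [summed] if keepdim else summed

x = [[1, 2], [3, 4]]
print(reduce_sum_axis0(x, keepdim=True))   # [[4, 6]] - shape (1, 2)
print(reduce_sum_axis0(x, keepdim=False))  # [4, 6]  - shape (2,)
```
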
ReduceSum(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceSum operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
TensorInt ReduceSum(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
ReduceSumSquare(TensorFloat, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceSumSquare operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
TensorFloat ReduceSumSquare(TensorFloat X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ReduceSumSquare(TensorInt, Int32[], Boolean)
Reduces an input tensor along the given axes using the ReduceSumSquare operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
TensorInt ReduceSumSquare(TensorInt X, int[] axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| Int32[] | axes | The axes along which to reduce. |
| Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Relu(TensorFloat)
Computes an output tensor by applying the element-wise Relu activation function: f(x) = max(0, x).
Declaration
TensorFloat Relu(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Relu6(TensorFloat)
Computes an output tensor by applying the element-wise Relu6 activation function: f(x) = clamp(x, 0, 6).
Declaration
TensorFloat Relu6(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ResetAllocator(Boolean)
Resets the internal allocator.
Declaration
void ResetAllocator(bool keepCachedMemory = true)
Parameters
| Type | Name | Description |
|---|---|---|
| Boolean | keepCachedMemory | Whether to keep the cached memory when the allocator resets. |
Reshape(Tensor, TensorShape)
Calculates an output tensor by copying the data from the input tensor and using a given shape. The data from the input tensor is unchanged.
Declaration
Tensor Reshape(Tensor X, TensorShape shape)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorShape | shape | The shape of the output tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Resize(TensorFloat, Single[], InterpolationMode, NearestMode, CoordTransformMode)
Calculates an output tensor by resampling the input tensor along the spatial dimensions with given scales.
Declaration
TensorFloat Resize(TensorFloat X, float[] scale, InterpolationMode interpolationMode, NearestMode nearestMode = NearestMode.RoundPreferFloor, CoordTransformMode coordTransformMode = CoordTransformMode.HalfPixel)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Single[] | scale | The factor to scale each dimension by. |
| InterpolationMode | interpolationMode | The `InterpolationMode` to use for the operation. |
| NearestMode | nearestMode | The `NearestMode` to use for the operation when the interpolation mode is `InterpolationMode.Nearest`. |
| CoordTransformMode | coordTransformMode | The `CoordTransformMode` to use for the operation. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
RoiAlign(TensorFloat, TensorFloat, TensorInt, RoiPoolingMode, Int32, Int32, Int32, Single)
Calculates an output tensor by pooling the input tensor across each region of interest given by the rois tensor.
Declaration
TensorFloat RoiAlign(TensorFloat X, TensorFloat Rois, TensorInt Indices, RoiPoolingMode mode, int outputHeight, int outputWidth, int samplingRatio, float spatialScale)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | Rois | The region of interest input tensor. |
| TensorInt | Indices | The indices input tensor. |
| RoiPoolingMode | mode | The pooling mode of the operation as a `RoiPoolingMode`. |
| Int32 | outputHeight | The height of the output tensor. |
| Int32 | outputWidth | The width of the output tensor. |
| Int32 | samplingRatio | The number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. |
| Single | spatialScale | The multiplicative spatial scale factor used to translate coordinates from their input spatial scale to the scale used when pooling. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Round(TensorFloat)
Computes an output tensor by applying the element-wise Round math function: f(x) = round(x).
If the fractional part is equal to 0.5, rounds to the nearest even integer.
Declaration
TensorFloat Round(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
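NumPy's `np.round` implements the same round-half-to-even rule, so it can be used to preview the behavior (this is an illustration, not the Sentis implementation):

```python
import numpy as np

# Halves round to the nearest even integer ("banker's rounding"),
# so 0.5 and 1.5 round in different directions.
x = np.array([-1.5, -0.5, 0.5, 1.5, 2.5])
print(np.round(x))  # [-2. -0.  0.  2.  2.]
```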
ScaleBias(TensorFloat, TensorFloat, TensorFloat)
Computes the output tensor with an element-wise ScaleBias function: f(x, s, b) = x * s + b.
Declaration
TensorFloat ScaleBias(TensorFloat X, TensorFloat S, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | S | The scale tensor. |
| TensorFloat | B | The bias tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ScatterElements(Tensor, TensorInt, Tensor, Int32, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices tensor with values specified by the updates tensor along a given axis.
ScatterElements updates the values depending on the reduction mode used.
Declaration
Tensor ScatterElements(Tensor X, TensorInt indices, Tensor updates, int axis, ScatterReductionMode reduction)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| Tensor | updates | The updates tensor. |
| Int32 | axis | The axis on which to perform the scatter. |
| ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
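A NumPy sketch of the per-element semantics may help (hypothetical `scatter_elements` helper, not the Sentis API; only the "none" and "add" reduction modes are shown): for each position in `indices`, the value of `updates` at that position is written to the output at the same position, except that the coordinate along `axis` is replaced by the index value.

```python
import numpy as np

def scatter_elements(x, indices, updates, axis, reduction="none"):
    out = x.copy()
    if reduction == "none":
        # plain replacement: out[..., indices[i], ...] = updates[i]
        np.put_along_axis(out, indices, updates, axis=axis)
    elif reduction == "add":
        # accumulate updates into the target positions
        for idx in np.ndindex(*indices.shape):
            target = list(idx)
            target[axis] = indices[idx]
            out[tuple(target)] += updates[idx]
    return out
```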
ScatterND(TensorFloat, TensorInt, TensorFloat, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices tensor with values specified by the updates tensor.
ScatterND updates the values depending on the reduction mode used.
Declaration
TensorFloat ScatterND(TensorFloat X, TensorInt indices, TensorFloat updates, ScatterReductionMode reduction)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| TensorFloat | updates | The updates tensor. |
| ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
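Unlike ScatterElements, each row of the `indices` tensor here addresses a whole slice of the output. A hedged NumPy sketch of the semantics (hypothetical `scatter_nd` helper, not the Sentis API):

```python
import numpy as np

def scatter_nd(x, indices, updates, reduction="none"):
    out = x.copy()
    # the last dimension of `indices` holds one multi-dimensional index;
    # iterate over all index rows
    for i in np.ndindex(*indices.shape[:-1]):
        key = tuple(indices[i])
        if reduction == "none":
            out[key] = updates[i]
        elif reduction == "add":
            out[key] += updates[i]
        elif reduction == "mul":
            out[key] *= updates[i]
    return out
```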
ScatterND(TensorInt, TensorInt, TensorInt, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices tensor with values specified by the updates tensor.
ScatterND updates the values depending on the reduction mode used.
Declaration
TensorInt ScatterND(TensorInt X, TensorInt indices, TensorInt updates, ScatterReductionMode reduction)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| TensorInt | indices | The indices tensor. |
| TensorInt | updates | The updates tensor. |
| ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Selu(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Selu activation function: f(x) = gamma * x if x >= 0, otherwise f(x) = gamma * (alpha * e^x - alpha).
Declaration
TensorFloat Selu(TensorFloat x, float alpha, float gamma)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | alpha | The alpha value to use for the Selu activation function. |
| Single | gamma | The gamma value to use for the Selu activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
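A NumPy sketch of the formula, following the ONNX Selu definition in which gamma scales both branches (hypothetical `selu` helper, not the Sentis API; the default alpha/gamma shown are the ONNX defaults):

```python
import numpy as np

def selu(x, alpha=1.67326, gamma=1.05070):
    # gamma * x for x >= 0, gamma * alpha * (e^x - 1) otherwise
    return np.where(x >= 0, gamma * x, gamma * (alpha * np.exp(x) - alpha))
```

For large negative inputs the function saturates near -gamma * alpha.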
Shape(Tensor, Int32, Int32)
Calculates the shape of an input tensor as a 1D TensorInt.
Declaration
TensorInt Shape(Tensor X, int start = 0, int end = 8)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32 | start | The inclusive start axis for slicing the shape of the input tensor. The default value is 0. |
| Int32 | end | The exclusive end axis for slicing the shape of the input tensor. The default value is 8. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Shrink(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Shrink activation function: f(x) = x + bias if x < -lambd. f(x) = x - bias if x > lambd. Otherwise f(x) = 0.
Declaration
TensorFloat Shrink(TensorFloat x, float bias, float lambd)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | bias | The bias value to use for the Shrink activation function. |
| Single | lambd | The lambda value to use for the Shrink activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
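A NumPy sketch of the three branches, assuming the ONNX Shrink semantics where values inside the band [-lambd, lambd] are zeroed (hypothetical `shrink` helper, not the Sentis API):

```python
import numpy as np

def shrink(x, bias=0.0, lambd=0.5):
    # x + bias below -lambd, x - bias above lambd, zero inside the band
    return np.where(x < -lambd, x + bias,
                    np.where(x > lambd, x - bias, 0.0))
```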
Sigmoid(TensorFloat)
Computes an output tensor by applying the element-wise Sigmoid activation function: f(x) = 1/(1 + e^(-x)).
Declaration
TensorFloat Sigmoid(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Sign(TensorFloat)
Performs an element-wise Sign math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
TensorFloat Sign(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Sign(TensorInt)
Performs an element-wise Sign math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
TensorInt Sign(TensorInt X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Sin(TensorFloat)
Computes an output tensor by applying the element-wise Sin trigonometric function: f(x) = sin(x).
Declaration
TensorFloat Sin(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Sinh(TensorFloat)
Computes an output tensor by applying the element-wise Sinh trigonometric function: f(x) = sinh(x).
Declaration
TensorFloat Sinh(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Size(TensorShape)
Calculates the number of elements of an input tensor shape as a scalar TensorInt.
Declaration
TensorInt Size(TensorShape shape)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorShape | shape | The input tensor shape. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Slice(Tensor, Int32[], Int32[], Int32[], Int32[])
Calculates an output tensor by slicing the input tensor along given axes with given starts, ends, and steps.
Declaration
Tensor Slice(Tensor X, int[] starts, int[] ends, int[] axes, int[] steps)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32[] | starts | The start index along each axis. |
| Int32[] | ends | The end index along each axis. |
| Int32[] | axes | The axes along which to slice. If this is null, the operation slices all axes. |
| Int32[] | steps | The step values for slicing along each axis. If this is null, the operation uses a step of 1 along each axis. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
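The operation mirrors Python extended slicing per axis. A NumPy sketch (hypothetical `slice_tensor` helper, not the Sentis API):

```python
import numpy as np

def slice_tensor(x, starts, ends, axes=None, steps=None):
    # defaults assumed here: all leading axes, step 1 per axis
    axes = axes if axes is not None else list(range(len(starts)))
    steps = steps if steps is not None else [1] * len(starts)
    slices = [slice(None)] * x.ndim
    for s, e, a, st in zip(starts, ends, axes, steps):
        slices[a] = slice(s, e, st)
    return x[tuple(slices)]
```

For example, starts=[2], ends=[8], steps=[2] on a 1D tensor behaves like `x[2:8:2]`.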
Softmax(TensorFloat, Int32)
Computes an output tensor by applying the Softmax activation function along an axis: f(x, axis) = exp(X) / ReduceSum(exp(X), axis).
Declaration
TensorFloat Softmax(TensorFloat X, int axis = -1)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | axis | The axis along which to apply the Softmax activation function. The default value is -1. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
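A NumPy sketch of the formula (hypothetical `softmax` helper, not the Sentis API). Subtracting the per-axis maximum before exponentiating leaves the result unchanged but avoids overflow, a standard trick any real implementation is likely to use:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability; the result is identical
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)
```

Each slice along `axis` of the output sums to 1.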
Softplus(TensorFloat)
Computes an output tensor by applying the element-wise Softplus activation function: f(x) = ln(e^x + 1).
Declaration
TensorFloat Softplus(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Softsign(TensorFloat)
Computes an output tensor by applying the element-wise Softsign activation function: f(x) = x/(|x| + 1).
Declaration
TensorFloat Softsign(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
SpaceToDepth(TensorFloat, Int32)
Computes the output tensor by permuting data from blocks of spatial data into depth.
Declaration
TensorFloat SpaceToDepth(TensorFloat x, int blocksize)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Int32 | blocksize | The size of the blocks to move the depth data into. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
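For an NCHW tensor, the permutation can be expressed as a reshape-transpose-reshape, following the ONNX SpaceToDepth definition (hypothetical `space_to_depth` helper, not the Sentis API):

```python
import numpy as np

def space_to_depth(x, blocksize):
    # x has shape (N, C, H, W); each blocksize x blocksize spatial block
    # moves into the channel dimension
    n, c, h, w = x.shape
    b = blocksize
    x = x.reshape(n, c, h // b, b, w // b, b)
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * b * b, h // b, w // b)
```

A (1, 1, 4, 4) input with blocksize 2 becomes (1, 4, 2, 2): four channels, each a 2x downsampled view at one block offset.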
Split(Tensor, Int32, Int32, Int32)
Calculates an output tensor by splitting the input tensor along a given axis between start and end.
Declaration
Tensor Split(Tensor X, int axis, int start, int end)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32 | axis | The axis along which to split the input tensor. |
| Int32 | start | The inclusive start value for the split. |
| Int32 | end | The exclusive end value for the split. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Sqrt(TensorFloat)
Computes an output tensor by applying the element-wise Sqrt math function: f(x) = sqrt(x).
Declaration
TensorFloat Sqrt(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Square(TensorFloat)
Computes an output tensor by applying the element-wise Square math function: f(x) = x * x.
Declaration
TensorFloat Square(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Sub(TensorFloat, TensorFloat)
Performs an element-wise Sub math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Sub(TensorFloat A, TensorFloat B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | A | The first input tensor. |
| TensorFloat | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
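Numpy-style broadcasting means size-1 dimensions stretch to match the other operand, so the two inputs need not have the same shape. A quick illustration:

```python
import numpy as np

# a (3, 1) tensor minus a (1, 4) tensor broadcasts to (3, 4)
a = np.arange(3).reshape(3, 1)
b = np.arange(4).reshape(1, 4)
print((a - b).shape)  # (3, 4)
```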
Sub(TensorInt, TensorInt)
Performs an element-wise Sub math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorInt Sub(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |
Sum(TensorFloat[])
Performs an element-wise Sum math operation: f(x1, x2, ..., xn) = x1 + x2 + ... + xn.
This supports numpy-style broadcasting of input tensors.
Declaration
TensorFloat Sum(TensorFloat[] tensors)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat[] | tensors | The input tensors. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Swish(TensorFloat)
Computes an output tensor by applying the element-wise Swish activation function: f(x) = sigmoid(x) * x = x / (1 + e^{-x}).
Declaration
TensorFloat Swish(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Tan(TensorFloat)
Computes an output tensor by applying the element-wise Tan trigonometric function: f(x) = tan(x).
Declaration
TensorFloat Tan(TensorFloat x)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Tanh(TensorFloat)
Computes an output tensor by applying the element-wise Tanh activation function: f(x) = tanh(x).
Declaration
TensorFloat Tanh(TensorFloat X)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
ThresholdedRelu(TensorFloat, Single)
Computes an output tensor by applying the element-wise ThresholdedRelu activation function: f(x) = x if x > alpha, otherwise f(x) = 0.
Declaration
TensorFloat ThresholdedRelu(TensorFloat x, float alpha)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | x | The input tensor. |
| Single | alpha | The alpha value to use for the ThresholdedRelu activation function. |
Returns
| Type | Description |
|---|---|
| TensorFloat | The computed output tensor. |
Tile(Tensor, Int32[])
Calculates an output tensor by repeating the input tensor a given number of times along each axis.
Declaration
Tensor Tile(Tensor X, int[] repeats)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32[] | repeats | The number of times to tile the input tensor along each axis. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
TopK(TensorFloat, Int32, Int32, Boolean, Boolean)
Calculates the top-K largest or smallest elements of an input tensor along a given axis.
Declaration
Tensor[] TopK(TensorFloat X, int k, int axis, bool largest, bool sorted)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| Int32 | k | The number of elements to calculate. |
| Int32 | axis | The axis along which to perform the top-K operation. |
| Boolean | largest | Whether to calculate the top-K largest elements. If this is false, the operation calculates the top-K smallest elements. |
| Boolean | sorted | Whether to return the elements in sorted order. |
Returns
| Type | Description |
|---|---|
| Tensor[] | The output tensors: the top-K values tensor and the corresponding indices tensor. |
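A NumPy sketch of the values-and-indices pair the operation produces (hypothetical `topk` helper, not the Sentis API; this sketch always returns the elements in sorted order):

```python
import numpy as np

def topk(x, k, axis=-1, largest=True):
    # stable sort so ties keep the earlier index, then take the first k
    order = np.argsort(-x if largest else x, axis=axis, kind="stable")
    idx = np.take(order, range(k), axis=axis)
    vals = np.take_along_axis(x, idx, axis=axis)
    return vals, idx
```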
Transpose(Tensor)
Calculates an output tensor by reversing the dimensions of the input tensor.
Declaration
Tensor Transpose(Tensor x)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | x | The input tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Transpose(Tensor, Int32[])
Calculates an output tensor by permuting the axes and data of the input tensor according to the given permutations.
Declaration
Tensor Transpose(Tensor x, int[] permutations)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | x | The input tensor. |
| Int32[] | permutations | The axes to sample the output tensor from in the input tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
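The permutation convention matches NumPy's: entry i of `permutations` names the input axis that supplies output axis i. A quick illustration:

```python
import numpy as np

# output axis 0 comes from input axis 2, and so on
x = np.zeros((2, 3, 4))
print(np.transpose(x, (2, 0, 1)).shape)  # (4, 2, 3)
```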
Tril(Tensor, Int32)
Computes the output tensor by retaining the lower triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
Tensor Tril(Tensor X, int k = 0)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32 | k | The offset from the diagonal to keep. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Triu(Tensor, Int32)
Computes the output tensor by retaining the upper triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
Tensor Triu(Tensor X, int k = 0)
Parameters
| Type | Name | Description |
|---|---|---|
| Tensor | X | The input tensor. |
| Int32 | k | The offset from the diagonal to exclude. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
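NumPy's `np.tril` and `np.triu` use the same diagonal-offset convention as Tril and Triu, so they can illustrate how `k` shifts the boundary (this is a NumPy illustration, not the Sentis implementation):

```python
import numpy as np

x = np.arange(1, 10).reshape(3, 3)
# k > 0 moves the boundary above the main diagonal, k < 0 below it
lower = np.tril(x, k=0)   # keep values on and below the main diagonal
upper = np.triu(x, k=1)   # keep values strictly above the main diagonal
```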
Where(TensorInt, Tensor, Tensor)
Performs an element-wise Where logical operation: f(condition, a, b) = a if condition is true, otherwise f(condition, a, b) = b.
Declaration
Tensor Where(TensorInt C, Tensor A, Tensor B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | C | The condition tensor. |
| Tensor | A | The first input tensor. |
| Tensor | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| Tensor | The computed output tensor. |
Xor(TensorInt, TensorInt)
Performs an element-wise Xor logical operation: f(a, b) = a ^ b.
Declaration
TensorInt Xor(TensorInt A, TensorInt B)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | A | The first input tensor. |
| TensorInt | B | The second input tensor. |
Returns
| Type | Description |
|---|---|
| TensorInt | The computed output tensor. |