Class Ops
Namespace: Unity.Sentis
Syntax
public abstract class Ops : IDisposable
Constructors
Ops(BackendType, ITensorAllocator)
Declaration
protected Ops(BackendType backendType, ITensorAllocator allocator)
Parameters
Type | Name | Description |
---|---|---|
BackendType | backendType | |
ITensorAllocator | allocator | |
Properties
backendType
Declaration
public BackendType backendType { get; }
Property Value
Type | Description |
---|---|
BackendType | |
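Examples
The following is a minimal usage sketch rather than a verbatim package sample: it assumes the Ops instance is obtained through WorkerFactory.CreateOps with a TensorCachingAllocator, as in the Sentis tensor-operations examples. If your Sentis version constructs Ops differently, substitute that mechanism; the ops and allocator names are illustrative.
```csharp
using Unity.Sentis;
using UnityEngine;

public class OpsExample : MonoBehaviour
{
    ITensorAllocator allocator;
    Ops ops;

    void Start()
    {
        // Assumed helper: WorkerFactory.CreateOps(BackendType, ITensorAllocator).
        allocator = new TensorCachingAllocator();
        ops = WorkerFactory.CreateOps(BackendType.GPUCompute, allocator);

        Debug.Log(ops.backendType); // GPUCompute
    }

    void OnDestroy()
    {
        // Ops implements IDisposable; release it and the allocator when finished.
        ops?.Dispose();
        allocator?.Dispose();
    }
}
```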
Methods
Abs(TensorFloat)
Computes an output tensor by applying the element-wise Abs
math function: f(x) = |x|.
Declaration
public TensorFloat Abs(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Abs(TensorInt)
Computes an output tensor by applying the element-wise Abs
math function: f(x) = |x|.
Declaration
public TensorInt Abs(TensorInt x)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
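Examples
A hedged sketch of both overloads, assuming an Ops instance named ops created as in the earlier example; the input data and variable names are illustrative.
```csharp
using Unity.Sentis;

static void AbsExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(3), new[] { -1.5f, 0f, 2f });
    TensorFloat y = ops.Abs(x);   // [1.5, 0, 2]

    using var xi = new TensorInt(new TensorShape(3), new[] { -3, 0, 7 });
    TensorInt yi = ops.Abs(xi);   // [3, 0, 7]
}
```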
Acos(TensorFloat)
Computes an output tensor by applying the element-wise Acos
trigonometric function: f(x) = acos(x).
Declaration
public TensorFloat Acos(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Acosh(TensorFloat)
Computes an output tensor by applying the element-wise Acosh
trigonometric function: f(x) = acosh(x).
Declaration
public TensorFloat Acosh(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Add(Single, TensorFloat)
Performs an element-wise Add
math operation between a float and a tensor: f(a, b) = a + b.
Declaration
public TensorFloat Add(float a, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
Single | a | The first argument as a float. |
TensorFloat | B | The second argument as a tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Add(TensorFloat, Single)
Performs an element-wise Add
math operation between a tensor and a float: f(a, b) = a + b.
Declaration
public TensorFloat Add(TensorFloat A, float b)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first argument as a tensor. |
Single | b | The second argument as a float. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Add(TensorFloat, TensorFloat)
Performs an element-wise Add
math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Add(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Add(TensorInt, TensorInt)
Performs an element-wise Add
math operation: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Add(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
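Examples
A hedged sketch of the tensor-tensor and tensor-float overloads; ops is an assumed Ops instance, and the result shape follows the numpy-style broadcasting the description mentions.
```csharp
using Unity.Sentis;

static void AddExample(Ops ops)
{
    // Shapes (2, 3) and (1, 3) broadcast together to (2, 3).
    using var a = new TensorFloat(new TensorShape(2, 3), new[] { 0f, 1f, 2f, 3f, 4f, 5f });
    using var b = new TensorFloat(new TensorShape(1, 3), new[] { 10f, 20f, 30f });

    TensorFloat sum = ops.Add(a, b);    // [[10, 21, 32], [13, 24, 35]]
    TensorFloat plus1 = ops.Add(a, 1f); // tensor + scalar overload
}
```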
And(TensorInt, TensorInt)
Performs an element-wise And
logical operation: f(a, b) = a & b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt And(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ArgMax(TensorFloat, Int32, Boolean, Boolean)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
public TensorInt ArgMax(TensorFloat X, int axis, bool keepdim, bool selectLastIndex = false)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ArgMax(TensorInt, Int32, Boolean, Boolean)
Computes the indices of the maximum elements of the input tensor along a given axis.
Declaration
public TensorInt ArgMax(TensorInt X, int axis, bool keepdim, bool selectLastIndex = false)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Int32 | axis | The axis along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ArgMin(TensorFloat, Int32, Boolean, Boolean)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
public TensorInt ArgMin(TensorFloat X, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ArgMin(TensorInt, Int32, Boolean, Boolean)
Computes the indices of the minimum elements of the input tensor along a given axis.
Declaration
public TensorInt ArgMin(TensorInt X, int axis, bool keepdim, bool selectLastIndex)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Int32 | axis | The axis along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Boolean | selectLastIndex | Whether to perform the operation from the back of the axis. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
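Examples
A hedged sketch covering ArgMax and ArgMin with keepdim and selectLastIndex; ops is an assumed Ops instance and the variable names are illustrative.
```csharp
using Unity.Sentis;

static void ArgExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(2, 3), new[] { 1f, 5f, 2f, 7f, 0f, 7f });

    // Reduce axis 1; keepdim: false gives shape (2).
    TensorInt maxFirst = ops.ArgMax(x, axis: 1, keepdim: false);                        // [1, 0]
    // selectLastIndex: true returns the last occurrence of a tied maximum.
    TensorInt maxLast = ops.ArgMax(x, axis: 1, keepdim: false, selectLastIndex: true);  // [1, 2]
    // keepdim: true keeps the reduced axis with size 1 -> shape (2, 1).
    TensorInt minKeep = ops.ArgMin(x, axis: 1, keepdim: true, selectLastIndex: false);  // [[0], [1]]
}
```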
Asin(TensorFloat)
Computes an output tensor by applying the element-wise Asin
trigonometric function: f(x) = asin(x).
Declaration
public TensorFloat Asin(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Asinh(TensorFloat)
Computes an output tensor by applying the element-wise Asinh
trigonometric function: f(x) = asinh(x).
Declaration
public TensorFloat Asinh(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Atan(TensorFloat)
Computes an output tensor by applying the element-wise Atan
trigonometric function: f(x) = atan(x).
Declaration
public TensorFloat Atan(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Atanh(TensorFloat)
Computes an output tensor by applying the element-wise Atanh
trigonometric function: f(x) = atanh(x).
Declaration
public TensorFloat Atanh(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
AveragePool(TensorFloat, Int32[], Int32[], Int32[])
Calculates an output tensor by pooling the mean values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
public TensorFloat AveragePool(TensorFloat X, int[] pool, int[] stride, int[] pad)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32[] | pool | The size of the kernel along each spatial axis. |
Int32[] | stride | The stride along each spatial axis. |
Int32[] | pad | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
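Examples
A hedged sketch of a 2D average pool; ops is an assumed Ops instance, and the example assumes a channels-first (N, C, H, W) layout for the spatial dimensions.
```csharp
using Unity.Sentis;

static void AveragePoolExample(Ops ops)
{
    // 1 batch, 1 channel, 4x4 spatial values 1..16.
    using var x = new TensorFloat(new TensorShape(1, 1, 4, 4), new float[]
    {
        1, 2, 3, 4,
        5, 6, 7, 8,
        9, 10, 11, 12,
        13, 14, 15, 16
    });

    // 2x2 kernel, stride 2, no padding -> output shape (1, 1, 2, 2): [[3.5, 5.5], [11.5, 13.5]].
    TensorFloat y = ops.AveragePool(x, pool: new[] { 2, 2 }, stride: new[] { 2, 2 }, pad: new[] { 0, 0, 0, 0 });
}
```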
AxisNormalization(TensorFloat, TensorFloat, TensorFloat, Single)
Computes the mean variance on the last dimension of the input tensor and normalizes it according to scale and bias tensors.
Declaration
public TensorFloat AxisNormalization(TensorFloat X, TensorFloat S, TensorFloat B, float epsilon)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
Single | epsilon | The epsilon value the layer uses to avoid division by zero. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Bernoulli(TensorFloat, DataType, Nullable<Single>)
Generates an output tensor with values 0 or 1 from a Bernoulli distribution. The input tensor contains the probabilities to use for generating the output values.
Declaration
public Tensor Bernoulli(TensorFloat x, DataType dataType, float? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The probabilities input tensor. |
DataType | dataType | The data type of the output tensor. |
Nullable<Single> | seed | The optional seed to use for the random number generation. If this is |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Cast(Tensor, DataType)
Computes the output tensor using an element-wise Cast function: f(x) = (float)x or f(x) = (int)x depending on the value of toType.
Declaration
public Tensor Cast(Tensor x, DataType toType)
Parameters
Type | Name | Description |
---|---|---|
Tensor | x | The input tensor. |
DataType | toType | The data type to cast to as a |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
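Examples
A hedged sketch of casting between the two data types; ops is an assumed Ops instance, and the example assumes the DataType enum exposes Float and Int values.
```csharp
using Unity.Sentis;

static void CastExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(3), new[] { 0.9f, 1.5f, -2.7f });

    // Element-wise f(x) = (int)x, per the description above.
    var asInt = ops.Cast(x, DataType.Int) as TensorInt;
    // Element-wise f(x) = (float)x, back again.
    var asFloat = ops.Cast(asInt, DataType.Float) as TensorFloat;
}
```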
Ceil(TensorFloat)
Computes an output tensor by applying the element-wise Ceil
math function: f(x) = ceil(x).
Declaration
public TensorFloat Ceil(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Celu(TensorFloat, Single)
Computes an output tensor by applying the element-wise Celu
activation function: f(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
Declaration
public TensorFloat Celu(TensorFloat x, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | alpha | The alpha value to use for the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Clip(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Clip
math function: f(x) = clamp(x, min, max).
Declaration
public TensorFloat Clip(TensorFloat x, float min, float max)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | min | The lower clip value. |
Single | max | The upper clip value. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Compress(Tensor, TensorInt, Int32)
Selects slices of an input tensor along a given axis according to a condition tensor.
Declaration
public Tensor Compress(Tensor X, TensorInt indices, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Int32 | axis | The axis along which to compress. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Concat(Tensor[], Int32)
Calculates an output tensor by concatenating the input tensors along a given axis.
Declaration
public Tensor Concat(Tensor[] tensors, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor[] | tensors | The input tensors. |
Int32 | axis | The axis along which to concatenate the input tensors. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
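Examples
A hedged sketch of concatenation along axis 0; ops is an assumed Ops instance and the data is illustrative.
```csharp
using Unity.Sentis;

static void ConcatExample(Ops ops)
{
    using var a = new TensorFloat(new TensorShape(2, 2), new[] { 1f, 2f, 3f, 4f });
    using var b = new TensorFloat(new TensorShape(1, 2), new[] { 5f, 6f });

    // (2, 2) and (1, 2) concatenated along axis 0 -> (3, 2): [[1, 2], [3, 4], [5, 6]].
    Tensor y = ops.Concat(new Tensor[] { a, b }, axis: 0);
}
```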
ConstantOfShape(TensorShape, Int32)
Generates a tensor with a given shape filled with a given value.
Declaration
public TensorInt ConstantOfShape(TensorShape X, int value)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | X | The input tensor shape. |
Int32 | value | The fill value. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ConstantOfShape(TensorShape, Single)
Generates a tensor with a given shape filled with a given value.
Declaration
public TensorFloat ConstantOfShape(TensorShape X, float value)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | X | The input tensor shape. |
Single | value | The fill value. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Conv(TensorFloat, TensorFloat, TensorFloat, Int32, Int32[], Int32[], Int32[], FusableActivation)
Applies a convolution filter to an input tensor.
Declaration
public TensorFloat Conv(TensorFloat X, TensorFloat K, TensorFloat B, int groups, int[] stride, int[] pad, int[] dilation, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | K | The filter tensor. |
TensorFloat | B | The optional bias tensor. |
Int32 | groups | The number of groups that input channels and output channels are divided into. |
Int32[] | stride | The optional stride value for each spatial dimension of the filter. |
Int32[] | pad | The optional lower and upper padding values for each spatial dimension of the filter. |
Int32[] | dilation | The optional dilation value of each spatial dimension of the filter. |
FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
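Examples
A hedged sketch of a 2D convolution; ops is an assumed Ops instance, the layout is assumed to be channels-first (N, C, H, W) with the kernel as (output channels, input channels per group, kH, kW), and FusableActivation.None is assumed to mean no fused activation.
```csharp
using Unity.Sentis;
using Unity.Sentis.Layers;

static void ConvExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(1, 1, 5, 5), new float[25]);  // input
    using var k = new TensorFloat(new TensorShape(4, 1, 3, 3), new float[36]);  // 4 filters, 3x3
    using var b = new TensorFloat(new TensorShape(4), new float[4]);            // one bias per filter

    // Stride 1, one pixel of padding on every side, dilation 1 -> output (1, 4, 5, 5).
    TensorFloat y = ops.Conv(x, k, b,
        groups: 1,
        stride: new[] { 1, 1 },
        pad: new[] { 1, 1, 1, 1 },
        dilation: new[] { 1, 1 },
        fusedActivation: FusableActivation.None);
}
```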
Conv2DTrans(TensorFloat, TensorFloat, TensorFloat, Int32[], Int32[], Int32[], FusableActivation)
Applies a transpose convolution filter to an input tensor.
Declaration
public TensorFloat Conv2DTrans(TensorFloat X, TensorFloat K, TensorFloat B, int[] stride, int[] pad, int[] outputAdjustment, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | K | The filter tensor. |
TensorFloat | B | The optional bias tensor. |
Int32[] | stride | The optional stride value for each spatial dimension of the filter. |
Int32[] | pad | The optional lower and upper padding values for each spatial dimension of the filter. |
Int32[] | outputAdjustment | The output padding value for each spatial dimension in the filter. |
FusableActivation | fusedActivation | The fused activation type to apply after the convolution. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Copy(Tensor)
Creates a copy of a given input tensor with the same shape and values.
Declaration
public Tensor Copy(Tensor x)
Parameters
Type | Name | Description |
---|---|---|
Tensor | x | The input tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Cos(TensorFloat)
Computes an output tensor by applying the element-wise Cos
trigonometric function: f(x) = cos(x).
Declaration
public TensorFloat Cos(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Cosh(TensorFloat)
Computes an output tensor by applying the element-wise Cosh
trigonometric function: f(x) = cosh(x).
Declaration
public TensorFloat Cosh(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
CumSum(TensorFloat, Int32, Boolean, Boolean)
Performs the cumulative sum along a given axis.
Declaration
public TensorFloat CumSum(TensorFloat X, int axis, bool reverse = false, bool exclusive = false)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to apply the cumulative sum. |
Boolean | reverse | Whether to perform the cumulative sum from the end of the axis. |
Boolean | exclusive | Whether to include the respective input element in the cumulative sum. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
CumSum(TensorInt, Int32, Boolean, Boolean)
Performs the cumulative sum along a given axis.
Declaration
public TensorInt CumSum(TensorInt X, int axis, bool reverse = false, bool exclusive = false)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Int32 | axis | The axis along which to apply the cumulative sum. |
Boolean | reverse | Whether to perform the cumulative sum from the end of the axis. |
Boolean | exclusive | Whether to include the respective input element in the cumulative sum. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
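Examples
A hedged sketch of the cumulative sum options; ops is an assumed Ops instance and the expected values assume ONNX-style reverse and exclusive semantics.
```csharp
using Unity.Sentis;

static void CumSumExample(Ops ops)
{
    using var x = new TensorInt(new TensorShape(4), new[] { 1, 2, 3, 4 });

    TensorInt forward = ops.CumSum(x, axis: 0);                                     // [1, 3, 6, 10]
    TensorInt reverse = ops.CumSum(x, axis: 0, reverse: true);                      // [10, 9, 7, 4]
    TensorInt exclusive = ops.CumSum(x, axis: 0, reverse: false, exclusive: true);  // [0, 1, 3, 6]
}
```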
Dense(TensorFloat, TensorFloat, TensorFloat, FusableActivation)
Performs a matrix multiplication operation: f(x, w, b) = X x W + B.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Dense(TensorFloat X, TensorFloat W, TensorFloat B, FusableActivation fusedActivation)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | W | The weights tensor. |
TensorFloat | B | The bias tensor. |
FusableActivation | fusedActivation | The fused activation to apply to the output tensor after the dense operation. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
DepthToSpace(TensorFloat, Int32, DepthToSpaceMode)
Computes the output tensor by permuting data from depth into blocks of spatial data.
Declaration
public TensorFloat DepthToSpace(TensorFloat X, int blocksize, DepthToSpaceMode mode)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | blocksize | The size of the blocks to move the depth data into. |
DepthToSpaceMode | mode | The ordering of the data in the output tensor as a |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Dispose()
Releases the resources held by the Ops instance.
Declaration
public void Dispose()
Implements
IDisposable.Dispose()
Div(TensorFloat, Single)
Performs an element-wise Div
math operation between a tensor and a float: f(a, b) = a / b.
Declaration
public TensorFloat Div(TensorFloat A, float b)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first argument as a tensor. |
Single | b | The second argument as a float. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Div(TensorFloat, TensorFloat)
Performs an element-wise Div
math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Div(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Div(TensorInt, TensorInt)
Performs an element-wise Div
math operation: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Div(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Einsum(String, TensorFloat[])
Performs an Einsum
math operation.
Declaration
public TensorFloat Einsum(string equation, params TensorFloat[] operands)
Parameters
Type | Name | Description |
---|---|---|
String | equation | The equation of the Einstein summation as a comma-separated list of subscript labels. |
TensorFloat[] | operands | The input tensors of the Einsum. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
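Examples
A hedged sketch of two common equations; ops is an assumed Ops instance and the operand shapes are illustrative.
```csharp
using Unity.Sentis;

static void EinsumExample(Ops ops)
{
    using var a = new TensorFloat(new TensorShape(2, 3), new float[6]);
    using var b = new TensorFloat(new TensorShape(3, 4), new float[12]);

    // "ij,jk->ik" is a matrix multiplication written as an Einstein summation -> shape (2, 4).
    TensorFloat product = ops.Einsum("ij,jk->ik", a, b);

    // "ij->ji" transposes a single operand -> shape (3, 2).
    TensorFloat transposed = ops.Einsum("ij->ji", a);
}
```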
Elu(TensorFloat, Single)
Computes an output tensor by applying the element-wise Elu
activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * (e^x - 1).
Declaration
public TensorFloat Elu(TensorFloat X, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Single | alpha | The alpha value to use for the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Equal(TensorFloat, TensorFloat)
Performs an element-wise Equal
logical comparison operation: f(a, b) = 1 if a == b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Equal(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Equal(TensorInt, TensorInt)
Performs an element-wise Equal
logical comparison operation: f(a, b) = 1 if a == b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Equal(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Erf(TensorFloat)
Computes an output tensor by applying the element-wise Erf
activation function: f(x) = erf(x).
Declaration
public TensorFloat Erf(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Exp(TensorFloat)
Computes an output tensor by applying the element-wise Exp
math function: f(x) = exp(x).
Declaration
public TensorFloat Exp(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Expand(Tensor, TensorShape)
Calculates an output tensor by broadcasting the input tensor into a given shape.
Declaration
public Tensor Expand(Tensor X, TensorShape shape)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorShape | shape | The shape to broadcast the input shape together with to calculate the output tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Floor(TensorFloat)
Computes an output tensor by applying the element-wise Floor
math function: f(x) = floor(x).
Declaration
public TensorFloat Floor(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
FMod(TensorFloat, TensorFloat)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat FMod(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
FMod(TensorInt, TensorInt)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the dividend, as in C#.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt FMod(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Gather(Tensor, TensorInt, Int32)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
public Tensor Gather(Tensor X, TensorInt indices, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Int32 | axis | The axis along which to gather. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
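Examples
A hedged sketch of gathering rows by index; ops is an assumed Ops instance and the data is illustrative.
```csharp
using Unity.Sentis;

static void GatherExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(3, 2), new[] { 0f, 1f, 10f, 11f, 20f, 21f });
    using var indices = new TensorInt(new TensorShape(2), new[] { 2, 0 });

    // Take rows 2 and 0 along axis 0 -> shape (2, 2): [[20, 21], [0, 1]].
    Tensor rows = ops.Gather(x, indices, axis: 0);
}
```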
GatherElements(Tensor, TensorInt, Int32)
Takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
Declaration
public Tensor GatherElements(Tensor X, TensorInt indices, int axis)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Int32 | axis | The axis along which to gather. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
GatherND(Tensor, TensorInt, Int32)
Takes slices of values from the batched input tensor indexed by the indices tensor.
Declaration
public Tensor GatherND(Tensor X, TensorInt indices, int batchDims)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Int32 | batchDims | The number of batch dimensions of the input tensor, the gather begins at the next dimension. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Gelu(TensorFloat)
Computes an output tensor by applying the element-wise Gelu
activation function: f(x) = x / 2 * (1 + erf(x / sqrt(2))).
Declaration
public TensorFloat Gelu(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
GlobalAveragePool(TensorFloat)
Calculates an output tensor by pooling the mean values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
public TensorFloat GlobalAveragePool(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
GlobalMaxPool(TensorFloat)
Calculates an output tensor by pooling the maximum values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Declaration
public TensorFloat GlobalMaxPool(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Greater(TensorFloat, TensorFloat)
Performs an element-wise Greater
logical comparison operation: f(a, b) = 1 if a > b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Greater(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Greater(TensorInt, TensorInt)
Performs an element-wise Greater
logical comparison operation: f(a, b) = 1 if a > b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Greater(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
GreaterOrEqual(TensorFloat, TensorFloat)
Performs an element-wise GreaterOrEqual
logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt GreaterOrEqual(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
GreaterOrEqual(TensorInt, TensorInt)
Performs an element-wise GreaterOrEqual
logical comparison operation: f(a, b) = 1 if a >= b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt GreaterOrEqual(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Hardmax(TensorFloat, Int32)
Computes an output tensor by applying the Hardmax
activation function along an axis: f(x, axis) = 1 if x is the first maximum value along the specified axis, otherwise f(x) = 0.
Declaration
public TensorFloat Hardmax(TensorFloat X, int axis = -1)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to apply the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
HardSigmoid(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise HardSigmoid
activation function: f(x) = clamp(alpha * x + beta, 0, 1).
Declaration
public TensorFloat HardSigmoid(TensorFloat x, float alpha, float beta)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | alpha | The alpha value to use for the |
Single | beta | The beta value to use for the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
HardSwish(TensorFloat)
Computes an output tensor by applying the element-wise HardSwish
activation function: f(x) = x * max(0, min(1, 1/6 * x + 0.5)).
Declaration
public TensorFloat HardSwish(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
InstanceNormalization(TensorFloat, TensorFloat, TensorFloat, Single)
Computes the mean variance on the spatial dimensions of the input tensor and normalizes them according to scale and bias tensors.
Declaration
public TensorFloat InstanceNormalization(TensorFloat X, TensorFloat S, TensorFloat B, float epsilon)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
Single | epsilon | The epsilon value the layer uses to avoid division by zero. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
IsInf(TensorFloat, Boolean, Boolean)
Performs an element-wise IsInf
logical operation: f(x) = 1 elementwise if x is +Inf and detectPositive is true, or x is -Inf and detectNegative is true. Otherwise f(x) = 0.
Declaration
public TensorInt IsInf(TensorFloat X, bool detectNegative, bool detectPositive)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Boolean | detectNegative | Whether to detect negative infinities in the |
Boolean | detectPositive | Whether to detect positive infinities in the |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
IsNaN(TensorFloat)
Performs an element-wise IsNaN
logical operation: f(x) = 1 if x is NaN, otherwise f(x) = 0.
Declaration
public TensorInt IsNaN(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
LeakyRelu(TensorFloat, Single)
Computes an output tensor by applying the element-wise LeakyRelu
activation function: f(x) = x if x >= 0, otherwise f(x) = alpha * x.
Declaration
public TensorFloat LeakyRelu(TensorFloat x, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | alpha | The alpha value to use for the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Less(TensorFloat, TensorFloat)
Performs an element-wise Less
logical comparison operation: f(a, b) = 1 if a < b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Less(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Less(TensorInt, TensorInt)
Performs an element-wise Less
logical comparison operation: f(a, b) = 1 if a < b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Less(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
LessOrEqual(TensorFloat, TensorFloat)
Performs an element-wise LessOrEqual
logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt LessOrEqual(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
LessOrEqual(TensorInt, TensorInt)
Performs an element-wise LessOrEqual
logical comparison operation: f(a, b) = 1 if a <= b, otherwise f(x) = 0.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt LessOrEqual(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Log(TensorFloat)
Computes an output tensor by applying the element-wise Log
math function: f(x) = log(x).
Declaration
public TensorFloat Log(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
LogSoftmax(TensorFloat, Int32)
Computes an output tensor by applying the LogSoftmax
activation function along an axis: f(x, axis) = log(Softmax(x, axis)).
Declaration
public TensorFloat LogSoftmax(TensorFloat X, int axis = -1)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to apply the |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
LRN(TensorFloat, Single, Single, Single, Int32)
Normalizes the input tensor over local input regions.
Declaration
public TensorFloat LRN(TensorFloat X, float alpha, float beta, float bias, int size)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Single | alpha | The scaling parameter to use for the normalization. |
Single | beta | The exponent to use for the normalization. |
Single | bias | The bias value to use for the normalization. |
Int32 | size | The number of channels to sum over. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
LSTM(TensorFloat, TensorFloat, TensorFloat, TensorFloat, TensorInt, TensorFloat, TensorFloat, TensorFloat, RnnDirection, RnnActivation[], Single[], Single[], Boolean, Single, RnnLayout)
Generates an output tensor by computing a one-layer long short-term memory (LSTM) on an input tensor.
Declaration
public TensorFloat[] LSTM(TensorFloat X, TensorFloat W, TensorFloat R, TensorFloat B, TensorInt sequenceLens, TensorFloat initialH, TensorFloat initialC, TensorFloat P, RnnDirection direction, RnnActivation[] activations, float[] activationAlpha, float[] activationBeta, bool inputForget, float clip, RnnLayout layout)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input sequences tensor. |
TensorFloat | W | The weights tensor for the gates of the LSTM. |
TensorFloat | R | The recurrent weights tensor for the gates of the LSTM. |
TensorFloat | B | The optional bias tensor for the input gate of the LSTM. |
TensorInt | sequenceLens | The optional 1D tensor specifying the lengths of the sequences in a batch. |
TensorFloat | initialH | The optional initial values tensor of the hidden neurons of the LSTM. If this is |
TensorFloat | initialC | The optional initial values tensor of the cells of the LSTM. If this is |
TensorFloat | P | The optional weight tensor for the peepholes of the LSTM. If this is |
RnnDirection | direction | The direction of the LSTM as an |
RnnActivation[] | activations | The activation functions of the LSTM as an array of |
Single[] | activationAlpha | The alpha values of the activation functions of the LSTM. |
Single[] | activationBeta | The beta values of the activation functions of the LSTM. |
Boolean | inputForget | Whether to forget the input values in the LSTM. If this is |
Single | clip | The cell clip threshold of the LSTM. |
RnnLayout | layout | The layout of the tensors as an |
Returns
Type | Description |
---|---|
TensorFloat[] | The computed output tensor. |
MatMul(TensorFloat, TensorFloat)
Performs a multi-dimensional matrix multiplication operation: f(a, b) = a x b.
Declaration
public TensorFloat MatMul(TensorFloat X, TensorFloat Y)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The first input tensor. |
TensorFloat | Y | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
MatMul2D(TensorFloat, Boolean, TensorFloat, Boolean)
Performs a matrix multiplication operation with optional transposes: f(a, b) = a' x b'.
Declaration
public TensorFloat MatMul2D(TensorFloat X, bool xTranspose, TensorFloat y, bool yTranspose)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The first input tensor. |
Boolean | xTranspose | Whether to transpose the first input tensor before performing the matrix multiplication. |
TensorFloat | y | The second input tensor. |
Boolean | yTranspose | Whether to transpose the second input tensor before performing the matrix multiplication. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
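Examples
A hedged sketch of both matrix multiplication entry points; ops is an assumed Ops instance and the operand shapes are illustrative.
```csharp
using Unity.Sentis;

static void MatMulExample(Ops ops)
{
    using var a = new TensorFloat(new TensorShape(2, 3), new float[6]);
    using var b = new TensorFloat(new TensorShape(3, 4), new float[12]);
    using var c = new TensorFloat(new TensorShape(2, 4), new float[8]);

    // (2, 3) x (3, 4) -> (2, 4).
    TensorFloat ab = ops.MatMul(a, b);

    // Transpose the first operand before multiplying: transpose(a) is (3, 2), so the result is (3, 4).
    TensorFloat atc = ops.MatMul2D(a, xTranspose: true, c, yTranspose: false);
}
```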
Max(TensorFloat[])
Performs an element-wise Max
math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Max(TensorFloat[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Max(TensorInt[])
Performs an element-wise Max
math operation: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Max(TensorInt[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorInt[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
MaxPool(TensorFloat, Int32[], Int32[], Int32[])
Calculates an output tensor by pooling the maximum values of the input tensor across its spatial dimensions according to the given pool and stride values.
Declaration
public TensorFloat MaxPool(TensorFloat X, int[] pool, int[] stride, int[] pad)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32[] | pool | The size of the kernel along each spatial axis. |
Int32[] | stride | The stride along each spatial axis. |
Int32[] | pad | The lower and upper padding values for each spatial dimension. For example, [pad_left, pad_right] for 1D, or [pad_top, pad_bottom, pad_left, pad_right] for 2D. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Mean(TensorFloat[])
Performs an element-wise Mean
math operation: f(x1, x2 ... xn) = (x1 + x2 ... xn) / n.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Mean(TensorFloat[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Min(TensorFloat[])
Performs an element-wise Min
math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Min(TensorFloat[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Min(TensorInt[])
Performs an element-wise Min
math operation: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Min(TensorInt[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorInt[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Mod(TensorInt, TensorInt)
Performs an element-wise Mod
math operation: f(a, b) = a % b.
The sign of the remainder is the same as the sign of the divisor, as in Python.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Mod(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
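Examples
To make the sign conventions concrete, a hedged sketch comparing Mod (divisor's sign, as in Python) with FMod (dividend's sign, as in C#); ops is an assumed Ops instance.
```csharp
using Unity.Sentis;

static void ModExample(Ops ops)
{
    using var a = new TensorInt(new TensorShape(2), new[] { -7, 7 });
    using var b = new TensorInt(new TensorShape(2), new[] { 3, -3 });

    TensorInt mod = ops.Mod(a, b);    // divisor's sign:  [ 2, -2]
    TensorInt fmod = ops.FMod(a, b);  // dividend's sign: [-1,  1]
}
```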
Mul(Single, TensorFloat)
Performs an element-wise Mul
math operation between a float and a tensor: f(a, b) = a * b.
Declaration
public TensorFloat Mul(float a, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
Single | a | The first argument as a float. |
TensorFloat | B | The second argument as a tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Mul(TensorFloat, Single)
Performs an element-wise Mul
math operation between a tensor and a float: f(a, b) = a * b.
Declaration
public TensorFloat Mul(TensorFloat A, float b)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first argument as a tensor. |
Single | b | The second argument as a float. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Mul(TensorFloat, TensorFloat)
Performs an element-wise Mul
math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Mul(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Mul(TensorInt, TensorInt)
Performs an element-wise Mul
math operation: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Mul(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Multinomial(TensorFloat, Int32, Nullable<Single>)
Generates an output tensor with values from a multinomial distribution according to the probabilities given by the input tensor.
Declaration
public TensorInt Multinomial(TensorFloat x, int count, float? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The probabilities input tensor. |
Int32 | count | The number of times to sample the input. |
Nullable<Single> | seed | The optional seed to use for the random number generation. If this is |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Neg(TensorFloat)
Computes an output tensor by applying the element-wise Neg
math function: f(x) = -x.
Declaration
public TensorFloat Neg(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Neg(TensorInt)
Computes an output tensor by applying the element-wise Neg
math function: f(x) = -x.
Declaration
public TensorInt Neg(TensorInt X)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
NonMaxSuppression(TensorFloat, TensorFloat, Int32, Single, Single, CenterPointBox)
Calculates an output tensor of selected indices of boxes from input boxes and scores tensors, where the indices are based on the scores and amount of intersection with previously selected boxes.
Declaration
public TensorInt NonMaxSuppression(TensorFloat boxes, TensorFloat scores, int maxOutputBoxesPerClass, float iouThreshold, float scoreThreshold, CenterPointBox centerPointBox)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | boxes | The boxes input tensor. |
TensorFloat | scores | The scores input tensor. |
Int32 | maxOutputBoxesPerClass | The maximum number of boxes to return for each class. |
Single | iouThreshold | The threshold above which the intersect-over-union rejects a box. |
Single | scoreThreshold | The threshold below which the box score filters a box from the output. |
CenterPointBox | centerPointBox | The format the |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
NonZero(TensorFloat)
Returns the indices of the elements of the input tensor that are not zero.
Declaration
public TensorInt NonZero(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
NonZero(TensorInt)
Returns the indices of the elements of the input tensor that are not zero.
Declaration
public TensorInt NonZero(TensorInt X)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Not(TensorInt)
Performs an element-wise Not
logical operation: f(x) = ~x.
Declaration
public TensorInt Not(TensorInt X)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
OneHot(TensorInt, Int32, Int32, Int32, Int32)
Generates a one-hot tensor with a given depth, indices and on and off values.
Declaration
public TensorInt OneHot(TensorInt indices, int axis, int depth, int offValue, int onValue)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | indices | The indices input tensor. |
Int32 | axis | The axis along which the operation adds the one-hot representation. |
Int32 | depth | The depth of the one-hot tensor. |
Int32 | offValue | The value to use for an off element. |
Int32 | onValue | The value to use for an on element. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
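Examples
A hedged sketch of building a one-hot encoding; ops is an assumed Ops instance and the expected output assumes ONNX-style OneHot semantics with the new axis appended last (axis -1).
```csharp
using Unity.Sentis;

static void OneHotExample(Ops ops)
{
    using var indices = new TensorInt(new TensorShape(3), new[] { 0, 2, 1 });

    // depth 3, off value 0, on value 1 -> shape (3, 3): [[1, 0, 0], [0, 0, 1], [0, 1, 0]].
    TensorInt oneHot = ops.OneHot(indices, axis: -1, depth: 3, offValue: 0, onValue: 1);
}
```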
Or(TensorInt, TensorInt)
Performs an element-wise Or
logical operation: f(a, b) = a | b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Or(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Pad(TensorFloat, ReadOnlySpan<Int32>, PadMode, Single)
Calculates the output tensor by adding padding to the input tensor according to the given padding values and mode.
Declaration
public TensorFloat Pad(TensorFloat X, ReadOnlySpan<int> pad, PadMode padMode = PadMode.Constant, float constant = 0F)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | pad | The lower and upper padding values for each dimension. |
PadMode | padMode | The |
Single | constant | The constant value to fill with when using |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
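Examples
A hedged sketch of constant padding on a 1D tensor; ops is an assumed Ops instance, and PadMode.Constant is the mode named in the declaration's default value.
```csharp
using Unity.Sentis;
using Unity.Sentis.Layers;

static void PadExample(Ops ops)
{
    using var x = new TensorFloat(new TensorShape(3), new[] { 1f, 2f, 3f });

    // One element of lower padding and two of upper padding, filled with 0 -> [0, 1, 2, 3, 0, 0].
    TensorFloat padded = ops.Pad(x, new[] { 1, 2 }, PadMode.Constant, constant: 0f);
}
```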
Pow(TensorFloat, TensorFloat)
Performs an element-wise Pow
math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Pow(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Pow(TensorFloat, TensorInt)
Performs an element-wise Pow
math operation: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Pow(TensorFloat A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
PRelu(TensorFloat, TensorFloat)
Computes an output tensor by applying the element-wise PRelu
activation function: f(x) = x if x >= 0, otherwise f(x) = slope * x.
Declaration
public TensorFloat PRelu(TensorFloat x, TensorFloat slope)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
TensorFloat | slope | The slope tensor, must be unidirectional broadcastable to x. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
RandomNormal(TensorShape, Single, Single, Nullable<Single>)
Generates an output tensor of a given shape with random values in a normal distribution with given mean and scale, and an optional seed value.
Declaration
public TensorFloat RandomNormal(TensorShape S, float mean, float scale, float? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | S | The shape to use for the output tensor. |
Single | mean | The mean of the normal distribution to use to generate the output. |
Single | scale | The standard deviation of the normal distribution to use to generate the output. |
Nullable<Single> | seed | The optional seed to use for the random number generation. If this is |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
RandomUniform(TensorShape, Single, Single, Nullable<Single>)
Generates an output tensor of a given shape with random values in a uniform distribution between a given low and high, and an optional seed value.
Declaration
public TensorFloat RandomUniform(TensorShape S, float low, float high, float? seed)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | S | The shape to use for the output tensor. |
Single | low | The lower end of the interval of the uniform distribution to use to generate the output. |
Single | high | The upper end of the interval of the uniform distribution to use to generate the output. |
Nullable<Single> | seed | The optional seed to use for the random number generation. If this is |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Range(Int32, Int32, Int32)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start, limit, and delta values.
Declaration
public TensorInt Range(int start, int limit, int delta)
Parameters
Type | Name | Description |
---|---|---|
Int32 | start | The first value in the range. |
Int32 | limit | The limit of the range. |
Int32 | delta | The delta between subsequent values in the range. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Range(Single, Single, Single)
Generates a 1D output tensor where the values form an arithmetic progression defined by the start, limit, and delta values.
Declaration
public TensorFloat Range(float start, float limit, float delta)
Parameters
Type | Name | Description |
---|---|---|
Single | start | The first value in the range. |
Single | limit | The limit of the range. |
Single | delta | The delta between subsequent values in the range. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
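Examples
A hedged sketch of both overloads; ops is an assumed Ops instance, and the limit is assumed to be exclusive, as in an ONNX Range.
```csharp
using Unity.Sentis;

static void RangeExample(Ops ops)
{
    TensorInt ints = ops.Range(0, 10, 2);         // [0, 2, 4, 6, 8]
    TensorFloat floats = ops.Range(1f, 3f, 0.5f); // [1.0, 1.5, 2.0, 2.5]
}
```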
Reciprocal(TensorFloat)
Computes an output tensor by applying the element-wise Reciprocal
math function: f(x) = 1 / x.
Declaration
public TensorFloat Reciprocal(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceL1(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceL1
operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
public TensorFloat ReduceL1(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceL1(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceL1
operation: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
Declaration
public TensorInt ReduceL1(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ReduceL2(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceL2
operation: f(x1, x2 ... xn) = sqrt(x1² + x2² + ... + xn²).
Declaration
public TensorFloat ReduceL2(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceLogSum(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceLogSum
operation: f(x1, x2 ... xn) = log(x1 + x2 + ... + xn).
Declaration
public TensorFloat ReduceLogSum(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceLogSumExp(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceLogSumExp
operation: f(x1, x2 ... xn) = log(e^x1 + e^x2 + ... + e^xn).
Declaration
public TensorFloat ReduceLogSumExp(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceMax(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceMax
operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
public TensorFloat ReduceMax(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceMax(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceMax
operation: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
Declaration
public TensorInt ReduceMax(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ReduceMean(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceMean
operation: f(x1, x2 ... xn) = (x1 + x2 + ... + xn) / n.
Declaration
public TensorFloat ReduceMean(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
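Example
The reduce methods share the same calling pattern, so a single illustrative sketch is given here. It is not taken from the package documentation: it assumes ops is an instance of a concrete Ops backend obtained elsewhere, and that TensorFloat exposes a (TensorShape, float[]) constructor.
```csharp
// Requires: using Unity.Sentis; `ops` is a pre-existing concrete Ops instance (assumption).
using var x = new TensorFloat(new TensorShape(2, 3), new float[] { 1, 2, 3, 4, 5, 6 });

// Mean over axis 1, keeping the reduced axis: output shape (2, 1).
using TensorFloat keep = ops.ReduceMean(x, new[] { 1 }, keepdim: true);

// Mean over axis 1, dropping the reduced axis: output shape (2).
using TensorFloat drop = ops.ReduceMean(x, new[] { 1 }, keepdim: false);
```
The other reductions (ReduceSum, ReduceMax, ReduceProd, and so on) take the same axes and keepdim arguments.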
ReduceMin(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceMin
operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
public TensorFloat ReduceMin(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceMin(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceMin
operation: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
Declaration
public TensorInt ReduceMin(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ReduceProd(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceProd
operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
public TensorFloat ReduceProd(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceProd(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceProd
operation: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
Declaration
public TensorInt ReduceProd(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ReduceSum(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceSum
operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
public TensorFloat ReduceSum(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceSum(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceSum
operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
Declaration
public TensorInt ReduceSum(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
ReduceSumSquare(TensorFloat, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceSumSquare
operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
public TensorFloat ReduceSumSquare(TensorFloat X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ReduceSumSquare(TensorInt, ReadOnlySpan<Int32>, Boolean)
Reduces an input tensor along the given axes using the ReduceSumSquare
operation: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Declaration
public TensorInt ReduceSumSquare(TensorInt X, ReadOnlySpan<int> axes, bool keepdim)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
ReadOnlySpan<Int32> | axes | The axes along which to reduce. |
Boolean | keepdim | Whether to keep the reduced axes in the output tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Relu(TensorFloat)
Computes an output tensor by applying the element-wise Relu
activation function: f(x) = max(0, x).
Declaration
public TensorFloat Relu(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
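Example
A minimal sketch of the element-wise activation methods, under the same assumption that ops is a concrete Ops instance created elsewhere.
```csharp
using var x = new TensorFloat(new TensorShape(4), new float[] { -2f, -0.5f, 0.5f, 2f });

// Relu clamps negative values to zero: { 0, 0, 0.5, 2 }.
using TensorFloat rectified = ops.Relu(x);

// Relu6 additionally clamps the positive side at 6.
using TensorFloat rectified6 = ops.Relu6(x);
```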
Relu6(TensorFloat)
Computes an output tensor by applying the element-wise Relu6
activation function: f(x) = clamp(x, 0, 6).
Declaration
public TensorFloat Relu6(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Reshape(Tensor, TensorShape)
Calculates an output tensor by copying the data from the input tensor and using a given shape. The data from the input tensor is unchanged.
Declaration
public Tensor Reshape(Tensor X, TensorShape shape)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorShape | shape | The shape of the output tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
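Example
A hedged sketch of Reshape, again assuming ops is a concrete Ops instance; the element count of the new shape must match the input.
```csharp
using var x = new TensorFloat(new TensorShape(2, 3), new float[] { 1, 2, 3, 4, 5, 6 });

// Same six values viewed as a 3x2 tensor; the data itself is copied unchanged.
using Tensor reshaped = ops.Reshape(x, new TensorShape(3, 2));
```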
Resize(TensorFloat, ReadOnlySpan<Single>, InterpolationMode, NearestMode, CoordTransformMode)
Calculates an output tensor by resampling the input tensor along the spatial dimensions with given scales.
Declaration
public TensorFloat Resize(TensorFloat X, ReadOnlySpan<float> scale, InterpolationMode interpolationMode, NearestMode nearestMode = NearestMode.RoundPreferFloor, CoordTransformMode coordTransformMode = CoordTransformMode.HalfPixel)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
ReadOnlySpan<Single> | scale | The factor to scale each dimension by. |
InterpolationMode | interpolationMode | The InterpolationMode to use for the operation. |
NearestMode | nearestMode | The NearestMode to use when the interpolation mode is Nearest. The default value is RoundPreferFloor. |
CoordTransformMode | coordTransformMode | The CoordTransformMode to use for the operation. The default value is HalfPixel. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
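Example
A sketch of an upsampling call, under the assumptions that ops is a concrete Ops instance, that scale holds one factor per input dimension, and that the InterpolationMode enum exposes a Linear member.
```csharp
// NCHW input: 1 batch, 1 channel, 4x4 spatial.
using var x = new TensorFloat(new TensorShape(1, 1, 4, 4), new float[16]);

// Double both spatial dimensions with linear interpolation; batch and channel stay at 1.
using TensorFloat resized = ops.Resize(x, new[] { 1f, 1f, 2f, 2f }, InterpolationMode.Linear);
```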
RoiAlign(TensorFloat, TensorFloat, TensorInt, RoiPoolingMode, Int32, Int32, Int32, Single)
Calculates an output tensor by pooling the input tensor across each region of interest given by the rois
tensor.
Declaration
public TensorFloat RoiAlign(TensorFloat X, TensorFloat Rois, TensorInt Indices, RoiPoolingMode mode, int outputHeight, int outputWidth, int samplingRatio, float spatialScale)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | Rois | The region of interest input tensor. |
TensorInt | Indices | The indices input tensor. |
RoiPoolingMode | mode | The pooling mode of the operation as an RoiPoolingMode. |
Int32 | outputHeight | The height of the output tensor. |
Int32 | outputWidth | The width of the output tensor. |
Int32 | samplingRatio | The number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. |
Single | spatialScale | The multiplicative spatial scale factor used to translate coordinates from their input spatial scale to the scale used when pooling. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Round(TensorFloat)
Computes an output tensor by applying the element-wise Round
math function: f(x) = round(x).
If the fractional part is equal to 0.5, rounds to the nearest even integer.
Declaration
public TensorFloat Round(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ScaleBias(TensorFloat, TensorFloat, TensorFloat)
Computes the output tensor with an element-wise ScaleBias
function: f(x, s, b) = x * s + b.
Declaration
public TensorFloat ScaleBias(TensorFloat X, TensorFloat S, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorFloat | S | The scale tensor. |
TensorFloat | B | The bias tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ScatterElements(Tensor, TensorInt, Tensor, Int32, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor along a given axis.
ScatterElements
updates the values depending on the reduction mode used.
Declaration
public Tensor ScatterElements(Tensor X, TensorInt indices, Tensor updates, int axis, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
TensorInt | indices | The indices tensor. |
Tensor | updates | The updates tensor. |
Int32 | axis | The axis on which to perform the scatter. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
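Example
A sketch of a scatter along axis 1, assuming ops is a concrete Ops instance and that ScatterReductionMode.None means the update values replace the existing ones.
```csharp
using var x       = new TensorFloat(new TensorShape(1, 4), new float[] { 0, 0, 0, 0 });
using var indices = new TensorInt(new TensorShape(1, 2), new int[] { 1, 3 });
using var updates = new TensorFloat(new TensorShape(1, 2), new float[] { 10, 20 });

// Writes 10 into position 1 and 20 into position 3 along axis 1: { 0, 10, 0, 20 }.
using Tensor scattered = ops.ScatterElements(x, indices, updates, axis: 1, ScatterReductionMode.None);
```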
ScatterND(TensorFloat, TensorInt, TensorFloat, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor.
ScatterND
updates the values depending on the reduction mode used.
Declaration
public TensorFloat ScatterND(TensorFloat X, TensorInt indices, TensorFloat updates, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
TensorInt | indices | The indices tensor. |
TensorFloat | updates | The updates tensor. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ScatterND(TensorInt, TensorInt, TensorInt, ScatterReductionMode)
Copies the input tensor and updates values at indexes specified by the indices
tensor with values specified by the updates
tensor.
ScatterND
updates the values depending on the reduction mode used.
Declaration
public TensorInt ScatterND(TensorInt X, TensorInt indices, TensorInt updates, ScatterReductionMode reduction)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
TensorInt | indices | The indices tensor. |
TensorInt | updates | The updates tensor. |
ScatterReductionMode | reduction | The reduction mode used to update the values as a ScatterReductionMode. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Selu(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Selu
activation function: f(x) = gamma * x if x >= 0, otherwise f(x) = gamma * (alpha * e^x - alpha).
Declaration
public TensorFloat Selu(TensorFloat x, float alpha, float gamma)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | alpha | The alpha value to use for the Selu activation function. |
Single | gamma | The gamma value to use for the Selu activation function. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Set(Tensor, Tensor, Int32, Int32, Int32)
Updates values of A with values from B, similar to setting a slice in numpy: A[..., start:end, ...] = B.
This returns a new tensor rather than working on A in-place.
This supports numpy-style one-directional broadcasting of B into A.
Declaration
public Tensor Set(Tensor A, Tensor B, int axis, int start, int end)
Parameters
Type | Name | Description |
---|---|---|
Tensor | A | The first argument as a tensor. |
Tensor | B | The second argument as a tensor. |
Int32 | axis | The axis along which to set the values. |
Int32 | start | The inclusive start index along the axis. |
Int32 | end | The exclusive end index along the axis. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
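Example
A sketch of Set, assuming ops is a concrete Ops instance; start and end are read here as the inclusive and exclusive bounds of the slice being overwritten.
```csharp
using var a = new TensorFloat(new TensorShape(2, 4), new float[8]);
using var b = new TensorFloat(new TensorShape(2, 2), new float[] { 1, 2, 3, 4 });

// Equivalent in spirit to A[:, 1:3] = B; a itself is left untouched.
using Tensor updated = ops.Set(a, b, axis: 1, start: 1, end: 3);
```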
Shape(Tensor, Int32, Int32)
Calculates the shape of an input tensor as a 1D TensorInt.
Declaration
public TensorInt Shape(Tensor X, int start = 0, int end = 8)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Int32 | start | The inclusive start axis for slicing the shape of the input tensor. The default value is 0. |
Int32 | end | The exclusive end axis for slicing the shape of the input tensor. The default value is 8. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
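Example
A sketch of Shape, under the usual assumption that ops is a concrete Ops instance.
```csharp
using var x = new TensorFloat(new TensorShape(2, 3, 4), new float[24]);

// Full shape as a 1D TensorInt: { 2, 3, 4 }.
using TensorInt fullShape = ops.Shape(x);

// Axes 1 (inclusive) to 3 (exclusive) only: { 3, 4 }.
using TensorInt tailShape = ops.Shape(x, start: 1, end: 3);
```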
Shrink(TensorFloat, Single, Single)
Computes an output tensor by applying the element-wise Shrink
activation function: f(x) = x + bias if x < -lambd. f(x) = x - bias if x > lambd. Otherwise f(x) = 0.
Declaration
public TensorFloat Shrink(TensorFloat x, float bias, float lambd)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | bias | The bias value to use for the Shrink activation function. |
Single | lambd | The lambda value to use for the Shrink activation function. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sigmoid(TensorFloat)
Computes an output tensor by applying the element-wise Sigmoid
activation function: f(x) = 1/(1 + e^(-x)).
Declaration
public TensorFloat Sigmoid(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sign(TensorFloat)
Performs an element-wise Sign
math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
public TensorFloat Sign(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sign(TensorInt)
Performs an element-wise Sign
math operation: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Declaration
public TensorInt Sign(TensorInt X)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Sin(TensorFloat)
Computes an output tensor by applying the element-wise Sin
trigonometric function: f(x) = sin(x).
Declaration
public TensorFloat Sin(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sinh(TensorFloat)
Computes an output tensor by applying the element-wise Sinh
trigonometric function: f(x) = sinh(x).
Declaration
public TensorFloat Sinh(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Size(TensorShape)
Calculates the number of elements of an input tensor shape as a scalar TensorInt.
Declaration
public TensorInt Size(TensorShape shape)
Parameters
Type | Name | Description |
---|---|---|
TensorShape | shape | The input tensor shape. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Slice(Tensor, ReadOnlySpan<Int32>, ReadOnlySpan<Int32>, ReadOnlySpan<Int32>, ReadOnlySpan<Int32>)
Calculates an output tensor by slicing the input tensor along given axes with given starts, ends, and steps.
Declaration
public Tensor Slice(Tensor X, ReadOnlySpan<int> starts, ReadOnlySpan<int> ends, ReadOnlySpan<int> axes, ReadOnlySpan<int> steps)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
ReadOnlySpan<Int32> | starts | The start index along each axis. |
ReadOnlySpan<Int32> | ends | The end index along each axis. |
ReadOnlySpan<Int32> | axes | The axes along which to slice. If this is empty, the operation slices over all axes. |
ReadOnlySpan<Int32> | steps | The step values for slicing. If this is empty, the operation uses a step of 1 for every axis. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
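Example
A sketch of Slice, assuming ops is a concrete Ops instance; each entry in starts, ends, and steps applies to the axis at the same position in axes.
```csharp
using var x = new TensorFloat(new TensorShape(4, 4), new float[16]);

// Rows 0 and 2 (step 2), columns 1 and 2: output shape (2, 2).
using Tensor sliced = ops.Slice(
    x,
    starts: new[] { 0, 1 },
    ends:   new[] { 4, 3 },
    axes:   new[] { 0, 1 },
    steps:  new[] { 2, 1 });
```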
Softmax(TensorFloat, Int32)
Computes an output tensor by applying the Softmax
activation function along an axis: f(x, axis) = exp(X) / ReduceSum(exp(X), axis).
Declaration
public TensorFloat Softmax(TensorFloat X, int axis = -1)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | axis | The axis along which to apply the Softmax activation function. The default value is -1. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
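Example
A sketch of Softmax, assuming ops is a concrete Ops instance.
```csharp
using var logits = new TensorFloat(new TensorShape(1, 3), new float[] { 1f, 2f, 3f });

// Normalise along the last axis (the default, axis = -1); each row sums to 1.
using TensorFloat probs = ops.Softmax(logits);

// Normalise along axis 0 instead.
using TensorFloat probsAxis0 = ops.Softmax(logits, 0);
```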
Softplus(TensorFloat)
Computes an output tensor by applying the element-wise Softplus
activation function: f(x) = ln(e^x + 1).
Declaration
public TensorFloat Softplus(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Softsign(TensorFloat)
Computes an output tensor by applying the element-wise Softsign
activation function: f(x) = x/(|x| + 1).
Declaration
public TensorFloat Softsign(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
SpaceToDepth(TensorFloat, Int32)
Computes the output tensor by permuting data from blocks of spatial data into depth.
Declaration
public TensorFloat SpaceToDepth(TensorFloat x, int blocksize)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Int32 | blocksize | The size of the blocks to move the depth data into. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Split(Tensor, Int32, Int32, Int32)
Calculates an output tensor by splitting the input tensor along a given axis between start and end.
Declaration
public Tensor Split(Tensor X, int axis, int start = 0, int end = 2147483647)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Int32 | axis | The axis along which to split the input tensor. |
Int32 | start | The inclusive start value for the split. |
Int32 | end | The exclusive end value for the split. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
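Example
A sketch of Split, assuming ops is a concrete Ops instance; each call returns the slice between start (inclusive) and end (exclusive) along the chosen axis.
```csharp
using var x = new TensorFloat(new TensorShape(2, 6), new float[12]);

// Columns 0..2 and 3..5 as two separate tensors, each of shape (2, 3).
using Tensor left  = ops.Split(x, axis: 1, start: 0, end: 3);
using Tensor right = ops.Split(x, axis: 1, start: 3, end: 6);
```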
Sqrt(TensorFloat)
Computes an output tensor by applying the element-wise Sqrt
math function: f(x) = sqrt(x).
Declaration
public TensorFloat Sqrt(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Square(TensorFloat)
Computes an output tensor by applying the element-wise Square
math function: f(x) = x * x.
Declaration
public TensorFloat Square(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sub(Single, TensorFloat)
Performs an element-wise Sub
math operation between a float and a tensor: f(a, b) = a - b.
Declaration
public TensorFloat Sub(float a, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
Single | a | The first argument as a float. |
TensorFloat | B | The second argument as a tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sub(TensorFloat, Single)
Performs an element-wise Sub
math operation between a tensor and a float: f(a, b) = a - b.
Declaration
public TensorFloat Sub(TensorFloat A, float b)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first argument as a tensor. |
Single | b | The second argument as a float. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sub(TensorFloat, TensorFloat)
Performs an element-wise Sub
math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Sub(TensorFloat A, TensorFloat B)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | A | The first input tensor. |
TensorFloat | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Sub(TensorInt, TensorInt)
Performs an element-wise Sub
math operation: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorInt Sub(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |
Sum(TensorFloat[])
Performs an element-wise Sum
math operation: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
This supports numpy-style broadcasting of input tensors.
Declaration
public TensorFloat Sum(TensorFloat[] tensors)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat[] | tensors | The input tensors. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
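Example
A sketch of Sum with broadcasting, assuming ops is a concrete Ops instance.
```csharp
using var a = new TensorFloat(new TensorShape(2, 3), new float[] { 1, 2, 3, 4, 5, 6 });
using var b = new TensorFloat(new TensorShape(1, 3), new float[] { 10, 20, 30 });
using var c = new TensorFloat(new TensorShape(2, 1), new float[] { 100, 200 });

// b and c broadcast against a's (2, 3) shape before the element-wise sum.
using TensorFloat total = ops.Sum(new[] { a, b, c });
```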
Swish(TensorFloat)
Computes an output tensor by applying the element-wise Swish
activation function: f(x) = sigmoid(x) * x = x / (1 + e^(-x)).
Declaration
public TensorFloat Swish(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Tan(TensorFloat)
Computes an output tensor by applying the element-wise Tan
trigonometric function: f(x) = tan(x).
Declaration
public TensorFloat Tan(TensorFloat x)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Tanh(TensorFloat)
Computes an output tensor by applying the element-wise Tanh
activation function: f(x) = tanh(x).
Declaration
public TensorFloat Tanh(TensorFloat X)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
ThresholdedRelu(TensorFloat, Single)
Computes an output tensor by applying the element-wise ThresholdedRelu
activation function: f(x) = x if x > alpha, otherwise f(x) = 0.
Declaration
public TensorFloat ThresholdedRelu(TensorFloat x, float alpha)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | x | The input tensor. |
Single | alpha | The alpha value to use for the ThresholdedRelu activation function. |
Returns
Type | Description |
---|---|
TensorFloat | The computed output tensor. |
Tile(Tensor, ReadOnlySpan<Int32>)
Calculates an output tensor by repeating the input tensor a given number of times along each axis.
Declaration
public Tensor Tile(Tensor X, ReadOnlySpan<int> repeats)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
ReadOnlySpan<Int32> | repeats | The number of times to tile the input tensor along each axis. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
TopK(TensorFloat, Int32, Int32, Boolean, Boolean)
Calculates the top-K largest or smallest elements of an input tensor along a given axis.
Declaration
public Tensor[] TopK(TensorFloat X, int k, int axis, bool largest, bool sorted)
Parameters
Type | Name | Description |
---|---|---|
TensorFloat | X | The input tensor. |
Int32 | k | The number of elements to calculate. |
Int32 | axis | The axis along which to perform the top-K operation. |
Boolean | largest | Whether to calculate the top-K largest elements. If this is false, the operation calculates the top-K smallest elements. |
Boolean | sorted | Whether to return the elements in sorted order. |
Returns
Type | Description |
---|---|
Tensor[] | The computed output tensors. |
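Example
A sketch of TopK, assuming ops is a concrete Ops instance and, following the usual ONNX convention, that the first returned tensor holds the values and the second the indices.
```csharp
using var x = new TensorFloat(new TensorShape(1, 5), new float[] { 3, 1, 4, 1, 5 });

Tensor[] topK = ops.TopK(x, k: 2, axis: 1, largest: true, sorted: true);
using Tensor values  = topK[0];  // { 5, 4 }
using Tensor indices = topK[1];  // { 4, 2 }
```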
Transpose(Tensor)
Calculates an output tensor by reversing the dimensions of the input tensor.
Declaration
public Tensor Transpose(Tensor x)
Parameters
Type | Name | Description |
---|---|---|
Tensor | x | The input tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Transpose(Tensor, Int32[])
Calculates an output tensor by permuting the axes and data of the input tensor according to the given permutations.
Declaration
public Tensor Transpose(Tensor x, int[] permutations)
Parameters
Type | Name | Description |
---|---|---|
Tensor | x | The input tensor. |
Int32[] | permutations | The axes to sample the output tensor from in the input tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
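Example
A sketch covering both Transpose overloads, assuming ops is a concrete Ops instance.
```csharp
using var x = new TensorFloat(new TensorShape(2, 3, 4), new float[24]);

// No permutation given: all axes are reversed, output shape (4, 3, 2).
using Tensor reversed = ops.Transpose(x);

// Explicit permutation { 2, 0, 1 }: output shape (4, 2, 3).
using Tensor permuted = ops.Transpose(x, new[] { 2, 0, 1 });
```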
Tril(Tensor, Int32)
Computes the output tensor by retaining the lower triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
public Tensor Tril(Tensor X, int k = 0)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Int32 | k | The offset from the diagonal to keep. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
Triu(Tensor, Int32)
Computes the output tensor by retaining the upper triangular values from an input matrix or matrix batch and setting the other values to zero.
Declaration
public Tensor Triu(Tensor X, int k = 0)
Parameters
Type | Name | Description |
---|---|---|
Tensor | X | The input tensor. |
Int32 | k | The offset from the diagonal to exclude. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
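Example
A sketch of Tril and Triu, assuming ops is a concrete Ops instance.
```csharp
using var x = new TensorFloat(new TensorShape(3, 3), new float[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 });

// Lower triangle including the main diagonal (k = 0).
using Tensor lower = ops.Tril(x);

// Upper triangle strictly above the main diagonal (k = 1).
using Tensor strictUpper = ops.Triu(x, 1);
```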
Where(TensorInt, Tensor, Tensor)
Performs an element-wise Where
logical operation: f(condition, a, b) = a if condition is true, otherwise f(condition, a, b) = b.
Declaration
public Tensor Where(TensorInt C, Tensor A, Tensor B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | C | The condition tensor. |
Tensor | A | The first input tensor. |
Tensor | B | The second input tensor. |
Returns
Type | Description |
---|---|
Tensor | The computed output tensor. |
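Example
A sketch of Where, assuming ops is a concrete Ops instance and that a non-zero value in the condition tensor counts as true.
```csharp
using var cond = new TensorInt(new TensorShape(4), new int[] { 1, 0, 1, 0 });
using var a    = new TensorFloat(new TensorShape(4), new float[] { 1, 2, 3, 4 });
using var b    = new TensorFloat(new TensorShape(4), new float[] { 10, 20, 30, 40 });

// Picks from a where cond is non-zero, from b otherwise: { 1, 20, 3, 40 }.
using Tensor mixed = ops.Where(cond, a, b);
```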
Xor(TensorInt, TensorInt)
Performs an element-wise Xor
logical operation: f(a, b) = a ^ b.
Declaration
public TensorInt Xor(TensorInt A, TensorInt B)
Parameters
Type | Name | Description |
---|---|---|
TensorInt | A | The first input tensor. |
TensorInt | B | The second input tensor. |
Returns
Type | Description |
---|---|
TensorInt | The computed output tensor. |