Namespace Unity.Sentis.Layers
Classes
Abs
Represents an element-wise Abs
math layer: f(x) = |x|.
Acos
Represents an element-wise Acos
trigonometric layer: f(x) = acos(x).
Acosh
Represents an element-wise Acosh
trigonometric layer: f(x) = acosh(x).
Activation
Represents an element-wise activation layer.
Add
Represents an element-wise Add
math operation layer: f(a, b) = a + b.
This supports numpy-style broadcasting of input tensors.
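As an illustrative sketch of what numpy-style broadcasting means here (plain numpy, not the Sentis API), a tensor of shape (2, 3) and a tensor of shape (1, 3) are added by virtually repeating the smaller one along the mismatched dimension:

    import numpy as np

    a = np.arange(6, dtype=np.float32).reshape(2, 3)       # shape (2, 3)
    b = np.array([[10.0, 20.0, 30.0]], dtype=np.float32)   # shape (1, 3)

    # b is broadcast across the first dimension, so the result has shape (2, 3).
    print(a + b)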
And
Represents an element-wise And logical operation layer: f(a, b) = a & b.
This supports numpy-style broadcasting of input tensors.
ArgMax
Represents an ArgMax
layer. This computes the indices of the maximum elements of the input tensor along a given axis.
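For example, a numpy sketch of the same computation (illustrative only, not the Sentis implementation):

    import numpy as np

    x = np.array([[1, 5, 3],
                  [7, 2, 9]])

    # Indices of the maximum elements along axis 1 (one index per row).
    print(np.argmax(x, axis=1))   # [1 2]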
ArgMin
Represents an ArgMin
layer. This computes the indices of the minimum elements of the input tensor along a given axis.
Asin
Represents an element-wise Asin
trigonometric layer: f(x) = asin(x).
Asinh
Represents an element-wise Asinh
trigonometric layer: f(x) = asinh(x).
Atan
Represents an element-wise Atan
trigonometric layer: f(x) = atan(x).
Atanh
Represents an element-wise Atanh
trigonometric layer: f(x) = atanh(x).
AveragePool
Represents an AveragePool
pooling layer. This calculates an output tensor by pooling the mean values of the input tensor across its spatial dimensions according to the given pool and stride values.
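As a rough sketch of the pooling arithmetic, assuming a single spatial dimension of length 4, pool size 2 and stride 2 (plain numpy, not the Sentis implementation):

    import numpy as np

    x = np.array([1.0, 3.0, 5.0, 7.0])  # one spatial dimension of length 4
    pool, stride = 2, 2

    # Each output element is the mean of a window of `pool` values, stepped by `stride`.
    out = [x[i:i + pool].mean() for i in range(0, len(x) - pool + 1, stride)]
    print(out)  # [2.0, 6.0]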
AxisNormalization
Represents an AxisNormalization normalization layer. This computes the mean and variance on the last dimensions of the input tensor and normalizes them according to the scale and bias tensors.
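A minimal numpy sketch of this kind of normalization over the last dimension, assuming scale and bias vectors of matching size and a small epsilon for numerical stability (not the Sentis implementation):

    import numpy as np

    x = np.random.randn(2, 4).astype(np.float32)
    scale = np.ones(4, dtype=np.float32)
    bias = np.zeros(4, dtype=np.float32)
    eps = 1e-5  # assumed epsilon

    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    print((x - mean) / np.sqrt(var + eps) * scale + bias)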
Bernoulli
Represents a Bernoulli
random layer. This generates an output tensor with values 0 or 1 from a Bernoulli distribution. The input tensor contains the probabilities used for generating the output values.
Broadcast
Represents a base class for layers that apply an operation to input tensors using numpy-style broadcasting.
Cast
Represents an element-wise Cast layer: f(x) = (float)x or f(x) = (int)x depending on the value of toType.
CastLike
Represents an element-wise CastLike
layer: f(x) = (float)x or f(x) = (int)x depending on the data type of the targetType tensor.
Ceil
Represents an element-wise Ceil
math layer: f(x) = ceil(x).
Celu
Represents an element-wise Celu
activation layer: f(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
Clip
Represents an element-wise Clip
math layer: f(x, xmin, xmax) = min(max(x, xmin), xmax).
Compress
Represents a Compress
logical layer that selects slices of an input tensor along a given axis according to a condition tensor.
If you don't provide an axis, the layer flattens the input tensor.
Concat
Represents a Concat
concatenation layer. The layer computes the output tensor by concatenating the input tensors along a given axis.
Constant
Represents a constant in a model.
ConstantOfShape
Represents a ConstantOfShape layer. This generates a tensor with the shape given by the input tensor and filled with a given value.
Conv
Represents a Conv
convolution layer, which applies a convolution filter to an input tensor.
Conv2DTrans
Represents a ConvTranspose
transpose convolution layer, which applies a convolution filter to an input tensor.
Cos
Represents an element-wise Cos
trigonometric layer: f(x) = cos(x).
Cosh
Represents an element-wise Cosh
trigonometric layer: f(x) = cosh(x).
CumSum
Represents a CumSum
math layer that performs the cumulative sum along a given axis.
Dense
Represents a Dense
math operation layer which performs a matrix multiplication operation: f(X, W, B) = X x W + B.
This supports numpy-style broadcasting of input tensors.
DepthToSpace
Represents a DepthToSpace
layer. The layer computes the output tensor by permuting data from depth into blocks of spatial data.
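As an illustrative numpy sketch of the rearrangement, assuming NCHW layout, block size 2 and the depth-column-row (DCR) ordering (not the Sentis implementation):

    import numpy as np

    n, c, h, w, block = 1, 4, 2, 2, 2
    x = np.arange(n * c * h * w).reshape(n, c, h, w)

    # Split the channel dimension into (block, block, c // block**2) and move the
    # block factors next to the spatial dimensions.
    y = x.reshape(n, block, block, c // block**2, h, w)
    y = y.transpose(0, 3, 4, 1, 5, 2).reshape(n, c // block**2, h * block, w * block)
    print(y.shape)  # (1, 1, 4, 4)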
Div
Represents an element-wise Div
math operation layer: f(a, b) = a / b.
This supports numpy-style broadcasting of input tensors.
Einsum
Represents an Einsum
math operation layer.
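For example, a batched matrix multiplication expressed with an einsum equation (numpy sketch, not the Sentis API):

    import numpy as np

    a = np.random.randn(2, 3, 4)
    b = np.random.randn(2, 4, 5)

    # "bij,bjk->bik" sums over the shared index j for each batch b.
    print(np.einsum("bij,bjk->bik", a, b).shape)  # (2, 3, 5)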
Elu
Represents an element-wise Elu
activation layer: f(x) = x if x >= 0, otherwise f(x) = alpha * (e^x - 1).
Equal
Represents an element-wise Equal
logical operation layer: f(a, b) = 1 if a == b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
Erf
Represents an element-wise Erf
activation layer: f(x) = erf(x).
Exp
Represents an element-wise Exp
math layer: f(x) = e^x.
Expand
Represents an Expand
layer. The layer computes the output tensor by broadcasting the input tensor into a given shape.
Flatten
Represents a Flatten
layer. The layer computes the output tensor by reshaping the input tensor into a 2D matrix according to the given axis.
Floor
Represents an element-wise Floor
math layer: f(x) = floor(x).
FusedActivation
Represents a base class for layers with an optional fused activation at the end of the execution.
Gather
Represents a Gather
layer. This takes values from the input tensor indexed by the indices tensor along a given axis and concatenates them.
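An illustrative numpy sketch of gathering along an axis, using np.take which mirrors this behaviour (not the Sentis implementation):

    import numpy as np

    x = np.array([[10, 11, 12],
                  [20, 21, 22],
                  [30, 31, 32]])
    indices = np.array([2, 0])

    # Take rows 2 and 0 along axis 0.
    print(np.take(x, indices, axis=0))  # [[30 31 32], [10 11 12]]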
GatherElements
Represents a GatherElements layer. This takes values from the input tensor indexed by the indices tensor along a given axis.
GatherND
Represents a GatherND layer. This takes slices of values from the batched input tensor indexed by the indices tensor.
Gelu
Represents an element-wise Gelu
activation layer: f(x) = x / 2 * (1 + erf(x / sqrt(2))).
GlobalAveragePool
Represents a GlobalAveragePool
pooling layer. This calculates an output tensor by pooling the mean values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
GlobalMaxPool
Represents a GlobalMaxPool
pooling layer. This calculates an output tensor by pooling the maximum values of the input tensor across all of its spatial dimensions. The spatial dimensions of the output are size 1.
Greater
Represents an element-wise Greater
logical operation layer: f(a, b) = 1 if a > b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
GreaterOrEqual
Represents an element-wise GreaterOrEqual
logical operation layer: f(a, b) = 1 if a >= b, otherwise f(a,b) = 0.
This supports numpy-style broadcasting of input tensors.
Hardmax
Represents a Hardmax
activation layer along an axis: f(x, axis) = 1 if x is the first maximum value along the specified axis, otherwise f(x, axis) = 0.
HardSigmoid
Represents an element-wise HardSigmoid
activation layer: f(x) = clamp(alpha * x + beta, 0, 1).
HardSwish
Represents an element-wise HardSwish
activation layer: f(x) = x * max(0, min(1, alpha * x + beta)) = x * HardSigmoid(x, alpha, beta), where alpha = 1/6 and beta = 0.5.
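A small numpy sketch of the identity between HardSwish and HardSigmoid with alpha = 1/6 and beta = 0.5 (illustrative only, not the Sentis implementation):

    import numpy as np

    def hard_sigmoid(x, alpha=1.0 / 6.0, beta=0.5):
        return np.clip(alpha * x + beta, 0.0, 1.0)

    x = np.linspace(-4.0, 4.0, 9)
    print(x * hard_sigmoid(x))  # element-wise HardSwish(x)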
Identity
Represents an Identity
layer. The output tensor is a copy of the input tensor.
InstanceNormalization
Represents an InstanceNormalization normalization layer. This computes the mean and variance on the spatial dimensions of the input tensor and normalizes them according to the scale and bias tensors.
IsInf
Represents an element-wise IsInf logical layer: f(x) = 1 if x is +Inf and detectPositive is true, or if x is -Inf and detectNegative is true. Otherwise f(x) = 0.
IsNaN
Represents an element-wise IsNaN
logical layer: f(x) = 1 if x is NaN, otherwise f(x) = 0.
Layer
Represents the base class for all model layers.
LeakyRelu
Represents an element-wise LeakyRelu
activation layer: f(x) = x if x >= 0, otherwise f(x) = alpha * x.
Less
Represents an element-wise Less
logical operation layer: f(a, b) = 1 if a < b, otherwise f(a, b) = 0.
This supports numpy-style broadcasting of input tensors.
LessOrEqual
Represents an element-wise LessOrEqual
logical operation layer: f(a, b) = 1 if a <= b, otherwise f(a,b) = 0.
This supports numpy-style broadcasting of input tensors.
Log
Represents an element-wise Log
math layer: f(x) = log(x).
LogSoftmax
Represents a LogSoftmax
activation layer along an axis: f(x, axis) = log(Softmax(x, axis)).
LRN
Represents an LRN
local response normalization layer. This normalizes the input tensor over local input regions.
LSTM
Represents an LSTM
recurrent layer. This generates an output tensor by computing a one-layer LSTM (long short-term memory) on an input tensor.
MatMul
Represents a MatMul
math operation layer which performs a matrix multiplication operation: f(a, b) = a x b.
MatMul2D
Represents a MatMul2D
math operation layer which performs a matrix multiplication operation with optional transposes: f(a, b) = a' x b'.
Max
Represents an element-wise Max
math operation layer: f(x1, x2 ... xn) = max(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
MaxPool
Represents a MaxPool
pooling layer. This calculates an output tensor by pooling the maximum values of the input tensor across its spatial dimensions according to the given pool and stride values.
Mean
Represents an element-wise Mean
math operation layer: f(x1, x2 ... xn) = (x1 + x2 ... xn) / n.
This supports numpy-style broadcasting of input tensors.
Min
Represents an element-wise Min
math operation layer: f(x1, x2 ... xn) = min(x1, x2 ... xn).
This supports numpy-style broadcasting of input tensors.
Mod
Represents an element-wise Mod
math operation layer: f(a, b) = a % b.
If fmod is false the sign of the remainder is the same as that of the divisor as in Python.
If fmod is true the sign of the remainder is the same as that of the dividend as in C#.
This supports numpy-style broadcasting of input tensors.
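The two modes differ only when the operands have opposite signs; a numpy sketch of the distinction (not the Sentis implementation):

    import numpy as np

    a, b = np.array([-7.0]), np.array([3.0])

    print(np.mod(a, b))   # [2.]  fmod false: sign follows the divisor, as in Python
    print(np.fmod(a, b))  # [-1.] fmod true:  sign follows the dividend, as in C#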
Mul
Represents an element-wise Mul
math operation layer: f(a, b) = a * b.
This supports numpy-style broadcasting of input tensors.
Multinomial
Represents a Multinomial
random layer. This generates an output tensor with values from a multinomial distribution according to the probabilities given by the input tensor.
Neg
Represents an element-wise Neg
math layer: f(x) = -x.
NonMaxSuppression
Represents a NonMaxSuppression object detection layer. This calculates an output tensor of selected indices of boxes from the input boxes and scores tensors, and bases the indices on the scores and the amount of intersection with previously selected boxes.
NonZero
Represents a NonZero
layer. This returns the indices of the elements of the input tensor that are not zero.
Not
Represents an element-wise Not
logical layer: f(x) = ~x.
OneHot
Represents a OneHot layer. This generates a one-hot tensor with a given depth, indices and values.
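An illustrative numpy sketch of one-hot encoding with depth 4, assuming off/on values of 0 and 1 (not the Sentis implementation):

    import numpy as np

    indices = np.array([0, 2, 3])
    depth = 4
    off_value, on_value = 0.0, 1.0  # assumed "values" pair

    out = np.full((len(indices), depth), off_value)
    out[np.arange(len(indices)), indices] = on_value
    print(out)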
Or
Represents an element-wise Or
logical operation layer: f(a, b) = a | b.
This supports numpy-style broadcasting of input tensors.
Pad
Represents a Pad
layer. The layer calculates the output tensor by adding padding to the input tensor according to the given padding values and mode.
Pow
Represents an element-wise Pow
math operation layer: f(a, b) = pow(a, b).
This supports numpy-style broadcasting of input tensors.
PRelu
Represents an element-wise PRelu
activation layer: f(x) = x if x >= 0, otherwise f(x) = slope * x.
The slope tensor must be unidirectionally broadcastable to x.
RandomLayer
Represents the abstract base class for layers which generate random values in the output tensor.
RandomNormal
Represents a RandomNormal random layer. This generates an output tensor of a given shape with random values in a normal distribution with given mean and scale, and an optional seed value.
RandomNormalLike
Represents a RandomNormalLike random layer. This generates an output tensor with the same shape as the input tensor, with random values in a normal distribution with given mean and scale, and an optional seed value.
RandomUniform
Represents a RandomUniform random layer. This generates an output tensor of a given shape with random values in a uniform distribution between a given low and high, from an optional seed value.
RandomUniformLike
Represents a RandomUniformLike random layer. This generates an output tensor with the same shape as the input tensor, with random values in a uniform distribution between a given low and high, from an optional seed value.
Range
Represents a Range layer. This generates a 1D output tensor where the values form an arithmetic progression defined by the start, limit and delta scalar input tensors.
Reciprocal
Represents an element-wise Reciprocal
math layer: f(x) = 1 / x.
Reduce
Represents the abstract base class for reduction layers.
ReduceL1
Represents a ReduceL1
reduction layer along the given axes: f(x1, x2 ... xn) = |x1| + |x2| + ... + |xn|.
ReduceL2
Represents a ReduceL2
reduction layer along the given axes: f(x1, x2 ... xn) = sqrt(x1² + x2² + ... + xn²).
ReduceLogSum
Represents a ReduceLogSum
reduction layer along the given axes: f(x1, x2 ... xn) = log(x1 + x2 + ... + xn).
ReduceLogSumExp
Represents a ReduceLogSumExp
reduction layer along the given axes: f(x1, x2 ... xn) = log(e^x1 + e^x2 + ... + e^xn).
ReduceMax
Represents a ReduceMax
reduction layer along the given axes: f(x1, x2 ... xn) = max(x1, x2, ... , xn).
ReduceMean
Represents a ReduceMean
reduction layer along the given axes: f(x1, x2 ... xn) = (x1 + x2 + ... + xn) / n.
ReduceMin
Represents a ReduceMin
reduction layer along the given axes: f(x1, x2 ... xn) = min(x1, x2, ... , xn).
ReduceProd
Represents a ReduceProd
reduction layer along the given axes: f(x1, x2 ... xn) = x1 * x2 * ... * xn.
ReduceSum
Represents a ReduceSum
reduction layer along the given axes: f(x1, x2 ... xn) = x1 + x2 + ... + xn.
ReduceSumSquare
Represents a ReduceSumSquare
reduction layer along the given axes: f(x1, x2 ... xn) = x1² + x2² + ... + xn².
Relu
Represents an element-wise Relu
activation layer: f(x) = max(0, x).
Relu6
Represents an element-wise Relu6
activation layer: f(x) = clamp(x, 0, 6).
Reshape
Represents a Reshape
layer. The layer calculates the output tensor by copying the data from the input tensor and using a given shape. The data from the input tensor is unchanged.
Only one of the elements of the shape can be -1. The layer infers the size of this dimension from the remaining dimensions and the length of the input tensor.
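A quick numpy sketch of how the -1 dimension is inferred (illustrative only, not the Sentis implementation):

    import numpy as np

    x = np.arange(12)

    # 12 elements with a fixed dimension of 3 forces the -1 dimension to be 4.
    print(x.reshape(3, -1).shape)  # (3, 4)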
Resize
Represents a Resize
layer. The layer calculates the output tensor by resampling the input tensor along the spatial dimensions to a given shape.
RoiAlign
Represents an RoiAlign region of interest alignment layer. This calculates an output tensor by pooling the input tensor across each region of interest given by the rois tensor.
Round
Represents an element-wise Round
math layer: f(x) = round(x).
ScaleBias
Represents an element-wise ScaleBias
normalization layer: f(x, s, b) = x * s + b.
ScatterElements
Represents a ScatterElements layer. This copies the input tensor and updates values at indexes specified by the indices tensor with values specified by the updates tensor along a given axis. ScatterElements updates the values depending on the reduction mode used.
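An illustrative numpy sketch of scattering along axis 0, assuming the "none" (overwrite) reduction mode (not the Sentis implementation):

    import numpy as np

    data = np.zeros((3, 3))
    indices = np.array([[1, 0, 2]])
    updates = np.array([[10.0, 20.0, 30.0]])

    # For each column j, write updates[0, j] into row indices[0, j].
    out = data.copy()
    for j in range(indices.shape[1]):
        out[indices[0, j], j] = updates[0, j]
    print(out)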
ScatterND
Represents a ScatterND layer. This copies the input tensor and updates values at indexes specified by the indices tensor with values specified by the updates tensor. ScatterND updates the values depending on the reduction mode used.
Selu
Represents an element-wise Selu
activation layer: f(x) = gamma * x if x >= 0, otherwise f(x) = gamma * (alpha * e^x - alpha).
Shape
Represents a Shape layer. This computes the shape of an input tensor as a 1D TensorInt.
Shrink
Represents an element-wise Shrink
math layer: f(x) = x + bias if x < -lambd. f(x) = x - bias if x > lambd. Otherwise f(x) = 0.
Sigmoid
Represents an element-wise Sigmoid
activation layer: f(x) = 1/(1 + e^(-x)).
Sign
Represents an element-wise Sign
math layer: f(x) = 1 if x > 0. f(x) = -1 if x < 0. Otherwise f(x) = 0.
Sin
Represents an element-wise Sin
trigonometric layer: f(x) = sin(x).
Sinh
Represents an element-wise Sinh
trigonometric layer: f(x) = sinh(x).
Size
Represents a Size layer. This computes the number of elements of an input tensor as a scalar TensorInt.
Slice
Represents a Slice
layer. The layer calculates the output tensor by slicing the input tensor along given axes with given starts, ends and steps.
Softmax
Represents a Softmax
activation layer along an axis: f(x, axis) = exp(x) / ReduceSum(exp(x), axis).
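A numpy sketch of the formula, with the usual max subtraction added for numerical stability (an assumption here, not something this page specifies; not the Sentis implementation):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilised exponent
        return e / e.sum(axis=axis, keepdims=True)

    print(softmax(np.array([1.0, 2.0, 3.0])))  # sums to 1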
Softplus
Represents an element-wise Softplus
activation layer: f(x) = ln(e^x + 1).
Softsign
Represents an element-wise Softsign
activation layer: f(x) = x/(|x| + 1).
SpaceToDepth
Represents a SpaceToDepth
layer. The layer computes the output tensor by permuting data from blocks of spatial data into depth.
Split
Represents a Split
layer. The layer computes the output tensors by splitting the input tensor along a single given axis.
Sqrt
Represents an element-wise Sqrt
math layer: f(x) = sqrt(x).
Square
Represents an element-wise Square
math layer: f(x) = x * x.
Squeeze
Represents a Squeeze
layer. The layer computes the output tensor by reshaping the input tensor by removing dimensions of size 1.
Sub
Represents an element-wise Sub
math operation layer: f(a, b) = a - b.
This supports numpy-style broadcasting of input tensors.
Sum
Represents an element-wise Sum
math operation layer: f(x1, x2 ... xn) = x1 + x2 ... xn.
This supports numpy-style broadcasting of input tensors.
Swish
Represents an element-wise Swish
activation layer: f(x) = sigmoid(x) * x = x / (1 + e^(-x)).
Tan
Represents an element-wise Tan
trigonometric layer: f(x) = tan(x).
Tanh
Represents an element-wise Tanh
activation layer: f(x) = tanh(x).
ThresholdedRelu
Represents an element-wise ThresholdedRelu
activation layer: f(x) = x if x > alpha, otherwise f(x) = 0.
Tile
Represents a Tile
layer. The layer computes the output tensor by repeating the input tensor a given number of times along each axis.
TopK
Represents a TopK
layer. This calculates the top-K largest or smallest elements of an input tensor along a given axis.
This layer calculates both the values tensor of the top-K elements and the indices tensor of the top-K elements as outputs.
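An illustrative numpy sketch of selecting the top-K largest values and their indices along an axis (not the Sentis implementation):

    import numpy as np

    x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
    k = 2

    idx = np.argsort(x)[::-1][:k]   # indices of the k largest values
    print(x[idx], idx)              # [5. 4.] [4 2]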
Transpose
Represents a Transpose
layer. The layer computes the output tensor by permuting the axes and data of the input tensor according to the given permutations.
Trilu
Represents a Trilu
layer. The layer computes the output tensor by retaining the upper or lower triangular values from an input matrix or matrix batch and setting the other values to zero.
Unsqueeze
Represents an Unsqueeze
layer. The layer computes the output tensor by reshaping the input tensor by adding dimensions of size 1 at the given axes.
Where
Represents an element-wise Where logical operation layer: f(condition, a, b) = a if condition is true, otherwise f(condition, a, b) = b.
This supports numpy-style broadcasting of input tensors.
Xor
Represents an element-wise Xor
logical operation layer: f(a, b) = a ^ b.
This supports numpy-style broadcasting of input tensors.
Enums
AutoPad
Options for auto padding in image layers.
CenterPointBox
Options for the formatting of the box data for NonMaxSuppression.
CoordTransformMode
Options for how to transform between the coordinate in the output tensor and the coordinate in the input tensor in Resize.
DepthToSpaceMode
Options for the ordering of the elements in DepthToSpace.
Flags
Options for the flags of a layer.
FusableActivation
Options for applying an activation at the end of executing a FusedActivation layer.
InterpolationMode
Options for the interpolation mode to use for Resize.
NearestMode
Options for how to sample the nearest element in Resize when using InterpolationMode.NearestMode.
PadMode
Options for the padding values for Pad.
RnnActivation
Options for activation functions to apply in a recurrent layer.
RnnDirection
Options for the direction of a recurrent layer.
RnnLayout
Options for the layout of the tensor in a recurrent layer.
RoiPoolingMode
Options for the pooling mode for RoiAlign.
ScaleMode
Options for the scaling mode to use for Resize.
ScatterReductionMode
Options for the reduction operation to use in a scatter layer.
TriluMode
Options for which part of the input matrix to retain in Trilu.