Class ModelBuilder
Syntax
public class ModelBuilder
Constructors
ModelBuilder(Model)
Create a model builder helper to construct the underlying Model.
Declaration
public ModelBuilder(Model model)
Parameters
| Type | Name | Description |
| --- | --- | --- |
| Model | model | |
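A minimal sketch of getting a builder in hand; the `Unity.Barracuda` namespace and the parameterless `Model` constructor are assumptions, not confirmed by this page:

```csharp
using Unity.Barracuda; // assumed namespace for Model, ModelBuilder, Tensor

var model = new Model();               // assumes Model has a public parameterless constructor
var builder = new ModelBuilder(model); // every Layer method below appends to `model`
```

The later examples on this page assume a `builder` created this way.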
Properties
model
Declaration
public Model model { get; }
Property Value
Methods
Add(String, Object[])
Element-wise add of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Add(string name, object[] inputs)
Parameters
Returns
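For example (illustrative names and shapes, assuming inputs can be referenced by the name of an earlier layer):

```csharp
// Add a [1,1,1,16] constant to a [N,H,W,16] activation named "conv_out";
// broadcasting expands the constant across the N, H and W dimensions.
var c = builder.Const("offset", new Tensor(1, 1, 1, 16));
var sum = builder.Add("sum", new object[] { "conv_out", "offset" });
```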
AvgPool2D(String, Object, Int32[], Int32[], Int32[])
Apply 'average' pooling by downscaling the H and W dimensions according to pool, stride and pad.
Pool and stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Output batch and channel dimensions are the same as the input's.
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - pool[1,0]) / stride[1,0] + 1.
Declaration
public Layer AvgPool2D(string name, object input, int[] pool, int[] stride, int[] pad)
Parameters
Returns
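As a worked example of the shape formula (illustrative sizes): a 224×224 input with pool [2, 2], stride [2, 2] and pad [0, 0, 0, 0] gives (224 + 0 + 0 - 2) / 2 + 1 = 112, so a [1, 224, 224, C] tensor becomes [1, 112, 112, C].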
Border2D(String, Object, Int32[], Single)
Pads the H and W dimensions with a given constant value (defaults to 0).
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
If pad contains negative values, the H and W dimensions are cropped instead.
For example, a tensor of shape (1,2,3,1)
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1] results in a tensor of shape (1,4,7,1)
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 2, 3, 0, 0],
[0, 0, 4, 5, 6, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
Declaration
public Layer Border2D(string name, object input, int[] pad, float constantValue = 0F)
Parameters
Returns
Clip(String, Object, Single, Single)
Element-wise clip (clamp) of the input to the range [min, max]: f(x) = min(max(x, min), max).
Declaration
public Layer Clip(string name, object input, float min, float max)
Parameters
Returns
Concat(String, Object[], Int32)
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the axis to concatenate on.
Declaration
public Layer Concat(string name, object[] inputs, int axis)
Parameters
Returns
Const(String, Tensor, Int32)
Allows loading a tensor from constants.
Declaration
public Layer Const(string name, Tensor tensor, int insertionIndex = -1)
Parameters
Returns
Conv2D(String, Object, Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of shape [kernelHeight, kernelWidth, kernelDepth, kernelCount]
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - kernel.shape[1,0]) / stride[1,0] + 1.
Declaration
public Layer Conv2D(string name, object input, int[] stride, int[] pad, Tensor kernel, Tensor bias)
Parameters
Returns
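A sketch with illustrative sizes, given a `builder` as constructed earlier (tensor layouts follow the kernel/bias description above):

```csharp
// 3x3 convolution with 16 filters over a 1-channel input named "input".
var kernel = new Tensor(3, 3, 1, 16);  // [kernelHeight, kernelWidth, kernelDepth, kernelCount]
var bias   = new Tensor(1, 1, 1, 16);  // height*width*channels == kernelCount
var conv = builder.Conv2D("conv1", "input",
    stride: new[] { 1, 1 },            // [W, H]
    pad:    new[] { 1, 1, 1, 1 },      // [pre W, pre H, post W, post H]
    kernel: kernel, bias: bias);
// For a 32x32 input: (32 + 1 + 1 - 3) / 1 + 1 = 32, so H and W are preserved.
```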
Conv2DTrans(String, Object, Int32[], Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D transposed convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of rank 4 of dimensions [kernelHeight, kernelWidth, kernelDepth, kernelCount]
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
OutputPad should be of length 0 or 2, format is [W, H].
If OutputPad length is 0 it will be defaulted to:
OutputPad[W,H] = (input.shape[W,H] * stride[0,1] + pad[0,1] + pad[2,3] - [kernelWidth, kernelHeight]) % stride[0,1]
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W]-1) * stride[0,1] - (pad[1,0] + pad[3,2]) + [kernelWidth, kernelHeight] + OutputPad[W,H]
Declaration
public Layer Conv2DTrans(string name, object input, int[] stride, int[] pad, int[] outputPad, Tensor kernel, Tensor bias)
Parameters
Returns
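As a worked example with illustrative values: a 16×16 input, stride [2, 2], pad [0, 0, 0, 0], a 2×2 kernel and an empty outputPad. The default becomes OutputPad[W,H] = (16 * 2 + 0 + 0 - 2) % 2 = 0, and the output size is (16 - 1) * 2 - 0 + 2 + 0 = 32, so the layer doubles H and W.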
Dense(String, Object, Tensor, Tensor)
Apply a densely connected layer (aka general matrix multiplication or GEMM).
Weight should be a tensor with (batch == input.shape[H] * input.shape[W] * input.shape[C]) and only one other dimension of size > 1.
Bias should be a tensor with (batch == 1) and (height * width * channels == weight.shape[H] * weight.shape[W] * weight.shape[C]).
Output shape is [input.shape[B], 1, 1, weight.shape[H] * weight.shape[W] * weight.shape[C]].
Declaration
public Layer Dense(string name, object input, Tensor weight, Tensor bias)
Parameters
Returns
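A sketch with illustrative sizes, given a `builder` as constructed earlier: an input that flattens to 4 features, mapped to 2 output units.

```csharp
var weight = new Tensor(4, 1, 1, 2); // batch == flattened input size (1*1*4)
var bias   = new Tensor(1, 1, 1, 2); // one bias value per output unit
var fc = builder.Dense("fc1", "input", weight, bias);
// Output shape: [input.shape[B], 1, 1, 2]
```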
DepthwiseConv2D(String, Object, Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D depthwise convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of shape [kernelHeight, kernelWidth, kernelDepth, kernelCount]; for a depthwise convolution kernelDepth must be 1.
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - kernel.shape[1,0]) / stride[1,0] + 1.
Declaration
public Layer DepthwiseConv2D(string name, object input, int[] stride, int[] pad, Tensor kernel, Tensor bias)
Parameters
Returns
Div(String, Object[])
Element-wise division of each of the input tensors with multidimensional broadcasting support.
The first element is divided by the second, then the result is divided by the third, and so on.
Declaration
public Layer Div(string name, object[] inputs)
Parameters
Returns
Elu(String, Object, Single)
Element-wise Elu activation function: f(x) = x if x >= 0 else alpha * (e^x - 1).
The default alpha is 1.0.
Declaration
public Layer Elu(string name, object input, float alpha = 1F)
Parameters
Returns
Equal(String, Object, Object)
Performs an equal logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
Declaration
public Layer Equal(string name, object input0, object input1)
Parameters
Returns
Exp(String, Object)
Element-wise Exp activation function: f(x) = e^{x}
Declaration
public Layer Exp(string name, object input)
Parameters
Returns
Flatten(String, Object)
Return a tensor of shape [input.Batch, input.Height * input.Width * input.Channels]
Declaration
public Layer Flatten(string name, object input)
Parameters
Returns
GlobalAvgPool2D(String, Object)
Apply 'average' pooling by downscaling the H and W dimensions to [1, 1].
Declaration
public Layer GlobalAvgPool2D(string name, object input)
Parameters
Returns
GlobalMaxPool2D(String, Object)
Apply 'max' pooling by downscaling the H and W dimensions to [1, 1].
Declaration
public Layer GlobalMaxPool2D(string name, object input)
Parameters
Returns
Greater(String, Object, Object)
Performs a greater logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
Declaration
public Layer Greater(string name, object input0, object input1)
Parameters
Returns
GreaterEqual(String, Object, Object)
Performs a greaterEqual logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
Declaration
public Layer GreaterEqual(string name, object input0, object input1)
Parameters
Returns
Identity(String, Object)
No-op layer: the output is a copy of the input tensor.
Declaration
public Layer Identity(string name, object input)
Parameters
Returns
Input(String, Int32, Int32)
Add an input to the model.
Declaration
public Model.Input Input(string name, int batch, int channels)
Parameters
Returns
Input(String, Int32, Int32, Int32, Int32)
Add an input to the model.
Declaration
public Model.Input Input(string name, int batch, int height, int width, int channels)
Parameters
Returns
Input(String, Int32[])
Add an input to the model.
Declaration
public Model.Input Input(string name, int[] shape)
Parameters
Returns
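Putting the pieces together, a minimal end-to-end sketch given a `builder` as constructed earlier; it assumes a Layer returned by one method can be passed as the input of the next (sizes are illustrative):

```csharp
// Declare a [batch, height, width, channels] input.
builder.Input("image", 1, 28, 28, 1);

// A tiny network: conv -> relu -> flatten -> dense.
var conv = builder.Conv2D("conv", "image",
    new[] { 1, 1 }, new[] { 0, 0, 0, 0 },
    new Tensor(3, 3, 1, 8), new Tensor(1, 1, 1, 8)); // -> [1, 26, 26, 8]
var relu = builder.Relu("relu", conv);
var flat = builder.Flatten("flat", relu);            // -> [1, 26*26*8]
var fc   = builder.Dense("fc", flat,
    new Tensor(26 * 26 * 8, 1, 1, 10), new Tensor(1, 1, 1, 10));

// Register the model output; the builder mutates `model` in place.
builder.Output(fc);
```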
LeakyRelu(String, Object, Single)
Element-wise LeakyRelu activation function: f(x) = x if x >= 0 else alpha * x.
The default alpha is 0.01.
Declaration
public Layer LeakyRelu(string name, object input, float alpha = 0.01F)
Parameters
Returns
Less(String, Object, Object)
Performs a less logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
Declaration
public Layer Less(string name, object input0, object input1)
Parameters
Returns
LessEqual(String, Object, Object)
Performs a lessEqual logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
Declaration
public Layer LessEqual(string name, object input0, object input1)
Parameters
Returns
Log(String, Object)
Element-wise Log activation function: f(x) = log(x)
Declaration
public Layer Log(string name, object input)
Parameters
Returns
LogicalAnd(String, Object, Object)
Performs an and logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
An input value is considered false if it is 0.0 and true otherwise.
Declaration
public Layer LogicalAnd(string name, object input0, object input1)
Parameters
Returns
LogicalNot(String, Object)
Performs a not logical operation elementwise on the input tensor.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
An input value is considered false if it is 0.0 and true otherwise.
Declaration
public Layer LogicalNot(string name, object input)
Parameters
Returns
LogicalOr(String, Object, Object)
Performs an or logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
An input value is considered false if it is 0.0 and true otherwise.
Declaration
public Layer LogicalOr(string name, object input0, object input1)
Parameters
Returns
LogicalXor(String, Object, Object)
Performs a xor logical operation elementwise on the input tensors with multidimensional broadcasting support.
Returns 1.0 elementwise where the condition is true, 0.0 otherwise.
An input value is considered false if it is 0.0 and true otherwise.
Declaration
public Layer LogicalXor(string name, object input0, object input1)
Parameters
Returns
LogSoftmax(String, Object)
Return the logsoftmax (log of the normalized exponential) values of the flattened HWC dimensions of the input.
The output will be of shape [input.Batch, input.Height * input.Width * input.Channels].
Declaration
public Layer LogSoftmax(string name, object input)
Parameters
Returns
Max(String, Object[])
Element-wise max of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Max(string name, object[] inputs)
Parameters
Returns
MaxPool2D(String, Object, Int32[], Int32[], Int32[])
Apply 'max' pooling by downscaling the H and W dimensions according to pool, stride and pad.
Pool and stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Output batch and channel dimensions are the same as the input's.
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - pool[1,0]) / stride[1,0] + 1.
Declaration
public Layer MaxPool2D(string name, object input, int[] pool, int[] stride, int[] pad)
Parameters
Returns
Mean(String, Object[])
Element-wise mean of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Mean(string name, object[] inputs)
Parameters
Returns
Min(String, Object[])
Element-wise min of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Min(string name, object[] inputs)
Parameters
Returns
Mul(String, Object[])
Element-wise multiplication of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Mul(string name, object[] inputs)
Parameters
Returns
Neg(String, Object)
Element-wise Neg activation function: f(x) = -x
Declaration
public Layer Neg(string name, object input)
Parameters
Returns
Normalization(String, Object, Tensor, Tensor, Single)
Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022
y = scale * (x - mean) / sqrt(variance + epsilon) + bias, where mean and variance are computed per instance per channel.
Scale and bias should be tensors of shape [1,1,1, input.shape[C]]
Output shape is the same as the input's.
Declaration
public Layer Normalization(string name, object input, Tensor scale, Tensor bias, float epsilon = 1E-05F)
Parameters
Returns
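A short sketch with an illustrative 16-channel input, given a `builder` as constructed earlier:

```csharp
// Per-channel scale and bias tensors of shape [1, 1, 1, C].
var scale = new Tensor(1, 1, 1, 16);
var shift = new Tensor(1, 1, 1, 16);
var norm = builder.Normalization("inorm", "conv_out", scale, shift); // epsilon defaults to 1e-5
```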
Output(Object)
Add an output to the model
Declaration
public string Output(object input)
Parameters
| Type | Name | Description |
| --- | --- | --- |
| Object | input | |
Returns
Pad2DEdge(String, Object, Int32[])
Pads the H and W dimensions by repeating the edge values of the input.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1)
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1] results in a tensor of shape (1,4,7,1)
[1, 1, 1, 2, 3, 3, 3],
[1, 1, 1, 2, 3, 3, 3],
[4, 4, 4, 5, 6, 6, 6],
[4, 4, 4, 5, 6, 6, 6]
Declaration
public Layer Pad2DEdge(string name, object input, int[] pad)
Parameters
Returns
Pad2DReflect(String, Object, Int32[])
Pads the H and W dimensions by mirroring on the first and last values along those axes.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1)
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1] results in a tensor of shape (1,4,7,1)
[6, 5, 4, 5, 6, 5, 4],
[3, 2, 1, 2, 3, 2, 1],
[6, 5, 4, 5, 6, 5, 4],
[3, 2, 1, 2, 3, 2, 1]
Declaration
public Layer Pad2DReflect(string name, object input, int[] pad)
Parameters
Returns
Pad2Symmetric(String, Object, Int32[])
Pads the H and W dimensions with symmetric replication along those axes.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1)
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1] results in a tensor of shape (1,4,7,1)
[2, 1, 1, 2, 3, 3, 2],
[2, 1, 1, 2, 3, 3, 2],
[5, 4, 4, 5, 6, 6, 5],
[5, 4, 4, 5, 6, 6, 5]
Declaration
public Layer Pad2Symmetric(string name, object input, int[] pad)
Parameters
Returns
Pow(String, Object[])
Element-wise pow of each of the input tensors with multidimensional broadcasting support.
The first element is raised to the power of the second, then the result is raised to the power of the third, and so on.
Declaration
public Layer Pow(string name, object[] inputs)
Parameters
Returns
PRelu(String, Object, Object)
Element-wise PRelu activation function: f(x) = x if x >= 0 else slope * x
Declaration
public Layer PRelu(string name, object input, object slope)
Parameters
Returns
RandomNormal(String, Single, Single, Single, Int32[])
Return a tensor of the given shape with values drawn from a normal distribution with the given mean and scale (standard deviation); seed initializes the random number generator.
Declaration
public Layer RandomNormal(string name, float mean, float scale, float seed, int[] shape)
Parameters
Returns
RandomNormal(String, Single, Single, Single, Object)
As above, but the output shape is taken from the input tensor.
Declaration
public Layer RandomNormal(string name, float mean, float scale, float seed, object input)
Parameters
Returns
RandomUniform(String, Single, Single, Single, Int32[])
Return a tensor of the given shape with values drawn from a uniform distribution between min and max; seed initializes the random number generator.
Declaration
public Layer RandomUniform(string name, float min, float max, float seed, int[] shape)
Parameters
Returns
RandomUniform(String, Single, Single, Single, Object)
As above, but the output shape is taken from the input tensor.
Declaration
public Layer RandomUniform(string name, float min, float max, float seed, object input)
Parameters
Returns
Reciprocal(String, Object)
Element-wise Reciprocal activation function: f(x) = 1/x
Declaration
public Layer Reciprocal(string name, object input)
Parameters
Returns
Relu(String, Object)
Element-wise Relu activation function: f(x) = max(0, x)
Declaration
public Layer Relu(string name, object input)
Parameters
Returns
Relu6(String, Object)
Element-wise Relu6 activation function: f(x) = min(max(0, x), 6).
Declaration
public Layer Relu6(string name, object input)
Parameters
Returns
Reshape(String, Object, Int32[])
Return a tensor of the requested shape. Input and output must contain the same number of elements.
Declaration
public Layer Reshape(string name, object input, int[] shape)
Parameters
Returns
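For example (illustrative shapes): a [1, 2, 3, 4] tensor holds 24 elements, so it can be reshaped to [1, 1, 1, 24] or [1, 4, 3, 2], but not to [1, 2, 2, 4].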
ScaleBias(String, Object, Tensor, Tensor)
Apply per channel scale and bias.
Scale and bias should be tensors of shape [1,1,1, input.shape[C]]
Output shape is the same as the input's.
Declaration
public Layer ScaleBias(string name, object input, Tensor scale, Tensor bias)
Parameters
Returns
Selu(String, Object, Single, Single)
Element-wise Selu activation function: f(x) = gamma * x if x >= 0 else (alpha * e^x - alpha).
The default alpha is 1.67326; the default gamma is 1.0507.
Declaration
public Layer Selu(string name, object input, float alpha = 1.67326F, float gamma = 1.0507F)
Parameters
Returns
Sigmoid(String, Object)
Element-wise Sigmoid activation function: f(x) = 1/(1 + e^{-x})
Declaration
public Layer Sigmoid(string name, object input)
Parameters
Returns
Softmax(String, Object)
Return the softmax (normalized exponential) values of the flattened HWC dimensions of the input.
The output will be of shape [input.Batch, input.Height * input.Width * input.Channels].
Declaration
public Layer Softmax(string name, object input)
Parameters
Returns
Sqrt(String, Object)
Element-wise Sqrt activation function: f(x) = sqrt(x)
Declaration
public Layer Sqrt(string name, object input)
Parameters
Returns
StridedSlice(String, Object, Int32[], Int32[], Int32[])
Produces a slice of the input tensor along all axes.
The following rules apply:
begin=0, end=0, stride=1: copy the full range of elements from the given axis
begin=A, end=B, stride=1: copy the range [A, B) (excluding the Bth element) from the given axis
begin=A, end=B, stride=I: copy every Ith element in the range [A, B) from the given axis
begin=N, end=N, stride=0: shrink axis to a single Nth element
output.shape[] = (ends[] - starts[]) / max(1, stride[])
Declaration
public Layer StridedSlice(string name, object input, int[] starts, int[] ends, int[] strides)
Parameters
Returns
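A sketch under the rules above, given a `builder` as constructed earlier (input assumed to be a [1, 4, 4, 3] tensor; shapes are illustrative):

```csharp
// Keep the full range on batch, W and C (begin=0, end=0, stride=1)
// and copy rows [0, 2) on H, yielding a [1, 2, 4, 3] tensor.
var top = builder.StridedSlice("top_rows", "input",
    starts:  new[] { 0, 0, 0, 0 },
    ends:    new[] { 0, 2, 0, 0 },
    strides: new[] { 1, 1, 1, 1 });
```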
Sub(String, Object[])
Element-wise sub of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Sub(string name, object[] inputs)
Parameters
Returns
Swish(String, Object)
Element-wise Swish activation function: f(x) = x * sigmoid(x).
Declaration
public Layer Swish(string name, object input)
Parameters
Returns
Tanh(String, Object)
Element-wise Tanh activation function: f(x) = (1 - e^{-2x})/(1 + e^{-2x})
Declaration
public Layer Tanh(string name, object input)
Parameters
Returns
Upsample2D(String, Object, Int32[])
Upsample the input tensor by scaling H and W by upsample[0] and upsample[1] respectively.
Upsampling is done using nearest neighbor.
Declaration
public Layer Upsample2D(string name, object input, int[] upsample)
Parameters
Returns
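For example (illustrative shapes), with upsample [2, 2] a [1, 8, 8, 3] tensor becomes [1, 16, 16, 3], each input value being repeated over a 2×2 block by nearest-neighbor sampling.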