Class ModelBuilder
Syntax
public class ModelBuilder
Constructors
ModelBuilder(Model)
Create a model builder helper to construct the underlying Model.
Declaration
public ModelBuilder(Model model = null)
Parameters
Type | Name | Description
---- | ---- | -----------
Model | model | Optional model to build upon (defaults to null).
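Examples
A minimal usage sketch (the Unity.Barracuda namespace is an assumption; older package versions exposed the Barracuda namespace instead):

```csharp
using Unity.Barracuda; // assumed namespace

// Build a trivial model: one input passed through an identity layer.
var builder = new ModelBuilder();                // no argument: constructs a fresh Model
var x = builder.Input("x", 1, 1, 1, 4);          // batch, height, width, channels
builder.Output(builder.Identity("copy", x));     // mark the layer as a model output
Model model = builder.model;                     // the Model under construction
```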
Properties
model
Declaration
public Model model { get; }
Property Value
Methods
Abs(String, Object)
Element-wise function that calculates absolute values of the input: f(x) = abs(x)
Declaration
public Layer Abs(string name, object input)
Parameters
Returns
Add(String, Object[])
Element-wise add of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Add(string name, object[] inputs)
Parameters
Returns
AvgPool2D(String, Object, Int32[], Int32[], Int32[])
Apply 'average' pooling by downscaling the H and W dimensions according to pool, stride and pad.
Pool and stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Output batch and channel dimensions are the same as the input's.
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - pool[1,0]) / stride[1,0] + 1.
Declaration
public Layer AvgPool2D(string name, object input, int[] pool, int[] stride, int[] pad)
Parameters
Returns
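Examples
Applying the output-shape formula above: a 4x4 input with a 2x2 pool, stride 2 and no padding yields (4 + 0 + 0 - 2) / 2 + 1 = 2 in each spatial dimension. A sketch, assuming builder and a 1x4x4x8 input x declared via Input as in the constructor example:

```csharp
// 1x4x4x8 input -> 1x2x2x8 output
var pooled = builder.AvgPool2D("pool", x,
    pool:   new[] { 2, 2 },          // [W, H]
    stride: new[] { 2, 2 },          // [W, H]
    pad:    new[] { 0, 0, 0, 0 });   // [pre W, pre H, post W, post H]
```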
Border2D(String, Object, Int32[], Single)
Pads the H and W dimensions with a given constant value (defaults to 0).
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
If pad contains negative values, the H and W dimensions are cropped instead.
For example, a tensor of shape (1,2,3,1):
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1]
results in a tensor of shape (1,4,7,1):
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 2, 3, 0, 0],
[0, 0, 4, 5, 6, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
Declaration
public Layer Border2D(string name, object input, int[] pad, float constantValue = 0F)
Parameters
Returns
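Examples
The worked example above can be reproduced directly (a sketch; x stands for the (1,2,3,1) input, built as in the constructor example):

```csharp
// Negative pad entries would crop the H and W dimensions instead.
var padded = builder.Border2D("pad", x, new[] { 2, 1, 2, 1 }, constantValue: 0f);
```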
Ceil(String, Object)
Element-wise function that rounds up to the least integer greater than or equal to the input value: f(x) = ceil(x)
Declaration
public Layer Ceil(string name, object input)
Parameters
Returns
Clip(String, Object, Single, Single)
Element-wise function that clamps the input values into the [min, max] range: f(x) = min(max(x, min), max)
Declaration
public Layer Clip(string name, object input, float min, float max)
Parameters
Returns
Concat(String, Object[], Int32)
Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the axis to concatenate on.
Declaration
public Layer Concat(string name, object[] inputs, int axis = -1)
Parameters
Returns
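Examples
A sketch merging two branches that match in every dimension except channels (branchA and branchB are hypothetical earlier layers):

```csharp
// axis -1 concatenates along the last (channel) axis
var merged = builder.Concat("merge", new object[] { branchA, branchB }, axis: -1);
```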
Const(String, Tensor, Int32)
Allows a tensor to be loaded from constants.
Declaration
public Layer Const(string name, Tensor tensor, int insertionIndex = -1)
Parameters
Returns
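Examples
A sketch of injecting constant data (the Tensor constructor taking a TensorShape and a float array is assumed):

```csharp
var values = new Tensor(new TensorShape(1, 1, 1, 4), new float[] { 1f, 2f, 3f, 4f });
var constant = builder.Const("four_values", values);
```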
Conv2D(String, Object, Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of shape [kernelHeight, kernelWidth, kernelDepth, kernelCount]
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - kernel.shape[1,0]) / stride[1,0] + 1.
Declaration
public Layer Conv2D(string name, object input, int[] stride, int[] pad, Tensor kernel, Tensor bias)
Parameters
Returns
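Examples
With a 3x3 kernel, stride 1 and pad 1, the formula above preserves H and W: (H + 1 + 1 - 3) / 1 + 1 = H. A sketch for an 8-channel input x from the earlier examples (zero-initialized weights shown only for brevity):

```csharp
// 16 filters of size 3x3 over 8 input channels
var kernel = new Tensor(new TensorShape(3, 3, 8, 16), new float[3 * 3 * 8 * 16]);
var bias   = new Tensor(new TensorShape(1, 1, 1, 16), new float[16]);
var conv = builder.Conv2D("conv", x,
    stride: new[] { 1, 1 },          // [W, H]
    pad:    new[] { 1, 1, 1, 1 },    // [pre W, pre H, post W, post H]
    kernel: kernel, bias: bias);
```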
Conv2DTrans(String, Object, Int32[], Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D transposed convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of rank 4 of dimensions [kernelHeight, kernelWidth, kernelDepth, kernelCount]
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
OutputPad should be of length 0 or 2, format is [W, H].
If OutputPad has length 0 it defaults to:
OutputPad[W,H] = (input.shape[W,H] * stride[0,1] + pad[0,1] + pad[2,3] - [kernelWidth, kernelHeight]) % stride[0,1]
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W]-1) * stride[0,1] - (pad[1,0] + pad[3,2]) + [kernelWidth, kernelHeight] + OutputPad[W,H]
Declaration
public Layer Conv2DTrans(string name, object input, int[] stride, int[] pad, int[] outputPad, Tensor kernel, Tensor bias)
Parameters
Returns
Dense(String, Object, Tensor, Tensor)
Apply a densely connected layer (aka general matrix multiplication or GEMM)
Weight should be a tensor with (batch == input.shape[H] * input.shape[W] * input.shape[C]) and only one other dimension of size > 1.
Bias should be a tensor with (batch == 1) and (height * width * channels == weight.shape[H] * weight.shape[W] * weight.shape[C]).
Output shape is [input.shape[B], 1, 1, weight.shape[H] * weight.shape[W] * weight.shape[C]].
Declaration
public Layer Dense(string name, object input, Tensor weight, Tensor bias)
Parameters
Returns
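Examples
Under the shape rules above, mapping 8 flattened input values to 3 outputs might look like this (a sketch; the exact weight/bias layout is inferred from the constraints stated here and should be treated as an assumption):

```csharp
// weight: batch == 8 (input H*W*C), flat width == 3 output channels (assumed layout)
var weight = new Tensor(new TensorShape(8, 1, 1, 3), new float[8 * 3]);
// bias: batch == 1, flat width == 3
var bias = new Tensor(new TensorShape(1, 1, 1, 3), new float[3]);
var fc = builder.Dense("fc", x, weight, bias);   // output: [input.shape[B], 1, 1, 3]
```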
DepthwiseConv2D(String, Object, Int32[], Int32[], Tensor, Tensor)
Apply a spatial 2D depthwise convolution on H and W.
Stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Kernel should be a tensor of shape [kernelHeight, kernelWidth, kernelDepth, kernelCount]
Thus input must have a channel dimension of 1
Bias should be a tensor with (batch == 1) and (height * width * channels == kernelCount)
Output batch is the same as the input's.
Output channel count is kernel.shape[3].
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - kernel.shape[1,0]) / stride[1,0] + 1.
Declaration
public Layer DepthwiseConv2D(string name, object input, int[] stride, int[] pad, Tensor kernel, Tensor bias)
Parameters
Returns
Div(String, Object[])
Element-wise division of each of the input tensors with multidimensional broadcasting support.
The first tensor is divided by the second, then the result is divided by the third, and so on.
Declaration
public Layer Div(string name, object[] inputs)
Parameters
Returns
Elu(String, Object, Single)
Element-wise Elu activation function: f(x) = x if x >= 0 else alpha*(e^x - 1)
alpha defaults to 1.0
Declaration
public Layer Elu(string name, object input, float alpha = 1F)
Parameters
Returns
Equal(String, Object, Object)
Performs an equal logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Declaration
public Layer Equal(string name, object input0, object input1)
Parameters
Returns
Exp(String, Object)
Element-wise Exp function that calculates the exponential of the input: f(x) = e^{x}
Declaration
public Layer Exp(string name, object input)
Parameters
Returns
Flatten(String, Object)
Return a tensor of shape [input.Batch, input.Height * input.Width * input.Channels]
Declaration
public Layer Flatten(string name, object input)
Parameters
Returns
Floor(String, Object)
Element-wise function that rounds down to the greatest integer less than or equal to the input value: f(x) = floor(x)
Declaration
public Layer Floor(string name, object input)
Parameters
Returns
Gather(String, Object, Object, Int32)
Gathers input along provided axis. Swizzling pattern is given by input indices:
axis == 0: gatheredData[b, y, x, c] = data[indices[b], y, x, c]
axis == 1: gatheredData[b, y, x, c] = data[b, indices[y], x, c]
axis == 2: gatheredData[b, y, x, c] = data[b, y, indices[x], c]
axis == 3: gatheredData[b, y, x, c] = data[b, y, x, indices[c]]
Declaration
public Layer Gather(string name, object input, object indices, int axis = -1)
Parameters
Returns
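Examples
A sketch selecting channels 3 and 1 (in that order) at every position, with the indices supplied through a Const layer (storing the indices as floats is an assumption):

```csharp
var idx = builder.Const("idx",
    new Tensor(new TensorShape(1, 1, 1, 2), new float[] { 3f, 1f }));
// axis 3: gathered[b, y, x, c] = data[b, y, x, indices[c]]
var picked = builder.Gather("pick", x, idx, axis: 3);
```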
GlobalAvgPool2D(String, Object)
Apply 'average' pooling by downscaling H and W dimension to [1,1]
Declaration
public Layer GlobalAvgPool2D(string name, object input)
Parameters
Returns
GlobalMaxPool2D(String, Object)
Apply 'max' pooling by downscaling H and W dimension to [1,1]
Declaration
public Layer GlobalMaxPool2D(string name, object input)
Parameters
Returns
Greater(String, Object, Object)
Performs a greater logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Declaration
public Layer Greater(string name, object input0, object input1)
Parameters
Returns
GreaterEqual(String, Object, Object)
Performs a greaterEqual logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Declaration
public Layer GreaterEqual(string name, object input0, object input1)
Parameters
Returns
Identity(String, Object)
No-op layer: the output is a copy of the input tensor.
Declaration
public Layer Identity(string name, object input)
Parameters
Returns
Input(String, TensorShape)
Add an input to the model
Declaration
public Model.Input Input(string name, TensorShape shape)
Parameters
Returns
Input(String, Int32, Int32)
Add an input to the model
Declaration
public Model.Input Input(string name, int batch, int channels)
Parameters
Returns
Input(String, Int32, Int32, Int32, Int32)
Add an input to the model
Declaration
public Model.Input Input(string name, int batch, int height, int width, int channels)
Parameters
Returns
Input(String, Int32[])
Add an input to the model
Declaration
public Model.Input Input(string name, int[] shape)
Parameters
Returns
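Examples
The overloads differ only in how the shape is supplied; these calls declare equivalent 4D inputs (a sketch):

```csharp
var viaShape = builder.Input("a", new TensorShape(1, 224, 224, 3));
var viaInts  = builder.Input("b", 1, 224, 224, 3);
var viaArray = builder.Input("c", new[] { 1, 224, 224, 3 });
```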
LeakyRelu(String, Object, Single)
Element-wise LeakyRelu activation function: f(x) = x if x >= 0 else alpha * x
alpha defaults to 0.01
Declaration
public Layer LeakyRelu(string name, object input, float alpha = 0.01F)
Parameters
Returns
Less(String, Object, Object)
Performs a less logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Declaration
public Layer Less(string name, object input0, object input1)
Parameters
Returns
LessEqual(String, Object, Object)
Performs a lessEqual logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Declaration
public Layer LessEqual(string name, object input0, object input1)
Parameters
Returns
Log(String, Object)
Element-wise Log function that calculates the natural log of the input: f(x) = log(x)
Declaration
public Layer Log(string name, object input)
Parameters
Returns
LogicalAnd(String, Object, Object)
Performs an and logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Input is considered false where it equals 0.0, true otherwise.
Declaration
public Layer LogicalAnd(string name, object input0, object input1)
Parameters
Returns
LogicalNot(String, Object)
Performs a not logical operation element-wise on the input tensor.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Input is considered false where it equals 0.0, true otherwise.
Declaration
public Layer LogicalNot(string name, object input)
Parameters
Returns
LogicalOr(String, Object, Object)
Performs an or logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Input is considered false where it equals 0.0, true otherwise.
Declaration
public Layer LogicalOr(string name, object input0, object input1)
Parameters
Returns
LogicalXor(String, Object, Object)
Performs a xor logical operation element-wise on the input tensors with multidimensional broadcasting support.
Returns 1.0 element-wise where the condition is true, 0.0 otherwise.
Input is considered false where it equals 0.0, true otherwise.
Declaration
public Layer LogicalXor(string name, object input0, object input1)
Parameters
Returns
LogSoftmax(String, Object)
Return the log-softmax (log of the normalized exponential) values of the flattened HWC dimensions of the input.
Thus the output will be of shape [input.Batch, input.Height * input.Width * input.Channels]
Declaration
public Layer LogSoftmax(string name, object input)
Parameters
Returns
Max(String, Object[])
Element-wise max of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Max(string name, object[] inputs)
Parameters
Returns
MaxPool2D(String, Object, Int32[], Int32[], Int32[])
Apply 'max' pooling by downscaling the H and W dimensions according to pool, stride and pad.
Pool and stride should be of size 2 and format is [W, H].
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
Output batch and channel dimensions are the same as the input's.
output.shape[H,W] = (input.shape[H,W] + pad[1,0] + pad[3,2] - pool[1,0]) / stride[1,0] + 1.
Declaration
public Layer MaxPool2D(string name, object input, int[] pool, int[] stride, int[] pad)
Parameters
Returns
Mean(String, Object[])
Element-wise mean of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Mean(string name, object[] inputs)
Parameters
Returns
Memory(Object, Object, TensorShape)
Add a memory (recurrent state) of the given shape to the model, wired between the given input and output.
Declaration
public Model.Memory Memory(object input, object output, TensorShape shape)
Parameters
Returns
Min(String, Object[])
Element-wise min of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Min(string name, object[] inputs)
Parameters
Returns
Mul(String, Object[])
Element-wise multiplication of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Mul(string name, object[] inputs)
Parameters
Returns
Multinomial(String, Object, Int32, Single)
Generate a Tensor with random samples drawn from a multinomial distribution according to the probabilities of each of the possible outcomes.
Output batch is the same as the input's.
Output channel count is numberOfSamplesDrawnPerInputChannel.
Declaration
public Layer Multinomial(string name, object input, int numberOfSamplesDrawnPerInputChannel, float seed)
Parameters
Returns
Neg(String, Object)
Element-wise function that flips the sign of the input: f(x) = -x
Declaration
public Layer Neg(string name, object input)
Parameters
Returns
Normalization(String, Object, Tensor, Tensor, Single)
Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022
y = scale * (x - mean) / sqrt(variance + epsilon) + bias, where mean and variance are computed per instance per channel.
Scale and bias should be tensors of shape [1,1,1, input.shape[C]]
Output shape is the same as the input's.
Declaration
public Layer Normalization(string name, object input, Tensor scale, Tensor bias, float epsilon = 1E-05F)
Parameters
Returns
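Examples
Per the shape rule above, scale and bias carry one value per input channel. A sketch for an 8-channel input x (unit scale and zero bias shown):

```csharp
var scale = new Tensor(new TensorShape(1, 1, 1, 8),
    new float[] { 1, 1, 1, 1, 1, 1, 1, 1 });
var bias = new Tensor(new TensorShape(1, 1, 1, 8), new float[8]); // zeros
var norm = builder.Normalization("inorm", x, scale, bias, epsilon: 1e-5f);
```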
OneHot(String, Object, Int32, Int32, Int32)
Maps integer values to one-hot vectors of length equal to depth.
Declaration
public Layer OneHot(string name, object input, int depth, int on, int off)
Parameters
Returns
Output(Object)
Add an output to the model
Declaration
public string Output(object input)
Parameters
Type | Name | Description
---- | ---- | -----------
Object | input | The layer (or layer name) to expose as a model output.
Returns
Pad2DEdge(String, Object, Int32[])
Pads the H and W dimensions by repeating the edge values of the input.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1):
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1]
results in a tensor of shape (1,4,7,1):
[1, 1, 1, 2, 3, 3, 3],
[1, 1, 1, 2, 3, 3, 3],
[4, 4, 4, 5, 6, 6, 6],
[4, 4, 4, 5, 6, 6, 6]
Declaration
public Layer Pad2DEdge(string name, object input, int[] pad)
Parameters
Returns
Pad2DReflect(String, Object, Int32[])
Pads the H and W dimensions by mirroring on the first and last values along those axes.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1):
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1]
results in a tensor of shape (1,4,7,1):
[6, 5, 4, 5, 6, 5, 4],
[3, 2, 1, 2, 3, 2, 1],
[6, 5, 4, 5, 6, 5, 4],
[3, 2, 1, 2, 3, 2, 1]
Declaration
public Layer Pad2DReflect(string name, object input, int[] pad)
Parameters
Returns
Pad2Symmetric(String, Object, Int32[])
Pads the H and W dimensions with symmetric replication along those axes.
Pad should be of size 4 and format is [pre W, pre H, post W, post H].
For example, a tensor of shape (1,2,3,1):
[1, 2, 3],
[4, 5, 6]
with pad [2, 1, 2, 1]
results in a tensor of shape (1,4,7,1):
[2, 1, 1, 2, 3, 3, 2],
[2, 1, 1, 2, 3, 3, 2],
[5, 4, 4, 5, 6, 6, 5],
[5, 4, 4, 5, 6, 6, 5]
Declaration
public Layer Pad2Symmetric(string name, object input, int[] pad)
Parameters
Returns
Pow(String, Object[])
Element-wise pow of each of the input tensors with multidimensional broadcasting support.
The first tensor is raised to the power of the second, then the result is raised to the power of the third, and so on.
Declaration
public Layer Pow(string name, object[] inputs)
Parameters
Returns
PRelu(String, Object, Object)
Element-wise PRelu activation function: f(x) = x if x >= 0 else slope * x
Declaration
public Layer PRelu(string name, object input, object slope)
Parameters
Returns
RandomNormal(String, TensorShape, Single, Single, Single)
Generates a Tensor with random values drawn from a normal distribution.
The shape of the tensor is specified by shape.
The normal distribution is specified by mean and scale.
Declaration
public Layer RandomNormal(string name, TensorShape shape, float mean, float scale, float seed)
Parameters
Returns
RandomNormal(String, Object, Single, Single, Single)
Generates a Tensor with random values drawn from a normal distribution.
The shape of the tensor matches the shape of the input tensor.
The normal distribution is specified by mean and scale.
Declaration
public Layer RandomNormal(string name, object input, float mean, float scale, float seed)
Parameters
Returns
RandomUniform(String, TensorShape, Single, Single, Single)
Generates a Tensor with random values drawn from a uniform distribution.
The shape of the tensor is specified by shape.
The uniform distribution is specified by the min and max range.
Declaration
public Layer RandomUniform(string name, TensorShape shape, float min, float max, float seed)
Parameters
Returns
RandomUniform(String, Object, Single, Single, Single)
Generates a Tensor with random values drawn from a uniform distribution.
The shape of the tensor matches the shape of the input tensor.
The uniform distribution is specified by the min and max range.
Declaration
public Layer RandomUniform(string name, object input, float min, float max, float seed)
Parameters
Returns
Reciprocal(String, Object)
Element-wise function that calculates the reciprocal of the input: f(x) = 1/x
Declaration
public Layer Reciprocal(string name, object input)
Parameters
Returns
Reduce(Layer.Type, String, Object, Int32)
Computes a reduce operation (max/min/mean/prod/sum) over the input tensor's elements along the provided axis.
Declaration
public Layer Reduce(Layer.Type type, string name, object input, int axis = -1)
Parameters
Returns
Relu(String, Object)
Element-wise Relu
activation function: f(x) = max(0, x)
Declaration
public Layer Relu(string name, object input)
Parameters
Returns
Relu6(String, Object)
Element-wise Relu6 activation function: f(x) = min(max(0, x), 6)
Declaration
public Layer Relu6(string name, object input)
Parameters
Returns
Reshape(String, Object, TensorShape)
Apply shape to the input tensor. The number of elements in the shape must match the number of elements in the input tensor.
Declaration
public Layer Reshape(string name, object input, TensorShape shape)
Parameters
Returns
Reshape(String, Object, Int32[])
Apply a symbolic shape to the input tensor. The symbolic shape can have at most one dimension specified as unknown (value -1).
Declaration
public Layer Reshape(string name, object input, int[] shape)
Parameters
Returns
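Examples
A sketch flattening the spatial dimensions of the earlier input x while letting -1 stand in for the single unknown dimension:

```csharp
// For a 1x4x4x8 input the -1 resolves to 4 * 4 * 8 = 128 channels.
var flat = builder.Reshape("flat", x, new[] { 1, 1, 1, -1 });
```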
Reshape(String, Object, Object)
Return a tensor with the same shape as another tensor. Both tensors must have the same number of elements.
Declaration
public Layer Reshape(string name, object input, object shapeLike)
Parameters
Returns
Round(String, Object)
Element-wise function that rounds the input to the nearest integer: f(x) = round(x)
Declaration
public Layer Round(string name, object input)
Parameters
Returns
ScaleBias(String, Object, Tensor, Tensor)
Apply per channel scale and bias.
Scale and bias should be tensors of shape [1,1,1, input.shape[C]]
Output shape is the same as the input's.
Declaration
public Layer ScaleBias(string name, object input, Tensor scale, Tensor bias)
Parameters
Returns
Selu(String, Object, Single, Single)
Element-wise Selu activation function: f(x) = gamma * x if x >= 0 else gamma * (alpha * e^x - alpha)
alpha defaults to 1.67326
gamma defaults to 1.0507
Declaration
public Layer Selu(string name, object input, float alpha = 1.67326F, float gamma = 1.0507F)
Parameters
Returns
Sigmoid(String, Object)
Element-wise Sigmoid activation function: f(x) = 1/(1 + e^{-x})
Declaration
public Layer Sigmoid(string name, object input)
Parameters
Returns
Softmax(String, Object)
Return the softmax (normalized exponential) values of the flattened HWC dimensions of the input.
Thus the output will be of shape [input.Batch, input.Height * input.Width * input.Channels]
Declaration
public Layer Softmax(string name, object input)
Parameters
Returns
Sqrt(String, Object)
Element-wise Sqrt activation function: f(x) = sqrt(x)
Declaration
public Layer Sqrt(string name, object input)
Parameters
Returns
StridedSlice(String, Object, Int32[], Int32[], Int32[])
Produces a slice of the input tensor along all axes.
The following rules apply:
begin=0, end=0, stride=1: copy the full range of elements from the given axis
begin=A, end=B, stride=1: copy the range [A, B) (excluding the Bth element) from the given axis
begin=A, end=B, stride=I: copy every Ith element in the range [A, B) from the given axis
begin=N, end=N, stride=0: shrink axis to a single Nth element
output.shape[] = (ends[] - starts[]) / max(1, stride[])
Declaration
public Layer StridedSlice(string name, object input, int[] starts, int[] ends, int[] strides)
Parameters
Returns
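Examples
Applying the rules above to take every second element along W of a 1x4x4x8 input x while copying the other axes in full (a sketch):

```csharp
// Axis order is [B, H, W, C]; begin=0, end=0, stride=1 copies an axis unchanged.
var sliced = builder.StridedSlice("slice", x,
    starts:  new[] { 0, 0, 0, 0 },
    ends:    new[] { 0, 0, 4, 0 },   // W: range [0, 4)
    strides: new[] { 1, 1, 2, 1 });  // W: every 2nd element -> output W = 2
```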
Sub(String, Object[])
Element-wise subtraction of each of the input tensors with multidimensional broadcasting support.
Declaration
public Layer Sub(string name, object[] inputs)
Parameters
Returns
Swish(String, Object)
Element-wise Swish activation function: f(x) = x * sigmoid(x) = x / (1 + e^{-x})
Declaration
public Layer Swish(string name, object input)
Parameters
Returns
Tanh(String, Object)
Element-wise Tanh activation function: f(x) = (1 - e^{-2x})/(1 + e^{-2x})
Declaration
public Layer Tanh(string name, object input)
Parameters
Returns
Upsample2D(String, Object, Int32[])
Upsample the input tensor by scaling W and H by upsample[0] and upsample[1] respectively.
Upsampling is done using nearest-neighbor sampling.
Declaration
public Layer Upsample2D(string name, object input, int[] upsample)
Parameters
Returns
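Examples
A sketch doubling both spatial dimensions of the earlier input x with nearest-neighbor sampling:

```csharp
// upsample = [2, 2]: W and H are each scaled by 2
// (the [W, H] convention used elsewhere in this class is assumed)
var up = builder.Upsample2D("up2x", x, new[] { 2, 2 });
```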