Supported ONNX operators
When you import a model, each Open Neural Network Exchange (ONNX) operator in the model graph becomes an Inference Engine layer. The layer has the same name as the ONNX operator, unless the Notes column in the table below shows that the operator maps to a different layer.
For more information, refer to How Inference Engine optimizes a model.
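A few operators map to a differently named layer (for example, Dropout becomes Identity and the deprecated Upsample becomes Resize, per the Notes column of the table below). A minimal Python sketch of that lookup; the helper `layer_name` is hypothetical, and the dictionary is a hand-copied subset of the mapping from the table:

```python
# Subset of ONNX operators that map to a differently named Inference Engine
# layer, copied from the Notes column of the supported-operators table.
LAYER_RENAMES = {
    "Dropout": "Identity",
    "Scatter": "ScatterElements",   # deprecated ONNX operator
    "Sum": "Add",
    "Upsample": "Resize",           # deprecated ONNX operator
}

def layer_name(onnx_op: str) -> str:
    """Return the Inference Engine layer name an ONNX operator becomes."""
    # Operators not in the rename table keep their ONNX name.
    return LAYER_RENAMES.get(onnx_op, onnx_op)

print(layer_name("Dropout"))  # → Identity
print(layer_name("Conv"))     # → Conv
```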
Supported ONNX operators
The following table lists the ONNX operators that Inference Engine supports and the data types supported for each backend type.
Name | Supported data types with BackendType.CPU | Supported data types with BackendType.GPUCompute | Supported data types with BackendType.GPUPixel | Notes |
---|---|---|---|---|
Abs | float, int | float, int | float, int | |
Acos | float | float | float | |
Acosh | float | float | float | |
Add | float, int | float, int | float, int | |
And | int | int | int | |
ArgMax | float, int | float, int | float, int | |
ArgMin | float, int | float, int | float, int | |
Asin | float | float | float | |
Asinh | float | float | float | |
Atan | float | float | float | |
Atanh | float | float | float | |
AveragePool | float | float (1D and 2D only) | float (1D and 2D only) | The ceil_mode and count_include_pad parameters aren't supported. |
BatchNormalization | float | float | float | The momentum, spatial, and training_mode parameters aren't supported. |
Bernoulli | float, int | float, int | float, int | |
Cast | float, int, short | float, int, short | float, int, short | |
CastLike | float, int, short | float, int, short | float, int, short | |
Ceil | float | float | float | |
Celu | float | float | float | |
Clip | float, int | float, int | float, int | |
Compress | float, int | Not supported | Not supported | |
Concat | float, int | float, int | float, int | |
Constant | - | - | - | The sparse_value, value_string, and value_strings parameters aren't supported. |
ConstantOfShape | float, int | float, int | float, int | |
Conv | float | float* | float | Supports 1D, 2D or 3D convolutions. |
ConvTranspose | float | float | float | Supports 1D, 2D or 3D convolutions. The output_shape parameter isn't supported. |
Cos | float | float | float | |
Cosh | float | float | float | |
CumSum | float, int | float, int | float, int | |
DepthToSpace | float, int | float, int | float, int | |
Div | float, int | float, int | float, int | |
Dropout | - | - | - | The operator maps to the Inference Engine layer Identity. |
Einsum | float | float (1 or 2 inputs only) | Not supported | |
Elu | float | float | float | |
Equal | float, int | float, int | float, int | |
Erf | float | float | float | |
Exp | float | float | float | |
Expand | float, int | float, int | float, int | |
Flatten | float, int | float, int | float, int | |
Floor | float | float | float | |
Gather | float, int | float, int | float, int | |
GatherElements | float, int | float, int | float, int | |
GatherND | float, int | float, int | float, int | |
Gemm | float | float | float | |
GlobalAveragePool | float | float | float | |
GlobalMaxPool | float | float | float | |
Greater | float, int | float, int | float, int | |
GreaterOrEqual | float, int | float, int | float, int | |
GridSample | float | float | float | |
Hardmax | float | float | float | |
HardSigmoid | float | float | float | |
HardSwish | float | float | float | |
Identity | float, int | float, int | float, int | |
InstanceNormalization | float | float | float | |
IsInf | float | float | float (Infs not supported) | |
IsNaN | float | float | float (NaNs not supported) | |
LayerNormalization | float | float | float | |
LeakyRelu | float | float | float | |
Less | float, int | float, int | float, int | |
LessOrEqual | float, int | float, int | float, int | |
Log | float | float | float | |
LogSoftmax | float | float | float | |
LRN | float | Not supported | Not supported | |
LSTM | float | float | Not supported | |
MatMul | float | float* | float | |
Max | float, int | float, int | float, int | |
MaxPool | float | float (1D and 2D only) | float (1D and 2D only) | The ceil_mode, dilations, and storage_order parameters aren't supported. |
Mean | float | float | float | The operator maps to the Inference Engine layers Add and ScalarMad. |
Min | float, int | float, int | float, int | |
Mod | float, int | float, int | float, int | |
Mish | float | float | float | |
Mul | float, int | float, int | float, int | |
Multinomial | float | Not supported | Not supported | |
Neg | float, int | float, int | float, int | |
NonMaxSuppression | float | float | Not supported | |
NonZero | float, int | Not supported | Not supported | |
Not | int | int | int | |
OneHot | float, int | float, int | float, int | |
Or | int | int | int | |
Pad | float, int | float, int | float, int | |
Pow | float, int | float, int | float, int | |
PRelu | float | float | float | |
RandomNormal | float | float | float | |
RandomNormalLike | float | float | float | |
RandomUniform | float | float | float | |
RandomUniformLike | float | float | float | |
Range | float, int | float, int | float, int | |
Reciprocal | float | float | float | |
ReduceL1 | float, int | float*, int* | float, int | |
ReduceL2 | float | float* | float | |
ReduceLogSum | float | float* | float | |
ReduceLogSumExp | float | float | float | |
ReduceMax | float, int | float*, int* | float, int | |
ReduceMean | float | float* | float | |
ReduceMin | float, int | float*, int* | float, int | |
ReduceProd | float, int | float*, int* | float, int | |
ReduceSum | float, int | float*, int* | float, int | |
ReduceSumSquare | float, int | float*, int* | float, int | |
Relu | float | float | float | |
Reshape | float, int | float, int | float, int | |
Resize | float | float | float | The cubic_coeff_a, exclude_outside, extrapolation_value, and roi parameters aren't supported. The half_pixel_symmetric option for coordinate_transform_mode isn't supported. |
RoiAlign | float | float | float | |
Round | float | float | float | |
Scatter (deprecated) | float, int | float, int | float, int | The operator maps to the Inference Engine layer ScatterElements. |
ScatterElements | float, int | float, int (no ScatterReductionMode) | float, int | |
ScatterND | float, int | float, int | float, int | |
Selu | float | float | float | |
Shape | - | - | - | The operator returns a CPU tensor without downloading the input tensor. |
Shrink | float | float | float | |
Sigmoid | float | float | float | |
Sign | float, int | float, int | float, int | |
Sin | float | float | float | |
Sinh | float | float | float | |
Size | - | - | - | The operator returns a CPU tensor without downloading the input tensor. |
Slice | float, int | float, int | float, int | |
Softmax | float | float | float | |
Softplus | float | float | float | |
Softsign | float | float | float | |
SpaceToDepth | float, int | float, int | float, int | |
Split | float, int | float, int | float, int | |
Sqrt | float | float | float | |
Squeeze | float, int | float, int | float, int | |
Sub | float, int | float, int | float, int | |
Sum | float, int | float, int | float, int | The operator maps to the Inference Engine layer Add. |
Tan | float | float | float | |
Tanh | float | float | float | |
ThresholdedRelu | float | float | float | |
Tile | float, int | float, int | float, int | |
TopK | float, int | float, int | Not supported | |
Transpose | float, int | float, int | float, int | |
Trilu | float, int | float, int | float, int | |
Unsqueeze | float, int | float, int | float, int | |
Upsample (deprecated) | float | float | float | The operator maps to the Inference Engine layer Resize. |
Where | float, int | float, int | float, int | |
Xor | int | int | int | |
* Inference Engine uses DirectML to accelerate these operators on supported hardware.
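One practical use of the table above is to check, before importing a model, whether its graph contains any operators Inference Engine can't handle. A minimal Python sketch under stated assumptions: the SUPPORTED_OPS set is an abridged hand-copy of the Name column (extend it with the full table), and the comment shows how the operator names would come from a real model via the onnx Python package:

```python
# Abridged subset of the Name column of the table above.
SUPPORTED_OPS = {
    "Abs", "Add", "Concat", "Conv", "Gemm", "MatMul", "MaxPool",
    "Mul", "Relu", "Reshape", "Sigmoid", "Softmax", "Transpose",
}

def unsupported_ops(op_types):
    """Return the operators in op_types that aren't in the supported set."""
    return sorted(set(op_types) - SUPPORTED_OPS)

# With the onnx package, op_types would come from the model graph:
#   model = onnx.load("model.onnx")
#   op_types = [n.op_type for n in model.graph.node]
print(unsupported_ops(["Conv", "Relu", "GRU", "Softmax"]))  # → ['GRU']
```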
Inference Engine-only layers
Inference Engine might create the following layers when it optimizes the model.
Name | Supported data types with BackendType.CPU | Supported data types with BackendType.GPUCompute | Supported data types with BackendType.GPUPixel |
---|---|---|---|
BroadcastArgs | int | - | - |
Dense | float | float* | float |
DequantizeUint8 | byte | byte | byte |
Gelu | float | float | float |
GeluFast | float | float | float |
MatMul2D | float | float* | float |
MoveDim | float, int | float, int | float, int |
Narrow | float, int | float, int | float, int |
RandomChoice | float, int | float, int | float, int |
Relu6 | float | float | float |
RMSNormalization | float | float | float |
ScalarMad | float, int | float, int | float, int |
ScaleBias | float | float | float |
Select | float, int | float, int | float, int |
SliceSet | float, int | float, int | float, int |
Square | float, int | float, int | float, int |
Swish | float | float | float |
* Inference Engine uses DirectML to accelerate these operators on supported hardware.
Unsupported operators
The following ONNX operators aren't supported in the current version of Inference Engine.
- AffineGrid
- BitShift
- BitwiseAnd
- BitwiseNot
- BitwiseOr
- BitwiseXor
- CenterCropPad
- Col2Im
- ConcatFromSequence
- ConvInteger
- DeformConv
- DequantizeLinear
- Det
- DynamicQuantizeLinear
- EyeLike
- GlobalLpPool
- GroupNormalization
- GRU
- If
- ImageDecoder
- Loop
- LpNormalization
- LpPool
- MatMulInteger
- MaxRoiPool
- MaxUnpool
- MeanVarianceNormalization
- NegativeLogLikelihoodLoss
- Optional
- OptionalGetElement
- OptionalHasElement
- QLinearConv
- QLinearMatMul
- QuantizeLinear
- RegexFullMatch
- ReverseSequence
- RNN
- Scan
- SequenceAt
- SequenceConstruct
- SequenceEmpty
- SequenceErase
- SequenceInsert
- SequenceLength
- SequenceMap
- SoftmaxCrossEntropyLoss
- SplitToSequence
- StringConcat
- StringNormalizer
- StringSplit
- TfIdfVectorizer
- Unique