Create a model
Create a runtime model by importing an ONNX model file or using the Inference Engine functional API.
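For example, here is a minimal sketch of the first workflow, creating a runtime model from an imported ONNX asset. It assumes the `Unity.InferenceEngine` namespace and the `ModelLoader.Load` API of recent Inference Engine releases (earlier Sentis releases use the `Unity.Sentis` namespace instead):

```csharp
using UnityEngine;
using Unity.InferenceEngine; // assumed namespace; earlier Sentis releases use Unity.Sentis

public class CreateRuntimeModel : MonoBehaviour
{
    // Assign the imported ONNX model asset in the Inspector.
    public ModelAsset modelAsset;

    Model runtimeModel;

    void Start()
    {
        // Create a runtime model from the imported ONNX asset.
        runtimeModel = ModelLoader.Load(modelAsset);
    }
}
```

The following pages describe both workflows in more detail.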
| Page | Description |
| --- | --- |
| Understand models in Inference Engine | Understand how Inference Engine optimizes models, and learn about fixed and dynamic input dimensions. |
| Export and convert a file to ONNX | Export an ONNX file from a machine learning framework and convert other file types to the ONNX format. |
| Import an ONNX file | Import an ONNX file and create a runtime model. |
| Supported models | Understand which models Inference Engine supports and find a model for your project. |
| Serialize a model | Create a serialized model (.sentis file) in the StreamingAssets folder. |
| Inspect a model | Check the inputs, outputs and layers of a model. |
| Create a new model | Create a new runtime model with the Inference Engine functional API. |
| Edit a model | Make changes to an existing runtime model. |
| Encrypt a model | Encrypt and decrypt an Inference Engine model. |
| Quantize a model | Quantize the weights of an Inference Engine model. |
| Supported ONNX operators | Understand which ONNX operators Inference Engine supports. |
| Supported functional methods | Understand which functional methods Inference Engine supports. |
| Model Asset Inspector | Understand the settings and properties of an imported model in the Unity Inspector. |
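As a rough sketch of the second workflow, you can build a model from scratch with the functional API. This sketch assumes the `FunctionalGraph` and `Functional` types of recent Inference Engine releases; exact names may vary between package versions:

```csharp
using UnityEngine;
using Unity.InferenceEngine; // assumed namespace; earlier Sentis releases use Unity.Sentis

public class CreateFunctionalModel : MonoBehaviour
{
    Model runtimeModel;

    void Start()
    {
        // Declare a graph with one float input of shape (1, 4).
        var graph = new FunctionalGraph();
        FunctionalTensor x = graph.AddInput<float>(new TensorShape(1, 4));

        // Define the computation: y = relu(x).
        FunctionalTensor y = Functional.Relu(x);

        // Compile the graph into a runtime model.
        runtimeModel = graph.Compile(y);
    }
}
```

See Create a new model and Supported functional methods for the full set of supported operations.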