Supported models
You can import open-source models into your Inference Engine project. Explore the following sections to understand the models Inference Engine supports and find an appropriate model for your project.
Pre-trained models
You can find pre-trained models from various sources. These models might be available in the ONNX format, or in a format you can convert to ONNX. Examples include the following:
- Hugging Face
- Kaggle Models (formerly TensorFlow Hub)
- PyTorch Hub
- Model Zoo
- XetData
- Meta Research
If you want to train your own models, you can use a machine learning framework such as PyTorch or TensorFlow and export the trained model to ONNX.
Models from Hugging Face
You can access validated AI models for use with Inference Engine from Hugging Face. These models are already converted to the .sentis format, so you don't need to convert them from ONNX manually.
To browse and download models from Hugging Face, navigate to the Unity Hugging Face space and select a model under the Models section.
Each model page includes a How to Use section with instructions for importing the model into your Unity project.
ONNX models
You can also import models in the ONNX format. Inference Engine supports most ONNX models with opset versions 7 through 15. Models with opset versions outside this range might still import, but results can be unpredictable.
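As a quick sanity check before importing, you can inspect a model's opset version outside Unity. The sketch below assumes the third-party Python `onnx` package is available and uses a placeholder file path; the range check mirrors the supported 7 to 15 window described above.

```python
# Sketch: check whether an ONNX model's opset falls in the 7-15 window
# that Inference Engine supports. Assumes the `onnx` package is installed
# (pip install onnx); the file path is a placeholder.

SUPPORTED_MIN, SUPPORTED_MAX = 7, 15

def is_supported_opset(version: int) -> bool:
    """Return True if the opset version is inside the supported window."""
    return SUPPORTED_MIN <= version <= SUPPORTED_MAX

def check_model(path: str) -> bool:
    import onnx  # third-party dependency, imported lazily
    model = onnx.load(path)
    # The default ONNX domain entry ("" or "ai.onnx") carries the core opset.
    version = next(
        imp.version for imp in model.opset_import
        if imp.domain in ("", "ai.onnx")
    )
    return is_supported_opset(version)

print(is_supported_opset(15))  # True
print(is_supported_opset(16))  # False
```

If the check fails, you can often re-export the model from its original framework with a supported `opset_version` rather than relying on an out-of-range import.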
Unsupported models
Inference Engine doesn't support the following:
- Models that use tensors with more than eight dimensions
- Sparse input tensors or constants
- String tensors
- Complex number tensors
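Some of these constraints can be screened for before import. The Python sketch below checks tensor ranks against the eight-dimension limit above; the shape data and function names are illustrative, and in practice you would read shapes from the model's input and output metadata.

```python
# Sketch: validate tensor shapes against the eight-dimension limit.
# The shape dictionary is a made-up example; real shapes would come
# from the model's input/output metadata.

MAX_RANK = 8  # tensors with more than eight dimensions are unsupported

def check_ranks(shapes: dict[str, tuple[int, ...]]) -> list[str]:
    """Return the names of tensors whose rank exceeds the supported limit."""
    return [name for name, shape in shapes.items() if len(shape) > MAX_RANK]

shapes = {
    "image": (1, 3, 224, 224),  # rank 4: fine
    "oversized": (1,) * 9,      # rank 9: unsupported
}
print(check_ranks(shapes))  # ['oversized']
```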
Inference Engine also converts some tensor data types, such as booleans, to floats or integers. This might increase the memory your model uses.
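To illustrate the memory impact: a boolean element typically occupies one byte, so widening it to a 32-bit float or integer grows that tensor's storage roughly fourfold. The element count below is hypothetical.

```python
# Rough estimate of memory growth when boolean tensor data is widened
# to a 4-byte type. The element count is a made-up example.

BOOL_BYTES = 1   # typical one-byte storage per boolean element
WIDE_BYTES = 4   # 32-bit float or integer after conversion

elements = 1_000_000  # hypothetical boolean mask tensor
before = elements * BOOL_BYTES
after = elements * WIDE_BYTES
print(f"{before:,} B -> {after:,} B ({after // before}x)")
```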