Samples
The Inference Engine package includes samples to help you learn and use the API.
The following samples are available:
- Sample projects from the Inference Engine GitHub repository
- Sample scripts from the Package Manager
Validated models are also available for use in your project. To learn about and download the available models, refer to Supported models.
Sample projects
Full sample projects that demonstrate various Inference Engine use cases are available on GitHub.
To explore these projects:
- Visit the Inference Engine samples GitHub repository.
- Each project includes setup instructions, and some feature a video walkthrough in the `README` file.
Sample scripts
Use the sample scripts to implement specific features in your own project.
To find the sample scripts, follow these steps:
1. Go to Window > Package Manager, and select Inference Engine from the package list.
2. Select Samples.
3. To import a sample folder into your project, select Import.
Unity creates a `Samples` folder in your project and adds the selected sample script.
The following table describes the available samples:
Sample folder | Description |
--- | --- |
Convert tensors to textures | Examples of converting tensors to textures. For more information, refer to Use output data. |
Convert textures to tensors | Examples of converting textures to tensors. For more information, refer to Create input for a model. |
Copy a texture tensor to the screen | An example of using TextureConverter.RenderToScreen to copy a texture tensor to the screen. For more information, refer to Use output data. |
Encrypt a model | An example of serializing an encrypted model to disk using a custom editor window and loading that encrypted model at runtime. For more information, refer to Encrypt a model. |
Quantize a model | An example of serializing a quantized model to disk using a custom editor window and loading that quantized model at runtime. For more information, refer to Quantize a model. |
Read output asynchronously | Examples of reading the output from a model asynchronously using compute shaders. For more information, refer to Read output from a model asynchronously. |
Run a model a layer at a time | An example of using ScheduleIterable to run a model a layer at a time. For more information, refer to Run a model. |
Run a model | Examples of running models with different numbers of inputs and outputs. For more information, refer to Run a model. |
Use the functional API with an existing model | An example of using the functional API to extend an existing model. For more information, refer to Edit a model. |
Use a compute buffer | An example of using a compute shader to write data to a tensor on the GPU. |
Use Burst to write data | An example of using Burst to write data to a tensor in the Job system. |
Use tensor indexing methods | Examples of using tensor indexing methods to get and set tensor values. |
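As a rough illustration of what the Run a model samples cover, the following sketch loads a model asset and schedules it on the GPU. This is a minimal sketch, not one of the shipped samples: the namespace and the input shape used here are assumptions, and the API names (`ModelLoader.Load`, `Worker`, `BackendType.GPUCompute`, `Schedule`, `PeekOutput`) should be verified against the imported sample scripts for your package version.

```csharp
using UnityEngine;
using Unity.InferenceEngine; // assumed namespace; check the imported samples

// Minimal sketch: load a model, run it each frame, and dispose the worker.
public class RunModelExample : MonoBehaviour
{
    [SerializeField] ModelAsset modelAsset; // assign in the Inspector
    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUCompute);
    }

    void Update()
    {
        // Hypothetical input shape for illustration; match your model's input.
        using var input = new Tensor<float>(new TensorShape(1, 3, 224, 224));
        worker.Schedule(input);
        var output = worker.PeekOutput() as Tensor<float>;
        // To access values on the CPU, read the tensor back (see the
        // "Read output asynchronously" sample for a non-blocking approach).
    }

    void OnDestroy() => worker?.Dispose();
}
```

The Run a model sample folder contains complete variants of this pattern for models with different numbers of inputs and outputs.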