IWorker interface: core of the engine
The core engine interface in Barracuda is called IWorker. IWorker breaks down the model into executable tasks and schedules them on the GPU or CPU.
Warning: Some platforms might not support some backends. See Supported platforms for more info.
Create the inference engine (Worker)
You can create a Worker from the WorkerFactory. You must specify a backend and a loaded model.
```Csharp
Model model = ...

// GPU
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Compute, model);
// slow - GPU path
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputeRef, model);

// CPU
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpBurst, model);
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharp, model);
// very slow - CPU path
var worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpRef, model);
```
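For context, the sketch below shows one way a worker created like this is typically used and released. The MonoBehaviour wrapper, field names, and the 1x3 input shape are illustrative assumptions, not part of this page.

```Csharp
using Unity.Barracuda;
using UnityEngine;

public class InferenceExample : MonoBehaviour   // hypothetical example component
{
    public NNModel modelAsset;                  // model asset assigned in the Inspector (assumption)
    IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // Pick one backend; ComputePrecompiled is used here purely as an example.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void Update()
    {
        using (var input = new Tensor(1, 3))     // illustrative 1x3 input
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput(); // still owned by the worker
            // ... read results from output ...
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();                       // release the worker's GPU/CPU resources
    }
}
```

Workers allocate GPU or CPU resources internally, so disposing them when you are done with inference is important.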
There are a number of different backends you can choose from to run your network (a runtime selection sketch follows the list):
CSharpBurst: highly efficient, jobified and parallelized CPU code compiled via Burst.
CSharp: slightly less efficient CPU code.
CSharpRef: a less efficient but more stable reference implementation.
ComputePrecompiled: highly efficient GPU code with all overhead code stripped away and precompiled into the worker.
Compute: highly efficient GPU but with some logic overhead.
ComputeRef: a less efficient but more stable reference implementation.
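If you need to pick a backend at run time, one possible pattern is to prefer a GPU backend when the platform supports compute shaders and fall back to the Burst CPU backend otherwise. The helper class below is an illustrative sketch, not a Barracuda API.

```Csharp
using Unity.Barracuda;
using UnityEngine;

// Hypothetical helper: choose a backend based on platform capabilities.
static class BackendSelector
{
    public static IWorker CreateBestWorker(Model model)
    {
        var type = SystemInfo.supportsComputeShaders
            ? WorkerFactory.Type.ComputePrecompiled   // fast GPU path
            : WorkerFactory.Type.CSharpBurst;         // fast CPU fallback
        return WorkerFactory.CreateWorker(type, model);
    }
}
```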
Note: You can use reference implementations as a stable baseline for comparison with other implementations.
If you notice a bug or incorrect inference, see if choosing a simpler worker solves the issue. Please report any bugs in the Barracuda GitHub repository.
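As a rough sketch of such a comparison, you can run the same input through a fast worker and a reference worker and check that the outputs agree within a tolerance. The helper class, method name, and tolerance below are illustrative assumptions, not part of Barracuda.

```Csharp
using Unity.Barracuda;
using UnityEngine;

static class BackendComparison   // hypothetical helper, not a Barracuda API
{
    // Returns true if the fast backend matches the reference backend within a tolerance.
    public static bool MatchesReference(Model model, Tensor input, float tolerance = 1e-4f)
    {
        using (var fast = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpBurst, model))
        using (var reference = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpRef, model))
        {
            fast.Execute(input);
            reference.Execute(input);

            float[] a = fast.PeekOutput().ToReadOnlyArray();
            float[] b = reference.PeekOutput().ToReadOnlyArray();
            if (a.Length != b.Length)
                return false;

            for (int i = 0; i < a.Length; i++)
                if (Mathf.Abs(a[i] - b[i]) > tolerance)
                    return false;
            return true;
        }
    }
}
```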