About ML-Agents package
The Unity ML-Agents package contains the C# SDK for the Unity ML-Agents Toolkit.
The package allows you to convert any Unity scene into a learning environment and train character behaviors using a variety of machine learning algorithms. Additionally, it allows you to embed these trained behaviors back into Unity scenes to control your characters. More specifically, the package provides the following core functionalities:
- Define Agents: entities, or characters, whose behavior will be learned. Agents are entities that generate observations (through sensors), take actions, and receive rewards from the environment.
- Define Behaviors: entities that specify how an agent should act. Multiple agents can share the same Behavior, and a scene may have multiple Behaviors.
- Record demonstrations of an agent within the Editor. You can use demonstrations to help train a behavior for that agent.
- Embed a trained behavior into the scene via the Unity Inference Engine. Embedded behaviors allow you to switch an Agent between learning and inference.
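As a sketch of the Agent concept above, a minimal Agent subclass might look like the following. The `RollerAgent` name, the observation, and the reward logic are illustrative only, and the exact `OnActionReceived` callback signature varies across package versions:

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

// Illustrative Agent: observes its own position and moves with two
// continuous actions. Names and reward values here are examples, not
// part of the package API.
public class RollerAgent : Agent
{
    public override void CollectObservations(VectorSensor sensor)
    {
        // Generate observations through the vector sensor (3 floats).
        sensor.AddObservation(transform.localPosition);
    }

    public override void OnActionReceived(float[] vectorAction)
    {
        // Apply the two continuous actions as movement on x/z.
        var move = new Vector3(vectorAction[0], 0f, vectorAction[1]);
        transform.localPosition += move * Time.fixedDeltaTime;

        // Reward shaping is environment-specific; this is a placeholder.
        AddReward(-0.001f);
    }
}
```

The associated Behavior (observation and action sizes) is configured on the Agent's Behavior Parameters component in the Inspector, not in code.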
Note that the ML-Agents package does not contain the machine learning algorithms for training behaviors. The ML-Agents package only supports instrumenting a Unity scene, setting it up for training, and then embedding the trained model back into your Unity scene. The machine learning algorithms that orchestrate training are part of the companion Python package.
The following table describes the package folder structure:
|Location|Description|
|---|---|
|`Documentation~`|Contains the documentation for the Unity package.|
|`Editor`|Contains utilities for Editor windows and drawers.|
|`Plugins`|Contains third-party DLLs.|
|`Runtime`|Contains core C# APIs for integrating ML-Agents into your Unity scene.|
|`Tests`|Contains the unit tests for the package.|
To install this package, follow the instructions in the Package Manager documentation.
This version of the Unity ML-Agents package is compatible with the following versions of the Unity Editor:
- 2018.4 and later
Training is limited to the Unity Editor and Standalone builds on Windows, macOS, and Linux with the Mono scripting backend. Currently, training does not work with the IL2CPP scripting backend. Your environment will default to inference mode if training is not supported or is not currently running.
Inference is executed via the Unity Inference Engine.
CPU inference: all platforms supported.
GPU inference: all platforms supported except:
- WebGL and GLES 3/2 on Android / iPhone
NOTE: Mobile platform support includes:
- Vulkan for Android
- Metal for iOS.
If you enable Headless mode, you will not be able to collect visual observations from your agents.
Rendering Speed and Synchronization
Currently, the speed of the game physics can only be increased to 100x real-time. The Academy also moves in time with FixedUpdate() rather than Update(), so game behavior implemented in Update() may be out of sync with the agent decision making. See Execution Order of Event Functions for more information.
You can control the frequency of Academy stepping by calling
Academy.Instance.DisableAutomaticStepping(), and then calling
Academy.Instance.EnvironmentStep() yourself.
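A minimal sketch of this manual-stepping pattern, assuming a helper MonoBehaviour (the `ManualStepper` component name is illustrative):

```csharp
using UnityEngine;
using Unity.MLAgents;

// Illustrative component: disables automatic Academy stepping and
// advances agent decision making explicitly each physics step.
public class ManualStepper : MonoBehaviour
{
    void Awake()
    {
        // Stop the Academy from stepping on its own.
        Academy.Instance.DisableAutomaticStepping();
    }

    void FixedUpdate()
    {
        // Step the Academy manually, once per FixedUpdate, so agent
        // decisions stay in sync with the physics simulation.
        Academy.Instance.EnvironmentStep();
    }
}
```

Stepping from FixedUpdate() keeps decision making aligned with physics; you could equally step from your own game loop if you need a different cadence.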
Unity Inference Engine Models
Currently, only models created with the ML-Agents trainers are supported for running a neural network behavior.
If you are new to the Unity ML-Agents package, or have a question after reading the documentation, you can check out our GitHub repository, which also includes a number of ways to connect with us, including our ML-Agents Forum.