UI support
- Overview and compatibility
- Setting up UI input
- Required Actions for UI
- The UI Input Module component
- Multiplayer UIs
- Virtual mouse cursor control
- Distinguishing between UI and game input
- Immediate Mode GUI
Overview and compatibility
Unity has various UI solutions. The Input System package's compatibility and workflow with these solutions varies depending on which UI solution and which version of Unity you are using.
In some cases you must use the UI Input Module (a component supplied in the Input System package) to define which actions are passed through from the Input System to the UI.
The three main UI solutions are UI Toolkit, Unity UI, and IMGUI. The compatibility and workflow for each of these are as follows:
For UI Toolkit, also known as "UI Elements" (an XML/CSS style UI solution):
- From Unity 2023.2 and onwards, the UI actions defined in the default project-wide actions directly map to UI Toolkit input. You do not need to use the UI Input Module component.
- In versions of Unity prior to 2023.2, you must use the UI Input Module component to define which actions are passed through from the Input System to the UI.
For Unity UI, also known as "uGUI" (a GameObject and Component style UI solution):
When using Unity UI (uGUI), you must always use the UI Input Module component to define which actions are passed through from the Input System to the UI.
For IMGUI (a script-based "Immediate Mode" UI using the OnGUI method):
The Input System package is not compatible with IMGUI, however you can still use the Input System for other parts of your project such as gameplay. See the Immediate Mode GUI section for more information.
The compatibility above is summarized in the following table:
| UI Solution | Compatible | UI Input Module component |
|---|---|---|
| UI Toolkit (2023.2+) | Yes | Not required |
| UI Toolkit (pre 2023.2) | Yes | Required |
| Unity UI (uGUI) | Yes | Required |
| IMGUI | No | n/a |
Setting up UI input
The default project-wide actions come with a "UI" Action Map that contains all the actions required for UI interaction (shown in the image below). You can configure the bindings for these actions in the Actions Editor: go to Project Settings > Input System Package, then select "UI" in the Action Maps column.
Required Actions for UI
The default project-wide actions come with all the actions required for compatibility with UI Toolkit and Unity UI.
You can modify, add, or remove bindings to the named actions in the UI action map to suit your project. However, to remain compatible with UI Toolkit, the name of the action map ("UI"), the names of the actions it contains, and their respective Action Types must remain the same.
These specific actions and types, which are expected by the UI Input Module class, are as follows:
| Action | Action Type | Control Type | Description |
|---|---|---|---|
| Navigate | PassThrough | Vector2 | A vector used to select the currently active UI selectable in gamepad or arrow-key navigation-type input. |
| Submit | Button | Button | Submits the currently selected UI selectable in navigation-type input. |
| Cancel | Button | Button | Exits any interaction with the currently selected UI selectable in navigation-type input. |
| Point | PassThrough | Vector2 | A 2D screen position; the cursor for pointer-type interaction. |
| Click | PassThrough | Button | The primary button for pointer-type interaction. |
| RightClick | PassThrough | Button | The secondary button for pointer-type interaction. |
| MiddleClick | PassThrough | Button | The middle button for pointer-type interaction. |
| ScrollWheel | PassThrough | Vector2 | The scrolling gesture for pointer-type interaction. |
| Tracked Device Position | PassThrough | Vector3 | A 3D position of one or multiple spatial tracking devices, such as XR hand controllers. In combination with Tracked Device Orientation, this allows XR-style UI interactions by pointing at UI selectables in space. See tracked-type input. |
| Tracked Device Orientation | PassThrough | Quaternion | A Quaternion representing the rotation of one or multiple spatial tracking devices, such as XR hand controllers. In combination with Tracked Device Position, this allows XR-style UI interactions by pointing at UI selectables in space. See tracked-type input. |
You can also reset the UI action map to its default bindings by selecting Reset from the More (⋮) menu, at the top right of the actions editor window. However, this will reset both the 'Player' and 'UI' action maps to their default bindings.
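You can also locate and inspect the UI action map from script. The sketch below assumes a version of the Input System package that supports project-wide actions via the `InputSystem.actions` property (1.8+); in older versions you would reference your Input Action Asset directly instead:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class UIActionMapCheck : MonoBehaviour
{
    void Start()
    {
        // InputSystem.actions holds the project-wide actions (Input System 1.8+).
        var uiMap = InputSystem.actions.FindActionMap("UI");
        if (uiMap == null)
        {
            Debug.LogWarning("No action map named 'UI' found in the project-wide actions.");
            return;
        }

        // The map and its actions must keep the exact names and Action Types
        // listed above to stay compatible with UI Toolkit.
        foreach (var action in uiMap.actions)
            Debug.Log($"{action.name} ({action.type})");
    }
}
```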
The UI Input Module component
When working with Unity UI (uGUI), or when using UI Toolkit in versions of Unity prior to Unity 2023.2, you must use the UI Input Module component which defines which actions are passed through to your UI, as well as some other UI-related input settings.
Note: If you have an instance of the Input System UI Input Module component in your scene, the settings on that component take priority and are used instead of the UI settings in your project-wide actions. Also, the UI action map will be enabled, along with the default action map specified on any UI Input Module component in the scene.
The UI Input Module is implemented in the InputSystemUIInputModule class.
Using the UI Input Module
The UI Input Module is a component which you must add to a GameObject in your scene in order for your UI to receive input from the Input System. To do this:
- Create a new empty GameObject.
- Click Add Component in the Inspector.
- In the search field displayed, type "Input System UI Input Module".
- Select Input System UI Input Module to add it to the GameObject.
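The same setup can be performed from script. A minimal sketch, equivalent to the manual steps above:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem.UI;

public class UIInputBootstrap : MonoBehaviour
{
    void Awake()
    {
        // An empty GameObject carrying an EventSystem and an
        // Input System UI Input Module, created at runtime.
        var go = new GameObject("EventSystem");
        go.AddComponent<EventSystem>();
        go.AddComponent<InputSystemUIInputModule>();
    }
}
```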
UI Input Module properties
You can use the following properties to configure InputSystemUIInputModule:
| Property | Description |
|---|---|
| Move Repeat Delay | The initial delay (in seconds) between generating an initial IMoveHandler.OnMove navigation event and generating repeated navigation events when the Move Action stays actuated. |
| Move Repeat Rate | The interval (in seconds) between generating repeat navigation events when the Move Action stays actuated. Note that this is capped by the frame rate; there will not be more than one move repeat event per frame, so if the frame rate dips below the repeat rate, the effective repeat rate will be lower than this setting. |
| Actions Asset | An Input Action Asset containing all the Actions to control the UI. You can choose which Actions in the Asset correspond to which UI inputs using the following properties. By default, this references a built-in Asset named DefaultInputActions, which contains common default Actions for driving UI. If you want to set up your own Actions, create a custom Input Action Asset and assign it here. When you assign a new Asset reference to this field in the Inspector, the Editor attempts to automatically map Actions to UI inputs based on common naming conventions. |
| Deselect on Background Click | By default, when the pointer is clicked and does not hit any GameObject, the current selection is cleared. This, however, can get in the way of keyboard and gamepad navigation, which works off the currently selected object. To prevent automatic deselection, set this property to false. |
| Pointer Behavior | How to deal with multiple pointers feeding input into the UI. See pointer-type input. |
| Cursor Lock Behavior | Controls the origin point of UI raycasts when the cursor is locked. |
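These properties can also be set from script via the InputSystemUIInputModule API. A sketch with arbitrary example values:

```csharp
using UnityEngine;
using UnityEngine.InputSystem.UI;

public class UIInputModuleConfig : MonoBehaviour
{
    [SerializeField] InputSystemUIInputModule uiInputModule;

    void Start()
    {
        uiInputModule.moveRepeatDelay = 0.5f;  // seconds before repeat events start
        uiInputModule.moveRepeatRate = 0.1f;   // seconds between repeat events

        // Keep the current selection when clicking empty space, which helps
        // gamepad/keyboard navigation.
        uiInputModule.deselectOnBackgroundClick = false;

        // The default pointer behavior (see the table under pointer-type input).
        uiInputModule.pointerBehavior = UIPointerBehavior.SingleMouseOrPenButMultiTouchAndTrack;
    }
}
```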
How the bindings work
The UI input module can deal with three different types of input:
- Pointer-type input
- Navigation-type input
- Tracked-type input
For each of these types of input, input is sourced and combined from a specific set of Actions as detailed below.
Pointer-type input
To the UI, a pointer is a position from which clicks and scrolls can be triggered to interact with UI elements at the pointer's position. Pointer-type input is sourced from point, leftClick, rightClick, middleClick, and scrollWheel.
The UI input module does not have an association between pointers and cursors. In general, the UI is oblivious to whether a cursor exists for a particular pointer. However, for mouse and pen input, the UI input module respects Cursor.lockState and pins the pointer position at (-1,-1) whenever the cursor is locked. You can change this behavior through the Cursor Lock Behavior property of the InputSystemUIInputModule.
Multiple pointer Devices may feed input into a single UI input module. Also, in the case of Touchscreen, a single Device can have multiple concurrent pointers (each finger contact is one pointer).
Because multiple pointer Devices can feed into the same set of Actions, it is important to set the action type to PassThrough. This ensures that no filtering is applied to input on these actions and that instead every input is relayed as is.
From the perspective of InputSystemUIInputModule, each InputDevice that has one or more controls bound to one of the pointer-type actions is considered a unique pointer. Additionally, for Touchscreen devices, each separate TouchControl that has one or more of its controls bound to those actions is considered its own unique pointer. Each pointer receives a unique pointerId which generally corresponds to the deviceId of the pointer. However, for touch, this will be a combination of deviceId and touchId. Use ExtendedPointerEventData.touchId to find the ID for a touch event.
You can influence how the input module deals with concurrent input from multiple pointers using the Pointer Behavior setting.
| Pointer Behavior | Description |
|---|---|
| Single Mouse or Pen But Multi Touch And Track | Behaves like Single Unified Pointer for all input that is not classified as touch or tracked input, and behaves like All Pointers As Is for tracked and touch input. If concurrent input is received on a Mouse and Pen, for example, the input of both is fed into the same UI pointer instance. The position input of one will overwrite the position of the other. Note that when input is received from touch or tracked devices, the single unified pointer for mice and pens is removed, including IPointerExit events being sent in case the mouse/pen cursor is currently hovering over objects. This is the default behavior. |
| Single Unified Pointer | All pointer input is unified such that there is only ever a single pointer. This includes touch and tracked input. This means, for example, that regardless of how many devices feed input into Point, only the last such input in a frame will take effect and become the current UI pointer's position. |
| All Pointers As Is | The UI input module will not unify any pointer input. Any device that feeds input into pointer-type actions, including touch and tracked devices, will be its own pointer (or multiple pointers for touch input). Note: This might mean that there will be an arbitrary number of pointers in the UI, and several objects might be pointed at concurrently. |
If you bind a device to a pointer-type action such as Left Click without also binding it to Point, the UI input module will recognize the device as not being able to point and try to route its input into that of another pointer. For example, if you bind Left Click to the Space key and Point to the position of the mouse, then pressing the space bar will result in a left click at the current position of the mouse.
For pointer-type input (as well as for tracked-type input), InputSystemUIInputModule sends ExtendedPointerEventData instances, an extended version of the base PointerEventData. These events contain additional data such as the device and pointer type from which the event was generated.
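A handler can cast the incoming event data to access this additional information. A minimal sketch:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem.UI;

public class PointerInfo : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        // InputSystemUIInputModule sends ExtendedPointerEventData, which
        // carries the originating device and the pointer type.
        if (eventData is ExtendedPointerEventData extended)
        {
            Debug.Log($"Clicked by {extended.device} ({extended.pointerType})");
            if (extended.pointerType == UIPointerType.Touch)
                Debug.Log($"Touch ID: {extended.touchId}");
        }
    }
}
```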
Navigation-type input
Navigation-type input controls the current selection based on motion read from the move action. Additionally, input from submit will trigger ISubmitHandler on the currently selected object, and cancel will trigger ICancelHandler on it.
Unlike pointer-type input, where multiple pointer inputs may exist concurrently (think two touches, or left- and right-hand tracked input), navigation-type input does not have multiple concurrent instances. In other words, only a single move vector and a single submit and cancel input will be processed by the UI module each frame. However, these inputs need not always come from a single Device; arbitrarily many inputs can be bound to the respective actions.
While move should be set to the PassThrough Action type, it is important that submit and cancel be set to the Button Action type.
Navigation input is non-positional: unlike pointer-type input, there is no screen position associated with these actions. Rather, navigation actions always operate on the current selection.
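A UI element receives submit and cancel input while it is the current selection by implementing the corresponding handler interfaces. A minimal sketch:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Receives navigation-type input while this object is the current selection.
public class MenuEntry : MonoBehaviour, ISubmitHandler, ICancelHandler
{
    public void OnSubmit(BaseEventData eventData)
    {
        // Triggered by the UI action map's submit action.
        Debug.Log("Submit pressed on the selected entry.");
    }

    public void OnCancel(BaseEventData eventData)
    {
        // Triggered by the UI action map's cancel action.
        Debug.Log("Cancel pressed; for example, back out of the menu.");
    }
}
```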
Tracked-type input
Input from tracked devices such as XR controllers and HMDs essentially behaves like pointer-type input. The main difference is that the world-space device position and orientation sourced from trackedDevicePosition and trackedDeviceOrientation is translated into a screen-space position via raycasting.
Important: Because multiple tracked Devices can feed into the same set of Actions, it is important to set the action type to PassThrough. This ensures that no filtering is applied to input on these actions and that instead every input is relayed as is.
For this raycasting to work, you need to add a TrackedDeviceRaycaster component to the GameObject that has the UI's Canvas component. This GameObject will usually have a GraphicRaycaster component which, however, only works for 2D screen-space raycasting. You can put TrackedDeviceRaycaster alongside GraphicRaycaster, and both can be enabled at the same time without adverse effect.
Clicks on tracked devices do not differ from other pointer-type input. Therefore, actions such as Left Click work for tracked devices just like they work for other pointers.
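The raycaster can be added from script as well. A sketch, intended to live on the same GameObject as the Canvas:

```csharp
using UnityEngine;
using UnityEngine.InputSystem.UI;

public class EnableTrackedRaycasts : MonoBehaviour
{
    void Awake()
    {
        // Add TrackedDeviceRaycaster alongside the existing GraphicRaycaster
        // on the Canvas GameObject; both can be enabled at the same time.
        if (GetComponent<Canvas>() != null &&
            GetComponent<TrackedDeviceRaycaster>() == null)
        {
            gameObject.AddComponent<TrackedDeviceRaycaster>();
        }
    }
}
```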
Other notes about the UI Input Module
Upgrading from the Input Manager and the older Standalone Input Module
The Unity UI (uGUI) package contains an older equivalent module called "Standalone Input Module" which performs the same kind of integration between the Unity UI and the legacy Input Manager system.
If you have one of these older Standalone Input Module components on a GameObject in your project, and the Input System is installed, Unity displays a button in the Inspector offering to automatically replace it with the equivalent newer Input System UI Input Module for you.
UI Input Module priority
The UI Input Module component is not required with UI Toolkit in Unity 2023.2 and onwards. However, if you do use it, the settings on that component take priority over the UI settings in your project-wide actions.
Technical details
Input support for both Unity UI and UI Toolkit is based on the same EventSystem and BaseInputModule subsystem. In other words, the same input setup based on InputSystemUIInputModule supports input in either UI solution and nothing extra needs to be done.
Internally, UI Toolkit installs an event listener in the form of the PanelEventHandler component, which intercepts events that InputSystemUIInputModule sends and translates them into UI Toolkit-specific events that are then routed into the visual tree. If you employ EventSystem.SetUIToolkitEventSystemOverride, this default mechanism is bypassed.
Note: XR (tracked-type input) is not yet supported in combination with UI Toolkit. This means that you cannot use devices such as VR controllers to operate interfaces created with UI Toolkit.
There are some additional things worth noting:
- UI Toolkit handles raycasting internally. No separate raycaster component is needed like for uGUI. This means that TrackedDeviceRaycaster does not work together with UI Toolkit.
- A pointer click and a gamepad submit action are distinct at the event level in UI Toolkit. This means that if you, for example, do

```csharp
button.RegisterCallback<ClickEvent>(_ => ButtonWasClicked());
```

the handler is not invoked when the button is "clicked" with the gamepad (a NavigationSubmitEvent, not a ClickEvent). If, however, you do

```csharp
button.clicked += () => ButtonWasClicked();
```

the handler is invoked in both cases.
Multiplayer UIs
The Input System can also handle multiple separate UI instances on the screen controlled separately from different input Bindings. This is useful if you want to have multiple local players share a single screen with different controllers, so that every player can control their own UI instance. To allow this, you need to replace the Event System component from Unity with the Input System's Multiplayer Event System component.
Unlike the Event System component, you can have multiple Multiplayer Event Systems active in the Scene at the same time. That way, you can have multiple players, each with their own UI Input Module and Multiplayer Event System components, and each player can have their own set of Actions driving their own UI instance. If you are using the Player Input component, you can also set it to automatically configure the player's UI Input Module to use the player's Actions. See the documentation on Player Input to learn how.
The properties of the Multiplayer Event System component are identical to those of the Event System component. Additionally, the Multiplayer Event System component adds a Player Root property, which you can set to a GameObject that contains all the UI selectables this event system should handle in its hierarchy. Mouse input that this event system processes then ignores any UI selectables which are not on a GameObject in the hierarchy under Player Root.
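Player Root can also be assigned from script, for example when spawning per-player UI at runtime. A sketch (the field names here are illustrative; `playerRoot` is the actual MultiplayerEventSystem property):

```csharp
using UnityEngine;
using UnityEngine.InputSystem.UI;

public class PlayerUISetup : MonoBehaviour
{
    [SerializeField] MultiplayerEventSystem eventSystem;
    [SerializeField] GameObject playerUIRoot; // hierarchy with this player's selectables

    void Start()
    {
        // Restrict this event system to one player's UI hierarchy so that
        // each local player only drives their own UI instance.
        eventSystem.playerRoot = playerUIRoot;
    }
}
```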
Virtual mouse cursor control
If your application uses gamepads and joysticks as input, you can use the navigation Actions to operate the UI. However, it usually involves extra work to make the UI work well with navigation. An alternative way to operate the UI is to have gamepads and joysticks drive a "virtual mouse cursor".
The Input System package provides a Virtual Mouse component for this purpose.
Note: This component is only compatible with the Unity UI (uGUI) system, and not UI Toolkit.
To see an example of the Virtual Mouse in a project, see the Gamepad Mouse Cursor sample included with the Input System package.
Using the Virtual Mouse component
To set up the Virtual Mouse component with the Unity UI system:
- Create a UI GameObject with an Image component. This GameObject is the mouse pointer. It can help to rename it "Pointer".
- Parent the pointer GameObject as a child of your Canvas GameObject that contains the UI which the cursor should operate on.
- Set the anchor position of the GameObject's RectTransform to the bottom left.
- Ensure your pointer GameObject is the last child of the Canvas so that the cursor draws on top of everything else.
- Add a Virtual Mouse component to the GameObject.
- Drag the Image component of the pointer GameObject into the Cursor Graphic field of the Virtual Mouse component.
- Drag the Rect Transform component of the pointer GameObject to the Cursor Transform field of the Virtual Mouse component.
- If you want the virtual mouse to control the system mouse cursor, set Cursor Mode to Hardware Cursor If Available. In this mode, the Cursor Graphic is hidden when a system mouse is present, and Mouse.WarpCursorPosition is used to move the system mouse cursor instead of the software cursor. The transform linked through Cursor Transform is not updated in that case.
- To configure the input to drive the virtual mouse, either add bindings on the various actions (such as Stick Action), or enable Use Reference and link existing actions from an Input Actions asset.
Important: Make sure the UI Input Module component on the UI's Event System does not receive navigation input from the same devices that feed into the Virtual Mouse component. If, for example, the Virtual Mouse component is set up to receive input from gamepads, and Move, Submit, and Cancel on the UI Input Module are also linked to the gamepad, then the UI receives input from the gamepad on two channels.
At runtime, the component adds a virtual Mouse device which the InputSystemUIInputModule component picks up. The controls of the Mouse are fed input based on the actions configured on the VirtualMouseInput component.
Note that the resulting Mouse input is visible in all code that picks up input from the mouse device. You can therefore use the component for mouse simulation elsewhere, not just with InputSystemUIInputModule.
Note: Do not set up gamepads and joysticks for navigation input while using the Virtual Mouse component. If both the Virtual Mouse component and navigation are configured, input is triggered twice: once via the pointer input path, and once via the navigation input path. If you encounter problems such as where buttons are pressed twice, this is likely the problem.
Distinguishing between UI and game input
UI in Unity receives input through the same mechanisms as the input for the rest of your game or app. There is no automatic mechanism that implicitly ensures that if a certain input – such as a click – is consumed by the UI, it is not also received by your gameplay code.
This can create ambiguities between, for example, code that responds to UI.Button.onClick and code that responds to InputAction.performed of an Action bound to <Mouse>/leftButton.
Whether such ambiguities exist depends on how UIs are used. For example, you can avoid ambiguities by implementing your UI in one of the following ways:
- All interaction is performed through UI elements. A 2D/3D scene is rendered in the background but all interaction is performed through UI events (including those such as 'background' clicks on the Canvas).
- UI is overlaid over a 2D/3D scene but the UI elements cannot be interacted with directly.
- UI is overlaid over a 2D/3D scene but there is a clear "mode" switch that determines if interaction is picked up by UI or by the game. For example, a first-person game on desktop may employ a cursor lock and direct input to the game while it is engaged whereas it may leave all interaction to the UI while the lock is not engaged.
When ambiguities arise, they do so differently for pointer-type and navigation-type input.
Note
A sample called "UI vs Game Input" is provided with the package and can be installed from the Unity Package Manager UI in the editor. The sample demonstrates how to deal with a situation where ambiguities arise between inputs for UI and inputs for the game.
Handling ambiguities for pointer-type input
Input from pointers (mice, touchscreens, pens) can be ambiguous depending on whether or not the pointer is over a UI element when initiating an interaction. For example, if there is a button on screen, then clicking on the button may lead to a different outcome than clicking outside of the button and within the game scene.
If all pointer input is handled via UI events, no ambiguities arise as the UI will implicitly route input to the respective receiver. If, however, input within the UI is handled via UI events and input in the game is handled via Actions, pointer input will by default lead to both being triggered.
The easiest way to resolve such ambiguities is to respond to in-game actions by polling from inside MonoBehaviour.Update methods and using EventSystem.IsPointerOverGameObject to find out whether the pointer is over UI or not. Another way is to use EventSystem.RaycastAll to determine whether the pointer is currently over UI.
Note
Calling EventSystem.IsPointerOverGameObject from within InputAction callbacks such as InputAction.performed will lead to a warning. The UI updates separately, after input processing, so while input is being processed the UI state still corresponds to that of the previous frame/update.
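The polling approach described above can be sketched as follows (the class name and log message are illustrative):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;

public class ClickToMove : MonoBehaviour
{
    void Update()
    {
        var mouse = Mouse.current;
        if (mouse == null || !mouse.leftButton.wasPressedThisFrame)
            return;

        // Poll from Update rather than from action callbacks so that the
        // UI state queried here matches the current frame.
        if (EventSystem.current != null &&
            EventSystem.current.IsPointerOverGameObject())
            return; // the click went to the UI; ignore it in the game

        Debug.Log($"In-game click at {mouse.position.ReadValue()}");
    }
}
```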
Handling ambiguities for navigation-type input
Ambiguities for navigation-type Devices such as gamepads and joysticks (but also keyboards) cannot arise the same way they do for pointers. Instead, your application has to decide explicitly whether to use input for the UI's Move, Submit, and Cancel inputs or for the game. You can do this either by splitting controls on a Device or by having an explicit mode switch.
Splitting input on a Device is done by simply using certain controls for operating the UI while using others to operate the game. For example, you could use the d-pad on gamepads to operate UI selection while using the sticks for in-game character control. This setup requires adjusting the bindings used by the UI Actions accordingly.
An explicit mode switch is implemented by temporarily switching to UI control while suspending in-game Actions. For example, the left trigger on the gamepad could bring up an item selection wheel which then puts the game in a mode where the sticks are controlling UI selection, the A button confirms the selection, and the B button closes the item selection wheel. No ambiguities arise as in-game actions will not respond while the UI is in the "foreground".
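Such a mode switch can be implemented by enabling and disabling action maps. A sketch, assuming project-wide actions containing 'Player' and 'UI' action maps and a version of the package that provides `InputSystem.actions`:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class ItemWheel : MonoBehaviour
{
    InputActionMap playerMap;
    InputActionMap uiMap;

    void Awake()
    {
        playerMap = InputSystem.actions.FindActionMap("Player");
        uiMap = InputSystem.actions.FindActionMap("UI");
    }

    public void Open()
    {
        playerMap.Disable(); // suspend in-game actions while the wheel is up
        uiMap.Enable();      // sticks and buttons now drive UI selection
    }

    public void Close()
    {
        uiMap.Disable();
        playerMap.Enable();  // hand control back to the game
    }
}
```

Because only one of the two maps is enabled at a time, no input is ever delivered to both the UI and the game, so no ambiguity arises.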
Immediate Mode GUI
The Input System package does not support Immediate Mode GUI (IMGUI) methods at runtime.
However, if you need to use IMGUI for your UI, it is possible to use legacy Input Manager input for your IMGUI user interface, while also using the Input System package for your in-game input.
When the Editor's Active Input Handling setting is set to "Input System Package" (which is the default when using the Input System package), the OnGUI methods in your player code won't receive any input events.
To restore functionality to runtime OnGUI methods, you can change the Active Input Handling setting to "Both". Doing this means that Unity processes the input twice, which could introduce a small performance impact.
This only affects runtime (play mode) OnGUI methods. Editor GUI code is unaffected and will receive input events regardless.