Version: Unity 6.4 (6000.4)
Language: English

Screen readers

Screen readers are assistive technology that converts on-screen content into speech or refreshable braille output. They enable users who cannot see the screen, or who benefit from an additional sensory channel, to interact with your game or application. Screen readers interpret visual elements, provide auditory feedback about what’s on screen, and allow navigation through accessible touch gestures or keyboard commands.

Screen reader users

People who are blind or have low vision represent the primary user base. However, screen reader users also include people with learning disabilities like dyslexia who process auditory information more effectively, some people with autism who benefit from multi-sensory input, and people with temporary vision impairments.

Some users combine screen readers with screen magnification. Others listen at high speeds while viewing the screen. Screen reader usage patterns vary as widely as the people who depend on them.

How screen readers work

Screen readers transform a visual interface into an audio or tactile experience. They navigate through an accessibility hierarchy, a parallel structure that describes visual elements in text format.

When a screen reader encounters an application, it requests its accessibility hierarchy through the operating system. The OS queries the application through platform APIs. The application must respond with information such as element descriptions, states, and relationships.

Some screen readers navigate the accessibility hierarchy in a depth-first traversal order (navigating down child elements before moving to siblings), while others navigate based on the position of the elements on the screen.

As the user moves focus through accessible elements, the screen reader queries each element’s properties and announces relevant information.

The following diagram shows the announcement flow:

Your application's accessibility code provides information about the elements on the screen
    → The user navigates via the screen reader to a visual element (by touch, swipe, arrow key, etc.)
        → The screen reader queries the element's properties via the platform's accessibility API
            → The screen reader announces that information to the user
                → The user hears the element's description, value, role, state, and available actions
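In Unity, the application’s side of this flow is to build and expose the accessibility hierarchy. The following sketch assumes Unity’s Accessibility module (`UnityEngine.Accessibility`, with `AssistiveSupport`, `AccessibilityHierarchy`, and `AccessibilityNode`); exact member signatures may vary between Unity versions.

```csharp
using UnityEngine;
using UnityEngine.Accessibility;

public class AccessibilitySetup : MonoBehaviour
{
    void Start()
    {
        // Build the parallel structure that describes the on-screen UI in text form.
        var hierarchy = new AccessibilityHierarchy();

        // A container node with one child; the screen reader traverses this tree.
        var menu = hierarchy.AddNode("Main menu");
        var playButton = hierarchy.AddNode("Play", menu);
        playButton.role = AccessibilityRole.Button;

        // Make this the hierarchy the screen reader queries through the OS.
        AssistiveSupport.activeHierarchy = hierarchy;
    }
}
```

Once the hierarchy is active, the operating system’s accessibility API queries it on the screen reader’s behalf; your code doesn’t answer those queries directly.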

What screen readers need from an application

Screen readers require specific information about each accessible element. This information is organized into a data structure called an accessibility node, whose properties fall into the following categories.

The following properties are required for all accessible elements:

Property | Description
Label | The element’s name or concise description (such as button text or icon description). This is the main information the screen reader announces.
Frame | The element’s position and size on the screen, used for touch navigation and the focus indicator.

The following properties are optional for relevant accessible elements:

Property | Description
Role | The element’s type (such as button, slider, checkbox, or text field). This informs the screen reader of the element’s behavior and interaction model.
Value | The current value or content of the element (such as text content or slider value).
State | The current status of the element (such as disabled, checked, or expanded).

The following properties provide navigation information:

Property | Description
Hierarchy | The element’s parent and child relationships, used for linear navigation and grouping.
Visibility | Whether screen readers should “see” and focus on the element.

The following properties define interaction capabilities:

Property | Description
Actions | What users can do with the element (such as activate, adjust, dismiss, or expand).
Hints | Additional context about interacting with the element.
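Mapped onto Unity’s Accessibility module, these categories correspond to properties of an accessibility node. The following sketch assumes the `UnityEngine.Accessibility` API (`AccessibilityNode`, `AccessibilityRole`, `AccessibilityState`); member names may differ slightly between Unity versions.

```csharp
using UnityEngine;
using UnityEngine.Accessibility;

public class VolumeSliderAccessibility : MonoBehaviour
{
    void ConfigureNode(AccessibilityHierarchy hierarchy, AccessibilityNode parent)
    {
        // Required: label and frame.
        var node = hierarchy.AddNode("Music volume", parent); // hierarchy: parent/child
        node.frame = new Rect(40, 300, 600, 80);              // screen position and size

        // Optional: role, value, and state.
        node.role = AccessibilityRole.Slider;
        node.value = "75 percent";
        node.state = AccessibilityState.None;                 // e.g. AccessibilityState.Disabled

        // Interaction: a hint that gives extra context about the element.
        node.hint = "Swipe up or down to adjust";

        // Visibility: set to false to hide the node from screen readers
        // without removing it from the hierarchy.
        node.isActive = true;
    }
}
```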

Screen reader user interaction patterns

User input

When a screen reader is active, device interaction patterns change completely. For example, a single tap that normally activates a button instead announces the button to screen reader users. A swipe that normally scrolls instead navigates to the next element. Screen reader users employ entirely different gesture vocabularies or keyboard commands to navigate and interact with their devices.

Your application doesn’t need to handle specific touch or keyboard input (which can vary by platform, screen reader, and user preferences). When a screen reader is active, the operating system intercepts all user input and translates it into standard accessibility events, such as activate or dismiss. Your application only needs to respond to these high-level events.
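As a sketch of what responding to these high-level events might look like in Unity, assuming `AccessibilityNode` exposes selection and dismissal callbacks as in `UnityEngine.Accessibility` (event names and delegate signatures may differ by version):

```csharp
using UnityEngine;
using UnityEngine.Accessibility;

public class MenuAccessibility : MonoBehaviour
{
    void Wire(AccessibilityNode playNode, AccessibilityNode dialogNode)
    {
        // The OS intercepts the gesture and the node raises a high-level event;
        // the application never sees the raw touch or keyboard input.
        playNode.selected += () =>
        {
            Debug.Log("Play activated"); // same code path as a pointer click
            return true;                 // report the activation as handled
        };

        dialogNode.dismissed += () =>
        {
            Debug.Log("Dialog dismissed"); // e.g. the scrub gesture in VoiceOver
            return true;
        };
    }
}
```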

User navigation patterns

Screen reader users develop efficient navigation strategies. Understanding these patterns helps you design more intuitive experiences.

The following table summarizes common screen reader navigation patterns:

Navigation pattern | Description | Design guidance
Linear exploration | Moves through all elements sequentially. New users often explore this way initially. | Ensure that your focus order follows visual layout and logical flow.
Landmark navigation | Jumps between major sections. Users skip directly to headings, buttons, or form fields. | Use semantic roles to enable this pattern.
Search and scan | Lets users find specific content by keyword. Screen readers offer search functionality within pages. | Provide clear, concise labels that include intuitive terms.
Rotor or quick-nav menu | Categorizes elements by type. Users can list all buttons, all headings, or all containers separately. | Proper role assignment enables this feature.

Keeping screen readers informed

Screen readers build a mental model of your interface based on the accessibility hierarchy. When your UI changes, you must notify the screen reader so it can update its model and inform the user.

The following table summarizes common types of changes that require screen reader notifications:

Change type | When it occurs | What to do
Screen or view changes | The user navigates to a new screen, opens a dialog, or switches tabs. | Update your accessibility hierarchy and send the appropriate events so the screen reader can rebuild its understanding of the interface and move focus to the new content.
Scrolling and content visibility | Elements scroll in or out of view. | Update your accessibility hierarchy so the screen reader knows which elements are currently accessible.
Dynamic content updates | A loading indicator appears, a progress bar updates, or search results populate. | Trigger an announcement for meaningful state transitions that affect user understanding or available actions. Not every animation needs to be announced.
Events outside user focus | A download completes, a timer expires, or an error occurs outside the user’s current focus. | Use accessibility announcements to convey this information without requiring a focus change.
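With Unity’s Accessibility module, these notifications might be sent through `AssistiveSupport.notificationDispatcher`. This is a sketch; the method names follow `UnityEngine.Accessibility` but may differ between versions.

```csharp
using UnityEngine.Accessibility;

public static class AccessibilityNotifications
{
    // Screen or view change: let the screen reader rebuild its model
    // of the interface and move focus to the new content.
    public static void OnDialogOpened(AccessibilityNode firstNode) =>
        AssistiveSupport.notificationDispatcher.SendScreenChanged(firstNode);

    // Scrolling or content visibility change: the screen is the same,
    // but element positions or visibility have changed.
    public static void OnScrolled() =>
        AssistiveSupport.notificationDispatcher.SendLayoutChanged(null);

    // Event outside the user's focus: announce without moving focus.
    public static void OnDownloadComplete() =>
        AssistiveSupport.notificationDispatcher.SendAnnouncement("Download complete");
}
```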

Common screen reader challenges

The following table lists the most common challenges when implementing screen reader support, along with strategies to address them:

Challenge | Problem | Solution
Custom controls | Screen readers expect standard roles and behaviors. A slider that looks like a dial still needs to behave like a slider. | Assign the appropriate role and map custom interactions to standard accessibility patterns.
Overlapping elements | Screen readers use element screen bounds for touch navigation. Overlapping bounds confuse spatial relationships. | Ensure accessible elements have accurate, non-overlapping frame rectangles.
Rapid state changes | Too many announcements overwhelm users. | Prioritize announcements that change available actions or affect the user’s understanding of the current state. Not every frame-by-frame update needs verbalization.
Large numbers of elements | Dense interfaces make it hard for screen reader users to find relevant content. | Not all objects in a game scene need to be accessible. Prioritize key interactive elements.

Testing your screen reader support

Always test with real screen readers on your target platforms: enable them in device settings and navigate your entire experience from start to finish. Real screen readers reveal issues that simulators or automated tools miss, and they confirm that your implementation works as intended for actual users.

Use the following checklist for basic verification:

  • Every relevant element announces its label, value, and role
  • Linear navigation (such as swipe or arrow keys) follows logical order
  • State changes announce appropriately
  • All element actions work via screen reader gestures or commands
  • Elements hidden by menus or overlays are not accessible
  • Modal dialogs trap focus correctly
  • Dismissible overlays announce and close properly
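During on-device testing, it can also help to confirm at runtime that your application detects the screen reader. This sketch assumes `AssistiveSupport.isScreenReaderEnabled` and the `screenReaderStatusChanged` event from `UnityEngine.Accessibility`; check your Unity version’s scripting reference for the exact members.

```csharp
using UnityEngine;
using UnityEngine.Accessibility;

public class ScreenReaderStatusLogger : MonoBehaviour
{
    void OnEnable()
    {
        // Log the current status, and react when the user toggles the screen reader.
        Debug.Log($"Screen reader enabled: {AssistiveSupport.isScreenReaderEnabled}");
        AssistiveSupport.screenReaderStatusChanged += OnStatusChanged;
    }

    void OnDisable() =>
        AssistiveSupport.screenReaderStatusChanged -= OnStatusChanged;

    void OnStatusChanged(bool enabled) =>
        Debug.Log(enabled ? "Screen reader turned on" : "Screen reader turned off");
}
```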

Additional resources

Accessibility fundamentals
Animation