Vuforia is a cross-platform Augmented Reality (AR) and Mixed Reality (MR) application development platform, with robust tracking and performance on a variety of hardware (including mobile devices and mixed reality Head Mounted Displays (HMDs) such as the Microsoft HoloLens). Unity's integration of Vuforia allows you to create vision-based AR apps and games for Android and iOS using a drag-and-drop authoring workflow. A Vuforia AR+VR samples package is available on the Unity Asset Store, with several useful examples demonstrating the most important features of the platform.
Vuforia supports many third-party devices (such as AR/MR glasses), as well as VR devices with rear-facing cameras (such as the Gear VR). See the Vuforia Devices page for a full list of supported devices, and the Vuforia API reference for more information about the classes, properties, and functions used in the SDK.
You can use any device with a camera to test AR/MR games and applications built in Unity with Vuforia.
Before learning more about Vuforia and its supported features, you need to understand a few key concepts, including the forms of tracking and the marker types commonly used in Vuforia applications.
In AR or MR, markers are images or objects registered with the application that act as triggers for virtual content. When your device's camera recognizes a marker in the real world (while running an AR or MR application), the application displays virtual content over the marker's world position in the camera view. Marker-based tracking can use a variety of marker types, including QR codes, physical reflective markers, Image Targets, and 2D tags. The simplest and most common type of marker in game applications is an Image Target.
Image Targets are a specific type of marker used in Marker-based tracking. They are images you manually register with the application, and they act as triggers that display virtual content. Use images containing distinct shapes with complex outlines for your Image Targets; this makes it easier for the image recognition and tracking algorithms to detect and track them.
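When an Image Target is found or lost, Vuforia reports a tracking state change that your scripts can react to. The sketch below is a minimal example of that pattern, assuming the ITrackableEventHandler interface and TrackableBehaviour component from the Vuforia SDK for Unity of this era; the class name SimpleImageTargetHandler is illustrative, and the script is attached to the Image Target GameObject alongside its ImageTargetBehaviour.

```csharp
using UnityEngine;
using Vuforia;

// Minimal sketch of a custom Image Target handler. It hides or shows the
// virtual content parented under the Image Target depending on whether the
// marker is currently being tracked.
public class SimpleImageTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    // Called by Vuforia whenever the tracking state of this target changes.
    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Enable the child renderers only while the marker is visible
        // (or its pose can still be extrapolated via extended tracking).
        foreach (var rendererComponent in GetComponentsInChildren<Renderer>(true))
            rendererComponent.enabled = found;
    }
}
```

This mirrors the approach of the DefaultTrackableEventHandler script included with the Vuforia samples: the virtual content lives under the Image Target in the Hierarchy and is simply hidden whenever the marker is not being tracked.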
Applications using Markerless tracking are more commonly location-based or position-based Augmented or Mixed Reality experiences. This form of tracking relies on technologies such as GPS, the accelerometer, the gyroscope, and more complex image processing algorithms to place virtual objects or information in the environment. The AR or MR hardware and software then treat these objects as if they were anchored or connected to specific real-world locations or objects.
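The sketch below shows the kind of device input such an experience typically combines in Unity: it starts the gyroscope and location services and exposes the latest readings. It uses only Unity's built-in Input.gyro and Input.location APIs; the class and member names (DeviceSensorReader, LastFix, DeviceAttitude) are illustrative and not part of the Vuforia API, and mapping the readings onto anchored virtual content is left to the application.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of the device sensors a markerless, location-based
// experience typically relies on. It only starts the services and exposes
// the latest readings.
public class DeviceSensorReader : MonoBehaviour
{
    // Most recent GPS fix, valid once the location service is running.
    public LocationInfo LastFix { get; private set; }

    // Raw device orientation from the gyroscope. Note: converting this into
    // Unity's left-handed coordinate space usually requires an extra
    // rotation, omitted here for brevity.
    public Quaternion DeviceAttitude { get { return Input.gyro.attitude; } }

    IEnumerator Start()
    {
        Input.gyro.enabled = true;

        // Location services must be enabled by the user in the OS settings.
        if (!Input.location.isEnabledByUser)
            yield break;

        Input.location.Start();
        while (Input.location.status == LocationServiceStatus.Initializing)
            yield return new WaitForSeconds(1f);
    }

    void Update()
    {
        if (Input.location.status == LocationServiceStatus.Running)
            LastFix = Input.location.lastData;
    }
}
```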
2018-03-28 Page published with editorial review
Vuforia documentation updated for Unity XR API in 2017.3