HoloLens photo capture

Use the PhotoCapture API to take photos from the HoloLens web camera. To use the PhotoCapture API, you must enable the WebCam and Microphone capabilities in your project. The following example demonstrates how to take a photo using the PhotoCapture functionality and display it on a Unity GameObject.

using UnityEngine;
using System.Collections;
using System.Linq;
using UnityEngine.XR.WSA.WebCam;

public class PhotoCaptureExample : MonoBehaviour {
    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;

    // Use this for initialization
    void Start() {
        Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
        targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

        // Create a PhotoCapture object
        PhotoCapture.CreateAsync(false, delegate (PhotoCapture captureObject) {
            photoCaptureObject = captureObject;
            CameraParameters cameraParameters = new CameraParameters();
            cameraParameters.hologramOpacity = 0.0f;
            cameraParameters.cameraResolutionWidth = cameraResolution.width;
            cameraParameters.cameraResolutionHeight = cameraResolution.height;
            cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

            // Activate the camera
            photoCaptureObject.StartPhotoModeAsync(cameraParameters, delegate (PhotoCapture.PhotoCaptureResult result) {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            });
        });
    }

    void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
        // Copy the raw image data into the target texture
        photoCaptureFrame.UploadImageDataToTexture(targetTexture);

        // Create a GameObject to which the texture can be applied
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Renderer quadRenderer = quad.GetComponent<Renderer>();
        quadRenderer.material = new Material(Shader.Find("Custom/Unlit/UnlitTexture"));

        quad.transform.parent = this.transform;
        quad.transform.localPosition = new Vector3(0.0f, 0.0f, 3.0f);

        quadRenderer.material.SetTexture("_MainTex", targetTexture);

        // Deactivate the camera
        photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }

    void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result) {
        // Shutdown the photo capture resource
        photoCaptureObject.Dispose();
        photoCaptureObject = null;
    }
}

Capture a photo to memory

When you capture an image to memory, a PhotoCaptureFrame is returned. A PhotoCaptureFrame contains both the native image data and spatial matrices that indicate where the image was taken.
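
For example, a callback with the same signature as OnCapturedPhotoToMemory above could read these matrices from the frame. The following is a minimal sketch, using PhotoCaptureFrame.TryGetCameraToWorldMatrix and PhotoCaptureFrame.TryGetProjectionMatrix; both return false if no spatial data is available for the frame.

void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
    Matrix4x4 cameraToWorldMatrix;
    Matrix4x4 projectionMatrix;

    if (photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix) &&
        photoCaptureFrame.TryGetProjectionMatrix(out projectionMatrix)) {
        // Position and viewing direction of the web camera at the moment the photo was taken
        Vector3 cameraPosition = cameraToWorldMatrix.MultiplyPoint(Vector3.zero);
        Vector3 cameraForward = -cameraToWorldMatrix.GetColumn(2); // the camera looks along its negative Z axis

        // projectionMatrix can be combined with cameraToWorldMatrix to map image pixels to world-space rays
        Debug.Log("Photo taken from " + cameraPosition + " looking along " + cameraForward);
    }
}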

Capturing an image to memory allows you to reference the captured image in a Shader, or apply it to a GameObject. There are three ways to extract the image data from the PhotoCaptureFrame:

  1. Access the image data as a Texture2D. This is the most common way to extract image data, because most components in Unity understand how to work with a Texture2D. Once your image has been captured to memory, upload the image data into a Texture2D; you can then reference that image data in materials, scripts, or any other relevant element of your project. The Unity API documentation includes an example that demonstrates how to capture a photo to memory and upload it to a Texture2D; see WebCam.PhotoCapture.TakePhotoAsync. Uploading the image data to a Texture2D is the easiest way to start working with your image data in Unity, but the upload operation happens on the main thread, is resource-intensive, and can affect the performance of your project.

  2. Capture the native image data as a WinRT IMFMediaBuffer. See Microsoft’s documentation on IMFMediaBuffer to learn more. Make a copy of the native image data by passing a byte list into the PhotoCaptureFrame.CopyRawImageDataIntoBuffer function (see the first sketch after this list). Note that the copy operation occurs on the main thread, is resource-intensive, and can affect the performance of your project.

  3. If you create your own plugin, or process the image data on a separate thread, you can get a pointer to the native image data via the PhotoCaptureFrame.GetUnsafePointerToBuffer function (see the second sketch after this list). The returned pointer refers to an IMFMediaBuffer COM (Component Object Model) object; see Microsoft’s documentation on IMFMediaBuffer and the Component Object Model for more information. Calling this function adds a reference to the COM object, and you are responsible for releasing that reference when you no longer need the resource.
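
The following is a minimal sketch of the second option. It assumes the photo was captured with CapturePixelFormat.BGRA32 as in the example at the top of this page, reuses the photoCaptureObject field and OnStoppedPhotoMode callback from that example, and requires a using System.Collections.Generic; directive for List<byte>.

void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
    // Copy the raw BGRA32 image data into a managed byte list.
    // The copy runs on the main thread and can be expensive at high resolutions.
    List<byte> imageBufferList = new List<byte>();
    photoCaptureFrame.CopyRawImageDataIntoBuffer(imageBufferList);

    Debug.Log("Copied " + imageBufferList.Count + " bytes of raw image data");
    // Process, save, or hand off the bytes as needed before deactivating the camera.

    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}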
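
The third option is sketched below. The MyNativePlugin library and its ProcessImageBuffer entry point are hypothetical placeholders for your own native code, and releasing the added COM reference with Marshal.Release is an assumption about the appropriate cleanup; adapt this to however your plugin manages the buffer’s lifetime. The snippet assumes using System; and using System.Runtime.InteropServices; directives, plus the photoCaptureObject field and OnStoppedPhotoMode callback from the example above.

// Hypothetical entry point exported by your own native plugin
[DllImport("MyNativePlugin")]
static extern void ProcessImageBuffer(IntPtr mediaBuffer);

void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
    // Get a pointer to the native IMFMediaBuffer; this adds a reference to the COM object
    IntPtr unsafePointer = photoCaptureFrame.GetUnsafePointerToBuffer();
    try {
        // Hand the buffer to native code for processing
        ProcessImageBuffer(unsafePointer);
    }
    finally {
        // Release the COM reference added by GetUnsafePointerToBuffer (assumed cleanup via Marshal.Release)
        Marshal.Release(unsafePointer);
    }

    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}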
