Image capture
Your app can access images captured by the device camera if the following conditions are met:
- Device platform supports camera feature
- User has accepted any required camera permissions
- Camera feature is enabled, for example ARCameraManager is active and enabled
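At runtime, you can verify these conditions before attempting to read camera images. The following is a minimal sketch, not part of the AR Foundation API: the m_CameraManager field and CanAccessCameraImages method are illustrative, and ARCameraManager.permissionGranted reports whether the user has granted camera permission.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class CameraImageChecker : MonoBehaviour
{
    // Assign the ARCameraManager on your AR Camera in the Inspector
    [SerializeField]
    ARCameraManager m_CameraManager;

    // Returns true if the conditions listed above appear to be satisfied
    bool CanAccessCameraImages()
    {
        return m_CameraManager != null
            && m_CameraManager.enabled               // camera feature is enabled
            && m_CameraManager.permissionGranted;    // user accepted camera permission
    }
}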
The method you choose to access device camera images depends on how you intend to process the image. There are tradeoffs to either a GPU-based or a CPU-based approach.
Understand GPU vs CPU
There are two ways to access device camera images:
- GPU: The GPU offers the best performance if you only need to render the image or process it with a shader.
- CPU: Use the CPU if you need to access the image's pixel data in a C# script. This is more resource-intensive, but allows you to perform operations such as saving the image to a file or passing it to a computer vision system.
Access images via GPU
Camera Textures are usually external Textures that do not last beyond a frame boundary. You can copy the Camera image to a Render Texture to persist it or process it further.
The following code sets up a command buffer that immediately performs a GPU copy, or "blit", of the camera image to a Render Texture of your choice. The code clears the render texture before the copy by calling ClearRenderTarget.
// Create a new command buffer
var commandBuffer = new CommandBuffer();
commandBuffer.name = "AR Camera Background Blit Pass";

// Get a reference to the AR Camera Background's main texture
// We will copy this texture into our chosen render texture
var texture = !m_ARCameraBackground.material.HasProperty("_MainTex")
    ? null
    : m_ARCameraBackground.material.GetTexture("_MainTex");

// Save references to the active render target before we overwrite it
var colorBuffer = Graphics.activeColorBuffer;
var depthBuffer = Graphics.activeDepthBuffer;

// Set Unity's render target to our render texture
Graphics.SetRenderTarget(m_RenderTexture);

// Clear the render target before we render new pixels into it
commandBuffer.ClearRenderTarget(true, false, Color.clear);

// Blit the AR Camera Background into the render target
commandBuffer.Blit(
    texture,
    BuiltinRenderTextureType.CurrentActive,
    m_ARCameraBackground.material);

// Execute the command buffer
Graphics.ExecuteCommandBuffer(commandBuffer);

// Set Unity's render target back to its previous value
Graphics.SetRenderTarget(colorBuffer, depthBuffer);
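The snippet above assumes a reference to the scene's ARCameraBackground (m_ARCameraBackground) and a destination Render Texture (m_RenderTexture). A minimal sketch of a MonoBehaviour that declares these fields might look like the following; the class name and field names are illustrative, not part of the AR Foundation API.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.XR.ARFoundation;

public class CameraImageBlitter : MonoBehaviour
{
    // Assign the ARCameraBackground component on your AR Camera
    [SerializeField]
    ARCameraBackground m_ARCameraBackground;

    // Assign a Render Texture asset to receive the copied camera image
    [SerializeField]
    RenderTexture m_RenderTexture;

    void CopyCameraImage()
    {
        // ... run the blit code from the example above ...
    }
}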
Access images via CPU
To access the device camera image on the CPU, first call ARCameraManager.TryAcquireLatestCpuImage to obtain an XRCpuImage.
Note
On iOS 16 or newer, you can also use ARKitCameraSubsystem.TryAcquireHighResolutionCpuImage. See High resolution CPU image to learn more.
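As a rough sketch of that API, assuming the ARKit XR Plugin is installed and the app is running on iOS 16 or newer, you can cast the camera manager's subsystem and request the high-resolution image. ARKitCameraSubsystem lives in the UnityEngine.XR.ARKit namespace, and m_CameraManager is an assumed ARCameraManager reference:
#if UNITY_IOS
if (m_CameraManager.subsystem is ARKitCameraSubsystem arkitSubsystem &&
    arkitSubsystem.TryAcquireHighResolutionCpuImage(out XRCpuImage image))
{
    // ... use the image, then dispose it promptly
    image.Dispose();
}
#endif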
XRCpuImage is a struct that represents a native pixel array. When your app no longer needs this resource, you must call XRCpuImage.Dispose to release the associated memory back to the AR platform. Call Dispose as soon as possible, because failing to dispose too many XRCpuImage instances can cause the AR platform to run out of memory and prevent you from capturing new camera images.
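Because XRCpuImage implements IDisposable, one way to guarantee timely disposal is a using statement. A minimal sketch, assuming m_CameraManager references the scene's ARCameraManager:
if (m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
{
    // The using statement calls Dispose automatically when the block exits
    using (image)
    {
        Debug.LogFormat("Acquired CPU image: {0}x{1}", image.width, image.height);
    }
}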
Once you have an XRCpuImage, you can convert it to a Texture2D or access the raw image data directly:
- Synchronous conversion to a grayscale or color TextureFormat
- Asynchronous conversion to grayscale or color
- Raw image planes
Synchronous conversion
To synchronously convert an XRCpuImage to a grayscale or color format, call XRCpuImage.Convert:
public void Convert(
    XRCpuImage.ConversionParams conversionParams,
    IntPtr destinationBuffer,
    int bufferLength)
This method converts the XRCpuImage to the TextureFormat specified by the ConversionParams, then writes the data to destinationBuffer.
Grayscale image conversions such as TextureFormat.Alpha8 and TextureFormat.R8 are typically very fast, while color conversions require more CPU-intensive computations.
If needed, use XRCpuImage.GetConvertedDataSize to get the required size for destinationBuffer.
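For example, you could use GetConvertedDataSize to allocate a buffer of exactly the right size before converting. This is a sketch, assuming image is a valid XRCpuImage acquired as described above, with Unity.Collections and Unity.Collections.LowLevel.Unsafe imported:
var conversionParams = new XRCpuImage.ConversionParams
{
    inputRect = new RectInt(0, 0, image.width, image.height),
    outputDimensions = new Vector2Int(image.width, image.height),
    outputFormat = TextureFormat.R8, // fast grayscale conversion
    transformation = XRCpuImage.Transformation.None
};

// Ask the image how many bytes the conversion will produce
int size = image.GetConvertedDataSize(conversionParams);

// Allocate a destination buffer of exactly that size
var buffer = new NativeArray<byte>(size, Allocator.Temp);
unsafe
{
    image.Convert(
        conversionParams,
        new IntPtr(buffer.GetUnsafePtr()),
        buffer.Length);
}

// ... use the converted data, then release the buffer
buffer.Dispose();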
Example
The example code below executes the following steps:
- Acquire an XRCpuImage
- Synchronously convert it to the RGBA32 color format
- Apply the converted pixel data to a texture
// Acquire an XRCpuImage
if (!m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    return;

// Set up our conversion params
var conversionParams = new XRCpuImage.ConversionParams
{
    // Convert the entire image
    inputRect = new RectInt(0, 0, image.width, image.height),

    // Output at full resolution
    outputDimensions = new Vector2Int(image.width, image.height),

    // Convert to RGBA format
    outputFormat = TextureFormat.RGBA32,

    // Flip across the vertical axis (mirror image)
    transformation = XRCpuImage.Transformation.MirrorY
};

// Create a Texture2D to store the converted image
var texture = new Texture2D(image.width, image.height, TextureFormat.RGBA32, false);

// Texture2D allows us to write directly to the raw texture data as an optimization
var rawTextureData = texture.GetRawTextureData<byte>();
try
{
    unsafe
    {
        // Synchronously convert to the desired TextureFormat
        image.Convert(
            conversionParams,
            new IntPtr(rawTextureData.GetUnsafePtr()),
            rawTextureData.Length);
    }
}
finally
{
    // Dispose the XRCpuImage after we're finished to prevent any memory leaks
    image.Dispose();
}

// Apply the converted pixel data to our texture
texture.Apply();
The AR Foundation Samples GitHub repository contains a similar example that you can run on your device.
Asynchronous conversion
If you do not need to access the converted image immediately, you can convert it asynchronously.
Asynchronous conversion has three steps:
1. Call XRCpuImage.ConvertAsync(XRCpuImage.ConversionParams). ConvertAsync returns an XRCpuImage.AsyncConversion object that you can use to track the conversion status.
Note
You can dispose the XRCpuImage before the asynchronous conversion completes. The data contained by the XRCpuImage.AsyncConversion is not bound to the XRCpuImage.
2. Await the AsyncConversion status until the conversion is done:
while (!conversion.status.IsDone()) yield return null;
3. After the conversion is done, read the status value to determine whether the conversion succeeded. AsyncConversionStatus.Ready indicates a successful conversion. If successful, call AsyncConversion.GetData<T> to retrieve the converted data. GetData<T> returns a NativeArray<T> that is a view into the native pixel array. You don't need to dispose this NativeArray, as AsyncConversion.Dispose will dispose it.
Important
You must explicitly dispose the XRCpuImage.AsyncConversion. Failing to dispose an AsyncConversion will leak memory until the XRCameraSubsystem is destroyed.
Asynchronous requests typically complete within one frame, but can take longer if you queue multiple requests at once. Requests are processed in the order they are received, and there is no limit on the number of requests.
Examples
void AsynchronousConversion()
{
    // Acquire an XRCpuImage
    if (m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    {
        // If successful, launch an asynchronous conversion coroutine
        StartCoroutine(ConvertImageAsync(image));

        // It is safe to dispose the image before the async operation completes
        image.Dispose();
    }
}

IEnumerator ConvertImageAsync(XRCpuImage image)
{
    // Create the async conversion request
    var request = image.ConvertAsync(new XRCpuImage.ConversionParams
    {
        // Use the full image
        inputRect = new RectInt(0, 0, image.width, image.height),

        // Optionally downsample by 2
        outputDimensions = new Vector2Int(image.width / 2, image.height / 2),

        // Output an RGB color image format
        outputFormat = TextureFormat.RGB24,

        // Flip across the Y axis
        transformation = XRCpuImage.Transformation.MirrorY
    });

    // Wait for the conversion to complete
    while (!request.status.IsDone())
        yield return null;

    // Check status to see if the conversion completed successfully
    if (request.status != XRCpuImage.AsyncConversionStatus.Ready)
    {
        // Something went wrong
        Debug.LogErrorFormat("Request failed with status {0}", request.status);

        // Dispose even if there is an error
        request.Dispose();
        yield break;
    }

    // Image data is ready. Let's apply it to a Texture2D
    var rawData = request.GetData<byte>();

    // Create a texture
    var texture = new Texture2D(
        request.conversionParams.outputDimensions.x,
        request.conversionParams.outputDimensions.y,
        request.conversionParams.outputFormat,
        false);

    // Copy the image data into the texture
    texture.LoadRawTextureData(rawData);
    texture.Apply();

    // Dispose the request including raw data
    request.Dispose();
}
There is also an overload of ConvertAsync that accepts a delegate and does not return an XRCpuImage.AsyncConversion, as shown in the example below:
public void GetImageAsync()
{
    // Acquire an XRCpuImage
    if (m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    {
        // Perform async conversion
        image.ConvertAsync(new XRCpuImage.ConversionParams
        {
            // Get the full image
            inputRect = new RectInt(0, 0, image.width, image.height),

            // Downsample by 2
            outputDimensions = new Vector2Int(image.width / 2, image.height / 2),

            // Color image format
            outputFormat = TextureFormat.RGB24,

            // Flip across the Y axis
            transformation = XRCpuImage.Transformation.MirrorY

            // Call ProcessImage when the async operation completes
        }, ProcessImage);

        // It is safe to dispose the image before the async operation completes
        image.Dispose();
    }
}

void ProcessImage(
    XRCpuImage.AsyncConversionStatus status,
    XRCpuImage.ConversionParams conversionParams,
    NativeArray<byte> data)
{
    if (status != XRCpuImage.AsyncConversionStatus.Ready)
    {
        Debug.LogErrorFormat("Async request failed with status {0}", status);
        return;
    }

    // Copy to a Texture2D, pass to a computer vision algorithm, etc
    DoSomethingWithImageData(data);

    // Data is destroyed upon return. No need to dispose
}
If you need the data to persist beyond the lifetime of your delegate, make a copy. See NativeArray<T>.CopyFrom.
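For example, a delegate could copy the incoming data into a buffer that the class owns. A sketch, where m_PersistentBuffer is a hypothetical NativeArray<byte> field that you must eventually dispose yourself, for example in OnDestroy:
NativeArray<byte> m_PersistentBuffer;

void ProcessImage(
    XRCpuImage.AsyncConversionStatus status,
    XRCpuImage.ConversionParams conversionParams,
    NativeArray<byte> data)
{
    if (status != XRCpuImage.AsyncConversionStatus.Ready)
        return;

    // Allocate (or reallocate) a buffer we own, sized to match the data
    if (!m_PersistentBuffer.IsCreated || m_PersistentBuffer.Length != data.Length)
    {
        if (m_PersistentBuffer.IsCreated)
            m_PersistentBuffer.Dispose();
        m_PersistentBuffer = new NativeArray<byte>(data.Length, Allocator.Persistent);
    }

    // Copy before this method returns, because data is destroyed afterwards
    m_PersistentBuffer.CopyFrom(data);
}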
Raw image planes
Note
An image "plane", in this context, refers to a channel used in the video format. It is not a planar surface and is not related to ARPlane.
Most video formats use a YUV encoding variant, where Y is the luminance plane, and the UV plane(s) contain chromaticity information. U and V can be interleaved or separate planes, and there might be additional padding per pixel or per row.
If you need access to the raw, platform-specific YUV data, you can get each image "plane" using the XRCpuImage.GetPlane method, as shown in the example below:
if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    return;

// Consider each image plane
for (int planeIndex = 0; planeIndex < image.planeCount; ++planeIndex)
{
    // Log information about the image plane
    var plane = image.GetPlane(planeIndex);
    Debug.LogFormat("Plane {0}:\n\tsize: {1}\n\trowStride: {2}\n\tpixelStride: {3}",
        planeIndex, plane.data.Length, plane.rowStride, plane.pixelStride);

    // Do something with the data
    MyComputerVisionAlgorithm(plane.data);
}

// Dispose the XRCpuImage to avoid resource leaks
image.Dispose();
XRCpuImage.Plane provides direct access to a native memory buffer via NativeArray<byte>. This is a view into the native memory; you don't need to dispose the NativeArray. You should consider this memory read-only; its data is valid only until the XRCpuImage is disposed.
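For example, once you know the plane layout, you can index individual pixels with rowStride and pixelStride. A sketch, assuming a YUV format in which plane 0 is the luminance (Y) plane, as is the case on ARCore and ARKit, and that the read happens before the XRCpuImage is disposed:
// Sample the luminance (Y) value at a hypothetical pixel (x, y)
var yPlane = image.GetPlane(0);
int x = 100, y = 200;
byte luminance = yPlane.data[y * yPlane.rowStride + x * yPlane.pixelStride];
Debug.LogFormat("Luminance at ({0}, {1}): {2}", x, y, luminance);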