
    Lidar segmentation

    Lidar segmentation is implemented with extensive use of the Perception package. Before working with lidar segmentation, it's recommended to complete the Perception package tutorial.

    To enable lidar segmentation, add the Perception package to your project, then attach Label components to GameObjects in your scene. The PathtracedSensingComponent in the scene will automatically use the label information to render a unique ID for each ray, which corresponds to the object hit by the ray.

    Note

    You must install the Perception package in your project to perform lidar segmentation.

    Note

    You can import the Perception library from the SensorSDK samples to access a lidar prefab and a demo scene demonstrating lidar segmentation.

    Warning

    The first time you run the project after installing the Perception package, an error about a misconfigured output endpoint may appear. This error has no effect on lidar segmentation. To remove it, select Edit > Project Settings > Perception, and change the active endpoint to No Output Endpoint or Perception Endpoint.

    Warning

    Lidar segmentation requires a PathtracedSensingComponent. Camera-based lidar solutions aren't supported yet.

    Convert lidar output to labels

    To convert the unique ID to the required format using the corresponding label configs, use one of two nodes: PhotosensorToInstanceSegmentation or PhotosensorToSemanticSegmentation. Alternatively, you can use the raw data directly from the Photosensor node output.

    PhotosensorToInstanceSegmentation

    Instance Segmentation Node

    This node accepts the raw output from the photosensor and encodes it into a 2D texture according to a specified IdLabelConfig, provided by the Perception package. The label config translates between the raw unique ID, randomly assigned per Label component, and the ID specified for each GameObject. The node has the following parameters:

    • FrameHeight: The number of beams in the vertical arrangement.
    • FrameWidth: The number of firings per horizontal sweep.
    • RawData: The output from the photosensor node.
    • ID Label Configuration (binding): The corresponding configuration to use for the mapping.

    Every lidar sweep, the node outputs a 2D texture of integers corresponding to the IDs, of size (FrameWidth x FrameHeight).
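    The remapping this node performs can be pictured outside Unity as a lookup over the raw ID buffer. Below is a minimal NumPy sketch, not SensorSDK code: the dimensions, `raw_ids` values, and `id_label_config` contents are all hypothetical stand-ins for the node's inputs.

```python
import numpy as np

# Dimensions of one lidar sweep (hypothetical values).
frame_height = 4   # FrameHeight: beams in the vertical arrangement
frame_width = 8    # FrameWidth: firings per horizontal sweep

# Stand-in for the photosensor's RawData: one raw unique ID per ray.
raw_ids = np.arange(frame_height * frame_width) % 3

# Stand-in for an IdLabelConfig: raw unique ID -> configured label ID.
id_label_config = {0: 0, 1: 10, 2: 20}

# Build a lookup array, remap each ray's raw ID to its configured ID,
# then reshape the flat buffer into the (FrameHeight x FrameWidth) texture.
lookup = np.array([id_label_config[i] for i in range(3)])
instance_texture = lookup[raw_ids].reshape(frame_height, frame_width)
```

    In the actual graph, this lookup happens inside the node; the sketch only illustrates the shape and content of the data flowing through it.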

    PhotosensorToSemanticSegmentation

    Semantic Segmentation Node

    This node uses the rendered unique ID to output semantic segmentation details. The class labels are again provided by the SemanticSegmentationLabelConfig object available in the Perception package. Unlike instance segmentation, the class labels are specified as colors corresponding to each class, so this node's output can be visualized directly, without a lookup table to map IDs to colors. The node interface is as follows:

    • FrameHeight: The number of beams in the vertical arrangement.
    • FrameWidth: The number of firings per horizontal sweep.
    • RawData: The output from the photosensor node.
    • Semantic Label Configuration (binding): The corresponding configuration to use for the mapping.

    Every lidar sweep, the node outputs a 2D RGB texture of size (FrameWidth x FrameHeight).
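    Conceptually, the node replaces each ray's class ID with the color assigned to that class in the label config. A minimal NumPy sketch of that substitution follows; the dimensions, class IDs, and color assignments are illustrative, not taken from SensorSDK.

```python
import numpy as np

frame_height, frame_width = 4, 8

# Per-ray class IDs for one sweep (hypothetical values).
class_ids = (np.arange(frame_height * frame_width) % 2).reshape(
    frame_height, frame_width)

# Stand-in for a semantic label config: class ID -> RGB color.
class_colors = np.array([
    [255, 0, 0],    # class 0, e.g. "car"
    [0, 255, 0],    # class 1, e.g. "pedestrian"
], dtype=np.uint8)

# Index the palette to build the (FrameHeight x FrameWidth x 3) RGB texture.
semantic_texture = class_colors[class_ids]
```

    Because the colors are the class labels themselves, this texture can be displayed as-is.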

    Visualize the lidar segmentation

    To visualize the data, use a Lookup table node with a qualitative LUT, which maps consecutive integers to distinct colors.
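    The effect of a qualitative LUT can be sketched as follows. This is an illustration of the idea, not the Lookup table node itself; the palette values are borrowed from a common tab10-style colormap, and the ID texture is a made-up example.

```python
import numpy as np

# A small qualitative LUT: consecutive integer IDs map to visually
# distinct colors (tab10-style values, for illustration only).
qualitative_lut = np.array([
    [ 31, 119, 180],
    [255, 127,  14],
    [ 44, 160,  44],
    [214,  39,  40],
], dtype=np.uint8)

# A tiny ID texture, as produced by the instance segmentation node.
id_texture = np.array([[0, 1, 2],
                       [3, 0, 1]])

# Wrap IDs past the palette length, then index the LUT for an RGB image.
rgb_image = qualitative_lut[id_texture % len(qualitative_lut)]
```

    Neighboring IDs land on clearly different colors, which is what makes a qualitative LUT preferable to a sequential one for segmentation output.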
