Lidar-Camera Calibration (Single target)

Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select LiDAR-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting LiDAR-Camera Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.

Approach selection

Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard or charucoboard as the calibration target, while targetless calibration uses the scene captured in both the LiDAR and camera sensor data.

Configuration

Camera Intrinsic Parameters

Intrinsic parameters for the camera are added here. Users have three options.

  • Users can use the Camera Intrinsic calibration tool to calibrate the camera, save the results to a calibration profile, and then load that profile here.

  • Users can load a JSON file containing the intrinsic parameters.

  • Users can manually enter the intrinsic parameters if they already have them.
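
For reference, the sketch below shows how typical intrinsic values (focal lengths, principal point, distortion coefficients) map to an OpenCV-style camera matrix. The file name and field names are illustrative assumptions, not the tool's exact JSON schema.

```python
import json
import numpy as np

# Illustrative file and field names; the actual schema used by the tool may differ.
# Example contents: {"fx": 1200.0, "fy": 1200.0, "cx": 960.0, "cy": 540.0,
#                    "distortion": [-0.1, 0.01, 0.0, 0.0, 0.0]}
with open("camera_intrinsics.json") as f:
    intr = json.load(f)

# 3x3 pinhole camera matrix in the form used by OpenCV (e.g. cv2.projectPoints)
K = np.array([[intr["fx"], 0.0, intr["cx"]],
              [0.0, intr["fy"], intr["cy"]],
              [0.0, 0.0, 1.0]])
dist = np.array(intr["distortion"], dtype=float)
```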

Checkerboard target configuration

  • Horizontal corners: Total number of inner corners from left to right. The blue dots shown in the above preview correspond to the horizontal corners.

  • Vertical corners: Total number of inner corners from top to bottom. The red dots shown in the above preview correspond to the vertical corners.

  • Square size: The side length of each square in meters. It corresponds to the length of the yellow square highlighted in the preview.

  • Left padding: The distance from the leftmost side of the board to the leftmost corner point in meters. Corresponds to the left blue line in the preview.

  • Right padding: The distance from the rightmost side of the board to the rightmost corner point in meters. Corresponds to the right blue line in the preview.

  • Top padding: The distance from the topmost side of the board to the topmost corner point in meters. Corresponds to the top red line in the preview.

  • Bottom padding: The distance from the bottom-most side of the board to the bottom-most corner point in meters. Corresponds to the bottom red line in the preview.

  • On ground: Enable this if the checkerboard is placed on the ground and the point cloud has the ground points in the scene around the checkerboard placement.

  • Tilted: Enable this if the checkerboard is tilted.

Charucoboard target configuration

  • Rows: Total number of squares in the horizontal direction.

  • Columns: Total number of squares in the vertical direction.

  • Square size: The side length of each square in meters.

  • Marker size: The side length of the ArUco marker in meters. This is usually 0.8 times the square size.

  • Left padding: The distance from the board's left edge to the left of the first square in the row.

  • Right padding: The distance from the board's right edge to the right of the last square in the row.

  • Top padding: The distance from the board's bottom edge to the bottom of the last square in the column.

  • Bottom padding: The distance from the board's top edge to the top of the first square in the column.

  • On ground: Enable this if the charucoboard is placed on the ground and the point cloud has the ground points in the scene around the charucoboard placement.

  • Tilted: Enable this if the charucoboard is tilted.

Upload files from LiDAR and Camera

Add point cloud files from the LiDAR and images from the camera sensor. After adding, pair the point cloud files with the matching image files before continuing.

Sample CSV format

X, Y, Z
0,-0,-0
62.545,-3.5064,-3.5911
62.07,-3.5133,-4.1565
32.773,-1.8602,-3.4055
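
As an illustration, a point cloud in this CSV format can be loaded for inspection as follows. This is a minimal sketch assuming the plain X,Y,Z layout shown above; the file name is hypothetical.

```python
import numpy as np
import open3d as o3d

# Skip the "X, Y, Z" header row and read the remaining rows into an Nx3 array.
points = np.loadtxt("lidar_frame.csv", delimiter=",", skiprows=1)

# Wrap the points in an Open3D point cloud for a quick visual check.
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([cloud])
```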

Detect target corners in images

Users can click on Detect corners to detect the corners in the target. This is an automated process, and our algorithm usually detects the corners in the image accurately.

If the target corners are not auto-detected, users can follow the steps below and add the four boundary markers to obtain the inner checkerboard corners.

Estimated extrinsic parameters

The extrinsic parameter space is vast, so the optimization needs an estimated starting point. Users can provide estimated extrinsic parameters in three ways.

Mapping of target corner points

Users can map the target corner points in the point cloud and get the initial estimates of the extrinsic parameters. Only one point cloud mapping is sufficient to get the initial estimates.
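
One common way to turn such 2D-3D correspondences into an initial extrinsic estimate is a perspective-n-point (PnP) solve. The sketch below only illustrates the idea and is not the tool's internal implementation; the function name and arguments are assumptions.

```python
import cv2
import numpy as np

def estimate_initial_extrinsics(lidar_corners, image_corners, K, dist):
    """Rough LiDAR-to-camera extrinsic estimate from 2D-3D correspondences.

    lidar_corners: Nx3 target corner points picked in the LiDAR frame (meters).
    image_corners: Nx2 matching corner pixels detected in the image.
    K, dist: camera matrix and distortion coefficients.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(lidar_corners, dtype=np.float64),
        np.asarray(image_corners, dtype=np.float64),
        K, dist)
    R, _ = cv2.Rodrigues(rvec)  # rotation from the LiDAR frame to the camera frame
    return R, tvec              # [R | t] maps LiDAR points into the camera frame
```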

Auto detect target

Our algorithms can automatically detect targets in the point cloud if the lidar channel data is provided on the configuration page. Please note that the auto-detection might not work properly if there are many flat surfaces, like walls, ceilings, etc., in the scene.

Add estimated extrinsic parameters

Users can manually enter estimated extrinsic parameters.

Verifying the accuracy of the estimated extrinsic parameters

Estimated extrinsic parameters are crucial in generating accurate extrinsic parameters.

To get good initial estimates, users must clear the markers and redo the markings if the estimated parameters are way off.

Run Calibration

Users need to click on Calibrate to optimize the estimated extrinsic parameters further. All the uploaded pairs are used in the optimization process.

Additional options in the run calibration

Deep Optimization

Users can select Deep Optimization to further optimize the extrinsic parameters for datasets that have the Tilted option enabled on the configuration page.

Max correspondence

This value is used as an input to the optimization algorithm. Users can tune it by analyzing the fused point cloud files: if the difference between the input cloud and the generated cloud is significant, the user can increase the max correspondence value and re-run the calibration to improve the results.

Error stats

Users can use these error values, alongside visual confirmation, to estimate the accuracy of the calibration results. The closer the error stats are to zero, the better the extrinsic parameters.

  • Translation Error: Mean difference between the centroid of the checkerboard points in the LiDAR and the centroid of the corners projected into 3D from the image. Values are shown in meters. This calculation happens in the LiDAR coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.

  • Plane Translation Error: Mean Euclidean distance between the centroid of the corners projected into 3D from the image and the plane of the target in the LiDAR. Values are shown in meters. Note: If the board is only partially covered by the LiDAR or the LiDAR scan lines are non-uniformly distributed, the translation and reprojection errors are inaccurate, but this plane translation error remains accurate even in these scenarios.

  • Rotation Error: Mean difference between the normals of the target in the point cloud and of the corners projected into 3D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system. Note: All LiDARs have noise when measuring distance, which in turn causes noise in the target's point cloud and its normals. Usually, this metric cannot measure accurately below 1 degree. For an accurate rotation error, we suggest using a faraway straight edge such as a building edge, roofline, or straight pole and projecting the point cloud onto the image. The rotation error can be calculated from the number of pixels between the image edges and the projected points.

  • Reprojection Error: Mean difference between the centroid of the target corners from the image and the centroid of the projected target from the LiDAR space onto the image. This is calculated in the image coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.

  • Individual error stats for each image/LiDAR pair can be seen. The average shows the mean of the errors of all the eligible image/LiDAR pairs.
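
For intuition, here is a rough sketch of how the first two metrics could be computed from the extracted checkerboard points and the corners projected into 3D. It is an illustration only, not the tool's exact implementation.

```python
import numpy as np

def translation_error(lidar_target_pts, projected_corners_3d):
    # Distance between the two centroids in the LiDAR coordinate system (meters).
    return np.linalg.norm(lidar_target_pts.mean(axis=0) -
                          projected_corners_3d.mean(axis=0))

def plane_translation_error(lidar_target_pts, projected_corners_3d):
    # Fit a plane to the LiDAR target points and measure the distance from the
    # projected-corner centroid to that plane (meters).
    centered = lidar_target_pts - lidar_target_pts.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]  # direction of smallest variance
    offset = projected_corners_3d.mean(axis=0) - lidar_target_pts.mean(axis=0)
    return abs(np.dot(offset, normal))
```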

Download calibration parameters

Once the entire calibration is done, users can download all intrinsic and extrinsic parameters by clicking the Export button in the header.

Analyzing the extrinsic parameters in Visualization Mode:

Sensor fusion techniques

Users can use the following techniques to visualize the extrinsic parameters.

Frustum

Users can see the image's field of view in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.

LiDAR points in image

Users can see the LiDAR points projected in the camera image using extrinsic parameters.

Color points from camera

Users can see the camera's color points in the lidar space using the extrinsic parameters.

Show target in LiDAR

Users can see the checkerboard points projected in the LiDAR frame using the extrinsic parameters.

Image: Target Identification

The target in the image is filled with points. If the target configuration provided by the user is correct, the points will neither overflow nor underflow the target boundary.

LiDAR: Extracted target:

This shows the extracted target from the original lidar file. We use this to calculate the error statistics. We compare the extracted target with the projected target.

Fused Point Cloud:

Targets from all the point clouds are cropped and fused into a single point cloud.

  • Input cloud: This contains the fusion of all input clouds, filtered to the target area. If the target is not present in it, the user has to fix the extrinsic parameters by going back to the mapping step or by updating them manually.

  • Generated target: This contains the fusion of all generated targets. If the target is inaccurate, the user has to fix the target configuration or the inner corner detection.

  • Input and generated target: This contains the fused output of the Input cloud and the Generated target. It helps to analyze the difference between the input and the generated output before optimization.

  • Target before vs after optimization: This shows the difference between the generated target using the extrinsic values before and after the optimization step.

Validate Ground Truth:

To verify the extrinsic parameters obtained from the calibration, we have an additional step that shows how close the final extrinsic values are to the actual extrinsic values of the setup.

Steps to be followed to validate ground truth:

  1. Select the Validate Ground Truth option displayed in the top panel of the visualizer.

  2. From the image, select any edge of the board to be used for error estimation.

  3. Draw a line that exactly matches the selected edge in the image; this is called the Ground Truth line.

  4. Draw another line joining the edge of the points projected from the LiDAR onto the image (the Projected line).

  5. After adding both lines to the image, click the Validate Ground Truth button in the right panel. This generates the ground truth Angle and Pixel errors; a rough sketch of how such errors can be derived is shown below.
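
A rough illustration of how such angle and pixel errors can be derived from the two drawn lines (illustrative only, not the tool's implementation):

```python
import numpy as np

def line_errors(gt_line, proj_line):
    """gt_line, proj_line: pairs of (x, y) pixel endpoints of the two drawn lines."""
    (p1, p2), (q1, q2) = np.asarray(gt_line, float), np.asarray(proj_line, float)
    d1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    d2 = (q2 - q1) / np.linalg.norm(q2 - q1)
    # Angle error: angle between the two line directions, in degrees.
    angle_err = np.degrees(np.arccos(np.clip(abs(np.dot(d1, d2)), -1.0, 1.0)))
    # Pixel error: mean distance of the projected line's endpoints from the
    # (infinite) ground-truth line.
    normal = np.array([-d1[1], d1[0]])
    pixel_err = np.mean([abs(np.dot(q - p1, normal)) for q in (q1, q2)])
    return angle_err, pixel_err
```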

Extrinsic Calibration Output

  • roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.

  • lidarPoint3D is the 3D coordinates of a point in the LiDAR coordinate system.

  • imagePoint3D is the 3D coordinates of a point in the camera coordinate system.
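
A minimal sketch of how these quantities relate, assuming the rotation is composed in Z-Y-X (yaw, pitch, roll) order and that the exported transform maps LiDAR points into the camera frame; both assumptions should be verified against the exported parameters.

```python
import numpy as np

def lidar_to_camera(lidar_point_3d, roll, pitch, yaw, px, py, pz):
    """Map a LiDAR-frame point into the camera frame using exported extrinsics."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                             # assumed composition order
    t = np.array([px, py, pz])                   # meters
    return R @ np.asarray(lidar_point_3d) + t    # imagePoint3D (camera frame)
```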

Camera coordinates system

We currently show three different types of camera coordinate systems. The extrinsic parameters change according to the selected Camera coordinate system. The export option exports the extrinsic parameters based on the selected camera coordinate system.

  • Optical coordinate system: It's the default coordinate system that we follow.

  • ROS REP 103: This is the coordinate system followed by ROS. When you change to this, you can see the change in the visualization and the extrinsic parameters.

  • NED: This follows the North-East-Down coordinate system.
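
For reference, the optical convention (z forward, x right, y down) and the ROS REP 103 body convention (x forward, y left, z up) differ only by a fixed axis permutation; a small sketch:

```python
import numpy as np

# Fixed rotation taking a point expressed in the camera optical frame
# (z forward, x right, y down) into the ROS REP 103 body frame
# (x forward, y left, z up).
R_ROS_FROM_OPTICAL = np.array([[0, 0, 1],
                               [-1, 0, 0],
                               [0, -1, 0]], dtype=float)

point_optical = np.array([0.0, 0.0, 5.0])    # 5 m straight ahead of the camera
point_ros = R_ROS_FROM_OPTICAL @ point_optical
print(point_ros)                             # -> [5. 0. 0.] (forward along x)
```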

Sample Script

This is a sample Python script (attached as project_lidar_points_to_image.py) that projects lidar points onto an image using the extrinsic parameters. It uses the open3d and opencv libraries.
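
The attached script is the reference; the sketch below illustrates the same idea under a few assumptions (hypothetical file names, placeholder intrinsics and extrinsics, and a LiDAR-to-camera extrinsic direction) and is not a copy of that script.

```python
import cv2
import numpy as np
import open3d as o3d

# Assumptions: the point cloud is readable by Open3D, the extrinsics map LiDAR
# points into the camera frame, and K/dist are the camera intrinsics.
cloud = o3d.io.read_point_cloud("lidar_frame.pcd")
points = np.asarray(cloud.points)

R = np.eye(3)        # replace with the rotation from the exported extrinsics
t = np.zeros(3)      # replace with the translation (px, py, pz) in meters
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)   # replace with the exported distortion coefficients

# Transform points into the camera frame and keep those in front of the camera.
cam_pts = (R @ points.T).T + t
cam_pts = cam_pts[cam_pts[:, 2] > 0]

# Project onto the image plane (identity pose, since points are already in
# camera coordinates) and draw them on the camera image.
pixels, _ = cv2.projectPoints(cam_pts, np.zeros(3), np.zeros(3), K, dist)
image = cv2.imread("camera_frame.png")
for u, v in pixels.reshape(-1, 2):
    if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
        cv2.circle(image, (int(u), int(v)), 1, (0, 0, 255), -1)
cv2.imwrite("projection_overlay.png", image)
```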
