Lidar-Camera Calibration (Targetless)

Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select LiDAR-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting LiDAR-Camera Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.

Approach selection

Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard/ChArUco board as the calibration target, while targetless calibration uses the scene captured by both the LiDAR and the camera sensors.

Configuration

Camera Intrinsic Parameters

Add the camera's intrinsic parameters here. Users have three options.

  • Users can run the Camera Intrinsic calibration tool, save the results to their profile, and then load them here. For more details, click here.

  • Users can also load a JSON file containing the intrinsic parameters.

  • Users can manually enter the intrinsic parameters if they already have them.
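If the parameters are entered manually, they typically consist of the focal lengths, principal point, and distortion coefficients. As a minimal sketch (the values below are hypothetical, not the tool's exact input format), this is how such values map to an OpenCV-style camera matrix in Python:

```python
import numpy as np

# Hypothetical intrinsic values; substitute the ones from your own calibration.
fx, fy = 1450.0, 1450.0                            # focal lengths in pixels
cx, cy = 960.0, 540.0                              # principal point in pixels
k1, k2, p1, p2, k3 = -0.12, 0.05, 0.0, 0.0, 0.0    # distortion coefficients

# 3x3 camera matrix in the OpenCV convention.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Distortion vector in OpenCV's (k1, k2, p1, p2, k3) order.
dist = np.array([k1, k2, p1, p2, k3])
```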

Camera input section in Configuration page

Upload files from LiDAR and Camera

Add point cloud files from the LiDAR and images from the camera sensor. After adding, pair the point cloud files with the matching image files before continuing.

Sample CSV format

X, Y, Z
0,-0,-0
62.545,-3.5064,-3.5911
62.07,-3.5133,-4.1565
32.773,-1.8602,-3.4055
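For reference, a point cloud CSV in this format (a header row followed by one X, Y, Z row per point) can be loaded into an N x 3 array, for example with numpy and open3d; the file name below is hypothetical:

```python
import numpy as np
import open3d as o3d

# Skip the "X, Y, Z" header row and read the remaining rows as floats.
points = np.loadtxt("lidar_frame.csv", delimiter=",", skiprows=1)
print(points.shape)  # (N, 3): one X, Y, Z row per LiDAR point

# Optionally wrap the array in an open3d point cloud for inspection.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd])
```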

Estimated extrinsic parameters

Mapping of corresponding points

To get the initial estimates, users can map any four corresponding points from the image and the point cloud data.
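For intuition, an initial pose can be recovered from four such 2D-3D correspondences with a perspective-n-point (PnP) solver. The sketch below uses OpenCV with hypothetical points and intrinsics; it illustrates the idea and is not the tool's actual implementation:

```python
import numpy as np
import cv2

# Four hypothetical 3D points picked in the LiDAR point cloud (meters) ...
object_points = np.array([[12.3, -1.2, -0.8],
                          [15.7,  2.4, -0.6],
                          [ 9.8,  0.5,  0.4],
                          [20.1, -3.1, -1.0]], dtype=np.float64)
# ... and the matching pixel locations picked in the image.
image_points = np.array([[812.0, 415.0],
                         [433.0, 398.0],
                         [655.0, 290.0],
                         [910.0, 452.0]], dtype=np.float64)

K = np.array([[1450.0, 0.0, 960.0],
              [0.0, 1450.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# SOLVEPNP_AP3P works with exactly four correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_AP3P)
R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix (LiDAR -> camera)
print(ok, R, tvec)
```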

Manually enter extrinsic parameters

Alternatively, if users already know the initial estimates, they can skip adding the markers and instead click Add estimated extrinsic parameters to enter them directly.

Verifying the accuracy of the estimated extrinsic parameters

Once the estimated extrinsic parameters are in the tool, users can review them by clicking the Visualize button. The visualization offers several sensor fusion techniques for assessing the accuracy of the extrinsic parameters. For more details, visit Sensor fusion techniques.

Good initial estimates are crucial for generating accurate final extrinsic parameters.

If the estimated parameters are far off, clear the markers and redo the markings to obtain better initial estimates.

Segmentation

There are two types of segmentation approaches available for the user to select:

Auto segmentation

This approach automates the segmentation of vehicles in point clouds and images using a deep learning model trained on various datasets.

Manual segmentation

  1. Lidar: In this approach, the user needs to add bounding boxes in the lidar frame and fit the boxes to vehicles in the point cloud. The bounding boxes must be added for all the point clouds uploaded for calibration. This can be done by selecting the Bounding box mode, adding bounding boxes, and clicking Save Labels.

  2. Image: There are two ways to do manual segmentation:

    1. Semantic Painting: Users can use the brush to paint the vehicles in the image and click on Save Labels.

    2. Segment Anything: In this approach, users place a cluster of points on each vehicle. Points belonging to the same vehicle should be placed under the same category. Place at least one point on each surface of the car, such as the windshield, sides, and roof, so that the model doesn't miss any part of the vehicle when it runs. After placing the points in each image, click Save Labels to save the data.

Note: Auto segmentation is suggested initially. Based on the segmented vehicles in the point clouds and images, the user can decide whether to proceed with auto-segmentation or perform the segmentation manually.

Run Calibration

Users need to click on Calibrate to optimize the estimated extrinsic parameters further. All the uploaded pairs are used in the optimization process.

Additional options in the run calibration

Users can optimize only the angles by selecting the Angles only checkbox. Enabling Angles only has been observed to yield better sensor angle accuracy (note that the sensor position is not optimized in this case).
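As a rough illustration of what Angles only means (the optimizer and cost shown here are assumptions for the example, not the tool's actual code), only roll, pitch, and yaw are treated as free variables while the translation stays fixed at its initial estimate:

```python
import numpy as np
from scipy.optimize import minimize

def alignment_error(angles_deg, t):
    """Placeholder cost: a real implementation would project the segmented
    LiDAR points using these parameters and measure the misalignment
    against the segmented image pixels."""
    roll, pitch, yaw = angles_deg
    return (roll + 91.7) ** 2 + (pitch - 1.3) ** 2 + (yaw - 179.2) ** 2

t_init = np.array([0.10, -0.05, -0.20])       # translation (m), kept fixed
angles_init = np.array([-90.0, 0.0, 180.0])   # roll, pitch, yaw (deg)

# Only the three angles are optimized; the sensor position is not touched.
result = minimize(lambda a: alignment_error(a, t_init), angles_init,
                  method="Nelder-Mead")
print(result.x)  # refined roll, pitch, yaw
```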

Download calibration parameters

Once the entire calibration is done, users can download all intrinsic and extrinsic parameters by clicking the Export button in the header.

Analyzing the extrinsic parameters in Visualization Mode

Sensor fusion techniques

Users can use the following techniques to visualize the extrinsic parameters.

Frustum: Users can see the image's field of view in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.

LiDAR points in image: Users can see the LiDAR points projected in the camera image using extrinsic parameters.

Color points from camera: Users can see the point cloud colored with the camera image in LiDAR space, using the extrinsic parameters.
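As a hedged sketch of the "Color points from camera" fusion (assuming an undistorted pinhole model and LiDAR-to-camera extrinsics R, t; this is not the tool's own code), LiDAR points can be colored with the pixel they project onto:

```python
import numpy as np
import cv2
import open3d as o3d

def colorize_points(points, image_bgr, K, R, t):
    """Assign image colors to LiDAR points that project inside the image.
    points: (N, 3) LiDAR points; R, t: LiDAR -> camera extrinsics."""
    cam = (R @ points.T).T + t            # transform into the camera frame
    in_front = cam[:, 2] > 0.0            # keep points in front of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide -> pixel coords
    h, w = image_bgr.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image_bgr[v[valid], u[valid]][:, ::-1] / 255.0  # BGR -> RGB in [0, 1]
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points[in_front][valid])
    pcd.colors = o3d.utility.Vector3dVector(colors)
    return pcd

# Hypothetical usage:
# pcd = colorize_points(points, cv2.imread("frame.png"), K, R, t)
# o3d.visualization.draw_geometries([pcd])
```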

Error function

  • With perfect LiDAR-Camera calibration and accurate segmentation of both the point cloud and the camera images, the projection of the segmented LiDAR points aligns precisely with the corresponding segmented pixels in the camera image.

  • Based on the above concept, we formulate our error function as follows:

    • $E = 1 - \alpha \frac{num\_segmented\_points}{\sum_{k=1}^{N} \sum_{p_i \in P_k} \frac{1}{||p_i||} D_k(\pi(p_i, K, R, t))}$, where

    • $\pi$ is the projection function that projects 3D LiDAR points onto the camera image, $K$ is the camera intrinsic matrix, and $R$ and $t$ are the rotation and translation parameters being estimated.

    • $N$ is the number of segmented vehicles, and $P_k$ is the set of segmented LiDAR points belonging to the $k$-th vehicle; $\forall p_i \in P_k$, $||p_i||$ is the norm of the point $p_i$ in 3D space.

    • $D_k$ is the alignment function that calculates the proximity between the projected LiDAR points and the corresponding segmented pixels in the camera image.

    • $\alpha$ is a normalization constant.
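For illustration only, here is a minimal sketch of an error with this shape. The concrete choice of $D_k$ (a per-vehicle 2D alignment map, e.g. derived from the segmentation mask, sampled at the projected pixel) is an assumption made for the example, not necessarily how the tool defines it:

```python
import numpy as np
import cv2

def project(points, K, R, t):
    """pi(p, K, R, t): project 3D LiDAR points into pixel coordinates."""
    rvec, _ = cv2.Rodrigues(R)
    uv, _ = cv2.projectPoints(points.astype(np.float64), rvec,
                              t.astype(np.float64), K, None)
    return uv.reshape(-1, 2)

def calibration_error(segmented_points, alignment_maps, K, R, t, alpha=1.0):
    """Sketch of the error E described above.
    segmented_points: list of (N_k, 3) arrays, one per segmented vehicle P_k.
    alignment_maps:   list of D_k, here assumed to be 2D arrays sampled at the
                      projected pixel (this choice of D_k is an assumption)."""
    total, num_points = 0.0, 0
    for pts, D_k in zip(segmented_points, alignment_maps):
        uv = np.round(project(pts, K, R, t)).astype(int)
        h, w = D_k.shape[:2]
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        norms = np.linalg.norm(pts[inside], axis=1)
        total += np.sum(D_k[uv[inside, 1], uv[inside, 0]] / norms)
        num_points += len(pts)
    # Small epsilon guards against the degenerate case of no aligned points.
    return 1.0 - alpha * num_points / max(total, 1e-9)
```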

Graph

  • The graph below compares the ground truth error, calculated using our manual validation method, with the Deepen error function for 13 different extrinsic parameters.

  • The plot demonstrates a strong correlation between our error function and the ground truth error, within a 1-degree deviation from the ground truth.

  • The extrinsic angles estimated by Deepen are as follows: Roll = -91.676, Pitch = 1.263, Yaw = 179.204 in degrees with a ground truth deviation of only 0.25 degrees.

  • The extrinsic angles exhibiting the least deviation from the ground truth are -91.676 for Roll, 0.763 for Pitch, and 179.204 for Yaw.

Extrinsic Calibration Output

  • roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.

  • lidarPoint3D gives the 3D coordinates of a point in the LiDAR coordinate system.

  • imagePoint3D gives the 3D coordinates of a point in the camera coordinate system.
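To make the relationship between these fields concrete, the sketch below assembles roll/pitch/yaw (degrees) and px/py/pz (meters) into a 4x4 LiDAR-to-camera transform and applies it to a LiDAR point. The Euler rotation order and the direction of the transform are assumptions for this example; verify them by checking that a transformed lidarPoint3D from the export reproduces the corresponding imagePoint3D:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical exported values; substitute the ones from your download.
roll, pitch, yaw = -91.676, 1.263, 179.204   # degrees
px, py, pz = 0.05, -0.12, -0.20              # meters

# Assumed Euler order ("xyz"); confirm against the exported point pairs.
R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [px, py, pz]

lidar_point = np.array([12.3, -1.2, -0.8, 1.0])   # homogeneous LiDAR point
camera_point = T @ lidar_point                     # point in the camera frame
print(camera_point[:3])
```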

Camera coordinate system

We currently support three different camera coordinate systems. The extrinsic parameters change according to the selected camera coordinate system, and the export option exports them based on that selection.

  • Optical coordinate system: It's the default coordinate system that we follow.

  • ROS REP 103: This is the coordinate system followed by ROS. Switching to it updates both the visualization and the extrinsic parameters.

  • NED: This follows the North-East-Down coordinate system.
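As a hedged illustration of how the three axis conventions relate (the camera optical frame is commonly x-right, y-down, z-forward; ROS REP 103 uses x-forward, y-left, z-up; NED uses x-north/forward, y-east/right, z-down), the sketch below re-expresses a vector given in optical axes in the other two conventions. How the tool composes these rotations with the exported extrinsics is not shown here and may differ:

```python
import numpy as np

# Re-expresses a vector given in the camera optical frame
# (x right, y down, z forward) in ROS REP 103 body axes (x forward, y left, z up).
R_ROS_FROM_OPTICAL = np.array([[0, 0, 1],
                               [-1, 0, 0],
                               [0, -1, 0]], dtype=float)

# Same idea for NED axes (x north/forward, y east/right, z down).
R_NED_FROM_OPTICAL = np.array([[0, 0, 1],
                               [1, 0, 0],
                               [0, 1, 0]], dtype=float)

v_optical = np.array([0.0, 0.0, 1.0])     # a vector along the optical axis
print(R_ROS_FROM_OPTICAL @ v_optical)     # [1. 0. 0.] -> forward in REP 103
print(R_NED_FROM_OPTICAL @ v_optical)     # [1. 0. 0.] -> north/forward in NED
```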

Sample Script

This is a sample Python script to project lidar points on an image using extrinsic parameters. It uses the open3d and opencv libraries.
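A minimal sketch of such a script is shown below; the file names, intrinsics, and extrinsics are placeholders to be replaced with your own data and calibration output:

```python
import numpy as np
import cv2
import open3d as o3d

# Hypothetical input files; replace with your own data.
pcd = o3d.io.read_point_cloud("lidar_frame.pcd")
image = cv2.imread("camera_frame.png")
points = np.asarray(pcd.points)

# Intrinsics and extrinsics (LiDAR -> camera) from the calibration export.
K = np.array([[1450.0, 0.0, 960.0],
              [0.0, 1450.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
R = np.eye(3)                      # replace with the exported rotation
t = np.zeros(3)                    # replace with the exported translation

# Transform points into the camera frame and keep those in front of the camera.
cam = (R @ points.T).T + t
cam = cam[cam[:, 2] > 0.0]

# Project into pixel coordinates (extrinsics already applied above).
uv, _ = cv2.projectPoints(cam, np.zeros(3), np.zeros(3), K, dist)
uv = uv.reshape(-1, 2).astype(int)

# Draw the projected LiDAR points on the image and save the overlay.
h, w = image.shape[:2]
for u, v in uv:
    if 0 <= u < w and 0 <= v < h:
        cv2.circle(image, (int(u), int(v)), 1, (0, 255, 0), -1)
cv2.imwrite("projection.png", image)
```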

Tool usage guide for old UX
