Lidar-Camera Calibration (Targetless)
This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.
Click on New Calibration to create a new calibration dataset.
Select LiDAR-Camera Calibration to create a new dataset.
Upon selecting LiDAR-Camera Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.
Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard or charucoboard as the calibration target, while targetless calibration uses the scene captured in both the LiDAR and camera sensor data.
The camera's intrinsic parameters must be added here. Users have three options:
Users can use the Camera Intrinsic calibration tool to compute the intrinsic parameters, save them to the profile, and then load them here. For more details, click here.
Users can also load the intrinsic parameters from a JSON file.
Users can manually enter the intrinsic parameters if they already have them.
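For reference, the intrinsic parameters consist of the focal lengths, the principal point, and the distortion coefficients. The snippet below uses made-up values (the exact JSON schema expected by the tool may differ) to show how these form the camera matrix used for projection later on:

```python
import numpy as np

# Made-up example values; substitute the intrinsics of your own camera.
fx, fy = 1054.3, 1052.9                                # focal lengths in pixels
cx, cy = 960.5, 540.2                                  # principal point in pixels
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

# 3x3 camera (intrinsic) matrix
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```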
Add point cloud files from the LiDAR and images from the camera sensor. After adding, pair the point cloud files with the matching image files before continuing.
To get the initial estimates, users can map any four corresponding points from the image and the point cloud data.
Alternatively, users can add the initial estimates if they know them. In such a case, users can skip manually adding the markers. Users can click Add estimated extrinsic parameters to add the initial estimates.
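As an illustration of how four image-to-point-cloud correspondences can produce an initial extrinsic estimate, here is a minimal sketch using OpenCV's solvePnP with placeholder correspondences and intrinsics (the tool's internal method may differ):

```python
import cv2
import numpy as np

# Placeholder intrinsics and correspondences, for illustration only.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Four corresponding points: 3D in the lidar frame (meters), 2D in the image (pixels).
lidar_points = np.array([[12.1, 3.4, -1.2],
                         [11.8, -2.9, -1.1],
                         [25.6, 0.8, 0.4],
                         [18.2, 6.1, -0.9]])
image_points = np.array([[512.0, 388.0],
                         [1403.0, 395.0],
                         [960.0, 310.0],
                         [401.0, 355.0]])

# Estimate rotation and translation from the correspondences.
ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation, lidar frame -> camera frame
print(R, tvec)                   # initial estimate of the extrinsic parameters
```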
Once the estimated extrinsic parameters are in the tool, users can visualize them by clicking the Visualize button. The visualization offers several sensor fusion techniques for assessing the accuracy of the extrinsic parameters. For more details, visit Sensor fusion techniques.
Good initial estimates are crucial for generating accurate final extrinsic parameters.
If the estimated parameters are significantly off, users should clear the markers and redo the markings to obtain better initial estimates.
There are two types of segmentation approaches available for the user to select:
Auto segmentation: This approach automates the segmentation of vehicles in point clouds and images using a deep learning model trained on various datasets.
Manual segmentation (Lidar): In this approach, the user adds bounding boxes in the lidar frame and fits them to the vehicles in the point cloud. Bounding boxes must be added for all the point clouds uploaded for calibration. This can be done by selecting the Bounding box mode, adding the bounding boxes, and clicking Save Labels.
Manual segmentation (Image): There are two ways to segment the images manually:
Semantic Painting: Users can use the brush to paint the vehicles in the image and click on Save Labels.
Segment anything: In this approach, users place a cluster of points on each vehicle. Points belonging to the same vehicle should be placed under the same category. Place at least one point on each surface of the vehicle, such as the windshield, sides, and roof, so that the model does not miss any part of the vehicle when it runs. After placing the points in each image, click Save Labels to save the data.
Note: Auto segmentation is suggested initially. Based on the segmented vehicles in the point clouds and images, the user can decide whether to proceed with auto-segmentation or perform the segmentation manually.
Users need to click Calibrate to further optimize the estimated extrinsic parameters. All uploaded pairs are used in the optimization process.
Users can optimize only the angles by selecting the Angles only checkbox. Enabling Angles only typically yields better sensor angle accuracy (note that the sensor position is not optimized in this case).
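Conceptually, Angles only means the optimizer searches over roll, pitch, and yaw while the translation stays fixed at its current estimate. The sketch below illustrates that idea with a hypothetical reprojection-style cost; Deepen's actual optimizer and error term are internal to the tool:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def angular_cost(angles_deg, translation, lidar_pts, target_px, K):
    # Hypothetical cost: mean pixel distance between projected lidar points and
    # their associated segmented image pixels (points assumed in front of the camera).
    R = Rotation.from_euler("xyz", angles_deg, degrees=True).as_matrix()
    cam = (R @ lidar_pts.T).T + translation
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    return float(np.mean(np.linalg.norm(uv - target_px, axis=1)))

def refine_angles_only(initial_angles_deg, fixed_translation, lidar_pts, target_px, K):
    # Only roll, pitch, and yaw are free; px, py, pz stay at the initial estimate.
    result = minimize(angular_cost, x0=np.asarray(initial_angles_deg, dtype=float),
                      args=(fixed_translation, lidar_pts, target_px, K),
                      method="Nelder-Mead")
    return result.x
```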
Once the entire calibration is done, users can download all intrinsic and extrinsic parameters by clicking the Export button in the header.
Users can use the following techniques to visualize the extrinsic parameters.
Frustum: Users can see the image's field of view in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.
LiDAR points in image: Users can see the LiDAR points projected in the camera image using extrinsic parameters.
Color points from camera: Users can see the point cloud colored with the camera image in the lidar space, using the extrinsic parameters.
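For example, coloring the point cloud from the camera amounts to projecting each LiDAR point into the image and sampling the pixel color at that location. A rough sketch with open3d and OpenCV, using placeholder file names and calibration values:

```python
import cv2
import numpy as np
import open3d as o3d

# Placeholder calibration values; use your own exported parameters.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)                      # lidar frame -> camera frame

cloud = o3d.io.read_point_cloud("frame_000.pcd")   # hypothetical file names
image = cv2.cvtColor(cv2.imread("frame_000.jpg"), cv2.COLOR_BGR2RGB)

pts = np.asarray(cloud.points)
cam = (R @ pts.T).T + t
front = cam[:, 2] > 0.1                            # keep points in front of the camera
uv = (K @ cam[front].T).T
uv = (uv[:, :2] / uv[:, 2:3]).astype(int)

h, w = image.shape[:2]
inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

# Assign the sampled image colors to the corresponding points (others stay black).
colors = np.zeros_like(pts)
idx = np.where(front)[0][inside]
colors[idx] = image[uv[inside, 1], uv[inside, 0]] / 255.0

cloud.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("frame_000_colored.pcd", cloud)
```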
With perfect LiDAR-camera calibration and accurate segmentation of both the point cloud and the camera images, the projection of the segmented LiDAR points aligns precisely with the corresponding segmented pixels in the camera image.
Based on the above concept, we formulate our error function in terms of the following quantities:
A projection function that projects the 3D LiDAR points onto the camera image, where K is the camera intrinsic matrix and R and t are the rotation and translation parameters being estimated in this case.
The set of all segmented LiDAR points, together with the norm of each point in 3D space.
An alignment function that calculates the proximity between the projected LiDAR points and the corresponding segmented pixels in the camera image.
A normalisation constant.
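As an illustrative sketch only, writing $\pi$ for the projection function, $P$ for the set of segmented LiDAR points, $A$ for the alignment function, and $N$ for the normalisation constant, an error of this general shape can be written as

$$E(R, t) = \frac{1}{N} \sum_{p \in P} A\big(\pi(K, R, t, p)\big),$$

where the 3D norm $\lVert p \rVert$ of each point may additionally enter as a per-point weight; the exact expression used by the tool is not reproduced here.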
The graph below compares the ground truth error, calculated using our manual validation method, with the Deepen error function for 13 different extrinsic parameters.
The plot demonstrates a strong correlation between our error function and the ground truth error for extrinsic parameters within a 1-degree deviation from the ground truth.
The extrinsic angles estimated by Deepen are Roll = -91.676, Pitch = 1.263, and Yaw = 179.204 (in degrees), with a ground truth deviation of only 0.25 degrees.
The extrinsic angles exhibiting the least deviation from the ground truth are -91.676 for Roll, 0.763 for Pitch, and 179.204 for Yaw.
The angles roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.
lidarPoint3D is the 3D coordinates of a point in the lidar coordinate system.
imagePoint3D is the 3D coordinates of the same point in the camera coordinate system.
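Assuming the convention that a LiDAR point maps into the camera frame as imagePoint3D = R · lidarPoint3D + [px, py, pz], with R built from roll, pitch, and yaw (the Euler-angle order below is an assumption and should be checked against the tool's visualization), a small sketch:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lidar_to_camera(lidar_point_3d, roll, pitch, yaw, px, py, pz):
    # Assumed fixed-axis x-y-z Euler order, angles in degrees, translation in meters.
    R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
    t = np.array([px, py, pz])
    return R @ np.asarray(lidar_point_3d) + t

# Example with the angles quoted above and placeholder translation values.
print(lidar_to_camera([20.0, 0.0, -1.5],
                      roll=-91.676, pitch=1.263, yaw=179.204,
                      px=0.1, py=-0.05, pz=-0.2))
```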
We currently show three different types of camera coordinate systems. The extrinsic parameters change according to the selected Camera coordinate system. The export option exports the extrinsic parameters based on the selected camera coordinate system.
Optical coordinate system: This is the default coordinate system that we follow.
ROS REP 103: This is the coordinate system convention followed by ROS. When you switch to it, the visualization and the extrinsic parameters update accordingly.
NED: This follows the North-East-Down coordinate system.
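For reference, the fixed rotations relating these conventions look roughly as follows, assuming the usual axis definitions (optical: x right, y down, z forward; ROS REP 103 body frame: x forward, y left, z up; NED: x north, y east, z down, with the camera's forward direction treated as north). The tool's exact definitions may differ:

```python
import numpy as np

# Optical frame (x right, y down, z forward) -> ROS REP 103 body frame (x forward, y left, z up)
R_ROS_FROM_OPTICAL = np.array([[0.0, 0.0, 1.0],
                               [-1.0, 0.0, 0.0],
                               [0.0, -1.0, 0.0]])

# Optical frame -> NED (x north, y east, z down), assuming forward is treated as north
R_NED_FROM_OPTICAL = np.array([[0.0, 0.0, 1.0],
                               [1.0, 0.0, 0.0],
                               [0.0, 1.0, 0.0]])

point_optical = np.array([0.0, 0.0, 5.0])      # a point 5 m straight ahead of the camera
print(R_ROS_FROM_OPTICAL @ point_optical)      # -> [5. 0. 0.]: forward along ROS x
print(R_NED_FROM_OPTICAL @ point_optical)      # -> [5. 0. 0.]: "north" in NED
```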
This is a sample Python script to project lidar points on an image using extrinsic parameters. It uses the open3d and opencv libraries.
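The original script is not reproduced here; the following is a minimal sketch of the same idea, with placeholder file names and calibration values, assuming the lidarPoint3D-to-imagePoint3D convention described above:

```python
import cv2
import numpy as np
import open3d as o3d

# --- Placeholder inputs: replace with your exported calibration values ---
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])              # camera (intrinsic) matrix
dist_coeffs = np.zeros(5)                    # distortion coefficients
R = np.eye(3)                                # rotation: lidar frame -> camera frame
t = np.array([0.0, 0.0, 0.0])                # translation in meters

# Hypothetical file names.
cloud = o3d.io.read_point_cloud("frame_000.pcd")
image = cv2.imread("frame_000.jpg")

# Transform lidar points into the camera frame and keep points in front of the camera.
points = np.asarray(cloud.points)
cam_pts = (R @ points.T).T + t
cam_pts = cam_pts[cam_pts[:, 2] > 0.1]

# Project onto the image plane using the intrinsics and distortion coefficients.
uv, _ = cv2.projectPoints(cam_pts, np.zeros(3), np.zeros(3), K, dist_coeffs)
uv = np.round(uv.reshape(-1, 2)).astype(int)

# Draw the projected points that fall inside the image bounds.
h, w = image.shape[:2]
for u, v in uv:
    if 0 <= u < w and 0 <= v < h:
        cv2.circle(image, (int(u), int(v)), 1, (0, 0, 255), -1)

cv2.imwrite("projection_overlay.png", image)
```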