
LiDAR-Camera Calibration

Calibration List:

  • This page contains the list of calibrations. Users can launch an existing dataset, delete datasets, and manage user access to them.

Calibration Launch:

  • Users can click on the New button and then select the calibration type. In this case, select LiDAR-Camera Calibration.
  • Users can click on Get Started to go to the launch page.
  • Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard/charucoboard as the calibration target, while targetless calibration uses only the scene captured by both the LiDAR and the camera.

Checkerboard Configuration:

Target-based calibration requires both the calibration dataset name and the target configuration.
Targetless calibration requires only the dataset name.
  • Enter the calibration dataset name.
  • Enter the target configuration.
  • Horizontal corners: Total number of inner corners from left to right. The blue dots shown in the above preview correspond to the horizontal corners.
  • Vertical corners: Total number of inner corners from top to bottom. The red dots shown in the above preview correspond to the vertical corners.
  • Square size: The side length of each square in meters. It corresponds to the length of the yellow square highlighted in the preview.
  • Left padding: The distance from the leftmost side of the board to the leftmost corner point, in meters. Corresponds to the blue line in the preview.
  • Right padding: The distance from the rightmost side of the board to the rightmost corner point, in meters. Corresponds to the red line in the preview.
  • Top padding: The distance from the topmost side of the board to the topmost corner point, in meters. Corresponds to the red line in the preview.
  • Bottom padding: The distance from the bottommost side of the board to the bottommost corner point, in meters. Corresponds to the blue line in the preview.
  • Is on ground?: Enable this if the checkerboard is placed on the ground and the point cloud contains ground points around the checkerboard.
  • Is tilted?: Enable this if the checkerboard is tilted. A sketch of a full target configuration follows this list.
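
The tool collects these values through the form above; for quick reference, a full configuration could be written out as below. This is only an illustrative sketch with placeholder values and field names, not the tool's actual schema.

# Illustrative target configuration; field names and values are placeholders.
target_config = {
    "horizontal_corners": 9,   # inner corners, left to right
    "vertical_corners": 6,     # inner corners, top to bottom
    "square_size": 0.08,       # meters, side length of one square
    "left_padding": 0.05,      # meters, board edge to leftmost corner
    "right_padding": 0.05,
    "top_padding": 0.05,
    "bottom_padding": 0.05,
    "is_on_ground": True,
    "is_tilted": False,
}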

Camera Intrinsic Parameters:

Intrinsic parameters for the camera are added here. Users have three options; a sketch of the quantities involved follows this list.
  • Users can calibrate with the intrinsic calibration tool, save the results to a profile, and then load them here. Intrinsic calibration tool usage guide.
  • Users can load the intrinsic parameters from a JSON file.
  • Users can manually enter the intrinsic parameters if they already have them.
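
Whichever option is used, the parameters boil down to the standard pinhole camera matrix and distortion coefficients. The sketch below shows these quantities with placeholder values; the exact JSON layout the tool expects is not reproduced here.

import numpy as np

fx, fy = 1000.0, 1000.0   # focal lengths in pixels (placeholders)
cx, cy = 640.0, 360.0     # principal point in pixels (placeholders)

camera_matrix = np.array([[fx, 0.0, cx],
                          [0.0, fy, cy],
                          [0.0, 0.0, 1.0]])

# Distortion coefficients in OpenCV order: k1, k2, p1, p2, k3.
dist_coeffs = np.array([-0.10, 0.03, 0.0, 0.0, 0.0])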

Upload Images and LiDAR Pair:

  • Users need to upload image and LiDAR pairs.
  • Each pair must have the checkerboard in view. Please make sure the checkerboard is in a different position in each pair.
  • Users can upload point clouds on the left and images on the right.
  • The Continue button stays disabled until the number of point cloud files equals the number of image files.

Supported Formats: pcd, csv, and bin

CSV file format

We support a simple CSV format with X, Y, Z values:
"X","Y","Z"
0,-0,-0
62.545,-3.5064,-3.5911
62.07,-3.5133,-4.1565
32.773,-1.8602,-3.4055
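
For example, a CSV file in this format can be loaded and converted to a pcd file with a few lines of Python; "scan.csv" below is a placeholder file name.

import numpy as np
import open3d as o3d

# Skip the "X","Y","Z" header row and read the three columns.
points = np.loadtxt("scan.csv", delimiter=",", skiprows=1)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("scan.pcd", pcd)   # save as a pcd file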

Detect target corners in images:

This step is only required for target-based calibration.
  • The checkerboard's inner corners have to be detected. In most cases, they are auto-detected.
If the checkerboard corners are not auto-detected, users can follow the steps below and add the four boundary markers to get the inner checkerboard corners (a standard corner-detection sketch also follows).
Steps to add boundary markers
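
How the tool detects corners internally is not documented here, but OpenCV's standard checkerboard detector illustrates the idea; the pattern size must match the configured inner-corner counts. The file name and counts below are placeholders.

import cv2

pattern_size = (9, 6)  # (horizontal corners, vertical corners) from the target configuration

image = cv2.imread("frame.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine the detected corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    cv2.drawChessboardCorners(image, pattern_size, corners, found)
    cv2.imwrite("corners.png", image)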

Map target in point cloud:

  • Target-based calibration:
Users can map the checkerboard border points in the point cloud to get the initial estimates of the extrinsic parameters.
Mapping a single point cloud is sufficient for the initial estimates; there is no need to map more than one.
  • Targetless calibration: Users can correspond any four points between the image and the point cloud to get the initial estimates, as in the sketch after this section.
Alternatively, users who already know the initial estimates can skip adding the markers manually and click 'Add estimated extrinsic parameters' to enter them.
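
The tool computes the initial estimates internally; the sketch below only illustrates how four 2-D/3-D correspondences determine a pose, using OpenCV's PnP solver. All values are placeholders.

import numpy as np
import cv2

# Four corresponding points: 3-D coordinates picked in the LiDAR cloud
# and their pixel locations in the image, in the same order.
lidar_points = np.array([[10.2, -1.5,  0.3],
                         [10.4,  1.6,  0.4],
                         [ 9.8,  1.5, -1.2],
                         [ 9.9, -1.4, -1.1]])
image_points = np.array([[420.0, 310.0],
                         [880.0, 305.0],
                         [875.0, 640.0],
                         [430.0, 645.0]])

# Intrinsics from the earlier step (placeholders).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix, LiDAR -> camera
# rvec/tvec form the initial extrinsic estimate (LiDAR -> camera).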

Verifying the accuracy of the estimated extrinsic parameters:

Once the estimated extrinsic parameters are in the tool, users can visualize them by clicking the Visualize button. The visualization offers several sensor fusion techniques for assessing the accuracy of the extrinsic parameters. Sensor fusion techniques.
If the estimated parameters are far off, users need to clear the markers and re-mark the four corresponding points to get good initial estimates.
Good initial estimates are crucial for producing accurate final extrinsic parameters.

Run Calibration:

  • Users need to click on Run calibration to further optimize the estimated extrinsic parameters. All the uploaded pairs are used in the optimization process.

Additional options in Run Calibration

Target-based: Users can select deep optimization to further optimize the extrinsics.
Targetless: Users can choose to optimize only the angles by selecting the 'Angles only' check box.

Error stats (target-based):

Users can use these error values, alongside visual confirmation, to estimate the accuracy of the calibration results. We extract the checkerboard from the raw point cloud of the LiDAR frame and compare it with the checkerboard corners in the 2-D image. The extracted checkerboard can be viewed in the visualizer. The extrinsic error metrics, along with their descriptions, are as follows; a sketch of the centroid comparison they rely on follows the list.
  • Translation Error: Mean distance between the centroid of the checkerboard points in the LiDAR and the centroid of the image corners projected into 3-D. Values are shown in meters. This calculation happens in the LiDAR coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.
  • Plane Translation Error: Mean Euclidean distance between the centroid of the corners projected into 3-D from the image and the plane of the checkerboard/charucoboard in the LiDAR. Values are shown in meters. Note: If the board is only partially covered by the LiDAR or the LiDAR scan lines are non-uniformly distributed, the translation and reprojection errors are inaccurate, but this plane translation error remains accurate.
  • Rotation Error: Mean angular difference between the normal of the checkerboard in the point cloud and the normal of the corners projected into 3-D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system.
  • Reprojection Error: Mean distance between the centroid of the image corners and the centroid of the LiDAR checkerboard points projected onto the image. Values are shown in pixels. This calculation happens in the image coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.
  • Individual error stats can be seen for each image/LiDAR pair. Average shows the mean of the errors over all eligible image/LiDAR pairs.
  • Errors closer to zero indicate a better calibration.
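
How the tool aggregates these metrics internally is not specified beyond the descriptions above; the sketch below only shows the kind of centroid comparison the translation error describes, with hypothetical array inputs.

import numpy as np

def translation_error(board_points, projected_corners):
    """Distance between centroids, as in the translation error above.

    board_points: (N, 3) checkerboard points extracted from the LiDAR.
    projected_corners: (M, 3) image corners projected into 3-D in the
    LiDAR frame.
    """
    return np.linalg.norm(board_points.mean(axis=0) -
                          projected_corners.mean(axis=0))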

Download calibration parameters:

  • Once the entire calibration is done, users can download all intrinsic and extrinsic parameters.

Analyzing the extrinsic parameters in Visualization Mode:

Sensor fusion techniques:

Users can use the following techniques to visualize the extrinsic parameters.
  • When the Frustum check box is enabled, users can see the field of view of the image in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. The image axes are also displayed according to the extrinsic parameters.
  • When the 'LiDAR points in image' check box is enabled, users can see the LiDAR points projected onto the camera image using the extrinsic parameters.
  • When the 'Checkerboard in LiDAR' check box is enabled, users can see the checkerboard points projected into the LiDAR frame using the extrinsic parameters.

Image ‘Checkerboard Identification’:

  • This can be used to verify whether the checkerboard area is properly identified.
  • Users can change the checkerboard configuration or retry corner detection to fix the checkerboard identification.
  • This step displays the undistorted images, so users can verify that the undistortion is correct. A sketch of the underlying operation follows.
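
Undistortion itself is a standard operation on the intrinsics; the sketch below shows it with OpenCV and placeholder values, purely for reference.

import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.10, 0.03, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

image = cv2.imread("frame.png")
undistorted = cv2.undistort(image, K, dist)
cv2.imwrite("undistorted.png", undistorted)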

Image ‘Raw File’:

  • The raw image files are displayed.

LiDAR ‘Raw File’:

  • The raw LiDAR files are displayed.

LiDAR ‘Extracted checkerboard’:

  • This shows the checkerboard extracted from the original LiDAR file. It is used for the error stats calculation; we compare the extracted checkerboard with the projected checkerboard.

Fused Point Cloud: When users enable ‘Fused point cloud’, they can select one of the following fused files.

  • Input Cloud: The fusion of all input clouds, filtered to the checkerboard area. If the checkerboard is not in the LiDAR file, the user has to fix the extrinsic parameters by going back to the mapping step or by manually updating the extrinsic parameters.
  • Generated Checkerboard: The fusion of all generated checkerboards. If the checkerboard is not accurate, the user has to fix the checkerboard configuration or the inner corner detection.
  • Input and Generated Checkerboard: The fused output of the two files above. This helps analyze the difference between the input and the generated output before optimization.
  • Checkerboard before vs. after optimization: This shows the difference between the checkerboards generated using the extrinsic values before and after the optimization step.

Max correspondence:

This value is used as an input to the algorithm. Users can tune it by analyzing the fused point cloud LiDAR files. If the difference between the input and the generated cloud is large, the user can increase the max correspondence value and retry to improve the calibration results. A registration sketch follows.
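
The tool does not document its internal algorithm, but a max-correspondence threshold behaves the same way in standard point-cloud registration, as the Open3D sketch below illustrates with placeholder file names.

import open3d as o3d

source = o3d.io.read_point_cloud("generated_checkerboard.pcd")  # placeholder
target = o3d.io.read_point_cloud("input_cloud.pcd")             # placeholder

# Point pairs farther apart than this distance (meters) are ignored
# when matching; a larger value tolerates a bigger initial offset.
max_correspondence = 0.2

result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.fitness, result.inlier_rmse)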

Extrinsic Calibration Output:

  • roll, pitch, and yaw are in degrees; px, py, and pz are in meters.
  • roll, pitch, yaw, px, py, pz are the extrinsic parameters downloaded from the calibration tool.
  • lidarPoint3D is the 3-D coordinates of a point in the LiDAR coordinate system.
  • imagePoint3D is the 3-D coordinates of a point in the camera coordinate system. A sketch of applying these parameters follows this list.
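
Applying the downloaded parameters to a point amounts to a rotation plus a translation: imagePoint3D = R · lidarPoint3D + t. The sketch below assumes the roll/pitch/yaw compose in x-y-z order; verify the convention against the bundled sample script before relying on it. All numeric values are placeholders.

import numpy as np
from scipy.spatial.transform import Rotation

roll, pitch, yaw = 90.0, -90.0, 0.0   # degrees, from the downloaded parameters
px, py, pz = 0.1, -0.05, -0.2         # meters

# Assumed x-y-z Euler order; confirm against the sample script.
R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
t = np.array([px, py, pz])

lidarPoint3D = np.array([12.0, 0.5, -1.2])
imagePoint3D = R @ lidarPoint3D + t   # same point in camera coordinates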

Deep optimization:

  • Deep optimization is a newly added feature. It further improves the calibration by using the edge lines of the checkerboard in the optimization process.
  • In visualization mode, users can open the LiDAR drop-down and select 'Edge points checkerboard' to visualize the checkerboard edges extracted from the raw LiDAR.
  • Users can also use the 2D Line Reprojection Error to check the individual error value of each pair. This shows the combined reprojection error of all four lines in the 2-D scene.
  • The checkerboard must be tilted for deep optimization. Users also have to check the 'Is checkerboard tilted' option to see the Deep optimize button in the improve calibration accuracy mode. Check the Deep optimization option in the improve calibration accuracy mode and then click Improve calibration accuracy to run deep optimization.

Camera sensor coordinates:

We currently support three camera sensor coordinate systems. On selecting a camera coordinate system, the extrinsic parameters change accordingly. The export option exports the extrinsic parameters based on the selected camera coordinate system. A conversion sketch follows the list.
  • Optical coordinate system: The default coordinate system used by the tool.
  • ROS REP 103: The coordinate system followed by ROS. On switching to it, you can see the change in the visualization and in the extrinsic parameters.
  • NED: This follows the north-east-down coordinate system.
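
REP 103 defines the body frame as x forward, y left, z up, while a camera optical frame is x right, y down, z forward. Assuming the tool applies this standard re-labelling when you switch systems, the fixed rotation between the two looks like this.

import numpy as np

# Re-express a point from the camera optical frame (x right, y down,
# z forward) in the ROS REP 103 body frame (x forward, y left, z up).
OPTICAL_TO_ROS = np.array([[0.0,  0.0, 1.0],
                           [-1.0, 0.0, 0.0],
                           [0.0, -1.0, 0.0]])

p_optical = np.array([0.2, -0.1, 5.0])  # placeholder point, optical frame
p_ros = OPTICAL_TO_ROS @ p_optical      # same point, ROS coordinates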

Sample Script

This is a sample Python script that uses the calibration result to project LiDAR points onto an image. It uses the open3d and opencv libraries.
project_lidar_points_to_image.py
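
The attached script is the authoritative reference; the sketch below outlines the same idea with placeholder file names and extrinsic values, in case the attachment is unavailable.

import numpy as np
import open3d as o3d
import cv2

pcd = o3d.io.read_point_cloud("scan.pcd")
points = np.asarray(pcd.points)

rvec = np.array([1.2, -1.2, 1.2])    # Rodrigues rotation vector (placeholder)
tvec = np.array([0.1, -0.05, -0.2])  # meters (placeholder)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Keep only points in front of the camera before projecting.
R, _ = cv2.Rodrigues(rvec)
in_front = (points @ R.T + tvec)[:, 2] > 0
pixels, _ = cv2.projectPoints(points[in_front], rvec, tvec, K, dist)

image = cv2.imread("frame.png")
h, w = image.shape[:2]
for u, v in pixels.reshape(-1, 2).astype(int):
    if 0 <= u < w and 0 <= v < h:
        cv2.circle(image, (int(u), int(v)), 1, (0, 0, 255), -1)
cv2.imwrite("projection.png", image)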