
Multi-Target LiDAR-Camera Calibration

Calibration List:

  • This page lists the existing calibration datasets. Users can launch an existing dataset, delete it, or manage user access to it. Users can click on the New button to create a new calibration.

Calibration Launch:

  • Select the calibration type. In this case, users can select LiDAR-Camera Calibration.
  • Users can click on ‘Get Started’ to go to the launch page.

Calibration approach selection

Select Target based calibration

General and Sensor Configuration:

  1. Target count = Multi
  2. Provide the sensor name

Camera Intrinsic Parameters:

Intrinsic parameters for the camera are to be added here. Users have three options (a sketch of the parameters involved follows this list).
  • Users can run the intrinsic calibration tool, save the results to their profile, and then load them here. See the intrinsic calibration tool usage guide.
  • Users can load a JSON file containing the intrinsic parameters.
  • Users can enter the intrinsic parameters manually if they already have them.
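
For reference, intrinsic parameters consist of a 3x3 camera matrix and distortion coefficients. The sketch below builds these with illustrative values; the field names and JSON layout the tool expects may differ.

```python
import numpy as np

# Illustrative values only; the tool's JSON field names and layout may differ.
fx, fy = 1180.0, 1182.5   # focal lengths in pixels
cx, cy = 960.0, 540.0     # principal point in pixels
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Standard 3x3 pinhole camera matrix used in the projection sketches below.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```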

Target Configuration:

  • Dictionary: The ArUco dictionary of the ChArUco board.
  • Horizontal Squares: Total number of squares in the horizontal direction.
  • Vertical Squares: Total number of squares in the vertical direction.
  • Square size (m): The side length of a square in meters.
  • Marker size (m): The side length of an ArUco marker in meters. It is usually 0.8 times the square size.
  • Bottom padding: The distance from the bottom-most side of the board to the bottom-most square, in meters.
  • Top padding: The distance from the topmost side of the board to the topmost square, in meters.
  • Right padding: The distance from the rightmost side of the board to the rightmost square, in meters.
  • Left padding: The distance from the leftmost side of the board to the leftmost square, in meters.
  • On ground: Enable this if the target is placed on the ground and the point cloud contains ground points around the target.
  • Tilted: Enable this if the target is tilted.
Click on Continue to Calibrate
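
The configuration above mirrors how a ChArUco board is defined in OpenCV. The sketch below (assuming OpenCV >= 4.7 and example values, not tool defaults) generates a board image from the same kind of parameters.

```python
import cv2

# Example values; use the dictionary and sizes from your own Target Configuration.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard(
    (12, 9),      # horizontal x vertical squares
    0.12,         # square size in meters
    0.096,        # marker size in meters, ~0.8 x square size
    dictionary,
)
img = board.generateImage((1200, 900))  # output image size in pixels
cv2.imwrite("charuco_board.png", img)
```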

Upload Images and LiDAR Pair:

  • Users need to upload image and LiDAR pair(s). Multi-target calibration works even with a single pair.
  • Users can upload LiDAR files on the left and images on the right.
  • The Continue button remains disabled until the number of point cloud files equals the number of image files.
Supported Formats: pcd, csv, and bin
Click on Continue
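
If you want to inspect files locally before uploading, a minimal loader for the supported formats might look like the sketch below. The .bin branch assumes KITTI-style float32 x, y, z, intensity records and the CSV branch assumes a header row with x, y, z in the first three columns; adjust to your data.

```python
import numpy as np
import open3d as o3d  # only needed for .pcd files

def load_points(path):
    """Load LiDAR points from pcd, csv, or bin files (sketch, not tool code)."""
    if path.endswith(".pcd"):
        return np.asarray(o3d.io.read_point_cloud(path).points)
    if path.endswith(".csv"):
        # Assumes a header row and x, y, z in the first three columns.
        return np.loadtxt(path, delimiter=",", usecols=(0, 1, 2), skiprows=1)
    if path.endswith(".bin"):
        # Assumes KITTI-style float32 x, y, z, intensity records.
        return np.fromfile(path, dtype=np.float32).reshape(-1, 4)[:, :3]
    raise ValueError(f"unsupported format: {path}")
```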

Detect corners in images:

Our algorithms automatically detect corners in the ChArUco boards.
Click on Continue
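
The detection performed here is comparable to OpenCV's ChArUco corner detection. A minimal sketch (OpenCV >= 4.7, using the `board` defined earlier and a hypothetical image file name):

```python
import cv2

image = cv2.imread("frame_000.png")  # hypothetical file name
detector = cv2.aruco.CharucoDetector(board)
charuco_corners, charuco_ids, marker_corners, marker_ids = detector.detectBoard(image)
found = 0 if charuco_ids is None else len(charuco_ids)
print(f"detected {found} inner corners")
```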

LiDAR frame and Target selection for mapping

  1. Click on Change next to the LiDAR file to select the file for mapping.
  2. Click on Change next to the Target to select the target for mapping.

Map target in point cloud:

Mapping the target in the point cloud is needed for initial estimates of the extrinsic parameters. There are three ways to get the initial estimates.
  • Auto-detect target: Click on Auto-detect target.
  • Add estimated extrinsic parameters: Click on Add estimated extrinsic parameters, provide the estimated sensor angles and position, and click on Done (see the conversion sketch below).
  • Add marker (a manual process): Click on Add marker, mark four points as shown in the right panel, and click on Done.
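
For intuition, an estimate given as sensor angles and a position can be assembled into a 4x4 transform as sketched below. The Euler convention here (x-y-z, degrees) is an assumption; the tool's convention may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def estimate_to_matrix(roll_deg, pitch_deg, yaw_deg, px, py, pz):
    """Build a 4x4 transform from estimated angles (degrees) and position (meters)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler(
        "xyz", [roll_deg, pitch_deg, yaw_deg], degrees=True).as_matrix()
    T[:3, 3] = [px, py, pz]
    return T
```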

Verifying the accuracy of the estimated extrinsic parameters:

Once the estimated extrinsic parameters are in the tool, users can visualize the parameters by clicking on the visualize button. In the visualization, we have a few sensor fusion techniques through which the accuracy of the extrinsic parameters can be visualized.
If the estimated parameters are way off, the users need to clear the markers and re-mark the corresponding four markers to get reasonable initial estimates.
Estimated extrinsic parameters play a crucial role in generating accurate extrinsic parameters.

Run calibration:

Users need to click on Run calibration to optimize the estimated extrinsic parameters further. All the uploaded pairs are used in the optimization process.
Click on Calibrate

Error stats

Users can use these error values to estimate the accuracy of the calibration results alongside visual confirmation. We extract the target from the raw point cloud of the LiDAR frame and compare it with the target corners in the 2D image. The extracted target can be viewed from the visualizer. The three extrinsic error metrics, along with their description, are as follows.
  • Translation: Mean difference between the centroid of the target points in the LiDAR and the centroid of the corners projected into 3D from the image. Values are shown in meters. This calculation happens in the LiDAR coordinate system.
  • Rotation: Mean difference between the normals of the targets in the point cloud and of the corners projected into 3D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system.
  • Reprojection: Mean difference between the centroid of the image corners and the centroid of the LiDAR target points projected onto the image. Values are shown in pixels. This calculation happens in the image coordinate system.
  • Individual error stats for each image/LiDAR pair can be seen. The average shows the mean of the errors of all the eligible image/LiDAR pairs.
  • The closer the error stats are to zero, the better the extrinsics.
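
The exact formulas the tool uses are not spelled out here, but the kind of comparison described above can be illustrated as follows (a sketch, not the tool's implementation):

```python
import numpy as np

def translation_error(lidar_target_pts, image_corners_3d):
    """Distance between centroids in the LiDAR frame, in meters (illustrative)."""
    return np.linalg.norm(lidar_target_pts.mean(axis=0) - image_corners_3d.mean(axis=0))

def rotation_error(lidar_normal, image_normal):
    """Angle between the two target normals, in degrees (illustrative)."""
    c = np.dot(lidar_normal, image_normal) / (
        np.linalg.norm(lidar_normal) * np.linalg.norm(image_normal))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```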

Analyzing the extrinsic parameters in Visualization Mode:

Sensor fusion techniques:

Users can use the following techniques to visualize the extrinsic parameters.
  • When the check box of the frustum is enabled, users can see the field of view of the image in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.
  • When the check box of the LiDAR points in image is enabled, users can see the LiDAR points projected in the camera image using the extrinsic params.
  • When the check box of the color points from camera is enabled, users can see the image pixels projected in the LiDAR frame using the extrinsic params.
  • When the check box of the Show target in LiDAR is enabled, users can see the target highlighted in the LiDAR frame using the extrinsic params.
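
The "LiDAR points in image" overlay boils down to transforming points with the extrinsics and projecting them through the camera model. A sketch, using the `K` and `dist` defined earlier and a hypothetical 4x4 `T_cam_from_lidar`:

```python
import cv2
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K, dist):
    """Project Nx3 LiDAR points into the image plane (sketch, not tool code)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep only points in front of the camera
    pixels, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
    return pixels.reshape(-1, 2)
```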

Image ‘Target Identification’:

  • This can be used to verify whether the target is identified properly in the image.
  • Users can change the configuration of the target or can also retry detecting corners to fix the target identification.
  • This step displays the undistorted images, so users can verify whether the undistortion is correct (a quick way to reproduce it locally is sketched below).
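
A quick way to reproduce the undistortion locally for comparison, using the `K` and `dist` defined earlier and a hypothetical file name:

```python
import cv2

raw = cv2.imread("frame_000.png")          # hypothetical raw image
undistorted = cv2.undistort(raw, K, dist)  # same kind of operation this view applies
cv2.imwrite("frame_000_undistorted.png", undistorted)
```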

Image ‘Raw File’:

  • The raw image files are displayed.

LiDAR ‘Raw File’:

  • The raw LiDAR files are displayed.

LiDAR ‘Extracted target’:

  • This shows the target extracted from the original LiDAR file. The extracted target is used for the error stats calculation, where it is compared with the projected target.

Fused Point Cloud: When the ‘Fused point cloud’ option is enabled, users can select one of the following fused files.

  • Input Cloud: This contains the fusion of all input clouds, filtered to the target area. If the target is not in the LiDAR file, the user has to fix the extrinsic parameters by going back to the mapping step or by manually updating the extrinsic parameters.
  • Generated target: This contains the fusion of all generated targets. The user must fix the target configuration or the inner corner detection if the target is inaccurate.
  • Input and generated target: This contains the fused output of the above two files. It helps to analyze the difference between the input and the generated output before optimization.
  • Target before vs. after optimization: This shows the difference between the targets generated using the extrinsic values before and after the optimization step.

Max correspondence:

This value is used as an input to the algorithm. Users can tweak it by analyzing the fused point cloud LiDAR files. If the difference between the input and the generated cloud is large, the user can try increasing the max correspondence value.
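
As an analogy, a max-correspondence distance plays the same role in standard ICP registration, as sketched below with Open3D and hypothetical file names; the tool's internal algorithm may differ.

```python
import open3d as o3d

source = o3d.io.read_point_cloud("generated_target.pcd")  # hypothetical files
target = o3d.io.read_point_cloud("input_cloud.pcd")
max_correspondence = 0.2  # meters; increase if the two clouds are far apart
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.fitness, result.inlier_rmse)
```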

Extrinsic Calibration Output:

  • roll, pitch, yaw, px, py, pz are the extrinsic parameters downloaded from the calibration tool.
  • px, py, and pz are in meters.
  • roll, pitch, and yaw are in degrees.
  • lidarPoint3D is the 3D coordinates of a point in the LiDAR coordinate system.
  • imagePoint3D is the 3D coordinates of a point in the camera coordinate system.
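
Applying the exported extrinsics maps lidarPoint3D into the camera frame. The sketch below assumes an x-y-z Euler convention and a LiDAR-to-camera direction for the transform; confirm both against the tool's export before relying on it.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lidar_to_camera(lidar_point_3d, roll, pitch, yaw, px, py, pz):
    """imagePoint3D = R * lidarPoint3D + t (angles in degrees, position in meters)."""
    R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
    t = np.array([px, py, pz])
    return R @ np.asarray(lidar_point_3d) + t
```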

Deep optimization:

  • Deep optimization is a newly added feature. It can further improve the calibration by using the edge lines of the target in the optimization process.
  • In the visualization mode, users can use the LiDAR drop-down and select the Edge points target to visualize the extracted edges of the target from the raw LiDAR.
  • Users can also use the 2D Line Reprojection Error to verify the individual error value of each pair. This shows the combined reprojection error of all four lines in the 2D image.
  • Check the Deep optimization option in the Improve calibration accuracy mode and click on Improve calibration accuracy to run deep optimization. The targets should be tilted for deep optimization; users must also check the Tilted option for the deep optimization button to appear in the Improve calibration accuracy mode.

Camera sensor coordinates:

We currently show three different types of camera sensor coordinate systems. On selecting the camera coordinate system, the extrinsic parameters change accordingly. The export option exports the extrinsic parameters based on the selected camera coordinate system.
  • Optical coordinate system: It's the default coordinate system that we follow.
  • ROS REP 103: It is the coordinate system followed by ROS. On changing to this, you can see the change in the visualization and the extrinsic parameters.
  • NED: This follows the north-east-down coordinate system.
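
For reference, the optical and ROS REP 103 conventions differ only by a fixed axis permutation. The sketch below shows the commonly used rotation between them; it illustrates the conventions rather than the tool's exact export math.

```python
import numpy as np

# Optical:     x right, y down, z forward
# ROS REP 103: x forward, y left, z up
R_OPTICAL_FROM_ROS = np.array([[0, -1,  0],
                               [0,  0, -1],
                               [1,  0,  0]], dtype=float)

point_ros = np.array([2.0, 0.5, 1.0])         # forward, left, up
point_optical = R_OPTICAL_FROM_ROS @ point_ros
print(point_optical)                           # right, down, forward
```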