Lidar-Camera Calibration (Old)

Overview: Deepen Calibrate is a software tool that makes the critical task of sensor data calibration simple and quick.

Calibration List:

  • This page lists the calibration datasets. Users can launch an existing dataset, delete it, or manage user access to these datasets.

Calibration Launch:

  • Users can click on ‘Get Started’ to go to the launch page.

  • Users can calibrate multiple cameras to a LiDAR within the same dataset, but calibration must be performed individually for each camera/LiDAR combination.

Start calibration:

Camera Intrinsic Parameters:

  1. Intrinsic parameters for the camera are added here. Users have three options.

  2. Users can run the intrinsic calibration tool, save the results to a profile, and then load them here.

  3. Alternatively, users can load a JSON file containing the parameters (a sketch of typical contents follows this list).
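
As an illustration, the intrinsic parameters amount to a camera matrix plus distortion coefficients. The snippet below is a minimal sketch assuming hypothetical JSON field names (fx, fy, cx, cy, dist); the tool's actual export schema may differ.

```python
import numpy as np

# Hypothetical intrinsics as they might appear in an uploaded JSON file;
# the field names are illustrative, not the tool's actual schema.
intrinsics = {
    "fx": 1180.0, "fy": 1180.0,            # focal lengths, pixels
    "cx": 960.0,  "cy": 540.0,             # principal point, pixels
    "dist": [-0.30, 0.11, 0.0, 0.0, 0.0],  # k1, k2, p1, p2, k3
}

# 3x3 camera matrix in the standard pinhole form
K = np.array([
    [intrinsics["fx"], 0.0,              intrinsics["cx"]],
    [0.0,              intrinsics["fy"], intrinsics["cy"]],
    [0.0,              0.0,              1.0],
])
```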

Choice for manual or auto extrinsic parameters:

  • If users already know the extrinsic parameters, they can enter the values directly; otherwise, they can choose to calculate them using the tool.

Manual extrinsic parameters:

  • Users can update the extrinsic values manually.

  • They can choose to verify the values by going to ‘Visualization Mode’.

  • They can also further fine-tune these values.

Add Images and LiDAR Pair:

  • Users need to upload image/LiDAR pairs for extrinsic calibration.

  • Each pair must have the checkerboard in view. Make sure the checkerboard is in a different position in each pair.

  • Users can click on an image/LiDAR file in the left side panel to open the image viewer or the LiDAR viewer.

  • Users can also add or delete image/LiDAR files from the left side panel.

Checkerboard Configuration:

  • Users need to fill up the config of the checkerboard (Please refer to the Checkerboard Configuration Description section for more details).

Map Pair:

  • Users can click on ‘Start Mapping’ to go to the mapping mode. Here the user will have an image viewer on the left side and a LiDAR viewer on the right side.

  • Users can add points in the image and must map each point to the corresponding area in the LiDAR viewer.

  • Users have the option to paint an area in the LiDAR for each selected point in the image.

  • The centroid of the painted area is taken into consideration.

  • The calibration results depend on this step: the smaller the selected area, the better the result.

  • To help with this, users can zoom in, zoom out, pan, and rotate.

  • Users can also erase a particular painted area and redo it to improve the correspondence.

  • In most cases, four points are preselected in the image (all four are checkerboard border points). Users just have to select each preselected point and map it to the corresponding LiDAR points.

  • Mapping can be done on any pair.

  • Users can navigate from one pair to another using the buttons ‘Map previous file’ and ‘Map next file’.

  • Once mapping is done, the user can move out of mapping mode by clicking on ‘Finish Mapping’.

  • Mapping a single pair is sufficient; there is no requirement to map all the image/LiDAR pairs.

  • Users can click on ‘Run extrinsic calibration’ to get the extrinsic parameters (a conceptual sketch of this step follows this list).

  • The ‘Run extrinsic calibration’ button becomes visible when the image/LiDAR pair for which mapping was done is selected.
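
Conceptually, the mapping step yields 2D-3D correspondences: points picked in the image paired with the centroids of the painted LiDAR areas. The tool's internal solver is not documented here, but solving a PnP problem over such correspondences is one standard way to obtain an initial extrinsic estimate. The sketch below uses OpenCV with hypothetical values.

```python
import cv2
import numpy as np

# Hypothetical correspondences: checkerboard border points picked in the
# image (pixels) and the centroids of the matching painted LiDAR areas (meters).
image_points = np.array([[412., 230.], [838., 224.], [420., 556.], [845., 560.]])
lidar_points = np.array([[5.0, 0.62, 0.35], [5.0, -0.18, 0.36],
                         [5.0, 0.60, -0.28], [5.0, -0.20, -0.30]])

K = np.array([[1180., 0., 960.], [0., 1180., 540.], [0., 0., 1.]])
dist = np.zeros(5)  # assume points were picked on an undistorted image

# solvePnP returns the rotation (Rodrigues vector) and translation that take
# LiDAR-frame points into the camera frame: an initial extrinsic estimate.
ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("R:\n", R, "\nt:", tvec.ravel())
```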

Visualization Mode:

  • Users can toggle ‘Enable Visualization Mode’ to go to visualization mode.

  • In this mode, users can verify the extrinsic parameters by checking either the camera frustum or the LiDAR points projected onto the image.

  • Users can project the checkerboard generated from the image onto the LiDAR viewer.

  • Also, users can add a bounding box and look at its projection in the image.

  • Users can manually modify the extrinsic parameters to improve them while simultaneously looking at the frustum and the LiDAR points (the projection itself is sketched after this list).
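
Under the hood, overlaying LiDAR points on the image amounts to transforming each point into the camera frame and applying the pinhole model. The following is a minimal sketch of that projection, assuming extrinsics R, t that map LiDAR coordinates into camera coordinates; it is not the tool's actual implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points to pixel coordinates (a sketch)."""
    cam = points_lidar @ R.T + t       # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]           # keep points in front of the camera
    pix = cam @ K.T                    # apply the pinhole camera matrix
    return pix[:, :2] / pix[:, 2:3]    # perspective divide -> (u, v)
```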

Detect corners in all images:

  • Once users confirm the extrinsic parameters, they can fine-tune and improve them.

  • Before doing so, users must make sure the extrinsic parameters are reasonably accurate using the options provided in visualization mode.

  • For this step, users have to identify the checkerboard corners in all images.

  • Auto-detect corners works in most cases.

  • If auto-detect fails, users have to fall back to manual corner detection (please refer to the Manual Corner Detection section). A detector sketch follows this list.
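
As an illustration of what corner auto-detection involves, an OpenCV-style detector finds the inner-corner grid and refines it to sub-pixel accuracy. This is a sketch with a hypothetical file name and pattern size, not necessarily the detector the tool uses.

```python
import cv2

img = cv2.imread("pair_01.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
pattern = (9, 6)  # (horizontal, vertical) inner-corner counts from the config

found, corners = cv2.findChessboardCorners(img, pattern)
if found:
    # refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
else:
    print("Auto-detect failed; fall back to manual corner selection.")
```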

Improve extrinsic calibration:

  • Finally, users can click on ‘Improve extrinsic calibration’. When run, the algorithm tries to improve the extrinsic parameters.

Analyzing the improvement of extrinsic parameters:

  • Users can verify the extrinsic parameters in visualization mode as mentioned earlier.

  • After improving the extrinsic parameters, users also have the option to check and verify the algorithm's behaviour. (Please refer to Analyzing the improved results in Visualization Mode for more details.)

Error stats:

Users can use these error values, alongside visual confirmation, to estimate the accuracy of the calibration results. We extract the checkerboard from the raw point cloud of the LiDAR frame and compare it with the checkerboard corners in the 2-D image. The extracted checkerboard can be viewed in the visualizer. The three extrinsic error metrics, along with their descriptions, are as follows.

  • Translation Error: Mean difference between the centroid of the checkerboard points in the LiDAR and the centroid of the corners projected into 3-D from the image. Values are shown in meters. This calculation happens in the LiDAR coordinate system.

  • Rotation Error: Mean angular difference between the normal of the checkerboard in the point cloud and the normal of the corners projected into 3-D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system.

  • Reprojection Error: Mean difference between the centroid of the image corners and the centroid of the LiDAR checkerboard points projected onto the image. Values are shown in meters. This calculation happens in the image coordinate system.

  • Individual error stats can be viewed for each image/LiDAR pair. Average shows the mean of the errors across all eligible image/LiDAR pairs.

  • The closer the errors are to zero, the better (a sketch of these metrics follows this list).
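
The sketch below shows how the three metrics could be computed from the extracted LiDAR checkerboard and the corners projected from the image, following the descriptions above; it is an interpretation, not the tool's exact implementation.

```python
import numpy as np

def translation_error(lidar_board_pts, projected_corners_3d):
    # Distance between the two centroids, in meters (LiDAR frame).
    return np.linalg.norm(lidar_board_pts.mean(axis=0)
                          - projected_corners_3d.mean(axis=0))

def rotation_error(lidar_normal, projected_normal):
    # Angle between the two checkerboard plane normals, in degrees (LiDAR frame).
    c = np.clip(np.dot(lidar_normal, projected_normal), -1.0, 1.0)
    return np.degrees(np.arccos(abs(c)))

def reprojection_error(image_corners, projected_lidar_pts):
    # Centroid difference in the image coordinate system.
    return np.linalg.norm(image_corners.mean(axis=0)
                          - projected_lidar_pts.mean(axis=0))
```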

Download calibration parameters:

  • Once the entire calibration is done, users can download all intrinsic and extrinsic parameters.

Save calibration dataset:

  • A Save option is available in the top left corner. Users can click it to save the calibration dataset at any time during the calibration process.

Checkerboard Configuration Description:

  1. Horizontal Corner Count: The number of corners in the top row, counted from first to last (left to right).

  2. Vertical Corner Count: The number of corners in the left column, counted from first to last (top to bottom).

  3. Square Size: The side length of each checkerboard square, in meters (items 1-3 are illustrated in the sketch after this list).

  4. Distance from Left Corner: The distance from the leftmost edge of the board to the leftmost corner point, in meters.

  5. Distance from Right Corner: The distance from the rightmost edge of the board to the rightmost corner point, in meters.

  6. Distance from Top Corner: The distance from the topmost edge of the board to the topmost corner point, in meters.

  7. Distance from Bottom Corner: The distance from the bottom-most edge of the board to the bottom-most corner point, in meters.

  8. Is checkerboard on the ground: Enable this if the checkerboard is placed on the ground.
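
Given items 1-3 of this configuration, the ideal inner-corner grid of the board can be generated on a plane. The generator below is a hypothetical sketch; the tool's own 'generated checkerboard' may also use the border distances (items 4-7) to place the board outline around the grid.

```python
import numpy as np

def generate_board_corners(h_count, v_count, square_size):
    """Ideal inner-corner grid on the board plane (z = 0), in meters."""
    xs, ys = np.meshgrid(np.arange(h_count), np.arange(v_count))
    grid = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).reshape(-1, 3)
    return grid * square_size

# e.g. a 9x6 inner-corner board with 8 cm squares
corners = generate_board_corners(h_count=9, v_count=6, square_size=0.08)
```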

Analyzing the improved results in Visualization Mode:

  1. Image ‘Checkerboard Identification’:

  • This can be used to verify whether the checkerboard area is properly identified.

  • Users can change the checkerboard configuration or retry corner detection to fix the checkerboard identification.

  • This step displays the undistorted images, so users can verify whether the undistortion is correct.

  2. Image ‘Raw File’:

  • The raw image files are displayed.

  3. LiDAR ‘Raw File’:

  • The raw LiDAR files are displayed.

  4. LiDAR ‘Extracted checkerboard’:

  • This shows the checkerboard extracted from the original LiDAR file, which is used for the error stats calculation. We compare the extracted checkerboard with the projected checkerboard.

  5. Fused Point Cloud: When users enable ‘Fused point cloud’, they can select one of the following fused files.

  • Input Cloud: This contains the fusion of all input clouds, filtered to the checkerboard area. If the checkerboard is not in the LiDAR file, users have to fix the extrinsic parameters by going back to the mapping step or manually updating the extrinsic parameters.

  • Generated Checkerboard: This contains the fusion of all generated checkerboards. If the checkerboard is not accurate, users have to fix the checkerboard configuration or the inner corner detection.

  • Input and Generated Checkerboard: This contains the fused output of the above two files. It helps analyze the difference between the input and the generated output before optimization.

  • Checkerboard before vs. after optimization: This shows the difference between the generated checkerboard using the extrinsic values before and after the optimization step.

  • Input and Generated Checkerboard after optimization: This contains the fused LiDAR data of the input cloud and the generated checkerboard after optimization. If they overlap, users can be confident that the extrinsic values are accurate; otherwise, they can retry improving the calibration results.

Manual Controls to move the generated checkerboard on the actual checkerboard:

  • Rotation and axis-movement controls are provided for the projected checkerboard in the visualization stage. Users can drag the projected checkerboard to align it with the actual checkerboard in the LiDAR viewer; the extrinsic parameters are recalculated according to the change. This is an additional way to get initial estimates of the extrinsic parameters.

Max correspondence:

This value is used as an input to the algorithm. Users can tune it by analyzing the fused point cloud LiDAR files: if the difference between the input and the generated cloud is large, try increasing the max correspondence value and retry improving the calibration results.
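
The tool's optimizer is not public, but an ICP-style registration illustrates how a max-correspondence threshold behaves: point pairs farther apart than the threshold are ignored during matching, so a larger value lets the alignment recover from a bigger initial offset. The sketch below uses Open3D with hypothetical file names.

```python
import numpy as np
import open3d as o3d

# Hypothetical clouds standing in for the fused files discussed above.
source = o3d.io.read_point_cloud("generated_checkerboard.pcd")
target = o3d.io.read_point_cloud("input_cloud.pcd")

# Pairs farther apart than max_correspondence_distance (meters) are ignored;
# increase it when the clouds start far apart, as suggested above.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.eye(4))
print(result.transformation)
```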

Toolbar Options:

  • Users have an option to disable the tooltips.

  • Users have an option to reset the view of the image/LiDAR to default.

  • Users have an option to clear the points/corners added in the image/LiDAR.

Manual Corner Detection:

If the checkerboard corners are not auto-detected, users can select the four boundary points in order (top-left, top-right, bottom-left, bottom-right) and then click on ‘Retry corner detection’ to get the remaining inner corners of the checkerboard.

Extrinsic Calibration Output:

  • roll, pitch, and yaw are in degrees; px, py, and pz are in meters.

  • roll, pitch, yaw, px, py, and pz are the extrinsic parameters downloaded from the calibration tool.

  • lidarPoint3D is the 3-D coordinate of a point in the LiDAR coordinate system.

  • imagePoint3D is the 3-D coordinate of a point in the camera coordinate system (a sketch of the transform follows this list).
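
A minimal sketch of applying these parameters, assuming the rotation is composed as intrinsic yaw-pitch-roll ('ZYX'); the document does not state the convention, so verify against the tool's export before relying on this.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def lidar_to_camera(lidar_point_3d, roll, pitch, yaw, px, py, pz):
    """Map lidarPoint3D into the camera frame (rotation order is an assumption)."""
    R = Rotation.from_euler("ZYX", [yaw, pitch, roll], degrees=True).as_matrix()
    t = np.array([px, py, pz])
    return R @ np.asarray(lidar_point_3d) + t

# hypothetical parameter values, in degrees and meters
image_point_3d = lidar_to_camera([5.0, 0.4, 0.1],
                                 roll=-90.2, pitch=0.3, yaw=-89.8,
                                 px=0.05, py=-0.12, pz=-0.07)
```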

Deep optimisation:

  • Deep optimisation is a new feature that further improves the calibration by using the edge lines of the checkerboard in the optimisation process.

  • In the visualization mode, users can use the LiDAR drop-down and select ‘Edge points checkerboard’ to visualize the edges of the checkerboard extracted from the raw LiDAR.

  • Users can also use the 2D Line Reprojection Error to verify the individual error value of each pair. This shows the combined reprojection error of all four edge lines in the 2-D scene.

  • The checkerboard should be tilted to enable deep optimisation. Users also have to check the ‘Is checkerboard tilted’ option to see the deep optimise button in the improve calibration accuracy mode. Check the Deep optimisation option in the improve calibration accuracy mode and then click on ‘Improve calibration accuracy’ to run the deep optimisation.

Camera sensor coordinates:

We currently support three camera sensor coordinate systems. On selecting a coordinate system, the extrinsic parameters change accordingly. The export option exports the extrinsic parameters based on the selected camera coordinate system.

  • Optical coordinate system: It's the default coordinate system we follow.

  • ROS REP 103: It is the coordinate system followed by ROS. On switching to it, you can see the change in the visualization and the extrinsic parameters (the optical-to-REP 103 relation is sketched after this list).

  • NED: This follows the north-east-down coordinate system.
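
For reference, the relation between the optical frame (x right, y down, z forward) and the REP 103 body frame (x forward, y left, z up) is a fixed rotation; the sketch below shows it for a single point. This mirrors the re-expression the tool presumably applies when the coordinate system is switched.

```python
import numpy as np

# Rows map optical-frame axes to REP 103 body-frame axes:
# body x = optical z (forward), body y = -optical x (left), body z = -optical y (up).
OPTICAL_TO_ROS = np.array([
    [ 0.,  0., 1.],
    [-1.,  0., 0.],
    [ 0., -1., 0.],
])

point_optical = np.array([0.2, -0.1, 5.0])  # hypothetical point, optical frame
point_ros = OPTICAL_TO_ROS @ point_optical  # same point in the REP 103 frame
```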

FAQ:

How do I get the controls to rotate and move the projected checkerboard?

Enable the ‘checkerboard in LiDAR’ checkbox; the checkerboard is projected in red. Select the ‘Bounding Box Select’ option from the tool options of the LiDAR viewer. On hovering over the checkerboard, its color changes to blue; select it to see the controls. All three rotations and translations are enabled.
