Multi Target Lidar-Camera Calibration
- This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.
- Click on New Calibration to create a new calibration dataset.
Select LiDAR-Camera Calibration to create a new dataset.
Upon selecting LiDAR-Camera Calibration, the user is welcomed to the instructions page. Click on Get started to start the calibration setup.
Users can choose either the target-based or the targetless calibration. The target-based calibration uses the checkerboard/charucoboard as the calibration target, and the targetless calibration uses the scene captured in both LiDAR and the camera sensor data.
Select Target approach
Intrinsic parameters for the camera are added here. Users have the following options:
- Load the intrinsic parameters from a JSON file.
- Enter the intrinsic parameters manually if they are already available.
Camera input section in Configuration page
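As a rough sketch, an intrinsics file might look like the following. The exact JSON schema the tool expects is an assumption here; the field names (`fx`, `fy`, `cx`, `cy`, `dist_coeffs`) and values are illustrative placeholders.

```python
import json

# Hypothetical intrinsics file -- the exact JSON schema expected by the tool
# may differ. fx/fy are focal lengths and cx/cy the principal point, all in
# pixels; dist_coeffs are lens distortion coefficients (k1, k2, p1, p2, k3).
intrinsics_json = """
{
    "fx": 1452.3,
    "fy": 1451.8,
    "cx": 964.5,
    "cy": 540.2,
    "dist_coeffs": [-0.171, 0.026, 0.001, -0.0003, 0.0]
}
"""

params = json.loads(intrinsics_json)

# Assemble the 3x3 camera matrix used throughout the calibration.
camera_matrix = [
    [params["fx"], 0.0,          params["cx"]],
    [0.0,          params["fy"], params["cy"]],
    [0.0,          0.0,          1.0],
]
print(camera_matrix)
```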
Select Target count = Multi
- Rows: Total number of squares in the horizontal direction.
- Columns: Total number of squares in the vertical direction.
- Square size: The side length of each square, in meters.
- Marker size: The side length of each ArUco marker, in meters. This is usually 0.8 times the square size.
- Left padding: The distance from the board's left edge to the left of the first square in the row.
- Right padding: The distance from the board's right edge to the right of the last square in the row.
- Top padding: The distance from the board's top edge to the top of the first square in the column.
- Bottom padding: The distance from the board's bottom edge to the bottom of the last square in the column.
- On ground: Enable this if the charucoboard is placed on the ground and the point cloud includes ground points in the scene around the board.
- Tilted: Enable this if the charucoboard is tilted.
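To sanity-check a board configuration before uploading, the physical dimensions implied by the fields above can be computed. This is only a sketch: the function name is hypothetical, and it follows the field definitions above (Rows counts squares along the horizontal direction, Columns along the vertical direction).

```python
# Compute the physical board size implied by the Configuration page fields.
# The parameter names mirror the documented fields; the layout mapping is
# taken from the field definitions above.

def board_dimensions(rows, columns, square_size,
                     left_pad, right_pad, top_pad, bottom_pad):
    """Return (width, height) of the board in meters."""
    width = left_pad + rows * square_size + right_pad        # horizontal
    height = top_pad + columns * square_size + bottom_pad    # vertical
    return width, height

# Example: 8 squares across, 6 down, 0.1 m squares, 0.025 m padding all round.
square_size = 0.1
marker_size = 0.8 * square_size          # typical ratio from the docs
w, h = board_dimensions(rows=8, columns=6, square_size=square_size,
                        left_pad=0.025, right_pad=0.025,
                        top_pad=0.025, bottom_pad=0.025)
print(round(w, 3), round(h, 3), round(marker_size, 3))   # 0.85 0.65 0.08
```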
Add point cloud files from the LiDAR and images from the camera sensor. After adding, pair the point cloud files with the matching image files before continuing.
Our algorithms automatically detect corners in the charucoboards.
Click on Continue
1. Click on Change next to the LiDAR file to select the file for mapping.
2. Click on Change next to the Target to select the target for mapping.
The extrinsic parameter space is huge, so the optimization needs an estimated starting point. Mapping the target in the point cloud provides these initial estimates of the extrinsic parameters. There are three ways to get the initial estimates.
Users can map the target corner points in the point cloud and get the initial estimates of the extrinsic parameters. Only one point cloud mapping is sufficient to get the initial estimates.
Click on Add marker
Mark four points as shown in the right panel and click on Done
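Conceptually, once the four target corners are marked in the point cloud and the same corners are known in the camera frame, a rigid transform between the two frames can be estimated. The sketch below uses the standard Kabsch/SVD method; this is an illustration of the idea, not necessarily the tool's exact algorithm, and the corner values are hypothetical.

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t with dst ~= R @ src + t (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Four board corners in the camera frame (hypothetical values) and the same
# corners as marked in the LiDAR point cloud.
corners_cam = np.array([[0, 0, 2], [1, 0, 2], [1, 1, 2], [0, 1, 2]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
t_true = np.array([0.5, -0.2, 0.1])
corners_lidar = corners_cam @ R_true.T + t_true

R, t = rigid_transform(corners_cam, corners_lidar)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```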
Our algorithms can automatically detect targets in the point cloud if the LiDAR channel data is provided on the configuration page. Please note that auto-detection might not work correctly if the scene contains many flat surfaces, such as walls or ceilings.
Click on Auto-detect target
Add estimated extrinsic parameters
Users can manually enter estimated extrinsic parameters.
Click on Add estimated extrinsic parameters
Provide estimated Sensor angles and Position and click on Done
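Converting the entered sensor angles and position into a transform can be sketched as follows. The Z-Y-X (yaw-pitch-roll) rotation order used here is a common convention but an assumption about the tool's internals.

```python
import numpy as np

def extrinsic_matrix(roll, pitch, yaw, px, py, pz):
    """4x4 transform from roll/pitch/yaw (degrees) and a position (meters).
    Assumes the common Z-Y-X (yaw-pitch-roll) rotation order; the tool's
    actual convention may differ."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [px, py, pz]
    return T

T = extrinsic_matrix(roll=0, pitch=0, yaw=90, px=1.0, py=2.0, pz=0.5)
# A 90 degree yaw maps the x-axis onto the y-axis.
print(np.allclose(T[:3, :3] @ [1, 0, 0], [0, 1, 0]))   # True
```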
Once the estimated extrinsic parameters are in the tool, users can inspect them by clicking the Visualize button. The visualization offers a few sensor fusion techniques through which the accuracy of the extrinsic parameters can be judged. For more details, visit Sensor fusion techniques.
Good estimated extrinsic parameters are crucial for generating accurate final extrinsic parameters.
If the estimated parameters are far off, users must clear the markers and redo the markings to get good initial estimates.
Users need to click on Run calibration to optimize the estimated extrinsic parameters further. All the uploaded pairs are used in the optimization process.
Click on Calibrate
Deep Optimization: Users can select deep optimization to optimize the extrinsic further for datasets with the Tilted option enabled on the configuration page.
Max correspondence: This value is an input to the optimization algorithm. Users can tune it by analyzing the fused point cloud files: if the difference between the input cloud and the generated cloud is large, increasing the max correspondence value and retrying can improve the calibration results.
Users can use these error values, alongside visual confirmation, to estimate the accuracy of the calibration results. The closer the error stats are to zero, the better the extrinsic parameters.
- Translation Error: Mean of difference between the centroid of points of checkerboard in the LiDAR and the projected corners in 3D from an image. Values are shown in meters. This calculation happens in the LiDAR coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.
- Plane Translation Error: Mean of the Euclidean distance between the centroid of projected corners in 3D from an image and plane of the checkerboard/charucoboard in the LiDAR. Values are shown in meters. Note: If the board is only partially covered by the LiDAR or the LiDAR scan lines are non-uniformly distributed, translation and reprojection errors are inaccurate, but this plane translation error is accurate even in these scenarios.
- Rotation Error: Mean difference between the normals of the target in the point cloud and of the corners projected into 3D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system.
- Reprojection Error: Mean difference between the centroid of the image corners and the centroid of the LiDAR checkerboard points projected onto the image. Values are shown in pixels. This calculation happens in the image coordinate system. Note: If the board is only partially covered by the LiDAR, this value is inaccurate due to the error in the position of the centroid.
- Individual error stats for each image/LiDAR pair can be seen. The average shows the mean of the errors of all the eligible image/LiDAR pairs.
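The translation and rotation error definitions above can be illustrated with a short numpy sketch: translation error as the distance between centroids, rotation error as the angle between best-fit plane normals. This is a simplified illustration, not the tool's exact implementation.

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the best-fit plane through a set of 3D points."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is normal
    # to the plane of the points.
    return np.linalg.svd(centered)[2][-1]

def translation_error(board_lidar, board_from_image):
    """Distance between centroids, in meters (LiDAR coordinate system)."""
    return np.linalg.norm(board_lidar.mean(axis=0) - board_from_image.mean(axis=0))

def rotation_error(board_lidar, board_from_image):
    """Angle between plane normals, in degrees."""
    n1, n2 = plane_normal(board_lidar), plane_normal(board_from_image)
    cosang = np.clip(abs(n1 @ n2), -1.0, 1.0)   # abs: normals are sign-ambiguous
    return np.degrees(np.arccos(cosang))

# Synthetic example: a flat board vs. the same board shifted 5 cm sideways.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(100, 2))
board = np.column_stack([xy, np.zeros(100)])          # board in the z = 0 plane
shifted = board + [0.05, 0.0, 0.0]

print(round(translation_error(board, shifted), 3))    # 0.05
print(round(rotation_error(board, shifted), 3))       # 0.0
```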
Once the entire calibration is done, users can download all intrinsic and extrinsic parameters by clicking the Export button in the header.
Users can use the following techniques to visualize the extrinsic parameters.
- Frustum: Users can see the field of view of the image in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.
- LiDAR points in image: Users can see the LiDAR points projected onto the camera image using the extrinsic params.
- Color points from camera: Users can see the colored points from the camera in the LiDAR space using the extrinsic params.
- Show target in LiDAR: Users can see the checkerboard points projected into the LiDAR frame using the extrinsic params.
The target in the image is filled with points. If the target configuration provided by the user is correct, there will not be any overflow or underflow.
This shows the extracted target from the original LiDAR file. It is used for the error stats calculation: we compare the extracted target with the projected target.
Targets from all the point clouds are cropped and fused into a single point cloud.
- Input cloud: This contains the fusion of all input clouds, filtered to the target area. If the target is not in the LiDAR file, the user has to fix the extrinsic parameters by going back to the mapping step or by manually updating the extrinsic parameters.
- Generated target: This contains the fusion of all generated targets. If the target is inaccurate, the user has to fix the target configuration or the inner corner detection.
- Input and generated target: This contains the fused output of the Input cloud and the Generated target. It helps analyze the difference between the input and the generated output before optimization.
- Target before vs after optimization: This shows the difference between the generated target using the extrinsic values before and after the optimization step.
- roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.
- lidarPoint3D is the 3D coordinates of a point in the LiDAR coordinate system.
- imagePoint3D is the 3D coordinates of a point in the camera coordinate system.
We currently show three different types of camera coordinate systems. The extrinsic parameters change according to the selected Camera coordinate system. The export option exports the extrinsic parameters based on the selected camera coordinate system.
- Optical coordinate system: It's the default coordinate system that we follow.
- ROS REP 103: It is the coordinate system followed by ROS. On changing to this, you can see the change in the visualization and the extrinsic parameters.
- NED: This follows the north-east-down coordinate system.
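The fixed rotation between the optical frame (x right, y down, z forward) and the REP 103 body frame (x forward, y left, z up) is standard; how the tool composes it with the calibrated extrinsics is an assumption. A minimal sketch:

```python
import numpy as np

# Rotation taking a point expressed in the camera optical frame
# (x right, y down, z forward) to the ROS REP 103 body frame
# (x forward, y left, z up).
R_OPTICAL_TO_ROS = np.array([
    [0.0,  0.0, 1.0],   # ROS x (forward)  <- optical z
    [-1.0, 0.0, 0.0],   # ROS y (left)     <- optical -x
    [0.0, -1.0, 0.0],   # ROS z (up)       <- optical -y
])

# A point one meter straight ahead of the camera:
p_optical = np.array([0.0, 0.0, 1.0])
p_ros = R_OPTICAL_TO_ROS @ p_optical
print(p_ros)            # [1. 0. 0.] -- one meter forward in REP 103
```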
This is a sample Python script to project LiDAR points onto an image using the extrinsic parameters. It uses the open3d and opencv libraries.