Data Collection for Lidar-Camera Calibration (Single Target and Targetless)
Checkerboard with more than two inner corners both horizontally and vertically. You can use the attached PDF, which has seven inner corners horizontally and nine vertically. https://drive.google.com/file/d/1mTR8HTpvROE1Pv0rmXEBVLSxs_yMDnvf/view?usp=sharing
Charucoboard with more than two squares both horizontally and vertically. See the list of supported Charuco dictionaries.
Place the target roughly 3 m to 5 m from the camera. Even at the closest position, the target should be far enough away that all of the board's edges are visible to both the camera and the lidar, so it is highly recommended to capture the data inside a building rather than outside. No target position should be occluded in the camera or lidar view.
The same target should be used in all camera and lidar frames.
For example, please take images with a single board at various positions like the following.
The board and all sensors should remain static while collecting the data. To avoid time-synchronization problems, keep the board and the sensors stationary for at least 10 seconds while collecting each data pair.
For example, these are the steps to collect one set of calibration data:
Orient the camera toward the target. Start recording. Wait 10 seconds (do not move or rotate your car/robot/sensors). Stop recording. You should now have 10 seconds of image and lidar recordings. Extract one image from the camera and one frame of lidar data captured 5 seconds after the recording started (e.g., if you start recording at 3:00:00 and stop at 3:00:10, use the frame captured at 3:00:05) and save them.
Change the target's location and orientation. Start recording. Wait 10 seconds (do not move or rotate your car/robot). Stop recording. Again, you should have 10 seconds of image and lidar recordings. Extract one image from the camera and one frame of lidar data captured 5 seconds after the recording starts, and save them.
Repeat the process for at least 5 data pairs.
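The "take the frame 5 seconds in" step above amounts to picking the image and lidar frame closest to the midpoint of the recording. A minimal sketch (function and argument names are illustrative, not part of the tool):

```python
def pick_middle_pair(image_stamps, lidar_stamps):
    """image_stamps / lidar_stamps: sorted capture times in seconds.

    Returns the indices of the image and lidar frame nearest the midpoint
    of the recording, so both sensors observed the same static scene.
    """
    start = min(image_stamps[0], lidar_stamps[0])
    end = max(image_stamps[-1], lidar_stamps[-1])
    mid = (start + end) / 2.0  # e.g. 5 s into a 10 s recording
    img_idx = min(range(len(image_stamps)), key=lambda i: abs(image_stamps[i] - mid))
    lidar_idx = min(range(len(lidar_stamps)), key=lambda i: abs(lidar_stamps[i] - mid))
    return img_idx, lidar_idx
```

Because the scene is static for the whole 10 seconds, small timestamp offsets between the two chosen frames do not matter.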
On-ground: The target is placed on the ground (touching it). In this case, enable the "on ground" flag in the target configuration, and make sure the lidar data captures the ground points. This allows the extrinsic parameters to be optimized using the ground points.
Tilted: A holder holds the target up in the air, tilted by around 45 degrees. In this case, enable the "tilted" flag in the target configuration. This approach enables deep optimization, and the extrinsic parameters are further optimized using the edge points of the board.
For targetless calibration, users must record a scene with the camera and the lidar. No target is required, but the scene should contain vehicles (cars and vans) visible in both the camera and the lidar data. For better calibration, vehicles should be close to the lidar (3 m to 10 m) with a good number of lidar points, and present on both the left and right sides of the image; having too many vehicles may cause calibration errors. Ensure that the vehicles (including the ego vehicle) are stopped or moving slowly; this reduces the effect of the time difference between the lidar and the camera. Select 3-4 frames from the collected data that have vehicles on both sides of the image and close to the lidar.
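The frame-selection criteria above can be written as a simple filter. The detector, thresholds, and argument names below are assumptions for illustration, not the tool's actual logic:

```python
def frame_ok(vehicle_boxes, image_width, lidar_ranges_m):
    """Return True if a frame meets the targetless-calibration guidance.

    vehicle_boxes: list of (x_min, y_min, x_max, y_max) boxes from any
                   vehicle detector (hypothetical input).
    lidar_ranges_m: ranges (meters) of lidar points on detected vehicles.
    """
    centers = [(b[0] + b[2]) / 2.0 for b in vehicle_boxes]
    left = any(c < image_width / 2 for c in centers)    # vehicle on the left half
    right = any(c >= image_width / 2 for c in centers)  # vehicle on the right half
    close = sum(1 for r in lidar_ranges_m if 3.0 <= r <= 10.0)
    return left and right and close >= 100  # assumed minimum point count
```

Running such a filter over the recording and keeping the 3-4 best frames matches the selection step described above.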