Data Collection for Multi Target Lidar-Camera Calibration


Calibration Target

A Charuco board is the calibration target. Charuco boards of any size and number of squares can be used as the target, as long as they use one of the supported dictionaries. Targets should be as flat as possible, and the surface of the target should be anti-glare (matte finish). See the Charuco Dictionary page for the supported Charuco dictionaries.

Note: When multiple targets are used, make sure the targets use different dictionaries.

We suggest making Charuco boards of at least 1.0 m × 1.0 m for good corner detection.
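
As a rough illustration, the sketch below generates two printable Charuco targets from different predefined dictionaries with OpenCV. It is a minimal sketch, assuming OpenCV >= 4.7 with the aruco module; the dictionary choices, square counts, square/marker sizes, output resolution, and file names are placeholder values, and the physical print should still be at least 1.0 m × 1.0 m.

```python
# Illustrative sketch: generate two Charuco boards from different
# dictionaries so that multiple targets can be told apart.
# Assumes OpenCV >= 4.7 (opencv-contrib-python); all sizes are placeholders.
import cv2

# Each target gets its own predefined dictionary.
dictionaries = [
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100),
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_100),
]

for idx, dictionary in enumerate(dictionaries, start=1):
    # 8 x 8 squares, 0.125 m squares, 0.094 m markers -> roughly a 1.0 m x 1.0 m board.
    board = cv2.aruco.CharucoBoard((8, 8), 0.125, 0.094, dictionary)
    # Render a high-resolution image for printing.
    image = board.generateImage((4000, 4000))
    cv2.imwrite(f"charuco_target_{idx}.png", image)
```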

Data for camera-to-lidar extrinsic calibration

Place the targets so that they are in focus for the camera and within the LiDAR's field of view. The targets should be placed on the ground or on a stand. Avoid holding the targets with your hands.

The targets and all sensors should be static while collecting the data. To avoid time-synchronization problems, please keep the boards and the sensors stationary for at least 10 seconds while collecting each calibration data set.

For example, these are the steps to collect one set of calibration data:

  1. Orient the camera toward the targets. Start recording. Wait for 10 seconds (don't move or rotate your car/robot/sensors). Stop recording. You should now have 10 seconds of images and lidar data. Extract one image from the camera and one frame of lidar data captured 5 seconds after the recording started (e.g., if you start recording at 3:00:00 and stop at 3:00:10, use a frame captured at 3:00:05) and save them. (Dataset front_1)

  2. Change the targets' location and orientation. Start recording. Wait 10 seconds (Don't move/rotate your car/robot). Stop recording. Again, you must have a recording of images and lidar data for 10 seconds. Extract one image from the camera and one frame of lidar data captured 5 seconds after the recording starts and save them. (Dataset front_2)

  3. In the same manner, save Datasets front_3, front_4, and front_5.

Multi-target LiDAR-camera calibration works even with a single image/lidar-frame pair, but the more pairs you collect, the better the accuracy will be.
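
If each recording is a ROS 1 bag file, picking out the mid-recording image and lidar frame from the steps above can be scripted along the lines of the sketch below. This is a minimal sketch under assumptions not in the original guide: the bag file name and the topic names /camera/image_raw and /lidar/points are hypothetical placeholders, and the script only locates the two messages closest to the 5-second mark; converting and saving them in your upload format is left to you.

```python
# Illustrative sketch: find the camera image and the lidar frame closest to
# the midpoint of a 10-second recording stored in a ROS 1 bag.
# Bag path and topic names are hypothetical placeholders; adapt to your setup.
import rosbag

BAG_PATH = "front_1.bag"            # placeholder
CAMERA_TOPIC = "/camera/image_raw"  # placeholder
LIDAR_TOPIC = "/lidar/points"       # placeholder

with rosbag.Bag(BAG_PATH) as bag:
    # Midpoint of the recording, i.e. roughly 5 seconds after it started.
    mid_time = (bag.get_start_time() + bag.get_end_time()) / 2.0

    closest = {}  # topic -> (time offset from midpoint, message)
    for topic, msg, t in bag.read_messages(topics=[CAMERA_TOPIC, LIDAR_TOPIC]):
        offset = abs(t.to_sec() - mid_time)
        if topic not in closest or offset < closest[topic][0]:
            closest[topic] = (offset, msg)

# closest[CAMERA_TOPIC][1] and closest[LIDAR_TOPIC][1] now hold the image and
# point-cloud messages captured nearest the 5-second mark; convert and save
# them in whatever format you upload for calibration.
```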

Optional board position

on ground

The target is placed on the ground (touching the ground). In this case, enable the on ground flag on the target configuration page, and make sure that the lidar data captures the ground points. This allows the extrinsic parameters to be optimized using the ground points.
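
As a quick, purely illustrative sanity check that a lidar capture actually contains ground returns, something like the sketch below can be used. It assumes the point cloud is available as an N x 3 NumPy array (x, y, z) in the lidar frame with z pointing up, and that the sensor sits about 1.8 m above the ground; the file name, sensor height, and tolerance are all placeholder assumptions.

```python
# Illustrative sanity check: count lidar returns near the ground plane.
# Assumes an (N, 3) array with z up; file name and heights are placeholders.
import numpy as np

points = np.load("front_1_lidar.npy")  # placeholder, shape (N, 3): x, y, z
SENSOR_HEIGHT_M = 1.8                  # assumed lidar height above the ground
TOLERANCE_M = 0.15                     # how close to the ground plane counts

ground_mask = np.abs(points[:, 2] + SENSOR_HEIGHT_M) < TOLERANCE_M
print(f"{ground_mask.sum()} of {len(points)} points lie near the ground plane")
```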

Tilted

The target is held up in the air by a holder and tilted to the right by around 45 degrees. In this case, enable the Tilted flag in the target configuration. This enables deep optimization: the extrinsic parameters are further optimized using the edge points of the board.

[Sample image: board on the ground]
[Sample image: tilted board]