Data Collection for Radar-Camera Calibration

Calibration Target:

Our current method for radar-camera calibration uses a checkerboard along with a trihedral corner reflector as the calibration target.

Please note that:

  1. The checkerboard can be of any size and can have any number of internal corners. The checkerboard must not be made of metal such as aluminum, because metal will block the radar signals; non-metal materials such as foam board, wood, or cardboard are fine.

  2. The trihedral corner reflector can be of any length.

E.g., you can print the attached PDF file on a foam board at 1.0 m x 0.6 m; most print shops can print this. The pattern has 5 internal corners horizontally and 9 internal corners vertically, and each square is 10 cm. The margins from the board edges to the outermost internal corners are 10 cm on the left, right, top, and bottom.
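
As a quick sanity check, the sketch below recomputes the example board's outer dimensions from the corner counts, square size, and edge margins given above. It assumes, as described, that the margins are measured from the outermost internal corners to the board edges.

```python
# Sketch: verify the example board geometry (all values from this guide).
corners_h, corners_v = 5, 9   # internal corners horizontally / vertically
square = 0.10                 # square size, in meters
margin = 0.10                 # margin from outermost internal corner to edge

# Outer size = span of the internal corners plus a margin on each side.
width = (corners_h - 1) * square + 2 * margin    # 0.4 + 0.2 = 0.6 m
height = (corners_v - 1) * square + 2 * margin   # 0.8 + 0.2 = 1.0 m
print(f"board size: {width:.1f} m x {height:.1f} m")  # 0.6 m x 1.0 m
```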

The trihedral corner reflector is taped to the back of the checkerboard, as shown in the image below.

Note: The tip of the trihedral corner reflector must be aligned with the internal corner exactly in the middle of the pattern (for the 5 x 9 example above, the corner in the 3rd column and 5th row).

Data for radar-camera calibration:

Place the checkerboard roughly 3 m to 10 m from the camera. Closer positions are generally better, but the board must be far enough away that all of its edges are visible to both the camera and the radar (see the sketch below). For a radar that can detect elevation, vary the height of the calibration target above and below the radar level; otherwise, keep the calibration target on level ground. The checkerboard must not be occluded in the camera or radar view.
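
As a rough way to reason about the "far enough that the whole board is visible" condition, the sketch below estimates the closest camera distance at which a board of a given width fits in the horizontal field of view. The 60-degree FOV and the undistorted pinhole model are hypothetical simplifications, not values from this guide; in practice the 3 m guideline above usually dominates.

```python
import math

# Sketch: closest distance at which a board of width `board_width` fully
# fits in the horizontal FOV of an ideal (undistorted) pinhole camera.
board_width = 1.0  # m (the board's widest dimension as seen by the camera)
hfov_deg = 60.0    # hypothetical horizontal field of view; use your camera's

# With the board centered, half its width must fit within half the FOV.
min_distance = (board_width / 2) / math.tan(math.radians(hfov_deg) / 2)
print(f"minimum camera distance: {min_distance:.2f} m")  # ~0.87 m here
```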

The checkerboard squares should be about 10 cm. The sides of the squares must be parallel to the edges of the checkerboard.

The target and all sensors must be static while collecting the data. To avoid time-synchronization problems, keep the board and the sensors stationary for at least 10 seconds while collecting each set of calibration data.

For example, these are the steps to collect one set of calibration data:

  1. Orient the camera and radar toward the calibration target. Start recording, wait 10 seconds without moving or rotating your car/robot/sensors, then stop recording. You should now have 10 seconds of image and radar recordings. Extract and save one image and one frame of radar data captured 5 seconds after recording started (e.g., if you start recording at 3:00:00 and stop at 3:00:10, extract the frames captured at 3:00:05); a sketch of this extraction for ROS bags follows these steps.

  2. Change the checkerboard's location and orientation, then repeat: start recording, wait 10 seconds without moving the car/robot, stop recording, and again extract and save one image and one frame of radar data captured 5 seconds after recording started.

  3. In the same manner, collect data for different positions of the calibration target.
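
If your sensors record to ROS1 bags (as used elsewhere in these docs for point cloud data), a minimal sketch of the per-set extraction could look like the following. The file name and topic names are placeholders, not Deepen-specific values; substitute whatever your camera and radar actually publish.

```python
import rosbag

# Sketch: extract one image and one radar message captured ~5 s after the
# start of a 10 s recording, as described in the steps above.
BAG_PATH = "calibration_set_01.bag"  # hypothetical file name
IMAGE_TOPIC = "/camera/image_raw"    # hypothetical camera topic
RADAR_TOPIC = "/radar/points"        # hypothetical radar topic

with rosbag.Bag(BAG_PATH) as bag:
    target_time = bag.get_start_time() + 5.0  # 5 s into the recording

    def closest_message(topic):
        """Return the message on `topic` whose timestamp is nearest target_time."""
        best_msg, best_dt = None, float("inf")
        for _, msg, t in bag.read_messages(topics=[topic]):
            dt = abs(t.to_sec() - target_time)
            if dt < best_dt:
                best_msg, best_dt = msg, dt
        return best_msg

    image_msg = closest_message(IMAGE_TOPIC)
    radar_msg = closest_message(RADAR_TOPIC)
    # Save image_msg and radar_msg in whatever format your dataset upload
    # expects before moving the target to the next position.
```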

Note: To achieve good results, collect at least 6-7 pairs of image and radar data.

Each calibration dataset supports only one pair of radar-camera sensors. To calibrate a new pair of radar-camera sensors, follow the above steps and create a new dataset.

Attached PDF file: https://drive.google.com/file/d/1mTR8HTpvROE1Pv0rmXEBVLSxs_yMDnvf/view?usp=sharing