Radar-Camera Calibration



Overview: Deepen Calibrate is a software tool that makes the critical task of sensor data calibration simple and quick.

1. Calibrations:

  • This page allows users to create, list, launch, and delete calibration datasets. Admins can manage users' access to these datasets here.

  • Click on ‘New Calibration’ to create a new calibration dataset.

2. Calibration Selection:

  • Upon clicking the ‘New Calibration’ button, the user can choose from the available calibration types. Select ‘Radar-Camera Calibration’ to create a new radar-camera calibration dataset.

3. Calibration Instructions Page:

  • Upon selecting ‘Radar-Camera Calibration’, the user is welcomed to the instructions page.

  • Click on ‘Get started’ after finishing with the instructions. A pop-up is displayed that asks for:

  1. Dataset name

  2. Edge length of the reflector

  3. Configuration of the checkerboard

The user must fill in all the mandatory fields and click on ‘Set Details’ to proceed.

4. Calibration Pipeline:

Radar-Camera Calibration consists of three stages:

  • Radar Camera setup: Add camera details, files, and reflector coordinates.

  • Detect corners: Detect the checkerboard corners in each image.

  • Calibrate: Run the calibration algorithm and visualise the results.

  1. The user can hit the ‘Save changes’ option at any stage to prevent loss of work.

  2. The user can navigate between the stages and update the details at their convenience.

  3. Download the results using the ‘Export’ option.

  4. Click on the ‘Help Center’ button (top right corner) to get answers to calibration-related questions.

4.1 Radar Camera Setup:

This stage is broadly categorised into three steps:

  • Setup Camera Intrinsics: Add/upload your camera’s intrinsic parameters. The user can click on ‘Get intrinsic parameters’ to calibrate the camera.

  • Add files: Upload the files captured using the camera.

  • Add reflector coordinates: Enter the coordinates of the reflector captured using radar.

To move on to the next stage, complete the radar-camera setup and then click on ‘Continue’.

4.2 Detect corners:

Users can click on the 'Detect corners' button. The tool tries to auto-detect most of the corners; if checkerboard corner detection fails, the user can add four border markers to detect all the inner checkerboard corners.

Select the four boundary marker points in the order top-left, top-right, bottom-left, bottom-right, then click on the 'Detect corners' button again to get the remaining inner corners of the checkerboard.
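If the four marker points are recorded in an arbitrary order, they can be sorted into the required order programmatically. A minimal Python sketch (the function name `order_markers` is illustrative and not part of the Deepen tool):

```python
def order_markers(points):
    """Order four (x, y) image points as top-left, top-right,
    bottom-left, bottom-right (image y grows downward)."""
    # Split into top and bottom pairs by y, then sort each pair by x.
    by_y = sorted(points, key=lambda p: p[1])
    top = sorted(by_y[:2])
    bottom = sorted(by_y[2:])
    return [top[0], top[1], bottom[0], bottom[1]]
```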

4.3 Calibrate:

The user can review the files and camera intrinsics here before hitting the ‘Calibrate’ button. Depending on the quality and number of image files, it may take 1–2 minutes to produce accurate results.

Validate the results in Visualize mode and mark the calibration process as complete. The extrinsic parameters, camera intrinsics, and error metrics can be exported using the ‘Export’ option.

Please note that:

  1. The visualization renders calibration results both in 2D and 3D.

  2. The reprojection error is the mean distance between corresponding points in 2D.

  3. The translational error is the mean distance between corresponding points in 3D.
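Both metrics are mean Euclidean distances over corresponding point pairs. A pure-Python sketch of how such a value is computed (illustrative only; the tool computes these internally):

```python
import math

def mean_distance_error(predicted, observed):
    """Mean Euclidean distance between corresponding points.
    Works for both 2D (reprojection) and 3D (translational) points."""
    assert len(predicted) == len(observed) and predicted
    total = sum(math.dist(p, q) for p, q in zip(predicted, observed))
    return total / len(predicted)
```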

5. Extrinsic Calibration Output:

  • roll, pitch, yaw, px, py, pz are the extrinsic parameters downloaded from the calibration tool.

  • reflectorPointsInRadar gives the 3D coordinates of the reflector in the radar coordinate system.

  • reflectorPointsInCamera gives the 3D coordinates of the reflector in the camera coordinate system.
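These six values define a rigid-body transform. A sketch of assembling them into a 4×4 homogeneous matrix, assuming angles in radians and an Rz(yaw)·Ry(pitch)·Rx(roll) rotation order — the actual convention should be confirmed against the exported data:

```python
import math

def extrinsics_to_matrix(roll, pitch, yaw, px, py, pz):
    """Build a 4x4 homogeneous transform from roll/pitch/yaw (radians)
    and a translation (px, py, pz), assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, px],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, py],
        [-sp,     cp * sr,                cp * cr,                pz],
        [0.0,     0.0,                    0.0,                    1.0],
    ]
```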

Steps to upload the configuration by JSON file:

  1. Click on get started.

  2. Click on the import config button in the top right corner of the modal.

  3. Upload the JSON config file.

We process the config file and give users the option to select all or specific configs. Once the user clicks on ‘Done’, the config is saved, and the user doesn’t have to type anything else.

The only remaining user input is uploading the images.

Important note: while uploading images, each file name must exactly match the name given in the JSON config file; otherwise, the radar positions won’t appear in the UI and the user will have to enter them manually.
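A local pre-check can catch mismatched file names before upload. A sketch (the helper `missing_radar_positions` is hypothetical, not part of the Deepen API):

```python
import json

def missing_radar_positions(config_json, image_names):
    """Return image file names that have no radar position entry in the
    config's data.file_data section (these would need manual entry in the UI)."""
    cfg = json.loads(config_json)
    listed = {e["file_name"] for e in cfg.get("data", {}).get("file_data", [])}
    return sorted(set(image_names) - listed)
```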

Any of the parent keys ["name", "type", "approach", "version", "sensors", "targets", "data"] can be skipped. For example, to enter only radar positions, add just the data section and skip all other keys. Keep in mind that only parent-level keys can be skipped, not nested keys.
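For example, a config carrying only the data parent key could be built and checked with Python's json module (file names and positions here are illustrative):

```python
import json

# A minimal config containing only the "data" parent key; every other
# top-level key (name, type, approach, version, sensors, targets) is skipped.
minimal_config = {
    "data": {
        "file_data": [
            {
                "file_name": "1.jpg",
                "position": {"x": -0.0354, "y": 1.1188, "z": 0.1833},
            }
        ]
    }
}

print(json.dumps(minimal_config, indent=4))
```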

JSON config with Description
{
   // Name of the calibration datasets.
   "name": "radar camera",
   // Type of the calibration.
   "type": "radar_camera_calibration",
   // Calibration approach type.
   "approach": "target",
   // Config version, defaults to 1.
   "version": 1,
   // List of sensors.
   "sensors": [
       {
	   // Type of sensor: [camera, radar]
           "type": "camera",
            // Name of sensor
            "name": "camera1",
            // Camera lens type: pinhole/fisheye
            "lens_model": "pinhole",
           "intrinsics": {
               "fx": 3092.1424896390727,
               "fy": 3086.110800366092,
               "cx": 1962.4427444227433,
               "cy": 1484.0856987530365,
               "distortion": {
                   "is_enabled": true,
                   "k1": 0.066861015025329,
                   "k2": -0.2939800579680618,
                   "k3": 0.27962457717517547,
                   "p1": -0.000008687426909951295,
                   "p2": 0.00009712594960396621
               }
           }
       },
       {
           "type": "radar",
           "name": "5mm radar"
       }
   ],
   // List of targets used in the calibration
   "targets": [
       {
           "type": "trihedral_corner_reflector",
           "edge_length": 0.1
       },
       {
           "type": "checkerboard",
           "square_dimension": 0.1,
           "thickness": 0.03,
           "horizontal_corners": 9,
           "vertical_corners": 5,
           "padding": {
               "top": 0.1,
               "right": 0.1,
               "bottom": 0.1,
               "left": 0.1
           }
       }
   ],
   // Any misc. data
   "data": {
       // File specific data
       "file_data": [
           {
	       // Uploaded file name
               "file_name": "1.jpg",
	       // Radar position
               "position": {
                   "x": -0.0354,
                   "y": 1.1188,
                   "z": 0.1833
               }
           },
           {
               "file_name": "2.jpg",
               "position": {
                   "x": 0,
                   "y": 1.0353,
                   "z": 0.1565
               }
           }
       ]
   }
}
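The intrinsics block in the sensors section follows the standard pinhole model with radial (k1–k3) and tangential (p1, p2) distortion. A sketch of projecting a 3D camera-frame point to pixels under that model (illustrative; the tool's exact implementation may differ):

```python
def project_pinhole(point, fx, fy, cx, cy,
                    k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Project a 3D point (camera frame, z > 0) to pixel coordinates using
    the pinhole model with radial and tangential distortion."""
    x, y = point[0] / point[2], point[1] / point[2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return (fx * xd + cx, fy * yd + cy)
```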

The user can also choose to add the configuration by uploading a JSON file if required; see ‘Steps to upload the configuration by JSON file’ above for details.

A few other details:

  • A sample configuration file, Radar-Camera_Sample.json, is available for download.

  • The ‘Get started’ modal shows the upload config option in the top right corner; once uploaded, the parsed JSON config data is displayed.