Lidar Radar Calibration

Last updated 7 months ago

Overview: Deepen Calibrate is a software tool that makes the critical task of sensor data calibration simple and quick.

1. Calibrations:

  • This page allows users to create, list, launch, and delete calibration datasets. Admins can manage users' access to these datasets on this page.

  • Click on ‘New Calibration’ to create a new calibration dataset.

2. Calibration Selection:

  • Upon clicking the ‘New Calibration’ button, the user can choose from the different calibration types. Select ‘Lidar Radar Calibration’ to create a new lidar-radar calibration dataset.

3. Start Page:

This page lists the important instructions and requirements you need to follow in order to complete the calibration process.

Start the app by clicking the “Get Started” button.

4. Add details to the configuration page:

Fill in all the details, such as the board dimensions and the edge length of the trihedral corner reflector, on this page.

  • Calibration Name: This is the name you give to the present calibration to identify it when you visit it again.

  • Sensor names: Provide names for both the radar and lidar sensors so you can identify each sensor when stats are shown.

  • Length: This is the length of the board which you are using in the calibration.

  • Breadth: This is the breadth of the board you are using in the calibration.

  • Thickness: This is the thickness of the board you are using in the calibration.

  • Edge Length: This is the edge length of the trihedral corner reflector used to collect radar data. (See the preview to know exactly which edge is considered the edge length.)

Once you have filled in all the entries (i.e. sensors, target, and trihedral corner reflector), the “Continue to Calibrate” button at the top right is enabled. Click it to proceed.
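The fields on this page can be sketched as a plain data structure. This is a hypothetical representation (the tool collects these values through its UI, not a config file), with all lengths assumed to be in meters:

```python
# Hypothetical layout of the configuration-page fields; names and
# values here are illustrative, not the tool's actual input format.
config = {
    "calibration_name": "lidar_radar_front",  # identifies this calibration later
    "lidar_sensor_name": "lidar_top",
    "radar_sensor_name": "radar_front",
    "board": {"length": 1.2, "breadth": 0.9, "thickness": 0.02},  # meters (assumed)
    "reflector_edge_length": 0.15,  # trihedral corner reflector edge, meters
}

def check_config(cfg):
    """All physical dimensions must be positive numbers, and a name must be set."""
    dims = list(cfg["board"].values()) + [cfg["reflector_edge_length"]]
    assert all(isinstance(d, (int, float)) and d > 0 for d in dims)
    assert cfg["calibration_name"]

check_config(config)
```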

5. Upload PCD files page:

Add the lidar point cloud files (.pcd files).

Once you upload all the lidar files, the “Continue” button at the bottom right is enabled. Click it to proceed.
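Before uploading, it can help to sanity-check that each .pcd file has a readable header. A minimal sketch in Python (the header parsing follows the standard PCD format; the file names are illustrative):

```python
# Reads a PCD file's header (which is ASCII even for binary payloads)
# and reports the point count and field layout of each file.
from pathlib import Path

def pcd_header(path):
    """Parse a PCD header into a dict keyed by header keyword (FIELDS, POINTS, ...)."""
    header = {}
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition(" ")
            header[key] = value
            if key == "DATA":  # the header ends at the DATA line
                break
    return header

if __name__ == "__main__":
    for p in sorted(Path(".").glob("*.pcd")):
        h = pcd_header(p)
        print(p.name, "points:", h.get("POINTS"), "fields:", h.get("FIELDS"))
```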

6. Set reflector configuration page:

On this page, you need to set the position of the reflector corner relative to the radar (in meters) for each lidar file uploaded in the earlier step.

Once radar data is entered for the corresponding lidar files, the “Continue” button is enabled in the bottom right corner. Click it to proceed.

An example reflector-position file, with one {x, y, z} entry (in meters) per uploaded .pcd file:

{
  "1.pcd": {"x": 1e-8, "y": 2.651905, "z": 0.086121},
  "2.pcd": {"x": -0.331663, "y": 2.626169, "z": 0.182349},
  "3.pcd": {"x": -0.315841, "y": 2.502859, "z": 0.142405},
  "4.pcd": {"x": -0.152646, "y": 2.422984, "z": 0.266207}
}
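A file in this format can be generated and validated with a short script. This is a sketch assuming the structure shown above; the output file name is illustrative:

```python
# Writes a reflector-position file: one {x, y, z} entry in the radar
# frame (meters) per uploaded lidar file, matching the example format.
import json

positions = {
    "1.pcd": {"x": 1e-8, "y": 2.651905, "z": 0.086121},
    "2.pcd": {"x": -0.331663, "y": 2.626169, "z": 0.182349},
}

def validate(entries):
    """Every key must be a .pcd file name; every value needs numeric x, y, z."""
    for name, pos in entries.items():
        assert name.endswith(".pcd"), f"{name} is not a .pcd file name"
        for axis in ("x", "y", "z"):
            assert isinstance(pos[axis], (int, float)), f"{name}.{axis} must be a number"

validate(positions)
with open("reflector_positions.json", "w") as f:
    json.dump(positions, f, indent=1)
```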

7. Map target in point cloud page:

  • On this page, the point cloud is shown in the middle of the screen. Manually mark the four corner points of the board in this point cloud, in the order shown on the wooden board on the right side.

  • Follow the ordering displayed on the right-panel wooden board; wrong ordering will lead to inaccurate results.

  • These markings need to be made in all the uploaded point clouds.

  • Once marking is done in all the point cloud files, the “Detect Board Points” button is enabled at the bottom of the page. Click it to detect the edge points of the target.

  • After this, the “Continue” button is enabled at the bottom right of the page. Click it to proceed.

8. Run Calibration Page:

  • Click the Run Calibration button at the bottom of the page to run your calibration.

  • After the calibration algorithm completes in the backend, the extrinsic parameters and error stats are shown on the right-side panel of the page.
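Once you have the extrinsic parameters, they can be applied as a standard rigid transform to bring lidar points into the radar frame. A sketch assuming the extrinsics are given as a 4x4 homogeneous matrix (the tool's exact output format may differ):

```python
# Applies a lidar->radar extrinsic to lidar points so they can be
# compared against radar detections. Rotation and translation are
# packed into a single 4x4 homogeneous matrix T.
import numpy as np

def transform_points(points_lidar, T):
    """points_lidar: (N, 3) array of lidar-frame points; T: 4x4 extrinsic matrix."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # (N, 4) homogeneous coords
    return (T @ homo.T).T[:, :3]

# Illustrative extrinsic: identity rotation, radar 0.5 m ahead of the lidar on x.
T = np.eye(4)
T[0, 3] = -0.5
pts = np.array([[1.0, 2.0, 0.1]])
print(transform_points(pts, T))  # the same point in radar-frame coordinates
```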

9. Visualize:

Once the extrinsics are generated, you can visualize the results by clicking the “Visualize” button at the top right.

You may also import a JSON file containing all the reflector positions; an example file is shown in step 6 above.