Calibration Optimiser

Optimise the calibration parameters of multiple sensors together.

Steps to optimise the parameters:

  • The side menu bar has an Optimisations button that navigates to the Optimisations list page for the selected Calibration Workspace; by default the list is empty.

  • The Optimiser flow is launched using the “New Optimisation” button in the top-right corner of the Optimisations list page.

  • When you launch the Optimiser flow, you are redirected to a page that asks you to enter a minimum of 3 sensor-pair calibrations forming a loop (discussed further below).

  • A loop is formed between multiple sensors when calibration pairs exist such that a walk from any sensor back to itself can be completed. For example, suppose three sensors S1, S2 and S3 capture the same scene, and the workspace has the calibration pairs S1-S2, S2-S3 and S1-S3. Treating the sensors as nodes in a graph and the calibrations between them as edges, these pairs form a closed loop.

  • This feature optimises multiple sensors observing the same scene by exploiting loop closure; hence the tool requires a minimum of 3 sensor pairs forming a loop (see the sketch after this list).

  • Continuing with the example in the tool, add a combination of sensor pairs using the “+ Select sensor pair” button.

  • Select the sensors from the list of calibrations in the given workspace, using the search box and the calibration-type filter.

  • After selecting the corresponding sensor pair using the “+” sign, rename the sensors according to your convention and click “Add sensors”.

  • Although the order of the sensor pairs does not matter, select the pairs carefully: they must close a loop, and the custom names must be consistent across all the sensor pairs.

  • Click “Add sensor pair” if there are more than three sensor pairs in the loop.

  • Click “Done adding” once all the sensor pairs to be optimised are added.

  • You will be redirected to a page with a visualiser and details of the sensor pairs.

  • Click the “Optimise” button at the top right of the page to optimise the sensor pairs and visualise the results.

  • Understanding the optimisation results and their features:

    • Residual Error: These parameters summarise the mismatch across all the sensor pairs given as input. Ideally, all of these parameters should be zero or tend towards zero after optimisation.

    • Estimated Error Stats: We provide error values so that users can assess the quality of the calibration.

      - Residual error value: This value is the norm of the residual parameters.

      - Individual angle error: This is the estimated error in each calibration's angle parameters derived from residual parameters.

      - Individual position error: This is the estimated error in each calibration's position parameters derived from residual parameters.

    • Individual Sensor Pair: Following the residual error values, the change in each calibration’s parameters is listed.

    • Toggle “Show optimised data” to visualise the sensors after running the optimisation algorithm.

    • “1 degree = 1 meter” indicates that the algorithm treats angular and positional units as equivalent: a change of one degree costs as much as a change of one meter. Change the meter-scale value to weight the two differently according to your needs (see the sketch after this list).

    • Name the optimisation and save the details:

    - Click the Save button on the top right to save the added calibration details and optimisation results.

    • You will see a loader while the results are saved to the database as an optimisation dataset with a dataset ID.

    • You can rename the dataset to your preferred name and click Save.

    • If you click the back button on the top left, you will be redirected to the optimisations list page.

    • Clicking any of the optimisation datasets opens its saved details and visualisation from the database.
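
To make the loop-closure residual concrete, here is a minimal sketch, assuming each calibration is a 4x4 extrinsic matrix mapping points from one sensor frame to another; it is an illustration, not the tool's implementation. Composing hypothetical pairwise extrinsics around the loop S1 → S2 → S3 → S1 should ideally give the identity transform; the deviation from identity is the residual, and the meter_scale argument mirrors the “1 degree = 1 meter” setting.

```python
# Minimal loop-closure sketch (illustrative only, not Deepen's implementation).
import numpy as np
from scipy.spatial.transform import Rotation

def residual_error(loop_transforms, meter_scale=1.0):
    """Compose 4x4 extrinsics around a closed loop (in loop order, e.g.
    S1->S2, S2->S3, S3->S1) and measure the deviation from identity.
    meter_scale mirrors the "1 degree = 1 meter" setting: it weights
    degrees of angular residual against meters of positional residual."""
    T = np.eye(4)
    for T_pair in loop_transforms:
        T = T_pair @ T
    angle_deg = np.linalg.norm(
        Rotation.from_matrix(T[:3, :3]).as_rotvec(degrees=True))
    position_m = np.linalg.norm(T[:3, 3])
    combined = np.hypot(angle_deg * meter_scale, position_m)
    return angle_deg, position_m, combined

def make_T(rotvec_deg, translation):
    """Build a 4x4 transform from a rotation vector (degrees) and a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rotvec_deg, degrees=True).as_matrix()
    T[:3, 3] = translation
    return T

# Hypothetical pairwise calibrations; a small inconsistency is injected into
# the third pair so the loop does not close exactly.
T_21 = make_T([0.0, 0.0, 90.0], [1.0, 0.0, 0.0])   # S1 -> S2
T_32 = make_T([0.0, 0.0, 120.0], [0.5, 0.2, 0.0])  # S2 -> S3
T_13 = make_T([0.3, 0.0, 0.0], [0.01, 0.0, 0.0]) @ np.linalg.inv(T_32 @ T_21)  # S3 -> S1

angle_deg, position_m, combined = residual_error([T_21, T_32, T_13])
print(f"angle residual: {angle_deg:.2f} deg, position: {position_m:.3f} m, "
      f"combined: {combined:.3f}")
```

Under these assumptions, the “Residual error value” reported by the tool corresponds to a norm such as combined above, and the individual angle and position errors correspond to the rotational and translational parts of the loop residual.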

Estimate the ground truth error:

We can also use the optimiser tool to roughly estimate the ground-truth error. Say we have three sensors A, B and C and their individual pairwise calibrations, which form a loop as described above. Clicking the Optimise button gives the residual error.

Ideally, combining all three pairwise calibration results should give a residual error close to zero, so the residual error tells us the ground-truth error for the three calibrations combined. We have observed that each individual pairwise ground-truth calibration error is close to 1/3 of the total residual error.
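
As a hypothetical worked example of the 1/3 observation above (the numbers are illustrative only):

```python
# Hypothetical numbers: if the total loop residual angle error is 0.30 degrees,
# the 1/3 observation attributes roughly 0.10 degrees of ground-truth error
# to each of the three pairwise calibrations.
total_residual_deg = 0.30
per_pair_error_deg = total_residual_deg / 3
print(f"estimated ground-truth error per pair: {per_pair_error_deg:.2f} deg")
```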

Camera sensor coordinates:

We currently support three camera sensor coordinate systems. Selecting a camera coordinate system changes the extrinsic parameters accordingly, and the export option exports the extrinsic parameters in the selected coordinate system.

  • Optical coordinate system: The default coordinate system that we follow.

  • ROS REP 103: The coordinate system followed by ROS. On switching to it, you can see the change in the visualisation and the extrinsic parameters.

  • NED: This follows the north-east-down coordinate system.
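
To illustrate how the exported extrinsics depend on the selected convention, here is a minimal sketch, assuming the extrinsics are 4x4 world-from-camera matrices; it is not the tool's export code. It re-expresses a hypothetical extrinsic from the optical convention (x right, y down, z forward) in the ROS REP 103 body convention (x forward, y left, z up) using the fixed rotation between the two frames; a similar fixed rotation would handle NED.

```python
import numpy as np

# Fixed rotation taking REP 103 camera-body coordinates (x forward, y left,
# z up) into optical coordinates (x right, y down, z forward):
# body x -> optical z, body y -> optical -x, body z -> optical -y.
R_OPTICAL_FROM_BODY = np.array([
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
    [1.0,  0.0,  0.0],
])

def optical_to_rep103(T_world_optical):
    """Re-express a 4x4 world-from-camera extrinsic given for the optical
    frame as one for the REP 103 body frame of the same camera."""
    T_optical_body = np.eye(4)
    T_optical_body[:3, :3] = R_OPTICAL_FROM_BODY
    return T_world_optical @ T_optical_body

# Hypothetical input: camera at (2, 0, 1) in the world, axes aligned with
# the world axes in the optical convention.
T_world_optical = np.eye(4)
T_world_optical[:3, 3] = [2.0, 0.0, 1.0]
print(optical_to_rep103(T_world_optical))
```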