Lidar-Camera Calibration (Old)



Overview: Deepen Calibrate is a software tool that makes the critical task of sensor data calibration simple and quick.

Calibration List:

  • This page contains the list of calibrations. Users can launch an existing dataset, delete it, and manage user access to these datasets.

Calibration Launch:

  • Users can click on ‘Get Started’ to go to the launch page.

  • Users can calibrate multiple cameras to LiDAR within the same dataset, but calibration must be performed individually for each camera/LiDAR combination.

Start calibration:

Camera Intrinsic Parameters:

  1. Intrinsic parameters for the camera are added here. Users have three options.

  2. Users can enter known intrinsic values directly.

  3. Users can calibrate with the intrinsic calibration tool, save the results to a profile, and then load the profile here.

  4. Or users can load a JSON file (an illustrative sketch of such a file follows this list).
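
For illustration, camera intrinsics for a pinhole model comprise the focal lengths, the principal point, and distortion coefficients. The Python sketch below writes such a set to a JSON file; the field names and values are hypothetical, and the exact schema the tool expects is defined by the tool itself.

```python
# Illustrative only: field names and values are hypothetical; the exact
# JSON schema expected by the tool is defined by the tool itself.
import json

intrinsics = {
    "fx": 1450.0,   # focal length along x, in pixels
    "fy": 1450.0,   # focal length along y, in pixels
    "cx": 960.0,    # principal point x, in pixels
    "cy": 540.0,    # principal point y, in pixels
    "distortion": [-0.30, 0.11, 0.0, 0.0, -0.02],  # k1, k2, p1, p2, k3
}

with open("camera_intrinsics.json", "w") as f:
    json.dump(intrinsics, f, indent=2)
```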

Choice for manual or auto extrinsic parameters:

  • If users already know the extrinsic parameters, they can enter the values directly; otherwise, they can calculate them using the tool.

Manual extrinsic parameters:

  • Users can update the extrinsic values manually.

  • They can choose to verify the values by going to ‘Visualization Mode’.

  • They can also further fine-tune these values.

Add Images and LiDAR Pair:

  • Users need to upload image/LiDAR pairs for extrinsic calibration.

  • Each pair must have a checkerboard in its view. Make sure the checkerboard is in a different position in each pair.

  • Users can click on the image/LiDAR on the left side panel to get the image viewer or the LiDAR viewer.

  • Users can also add or delete images/LiDAR files from the left side panel.

Checkerboard Configuration:

  • Users need to fill in the checkerboard configuration (refer to the Checkerboard Configuration Description section for more details).

Map Pair:

  • Users can click on ‘Start Mapping’ to enter mapping mode, which shows an image viewer on the left and a LiDAR viewer on the right.

  • Users can add points in the image; each point must then be mapped to the corresponding area in the LiDAR viewer.

  • Users have the option to paint an area in the LiDAR for each selected point in the image.

  • The centroid of the painted area is taken into consideration.

  • The calibration results depend on this step: the smaller the selected area, the better the result.

  • Users therefore have options to zoom in, zoom out, pan, and rotate.

  • Users even have the option to erase a particular painted area and improve the correspondence relation.

  • In most cases, four points are preselected in the image (all four are checkerboard borders); users just have to select each image point and map it to the LiDAR points.

  • Mapping can be done on any pair.

  • Users can navigate from one pair to another using the buttons ‘Map previous file’ and ‘Map next file’.

  • Once mapping is done, the user can move out of mapping mode by clicking on ‘Finish Mapping’.

  • Mapping a single pair is sufficient; there is no requirement to map all the image/LiDAR pairs.

  • Users can click on ‘Run extrinsic Calibration’ to get the extrinsic parameters (a sketch of how the mapped correspondences determine the extrinsics follows this list).

  • The ‘Run extrinsic calibration’ button is visible when the image/LiDAR pair for which the mapping was done is selected.
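
Deepen's internal solver is not documented here, but conceptually each selected image point paired with the centroid of its painted LiDAR area forms a 2D-3D correspondence, and a handful of such correspondences is enough to estimate a camera pose. A minimal sketch using OpenCV's solvePnP, with hypothetical point values:

```python
# Minimal sketch: estimating camera extrinsics from 2D-3D correspondences.
# All numeric values are hypothetical; Deepen's own solver may differ.
import cv2
import numpy as np

# Centroids of the painted LiDAR areas (LiDAR coordinates, meters).
lidar_points = np.array([
    [4.12, 1.03, 0.51],
    [4.10, 0.28, 0.49],
    [4.15, 1.05, -0.22],
    [4.13, 0.30, -0.25],
], dtype=np.float64)

# The corresponding points selected in the image (pixels).
image_points = np.array([
    [612.0, 384.0],
    [798.0, 381.0],
    [615.0, 553.0],
    [801.0, 549.0],
], dtype=np.float64)

camera_matrix = np.array([[1450.0, 0.0, 960.0],
                          [0.0, 1450.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume the image points are already undistorted

# Solve for the pose that maps LiDAR coordinates into the camera frame.
ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points,
                              camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)
print(rotation_matrix, tvec)
```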

Visualization Mode:

  • Users can toggle ‘Enable Visualization Mode’ to go to visualization mode.

  • In this mode, users can verify the extrinsic parameters by checking either the frustum or the LiDAR points projected on the image.

  • Users can project the generated checkerboard on the LiDAR viewer from the image.

  • Also, users can add a bounding box and look at its projection in the image.

  • Users can manually modify extrinsic parameters to improve those values by simultaneously looking at frustum and lidar points.

Detect corners in all images:

  • Once users have confirmed the extrinsic parameters, they can fine-tune and improve them.

  • Before doing so, users must make sure the extrinsic parameters are decent enough, using the options provided in visualization mode.

  • For this step, users have to identify the checkerboard corners in all images.

  • Auto-detect corners will work for most cases.

  • If auto-detect fails, users have to fall back to manual corner detection (refer to the Manual Corner Detection section).

Improve extrinsic calibration:

  • Finally, users can click on ‘Improve extrinsic calibration’. The algorithm then tries to improve the extrinsic parameters.

Analysing the improvement of extrinsic parameters:

  • Users can verify the extrinsic parameters in visualization mode as mentioned earlier.

  • After improving the extrinsic parameters, users also have an option to check and verify the algorithm's behaviour (refer to Analyzing the improved results in Visualization Mode for more details).

Error stats:

Users can use these error values, alongside visual confirmation, to estimate the accuracy of the calibration results. We extract the checkerboard from the raw point cloud of the LiDAR frame and compare it with the checkerboard corners in the 2-D image. The extracted checkerboard can be viewed in the visualizer. The three extrinsic error metrics, with descriptions, are as follows.

  • Translation Error: Mean difference between the centroid of the checkerboard points in the LiDAR and the centroid of the corners projected into 3-D from the image. Values are shown in meters. This calculation happens in the LiDAR coordinate system.

  • Rotation Error: Mean difference between the normal of the checkerboard in the point cloud and the normal of the checkerboard projected into 3-D from the image. Values are shown in degrees. This calculation happens in the LiDAR coordinate system.

  • Reprojection Error: Mean difference between the centroid of the image corners and the centroid of the LiDAR checkerboard points projected onto the image. Values are shown in pixels. This calculation happens in the image coordinate system.

  • Individual error stats can be viewed for each image/LiDAR pair; Average shows the mean of the errors over all eligible image/LiDAR pairs.

  • The closer the errors are to zero, the better. (A sketch of these metrics follows this list.)
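
The following Python sketch restates these definitions; it is an interpretation of the descriptions above (array shapes: N×3 for 3-D points, N×2 for pixel coordinates), not the tool's actual implementation.

```python
# Sketch of the three metrics as defined above; not the tool's exact code.
import numpy as np

def translation_error(lidar_board_pts, projected_board_pts):
    """Distance between centroids, in meters (LiDAR coordinate system)."""
    return np.linalg.norm(lidar_board_pts.mean(axis=0)
                          - projected_board_pts.mean(axis=0))

def rotation_error(lidar_normal, projected_normal):
    """Angle between the two checkerboard normals, in degrees."""
    cos_angle = np.dot(lidar_normal, projected_normal) / (
        np.linalg.norm(lidar_normal) * np.linalg.norm(projected_normal))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def reprojection_error(image_corners_px, projected_lidar_pts_px):
    """Distance between centroids in the image plane, in pixels."""
    return np.linalg.norm(image_corners_px.mean(axis=0)
                          - projected_lidar_pts_px.mean(axis=0))
```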

Download calibration parameters:

  • Once the entire calibration is done, users can download all intrinsic and extrinsic parameters.

Save calibration dataset:

  • A Save option is available in the top-left corner. Users can click the Save button at any time during the calibration process to save the calibration dataset.

Checkerboard Configuration Description:

  1. Horizontal Corner Count: The number of corners in the top row, counted from first to last (left to right).

  2. Vertical Corner Count: The number of corners in the left column, counted from first to last (top to bottom).

  3. Square Size: The side length of each square, in meters (see the worked example after this list).

  4. Distance from left Corner: The distance from the leftmost side of the board to the leftmost corner point, in meters.

  5. Distance from right Corner: The distance from the rightmost side of the board to the rightmost corner point, in meters.

  6. Distance from top corner: The distance from the topmost side of the board to the topmost corner point, in meters.

  7. Distance from bottom corner: The distance from the bottom-most side of the board to the bottom-most corner point, in meters.

  8. Is checkerboard on the ground: Enable this if the checkerboard is on the ground.
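
As an illustration, the corner counts and edge distances together determine the physical board dimensions. The numbers below are hypothetical, and the formula assumes the counts refer to the board's corner grid as described above:

```python
# Hypothetical board: relating the configuration fields to board size.
horizontal_corners = 7              # corners per row, left to right
vertical_corners = 5                # corners per column, top to bottom
square_size = 0.10                  # meters
dist_left, dist_right = 0.05, 0.05  # board edge to outermost corner (m)
dist_top, dist_bottom = 0.05, 0.05

board_width = dist_left + (horizontal_corners - 1) * square_size + dist_right
board_height = dist_top + (vertical_corners - 1) * square_size + dist_bottom
print(board_width, board_height)  # ~0.70 x 0.50 meters
```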

Analyzing the improved results in Visualization Mode:

  1. Image ‘Checkerboard Identification’:

  • This can be used to verify whether the checkerboard area is being identified properly.

  • Users can change the checkerboard configuration or retry detecting corners in order to fix the checkerboard identification.

  • This step displays the undistorted images, so users can verify whether the undistortion is correct.

2. Image ‘Raw File’:

  • The raw image files are displayed.

3. LiDAR ‘Raw File’:

  • The raw LiDAR files are displayed.

4. LiDAR ‘Extracted checkerboard’:

  • This shows the checkerboard extracted from the original LiDAR file, which is used for the error-stats calculation. We compare the extracted checkerboard with the projected checkerboard.

5. Fused Point Cloud: When users enable ‘Fused point cloud’, they can select a fused file from among the following.

  • Input Cloud: This contains the fusion of all input clouds, filtered to the checkerboard area. If the checkerboard is not in the LiDAR file, the user has to fix the extrinsic parameters by going back to the mapping step or by manually updating them.

  • Generated Checkerboard: This contains the fusion of all generated checkerboards. If the checkerboard is not accurate, the user has to fix the checkerboard configuration or the inner-corner detection.

  • Input and Generated Checkerboard: This contains the fused output of the above two files. It helps to analyze the difference between the input and the generated output before optimization.

  • Checkerboard before vs. after optimization: This shows the difference between the checkerboards generated using the extrinsic values before and after the optimization step.

  • Input and Generated Checkerboard after optimization: This contains the fused LiDAR data of the input cloud and the generated checkerboard after optimization. If they overlap, the user can be confident that the extrinsic values are accurate; otherwise, the user can retry improving the calibration results.

Manual Controls to move the generated checkerboard onto the actual checkerboard:

  • Rotation and axis-movement controls are provided for the projected checkerboard in the visualization stage. Users can drag the projected checkerboard to align it with the actual checkerboard in the LiDAR viewer; the extrinsic parameters are recalculated according to the change. This is an additional way to get initial estimates of the extrinsic parameters.

Max correspondence:

This value is used as an input to the algorithm. Users can tweak it by analyzing the fused point-cloud LiDAR files: if the difference between the input and the generated cloud is large, the user can increase the max-correspondence value and retry improving the calibration results. (A sketch of how such a threshold behaves follows.)
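
The documentation does not spell out the underlying algorithm, but a max-correspondence threshold is the standard search-radius knob in point-cloud registration: point pairs farther apart than the threshold are ignored, so a larger value lets the optimizer pull in clouds that start farther apart. The sketch below shows the analogous parameter in Open3D's ICP; the file names are hypothetical and the tool itself may work differently.

```python
# Analogy only: Open3D ICP's max-correspondence distance; Deepen's
# algorithm may differ. File names are hypothetical.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("generated_checkerboard.pcd")
target = o3d.io.read_point_cloud("input_cloud.pcd")

max_correspondence = 0.10  # meters; pairs farther apart are ignored
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # refined alignment of the two clouds
```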

Toolbar Options:

  • Users have an option to disable the tooltips.

  • Users have an option to reset the image/LiDAR view to the default.

  • Users have an option to clear the points/corners added in the image/LiDAR.

Manual Corner Detection:

If the checkerboard corners are not auto-detected, users can select the four boundary points in the order top-left, top-right, bottom-left, bottom-right, and then click on ‘Retry corner detection’ to get the remaining inner corners of the checkerboard.

Extrinsic Calibration Output:

  • roll, pitch, yaw are in degrees and px, py, pz are in meters.

  • roll, pitch, yaw, px, py, pz are the extrinsic parameters downloaded from the calibration tool.

  • lidarPoint3D is the 3-D coordinates of a point in the LiDAR coordinate system.

  • imagePoint3D is the 3-D coordinates of a point in the camera coordinate system (a sketch of applying the extrinsics follows this list).
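
A sketch of applying these outputs: build the rotation from roll, pitch, and yaw, append the translation, and transform a LiDAR point into the camera frame. Two assumptions are made here and should be verified against the tool's output: that the rotations compose as yaw · pitch · roll (R = Rz Ry Rx), and that the parameters express the LiDAR-to-camera transform.

```python
# Assumption: rotations compose as R = Rz(yaw) @ Ry(pitch) @ Rx(roll) and the
# parameters map LiDAR to camera; verify the convention against the tool.
import numpy as np

def extrinsic_matrix(roll, pitch, yaw, px, py, pz):
    """Build a 4x4 transform from the downloaded extrinsic parameters."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [px, py, pz]
    return T

# Hypothetical values downloaded from the tool.
T = extrinsic_matrix(roll=1.2, pitch=-0.5, yaw=90.3, px=0.1, py=-0.2, pz=0.05)
lidarPoint3D = np.array([5.0, 1.0, 0.3, 1.0])  # homogeneous LiDAR point
imagePoint3D = T @ lidarPoint3D                # same point in camera coordinates
```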

Deep optimisation:

  • Deep optimisation is a newly added feature. The calibration can now be further improved with deep optimisation, which uses the edge lines of the checkerboard in the optimisation process.

  • In visualization mode, users can use the LiDAR drop-down and select ‘Edge points checkerboard’ to visualize the edges of the checkerboard extracted from the raw LiDAR.

  • Users can also use the 2D Line Reprojection Error to verify the individual error value of each pair. This shows the combined reprojection error of all four lines in the 2-D scene.

  • The checkerboard should be tilted for deep optimisation to be enabled. Users also have to check the ‘Is checkerboard tilted’ option to see the Deep optimise button in the improve-calibration-accuracy mode. Check the Deep optimisation option in that mode and then click on ‘Improve calibration accuracy’ to run the deep optimisation.

Camera sensor coordinates:

We currently show three different camera sensor coordinate systems. On selecting a camera coordinate system, the extrinsic parameters change accordingly. The export option exports the extrinsic parameters based on the selected camera coordinate system. (A conversion sketch follows the list below.)

  • Optical coordinate system: This is the default coordinate system that we follow.

  • ROS REP 103: This is the coordinate system followed by ROS. On changing to it, you can see the change in the visualization and the extrinsic parameters.

  • NED: This follows the north-east-down coordinate system.
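
For illustration, the standard relation between a camera optical frame (x right, y down, z forward) and ROS REP 103 body axes (x forward, y left, z up) is a fixed axis permutation, and re-expressing a point is a single matrix multiply. This is a sketch of that general convention, not necessarily the exact transformation the tool applies:

```python
# Sketch: re-expressing a camera-frame point under ROS REP 103 axes.
# Standard optical-to-ROS permutation; the tool's exact convention may differ.
import numpy as np

# Rows map optical axes onto ROS axes:
# x_ros = z_opt, y_ros = -x_opt, z_ros = -y_opt
R_ROS_FROM_OPTICAL = np.array([[0, 0, 1],
                               [-1, 0, 0],
                               [0, -1, 0]])

point_optical = np.array([0.5, -0.1, 4.0])  # hypothetical camera-frame point
point_ros = R_ROS_FROM_OPTICAL @ point_optical
print(point_ros)  # [ 4.  -0.5  0.1]
```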

FAQ:

How do I get the controls to rotate and move the projected checkerboard?

Users can enable the ‘checkerboard in LiDAR’ checkbox; the checkerboard will be projected in red. Select the ‘Bounding Box Select’ option from the tool options of the LiDAR viewer. Hovering over the checkerboard changes its color to blue; select the checkerboard to see the controls. All three rotations and movements are enabled.
