
Lidar-Camera Calibration (Targetless)


Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select LiDAR-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting LiDAR-Camera Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.

Approach selection

Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard or charuco board as the calibration target, while targetless calibration uses the scene captured in both the LiDAR and camera sensor data.

Configuration

Camera Intrinsic Parameters

Intrinsic parameters for the camera are added here. Users have three options:

  • Users can calibrate with the Camera Intrinsic calibration tool, save the results to a calibration profile, and then load that profile here (see the Camera Intrinsic calibration page for details).

  • Users can load the intrinsic parameters from a JSON file.

  • Users can manually enter the intrinsic parameters if they already have them.

Upload files from LiDAR and Camera

Add point cloud files from the LiDAR and images from the camera sensor. After adding, pair the point cloud files with the matching image files before continuing.

Sample CSV format

X, Y, Z
0,-0,-0
62.545,-3.5064,-3.5911
62.07,-3.5133,-4.1565
32.773,-1.8602,-3.4055
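
For reference, a point cloud stored in this CSV layout can be read into a NumPy array with a few lines of Python (a minimal sketch; the file name is a placeholder):

```python
import numpy as np

# Read a LiDAR frame stored as "X, Y, Z" rows, skipping the header line.
points = np.loadtxt("lidar_frame.csv", delimiter=",", skiprows=1)

print(points.shape)   # (N, 3) -> one XYZ point per row, in meters
print(points[:3])     # first few points
```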

Estimated extrinsic parameters

Mapping of corresponding points

To get the initial estimates, users can map any four corresponding points between the image and the point cloud data.
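
An initial extrinsic estimate from such 2D-3D correspondences can be computed with a Perspective-n-Point solver. The snippet below only illustrates that idea (the point values and camera matrix are placeholders, and OpenCV's solvePnP stands in for the tool's own solver):

```python
import numpy as np
import cv2

# Four corresponding points: 3D LiDAR points (meters) and their 2D image pixels (placeholders).
lidar_points = np.array([[62.545, -3.5064, -3.5911],
                         [62.070, -3.5133, -4.1565],
                         [32.773, -1.8602, -3.4055],
                         [15.120,  2.4500, -1.2000]], dtype=np.float64)
image_points = np.array([[645.0, 480.0],
                         [652.0, 510.0],
                         [410.0, 495.0],
                         [220.0, 430.0]], dtype=np.float64)

K = np.array([[1000.0, 0.0, 640.0],   # placeholder camera intrinsic matrix
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # distortion ignored in this sketch

# Estimate the LiDAR-to-camera rotation (Rodrigues vector) and translation.
ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
print("R =\n", R, "\nt =", tvec.ravel())
```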

Manually enter extrinsic parameters

Alternatively, users who already know the initial estimates can skip marking the corresponding points and click Add estimated extrinsic parameters to enter them directly.

Verifying the accuracy of the estimated extrinsic parameters

Good initial estimates are crucial for generating accurate extrinsic parameters.

Once the estimated extrinsic parameters are in the tool, users can visualize them by clicking the Visualize button. The visualization offers several sensor fusion techniques (described under Sensor fusion techniques below) through which the accuracy of the extrinsic parameters can be checked.

If the estimated parameters are way off, users should clear the markers and redo the markings to get better initial estimates.

Segmentation

Two segmentation approaches are available for the user to select:

Auto segmentation

  1. This approach automates the segmentation of vehicles in point clouds and images using a deep learning model trained on various datasets.

Manual segmentation

  1. LiDAR: In this approach, the user adds bounding boxes in the LiDAR frame and fits them to the vehicles in the point cloud. Bounding boxes must be added for every point cloud uploaded for calibration. This is done by selecting the Bounding box mode, adding the bounding boxes, and clicking Save Labels.

  2. Image: There are two ways to do manual segmentation:

    1. Semantic Painting: Users can use the brush to paint the vehicles in the image and then click Save Labels.

    2. Segment Anything: Users place a cluster of points on each vehicle, keeping points that belong to the same vehicle under the same category. Place at least one point on each surface of the vehicle (windshield, sides, roof, etc.) so that the model does not miss any part of it when it runs. After placing the points in each image, click Save Labels to save the data.

Note: Auto segmentation is suggested initially. Based on the segmented vehicles in the point clouds and images, the user can decide whether to proceed with auto-segmentation or perform the segmentation manually.

Run Calibration

Users need to click on Calibrate to optimize the estimated extrinsic parameters further. All the uploaded pairs are used in the optimization process.

Additional options in the run calibration

Users can optimise only the angles by selecting the Angles only checkbox. Enabling Angles only has been observed to give better sensor-angle accuracy; note that the sensor position is not optimized in this case.

Download calibration parameters

Once the entire calibration is done, users can download all intrinsic and extrinsic parameters by clicking the Export button in the header.

Analyzing the extrinsic parameters in Visualization Mode

Sensor fusion techniques

Users can use the following techniques to visualize the extrinsic parameters.

Frustum: Users can see the image's field of view in the LiDAR frame. This uses both the camera matrix and the extrinsic parameters. Image axes are also displayed according to the extrinsic parameters.

LiDAR points in image: Users can see the LiDAR points projected in the camera image using extrinsic parameters.

Color points from camera: Users can see the camera's colors applied to the points in the LiDAR space using the extrinsic parameters.

Error function

  • With perfect LiDAR-camera calibration and accurate segmentation of both the point cloud and the camera images, the projection of the segmented LiDAR points aligns precisely with the corresponding segmented pixels in the camera image.

  • Based on the above concept, we formulate our error function as follows:

    • $E = 1 - \alpha\,\frac{\text{num\_segmented\_points}}{\sum_{k=1}^{N}\sum_{p_i \in P_k} \frac{1}{\lVert p_i \rVert} D_k\left(\pi(p_i, K, R, t)\right)}$, where

    • $\pi$ is the projection function that projects 3D LiDAR points onto the camera image, $K$ is the camera intrinsic matrix, and $R$ and $t$ are the rotation and translation parameters being estimated.

    • $P_k$ is the set of segmented LiDAR points in the $k$-th point cloud, and for every $p_i \in P_k$, $\lVert p_i \rVert$ is the norm of the point $p_i$ in 3D space.

    • $D_k$ is the alignment function that calculates the proximity between the projected LiDAR points and the corresponding segmented pixels in the camera image.

    • $\alpha$ is a normalisation constant.
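
To make the terms concrete, the sketch below evaluates this error for a single LiDAR/image pair. It uses a binary segmentation mask as a stand-in for the alignment function $D_k$ (1 when a projected point lands on a segmented pixel, 0 otherwise) and $\alpha = 1$; both are illustrative assumptions, not the exact functions used by the tool:

```python
import numpy as np

def targetless_error(segmented_points, mask, K, R, t, alpha=1.0):
    """Evaluate E = 1 - alpha * num_segmented_points / sum_i (1/||p_i||) * D(pi(p_i)).

    segmented_points: (N, 3) segmented LiDAR points in the LiDAR frame (meters).
    mask:             (H, W) binary segmentation mask of the camera image.
    K, R, t:          camera intrinsics and the candidate extrinsics.
    Assumption: D(u, v) = mask[v, u], i.e. 1 if the projection hits a segmented pixel.
    """
    cam = segmented_points @ R.T + t                # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0                        # ignore points behind the camera
    cam = cam[in_front]
    norms = np.linalg.norm(segmented_points[in_front], axis=1)

    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # pi(p_i, K, R, t): pixel coordinates
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    h, w = mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros(len(cam))
    d[inside] = mask[v[inside], u[inside]]          # alignment term D for each point

    denom = np.sum(d / norms)
    if denom == 0:
        return 1.0                                  # no projected point hits the mask
    return 1.0 - alpha * len(segmented_points) / denom
```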

Graph

  • The graph below compares the ground truth error, calculated using our manual validation method, with the Deepen error function for 13 different extrinsic parameters.

  • The plot demonstrates a strong correlation between our error function and the ground truth error within a 1-degree deviation from the ground truth.

  • The extrinsic angles estimated by Deepen are Roll = -91.676, Pitch = 1.263, and Yaw = 179.204 degrees, with a ground truth deviation of only 0.25 degrees.

  • The extrinsic angles exhibiting the least deviation from the ground truth are -91.676 for Roll, 0.763 for Pitch, and 179.204 for Yaw.

Extrinsic Calibration Output

  • roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.

  • lidarPoint3D gives the 3D coordinates of a point in the LiDAR coordinate system.

  • imagePoint3D gives the 3D coordinates of a point in the camera coordinate system.
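
One way to assemble these exported values into a single 4×4 LiDAR-to-camera transform is sketched below (the values are placeholders, and the x-y-z Euler order applied to roll, pitch, and yaw is an assumption that should be confirmed against the exported data):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Exported extrinsic parameters (placeholder values): angles in degrees, position in meters.
roll, pitch, yaw = -91.676, 1.263, 179.204
px, py, pz = 0.12, -0.05, 0.30

# Assumption: roll/pitch/yaw form an x-y-z Euler rotation sequence.
R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [px, py, pz]
print(T)   # homogeneous transform built from the exported parameters
```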

Camera coordinates system

We currently show three different types of camera coordinate systems. The extrinsic parameters change according to the selected Camera coordinate system. The export option exports the extrinsic parameters based on the selected camera coordinate system.

  • Optical coordinate system: It's the default coordinate system that we follow.

  • ROS REP 103: This is the coordinate system followed by ROS. When you change to this, you can see the change in the visualization and the extrinsic parameters.

  • NED: This follows the North-East-Down coordinate system.
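
To illustrate how the choice of coordinate system changes the numbers, the standard axis permutation between a camera optical frame (x right, y down, z forward) and a ROS REP 103 body-style frame (x forward, y left, z up) can be written as a constant rotation. This is a sketch of that well-known relationship only; how the tool applies it to the exported extrinsics is not shown here:

```python
import numpy as np

# Re-express a vector given in the camera optical frame (x right, y down, z forward)
# in a ROS REP 103 style frame (x forward, y left, z up).
R_OPTICAL_TO_ROS = np.array([[ 0.0,  0.0, 1.0],   # x_ros =  z_optical
                             [-1.0,  0.0, 0.0],   # y_ros = -x_optical
                             [ 0.0, -1.0, 0.0]])  # z_ros = -y_optical

p_optical = np.array([0.2, -0.1, 5.0])            # placeholder point in the optical frame
p_ros = R_OPTICAL_TO_ROS @ p_optical
print(p_ros)                                      # [ 5.  -0.2  0.1]
```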

Sample Script

This is a sample Python script (project_lidar_points_to_image.py) to project LiDAR points onto an image using the extrinsic parameters. It uses the open3d and opencv libraries.
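
The attached script is the reference; the snippet below is an independently written sketch of the same idea (the file names, the Euler-angle convention, and the omission of lens distortion are assumptions, not the attached script's exact contents):

```python
import numpy as np
import open3d as o3d
import cv2
from scipy.spatial.transform import Rotation

# Load one LiDAR frame and its paired camera image (placeholder file names).
pcd = o3d.io.read_point_cloud("lidar_frame.pcd")
image = cv2.imread("camera_frame.png")
points = np.asarray(pcd.points)                     # (N, 3) LiDAR points in meters

# Extrinsics exported by the tool (placeholder values); assumed x-y-z Euler order.
R = Rotation.from_euler("xyz", [-91.676, 1.263, 179.204], degrees=True).as_matrix()
t = np.array([0.12, -0.05, 0.30])
K = np.array([[1000.0, 0.0, 640.0],                 # placeholder camera intrinsic matrix
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Transform LiDAR points into the camera frame and keep points in front of the camera.
cam_points = points @ R.T + t
cam_points = cam_points[cam_points[:, 2] > 0]

# Project with the pinhole model (distortion ignored in this sketch).
uv = (K @ cam_points.T).T
uv = (uv[:, :2] / uv[:, 2:3]).astype(int)

# Draw the projected points that fall inside the image.
h, w = image.shape[:2]
for u, v in uv:
    if 0 <= u < w and 0 <= v < h:
        cv2.circle(image, (u, v), 1, (0, 255, 0), -1)
cv2.imwrite("projection_overlay.png", image)
```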
