LiDAR-LiDAR Calibration API

Last updated 1 month ago

Introduction

The API requires the client to upload the PCDs and configuration for LiDAR-LiDAR setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.

  1. The client makes an Upload and calibrate API call, which uploads the files and runs the calibration algorithm on the uploaded lidar files with the given target configuration.

  2. The calibration process completed without errors if the Upload and calibrate response contains dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.

  3. The client can fetch the extrinsic parameters using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.

Folder Structure

Lidar frames from both lidars are needed to run the calibration.

  1. Place the lidar frame from the first lidar in the lidar_1 folder and the lidar frame from the second lidar in the lidar_2 folder. Provide the mappings of the corresponding pcds in the config.

  2. config.json contains the configuration details of the calibration.

Note: The folder structure is optional. Users can place all files in the main directory and zip it.
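
For reference, the archive can be assembled with a short script. A minimal Python sketch (the helper name make_dataset_zip and the file names are illustrative, matching the demonstration names above):

```python
import zipfile

def make_dataset_zip(paths, out="dataset.zip"):
    # Zip config.json and the pcd files, preserving relative paths so the
    # archive entries match the paths referenced in config.json's "data" section.
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in paths:
            zf.write(path, arcname=path)

# Typical contents: config.json plus one folder per lidar, e.g.
# make_dataset_zip(["config.json", "lidar_1/1.pcd", "lidar_2/2.pcd"])
```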

Note

  1. The names of the folders and the lidar files shown here are for demonstration purposes. Avoid spaces in folder and lidar filenames.

  2. The name of the JSON file must be config.json (case sensitive).

config.json for target-based calibration

{
    "calibration_name": "multiple lidars dataset",
    "calibration_type": "multi_lidar_calibration",
    "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
    "multi_target": true,
    "lidar_1":
    {
        "name": "first lidar",
        "laser_channels": 16,
        "height": 0.76
    },
    "lidar_2":
    {
        "name": "second lidar",
        "laser_channels": 16,
        "height": 0.76
    },
    "targets":
    {
        "0":
        {
            "length": 0.6,
            "width": 1.0,
            "tilted": false
        },
        "1":
        {
            "length": 1.223,
            "width": 1.223,
            "tilted": false
        },
        "2":
        {
            "length": 1.223,
            "width": 1.223,
            "tilted": false
        }
    },
    "perform_auto_detection": true,
    "initial_estimates":
    {
        "roll": 0,
        "pitch": 0,
        "yaw": 0,
        "px": 0,
        "py": 0,
        "pz": 0
    },
    "data":
    {
        "lidar_1": "lidar_1/1.pcd",
        "lidar_2": "lidar_2/2.pcd"
    }
}

Sample config.json

config.json for targetless calibration

{
    "calibration_name": "multiple lidars dataset",
    "calibration_type": "multi_lidar_calibration",
    "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
    "multiple_pcds_per_lidar": true,
    "is_target_based": false,
    "algorithm_name": "ndt",
    "voxel_size": 0.5,
    "max_correspondance": 0.2,
    "lidar_1":
    {
        "name": "first lidar",
        "ground_plane":[
        [
            -0.025,
            0.004,
            0.999,
            1.53
        ],[
            -0.025,
            0.004,
            0.999,
            1.53
        ]]
    },
    "lidar_2":
    {
        "name": "second lidar",
        "ground_plane":[
        [
            0.01,
            -0.0075,
            0.997,
            1.93
        ],[
            0.01,
            -0.0075,
            0.997,
            1.93
        ]]
    },
    "initial_estimates":
    {
        "roll": 0,
        "pitch": 0,
        "yaw": 0,
        "px": 0,
        "py": 0,
        "pz": 0
    },
    "data":
    {
        "mappings":
        [
        [
        "lidar1/lidar1pcd1.pcd",
        "lidar2/lidar2pcd1.pcd"
        ],
        [
        "lidar1/lidar1pcd2.pcd",
        "lidar2/lidar2pcd2.pcd"
        ]
        ]
    }
}

Sample config.json

config.json key description

Key
Value type
Description

calibration_name

string

Name of calibration

calibration_type

string

Non-editable field. Value should be multi_lidar_calibration

calibration_group_id

string

This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.

multi_target

boolean

true: if multiple targets are used

false: if single target is used

is_target_based

boolean

true: if the calibration uses the target-based approach

false: if the calibration uses the targetless approach

algorithm_name

string

The algorithm used to perform the calibration. Supported values: 'gicp', 'ndt', 'custom_gicp'. gicp works best for dense point clouds; ndt works best for sparse point clouds; custom_gicp is a modified version of gicp that works best when there is a good amount of ground points in both lidar frames.

voxel_size

double

This key is required only when the ndt algorithm is selected. The voxel_size value is adjusted depending on the indoor/outdoor environment.

Note:

  1. For outdoor environments it is preferable to use a smaller voxel_size, and for indoor environments a larger voxel_size is preferred.

  2. If voxel_size is not given, a default value of 0.5 is used.

max_correspondance

double

This key is required only when the custom_gicp algorithm is selected.

Note:

  1. The accepted range is 0 to 1.

  2. If max_correspondance is not given, a default value of 0.2 is used.

lidar_1

Object

name: the name given by the client to lidar_1 (the first lidar). The client can change it freely. type: string

laser_channels: the number of laser channels in lidar_1 (needed to auto-detect the board in the lidar frame). Supported values: 16, 32, 64, 128, and 256. type: int

height: the approximate height of lidar_1 from the ground. type: double

ground_plane: a list of lists, where each inner list is the equation of the ground plane in this lidar's frame of reference. Each inner list has size 4, with the convention that [a, b, c, d] describes the plane a*x + b*y + c*z + d = 0. Note: the ground plane equation is required only if the selected algorithm is custom_gicp.

lidar_2

Object

name: the name given by the client to lidar_2 (the second lidar). The client can change it freely. type: string

laser_channels: the number of laser channels in lidar_2 (needed to auto-detect the board in the lidar frame). Supported values: 16, 32, 64, 128, and 256. type: int

height: the approximate height of lidar_2 from the ground. type: double

ground_plane: a list of lists, where each inner list is the equation of the ground plane in this lidar's frame of reference. Each inner list has size 4, with the convention that [a, b, c, d] describes the plane a*x + b*y + c*z + d = 0. Note: the ground plane equation is required only if the selected algorithm is custom_gicp.

targets

Object

A dictionary of dictionaries, with each entry describing one target's properties.

length

double

length of the target used for calibration

width

double

width of the target used for calibration

tilted

boolean

true: if the board is tilted to the right by up to 45 degrees

false: if the board is not tilted

perform_auto_detection

boolean

true: if automatic board detection should be performed for the boards in the point cloud. The laser_channels property must be provided for both lidars for this to work. false: if auto board detection is not required.

initial_estimates

Object with all values as double

This is an optional field. The initial estimates that are optimised during calibration to obtain the extrinsic parameters: roll, pitch, yaw, px, py, pz. Note: roll, pitch, and yaw should be in degrees; px, py, and pz should be in meters. Delete this key if initial estimates are not available or should not be used during calibration.

data

Object

It stores the data for the lidar files to be uploaded.

lidar_1: the relative path of the pcd file (target-based) or folder (targetless) corresponding to the first lidar. lidar_2: the relative path of the pcd file (target-based) or folder (targetless) corresponding to the second lidar.

multiple_pcds_per_lidar

boolean

Optional argument, in case you have an old zip file with a single pair of lidar pcds. Default: true

mappings

list of lists

Each value is a pair of paths: the first entry corresponds to lidar_1 and the second to lidar_2.

translation_bound

float

Optional argument. If the algorithm used is custom_gicp, you can pass this argument; it represents the maximum allowed translation difference from the initial estimates.

rotation_bound

float

Optional argument. If the algorithm used is custom_gicp, you can pass this argument; it represents the maximum allowed rotation difference from the initial estimates.
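
To illustrate how the keys above fit together, a targetless config.json can be generated programmatically. A minimal Python sketch (the helper name targetless_config and its defaults are ours, not part of the API):

```python
import json

def targetless_config(pcd_pairs, algorithm="gicp"):
    # Build a minimal targetless multi-lidar config.json dict.
    # pcd_pairs: list of (lidar_1_path, lidar_2_path) pairs, relative to the zip root.
    cfg = {
        "calibration_name": "multiple lidars dataset",
        "calibration_type": "multi_lidar_calibration",  # non-editable value
        "multiple_pcds_per_lidar": True,
        "is_target_based": False,
        "algorithm_name": algorithm,  # 'gicp', 'ndt', or 'custom_gicp'
        "lidar_1": {"name": "first lidar"},
        "lidar_2": {"name": "second lidar"},
        "data": {"mappings": [list(p) for p in pcd_pairs]},
    }
    if algorithm == "ndt":
        cfg["voxel_size"] = 0.5  # default; tune for indoor/outdoor scenes
    return cfg

with open("config.json", "w") as f:
    json.dump(targetless_config([("lidar1/a.pcd", "lidar2/a.pcd")], "ndt"), f, indent=4)
```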

Quickstart

Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create Access Tokens in the UI and use those instead. clientId is a path parameter in most API calls, and the auth token must be prefixed with "Bearer " and passed in the 'Authorization' header of every API request.

Upload file and calibrate

This POST API call uploads a zip file to the server and runs the calibration algorithm. It returns dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats in the response.

https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset

Request

Path parameters

Parameter name
Parameter type
Description

clientId

string

ClientId obtained from Deepen AI

Body

Key
Value
Description

file

.zip file

Zip file containing config and pcds in a suitable format
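
A sketch of the call using only the Python standard library, with the multipart body assembled by hand. The form field name "file" follows the Body table above; the helper names are ours:

```python
import json
import urllib.request
import uuid

API_BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def multipart_body(zip_bytes, boundary):
    # Wrap the zip payload in a single-field multipart/form-data body.
    head = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="file"; filename="dataset.zip"\r\n'
        "Content-Type: application/zip\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + zip_bytes + tail

def upload_and_calibrate(client_id, token, zip_path):
    # POST the dataset zip and return the parsed calibration response.
    url = f"{API_BASE}/clients/{client_id}/calibration_dataset"
    boundary = uuid.uuid4().hex
    with open(zip_path, "rb") as f:
        body = multipart_body(f.read(), boundary)
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```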

Response for target-based calibration

{
    "dataset_id": "XXXXXXXXXXXXXXXXXXXXXXXX",
    "calibration_algorithm_version": "target_based:v1",
    "extrinsic_parameters": {
        "roll": 0.4239873552372359,
        "pitch": -0.23771056916989275,
        "yaw": 0.03574130775730624,
        "px": -0.20099097249221035,
        "py": 0.013142481183883614,
        "pz": -0.019420891404400203
    },
    "error_stats": {
        "translation_error": 0.0007433547143350861,
        "rotation_error": 0.2269911048682634
    },
    "INFO": "Auto detection worked well on this dataset."
}
Key
Description

dataset_id

A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version

The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_parameters

The extrinsic parameters from the first lidar to the second lidar for the given calibration setup.

error_stats

Translation error indicates the distance between the centers of the boards.

Rotation error indicates the angle between the target planes.

Note: If initial estimates are provided, error_stats can't be calculated.

INFO

General information about the dataset (for example, whether auto-detection worked on this dataset).

Response for targetless calibration

{
    "dataset_id": "XXXXXXXXXXXXXXXXXXXXXXXX",
    "calibration_algorithm_version": "custom_gicp:v1",
    "extrinsic_parameters": {
        "roll": 0.4239873552372359,
        "pitch": -0.23771056916989275,
        "yaw": 0.03574130775730624,
        "px": -0.20099097249221035,
        "py": 0.013142481183883614,
        "pz": -0.019420891404400203
    },
    "error_stats":{
        "estimated_error_value": 0.4567592,
        "inlier_rmse": 0.08352666165069443
    }
}
Key
Description

dataset_id

A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version

The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_parameters

The extrinsic parameters from the first lidar to the second lidar for the given calibration setup.

estimated_error_value

An estimate of the error in the extrinsic parameters, derived from the fitness score of the algorithm used.

Get Extrinsic Parameters

This GET API call returns dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.

https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters

Request

Path parameters

Parameter name
Parameter type
Description

datasetId

string

datasetId obtained from the response of Upload file and calibrate API.
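
A stdlib-only sketch of the call (the helper names extrinsic_url and get_extrinsic_parameters are ours):

```python
import json
import urllib.request

API_BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def extrinsic_url(dataset_id):
    # datasetId is a path parameter of the get-extrinsic-parameters endpoint.
    return f"{API_BASE}/datasets/{dataset_id}/extrinsic_parameters"

def get_extrinsic_parameters(dataset_id, token):
    # Fetch extrinsic parameters for a previously calibrated dataset.
    req = urllib.request.Request(
        extrinsic_url(dataset_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```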

Response for target-based approach

{
    "dataset_id": "XXXXXXXXXXXXXXXXXXXXXXXX",
    "calibration_algorithm_version": "target_based:v1",
    "extrinsic_parameters": {
        "roll": 0.4239873552372359,
        "pitch": -0.23771056916989275,
        "yaw": 0.03574130775730624,
        "px": -0.20099097249221035,
        "py": 0.013142481183883614,
        "pz": -0.019420891404400203
    },
    "error_stats": {
        "translation_error": 0.0007433547143350861,
        "rotation_error": 0.2269911048682634
    }
}
Key
Description

dataset_id

A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version

The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_parameters

roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.

error_stats

translation error is given in meters and rotation error is given in degrees.
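
Given those units, a response can be turned into a 4x4 homogeneous transform. A sketch assuming the common intrinsic Z-Y-X composition R = Rz(yaw) · Ry(pitch) · Rx(roll); the API does not state its rotation convention, so verify this against your own setup before relying on it:

```python
import math

def extrinsics_to_matrix(roll, pitch, yaw, px, py, pz):
    # Angles in degrees, translations in meters, per the API response.
    r, p, y = (math.radians(a) for a in (roll, pitch, yaw))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll), assembled term by term
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, px],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, py],
        [-sp,     cp * sr,                cp * cr,                pz],
        [0.0,     0.0,                    0.0,                    1.0],
    ]
```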

Response for targetless approach

{
    "dataset_id": "XXXXXXXXXXXXXXXXXXXXXXXX",
    "calibration_algorithm_version": "custom_gicp:v1",
    "extrinsic_parameters": {
        "roll": 0.4239873552372359,
        "pitch": -0.23771056916989275,
        "yaw": 0.03574130775730624,
        "px": -0.20099097249221035,
        "py": 0.013142481183883614,
        "pz": -0.019420891404400203
    },
    "error_stats": {
         "estimated_error_value": 0.4567592,
         "inlier_rmse": 0.08352666165069443
    }
}
Key
Description

dataset_id

A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version

The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_parameters

roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.

estimated_error_value

An estimate of the error in the extrinsic parameters, derived from the fitness score of the algorithm used.

How to get Access Tokens is described on the Access token for APIs page.
