Lidar Camera Calibration API

Introduction

The API requires the client to upload the images, the lidar frames as PCD files (pcap, csv, and bin are also supported), and the camera-setup configuration in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
  1. The client makes an Upload and calibrate API call, which uploads the files and runs the calibration algorithm on the images and lidar files for the given configuration.
  2. The calibration process completed without errors if the Upload and calibrate API response contains dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images.
  3. The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images.

Folder Structure

We require image and lidar frame pairs from the camera and lidar for a given calibration.
  1. Place the images captured from the camera in a folder.
  2. Place the lidar frames captured from the LiDAR in a folder.
  3. config.json contains the configuration details of the calibration (intrinsic parameters, calibration name, etc.).
Note: The folder structure is optional. Users can place all files in the main directory and zip it.
All files in the main directory
Sub-folder structure
Contents of the camera folder
Contents of the LiDAR folder

Note

  1. The names of the folders and the images shown here are for demonstration purposes. Users should avoid spaces in the folder, lidar, and image filenames.
  2. The name of the JSON file should be config.json (case sensitive).

config.json for checkerboard

{
  "calibration_name": "Lidar camera calibration",
  "calibration_type": "lidar_camera_calibration",
  "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
  "multi_target": false,
  "max_correspondence": 0.05,
  "deep_optimization": false,
  "deep_optimization_approach": "custom_ransac",
  "is_lidar_inverted": false,
  "get_initial_estimates_from_lidar_autodetection": true,
  "mapping_pair_to_use_for_initial_estimates": 0,
  "target_matching_the_chosen_board": "0",
  "board_to_chose_from_left": 0,
  "lidar": {
    "name": "lidar"
  },
  "extrinsic_camera_coordinate_system": "OPTICAL",
  "intrinsics": {
    "camera_name": "camera name",
    "fx": 4809.13303863791,
    "fy": 4804.6573641098275,
    "cx": 1994.0408528062305,
    "cy": 1441.0395643417517,
    "k1": -0.03563526645635081,
    "k2": 0.2338404159941449,
    "k3": -1.3671429904044254,
    "k4": 0,
    "k5": 0,
    "k6": 0,
    "p1": -0.002478228973939787,
    "p2": -0.0026861612981927407,
    "distortion_enabled": false,
    "lens_model": "pinhole"
  },
  "targets": {
    "0": {
      "horizontal_corners": 7,
      "vertical_corners": 8,
      "type": "checkerboard",
      "square_size": 0.12,
      "padding_right": 0.343,
      "padding_left": 0.22,
      "padding_top": 0.22,
      "padding_bottom": 0.22,
      "on_ground": false,
      "tilted": true,
      "ignore_top_edge": true
    }
  },
  "data": {
    "mappings": [
      ["camera/1.png", "lidar/1.pcd"],
      ["camera/2.png", "lidar/2.pcd"],
      ["camera/3.png", "lidar/3.pcd"],
      ["camera/4.png", "lidar/4.pcd"],
      ["camera/5.png", "lidar/5.pcd"],
      ["camera/6.png", "lidar/6.pcd"]
    ]
  },
  "extrinsic_params_initial_estimates": {
    "roll": -91.22985012342338,
    "pitch": -1.8101400401363152,
    "yaw": -87.84825901836496,
    "px": 0.06356787067597357,
    "py": -0.28854421270970754,
    "pz": -0.015338954542810408
  }
}

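Before zipping, it can be worth sanity-checking the config locally. The sketch below is a lightweight client-side check, not the server's validation; the function name and the set of checked keys are assumptions based on the rules described in this document:

```python
REQUIRED_INTRINSICS = {"fx", "fy", "cx", "cy", "lens_model"}

def check_config(config):
    """Return a list of problems found in a lidar-camera config dict.

    Verifies the calibration_type, the required intrinsics, and that every
    mapping entry is an [image, lidar] pair with no spaces in the paths.
    """
    problems = []
    if config.get("calibration_type") != "lidar_camera_calibration":
        problems.append("calibration_type must be 'lidar_camera_calibration'")
    missing = REQUIRED_INTRINSICS - set(config.get("intrinsics", {}))
    if missing:
        problems.append("intrinsics missing: %s" % sorted(missing))
    for pair in config.get("data", {}).get("mappings", []):
        if len(pair) != 2:
            problems.append("mapping %r is not an [image, lidar] pair" % (pair,))
        elif " " in pair[0] or " " in pair[1]:
            problems.append("mapping %r contains spaces" % (pair,))
    return problems
```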

config.json for charucoboard

{
  "calibration_name": "Lidar camera calibration",
  "calibration_type": "lidar_camera_calibration",
  "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
  "multi_target": false,
  "max_correspondence": 0.05,
  "deep_optimization": false,
  "deep_optimization_approach": "custom_ransac",
  "is_lidar_inverted": false,
  "get_initial_estimates_from_lidar_autodetection": true,
  "mapping_pair_to_use_for_initial_estimates": 0,
  "target_matching_the_chosen_board": "0",
  "board_to_chose_from_left": 0,
  "lidar": {
    "name": "lidar"
  },
  "extrinsic_camera_coordinate_system": "OPTICAL",
  "intrinsics": {
    "camera_name": "camera name",
    "fx": 4809.13303863791,
    "fy": 4804.6573641098275,
    "cx": 1994.0408528062305,
    "cy": 1441.0395643417517,
    "k1": -0.03563526645635081,
    "k2": 0.2338404159941449,
    "k3": -1.3671429904044254,
    "k4": 0,
    "k5": 0,
    "k6": 0,
    "p1": -0.002478228973939787,
    "p2": -0.0026861612981927407,
    "distortion_enabled": false,
    "lens_model": "pinhole"
  },
  "targets": {
    "0": {
      "rows": 14,
      "columns": 14,
      "type": "charucoboard",
      "square_size": 0.08708571428,
      "marker_size": 0.06966857142,
      "dictionary": "5X5",
      "padding_right": 0,
      "padding_left": 0,
      "padding_top": 0,
      "padding_bottom": 0,
      "on_ground": true,
      "tilted": false,
      "ignore_top_edge": true
    },
    "1": {
      "rows": 13,
      "columns": 13,
      "type": "charucoboard",
      "square_size": 0.09378461538,
      "marker_size": 0.0750276923,
      "dictionary": "6X6",
      "padding_right": 0,
      "padding_left": 0,
      "padding_top": 0,
      "padding_bottom": 0,
      "on_ground": true,
      "tilted": false,
      "ignore_top_edge": true
    },
    "2": {
      "rows": 12,
      "columns": 12,
      "type": "charucoboard",
      "square_size": 0.1016,
      "marker_size": 0.08128,
      "dictionary": "7X7",
      "padding_right": 0,
      "padding_left": 0,
      "padding_top": 0,
      "padding_bottom": 0,
      "on_ground": true,
      "tilted": false,
      "ignore_top_edge": true
    },
    "3": {
      "rows": 13,
      "columns": 13,
      "type": "charucoboard",
      "square_size": 0.09378461538,
      "marker_size": 0.0750276923,
      "dictionary": "original",
      "padding_right": 0,
      "padding_left": 0,
      "padding_top": 0,
      "padding_bottom": 0,
      "on_ground": true,
      "tilted": false,
      "ignore_top_edge": true
    }
  },
  "data": {
    "mappings": [
      ["Images/charuco_2_1.jpg", "PCDs/charuco_2_1.pcd"],
      ["Images/charuco_2_2.jpg", "PCDs/charuco_2_2.pcd"],
      ["Images/charuco_2_3.jpg", "PCDs/charuco_2_3.pcd"]
    ]
  },
  "extrinsic_params_initial_estimates": {
    "roll": -91.22985012342338,
    "pitch": -1.8101400401363152,
    "yaw": -87.84825901836496,
    "px": 0.06356787067597357,
    "py": -0.28854421270970754,
    "pz": -0.015338954542810408
  }
}

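As the key description notes, marker_size in a charucoboard is normally 0.8 times square_size. A quick consistency check against target "0" of the sample config above (the helper name is illustrative):

```python
def marker_square_ratio(target):
    """Ratio of marker_size to square_size for a charucoboard target.

    A ratio far from the usual 0.8 often means the two values were
    swapped or mis-measured.
    """
    return target["marker_size"] / target["square_size"]

# Target "0" from the sample charucoboard config above:
sample = {"square_size": 0.08708571428, "marker_size": 0.06966857142}
```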

config.json key description

calibration_name (string): Name of the calibration.

calibration_type (string): Non-editable field. The value should be lidar_camera_calibration.

calibration_group_id (string): Optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.

get_initial_estimates_from_lidar_autodetection (boolean): Specifies whether the boards should be auto-detected in the lidar frames.

mapping_pair_to_use_for_initial_estimates (integer): Index (0-based) of the mapping pair, i.e. the file pair to use for calculating initial estimates when initial estimates are not provided.

target_matching_the_chosen_board (string): Key of the target configuration (in targets) that should be used for the initial-estimates calculation.

board_to_chose_from_left (number): Optional parameter. When get_initial_estimates_from_lidar_autodetection is true and the multiple boards in use are the same size, this specifies which board, counted from the left, should be used for the initial-estimates calculation.

multi_target (boolean): true if multiple targets are used; false if a single target is used.

max_correspondence (double): Accepted range is from 0 to 1.

deep_optimization (boolean): Performs optimization of the board edges. true if tilted = true and deep optimization is needed; false if deep optimization is not required or tilted = false.

deep_optimization_approach (string): Accepted values are clustering and custom_ransac, the two approaches used in deep optimization. Users can select either one based on their requirement. The default value is clustering.

is_lidar_inverted (boolean): Indicates whether the point cloud is inverted. By default, the lidar is considered non-inverted.

lidar_name (string): The name given by the client to the lidar. The client can modify it as desired.

extrinsic_camera_coordinate_system (string): Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw). Accepted values:
  1. OPTICAL
  2. ROS_REP_103
  3. NED
The default value is OPTICAL.

camera_name (string): The name given by the client to the camera. The client can modify it as desired.

lens_model (string): The type of lens used by the camera. Accepted values:
  1. pinhole
  2. fisheye

fx (double): Focal length of the camera in the X-axis. Value in pixels.

fy (double): Focal length of the camera in the Y-axis. Value in pixels.

cx (double): Optical centre of the camera in the X-axis. Value in pixels.

cy (double): Optical centre of the camera in the Y-axis. Value in pixels.

distortion_enabled (boolean): When true, the distortion coefficients (k1, k2, k3, k4, p1, p2) are used by the calibration algorithm. The coefficients are not required when it is false.

k1, k2, k3, k4, p1, p2 (double): Distortion coefficients of the camera lens. Note:
  1. If the lens_model is pinhole, we require the k1, k2, k3, p1, and p2 values (k4 is not needed).
  2. If the lens_model is fisheye, we require the k1, k2, k3, and k4 values (p1 and p2 are not needed).
  3. These parameters are not required if distortion_enabled is false.

targets (object): A dictionary of dictionaries, each holding the properties of one target.

type (string): The type of target used. Accepted values:
  1. checkerboard
  2. charucoboard

x (or) horizontal_corners (integer): Number of horizontal corners in the checkerboard (needed if type = checkerboard).

y (or) vertical_corners (integer): Number of vertical corners in the checkerboard (needed if type = checkerboard).

rows (integer): Number of horizontal squares in the charucoboard (needed if type = charucoboard).

columns (integer): Number of vertical squares in the charucoboard (needed if type = charucoboard).

square_size (double): Size of each square in meters.

marker_size (double): Size of a marker in the charucoboard, in meters. Normally 0.8 times the square size (needed if type = charucoboard).

dictionary (string): The charuco dictionary of the target. Supported values:
  1. 5X5
  2. 6X6
  3. 7X7
  4. original
Needed if type = charucoboard.

padding_right (double): Padding to the right of the board.

padding_left (double): Padding to the left of the board.

padding_top (double): Padding to the top of the board.

padding_bottom (double): Padding to the bottom of the board.

on_ground (boolean): true if the board is kept on the ground; false if it is not.

tilted (boolean): true if the board is tilted; false if it is not.

ignore_top_edge (boolean): A field to improve the accuracy of deep optimization. Set it to true if the top part of the board is missing in the lidar frame; otherwise set it to false. The default is false.

data (object): Stores the data related to the mapping of the camera and lidar files.

mappings (list of lists): A list of lists, where each sub-list is a pair containing the paths of an image and a pcd that belong together. Note:
  1. The first element in the pair should be the image path.
  2. The second element should be the lidar frame path.
  3. The client can name their images and lidar frames as they want, but the names must match the mapping list and the files must be present at the provided paths.

extrinsic_params_initial_estimates (object with all values as double): The estimated extrinsic parameters, which are optimised during the calibration process: roll, pitch, yaw, px, py, pz.
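The distortion-coefficient rules above can be encoded as a small helper. This is a sketch; the function name is an assumption, not part of the API:

```python
def required_distortion_keys(lens_model, distortion_enabled=True):
    """Distortion coefficients the config needs, per the rules above.

    pinhole -> k1, k2, k3, p1, p2 (k4 unused)
    fisheye -> k1, k2, k3, k4 (p1, p2 unused)
    none at all when distortion_enabled is false.
    """
    if not distortion_enabled:
        return set()
    if lens_model == "pinhole":
        return {"k1", "k2", "k3", "p1", "p2"}
    if lens_model == "fisheye":
        return {"k1", "k2", "k3", "k4"}
    raise ValueError("unknown lens_model: %r" % lens_model)
```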

Quickstart

Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with "Bearer" and passed in the 'Authorization' header of all API requests. How to get Access Tokens is described at the following link: Access token for APIs
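A minimal helper for the header described above (the token value passed in is a placeholder obtained from Deepen AI):

```python
def auth_headers(token):
    """Headers for Deepen calibration API calls: the auth token goes in
    the Authorization header, prefixed with "Bearer"."""
    return {"Authorization": "Bearer " + token}
```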

Upload file and calibrate

This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset

Request

Path parameters

clientId (string): The clientId obtained from Deepen AI.

Body

file (.zip file): Zip file containing the config and images in a suitable format.
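A hedged sketch of the call in Python. The requests library is a third-party dependency (pip install requests); the helper names and the local dataset.zip path are illustrative:

```python
BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def upload_url(client_id):
    """URL for the 'Upload file and calibrate' POST call."""
    return f"{BASE}/clients/{client_id}/calibration_dataset"

def upload_and_calibrate(client_id, token, zip_path):
    """POST the dataset zip under the multipart form key "file" and
    return the parsed JSON response."""
    import requests  # third-party: pip install requests
    with open(zip_path, "rb") as f:
        resp = requests.post(
            upload_url(client_id),
            headers={"Authorization": "Bearer " + token},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()
```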

Response

{
  "dataset_id": "XXXXXXXXXXXXXXXXX",
  "calibration_algorithm_version": "XXXXXXXXXXXXXXXXX",
  "extrinsic_camera_coordinate_system": "OPTICAL",
  "extrinsic_parameters": {
    "roll": -90.47755237974575,
    "pitch": -0.38434110269976385,
    "yaw": -87.95967045393508,
    "px": 0.06958801619530329,
    "py": -0.28251980028661544,
    "pz": -0.010306058948604074
  },
  "error_stats": {
    "translation_error": 0.04085960836364045,
    "plane_translation_error": 0.00832952204,
    "rotation_error": 0.7512576778920595,
    "reprojection_error": 27.18615944133744
  },
  "projected_images": "XXXXXXXXXXurl_to_download_projected_imagesXXXXXX"
}
dataset_id: A unique value identifying the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_camera_coordinate_system: Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw).

extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.

error_stats:
  1. translation_error: Mean difference between the centroid of the checkerboard/charucoboard points in the LiDAR and the centroid of the corners projected into 3-D from the image.
  2. plane_translation_error: Mean Euclidean distance between the centroid of the corners projected into 3-D from the image and the plane of the checkerboard/charucoboard in the LiDAR.
  3. rotation_error: Mean difference between the normals of the checkerboard/charucoboard in the point cloud and of the corners projected into 3-D from the image.
  4. reprojection_error: Mean difference between the centroid of the image corners and the lidar checkerboard/charucoboard points projected onto the image.

projected_images: A signed URL to download the images with the corresponding lidar points projected onto them using the extrinsics obtained at the end of the calibration. This URL expires 7 days after it is generated. The image below shows an example of this projection.
Image with the points from the lidar frame projected onto the image frame using the extrinsics obtained from the calibration
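To use the returned extrinsic_parameters, they typically need to be assembled into a 4x4 homogeneous transform. The sketch below assumes the rotation is applied as Rz(yaw) · Ry(pitch) · Rx(roll); the document does not state the rotation order, so verify this convention against your coordinate system before relying on it:

```python
import math

def extrinsics_to_matrix(params):
    """Build a 4x4 homogeneous transform from extrinsic_parameters
    (roll/pitch/yaw in degrees, px/py/pz in meters).

    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll); this rotation order is
    an assumption, not stated by the API docs.
    """
    r, p, y = (math.radians(params[k]) for k in ("roll", "pitch", "yaw"))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    # Standard ZYX (yaw-pitch-roll) rotation matrix.
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]
    t = [params["px"], params["py"], params["pz"]]
    # Append the translation column and the homogeneous bottom row.
    return [R[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]
```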

Get Extrinsic Parameters

This GET API call returns dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters/{extrinsic_camera_coordinate_system}

Request

Path parameters

dataset_id (string): The dataset_id obtained from the response of the Upload file and calibrate API.

extrinsic_camera_coordinate_system (string): Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw). Accepted values:
  1. OPTICAL
  2. ROS_REP_103
  3. NED
The default value is OPTICAL.
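A small helper for composing the two URL forms above (the function name is illustrative; the request itself would carry the Authorization: Bearer header described in the Quickstart):

```python
ACCEPTED_SYSTEMS = ("OPTICAL", "ROS_REP_103", "NED")

def extrinsics_url(dataset_id, coordinate_system=None):
    """URL for the Get Extrinsic Parameters GET call.

    The coordinate-system path segment is optional; the server defaults
    to OPTICAL when it is omitted.
    """
    url = ("https://tools.calibrate.deepen.ai/api/v2/external/datasets/"
           f"{dataset_id}/extrinsic_parameters")
    if coordinate_system is not None:
        if coordinate_system not in ACCEPTED_SYSTEMS:
            raise ValueError("unsupported coordinate system: %r" % coordinate_system)
        url += "/" + coordinate_system
    return url
```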

Response

{
  "dataset_id": "XXXXXXXXXXXXXXXXX",
  "calibration_algorithm_version": "XXXXXXXXXXXXXXXXX",
  "extrinsic_camera_coordinate_system": "OPTICAL",
  "extrinsic_parameters": {
    "roll": -90.47755237974575,
    "pitch": -0.38434110269976385,
    "yaw": -87.95967045393508,
    "px": 0.06958801619530329,
    "py": -0.28251980028661544,
    "pz": -0.010306058948604074
  },
  "error_stats": {
    "translation_error": 0.04085960836364045,
    "plane_translation_error": 0.00832952204,
    "rotation_error": 0.7512576778920595,
    "reprojection_error": 27.18615944133744
  },
  "projected_images": "XXXXXXXXXXurl_to_download_projected_imagesXXXXXX"
}
dataset_id: A unique value identifying the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_camera_coordinate_system: Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw).

extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.

error_stats:
  1. translation_error: Mean difference between the centroid of the checkerboard/charucoboard points in the LiDAR and the centroid of the corners projected into 3-D from the image.
  2. plane_translation_error: Mean Euclidean distance between the centroid of the corners projected into 3-D from the image and the plane of the checkerboard/charucoboard in the LiDAR.
  3. rotation_error: Mean difference between the normals of the checkerboard/charucoboard in the point cloud and of the corners projected into 3-D from the image.
  4. reprojection_error: Mean difference between the centroid of the image corners and the lidar checkerboard/charucoboard points projected onto the image.

projected_images: A signed URL to download the images with the corresponding lidar points projected onto them using the extrinsics obtained at the end of the calibration.