To use the following APIs, you need an access token. For more details about access tokens, see: Access token for APIs
Targetless Overlapping Camera Calibration
Target Overlapping Camera Calibration
LiDAR-Camera Calibration
LiDAR-LiDAR Calibration
Vehicle-LiDAR Calibration
Radar-Camera Calibration
Target Camera-Vehicle Calibration
Calibration groups
Delete Calibrations
Global Optimiser
The API requires the client to upload the images and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images for the given configuration.
If the Upload and calibrate API call response contains dataset_id, extrinsic_parameters, and error_stats, the calibration process is completed without errors.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_parameters, and error_stats.
We require pairs of images from Camera-1 and Camera-2 for a given calibration.
Place the images captured from Camera-1 in a folder.
Place the images captured from Camera-2 in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The folder and image names shown here are for demonstration purposes. Users should avoid using spaces in folder and image names.
The name of the JSON file should be config.json (case-sensitive)
calibration_name
string
Name of the calibration
calibration_type
string
Non-editable field.
*Value should be stereo_camera_calibration
calibration_approach
string
Non-editable field.
*Value should be targetless
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
version
integer
Non-editable field. *Value should be 1.
type
string
Non-editable field
Describes the kind of sensor. *Value should be camera
camera_name
string
It is the name given by the client to the camera. The client can change it as desired.
lens_model
string
Describes the type of lens used by the camera. Accepted values
pinhole
fisheye
order
int
An integer value to differentiate Camera-1 and Camera-2 inputs.
order = 1 for Camera-1
order = 2 for Camera-2
distance_between_two_cameras
double
Distance between Camera-1 and Camera-2 in meters.
fx
double
Focal length of the camera in the X-axis. Value in pixels.
fy
double
Focal length of the camera in the Y-axis. Value in pixels.
cx
double
Optical centre of the camera in the X-axis. Value in pixels.
cy
double
Optical centre of the camera in the Y-axis. Value in pixels.
distortion_enabled
boolean
Makes use of distortion coefficients (k1, k2, k3, k4, p1, p2) for the calibration algorithm when set true. Distortion coefficients (k1, k2, k3, k4, p1, p2) are not required if it is false.
k1, k2, k3, k4, p1, p2
double
These are the values for distortion coefficients of the camera lens.
Note:
If the lens_model is pinhole, we require k1, k2, k3, p1, and p2 values (k4 is not needed)
If the lens_model is fisheye, we require k1, k2, k3, and k4 values (p1 and p2 are not needed)
These parameters are not required if distortion_enabled is false.
data
dict
It stores the data related to mapping of the images
mappings
List of lists
It is a list of lists, where each sub-list is a tuple containing names of the images paired together.
Note:
The first element in the tuple should be the image path from the first camera (Camera-1)
The second element in the tuple should be the image path from the second camera (Camera-2).
The client can name their images as they want, but they must have the same name in the mapping list and be present in the suitable folder.
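Putting the keys described above together, here is an illustrative config.json for a targetless stereo calibration. The exact nesting of the two camera blocks (the "camera_1"/"camera_2" keys) is an assumption in this sketch, and all folder names, image names, and intrinsic values are placeholders:

```json
{
  "calibration_name": "demo_stereo_targetless",
  "calibration_type": "stereo_camera_calibration",
  "calibration_approach": "targetless",
  "version": 1,
  "camera_1": {
    "type": "camera",
    "camera_name": "left_camera",
    "lens_model": "pinhole",
    "order": 1,
    "distance_between_two_cameras": 0.12,
    "fx": 1450.0,
    "fy": 1450.0,
    "cx": 960.0,
    "cy": 540.0,
    "distortion_enabled": false
  },
  "camera_2": {
    "type": "camera",
    "camera_name": "right_camera",
    "lens_model": "pinhole",
    "order": 2,
    "distance_between_two_cameras": 0.12,
    "fx": 1452.0,
    "fy": 1452.0,
    "cx": 958.0,
    "cy": 542.0,
    "distortion_enabled": false
  },
  "data": {
    "mappings": [
      ["left_camera/image_0.png", "right_camera/image_0.png"],
      ["left_camera/image_1.png", "right_camera/image_1.png"]
    ]
  }
}
```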
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with "Bearer " and passed in the 'Authorization' header in all API requests. How to get Access Tokens can be found at the following link: Access token for APIs
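The header handling described above can be sketched in a small helper (the function name is ours, not part of the API):

```python
def auth_headers(access_token: str) -> dict:
    """Build the Authorization header used by all calibration API calls.

    The token must be prefixed with "Bearer " (note the trailing space).
    """
    return {"Authorization": f"Bearer {access_token}"}

print(auth_headers("<access-token>"))
```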
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and images in a suitable format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
Epiline Point Distance: Average pixel distance of each point to its corresponding projected epiline.
Epipolar Error: Proportional to the distance of a point from its epiline. Does not have a physical meaning. It is the residual error from minimizing the epipolar constraints while calculating the fundamental/essential matrix.
If the data is empty, the response is 'status': "error no files found"
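The upload call above can be sketched in Python using only the standard library. The helper names are ours, error handling is omitted, and the multipart field name "file" follows the request parameter described above:

```python
import json
import urllib.request
import uuid

BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def upload_url(client_id: str) -> str:
    # Upload-and-calibrate endpoint for a given client
    return f"{BASE}/clients/{client_id}/calibration_dataset"

def upload_and_calibrate(client_id: str, token: str, zip_path: str) -> dict:
    """POST the dataset zip as multipart/form-data and return the parsed
    JSON response (dataset_id, extrinsic_parameters, error_stats)."""
    boundary = uuid.uuid4().hex
    with open(zip_path, "rb") as f:
        payload = f.read()
    body = (
        (f"--{boundary}\r\n"
         'Content-Disposition: form-data; name="file"; filename="dataset.zip"\r\n'
         "Content-Type: application/zip\r\n\r\n").encode()
        + payload
        + f"\r\n--{boundary}--\r\n".encode()
    )
    req = urllib.request.Request(
        upload_url(client_id),
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```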
This GET API call returns dataset_id, extrinsic_parameters, and error_stats.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters
datasetId
string
datasetId obtained from the response of Upload file and calibrate API.
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
Epiline Point Distance: Average pixel distance of each point to its corresponding projected epiline.
Epipolar Error: Proportional to the distance of a point from its epiline. Does not have a physical meaning. It is the residual error from minimizing the epipolar constraints while calculating the fundamental/essential matrix
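The GET call above can be sketched the same way with the Python standard library (helper names are ours; error handling omitted):

```python
import json
import urllib.request

BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def extrinsic_parameters_url(dataset_id: str) -> str:
    # Get Extrinsic Parameters endpoint for a given dataset
    return f"{BASE}/datasets/{dataset_id}/extrinsic_parameters"

def get_extrinsic_parameters(dataset_id: str, token: str) -> dict:
    """Fetch dataset_id, extrinsic_parameters, and error_stats for a dataset."""
    req = urllib.request.Request(
        extrinsic_parameters_url(dataset_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```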
The API requires the client to upload the images and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images for the given configuration.
If the Upload and calibrate API call response contains dataset_id, extrinsic_parameters, and error_stats, the calibration process is completed without errors.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_parameters, and error_stats.
We require pairs of images from Camera-1 and Camera-2 for a given calibration.
Place the images captured from Camera-1 in a folder.
Place the images captured from Camera-2 in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
Note:
The folder and image names shown here are for demonstration purposes. Users should avoid using spaces in folder and image names.
The name of the JSON file should be config.json (case-sensitive)
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
If the data is empty, the response is 'status': "error no files found"
This GET API call returns dataset_id, extrinsic_parameters, and error_stats.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters
This page helps understand how to delete calibrations from a workspace.
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with “Bearer “ and passed to the ‘Authorization’ header in all API requests. How to get Access Tokens can be found on the following link:
This POST API sends the user-provided calibration_ids to the server to delete them.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/delete_calibration_dataset
Path parameters
Body
On success, it returns the message: "Deleted the calibration_ids provided in the config.json"
On failure, it returns the message: "Unable to perform the operation. Please check the calibration_ids in the config.json"
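The body schema is only implied by the success and failure messages above; assuming the body is a config.json carrying the ids under a calibration_ids key (the key name and nesting are our assumption), it might look like:

```json
{
  "calibration_ids": [
    "<calibration_id_1>",
    "<calibration_id_2>"
  ]
}
```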
The API requires the client to upload the PCD and configuration for the vehicle-lidar setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the lidar files for the given configuration.
The calibration process is completed without errors if the Upload and calibrate API call response contains dataset_id, extrinsic_parameters, and error_stats.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_parameters, and error_stats.
We require lidar frames for a given calibration.
Place the Lidar data captured from the LiDAR in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The folder and lidar file names shown here are for demonstration purposes. Users should avoid using spaces in folder and lidar filenames.
The name of the JSON file should be config.json (case-sensitive)
This POST API call sends a zip file to the server and runs the calibration algorithm. As the response, it returns dataset_id, extrinsic_parameters, and error_stats (error_stats won't be available for targetless calibration) to the user.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
This GET API call returns dataset_id, extrinsic_parameters, and error_stats.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters
The setup should prevent false detections. For example, other plane surfaces of similar shape may be identified as a board, which might give false solutions. You can always check the boards identified on the web application.
The API requires the client to upload the images, PCD (pcap, csv, and bin are also supported), and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images and lidar files for the given configuration.
The calibration process is completed without errors if the Upload and calibrate API call response contains dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images.
We require image and lidar frame pairs from the camera and lidar for a given calibration.
Place the images captured from the camera in a folder.
Place the Lidar data captured from the LiDAR in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The folder, lidar, and image names shown here are for demonstration purposes. Users should avoid using spaces in folder, lidar, and image filenames.
The name of the JSON file should be config.json (case-sensitive)
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
This GET API call returns dataset_id, extrinsic_camera_coordinate_system, extrinsic_parameters, error_stats, and projected_images to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters/{extrinsic_camera_coordinate_system}
Missing keys in the config.json (Example: order key is missing)
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with “Bearer “ and passed to the ‘Authorization’ header in all API requests. How to get Access Tokens can be found on the following link:
calibration_name
string
Name of the calibration
calibration_type
string
Non-editable field.
*Value should be stereo_camera_calibration
calibration_approach
string
Non-editable field.
*Value should be target
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
camera_name
string
It is the name given by the client to the camera. The client can change it as desired.
version
integer
Non-editable field. *Value should be 1.
lens_model
string
Describes the type of lens used by the camera. Accepted values
pinhole
fisheye
order
int
An integer value to differentiate Camera-1 and Camera-2 inputs.
order = 1 for Camera-1
order = 2 for Camera-2
fx
double
Focal length of the camera in the X-axis. Value in pixels.
fy
double
Focal length of the camera in the Y-axis. Value in pixels.
cx
double
Optical centre of the camera in the X-axis. Value in pixels.
cy
double
Optical centre of the camera in the Y-axis. Value in pixels.
distortion_enabled
boolean
Makes use of distortion coefficients (k1, k2, k3, k4, p1, p2) for the calibration algorithm when set true. Distortion coefficients (k1, k2, k3, k4, p1, p2) are not required if it is false.
k1, k2, k3, k4, p1, p2
double
These are the values for distortion coefficients of the camera lens.
Note:
If the lens_model is pinhole, we require k1, k2, k3, p1, and p2 values (k4 is not needed)
If the lens_model is fisheye, we require k1, k2, k3, and k4 values (p1 and p2 are not needed)
These parameters are not required if distortion_enabled is false.
target.type
string
Non-editable field.
*Value should be checkerboard
horizontal_corners
int
Total number of horizontal inner corners in the checkerboard
vertical_corners
int
Total number of vertical inner corners in the checkerboard
square_size
double
Size of each square in the checkerboard, in meters
data
dict
It stores the data related to mapping of the images
mappings
List of lists
It is a list of lists, where each sub-list is a tuple containing names of the images paired together.
Note:
The first element in the tuple should be the image path from the first camera (Camera-1)
The second element in the tuple should be the image path from the second camera (Camera-2).
The client can name their images as they want, but they must have the same name in the mapping list and be present in the suitable folder.
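Assembling the keys described above, here is an illustrative config.json for a target-based stereo calibration. The nesting of the "camera_1"/"camera_2"/"target" blocks is an assumption in this sketch, and all names, intrinsic values, and board dimensions are placeholders:

```json
{
  "calibration_name": "demo_stereo_target",
  "calibration_type": "stereo_camera_calibration",
  "calibration_approach": "target",
  "version": 1,
  "camera_1": {
    "camera_name": "left_camera",
    "lens_model": "pinhole",
    "order": 1,
    "fx": 1450.0,
    "fy": 1450.0,
    "cx": 960.0,
    "cy": 540.0,
    "distortion_enabled": false
  },
  "camera_2": {
    "camera_name": "right_camera",
    "lens_model": "pinhole",
    "order": 2,
    "fx": 1452.0,
    "fy": 1452.0,
    "cx": 958.0,
    "cy": 542.0,
    "distortion_enabled": false
  },
  "target": {
    "type": "checkerboard",
    "horizontal_corners": 9,
    "vertical_corners": 6,
    "square_size": 0.1
  },
  "data": {
    "mappings": [
      ["left_camera/image_0.png", "right_camera/image_0.png"],
      ["left_camera/image_1.png", "right_camera/image_1.png"]
    ]
  }
}
```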
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and images in the above format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
mean_reprojection_error
left: Mean of the pixel error when the detected corners from the right image are projected onto the left image using the calibration results
right: Mean of the pixel error when the detected corners from the left image are projected onto the right image using the calibration results
mean_rotation_error: Mean of the angle between the planes of the 3D projections in a 3D scene of the checkerboard from the left images and right images
mean_translation_error: Mean of the distance between the means of the 3D projections of the checkerboard in a 3D scene from the left images and right images
datasetId
string
datasetId obtained from the response of Upload file and calibrate API.
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
mean_reprojection_error
left: Mean of the pixel error when the detected corners from the right image are projected onto the left image using the calibration results
right: Mean of the pixel error when the detected corners from the left image are projected onto the right image using the calibration results
mean_rotation_error: Mean of the angle between the planes of the 3D projections in a 3D scene of the checkerboard from the left images and right images
mean_translation_error: Mean of the distance between the means of the 3D projections of the checkerboard in a 3D scene from the left images and right images
clientId
string
ClientId obtained from Deepen AI
file
config.json
Name of the calibration group
calibration_name
string
Name of the calibration
calibration_type
string
Non-editable field. *Value should be lidar_vehicle_calibration
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
multi_target
boolean
true: if multiple targets are used; false: if a single target is used
lidar_name
string
It is the name given by the client to the lidar. The client can change it as desired.
is_targetless_3d_lidar
boolean
true: for targetless calibration; false: for target-based calibration
slam_algorithm_to_use
string
This parameter is needed when is_targetless_3d_lidar is true. Accepted values are LOAM and ICP.
laser_channels
integer
Laser channels of the lidar used. Accepted values are 16, 32, 64 and 128
targets
Object
It is a dictionary of dictionaries, each having target properties. Accepted keys are left, right, front, and rear.
length
double
length of the board of the target in meters
width
double
width of the board of the target in meters
tilted
Boolean
true: if the board is tilted; false: if the board is not tilted
data
Object
It stores the data related to files of the lidar
files
Object
It is an Object where each key is a string: "file" in the multi-target case, or "left", "right", "front", or "rear"; each value contains the path to the file.
wheelbase
double
The length from the middle of the rear wheel to the middle of the front wheel on the same side
track
double
The length from the middle of the right front wheel to the middle of the left front wheel, or equivalently from the middle of the right rear wheel to the middle of the left rear wheel
vehicle_configuration
Object
Object which has all the measurements: vehicle_shape, wheelbase, track, front_wheel_overhang, rear_wheel_overhang
vehicle_shape
string
rectangle or trapezoid based on the shape of the vehicle
breadth
double
breadth of the board of the target in meters
front_wheel_overhang
double
Overhang from the middle of the front wheels to the front end of the vehicle, in meters
rear_wheel_overhang
double
Overhang from the middle of the rear wheels to the rear end of the vehicle, in meters
is_targetless_3d_lidar
boolean
true if the calibration is targetless; false for target-based calibration
is_2d_lidar
boolean
true if the calibration uses a 2D lidar; false if it uses a 3D lidar
auto_detect_lidar_board
boolean
true for auto-detecting the board in the point cloud; false otherwise. When using the API this is usually true, unless a bounding box is provided.
use_bounding_box_on_board_detection_failure
boolean
true if bounding box should be used when detection fails
bounding_box
Object of Objects
This is an object of objects, keyed by the target whose details are provided. It needs to be provided if we want to use a bounding box for board detection. Each target's bounding box is given by the keys xmin, xmax, ymin, ymax, zmin, zmax, which are the minimum and maximum values on each axis defining the box within which the target is identified. These values are in the lidar frame.
all_lidar_data
Object
Object with all the lidar data and board configuration
lidar_type
string
directional or 360, used with 4 boards and 3 boards respectively
laser_channels
Integer
Number of channels the lidar has.
lidar_fov_direction
string
If the lidar is directional, this defines the direction the lidar faces with respect to the vehicle. Supported values:
front
rear
left
right
front_board_distance
double
Distance from the front of the car to the board
left_board_distance
double
Distance from the left of the car to the board
right_board_distance
double
Distance from the right of the car to the board
rear_board_distance
double
Distance from the rear of the car to the board
target_distance
double
Approximate distance between the lidar and the centroid of the board, used to differentiate boards when more than one is of similar size. Needed only if multiple boards have the same size.
is_lidar_tilted
Boolean
If the lidar is tilted, ground auto-detection happens; otherwise, if the lidar ground height is given, the ground is detected at that distance from the lidar.
lidar_ground_height
number
Distance from the ground to the lidar when the lidar is parallel to the ground
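Assembling the keys above, here is an illustrative config.json for a target-based vehicle-lidar calibration. The nesting (in particular the "lidar" block) is an assumption in this sketch, and all names, file paths, and measurements are placeholders:

```json
{
  "calibration_name": "demo_vehicle_lidar",
  "calibration_type": "lidar_vehicle_calibration",
  "lidar": {
    "lidar_name": "top_lidar",
    "is_targetless_3d_lidar": false,
    "is_2d_lidar": false,
    "laser_channels": 64,
    "multi_target": true,
    "auto_detect_lidar_board": true,
    "targets": {
      "left":  {"length": 1.2, "width": 0.9, "tilted": false},
      "right": {"length": 1.2, "width": 0.9, "tilted": false},
      "front": {"length": 1.2, "width": 0.9, "tilted": false}
    },
    "data": {
      "files": {
        "file": "lidar/frame_0.pcd"
      }
    }
  },
  "vehicle_configuration": {
    "vehicle_shape": "rectangle",
    "wheelbase": 2.7,
    "track": 1.6,
    "front_wheel_overhang": 0.9,
    "rear_wheel_overhang": 1.0
  }
}
```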
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and pcd in a suitable format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
plane_distance_error is the mean of the distances from the plane LiDAR points to their respective planes.
datasetId
string
datasetId obtained from the response of Upload file and calibrate API.
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
plane_distance_error is the mean of the distances from the plane LiDAR points to their respective planes.
calibration_name
string
Name of the calibration
calibration_type
string
Non-editable field. Value should be lidar_camera_calibration
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
get_initial_estimates_from_lidar_autodetection
Boolean
Parameter to specify whether to auto-detect boards in the lidar point cloud
mapping_pair_to_use_for_initial_estimates
Integer
Index (0-based) of the mapping data, i.e. the file pair to use for calculating initial estimates when initial estimates are not provided
target_matching_the_chosen_board
String
String corresponding to the target configuration that we want to use for the initial-estimates calculation
board_to_chose_from_left
number
Optional parameter. When get_initial_estimates_from_lidar_autodetection is true and the multiple boards in use are of the same size, this specifies which board, counted from the left, should be considered for the initial-estimates calculation.
multi_target
boolean
true: if multiple targets are used; false: if a single target is used
max_correspondence
double
Accepted range is from 0 to 1
deep_optimization
Boolean
Performs optimisation for the board edges. true: if tilted = true and deep optimisation is needed; false: if deep optimisation is not required or tilted = false
deep_optimization_approach
string
Accepted values are clustering and custom_ransac, the two approaches used in deep optimization. Users can select either based on their requirement. The default value is clustering.
is_lidar_inverted
Boolean
It gives information about whether the point cloud is inverted or non-inverted. By default, we consider the lidar as non-inverted.
lidar_name
string
It is the name given by the client to the lidar. The client can change it as desired.
extrinsic_camera_coordinate_system
string
Camera coordinate system for extrinsic sensor angles (roll, pitch and yaw).
Accepted values
OPTICAL
ROS_REP_103
NED
Default value is OPTICAL
camera_name
string
It is the name given by the client to the camera. The client can change it as desired.
lens_model
string
Describes the type of lens used by the camera. Accepted values
pinhole
fisheye
fx
double
Focal length of the camera in the X-axis. Value in pixels.
fy
double
Focal length of the camera in the Y-axis. Value in pixels.
cx
double
Optical centre of the camera in the X-axis. Value in pixels.
cy
double
Optical centre of the camera in the Y-axis. Value in pixels.
distortion_enabled
boolean
Makes use of distortion coefficients (k1, k2, k3, k4, p1, p2) for the calibration algorithm when set true. Distortion coefficients (k1, k2, k3, k4, p1, p2) are not required if it is false.
k1, k2, k3, k4, p1, p2
double
These are the values for distortion coefficients of the camera lens. Note:
If the lens_model is pinhole, we require k1, k2, k3, p1, and p2 values (k4 is not needed)
If the lens_model is fisheye, we require k1, k2, k3, and k4 values (p1 and p2 are not needed)
These parameters are not required if distortion_enabled is false.
targets
Object
It is a dictionary of dictionaries, each having target properties
type
string
Describes the type of target used. Accepted values
checkerboard
charucoboard
x (or) horizontal_corners
integer
Number of horizontal corners in the checkerboard (this property is needed if type = checkerboard)
y (or) vertical_corners
integer
Number of vertical corners in the checkerboard (this property is needed if type = checkerboard)
rows
integer
Number of horizontal squares in the charucoboard (this property is needed if type = charucoboard)
columns
integer
Number of vertical squares in the charucoboard (this property is needed if type = charucoboard)
square_size
double
Size of each square in meters
marker_size
double
The size of a marker in the charucoboard in meters (normally 0.8 times the square size; this property is needed if type = charucoboard)
dictionary
string
It is the string that defines the charuco dictionary of the target. We support
5X5
6X6
7X7
original
This property is needed if the type is charucoboard
padding_right
double
padding to the right of the board
padding_left
double
padding to the left of the board
padding_top
double
padding to the top of the board
padding_bottom
double
padding to the bottom of the board
on_ground
Boolean
true: if the board is kept on ground
false: if the board is not on the ground
tilted
Boolean
true: if the board is tilted; false: if the board is not tilted
ignore_top_edge
Boolean
This is a field to improve the accuracy of deep optimization. If the top part of the board is missing in the lidar frame, set this flag to true; otherwise false. The default is false.
data
Object
It stores the data related to mapping of the camera and the lidar files
mappings
List of lists
It is a list of lists, where each sub-list is a tuple containing names of the image and pcd paired together.
Note:
The first element in the tuple should be the image path
The second element in the tuple should be the lidar frame path from the lidar
The client can name their image and lidar frame as they want, but they must have the same name in the mapping list and be present in the provided path
extrinsic_params_initial_estimates
Object with all values as double
The estimated extrinsic parameters which will be optimized during calibration process. All values are required.
roll
pitch
yaw
px
py
pz
extrinsic_params_tolerance
Object with all values as double
Constraints on the range of extrinsic parameters. All values are optional: px_min, px_max, py_min, py_max, pz_min, pz_max, roll_min, roll_max, pitch_min, pitch_max, yaw_min, yaw_max. If any constraint is given, it is required that the initial estimate (extrinsic_params_initial_estimates) be within the min and max bounds. In addition, min values must always be <= max values.
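Assembling the keys above, here is an illustrative config.json for a LiDAR-camera calibration. The nesting of the "lidar", "camera", and "targets" blocks is an assumption in this sketch, and all names, paths, and numeric values are placeholders:

```json
{
  "calibration_name": "demo_lidar_camera",
  "calibration_type": "lidar_camera_calibration",
  "multi_target": false,
  "extrinsic_camera_coordinate_system": "OPTICAL",
  "lidar": {
    "lidar_name": "top_lidar",
    "is_lidar_inverted": false
  },
  "camera": {
    "camera_name": "front_camera",
    "lens_model": "pinhole",
    "fx": 1450.0,
    "fy": 1450.0,
    "cx": 960.0,
    "cy": 540.0,
    "distortion_enabled": false
  },
  "targets": {
    "target_1": {
      "type": "checkerboard",
      "horizontal_corners": 9,
      "vertical_corners": 6,
      "square_size": 0.1,
      "padding_left": 0.1,
      "padding_right": 0.1,
      "padding_top": 0.1,
      "padding_bottom": 0.1,
      "on_ground": false,
      "tilted": true
    }
  },
  "data": {
    "mappings": [
      ["images/image_0.png", "lidar/frame_0.pcd"]
    ]
  },
  "extrinsic_params_initial_estimates": {
    "roll": -90.0,
    "pitch": 0.0,
    "yaw": -90.0,
    "px": 0.0,
    "py": 0.0,
    "pz": 0.0
  }
}
```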
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and images in a suitable format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_camera_coordinate_system
Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw).
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean of difference between the centroid of points of checkerboard/charucoboard in the LiDAR and the projected corners in 3-D from an image
plane_translation_error: Mean of the Euclidean distance between the centroid of projected corners in 3-D from an image and plane of the checkerboard/charucoboard in the LiDAR
rotation_error: Mean of difference between the normals of the checkerboard/charucoboard in the point cloud and the projected corners in 3-D from an image
reprojection_error: Mean of difference between the centroid of image corners and projected lidar checkerboard/charucoboard points on the image in 3-D
projected_images
This is a signed URL to download the images with corresponding lidar points projected on them using the extrinsics obtained at the end of the calibration. This URL has an expiry of 7 days from the moment it is generated. The image below shows an example image for this projection.
dataset_id
string
dataset_id obtained from the response of Upload file and calibrate API.
extrinsic_camera_coordinate_system
string
Camera coordinate system for extrinsic sensor angles (roll, pitch and yaw).
Accepted values
OPTICAL
ROS_REP_103
NED
Default value is OPTICAL
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_camera_coordinate_system
Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw).
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean of difference between the centroid of points of checkerboard/charucoboard in the LiDAR and the projected corners in 3-D from an image
plane_translation_error: Mean of the Euclidean distance between the centroid of projected corners in 3-D from an image and plane of the checkerboard/charucoboard in the LiDAR
rotation_error: Mean of difference between the normals of the checkerboard/charucoboard in the point cloud and the projected corners in 3-D from an image
reprojection_error: Mean of difference between the centroid of image corners and projected lidar checkerboard/charucoboard points on the image in 3-D
projected_images
This is a signed URL to download the images with corresponding lidar points projected on them using the extrinsics obtained at the end of the calibration.
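The extrinsic parameters above (angles in degrees, translation in meters) are often easiest to use as a single homogeneous transform. The sketch below assumes the intrinsic rotation order Rz(yaw) · Ry(pitch) · Rx(roll); confirm the convention for your chosen extrinsic_camera_coordinate_system before relying on it, and note the sample values are illustrative, not real output.

```python
import math

def extrinsics_to_matrix(params):
    """Build a 4x4 homogeneous transform from the documented extrinsic fields.

    Assumes rotation order Rz(yaw) @ Ry(pitch) @ Rx(roll); verify this
    convention for your workspace before using the result.
    """
    r, p, y = (math.radians(params[k]) for k in ("roll", "pitch", "yaw"))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    # Rotation block of the transform (row-major 3x3).
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]
    t = [params["px"], params["py"], params["pz"]]
    # Append the translation column and the homogeneous row.
    return [R[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

# Illustrative extrinsic_parameters values (not real calibration output).
example = {"roll": 0.0, "pitch": 0.0, "yaw": 90.0, "px": 0.1, "py": 0.0, "pz": 0.35}
T = extrinsics_to_matrix(example)
```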
This page helps understand how to create new calibration groups, modify existing ones, and fetch all the existing ones from a workspace.
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with “Bearer “ and passed to the ‘Authorization’ header in all API requests. How to get Access Tokens can be found on the following link: Access token for APIs
This POST API sends the user-provided calibration group name to the server to create a new calibration group and returns calibration_group_name and calibration_group_id.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_group/create
clientId
string
ClientId obtained from Deepen AI
calibration_group_name
string
Name of the calibration group
calibration_group_name
Name of the calibration group created
calibration_group_id
A unique value to identify the calibration group. calibration_group_id can be used to add new datasets to the group.
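A minimal sketch of the create call above using only the standard library; the clientId, token, and group name are placeholders, and the body is assumed to be sent as JSON. The actual network call is left commented out.

```python
import json
import urllib.request

CLIENT_ID = "your_client_id"        # placeholder: obtained from Deepen AI
ACCESS_TOKEN = "your_access_token"  # placeholder: see Access token for APIs

url = ("https://tools.calibrate.deepen.ai/api/v2/external/clients/"
       f"{CLIENT_ID}/calibration_group/create")

# Assumption: the body is JSON-encoded; adjust if your workspace expects form data.
body = json.dumps({"calibration_group_name": "front_sensors"}).encode()
req = urllib.request.Request(
    url,
    data=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:
#     group = json.load(resp)
#     # group contains calibration_group_name and calibration_group_id
```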
This POST API updates the name of the calibration group associated with the calibration_group_id and returns calibration_group_name, calibration_group_id, and calibration_ids.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_group/modify
clientId
string
ClientId obtained from Deepen AI
calibration_group_id
string
Id of the calibration group to update
calibration_group_name
string
New name for the calibration group associated with the calibration_group_id
calibration_group_name
Updated name of the calibration group associated with the provided calibration_group_id
calibration_group_id
Data provided by the user in the request.
calibration_ids
List of calibrations (datasets) in the calibration group with the user-provided calibration_group_id in the request.
This GET API fetches existing calibration group details. It returns a list of groups, each with calibration_group_id, calibration_group_name, and calibration_ids.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{client_id}/calibration_groups
clientId
string
ClientId obtained from Deepen AI
calibration_group_name
Name of the calibration group
calibration_group_id
A unique value to identify the calibration group. calibration_group_id can be used to add new datasets to the group.
calibration_ids
List of calibrations (calibration_id and calibration_name) in the calibration group for the corresponding calibration_group_id.
upgrouped_datasets
List of calibrations (calibration_id and calibration_name) that are not part of any calibration group.
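The fetch call above is a plain authorized GET; a stdlib sketch follows, with the clientId and token as placeholders. The response key used in the commented-out loop is an assumption based on the fields documented above.

```python
import json
import urllib.request

CLIENT_ID = "your_client_id"        # placeholder
ACCESS_TOKEN = "your_access_token"  # placeholder

url = ("https://tools.calibrate.deepen.ai/api/v2/external/clients/"
       f"{CLIENT_ID}/calibration_groups")
req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)

# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
# # Top-level list key name assumed; inspect the raw response for your workspace.
# for group in data.get("calibration_groups", []):
#     print(group["calibration_group_id"], group["calibration_group_name"])
```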
This POST API adds a list of calibrations to a calibration group. On success, it returns a success message: "provided calibration_ids are added to the calibration group."
https://tools.calibrate.deepen.ai/api/v2/external/clients/{client_id}/calibration_group/add_calibrations
Path parameters
clientId
string
ClientId obtained from Deepen AI
Body
file
config.json file
config.json file contains calibration_group_id and calibration_ids to be added to the group.
On success, the API returns a success message: "provided calibration_ids are added to the calibration group."
This API call can be used to move calibrations from one group to another group.
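The add_calibrations endpoint takes a zip file whose only required content is the config.json described above. The sketch below builds that zip in memory; the group id and calibration ids are placeholders, and the multipart upload itself is only indicated in a comment.

```python
import io
import json
import zipfile

# Placeholder ids; replace with real values from your workspace.
config = {
    "calibration_group_id": "your_calibration_group_id",
    "calibration_ids": ["calibration_id_1", "calibration_id_2"],
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # The file name inside the zip must be exactly config.json (case sensitive).
    zf.writestr("config.json", json.dumps(config))
zip_bytes = buf.getvalue()

# POST zip_bytes as the "file" field (multipart/form-data) to
# .../clients/{client_id}/calibration_group/add_calibrations
# with the "Authorization: Bearer <token>" header.
```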
This POST API removes a list of calibrations from a calibration group. On success, it returns a success message: "provided calibration_ids are removed from the calibration group."
https://tools.calibrate.deepen.ai/api/v2/external/clients/{client_id}/calibration_group/remove_calibrations
Path parameters
clientId
string
ClientId obtained from Deepen AI
Body
file
config.json file
config.json file containing calibration_group_id and calibration_ids to be removed from the calibration group.
On success, it returns a success message: "provided calibration_ids are removed from the calibration group."
Overview: An access token allows an admin to access the different APIs in a specific workspace.
How to create an access token:
1. Go to the top panel, click on "More".
2. Click on "Generate new token" on the top right of the screen to start generating a token.
3. Fill in your note and select the scope of users allowed to access the token.
4. Click on "Generate token" to create the token.
5. This token does not expire; you have to revoke it manually.
How to view/revoke created token:
Once the token is created, go to "Developer Tokens" to view the token.
You can click on the token to copy it and click on the "revoke" button to revoke the token manually.
The API requires the client to upload the configuration for Global Optimisation in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes a loop_optimisation API call, which runs the loop optimisation algorithm on the calibrations for the given configuration.
The calibration process is completed without errors if the Loop optimisation API call response contains each calibration_id with its updated extrinsics and the residual error of the loop before and after optimisation.
config.json contains configuration details of the loop_optimisation (dataset_ids_for_loop_optimisation, lambda_val, renamed_sensor_names and optimisation_name)
Note
The name of the JSON file should be config.json
(case sensitive)
This POST API call sends a zip file to the server and runs the loop optimisation algorithm. As the response, it returns the updated extrinsics and the residual error before and after the optimisation.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/loop_optimisation
This GET API call returns the complete dataset information for an already created global optimisation.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters
The API requires the client to upload the images and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images and lidar files for the given configuration.
The calibration process is completed without errors if the Upload and calibrate API call response contains dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.
We require the images from the camera for a given calibration.
Place the images captured from the camera in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The names of the folders and the images shown here are for demonstration purposes. Users should avoid using spaces in the folder, lidar, and image filenames.
The name of the JSON file should be config.json
(case sensitive)
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, calibration_algorithm_version, extrinsic parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
This GET API call returns dataset_id, calibration_algorithm_version, extrinsic parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters
The API requires the client to upload the images and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images for the given configuration.
The calibration process is completed without errors if the Upload and calibrate API call response contains dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.
We require images from the camera and other configurations to calculate extrinsic parameters.
Place the images captured from the camera in a folder.
config.json contains configuration details of the calibration (intrinsic parameters, calibration name, etc.)
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The names of the folders and the images shown here are for demonstration purposes. Users should avoid using spaces in the folder and image filenames.
The name of the JSON file should be config.json
(case sensitive)
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
This GET API call returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats to the user as the response.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters/{extrinsic_camera_coordinate_system}
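The second URL above takes the coordinate system as a path parameter. A stdlib sketch of the request; the dataset id and token are placeholders, and the network call is left commented out.

```python
import urllib.request

DATASET_ID = "your_dataset_id"      # from the Upload and calibrate response
COORD_SYSTEM = "ROS_REP_103"        # one of OPTICAL, ROS_REP_103, NED
ACCESS_TOKEN = "your_access_token"  # placeholder

url = ("https://tools.calibrate.deepen.ai/api/v2/external/datasets/"
       f"{DATASET_ID}/extrinsic_parameters/{COORD_SYSTEM}")
req = urllib.request.Request(
    url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)
# with urllib.request.urlopen(req) as resp:
#     result = resp.read()  # JSON with extrinsic_parameters in COORD_SYSTEM
```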
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with "Bearer " (including the trailing space) and passed to the 'Authorization' header in all API requests. How to get Access Tokens can be found on the following link: Access token for APIs
dataset_ids_for_loop_optimisation
list of strings
Calibration ids that form a loop
optimisation_name (Optional)
string
Name of the optimisation. Default value: "Untitled_default_name"
renamed_sensor_names (Optional)
Object
Use this when the sensor names do not form a loop or there is no sensor name for a calibration; it defines the names of the first and second sensors used for loop optimisation. Each key in the object is the dataset id for which loop optimisation will be run. Each value is a list of exactly two strings: the new names of the sensors, in order, to be used for loop finding.
lambda_val (Optional)
float
The approximate correlation between angle and distance. Default value is 1, i.e. 1 degree ~ 1 meter.
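The config.json keys above can be assembled as in the sketch below; all dataset ids and sensor names are placeholders. The exactly-two-names rule for renamed_sensor_names is checked locally before packaging.

```python
import json

# Placeholder ids and names; replace with values from your workspace.
config = {
    "dataset_ids_for_loop_optimisation": [
        "dataset_id_1", "dataset_id_2", "dataset_id_3",
    ],
    "optimisation_name": "front_loop",  # optional
    "lambda_val": 1.0,                  # optional: 1 degree ~ 1 meter
    "renamed_sensor_names": {           # optional
        "dataset_id_1": ["sensor_name_a", "sensor_name_b"],
    },
}

# Each renamed entry must be a list of exactly two sensor names, in order.
for dataset_id, names in config["renamed_sensor_names"].items():
    assert len(names) == 2, f"{dataset_id}: expected exactly two sensor names"

config_json = json.dumps(config, indent=2)  # contents of config.json
```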
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and pcd in a suitable format
residual_error
Key for the residual error. Values are the residual error before and after the optimisation.
roll, pitch, yaw, px, py, and pz
These are the extrinsic parameters. roll, pitch, and yaw are given in degrees, and px, py, and pz are given in meters.
original_extrinsics
Extrinsics of the calibration before optimisation
optimised_extrinsics
Extrinsics after loop optimisation is done
first_sensor
First sensor name, based on the renaming you provided
second_sensor
Second sensor name, based on the renaming you provided
calibration_type
calibration_type of the calibration id
datasetId
string
datasetId obtained from the response of Global Optimisation
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
optimisation_name
The name of the optimisation for the dataset_id
lambda_value
The approximate correlation between angle and distance. Default value is 1, i.e. 1 degree ~ 1 meter.
original_extrinsics
Extrinsics of the calibration before optimisation
optimised_extrinsics
Extrinsics after loop optimisation is done
first_sensor
First sensor name, based on the renaming you provided
second_sensor
Second sensor name, based on the renaming you provided
calibration_type
calibration_type of the calibration id
roll, pitch, yaw, px, py, and pz
These are the extrinsic parameters. roll, pitch, and yaw are given in degrees, and px, py, and pz are given in meters.
calibration_name
string
Name of the calibration
calibration_type
string
Non-editable field. Value should be radar_camera_calibration
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
multi_target
boolean
true: if multiple targets are used
false: if a single target is used
camera_name
string
It is the name given by the client to the camera. The client can modify it as desired.
lens_model
string
Describes the type of lens used by the camera. Accepted values
pinhole
fisheye
fx
double
Focal length of the camera in the X-axis. Value in pixels.
fy
double
Focal length of the camera in the Y-axis. Value in pixels.
cx
double
Optical centre of the camera in the X-axis. Value in pixels.
cy
double
Optical centre of the camera in the Y-axis. Value in pixels.
distortion_enabled
boolean
Makes use of distortion coefficients (k1, k2, k3, k4, p1, p2) for the calibration algorithm when set true. Distortion coefficients (k1, k2, k3, k4, p1, p2) are not required if it is false.
k1, k2, k3, k4, p1, p2
double
These are the values for the distortion coefficients of the camera lens. Note:
If the lens_model is pinhole, we require the k1, k2, k3, p1, and p2 values (k4 is not needed).
If the lens_model is fisheye, we require the k1, k2, k3, and k4 values (p1 and p2 are not needed).
These parameters are not required if distortion_enabled is false.
targets
Object
It is a dictionary of dictionaries, with each inner dictionary holding the target properties.
type
string
Describes the type of target used. Accepted values
checkerboard
charucoboard
x (or) horizontal_corners
integer
Number of horizontal corners in the checkerboard (this property is needed if the type = checkerboard)
y (or) vertical_corners
integer
Number of vertical corners in the checkerboard (this property is needed if the type = checkerboard)
rows
integer
Number of horizontal squares in the charucoboard (this property is needed if the type is charucoboard)
columns
integer
Number of vertical squares in the charucoboard (this property is needed if the type is charucoboard)
square_size
double
Size of each square in meters
marker_size
double
The size of a marker in the charucoboard, in meters (normally 0.8 times the square size). This property is needed if the type is charucoboard.
dictionary
string
It is the string that defines the charuco dictionary of the target. We support
4X4
5X5
6X6
7X7
original
charuco_apriltag_36h11
charuco_apriltag_25h9
This property is needed if the type is charucoboard
padding_right
double
padding to the right of the board
padding_left
double
padding to the left of the board
padding_top
double
padding to the top of the board
padding_bottom
double
padding to the bottom of the board
radar_targets
Object
It stores the data related to the position of the radar target.
file_data
List of Objects
It stores the file_name and position
file_name
String
Name of the image file (a file with this name should be available in the zip file)
position
Object
Contains the x, y and z coordinates of the radar-target.
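The radar-camera config.json fields above can be assembled as in this sketch. The nesting of the camera intrinsics section (and its key name, intrinsic_params) is an assumption based on the flattened table, and all numeric values, file names, and positions are placeholders; treat this as a template, not a verified schema.

```python
import json

config = {
    "calibration_name": "front_radar_camera",
    "calibration_type": "radar_camera_calibration",  # non-editable value
    "multi_target": False,
    # Assumed key name for the camera intrinsics block; verify against a
    # sample config from your workspace.
    "intrinsic_params": {
        "camera_name": "front_camera",
        "lens_model": "pinhole",
        "fx": 1000.0, "fy": 1000.0, "cx": 960.0, "cy": 540.0,
        "distortion_enabled": True,
        # pinhole lenses need k1, k2, k3, p1, p2 (k4 not required)
        "k1": -0.1, "k2": 0.01, "k3": 0.0, "p1": 0.0, "p2": 0.0,
    },
    "targets": {
        "target_1": {
            "type": "checkerboard",
            "horizontal_corners": 9,
            "vertical_corners": 6,
            "square_size": 0.1,  # meters
            "padding_right": 0.05, "padding_left": 0.05,
            "padding_top": 0.05, "padding_bottom": 0.05,
        },
    },
    "radar_targets": {
        "file_data": [
            {"file_name": "camera/image_1.png",   # must exist in the zip
             "position": {"x": 5.2, "y": 0.4, "z": 0.8}},
        ],
    },
}
config_json = json.dumps(config, indent=2)  # contents of config.json
```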
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and images in a suitable format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean distance error between points in 3D (computed per file)
mean_translation_error: Mean of the translation_error of all the files.
reprojection_error: Mean distance error between points in 2D (computed per file)
mean_reprojection_error: Mean of the reprojection_error of all the files
dataset_id
string
dataset_id obtained from the response of Upload file and calibrate API.
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean distance error between points in 3D (computed per file).
mean_translation_error: Mean of the translation_error of all the files.
reprojection_error: Mean distance error between points in 2D (computed per file)
mean_reprojection_error: Mean of the reprojection_error of all the files
calibration_name
String
Name of the calibration
calibration_type
String
Non-editable field. Value should be camera_vehicle_calibration
calibration_group_id
String
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
approach_type
String
Accepted values are 1. flatTerrain 2. roughTerrain
aruco_marker_size
Double
The size of the aruco marker pasted to the vehicle wheels. This parameter is required when approach_type = roughTerrain
vehicle_configuration
Object
Configuration of the vehicle
vehicle_shape
String
Accepted values are 1. rectangle 2. trapezoid
wheel_base
Double
The distance between the center of the left/right front wheel and the center of the left/right rear wheel. This parameter is required when vehicle_shape = rectangle
left_wheelbase
Double
The distance between the center of the left front wheel and the center of the left rear wheel. This parameter is required when vehicle_shape = trapezoid
right_wheelbase
Double
The distance between the center of the right front wheel and the center of the right rear wheel. This parameter is required when vehicle_shape = trapezoid
track
Double
The distance between the left edge of the front/rear wheel and the right edge of the front/rear wheel. This parameter is required when vehicle_shape = rectangle
front_track
Double
The distance between the left edge of the front wheel and the right edge of the front wheel. This parameter is required when vehicle_shape = trapezoid
rear_track
Double
The distance between the left edge of the rear wheel and the right edge of the rear wheel. This parameter is required when vehicle_shape = trapezoid
front_overhang
Double
The distance between the center of the front wheel to the front of the vehicle. This parameter is required when approach_type = flatTerrain
rear_overhang
Double
The distance between the center of the rear wheel to the rear of the vehicle. This parameter is required when approach_type = flatTerrain
front_wheel_diameter
Double
The distance from the bottom of the front left/right wheel to the top of the wheel. This parameter is required when approach_type = roughTerrain
rear_wheel_diameter
Double
The distance from the bottom of the rear left/right wheel to the top of the wheel. This parameter is required when approach_type = roughTerrain
intrinsics
Object
Intrinsic parameters of the camera used for data collection. This parameter is required when approach_type = flatTerrain
mounted_camera_intrinsics
Object
Intrinsic parameters of the camera mounted on the vehicle. This parameter is required when approach_type = roughTerrain
external_camera_intrinsics
Object
Intrinsic parameters of the external camera used during data collection. This parameter is required when approach_type = roughTerrain
extrinsic_camera_coordinate_system
string
Camera coordinate system for extrinsic sensor angles (roll, pitch and yaw).
Accepted values 1. OPTICAL 2. ROS_REP_103 3. NED
Default value is OPTICAL
camera_name
string
It is the name given by the client to the camera. The client can modify it as desired.
lens_model
string
Describes the type of lens used by the camera. Accepted values
pinhole
fisheye
fx
double
Focal length of the camera in the X-axis. Value in pixels.
fy
double
Focal length of the camera in the Y-axis. Value in pixels.
cx
double
Optical centre of the camera in the X-axis. Value in pixels.
cy
double
Optical centre of the camera in the Y-axis. Value in pixels.
distortion_enabled
boolean
Makes use of distortion coefficients (k1, k2, k3, k4, p1, p2) for the calibration algorithm when set true. Distortion coefficients (k1, k2, k3, k4, p1, p2) are not required if it is false.
k1, k2, k3, k4, p1, p2
double
These are the values for the distortion coefficients of the camera lens. Note:
If the lens_model is pinhole, we require the k1, k2, k3, p1, and p2 values (k4 is not needed).
If the lens_model is fisheye, we require the k1, k2, k3, and k4 values (p1 and p2 are not needed).
These parameters are not required if distortion_enabled is false.
targets
Object
It is a dictionary of dictionaries, with each inner dictionary holding the target properties.
type
string
Describes the type of target used. Accepted values
checkerboard
x (or) horizontal_corners
integer
Number of horizontal corners in the checkerboard (this property is needed if the type = checkerboard)
y (or) vertical_corners
integer
Number of vertical corners in the checkerboard (this property is needed if the type = checkerboard)
square_size
double
Size of each square in meters
padding_right
double
padding to the right of the board
padding_left
double
padding to the left of the board
padding_top
double
padding to the top of the board
padding_bottom
double
padding to the bottom of the board
target_configuration
Object
It stores the data related to mapping of the camera files and corresponding configuration. This parameter is needed when approach_type = flatTerrain
file_data
List of Objects
It is a list of Objects, where each Object is an image and its corresponding configuration. This parameter is required when approach_type = flatTerrain
file_name: The name of the file (including the path in zip file).
target_placement: The accepted values are horizontal and vertical
vehicle_to_intersection: Distance from VRP to IRP
intersection_to_target: Distance from IRP to TRP
height: The distance from the ground to the bottom of the target.
files
Object
It should contain four lists with keys: mounted_camera_left_images, mounted_camera_right_images, external_camera_left_images and external_camera_right_images. This key is required when approach_type = roughTerrain
mounted_camera_left_images
List
The name of the image taken from the mounted camera with the target placed on the left of the vehicle (including the path in the zip). This key is required when approach_type = roughTerrain
mounted_camera_right_images
List
The name of the image taken from the mounted camera with the target placed on the right of the vehicle (including the path in the zip). This key is required when approach_type = roughTerrain
external_camera_left_images
List
The list of images (including the path in the zip) taken from the external camera on the left side of the vehicle. This key is required when approach_type = roughTerrain
external_camera_right_images
List
The list of images (including the path in the zip) taken from the external camera on the right side of the vehicle. This key is required when approach_type = roughTerrain
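The camera-vehicle config.json fields above can be combined as in the sketch below, here for the roughTerrain approach with a rectangle vehicle shape. The nesting of the wheel diameters inside vehicle_configuration and the contents of the intrinsics objects are assumptions based on the flattened table; every numeric value and file path is a placeholder.

```python
import json

config = {
    "calibration_name": "front_camera_vehicle",
    "calibration_type": "camera_vehicle_calibration",  # non-editable value
    "approach_type": "roughTerrain",
    "aruco_marker_size": 0.15,         # meters; required for roughTerrain
    "vehicle_configuration": {
        "vehicle_shape": "rectangle",
        "wheel_base": 2.8,             # rectangle uses a single wheelbase
        "track": 1.6,
        # Wheel diameters are required for roughTerrain; nesting assumed.
        "front_wheel_diameter": 0.65,
        "rear_wheel_diameter": 0.65,
    },
    # Intrinsics contents assumed to follow the camera intrinsics table above.
    "mounted_camera_intrinsics": {
        "lens_model": "pinhole", "fx": 1000.0, "fy": 1000.0,
        "cx": 960.0, "cy": 540.0, "distortion_enabled": False,
    },
    "external_camera_intrinsics": {
        "lens_model": "pinhole", "fx": 1200.0, "fy": 1200.0,
        "cx": 960.0, "cy": 540.0, "distortion_enabled": False,
    },
    "files": {
        "mounted_camera_left_images": ["mounted/left_1.png"],
        "mounted_camera_right_images": ["mounted/right_1.png"],
        "external_camera_left_images": ["external/left_1.png"],
        "external_camera_right_images": ["external/right_1.png"],
    },
}
config_json = json.dumps(config, indent=2)  # contents of config.json
```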
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing config and images in a suitable format
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean of the distance between the centroids of the 3d projections of target corners and the target configuration in the Vehicle coordinate system.
rotation_error: Mean of the angle between the planes formed from 3d projections of target corners and the target configuration in the Vehicle coordinate system.
dataset_id
string
dataset_id obtained from the response of Upload file and calibrate API.
extrinsic_camera_coordinate_system
string
Camera coordinate system for extrinsic sensor angles (roll, pitch and yaw).
Accepted values
OPTICAL
ROS_REP_103
NED
Default value is OPTICAL
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_camera_coordinate_system
Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw).
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation_error: Mean of the distance between the centroids of the 3d projections of target corners and the target configuration in the Vehicle coordinate system.
rotation_error: Mean of the angle between the planes formed from 3d projections of target corners and the target configuration in the Vehicle coordinate system.
The API requires the client to upload the PCDs and configuration for LiDAR-LiDAR setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads the files and runs the calibration algorithm on the uploaded lidar files with the given target configuration.
The calibration process is completed without errors if the Upload and calibrate API call response contains dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.
The client can fetch the extrinsic parameters using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.
Lidar frames from both lidars are needed to run the calibration.
Place the Lidar frame from the first lidar in the lidar_1 folder and the Lidar frame from the second lidar in the lidar_2 folder. Provide the mappings of corresponding pcds in the config.
config.json contains configuration details of the calibration
Note: Folder structure is optional. Users can place all files in the main directory and zip it.
The names of the folders and the lidar files shown here are for demonstration purposes. Users should avoid using spaces in the folder and lidar filenames.
The name of the JSON file should be config.json
(case sensitive)
calibration_name
string
Name of calibration
calibration_type
string
Non-editable field. Value should be multi_lidar_calibration
calibration_group_id
string
This is an optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
multi_target
boolean
true: if multiple targets are used
false: if single target is used
is_target_based
boolean
true: if the calibration uses the target-based approach
false: if the calibration uses the targetless approach
algorithm_name
string
It is the algorithm to be used to perform the calibration. Supported values: 'gicp', 'ndt', 'custom_gicp'.
gicp - works best for dense point clouds
ndt - works best for sparse point clouds
custom_gicp - a modified version of gicp; works best if there is a good amount of ground points in both lidar frames.
voxel_size
double
This key is required only when ndt algorithm is selected. voxel_size value is adjusted depending on the indoor/outdoor environment.
Note:
For outdoor environments, a smaller voxel_size is preferable; for indoor environments, a larger voxel_size is preferred.
If voxel_size is not given, a default value of 0.5 is used.
max_correspondance
double
This key is required only when custom_gicp algorithm is selected.
Note:
Accepted range is from 0 to 1
If max_correspondance is not given, a default value of 0.2 is used.
lidar_1
Object
name: It is the name given by the client to lidar_1 (the first lidar). The client can modify it as desired.
type: string
laser_channels: It is the number of laser channels present in lidar_1 (This value is necessary to auto detect the board in lidar frame).
Supported values: 16, 32, 64, 128 and 256
type: int
height: It is the approximate height of lidar_1 from the ground.
type: double
ground_plane: A list of lists, where each inner list is the equation of the ground plane in the frame of reference of this lidar. Each inner list is expected to have size 4, with the convention that [a, b, c, d] signifies the ground plane equation
a*x + b*y + c*z + d = 0
Note: This ground plane equation is required only if the selected algorithm is custom_gicp.
lidar_2
Object
name: The name given by the client to lidar_2 (the second lidar). The client can modify it as desired.
type: string
laser_channels: The number of laser channels present in lidar_2 (this value is necessary to auto-detect the board in the lidar frame).
Supported values: 16, 32, 64, 128 and 256
type: int
height: The approximate height of lidar_2 from the ground.
type: double
ground_plane: a list of lists, where each inner list is the equation of the ground plane in the frame of reference of this lidar. Each inner list is expected to have size 4 with the following convention:
a ground plane of [a, b, c, d] signifies the plane equation
a*x + b*y + c*z + d = 0
Note: This ground plane equation is required only if the selected algorithm is custom_gicp.
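For example (hypothetical values), a lidar mounted 1.7 m above flat ground with its z-axis pointing up sees the ground at z = -1.7, i.e. 0*x + 0*y + 1*z + 1.7 = 0, so the entry would be:

```json
"ground_plane": [
    [0.0, 0.0, 1.0, 1.7]
]
```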
targets
Object
A dictionary of dictionaries, where each inner dictionary holds the properties of one target.
length
double
length of the target used for calibration
width
double
width of the target used for calibration
tilted
boolean
true: if the board is tilted to the right by up to 45 degrees
false: if the board is not tilted
perform_auto_detection
boolean
true: if auto board detection is required for the boards in the point cloud. The laser_channels property must be provided for both lidars for this to work.
false: if auto board detection is not required
initial_estimates
Object with all values as double
This is an optional field. The initial estimates that will be optimised to obtain the extrinsic parameters during the calibration process:
1. roll
2. pitch
3. yaw
4. px
5. py
6. pz
Note:
1. roll, pitch and yaw should be in degrees.
2. px, py and pz should be in meters.
3. Delete this key if initial estimates are not available or are not to be used during calibration.
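A sketch of the initial_estimates object with hypothetical values (a second lidar rotated 90 degrees in yaw and offset 1.2 m along y):

```json
"initial_estimates": {
    "roll": 0.0,
    "pitch": 0.0,
    "yaw": 90.0,
    "px": 0.0,
    "py": 1.2,
    "pz": 0.0
}
```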
data
Object
It stores the data related to the lidar files which need to be uploaded.
lidar_1: the relative path of the pcd file (target-based) or folder (targetless) corresponding to the first lidar.
lidar_2: the relative path of the pcd file (target-based) or folder (targetless) corresponding to the second lidar.
boolean
Optional argument, in case you have an old zip file with a single pair of lidars. Default: true
mappings
list of lists
each value is a list whose first element corresponds to lidar_1 and whose second element corresponds to lidar_2
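Putting the fields above together, a minimal config.json for a targetless gicp calibration might look like the following sketch. All names, paths, and values are hypothetical, and fields documented earlier (calibration_name, calibration_type, calibration_approach) are omitted here; see their rows above for the required values.

```json
{
    "calibration_group_id": "optional_group_id",
    "is_target_based": false,
    "algorithm_name": "gicp",
    "lidar_1": { "name": "front_lidar", "laser_channels": 64, "height": 1.7 },
    "lidar_2": { "name": "rear_lidar", "laser_channels": 64, "height": 1.7 },
    "data": {
        "lidar_1": "lidar_1_frames",
        "lidar_2": "lidar_2_frames"
    },
    "mappings": [
        ["frame_000.pcd", "frame_000.pcd"],
        ["frame_001.pcd", "frame_001.pcd"]
    ]
}
```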
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token should be prefixed with "Bearer " and passed in the 'Authorization' header of all API requests.
How to get Access Tokens can be found on the following link: Access token for APIs
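As a minimal sketch of the header convention (placeholder credentials; `urllib.request.Request` here only builds the request object, so nothing is sent):

```python
import urllib.request

CLIENT_ID = "your_client_id"      # placeholder, obtained from Deepen AI
AUTH_TOKEN = "your_access_token"  # placeholder

def authorized_request(url: str) -> urllib.request.Request:
    """Attach the Bearer token to the Authorization header."""
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    )

req = authorized_request(
    "https://tools.calibrate.deepen.ai/api/v2/external/"
    f"clients/{CLIENT_ID}/calibration_dataset"
)
```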
This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats as the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
clientId
string
ClientId obtained from Deepen AI
file
.zip file
Zip file containing the config and pcds in the format described above
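The call itself is a multipart/form-data POST with the zip under the 'file' field. A stdlib-only sketch that builds (but does not send) the request; the content type and helper name are assumptions:

```python
import urllib.request
import uuid

API_BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def build_upload_request(
    client_id: str, token: str, zip_name: str, zip_bytes: bytes
) -> urllib.request.Request:
    """Build the multipart/form-data POST for Upload file and calibrate."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{zip_name}"\r\n'
        f"Content-Type: application/zip\r\n\r\n"
    ).encode() + zip_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{API_BASE}/clients/{client_id}/calibration_dataset",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

# Usage (placeholder file name):
# with open("calibration_dataset.zip", "rb") as fh:
#     req = build_upload_request(client_id, token, "calibration_dataset.zip", fh.read())
```

Sending it is then `urllib.request.urlopen(req)`, with the JSON response carrying dataset_id and the other fields described below.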
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
The extrinsic parameters from the first lidar to the second lidar for the given calibration setup.
error_stats
Translation error indicates the distance between the centers of the boards.
Rotation error indicates the angle between the target planes.
Note: If initial estimates are provided, error_stats cannot be calculated.
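An illustrative success response with hypothetical values (the exact error_stats key names are an assumption; per the units described below, roll/pitch/yaw and rotation error are in degrees, px/py/pz and translation error in meters):

```json
{
    "dataset_id": "abc123",
    "calibration_algorithm_version": "1.0.0",
    "extrinsic_parameters": {
        "roll": 0.12,
        "pitch": -0.34,
        "yaw": 89.75,
        "px": 0.02,
        "py": 1.18,
        "pz": -0.05
    },
    "error_stats": {
        "translation_error": 0.01,
        "rotation_error": 0.2
    }
}
```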
INFO
This gives general information about the dataset (e.g. whether auto-detection worked on this dataset).
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
The extrinsic parameters from the first lidar to the second lidar for the given calibration setup.
estimated_error_value
An estimate of the error in the extrinsic parameters, derived from the fitness score of the algorithm used.
This GET API call returns dataset_id, calibration_algorithm_version, extrinsic_parameters, and error_stats.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{datasetId}/extrinsic_parameters
datasetId
string
The datasetId obtained from the response of the Upload file and calibrate API.
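A minimal stdlib sketch of this GET call (placeholder token and dataset id; `build_get_request` only constructs the request, and sending it would use `urllib.request.urlopen`):

```python
import json
import urllib.request

API_BASE = "https://tools.calibrate.deepen.ai/api/v2/external"

def build_get_request(dataset_id: str, token: str) -> urllib.request.Request:
    """Build the GET request for a dataset's extrinsic parameters."""
    return urllib.request.Request(
        f"{API_BASE}/datasets/{dataset_id}/extrinsic_parameters",
        headers={"Authorization": f"Bearer {token}"},
    )

# To execute the call and parse the response:
# with urllib.request.urlopen(build_get_request("abc123", token)) as resp:
#     extrinsics = json.load(resp)["extrinsic_parameters"]
```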
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
error_stats
translation error is given in meters and rotation error is given in degrees.
dataset_id
A unique value to identify the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version
The version of the algorithm used to calculate extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters
roll, pitch, and yaw are given in degrees and px, py, and pz are given in meters.
estimated_error_value
An estimate of the error in the extrinsic parameters, derived from the fitness score of the algorithm used.