The API requires the client to upload the images and configuration for camera setup in a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.
The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images for the given configuration.
The calibration has completed without errors if the Upload and calibrate response contains dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.
The client can call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate API. This API responds with dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.
To calculate the extrinsic parameters, the API requires images from the camera along with other configuration.
Place the images captured from the camera in a folder.
config.json contains the configuration details of the calibration (intrinsic parameters, calibration name, etc.).
Note: The folder structure is optional. Users can place all files in the main directory and zip it.
The folder and image names shown here are for demonstration purposes only. Avoid using spaces in folder and image filenames.
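A hypothetical dataset layout might look like the following (all folder and image names are illustrative placeholders):

```
calibration_dataset.zip
├── config.json
└── images/
    ├── image_001.jpg
    ├── image_002.jpg
    └── image_003.jpg
```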
The name of the JSON file must be config.json (case-sensitive).
calibration_name (String): Name of the calibration.
calibration_type (String): Non-editable field. The value must be camera_vehicle_calibration.
calibration_group_id (String): Optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.
approach_type (String): Accepted values are 1. flatTerrain 2. roughTerrain
aruco_marker_size (Double): The size of the ArUco marker pasted to the vehicle wheels. Required when approach_type = roughTerrain.
vehicle_configuration (Object): Configuration of the vehicle.
vehicle_shape (String): Accepted values are 1. rectangle 2. trapezoid
wheel_base (Double): The distance between the center of the left/right front wheel and the center of the left/right rear wheel. Required when vehicle_shape = rectangle.
left_wheelbase (Double): The distance between the center of the left front wheel and the center of the left rear wheel. Required when vehicle_shape = trapezoid.
right_wheelbase (Double): The distance between the center of the right front wheel and the center of the right rear wheel. Required when vehicle_shape = trapezoid.
track (Double): The distance between the left edge of the front/rear wheel and the right edge of the front/rear wheel. Required when vehicle_shape = rectangle.
front_track (Double): The distance between the left edge of the front wheel and the right edge of the front wheel. Required when vehicle_shape = trapezoid.
rear_track (Double): The distance between the left edge of the rear wheel and the right edge of the rear wheel. Required when vehicle_shape = trapezoid.
front_overhang (Double): The distance from the center of the front wheel to the front of the vehicle. Required when approach_type = flatTerrain.
rear_overhang (Double): The distance from the center of the rear wheel to the rear of the vehicle. Required when approach_type = flatTerrain.
front_wheel_diameter (Double): The distance from the bottom of the front left/right wheel to the top of the wheel. Required when approach_type = roughTerrain.
rear_wheel_diameter (Double): The distance from the bottom of the rear left/right wheel to the top of the wheel. Required when approach_type = roughTerrain.
intrinsics (Object): Intrinsic parameters of the camera used for data collection. Required when approach_type = flatTerrain.
mounted_camera_intrinsics (Object): Intrinsic parameters of the camera mounted on the vehicle. Required when approach_type = roughTerrain.
external_camera_intrinsics (Object): Intrinsic parameters of the external camera used during data collection. Required when approach_type = roughTerrain.
extrinsic_camera_coordinate_system (String): Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw). Accepted values are 1. OPTICAL 2. ROS_REP_103 3. NED. The default value is OPTICAL.
camera_name (String): The name given by the client to the camera. The client can modify it at will.
lens_model (String): The type of lens used by the camera. Accepted values are pinhole and fisheye.
fx (Double): Focal length of the camera along the X-axis, in pixels.
fy (Double): Focal length of the camera along the Y-axis, in pixels.
cx (Double): Optical centre of the camera along the X-axis, in pixels.
cy (Double): Optical centre of the camera along the Y-axis, in pixels.
distortion_enabled (Boolean): When true, the calibration algorithm uses the distortion coefficients (k1, k2, k3, k4, p1, p2). When false, the distortion coefficients are not required.
k1, k2, k3, k4, p1, p2 (Double): Distortion coefficients of the camera lens. Note:
If lens_model is pinhole, k1, k2, k3, p1, and p2 are required (k4 is not needed).
If lens_model is fisheye, k1, k2, k3, and k4 are required (p1 and p2 are not needed).
These parameters are not required if distortion_enabled is false.
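As a sketch of how an intrinsics object fits together, the following assembles a hypothetical pinhole-camera entry; every name and number is a placeholder, not real calibration data:

```python
import json

# Hypothetical intrinsics for a pinhole camera with distortion enabled.
# All values below are placeholders and must be replaced with the
# client's own calibration data.
intrinsics = {
    "camera_name": "front_camera",  # any client-chosen name
    "lens_model": "pinhole",        # "pinhole" or "fisheye"
    "fx": 1200.0,                   # focal length along X-axis, pixels
    "fy": 1200.0,                   # focal length along Y-axis, pixels
    "cx": 960.0,                    # optical centre along X-axis, pixels
    "cy": 540.0,                    # optical centre along Y-axis, pixels
    "distortion_enabled": True,
    # pinhole lens: k1, k2, k3, p1, p2 are required (k4 is not needed)
    "k1": -0.1, "k2": 0.01, "k3": 0.0, "p1": 0.001, "p2": 0.0005,
}

print(json.dumps(intrinsics, indent=2))
```

For a fisheye lens the same structure would carry k1–k4 and drop p1/p2; with distortion_enabled set to false, the coefficient keys can be omitted entirely.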
targets (Object): A dictionary of dictionaries, where each inner dictionary holds the properties of one target.
type (String): The type of target used. Accepted value is checkerboard.
x (or) horizontal_corners (Integer): Number of horizontal corners in the checkerboard. Required when type = checkerboard.
y (or) vertical_corners (Integer): Number of vertical corners in the checkerboard. Required when type = checkerboard.
square_size (Double): Size of each square, in meters.
padding_right (Double): Padding to the right of the board.
padding_left (Double): Padding to the left of the board.
padding_top (Double): Padding to the top of the board.
padding_bottom (Double): Padding to the bottom of the board.
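The targets dictionary described above can be sketched as follows; the outer key "1" and all dimensions are illustrative placeholders:

```python
# Hypothetical "targets" section: a dictionary of dictionaries, keyed by a
# client-chosen target identifier. All numbers are placeholders.
targets = {
    "1": {
        "type": "checkerboard",
        "horizontal_corners": 8,  # corners along the board's width
        "vertical_corners": 6,    # corners along the board's height
        "square_size": 0.1,       # meters
        "padding_right": 0.05,
        "padding_left": 0.05,
        "padding_top": 0.05,
        "padding_bottom": 0.05,
    }
}
```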
target_configuration (Object): Stores the mapping between the camera files and their corresponding configuration. Required when approach_type = flatTerrain.
file_data (List of Objects): A list of Objects, where each Object pairs an image with its corresponding configuration. Required when approach_type = flatTerrain. Each Object contains:
file_name: The name of the file (including its path inside the zip file).
target_placement: Accepted values are horizontal and vertical.
vehicle_to_intersection: Distance from the VRP to the IRP.
intersection_to_target: Distance from the IRP to the TRP.
height: The distance from the ground to the bottom of the target.
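A hypothetical target_configuration for approach_type = flatTerrain might look like this; file names, distances, and heights are placeholders:

```python
# Hypothetical "target_configuration" section (flatTerrain only).
# Each entry in file_data pairs one image with its measurements.
target_configuration = {
    "file_data": [
        {
            "file_name": "images/image_001.jpg",  # path inside the zip
            "target_placement": "horizontal",     # or "vertical"
            "vehicle_to_intersection": 2.5,       # VRP to IRP distance
            "intersection_to_target": 1.5,        # IRP to TRP distance
            "height": 0.3,  # ground to bottom of target
        },
    ]
}
```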
files (Object): Must contain four lists with the keys mounted_camera_left_images, mounted_camera_right_images, external_camera_left_images, and external_camera_right_images. Required when approach_type = roughTerrain.
mounted_camera_left_images (List): Names of the images (including their paths inside the zip) taken from the mounted camera with the target placed on the left of the vehicle. Required when approach_type = roughTerrain.
mounted_camera_right_images (List): Names of the images (including their paths inside the zip) taken from the mounted camera with the target placed on the right of the vehicle. Required when approach_type = roughTerrain.
external_camera_left_images (List): List of images (including their paths inside the zip) taken from the external camera on the left side of the vehicle. Required when approach_type = roughTerrain.
external_camera_right_images (List): List of images (including their paths inside the zip) taken from the external camera on the right side of the vehicle. Required when approach_type = roughTerrain.
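The files object for approach_type = roughTerrain can be sketched as below; all image paths are placeholders:

```python
# Hypothetical "files" section (roughTerrain only): four lists of image
# paths inside the zip. All names below are placeholders.
files = {
    "mounted_camera_left_images": ["images/mounted_left_001.jpg"],
    "mounted_camera_right_images": ["images/mounted_right_001.jpg"],
    "external_camera_left_images": ["images/external_left_001.jpg"],
    "external_camera_right_images": ["images/external_right_001.jpg"],
}
```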
Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. Calibration admins can instead create Access Tokens in the UI and use those. clientId is part of the path parameters in most API calls, and the auth token must be prefixed with "Bearer " and passed in the 'Authorization' header of every API request. Instructions for obtaining Access Tokens can be found at the following link: Access token for APIs
This POST API call uploads a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats in the response.
https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset
clientId (String): The clientId obtained from Deepen AI.
file (.zip file): Zip file containing the config and images in the format described above.
dataset_id: A unique value identifying the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. It can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.
error_stats:
translation_error: Mean distance between the centroids of the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.
rotation_error: Mean angle between the planes formed from the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.
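A minimal sketch of assembling this request, assuming placeholder credentials (CLIENT_ID and AUTH_TOKEN are values the client obtains from Deepen AI, and dataset.zip is a hypothetical file name):

```python
from pathlib import Path

CLIENT_ID = "your_client_id"     # placeholder: obtained from Deepen AI
AUTH_TOKEN = "your_access_token"  # placeholder: obtained from Deepen AI


def build_upload_request(client_id, token, zip_path):
    """Build the URL, headers, and file path for Upload and calibrate."""
    url = (
        "https://tools.calibrate.deepen.ai/api/v2/external/clients/"
        f"{client_id}/calibration_dataset"
    )
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers, Path(zip_path)


url, headers, zip_file = build_upload_request(CLIENT_ID, AUTH_TOKEN, "dataset.zip")

# With the third-party `requests` library, the call itself would look like:
#   resp = requests.post(url, headers=headers,
#                        files={"file": open(zip_file, "rb")})
# resp.json() then contains dataset_id, extrinsic_camera_coordinate_system,
# calibration_algorithm_version, extrinsic_parameters, and error_stats.
```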
This GET API call returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats in the response.
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters
https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters/{extrinsic_camera_coordinate_system}
dataset_id (String): The dataset_id obtained from the response of the Upload and calibrate API.
extrinsic_camera_coordinate_system (String): Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw). Accepted values are OPTICAL, ROS_REP_103, and NED. The default value is OPTICAL.
dataset_id: A unique value identifying the dataset. dataset_id can be used to retrieve the extrinsic parameters.
calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. It can be used to map extrinsic parameters to a specific algorithm version.
extrinsic_camera_coordinate_system: Camera coordinate system for extrinsic sensor angles (roll, pitch, and yaw).
extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.
error_stats:
translation_error: Mean distance between the centroids of the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.
rotation_error: Mean angle between the planes formed from the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.
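The two URL forms above can be assembled with a small helper; the dataset_id value here is a placeholder:

```python
def build_get_url(dataset_id, coordinate_system=None):
    """Build the Get Extrinsic Parameters URL.

    The coordinate system path segment is optional; when omitted, the
    API uses its default (OPTICAL).
    """
    base = (
        "https://tools.calibrate.deepen.ai/api/v2/external/datasets/"
        f"{dataset_id}/extrinsic_parameters"
    )
    return f"{base}/{coordinate_system}" if coordinate_system else base


# "abc123" is a placeholder dataset_id, not a real one.
print(build_get_url("abc123"))
print(build_get_url("abc123", "ROS_REP_103"))
```

The GET request itself carries the same "Authorization: Bearer <token>" header described earlier; the response body contains the fields listed above.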