Target Camera-Vehicle Calibration API


Introduction

The API requires the client to upload the images and the camera configuration as a zip file (.zip extension) in the format defined below. The contents of the zip file are called a dataset.

  1. The client makes an Upload and calibrate API call, which uploads their files and runs the calibration algorithm on the images for the given configuration.

  2. The calibration has completed without errors if the Upload and calibrate response contains dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.

  3. The client can then call the Get Extrinsic Parameters API using the dataset_id obtained from the Upload and calibrate call. This API responds with dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats.

Folder Structure

We require images from the camera, along with the calibration configuration, to calculate the extrinsic parameters.

  1. Place the images captured from the camera in a folder.

  2. config.json contains the configuration details of the calibration (intrinsic parameters, calibration name, etc.).

Note: The folder structure is optional. Users can place all files in the main directory and zip it (an example layout is shown after the notes below).

Note

  1. The names of the folders and the images shown here are for demonstration purposes. Users should avoid using spaces in the folder and image file names.

  2. The name of the JSON file must be config.json (case-sensitive).
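
For reference, a dataset zip for the flat-terrain approach might be laid out as follows (folder and image names are illustrative; if the images are kept in a sub-folder, the file_name entries in config.json must include that folder path):

dataset.zip
├── config.json
└── images/
    ├── IMG_9696.jpg
    ├── IMG_9697.jpg
    ├── IMG_9698.jpg
    └── IMG_9699.jpg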

config.json for flat terrain

{
    "calibration_name": "vehicle camera calibration",
    "calibration_type": "camera_vehicle_calibration",
    "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
    "approach_type": "flatTerrain",
    "vehicle_configuration":
    {
        "vehicle_shape": "rectangle",
        "wheelbase": 1.2,
        "track": 1.2,
        "front_overhang": 0,
        "rear_overhang": 0
    },
    "intrinsics":
    {
        "camera_fov_direction": "front",
        "camera_name": "camera name",
        "fx": 3137.8917920421727,
        "fy": 3147.404894083494,
        "cx": 1503.00100386124,
        "cy": 1981.5612526342654,
        "k1": 0.11123900395736337,
        "k2": -0.46818899385463597,
        "k3": 0.5567458795729378,
        "p1": 0.000043018947460877414,
        "p2": -0.00040536349561382673,
        "lens_model": "pinhole",
        "distortion_enabled": true
    },
    "targets":
    {
        "0":
        {
            "horizontal_corners": 5,
            "vertical_corners": 9,
            "type": "checkerboard",
            "square_size": 0.1,
            "padding_right": 0.1,
            "padding_left": 0.1,
            "padding_top": 0.1,
            "padding_bottom": 0.1
        }
    },
    "target_configuration":
    {
        "file_data":
        [
            {
                "file_name": "IMG_9696.jpg",
                "target_placement": "vertical",
                "vehicle_to_intersection": 1.9,
                "intersection_to_target": 0,
                "height": 0
            },
            {
                "file_name": "IMG_9697.jpg",
                "target_placement": "vertical",
                "vehicle_to_intersection": 1.9,
                "intersection_to_target": 0,
                "height": 0
            },
            {
                "file_name": "IMG_9698.jpg",
                "target_placement": "vertical",
                "vehicle_to_intersection": 1.9,
                "intersection_to_target": 0,
                "height": 0
            },
            {
                "file_name": "IMG_9699.jpg",
                "target_placement": "vertical",
                "vehicle_to_intersection": 1.9,
                "intersection_to_target": 0,
                "height": 0
            }
        ]
    }
}


config.json for rough terrain

{
    "calibration_name": "vehicle camera calibration",
    "calibration_type": "camera_vehicle_calibration",
    "calibration_group_id": "xxxxxxxxxxxxxxxxxx",
    "approach_type": "roughTerrain",
    "aruco_marker_size": 0.359,
    "vehicle_configuration":
    {
        "vehicle_shape": "rectangle",
        "wheelbase": 1.7266,
        "track": 1.2,
        "front_wheel_diameter": 0.768,
        "rear_wheel_diameter": 0.768
    },
    "mounted_camera_intrinsics":
    {
        "camera_name": "camera name",
        "fx": 2204.465766956982,
        "fy": 2200.718949580785,
        "cx": 1987.2093520754504,
        "cy": 1489.9660572902758,
        "k1": -0.23936177115777293,
        "k2": 0.08598068870627995,
        "k3": -0.017426786421056686,
        "p1": -0.00022312630506811154,
        "p2": -0.00014031224535402575,
        "distortion_enabled": true,
        "lens_model": "pinhole"
    },
    "external_camera_intrinsics":
    {
        "camera_name": "camera name",
        "fx": 2204.465766956982,
        "fy": 2200.718949580785,
        "cx": 1987.2093520754504,
        "cy": 1489.9660572902758,
        "k1": -0.23936177115777293,
        "k2": 0.08598068870627995,
        "k3": -0.017426786421056686,
        "p1": -0.00022312630506811154,
        "p2": -0.00014031224535402575,
        "distortion_enabled": true,
        "lens_model": "pinhole"
    },
    "targets":
    {
        "0":
        {
            "horizontal_corners": 10,
            "vertical_corners": 4,
            "type": "checkerboard",
            "square_size": 0.16,
            "padding_right": 0.281,
            "padding_left": 0.281,
            "padding_top": 0.257,
            "padding_bottom": 0.257
        }
    },
    "files":
    {
        "mounted_camera_left_images":
        [
            "IMG_20221230_132204_00_002.jpg"
        ],
        "mounted_camera_right_images":
        [
            "IMG_20221230_132214_00_003.jpg"
        ],
        "external_camera_left_images":
        [
            "IMG_20221230_140807_00_019.jpg",
            "IMG_20221230_140822_00_020.jpg"
        ],
        "external_camera_right_images":
        [
            "IMG_20221230_135056_00_012.jpg",
            "IMG_20221230_135236_00_015.jpg",
            "IMG_20221230_135027_00_011.jpg",
            "IMG_20221230_135213_00_014.jpg"
        ]
    }
}


config.json key description

Each entry below gives the key, its type, and its description.

calibration_name (String): Name of the calibration.

calibration_type (String): Non-editable field. The value must be camera_vehicle_calibration.

calibration_group_id (String): Optional key. Provide a valid calibration_group_id to add the dataset to a calibration group.

approach_type (String): Accepted values are flatTerrain and roughTerrain.

aruco_marker_size (Double): The size of the ArUco marker pasted on the vehicle wheels. Required when approach_type = roughTerrain.

vehicle_configuration (Object): Configuration of the vehicle.

vehicle_shape (String): Accepted values are rectangle and trapezoid.

wheelbase (Double): The distance between the center of the left/right front wheel and the center of the left/right rear wheel. Required when vehicle_shape = rectangle.

left_wheelbase (Double): The distance between the center of the left front wheel and the center of the left rear wheel. Required when vehicle_shape = trapezoid.

right_wheelbase (Double): The distance between the center of the right front wheel and the center of the right rear wheel. Required when vehicle_shape = trapezoid.

track (Double): The distance between the left edge and the right edge of the front/rear wheels. Required when vehicle_shape = rectangle.

front_track (Double): The distance between the left edge and the right edge of the front wheels. Required when vehicle_shape = trapezoid.

rear_track (Double): The distance between the left edge and the right edge of the rear wheels. Required when vehicle_shape = trapezoid.

front_overhang (Double): The distance from the center of the front wheel to the front of the vehicle. Required when approach_type = flatTerrain.

rear_overhang (Double): The distance from the center of the rear wheel to the rear of the vehicle. Required when approach_type = flatTerrain.

front_wheel_diameter (Double): The distance from the bottom of the front left/right wheel to the top of the wheel. Required when approach_type = roughTerrain.

rear_wheel_diameter (Double): The distance from the bottom of the rear left/right wheel to the top of the wheel. Required when approach_type = roughTerrain.

intrinsics (Object): Intrinsic parameters of the camera used for data collection. Required when approach_type = flatTerrain.

mounted_camera_intrinsics (Object): Intrinsic parameters of the camera mounted on the vehicle. Required when approach_type = roughTerrain.

external_camera_intrinsics (Object): Intrinsic parameters of the external camera used during data collection. Required when approach_type = roughTerrain.

extrinsic_camera_coordinate_system (String): Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw). Accepted values are OPTICAL, ROS_REP_103, and NED. The default value is OPTICAL.

camera_name (String): The name given by the client to the camera. The client can modify it as desired.

lens_model (String): The type of lens used by the camera. Accepted values are pinhole and fisheye.

fx (Double): Focal length of the camera along the X-axis, in pixels.

fy (Double): Focal length of the camera along the Y-axis, in pixels.

cx (Double): Optical centre of the camera along the X-axis, in pixels.

cy (Double): Optical centre of the camera along the Y-axis, in pixels.

distortion_enabled (Boolean): When true, the distortion coefficients (k1, k2, k3, k4, p1, p2) are used by the calibration algorithm. The distortion coefficients are not required when it is false.

k1, k2, k3, k4, p1, p2 (Double): Distortion coefficients of the camera lens. Note:

  1. If the lens_model is pinhole, k1, k2, k3, p1, and p2 are required (k4 is not needed).

  2. If the lens_model is fisheye, k1, k2, k3, and k4 are required (p1 and p2 are not needed).

  3. These parameters are not required if distortion_enabled is false.

targets (Object): A dictionary of dictionaries, where each inner dictionary holds the properties of one target.

type (String): The type of target used. Accepted value: checkerboard.

x (or) horizontal_corners (Integer): Number of horizontal corners in the checkerboard (required when type = checkerboard).

y (or) vertical_corners (Integer): Number of vertical corners in the checkerboard (required when type = checkerboard).

square_size (Double): Size of each square in meters.

padding_right (Double): Padding to the right of the board.

padding_left (Double): Padding to the left of the board.

padding_top (Double): Padding to the top of the board.

padding_bottom (Double): Padding to the bottom of the board.

target_configuration (Object): Stores the mapping between the camera files and their corresponding configuration. Required when approach_type = flatTerrain.

file_data (List of Objects): A list of Objects, where each Object is an image and its corresponding configuration. Required when approach_type = flatTerrain.

  1. file_name: The name of the file (including the path within the zip file).

  2. target_placement: Accepted values are horizontal and vertical.

  3. vehicle_to_intersection: Distance from the VRP to the IRP.

  4. intersection_to_target: Distance from the IRP to the TRP.

  5. height: The distance from the ground to the bottom of the target.

files (Object): Must contain four lists with the keys mounted_camera_left_images, mounted_camera_right_images, external_camera_left_images, and external_camera_right_images. Required when approach_type = roughTerrain.

mounted_camera_left_images (List): The names of the images (including the path within the zip) taken from the mounted camera with the target placed on the left of the vehicle. Required when approach_type = roughTerrain.

mounted_camera_right_images (List): The names of the images (including the path within the zip) taken from the mounted camera with the target placed on the right of the vehicle. Required when approach_type = roughTerrain.

external_camera_left_images (List): The list of images (including the path within the zip) taken from the external camera on the left side of the vehicle. Required when approach_type = roughTerrain.

external_camera_right_images (List): The list of images (including the path within the zip) taken from the external camera on the right side of the vehicle. Required when approach_type = roughTerrain.
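
For illustration, the dataset can also be assembled programmatically. The Python sketch below writes a minimal flat-terrain config.json and bundles it with the images into a zip ready for upload; all numeric values are placeholders copied from the flat-terrain sample above, and the image files are assumed to exist in an images/ folder.

import json
import zipfile

# Minimal flat-terrain configuration built from the keys described above.
# Replace every numeric value with measurements from your own vehicle,
# camera, and checkerboard target.
config = {
    "calibration_name": "vehicle camera calibration",
    "calibration_type": "camera_vehicle_calibration",
    "approach_type": "flatTerrain",
    "vehicle_configuration": {
        "vehicle_shape": "rectangle",
        "wheelbase": 1.2,
        "track": 1.2,
        "front_overhang": 0,
        "rear_overhang": 0
    },
    "intrinsics": {
        "camera_fov_direction": "front",
        "camera_name": "front_camera",
        "fx": 3137.89, "fy": 3147.40,
        "cx": 1503.00, "cy": 1981.56,
        "k1": 0.111, "k2": -0.468, "k3": 0.557,
        "p1": 0.00004, "p2": -0.0004,
        "lens_model": "pinhole",
        "distortion_enabled": True
    },
    "targets": {
        "0": {
            "horizontal_corners": 5, "vertical_corners": 9,
            "type": "checkerboard", "square_size": 0.1,
            "padding_right": 0.1, "padding_left": 0.1,
            "padding_top": 0.1, "padding_bottom": 0.1
        }
    },
    "target_configuration": {
        "file_data": [
            {
                "file_name": "images/IMG_9696.jpg",
                "target_placement": "vertical",
                "vehicle_to_intersection": 1.9,
                "intersection_to_target": 0,
                "height": 0
            }
        ]
    }
}

# Write config.json and bundle it with the images into the dataset zip.
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)

with zipfile.ZipFile("dataset.zip", "w") as zf:
    zf.write("config.json")
    for entry in config["target_configuration"]["file_data"]:
        zf.write(entry["file_name"])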

Quickstart

Upload file and calibrate

This POST API call sends a zip file to the server and runs the calibration algorithm. It returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats to the user as the response.

https://tools.calibrate.deepen.ai/api/v2/external/clients/{clientId}/calibration_dataset

Request

Path parameters

clientId (string): The clientId obtained from Deepen AI.

Body

file (.zip file): Zip file containing config.json and the images in the format described above.
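
A minimal upload call using Python and the requests library might look like the sketch below; the clientId, access token, and file name are placeholders (see the note on access tokens at the end of this page).

import requests

CLIENT_ID = "your_client_id"        # obtained from Deepen AI
ACCESS_TOKEN = "your_access_token"  # see "Access token for APIs"

url = (
    "https://tools.calibrate.deepen.ai/api/v2/external/clients/"
    f"{CLIENT_ID}/calibration_dataset"
)

# Upload the dataset zip and run the calibration in a single call.
with open("dataset.zip", "rb") as f:
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        files={"file": ("dataset.zip", f, "application/zip")},
    )
response.raise_for_status()

result = response.json()
print(result["dataset_id"])
print(result["extrinsic_parameters"])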

Response

{
    "dataset_id": "XXXXXXXXXXXXXXXXX",
    "extrinsic_camera_coordinate_system": "OPTICAL",
    "calibration_algorithm_version": "target_based:v1.010101",
    "extrinsic_parameters": {
        "roll": -96.93828475587402,
        "pitch": -2.589261966902198,
        "yaw": -91.64337716058188,
        "px": -0.15427558722337362,
        "py": 0.17379099735975762,
        "pz": 1.3477892475114783
    },
    "error_stats": {
        "translation_error": 0.0005721550101091656,
        "rotation_error": 0.001247820779879391,
        "reprojection_error": 0.5748822618783547
    }
}
dataset_id: A unique value that identifies the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.

error_stats:

translation_error: Mean of the distance between the centroids of the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.

rotation_error: Mean of the angle between the planes formed from the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.

Get Extrinsic Parameters

This GET API call returns dataset_id, extrinsic_camera_coordinate_system, calibration_algorithm_version, extrinsic_parameters, and error_stats to the user as the response.

https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters

https://tools.calibrate.deepen.ai/api/v2/external/datasets/{dataset_id}/extrinsic_parameters/{extrinsic_camera_coordinate_system}

Request

Path parameters

dataset_id (string): The dataset_id obtained from the response of the Upload file and calibrate API.

extrinsic_camera_coordinate_system (string): Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw). Accepted values are:

  1. OPTICAL

  2. ROS_REP_103

  3. NED

The default value is OPTICAL.
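
A minimal retrieval call using Python and the requests library might look like the sketch below; the dataset_id and access token are placeholders.

import requests

ACCESS_TOKEN = "your_access_token"          # see "Access token for APIs"
DATASET_ID = "dataset_id_from_upload_call"  # returned by Upload file and calibrate

# Append "/{extrinsic_camera_coordinate_system}" (OPTICAL, ROS_REP_103, or NED)
# to the URL to request a specific coordinate system; OPTICAL is the default.
url = (
    "https://tools.calibrate.deepen.ai/api/v2/external/datasets/"
    f"{DATASET_ID}/extrinsic_parameters"
)

response = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
response.raise_for_status()
print(response.json()["extrinsic_parameters"])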

Response

{
    "dataset_id": "XXXXXXXXXXXXXXXXX",
    "extrinsic_camera_coordinate_system": "OPTICAL",
    "extrinsic_parameters": {
        "roll": -96.93828475587402,
        "pitch": -2.589261966902198,
        "yaw": -91.64337716058188,
        "px": -0.15427558722337362,
        "py": 0.17379099735975762,
        "pz": 1.3477892475114783
    },
    "error_stats": {
        "translation_error": 0.0005721550101091656,
        "rotation_error": 0.001247820779879391,
        "reprojection_error": 0.5748822618783547
    },
    "calibration_algorithm_version": "target_based:v1.010101"
}
dataset_id: A unique value that identifies the dataset. dataset_id can be used to retrieve the extrinsic parameters.

calibration_algorithm_version: The version of the algorithm used to calculate the extrinsic parameters. This value can be used to map extrinsic parameters to a specific algorithm version.

extrinsic_camera_coordinate_system: Camera coordinate system for the extrinsic sensor angles (roll, pitch, and yaw).

extrinsic_parameters: roll, pitch, and yaw are given in degrees; px, py, and pz are given in meters.

error_stats:

translation_error: Mean of the distance between the centroids of the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.

rotation_error: Mean of the angle between the planes formed from the 3D projections of the target corners and the target configuration, in the vehicle coordinate system.

Before invoking the APIs, the client must obtain the clientId and auth token from Deepen AI. If you are a calibration admin, you can create different Access Tokens using the UI and use those instead. clientId is part of the path parameters in most API calls, and the auth token must be prefixed with "Bearer " and passed in the Authorization header of every API request. Instructions for obtaining Access Tokens can be found at the following link:

Access token for APIs