Vehicle-Camera Calibration


Last updated 10 months ago

Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can also manage users’ access to these datasets here.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select Vehicle-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting Vehicle-Camera Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.

Approach selection

Select the terrain as Flat and the approach as Target.

Configuration

Vehicle configuration

Enter the vehicle’s measurements here; both rectangle and trapezoid vehicle shapes are supported.

Camera Intrinsic Parameters

Intrinsic parameters for the camera are to be added here. Users have three options.

  • Users can manually enter the intrinsic parameters if they already have them.

  • Users can upload the intrinsic parameters as a JSON file.

  • Users can calibrate the camera with the Camera Intrinsic calibration tool, save the results to a calibration profile, and load that profile here.
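For reference, a minimal sketch of how the standard pinhole intrinsics (focal lengths fx, fy and principal point cx, cy, in pixels) relate a 3D camera-frame point to an image pixel. Distortion is omitted for brevity, and the function name is illustrative only, not part of the product's API.

```python
# Pinhole projection sketch: map a 3D point in the camera frame to pixels.
# Illustrative only; distortion coefficients are ignored.

def project(point_cam, fx, fy, cx, cy):
    """Project (X, Y, Z) in camera coordinates (meters) to pixel (u, v)."""
    X, Y, Z = point_cam
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * X / Z + cx, fy * Y / Z + cy)

# A point 1 m left of the optical axis at 5 m depth, 1000 px focal length:
print(project((-1.0, 0.0, 5.0), 1000.0, 1000.0, 960.0, 540.0))  # (760.0, 540.0)
```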

Target configuration

  • Horizontal corners: Total number of inner corners from left to right. The blue dots shown in the above preview correspond to the horizontal corners.

  • Vertical corners: Total number of inner corners from top to bottom. The red dots shown in the above preview correspond to the vertical corners.

  • Square size: The side length of each square in meters. The square size corresponds to the length of the yellow square highlighted in the preview.

  • Left padding: The distance from the leftmost side of the board to the left-most corner point in meters. Corresponds to the left blue line in the preview.

  • Right padding: The distance from the rightmost side of the board to the rightmost corner point in meters. Corresponds to the right blue line in the preview.

  • Top padding: The distance from the topmost side of the board to the topmost corner point in meters. Corresponds to the top red line in the preview.

  • Bottom padding: The distance from the bottom-most side of the board to the bottom-most corner point in meters. Corresponds to the bottom red line in the preview.
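The board geometry described above can be sketched in code. The helpers below are illustrative (the function names are not part of Deepen AI's API): they derive the physical board size and the inner-corner layout from the configuration fields.

```python
# Sketch: derive board dimensions and inner-corner positions from the
# target configuration. Names and conventions here are illustrative.

def board_dimensions(h_corners, v_corners, square_size,
                     left_pad, right_pad, top_pad, bottom_pad):
    """Return (width, height) of the physical board in meters.

    The inner-corner grid spans (h_corners - 1) squares horizontally and
    (v_corners - 1) squares vertically; the paddings extend from the
    outermost corner points to the board edges.
    """
    width = left_pad + (h_corners - 1) * square_size + right_pad
    height = top_pad + (v_corners - 1) * square_size + bottom_pad
    return width, height

def corner_grid(h_corners, v_corners, square_size):
    """List inner-corner (x, y) positions in meters, row by row,
    relative to the top-left inner corner."""
    return [(c * square_size, r * square_size)
            for r in range(v_corners)
            for c in range(h_corners)]

# Example: 7x5 inner corners, 0.1 m squares, 0.05 m padding on all sides
w, h = board_dimensions(7, 5, 0.1, 0.05, 0.05, 0.05, 0.05)
print(round(w, 3), round(h, 3))      # 0.7 0.5
print(len(corner_grid(7, 5, 0.1)))   # 35
```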

Upload files from the mounted camera

Detect target corners in images

Users can click on Detect corners to detect the corners in the target. This is an automated process, and our algorithm usually detects the corners in the image accurately.

Add target configuration for images

For each image, enter the following target configuration:

  1. Distance from Vehicle Reference Point (VRP) to Intersection Reference Point (IRP).

  2. If the board is placed perpendicular to the ground and rests directly on it, the target height should be 0. If the board is perpendicular to the ground but raised above ground level, the target height should be the distance from the bottom edge of the board to the ground. If the board is parallel to the ground, the target height is the thickness of the board itself.

  3. Distance from Intersection Reference Point (IRP) to Target Reference Point (TRP).
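The target-height rule in step 2 can be summarized as a small helper. The function and argument names below are hypothetical, used only to make the three cases explicit.

```python
# Sketch of the target-height rule from step 2. Illustrative names only.

def target_height(orientation, bottom_edge_to_ground=0.0, board_thickness=0.0):
    """Return the target height (meters) to enter for an image.

    orientation: "perpendicular_on_ground", "perpendicular_raised",
                 or "parallel_to_ground".
    """
    if orientation == "perpendicular_on_ground":
        return 0.0                    # board stands directly on the ground
    if orientation == "perpendicular_raised":
        return bottom_edge_to_ground  # bottom edge of board to ground level
    if orientation == "parallel_to_ground":
        return board_thickness        # thickness of the board itself
    raise ValueError(f"unknown orientation: {orientation}")

# A perpendicular board whose bottom edge sits 0.35 m above the ground:
print(target_height("perpendicular_raised", bottom_edge_to_ground=0.35))  # 0.35
```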

Run Calibration

Users must click Run calibration to calculate the extrinsic parameters and error stats.

Visualizer

The Visualize button shows a 3D representation of the car and its wheels, along with the camera center and its frustum.

Error stats

Users can use these error values to estimate the accuracy of the calibration results alongside visual confirmation. The closer the error stats are to zero, the better the extrinsic parameters.

  • Translation Error: The distance between the centroids of the 3D projections of the target corners and the target configuration in the vehicle coordinate system.

  • Rotation Error: The angle between the planes formed from the 3D projections of the target corners and the target configuration in the vehicle coordinate system.
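Under the definitions above, the two metrics could be computed roughly as follows. This is a plain-Python sketch, not Deepen AI's implementation; it assumes both corner sets are given as 3D points in the vehicle coordinate system, and it fits each plane from the first three corners rather than a least-squares fit.

```python
# Sketch of the two error metrics, given two sets of target-corner points.
import math

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def translation_error(projected, configured):
    """Distance between the centroids of the two corner sets (meters)."""
    return math.dist(centroid(projected), centroid(configured))

def _plane_normal(pts):
    # Normal of the plane through the first three (non-collinear) corners.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = pts[0], pts[1], pts[2]
    u = (x1 - x0, y1 - y0, z1 - z0)
    v = (x2 - x0, y2 - y0, z2 - z0)
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def rotation_error_deg(projected, configured):
    """Angle (degrees) between the planes of the two corner sets."""
    a, b = _plane_normal(projected), _plane_normal(configured)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos))
```

Identical corner sets yield zero for both metrics; a pure shift of the board shows up only in the translation error, a pure tilt only in the rotation error.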

Extrinsic Calibration Output

  • roll, pitch, and yaw are in degrees, and px, py, and pz are in meters.

  • vehiclePoint3D is the 3D coordinates of a point in the vehicle coordinate system.

  • imagePoint3D is the 3D coordinates of a point in the camera coordinate system.
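As a rough sketch of how such an extrinsic could be applied: the snippet below assumes a Z-Y-X (yaw, pitch, roll) rotation order and a vehicle-to-camera direction. Both conventions are assumptions to verify against the exported calibration, not a statement of the tool's actual convention.

```python
# Sketch: apply an extrinsic (roll/pitch/yaw in degrees, px/py/pz in meters)
# to a 3D point. Rotation order Z-Y-X and vehicle -> camera direction are
# ASSUMED here; confirm against the exported calibration file.
import math

def rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    r, p, y = (math.radians(a) for a in (roll_deg, pitch_deg, yaw_deg))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def transform(point, roll_deg, pitch_deg, yaw_deg, px, py, pz):
    """Map a vehicle-frame point to the camera frame (assumed convention)."""
    R = rotation_matrix(roll_deg, pitch_deg, yaw_deg)
    x, y, z = point
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + (px, py, pz)[i]
                 for i in range(3))
```

For example, a pure 90-degree yaw maps the point (1, 0, 0) to approximately (0, 1, 0).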

Camera coordinates system

We currently show three different types of camera coordinate systems. The extrinsic parameters change according to the selected Camera coordinate system. The export option exports the extrinsic parameters based on the selected camera coordinate system.

  • Optical coordinate system: It's the default coordinate system that we follow.

  • ROS REP 103: It is the coordinate system followed by ROS. Selecting it updates both the visualization and the extrinsic parameters accordingly.

  • NED: This follows the north-east-down coordinate system.
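The optical and ROS REP 103 frames differ by a fixed axis permutation: REP 103 defines the optical convention as z forward, x right, y down, and the body convention as x forward, y left, z up. A sketch of that conversion follows; whether it matches this tool's export exactly should still be sanity-checked against the visualizer.

```python
# Sketch: point conversion between the optical camera frame (x right,
# y down, z forward) and the ROS REP 103 body frame (x forward, y left,
# z up). Fixed axis permutation per REP 103.

def optical_to_ros(p):
    x, y, z = p
    return (z, -x, -y)   # forward = optical z, left = -optical x, up = -optical y

def ros_to_optical(p):
    x, y, z = p
    return (-y, -z, x)   # right = -ros y, down = -ros z, forward = ros x

# A point 2 m ahead, 0.5 m right, 0.2 m above the camera (optical frame):
print(optical_to_ros((0.5, -0.2, 2.0)))  # (2.0, -0.5, 0.2)
```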

Vehicle Coordinate Frame

The origin of the vehicle coordinate frame is the midpoint of the line joining the rear wheel centers on the ground.

  1. The X-axis points in the vehicle's forward direction.

  2. The Y-axis points toward the left of the vehicle.

  3. The Z-axis points upward.
