
Overlapping-Camera Calibration


Last updated 6 months ago

Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select Overlapping-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting Overlapping-Camera Calibration, the user is taken to the instructions page. Click on Get started to begin the calibration setup.

Approach selection

Users can choose either target-based or targetless calibration. Target-based calibration uses a checkerboard as the calibration target, while targetless calibration uses the features already present in the scene.

Configuration

Target-based

Checkerboard target configuration

  • Horizontal corners: Total number of inner corners from left to right. The blue dots shown in the above preview correspond to the horizontal corners.

  • Vertical corners: Total number of inner corners from top to bottom. The red dots shown in the above preview correspond to the vertical corners.

  • Square size: The side length of each checkerboard square, in meters. It corresponds to the side of the yellow square highlighted in the preview.
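
These three values fully determine the board geometry a corner detector works with. As an illustration only (not Deepen's implementation), the board's 3D corner grid used by OpenCV-style calibration can be built directly from them; the 9x6 board and 25 mm squares below are assumed values:

```python
import numpy as np

# Illustrative board: 9 horizontal x 6 vertical inner corners, 0.025 m squares.
horizontal_corners, vertical_corners, square_size = 9, 6, 0.025

# 3D positions of the inner corners on the board plane (z = 0), in the
# board's own coordinate frame, scaled by the square size in meters.
objp = np.zeros((vertical_corners * horizontal_corners, 3), np.float64)
objp[:, :2] = np.mgrid[0:horizontal_corners, 0:vertical_corners].T.reshape(-1, 2)
objp *= square_size
```

A wrong corner count or square size distorts this grid, which is why the preview highlights exactly which dots and which square the three settings refer to.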

Camera Intrinsic Parameters

Intrinsic parameters for the camera are to be added here. Users have three options.

  • Users can calibrate with the Camera Intrinsic calibration tool, save the results to a calibration profile, and then load that profile here.

  • Users can load the intrinsic parameters from a JSON file.

  • Users can manually enter the intrinsic parameters if they already have them.
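
The JSON schema is not documented on this page. Purely as an illustration, an intrinsics file for a pinhole camera typically carries the focal lengths, principal point, and distortion coefficients; the field names and values below are assumptions, not Deepen's actual format:

```json
{
  "fx": 1450.0,
  "fy": 1450.0,
  "cx": 960.0,
  "cy": 540.0,
  "distortion_coefficients": [-0.12, 0.05, 0.0, 0.0, 0.0]
}
```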

Targetless

Camera Intrinsic Parameters

Intrinsic parameters for the camera are to be added here. Users have three options.

  • Users can calibrate with the Camera Intrinsic calibration tool, save the results to a calibration profile, and then load that profile here.

  • Users can load the intrinsic parameters from a JSON file.

  • Users can manually enter the intrinsic parameters if they already have them.

Upload files from both cameras

You need to upload the images taken from the left and right cameras of the overlapping pair. This page lets you pair them manually, and an Auto Pair button pairs the files automatically based on their filename order.
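
A sketch of the Auto Pair behavior described above (illustrative, not the actual implementation):

```python
def auto_pair(left_files, right_files):
    """Pair left/right images by sorted filename order, as Auto Pair does."""
    left_sorted = sorted(left_files)
    right_sorted = sorted(right_files)
    if len(left_sorted) != len(right_sorted):
        raise ValueError("left and right cameras must have the same number of images")
    return list(zip(left_sorted, right_sorted))

pairs = auto_pair(["left_02.png", "left_01.png"], ["right_01.png", "right_02.png"])
# pairs[0] == ("left_01.png", "right_01.png")
```

Because the pairing is purely name-based, filenames that do not sort in capture order should be paired manually instead.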

Detect feature points (for targetless calibration only)

Choose the feature-matching algorithm and click on Detect to run feature detection.
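
The selected algorithm extracts descriptor vectors from each image and matches them between the pair. As a rough sketch of the matching step only (not the tool's implementation, and independent of which detector produced the descriptors), a mutual nearest-neighbor matcher looks like:

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Match two descriptor sets by mutual nearest neighbor (L2 distance)."""
    # Pairwise squared L2 distances between every descriptor in A and B.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    a_to_b = d2.argmin(axis=1)   # best match in B for each descriptor in A
    b_to_a = d2.argmin(axis=0)   # best match in A for each descriptor in B
    # Keep only pairs that pick each other, discarding one-sided matches.
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

The resulting point correspondences are what the targetless calibration later uses to estimate the geometry between the two cameras.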

Detect Corners and Run Calibration (target-based)

For the target-based approach, clicking Detect Corners and Run Calibration first detects the checkerboard corners in each image pair and then computes the calibration results.

Run Calibration (targetless)

On this page, the user has to click Run calibration to get the calibration results. Once clicked, the images are processed, and the calibration results are shown on the right side of the page.

Each epiline is the projection of the corresponding point from the opposite image, so the quality of the calibration can be judged by how close the epilines pass to their corresponding points. All the epilines in one image also converge at the epipole (which may lie outside the image). For convenience, the Epiline point distance error stat shows the average distance of each point to its respective epiline.
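
The Epiline point distance stat can be reproduced from the fundamental matrix F: for a left-image point x, l' = Fx is its epiline in the right image, and the stat averages the point-to-line distance of each right-image match. A numpy sketch (F and the point arrays are placeholders for values the calibration produces):

```python
import numpy as np

def epiline_point_distance(F, pts_left, pts_right):
    """Mean distance of right-image points to the epilines of their
    left-image matches. Points are Nx2 pixel arrays; F maps left-image
    points to right-image epilines (l' = F x)."""
    n = len(pts_left)
    homog_l = np.hstack([pts_left, np.ones((n, 1))])    # to homogeneous coords
    homog_r = np.hstack([pts_right, np.ones((n, 1))])
    lines = homog_l @ F.T                               # epilines a*x + b*y + c = 0
    num = np.abs((homog_r * lines).sum(axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1)          # sqrt(a^2 + b^2)
    return float((num / den).mean())
```

A well-calibrated pair yields a distance of a fraction of a pixel to a few pixels; large values mean the epipolar geometry does not fit the matched points.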

Error stats

You can toggle the error stats and select either individual or average errors. Individual errors are those of the image pair currently selected in the left pane.

Target-based

  • Left Reprojection Error: The pixel error when the detected corners from the right image are projected onto the left image using the calibration results

  • Right Reprojection Error: The pixel error when the detected corners from the left image are projected onto the right image using the calibration results

  • Translation Error: The distance between the centroids (means) of the 3D projections of the checkerboard in a 3D scene from the left images and right images

  • Rotation Error: The angle between the planes of the 3D projections in a 3D scene of the checkerboard from the left images and right images
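
As a sketch of how these two stats can be computed (an illustration, not Deepen's exact formulation), given the checkerboard corners reconstructed in 3D independently from the left and right images:

```python
import numpy as np

def board_plane_normal(pts):
    """Unit normal of the best-fit plane through Nx3 points (via SVD)."""
    centered = pts - pts.mean(axis=0)
    return np.linalg.svd(centered)[2][-1]   # direction of least variance

def translation_error(pts_left, pts_right):
    """Distance between the centroids of the two 3D reconstructions (meters)."""
    return float(np.linalg.norm(pts_left.mean(axis=0) - pts_right.mean(axis=0)))

def rotation_error_deg(pts_left, pts_right):
    """Angle between the best-fit board planes of the two reconstructions."""
    n1, n2 = board_plane_normal(pts_left), board_plane_normal(pts_right)
    cos_a = np.clip(abs(n1 @ n2), 0.0, 1.0)  # abs: normal sign is arbitrary
    return float(np.degrees(np.arccos(cos_a)))
```

Both errors approach zero when the calibrated extrinsics place the board reconstructed from each camera at the same pose in the shared 3D scene.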

Targetless

  • Epiline Point Distance: Average pixel distance of each point to its corresponding projected epiline.

  • Epipolar Error: Proportional to the distance of a point from its epiline. Does not have a physical meaning. It is the residual error from minimizing the epipolar constraints while calculating the fundamental/essential matrix.

Visualise

Target-based

Clicking the Visualize button takes the user to the visualization screen, which shows the cameras and the checkerboard of the selected image pair. To get a better sense of the scene, the user can navigate the 3D view by click-dragging with the left mouse button (pan) or the right mouse button (orbit). The options dropdown at the top right can also show or hide the checkerboards from all image pairs at once.

Targetless

Only frustums are visible for the targetless calibration.

[Image: target-based images]
[Image: targetless images]
[Image: targetless calibration epilines]
[Image: visualization for the target-based approach]
[Image: visualization for the targetless approach]