Camera Intrinsic calibration

Calibrations Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration Selection

Select the Camera Intrinsic Calibration button to create a new dataset.

Calibration Instructions Page

Upon selecting Camera Intrinsic Calibration, the user is taken to the instructions page. Click Get started to begin the calibration setup.

Configuration for Checkerboard

  • Target configuration = Checkerboard

  • Enable Use EXIF metadata to use the EXIF metadata from the images to optimize calibration. Disabling this is recommended if an external lens is used on the camera.

  • Camera lens model: Use Fish-eye for wide-angle cameras and Standard for all other cameras

    • Standard is the Brown-Conrady camera model

    • Fish-eye is the Kannala-Brandt camera model

  • Horizontal corners: Number of horizontal inner corners in the checkerboard

  • Vertical corners: Number of vertical inner corners in the checkerboard (see the corner-detection sketch after this list)
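
The horizontal and vertical inner-corner counts form the pattern size that a corner detector works with. A minimal sketch of that detection step, assuming OpenCV and an illustrative 9 x 6 board (the image path is hypothetical):

```python
import cv2

# Pattern size = (horizontal inner corners, vertical inner corners) as configured above.
# 9 x 6 is only an illustrative value.
pattern_size = (9, 6)

img = cv2.imread("checkerboard.jpg")          # one of the uploaded images (hypothetical path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect the inner corners of the checkerboard.
found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine the detected corners to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```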

Configuration for Charucoboard

  • Target configuration = Charuco Board

  • Enable Use EXIF metadata to use the EXIF metadata from the images to optimize calibration. Disabling this is recommended if an external lens is used on the camera.

  • Camera lens model: Use Fish-eye for wide-angle cameras and Standard for all other cameras

  • Horizontal corners: Number of chessboard squares in the horizontal direction of the Charucoboard

  • Vertical corners: Number of chessboard squares in the vertical direction of the Charucoboard

  • Square size: The side length of each square on the board, in meters

  • Marker size: The side length of the ArUco markers inside the Charucoboard, in meters (see the board-definition sketch after this list)
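
For reference, these parameters map directly onto a Charuco board definition in OpenCV terms. A minimal sketch, assuming OpenCV 4.7+ and illustrative values (not Deepen's internal representation):

```python
import cv2

# Board geometry as configured above; all values are illustrative.
squares_x, squares_y = 8, 6    # chessboard squares horizontally / vertically
square_size = 0.04             # square side length in meters
marker_size = 0.03             # ArUco marker side length in meters

# One of the supported ArUco dictionaries (see the Charuco Dictionary page).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)

# OpenCV >= 4.7 aruco API; older versions use cv2.aruco.CharucoBoard_create(...).
board = cv2.aruco.CharucoBoard((squares_x, squares_y), square_size, marker_size, dictionary)
```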

Calibration Pipeline

Add images

Upload the images captured with the camera whose intrinsics need to be calculated.

Settings (optional)

Alpha is the free scaling parameter used in Camera Intrinsic Calibration. Alpha can take values between 0 and 1.

  1. Alpha = 0 returns an undistorted image with the minimum number of unwanted pixels.

  2. Alpha = 1 retains all source pixels, adding some extra black pixels.

Default: Includes k1, k2, k3, p1, p2 in distortion coefficients

Extended intrinsics: Includes k1, k2, p1, p2, k3, k4, k5, k6 in distortion coefficients

Minimal (k1, k2): Includes only k1, k2, p1, p2 in the distortion coefficients (excludes k3)
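
For readers reproducing these settings offline, Alpha behaves like OpenCV's free scaling parameter, and the three distortion-model choices correspond roughly to OpenCV calibration flags. A rough sketch of that mapping, for illustration only (not Deepen's internal configuration):

```python
import cv2

# Distortion-model choices above, expressed as OpenCV calibration flags
# (a rough mapping for illustration, not Deepen's internal settings):
DISTORTION_FLAGS = {
    "default": 0,                          # k1, k2, p1, p2, k3
    "extended": cv2.CALIB_RATIONAL_MODEL,  # adds k4, k5, k6
    "minimal": cv2.CALIB_FIX_K3,           # keeps only k1, k2, p1, p2 (k3 fixed at 0)
}

# Alpha behaves like OpenCV's free scaling parameter: 0 crops to valid pixels only,
# 1 keeps every source pixel (with black borders).
def new_camera_matrix(K, dist, image_size, alpha):
    new_K, valid_roi = cv2.getOptimalNewCameraMatrix(K, dist, image_size, alpha)
    return new_K, valid_roi
```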

Run calibration

Click the Calibrate button at the bottom to trigger calibration. You can see the intrinsic parameters and error statistics in the right panel upon completion.
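
For comparison, a standard-model (Brown-Conrady) calibration of this kind can be reproduced offline with OpenCV. A minimal sketch, assuming the board corners have already been detected as above:

```python
import cv2

# obj_points: list of (N, 3) float32 arrays of board corner positions (board frame)
# img_points: list of (N, 1, 2) float32 arrays of detected corners, one per image
# image_size: (width, height) of the calibration images
def run_calibration(obj_points, img_points, image_size, flags=0):
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None, flags=flags
    )
    return rms, K, dist, rvecs, tvecs   # rms is the overall reprojection error

# Wide-angle (Kannala-Brandt) lenses use cv2.fisheye.calibrate instead.
```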

Verify Results

Error Stats

The reprojection error is measured in pixels. It is the mean Euclidean distance between the auto-detected checkerboard corners and the reprojected checkerboard corners. The closer the reprojection error is to zero, the better the intrinsics.
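
A minimal sketch of how such a mean reprojection error can be computed from calibration outputs, assuming OpenCV conventions for the detected and reprojected corners:

```python
import cv2
import numpy as np

# Mean reprojection error (pixels) over all images, given calibration outputs.
def mean_reprojection_error(obj_points, img_points, rvecs, tvecs, K, dist):
    distances = []
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        # Euclidean distance between detected and reprojected corners.
        diff = imgp.reshape(-1, 2) - projected.reshape(-1, 2)
        distances.append(np.linalg.norm(diff, axis=1))
    return float(np.mean(np.concatenate(distances)))
```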

Uncertainties (only for fisheye camera model)

The uncertainties represent the potential variability in the optimized parameters. They reflect how precisely each parameter is determined during calibration, based on the provided input data.

  • Lower uncertainty: Indicates higher confidence in the parameter's accuracy and reliability.

  • Higher uncertainty: Suggests the parameter is less well-determined, which could result from insufficient data, poor data distribution, or sensitivity to noise and errors.

By analyzing these uncertainties, you can assess the quality and robustness of the calibration results.
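
As an illustration of the concept (not Deepen's implementation), OpenCV's calibrateCameraExtended returns per-parameter standard deviations for the standard model; the fisheye uncertainties reported here are analogous:

```python
import cv2

# Illustration only: calibrateCameraExtended also returns per-parameter
# standard deviations for the standard (Brown-Conrady) model.
def calibrate_with_uncertainties(obj_points, img_points, image_size):
    (rms, K, dist, rvecs, tvecs,
     std_intrinsics, std_extrinsics, per_view_errors) = cv2.calibrateCameraExtended(
        obj_points, img_points, image_size, None, None
    )
    # std_intrinsics holds standard deviations for fx, fy, cx, cy, k1, k2, p1, p2, k3, ...
    return K, dist, std_intrinsics
```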

Undistorted Image

  • Users can visualize the Undistorted image to check the quality of the intrinsics.

  • The side-by-side view can be used to check both the distorted and undistorted images simultaneously.
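
A minimal sketch of producing such a side-by-side comparison offline, assuming OpenCV and previously computed intrinsics:

```python
import cv2
import numpy as np

# Undistort an image and place it next to the original for visual comparison.
def side_by_side_undistort(img, K, dist, alpha=0.0):
    h, w = img.shape[:2]
    new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha)
    undistorted = cv2.undistort(img, K, dist, None, new_K)
    return np.hstack([img, undistorted])
```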

Checkerboard Coverage

  • Checkerboard coverage shows the area covered by the checkerboard corners from all uploaded images. The higher the coverage, the better the intrinsic parameters.

    • 0 - 50% is low coverage

    • 51 - 70% is moderate coverage

    • 71 - 100% is good coverage

  • Users can see the individual reprojection error of all the checkerboard corner points. A color ramp is used to depict the reprojection error. A light red color shows a lower reprojection error, and a darker red indicates a higher reprojection error.
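
A rough sketch of how such a coverage figure could be estimated, approximating each detected board by the convex hull of its corners (an illustration, not necessarily Deepen's exact metric):

```python
import cv2
import numpy as np

# Fraction of the image area touched by detected boards across all images.
def corner_coverage(all_corners, image_size):
    w, h = image_size
    mask = np.zeros((h, w), dtype=np.uint8)
    for corners in all_corners:                      # (N, 1, 2) corners per image
        hull = cv2.convexHull(corners.astype(np.int32))
        cv2.fillConvexPoly(mask, hull, 1)
    return float(mask.mean())                        # 0.0 (none) .. 1.0 (full coverage)
```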

Save to profile

Camera Intrinsic parameters can be saved to the profile for easier import in other calibrations.
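
A minimal sketch of storing the resulting intrinsics locally for reuse; the field names are hypothetical and do not reflect Deepen's profile format:

```python
import json
import numpy as np

# Hypothetical local serialization of the calibrated intrinsics; Deepen stores
# profiles in the web app, so these field names are illustrative only.
def save_intrinsics(path, K, dist):
    data = {
        "camera_matrix": np.asarray(K).tolist(),
        "distortion_coefficients": np.asarray(dist).ravel().tolist(),
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```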

Charucoboard Dictionary: There are multiple types of ArUco markers from which the Charucoboard can be made. See the Charuco Dictionary page for the supported Charuco dictionaries.