Lane based Targetless Vehicle Camera Calibration


Calibration Homepage

  • This page lets users view, create, launch, and delete calibration datasets. Admins can manage users’ access to these datasets on this page.

  • Click on New Calibration to create a new calibration dataset.

Calibration selection

Select Vehicle-Camera Calibration to create a new dataset.

Calibration Instructions Page

Upon selecting Vehicle-Camera Calibration, the user is taken to the instructions page. Click on Get started to begin the calibration setup.

Approach selection

  1. Select the Terrain as Flat and the approach as Targetless.

  2. Select the lane-based calibration option "Atleast 3 equidistant lane boundary lines (New)".

Configuration

Camera Intrinsic Parameters

Intrinsic parameters for the camera are added here. Users have three options:

  • Users can use the Camera Intrinsic calibration tool to calibrate the camera, save the results to a profile, and then load that profile here. For more details, see the Camera Intrinsic calibration page.

  • Users can load the intrinsic parameters from a JSON file (a minimal loading sketch follows this list).

  • Users can manually enter the intrinsic parameters if they already have them.
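
Whichever option is used, the intrinsics amount to a standard 3×3 camera matrix plus distortion coefficients. The sketch below is illustrative only: the file name and JSON field names (fx, fy, cx, cy, distortion) are assumptions, not the tool's actual schema.

```python
import json

import numpy as np

# Hypothetical file name and field names -- the actual JSON schema
# is defined by the Deepen tool, not by this sketch.
with open("camera_intrinsics.json") as f:
    intr = json.load(f)

fx, fy = intr["fx"], intr["fy"]              # focal lengths (pixels)
cx, cy = intr["cx"], intr["cy"]              # principal point (pixels)

# Standard 3x3 pinhole camera matrix.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
dist = np.array(intr.get("distortion", []))  # distortion coefficients, if any
```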

Upload image files from the mounted camera

Add lanes in the Image

  1. Select any of the uploaded images and draw at least 3 straight lines matching the equidistant lane boundaries visible in the image.

  2. Repeat the above step for at least 10 images to improve the accuracy of the final calibration results.

  3. Upon completion, click Continue to move to the calibration page.

Run Calibration

On the calibration page, click Calibrate to compute the extrinsic parameters of the camera with respect to the vehicle coordinate system.

Visualizer

On successful calibration, click the Visualize button at the top right to view the bird's-eye view (BEV) representation of the camera image according to the calibrated extrinsic parameters.
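
As background on what a BEV view involves, the sketch below warps a camera image onto the road plane using the plane homography induced by the intrinsics and extrinsics. All values are hypothetical placeholders, and this is not the tool's internal implementation.

```python
import cv2
import numpy as np

img = cv2.imread("frame_0000.png")           # any image from the mounted camera

# Hypothetical calibration values -- substitute your own results.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])               # camera intrinsic matrix
R, _ = cv2.Rodrigues(np.array([np.deg2rad(-90.0), 0.0, 0.0]))  # camera-from-vehicle rotation (placeholder)
t = np.array([0.0, -1.5, 2.0])                # camera-from-vehicle translation (metres, placeholder)

# Points on the road plane (z = 0 in the vehicle frame) project into the
# image through the plane homography H = K [r1 r2 t].
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

# Warping with the inverse homography yields the BEV image; S maps metric
# ground coordinates to BEV pixels (here 20 px per metre).
S = np.array([[20.0, 0.0, 400.0],
              [0.0, -20.0, 780.0],
              [0.0, 0.0, 1.0]])
bev = cv2.warpPerspective(img, S @ np.linalg.inv(H), (800, 800))
cv2.imwrite("bev.png", bev)
```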

Error function

  • For an ideal calibration, the lanes should appear parallel and equidistant when transformed into the BEV (bird's-eye view) image.

  • Based on this, we calculate the Parallelism error and the Equidistant error and combine the two to get the final error.

  • Parallelism error

    • $P_e = \frac{1}{N}\sum_{i=1}^{N} |m_i - \hat{m}|$,

    • where $m_i$ is the slope of the $i$-th line in the BEV image and $\hat{m}$ is the mean slope of all the lanes.

  • Equidistant error

    • $E_e = \frac{1}{N}\sum_{i=1}^{N} \frac{|c_{i+1} - c_i|}{\sqrt{1 + \hat{m}^2}}$,

    • where $c_i$ is the intercept of the $i$-th line in its BEV equation $ax + by + c = 0$ and $\hat{m}$ is the mean slope of all the lanes.

  • $\mathrm{Error} = \frac{\alpha P_e + \beta E_e}{N_c}$, where $\alpha$, $\beta$, and $N_c$ are normalisation constants (a sketch of this computation follows).
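
A minimal sketch of evaluating these formulas, assuming each BEV lane has already been fitted as a (slope, intercept) pair; the normalisation constants are placeholders:

```python
import numpy as np

def lane_error(lines, alpha=1.0, beta=1.0, n_c=1.0):
    """Combined lane error for BEV lines given as (slope, intercept) pairs.

    alpha, beta and n_c are the normalisation constants from the formulas
    above; the defaults here are placeholders.
    """
    m = np.array([slope for slope, _ in lines])
    c = np.array([intercept for _, intercept in lines])
    m_hat = m.mean()  # mean slope of all lanes

    # Parallelism error: mean absolute deviation of each slope from the mean.
    p_e = np.mean(np.abs(m - m_hat))

    # Equidistant error: perpendicular distances between consecutive lines
    # y = m_hat * x + c_i (np.diff yields the N-1 consecutive gaps).
    e_e = np.mean(np.abs(np.diff(c)) / np.sqrt(1.0 + m_hat**2))

    return (alpha * p_e + beta * e_e) / n_c

# Example: three near-parallel lanes roughly 3.5 m apart in the BEV.
print(lane_error([(0.01, 0.0), (0.012, 3.5), (0.009, 7.1)]))
```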

Graph

  • The graph below plots the Deepen error function against the actual deviation from the ground truth for 9 observations (including the ground truth) on a KITTI dataset. It shows a strong correlation between our error function and the actual deviation within 1.3 degrees of the ground truth.

  • The ground truth for this dataset is 0.634° (roll), −0.430° (pitch), 0.310° (yaw). The extrinsic parameters estimated by optimising our error function are 1.568°, −0.215°, 0.327°, a deviation of just 0.95 degrees from the ground truth, with most of the error in the roll angle estimate.

Extrinsic Calibration Output:

  • The extrinsic parameters of the camera are given with respect to the vehicle's ROS coordinate system.

  • In the ROS coordinate system of a vehicle, the X-axis points along the vehicle's forward direction, the Y-axis points to the left of the vehicle, and the Z-axis points upward, perpendicular to the road plane.

  • Roll, Pitch, and Yaw are the extrinsic parameters downloaded from the calibration tool; in the tool they are expressed in degrees (see the sketch below).
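
For users consuming the output, the sketch below converts the downloaded angles into a rotation matrix, assuming the standard ROS fixed-axis (extrinsic x-y-z) roll-pitch-yaw convention; verify the convention against your own setup.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Roll, pitch, yaw in degrees as downloaded from the calibration tool
# (the KITTI estimates quoted above are used here as sample values).
roll, pitch, yaw = 1.568, -0.215, 0.327

# ROS convention: extrinsic rotations about the fixed X (roll), Y (pitch)
# and Z (yaw) axes, with X forward, Y left, Z up.
R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
print(np.round(R, 4))
```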
