Rough Terrain Vehicle-Camera Calibration

1. Calibration list page, where users can load an existing dataset or create a new one.

2. New calibration selection modal.

3. Get started page of the vehicle-camera setup.

4. Calibration settings modal.

  • Dataset name can be added here.

  • The user has to select the shape of the vehicle: either rectangle or trapezoid.

For rectangle-shaped vehicles, users can input the measured values.

For trapezoid-shaped vehicles, users can input the following measured values.

Description for vehicle details:

Configure checkerboard and ArUco:

ArUco markers are used for automatic wheel detection. Add the measurement of the marker.

Similarly, checkerboard configurations need to be updated.

5. Enter the intrinsic parameters for the mounted camera.

  • Intrinsic parameters for the camera are to be added here. Users have three options: enter the values directly, load them from a saved calibration profile, or upload a JSON file.

  • Users can use the intrinsic calibration tool to calibrate, save the results to a profile, and then load that profile here.

  • Alternatively, users can load a JSON file with the intrinsic parameters (see the sketch after this list).
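
The JSON schema for intrinsics is defined by the tool and is not reproduced here. Purely as an illustration, the sketch below reads typical pinhole intrinsics (focal lengths, principal point, distortion coefficients) from a hypothetical JSON file and assembles them into an OpenCV-style camera matrix; the file name and field names are assumptions, not the tool's actual format.

```python
import json

import numpy as np

# Hypothetical file and field names -- the actual schema is defined by the Deepen tool.
with open("mounted_camera_intrinsics.json") as f:
    intr = json.load(f)

fx, fy = intr["fx"], intr["fy"]        # focal lengths, in pixels
cx, cy = intr["cx"], intr["cy"]        # principal point, in pixels
dist = np.array(intr["distortion"])    # e.g. [k1, k2, p1, p2, k3]

# 3x3 pinhole camera matrix in the form used by OpenCV routines
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```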

6. Add images taken from the mounted camera: one for the left view and one for the right view.

7. Detect the checkerboard corners in both mounted camera images.

  • Click on Detect corners to have the checkerboard corners detected automatically (see the sketch after this list).

  • Alternatively, manually add the border corners to obtain all the checkerboard corners.
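
Corner detection is performed by the tool itself; the sketch below only illustrates what automatic checkerboard corner detection typically involves, using OpenCV's standard detector. The pattern size and image path are placeholder assumptions.

```python
import cv2

pattern_size = (9, 6)                  # placeholder: interior corners per row/column
img = cv2.imread("mounted_left.png")   # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Coarse detection of all interior checkerboard corners
found, corners = cv2.findChessboardCorners(gray, pattern_size)

if found:
    # Refine the detected corners to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```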

8. Enter the intrinsic parameters for the external camera.

  • Intrinsic parameters for the camera are to be added here. Users have the same three options as in step 5: enter the values directly, load them from a saved calibration profile, or upload a JSON file.

  • Users can use the intrinsic calibration tool to calibrate, save the results to a profile, and then load that profile here.

  • Alternatively, users can load a JSON file with the intrinsic parameters.

9. Upload images taken from the external camera for the left view.

10. Detect the checkerboard corners for all the left view external camera images.

  • Click on Detect corners to have the checkerboard corners detected automatically.

  • Alternatively, manually add the border corners to obtain all the checkerboard corners.

11. Front and rear wheels are auto-detected.

  • Wheel points are detected automatically; users can view the markers by selecting undistorted images (see the sketch after this step).
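
Wheel detection is handled automatically by the tool using the ArUco markers configured in step 4. Purely as an illustration of the underlying idea, the sketch below detects ArUco markers with OpenCV (4.7+) and takes each marker's centre as a candidate wheel point; the dictionary and image path are assumptions.

```python
import cv2

# Assumed dictionary -- in practice it must match the markers configured in step 4.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

img = cv2.imread("external_left_01.png")               # placeholder image path
corners, ids, _rejected = detector.detectMarkers(img)

# Use each detected marker's centre as a candidate wheel point (pixel coordinates)
wheel_points = [c.reshape(4, 2).mean(axis=0) for c in corners]
```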

12. Upload images taken from the external camera for the right view.

13. Detect the checkerboard corners for both the right view external camera images.

  • Click on Detect corners to have the checkerboard corners detected automatically.

  • Alternatively, manually add the border corners to obtain all the checkerboard corners.

14. Front and rear wheels are auto-detected.

  • Wheel points are detected automatically; users can view the markers by selecting undistorted images.

15. Click on the Run calibration button. This takes all the input configuration and file data and computes the calibrated results.

16. The top-right bar shows the extrinsic parameters.

17. The Visualize button shows a 3D representation of the car and its wheels, along with the camera center and its frustum.

18. The Export option lets the user export the calibrated data for the mounted camera with respect to the vehicle.

19. Users can check the error stats and add more images to see how the error stats change (a sketch of how such metrics are typically computed follows this list).

  • Reprojection Error: the mean distance between each marked wheel point and the reprojection of the corresponding calibrated wheel position, measured in pixels.

  • Translation Error: the mean distance between the ray produced from each marked wheel point and the corresponding calibrated wheel position in 3D space, measured in meters.
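
The tool computes these statistics internally. The sketch below is only a rough outline of how such metrics are commonly defined, assuming the marked wheel pixels, the camera rays through them, and the calibrated wheel positions (in the camera frame) are available as NumPy arrays; the exact formulas used by the tool may differ.

```python
import cv2
import numpy as np

def reprojection_error_px(marked_px, wheel_xyz, K, dist):
    """Mean pixel distance between marked wheel points (N, 2) and the
    reprojection of the calibrated wheel positions (N, 3, camera frame)."""
    # rvec = tvec = 0 because the wheel positions are already in the camera frame
    projected, _ = cv2.projectPoints(wheel_xyz.astype(np.float64),
                                     np.zeros(3), np.zeros(3), K, dist)
    return float(np.linalg.norm(projected.reshape(-1, 2) - marked_px, axis=1).mean())

def translation_error_m(rays, wheel_xyz):
    """Mean distance in meters between the camera rays through the marked wheel
    points (N, 3 unit direction vectors) and the calibrated wheel positions (N, 3)."""
    # Closest point on each ray (through the camera origin) to the wheel position
    closest = np.sum(wheel_xyz * rays, axis=1, keepdims=True) * rays
    return float(np.linalg.norm(wheel_xyz - closest, axis=1).mean())
```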

Save calibration dataset:

There is a Save option in the top-right corner. A user can click the Save button at any time during the calibration process to save the calibration dataset.

Extrinsic Calibration Output:

  • roll, pitch, and yaw are in degrees; px, py, and pz are in meters.

  • roll, pitch, yaw, px, py, and pz are the extrinsic parameters downloaded from the calibration tool (see the sketch after this list for how they relate to the points below).

  • vehiclePoint3D contains the 3D coordinates of a point in the vehicle coordinate system.

  • imagePoint3D contains the 3D coordinates of a point in the camera coordinate system.
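
As an illustration of how these quantities fit together, the sketch below builds a rigid transform from roll/pitch/yaw and px/py/pz and maps a vehiclePoint3D into the camera frame. The Euler-angle convention and the direction of the transform are assumptions, so verify them against the file exported from the tool; the numeric values are examples only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Example extrinsic parameters as downloaded from the calibration tool
roll, pitch, yaw = 1.2, -0.8, 90.0     # degrees
px, py, pz = 0.35, 0.02, 1.60          # meters

# Assumed convention: roll about x, pitch about y, yaw about z
R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
t = np.array([px, py, pz])

# Assumed direction: vehicle coordinate system -> camera coordinate system
vehiclePoint3D = np.array([2.0, 0.0, 0.0])   # point in the vehicle coordinate system
imagePoint3D = R @ vehiclePoint3D + t        # same point in the camera coordinate system
```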