JSON input format for uploading a dataset in a point cloud project.

Details on the JSON dataset format for point cloud projects

The zip file should contain a set of JSON files and camera images. Each JSON file corresponds to one point cloud frame of the dataset. JSON files should be at the root directory level. The order of the frames is determined by sorting the JSON filenames in ascending order. For instance, filenames can be 0001.json, 0002.json, 0003.json, ..., or simply 0.json, 1.json, 2.json, ...
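
For illustration, a zip for a three-frame dataset with one camera could be laid out as shown below (the archive name and image paths are only examples; image_url can also point elsewhere inside the zip or to an external URL):

dataset.zip
├── 0001.json
├── 0002.json
├── 0003.json
└── images/
    ├── 0001.png
    ├── 0002.png
    └── 0003.png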

Each JSON file should be an object with the following five fields (a minimal skeleton is shown after the list):

  1. Images

  2. Timestamp

  3. Points

  4. Device position

  5. Device heading
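
For orientation, the top level of one frame's JSON therefore looks like the sketch below (values elided; each field is described in the sections that follow):

{
  "images": [ ... ],
  "timestamp": ...,
  "points": [ ... ],
  "device_position": { "x": ..., "y": ..., "z": ... },
  "device_heading": { "x": ..., "y": ..., "z": ..., "w": ... }
}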

1. Images

images (array) - array of all camera images corresponding to one point cloud frame. Usually, the number of elements corresponds to the number of cameras in the system. If there are no images, please use an empty array. Each element is a JSON object. Fields in the image object are as follows:

  1. fx (float) - focal length in the x direction.

  2. fy (float) - focal length in the y direction.

  3. cx (float) - x coordinate of principal point.

  4. cy (float) - y coordinate of principal point.

  5. timestamp (float) - time in seconds when the image was captured.

  6. image_url (string) - corresponds to an image path inside the zip file, e.g. “images/0001.png”. It can also be an external URL. Image types supported are .jpeg and .png.

  7. position (object) - position of the camera with respect to the world frame. Details of JSON objects can be found below.

  8. heading (object) - orientation of the camera with respect to the world frame. Please find details of the JSON object below.

  9. camera_model (string) - the camera model to be used for undistorting the image. Supported values for camera_model:

     • pinhole (default) - uses k1, k2, p1, p2, k3, k4 distortion coefficients

     • fisheye - uses k1, k2, k3, k4 distortion coefficients

     • mod_kannala - uses k1, k2, k3, k4 distortion coefficients

  10. k1 (float) - distortion coefficient.

  11. k2 (float) - distortion coefficient.

  12. p1 (float) - distortion coefficient.

  13. p2 (float) - distortion coefficient.

  14. k3 (float) - distortion coefficient.

  15. k4 (float) - distortion coefficient.

  16. camera_name (string) - optional; if given in the JSON file, the tool will use this name to refer to the camera instead of camera_0, camera_1, etc.

A sample image JSON is as follows:
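
The values below are excerpted from the complete sample at the end of this page; optional fields may be omitted:

{
  "fx": 561.997914,
  "fy": 561.585651,
  "cx": 664.16411,
  "cy": 361.97667,
  "timestamp": 1541186225.8394644,
  "image_url": "0.png",
  "k1": -0.142792,
  "k2": 0.022846,
  "p1": 0.001203,
  "p2": 0.00251,
  "k3": 0,
  "position": {
    "x": 311.21505956090624,
    "y": -152.77584902657554,
    "z": -10.854137529636024
  },
  "heading": {
    "x": 0.034278837280808494,
    "y": -0.7046155108831117,
    "z": 0.7070617895701465,
    "w": -0.04904659893885366
  },
  "camera_model": "pinhole",
  "camera_name": "front"
}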

2. Timestamp

timestamp (float) – time in seconds at which the point cloud frame was captured.

3. Points

Points can be given in two formats: the first is an array of JSON objects, and the second is base64-encoded strings of points and intensities.

Points in JSON object array format: points is an array of JSON objects, one per LiDAR point, carrying that point's x, y, z, i, r, g, b, d values. The x, y and z values are mandatory; the i, r, g, b and d values are optional for each point. In general, the "up" direction towards the sky should be along the positive z direction for the visualization to work correctly. Each element of the array is a JSON object, as shown in this section. The rgb values of xyzrgb-type points will be supported in a future release. Each point can also carry other values, such as velocity, for which we can add custom support. Fields in the point object are as follows:

  1. x (float) – x coordinate of the point, in meters.

  2. y (float) – y coordinate of the point, in meters.

  3. z (float) – z coordinate of the point, in meters.

  4. i (float) - intensity value between 0 and 1; this is an optional field.

  5. d (integer) - non-negative device id used to distinguish points from multiple sensors; this is an optional field.

The x, y and z values are in world coordinates. If you are unable to express the point cloud in world coordinates, you can fall back to the local LiDAR coordinates and let us know; we will contact you about the issue.

For multi-LiDAR points, add the field 'd' to each point in the points array to represent the LiDAR id; it should be a non-negative integer value. A sample points array is as follows:

"points": [
        {
            "i": 4.00,
            "x": -0.10,
            "y": 6.22,
            "z": 1.66,
            "d": 1
        },
        {
            "i": 11.00,
            "x": -0.14,
            "y": 9.20,
            "z": 1.80,
            "d": 2
        },
        {
            "i": 14.00,
            "x": -0.17,
            "y": 10.69,
            "z": 1.52,
            "d": 3
        }
]

If you want to assign a name to each LiDAR id, add another field, "multi_lidar_keys". Please note that this field is optional.

"multi_lidar_keys" : {
        "1" : "Lidar_1",
        "2" : "Lidar_2",
        "3" : "Lidar_3"
    }

4. Device position

device_position (object) – position of the LiDAR or camera with respect to the world frame. As with the point cloud, if you are unable to express the device position in world coordinates, you can fall back to the local LiDAR coordinates and let us know; we will contact you about the issue. For a camera, if you do not have any position information, please use (0, 0, 0) and let us know. Fields in the position object are as follows:

  1. x (float) – x coordinate of device/camera position, in meters.

  2. y (float) – y coordinate of device/camera position, in meters.

  3. z (float) – z coordinate of device/camera position, in meters.

Sample position JSON object:
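
The values below are taken from the device_position in the complete sample at the end of this page:

{
  "x": 311.42759643080274,
  "y": -152.4309172401979,
  "z": -11.704321251954227
}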

5. Device heading

device_heading (object) – orientation of the LiDAR or camera with respect to the world frame, expressed as a quaternion. If you are unable to express the LiDAR heading in world coordinates, please use the identity quaternion (x = 0, y = 0, z = 0, w = 1). If you cannot obtain extrinsic camera calibration parameters, please also use the identity quaternion; we will contact you about this issue. The four quaternion components in the heading object are as follows:

  1. x (float) – x component of device/camera orientation.

  2. y (float) – y component of device/camera orientation.

  3. z (float) – z component of device/camera orientation.

  4. w (float) – w component of device/camera orientation.

A sample heading JSON object is as follows:
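
The values below are taken from the device_heading in the complete sample at the end of this page:

{
  "x": -0.006511549504752948,
  "y": -0.014390929501214435,
  "z": -0.8798010889637369,
  "w": 0.4750795141124911
}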

Please note that in JSON, the order of the quaternion's keys does not matter. The following two JSONs will give exactly the same result:
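
For example, the following two heading objects (using the same values as above) are interpreted identically:

{
  "x": -0.006511549504752948,
  "y": -0.014390929501214435,
  "z": -0.8798010889637369,
  "w": 0.4750795141124911
}

{
  "w": 0.4750795141124911,
  "z": -0.8798010889637369,
  "y": -0.014390929501214435,
  "x": -0.006511549504752948
}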

A complete sample JSON file can be found below:

{
  "images": [
    {
      "fx": 561.997914,
      "timestamp": 1541186225.8394644,
      "p2": 0.00251,
      "k1": -0.142792,
      "p1": 0.001203,
      "k3": 0,
      "k2": 0.022846,
      "cy": 361.97667,
      "cx": 664.16411,
      "image_url": "0.png",
      "fy": 561.585651,
      "position": {
        "y": -152.77584902657554,
        "x": 311.21505956090624,
        "z": -10.854137529636024
      },
      "heading": {
        "y": -0.7046155108831117,
        "x": 0.034278837280808494,
        "z": 0.7070617895701465,
        "w": -0.04904659893885366
      },
      "camera_model": "pinhole",
      "camera_name": "front"
    },
    {
      "fx": 537.74122,
      "timestamp": 1541186225.8499014,
      "p2": -0.000507,
      "k1": -0.133161,
      "p1": -0.0007,
      "k3": 0,
      "k2": 0.020764,
      "cy": 353.596887,
      "cx": 687.798477,
      "image_url": "1.png",
      "fy": 541.411032,
      "position": {
        "y": -152.7458074214421,
        "x": 311.168923367011,
        "z": -10.855340458227541
      },
      "heading": {
        "y": -0.571381519522144,
        "x": -0.4283386878183726,
        "z": 0.5635977900941452,
        "w": 0.4152188081814165
      },
      "camera_name": "rear"
    }
  ],
  "timestamp": 1541186225.848686,
  "device_heading": {
    "y": -0.014390929501214435,
    "x": -0.006511549504752948,
    "z": -0.8798010889637369,
    "w": 0.4750795141124911
  },
  "points": [
    {
      "y": -147.6858459726749,
      "x": 319.51523557465174,
      "z": -11.55716049374703,
      "i": 0.32
    },
    {
      "y": -147.709804574419,
      "x": 319.5387083489352,
      "z": -11.559980704176585,
      "i": 0.32
    },
    {
      "y": -147.8861052361809,
      "x": 319.3026396838094,
      "z": -11.536266484496409,
      "i": 0.32
    }
  ],
  "device_position": {
    "y": -152.4309172401979,
    "x": 311.42759643080274,
    "z": -11.704321251954227
  }
}

Another sample (multi-LiDAR):

{
  "images": [
    {
      "fx": 561.997914,
      "timestamp": 1541186225.8394644,
      "p2": 0.00251,
      "k1": -0.142792,
      "p1": 0.001203,
      "k3": 0,
      "k2": 0.022846,
      "cy": 361.97667,
      "cx": 664.16411,
      "image_url": "camera_1.jpg",
      "fy": 561.585651,
      "position": {
        "y": -152.77584902657554,
        "x": 311.21505956090624,
        "z": -10.854137529636024
      },
      "heading": {
        "y": -0.7046155108831117,
        "x": 0.034278837280808494,
        "z": 0.7070617895701465,
        "w": -0.04904659893885366
      },
      "camera_name": "front"
    },
    {
      ... (another 5 cameras with similar settings follow, each ending with its own heading, e.g.)
      "heading": {
        "y": -0.3485952869673183,
        "x": -0.7973743859088446,
        "z": 0.1897018750524244,
        "w": 0.45463019389638054
      }
    },
    {
      "fx": 354.848025,
      "timestamp": 1541186225.832587,
      "cy": 331.448458,
      "cx": 812.600868,
      "image_url": "another_camera_1.jpg",
      "fy": 356.521296,
      "position": {
        "y": -152.11720409699208,
        "x": 311.6143516201334,
        "z": -10.845064295267472
      },
      "heading": {
        "y": 0.2603210255938851,
        "x": -0.8843903286926823,
        "z": -0.11886794907638565,
        "w": 0.36872363747252623
      }
    }
  ],
  "timestamp": 1541186225.848686,
  "device_heading": {
    "y": -0.014390929501214435,
    "x": -0.006511549504752948,
    "z": -0.8798010889637369,
    "w": 0.4750795141124911
  },
  "points": [
        {
            "i": 4.00,
            "x": -0.10,
            "y": 6.22,
            "z": 1.66,
            "d": 1
        },
        {
            "i": 11.00,
            "x": -0.14,
            "y": 9.20,
            "z": 1.80,
            "d": 2
        },
        {
            "i": 14.00,
            "x": -0.17,
            "y": 10.69,
            "z": 1.52,
            "d": 3
        }
  ],
  "multi_lidar_keys" : {
        "1" : "Lidar_1",
        "2" : "Lidar_2",
        "3" : "Lidar_3"
  },
  "device_position": {
    "y": -152.4309172401979,
    "x": 311.42759643080274,
    "z": -11.704321251954227
  }
}


If the images are already undistorted, k1, k2, p1, p2, etc. should all be 0. You can find more details on the camera parameters here.