Output JSON format

Details about the structure of the downloaded output JSON file

Labels in 2D and 3D projects can be downloaded as a JSON file. The output JSON file contains a labels array; each label object has the following fields:

  1. file_id (string): refers to the ID of the image/frame to which the label belongs.

  2. label_category_id (string): refers to the ID of the label category assigned to the label.

  3. label_id (string): refers to the ID of the label.

  4. label_type (string): refers to the type of label created. It can have one of the following values:
     a. box: a 2D bounding box label
     b. lane: a 2D lane label
     c. polygon: a 2D polygon label
     d. point: a 2D point label
     e. scene: a 2D scene label
     f. 3d_bbox: a 3D bounding box label
     g. 3d_point: a 3D point label
     h. 3d_polyline: a 3D polyline label
     i. 3d_polygon: a 3D polygon label
     j. scene: a 3D scene label

  5. stage_id (string): refers to the stage of the label.

  6. attributes_source: indicates whether the attributes were added manually or automatically.

  7. create_time_millis: refers to the time at which the label was created, in milliseconds.

  8. label_set_id: refers to the ID of the label set; label sets allow multiple users to label the same object, each under their own label set ID.

  9. labeller_email: the email address of the labeller.

  10. sensor_id: the ID of the sensor.

  11. update_time_millis: refers to the time at which the label was last updated, in milliseconds.

  12. attributes (object): consists of a key-value pair for each label attribute.

  13. box (array): an array of 4 numbers representing a 2D bounding box. The first two numbers are the x and y coordinates of the top-left corner. The third and fourth numbers are the lengths of the bounding box along the x and y axes, respectively. This field is populated only when the label type is box (see the sketch after this list).

  14. polygons (array): an array of polygons. Each polygon is an array of points, and each point is a two-element array of x and y coordinates. This field is populated only when the label type is polygon or lane.

  15. point (array): consists of the coordinates of the point. This field is populated only when the label type is point.

  16. three_d_bbox (object): describes a 3D bounding box (see the sketch after this list) and has the following fields:
     a. l (float): the length of the 3D bounding box
     b. w (float): the width of the 3D bounding box
     c. h (float): the height of the 3D bounding box
     d. cx (float): the x coordinate of the center of the 3D bounding box
     e. cy (float): the y coordinate of the center of the 3D bounding box
     f. cz (float): the z coordinate of the center of the 3D bounding box
     g. rot_z (float): the angle (in radians) made by the front face of the 3D bounding box with the negative x-axis, measured in the clockwise direction
     h. quaternion: the rotation of the box expressed as a quaternion
     This field is populated only when the label type is 3d_bbox.

  17. three_d_point_indices (array): consists of indices of LiDAR points in the point cloud file. This field is populated only when the label type is 3d_point.

  18. three_d_polygon (array): an array of the x, y, and z coordinates of the points comprising the polygon. This field is populated only when the label type is 3d_polygon.

  19. three_d_polyline (object): has the following fields:
     a. polyline (array): the x, y, and z coordinate values of the points forming the polyline
     b. polyline_width (float): the width of the polyline
     This field is populated only when the label type is 3d_polyline.
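
The numeric fields above usually need a small amount of post-processing before they can be drawn or compared. The snippet below is a minimal sketch, not part of the export itself: box_to_corners and three_d_bbox_corners are hypothetical helper names, and the mapping from rot_z (or the quaternion field) to a conventional counter-clockwise yaw is left to the caller, since it depends on your coordinate conventions.

```python
import math

def box_to_corners(box):
    """Convert a 2D 'box' field [x, y, len_x, len_y] into
    (x_min, y_min, x_max, y_max) corner coordinates."""
    x, y, len_x, len_y = box
    return x, y, x + len_x, y + len_y

def three_d_bbox_corners(bbox, yaw):
    """Return the 8 corners of a 3D bounding box from its center (cx, cy, cz),
    dimensions (l, w, h) and a counter-clockwise yaw angle (radians) about the
    z-axis. Deriving yaw from rot_z (clockwise from the negative x-axis) or from
    the quaternion field is convention-dependent and is left to the caller."""
    cx, cy, cz = bbox["cx"], bbox["cy"], bbox["cz"]
    l, w, h = bbox["l"], bbox["w"], bbox["h"]
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    corners = []
    for sx in (-0.5, 0.5):
        for sy in (-0.5, 0.5):
            for sz in (-0.5, 0.5):
                # Corner in the box frame, rotated about z and translated to the center.
                px, py, pz = sx * l, sy * w, sz * h
                corners.append((cx + px * cos_y - py * sin_y,
                                cy + px * sin_y + py * cos_y,
                                cz + pz))
    return corners
```

For example, under one possible reading of the rot_z convention, yaw could be taken as -bbox["rot_z"]; verify this against known boxes in your own data before relying on it.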

A sample JSON file is attached below for reference:

Sample Output.json (5 KB): sample output JSON file
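
If the attached sample is not at hand, the following minimal sketch shows one way the downloaded file might be consumed. It assumes the file has been saved locally as output.json (a hypothetical name) and that the top level contains the labels array described above.

```python
import json
from collections import defaultdict

# Hypothetical local file name; use whatever name your download was saved under.
with open("output.json", "r") as f:
    data = json.load(f)

# Group labels by the image/frame they belong to, then read the geometry
# field that corresponds to each label's type.
labels_by_file = defaultdict(list)
for label in data["labels"]:
    labels_by_file[label["file_id"]].append(label)

for file_id, labels in labels_by_file.items():
    for label in labels:
        label_type = label["label_type"]
        if label_type == "box":
            geometry = label["box"]                # [x, y, len_x, len_y]
        elif label_type in ("polygon", "lane"):
            geometry = label["polygons"]           # list of polygons, each a list of [x, y] points
        elif label_type == "point":
            geometry = label["point"]
        elif label_type == "3d_bbox":
            geometry = label["three_d_bbox"]       # dict with l, w, h, cx, cy, cz, rot_z, quaternion
        elif label_type == "3d_point":
            geometry = label["three_d_point_indices"]
        elif label_type == "3d_polygon":
            geometry = label["three_d_polygon"]
        elif label_type == "3d_polyline":
            geometry = label["three_d_polyline"]
        else:
            geometry = None                        # e.g. scene labels: no geometry field is listed above
        print(file_id, label["label_id"], label["label_category_id"], label_type, geometry)
```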