2D Semantic Painting/Segmentation Output Format

Details about the output format for 2D semantic segmentation paint labels

Labels for 2D semantic segmentation can be downloaded as a combination of one binary (.npy) file per frame in the dataset and two supporting files per dataset: a metadata (.json) file and a colors (.json) file.

The JSON data in the metadata file has the following format:

    {
        "sensor_id": {
            "file_id_1": [
                "paint_category_1",
                "paint_category_2",
                "paint_category_3"
            ],
            "file_id_2": [
                "paint_category_2",
                "paint_category_3"
            ]
        }
    }

Here, each file ID maps to the list of paint categories that are painted in that particular frame. The available paint categories are configured at the dataset level.
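As an illustration, the snippet below reads the metadata file in Python and looks up the categories for one frame. The filename metadata.json and the keys sensor_id and file_id_1 are placeholders taken from the example above.

    import json

    # Load the per-dataset metadata file (placeholder filename).
    with open("metadata.json") as f:
        metadata = json.load(f)

    # The paint categories painted in this frame, in index order.
    categories = metadata["sensor_id"]["file_id_1"]
    print(categories)  # ["paint_category_1", "paint_category_2", ...]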

The JSON data in the colors file has the following format:

    {
        "format": [
            "b",
            "g",
            "r"
        ],
        "paint_category_1": [
            66,
            45,
            115
        ],
        "paint_category_2": [
            70,
            199,
            184
        ]
    }

Here, each paint category (for example, paint_category_1) maps to the color channel values of that category, listed in the order given by the format field; in this example blue, green, and red.
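For display you will often want these colors in RGB order rather than the order declared in the file. The sketch below builds a category-to-RGB lookup from the colors file, using the format field to reorder the channels; the local filename colors.json is an assumption.

    import json

    with open("colors.json") as f:
        colors = json.load(f)

    # Channel order declared by the file, e.g. ["b", "g", "r"].
    order = colors.pop("format")

    # Reorder each category's channel values into (r, g, b) for display.
    rgb = {
        category: tuple(values[order.index(c)] for c in "rgb")
        for category, values in colors.items()
    }
    print(rgb)  # {"paint_category_1": (115, 45, 66), ...}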

The .npy file is a binary file containing pixel-level label information. For an image with image_width w pixels and image_height h pixels, the .npy file contains h*w bytes, one per pixel, each holding that pixel's label category. One .npy file is produced per frame.

For example, consider a dataset containing a file (file_id_1) with an image_width of 1216 pixels and an image_height of 2560 pixels. Assume the pixel at image_width 1200 and image_height 2500 in that file is annotated as below:

    label_category: "paint_category_2"

For the above scenario, the .npy file will contain 3112960 bytes (h * w = 2560 * 1216). The annotation information for the pixel will be at index 3041200 (y * image_width + x = 2500 * 1216 + 1200). The byte value at this index would be 2, which is the 1-based index of "paint_category_2" in the paint categories provided in the metadata for that file (file_id_1). The label value 0 is reserved for unpainted points.
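Putting this together, the following sketch decodes the label for one pixel from a per-frame .npy file, assuming the flat h*w uint8 layout described above. The filename and the hard-coded category list are placeholders for values from your own dataset and metadata file.

    import numpy as np

    # Paint categories for this frame, in the order given by the
    # metadata file (placeholder values).
    categories = ["paint_category_1", "paint_category_2", "paint_category_3"]

    h, w = 2560, 1216    # image_height, image_width from the example
    x, y = 1200, 2500    # pixel of interest

    # One uint8 label value per pixel, h*w bytes in total.
    labels = np.load("file_id_1.npy").astype(np.uint8).reshape(h, w)

    value = int(labels[y, x])  # same byte as flat index y * w + x == 3041200
    if value == 0:
        print("unpainted")     # 0 is reserved for unpainted pixels
    else:
        print(categories[value - 1])  # value 2 -> "paint_category_2"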

To extract the metadata and compression details, you will need to look at the response headers. Below is an example response header:

    < HTTP/2 200
    < server: gunicorn/20.0.4
    < date: Fri, 11 Dec 2020 15:12:39 GMT
    < content-type: text/html; charset=utf-8
    < paint-metadata: {"format": "pako_compressed", "paint_categories": ["Drivable region", "Uneven terrain"]}

Remember the following information:

  • You can obtain the paint_categories and compression format from the data above.

  • The compression format is "pako_compressed", and you can use pako decompression to retrieve the paint labels for each point (see the sketch after this list).

  • For visualization purposes, you can use colors.json.

  • For the paint categories, 0 is assigned to unlabeled points, and the paint label values are indices into the paint_categories field.
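Since pako implements the standard zlib/deflate format, a payload marked "pako_compressed" can usually be decompressed with an ordinary zlib library. The sketch below illustrates the flow in Python; the endpoint URL and token handling are placeholders, not the documented API.

    import json
    import zlib

    import numpy as np
    import requests

    # Hypothetical endpoint and token, for illustration only.
    response = requests.get(
        "https://example.com/paint-labels/file_id_1",
        headers={"Authorization": "Bearer <ACCESS_TOKEN>"},
    )

    # The paint-metadata response header carries the format and categories.
    paint_meta = json.loads(response.headers["paint-metadata"])
    assert paint_meta["format"] == "pako_compressed"
    categories = paint_meta["paint_categories"]

    # pako output is zlib-compatible, so zlib.decompress should apply.
    raw = zlib.decompress(response.content)
    labels = np.frombuffer(raw, dtype=np.uint8)  # one label byte per pixel

The decompressed labels can then be mapped to category names exactly as in the .npy example above, with 0 marking unpainted points.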
