Details on the JSON dataset format for point cloud projects
The zip file should contain a set of JSON files and camera images. Each JSON file corresponds to one point cloud frame of the dataset. JSON files should be at the root directory level. The order of the frames is decided by the order of the JSON filenames sorted in ascending order. For instance, filenames can be 0001.json, 0002.json, 0003.json, ... Filenames can also be 0.json, 1.json, 2.json, ...
Each JSON file should be an object with the following 5 fields:
Images
Timestamp
Points
Device position
Device heading
1. Images
images (array) - all camera images corresponding to one point cloud frame. Usually, the number of images corresponds to the number of cameras in the system. If there are no images, please use an empty array. Each element is a JSON object. Fields in the image object are as follows:
fx (float) - focal length in the x direction.
fy (float) - focal length in the y direction.
cx (float) - x coordinate of principal point.
cy (float) - y coordinate of principal point.
timestamp (float) - time in seconds when the image was captured.
image_url (string) - corresponds to an image path inside the zip file, e.g. “images/0001.png”. It can also be an external URL. Image types supported are .jpeg and .png.
position (object) - position of the camera with respect to the world frame. Details of JSON objects can be found below.
heading (object) - orientation of the camera with respect to the world frame. Please find details of the JSON object below.
camera_model (string) - the camera model to be used for undistorting the image. Supported values for camera_model:
pinhole (default) - uses k1, k2, p1, p2, k3, k4 distortion coefficients
fisheye - uses k1, k2, k3, k4 distortion coefficients
mod_kannala - uses k1, k2, k3, k4 distortion coefficients
k1 (float) - distortion coefficient.
k2 (float) - distortion coefficient.
p1 (float) - distortion coefficient.
p2 (float) - distortion coefficient.
k3 (float) - distortion coefficient.
k4 (float) - distortion coefficient.
camera_name - this is optional, but if given in the JSON file, the tool will use this name to refer to the camera instead of camera_0, camera_1, etc.
If the images are already undistorted, k1, k2, p1, p2, etc. should all be 0. You can find more details on the camera parameters here.
A sample image JSON is as follows:
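(The example below is illustrative only: the intrinsics, timestamp, image path and camera name are placeholder values, and an already-undistorted image is assumed, so all distortion coefficients are 0.)

{
  "fx": 1158.03,
  "fy": 1158.03,
  "cx": 800.0,
  "cy": 451.0,
  "timestamp": 1512658579.689,
  "image_url": "images/0001.png",
  "position": { "x": 0.0, "y": 0.0, "z": 1.4 },
  "heading": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0 },
  "camera_model": "pinhole",
  "k1": 0.0,
  "k2": 0.0,
  "p1": 0.0,
  "p2": 0.0,
  "k3": 0.0,
  "k4": 0.0,
  "camera_name": "front_camera"
}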
2. Timestamp
Timestamp (float) – time in seconds at which the point cloud frame was captured.
3. Points
Points can be given in 2 formats: the first format is an array of JSON objects, and the second format is base64 encoded strings of points and intensities.

Points in JSON object array format: a points array of JSON objects for all LiDAR points, each having x, y, z, i, r, g, b, d values. x, y and z values are mandatory; i, r, g, b and d values are optional for each point. In general, the "up" direction towards the sky should be in the positive z direction for the visualization to work correctly. Each element of the array is a JSON object, as shown in this section. The rgb value in xyzrgb type points will be supported in a future release. Each point can have other values, like velocity, as well, for which we can add custom support. Fields in the point object are as follows:
x (float) – x coordinate of the point, in meters.
y (float) – y coordinate of the point, in meters.
z (float) – z coordinate of the point, in meters.
i (float) - intensity value between 0 and 1; this is an optional field.
d (integer) - non-negative device id to represent points from multiple sensors; this is an optional field.
x, y and z values are in world coordinates. If you are unable to put the point cloud in world coordinates, you can fall back to the local LiDAR coordinates and let us know; we will contact you about the issue.
For multi-LiDAR points, add the field 'd' in the points array to represent the lidar id; it should be a non-negative integer value. A sample point JSON object is as follows:
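(The coordinates and intensity below are placeholder values; 'd' is included only to show the multi-LiDAR case and can be omitted for a single sensor.)

{ "x": 11.71, "y": -2.45, "z": 0.78, "i": 0.42, "d": 0 }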
If you want to add a name for each lidar id, you need to add another field, "multi_lidar_keys"; please note this is an optional field.
4. Device position
A device_position (object) – position of the LiDAR or camera with respect to the world frame. Similar to the point cloud, if you are unable to put the device position in world coordinates, you can fall back to the local LiDAR coordinates and let us know; we will contact you about the issue. For the camera, if you do not have any position information, please use (0, 0, 0) and let us know. Fields in the position object are as follows:
x (float) – x coordinate of device/camera position, in meters.
y (float) – y coordinate of device/camera position, in meters.
z (float) – z coordinate of device/camera position, in meters.
Sample position JSON object:
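(The values below are placeholders, expressed in meters in the world frame.)

{ "x": 127.91, "y": -4.22, "z": 1.75 }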
5. Device heading
A device_heading (object) – orientation parameters of the LiDAR or camera with respect to the world frame. If you are unable to put the LiDAR heading in world coordinates, please use the identity quaternion (x = 0, y = 0, z = 0, w = 1). If you cannot obtain extrinsic camera calibration parameters, please also use the identity quaternion. We will contact you about this issue. Fields in the heading object are as follows; the 4 components form a quaternion:
x (float) – x component of device/camera orientation.
y (float) – y component of device/camera orientation.
z (float) – z component of device/camera orientation.
w (float) – w component of device/camera orientation.
A sample heading JSON object is as follows:
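(The quaternion below is illustrative; it corresponds to a 45-degree rotation about the z axis. Use the identity quaternion if you have no heading information, as noted above.)

{ "x": 0.0, "y": 0.0, "z": 0.3826834, "w": 0.9238795 }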
Please note that in JSON, the order of the keys in the quaternion object doesn't matter. The following two JSONs will give exactly the same result:
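(Values as in the sample above, with the keys written in two different orders.)

{ "x": 0.0, "y": 0.0, "z": 0.3826834, "w": 0.9238795 }

{ "w": 0.9238795, "z": 0.3826834, "y": 0.0, "x": 0.0 }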
A sample JSON can be found below:
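(The frame below is a minimal illustrative sketch, not a real capture: all numeric values are placeholders, only one image and two points are shown for brevity, and the spellings of the "images", "timestamp" and "points" keys are assumed from the field names above; "device_position" and "device_heading" are the names given in sections 4 and 5.)

{
  "images": [
    {
      "fx": 1158.03,
      "fy": 1158.03,
      "cx": 800.0,
      "cy": 451.0,
      "timestamp": 1512658579.689,
      "image_url": "images/0001.png",
      "position": { "x": 0.0, "y": 0.0, "z": 1.4 },
      "heading": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0 },
      "camera_model": "pinhole",
      "k1": 0.0, "k2": 0.0, "p1": 0.0, "p2": 0.0, "k3": 0.0, "k4": 0.0
    }
  ],
  "timestamp": 1512658579.75,
  "points": [
    { "x": 11.71, "y": -2.45, "z": 0.78, "i": 0.42 },
    { "x": 12.03, "y": -2.41, "z": 0.80, "i": 0.37 }
  ],
  "device_position": { "x": 0.0, "y": 0.0, "z": 1.75 },
  "device_heading": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0 }
}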
Another sample (Multi Lidar):
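(Again an illustrative sketch with placeholder values: each point carries a non-negative 'd' device id, and "multi_lidar_keys" is shown here as a top-level object mapping each lidar id to a name; its exact placement and value format are assumptions, since the description above does not pin them down.)

{
  "images": [],
  "timestamp": 1512658580.25,
  "points": [
    { "x": 11.71, "y": -2.45, "z": 0.78, "i": 0.42, "d": 0 },
    { "x": -4.16, "y": 7.81, "z": 0.65, "i": 0.51, "d": 1 }
  ],
  "multi_lidar_keys": { "0": "lidar_front", "1": "lidar_rear" },
  "device_position": { "x": 0.0, "y": 0.0, "z": 1.75 },
  "device_heading": { "x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0 }
}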