Assessments

Assessments are benchmark exams designed to test the labeling skills of annotators and reviewers.

Assessments automatically evaluate the proficiency of users, generate a score, and produce a comprehensive report. At present, automatic evaluation supports 2D bounding boxes only; we are committed to adding support for more label types soon. To create an assessment, follow the steps below:

  1. Open the datasets page and click on “More”.

  2. Click on “Assessments” and then “New” to create a new assessment.

  3. Enter the assessment name, attendees, pass mark, IoU matching threshold, time limit, and maximum attempts (see the scoring sketch after this list).

  4. Select a benchmark dataset that has labels and is marked as “Done”.

  5. Once the assessment is created, users can launch the dataset and start the assessment.

  6. Annotate the bounding boxes and submit the assessment once all the labels are annotated.

  7. You can go back to the assessment page to view the results. If you fail the assessment, you can reattempt it.

  8. To compare the assessment results with the benchmark dataset, the assessment creator must click the “View results” button.

  9. The results show missing objects (benchmark labels the user failed to annotate) and false positives (annotations with no matching benchmark label).
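
To illustrate how the IoU matching threshold, pass mark, missing objects, and false positives fit together, here is a minimal Python sketch. It assumes axis-aligned boxes given as `(x_min, y_min, x_max, y_max)` and a simple greedy matching strategy; the function names and scoring formula are illustrative assumptions, not the platform's actual evaluation code.

```python
# Illustrative sketch only: axis-aligned boxes as (x_min, y_min, x_max, y_max).
# Names and scoring formula are assumptions, not the platform's API.

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def score_assessment(benchmark, submission, iou_threshold=0.5, pass_mark=0.8):
    """Greedily match submitted boxes against benchmark boxes.

    Benchmark boxes left unmatched are missing objects; submitted boxes
    that match nothing above the threshold are false positives.
    """
    unmatched_benchmark = list(benchmark)
    matched = 0
    false_positives = []
    for box in submission:
        best = max(unmatched_benchmark, key=lambda gt: iou(gt, box), default=None)
        if best is not None and iou(best, box) >= iou_threshold:
            unmatched_benchmark.remove(best)
            matched += 1
        else:
            false_positives.append(box)
    score = matched / len(benchmark) if benchmark else 1.0
    return {
        "score": score,
        "passed": score >= pass_mark,
        "missing_objects": unmatched_benchmark,
        "false_positives": false_positives,
    }

benchmark = [(10, 10, 50, 50), (60, 60, 100, 100)]
submission = [(12, 11, 49, 52), (200, 200, 240, 240)]  # one good match, one stray box
print(score_assessment(benchmark, submission))
# -> score 0.5, passed False, one missing object, one false positive
```

Greedy matching is just one reasonable choice here; the platform may match boxes differently, but the roles of the IoU threshold, the pass mark, and the unmatched boxes on each side are the same.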
