Assessments
Assessments are benchmark exams designed to test the labeling and reviewing skills of annotators and reviewers.
Assessments are designed to automatically evaluate the proficiency of users, generate a score, and produce a comprehensive report. At present, we support automatic evaluation of 2D bounding boxes only; support for additional label types is coming soon. To create an assessment, follow the steps below:
Open the datasets page and click on “More”.
Click on “Assessments” and then “New” to create a new assessment.
Enter the assessment name, attendees, pass mark, IoU matching threshold, time limit, and maximum attempts (the IoU sketch after these steps shows how the threshold is applied).
Select a benchmark dataset that has labels and is marked as “Done”.
Once the assessment is created, users can launch the dataset and start the assessment.
Annotate the bounding boxes and submit the assessment once all objects are labeled.
You can go back to the assessment page to view the results. If you fail the assessment, you can reattempt it.
To compare the assessment results with the benchmark dataset, the assessment creator must click on the “View results” button.
The results show missing objects (benchmark labels the user failed to annotate) and false positives (submitted boxes that match no benchmark label). The sketches below illustrate how IoU matching produces these counts.
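The IoU (intersection over union) matching threshold set in step 3 controls how much a submitted box must overlap the corresponding benchmark box to count as correct. The following is a minimal sketch, assuming boxes are axis-aligned (x_min, y_min, x_max, y_max) tuples; the function name and coordinate convention are illustrative assumptions, not the platform's API.

```python
def compute_iou(box_a, box_b):
    """Intersection over Union of two axis-aligned 2D bounding boxes.

    Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples; the
    platform's actual coordinate convention may differ.
    """
    # Corners of the intersection rectangle
    x_left = max(box_a[0], box_b[0])
    y_top = max(box_a[1], box_b[1])
    x_right = min(box_a[2], box_b[2])
    y_bottom = min(box_a[3], box_b[3])

    # Boxes that do not overlap have zero IoU
    if x_right <= x_left or y_bottom <= y_top:
        return 0.0

    intersection = (x_right - x_left) * (y_bottom - y_top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)
```

With a threshold of 0.5, for example, a submitted box is treated as a match only when its IoU with a benchmark box is at least 0.5; lower thresholds tolerate looser boxes.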
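Missing objects and false positives fall out of matching submitted boxes against the benchmark. The platform's exact scoring algorithm is not documented here, so the greedy matching strategy, the score formula, and the names evaluate_attempt and pass_mark below are assumptions for illustration; the sketch reuses compute_iou from above.

```python
def evaluate_attempt(benchmark_boxes, submitted_boxes,
                     iou_threshold=0.5, pass_mark=0.8):
    """Match submitted boxes to benchmark boxes and score the attempt.

    Greedy one-to-one matching: each benchmark box claims the unmatched
    submitted box with the highest IoU, provided it clears the threshold.
    The matching strategy and score formula are illustrative assumptions.
    """
    unmatched = list(submitted_boxes)
    matches = 0
    for bench in benchmark_boxes:
        # Best remaining candidate for this benchmark box
        best = max(unmatched, key=lambda box: compute_iou(bench, box),
                   default=None)
        if best is not None and compute_iou(bench, best) >= iou_threshold:
            matches += 1
            unmatched.remove(best)

    score = matches / len(benchmark_boxes) if benchmark_boxes else 1.0
    return {
        "score": score,
        "passed": score >= pass_mark,
        # Benchmark objects no submitted box matched
        "missing_objects": len(benchmark_boxes) - matches,
        # Submitted boxes that matched no benchmark object
        "false_positives": len(unmatched),
    }
```

Under these assumptions, an attempt whose score falls below the pass mark counts as failed, and the user can retry it up to the configured maximum attempts.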