Quality Management

Fully configurable metrics for understanding the quality of your results. Identify and capture bias early.

Identify Issues

Compare annotators with the inter-annotator agreement matrix or a side-by-side comparison of their annotations.

Collaborator agreement matrix
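An agreement matrix of this kind can be sketched as a pairwise percent-agreement computation. This is a minimal illustration, not the platform's actual algorithm; the input shape (annotator name mapped to a list of labels over the same items) and the example labels are assumptions for the example:

```python
def agreement_matrix(annotations):
    """Pairwise percent agreement between annotators.

    `annotations` maps annotator name -> list of labels for the same
    ordered set of items (hypothetical input shape, for illustration).
    """
    names = sorted(annotations)
    matrix = {}
    for i, a in enumerate(names):
        for b in names[i:]:  # includes (a, a), giving a diagonal of 1.0
            pairs = list(zip(annotations[a], annotations[b]))
            score = sum(x == y for x, y in pairs) / len(pairs)
            matrix[(a, b)] = matrix[(b, a)] = round(score, 2)
    return matrix

ann = {
    "alice": ["cat", "dog", "cat"],
    "bob":   ["cat", "dog", "dog"],
}
print(agreement_matrix(ann))
```

Each cell holds the fraction of items on which the two annotators chose the same label, so the matrix is symmetric with ones on the diagonal.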

Consensus Options

Different types of consensus and agreement metrics give you a deep understanding of overall quality and of individual annotator performance. Here are a few examples of what you get:

Per-entity agreement statistics
Collaborator agreement using Cohen's kappa
Fully configurable algorithms
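To illustrate one of the metrics listed above, Cohen's kappa for a pair of annotators corrects their observed agreement for the agreement expected by chance. This is a minimal sketch of the standard formula; the label data is made up for the example:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["cat", "cat", "dog", "dog", "cat", "dog"]
b = ["cat", "dog", "dog", "dog", "cat", "cat"]
print(round(cohens_kappa(a, b), 3))  # 0.333
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance, and negative values mean systematic disagreement.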

Ground truth manager

Verify quality using the Ground truth manager. Mark any item as fully and correctly labeled, and we will use it to compute quality statistics for you.
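The quality statistics described above can be sketched as per-annotator accuracy over the items marked as ground truth. This is an illustrative assumption about how such statistics might be derived, not the platform's actual computation; the data shapes and names are hypothetical:

```python
def quality_against_ground_truth(annotations, ground_truth):
    """Per-annotator accuracy over items marked as ground truth.

    `annotations` maps annotator -> {item_id: label}; `ground_truth`
    maps item_id -> verified label (hypothetical shapes, for illustration).
    """
    stats = {}
    for annotator, labels in annotations.items():
        # Only score the items that have a verified ground-truth label.
        checked = {item: lab for item, lab in labels.items() if item in ground_truth}
        correct = sum(ground_truth[item] == lab for item, lab in checked.items())
        stats[annotator] = correct / len(checked)
    return stats

ann = {
    "alice": {1: "cat", 2: "dog", 3: "cat"},
    "bob":   {1: "cat", 2: "cat", 3: "cat"},
}
gt = {1: "cat", 2: "dog"}  # item 3 has no verified label, so it is ignored
print(quality_against_ground_truth(ann, gt))
```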

Verification Manager

Capture qualitative errors with the Verification Manager. It gives you a quick overview of overall quality.
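A quality overview of this kind can be sketched as a tally of reviewer verdicts. The verdict statuses and input shape here are hypothetical, chosen only to illustrate the idea:

```python
from collections import Counter

def verification_overview(verdicts):
    """Summarize reviewer verdicts into an overall quality overview.

    `verdicts` is a list of (item_id, status) pairs, where status is e.g.
    "approved" or "rejected" (hypothetical statuses, for illustration).
    """
    counts = Counter(status for _, status in verdicts)
    total = len(verdicts)
    # Report each status as its share of all reviewed items.
    return {status: n / total for status, n in counts.items()}

v = [(1, "approved"), (2, "approved"), (3, "rejected"), (4, "approved")]
print(verification_overview(v))
```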