MODEL-DRIVEN FEEDBACK FOR ANNOTATION
First Claim
1. A method for producing consistent annotation between multiple human annotators using a single, automatically trained model, comprising:
providing different parts of a corpus stored in memory on an annotation system to multiple human annotators to perform annotations thereon;
identifying potential inconsistencies between the annotations made by each of the human annotators and annotation predictions made by a single, automatic model, wherein the single, automatic model is stored in memory on the annotation system and performs annotation predictions using a processor;
allowing each human annotator to independently control the confidence threshold selectivity of the model via a user interface (UI) to alter the visualization level of agreement between the respective annotator and the model;
notifying the human annotator of an inconsistency, if the confidence of the prediction exceeds the selected threshold, with a visualization level proportional to the amount by which the confidence exceeds the threshold;
allowing each human annotator to review and independently revise the inconsistency identified by the automatic model; and
updating the model based on the revisions and immediately making the updated model available to all human annotators.
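The per-annotator inconsistency check recited in claim 1 can be sketched as follows. This is a minimal illustration, not the claimed implementation: all names (`Flag`, `check_inconsistency`, the labels) are hypothetical, and any real model would supply the predicted label and confidence.

```python
# Sketch of the threshold-gated inconsistency check: a disagreement is
# flagged only when the model's confidence exceeds the annotator's own
# threshold, and the visualization level is proportional to the amount
# by which the confidence exceeds that threshold.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    span: str
    human_label: str
    model_label: str
    visualization_level: float  # grows with confidence above the threshold

def check_inconsistency(span: str, human_label: str, model_label: str,
                        confidence: float, threshold: float) -> Optional[Flag]:
    """Return a Flag when the model disagrees and is confident enough."""
    if model_label == human_label:
        return None                    # annotations agree: nothing to show
    if confidence <= threshold:
        return None                    # model not confident enough to flag
    level = confidence - threshold     # proportional to the excess confidence
    return Flag(span, human_label, model_label, level)

# Each annotator independently controls the threshold via the UI; here an
# annotator with threshold 0.6 sees a flag for a confident disagreement:
flag = check_inconsistency("Acme Corp", "PERSON", "ORG",
                           confidence=0.9, threshold=0.6)
```

Lowering the threshold surfaces more (less confident) disagreements; raising it shows only the model's most confident objections, which is the selectivity control the claim describes.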
Abstract
A system, a method, and computer-readable media for providing model-driven feedback to human annotators. In one exemplary embodiment, the method includes manually annotating an initial small dataset and training an initial model on the annotated dataset. The method further includes comparing the annotations produced by the model with those produced by the annotator, notifying the annotator of discrepancies between the annotations and the model's predictions, allowing the annotator to modify the annotations where appropriate, and updating the model with the data annotated by the annotator.
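The feedback loop in the abstract can be illustrated end to end with a toy, self-contained sketch. The "model" below is just a label-memorizing stub standing in for any real classifier; the seed data, labels, and helper names are all illustrative assumptions, not part of the disclosure.

```python
# Sketch of the abstract's loop: train on a small seed set, compare model
# predictions with new human annotations, surface discrepancies for review,
# then retrain on the reviewed data.
def train(examples):
    """'Train' by memorizing the last label seen per text (stand-in model)."""
    return {text: label for text, label in examples}

def predict(model, text):
    return model.get(text)  # None for unseen text

# 1. Manually annotate an initial small dataset and train an initial model.
seed = [("Paris", "LOC"), ("IBM", "ORG")]
model = train(seed)

# 2. Compare the model's predictions with a new batch of human annotations
#    and collect discrepancies to show the annotator.
batch = [("Paris", "ORG"), ("Berlin", "LOC")]
discrepancies = [(text, human, predict(model, text))
                 for text, human in batch
                 if predict(model, text) not in (None, human)]

# 3. The annotator reviews each discrepancy and keeps or revises the label
#    (here the human labels are kept as-is), then the model is updated.
reviewed = list(batch)
model = train(seed + reviewed)
```

Because the updated model incorporates every annotator's reviewed data, each subsequent comparison nudges all annotators toward the same labeling conventions, which is the consistency mechanism the claim relies on.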