Authors: Eric Heim, Alexander Seitel, Jonas Andrulis, Fabian Isensee, Christian Stock, Tobias Ross, Lena Maier-Hein
ArXiv: 1611.08527
Abstract URL: http://arxiv.org/abs/1611.08527v4
With the rapidly increasing interest in machine learning based solutions for
automatic image annotation, the availability of reference annotations for
algorithm training is one of the major bottlenecks in the field. Crowdsourcing
has evolved as a valuable option for low-cost and large-scale data annotation;
however, quality control remains a major issue which needs to be addressed. To
our knowledge, we are the first to analyze the annotation process to improve
crowd-sourced image segmentation. Our method involves training a regressor to
estimate the quality of a segmentation from the annotator's clickstream data.
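A minimal sketch of this idea follows, assuming clickstream records of the form (timestamp, x, y); the specific features (click count, duration, inter-click timing), the random-forest regressor, and the synthetic training data are illustrative stand-ins, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def clickstream_features(events):
    """Summarize one annotator's clickstream as a fixed-length vector.

    `events` is assumed to be a chronological list of (timestamp, x, y)
    click records; the statistics below are placeholder features.
    """
    times = np.array([t for t, _, _ in events], dtype=float)
    gaps = np.diff(times) if len(times) > 1 else np.zeros(1)
    return np.array([
        len(events),            # number of clicks
        times[-1] - times[0],   # total annotation duration
        gaps.mean(),            # mean time between clicks
        gaps.std(),             # variability of click timing
    ])

# Synthetic stand-ins for real training data: in practice, X comes from
# logged clickstreams and y from quality scores (e.g. DICE) computed
# against reference segmentations on a training set.
def fake_clickstream(n):
    t = np.cumsum(rng.exponential(0.5, n))
    return [(ti, rng.uniform(0, 512), rng.uniform(0, 512)) for ti in t]

streams = [fake_clickstream(int(rng.integers(5, 60))) for _ in range(200)]
X = np.stack([clickstream_features(s) for s in streams])
y = rng.uniform(0.3, 1.0, size=200)  # reference quality scores

quality_model = RandomForestRegressor(n_estimators=100, random_state=0)
quality_model.fit(X, y)

# At application time the quality estimate needs no ground truth:
new_quality = quality_model.predict(
    clickstream_features(fake_clickstream(30)).reshape(1, -1)
)
print(float(new_quality[0]))
```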
The quality estimation can be used to identify spam and weight individual
annotations by their (estimated) quality when merging multiple segmentations of
one image. Using a total of 29,000 crowd annotations performed on publicly
available data of different object classes, we show that (1) our method is
highly accurate in estimating the segmentation quality based on clickstream
data and (2) outperforms state-of-the-art methods for merging multiple
annotations. As the regressor does not need to be trained on the object class
that it is applied to, it can be regarded as a low-cost option for quality
control and confidence analysis in the context of crowd-based image annotation.
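The two uses of the quality estimate described above, spam identification and quality-weighted merging, could be realized as sketched below. The weighted pixel-wise majority vote and the spam threshold value are illustrative assumptions; the paper's exact fusion rule is not reproduced here:

```python
import numpy as np

def merge_segmentations(masks, qualities, spam_threshold=0.3):
    """Fuse binary masks (H x W arrays) weighted by estimated quality.

    Annotations whose estimated quality falls below `spam_threshold`
    are treated as spam and excluded entirely.
    """
    masks = np.asarray(masks, dtype=float)
    qualities = np.asarray(qualities, dtype=float)
    keep = qualities >= spam_threshold
    if not keep.any():
        raise ValueError("all annotations were classified as spam")
    w = qualities[keep] / qualities[keep].sum()
    # Weighted average of the retained masks; a pixel is labeled
    # foreground if the quality-weighted vote exceeds one half.
    vote = np.tensordot(w, masks[keep], axes=1)
    return vote > 0.5

# Example: three annotators of varying estimated quality, where the
# third (noisiest) annotation is filtered out as spam.
rng = np.random.default_rng(1)
truth = np.zeros((64, 64), bool)
truth[16:48, 16:48] = True
annotations = [truth ^ (rng.random(truth.shape) < p) for p in (0.02, 0.1, 0.5)]
merged = merge_segmentations(annotations, qualities=[0.9, 0.7, 0.1])
print((merged == truth).mean())  # agreement with the true mask
```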