Document Type
Conference Paper
Rights
Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Disciplines
Computer Sciences
Abstract
Nowadays, crowdsourcing is widely used in supervised machine learning to facilitate the collection of ratings for unlabelled training sets. To obtain good-quality results, it is worth rejecting contributions from noisy or unreliable raters as soon as they are discovered. Many techniques for filtering out unreliable raters rely on presenting training instances to the raters identified as most accurate to date. Early in the process, the true rater reliabilities are not known, and as a result unreliable raters may be used. This paper explores improving the quality of ratings for training instances by performing re-rating, which relies on detecting such instances and acquiring additional ratings for them once the rating process is over. We compare different approaches to re-rating in terms of both the improvement in labelling accuracy and the labelling cost of each approach.
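The abstract describes re-rating only in general terms: instances whose initial ratings may have come from unreliable raters are identified and additional ratings are requested for them after the first rating pass. As a minimal, hedged sketch only (not the authors' method), the example below flags instances by rating disagreement and aggregates labels by majority vote; the disagreement threshold and the aggregation rule are illustrative assumptions.

```python
from collections import Counter

def disagreement(ratings):
    """Fraction of ratings that differ from the most common rating."""
    _, top_count = Counter(ratings).most_common(1)[0]
    return 1.0 - top_count / len(ratings)

def select_for_rerating(instances, threshold=0.3):
    """Flag instances whose initial ratings disagree too much (illustrative rule)."""
    return [i for i, ratings in instances.items() if disagreement(ratings) > threshold]

def majority_label(ratings):
    """Aggregate ratings by simple majority vote (illustrative aggregation)."""
    return Counter(ratings).most_common(1)[0][0]

# Example: instance -> ratings collected during the first pass (hypothetical data)
instances = {
    "x1": ["pos", "pos", "pos"],
    "x2": ["pos", "neg", "neg"],   # high disagreement -> candidate for re-rating
}
to_rerate = select_for_rerating(instances)
# ...acquire additional ratings for the flagged instances, then re-aggregate:
labels = {i: majority_label(r) for i, r in instances.items()}
```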
DOI
https://doi.org/10.21427/D73899
Recommended Citation
Tarasov, A., Delany, S.J., MacNamee, B.: Improving Performance by Re-Rating in the Dynamic Estimation of Rater Reliability, Machine Learning Meets Crowdsourcing Workshop in conjunction with International Conference on Machine Learning, Atlanta, Georgia, USA, 2013, doi:10.21427/D73899
Publication Details
Machine Learning Meets Crowdsourcing Workshop in conjunction with International Conference on Machine Learning, Atlanta, Georgia, USA, 2013