2024-09-12-an24a.md

---
title: Convergence Behavior of an Adversarial Weak Supervision Method
abstract: 'Labeling data via rules-of-thumb and minimal label supervision is central
  to Weak Supervision, a paradigm subsuming subareas of machine learning such as
  crowdsourced learning and semi-supervised ensemble learning. By using this labeled
  data to train modern machine learning methods, the cost of acquiring large amounts
  of hand labeled data can be ameliorated. Approaches to combining the rules-of-thumb
  fall into two camps, reflecting different ideologies of statistical estimation.
  The most common approach, exemplified by the Dawid-Skene model, is based on probabilistic
  modeling. The other, developed in the work of Balsubramani-Freund and others, is
  adversarial and game-theoretic. We provide a variety of statistical results for
  the adversarial approach under log-loss: we characterize the form of the solution,
  relate it to logistic regression, demonstrate consistency, and give rates of convergence.
  On the other hand, we find that probabilistic approaches for the same model class
  can fail to be consistent. Experimental results are provided to corroborate the
  theoretical results.'
openreview: q9TqTSk9cy
software:
section: Papers
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: an24a
month: 0
tex_title: Convergence Behavior of an Adversarial Weak Supervision Method
firstpage: 1
lastpage: 49
page: 1-49
order: 1
cycles: false
bibtex_author: An, Steven and Dasgupta, Sanjoy
author:
- given: Steven
  family: An
- given: Sanjoy
  family: Dasgupta
date: 2024-09-12
address:
container-title: Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
volume: 244
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 9
  - 12
pdf:
extras:
---