


Link to original content: https://proceedings.neurips.cc/paper/2017/hash/186a157b2992e7daed3677ce8e9fe40f-Abstract.html
Collaborative PAC Learning


Part of Advances in Neural Information Processing Systems 30 (NIPS 2017)


Authors

Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao

Abstract

We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning algorithms with O(ln(k)) and O(ln^2(k)) overhead in the personalized and centralized variants of our model, respectively. This gives an exponential improvement upon the naive algorithm that does not share information among players. We complement our upper bounds with an Omega(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor.
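
For concreteness, a sketch of the overhead notion (the symbols m and m_k below are illustrative and not taken from the abstract): let m(epsilon, delta) denote the sample complexity of (epsilon, delta)-PAC learning for a single player, and let m_k(epsilon, delta) denote the number of samples needed so that all k players simultaneously obtain epsilon-accurate classifiers. The overhead is then

    overhead = m_k(epsilon, delta) / m(epsilon, delta).

The naive baseline, in which each player runs the single-player learner separately on its own samples, draws k * m(epsilon, delta) samples in total and hence has overhead on the order of k; reducing this to O(ln(k)) is the exponential improvement referred to above.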