Abstract
It is important to collect credible training samples (x, y) for building data-intensive learning systems (e.g., a deep learning system). Asking people to report a complex distribution p(x), though theoretically viable, is challenging in practice, primarily due to the cognitive load required for human agents to form a report of such highly complicated information. While classical elicitation mechanisms apply to eliciting a complex, generative (and continuous) distribution p(x), we are interested in eliciting samples x_i ∼ p(x) from agents directly. We coin the above problem sample elicitation. This paper introduces a deep-learning-aided method to incentivize credible sample contributions from self-interested and rational agents. We show that with an accurate estimate of a certain f-divergence function we can achieve approximate incentive compatibility in eliciting truthful samples. We then present an efficient estimator with theoretical guarantees by studying the variational forms of the f-divergence function. We also show a connection between this sample elicitation problem and f-GAN, and how this connection can help reconstruct an estimator of the distribution from the collected samples. Experiments on synthetic data and on the MNIST and CIFAR-10 datasets demonstrate that our mechanism elicits truthful samples. Our implementation is available at https://github.com/weijiaheng/Credible-sample-elicitation.git.
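The variational form the abstract refers to writes an f-divergence as a supremum over witness functions, D_f(P‖Q) = sup_T E_P[T(x)] − E_Q[f*(T(x))], where f* is the convex conjugate of f; this is what makes the divergence estimable from samples alone. The sketch below is not the paper's estimator, only a minimal numpy illustration of that variational form for the KL divergence (f(t) = t log t, so f*(u) = exp(u − 1)) between two unit-variance Gaussians, where the optimal witness T*(x) = 1 + log p(x)/q(x) is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# P = N(0, 1), Q = N(1, 1); the true KL(P || Q) is (0 - 1)^2 / 2 = 0.5.
def log_ratio(x):
    # log p(x)/q(x) for the two unit-variance Gaussians above
    return (-(x - 0.0) ** 2 + (x - 1.0) ** 2) / 2.0

xp = rng.normal(0.0, 1.0, 100_000)  # samples from P
xq = rng.normal(1.0, 1.0, 100_000)  # samples from Q

# Variational form with f(t) = t log t, conjugate f*(u) = exp(u - 1),
# evaluated at the optimal witness T*(x) = 1 + log p(x)/q(x):
#   KL(P || Q) = E_P[T*] - E_Q[exp(T* - 1)]
T_p = 1.0 + log_ratio(xp)
T_q = 1.0 + log_ratio(xq)
kl_hat = T_p.mean() - np.exp(T_q - 1.0).mean()
print(kl_hat)  # ≈ 0.5, the true KL divergence
```

In practice the witness T is not known and is parameterized by a neural network trained to maximize this lower bound, which is exactly the discriminator objective in f-GAN; the paper's connection to f-GAN rests on this correspondence.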
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2692-2700 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 130 |
| State | Published - 2021 |
| Event | 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021 - Virtual, Online, United States. Duration: Apr 13, 2021 → Apr 15, 2021 |
Funding
This work is partially supported by the National Science Foundation (NSF) under grant IIS-2007951.
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability