Information Elicitation Meets Large Language Models
Published in The 20th Conference on Web and Internet Economics, 2024
Materials
Toolkit: https://drive.google.com/drive/folders/1j4lgQ-K8ZNhwQoE6yeBbX5QwMGk7SEw8?usp=sharing
Slides: https://yxlu.me/files/tutorial_wine24.pdf
Still under construction; expected to be completed in mid-December.
Presenters
Yuxuan Lu, Shengwei Xu
Abstract
Eliciting high-quality information from the crowd has become increasingly important, with applications spanning data labeling, reputation systems, peer grading, and peer review. Incentive mechanisms, including scoring rules, peer prediction, and Bayesian Truth Serum, have been proposed to motivate truthful and informative reports by rewarding them more than untruthful or uninformative ones, and many of these mechanisms come with provable theoretical guarantees.
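For readers unfamiliar with these mechanisms, here is a minimal sketch of the core incentive property, using the quadratic (Brier) scoring rule as one standard example of a strictly proper scoring rule: reporting one's true belief maximizes the expected score.

```python
import numpy as np

# Quadratic (Brier) scoring rule, a classic strictly proper scoring rule:
# the agent reports a probability vector; once the outcome is observed,
# the score is 2 * report[outcome] - ||report||^2.
def brier_score(report: np.ndarray, outcome: int) -> float:
    return 2.0 * report[outcome] - float(np.sum(report ** 2))

def expected_score(report: np.ndarray, belief: np.ndarray) -> float:
    # Expected score under the agent's true belief over outcomes.
    return float(sum(belief[w] * brier_score(report, w) for w in range(len(belief))))

belief = np.array([0.7, 0.3])                         # agent's true belief
print(expected_score(belief, belief))                 # truthful report: 0.58
print(expected_score(np.array([0.9, 0.1]), belief))   # exaggerated report: 0.50
```

Strict properness means the truthful report is the *unique* maximizer of the expected score, which is exactly the guarantee that makes honesty the best policy.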
However, traditional approaches are limited to relatively simple report formats, such as probability forecasts or multiple-choice answers. Recent studies have extended these techniques to text-based reports by leveraging advances in Large Language Models (LLMs). This significantly broadens the applicability of information elicitation mechanisms, especially in settings where textual feedback is common and highly informative, such as academic peer review, online business reviews, and social media comments. Moreover, recent studies suggest that peer prediction mechanisms can also be used to evaluate LLMs by assessing their ability to generate informative content.
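As a simplified illustration of this line of work, the sketch below scores a textual report by how much it improves an LLM's prediction of an independent peer's report, in the spirit of mutual-information-based peer prediction. The helper `llm_log_prob` is hypothetical, not an API from any particular library; it could be implemented by summing token log-probabilities from any causal LM.

```python
from typing import Callable

def textual_peer_prediction_score(
    report: str,
    peer_report: str,
    llm_log_prob: Callable[[str, str], float],
) -> float:
    """Score `report` by how much conditioning on it improves an LLM's
    prediction of an independent peer's report (a pointwise mutual
    information style payment).

    `llm_log_prob(text, context)` is a hypothetical helper returning the
    log-probability the LLM assigns to `text` given `context`.
    """
    prior = llm_log_prob(peer_report, "")          # prediction without the report
    posterior = llm_log_prob(peer_report, report)  # prediction given the report
    # Informative, truthful reports should raise the posterior above the prior.
    return posterior - prior
```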
In addition to these recent developments, this tutorial will also introduce a practical toolkit that leverages cloud computing resources (Google Colab) to deploy and fine-tune open-source LLMs for information elicitation research. The toolkit is designed specifically for theorists and researchers who may not have access to significant computational resources (GPUs) or extensive experience coding with LLMs.
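As a taste of the kind of workflow the toolkit supports, the sketch below loads a small open-source LLM on a Colab GPU with the Hugging Face `transformers` library; the specific model name is only an illustrative choice, not a toolkit requirement.

```python
# Minimal sketch of loading a small open-source LLM on a Colab GPU with the
# Hugging Face `transformers` library (requires `accelerate` for device_map).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example only; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision fits comfortably on a free Colab T4
    device_map="auto",          # place the weights on the available GPU
)

prompt = "In one sentence, what makes a peer review informative?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```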