A Theoretical Perspective for Speculative Decoding Algorithm

Ming Yin*, Minshuo Chen, Kaixuan Huang, Mengdi Wang*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Transformer-based autoregressive sampling is a major bottleneck in large language model inference. One effective way to accelerate inference is Speculative Decoding, which employs a small model to sample a sequence of draft tokens and a large model to validate them. Despite its empirical effectiveness, the theoretical understanding of Speculative Decoding lags behind. This paper tackles this gap by conceptualizing the decoding problem via a Markov chain abstraction and studying its key properties, output quality and inference acceleration, from a theoretical perspective. Our analysis covers the theoretical limits of speculative decoding, batch algorithms, and tradeoffs between output quality and inference acceleration. Our results reveal the fundamental connections between different components of LLMs via total variation distances and show how they jointly affect the efficiency of decoding algorithms.
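The draft-and-validate step the abstract refers to is, in prior work on speculative decoding, typically the standard acceptance rule: a draft token x sampled from the small model's distribution q is accepted with probability min(1, p(x)/q(x)) under the large model's distribution p, and on rejection a replacement token is drawn from the normalized residual max(0, p - q). A minimal Python sketch of that rule, with distributions as plain probability lists (this is the generic algorithm, not code from the paper):

```python
def residual(p, q):
    # Correction distribution used on rejection: normalize max(0, p - q).
    r = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
    z = sum(r)
    return [ri / z for ri in r]

def verify_token(p, q, x, u):
    """Validate a draft token x sampled from the small model.

    p: target (large-model) distribution over the vocabulary
    q: draft (small-model) distribution over the vocabulary
    x: index of the draft token, u: a Uniform(0,1) draw
    Returns (accepted, correction_dist); correction_dist is the residual
    distribution to resample from when the draft token is rejected.
    """
    if u < min(1.0, p[x] / q[x]):
        return True, None
    return False, residual(p, q)
```

This rule preserves the target distribution exactly: marginalizing over acceptance and resampling, the emitted token is distributed according to p, which is why output quality can be analyzed jointly with acceleration via the total variation distance between p and q.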

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: Dec 9 2024 - Dec 15 2024

Funding

The authors would like to thank the anonymous reviewers for their valuable feedback. Mengdi Wang acknowledges support from NSF IIS-2107304, NSF CPS-2312093, and ONR 1006977.

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

