The presidential deepfakes dataset

Aruna Sankaranarayanan*, Matthew Groh, Rosalind Picard, Andrew Lippman

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review



How do we evaluate media forensic techniques for detecting deepfakes? We present the Presidential Deepfakes Dataset (PDD), which consists of 32 videos, half of which are original videos and half of which are manipulated with audio impersonations, synthesized lip synchronizations, political misinformation, and situational artifacts. This dataset expands the context on which end-to-end media forensic systems can be evaluated. As an example, we evaluate the winning model of the DeepFake Detection Challenge on the PDD and find that it classifies 69% of the videos in the PDD accurately. We share this dataset publicly for researchers to evaluate their techniques with the intention of pre-bunking future misinformation attempts.
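The abstract reports a single headline metric: the fraction of the 32 PDD videos (16 original, 16 manipulated) that a detector classifies correctly. A minimal sketch of that evaluation loop is below; the `detector` callable, file names, and per-video scores are all hypothetical stand-ins, not the DFDC winning model or the actual PDD files.

```python
# Hedged sketch: scoring a deepfake detector on a 32-video dataset
# (16 real, 16 fake) and reporting accuracy, as the abstract does.
# `detector` is a hypothetical stand-in returning a fake-probability.

def evaluate_detector(detector, videos, labels, threshold=0.5):
    """Return classification accuracy of `detector` over `videos`.

    labels: 1 for manipulated (fake), 0 for original (real).
    """
    correct = 0
    for video, label in zip(videos, labels):
        prediction = 1 if detector(video) >= threshold else 0
        correct += prediction == label
    return correct / len(videos)

# Toy stand-in: 32 video names and a balanced label set.
videos = [f"video_{i:02d}.mp4" for i in range(32)]
labels = [0] * 16 + [1] * 16

# Hypothetical detector that flags videos 14-31 as fake.
detector = lambda v: 0.9 if int(v[6:8]) >= 14 else 0.1

accuracy = evaluate_detector(detector, videos, labels)
```

On the real PDD, the authors report that this kind of end-to-end evaluation yields 69% accuracy for the DFDC winning model.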

Original language: English (US)
Pages (from-to): 57-72
Number of pages: 16
Journal: CEUR Workshop Proceedings
State: Published - 2021
Event: 1st Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies, AIofAI 2021 - Montreal, Canada
Duration: Aug 19 2021 → …


Keywords

  • Dataset
  • Deepfakes
  • DFDC
  • Disinformation
  • Media forensics
  • Misinformation
  • Politics

ASJC Scopus subject areas

  • General Computer Science

