M2P2: Multimodal Persuasion Prediction Using Adaptive Fusion

Chongyang Bai, Haipeng Chen, Srijan Kumar, Jure Leskovec, V. S. Subrahmanian*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Identifying persuasive speakers in an adversarial environment is a critical task. In a national election, politicians would like to have persuasive speakers campaign on their behalf. When a company faces adverse publicity, it would like to engage persuasive advocates for its position in the presence of adversaries who are critical of it. Debates represent a common platform for these forms of adversarial persuasion. This paper addresses two problems: Debate Outcome Prediction (DOP), which predicts who wins a debate, and Intensity of Persuasion Prediction (IPP), which predicts the change in the number of votes before and after a speaker speaks. Though DOP has been previously studied, we are the first to study IPP. Past studies on DOP fail to leverage two important aspects of multimodal data: 1) multiple modalities are often semantically aligned, and 2) different modalities may provide diverse information for prediction. Our M2P2 (Multimodal Persuasion Prediction) framework is the first to use multimodal (acoustic, visual, language) data to solve the IPP problem. To leverage the alignment of different modalities while maintaining the diversity of the cues they provide, M2P2 devises a novel adaptive fusion learning framework which fuses embeddings obtained from two modules: an alignment module that extracts shared information between modalities, and a heterogeneity module that learns the weights of different modalities with guidance from three separately trained unimodal reference models. We test M2P2 on the popular IQ2US dataset designed for DOP. We also introduce a new dataset called QPS (from Qipashuo, a popular Chinese debate TV show) for IPP. M2P2 significantly outperforms four recent baselines on both datasets.
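For readers who want a concrete picture of the adaptive fusion idea described above, the following minimal PyTorch-style sketch shows one way a learned weighted combination of acoustic, visual, and language embeddings could be wired up. The class name, feature dimensions, and softmax weighting below are illustrative assumptions; they stand in for, but do not reproduce, the paper's actual alignment and heterogeneity modules or its reference-model guidance.

    # Hypothetical sketch of adaptive fusion over three modality embeddings.
    # Dimensions and weighting scheme are assumptions, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveFusionSketch(nn.Module):
        def __init__(self, dim_a, dim_v, dim_l, dim_shared=128):
            super().__init__()
            # Project each modality (acoustic, visual, language) into a shared
            # space, loosely analogous to an alignment step.
            self.proj = nn.ModuleDict({
                "acoustic": nn.Linear(dim_a, dim_shared),
                "visual":   nn.Linear(dim_v, dim_shared),
                "language": nn.Linear(dim_l, dim_shared),
            })
            # Learnable per-modality scores, standing in for weights that the
            # paper learns with guidance from unimodal reference models.
            self.modality_scores = nn.Parameter(torch.zeros(3))
            # Single output, e.g., predicted vote change for IPP.
            self.head = nn.Linear(dim_shared, 1)

        def forward(self, x_a, x_v, x_l):
            zs = torch.stack([
                self.proj["acoustic"](x_a),
                self.proj["visual"](x_v),
                self.proj["language"](x_l),
            ], dim=1)                                   # (batch, 3, dim_shared)
            w = F.softmax(self.modality_scores, dim=0)  # adaptive modality weights
            fused = (w.view(1, 3, 1) * zs).sum(dim=1)   # weighted fusion
            return self.head(fused)

    # Usage with random features of assumed per-modality sizes:
    model = AdaptiveFusionSketch(dim_a=74, dim_v=35, dim_l=300)
    pred = model(torch.randn(4, 74), torch.randn(4, 35), torch.randn(4, 300))

The key design point illustrated here is that the fusion weights are parameters learned jointly with the rest of the model rather than fixed in advance, so the network can down-weight a modality that contributes little for a given task.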

Original language: English (US)
Pages (from-to): 942-952
Number of pages: 11
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
State: Published - 2023

Funding

This work was supported in part by NSF under Grant Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID), and IIS-2027689 (RAPID); in part by DARPA under No. (MCS); in part by ARO under Nos. W911NF-16-1-0342 (MURI) and W911NF-16-1-0171 (DURIP); and in part by the Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, JD.com, KDDI, NVIDIA, Dell, Toshiba, UnitedHealth Group, Adobe, Facebook, Microsoft, and the IDEaS Institute. J. L. is a Chan Zuckerberg Biohub investigator.

Keywords

  • Multimodal learning
  • adaptive fusion
  • persuasion

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Media Technology
  • Computer Science Applications
