Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models

Hongzhan Lin*, Ziyang Luo, Wei Gao, Jing Ma, Bo Wang, Ruichao Yang

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

1 Citation (Scopus)

Abstract

The age of social media is flooded with Internet memes, necessitating a clear grasp and effective identification of harmful ones. This task presents a significant challenge due to the implicit meaning embedded in memes, which is not explicitly conveyed through the surface text and image. However, existing harmful meme detection methods do not present readable explanations that unveil such implicit meaning to support their detection decisions. In this paper, we propose an explainable approach to detect harmful memes, achieved through reasoning over conflicting rationales from both harmless and harmful positions. Specifically, inspired by the powerful capacity of Large Language Models (LLMs) for text generation and reasoning, we first elicit a multimodal debate between LLMs to generate explanations derived from the contradictory arguments. We then fine-tune a small language model as the debate judge for harmfulness inference, facilitating multimodal fusion between the harmfulness rationales and the intrinsic multimodal information within memes. In this way, our model is empowered to perform dialectical reasoning over intricate and implicit harm-indicative patterns, utilizing multimodal explanations originating from both harmless and harmful arguments. Extensive experiments on three public meme datasets demonstrate that our harmful meme detection approach achieves substantially better performance than state-of-the-art methods and exhibits a superior capacity for explaining the harmfulness of memes in its predictions.
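To make the two-stage pipeline in the abstract concrete, the Python sketch below mocks the debate step (eliciting contradictory harmless/harmful rationales from LLMs) and the judging step. Every name here (Meme, query_llm, debate, judge_harmfulness) is a hypothetical placeholder for illustration, not the authors' released code; in the paper the judge is a fine-tuned small language model that fuses the rationales with the meme's multimodal features, whereas this sketch approximates it with a single prompted call.

# Hedged sketch of the debate-then-judge pipeline described in the abstract.
# All function and class names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Meme:
    surface_text: str   # text overlaid on the meme (e.g., from OCR)
    image_caption: str  # textual description of the image content


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model backend."""
    raise NotImplementedError("plug in an actual LLM here")


def debate(meme: Meme) -> tuple[str, str]:
    """Stage 1: elicit conflicting rationales from opposing positions."""
    context = (f"Meme text: {meme.surface_text}\n"
               f"Image description: {meme.image_caption}")
    harmless_rationale = query_llm(
        f"{context}\nArgue that this meme is HARMLESS. "
        "Explain the implicit meaning supporting that position.")
    harmful_rationale = query_llm(
        f"{context}\nArgue that this meme is HARMFUL. "
        "Explain the implicit meaning supporting that position.")
    return harmless_rationale, harmful_rationale


def judge_harmfulness(meme: Meme, harmless_rationale: str,
                      harmful_rationale: str) -> bool:
    """Stage 2: the debate judge weighs both rationales against the meme's
    own content. The paper fine-tunes a small language model for this;
    a prompted call stands in for it here for brevity."""
    verdict = query_llm(
        f"Meme text: {meme.surface_text}\n"
        f"Image description: {meme.image_caption}\n"
        f"Harmless argument: {harmless_rationale}\n"
        f"Harmful argument: {harmful_rationale}\n"
        "Weigh both arguments and answer 'harmful' or 'harmless'.")
    return verdict.strip().lower().startswith("harmful")

Keeping the two stages separate mirrors the design rationale stated in the abstract: the conflicting rationales serve as readable explanations in their own right, while the judge performs the final dialectical inference over them.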

Original language: English
Title of host publication: WWW '24: Proceedings of the ACM on Web Conference 2024
Publisher: Association for Computing Machinery (ACM)
Pages: 2359-2370
Number of pages: 12
Edition: 1st
ISBN (Electronic): 9798400701719
DOIs
Publication status: Published - 13 May 2024
Event: 33rd ACM Web Conference, WWW 2024 - Singapore
Duration: 13 May 2024 - 17 May 2024
https://dl.acm.org/doi/proceedings/10.1145/3589334
https://dl.acm.org/doi/proceedings/10.1145/3589335

Publication series

Name: WWW 2024 - Proceedings of the ACM Web Conference

Conference

Conference: 33rd ACM Web Conference, WWW 2024
Country/Territory: Singapore
Period: 13/05/24 - 17/05/24

Scopus Subject Areas

  • Computer Networks and Communications
  • Software

User-Defined Keywords

  • explainability
  • harmful meme detection
  • LLMs
  • multimodal debate
