The (Un)desirable shield: consequences of perceived effects of warning labels on AI-generated political disinformation

  • Ran Wei
  • Jingyi Pu
  • Ven-Hwei Lo
  • Xinzhi Zhang

Research output: Contribution to journal › Journal article › peer-review

Abstract

As AI-generated political disinformation proliferates, warning labels have emerged as a defining regulatory intervention. Drawing on Third-Person Effect (TPE) theory, this study investigates how exposure to warning labels on AI-generated disinformation shapes perceptual effects and behavioral consequences during the 2024 U.S. Presidential Election. A national online survey in the U.S. (N = 2,373) examined the impact of warning labels attached to AI-generated political disinformation targeting both Democratic and Republican candidates. Results show that exposure to the warning labels significantly increased perceived effects on both oneself and the general public. These findings support the generalizability of TPE in politically charged environments and highlight its relevance in the domain of AI-generated disinformation. Regarding behavioral outcomes, perceived effects of warning labels on others predicted support for restrictive policies and engagement in preventive actions. In contrast, perceived effects on oneself drove individual-level preventive behaviors, such as AI literacy enhancement, but did not lead to greater support for regulatory action. In addition, the perceived social desirability of warning labels was found to moderate these outcomes, particularly for anti-Republican disinformation, amplifying perceived influence among those endorsing the intervention. These findings advance TPE scholarship by highlighting the complex interplay among perception, partisanship, and regulatory attitudes, offering insights for the governance of AI-mediated information environments and the design of communication interventions that safeguard information integrity.
Original language: English
Number of pages: 20
Journal: Information, Communication and Society
Publication status: E-pub ahead of print - 12 Mar 2026

User-Defined Keywords

  • AI governance
  • AI regulation
  • AI-generated content
  • information integrity
  • message desirability
  • misinformation resilience
  • political disinformation
  • third-person effect
  • warning labels
