Navigating Complexities: Gendered Political Misinformation, Cognitive Ability, and AI Imaginaries in the 2024 US Election

Research output: Contribution to conference › Conference paper › peer-review

Abstract

Introduction:
In the digital age, fake news poses a growing threat to democratic processes, particularly when intertwined with the rise of artificial intelligence (AI) technologies capable of generating persuasive misinformation. During election campaigns, misinformation, especially gendered fake news (Stabile et al., 2019), can foster mistrust and influence public opinion. This study uses the 2024 US presidential election campaigns as a timely context to examine these dynamics, focusing on the impact of AI-generated multimodal misinformation and individuals’ susceptibility to such content. We examine the mechanisms through which gendered political misinformation (i.e., fake news regarding Trump vs. Harris) comes to be believed and accepted.

A 2 (text vs. text+image) × 2 (AI vs. non-AI misinformation) survey experiment was conducted in US swing states to investigate how cognitive ability and need for cognition affect the acceptance of political misinformation, with AI literacy and AI imaginaries serving as moderators.

Political misinformation and AI deepfakes
Political misinformation, defined as the deliberate spread of false or misleading information, undermines democratic processes by distorting the electorate’s perceptions of candidates and issues (Fallis, 2014). Studies have shown that the spread of misinformation on social media affects public trust in political institutions and shapes attitudes toward policy (Lewandowsky et al., 2012; Waisbord, 2021).

Research on AI’s role in misinformation has become urgent, particularly in light of deepfakes and other synthetic media. Deepfakes are AI-generated synthetic media, such as pictures, videos, or audio, that realistically alter or swap out a person’s appearance and actions in an existing piece of content (Doss et al., 2023). Because deepfakes are built with deep learning algorithms, their capacity to disseminate false information and deceive viewers has drawn attention (Ahmed, 2023). Hwang and Watts (2020) discuss how deepfakes can serve as a powerful tool for disinformation campaigns, arguing that their realism can exploit cognitive biases and make misinformation more persuasive.

Cognitive ability and need for cognition in (mis)information processing
Cognitive ability plays a crucial role in information processing, with higher cognitive skills linked to greater skepticism and the ability to discern misinformation (Pennycook & Rand, 2018). Likewise, need for cognition (NFC), the tendency to engage in and enjoy thinking, has been shown to reduce belief in misinformation by enhancing analytical thinking (Su et al., 2021). However, few studies have examined how these individual differences interact with AI-driven multimodal misinformation.

Moderating Role of AI Literacy and AI Imaginaries
AI literacy is a relatively new concept that has yet to be clearly defined; it is often regarded as an advanced form of digital literacy. At its core, AI literacy encompasses the ability to understand, interact with, and critically assess AI systems and their outputs (Linter, 2024). A review focused on conceptualizing AI literacy, drawing on existing literacy frameworks, identified four essential components: knowledge and understanding, usage, evaluation, and awareness of the ethical implications of AI use.

AI imaginaries are conceptualized as the collective ideas, perceptions, and narratives that shape society’s understanding and expectations of artificial intelligence (Zhong, 2024). Such imaginaries are sociotechnical: collectively held and performed visions of desirable futures (Jasanoff, 2015, p. 4) that are shaped by individuals’ imagination. In America, concern about artificial intelligence in daily life outweighs excitement (Kikuchi & Tyson, 2023), which may foster shared negative beliefs about and attitudes toward AI. This study explores how AI imaginaries play a part in individuals’ acceptance of political misinformation amid these timely challenges.

Although previous research has suggested the importance of digital literacy in combating misinformation (Metzger & Flanagin, 2013), little is known about how AI-specific literacy and AI imaginaries function as protective factors.

In sum, the research questions and hypotheses are as follows:

RQ1. Does the modality of misinformation (text-only vs. text+image) influence participants’ acceptance of political misinformation?

RQ2. Does the source of misinformation (AI vs. human) influence participants’ acceptance of political misinformation?

H1. Cognitive ability is negatively associated with political disinformation acceptance. Individuals with a higher level of cognitive ability are less likely to accept political disinformation.

H2. Need for cognition is negatively associated with political disinformation acceptance. Individuals low in need for cognition are more likely to accept political disinformation.

RQ3. Do AI imaginaries moderate the relationship between cognitive ability and political disinformation acceptance?

RQ4. Does AI literacy moderate the relationship between cognitive ability and political disinformation acceptance?
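Although the paper does not report its analytic model, RQ3 and RQ4 imply a moderated regression in which interaction terms between cognitive ability and each moderator predict misinformation acceptance. The following is a minimal sketch under that assumption; all variable names and data below are hypothetical, not the study’s measures or results.

```python
# A minimal sketch of the moderation analysis implied by H1-H2 and RQ3-RQ4,
# assuming an OLS model with interaction terms. The variable names and the
# simulated data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 228  # sample size reported in the Method section

df = pd.DataFrame({
    "cognitive_ability": rng.normal(size=n),
    "need_for_cognition": rng.normal(size=n),
    "ai_literacy": rng.normal(size=n),
    "ai_imaginaries": rng.normal(size=n),
})
# Simulated outcome consistent with H1 and H2: acceptance of misinformation
# decreases with cognitive ability and with need for cognition.
df["acceptance"] = (
    -0.4 * df["cognitive_ability"]
    - 0.3 * df["need_for_cognition"]
    + rng.normal(scale=1.0, size=n)
)

# The interaction terms test whether AI imaginaries (RQ3) and AI literacy (RQ4)
# moderate the cognitive ability -> acceptance relationship.
model = smf.ols(
    "acceptance ~ cognitive_ability * ai_imaginaries"
    " + cognitive_ability * ai_literacy"
    " + need_for_cognition",
    data=df,
).fit()
print(model.summary())
```

Significant interaction coefficients in a model of this form would indicate moderation; their absence would suggest cognitive ability operates independently of AI literacy and AI imaginaries.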

Method:
After obtaining ethics approval from the university, this study was carried out in November 2024 with 228 participants from US swing states (i.e., Pennsylvania, North Carolina, and Michigan), recruited via CloudResearch to complete the questionnaire online. Participants’ average age was 40.95 (SD = 12.04); 45.6% were male, 53.9% female, and 0.4% non-binary; 72.8% were White, 14.9% African American, 5.7% Asian, 4.4% Hispanic or Latino, and 2.2% other ethnicities. 26% identified with the Republican Party and 59.2% with the Democratic Party. A manipulation check showed that the source manipulation was effective, χ2 (2, N = 228) = 186.74, p < 0.001. Each participant was compensated US$1.00 after completing the study.
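For illustration, a manipulation check of this kind can be computed as a chi-square test of independence on a contingency table of assigned condition by perceived source. The sketch below uses made-up counts purely for demonstration; only the statistic reported above comes from the study.

```python
# Illustrative source manipulation check. The counts are hypothetical; the
# study reports chi2(2, N = 228) = 186.74, p < 0.001. Two degrees of freedom
# are consistent with a 2 (assigned condition) x 3 (perceived source) table.
from scipy.stats import chi2_contingency

observed = [
    # perceived source: AI, human, unsure
    [95, 10, 9],   # assigned to the AI-source condition
    [8, 98, 8],    # assigned to the human-source condition
]

chi2, p, dof, expected = chi2_contingency(observed)
n = sum(map(sum, observed))
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.3g}")
```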

Conclusion:
By exploring these relationships, this research contributes to the theoretical understanding of multimodality effects and information processing. Practical interventions could involve teaching students and the public to recognize the limitations and capabilities of AI in generating realistic but deceptive content; to critically evaluate multimodal information using digital tools and verification techniques; and to understand how cognitive biases and need for cognition shape their interpretation of political content. Programs that emphasize these skills could empower citizens, contributing to a more informed and resilient public.
Original language: English
Publication status: Published - 15 Jul 2025
Event: International Association for Media and Communication Research Conference, IAMCR 2025: Communicating Environmental Justice: Many Voices, One Planet - Nanyang Technological University, Singapore, Singapore
Duration: 13 Jul 2025 - 17 Jul 2025
https://iamcr.org/singapore2025 (Link to conference website)
https://iamcr.box.com/shared/static/j5shleei5r4gcid0anss9rk2cof80b51.pdf (Conference programme)

Conference

Conference: International Association for Media and Communication Research Conference, IAMCR 2025
Country/Territory: Singapore
City: Singapore
Period: 13/07/25 - 17/07/25

User-Defined Keywords

  • misinformation
  • gendered fake news
  • deepfakes
  • AI literacy
  • US election
