Project Details
Description
The recent emergence of deepfakes—synthetic videos depicting someone saying or doing something that did not occur—has become a growing social concern. Various individuals leverage the power of artificial intelligence, such as machine learning, to generate these highly realistic and believable synthetic videos that have great potential to deceive, opening up new possibilities for the creation and spread of disinformation. Despite the online proliferation of deepfakes, relatively few empirical studies have explored their potential impact and corresponding corrective measures. Furthermore, existing evidence, while still preliminary, comes primarily from Western countries, so the question of whether and to what extent previous findings can be generalized to other contexts remains largely unknown.
Drawing on literature from social psychology, multimodal disinformation, and media literacy, the current project aims to bridge these gaps by examining the processing and effects of deepfakes, as well as interventions against them, in the context of Hong Kong. By extending the research team’s prior work on misinformation and disinformation, this project will provide deeper insights into whether, and if so why, people perceive the content of deepfakes as credible (credibility perception), mistake deepfakes for authentic videos (detection accuracy), and are willing to share deepfakes (sharing intention). Additionally, it seeks to explore whether, and what types of, digital media literacy interventions are effective in reducing the potentially harmful effects of deepfakes.
Using both qualitative and quantitative approaches, we will conduct three progressive studies using focus group, experimental, and longitudinal designs to provide a more complete picture of deepfakes. Study 1 will qualitatively explore people’s thoughts and intentions toward deepfakes and identify the types of cognitive heuristics (i.e., simple rules of thumb) commonly used when processing deepfakes. Building on Study 1, Study 2 will experimentally examine the unique effects of deepfakes (i.e., video-based disinformation) on viewers’ perceptual and behavioral responses compared to textual disinformation, as well as the psychological mechanisms underlying the observed effects. Study 3 will employ a longitudinal experiment to test the short- and long-term effectiveness of different digital media literacy intervention strategies in combating deepfakes.
The significance of this project is twofold. Theoretically, the findings will enrich the current understanding of deepfakes in the Asian context and make valuable contributions to the literature on multimodal disinformation and digital media literacy. Practically, the project will provide policymakers, interface designers, and online service providers with strategic guidance to address the growing threat of deepfakes and foster a safer, healthier online ecosystem.
| Status | Active |
|---|---|
| Effective start/end date | 1/01/25 → 31/12/26 |