“To Run or Not to Run?” How People React to AI Disaster Alerts When AI Makes a Mistake in Emergency Response

Research output: Contribution to conference › Conference paper › peer-review

Abstract

Introduction
As communities worldwide face increasing threats from extreme weather events and public health crises, building resilience at both individual and community levels is essential. Artificial intelligence (AI) has shown promise in bolstering resilience, particularly in emergency disaster response and risk communication, through its ability to predict, monitor, and disseminate real-time alerts during crises (Asokan et al., 2024). Among AI applications in disaster response, AI-powered social media bots stand out as one of the most accessible public-facing tools (Ogie et al., 2018). With social media now integral to disaster response and AI’s capability to rapidly process vast amounts of data, these bots provide up-to-date, actionable advice—from shelter locations to emergency protocols—enhancing community preparedness and resilience (Harika et al., 2024). While AI in disaster response has been widely studied at the organizational level, little is known about how communities perceive, process, and respond to AI assistance during crises. This study investigates how people interact with AI-powered emergency response alerts on social media compared to human-generated alerts and how these interactions shape message perceptions (credibility, trustworthiness, and helpfulness), community attitudes (perceived resilience, efficacy, and competence), emotional responses, and pro-community behavioral intentions.

Additionally, this study introduces an experimental manipulation of error type—specifically, a factual error—to examine AI versus human failure in disaster communication. Mistakes in critical messaging, such as incorrect evacuation routes, can have severe consequences. Prior research shows that AI failures often face harsher scrutiny than human errors in high-stakes contexts due to heightened expectations of accuracy and reliability (Erlei et al., 2024; Franklin et al., 2021). By testing error tolerance in AI-driven disaster communication, we assess public reactions to technological versus human mistakes and their impact on trust, credibility, and compliance with emergency advice. Thus, this study poses two key research questions:

RQ1: How do community members react to AI-powered disaster alerts compared to human-generated alerts?
RQ2: How do community members react to mistakes made by AI in disaster alerts compared to mistakes made by humans?

Methods
To answer these questions, we designed a 2 (Source: AI vs. Human) × 2 (Error: Mistake vs. No Mistake) between-subjects online experiment with 420 participants from a densely populated, disaster-prone community. Participants will browse a simulated social media feed featuring a disaster alert post about an impending hurricane. The post includes an evacuation directive and a designated shelter location. In the no-mistake condition, the post provides correct evacuation instructions; in the mistake condition, it contains an erroneous shelter address and route that local community members can easily recognize as incorrect. Source attribution is clearly indicated, and all other visual and textual elements remain constant across conditions.

After providing consent, participants will complete a brief survey assessing baseline attitudes toward AI, social media usage, prior disaster experience, and perceptions of community resilience, efficacy, and competence. They will then be randomly assigned to one of four experimental conditions and instructed to browse the simulated feed for approximately 2–3 minutes, mimicking typical social media use. Immediately after exposure, participants will complete a post-exposure survey measuring message perception (credibility, trustworthiness, and helpfulness), emotional response (anxiety, urgency, fear, hope, and reassurance), community attitudes, and behavioral intentions (e.g., likelihood to follow the alert, share it, and assist others). Manipulation checks will confirm recognition of the message source and the presence or absence of an error.

Data will be analyzed using descriptive statistics and two-way ANOVAs on key dependent variables. If significant interactions emerge, simple effects analyses will further explore these differences. Additionally, mediation analyses will assess whether existing attitudes toward AI mediate the relationship between independent variables and outcomes such as message perception, community beliefs, and behavioral intentions.
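As an illustrative sketch of the planned analysis, a two-way ANOVA on a balanced 2 (Source) × 2 (Error) design can be computed with standard sums-of-squares decomposition. The function name, cell labels, and example scores below are hypothetical placeholders, not data from the study, and a real analysis would likely use a statistics package such as statsmodels rather than this hand-rolled version:

```python
from statistics import mean

def two_way_anova(cells):
    """Balanced two-way ANOVA for a 2x2 between-subjects design.

    cells: dict mapping (source, error) -> list of scores,
    with an equal number of participants per cell.
    Returns F statistics for the two main effects and the interaction
    (each with 1 numerator degree of freedom in a 2x2 design).
    """
    n = len(next(iter(cells.values())))                  # per-cell sample size
    a_levels = sorted({k[0] for k in cells})             # e.g. AI vs. Human
    b_levels = sorted({k[1] for k in cells})             # e.g. Mistake vs. No Mistake
    grand = mean(x for xs in cells.values() for x in xs)
    a_means = {a: mean(x for k, xs in cells.items() if k[0] == a for x in xs)
               for a in a_levels}
    b_means = {b: mean(x for k, xs in cells.items() if k[1] == b for x in xs)
               for b in b_levels}
    cell_means = {k: mean(xs) for k, xs in cells.items()}

    # Sums of squares: main effects, interaction (cells minus mains), error.
    ss_a = n * len(b_levels) * sum((a_means[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_means[b] - grand) ** 2 for b in b_levels)
    ss_cells = n * sum((m - grand) ** 2 for m in cell_means.values())
    ss_ab = ss_cells - ss_a - ss_b
    ss_err = sum((x - cell_means[k]) ** 2 for k, xs in cells.items() for x in xs)

    df_err = len(a_levels) * len(b_levels) * (n - 1)
    ms_err = ss_err / df_err                             # mean square error
    return {
        "F_source": ss_a / ms_err,
        "F_error": ss_b / ms_err,
        "F_interaction": ss_ab / ms_err,
    }

# Hypothetical credibility ratings (1-7 scale), three participants per cell.
cells = {
    ("AI", "mistake"): [2.0, 2.5, 3.0],
    ("AI", "none"): [4.0, 4.5, 5.0],
    ("Human", "mistake"): [3.5, 4.0, 4.5],
    ("Human", "none"): [4.5, 5.0, 5.5],
}
result = two_way_anova(cells)
```

A significant `F_interaction` would motivate the simple-effects follow-up described above, comparing the impact of a mistake within the AI condition against its impact within the human condition.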

Contribution
As AI systems become more integrated into crisis communication, understanding how mistakes impact trust, credibility, and compliance is critical for designing effective emergency messaging. We anticipate that AI-generated messages will be perceived as less credible than human-generated ones and that AI mistakes will lead to a sharper decline in trust and behavioral intentions compared to human errors. These findings will provide valuable insights into public attitudes toward AI in high-stakes situations and inform best practices for integrating AI into disaster communication strategies.
Original language: English
Publication status: Published - 17 Jul 2025
Event: International Association for Media and Communication Research Conference, IAMCR 2025: Communicating Environmental Justice: Many Voices, One Planet - Nanyang Technological University, Singapore, Singapore
Duration: 13 Jul 2025 – 17 Jul 2025
https://iamcr.org/singapore2025 (Link to conference website)
https://iamcr.box.com/shared/static/j5shleei5r4gcid0anss9rk2cof80b51.pdf (Conference programme)


User-Defined Keywords

  • Disaster Response
  • Community Resilience
  • Artificial Intelligence
  • Risk Communication

