After Harm: A Plea for Moral Repair after Algorithms Have Failed

Research output: Contribution to journal › Journal article › peer-review

Abstract

In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e., cases in which individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance, and we propose the notion of the algorithmic imprint as a sensitizing concept for understanding both the nature and the potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, and we offer directions on what such a plea for moral repair ultimately entails.

Original language: English
Article number: 26
Number of pages: 11
Journal: Science and Engineering Ethics
Volume: 31
Issue number: 5
Early online date: 18 Sept 2025
DOIs
Publication status: Published - Oct 2025

User-Defined Keywords

  • Algorithmic harm
  • Algorithmic imprint
  • AI ethics
  • AI governance
  • Moral repair
  • Post-harm scenarios
