TY - GEN
T1 - When Algorithms Fail: The Case for Moral Repair (Extended Abstract)
AU - Wong, Pak-Hang
AU - Rieder, Gernot
N1 - For this work, GR was supported by the Research Council of Norway under grant number 315580, "CoPol: COVID-19 contact tracing as Digital Politics".
Publisher copyright:
© 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
UR - https://ojs.aaai.org/index.php/AIES/article/download/36752/38890
PY - 2025/10/15
Y1 - 2025/10/15
N2 - In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e., cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.
AB - In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e., cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.
U2 - 10.1609/aies.v8i3.36752
DO - 10.1609/aies.v8i3.36752
M3 - Conference proceeding
SN - 157735902X
SN - 9781577359029
T3 - Proceedings of the AAAI/ACM Conference on AI Ethics and Society
SP - 2730
EP - 2731
BT - Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society (AIES-25) - Main Track III & Student Abstracts
A2 - Burton, Emanuelle
A2 - Mattei, Nicholas
A2 - Páez, Andrés
PB - AAAI Press
T2 - The Eighth AAAI/ACM Conference on AI, Ethics, and Society (AIES-25)
Y2 - 20 October 2025 through 22 October 2025
ER -