Abstract
In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have focused mainly on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e., situations in which individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance, and we propose the notion of the algorithmic imprint as a sensitizing concept for understanding both the nature and the potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions of moral repair, and we offer directions on what such a plea for moral repair ultimately entails.
| Field | Value |
|---|---|
| Original language | English |
| Article number | 26 |
| Number of pages | 11 |
| Journal | Science and Engineering Ethics |
| Volume | 31 |
| Issue number | 5 |
| Early online date | 18 Sept 2025 |
| DOIs | |
| Publication status | Published - Oct 2025 |
User-Defined Keywords
- Algorithmic harm
- Algorithmic imprint
- AI ethics
- AI governance
- Moral repair
- Post-harm scenarios