Abstract
Machine learning algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on the identified patterns and correlations. They can then generate decisions in accordance with the predicted outcomes, and decision-making processes can thereby be automated. Algorithms can inherit questionable values from datasets and acquire biases in the course of (machine) learning. While researchers and developers have taken the problem of algorithmic bias seriously, the development of fair algorithms is primarily conceptualized as a technical task. In this paper, I discuss the limitations and risks of this view. Since decisions on "fairness measures" and the related techniques for fair algorithms essentially involve choices between competing values, "fairness" in algorithmic fairness should be conceptualized first and foremost as a political question and be resolved politically. In short, this paper aims to foreground the political dimension of algorithmic fairness and to supplement the current discussion with a deliberative approach to algorithmic fairness based on the accountability for reasonableness (AFR) framework.
| Original language | English |
|---|---|
| Pages (from-to) | 225-244 |
| Number of pages | 20 |
| Journal | Philosophy and Technology |
| Volume | 33 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Jun 2020 |
Scopus Subject Areas
- Philosophy
- History and Philosophy of Science
User-Defined Keywords
- Accountability for reasonableness
- Algorithmic bias
- Democratization
- Fairness
- Machine learning