Democratizing Algorithmic Fairness

Pak Hang Wong*

*Corresponding author for this work

    Research output: Contribution to journal › Journal article › peer-review

    93 Citations (Scopus)

    Abstract

    Machine learning algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on the patterns and correlations identified. They can then generate decisions in accordance with the predicted outcomes, thereby automating decision-making processes. Yet algorithms can inherit questionable values from datasets and acquire biases in the course of (machine) learning. While researchers and developers have taken the problem of algorithmic bias seriously, the development of fair algorithms is primarily conceptualized as a technical task. In this paper, I discuss the limitations and risks of this view. Since decisions about the “fairness measure” and the related techniques for fair algorithms essentially involve choices between competing values, “fairness” in algorithmic fairness should be conceptualized first and foremost as a political question and be resolved politically. In short, this paper aims to foreground the political dimension of algorithmic fairness and to supplement the current discussion with a deliberative approach to algorithmic fairness based on the accountability for reasonableness framework (AFR).

    Original language: English
    Pages (from-to): 225-244
    Number of pages: 20
    Journal: Philosophy and Technology
    Volume: 33
    Issue number: 2
    DOIs
    Publication status: Published - Jun 2020

    Scopus Subject Areas

    • Philosophy
    • History and Philosophy of Science

    User-Defined Keywords

    • Accountability for reasonableness
    • Algorithmic bias
    • Democratization
    • Fairness
    • Machine learning
