Value Alignment: Consensual, Agonistic, and/or Harmonious?

Pak-Hang Wong*

*Corresponding author for this work

Research output: Contribution to conference › Conference abstract › peer-review

Abstract

The project of AI Value Alignment, pioneered by AI scientist Stuart J. Russell, asks how we can build AI systems with values that "are aligned with those of the human race." The AI Value Alignment Problem (VA problem) is fundamentally about what values should be built into AI systems and how those values can be implemented, making the problem both normative and technical. In my discussion, I focus on the normative aspect of the VA problem and show that the current discourse has largely revolved around the idea of consensus. I then examine the limitations of consensus-oriented approaches and introduce two alternative visions for addressing the VA problem. Specifically, I draw on Chantal Mouffe's concept of agonism and her critique of consensus to propose a model of agonistic value alignment. I then turn to Confucianism and demonstrate how the Confucian idea of harmony can provide valuable insights into the discussion of the VA problem.
Original language: English
Publication status: Published - 17 Oct 2024
Event: Social and Ethical Issues in AI from an East Asian Perspective - Centre for Applied Ethics, Hong Kong Baptist University, Hong Kong
Duration: 16 Oct 2024 – 17 Oct 2024
https://cae.hkbu.edu.hk/academic-activities/social-and-ethical-issues-in-ai-from-an-east-asian-perspective-16-17-oct-2024.html (Conference website)
https://drive.google.com/file/d/13nEdd1-wnBbcb3qlFVZHOHbB8wHXKCXy/view (Conference abstract)

Conference

Conference: Social and Ethical Issues in AI from an East Asian Perspective
Country/Territory: Hong Kong
Period: 16/10/24 – 17/10/24

