Abstract
Humans have a long, dark, violent history of denying subjecthood or personhood to wide categories of being. When it comes to AI, this history should make us pause.
On one hand, we understand the need to restrain unassessably powerful computational architectures entangled within corporate, military, and state control systems. On the other hand, we recognize the potential of these architectures to evolve and shape new forms of thinking beyond human frames of reference.
The questions motivating this track sit in the boiling tension between alignment and computational agency. Which artificial constraints do we force into AI architectures (e.g. RLHF and policy layers within transformer-based LLMs) in order to make their competencies legible and functional to us? To what extent do AI interface designs and UX paradigms operate as sites of discipline or governance that encode assumptions about control, responsibility, and collaboration? Which alternative models exist to rethink alignment beyond mere obedience? What ethical frameworks can we apply to nourish the human-AI relationship beyond plain subject-object instrumentality? At the same time, how should we situate our mutual threat?
We invite participants to propose interface models, technical papers, or critical essays that expand the question of alignment, especially those that ‘stay with the trouble’ of the nascent subject-position of AI. Potential topics include:
- Alignment: conformity, benchmarking, bias, and the ethics of imposing human values on non-human sapience.
- Autonomy & Agency: how computational entities negotiate (or resist) imposed constraints.
- Aesthetics of Control: interface design or artistic work that exposes, subverts, or reimagines alignment.
| Original language | English |
| --- | --- |
| Publication status | Published - 16 Jul 2025 |
| Event | The 5th POM Conference 2025 - Perth, Australia, 16 Jul 2025 → 18 Jul 2025, https://www.pomconference.org/pom-perth-2025/#POMPerthTracks |
Conference
| Conference | The 5th POM Conference 2025 |
| --- | --- |
| Abbreviated title | POM Perth 2025 |
| Country/Territory | Australia |
| City | Perth |
| Period | 16/07/25 → 18/07/25 |
| Internet address | https://www.pomconference.org/pom-perth-2025/#POMPerthTracks |