Abstract
This study examined the possibility of cooperation between humans and communicative artificial intelligence (AI) through a prisoner’s dilemma experiment. A 2 (AI vs. human partner) × 2 (cooperative vs. non-cooperative partner) between-subjects design was used in a six-trial prisoner’s dilemma game: each participant played with one of four partners — a cooperative AI, a non-cooperative AI, a cooperative human, or a non-cooperative human. Results showed that when partners (both communicative AI and human) proposed cooperation on the first trial, 80% to 90% of participants cooperated as well, and more than 75% kept their promise and decided to cooperate. Across trials, about 60% to 80% of participants proposed, committed to, and decided on cooperation when their partner proposed and kept a commitment to cooperate, regardless of whether that partner was a cooperative human or a communicative AI. Overall, participants were more likely to commit and cooperate with cooperative AI partners than with non-cooperative AI and human partners.
| Original language | English |
|---|---|
| Pages (from-to) | 2141-2151 |
| Number of pages | 11 |
| Journal | Behaviour and Information Technology |
| Volume | 42 |
| Issue number | 13 |
| Early online date | 9 Aug 2022 |
| DOIs | |
| Publication status | Published - 3 Oct 2023 |
User-Defined Keywords
- Artificial intelligence
- computers are social actors
- cooperation
- human–AI interaction
- human–machine communication
- social dilemmas
Title
When communicative AIs are cooperative actors: A prisoner’s dilemma experiment on human–communicative artificial intelligence cooperation