TY - JOUR
T1 - When communicative AIs are cooperative actors: A prisoner’s dilemma experiment on human–communicative artificial intelligence cooperation
AU - Ng, Yu-Leung
N1 - Funding Information:
This study was supported by the Tier 2 Start-up Grant of Hong Kong Baptist University [grant number RC-SGT2/19-20/COMM/001].
Publisher Copyright:
© 2022 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2023/10/3
Y1 - 2023/10/3
AB - This study examined the possibility of cooperation between humans and communicative artificial intelligence (AI) by conducting a prisoner’s dilemma experiment. A 2 (AI vs human partner) × 2 (cooperative vs non-cooperative partner) between-subjects, six-trial prisoner’s dilemma experiment was employed. Participants played the strategy game with a cooperative AI, a non-cooperative AI, a cooperative human, or a non-cooperative human partner. Results showed that when partners, whether communicative AI or human, proposed cooperation on the first trial, 80% to 90% of the participants also cooperated. More than 75% kept the promise and decided to cooperate. About 60% to 80% proposed, committed, and decided to cooperate when their partner proposed and kept the commitment to cooperate across trials, regardless of whether the partner was a cooperative human or a communicative AI. Overall, participants were more likely to commit and cooperate with cooperative AI partners than with non-cooperative AI and human partners.
KW - Artificial intelligence
KW - computers are social actors
KW - cooperation
KW - human–AI interaction
KW - human–machine communication
KW - social dilemmas
UR - https://www.ingentaconnect.com/content/tandf/tbit/2023/00000042/00000013/art00004
UR - http://www.scopus.com/inward/record.url?scp=85135604454&partnerID=8YFLogxK
U2 - 10.1080/0144929X.2022.2111273
DO - 10.1080/0144929X.2022.2111273
M3 - Journal article
SN - 0144-929X
VL - 42
SP - 2141
EP - 2151
JO - Behaviour and Information Technology
JF - Behaviour and Information Technology
IS - 13
ER -