On Explainable AI and Abductive Inference

Kyrylo Medianovskyi, Ahti Veikko Pietarinen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone answers that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, one endowed with abduction.

Original language: English
Article number: 35
Number of pages: 15
Journal: Philosophies
Volume: 7
Issue number: 2
DOIs
Publication status: Published - Apr 2022
Externally published: Yes

Scopus Subject Areas

  • Philosophy
  • History and Philosophy of Science

User-Defined Keywords

  • abduction
  • causal attributions
  • counterfactuals
  • explainable AI (XAI)
  • explanation
  • induction
  • machine learning
  • understanding
