Explainability in AI is crucial for trust, especially in decision-making. Logic-based systems allow knowledge to be represented in a formal and transparent way. A central reasoning mode in logic-based systems is abductive reasoning: given an observation, find a plausible hypothesis that best explains the observed event.
Observation: A medical AI detects a high fever and a persistent cough.
Hypothesis 1: Influenza. Hypothesis 2: Pneumonia.
An abductive explanation selects the hypothesis that best explains the observation. Moreover, contrastive explanations help users see not just why a conclusion holds, but also why an alternative does not hold. This seminar explores logical frameworks for abductive and contrastive explanations as tools for enhancing AI transparency and user trust.
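To make the idea concrete, here is a minimal Python sketch of propositional abduction over a toy rule base. The rule base, symptom names, and function names are invented for illustration and are not part of the seminar material.

```python
# A minimal sketch of propositional abduction over a toy rule base.
# The rules and symptom names are hypothetical, chosen only to mirror
# the fever/cough example above.

# Each hypothesis is mapped to the set of symptoms it would explain.
RULES = {
    "influenza":   {"high_fever", "persistent_cough", "muscle_ache"},
    "pneumonia":   {"high_fever", "persistent_cough", "chest_pain"},
    "common_cold": {"persistent_cough", "runny_nose"},
}

def abductive_explanations(observation):
    """Hypotheses whose predicted symptoms cover the whole observation."""
    return [h for h, symptoms in RULES.items() if observation <= symptoms]

def contrastive_explanation(observation, rejected):
    """Why is `rejected` not an explanation? Return the observed facts
    it fails to account for (an empty set means it cannot be ruled out)."""
    return observation - RULES[rejected]

obs = {"high_fever", "persistent_cough"}
print(abductive_explanations(obs))                  # ['influenza', 'pneumonia']
print(contrastive_explanation(obs, "common_cold"))  # {'high_fever'}
```

In this toy example both influenza and pneumonia cover the observation, so a full abduction procedure would additionally apply a preference criterion (for instance, minimality or prior plausibility) to select the best hypothesis; formalising such criteria is part of what the logical frameworks discussed in the seminar address.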
João Marques-Silva: Logic-Based Explainability: Past, Present and Future. ISoLA (4) 2024: 181-204
João Marques-Silva, Alexey Ignatiev: No silver bullet: interpretable ML models must be explained. Frontiers Artif. Intell. 6 (2023)
Sushmita Paul, Jinqiang Yu, Jip J. Dekker, Alexey Ignatiev, Peter J. Stuckey: Formal Explanations for Neuro-Symbolic AI. CoRR abs/2410.14219 (2024)
The seminar will be available in PAUL (TBA).