The paper addresses the challenge of explanation size in traditional abductive explanations, which can be too long for human decision-makers to digest. The authors propose locally-minimal probabilistic abductive explanations as a practical alternative: in practice they offer high-quality approximations of exact probabilistic abductive explanations (PAXps) while remaining efficient to compute.
Background: Traditional abductive explanations are grounded in logic-based reasoning and come with formal guarantees of correctness, but the resulting explanations can exceed the cognitive limits of human decision-makers. Probabilistic abductive explanations (PAXps) address this limitation by requiring only that the explanation entail the prediction with high probability, trading a small loss of rigor for smaller explanations. However, computing PAXps exactly is computationally hard, which makes their exact computation unrealistic in many cases.
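To make the trade-off concrete, here is the definition in notation common to this line of work (the symbols κ, v, S, and δ below are our assumptions, since the summary itself fixes no notation): a set of features S is a weak PAXp for an instance v with prediction c = κ(v) when fixing the features in S to their values in v preserves the prediction with probability at least a threshold δ,

    \Pr_{\mathbf{x}}\bigl(\kappa(\mathbf{x}) = c \,\big|\, \mathbf{x}_{S} = \mathbf{v}_{S}\bigr) \ge \delta,

where the unfixed features range over the feature space. Setting δ = 1 recovers a traditional abductive explanation, so the threshold directly controls the rigor/size trade-off.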
Proposed Algorithms: The paper proposes two novel algorithms for computing locally-minimal PAXps: one based on approximate model counting and one based on sampling with probabilistic guarantees. Because local minimality can be established with a single greedy pass over the features, these algorithms avoid analyzing all possible subsets of the target set of features, which keeps computation tractable while still substantially reducing explanation size; a sketch of the sampling-based variant appears below.
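A minimal sketch of the sampling-based idea, assuming Monte Carlo estimation with uniform sampling of the free features (the function names, the uniform sampler, and the default parameters are illustrative choices of ours, not the paper's implementation, which comes with formal probabilistic guarantees):

```python
import random

def estimate_precision(classifier, instance, fixed, feature_domains, n_samples=2000):
    """Monte Carlo estimate of Pr[classifier(x) == target | x agrees with
    `instance` on the features in `fixed`], drawing the remaining features
    uniformly from their domains (an illustrative assumption; the paper's
    guarantees depend on the counter/sampler actually used)."""
    target = classifier(instance)
    hits = 0
    for _ in range(n_samples):
        x = list(instance)
        for j, domain in enumerate(feature_domains):
            if j not in fixed:
                x[j] = random.choice(domain)
        hits += (classifier(x) == target)
    return hits / n_samples

def locally_minimal_paxp(classifier, instance, feature_domains, delta=0.95):
    """Single greedy pass, in the spirit of deletion-based explanation
    algorithms: tentatively drop each feature once, and keep it only if
    dropping it pushes the estimated precision below the threshold delta."""
    fixed = set(range(len(instance)))
    for j in range(len(instance)):
        candidate = fixed - {j}
        if estimate_precision(classifier, instance, candidate, feature_domains) >= delta:
            fixed = candidate  # feature j is not needed; drop it
    return sorted(fixed)
```

For example, for a toy classifier over three binary features that predicts x0 AND x1, `locally_minimal_paxp(lambda x: int(x[0] == 1 and x[1] == 1), [1, 1, 0], [[0, 1]] * 3)` will almost surely return `[0, 1]`: the irrelevant third feature is dropped because fixing only the first two already preserves the prediction with probability 1.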
Experimental Results: The authors evaluate the proposed algorithms on two case-study classifiers: random forests (RFs) and binary neural networks (BNNs). The results demonstrate the practical efficiency of the proposed algorithms, with explanation length reduced by one-third to two-thirds compared to traditional abductive explanations.
Conclusion: The paper offers a promising solution for high-stakes uses of machine learning where explanation size is critical. The proposed locally-minimal probabilistic abductive explanations are efficient and practical, and in practice they closely approximate exact PAXps. The algorithms' effectiveness is demonstrated through experimental results on the two case-study classifiers.
Computer Science, Machine Learning