Let’s Agree to Disagree: Towards a Solution to the Disagreement Problem in Explainability
At a Glance
Publication Venue
Publication Status
Part of
Topic
Associated With
Versions
Code?
Dataset?
Separate Papers for Code and Dataset?
Paper Preview
Excerpt
None provided for the current version; the excerpt below is taken from the most recent published version.
Explanatory systems make the behavior of machine learning models more transparent, but they are often inconsistent with one another. To quantify the differences between explanatory systems, this paper presents the Shreyan Distance, a novel metric based on the weighted difference between the ranked feature importance lists produced by such systems. The paper uses the Shreyan Distance to compare two explanatory systems, SHAP and LIME, on both regression and classification learning tasks. Because we find that the average Shreyan Distance varies significantly between these two tasks, we conclude that consistency between explainers depends not only on inherent properties of the explainers themselves but also on the type of learning task. The paper further contributes the XAISuite library, which integrates the Shreyan Distance algorithm into machine learning pipelines.
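The excerpt describes the Shreyan Distance only at a high level, and the paper's exact formula is not reproduced here. The Python sketch below illustrates the general idea of a weighted difference between two ranked feature-importance lists; the function name `weighted_rank_distance`, the rank weights, and the example feature names are illustrative assumptions, not the paper's definition or the XAISuite API.

```python
# Hypothetical sketch of a weighted rank-difference score between two ranked
# feature-importance lists (e.g., from SHAP and LIME). This is NOT the paper's
# exact Shreyan Distance formula; it only illustrates the general approach.

def weighted_rank_distance(ranking_a, ranking_b, weights=None):
    """Return a normalized weighted difference between two rankings.

    ranking_a, ranking_b: lists of feature names ordered from most to least important
    weights: optional per-rank weights; by default, higher-ranked features count more
    """
    if set(ranking_a) != set(ranking_b):
        raise ValueError("rankings must cover the same set of features")
    n = len(ranking_a)
    if weights is None:
        # Emphasize disagreement among the most important features.
        weights = [n - i for i in range(n)]

    # Position of each feature in the second ranking.
    pos_b = {feature: rank for rank, feature in enumerate(ranking_b)}

    # Weighted sum of rank displacements between the two lists.
    total = sum(w * abs(i - pos_b[f])
                for i, (f, w) in enumerate(zip(ranking_a, weights)))

    # Normalize by an upper bound on the total weighted displacement
    # so the score lies in [0, 1].
    max_total = sum(w * max(i, n - 1 - i) for i, w in enumerate(weights))
    return total / max_total if max_total else 0.0


# Example: rankings produced by two explainers on the same model (feature names are made up).
shap_ranking = ["age", "income", "education", "zip_code"]
lime_ranking = ["income", "age", "zip_code", "education"]
print(weighted_rank_distance(shap_ranking, lime_ranking))  # 0.0 = identical rankings, 1.0 = maximal disagreement
```

A score of 0 indicates the two explainers rank features identically, while larger values indicate greater disagreement; averaging such scores across models trained on regression versus classification tasks is the kind of comparison the excerpt describes.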
Citation
None provided for the current version; the citation below is taken from the most recent published version.
Mitra, Shreyan, and Gilpin, Leilani. "A novel post-hoc explanation comparison metric and applications." ICPRAI Conference Proceedings 1(3), 2024.
Code
XAIPipe
History
Previous Published Versions of This Paper
- The XAISuite framework and the implications of explanatory system dissonance (arXiv preprint, 2023)
- A novel post-hoc explanation comparison metric and applications (arXiv preprint, 2023; published at ICPRAI, 2024)
Archived Code
XAISuite