Research Article · Public Access
DOI: 10.1145/3531146.3533170

Rational Shapley Values

Published: 20 June 2022

Abstract

Explaining the predictions of opaque machine learning algorithms is an important and challenging task, especially as complex models are increasingly used to assist in high-stakes decisions such as those arising in healthcare and finance. Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context (e.g., feature attributions) or difficult to summarize (e.g., counterfactuals). In this paper, I introduce rational Shapley values, a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner. I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI. By pairing the distribution of random variables with the appropriate reference class for a given explanation task, I illustrate through theory and experiments how user goals and knowledge can inform and constrain the solution set in an iterative fashion. The method compares favorably to state-of-the-art XAI tools in a range of quantitative and qualitative comparisons.
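To make the abstract's central idea concrete — that attributions depend on which reference class a prediction is compared against — the following is a minimal, hypothetical sketch of a Monte Carlo Shapley estimator in which the caller supplies the reference class explicitly. This is an illustration of the general technique, not the paper's implementation; the function, the toy model, and the subgroup filter are invented for the example.

    import numpy as np

    def shapley_values(model, x, reference, n_samples=1000, seed=0):
        # Monte Carlo estimate of Shapley values for one instance x.
        # `reference` is a 2-D array whose rows define the reference class:
        # the subpopulation against which the prediction for x is explained.
        rng = np.random.default_rng(seed)
        d = len(x)
        phi = np.zeros(d)
        for _ in range(n_samples):
            order = rng.permutation(d)                          # random coalition order
            z = reference[rng.integers(len(reference))].copy()  # background draw
            prev = float(model(z[None, :])[0])
            for i in order:
                z[i] = x[i]                                     # reveal feature i
                curr = float(model(z[None, :])[0])
                phi[i] += curr - prev                           # marginal contribution
                prev = curr
        return phi / n_samples                                  # sums to f(x) - E[f(Z)]

    # Toy usage: the same instance explained against two reference classes.
    f = lambda X: X @ np.array([1.0, -2.0, 0.5])                # hypothetical linear model
    X_bg = np.random.default_rng(1).normal(size=(500, 3))
    x = np.array([1.0, 1.0, 1.0])
    phi_population = shapley_values(f, x, X_bg)                 # whole sample as reference
    phi_subgroup = shapley_values(f, x, X_bg[X_bg[:, 0] > 0])   # narrower reference class

Because the background draws come from whichever rows the user passes in, restricting `reference` to a decision-relevant subpopulation shifts the attributions; this choice of reference class is the lever that rational Shapley values constrain with decision-theoretic and causal considerations.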



Index Terms

  1. Rational Shapley Values
        Index terms have been assigned to the content through auto-classification.

        Recommendations

        Comments

        Information & Contributors

        Information

        Published In

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146

Publisher

Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. Counterfactuals
        2. Decision theory
        3. Explainable artificial intelligence
        4. Interpretable machine learning
        5. Shapley values

        Qualifiers

        • Research-article
        • Research
        • Refereed limited


        Article Metrics

• Downloads (last 12 months): 290
• Downloads (last 6 weeks): 40
Reflects downloads up to 20 Jan 2025.

        Cited By

• (2024) Explainability Is Not a Game. Communications of the ACM 67, 7, 66–75. DOI: 10.1145/3635301
• (2024) Explainable Artificial Intelligence (XAI): A Systematic Literature Review on Taxonomies and Applications in Finance. IEEE Access 12, 618–629. DOI: 10.1109/ACCESS.2023.3347028
• (2024) Explaining complex systems: a tutorial on transparency and interpretability in machine learning models (part I). IFAC-PapersOnLine 58, 15, 492–496. DOI: 10.1016/j.ifacol.2024.08.577
• (2024) Explaining contributions of features towards unfairness in classifiers: A novel threshold-dependent Shapley value-based approach. Engineering Applications of Artificial Intelligence 138, 109427. DOI: 10.1016/j.engappai.2024.109427
• (2024) SeqSHAP: Subsequence Level Shapley Value Explanations for Sequential Predictions. In Database Systems for Advanced Applications, 89–104. DOI: 10.1007/978-981-97-5562-2_6
• (2024) Error Analysis of Shapley Value-Based Model Explanations: An Informative Perspective. In AI Verification, 29–48. DOI: 10.1007/978-3-031-65112-0_2
• (2024) Explainable Machine Learning for Categorical and Mixed Data with Lossless Visualization. In Artificial Intelligence and Visualization: Advancing Visual Knowledge Discovery, 73–123. DOI: 10.1007/978-3-031-46549-9_3
• (2023) Disproving XAI Myths with Formal Methods – Initial Results. In 2023 27th International Conference on Engineering of Complex Computer Systems (ICECCS), 12–21. DOI: 10.1109/ICECCS59891.2023.00012
• (2023) In Defense of Sociotechnical Pragmatism. In The 2022 Yearbook of the Digital Governance Research Group, 131–164. DOI: 10.1007/978-3-031-28678-0_10
• (2022) Counterfactual Shapley Additive Explanations. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1054–1070. DOI: 10.1145/3531146.3533168
