Research article • Open access

Addressing Regulatory Requirements on Explanations for Automated Decisions with Provenance—A Case Study

Published: 20 January 2021

Abstract

AI-based automated decisions are increasingly used as part of new services deployed to the general public. This approach to building services presents significant potential benefits, such as increased speed of execution, increased accuracy, lower cost, and the ability to adapt to a wide variety of situations. However, equally significant concerns have been raised and are now well documented, notably around privacy, fairness, bias, and ethics. On the consumer side, more often than not, the users of these services are given no explanation, or an inadequate one, for decisions that may impact their lives. In this article, we report our experience of developing a socio-technical approach to constructing explanations for such decisions, in an automated manner, from their audit trails, or provenance. The work was carried out in collaboration with the UK Information Commissioner’s Office. In particular, we implemented an automated Loan Decision scenario, instrumented its decision pipeline to record provenance, categorized the relevant explanations by their audience and their regulatory purposes, built an explanation-generation prototype, and deployed the whole system in an online demonstrator.
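
To give a flavour of the provenance instrumentation the abstract refers to, the sketch below shows, purely as an illustrative assumption (the identifiers, attributes, and the choice of the open-source Python prov package are ours, not the authors' implementation), how one run of a loan-decision pipeline could emit a W3C PROV-style record from which an explanation might later be generated.

    # Minimal, hypothetical sketch (not the authors' code): one automated loan
    # decision recorded as a W3C PROV document using the Python "prov" package.
    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace('loan', 'https://example.org/loan-demo/')  # hypothetical namespace

    # Input entity: the customer's loan application.
    application = doc.entity('loan:application/123', {'loan:amountRequested': 15000})

    # Output entity: the automated decision, carrying attributes an explanation may cite.
    decision = doc.entity('loan:decision/123',
                          {'loan:outcome': 'declined', 'loan:creditScoreUsed': 512})

    # The pipeline run that assessed the application, and the software agent responsible for it.
    assessment = doc.activity('loan:assessment/123')
    pipeline = doc.agent('loan:decision-pipeline/v1', {'prov:type': 'prov:SoftwareAgent'})

    # Standard PROV relations linking the decision to its inputs and responsible agent.
    doc.used(assessment, application)
    doc.wasGeneratedBy(decision, assessment)
    doc.wasAssociatedWith(assessment, pipeline)
    doc.wasDerivedFrom(decision, application)

    print(doc.serialize(indent=2))  # PROV-JSON audit record for this single decision

An explanation generator can then traverse such records to answer, for instance, which inputs a declined decision was derived from and which pipeline component produced it.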



Published In

Digital Government: Research and Practice, Volume 2, Issue 2: COVID-19 Commentaries and Case Study
April 2021, 119 pages
EISSN: 2639-0175
DOI: 10.1145/3442345

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 20 January 2021
Online AM: 25 November 2020
Accepted: 01 November 2020
Revised: 01 August 2020
Received: 01 February 2020
Published in DGOV Volume 2, Issue 2


Author Tags

  1. Explainable computing
  2. GDPR
  3. automated decisions
  4. data provenance

Qualifiers

  • Research-article
  • Research
  • Refereed


