Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.
Updated Nov 19, 2019 - TeX
Paper for 2018 Joint Statistical Meetings: https://ww2.amstat.org/meetings/jsm/2018/onlineprogram/AbstractDetails.cfm?abstractid=329539
Code for the paper "EXPLORA: AI/ML EXPLainability for the Open RAN" by Claudio Fiandrino, Leonardo Bonati, Salvatore d'Oro, Michele Polese, Tommaso Melodia, and Joerg Widmer. CoNEXT '23, December 5–8, 2023, Paris, France. DOI: 10.1145/3629141
An end-to-end, research-grade AI system for measuring human cognition. HCMS models mastery, confidence, learning stability, and adaptability through analysis, inference, validation, robustness testing, and explainability, bridging human-centered AI research and applied systems.
AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases
Research paper exploring the use of argumentation theory in AI, examining optimal methodologies for the decision-making of an AI agent.
Official implementation of 'Bootstrap Wasserstein Alignment for Stable Feature Attribution in Low-Data Regimes'
🧠 Measure human understanding and cognitive stability with HCMS, an AI-driven framework for evaluating confidence and reasoning consistency.
Master's Thesis in Applied Computer Science (110/110 Summa Cum Laude). A pipeline combining counterfactual feature importance (BoCSoR) and hierarchical Association Rule Mining for feature-driven bias detection in tabular classifiers. Validated on the 2024 ACS census microdata.