Can Conformal Prediction Obtain Meaningful Safety Guarantees for ML Models?

16 Mar 2023 (modified: 31 May 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Conformal Prediction, Deep Neural Nets, Safety-Integrity Levels
TL;DR: We inspect how the Conformal Prediction set sizes of SOTA models behave as the required coverage level goes beyond 90%.
Abstract: Conformal Prediction (CP) has recently been proposed as a methodology to calibrate the predictions of Machine Learning (ML) models so that they output rigorous quantifications of their uncertainty. For example, one can calibrate the predictions of an ML model into prediction sets that are guaranteed to cover the ground-truth class with probability larger than a specified threshold. In this paper, we study whether CP can provide the strong statistical guarantees that would be required in safety-critical applications. Our evaluation on ImageNet demonstrates that using CP over state-of-the-art models fails to deliver the required guarantees. We corroborate our results by deriving a simple connection between CP prediction sets and top-k accuracy.
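To make the coverage claim concrete, here is a minimal sketch of split conformal prediction for classification, assuming access to held-out softmax probabilities and labels; the variable names (cal_probs, cal_labels, test_probs) and the particular nonconformity score are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
        """Compute the conformal quantile for target coverage 1 - alpha.

        cal_probs: (n, K) softmax probabilities on a held-out calibration set.
        cal_labels: (n,) integer ground-truth labels.
        """
        n = len(cal_labels)
        # Nonconformity score: one minus the softmax mass on the true class.
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Finite-sample-corrected quantile level (clipped to 1 for small n).
        q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        return np.quantile(scores, q_level, method="higher")

    def predict_set(test_probs, q_hat):
        """Return a boolean mask (m, K): True where a class enters the prediction set."""
        return (1.0 - test_probs) <= q_hat

With alpha = 0.1 the resulting sets cover the true class with probability at least 90%; the question studied in the paper is how large these sets become once alpha is pushed toward the much smaller failure rates demanded by safety-integrity levels.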