Thomas Fel
2024
- [j1] Léo Andéol, Thomas Fel, Florence de Grancey, Luca Mossina: Conformal prediction for trustworthy detection of railway signals. AI Ethics 4(1): 157-161 (2024)
- [c17] Katherine L. Hermann, Hossein Mobahi, Thomas Fel, Michael Curtis Mozer: On the Foundations of Shortcut Learning. ICLR 2024
- [c16] Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin VanRullen, Thomas Serre: Saliency strikes back: How filtering out high frequencies improves white-box explanations. ICML 2024
- [c15] Agustin Picard, Lucas Hervier, Thomas Fel, David Vigouroux: Influenciæ: A Library for Tracing the Influence Back to the Data-Points. xAI (4) 2024: 193-204
- [i25] Chris J. Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez: Feature Accentuation: Revealing 'What' Features Respond to in Natural Images. CoRR abs/2402.10039 (2024)
- [i24] Victor Boutin, Rishav Mukherji, Aditya Agrawal, Sabine Muzellec, Thomas Fel, Thomas Serre, Rufin VanRullen: Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks. CoRR abs/2406.06079 (2024)
- [i23] Thomas Fel, Louis Béthune, Andrew Kyle Lampinen, Thomas Serre, Katherine L. Hermann: Understanding Visual Feature Reliance through the Lens of Complexity. CoRR abs/2407.06076 (2024)
- [i22] Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh: One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability. CoRR abs/2410.01482 (2024)
- [i21] Mazda Moayeri, Vidhisha Balachandran, Varun Chandrasekaran, Safoora Yousefi, Thomas Fel, Soheil Feizi, Besmira Nushi, Neel Joshi, Vibhav Vineet: Unearthing Skill-Level Insights for Understanding Trade-Offs of Foundation Models. CoRR abs/2410.13826 (2024)
- [i20] Julien Colin, Lore Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver: Local vs distributed representations: What is the right basis for interpretability? CoRR abs/2411.03993 (2024)

2023
- [c14] Fanny Jourdan, Agustin Martin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher: COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP. ACL (Findings) 2023: 5120-5136
- [c13] Léo Andéol, Thomas Fel, Florence de Grancey, Luca Mossina: Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling. COPA 2023: 36-55
- [c12] Thomas Fel, Agustin Martin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre: CRAFT: Concept Recursive Activation FacTorization for Explainability. CVPR 2023: 2711-2721
- [c11] Thomas Fel, Melanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, Claire Nicodème, Thomas Serre: Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis. CVPR 2023: 16153-16163
- [c10] Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre: Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines? ICML 2023: 2953-3002
- [c9] Thomas Fel, Victor Boutin, Louis Béthune, Rémi Cadène, Mazda Moayeri, Léo Andéol, Mathieu Chalvidal, Thomas Serre: A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation. NeurIPS 2023
- [c8] Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Lore Goetschalckx, Laurent Gardes, Thomas Serre: Unlocking Feature Visualization for Deep Network with MAgnitude Constrained Optimization. NeurIPS 2023
- [c7] Drew Linsley, Ivan F. Rodriguez Rodriguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret S. Livingstone, Thomas Serre: Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. NeurIPS 2023
- [c6] Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin: On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective. NeurIPS 2023
- [i19] Victor Boutin, Thomas Fel, Lakshya Singhal, Rishav Mukherji, Akash Nagaraj, Julien Colin, Thomas Serre: Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines? CoRR abs/2301.11722 (2023)
- [i18] Léo Andéol, Thomas Fel, Florence de Grancey, Luca Mossina: Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling. CoRR abs/2304.06052 (2023)
- [i17] Fanny Jourdan, Agustin Martin Picard, Thomas Fel, Laurent Risser, Jean-Michel Loubes, Nicholas Asher: COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks. CoRR abs/2305.06754 (2023)
- [i16] Drew Linsley, Pinyuan Feng, Thibaut Boissin, Alekh Karkada Ashok, Thomas Fel, Stephanie Olaiya, Thomas Serre: Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception. CoRR abs/2306.03229 (2023)
- [i15] Drew Linsley, Ivan Felipe Rodríguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret S. Livingstone, Thomas Serre: Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. CoRR abs/2306.03779 (2023)
- [i14] Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Martin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre: Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization. CoRR abs/2306.06805 (2023)
- [i13] Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Béthune, Léo Andéol, Mathieu Chalvidal, Thomas Serre: A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation. CoRR abs/2306.07304 (2023)
- [i12] Sabine Muzellec, Léo Andéol, Thomas Fel, Rufin VanRullen, Thomas Serre: Gradient strikes back: How filtering out high frequencies improves explanations. CoRR abs/2307.09591 (2023)
- [i11] Katherine L. Hermann, Hossein Mobahi, Thomas Fel, Michael C. Mozer: On the Foundations of Shortcut Learning. CoRR abs/2310.16228 (2023)

2022
- [c5] Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre: What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods. NeurIPS 2022
- [c4] Thomas Fel, Ivan F. Rodriguez Rodriguez, Drew Linsley, Thomas Serre: Harmonizing the object recognition strategies of deep neural networks with humans. NeurIPS 2022
- [c3] Paul Novello, Thomas Fel, David Vigouroux: Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure. NeurIPS 2022
- [c2] Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre: How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks. WACV 2022: 1565-1575
- [i10] Thomas Fel, Melanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, Claire Nicodeme, Thomas Serre: Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis. CoRR abs/2202.07728 (2022)
- [i9] Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, Rémi Cadène, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Béthune, Agustin Martin Picard, Claire Nicodeme, Laurent Gardes, Grégory Flandin, Thomas Serre: Xplique: A Deep Learning Explainability Toolbox. CoRR abs/2206.04394 (2022)
- [i8] Paul Novello, Thomas Fel, David Vigouroux: Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure. CoRR abs/2206.06219 (2022)
- [i7] Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin: When adversarial attacks become interpretable counterfactual explanations. CoRR abs/2206.06854 (2022)
- [i6] Mohit Vaishnav, Thomas Fel, Ivan Felipe Rodríguez, Thomas Serre: Conviformers: Convolutionally guided Vision Transformer. CoRR abs/2208.08900 (2022)
- [i5] Thomas Fel, Ivan Felipe Rodríguez, Drew Linsley, Thomas Serre: Harmonizing the object recognition strategies of deep neural networks with humans. CoRR abs/2211.04533 (2022)
- [i4] Thomas Fel, Agustin Martin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre: CRAFT: Concept Recursive Activation FacTorization for Explainability. CoRR abs/2211.10154 (2022)

2021
- [c1] Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis. NeurIPS 2021: 26005-26014
- [i3] Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre: Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis. CoRR abs/2111.04138 (2021)
- [i2] Thomas Fel, Julien Colin, Rémi Cadène, Thomas Serre: What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods. CoRR abs/2112.04417 (2021)

2020
- [i1] Thomas Fel, David Vigouroux: Representativity and Consistency Measures for Deep Neural Network Explanations. CoRR abs/2009.04521 (2020)
last updated on 2025-01-02 18:19 CET by the dblp team
all metadata released as open data under CC0 1.0 license