Stephen Casper
2020 – today
- 2024
- [c7] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas A. Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. FAccT 2024: 2254-2272
- [i25] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Alexander Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. CoRR abs/2401.14446 (2024)
- [i24] Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu: Rethinking Machine Unlearning for Large Language Models. CoRR abs/2402.08787 (2024)
- [i23] Aengus Lynch, Phillip Guo, Aidan Ewart, Stephen Casper, Dylan Hadfield-Menell: Eight Methods to Evaluate Robust Unlearning in LLMs. CoRR abs/2402.16835 (2024)
- [i22] Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell: Defending Against Unforeseen Failure Modes with Latent Adversarial Training. CoRR abs/2403.05030 (2024)
- [i21] Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell: The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability. CoRR abs/2404.02949 (2024)
- [i20] Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L. Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, José Hernández-Orallo, Lewis Hammond, Eric J. Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Zhang, Ruiqi Zhong, Seán Ó hÉigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Yoshua Bengio, Danqi Chen, Samuel Albanie, Tegan Maharaj, Jakob N. Foerster, Florian Tramèr, He He, Atoosa Kasirzadeh, Yejin Choi, David Krueger: Foundational Challenges in Assuring Alignment and Safety of Large Language Models. CoRR abs/2404.09932 (2024)
- [i19] Anka Reuel, Ben Bucknall, Stephen Casper, Tim Fist, Lisa Soder, Onni Aarne, Lewis Hammond, Lujain Ibrahim, Alan Chan, Peter Wills, Markus Anderljung, Ben Garfinkel, Lennart Heim, Andrew Trask, Gabriel Mukobi, Rylan Schaeffer, Mauricio Baker, Sara Hooker, Irene Solaiman, Alexandra Sasha Luccioni, Nitarshan Rajkumar, Nicolas Moës, Jeffrey Ladish, Neel Guha, Jessica Newman, Yoshua Bengio, Tobin South, Alex Pentland, Sanmi Koyejo, Mykel J. Kochenderfer, Robert Trager: Open Problems in Technical AI Governance. CoRR abs/2407.14981 (2024)
- [i18] Abhay Sheshadri, Aidan Ewart, Phillip Guo, Aengus Lynch, Cindy Wu, Vivek Hebbar, Henry Sleight, Asa Cooper Stickland, Ethan Perez, Dylan Hadfield-Menell, Stephen Casper: Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs. CoRR abs/2407.15549 (2024)
- [i17] Peter Slattery, Alexander K. Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, Neil Thompson: The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. CoRR abs/2408.12622 (2024)
- [i16] Zhonghao He, Jascha Achterberg, Katie Collins, Kevin K. Nejad, Danyal Akarca, Yinzhu Yang, Wes Gurnee, Ilia Sucholutsky, Yuhan Tang, Rebeca Ianov, George Ogden, Chole Li, Kai Sandbrink, Stephen Casper, Anna Ivanova, Grace W. Lindsay: Multilevel Interpretability Of Artificial Neural Networks: Leveraging Framework And Methods From Neuroscience. CoRR abs/2408.12664 (2024)
- 2023
- [j1] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. Trans. Mach. Learn. Res. 2023 (2023)
- [c6] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. SafeAI@AAAI 2023
- [c5] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? EMNLP 2023: 4791-4797
- [c4] Stephen Casper, Tong Bu, Yuxiao Li, Jiawei Li, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell: Red Teaming Deep Neural Networks with Feature Synthesis Tools. NeurIPS 2023
- [c3] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. SaTML 2023: 464-483
- [i15] Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Dylan Hadfield-Menell: Benchmarking Interpretability Tools for Deep Neural Networks. CoRR abs/2302.10894 (2023)
- [i14] Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell: Explore, Establish, Exploit: Red Teaming Language Models from Scratch. CoRR abs/2306.09442 (2023)
- [i13] Stephen Casper, Zifan Guo, Shreya Mogulothu, Zachary Marinov, Chinmay Deshpande, Rui-Jie Yew, Zheng Dai, Dylan Hadfield-Menell: Measuring the Success of Diffusion Models at Imitating Human Artists. CoRR abs/2307.04028 (2023)
- [i12] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. CoRR abs/2307.15217 (2023)
- [i11] Rusheb Shah, Quentin Feuillade-Montixi, Soroush Pour, Arush Tagade, Stephen Casper, Javier Rando: Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation. CoRR abs/2311.03348 (2023)
- [i10] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? CoRR abs/2312.03729 (2023)
- 2022
- [c2] Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman: Robust Feature-Level Adversaries are Interpretability Tools. NeurIPS 2022
- [i9] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. CoRR abs/2207.13243 (2022)
- [i8] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. CoRR abs/2209.02167 (2022)
- [i7] Stephen Casper, Kaivalya Hariharan, Dylan Hadfield-Menell: Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. CoRR abs/2211.10024 (2022)
- 2021
- [c1] Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman: Frivolous Units: Wider Networks Are Not Really That Wide. AAAI 2021: 6921-6929
- [i6] Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, Stuart Russell: Clusterability in Neural Networks. CoRR abs/2103.03386 (2021)
- [i5] Stephen Casper, Max Nadeau, Gabriel Kreiman: One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features. CoRR abs/2110.03605 (2021)
- [i4] Shlomi Hod, Stephen Casper, Daniel Filan, Cody Wild, Andrew Critch, Stuart Russell: Detecting Modularity in Deep Neural Networks. CoRR abs/2110.08058 (2021)
- 2020
- [i3] Abdelrhman Saleh, Tovly Deutsch, Stephen Casper, Yonatan Belinkov, Stuart M. Shieber: Probing Neural Dialog Models for Conversational Understanding. CoRR abs/2006.08331 (2020)
- [i2] Stephen Casper: The Achilles Heel Hypothesis: Pitfalls for AI Systems via Decision Theoretic Adversaries. CoRR abs/2010.05418 (2020)
2010 – 2019
- 2019
- [i1] Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman: Removable and/or Repeated Units Emerge in Overparametrized Deep Neural Networks. CoRR abs/1912.04783 (2019)