
Jihye Choi

[CV]
Ph.D. Candidate
Department of Computer Sciences
University of Wisconsin-Madison
Office: CS 7378

 G. Scholar LinkedIn Github Twitter e-Mail

About


I am a Ph.D. candidate in the Department of Computer Sciences at the University of Wisconsin-Madison, where I am advised by Somesh Jha. I also work closely with David Page at Duke University and Atul Prakash at the University of Michigan, Ann Arbor. Prior to this, I obtained my Master's degree from Carnegie Mellon University and my Bachelor's degree from Yonsei University.


My research aims to build trustworthy AI ecosystems in the wild.

🤖 Single-entity: A modern ML model often serves as a core decision-making unit. Understanding its behavior and ensuring it’s reliable is the first step toward creating genuinely trustworthy AI.

🤖↔️🤖 Multi-entity: But real-world AI involves more than a single model. Large Language Models (LLMs) may incorporate external knowledge bases or tools via function calling, or even collaborate with other LLMs for advanced reasoning and mutual verification (e.g., multi-agent orchestration). Alternatively, multiple parties might exchange model updates trained on private data to produce a global model (e.g., federated learning).


With these settings in mind, I study various aspects of trustworthiness to ensure that decisions made by AI systems are truly reliable and deployable in the open world. These aspects can be grouped into a few key areas:

Robustness: Decisions should not be disrupted by changes in the input distribution. Ideally, systems should remain robust against shifts in the input domain, whether caused by adversaries or arising naturally.

Explainability: Humans do not use systems they can’t trust, and explanations are integral to building and maintaining that trust. Understanding when and why the system fails is an essential step toward identifying its failure modes and providing actionable guidelines to improve it.

Privacy & Security: Along with the evolving landscape of decision-making pipelines and advanced models, new security and privacy threats emerge. One may ask whether adversaries can jailbreak LLM guardrails to generate harmful outputs. One may wonder how inadvertently memorized training data can be exploited by adversaries for privacy leakage. One may question the privacy guarantees for users' local data in federated learning, especially when malicious parties are involved.



Updates


Dec 24: The homepage is under renovation. Please stay tuned...
Nov 24: I will be serving as a PC member for the DLSP Workshop at IEEE S&P'25.
Aug 24: I will be presenting our work on the orchestration of LLM-powered agents for adverse drug event extraction at MLHC'24 - see you in Toronto! This work is a collaboration with the School of Medicine at Duke University and Langroid.