AI Safety and Alignment Group
aisa-group
AI Safety and Alignment Group at the ELLIS Institute Tübingen and Max Planck Institute for Intelligent Systems
Germany
Princeton Language and Intelligence (PLI)
princeton-pli
Princeton Language and Intelligence (PLI). Please also check out https://github.com/princeton-nlp for some of our legacy projects.
United States of America
Long Phan
justinphan3110
Research Engineer @ Center for AI Safety (CAIS). See my latest code at @justinphan3110cais
@centerforaisafety USA | Vietnam
Yiming Li
THUYimingLi
Research Fellow, NTU
Previously Research Professor, ZJU
Working on Trustworthy ML/GenAI
Ziming Liu
KindXiaoming
A PhD student at MIT, working at the intersection of physics and intelligence.
Massachusetts Institute of Technology MA, USA
FAR.AI
AlignmentResearch
Frontier alignment research to ensure the safe development and deployment of advanced AI systems.
Wanru Zhao
Ryan0v0
Do not go gentle into that good night🪐🧗
Cambridge Machine Learning Systems Lab & Vector Institute. Prev: AWS AI Lab, Microsoft Research
University of Cambridge
Michael Chen
ML-Chen
Member of Policy Staff at METR
ex-Software Engineer at Stripe
Georgia Tech
Stripe San Francisco
Cody Wild
decodyng
Research Engineer @ CHAI
Writer of code, explainer of ideas, wrangler of cats
Center for Human-Compatible AI United States
Center for Human-Compatible AI
HumanCompatibleAI
CHAI seeks to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
Erik Jenner
ejnnr
Research Scientist at Google DeepMind, working on AGI safety & alignment
Google DeepMind
Yunhao Liu
EE-LiuYunhao
Rolling on the keyboard. Floating with the mouse.
University of California, Berkeley California
Zhijing Jin
zhijing-jin
PhD in NLP & Causality. Affiliated with the Max Planck Institute, Germany, ETH Zurich & UMich. Supervised by Bernhard Schölkopf, Rada Mihalcea, and Mrinmaya Sachan.
Max Planck Institute & ETH Zurich