AI alignment research & engineering. Anthropic AI Safety Fellow.
- Peking University / Anthropic Fellows
- London (UTC +01:00)
- tianyiqiu.net
Pinned
- PKU-Alignment/ProgressGym: Alignment with a millennium of moral progress. Spotlight @ NeurIPS 2024 Track on Datasets and Benchmarks.
- Prevail-Safe-AI/Modeling-Lock-in: The public repo for "The Lock-in Hypothesis" paper. (C++)
- PKU-Alignment/align-anything: Align Anything: Training All-modality Model with Feedback.
- preprint-and-disparities: Do preprint repositories reduce citation disparity in academic publications? An econometric study. (Python)
- ioi2021-thesis: Statistical Modeling of the Informatics Olympiad Ecosystem (IOI 2021 National Training Team Thesis, Tianyi Qiu).