Google DeepMind · Singapore · https://jiatong-han.me

Stars
An SE(3)-invariant autoencoder for generating the periodic structure of materials [ICLR 2022]
Code for the CliqueFlowmer model for optimal computational materials discovery
An RL framework for crystal structure generation using GRPO
Official PyTorch implementation of "Learning a Generative Meta-Model of LLM Activations"
Anthropic's original performance take-home, now open for you to try!
A text-guided diffusion model for crystal structure generation
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
P2P Docker registry capable of distributing TBs of data in seconds
Machine Learning Engineering Open Book
Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
A simple web app for NUS students to track courses taken and check their GPA and Honours classification.
Designing a Dashboard for Transparency and Control of Conversational AI, https://arxiv.org/abs/2406.07882
(NeurIPS 2024 Oral 🔥) Improved Distribution Matching Distillation for Fast Image Synthesis
Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models".
This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023).
A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
Tools for understanding how transformer predictions are built layer-by-layer
Sparsify transformers with SAEs and transcoders
Code for our paper "Attending to Graph Transformers"
Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research).
Training Sparse Autoencoders on Language Models