ShanghaiTech University, Shanghai
Starred repositories
🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (see the block-sparse attention sketch after this list)
Central repository for biomolecular foundation models with shared trainers and pipeline components
PyTorch implementation of JiT (https://arxiv.org/abs/2511.13720)
Official repository of "ODesign: A World Model for Biomolecular Interaction Design"
[CVPR 2025] Diff2Flow: Training Flow Matching Models via Diffusion Model Alignment (see the flow-matching sketch after this list)
Compress and Attend Transformers (CATs) 😸
flex-block-attn: an efficient block sparse attention computation library
Codes for our paper "UniMoMo: Unified Generative Modeling of 3D Molecules for De Novo Binder Design" (ICML 2025)
Source code for RNA-FrameFlow: SE(3) Flow Matching for 3D RNA Backbone Design
Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Official code release for Ambient Protein Diffusion
微舆: a multi-agent public-opinion analysis assistant for everyone. It breaks information cocoons, reconstructs the full picture of public sentiment, predicts future trends, and supports decision-making. Implemented from scratch, with no dependence on any framework.
A partially latent flow matching model for the joint generation of a protein’s amino acid sequence and full atomistic structure, including both the backbone and side chains.
Implementation of paper "Uni-3DAR: Unified 3D Generation and Understanding via Autoregression on Compressed Spatial Tokens"
Utilities intended for use with Llama models.
Official implementation of All Atom Diffusion Transformers (ICML 2025)
A generalized computational framework for biomolecular modeling.
Efficient Triton implementation of Native Sparse Attention.
Aligning protein generative models with experimental fitness
gpt-oss-120b and gpt-oss-20b are two open-weight language models released by OpenAI.
Trainable fast and memory-efficient sparse attention
Implementation of the dynamic chunking mechanism in H-Net by Hwang et al. of Carnegie Mellon (see the chunking sketch after this list).
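
The sparse-attention entries above (Native Sparse Attention, flex-block-attn, the Triton NSA kernels) share one core idea: score keys at block granularity, keep only the highest-scoring blocks per query, and run dense attention inside them. Below is a minimal single-head sketch of that block-selection step; the function name, defaults, and per-query loop are illustrative and are not the API of any repository listed here.

```python
import torch
import torch.nn.functional as F

def topk_block_sparse_attention(q, k, v, block_size=64, k_blocks=4):
    """Toy top-k block-sparse attention for a single head.

    q: (Tq, d); k, v: (Tk, d). Keys/values are grouped into fixed-size
    blocks; each query attends only within the k_blocks blocks whose
    mean key scores highest against it.
    """
    Tq, d = q.shape
    assert k.shape[0] >= block_size, "need at least one full key block"
    n_blocks = k.shape[0] // block_size
    k_blk = k[: n_blocks * block_size].view(n_blocks, block_size, d)
    v_blk = v[: n_blocks * block_size].view(n_blocks, block_size, d)

    # Score each query against the mean key of every block; keep top-k blocks.
    block_scores = q @ k_blk.mean(dim=1).T                       # (Tq, n_blocks)
    top = block_scores.topk(min(k_blocks, n_blocks), dim=-1).indices

    out = torch.empty_like(q)
    scale = d ** -0.5
    for i in range(Tq):
        ks = k_blk[top[i]].reshape(-1, d)        # gather selected key blocks
        vs = v_blk[top[i]].reshape(-1, d)        # gather matching value blocks
        attn = F.softmax((q[i] @ ks.T) * scale, dim=-1)
        out[i] = attn @ vs                       # dense attention inside selection
    return out
```

The real libraries fuse the gather and the inner attention into a single Triton kernel; the loop here only makes the selection logic explicit.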
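Several entries (Diff2Flow, RNA-FrameFlow, UniMoMo, the partially latent protein model) train generators with flow matching. The shared training step regresses a velocity field along a straight path between noise and data. A generic sketch follows, where `model(x_t, t)` predicting a velocity is an assumed interface, not any repository's actual signature.

```python
import torch

def flow_matching_loss(model, x1):
    """One conditional flow-matching training step (linear path).

    x1: a batch of data samples, shape (B, ...). The path is
    x_t = (1 - t) * x0 + t * x1 with x0 ~ N(0, I); the regression
    target is the constant velocity x1 - x0 along that path.
    """
    x0 = torch.randn_like(x1)                                  # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)),
                   device=x1.device)                           # per-sample time
    xt = (1 - t) * x0 + t * x1                                 # point on the path
    v_target = x1 - x0                                         # target velocity
    return ((model(xt, t) - v_target) ** 2).mean()
```

Sampling then integrates the learned velocity field from noise at t=0 to data at t=1 with any ODE solver; the SE(3) and latent variants above change the space the path lives in, not this objective.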
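For the H-Net entry, dynamic chunking places segment boundaries where adjacent hidden states disagree, so chunk lengths adapt to the content instead of being fixed. The sketch below uses a similarity-based router; the projections `w_q`, `w_k` and the fixed threshold are assumptions for illustration, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def chunk_boundaries(h, w_q, w_k, threshold=0.5):
    """Similarity-based dynamic chunking over a sequence of hidden states.

    h: (T, d) hidden states; w_q, w_k: (d, d) projection matrices.
    A boundary is placed wherever a position's projected state has low
    cosine similarity with its predecessor's.
    """
    q = h @ w_q                                            # (T, d)
    k = h @ w_k                                            # (T, d)
    sim = F.cosine_similarity(q[1:], k[:-1], dim=-1)       # (T-1,) adjacent pairs
    p_boundary = 0.5 * (1.0 - sim)                         # low similarity -> boundary
    first = torch.ones(1, dtype=torch.bool, device=h.device)
    return torch.cat([first, p_boundary > threshold])      # position 0 opens a chunk
```

In the full model these boundary decisions gate which positions pass to the coarser level of the hierarchy, and the soft probabilities keep the routing differentiable during training.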