- University of Toronto
- Toronto
- (UTC -05:00)
- https://rexxxx1234.github.io/
- @RexMa9
- in/rex-ma-20a455113
Stars
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
🚀 Your next Python package needs a bleeding-edge project structure.
Research on Tabular Deep Learning: Papers & Packages
Fast & Simple repository for pre-training and fine-tuning T5-style models
Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
A Framework of Small-scale Large Multimodal Models
Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning
Codebase for Merging Language Models (ICML 2024)
Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling"
Implementation of MeshGPT, SOTA mesh generation using attention, in PyTorch
Implementation of AlphaFold 3 from the paper "Accurate structure prediction of biomolecular interactions with AlphaFold3" in PyTorch
A generative model for programmable protein design
H-Net: Hierarchical Network with Dynamic Chunking
DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome
BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks
A PyTorch implementation of GraphSAGE.
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
Generation of protein sequences and evolutionary alignments via discrete diffusion models
[ICCV 2023] A latent space for stochastic diffusion models
An Open-Source Package for Deep Learning to Hash (DeepHash)
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch
🧬 Generative modeling of regulatory DNA sequences with diffusion probabilistic models 💨
Python library for distributed AI processing pipelines, using swappable scheduler backends.
An open-source platform for developing protein models beyond AlphaFold.
(NeurIPS 2022) On Embeddings for Numerical Features in Tabular Deep Learning