The University of Texas at Austin
Austin, TX, USA
zhenyu.gallery
@KyriectionZhang
Lists (8)
🦾 Benchmarking: LLM Hospital
💎 Efficient ML: Prune & Sparse & Quantization & KD & NAS
🤖 General Topics: Architectures & Optimization & BlockChain & SSL & Speech & Recsys
💍 Large Language Models: Next Step of LLMs
🚀 My Stack: Open-source of Our Works
💁 Quantum ML: ML for Quantum & Quantum for ML
🗼 Toolbox: Visualization & Coding Tool
🚩 Trustworthy ML: OoD & Adversarial & Backdoor

Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Models and examples built with TensorFlow
Making large AI models cheaper, faster and more accessible
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code and documentation to train Stanford's Alpaca models, and generate the data.
A high-throughput and memory-efficient inference and serving engine for LLMs
Pretrain and finetune ANY AI model of ANY size on multiple GPUs or TPUs with zero code changes.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Universal LLM Deployment Engine with ML Compilation
Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by the OpenAI Solutions team.
Fast and memory-efficient exact attention
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
[NeurIPS 2024] SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challen…
Ongoing research training transformer models at scale
The official GitHub page for the survey paper "A Survey of Large Language Models".
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
Running large language models on a single GPU for throughput-oriented scenarios.
The Triton Inference Server provides an optimized cloud and edge inference solution.
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
A PyTorch implementation of EfficientNet