@configint, KAIST
Seoul, Republic of Korea
https://seominjoon.github.io
@seo_minjoon

Stars
A CLI tool that helps manage training jobs on the SageMaker HyperPod clusters orchestrated by Amazon EKS
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
SGLang is a high-performance serving framework for large language models and multimodal models.
DreamGen: Nvidia GEAR Lab's initiative to solve the robotics data problem using world models
XLeRobot: Practical Dual-Arm Mobile Home Robot for $660
The Amazon S3 Connector for PyTorch delivers high throughput for PyTorch training jobs that access and store data in Amazon S3.
A high-throughput and memory-efficient inference and serving engine for LLMs
Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation"
[EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
The world's largest GitHub repository for LLMs + Robotics
[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation.
Transformer-related optimizations, including BERT and GPT
ImageBind: One Embedding Space to Bind Them All
[CVPR 2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners"
[AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following
Making large AI models cheaper, faster and more accessible
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
[ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Fast and memory-efficient exact attention
Automatically create Faiss kNN indices with optimal similarity search parameters.
An open source implementation of CLIP.
Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022)
[ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch
Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning