Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training (a minimal usage sketch follows this list).
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
🦦 Otter, a multi-modal model based on OpenFlamingo (an open-source version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Inference code and configs for the ReplitLM model family
Run evaluation on LLMs using the HumanEval benchmark (see the sketch after this list)
llama.cpp with the BakLLaVA model describes what it sees
Train Large Language Models on MLX.
Shepherd: A foundational framework enabling federated instruction tuning for large language models
Python implementation of the Swirld Byzantine consensus algorithm
Easily deployed and managed private blockchains for enterprise and development use.
Agent0 is the SDK for agentic economies: open discovery and trust for agents
[EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens
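
For the Transformers entry above, here is a minimal inference sketch using the library's high-level `pipeline` API. The model choice (`gpt2`) and the prompt are illustrative assumptions, not part of the original list:

```python
# Minimal sketch: text generation with the Transformers pipeline API.
# Assumes `transformers` and a backend such as PyTorch are installed;
# "gpt2" is an arbitrary small model chosen for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The quick brown fox", max_new_tokens=20)
print(result[0]["generated_text"])
```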
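
Similarly, for the HumanEval entry: the sketch below follows the harness's documented generate-then-score workflow. `my_generate` is a hypothetical stand-in for any model's completion function; only `read_problems`, `write_jsonl`, and the scoring CLI come from the human-eval repo itself:

```python
# Minimal sketch of the HumanEval workflow: read the problems, generate one
# completion per task, write samples to JSONL, then score with the CLI.
from human_eval.data import read_problems, write_jsonl

def my_generate(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real model call here.
    return "    return 0\n"

problems = read_problems()  # task_id -> {"prompt": ..., "test": ..., ...}
samples = [
    dict(task_id=task_id, completion=my_generate(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score afterwards from the shell:
#   evaluate_functional_correctness samples.jsonl
```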