🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
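This entry matches the Petals project. Below is a minimal sketch of distributed generation with it, assuming the petals package is installed and the public swarm is reachable; the model name is an illustrative assumption.

```python
# Hedged sketch: generate text over a distributed Petals swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A quick test prompt", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```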
Large Language Model Text Generation Inference
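A running text-generation-inference server exposes a REST endpoint for generation. The sketch below queries it with the standard Python requests library; the host and port are assumptions about your deployment.

```python
# Hedged sketch: call a locally running text-generation-inference server.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate",  # assumed local endpoint
    json={"inputs": "What is Falcon?", "parameters": {"max_new_tokens": 32}},
    timeout=60,
)
print(resp.json()["generated_text"])
```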
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
An open-source and enterprise-level monitoring system.
Embrace the APIs of the future. Hug aims to make developing APIs as simple as possible, but no simpler.
Free, open-source SQL client for Windows and Mac 🦅
A familiar HTTP Service Framework for Python.
A high-performance web server for Ruby, supporting HTTP/1, HTTP/2 and TLS.
LLM Finetuning with peft
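A minimal LoRA setup with Hugging Face peft looks roughly like the sketch below; the base model and hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Hedged sketch: attach LoRA adapters to a causal LM with peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")  # assumed base model
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```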
A friendly library for parsing HTTP request arguments, with built-in support for popular web frameworks, including Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.
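The entry above matches the webargs library's description. A short sketch with Flask (one of the frameworks it names), assuming webargs is installed; the route and field are illustrative.

```python
# Hedged sketch: parse and validate query arguments with webargs + Flask.
from flask import Flask
from webargs import fields
from webargs.flaskparser import use_args

app = Flask(__name__)

@app.route("/greet")
@use_args({"name": fields.Str(required=True)}, location="query")
def greet(args):
    # webargs validates and parses ?name=... before the view runs
    return f"Hello, {args['name']}!"

if __name__ == "__main__":
    app.run()
```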
🩹Editing large language models within 10 seconds⚡
🦙 Integrating LLMs into structured NLP pipelines
The simplest way to serve AI/ML models in production
🤖 A PyTorch library of curated Transformer models and their composable components
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Take Android screenshots with Falcon's bright eye!