Stars
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
Fast Python Collaborative Filtering for Implicit Feedback Datasets (usage sketch after this list)
🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper (usage sketch after this list). Don't be shy, join here: https://discord.gg/jP8KfhDhyN
Proof-of-concept of Cursor's Instant Apply feature
MLX-native implementations of state-of-the-art generative image models
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
A simple library for creating beautiful interactive prompts.
For running inference on and serving local LLMs with the MLX framework (usage sketch below)
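The collaborative-filtering entry above most likely refers to the `implicit` Python package. A minimal sketch, assuming its current (0.5+) ALS API and using a toy interaction matrix rather than anything from that repo:

```python
import numpy as np
import scipy.sparse as sparse
from implicit.als import AlternatingLeastSquares

# Toy implicit-feedback matrix: rows are users, columns are items,
# values are interaction counts (plays, clicks, purchases, ...).
user_items = sparse.csr_matrix(np.array([
    [4, 0, 1, 0],
    [0, 2, 3, 0],
    [5, 1, 0, 2],
], dtype=np.float32))

# Factorize the matrix with Alternating Least Squares.
model = AlternatingLeastSquares(factors=16, regularization=0.01, iterations=15)
model.fit(user_items)

# Top-2 item recommendations for user 0, excluding items already interacted with.
ids, scores = model.recommend(0, user_items[0], N=2, filter_already_liked_items=True)
print(ids, scores)
```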
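For Crawl4AI, a minimal single-page crawl along the lines of the project's quick start; the target URL is only a placeholder:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # Crawl one page and get LLM-friendly markdown back.
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com")
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```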
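The MLX serving entry does not name its repo, so the sketch below falls back to the widely used `mlx-lm` package for local inference on Apple Silicon; the model identifier is an example, not something specified above:

```python
from mlx_lm import load, generate

# Download (or load from cache) a quantized model from the Hugging Face hub.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Run a single generation locally via MLX.
response = generate(model, tokenizer,
                    prompt="Explain continuous batching in one sentence.",
                    max_tokens=100)
print(response)
```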