
llumen Logo

Llumen

πŸš€ The antidote to bloated AI interfaces.

A lightweight, performant chat application for the rest of us.

License: MPL 2.0

Technology Stack

πŸ’‘ Why we built llumen

The Problem: The "Self-Hosted" Tradeoff — Powerful but Complex

If you have ever tried to self-host an LLM interface on a modest device, you know the struggle:

  1. The Bloat: Python-based containers that eat gigabytes of RAM just to idle.
  2. The Lag: Waiting 30+ seconds for a server to boot and another minute to load chat history.
  3. The Config Hell: Spending hours wrestling with pipelines just to get a simple feature like "Title Generation" to work reliably.

The Solution: Simplicity by Engineering

We refused to accept that "powerful" means "heavy." We built llumen to fill the gap between commercial products (easy to set up, but no privacy) and power-user tools (private, but heavy and complex).

| Feature         | Typical "Power User" UI     | llumen      |
|-----------------|-----------------------------|-------------|
| Asset Footprint | Huge (GBs)                  | Tiny (12 MB) |
| RAM Usage       | High (nightmare to debug)   | < 128 MB    |
| Setup Time      | Hours of config             | Zero-config |

✨ Features

Don't let the size fool you. Llumen is lightweight in resources, but heavy on capability.

  • 🔌 OpenAI Compatible: Works with OpenRouter, local models, or any OpenAI-compatible server.
  • πŸš€ Blazing Fast: Sub-second cold starts. No more waiting.
  • 🧠 Smart & Deep: Built-in "Deep Research" capabilities and web-search integration.
  • 🎨 Rich Media: Handles PDF uploads, image generation, and renders complex LaTeX/Code.
  • 🤝 Run Anywhere: Windows, Linux, Docker, and fully optimized for ARM64 (yes, it flies on a Raspberry Pi).

Demo video: video.mp4

⚑ Quickstart (The Proof)

Prove the speed yourself. If you have Docker, you are 30 seconds away from chatting.

Important

Default Credentials:

  • User: admin
  • Pass: P@88w0rd

🐳 Docker (Recommended)

Our multi-stage build produces a tiny, efficient container.

docker run -it --rm \
  -e API_KEY="<YOUR_OPENROUTER_API_KEY>" \
  -p 80:80 \
  -v "$(pwd)/data:/data" \
  ghcr.io/pinkfuwa/llumen:latest

That's it. No pipelines to configure. No dependencies to install.

See ./docs/sample for docker-compose examples.
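As a sketch of what such a compose file might look like, mirroring the docker run flags above (the service name, volume path, and restart policy here are illustrative assumptions, not copied from ./docs/sample):

```yaml
services:
  llumen:
    image: ghcr.io/pinkfuwa/llumen:latest
    ports:
      - "80:80"            # same port mapping as the docker run example
    environment:
      API_KEY: "<YOUR_OPENROUTER_API_KEY>"
    volumes:
      - ./data:/data       # persist the SQLite database across restarts
    restart: unless-stopped
```

Bind-mounting ./data is what keeps chat history and settings across container upgrades, since the default DATABASE_URL lives under /data.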

πŸ“¦ Other Methods

Prefer a binary? We support that too. Check the Releases for Windows and Linux binaries.

πŸ”‘ Configuration (Optional)

It works out of the box, but if you want to tweak it:

  • API_KEY (required) — Your OpenRouter/provider key.
  • OPENAI_API_BASE — Custom endpoint (default: https://openrouter.ai/api).
  • DATABASE_URL — SQLite path (default: sqlite://data/db.sqlite?mode=rwc).
  • BIND_ADDR — Network interface (default: 0.0.0.0:80).
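
For example, running a native binary against a self-hosted OpenAI-compatible server instead of OpenRouter might look like this (the local endpoint, port choices, and binary name are illustrative assumptions):

```shell
# Point llumen at a local OpenAI-compatible endpoint instead of OpenRouter
export API_KEY="sk-local-anything"                 # many local servers accept any key
export OPENAI_API_BASE="http://localhost:8080/v1"  # e.g. a local inference server
export DATABASE_URL="sqlite://data/db.sqlite?mode=rwc"
export BIND_ADDR="127.0.0.1:3000"                  # serve the UI on localhost:3000
./llumen
```

Note that BIND_ADDR controls where llumen itself listens, while OPENAI_API_BASE controls which upstream model server it talks to, so the two must use different ports on the same machine.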

πŸ“– Documentation

  • User Guide: ./docs/user/README.md - Full features and usage.
  • For Developers:
    • Build from source: ./docs/chore/README.md
    • Architecture docs: ./docs/dev/README.md
Built with ❀️ by pinkfuwa. Keep it simple, keep it fast.