eBPF-powered Linux observability with optional AI incident detection
Linnix monitors every process on your Linux system using eBPF - forks, execs, exits, and basic CPU/memory telemetry. It can run standalone with a built-in rules engine, or you can add AI for natural language insights.
- Monitors process lifecycle events at the kernel level using eBPF
- Low overhead (<1% CPU on my test systems)
- Built-in rules engine catches common issues (fork storms, CPU spikes, runaway processes)
- Optional LLM integration for natural language incident analysis
- Works with any OpenAI-compatible API or local models (llama.cpp, Ollama, vLLM)
Traditional monitoring tools alert you after things break. I wanted something that could spot weird patterns early - like memory allocation rates that look off, or unusual fork behavior - before they cascade into actual outages.
eBPF gives us kernel-level visibility without the overhead of traditional monitoring agents. The LLM part is optional but helps spot patterns that simple threshold alerts miss.
```bash
git clone https://github.com/linnix-os/linnix.git && cd linnix
docker-compose up -d

# Stream live process events
curl -N http://localhost:3000/stream

# Get incident alerts from rules engine
curl http://localhost:3000/insights | jq
```

No AI needed for basic monitoring. Works out of the box.
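The `/stream` endpoint speaks server-sent events. Here's a minimal sketch of a consumer, assuming each `data:` line carries one JSON-encoded process event — the field names (`event`, `pid`, `ppid`) are illustrative, not the actual schema:

```python
import json

def parse_sse(chunk: str):
    """Extract JSON payloads from raw SSE text (lines prefixed with 'data:')."""
    events = []
    for line in chunk.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# Offline demo with a made-up payload; real Linnix events may differ.
sample = 'data: {"event": "fork", "pid": 4242, "ppid": 1}\n\n'
for ev in parse_sse(sample):
    print(ev["event"], ev["pid"])  # → fork 4242
```

For live use, feed it the body of `curl -N http://localhost:3000/stream` or any streaming HTTP client.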
```bash
git clone https://github.com/linnix-os/linnix.git && cd linnix
./setup-llm.sh  # Downloads model (~2GB) and sets everything up

# Then open: http://localhost:8080
```

This downloads a small model (linnix-3b, distilled for CPU inference) and starts the full stack.
```
┌──────────────────────────────────────────────────────────────┐
│                     Kernel Space (eBPF)                      │
├──────────────────────────────────────────────────────────────┤
│    fork hook → exec hook → exit hook → CPU/mem sampling      │
└────────────────────────┬─────────────────────────────────────┘
                         │ Perf buffers
                         ▼
┌──────────────────────────────────────────────────────────────┐
│                    User Space (cognitod)                     │
├──────────────────────────────────────────────────────────────┤
│  • Event processing      • Process tree tracking             │
│  • State management      • Rules engine                      │
│  • HTTP/SSE API          • Prometheus metrics                │
└────────────────────────┬─────────────────────────────────────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
         ▼               ▼               ▼
    ┌─────────┐    ┌──────────┐   ┌─────────────┐
    │   CLI   │    │ Reasoner │   │ Prometheus  │
    │ Stream  │    │   (AI)   │   │   Grafana   │
    └─────────┘    └──────────┘   └─────────────┘
```
- Memory leaks
- Fork storms (cron job gone wrong)
- File descriptor exhaustion (before the service crashes)
- CPU thrashing (process stuck in a loop)
Not magic - just pattern matching on eBPF events. The AI part helps explain what's happening in plain language.
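The idea behind a rule like fork-storm detection can be illustrated with a toy sliding-window check. This is a sketch only — the threshold, window size, and the actual logic inside Linnix's rules engine are assumptions for the example:

```python
from collections import deque

class ForkStormRule:
    """Flag a fork storm when more than `threshold` forks land inside `window_secs`."""

    def __init__(self, threshold: int = 100, window_secs: float = 1.0):
        self.threshold = threshold
        self.window_secs = window_secs
        self.times = deque()  # timestamps of recent fork events

    def on_fork(self, ts: float) -> bool:
        self.times.append(ts)
        # Evict forks that fell out of the window.
        while self.times and ts - self.times[0] > self.window_secs:
            self.times.popleft()
        return len(self.times) > self.threshold

# 10 forks in under a second against a threshold of 5 trips the rule.
rule = ForkStormRule(threshold=5, window_secs=1.0)
alerts = [rule.on_fork(t / 10) for t in range(10)]
print(any(alerts))  # → True
```

A sliding window is preferable to a simple counter here because a slow, steady fork rate never accumulates enough events to alert, while a genuine burst does.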
- Linux kernel 5.8+ with BTF support
- Privileged container or root access (needed for eBPF)
- ~100MB RAM for the daemon
- Optional: ~2GB for the AI model
Cognitod runs on port 3000:
- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
- `GET /processes` - All live processes
- `GET /graph/:pid` - Process ancestry graph
- `GET /stream` - Server-sent events (real-time)
- `GET /insights` - AI insights (if enabled)
- `GET /alerts` - Active alerts from rules engine
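A quick sketch of working with `/processes` from Python, using only the standard library. The response shape (`pid`, `comm`, `cpu` fields) is an assumption for illustration — check the actual payload before relying on it:

```python
import json
import urllib.request

BASE = "http://localhost:3000"

def fetch_processes():
    """Fetch live processes from cognitod (requires a running daemon)."""
    with urllib.request.urlopen(f"{BASE}/processes") as resp:
        return json.loads(resp.read())

def top_cpu(processes, n=3):
    """Return the n entries with the highest 'cpu' value (field name assumed)."""
    return sorted(processes, key=lambda p: p.get("cpu", 0.0), reverse=True)[:n]

# Offline demo with made-up data; real payloads come from fetch_processes().
sample = [{"pid": 1, "comm": "init", "cpu": 0.1},
          {"pid": 7, "comm": "miner", "cpu": 97.2}]
print(top_cpu(sample, 1)[0]["comm"])  # → miner
```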
Works with any OpenAI-compatible endpoint:
```bash
# Use included model
./setup-llm.sh

# Or bring your own
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="qwen2.5-7b"
linnix-reasoner --insights
```

Supports: llama.cpp, Ollama, vLLM, OpenAI, Anthropic, or anything with an OpenAI-compatible API.
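Any backend that accepts the OpenAI chat-completions request shape will work. Here's a sketch of the kind of payload such an analysis step might send — the system prompt, temperature, and alert text are illustrative, not what linnix-reasoner actually sends:

```python
import json

def build_analysis_request(model: str, alert_summary: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You explain Linux process anomalies in plain language."},
            {"role": "user", "content": alert_summary},
        ],
        "temperature": 0.2,  # low temperature for factual, repeatable analysis
    }

payload = build_analysis_request("qwen2.5-7b", "fork rate spiked 40x on pid 4242")
body = json.dumps(payload)  # POST this to $LLM_ENDPOINT with Content-Type: application/json
```

Because every listed backend (llama.cpp, Ollama, vLLM, OpenAI) accepts this shape, swapping models is just a matter of changing `LLM_ENDPOINT` and `LLM_MODEL`.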
Create /etc/linnix/linnix.toml:
```toml
[runtime]
offline = false  # Set true to disable external calls

[telemetry]
sample_interval_ms = 1000  # How often to sample CPU/memory

[rules]
enabled = true
config_path = "/etc/linnix/rules.yaml"

[api]
bind_address = "127.0.0.1:3000"

[llm]
endpoint = "http://localhost:8090/v1/chat/completions"
model = "qwen2.5-7b"
timeout_secs = 120
```

What works well:
- eBPF monitoring (stable, low overhead)
- Rules engine (catches common issues)
- Prometheus export
- Docker/Kubernetes monitoring
What's rough:
- AI detection is hit-or-miss (depends heavily on the model)
- No web UI yet (just APIs and CLI)
- Limited documentation
- Haven't tested beyond my own setups
What's missing:
- Multi-node management UI
- Better alerting integrations (working on PagerDuty, Slack)
- Historical data storage (currently in-memory only)
- More sophisticated correlation
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and build
git clone https://github.com/linnix-os/linnix.git
cd linnix
cargo xtask build-ebpf
cargo build --release

# Install binaries
sudo cp target/release/cognitod /usr/local/bin/
sudo cp target/release/linnix-cli /usr/local/bin/
sudo cp target/release/linnix-reasoner /usr/local/bin/
```

Contributions welcome! The code is mostly in Rust using the Aya framework for eBPF.
See CONTRIBUTING.md for guidelines.
Apache License 2.0 - see LICENSE
eBPF programs are dual-licensed under GPL-2.0 OR MIT (eBPF requirement).
Built with:
GitHub: github.com/linnix-os/linnix
Twitter: @parth21shah