
Linnix


[Screenshot: Linnix detecting a fork storm]

eBPF-powered Linux observability with optional AI incident detection

Linnix monitors every process on your Linux system using eBPF: forks, execs, exits, and basic CPU/memory telemetry. It can run standalone with its built-in rules engine, or you can add an LLM for natural language insights.

What it does

  • Monitors process lifecycle events at the kernel level using eBPF
  • Low overhead (<1% CPU on my test systems)
  • Built-in rules engine catches common issues (fork storms, CPU spikes, runaway processes)
  • Optional LLM integration for natural language incident analysis
  • Works with any OpenAI-compatible API or local models (llama.cpp, Ollama, vLLM)

Why I built this

Traditional monitoring tools alert you after things break. I wanted something that could spot weird patterns early, like memory allocation rates that look off or unusual fork behavior, before they cascade into actual outages.

eBPF gives us kernel-level visibility without the overhead of traditional monitoring agents. The LLM part is optional but helps spot patterns that simple threshold alerts miss.

Quick Start

Option 1: Docker (Simplest)

git clone https://github.com/linnix-os/linnix.git && cd linnix
docker-compose up -d

# Stream live process events
curl -N http://localhost:3000/stream

# Get incident alerts from rules engine
curl http://localhost:3000/insights | jq

No AI needed for basic monitoring. Works out of the box.

Option 2: With AI (Optional)

git clone https://github.com/linnix-os/linnix.git && cd linnix
./setup-llm.sh  # Downloads model (~2GB) and sets everything up

# Then open: http://localhost:8080

This downloads a small model (linnix-3b, distilled for CPU inference) and starts the full stack.

Architecture

┌──────────────────────────────────────────────────────────────┐
│                    Kernel Space (eBPF)                       │
├──────────────────────────────────────────────────────────────┤
│  fork hook  →  exec hook  →  exit hook  →  CPU/mem sampling  │
└────────────────────────┬─────────────────────────────────────┘
                         │ Perf buffers
                         ▼
┌──────────────────────────────────────────────────────────────┐
│                   User Space (cognitod)                      │
├──────────────────────────────────────────────────────────────┤
│  • Event processing    • Process tree tracking               │
│  • State management    • Rules engine                        │
│  • HTTP/SSE API        • Prometheus metrics                  │
└────────────────────────┬─────────────────────────────────────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
         ▼               ▼               ▼
   ┌─────────┐    ┌──────────┐   ┌─────────────┐
   │ CLI     │    │ Reasoner │   │ Prometheus  │
   │ Stream  │    │ (AI)     │   │ Grafana     │
   └─────────┘    └──────────┘   └─────────────┘
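In this picture, Prometheus simply scrapes cognitod's /metrics endpoint like any other target. A minimal scrape job (standard Prometheus configuration; the target assumes cognitod is on its default port 3000 on the same host) looks like:

```yaml
# prometheus.yml fragment: scrape cognitod's metrics
scrape_configs:
  - job_name: "linnix"
    static_configs:
      - targets: ["localhost:3000"]
    # metrics_path defaults to /metrics, which is what cognitod serves
```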

What it catches

  • Memory leaks
  • Fork storms (cron job gone wrong)
  • File descriptor exhaustion (before the service crashes)
  • CPU thrashing (process stuck in a loop)

This isn't magic, just pattern matching on eBPF events. The AI part helps explain what's happening in plain language.

Requirements

  • Linux kernel 5.8+ with BTF support
  • Privileged container or root access (needed for eBPF)
  • ~100MB RAM for the daemon
  • Optional: ~2GB for the AI model if you want that
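A quick way to check whether your machine meets the kernel requirements (a sketch; /sys/kernel/btf/vmlinux is the standard location for BTF data, but your distribution's kernel must be built with CONFIG_DEBUG_INFO_BTF=y for it to exist):

```shell
# Kernel version: needs to be 5.8 or newer
uname -r

# BTF support: this file exists only when the kernel ships type info
if [ -f /sys/kernel/btf/vmlinux ]; then
  echo "BTF available"
else
  echo "BTF missing"
fi
```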

API Endpoints

Cognitod runs on port 3000:

  • GET /health - Health check
  • GET /metrics - Prometheus metrics
  • GET /processes - All live processes
  • GET /graph/:pid - Process ancestry graph
  • GET /stream - Server-sent events (real-time)
  • GET /insights - AI insights (if enabled)
  • GET /alerts - Active alerts from rules engine
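For example, with cognitod running locally (the PID in the graph call is illustrative, and jq is assumed to be installed for pretty-printing):

```shell
# List all live processes
curl -s http://localhost:3000/processes | jq

# Ancestry graph for a specific PID (1234 here is a placeholder)
curl -s http://localhost:3000/graph/1234 | jq

# Follow real-time events over SSE; -N disables output buffering
curl -N http://localhost:3000/stream

# Active alerts from the rules engine
curl -s http://localhost:3000/alerts | jq
```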

LLM Integration (Optional)

Works with any OpenAI-compatible endpoint:

# Use included model
./setup-llm.sh

# Or bring your own
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="qwen2.5-7b"
linnix-reasoner --insights

Supports: llama.cpp, Ollama, vLLM, OpenAI, Anthropic, or anything with an OpenAI-compatible API.
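As a concrete example, Ollama exposes an OpenAI-compatible API on port 11434, so pointing the reasoner at a local Ollama model would look like this (the model name is illustrative; use whatever you have pulled):

```shell
# Ollama serves an OpenAI-compatible endpoint under /v1 by default
export LLM_ENDPOINT="http://localhost:11434/v1/chat/completions"
export LLM_MODEL="llama3.1:8b"
linnix-reasoner --insights
```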

Configuration

Create /etc/linnix/linnix.toml:

[runtime]
offline = false  # Set true to disable external calls

[telemetry]
sample_interval_ms = 1000  # How often to sample CPU/memory

[rules]
enabled = true
config_path = "/etc/linnix/rules.yaml"

[api]
bind_address = "127.0.0.1:3000"

[llm]
endpoint = "http://localhost:8090/v1/chat/completions"
model = "qwen2.5-7b"
timeout_secs = 120
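The rules file referenced by config_path above isn't shown in this README. Purely as a hypothetical sketch of what a threshold rule in /etc/linnix/rules.yaml might contain (every field name here is invented for illustration; consult the repository for the real schema):

```yaml
# HYPOTHETICAL example: field names are illustrative, not the actual schema
rules:
  - name: fork-storm
    event: fork
    threshold: 100     # events per window before firing
    window_secs: 5
    severity: critical
```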

Current Status

What works well:

  • eBPF monitoring (stable, low overhead)
  • Rules engine (catches common issues)
  • Prometheus export
  • Docker/Kubernetes monitoring

What's rough:

  • AI detection is hit-or-miss (depends heavily on the model)
  • No web UI yet (just APIs and CLI)
  • Limited documentation
  • Haven't tested beyond my own setups

What's missing:

  • Multi-node management UI
  • Better alerting integrations (working on PagerDuty, Slack)
  • Historical data storage (currently in-memory only)
  • More sophisticated correlation

Installation from Source

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and build
git clone https://github.com/linnix-os/linnix.git
cd linnix
cargo xtask build-ebpf
cargo build --release

# Install binaries
sudo cp target/release/cognitod /usr/local/bin/
sudo cp target/release/linnix-cli /usr/local/bin/
sudo cp target/release/linnix-reasoner /usr/local/bin/

Contributing

Contributions welcome! The code is mostly in Rust using the Aya framework for eBPF.

See CONTRIBUTING.md for guidelines.

License

Apache License 2.0 - see LICENSE

eBPF programs are dual-licensed under GPL-2.0 OR MIT (eBPF requirement).

Acknowledgments

Built with:

  • Aya - Rust eBPF framework
  • Tokio - Async runtime
  • Axum - Web framework
  • BTF - BPF Type Format

GitHub: github.com/linnix-os/linnix
Twitter: @parth21shah
