An LLM-powered agent that intelligently manages the network based on your natural-language intents in a software-defined networking (SDN) environment.
- Rigid and manual workflows: Administrators must translate high-level business goals into complex, vendor-specific flow rules. This is slow and prone to human error.
- Context-blindness and hallucination: General LLMs lack real-time topology and network state, so they can produce incorrect or unsafe configurations.
- Messy management and static knowledge: There is no unified interface for monitoring and control, and no feedback-driven knowledge base to improve the model over time.
- Integrated dashboard and knowledge base: Provide a real-time dashboard with a feedback-driven knowledge base (vector DB) that stores and retrieves successful configuration samples.
- Context-aware RAG pipeline: Inject live topology and similar configuration samples into the LLM prompt to improve technical accuracy.
- User intent translation: Build a natural language interface that converts high-level intents into ONOS-compatible JSON configurations.
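The context-aware pipeline above can be sketched roughly as follows. This is an illustrative outline only: the helper name `build_prompt` and the shape of the topology/sample structures are assumptions, not the actual code in `backend/services/rag`.

```python
# Hypothetical sketch of the RAG prompt assembly: live topology plus
# retrieved configuration samples are injected ahead of the user intent.
def build_prompt(user_intent: str, topology: dict, samples: list[dict]) -> str:
    """Assemble an LLM prompt grounded in current network state."""
    sample_text = "\n".join(
        f"- intent: {s['intent']}\n  config: {s['config']}" for s in samples
    )
    return (
        "You translate network intents into ONOS-compatible JSON.\n"
        f"Current topology: {topology}\n"
        f"Similar past configurations:\n{sample_text}\n"
        f"User intent: {user_intent}\n"
        "Respond with JSON only."
    )

prompt = build_prompt(
    "Connect host A to host B",
    {"hosts": ["00:00:00:00:00:01/None", "00:00:00:00:00:02/None"],
     "switches": ["of:0000000000000001"]},
    [{"intent": "connect h1 h2", "config": '{"type": "HostToHostIntent"}'}],
)
```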
Connectivity:
- HostToHost Intent
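A basic connectivity intent can be submitted to the ONOS REST API as sketched below (stdlib only). The host IDs, `appId`, and default `onos:rocks` credentials are placeholders; take real host IDs from `GET /onos/v1/hosts`.

```python
import base64
import json
import urllib.request

# Illustrative HostToHost intent payload; field values are placeholders.
intent = {
    "type": "HostToHostIntent",
    "appId": "org.onosproject.cli",
    "priority": 100,
    "one": "00:00:00:00:00:01/None",  # hostId = MAC/VLAN
    "two": "00:00:00:00:00:02/None",
}

def submit_intent(payload: dict, base: str = "http://127.0.0.1:8181/onos/v1") -> int:
    """POST an intent to the controller and return the HTTP status code."""
    token = base64.b64encode(b"onos:rocks").decode()  # assumed default credentials
    req = urllib.request.Request(
        f"{base}/intents",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```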
QoS control:
- HostToHost Intent with a traffic-type selector and priority
- Planned: extend the ONOS intent framework to allow OpenFlow meter injection
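A QoS-flavoured variant of the HostToHost intent adds a traffic selector and a raised priority so matching flows win over default connectivity rules. The values below are illustrative, not taken from the project's generated configs.

```python
# Sketch of a QoS HostToHost intent: selector narrows the match to TCP
# over IPv4, and priority is raised above the baseline connectivity intent.
qos_intent = {
    "type": "HostToHostIntent",
    "appId": "org.onosproject.cli",
    "priority": 200,  # higher than the assumed default of 100
    "one": "00:00:00:00:00:01/None",
    "two": "00:00:00:00:00:02/None",
    "selector": {
        "criteria": [
            {"type": "ETH_TYPE", "ethType": "0x0800"},  # IPv4
            {"type": "IP_PROTO", "protocol": 6},        # TCP
        ]
    },
}
```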
Load balancing:
- HostToHost Intent with a traffic-type selector and an explicit path
- Planned: develop a load balancer as an extension of the ONOS intent framework
Access control:
- FlowObjective with a traffic selector and treatment
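An access-control rule can be expressed as a forwarding objective whose treatment carries no instructions, which drops matching packets. This is a hand-written sketch against `POST /onos/v1/flowobjectives/{deviceId}/forward`; the match fields and priority are assumptions, not the project's generated output.

```python
# Sketch of a drop-style forwarding objective blocking SSH (TCP/22).
# An empty instruction list in the treatment means matching traffic
# is dropped rather than forwarded.
acl_objective = {
    "priority": 40000,
    "isPermanent": True,
    "flag": "VERSATILE",
    "selector": {
        "criteria": [
            {"type": "ETH_TYPE", "ethType": "0x0800"},  # IPv4
            {"type": "IP_PROTO", "protocol": 6},        # TCP
            {"type": "TCP_DST", "tcpPort": 22},         # SSH
        ]
    },
    "treatment": {"instructions": []},  # no output action => drop
}
```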
Topology awareness and status retrieval:
- A responsive UI developed using React

Future work:
- Dynamic intent conflict resolution
- Rely on OpenFlow flow rules instead of the ONOS intent framework to expand scope
- Fine-tune the LLM with more data collection
- Multi-controller and multi-language support
Example intents:
- “Throttle bulk backups after 1 a.m. to keep latency low for production.”
- “Give the ‘VideoConf’ app higher priority on VLAN 20 until 6 p.m.”
- “Block SSH to servers outside the bastion for interns group.”
Significant components in this project:
```
FYP/
├─ backend/
│  ├─ main.py           (FastAPI entrypoint)
│  ├─ api/              (auth, chat, conversations, devices, hosts, config samples, tests, ONOS proxy)
│  ├─ schemas/          (Pydantic request/response models)
│  ├─ services/         (auth, chat, devices, config samples, llm, onos, rag, testbed, users)
│  ├─ database/
│  │  ├─ models.py      (SQLAlchemy models)
│  │  ├─ database.py    (engine/session)
│  │  └─ seed_data.py   (seed scripts)
│  ├─ rag/              (embeddings + similarity search)
│  └─ data/             (seed JSON files)
├─ diagram/
├─ evaluation/
├─ frontend/
│  └─ src/
│     ├─ auth/          (Auth context)
│     ├─ components/    (dashboard, chat, pages, layout)
│     ├─ hooks/         (topology data)
│     └─ utils/         (API clients)
├─ llm-engine/          (legacy local LLM experiments)
├─ onos-testbed/
│  ├─ scripts/          (topology + test runner)
│  └─ notes/            (testbed docs)
└─ main.py
```
- SDN Testbed
  - ONOS Controller + Intent Framework
  - Open vSwitch
  - Linux namespaces (Mininet-based testbed)
- Frontend
  - Vite + React + TypeScript
  - Tailwind CSS
  - D3-force + Chart.js
- Backend
  - FastAPI + Pydantic
  - Groq API client (LLM inference)
  - RAG with SentenceTransformers
- Database
  - PostgreSQL + pgvector
  - SQLAlchemy
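The RAG similarity search over stored configuration samples comes down to vector distance: pgvector's `<=>` operator computes cosine distance in SQL. The plain-Python equivalent below is only an illustration of that computation, not the project's retrieval code.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance (1 - cosine similarity), as pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Parallel vectors -> distance 0; orthogonal vectors -> distance 1.
assert cosine_distance([1.0, 0.0], [2.0, 0.0]) < 1e-9
assert abs(cosine_distance([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-9
```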
- Node.js 18+ (LTS recommended)
- Package manager: npm, pnpm, or yarn
- 8000: Backend API
- 5173: React frontend
- 6653: ONOS OpenFlow listening port
- 8181: ONOS GUI
- CPU: Intel(R) Xeon(R) CPU D-1528 @ 1.90GHz
- Memory: 32 GB
- OS: Ubuntu 24.04.3 LTS
- No. of threads: 12
- Set the required environment variables:

```shell
export GROQ_API_KEY=your_key
export ONOS_API_URL=http://127.0.0.1:8181/onos/v1
export ONOS_USER=onos
export ONOS_PASS=rocks
```

- Start the backend FastAPI server:

```shell
cd FYP
uv run uvicorn backend.main:app --host 0.0.0.0 --port 8000
```

- Start the frontend (separate terminal):

```shell
cd FYP/frontend
npm install
npm run dev
```

Note: `llm-engine/agent.py` and `llm-engine/` are legacy local-LLM experiments. The current backend uses the Groq API by default.