Oliva is a multi-agent assistant that helps users find products in a Qdrant database using LangChain and Superlinked.
| Requirement | Description |
|---|---|
| Database Population | Follow the setup instructions in the tabular-semantic-search-tutorial, or download the snapshot from `assets/snapshot.zip`. |
| Qdrant | Vector database for efficient similarity search and storage of embeddings. |
| Superlinked | Framework for building AI applications with semantic search capabilities. |
| Deepgram Account | Speech-to-text service account required for converting voice input into text. |
| Livekit Account | Real-time communication platform needed for handling voice interactions. |
| Python Knowledge | Understanding of Python programming language (version 3.12+). |
- Install project dependencies:

```shell
uv sync
```

This will create a virtual environment in `.venv` and install all required dependencies.
- LiveKit account

Create a LiveKit account in LiveKit Cloud and get `LIVEKIT_URL`, `LIVEKIT_API_KEY`, and `LIVEKIT_API_SECRET`:

```
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=secret
LIVEKIT_API_SECRET=********
```

- Environment variables
Before running any Python scripts, set the following environment variables:

```shell
cp .env.example .env
```

- Qdrant
Use Docker to run Qdrant, setting an API key of your choice:

```shell
docker run -p 6333:6333 -p 6334:6334 \
    -e QDRANT__SERVICE__API_KEY=******** \
    -v "$(pwd)/qdrant_storage:/qdrant/storage:z" \
    qdrant/qdrant
```

Then start Oliva:

```shell
make oliva-start
```

Use the Agent Playground and connect it to your LiveKit project to interact with the voice assistant.
If you prefer to run the playground locally, download the Agent Playground repo and run `npm run start`.
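Once the Qdrant container from the step above is running, a quick reachability check can be done from Python with only the standard library. This is a sketch: it assumes Qdrant's REST conventions (the `/healthz` endpoint and the `api-key` header), so verify both against the Qdrant docs for your version.

```python
# Build a health-check request against the local Qdrant container.
# Assumes Qdrant's /healthz endpoint and api-key header (check your version's docs).
import urllib.request

def build_health_request(base_url: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{base_url}/healthz",
        headers={"api-key": api_key},
    )

req = build_health_request("http://localhost:6333", "********")
# With the container up: urllib.request.urlopen(req) returns HTTP 200.
print(req.full_url)  # -> http://localhost:6333/healthz
```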
```
oliva/
├── app/
│   ├── agents/
│   │   ├── implementations/  # Individual agent implementations
│   │   ├── core/             # Base classes and interfaces for agent components
│   │   └── langchain/
│   │       ├── base/         # Base LangChain integration classes
│   │       ├── config/       # LangChain configuration
│   │       ├── edges/        # Edge conditions for workflow routing
│   │       ├── nodes/        # Node implementations (agent, rewrite, generate)
│   │       └── tools/        # LangChain-specific tools
│   ├── voice_assistant/
│   └── utils/                # Shared utilities
```
The project follows a modular architecture implementing an agentic RAG (Retrieval-Augmented Generation) system:
- Agent Components (`app/agents/`)
  - `agents/`: Contains specific agent implementations
  - `core/`: Defines core interfaces and abstract classes for:
    - State management
    - Node implementations
    - Edge conditions
    - Tool interfaces
    - Graph workflow definitions
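To make the `core/` description concrete, here is a hypothetical sketch of that kind of abstraction: a state container plus abstract node and edge-condition interfaces. Class and field names are illustrative, not Oliva's actual API.

```python
# Hypothetical interfaces in the spirit of app/agents/core/ (names are illustrative).
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Shared state passed between workflow nodes."""
    query: str
    docs: list = field(default_factory=list)
    answer: str = ""

class Node(ABC):
    """A workflow step that transforms the shared state."""
    @abstractmethod
    def run(self, state: AgentState) -> AgentState: ...

class EdgeCondition(ABC):
    """Decides which node runs next, based on the current state."""
    @abstractmethod
    def next_node(self, state: AgentState) -> str: ...

class EchoNode(Node):
    """Trivial concrete node used here only to exercise the interface."""
    def run(self, state: AgentState) -> AgentState:
        state.answer = f"echo: {state.query}"
        return state

state = EchoNode().run(AgentState(query="laptops under $500"))
print(state.answer)  # -> echo: laptops under $500
```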
- LangChain Integration (`app/agents/integrations/langchain/`)
  - Provides LangChain-specific implementations for:
    - Document retrieval
    - Tool operations
    - State management
    - Workflow nodes and edges
- Voice Assistant (`app/voice_assistant/`)
  - LiveKit integration
  - Voice interface implementation
  - Speech-to-text and text-to-speech capabilities
- Utilities (`app/utils/`)
  - Shared helper functions
  - Common utilities used across modules
The system implements a graph-based workflow where each agent processes state through a series of nodes (functions) connected by conditional edges, supporting dynamic routing based on the agent's decisions.
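The workflow described above can be sketched in plain Python: nodes are functions over a shared state dict, and a conditional edge function chooses the next node from the result. Node names and routing here are illustrative, not Oliva's actual graph.

```python
# Minimal sketch of a graph-based workflow: nodes transform a shared state,
# conditional edges pick the next node. Names are illustrative, not Oliva's.

def rewrite(state):
    state["query"] = state["query"].strip().lower()
    return state

def retrieve(state):
    # Stand-in for a Qdrant/Superlinked lookup over a product corpus.
    state["docs"] = [d for d in state["corpus"] if state["query"] in d]
    return state

def generate(state):
    state["answer"] = f"Found {len(state['docs'])} matching products"
    return state

# Conditional edge: proceed to generation only when something was retrieved.
def route_after_retrieve(state):
    return "generate" if state["docs"] else "end"

NODES = {"rewrite": rewrite, "retrieve": retrieve, "generate": generate}
EDGES = {
    "rewrite": lambda s: "retrieve",
    "retrieve": route_after_retrieve,
    "generate": lambda s: "end",
}

def run(state, entry="rewrite"):
    node = entry
    while node != "end":
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

result = run({"query": "  Laptop ", "corpus": ["gaming laptop", "desk lamp"]})
print(result["answer"])  # -> Found 1 matching products
```

In Oliva the same pattern is realized through LangChain's graph tooling; this sketch only shows the routing mechanics.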
LangChain workflow by Superlinked:

```shell
make agent-search-by-superlinked
```

LangChain workflow by JSON file:

```shell
make agent-search-by-json
```

| Technology | Version/Type | Role |
|---|---|---|
| LangChain | Latest | LLM application framework |
| LiveKit | Cloud/Self-hosted | Real-time voice communication |
| Qdrant | Vector DB | Semantic search storage |
| Superlinked | Framework | Semantic search capabilities |
| Deepgram | API Service | Speech-to-text conversion |
| OpenAI | API Service | LLM provider |
| Python | 3.12+ | Core implementation |