
Papr Memory 🧠

Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support & GraphQL. 91% Stanford STARK accuracy, <100ms on-device retrieval.

License: AGPL-3.0 | Python 3.8+ | FastAPI

🚀 What is Papr Memory?

Papr Memory is a predictive memory layer for AI agents that lets you:

  • Store Information: Save text, documents, code snippets, and structured data
  • AI-Powered Search: Find relevant memories using natural language queries
  • Graph Relationships: Automatically discover and track connections between memories
  • Vector Embeddings: Semantic search powered by modern embedding models
  • Multi-Modal Support: Handle text, documents, images, and structured data
  • User Context: Personal memory spaces with fine-grained access control

💡 Use Cases

  • Voice Agents for Customer Support: Enable intelligent voice assistants with persistent memory and context
  • B2B AI Agents: Knowledge management, RAG, and semantic search for enterprise applications
  • Coding Agents: Use custom ontology + GraphQL for significant improvements to context and search in your codebase
  • Financial AI Agents: Ingest financial documents using custom ontology + GraphQL for queries
  • Healthcare AI Agents: Secure, compliant memory management for healthcare applications
  • Retail AI Agents: Use custom ontology + GraphQL for intelligent product recommendations and customer insights

πŸ—οΈ Architecture Overview

graph TB
    Client[Client Applications] --> API[FastAPI Server]
    API --> Parse[Parse Server]
    API --> Mongo[(MongoDB)]
    API --> Neo4j[(Neo4j Graph DB)]
    API --> Qdrant[(Qdrant Vector DB)]
    API --> Redis[(Redis Cache)]

    subgraph "AI Services"
        OpenAI[OpenAI Embeddings]
        LLM[Language Models]
    end

    API --> OpenAI
    API --> LLM

    subgraph "Storage Layer"
        Parse --> Mongo
        Neo4j --> MemGraph[Memory Graph]
        Qdrant --> VectorStore[Vector Embeddings]
    end

    subgraph "Features"
        Search[Semantic Search]
        Graph[Graph Relationships]
        ACL[Access Control]
        Embed[Auto Embeddings]
    end

Predictive memory Architecture

Papr's Architecture

🆚 Open Source vs Cloud

| Feature | Open Source | Cloud |
|---------|-------------|-------|
| Memory Storage | ✅ | ✅ |
| Vector Search | ✅ | ✅ |
| Graph Relationships | ✅ | ✅ |
| API Access | ✅ | ✅ |
| Self-Hosted | ✅ | ❌ |
| Managed Infrastructure | ❌ | ✅ |
| Automatic Backups | ❌ | ✅ |
| Payment/Billing | ❌ | ✅ |
| Enterprise SSO | ❌ | ✅ |
| SLA Guarantees | ❌ | ✅ |
| Priority Support | ❌ | ✅ |
| Advanced Analytics | ❌ | ✅ |
| Document Ingestion with Durable Execution | ❌ | ✅ |
| GraphQL Instance with Custom Ontology | ❌ | ✅ |
| On-Device Predictions (< 100ms retrieval) | ❌ | ✅ |

🔧 Key Components

  • FastAPI Server: Main API layer with authentication and routing
  • Parse Server: User management, ACL, and structured data storage
  • MongoDB: Primary document storage and user data
  • Neo4j: Graph database for memory relationships and connections
  • Qdrant: Vector database for semantic search and embeddings
  • Redis: Caching layer for performance optimization
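These components map one-to-one onto containers. A minimal docker-compose sketch (illustrative only; the repository ships its own docker-compose.yml, and the image tags here simply mirror the manual-setup commands later in this README):

```yaml
# Illustrative sketch only -- the repository's own docker-compose.yml is authoritative.
services:
  mongodb:
    image: mongo:8.0.12
    ports:
      - "27017:27017"
  neo4j:
    image: neo4j:2025.10.1
    environment:
      NEO4J_AUTH: neo4j/password
    ports:
      - "7474:7474"
      - "7687:7687"
  qdrant:
    image: qdrant/qdrant:v1.16.0
    ports:
      - "6333:6333"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  parse-server:
    image: parseplatform/parse-server:8.4.0
    environment:
      PARSE_SERVER_APPLICATION_ID: papr-oss-app-id
      PARSE_SERVER_MASTER_KEY: papr-oss-master-key
      PARSE_SERVER_DATABASE_URI: mongodb://mongodb:27017/papr_memory
    ports:
      - "1337:1337"
```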

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Docker & Docker Compose (recommended)
  • Git
  • API Keys: OpenAI API key, Groq API key, and Deep Infra API key
    • Note: Hugging Face is also supported, and local Qwen on-device support will be added soon

Option 1: Docker Setup (Recommended)

For Open Source Setup, see the detailed guide: QUICKSTART_OPENSOURCE.md

Quick start:

  1. Clone the repository:

git clone https://github.com/Papr-ai/memory-opensource.git
cd memory-opensource

  2. Copy environment configuration:

# For open source setup
cp .env.example .env.opensource
# Edit .env.opensource with your API keys (OpenAI, Groq, Deep Infra)
# Note: Hugging Face is also supported, and local Qwen on-device support will be added soon

  3. Start all services:

# Open source setup (auto-initializes everything)
docker-compose up -d

  4. Access the API

Note: The open-source setup automatically initializes schemas, creates a default user, and generates an API key on first run. Check container logs for your API key.
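To surface that key, filtering the first-run logs is usually enough. A sketch (the service name `api` is an assumption; check docker-compose.yml for the real one):

```shell
# Print the auto-generated API key from the first-run logs.
# "api" is an assumed service name; check docker-compose.yml for the actual one.
# The trailing "|| true" keeps the pipeline from failing before the key exists.
docker-compose logs api 2>/dev/null | grep -i "api key" || true
```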

Option 2: Manual Setup

  1. Clone and set up the Python environment:

git clone https://github.com/Papr-ai/memory-opensource.git
cd memory-opensource
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

  2. Start required services:

# Recommended: Use docker-compose for open source setup
docker-compose up -d mongodb neo4j qdrant redis parse-server

# Or start individually (for development):
# MongoDB
docker run -d -p 27017:27017 --name mongo mongo:8.0.12

# Neo4j
docker run -d -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  --name neo4j neo4j:2025.10.1

# Qdrant
docker run -d -p 6333:6333 --name qdrant qdrant/qdrant:v1.16.0

# Redis
docker run -d -p 6379:6379 --name redis redis:7-alpine

# Parse Server
docker run -d -p 1337:1337 \
  -e PARSE_SERVER_APPLICATION_ID=papr-oss-app-id \
  -e PARSE_SERVER_MASTER_KEY=papr-oss-master-key \
  -e PARSE_SERVER_DATABASE_URI=mongodb://localhost:27017/papr_memory \
  --name parse parseplatform/parse-server:8.4.0

  3. Configure environment:

# For open source
cp .env.example .env.opensource
# Edit .env.opensource with your service URLs and API keys

# For cloud/development
cp .env.example .env
# Edit .env with your service URLs and API keys

  4. Run the application:

python main.py

📖 API Documentation

Authentication

The API supports multiple authentication methods:

# API Key
curl -H "X-API-Key: your-api-key" http://localhost:5001/v1/memory

# Session Token
curl -H "X-Session-Token: your-session-token" http://localhost:5001/v1/memory

# Bearer Token (OAuth)
curl -H "Authorization: Bearer your-jwt-token" http://localhost:5001/v1/memory
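For clients that can't shell out to curl, the same API-key header can be attached from Python's standard library. A minimal sketch (the key value is a placeholder):

```python
import urllib.request

# Build a request carrying the API-key header; urllib.request.urlopen(req)
# would send it once the server is up.
req = urllib.request.Request(
    "http://localhost:5001/v1/memory",
    headers={"X-API-Key": "your-api-key"},
)

# urllib stores header names in Capitalized form internally.
print(req.get_header("X-api-key"))  # -> your-api-key
```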

Core Endpoints

Memory Management

# Add a memory
POST /v1/memory
{
  "content": "Your memory content",
  "type": "text",
  "metadata": {
    "tags": ["important", "work"],
    "location": "office"
  }
}

# Search memories
POST /v1/memory/search
{
  "query": "find relevant information",
  "max_memories": 10
}

# Get specific memory
GET /v1/memory/{memory_id}

# Update memory
PUT /v1/memory/{memory_id}

# Delete memory
DELETE /v1/memory/{memory_id}
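As a sketch, the request bodies above can be assembled as plain dicts before serialization; the field names simply mirror the examples (the real endpoint schema may accept more):

```python
import json
from typing import List, Optional

def add_memory_body(content: str,
                    tags: Optional[List[str]] = None,
                    location: Optional[str] = None) -> dict:
    """Body for POST /v1/memory, mirroring the example above."""
    metadata = {}
    if tags is not None:
        metadata["tags"] = tags
    if location is not None:
        metadata["location"] = location
    return {"content": content, "type": "text", "metadata": metadata}

def search_body(query: str, max_memories: int = 10) -> dict:
    """Body for POST /v1/memory/search, mirroring the example above."""
    return {"query": query, "max_memories": max_memories}

print(json.dumps(add_memory_body("Your memory content",
                                 tags=["important", "work"],
                                 location="office")))
```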

Document Upload

# Upload document
POST /v1/documents
Content-Type: multipart/form-data
File: document.pdf
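A stdlib-only sketch of building that multipart request (the form-field name `file` and the PDF content type are assumptions; the interactive API docs show the exact schema):

```python
import io
import urllib.request
import uuid

def build_multipart(field_name, filename, payload, content_type="application/pdf"):
    """Assemble a multipart/form-data body by hand, with no third-party deps."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(f'Content-Disposition: form-data; name="{field_name}"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

data, ctype = build_multipart("file", "document.pdf", b"%PDF-1.4 ...")
req = urllib.request.Request(
    "http://localhost:5001/v1/documents",
    data=data,
    headers={"X-API-Key": "your-api-key", "Content-Type": ctype},
    method="POST",
)
# urllib.request.urlopen(req) would send it once the server is running.
```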

User Management

# Get user info
GET /v1/users/me

# Update user settings
PUT /v1/users/me

Interactive API Documentation

Once running, visit the auto-generated FastAPI docs: Swagger UI at http://localhost:5001/docs and ReDoc at http://localhost:5001/redoc (FastAPI defaults; adjust the host and port to your setup).

🔧 Configuration

Environment Variables

Key environment variables (see .env.example for complete list):

# Server Configuration
PORT=5001
DEBUG=true
ENVIRONMENT=development

# Database URLs
MONGODB_URL=mongodb://localhost:27017/papr_memory
NEO4J_URL=bolt://localhost:7687
QDRANT_URL=http://localhost:6333
REDIS_URL=redis://localhost:6379

# Parse Server
PARSE_SERVER_URL=http://localhost:1337
PARSE_SERVER_APP_ID=your-app-id
PARSE_SERVER_MASTER_KEY=your-master-key

# AI Services
OPENAI_API_KEY=your-openai-key
OPENAI_ORGANIZATION=your-org-id
GROQ_API_KEY=your-groq-key
DEEPINFRA_API_KEY=your-deepinfra-key
# Note: Hugging Face is also supported, and local Qwen on-device support will be added soon

Advanced Configuration

  • Vector Search: Configure embedding models and search parameters
  • Graph Relationships: Customize relationship extraction and graph building
  • Access Control: Set up user roles and permissions
  • Caching: Configure Redis caching strategies
  • Monitoring: Set up logging and health checks

🧪 Testing

Run Tests

# All tests
pytest

# Specific test categories
pytest tests/unit/
pytest tests/integration/
pytest tests/api/

# With coverage
pytest --cov=./ --cov-report=html

API Testing

# Health check
curl http://localhost:5001/health

# Test authentication
curl -H "X-API-Key: test-key" http://localhost:5001/v1/memory

# Test memory creation
curl -X POST -H "Content-Type: application/json" \
  -H "X-API-Key: test-key" \
  -d '{"content":"Test memory","type":"text"}' \
  http://localhost:5001/v1/memory

📚 Examples

Python Client

import requests

# Initialize client
base_url = "http://localhost:5001"
headers = {"X-API-Key": "your-api-key"}

# Add memory
response = requests.post(
    f"{base_url}/v1/memory",
    json={
        "content": "Important meeting notes from today",
        "type": "text",
        "metadata": {
            "tags": ["meeting", "work"],
            "date": "2024-01-15"
        }
    },
    headers=headers
)
memory = response.json()

# Search memories
response = requests.post(
    f"{base_url}/v1/memory/search",
    json={"query": "meeting notes", "max_memories": 10},
    headers=headers
)
results = response.json()

JavaScript Client

const baseUrl = 'http://localhost:5001';
const headers = { 'X-API-Key': 'your-api-key' };

// Add memory
const addMemory = async (content, metadata = {}) => {
  const response = await fetch(`${baseUrl}/v1/memory`, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify({ content, type: 'text', metadata })
  });
  return response.json();
};

// Search memories
const searchMemories = async (query) => {
  const response = await fetch(`${baseUrl}/v1/memory/search`, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, max_memories: 10 })
  });
  return response.json();
};

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Quick Contribution Steps

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature
  3. Make your changes and add tests
  4. Run tests: pytest
  5. Commit your changes: git commit -am 'Add some feature'
  6. Push to the branch: git push origin feature/your-feature
  7. Submit a pull request

📄 License

This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.

🆘 Support


Built with ❀️ by the Papr team
