LangEval is an enterprise-grade AI Agentic Evaluation Platform, pioneering the application of Active Testing and User Simulation strategies to ensure the quality, safety, and performance of Generative AI systems before they reach the market.
Tip
๐ Live POC Demo: Explore the platform in action at langeval.space
Unlike passive monitoring tools that only "catch errors" after an incident has occurred, LangEval allows you to proactively "attack" (Red-Teaming), stress-test, and evaluate Agents in a safe Sandbox environment.
- Why Choose LangEval?
- Core Features
- Detailed Installation Guide
- Contributing
- Support the Project
- System Architecture
- Technology Stack
- Project Structure
- Development Roadmap
- Reference Documentation
- License
In the era of Agentic AI, traditional evaluation methods (based on text similarity) are no longer sufficient. LangEval addresses the toughest challenges in Enterprise AI:
- Behavioral Evaluation (Behavioral Eval): Does the Agent follow business processes (SOP)? Does it call the correct Tools?
- Safety & Security: Can the Agent be Jailbroken? Does it leak PII?
- Automation: How to test 1000 conversation scenarios without 1000 testers?
- Data Privacy: Runs entirely On-Premise/Private Cloud, without sending sensitive data externally.
- Persona-based Simulation: Automatically generates thousands of "virtual users" with different personalities (Difficult, Curious, Impatient...) using Microsoft AutoGen.
- Multi-turn Conversation: Evaluates the ability to maintain context across multiple conversation turns, beyond simple Q&A.
- Dynamic Scenarios: Flexible test scenarios supporting logical branching (Decision Tree).
- Tiered Metrics System:
- Tier 1 (Response): Answer Relevancy, Toxicity, Bias.
- Tier 2 (RAG): Faithfulness (Anti-hallucination), Contextual Precision.
- Tier 3 (Agentic): Tool Correctness, Plan Adherence (Process compliance).
- Custom Metrics: Supports defining custom metrics using G-Eval (LLM-as-a-Judge).
- State Machine Management: Manages complex states of the test process.
- Self-Correction Loop: Automatically detects errors and retries with different strategies (Prompt Mutation) to find Agent weaknesses.
- Human-in-the-loop: Breakpoint mechanisms for human intervention and scoring when the AI is uncertain.
- Identity Management: Pre-integrated with Microsoft Entra ID (Azure AD B2C) for SSO.
- RBAC Matrix: Detailed permission control down to every button (Admin, Workspace Owner, AI Engineer, QA, Stakeholder).
- PII Masking: Automatically hides sensitive information (Email, Phone, CC) starting from the SDK layer.
- Battle Arena: Compares A/B Testing between two Agent versions (Split View).
- Root Cause Analysis (RCA): Failure Clustering to identify where the Agent frequently fails.
- Trace Debugger: Integrated Langfuse UI to trace every reasoning step (Thought/Action/Observation).
- Docker & Docker Compose (v2.20+)
- Node.js 18+ (LTS) & npm/yarn/pnpm
- Python 3.11+ (Optional, for running individual services locally)
- Git
git clone https://github.com/your-org/langeval.git
cd langevalCopy the .env.example file to .env in the root directory and update essential keys.
cp .env.example .env
# Edit .env file and update:
# 1. OPENAI_API_KEY=sk-... (Required for Simulation Agents)
# 2. GOOGLE_CLIENT_ID=... (Required for Auth)
# 3. GOOGLE_CLIENT_SECRET=...
# 4. NEXTAUTH_SECRET=... (Generate with: openssl rand -base64 32)We use Docker Compose to spin up the entire backend stack, including Databases (Postgres, ClickHouse, Qdrant), Message Queue (Kafka, Redis), and Core Services (Orchestrator, Resource Service).
# Start all backend services in the background
docker-compose up -dNote: This process may take a few minutes to download images and initialize databases (PostgreSQL, Qdrant, ClickHouse). Ensure all containers are
healthybefore proceeding.
Run the Next.js frontend application locally for the best development experience.
cd evaluation-ui
# Install dependencies
npm install
# Start the development server
npm run devOnce everything is running:
- AI Studio (Frontend): http://localhost:8080
- API Gateway: http://localhost:8000/docs
- Langfuse Dashboard: http://localhost:3000 (Check docker-compose for exposed port)
We adopt the Vibe Coding (AI-Assisted Development) process. We welcome contributions from the community!
Please carefully read CONTRIBUTING.md to understand how to use AI tools to contribute effectively and according to project standards.
If you find LangEval useful, please consider supporting its development to help us maintain server costs and coffee supplies! โ
LangEval adopts an Event-Driven Microservices architecture, optimized for deployment on Kubernetes (EKS) and horizontal scalability.
graph TD
user(("User (QA/Dev)"))
subgraph "LangEval Platform (EKS Cluster)"
ui("AI Studio (Next.js)")
api("API Gateway")
subgraph "Control Plane"
orch("Orchestrator Service<br>(LangGraph)")
resource("Resource Service<br>(FastAPI)")
identity("Identity Service<br>(Entra ID)")
end
subgraph "Compute Plane (Auto-scaling)"
sim("Simulation Worker<br>(AutoGen)")
eval("Evaluation Worker<br>(DeepEval)")
gen("Gen AI Service<br>(LangChain)")
end
subgraph "Data Plane"
pg[(PostgreSQL - Metadata)]
ch[(ClickHouse - Logs)]
kafka[(Kafka - Event Bus)]
redis[(Redis - Cache/Queue)]
qdrant[(Qdrant - Vector DB)]
end
end
user --> ui
ui --> api
api --> orch & resource & identity
orch -- "Dispatch Jobs" --> kafka
kafka -- "Consume Tasks" --> sim & eval
sim & eval -- "Write Logs" --> ch
orch -- "Persist State" --> redis & pg
gen -- "RAG Search" --> qdrant
We select "Best-in-Class" technologies for each layer:
| Layer | Technology | Reason for Selection |
|---|---|---|
| Frontend | Next.js 14, Shadcn/UI, ReactFlow | High performance, good SEO, standard Enterprise interface. |
| Orchestration | LangGraph | Better support for Cyclic Graphs compared to traditional LangChain Chains. |
| Simulation | Microsoft AutoGen | The most powerful framework currently available for Multi-Agent Conversation. |
| Evaluation | DeepEval | Deep integration with PyTest, supporting Unit Testing for AI. |
| Observability | Langfuse (Self-hosted) | Open Source, data security, excellent Tracing interface. |
| Database | PostgreSQL, ClickHouse, Qdrant | Polyglot Persistence: The right DB for the right job (Metadata, Logs, Vectors). |
| Queue/Stream | Kafka, Redis | Ensures High Throughput and Low Latency for millions of events. |
The project is organized using a Monorepo model for easy management and synchronized development:
langeval/
โโโ backend/
โ โโโ data-ingestion/ # Rust service: High-speed log processing from Kafka into ClickHouse
โ โโโ evaluation-worker/ # Python service: DeepEval scoring worker
โ โโโ gen-ai-service/ # Python service: Test data and Persona generation
โ โโโ identity-service/ # Python service: Auth & RBAC
โ โโโ orchestrator/ # Python service: Core logic, LangGraph State Machine
โ โโโ resource-service/ # Python service: CRUD APIs (Agents, Scenarios...)
โ โโโ simulation-worker/ # Python service: AutoGen simulators
โโโ evaluation-ui/ # Frontend: Next.js Web Application
โ โโโ docs/ # ๐ Detailed project documentation
โ โโโ ...
โโโ infrastructure/ # Terraform, Docker Compose, K8s manifests
โโโ ...
The project is divided into 3 strategic phases:
- Build Orchestrator Service with LangGraph.
- Integrate Simulation Worker (AutoGen) and Evaluation Worker (DeepEval).
- Complete Data Ingestion pipeline with Kafka & ClickHouse.
- Launch AI Studio with Visual Scenario Builder (Drag & Drop).
- Integrate Active Red-Teaming (Automated Attacks).
- Human-in-the-loop Interface (Review Queue for scoring).
- Battle Mode (Arena UI) for A/B Testing.
- Integrate CI/CD Pipeline (GitHub Actions Quality Gate).
- Self-Optimization (GEPA algorithm for Prompt self-correction).
The comprehensive documentation system (Architecture, API, Database, Deployment) is located in the evaluation-ui/docs/ directory. This is the Single Source of Truth.
-
Overview: Master Plan, Business Requirements
-
Technical: System Architecture, Database Design, API Spec
-
Authentication: Google OAuth Setup Guide
-
Operations: Deployment & DevOps, Security
This project is licensed under the MIT License. See the LICENSE file for more details.
LangEval Team - Empowering Enterprise AI with Confidence