OpenTelemetry-native instrumentation for AI applications
Standardized observability across LLMs, agents, and frameworks
Documentation · Examples · Slack · PyPI Packages · npm Packages
traceAI provides drop-in OpenTelemetry instrumentation for popular AI frameworks and LLM providers. It automatically captures traces, spans, and attributes from your AI workflows, whether you're using OpenAI, Anthropic, LangChain, LlamaIndex, or 20+ other frameworks.
- Zero-config tracing for OpenAI, Anthropic, LangChain, LlamaIndex, and more
- OpenTelemetry-native: works with any OTel-compatible backend (Jaeger, Datadog, Future AGI, etc.)
- Semantic conventions for LLM calls, agents, tools, and retrieval
- Python + TypeScript support with consistent APIs
- Key Features
- Quickstart
- Supported Frameworks
- Compatibility Matrix
- Architecture
- Contributing
- Resources
- Connect With Us
| Feature | Description |
|---|---|
| Standardized Tracing | Maps AI workflows to consistent OpenTelemetry spans & attributes |
| Zero-Config Setup | Drop-in instrumentation with minimal code changes |
| Multi-Framework | 20+ integrations across Python & TypeScript |
| Vendor Agnostic | Works with any OpenTelemetry-compatible backend |
| Rich Context | Captures prompts, completions, tokens, model params, tool calls, and more |
| Production Ready | Async support, streaming, error handling, and performance optimizations |
1. Install

```bash
pip install traceai-openai
```

2. Instrument your application

```python
import os

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_openai import OpenAIInstrumentor
import openai

# Set up environment variables
os.environ["FI_API_KEY"] = "<your-api-key>"
os.environ["FI_SECRET_KEY"] = "<your-secret-key>"
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Register tracer provider
trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="my_ai_app",
)

# Instrument OpenAI
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

# Use OpenAI as normal - tracing happens automatically!
response = openai.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

> **Tip:** Swap `traceai-openai` for any supported framework (e.g., `traceai-langchain`, `traceai-anthropic`)
1. Install

```bash
npm install @traceai/openai @traceai/fi-core
```

2. Instrument your application

```typescript
import { register, ProjectType } from "@traceai/fi-core";
import { OpenAIInstrumentation } from "@traceai/openai";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import OpenAI from "openai";

// Register tracer provider
const tracerProvider = register({
  projectName: "my_ai_app",
  projectType: ProjectType.OBSERVE,
});

// Register OpenAI instrumentation (before creating the client!)
registerInstrumentations({
  tracerProvider,
  instrumentations: [new OpenAIInstrumentation()],
});

// Use OpenAI as normal - tracing happens automatically!
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await openai.chat.completions.create({
  model: "gpt-4.1",
  messages: [{ role: "user", content: "Hello!" }],
});
```

> **Tip:** Works with Anthropic, LangChain, the Vercel AI SDK, and more TypeScript frameworks
| Category | Supported Frameworks | Python | TypeScript |
|---|---|---|---|
| LLM Providers | OpenAI | ✅ | ✅ |
| | Anthropic | ✅ | ✅ |
| | AWS Bedrock | ✅ | ✅ |
| | Google Vertex AI | ✅ | - |
| | Google Generative AI | ✅ | - |
| | Mistral AI | ✅ | - |
| | Groq | ✅ | - |
| | LiteLLM | ✅ | - |
| Agent Frameworks | LangChain | ✅ | ✅ |
| | LlamaIndex | ✅ | ✅ |
| | CrewAI | ✅ | - |
| | AutoGen | ✅ | - |
| | OpenAI Agents | ✅ | - |
| | Smol Agents | ✅ | - |
| | Mastra | - | ✅ |
| Tools & Libraries | Haystack | ✅ | - |
| | DSPy | ✅ | - |
| | Guardrails AI | ✅ | - |
| | Instructor | ✅ | - |
| | Portkey | ✅ | - |
| | Pipecat | ✅ | - |
| | Vercel AI SDK | - | ✅ |
| Standards | Model Context Protocol (MCP) | ✅ | ✅ |

Legend: ✅ Supported | - Not yet available
traceAI is built on top of OpenTelemetry and follows standard OTel instrumentation patterns. This means you get:
**Full OpenTelemetry Compatibility**
- Works with any OTel-compatible backend
- Standard OTLP exporters (HTTP/gRPC)
- Compatible with existing OTel setups
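Because the exporters are stock OTLP, backends can often be targeted through the OpenTelemetry SDK's standard environment variables instead of code. A minimal sketch, assuming you construct exporters via the standard OTel SDK (the endpoint and header values below are placeholders; whether traceAI's `register()` helper also honors these variables depends on its own configuration):

```shell
# Standard OpenTelemetry SDK environment variables (placeholder values).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://collector.example.com:4318"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=your-api-key"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```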
**Bring Your Own Configuration**
You can use traceAI with your own OpenTelemetry setup:
**Python: Custom TracerProvider & Exporters**

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from traceai_openai import OpenAIInstrumentor

# Set up your own tracer provider
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)

# Add custom exporters (example with Future AGI)
# HTTP endpoint
otlp_exporter = OTLPSpanExporter(
    endpoint="https://api.futureagi.com/tracer/v1/traces",
    headers={
        "X-API-KEY": "your-api-key",
        "X-SECRET-KEY": "your-secret-key",
    },
)
# Or use gRPC: OTLPSpanExporter(endpoint="grpc://grpc.futureagi.com:443", ...)
tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))

# Instrument with traceAI
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```

**TypeScript: Custom TracerProvider, Span Processors & Headers**
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@traceai/openai";

// Create custom tracer provider
const provider = new NodeTracerProvider({
  resource: new Resource({
    "service.name": "my-ai-service",
  }),
});

// Add custom OTLP exporter with headers (example with Future AGI)
// HTTP endpoint
const exporter = new OTLPTraceExporter({
  url: "https://api.futureagi.com/tracer/v1/traces",
  headers: {
    "X-API-KEY": process.env.FI_API_KEY!,
    "X-SECRET-KEY": process.env.FI_SECRET_KEY!,
  },
});
// Or use gRPC: new OTLPTraceExporter({ url: "grpc://grpc.futureagi.com:443", ... })

// Add span processor
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();

// Register traceAI instrumentation
registerInstrumentations({
  tracerProvider: provider,
  instrumentations: [new OpenAIInstrumentation()],
});
```

**What Gets Captured**
traceAI automatically captures rich telemetry data:
- Prompts & Completions: Full request/response content
- Token Usage: Input, output, and total tokens
- Model Parameters: Temperature, top_p, max_tokens, etc.
- Tool Calls: Function/tool names, arguments, and results
- Streaming: Individual chunks with delta tracking
- Errors: Detailed error context and stack traces
- Timing: Latency at each step of the AI workflow
All data follows OpenTelemetry Semantic Conventions for GenAI.
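As a concrete illustration of the list above, here is a minimal sketch of the kind of attributes a traced LLM call might carry. The attribute keys follow the OpenTelemetry GenAI semantic conventions, which are still evolving; the exact keys traceAI emits may differ, and the values are invented for illustration:

```python
# Illustrative span attributes for a single chat-completion call.
# Keys follow the OTel GenAI semantic conventions; values are made up.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4.1",
    "gen_ai.request.temperature": 0.7,
    "gen_ai.request.max_tokens": 256,
    "gen_ai.usage.input_tokens": 12,
    "gen_ai.usage.output_tokens": 48,
}

# A backend can derive totals from the captured usage attributes.
total_tokens = (
    span_attributes["gen_ai.usage.input_tokens"]
    + span_attributes["gen_ai.usage.output_tokens"]
)
print(total_tokens)  # 60
```

Because the keys are standardized rather than vendor-specific, any OTel-compatible backend can aggregate token usage and latency across providers without custom parsing.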
We welcome contributions from the community!
Read our Contributing Guide for detailed instructions on:
- Setting up your development environment (Python & TypeScript)
- Running tests and code quality checks
- Submitting pull requests
- Adding new framework integrations
Quick Start:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Found a bug? Please open an issue with:
- Framework version
- traceAI version
- Minimal reproduction code
- Expected vs actual behavior
Requesting a feature? Please open an issue with:
- Use case or problem you're trying to solve
- Proposed solution or feature description
- Any relevant examples or mockups
- Priority level (nice-to-have vs critical)
| Resource | Description |
|---|---|
| Website | Learn more about Future AGI |
| Documentation | Complete guides and API reference |
| Cookbooks | Step-by-step implementation examples |
| Changelog | All release notes and updates |
| Contributing Guide | How to contribute to traceAI |
| Slack | Join our community |
| Issues | Report bugs or request features |
Built with ❤️ by the Future AGI team
⭐ Star us on GitHub | Report Bug | Request Feature