Logfire Traces Utility

A command-line utility built on the new Logfire MCP server, enabling natural language queries through a standalone MCP client. This gives you unrestricted access to your telemetry data: no more dealing with Claude chat's output limits, slow speeds, or consuming your chat quota.

Unlike other tracing services, Logfire gives you a million free traces a month, making it:

💸 FREE 💸


Why Do You Need This?

When developing agent systems or LLM applications, understanding what prompts are being sent and what responses are being received is crucial. This utility allows you to:

  • Capture the full context of LLM interactions, including complete prompts and responses.
  • Save telemetry data to local files without chat interface restrictions.
  • Query telemetry data using natural language.
  • Access historical data not visible in most web interfaces.

🔑 Key Features

  1. Standalone MCP Client: Connect directly to the Logfire MCP server without Claude's limitations. Retrieve complete LLM prompts and responses for deeper debugging.

  2. Natural Language Queries: Query your telemetry in plain English. CLI parameters are optional; the natural language interface covers most needs.

  3. Free, Enterprise-Grade Telemetry: Built by the creators of Pydantic and PydanticAI, Logfire offers robust observability tools at no cost.

  4. Improved JSON Output: Results are formatted for readability: nested structures are indented, and line breaks in content fields are preserved.

  5. Flexible Time Ranges: Easily change how far back to search telemetry. Default is 5 minutes; customizable with CLI flags.


📦 About Logfire MCP

Logfire MCP is a standardized Model Context Protocol server built by the Pydantic team. It allows programmatic querying of your application’s telemetry data.

While you can use Logfire MCP via Claude Desktop, this utility bypasses the slow, token-limited interface by using a direct MCP connection and writing complete results to file.


🛠️ Installation

Prerequisites

Environment Variables

Set these in your environment or .env file:

  • LOGFIRE_READ_TOKEN: Your Logfire access token
  • OPENAI_API_KEY: For LLM query analysis
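For example, a minimal .env file (values are placeholders):

```
# .env
LOGFIRE_READ_TOKEN=your-logfire-read-token
OPENAI_API_KEY=your-openai-api-key
```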

Setup with UV

git clone https://github.com/Kjdragan/Logfire_utility.git
cd Logfire_utility

# Create and activate the virtual environment
uv venv -p 3.12
source .venv/bin/activate  # on Windows: .venv\Scripts\activate

# Initialize project
uv init

# Install dependencies (quote mcp[cli] so the shell doesn't expand the brackets)
uv add httpx openai pydantic "mcp[cli]" python-dotenv

# Or use requirements.txt
uv pip install -r requirements.txt

Usage

Run the script with your natural language query:

uv run python get_logfire_traces.py "Your natural language query here"

Command-line Options

  • --minutes or -m: Number of minutes to look back (default: 5)
  • --output or -o: Output file path (default: auto-generated)
  • --token or -t: Logfire read token (overrides environment variable)
  • --openai-key or -k: OpenAI API key (overrides environment variable)
  • --analyze or -a: Analyze results with LLM (default: False)
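The options above map naturally onto an argparse interface. The sketch below is a hypothetical reconstruction of the CLI from the flags documented here, not the utility's actual source:

```python
import argparse

def build_parser():
    # Flag names and defaults follow the README's option list.
    p = argparse.ArgumentParser(
        description="Query Logfire traces in natural language")
    p.add_argument("query", help="Natural language query")
    p.add_argument("--minutes", "-m", type=int, default=5,
                   help="Number of minutes to look back (default: 5)")
    p.add_argument("--output", "-o", default=None,
                   help="Output file path (default: auto-generated)")
    p.add_argument("--token", "-t", default=None,
                   help="Logfire read token (overrides LOGFIRE_READ_TOKEN)")
    p.add_argument("--openai-key", "-k", default=None,
                   help="OpenAI API key (overrides OPENAI_API_KEY)")
    p.add_argument("--analyze", "-a", action="store_true",
                   help="Analyze results with an LLM (default: off)")
    return p

args = build_parser().parse_args(["Show me errors", "--minutes", "60"])
```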

Example Queries

Debugging LLM Interactions (Recommended)

This is the most valuable use case: retrieving the full LLM prompts and responses from your application:

uv run python get_logfire_traces.py "Get Logfire traces from the last 10 minutes that include LLM interactions. Include the complete attributes field with request_data.messages and response_data."

This query retrieves all LLM interactions, including:

  • The exact prompts sent to the LLM
  • The complete responses received
  • Metadata about the code that initiated the request
  • Timing information

Other Useful Queries

  1. Get recent exceptions:

    uv run python get_logfire_traces.py "Show me all exceptions from the last hour"
  2. Get specific fields only:

    uv run python get_logfire_traces.py "Get a summary of Logfire traces from the last 2 hours. Only include trace_id, created_at, and message fields." --minutes 120
  3. Filter by severity:

    uv run python get_logfire_traces.py "Show me all ERROR level logs from the last 24 hours" --minutes 1440
  4. Performance analysis:

    uv run python get_logfire_traces.py "Show me the most recent 10 traces with their duration values" --minutes 60

Output Format

Results are saved as JSON files with improved readability, including proper formatting of nested structures and preservation of line breaks in content fields. The output file path is displayed after the query completes.
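The formatting described above can be sketched with the standard json module. The newline expansion step is an assumption about how "line breaks are preserved"; note that the expanded form is for human reading and is no longer strictly parseable JSON:

```python
import json

def format_results(results) -> str:
    # Indent nested structures for readability; ensure_ascii=False
    # keeps non-ASCII content legible instead of \uXXXX-escaped.
    return json.dumps(results, indent=2, ensure_ascii=False)

sample = {
    "message": "line one\nline two",
    "attributes": {"model": "some-model"},
}

strict = format_results(sample)
# Expand escaped newlines for display only (breaks strict JSON parsing).
readable = strict.replace("\\n", "\n")
print(readable)
```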

Example output path:

Results saved to C:\path\to\logfire_arbitrary_query_20250406_125629.json
