A command-line utility built on the new Logfire MCP server, enabling natural language queries through a standalone MCP client. This gives you unrestricted access to your telemetry data: no more dealing with Claude chat's output limits, slow speeds, or chat quota consumption.
Unlike other tracing services, Logfire gives you a million free traces a month, making it:
💸 FREE 💸
When developing agent systems or LLM applications, understanding what prompts are being sent and what responses are being received is crucial. This utility allows you to:
- Capture the full context of LLM interactions, including complete prompts and responses.
- Save telemetry data to local files without chat interface restrictions.
- Query telemetry data using natural language.
- Access historical data not visible in most web interfaces.
- Standalone MCP Client: Connect directly to the Logfire MCP server without Claude's limitations. Retrieve complete LLM prompts and responses for deeper debugging.
- Natural Language Queries: Query your telemetry in plain English. CLI parameters are optional; the natural language interface covers most needs.
- Free, Enterprise-Grade Telemetry: Built by the creators of Pydantic and PydanticAI, Logfire offers robust observability tools at no cost.
- Improved JSON Output: Results are formatted for readability: nested structures are indented, and line breaks in content fields are preserved.
- Flexible Time Ranges: Easily change how far back to search telemetry. Default is 5 minutes; customizable with CLI flags.
Logfire MCP is a standardized Model Context Protocol server built by the Pydantic team. It allows programmatic querying of your application’s telemetry data.
While you can use Logfire MCP via Claude Desktop, this utility bypasses the slow, token-limited interface by using a direct MCP connection and writing complete results to file.
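For comparison, the conventional route registers the server in Claude Desktop's MCP configuration. A sketch of what that entry might look like (the `uvx logfire-mcp` command and the token environment variable are assumptions based on typical MCP server setups; check the Logfire MCP docs for the exact invocation):

```json
{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp"],
      "env": { "LOGFIRE_READ_TOKEN": "your-read-token" }
    }
  }
}
```

This utility skips that layer entirely and talks to the same server directly.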
- Python 3.12+
- UV package manager
- Logfire Read Token
- OpenAI API Key
Set these in your environment or in a .env file:
- LOGFIRE_READ_TOKEN: Your Logfire access token
- OPENAI_API_KEY: For LLM query analysis
git clone https://github.com/kdragan/Logfire_utility.git
cd Logfire_utility
# Create and activate virtual environment
uv venv -p 3.12
# Initialize project
uv init
# Install dependencies
uv add httpx openai pydantic "mcp[cli]" python-dotenv
# Or use requirements.txt
uv pip install -r requirements.txt
Run the script with your natural language query:
uv run python get_logfire_traces.py "Your natural language query here"
- --minutes or -m: Number of minutes to look back (default: 5)
- --output or -o: Output file path (default: auto-generated)
- --token or -t: Logfire read token (overrides environment variable)
- --openai-key or -k: OpenAI API key (overrides environment variable)
- --analyze or -a: Analyze results with LLM (default: False)
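The flags above map naturally onto argparse. A hedged sketch of how the parser might be wired up (flag names and defaults come from this README; the function name and help strings are assumptions, not the utility's actual code):

```python
import argparse

def build_parser():
    """Sketch of a parser matching the documented CLI flags."""
    parser = argparse.ArgumentParser(
        description="Query Logfire telemetry in natural language")
    parser.add_argument("query", help="Natural language query")
    parser.add_argument("--minutes", "-m", type=int, default=5,
                        help="Number of minutes to look back (default: 5)")
    parser.add_argument("--output", "-o", default=None,
                        help="Output file path (default: auto-generated)")
    parser.add_argument("--token", "-t", default=None,
                        help="Logfire read token (overrides env var)")
    parser.add_argument("--openai-key", "-k", default=None,
                        help="OpenAI API key (overrides env var)")
    parser.add_argument("--analyze", "-a", action="store_true",
                        help="Analyze results with LLM (default: False)")
    return parser
```

Note that --analyze is a boolean switch (store_true), while the others take values.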
This is the most valuable use case: retrieving the full LLM prompts and responses from your application:
uv run python get_logfire_traces.py "Get Logfire traces from the last 10 minutes that include LLM interactions. Include the complete attributes field with request_data.messages and response_data."
This query retrieves all LLM interactions, including:
- The exact prompts sent to the LLM
- The complete responses received
- Metadata about the code that initiated the request
- Timing information
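To make the structure concrete, a returned record might look roughly like this. The request_data.messages and response_data fields come from the query above; every other field name and value here is illustrative, not a guaranteed Logfire schema:

```json
{
  "trace_id": "0af7651916cd43dd8448eb211c80319c",
  "created_at": "2025-04-06T12:56:29Z",
  "message": "Chat Completion with 'gpt-4o'",
  "attributes": {
    "request_data": {
      "messages": [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "Summarize today's errors." }
      ]
    },
    "response_data": {
      "message": { "role": "assistant", "content": "Here is the summary..." }
    }
  }
}
```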
- Get recent exceptions:
  uv run python get_logfire_traces.py "Show me all exceptions from the last hour"
- Get specific fields only:
  uv run python get_logfire_traces.py "Get a summary of Logfire traces from the last 2 hours. Only include trace_id, created_at, and message fields." --minutes 120
- Filter by severity:
  uv run python get_logfire_traces.py "Show me all ERROR level logs from the last 24 hours" --minutes 1440
- Performance analysis:
  uv run python get_logfire_traces.py "Show me the most recent 10 traces with their duration values" --minutes 60
Results are saved as JSON files with improved readability, including proper formatting of nested structures and preservation of line breaks in content fields. The output file path is displayed after the query completes.
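As a rough sketch of that formatting step (the function name is hypothetical, and the utility's actual implementation may differ): strict JSON escapes newlines inside strings as \n, so preserving line breaks in content fields implies a post-processing pass. The naive unescape below makes the file easier to read but no longer strictly parseable JSON, a trade-off a human-oriented output file can accept:

```python
import json

def format_results(records):
    """Indent nested structures, then unescape newlines in string values
    so multi-line prompts/responses read naturally. Simplification: this
    also touches literal backslash-n sequences in the data."""
    text = json.dumps(records, indent=2, ensure_ascii=False)
    return text.replace("\\n", "\n")
```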
Example output path:
Results saved to C:\path\to\logfire_arbitrary_query_20250406_125629.json