A single-file Python script for running Dotprompt files.
Dotprompt is a prompt template format for LLMs: a `.prompt` file bundles the prompt text and its metadata (model, schema, config) in a single file.
Quick start | Examples | Configuration | Providers | Caching | Spec compliance
```
curl -O https://raw.githubusercontent.com/chr15m/runprompt/main/runprompt
chmod +x runprompt
```

Create `hello.prompt`:
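For example, a minimal prompt file might look like this (the model name is illustrative; use any supported provider/model):

```
---
model: openai/gpt-4o
---
Say hello to {{name}}.
```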
Run it:

```
export OPENAI_API_KEY="your-key"
echo '{"name": "World"}' | ./runprompt hello.prompt
```

(You can get an OpenAI API key here: https://platform.openai.com/api-keys)
In addition to the examples below, see the tests folder for more example `.prompt` files.
```
cat article.txt | ./runprompt summarize.prompt
```

The special `{{STDIN}}` variable always contains the raw stdin as a string.
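A sketch of what such a `summarize.prompt` could look like (model name illustrative):

```
---
model: openai/gpt-4o
---
Summarize the following article in three sentences:

{{STDIN}}
```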
Extract structured data using an output schema:
```
echo "John is a 30 year old teacher" | ./runprompt extract.prompt
# {"name": "John", "age": 30, "occupation": "teacher"}
```

The schema uses the Picoschema format: each field is written as `field: type, description`, and fields ending with `?` are optional.
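An `extract.prompt` for the example above might be written like this (the model name and field descriptions are illustrative):

```
---
model: openai/gpt-4o
output:
  format: json
  schema:
    name: string, the person's name
    age?: integer, age in years
    occupation?: string, their job
---
Extract structured details about the person described here:

{{STDIN}}
```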
Pipe structured output between prompts:
```
echo "John is 30" | ./runprompt extract.prompt | ./runprompt generate-bio.prompt
```

The JSON output from the first prompt becomes template variables in the second.
Make .prompt files directly executable with a shebang:
```
chmod +x hello.prompt
echo '{"name": "World"}' | ./hello.prompt
```

Note: `runprompt` must be in your PATH, or use a relative/absolute path in the shebang (e.g. `#!/usr/bin/env ./runprompt`).
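A self-contained executable prompt file could look like this (assuming `runprompt` is on your PATH; the model name is illustrative):

```
#!/usr/bin/env runprompt
---
model: openai/gpt-4o
---
Say hello to {{name}}.
```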
Override frontmatter values from the command line:
```
./runprompt --model anthropic/claude-haiku-4-20250514 hello.prompt
./runprompt --output.format json extract.prompt
```

Note: CLI overrides set frontmatter values (model, config, output format, etc.), not template variables. To pass template variables, use stdin:
```
echo '{"name": "Alice"}' | ./runprompt hello.prompt
```

Templates use Handlebars syntax. Supported features:
- Variable interpolation: `{{variableName}}`, `{{object.property}}`
- Comments: `{{! this is a comment }}`
- Iteration: `{{#each items}}...{{/each}}` with `@index`, `@first`, `@last`, `@key`
- Sections: `{{#key}}...{{/key}}` (renders if truthy)
- Inverted sections: `{{^key}}...{{/key}}` (renders if falsy)
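For example, a template body exercising these features (the variable names are illustrative):

```
{{! greet every user in the input }}
{{#each users}}
Hello {{name}}{{#admin}} (admin){{/admin}}!
{{/each}}
{{^users}}
No users were provided.
{{/users}}
```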
Set API keys for your providers:
```
export ANTHROPIC_API_KEY="..."   # https://console.anthropic.com/settings/keys
export OPENAI_API_KEY="..."      # https://platform.openai.com/api-keys
export GOOGLE_API_KEY="..."      # https://aistudio.google.com/app/apikey
export OPENROUTER_API_KEY="..."  # https://openrouter.ai/settings/keys
```

Use `OPENAI_BASE_URL` or `BASE_URL` to point at any OpenAI-compatible endpoint:
```
# Use Ollama
export OPENAI_BASE_URL="http://localhost:11434/v1"
./runprompt hello.prompt

# Or via CLI flag
./runprompt --base-url http://localhost:11434/v1 hello.prompt
```

When a custom base URL is set, the provider prefix in the model string is ignored and the OpenAI-compatible API format is used.
Override any frontmatter value via environment variables prefixed with RUNPROMPT_:
```
export RUNPROMPT_MODEL="anthropic/claude-haiku-4-20250514"
./runprompt hello.prompt
```

This is useful for setting defaults across multiple prompt runs.
Use `-v` to see request/response details:

```
./runprompt -v hello.prompt
```

Models are specified as `provider/model-name`:
| Provider | Model format | API key |
|---|---|---|
| Anthropic | `anthropic/claude-sonnet-4-20250514` | [Get key](https://console.anthropic.com/settings/keys) |
| OpenAI | `openai/gpt-4o` | [Get key](https://platform.openai.com/api-keys) |
| Google AI | `googleai/gemini-1.5-pro` | [Get key](https://aistudio.google.com/app/apikey) |
| OpenRouter | `openrouter/anthropic/claude-sonnet-4-20250514` | [Get key](https://openrouter.ai/settings/keys) |
OpenRouter provides access to models from many providers (Anthropic, Google, Meta, etc.) through a single API key.
Enable response caching to avoid redundant API calls during development:
```
# Enable caching with -c or --cache
./runprompt --cache hello.prompt

# Second run with same input uses cached response
./runprompt --cache hello.prompt
```

You can also enable the cache across a whole pipeline with the environment variable:

```
export RUNPROMPT_CACHE=1; echo "..." | ./runprompt a.prompt | ./runprompt b.prompt
```

Cached responses are stored in `~/.cache/runprompt/` (or `$XDG_CACHE_HOME/runprompt/`), keyed on the inputs applied to the template and the frontmatter.
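The exact cache key derivation is internal to the script, but content-addressed caching of this kind can be sketched as hashing the resolved frontmatter together with the rendered prompt and using the digest as the cache filename (the function and key layout below are illustrative, not runprompt's actual code):

```python
import hashlib
import json
import os

def cache_path(frontmatter: dict, rendered_prompt: str, cache_dir: str) -> str:
    """Derive a deterministic cache file path from everything that
    affects the response: the frontmatter (model, config, ...) and
    the prompt after template variables have been applied."""
    key_material = json.dumps(
        {"frontmatter": frontmatter, "prompt": rendered_prompt},
        sort_keys=True,  # stable ordering so equal inputs hash equally
    ).encode("utf-8")
    digest = hashlib.sha256(key_material).hexdigest()
    return os.path.join(cache_dir, digest + ".json")
```

Same inputs map to the same path (a cache hit); changing the model, config, or any template variable changes the digest and forces a fresh API call.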
See --help for more information.
This is a minimal implementation of the Dotprompt specification. Not yet supported:
- Multi-message prompts (`{{role}}`, `{{history}}`)
- Conditionals (`{{#if}}`, `{{#unless}}`, `{{else}}`)
- Helpers (`{{json}}`, `{{media}}`, `{{section}}`)
- Model config (`temperature`, `maxOutputTokens`, etc.)
- Partials (`{{>partialName}}`)
- Nested Picoschema (objects, arrays of objects, enums)
See TODO.md for the full roadmap.