A simple command line tool to generate text using any LLM (via litellm) based on ready-made templated prompts.
### `heygpt stream`
To provide a simple command line tool to generate text using any LLM (OpenAI, Anthropic, Gemini, etc.) based on ready-made templated prompts, in both a CLI as well as a web-UI interface.
- There is an optional dependency on `fzf` for interactive prompt selection. You can install it using your package manager; refer to the fzf README for more info on how to install `fzf`.
```shell
pip install heygptcli
heygpt --help
```

For debug logs, use `export LOG_LEVEL=DEBUG` (or `set LOG_LEVEL=DEBUG` on Windows).
You will need an API key from your LLM provider to use heygpt. Supported providers include OpenAI, Anthropic, Google (Gemini), and many more via litellm.
```shell
# gpt custom prompts (optional)
GPT_PROMPT_URL=<url-to-your-prompt-file>

# API Configuration (via litellm)
OPENAI_API_KEY=<your-api-key>  # Works with OpenAI and OpenAI-compatible endpoints
MODEL=gpt-4o                   # optional model name (default: gpt-3.5-turbo)
# Supports: gpt-4o, claude-3-5-sonnet, gemini-pro, etc.
```

To configure these, you can use the `heygpt config` command:
```
❯ heygpt config --help

 Usage: heygpt config [OPTIONS]

 Configure heygpt.

╭─ Options ───────────────────────────────────────────────────────╮
│ --prompt-file   TEXT  Prompt file path.                         │
│ --prompt-url    TEXT  Prompt file url.                          │
│ --api-key       TEXT  API key (supports any LLM provider).      │
│ --api-endpoint  TEXT  API endpoint URL.                         │
│ --model         TEXT  LLM model name.                           │
│ --help                Show this message and exit.               │
╰─────────────────────────────────────────────────────────────────╯
```

The default model for this tool is `gpt-3.5-turbo`. You can change it to any model supported by litellm (e.g., `gpt-4o`, `claude-3-5-sonnet-20241022`, `gemini-pro`, etc.).
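As a rough illustration of the default described above, an explicit `MODEL` setting would override the `gpt-3.5-turbo` fallback. The helper below is a hypothetical sketch of that lookup, not heygpt's actual code:

```python
import os

# Hypothetical sketch of how a litellm-based tool such as heygpt might pick
# the model to use: an explicit MODEL setting wins, otherwise fall back to
# the documented default, gpt-3.5-turbo. `resolve_model` is illustrative only.
def resolve_model(env=None):
    env = os.environ if env is None else env
    return env.get("MODEL", "gpt-3.5-turbo")

print(resolve_model({"MODEL": "gpt-4o"}))  # explicit setting wins
print(resolve_model({}))                   # falls back to the default
```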
```shell
heygpt config --api-key <your-api-key>
heygpt config --model gpt-4o
```

### Prompt YAML format
```yaml
# ~/path/to/prompts.yaml
- Title: Fix Grammar
  Command:
    - role: user
      content: |
        Review the provided text and correct any grammatical errors. Ensure that the text is clear, concise, and free of any spelling mistakes.
```

To use your saved prompts, run:
```shell
heygpt config --prompt-file ~/path/to/prompts.yaml
```
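To make the prompt format concrete, the sketch below transcribes the `Fix Grammar` entry from the YAML above into plain Python and shows one plausible way a `Title`/`Command` entry could be turned into the chat `messages` list that litellm/OpenAI-style APIs accept. `build_messages` is a hypothetical helper, not heygpt's internals:

```python
# The `Fix Grammar` prompt from the YAML above, transcribed as a Python dict.
prompt = {
    "Title": "Fix Grammar",
    "Command": [
        {
            "role": "user",
            "content": "Review the provided text and correct any grammatical errors.",
        }
    ],
}

# Hypothetical helper: start from the template's messages, then append the
# user's actual task as a final user message.
def build_messages(prompt, task):
    messages = [dict(m) for m in prompt["Command"]]
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages(prompt, "Their going to the park tomorow.")
print(len(msgs))         # 2: the template instruction plus the user's task
print(msgs[-1]["role"])  # user
```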
Here, `--prompt-url` and `--prompt-file` are optional; use them if you want to use your own custom prompts. To provide the URL of a YAML file containing your prompts:
```shell
# remote yaml file
heygpt config --prompt-url <url-to-your-prompt-file.yaml>
```

Note: default-prompts.yaml is the default YAML used for prompts; to use your own prompts, you need to follow the same format as in that file.
You can also use a local YAML file by providing a relative path to it:
```shell
# local yaml file
heygpt config --prompt-file ~/path/to/prompts.yaml
```

```yaml
# ~/.config/heygpt/config.yaml
# You can manually add a list of `available_models` in the config file for easy access in the streamlit UI.
- api_key: sk-proj-********
  model: gpt-4o
  available_models:
    - gpt-4o
    - chatgpt-4o-latest
    - gpt-4o-mini
    - gpt-3.5-turbo
    - claude-3-5-sonnet-20241022
    - gemini-pro
  prompt_file: /home/user/.config/heygpt/prompt.yaml
```

### `heygpt ask`
- `heygpt` will ask you to choose a prompt from a list of available templates.
- After that, it will ask you to enter your query/task and will provide you with the result based on the type of prompt you selected.
- You can specify a different model using the `--model` flag:
```shell
heygpt ask --model claude-3-5-sonnet-20241022
heygpt ask --model gemini-pro
```

- For asking queries without any prompt template, you can use the `--no-prompt` flag.

```shell
heygpt ask --no-prompt
```

### `heygpt wisper`

```shell
heygpt wisper ../path/to/audio.mp3
```
- You can pipe standard input to `heygpt ask` as well:

```shell
echo "why sky is blue" | heygpt ask --no-prompt
```
Another way to use it is to pipe the `wisper` audio-to-text output into `heygpt ask`:

```shell
heygpt wisper ../path/to/audio.mp3 | heygpt ask
```
This will start a streamlit server on localhost.

```shell
heygpt api
```

This will start a fastapi server on localhost.