A desktop application for generating and enhancing prompts for Stable Diffusion. It leverages local AI models through Ollama to provide a rich, interactive, and creative environment for prompt engineering.
- Template-Based Generation: Create complex prompts using simple templates and `__wildcard__` placeholders.
- Live Preview & Interaction:
- Instantly see a generated prompt and click on wildcard-generated text to swap it with other options from the source file.
- Automatically detects missing wildcards used in your template and provides clickable links to generate them on the fly.
- AI-Powered Enhancement: Use a local LLM to enhance your base prompts, adding detail, style, and quality keywords.
- Automatic Variations: Generate cinematic, artistic, and photorealistic variations of your enhanced prompt with a single click.
- Interactive Template Editor:
- Right-click to "Brainstorm with AI" to get suggestions and refine your template in a dedicated chat window.
- Double-click any `__wildcard__` to immediately open it in the Wildcard Manager.
- Select any text and right-click to instantly turn it into a new wildcard file.
- Drag and drop `__wildcard__` tags to easily reorder your prompt.
- Get instant visual feedback with tooltips for `requires` clauses that are out of order.
- Advanced AI Brainstorming:
- A dedicated chat window to brainstorm ideas. Load existing wildcards or templates into the chat to have the AI help you refine, expand, and improve them.
- Generate new wildcard files, templates from a concept, templates from all your existing wildcards, or even linked wildcard files from scratch.
- The AI automatically detects when a generated template or wildcard requires new wildcards, and provides clickable links to generate them.
- Select any text in the conversation and have the AI rewrite it based on your instructions ("make it more poetic", "add more technical details", etc.).
- Full-Featured Wildcard Management:
- A powerful structured editor to easily manage complex choices with weights, tags, requirements, and includes.
- Choices are automatically sorted alphabetically when a file is loaded for a consistent editing experience.
- Advanced tools: Find & Replace, Find Similar Choices (fuzzy matching), and Find Duplicates.
- Intelligent Refactoring: When you rename a wildcard or change a choice's value, the app will offer to scan your entire project and automatically update all other wildcards that depend on it.
- Merge multiple wildcard files into a new one, intelligently combining their content.
- Scan your entire project to find unused wildcard files that can be archived or deleted.
- Interactive Validator: Scan all files for errors (e.g., a `requires` clause pointing to a non-existent value). Double-click an error to jump directly to the problematic file and choice, or right-click to fix common issues automatically.
- Use AI to suggest new choices for a wildcard, or to automatically add weights, tags, and other metadata to your existing choices.
- SFW/NSFW Workflows: Keep your SFW and NSFW content completely separate. The app dynamically switches template, wildcard, and system prompt directories.
- Customizable System Prompts: Edit the underlying instructions given to the AI for enhancement and variations to tailor its output to your needs.
- History Viewer: Browse, search, and reuse all your past enhanced prompts. Tracks which template was used for each generation and allows you to mark favorites.
- Seed Management: Easily switch between a fixed seed for reproducible results and random seeds for variety.
- Modern UI: Features a clean, modern interface with light and dark themes and adjustable font sizes.
- Resource Management: Automatically unloads AI models from VRAM when they are no longer in use by any window, helping to manage system resources efficiently.
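Seeded generation works the way reproducible pseudo-randomness usually does: the same seed always produces the same sequence of wildcard picks. A minimal sketch of the idea (illustrative only, not the app's actual code; the function and variable names are hypothetical):

```python
import random

def pick_wildcards(seed: int, options: list[str], count: int = 3) -> list[str]:
    """Draw `count` options using a dedicated RNG so the seed stays isolated."""
    rng = random.Random(seed)  # fixed seed -> identical picks on every run
    return [rng.choice(options) for _ in range(count)]

options = ["castle", "forest", "harbor", "desert"]
fixed_a = pick_wildcards(1234, options)
fixed_b = pick_wildcards(1234, options)  # same seed, same result
```

With a random seed instead of a fixed one, each generation draws a fresh sequence, which is what gives you variety between previews.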
- Python 3.10+
- Ollama installed and running on your system.
- At least one LLM pulled in Ollama (e.g., `qwen:7b`, `llama3:8b`). `qwen` models are highly recommended for their creative capabilities.
- Python libraries as listed in `requirements.txt`.
- Clone the repository:

  ```bash
  git clone https://github.com/Akashijk/Prompt-Tool
  cd Prompt-Tool
  ```

- Install Ollama: Follow the instructions on ollama.com to install and start the Ollama server.
- Pull an AI Model: Pull a model to be used for enhancement and brainstorming.

  ```bash
  ollama run qwen:7b
  ```
- Set up a Python Environment (Recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```
- Install Dependencies: Install the required Python packages using the provided `requirements.txt` file.

  ```bash
  pip install -r requirements.txt
  ```
- Directory Structure: The application uses the following directory structure within the project root. You can create these folders and start adding your own `.txt` files.

  ```text
  /
  ├── templates/       (.txt files)
  │   ├── sfw/
  │   └── nsfw/
  ├── wildcards/       (.json files)
  │   ├── nsfw/        (for nsfw-only wildcards)
  │   └── ...          (shared wildcards go in the root)
  └── system_prompts/  (.txt files)
      ├── sfw/
      └── nsfw/
  ```

  - `templates/`: Contains your prompt templates, organized by workflow.
  - `wildcards/`: Contains your wildcard files in `.json` format. This powerful format supports simple lists, weighted randomization (`"weight": 5`), context-aware choices (`"requires": {"key": "value"}`), dynamic wildcard inclusion (`"includes": ["wildcard_name"]`), and descriptive tags (`"tags": ["tag1"]`) for future filtering and organization. The root folder is for shared wildcards, and the `nsfw` subfolder is for NSFW-specific ones.
  - `system_prompts/`: The application will automatically create default system prompts here. You can edit them via the UI (Tools -> System Prompt Editor).
- Run the application:

  ```bash
  python main.py  # Assuming the main script is named main.py
  ```

  Verbose Mode: For debugging or to see the raw output from the AI during brainstorming tasks, you can run the application with the `--verbose` or `-v` flag:

  ```bash
  python main.py --verbose
  ```
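To make the wildcard `.json` format concrete, here is a small, hypothetical wildcard file. The per-choice fields (`weight`, `requires`, `includes`, `tags`) are the ones described above; the surrounding structure and values are illustrative, so check a file generated by the app for the exact schema.

```json
{
  "choices": [
    "soft ambient light",
    {"value": "golden hour glow", "weight": 5, "tags": ["warm"]},
    {"value": "neon rim lighting", "requires": {"setting": "cyberpunk"}},
    {"value": "candlelight", "includes": ["flame_color"]}
  ]
}
```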
-
Main Window Workflow:
- Workflow: Choose `SFW` or `NSFW` from the "Workflow" menu. This changes the content available.
- Model: Select an active Ollama model from the dropdown.
- Template: Select a template file. The content will appear in the editor.
- Generate: Click "Generate Next Preview" to see a prompt with wildcards filled in.
- Interact: In the preview pane, click on any highlighted text to see a menu of other options from that wildcard file. If your template uses a wildcard that doesn't exist, a link will appear below the preview allowing you to generate it.
- Enhance: When you're happy with the preview, click "Enhance This Prompt". A new window will appear showing the AI's enhanced version and any selected variations.
- AI Brainstorming (`Tools -> AI Brainstorming`):
- Chat directly with the AI for general ideas or to generate new files from scratch.
- Load an existing wildcard or template (via the Wildcard Manager or Template Editor context menu) to have a focused, context-aware conversation about improving it.
- Use the "Generate..." buttons to have the AI create new content, including templates from a concept or from all of your existing wildcards.
- When the AI generates content that uses a new, non-existent wildcard, it will appear as a clickable link in the chat history, allowing you to generate it instantly.
- Right-click on text in the conversation to "Rewrite Selection with AI...".
- Wildcard Manager (`Tools -> Wildcard Manager`):
- View all wildcard files for the current workflow.
- Select a file to view and edit its contents.
- Use the structured editor to manage complex choices, or switch to the raw text editor for direct JSON editing.
- Use the "Suggest Choices (AI)" button to have the AI generate new items for your list.
- Use the "Refine Choices (AI)" button to have the AI analyze your existing choices and add metadata like weights, tags, and requirements.
- Use the full suite of tools to find duplicates, merge files, or validate your entire project for errors.
- Click "Brainstorm with AI" to send the current wildcard list to the chat window for refinement.
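Tools like "Find Similar Choices" rely on fuzzy string matching. A minimal sketch of how such matching can be done with Python's standard library (the function name and threshold are illustrative, not the app's actual implementation):

```python
from difflib import SequenceMatcher

def find_similar_choices(choices, threshold=0.8):
    """Return pairs of choices whose similarity ratio meets the threshold."""
    similar = []
    for i, a in enumerate(choices):
        for b in choices[i + 1:]:
            # Compare case-insensitively so "Golden Hour" matches "golden hour"
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if ratio >= threshold:
                similar.append((a, b, round(ratio, 2)))
    return similar

pairs = find_similar_choices(["golden hour", "Golden Hour ", "neon glow"])
```

Near-duplicates that differ only in case or trailing whitespace score close to 1.0, while unrelated choices fall well below the threshold.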
- Ollama Server: Change the Ollama server URL via `Tools -> Ollama Server...`. This is useful if you run Ollama on a different machine on your network.
- Theme & Font: Change the UI theme (Light/Dark) and font size under the `View` menu. Your preferences are saved automatically.
- System Prompts: Modify the core instructions given to the AI via `Tools -> System Prompt Editor`. This gives you fine-grained control over how the AI enhances prompts and creates variations.
- Frontend: Built with Python's standard `tkinter` library and themed with `sv-ttk` for a modern look and feel.
- Backend: Interacts with a local Ollama instance via its REST API. All AI processing happens on your machine.
- Workflows: The SFW/NSFW toggle is a core feature that changes the directories from which templates, wildcards, and system prompts are loaded, ensuring strict content separation.
- State Management: The application tracks model usage across all windows and automatically sends requests to Ollama to unload models from VRAM when they are no longer active, helping to manage system resources.
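Ollama's REST API lets a client request that a model be unloaded by sending a generate request with `keep_alive` set to 0. A minimal sketch of building such a request with only the standard library (the helper name is illustrative, not the app's actual code):

```python
import json
import urllib.request

def build_unload_request(base_url: str, model: str) -> urllib.request.Request:
    """Build a POST /api/generate request asking Ollama to unload a model.

    Setting keep_alive to 0 tells Ollama to evict the model from VRAM
    immediately after handling this (prompt-less) request.
    """
    payload = json.dumps({"model": model, "keep_alive": 0}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_unload_request("http://localhost:11434", "qwen:7b")
# The request would be sent with urllib.request.urlopen(req) once no
# window still uses the model.
```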
The application is designed with a clear separation of concerns, divided into two main packages: `core` and `gui`.
- `main.py`: The main entry point for the application. It handles command-line argument parsing (like `--verbose`) and initializes the `GUIApp`.
- `core/`: This package contains all the backend logic, decoupled from the user interface.
  - `prompt_processor.py`: The central orchestrator. It coordinates interactions between the template engine, Ollama client, and history manager.
  - `template_engine.py`: Manages loading, parsing, and resolving templates and wildcards, including the complex logic for `requires` and `includes`.
  - `ollama_client.py`: A dedicated client for all communication with the Ollama REST API, handling prompt enhancement, variations, and brainstorming chats.
  - `history_manager.py`: Handles reading and writing to the prompt history files (which use the `.jsonl` format).
  - `config.py`: Centralizes all application settings and paths.
  - `default_content.py`: Stores the default text for system prompts and variations, allowing for easy restoration.
- `gui/`: This package contains all the frontend `tkinter` components.
  - `gui_app.py`: The main application class (`tk.Tk`). It builds the main window and manages the lifecycle of all other tool windows.
  - `wildcard_manager.py`, `brainstorming_window.py`, etc.: Each major feature has its own dedicated window class, promoting modularity.
  - `common.py`, `theme_manager.py`, etc.: Contain reusable components like custom dialogs, tooltips, and theme management logic.
- Data Directories:
  - `templates/`, `wildcards/`, `system_prompts/`: Store user-customizable content.
  - `history/`: Stores the generated prompt history.
  - `assets/`: Contains static assets like the application icon.
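To illustrate what `template_engine.py` does conceptually, here is a minimal, hypothetical sketch of resolving `__wildcard__` placeholders with weighted random choices. All names and data are illustrative; the real engine also handles `requires` and `includes`.

```python
import random
import re

# Hypothetical in-memory wildcard data; the real app loads .json files.
# Each entry is a (value, weight) pair.
WILDCARDS = {
    "lighting": [("soft ambient light", 1), ("golden hour glow", 5)],
    "subject": [("a lone astronaut", 1), ("a sleeping fox", 1)],
}

def resolve_template(template: str, rng: random.Random) -> str:
    """Replace each __name__ placeholder with a weighted random choice."""
    def pick(match: re.Match) -> str:
        values, weights = zip(*WILDCARDS[match.group(1)])
        return rng.choices(values, weights=weights, k=1)[0]
    return re.sub(r"__(\w+)__", pick, template)

prompt = resolve_template("__subject__, __lighting__, 8k", random.Random(42))
```

Passing an explicit `random.Random(seed)` is what makes a fixed seed reproduce the same preview, as described in the Seed Management feature.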
Here are solutions to some common issues you might encounter.
This is the most common issue. It means the application cannot connect to the Ollama service on your machine.
- Solution:
- Make sure you have installed Ollama from ollama.com.
- Open your terminal or command prompt and run `ollama ps`. If the server is running, you will see a list of models. If it's not, you'll likely get a "connection refused" error.
- If it's not running, start the Ollama application on your system. On macOS and Windows, this is usually a background application. On Linux, you may need to start it with `systemctl start ollama`.
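If you prefer to check connectivity programmatically, here is a small sketch using only the standard library. The default URL matches Ollama's usual port; adjust it if you changed the server address, and note the function name is illustrative:

```python
import urllib.error
import urllib.request

def is_ollama_running(base_url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at the given base URL."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, the app will not be able to reach Ollama either.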
This error occurs when the application tries to use a model that Ollama doesn't have.
- Solution:
- Open your terminal and run `ollama list` to see which models you have installed.
- If the model you want to use is not in the list, pull it using `ollama run <model_name>` (e.g., `ollama run qwen:7b`).
- Restart the Prompt Tool GUI to refresh the model list.
The time it takes for the AI to respond depends heavily on your computer's hardware (CPU, RAM, and especially VRAM on your GPU) and the size of the model you are using.
- Tips for Better Performance:
- Use smaller models (e.g., 7B models like `qwen:7b` or `llama3:8b`) for faster responses. Larger models (13B+) are higher quality but require more resources.
- Ensure no other resource-intensive applications are running.
- If you have a dedicated GPU, make sure Ollama is configured to use it.
If you see warnings about "invalid JSON" when using the Wildcard Manager, it means a `.json` file has a syntax error.
- Solution:
- In the Wildcard Manager, the file will be loaded into the "Raw Text Editor".
- You can manually fix the syntax (e.g., add a missing comma, fix quotes).
- Alternatively, when you try to save, the application will offer to use an AI to attempt to fix the broken JSON for you.
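When fixing syntax by hand, Python's own parser can point you at the problem. A small sketch (illustrative, not the app's code) that reports the line and column of the first JSON error:

```python
import json

def check_json(text: str):
    """Return None if the text is valid JSON, else a human-readable error."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as err:
        return f"line {err.lineno}, column {err.colno}: {err.msg}"

error = check_json('{"choices": ["a" "b"]}')  # missing comma between items
```

The reported position is usually right at (or just after) the offending character, which makes missing commas and unbalanced quotes quick to find in the Raw Text Editor.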
Contributions are welcome! Please feel free to submit a pull request or open an issue for any bugs or feature requests.
This project is licensed under the MIT License. See the LICENSE file for details.