Prompt Tool GUI

A desktop application for generating and enhancing prompts for Stable Diffusion. It leverages local AI models through Ollama to provide a rich, interactive, and creative environment for prompt engineering.

Main Window Screenshot

Key Features

  • Template-Based Generation: Create complex prompts using simple templates and __wildcard__ placeholders.
  • Live Preview & Interaction:
    • Instantly see a generated prompt and click on wildcard-generated text to swap it with other options from the source file.
    • Automatically detects missing wildcards used in your template and provides clickable links to generate them on the fly.
  • AI-Powered Enhancement: Use a local LLM to enhance your base prompts, adding detail, style, and quality keywords.
  • Automatic Variations: Generate cinematic, artistic, and photorealistic variations of your enhanced prompt with a single click.
  • Interactive Template Editor:
    • Right-click to "Brainstorm with AI" to get suggestions and refine your template in a dedicated chat window.
    • Double-click any __wildcard__ to immediately open it in the Wildcard Manager.
    • Select any text and right-click to instantly turn it into a new wildcard file.
    • Drag and drop __wildcard__ tags to easily reorder your prompt.
    • Get instant visual feedback with tooltips for requires clauses that are out of order.
  • Advanced AI Brainstorming:
    • A dedicated chat window to brainstorm ideas. Load existing wildcards or templates into the chat to have the AI help you refine, expand, and improve them.
    • Generate new wildcard files, templates from a concept, templates from all your existing wildcards, or even linked wildcard files from scratch.
    • The AI automatically detects when a generated template or wildcard requires new wildcards, and provides clickable links to generate them.
    • Select any text in the conversation and have the AI rewrite it based on your instructions ("make it more poetic", "add more technical details", etc.).
  • Full-Featured Wildcard Management:
    • A powerful structured editor to easily manage complex choices with weights, tags, requirements, and includes.
    • Choices are automatically sorted alphabetically when a file is loaded for a consistent editing experience.
    • Advanced tools: Find & Replace, Find Similar Choices (fuzzy matching), and Find Duplicates.
    • Intelligent Refactoring: When you rename a wildcard or change a choice's value, the app will offer to scan your entire project and automatically update all other wildcards that depend on it.
    • Merge multiple wildcard files into a new one, intelligently combining their content.
    • Scan your entire project to find unused wildcard files that can be archived or deleted.
    • Interactive Validator: Scan all files for errors (e.g., a requires clause pointing to a non-existent value). Double-click an error to jump directly to the problematic file and choice, or right-click to fix common issues automatically.
    • Use AI to suggest new choices for a wildcard, or to automatically add weights, tags, and other metadata to your existing choices.
  • SFW/NSFW Workflows: Keep your SFW and NSFW content completely separate. The app dynamically switches template, wildcard, and system prompt directories.
  • Customizable System Prompts: Edit the underlying instructions given to the AI for enhancement and variations to tailor its output to your needs.
  • History Viewer: Browse, search, and reuse all your past enhanced prompts. Tracks which template was used for each generation and allows you to mark favorites.
  • Seed Management: Easily switch between a fixed seed for reproducible results and random seeds for variety.
  • Modern UI: Features a clean, modern interface with light and dark themes and adjustable font sizes.
  • Resource Management: Automatically unloads AI models from VRAM when they are no longer in use by any window, helping to manage system resources efficiently.
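The template-and-wildcard mechanism behind these features can be sketched in a few lines. The following is a minimal, hypothetical resolver that replaces each __wildcard__ placeholder with a random choice; the real application additionally handles weights, requires clauses, and includes, and the wildcard data below is invented for illustration:

```python
import random
import re

# Hypothetical wildcard data; the real app loads these from .json files.
WILDCARDS = {
    "hair_color": ["auburn", "silver", "jet-black"],
    "setting": ["misty forest", "neon city"],
}

def resolve_template(template: str, wildcards: dict, rng: random.Random) -> str:
    """Replace each __name__ placeholder with a random choice for that name."""
    def pick(match: re.Match) -> str:
        name = match.group(1)
        choices = wildcards.get(name)
        # Leave unknown wildcards untouched so they can be flagged as missing.
        if not choices:
            return match.group(0)
        return rng.choice(choices)
    return re.sub(r"__([A-Za-z0-9_-]+)__", pick, template)

rng = random.Random()
print(resolve_template("a portrait with __hair_color__ hair in a __setting__",
                       WILDCARDS, rng))
```

Leaving unknown placeholders intact is what makes the "missing wildcard" links possible: anything still matching `__name__` after resolution is a wildcard that needs to be created.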

Requirements

  • Python 3.10+
  • Ollama installed and running on your system.
  • At least one LLM pulled in Ollama (e.g., qwen:7b, llama3:8b). qwen models are highly recommended for their creative capabilities.
  • Python libraries as listed in requirements.txt.

Installation & Setup

  1. Clone the repository:

    git clone https://github.com/Akashijk/Prompt-Tool
    cd Prompt-Tool
  2. Install Ollama: Follow the instructions on ollama.com to install and start the Ollama server.

  3. Pull an AI Model: Pull a model to be used for enhancement and brainstorming.

    ollama pull qwen:7b
  4. Set up a Python Environment (Recommended):

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  5. Install Dependencies: Install the required Python packages using the provided requirements.txt file.

    pip install -r requirements.txt
  6. Directory Structure: The application uses the following directory structure within the project root. You can create these folders and start adding your own .txt files.

    /
    ├── templates/  (.txt files)
    │   ├── sfw/
    │   └── nsfw/
    ├── wildcards/  (.json files)
    │   ├── nsfw/ (for nsfw-only wildcards)
    │   └── ... (shared wildcards go in the root)
    └── system_prompts/ (.txt files)
        ├── sfw/
        └── nsfw/
    
    • templates/: Contains your prompt templates, organized by workflow.
    • wildcards/: Contains your wildcard files in .json format. This powerful format supports simple lists, weighted randomization ("weight": 5), context-aware choices ("requires": {"key": "value"}), dynamic wildcard inclusion ("includes": ["wildcard_name"]), and descriptive tags ("tags": ["tag1"]) for future filtering and organization. The root folder is for shared wildcards, and the nsfw subfolder is for NSFW-specific ones.
    • system_prompts/: The application will automatically create default system prompts here. You can edit them via the UI (Tools -> System Prompt Editor).
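As a concrete illustration of the fields described above, a wildcard file might look like the following. The exact top-level schema here is an assumption for illustration; only the field names (weight, tags, requires, includes) come from the description above:

```json
{
  "choices": [
    {"value": "emerald eyes", "weight": 5, "tags": ["eyes"]},
    {"value": "glowing eyes", "requires": {"species": "android"}},
    {"value": "heterochromia", "includes": ["eye_color"]}
  ]
}
```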

Usage

  1. Run the application:

    python main.py  # Assuming the main script is named main.py

    Verbose Mode: For debugging or to see the raw output from the AI during brainstorming tasks, you can run the application with the --verbose or -v flag:

    python main.py --verbose
  2. Main Window Workflow:

    • Workflow: Choose SFW or NSFW from the "Workflow" menu. This changes the content available.
    • Model: Select an active Ollama model from the dropdown.
    • Template: Select a template file. The content will appear in the editor.
    • Generate: Click "Generate Next Preview" to see a prompt with wildcards filled in.
    • Interact: In the preview pane, click on any highlighted text to see a menu of other options from that wildcard file. If your template uses a wildcard that doesn't exist, a link will appear below the preview allowing you to generate it.
    • Enhance: When you're happy with the preview, click "Enhance This Prompt". A new window will appear showing the AI's enhanced version and any selected variations.
  3. AI Brainstorming (Tools -> AI Brainstorming):

    • Chat directly with the AI for general ideas or to generate new files from scratch.
    • Load an existing wildcard or template (via the Wildcard Manager or Template Editor context menu) to have a focused, context-aware conversation about improving it.
    • Use the "Generate..." buttons to have the AI create new content, including templates from a concept or from all of your existing wildcards.
    • When the AI generates content that uses a new, non-existent wildcard, it will appear as a clickable link in the chat history, allowing you to generate it instantly.
    • Right-click on text in the conversation to "Rewrite Selection with AI...".
  4. Wildcard Manager (Tools -> Wildcard Manager):

    • View all wildcard files for the current workflow.
    • Select a file to view and edit its contents.
    • Use the structured editor to manage complex choices, or switch to the raw text editor for direct JSON editing.
    • Use the "Suggest Choices (AI)" button to have the AI generate new items for your list.
    • Use the "Refine Choices (AI)" button to have the AI analyze your existing choices and add metadata like weights, tags, and requirements.
    • Use the full suite of tools to find duplicates, merge files, or validate your entire project for errors.
    • Click "Brainstorm with AI" to send the current wildcard list to the chat window for refinement.

Configuration

  • Ollama Server: Change the Ollama server URL via Tools -> Ollama Server.... This is useful if you run Ollama on a different machine on your network.
  • Theme & Font: Change the UI theme (Light/Dark) and font size under the View menu. Your preferences are saved automatically.
  • System Prompts: Modify the core instructions given to the AI via Tools -> System Prompt Editor. This gives you fine-grained control over how the AI enhances prompts and creates variations.

How It Works

  • Frontend: Built with Python's standard tkinter library and themed with sv-ttk for a modern look and feel.
  • Backend: Interacts with a local Ollama instance via its REST API. All AI processing happens on your machine.
  • Workflows: The SFW/NSFW toggle is a core feature that changes the directories from which templates, wildcards, and system prompts are loaded, ensuring strict content separation.
  • State Management: The application tracks model usage across all windows and automatically sends requests to Ollama to unload models from VRAM when they are no longer active, helping to manage system resources.
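The Ollama interaction and VRAM unloading described above come down to plain REST calls. Here is a minimal sketch using only the standard library; the endpoint and fields are from Ollama's public API, and the model name is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_generate_payload(model: str, prompt: str, keep_alive=None) -> dict:
    """Build the body for POST /api/generate. Setting keep_alive to 0 asks
    Ollama to unload the model from memory once the request finishes."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if keep_alive is not None:
        payload["keep_alive"] = keep_alive
    return payload

def generate(model: str, prompt: str, keep_alive=None) -> str:
    body = json.dumps(build_generate_payload(model, prompt, keep_alive)).encode()
    req = urllib.request.Request(f"{OLLAMA_URL}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Unloading is the same endpoint with an empty prompt and keep_alive 0:
# generate("qwen:7b", "", keep_alive=0)
```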

Project Architecture

The application is designed with a clear separation of concerns, divided into two main packages: core and gui.

  • main.py: The main entry point for the application. It handles command-line argument parsing (like --verbose) and initializes the GUIApp.

  • core/: This package contains all the backend logic, decoupled from the user interface.

    • prompt_processor.py: The central orchestrator. It coordinates interactions between the template engine, Ollama client, and history manager.
    • template_engine.py: Manages loading, parsing, and resolving templates and wildcards, including the complex logic for requires and includes.
    • ollama_client.py: A dedicated client for all communication with the Ollama REST API, handling prompt enhancement, variations, and brainstorming chats.
    • history_manager.py: Handles reading and writing to the prompt history files (which use the .jsonl format).
    • config.py: Centralizes all application settings and paths.
    • default_content.py: Stores the default text for system prompts and variations, allowing for easy restoration.
  • gui/: This package contains all the frontend tkinter components.

    • gui_app.py: The main application class (tk.Tk). It builds the main window and manages the lifecycle of all other tool windows.
    • wildcard_manager.py, brainstorming_window.py, etc.: Each major feature has its own dedicated window class, promoting modularity.
    • common.py, theme_manager.py, etc.: Contain reusable components like custom dialogs, tooltips, and theme management logic.
  • Data Directories:

    • templates/, wildcards/, system_prompts/: Store user-customizable content.
    • history/: Stores the generated prompt history.
    • assets/: Contains static assets like the application icon.
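The .jsonl history format mentioned above is simply one JSON object per line, which makes appending cheap and scanning trivial. A minimal sketch of that pattern (the field names here are hypothetical, not the app's actual schema):

```python
import json
from pathlib import Path

def append_entry(path: Path, entry: dict) -> None:
    """Append one history record as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_entries(path: Path) -> list[dict]:
    """Read every record back; each non-empty line is one JSON object."""
    if not path.exists():
        return []
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because appends never rewrite earlier lines, a crash mid-write can corrupt at most the final record, and the rest of the history stays readable.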

Troubleshooting

Here are solutions to some common issues you might encounter.

"Ollama server is not running" Error

This is the most common issue. It means the application cannot connect to the Ollama service on your machine.

  • Solution:
    1. Make sure you have installed Ollama from ollama.com.
    2. Open your terminal or command prompt and run ollama ps. If the server is running, you will see a table of currently loaded models (possibly empty). If it's not, you'll likely get a "connection refused" error.
    3. If it's not running, start the Ollama application on your system. On macOS and Windows, this is usually a background application. On Linux, you may need to start it with systemctl start ollama.

"Model not found" Error

This error occurs when the application tries to use a model that Ollama doesn't have.

  • Solution:
    1. Open your terminal and run ollama list to see which models you have installed.
    2. If the model you want to use is not in the list, pull it with ollama pull <model_name> (e.g., ollama pull qwen:7b).
    3. Restart the Prompt Tool GUI to refresh the model list.

Slow AI Responses

The time it takes for the AI to respond depends heavily on your computer's hardware (CPU, RAM, and especially VRAM on your GPU) and the size of the model you are using.

  • Tips for Better Performance:
    • Use smaller models (7B–8B parameters, e.g., qwen:7b or llama3:8b) for faster responses. Larger models (13B+) produce higher-quality output but require more resources.
    • Ensure no other resource-intensive applications are running.
    • If you have a dedicated GPU, make sure Ollama is configured to use it.

Wildcard File Errors

If you see warnings about "invalid JSON" when using the Wildcard Manager, it means a .json file has a syntax error.

  • Solution:
    1. In the Wildcard Manager, the file will be loaded into the "Raw Text Editor".
    2. You can manually fix the syntax (e.g., add a missing comma, fix quotes).
    3. Alternatively, when you try to save, the application will offer to use an AI to attempt to fix the broken JSON for you.

Contributing

Contributions are welcome! Please feel free to submit a pull request or open an issue for any bugs or feature requests.

License

This project is licensed under the MIT License. See the LICENSE file for details.
