✨Features | ⭳ Download | 🛠️Installation | 🎞️ Video | 🖼️Screenshots | 📖Wiki | 💬Discussion
This is a plugin to use generative AI in image painting and editing workflows from within Krita. For a more visual introduction, see www.interstice.cloud
The main goals of this project are:
- Precision and Control. Creating entire images from text can be unpredictable. To get the result you envision, you can restrict generation to selections, refine existing content with a variable degree of strength, focus text on image regions, and guide generation with reference images, sketches, line art, depth maps, and more.
- Workflow Integration. Most image generation tools focus heavily on AI parameters. This project aims to be an unobtrusive tool that integrates and synergizes with image editing workflows in Krita. Draw, paint, edit and generate seamlessly without worrying about resolution and technical details.
- Local, Open, Free. We are committed to open source models. Customize presets, bring your own models, and run everything locally on your hardware. Cloud generation is also available to get started quickly without heavy investment.
- Inpainting: Use selections for generative fill, to expand the image, or to add and remove objects
- Live Painting: Let AI interpret your canvas in real time for immediate feedback. Watch Video
- Upscaling: Upscale and enrich images to 4k, 8k and beyond without running out of memory.
- Stable Diffusion: Supports Stable Diffusion 1.5 and XL, with partial support for SD3.
- ControlNet: Scribble, Line art, Canny edge, Pose, Depth, Normals, Segmentation, +more
- IP-Adapter: Reference images, Style and composition transfer, Face swap
- Regions: Assign individual text descriptions to image areas defined by layers.
- Job Queue: Queue and cancel generation jobs while working on your image.
- History: Preview results and browse previous generations and prompts at any time.
- Strong Defaults: Versatile default style presets allow for a streamlined UI.
- Customization: Create your own presets - custom checkpoints, LoRA, samplers and more.
See the Plugin Installation Guide for instructions.
A concise (more technical) version is below:
- Windows, Linux, MacOS
- On Linux/Mac: Python + venv must be installed
  - recommended version: 3.11 or 3.10
  - usually available via your package manager, e.g. `apt install python3-venv` (see the sketch after this list)
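If you are unsure whether a suitable Python is available, a quick terminal check might look like this. This is a minimal sketch for Debian/Ubuntu-style systems; package names and commands differ on other distributions and on macOS.

```sh
# Check the interpreter version (3.10 or 3.11 recommended)
python3 --version

# Debian/Ubuntu: install the venv module if it is missing
sudo apt install python3-venv

# Verify that virtual environments can be created
python3 -m venv /tmp/venv-test && rm -rf /tmp/venv-test
```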
To run locally, a powerful graphics card with at least 6 GB VRAM (NVIDIA) is recommended. Otherwise generating images will take a very long time or may fail due to insufficient memory!
| Hardware | Support |
| --- | --- |
| NVIDIA GPU | supported via CUDA |
| AMD GPU | limited support, DirectML on Windows, ROCm on Linux (custom install) |
| Apple Silicon | community support, MPS on macOS |
| CPU | supported, but very slow |
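Before installing the local server, it can help to confirm that your GPU and driver are visible to the system. A rough sketch (these tools ship with the respective GPU drivers, not with the plugin):

```sh
# NVIDIA: lists the GPU, driver version and available VRAM
nvidia-smi

# AMD on Linux with ROCm
rocm-smi
```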
- If you haven't yet, go and install Krita! Required version: 5.2.0 or newer
- Download the plugin.
- Start Krita and install the plugin via Tools ▸ Scripts ▸ Import Python Plugin from File...
- Point it to the ZIP archive you downloaded in the previous step.
- ⚠ This will delete any previous install of the plugin. If you are updating from 1.14 or older please read updating to a new version.
- Check Krita's official documentation for more options.
- Restart Krita and create a new document or open an existing image.
- To show the plugin docker: Settings ‣ Dockers ‣ AI Image Generation.
- In the plugin docker, click "Configure" to start a local server installation, or connect to an existing server.
Note: If you encounter problems, please check the FAQ / list of common issues for solutions.
Reach out via discussions, or report an issue here. Please note that official Krita channels are not the right place to seek help with issues related to this extension!
The plugin uses ComfyUI as its backend. As an alternative to the automatic installation, you can install it manually or use an existing installation. If the server is already running locally before you start Krita, the plugin will try to connect automatically. Using a remote server is also possible this way.
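For reference, launching a manual ComfyUI installation before starting Krita might look like the sketch below. The paths are placeholders; `--listen` and `--port` are standard ComfyUI launch options, and 8188 is ComfyUI's default port.

```sh
# From your ComfyUI installation directory (path is a placeholder)
cd ~/ComfyUI
source venv/bin/activate

# Local use: start the server on ComfyUI's default port, then start Krita
python main.py --port 8188

# Remote use: listen on all interfaces and enter http://<server-ip>:8188
# in the plugin's connection settings
python main.py --listen 0.0.0.0 --port 8188
```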
Please check the list of required extensions and models to make sure your installation is compatible.
If you're looking for a way to easily select objects in the image, there is a separate plugin which adds AI segmentation tools.
Contributions are very welcome! Check the contributing guide to get started.
Live painting with regions (Click for video)
Inpainting on a photo using a realistic model
Reworking and adding content to an AI generated image
Adding detail and iteratively refining small parts of the image
Modifying the pose vector layer to control character stances (Click for video)
Control layers: Scribble, Line art, Depth map, Pose
- Image generation: Stable Diffusion
- Diffusion backend: ComfyUI
- Inpainting: ControlNet, IP-Adapter