Exploring the frontier of AI, multimodal systems, and ComfyUI innovation.
Building open-source tools that make complex AI workflows simple, creative, and powerful.
| Project | Description |
|---|---|
| ComfyUI-RMBG | Advanced background remover using multiple AI segmentation models |
| ComfyUI-QwenVL | Integrates Qwen-VL & Qwen3-VL for multimodal text–image AI |
| ComfyUI-JoyCaption | LLaVA-powered captioning node with GGUF model support |
| ComfyUI-MiniCPM | Vision-language model integration for image captioning and analysis |
| ComfyUI-WildPromptor | Streamlined prompt creation and management for ComfyUI |
At 1038lab, we explore the frontier where AI meets creativity and automation.
Our mission is to advance multimodal intelligence — systems that connect text, vision, and reasoning into cohesive, intuitive workflows.
We design and develop tools that help researchers, developers, and creators harness the potential of AI with precision, transparency, and creativity.
Specializing in ComfyUI ecosystem engineering, we build modular AI nodes and integrations that simplify complex model interactions.
From background removal to vision–language inference and prompt automation, our projects aim to make cutting-edge AI accessible to everyone.
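To give a sense of what "modular AI nodes" means in practice, here is a minimal sketch of a ComfyUI custom node following ComfyUI's standard node contract (`INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, `NODE_CLASS_MAPPINGS`). The node name and its prompt-prefixing logic are purely illustrative, not taken from any of the projects above:

```python
# A minimal, illustrative ComfyUI custom node. The class name and logic
# are hypothetical; the interface follows ComfyUI's node contract.

class PromptPrefixNode:
    """Prepends a style prefix to a prompt string."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI calls this to build the node's input sockets and widgets.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True, "default": ""}),
                "prefix": ("STRING", {"default": "masterpiece, best quality"}),
            }
        }

    RETURN_TYPES = ("STRING",)    # one output socket of type STRING
    FUNCTION = "process"          # method ComfyUI invokes on execution
    CATEGORY = "1038lab/examples"

    def process(self, prompt, prefix):
        # Outputs are returned as a tuple matching RETURN_TYPES.
        return (f"{prefix}, {prompt}" if prompt else (prefix,)[0],)


# ComfyUI discovers nodes via this mapping, exported from the
# custom node package's __init__.py.
NODE_CLASS_MAPPINGS = {"PromptPrefixNode": PromptPrefixNode}
NODE_DISPLAY_NAME_MAPPINGS = {"PromptPrefixNode": "Prompt Prefix (example)"}
```

Dropping a package with this structure into `ComfyUI/custom_nodes/` is enough for the node to appear in the graph editor; each of our repositories follows this same pattern with heavier model-loading logic behind it.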
We welcome collaboration and partnership opportunities in areas such as:
- Multimodal AI and creative automation
- ComfyUI system architecture and extensions
- Applied AI research and experimental projects
🤝 If you’re interested in working with us — whether through collaboration, sponsored development, or joint research —
feel free to reach out via GitHub Discussions or email our team.
Together, we can accelerate innovation and shape the next generation of intelligent tools.
“Pushing AI boundaries — from multimodal models to creative automation.”
1038lab • AI Lab