NanoMate is an enhanced fork of nanobot that integrates SillyTavern character cards and adds a Companion Mode, turning a lightweight AI agent into an AI partner with character identity, visual imagination, emotional awareness, and voice.
Currently synced with upstream nanobot v0.1.5 (2026-04-05).
| Capability | nanobot | NanoMate |
|---|---|---|
| Character Identity | None | Full SillyTavern integration (character cards, memory books, presets) |
| Companion Mode | None | Living-together + emotional companion skill templates |
| Image Generation | Basic DALL-E | Multi-model (Grok, Gemini, DALL-E) with multi-image composition |
| Reference Images | None | Character-consistent image gen with scene-specific outfits |
| Text-to-Speech | None | Edge TTS + GPT-SoVITS custom voice synthesis |
| WhatsApp Proxy | Basic | HTTP/HTTPS/SOCKS5 proxy support with undici |
| Translation | Built-in | Faithful full-document translation skill |
| Deployment | Basic | Dockerized with Node.js bridge, proxy-ready |
- Dream two-stage memory: conversation history is consolidated into long-term memory via Dream, with Git-backed version control
- Jinja2 response templates: agent responses and memory consolidation now use Jinja2 templating
- Built-in grep/glob search tools: agents can search codebases natively
- bwrap sandbox: exec tool calls can be sandboxed with bubblewrap (Linux)
- Runtime hardening: more reliable long-running tasks with retry and cleanup guards
- Unified voice transcription: OpenAI/Groq Whisper across all channels
- Langfuse observability: optional integration for monitoring agent behavior
- Environment variable interpolation: use `${VAR}` in `config.json` for secrets
- New providers: GPT-5, Xiaomi MiMo, Qianfan, GitHub Copilot OAuth
- OpenAI Responses API: native support for OpenAI's responses endpoint
- Matrix streaming + password login: streaming support and simplified e2ee setup
- Discord.py migration: stable Discord channel via discord.py
- WeChat multimodal: voice, typing indicator, QR resilience, media enhancements
- Email attachments: configurable inbound attachment extraction
- nanobot-api Docker service: isolated OpenAI-compatible API endpoint
- Security: API bound to localhost, exec env leak prevention, SSRF whitelist config
- Smarter retries: Retry-After header respected, structured error classification
- Unified session: `unified_session` config to share one session across all channels (single-user multi-device)
- Discord streaming: progressive message editing as tokens arrive
- Adaptive thinking: Anthropic provider supports `reasoning_effort: "adaptive"` mode
- MCP resources & prompts: MCP resources and prompts are now exposed as read-only tools
- OpenAI auto-routing: direct reasoning requests automatically route to the Responses API with fallback
- Telegram enhancements: location/geo message support, configurable `stream_edit_interval`
- Windows exec support: shell command execution via `cmd.exe /c` on Windows
- Role alternation: enforced message role alternation for non-Claude providers
NanoMate is fully compatible with nanobot. Start with the standard setup:
```bash
git clone https://github.com/shenmintao/NanoMate.git
cd NanoMate
pip install -e .
nanobot init
```

After `nanobot init`, edit `~/.nanobot/config.json` to configure your LLM provider (see the nanobot docs).
This is the foundation of NanoMate. A character card defines your AI's personality, backstory, and appearance.
Enable SillyTavern in config:
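The nanobot docs define the exact schema; as a rough illustration, the fragment might look like the following (the `sillytavern` section and `enabled` key are assumptions here, so check your version's config reference for the real key names):

```jsonc
// ~/.nanobot/config.json (illustrative; consult the config reference for exact keys)
{
  "sillytavern": {
    "enabled": true,
    "responseFilterTag": "inner"
  }
}
```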
`responseFilterTag` strips tagged content from the AI's response before sending it to the user. This is useful when your preset instructs the AI to output internal thoughts or stage directions that should not be visible. For example, if the AI responds with:
```
<inner>She feels happy to see him come home.</inner> *smiles and walks over* Hey, welcome home!
```

Setting `"responseFilterTag": "inner"` will send `*smiles and walks over* Hey, welcome home!` to the user, with the `<inner>...</inner>` block removed. You can filter multiple tags using a comma-separated string (`"inner,thought"`) or a list (`["inner", "thought"]`). The full response (including filtered content) is still preserved in session history for context continuity. If no matching tags are found, the content is returned as-is.
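The filtering behavior described above can be approximated with a few lines of regex. This is a sketch of the idea under those assumptions, not NanoMate's actual implementation:

```python
import re

def filter_response(text, tags):
    """Remove <tag>...</tag> blocks for each configured tag.

    `tags` may be a comma-separated string ("inner,thought") or a list,
    mirroring the responseFilterTag config. Sketch only.
    """
    if isinstance(tags, str):
        tags = [t.strip() for t in tags.split(",")]
    for tag in tags:
        # DOTALL so multi-line inner monologue is removed too
        pattern = rf"<{re.escape(tag)}>.*?</{re.escape(tag)}>"
        text = re.sub(pattern, "", text, flags=re.DOTALL)
    return text.strip()

reply = "<inner>She feels happy to see him come home.</inner> *smiles and walks over* Hey, welcome home!"
print(filter_response(reply, "inner"))
# *smiles and walks over* Hey, welcome home!
```

If no tag matches, `re.sub` leaves the text unchanged, which matches the documented fall-through behavior.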
Prepare a character card (JSON format). A basic card looks like:
```json
{
  "name": "Aria",
  "description": "A warm, curious 25-year-old artist who loves travel and cooking.",
  "personality": "Gentle, playful, emotionally perceptive.",
  "scenario": "You and Aria are partners living together.",
  "first_mes": "Hey! I just finished painting, want to see?",
  "mes_example": "<START>\n{{user}}: How was your day?\n{{char}}: Pretty good! I tried a new watercolor technique...",
  "extensions": {
    "nanobot": {
      "reference_image": "/path/to/aria_default.png"
    }
  }
}
```

The `extensions.nanobot` section connects the character to NanoMate's visual features (see Step 4).
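Before importing, you can sanity-check a card with a short script. This is a convenience sketch (the field names match the card format shown above; the script itself is not part of NanoMate):

```python
# Fields a basic card should fill in (per the example above)
REQUIRED = ["name", "description", "personality", "scenario", "first_mes"]

def check_card(card):
    """Return a list of problems found in a character card dict."""
    problems = [f"missing field: {f}" for f in REQUIRED if not card.get(f)]
    ref = card.get("extensions", {}).get("nanobot", {})
    if not (ref.get("reference_image") or ref.get("reference_image_base64")):
        problems.append("no reference image (visual features will be limited)")
    return problems

minimal = {"name": "Aria", "description": "artist", "personality": "gentle",
           "scenario": "partners", "first_mes": "Hey!"}
print(check_card(minimal))
# ['no reference image (visual features will be limited)']
```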
Import and activate via CLI:
```bash
# Import a character card
nanobot st char import /path/to/aria.json
# List all imported characters
nanobot st char list
# Activate a character (used for all conversations)
nanobot st char activate Aria
# Show character card details
nanobot st char show Aria
# Deactivate / delete
nanobot st char deactivate
nanobot st char delete Aria
```

Import and activate a preset:
```bash
# Import a SillyTavern preset (JSON exported from SillyTavern)
nanobot st preset import /path/to/my_preset.json
# List all presets
nanobot st preset list
# Activate a preset
nanobot st preset activate my_preset
# Show preset details (prompt entries, parameters)
nanobot st preset show my_preset
# Toggle specific prompt entries on/off (by index)
nanobot st preset toggle-prompt my_preset 3
nanobot st preset toggle-prompt my_preset 3,4,5   # multiple
nanobot st preset toggle-prompt my_preset 3-6     # range
# Enable/disable all prompts (optionally filter by role)
nanobot st preset enable-all my_preset
nanobot st preset disable-all my_preset --role system
# Deactivate / delete
nanobot st preset deactivate
nanobot st preset delete my_preset
```

World Info (lorebooks):
```bash
nanobot st wi import /path/to/lorebook.json --name "my_world"
nanobot st wi list
nanobot st wi enable my_world
nanobot st wi disable my_world
nanobot st wi delete my_world
```

Check overall status:
```bash
nanobot st status
# Shows: active character, active preset, world info count
```

Companion Mode adds two skill templates on top of SillyTavern: living-together (visual companionship) and emotional-companion (proactive care). Both are off by default. The actual companion behavior is driven by your SillyTavern preset and character card; the skills provide trigger rules and prompt templates, while the preset and card define the AI's personality, tone, and interaction boundaries.
The companion skills live in `nanobot/templates/skills/` as customizable templates, separate from the built-in skills in `nanobot/skills/`.
Step 1: Prepare your SillyTavern preset and character card.
The preset controls how the AI talks (tone, boundaries, roleplay depth). The character card controls who the AI is (personality, backstory, relationship). Companion Mode won't feel natural without both being properly set up for your use case.
- Character card: the `description`, `personality`, and `scenario` fields establish the relationship dynamic. For companion mode, write a card that defines your AI as a partner/companion, not a generic assistant.
- SillyTavern preset: a preset is a JSON file containing prompt entries (system prompt, jailbreak, persona description, etc.). Export one from SillyTavern or write your own, then place it in `~/.nanobot/sillytavern/presets/`. The preset determines whether the AI will engage in roleplay-style companion interactions or stay in assistant mode.
Step 2: Enable the skills.
Set `always: true` in the `SKILL.md` frontmatter:
```yaml
# nanobot/templates/skills/living-together/SKILL.md
---
name: living-together
always: true   # Change from false to true
---
```

```yaml
# nanobot/templates/skills/emotional-companion/SKILL.md
---
name: emotional-companion
always: true   # Change from false to true
---
```

Step 3: (Optional) Customize the skills.
The skills are templates designed for user customization. Read them (`nanobot/templates/skills/living-together/SKILL.md`, `nanobot/templates/skills/emotional-companion/SKILL.md`) and adjust trigger rules, prompt templates, and behavioral constraints to match your character and preferences.
The living-together skill automatically generates "shared moment" images when the conversation triggers it:
- User shares a travel photo + "wish you were here" -> AI composes a photo of both of you there
- Daily life scenarios (cooking, coffee shop, park) -> AI creates scenes together
- Emotional moments -> companion presence images
- Intimate scenes -> plot-driven, character-consistent visuals
Requires image generation to be set up (Step 4) and a reference image in your character card (`extensions.nanobot.reference_image`).
The emotional-companion skill provides proactive care via the heartbeat system:
- Detects emotion (stress, sadness, joy) from messages and responds with empathy
- Tracks important events (exams, interviews, trips) and follows up
- Sends caring messages at appropriate times (not during sleep, not too frequently)
- Maintains emotional trends in Memory
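The "appropriate times" constraint boils down to two checks: a quiet-hours window and a minimum gap between proactive messages. A simplified sketch of that gating logic (the constants and function name are illustrative; the real rules live in the emotional-companion skill template):

```python
from datetime import datetime, timedelta

QUIET_START, QUIET_END = 23, 8   # no proactive messages 23:00-08:00 (illustrative)
MIN_GAP = timedelta(hours=4)     # at most one caring ping every 4 hours (illustrative)

def may_send(now, last_sent=None):
    """Gate proactive messages: skip sleep hours and avoid spamming."""
    in_quiet = now.hour >= QUIET_START or now.hour < QUIET_END
    too_soon = last_sent is not None and now - last_sent < MIN_GAP
    return not in_quiet and not too_soon

print(may_send(datetime(2026, 4, 5, 15, 0), datetime(2026, 4, 5, 9, 0)))  # True
print(may_send(datetime(2026, 4, 5, 2, 30)))                              # False (quiet hours)
```

Customizing the skill template amounts to tuning constraints like these in its prompt rules.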
Required for the living-together skill's visual features. Supports multiple providers:
```jsonc
// ~/.nanobot/config.json
{
  "tools": {
    "imageGen": {
      "enabled": true,
      "apiKey": "your-api-key",
      "baseUrl": "https://api.x.ai/v1",
      "model": "grok-imagine-image"
    }
  }
}
```

Supported models:
| Model | Multi-image | Best for |
|---|---|---|
| `grok-imagine-image` (xAI) | Yes | Multi-image composition, shared photos |
| `gemini-3.1-flash-image-preview` | Yes | Fast image editing, img2img |
| `dall-e-3` (OpenAI) | No | Single text-to-image only |
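When a request has to compose input images (for example, your travel photo plus the character reference), only the multi-image models qualify. A sketch of that selection rule, with a capability map that simply mirrors the table above (the helper itself is hypothetical, not NanoMate's code):

```python
# Capability map mirroring the table above (illustrative)
MULTI_IMAGE = {
    "grok-imagine-image": True,
    "gemini-3.1-flash-image-preview": True,
    "dall-e-3": False,
}

def can_handle(model, n_input_images):
    """True if `model` can accept `n_input_images` reference/source images."""
    if n_input_images == 0:
        return True                       # plain text-to-image: any model works
    return MULTI_IMAGE.get(model, False)  # composition needs multi-image support

print(can_handle("dall-e-3", 2))            # False
print(can_handle("grok-imagine-image", 2))  # True
```

In practice this means `dall-e-3` is fine for standalone scenes but cannot produce character-consistent shared photos.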
To keep generated images character-consistent, add a reference image to your character card:
Option A (file path):
```json
{
  "extensions": {
    "nanobot": {
      "reference_image": "/path/to/character.png"
    }
  }
}
```

Option B (Base64 embedded; portable, recommended):
```json
{
  "extensions": {
    "nanobot": {
      "reference_image_base64": "iVBORw0KGgoAAAANSUhEUg..."
    }
  }
}
```

Option C (scene-specific outfits):
```json
{
  "extensions": {
    "nanobot": {
      "reference_image": "/path/to/default.png",
      "reference_images": {
        "beach": "/path/to/swimsuit.png",
        "formal": "/path/to/dress.png",
        "winter": "/path/to/coat.png"
      }
    }
  }
}
```

Reference image tips:
- Resolution: 1024x1024 or higher
- Clear face, front or 3/4 view, no occlusion
- Simple or transparent background (use remove.bg or `pip install rembg`)
- PNG format for transparency support
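For Option B, you can embed an existing PNG into a card with a few lines of standard-library Python (a helper sketch, not a NanoMate command):

```python
import base64
import json

def embed_reference(card_path, image_path):
    """Base64-encode the image into extensions.nanobot.reference_image_base64."""
    with open(card_path) as f:
        card = json.load(f)
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    card.setdefault("extensions", {}).setdefault("nanobot", {})["reference_image_base64"] = b64
    with open(card_path, "w") as f:
        json.dump(card, f, ensure_ascii=False, indent=2)
```

The resulting card is self-contained and survives being moved between machines, which is why Option B is recommended.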
Give your character a voice:
```jsonc
// ~/.nanobot/config.json
{
  "tts": {
    "enabled": true,
    "provider": "edge",                  // "edge" (free) or "sovits" (custom voice)
    "edgeVoice": "zh-CN-XiaoxiaoNeural"  // See edge-tts for the voice list
  }
}
```

For custom voice cloning (requires a running GPT-SoVITS server):
```json
{
  "tts": {
    "enabled": true,
    "provider": "sovits",
    "sovitsApiUrl": "http://127.0.0.1:9880",
    "sovitsReferWavPath": "/path/to/reference_audio.wav",
    "sovitsPromptText": "Reference audio transcript",
    "sovitsPromptLanguage": "zh"
  }
}
```

For chatting with your companion via WhatsApp:
```bash
cd bridge
npm install
npm run build
npm run start   # Scan the QR code to link
```

For proxy support, set environment variables:
```bash
export https_proxy=http://127.0.0.1:7890
# or
export https_proxy=socks5://127.0.0.1:1080
```

To run everything with Docker:

```bash
docker compose up -d
```

The `docker-compose.yml` includes the WhatsApp bridge and proxy configuration.
```
nanobot/
  templates/
    skills/
      living-together/           # Companion Mode: shared-moment image generation (customizable)
      emotional-companion/       # Companion Mode: proactive care & mood tracking (customizable)
    memory/                      # Jinja2 templates for Dream memory consolidation (upstream v0.1.5)
  skills/
    translate/                   # Built-in: faithful full-document translation
    github/                      # Built-in: GitHub CLI integration
    weather/                     # Built-in: weather info
    summarize/                   # Built-in: URL/file/YouTube summarization
    ...                          # + memory, cron, tmux, clawhub, skill-creator
  sillytavern/                   # Character card, memory book, preset, world info integration
  providers/
    tts.py                       # Edge TTS + GPT-SoVITS voice synthesis
    openai_compat_provider.py    # OpenAI-compatible endpoint support
    anthropic_provider.py        # Anthropic Claude provider
    transcription.py             # Unified audio transcription (OpenAI/Groq Whisper)
  agent/tools/
    image_gen.py                 # Multi-model image generation & composition
    search.py                    # Built-in grep/glob search tools (upstream v0.1.5)
    shell.py                     # Exec tool with bwrap sandbox support
  utils/
    gitstore.py                  # Git-backed memory version control (upstream v0.1.5)
    runtime.py                   # Runtime response guards (upstream v0.1.5)
    searchusage.py               # Web search provider usage tracking (upstream v0.1.5)
  api/
    server.py                    # OpenAI-compatible API (/v1/chat/completions)
bridge/                          # WhatsApp bridge (TypeScript/Node.js)
docs/
  PYTHON_SDK.md                  # Python SDK usage guide
  CHANNEL_PLUGIN_GUIDE.md
```
NanoMate tracks upstream nanobot. To pull latest changes:
```bash
git remote add upstream https://github.com/HKUDS/nanobot.git   # first time only
git fetch upstream
git merge upstream/main
```

- nanobot by HKUDS: the ultra-lightweight agent framework this project builds on
- SillyTavern: character card format and inspiration
MIT, same as upstream nanobot.