zihuan-next

English | 简体中文

zihuan-next is a Rust-based Agent service platform built around two ideas:

  • Agents run as persistent services.
  • Node graphs define reusable workflows and tools.

The graph stays focused on data flow. Long-lived behavior such as chat agents, HTTP-facing agents, task hosting, connection reuse, and runtime orchestration is hosted by the service layer.

Quick Start

Requirements

  • Rust stable
  • Node.js 18+
  • pnpm

Optional services, depending on your setup:

  • MySQL
  • Redis
  • Weaviate
  • RustFS

Build

```shell
git clone https://github.com/FredYakumo/zihuan-next.git
cd zihuan-next
git submodule update --init --recursive

cd webui
pnpm install
cd ..

cargo build --release
```

The main binary embeds the frontend bundle from webui/dist/.

Run

```shell
docker compose -f docker/docker-compose.yaml up -d
./target/release/zihuan_next
```

Default address:

http://127.0.0.1:9951

Custom bind:

```shell
./target/release/zihuan_next --host 0.0.0.0 --port 9000
```

Highlights

  • Simple Agent capabilities are available out of the box.
  • Node graphs are used to design and reuse more complex workflows.
  • The same workflow can run directly as a task or be exposed as an Agent tool.
  • Connections and model refs are configured once and reused across Agents and graphs.

CLI graph runner

```shell
cargo build -p zihuan_graph_cli --release

./target/release/zihuan_graph_cli --file workflow_set/qq_agent_example.json
./target/release/zihuan_graph_cli --workflow qq_agent_example
```
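A workflow-set file is plain graph JSON. The exact schema is defined by the project; the shape below is only an illustrative sketch with invented field names, to show the kind of structure the CLI loads:

```json
{
  "name": "qq_agent_example",
  "nodes": [
    { "id": "load_message", "type": "input" },
    { "id": "call_model", "type": "llm", "llm_ref": "hosted-chat" }
  ],
  "edges": [
    { "from": "load_message", "to": "call_model" }
  ]
}
```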

How You Use It

1. Configure shared resources

In the admin UI, create:

  • connections
  • llm_refs
  • agents

These are stored in the system config file under a unified config center.

2. Build a workflow graph

Use /editor to define:

  • workflow steps
  • node parameters
  • function subgraphs
  • tool subgraphs
  • graph-local variables and inline values

3. Attach the workflow to an Agent

Use graph-backed tools in an Agent definition so the Agent can call them during inference. Simple Agent behavior can stay lightweight, while more complex multi-step logic can be moved into reusable graph workflows.
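As an illustration only (these field names are invented, not the project's actual agent schema), attaching a graph-backed tool to an Agent definition might look something like:

```json
{
  "name": "qq_chat",
  "llm_ref": "hosted-chat",
  "tools": [
    { "type": "graph", "workflow": "qq_agent_example" }
  ]
}
```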

4. Operate the runtime

From the admin UI you can:

  • inspect tasks
  • watch logs
  • manage saved connections
  • inspect runtime connection instances
  • start or stop agents

Screenshots

Screenshots in the repository cover the main UI, the graph editor, and example workflows including the QQ agent.

What This Project Is

zihuan-next combines:

  • a persistent Agent runtime
  • a browser-based workflow editor
  • a synchronous DAG graph engine
  • a shared tool-call loop for agents and graph tools
  • a unified configuration center for connections, model refs, and agents

In practice, you use it in three connected ways:

  1. Run agents as always-on services.
  2. Build workflows with the node graph editor.
  3. Expose those workflows as callable tools for agents.

This keeps graph topology simple while allowing complex behavior to live inside nodes, subgraphs, and agent tool loops.

Core Model

1. Agent service is the primary runtime

The main binary hosts long-lived agents such as:

  • qq_chat
  • http_stream

Agents can be enabled, disabled, started, stopped, and auto-started from the admin UI. They are not one-shot scripts; they are hosted services managed by the server runtime.

2. Node graphs are workflows

The graph engine executes a DAG synchronously. A graph run is ideal for:

  • data transformation
  • message processing
  • retrieval and storage steps
  • calling models
  • preparing tool results
  • encapsulating business logic in reusable subgraphs

The graph is intentionally not the place for long-lived listeners or service lifecycles.
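The synchronous DAG model can be sketched independently of the project's actual engine: compute a topological order, then visit each node once, feeding its output to downstream nodes. A minimal illustration (not zihuan-next's real node API):

```rust
use std::collections::VecDeque;

/// Kahn's algorithm: return node indices in dependency order.
fn topo_order(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    let mut indegree = vec![0usize; n];
    let mut adj: Vec<Vec<usize>> = vec![Vec::new(); n];
    for &(from, to) in edges {
        adj[from].push(to);
        indegree[to] += 1;
    }
    let mut queue: VecDeque<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(i) = queue.pop_front() {
        order.push(i);
        for &j in &adj[i] {
            indegree[j] -= 1;
            if indegree[j] == 0 {
                queue.push_back(j);
            }
        }
    }
    order
}

fn main() {
    // Diamond-shaped graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
    let edges = [(0, 1), (0, 2), (1, 3), (2, 3)];
    // Each node accumulates the values of its upstream nodes; node 0 seeds 1.
    let mut values = vec![0i64; 4];
    values[0] = 1;
    for i in topo_order(4, &edges) {
        for &(from, to) in &edges {
            if from == i {
                values[to] += values[from];
            }
        }
    }
    // Node 3 receives contributions from both branches of the diamond.
    assert_eq!(values, vec![1, 1, 1, 2]);
}
```

Running in topological order guarantees every node sees finalized upstream values exactly once, which is why a synchronous run is a natural fit for data transformation and message-processing steps.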

3. Workflows can also become Agent tools

This is a central design point of zihuan-next.

The same node-graph logic can be used in two roles:

  • run directly as a workflow
  • mounted into an Agent as a callable tool

Agents can call graph-backed tools through the shared Brain/tool loop. This makes workflows reusable across interactive agents, service endpoints, and graph-driven automations without rewriting the same logic twice.
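The shape of such a tool-call loop can be sketched with stubs standing in for the model and the graph runtime (none of these names are the project's real API; this only shows the control flow):

```rust
enum ModelOutput {
    ToolCall { name: String, input: String },
    Final(String),
}

/// Stub model: requests the graph-backed tool once, then answers.
fn stub_model(history: &[String]) -> ModelOutput {
    if history.iter().any(|m| m.starts_with("tool:")) {
        ModelOutput::Final(format!("answer based on [{}]", history.last().unwrap()))
    } else {
        ModelOutput::ToolCall {
            name: "qq_agent_example".into(),
            input: "hello".into(),
        }
    }
}

/// Stub for executing a node graph as a tool and capturing its result.
fn run_graph_tool(name: &str, input: &str) -> String {
    format!("{} processed '{}'", name, input)
}

/// The loop: call the model, execute requested tools, feed results back,
/// and stop when the model produces a final answer.
fn tool_loop(user_msg: &str) -> String {
    let mut history = vec![format!("user: {}", user_msg)];
    loop {
        match stub_model(&history) {
            ModelOutput::ToolCall { name, input } => {
                let result = run_graph_tool(&name, &input);
                history.push(format!("tool: {}", result));
            }
            ModelOutput::Final(text) => return text,
        }
    }
}

fn main() {
    let answer = tool_loop("hi");
    assert!(answer.contains("qq_agent_example"));
}
```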

Unified Connections And Resources

Connections are first-class system configuration, not ad-hoc values hidden inside one workflow.

You define connection configs once in the admin UI, then reuse them from both:

  • agents
  • node graphs

Current resource types in the project include:

  • MySQL
  • Redis
  • Weaviate
  • RustFS / S3-style object storage
  • IMS Bot Adapter connections
  • Tavily

The runtime distinguishes between:

  • persistent connection configuration identified by config_id
  • live runtime connection instances identified by instance_id

Graphs and agents refer to config_id. The runtime creates or reuses live instances as needed. This makes database and service connections easy to manage centrally while still being directly consumable from graph nodes and agent runtimes.
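The config_id/instance_id split can be illustrated with a toy pool (a sketch of the pattern only, not the project's actual types): callers hold a stable config_id, and the runtime hands out or reuses a live instance behind it.

```rust
use std::collections::HashMap;

struct Pool {
    next_instance: u32,
    live: HashMap<String, u32>, // config_id -> live instance_id
}

impl Pool {
    fn new() -> Self {
        Pool { next_instance: 0, live: HashMap::new() }
    }

    /// Reuse the live instance for this config_id, or create one on first use.
    fn get_or_create(&mut self, config_id: &str) -> u32 {
        if let Some(&id) = self.live.get(config_id) {
            return id;
        }
        let id = self.next_instance;
        self.next_instance += 1;
        self.live.insert(config_id.to_string(), id);
        id
    }
}

fn main() {
    let mut pool = Pool::new();
    let a = pool.get_or_create("mysql-main");
    let b = pool.get_or_create("mysql-main");
    let c = pool.get_or_create("redis-cache");
    assert_eq!(a, b); // same config_id -> same live instance
    assert_ne!(a, c); // different config -> different instance
}
```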

Model Access

zihuan-next supports several ways to use LLM and embedding capabilities:

  • local inference with Candle-based models
  • local or self-hosted inference through llama.cpp
  • online model APIs
  • OpenAI Chat Completions compatible endpoints
  • OpenAI Responses compatible endpoints

Model endpoints are defined as reusable llm_refs in system configuration, then attached where needed by agents or graphs.
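As a purely illustrative sketch (the field names below are invented, not the project's actual llm_ref schema), one deployment's llm_refs could mix local and hosted backends:

```json
{
  "llm_refs": [
    { "name": "local-embedding", "backend": "candle", "model": "..." },
    { "name": "hosted-chat", "backend": "openai_chat_completions", "base_url": "..." }
  ]
}
```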

This allows one deployment to mix:

  • local inference for cost control or privacy
  • self-hosted inference for internal services
  • hosted APIs for general-purpose reasoning

Main Capabilities

  • Browser admin UI at /
  • Browser graph editor at /editor
  • Persistent agent hosting
  • Graph execution as task runs
  • Graph-backed Agent tools
  • Shared Brain tool-call loop
  • Reusable connection and model configuration
  • REST API and WebSocket event stream
  • Task logs and runtime inspection
  • Workflow-set loading and CLI execution

Workspace Layout

| Package | Responsibility |
| --- | --- |
| zihuan_core | Shared types, config, errors |
| zihuan_agent | Brain tool-call loop engine |
| zihuan_graph_engine | Synchronous DAG graph runtime |
| model_inference | LLM, embedding, and inference-related nodes |
| storage_handler | Connection-backed resource nodes and runtime connection management |
| ims_bot_adapter | IMS / QQ adapter integration |
| zihuan_service | Long-lived agent hosting and agent-facing nodes |
| zihuan_graph_cli | CLI graph runner |
| webui/ | Vue admin UI and LiteGraph editor |
| src/ | Main Salvo web server, API, and app runtime |

Configuration Model

System-level configuration is stored in:

  • Windows: %APPDATA%/zihuan-next_aibot/system_config/system_config.json
  • Linux/macOS: $XDG_CONFIG_HOME or $HOME/.config/zihuan-next_aibot/system_config/system_config.json

Current shape:

```json
{
  "version": 2,
  "configs": {
    "connections": [],
    "llm_refs": [],
    "agents": []
  }
}
```

Graph structure, inline values, variables, and embedded subgraphs live in graph JSON files or workflow-set files under workflow_set/.

config.yaml is only used by the Python Alembic migration flow for MySQL schema setup.

Documentation

License

AGPL-3.0. See LICENSE.
