Abbreviated as lango (Chinese name: 懒狗).
Website: http://lango.rpcx.io
🔀 Forked from paulnegz/langgraphgo - Enhanced with streaming, visualization, observability, and production-ready features.
This fork aims for feature parity with the Python LangGraph library, adding support for parallel execution, persistence, advanced state management, pre-built agents, and human-in-the-loop workflows.
```bash
go get github.com/smallnest/langgraphgo
```
Core Runtime:
- Parallel Execution: Concurrent node execution (fan-out) with thread-safe state merging.
- Runtime Configuration: Propagate callbacks, tags, and metadata via RunnableConfig.
- Generic Types: Type-safe state management with generic StateGraph implementations.
- LangChain Compatible: Works seamlessly with langchaingo.

Persistence & Reliability:
- Checkpointers: Redis, Postgres, SQLite, and File implementations for durable state.
- File Checkpointing: Lightweight file-based checkpointing without external dependencies.
- State Recovery: Pause and resume execution from checkpoints.

Advanced Capabilities:
- State Schema: Granular state updates with custom reducers (e.g., AppendReducer).
- Smart Messages: Intelligent message merging with ID-based upserts (AddMessages).
- Command API: Dynamic control flow and state updates directly from nodes.
- Ephemeral Channels: Temporary state values that clear automatically after each step.
- Subgraphs: Compose complex agents by nesting graphs within graphs.
- Enhanced Streaming: Real-time event streaming with multiple modes (updates, values, messages).
- Pre-built Agents: Ready-to-use ReAct, CreateAgent, and Supervisor agent factories.
- Programmatic Tool Calling (PTC): The LLM generates code that calls tools programmatically, reducing latency and token usage by 10x.

Developer Experience:
- Visualization: Export graphs to Mermaid, DOT, and ASCII with conditional edge support.
- Human-in-the-loop (HITL): Interrupt execution, inspect state, edit history (UpdateState), and resume.
- Observability: Built-in tracing and metrics support.
- Tools: Integrated Tavily and Exa search tools.
Quick Start:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/smallnest/langgraphgo/graph"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()

	model, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// 1. Create Graph
	g := graph.NewMessageGraph()

	// 2. Add Nodes
	g.AddNode("generate", func(ctx context.Context, state any) (any, error) {
		messages := state.([]llms.MessageContent)
		response, err := model.GenerateContent(ctx, messages)
		if err != nil {
			return nil, err
		}
		return append(messages, llms.TextParts(llms.ChatMessageTypeAI, response.Choices[0].Content)), nil
	})

	// 3. Define Edges
	g.AddEdge("generate", graph.END)
	g.SetEntryPoint("generate")

	// 4. Compile
	runnable, err := g.Compile()
	if err != nil {
		log.Fatal(err)
	}

	// 5. Invoke
	initialState := []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, "Hello, LangGraphGo!"),
	}
	result, err := runnable.Invoke(ctx, initialState)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
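	// The printed result is the updated message state: the original human
	// message plus the model's reply appended as an "ai" message.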
}
```

Examples:

- Basic LLM - Simple LangChain integration
- RAG Pipeline - Complete retrieval-augmented generation
- RAG with LangChain - LangChain components integration
- RAG with VectorStores - LangChain VectorStore integration (New!)
- RAG with Chroma - Chroma vector database integration (New!)
- Tavily Search - Tavily search tool integration (New!)
- Exa Search - Exa search tool integration (New!)
- Streaming - Real-time progress updates
- Conditional Routing - Dynamic path selection
- Parallel Execution - Fan-out/fan-in with state merging
- Complex Parallel Execution - Advanced parallel patterns with varying branch lengths (New!)
- Checkpointing - Save and resume state
- Visualization - Export graph diagrams
- Listeners - Progress, metrics, and logging
- Subgraphs - Nested graph composition
- Swarm - Multi-agent collaboration
- Create Agent - Flexible agent creation with options (New!)
- Dynamic Skill Agent - Agent with dynamic skill discovery and selection (New!)
- Chat Agent - Multi-turn conversation with session management (New!)
- Chat Agent Async - Async streaming chat agent (New!)
- Chat Agent Dynamic Tools - Chat agent with runtime tool management (New!)
- State Schema - Complex state management with Reducers
- Smart Messages - Intelligent message merging (Upserts)
- Command API - Dynamic control flow
- Ephemeral Channels - Temporary state management
- Streaming Modes - Advanced streaming patterns
- Time Travel / HITL - Inspect, edit, and fork state history
- Dynamic Interrupt - Pause execution from within a node
- Durable Execution - Crash recovery and resuming execution
- GoSkills Integration - Integration with GoSkills (New!)
- PTC Basic - Programmatic Tool Calling for reduced latency (New!)
- PTC Simple - Simple PTC example with calculator tools (New!)
- PTC Expense Analysis - Complex PTC scenario with data processing (New!)
- Tree of Thoughts - Advanced reasoning with search tree exploration (New!)
- PEV Agent - Problem-Evidence-Verification agent (New!)
- File Checkpointing - File-based checkpointing (New!)
- Generic State Graph - Type-safe generic state management (New!)
LangGraphGo automatically executes nodes in parallel when multiple edges fan out from the same node. Results are merged using the graph's state merger or schema.

```go
g.AddEdge("start", "branch_a")
g.AddEdge("start", "branch_b")
// branch_a and branch_b run concurrently
```

Pause execution to allow for human approval or input.

```go
config := &graph.Config{
InterruptBefore: []string{"human_review"},
}
// Execution stops before "human_review" node
state, err := runnable.InvokeWithConfig(ctx, input, config)
// Resume execution
resumeConfig := &graph.Config{
ResumeFrom: []string{"human_review"},
}
runnable.InvokeWithConfig(ctx, state, resumeConfig)
```

Quickly create complex agents using factory functions.

```go
// Create a ReAct agent
agent, err := prebuilt.CreateReactAgent(model, tools)
// Create an agent with options
agent, err := prebuilt.CreateAgent(model, tools, prebuilt.WithSystemMessage("System prompt"))
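// Note (assumption): the returned agents are compiled graphs, so they can be
// run like the PTC agent shown below, e.g.
//   result, err := agent.Invoke(ctx, initialState)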
// Create a Supervisor agent
supervisor, err := prebuilt.CreateSupervisor(model, agents)
```

Programmatic Tool Calling (PTC) has the LLM generate code that calls tools directly, reducing API round-trips and token usage.

```go
// Create a PTC agent
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
Model: model,
Tools: toolList,
Language: ptc.LanguagePython, // or ptc.LanguageGo
ExecutionMode: ptc.ModeDirect, // Subprocess (default) or ModeServer
MaxIterations: 10,
})
// LLM generates code that calls tools programmatically
result, err := agent.Invoke(ctx, initialState)
```

See the PTC README for detailed documentation.
Export compiled graphs as diagrams:

```go
exporter := runnable.GetGraph()
fmt.Println(exporter.DrawMermaid()) // Generates Mermaid flowchart
```

Performance:

- Graph Operations: ~14-94μs depending on format
- Tracing Overhead: ~4μs per execution
- Event Processing: 1000+ events/second
- Streaming Latency: <100ms
Run the tests:

```bash
go test ./... -v
```

This project is open for contributions! If you are interested in contributing, please create a feature issue first, then submit a PR.
MIT License - see original repository for details.