ARCHIVED PROJECT: This project has been archived and is no longer actively maintained. While the code remains available for reference and learning purposes, no new features will be added and issues will not be addressed. Feel free to fork this repository if you'd like to continue development.
Sage Agent is a powerful LLM-based agent system for general-purpose software engineering tasks, built in Rust with modern async architecture and clean design patterns.
This project has been archived as of July 2025.
After extensive development and experimentation, we've decided to archive this Rust-based implementation in favor of a TypeScript-based approach. Our analysis revealed that:
- Ecosystem Advantage: TypeScript/Node.js has a significantly richer AI/LLM ecosystem with official SDKs from all major providers
- Development Velocity: A single-language stack dramatically improves development speed and team collaboration
- Maintenance Simplicity: A unified toolchain reduces complexity and deployment overhead
- Performance Reality: In AI agent scenarios, network I/O (LLM API calls) is the bottleneck, not CPU performance
This repository contains a fully functional Rust implementation with:
- Complete concurrent tool execution system
- Modern terminal UI with React + Ink
- Multi-LLM provider support
- Comprehensive tool ecosystem
- Advanced trajectory recording
If you're interested in continuing this project:
- Fork this repository - All code is MIT licensed and ready to use
- Check the TypeScript migration insights - See our analysis in the commit history
- Consider hybrid approaches - The UI components and architecture patterns are valuable
We believe this codebase serves as an excellent reference implementation for Rust-based AI agents and demonstrates advanced patterns in concurrent tool execution.
This project is a Rust rewrite of the original Trae Agent by ByteDance. While maintaining the core functionality and philosophy of the original Python-based agent, Sage Agent brings:
- Performance: Rust's zero-cost abstractions and memory safety
- Concurrency: Modern async/await patterns with Tokio
- Type Safety: Compile-time guarantees and robust error handling
- Modularity: Clean architecture with well-defined service boundaries
We extend our gratitude to the ByteDance team and the open-source community for creating the foundational Trae Agent project that inspired this implementation.
- Archive Notice
- Features
- Architecture
- Quick Start
- Available Tools
- Examples
- Trajectory Recording
- Advanced Features
- Performance Optimization
- Development
- Documentation
- Migration Insights
- Contributing
- License
| AI Integration | Developer Tools | User Experience |
|---|---|---|
| Multi-LLM Support (OpenAI, Anthropic, Google) | Rich Tool Ecosystem (Code editing, Bash, Retrieval) | Interactive CLI (Animations, Progress indicators) |
| Smart Context Handling | Task Management System | Terminal Markdown Rendering |
| Trajectory Recording | SDK Integration | Beautiful UI Components |
- Multi-LLM Support: Compatible with OpenAI, Anthropic, Google, and other LLM providers
- Rich Tool Ecosystem: Built-in tools for code editing, bash execution, codebase retrieval, and task management
- Interactive CLI: Beautiful terminal interface with animations and progress indicators
- SDK Integration: High-level SDK for programmatic usage
- Trajectory Recording: Complete execution tracking and replay capabilities
- Markdown Rendering: Terminal-based markdown display with syntax highlighting
- Task Management: Built-in task planning and progress tracking
- Clean Architecture: Modular design with clear separation of concerns
The project is organized as a Rust workspace with four main crates:
- `sage-core`: Core library with agent execution, LLM integration, and tool management
- `sage-cli`: Command-line interface with interactive mode and rich UI
- `sage-sdk`: High-level SDK for programmatic integration
- `sage-tools`: Collection of built-in tools for various tasks
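A four-crate layout like this is wired together by a workspace manifest at the repository root. A minimal sketch (crate paths taken from the list above; the `resolver` setting is illustrative):

```toml
[workspace]
resolver = "2"
members = [
    "crates/sage-core",
    "crates/sage-cli",
    "crates/sage-sdk",
    "crates/sage-tools",
]
```

With this in place, `cargo build` at the root builds all four crates, and each crate can depend on its siblings by path.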
TL;DR: `cargo install sage-cli && sage` gets you started in seconds!
```bash
# One-line installation
cargo install --git https://github.com/majiayu000/sage sage-cli

# Start interactive mode
sage

# Or run a specific task
sage run "Create a Python script that calculates fibonacci numbers"
```

Prerequisites:

- Rust: 1.85+ (latest stable recommended)
- Operating System: Linux, macOS, Windows
- Memory: Minimum 4GB RAM (8GB+ recommended)
- API Keys: API keys for your chosen LLM providers
```bash
# Clone the repository
git clone https://github.com/majiayu000/sage
cd sage-agent

# Build the project
cargo build --release

# Install the CLI
cargo install --path crates/sage-cli
```

```bash
# Install from crates.io (if published)
cargo install sage-cli

# Or install from Git repository
cargo install --git https://github.com/majiayu000/sage sage-cli
```

```bash
# Check version
sage --version

# Show help
sage --help
```

Create a configuration file `sage_config.json`:
```json
{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "base_url": "https://api.openai.com/v1"
    }
  },
  "default_provider": "openai",
  "model_parameters": {
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 4000
  },
  "max_steps": 20,
  "working_directory": "."
}
```

```bash
# Interactive mode (default)
sage

# Run a specific task
sage run "Create a Python script that calculates fibonacci numbers"

# With custom configuration
sage --config-file my_config.json run "Analyze this codebase structure"
```

```rust
use sage_sdk::{SageAgentSDK, RunOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create SDK instance
    let sdk = SageAgentSDK::new()?
        .with_provider_and_model("openai", "gpt-4", None)?
        .with_working_directory("./my-project")
        .with_max_steps(10);

    // Execute a task
    let result = sdk.run("Create a README file for this project").await?;

    if result.is_success() {
        println!("Task completed successfully!");
        println!(
            "Used {} tokens in {} steps",
            result.statistics().total_tokens,
            result.statistics().total_steps
        );
    }
    Ok(())
}
```

Sage Agent comes with a comprehensive set of built-in tools:
- `bash`: Execute shell commands and scripts
- `edit`: Create and modify files with precise editing capabilities
- `json_edit`: Specialized JSON file editing
- `codebase_retrieval`: Intelligent code search and context retrieval
- `sequential_thinking`: Step-by-step reasoning and planning
- `task_done`: Mark tasks as completed
- Task Management: `view_tasklist`, `add_tasks`, `update_tasks`, `reorganize_tasklist`
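Tools like these are typically dispatched uniformly behind a single interface. As a rough illustration of that pattern (a hypothetical sketch, not the actual `sage-core` trait or its signatures):

```rust
// Hypothetical sketch of a tool registry: named tools implement a common
// trait and are dispatched as trait objects, the way `bash`, `edit`, etc.
// can be looked up by name at runtime.
use std::collections::HashMap;

trait Tool {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &str) -> Result<String, String>;
}

// A trivial stand-in tool that returns its input unchanged.
struct Echo;

impl Tool for Echo {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn execute(&self, input: &str) -> Result<String, String> {
        Ok(input.to_string())
    }
}

fn main() {
    let mut tools: HashMap<&str, Box<dyn Tool>> = HashMap::new();
    tools.insert("echo", Box::new(Echo));

    // Dispatch by name, exactly as an agent would after the LLM
    // requests a tool call.
    let out = tools["echo"].execute("hello").unwrap();
    assert_eq!(out, "hello");
}
```

The trait-object approach keeps the agent loop generic: adding a new tool means implementing the trait and registering it, with no changes to the dispatch code.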
The examples/ directory contains various usage examples:
- `basic_usage.rs`: Simple SDK usage patterns
- `custom_tool.rs`: Creating custom tools
- `markdown_demo.rs`: Terminal markdown rendering
- `ui_demo.rs`: Interactive UI components
Run examples with:

```bash
cargo run --example basic_usage
cargo run --example markdown_demo
cargo run --example trajectory_demo
```

Sage Agent automatically records detailed execution trajectories for debugging and analysis:

```bash
# Automatically generate trajectory files
sage run "Debug authentication module"
# Saved to: trajectories/trajectory_20250612_220546.json

# Custom trajectory file
sage run "Optimize database queries" --trajectory-file optimization_debug.json
```

Trajectory files contain:
- LLM Interactions: All messages, responses, and tool calls
- Agent Steps: State transitions and decision points
- Tool Usage: Which tools were called and their results
- Metadata: Timestamps, token usage, and execution metrics
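Putting those four categories together, a recorded trajectory might look roughly like the following (all field names are illustrative, not the actual on-disk schema):

```json
{
  "task": "Debug authentication module",
  "steps": [
    {
      "step": 1,
      "llm_interaction": {
        "model": "gpt-4",
        "messages": ["..."],
        "tool_calls": [{ "tool": "bash", "input": "cargo test" }]
      },
      "tool_results": [{ "tool": "bash", "output": "..." }],
      "tokens_used": 512,
      "timestamp": "2025-06-12T22:05:46Z"
    }
  ],
  "statistics": { "total_tokens": 512, "total_steps": 1 }
}
```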
In interactive mode, you can:

- Enter any task description to execute it
- Use `status` to view agent information
- Use `help` to see available commands
- Use `clear` to clear the screen
- Use `exit` or `quit` to end the session
```bash
# Use OpenAI
sage run "Create Python script" --provider openai --model gpt-4

# Use Anthropic
sage run "Code review" --provider anthropic --model claude-3-5-sonnet

# Use custom working directory
sage run "Add unit tests" --working-dir /path/to/project
```

Configuration values are resolved in the following order:

- Command line arguments (highest priority)
- Configuration file values
- Environment variables
- Default values (lowest priority)
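The four-layer precedence above can be sketched in a few lines (illustrative, not the actual sage-cli code): each layer is an `Option`, and the first populated one wins.

```rust
// Sketch of layered configuration resolution: CLI beats file, file beats
// environment, and the built-in default applies when nothing else is set.
fn resolve<T>(cli: Option<T>, file: Option<T>, env: Option<T>, default: T) -> T {
    cli.or(file).or(env).unwrap_or(default)
}

fn main() {
    // CLI flag absent, config file sets 15, env sets 20: the file wins.
    assert_eq!(resolve(None, Some(15), Some(20), 10), 15);
    // A CLI flag overrides everything below it.
    assert_eq!(resolve(Some(1), Some(2), Some(3), 0), 1);
    // Nothing set anywhere: the default applies.
    assert_eq!(resolve::<u32>(None, None, None, 10), 10);
}
```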
- Concurrent Processing: Sage Agent uses Tokio async runtime for efficient concurrent operations
- Memory Management: Rust's zero-cost abstractions ensure minimal runtime overhead
- Caching Strategy: Intelligent caching of LLM responses and tool results for improved performance
- Streaming Processing: Support for streaming LLM responses for better user experience
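The caching idea is simple to illustrate. A minimal sketch (not the actual sage-core cache, which would also handle eviction and persistence): identical prompts skip the expensive call and return the stored result.

```rust
// Illustrative response cache: a HashMap keyed by prompt. The closure
// stands in for a slow LLM API call and only runs on a cache miss.
use std::collections::HashMap;

struct ResponseCache {
    entries: HashMap<String, String>,
    hits: usize,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: HashMap::new(), hits: 0 }
    }

    fn get_or_compute(&mut self, prompt: &str, call: impl Fn(&str) -> String) -> String {
        if let Some(cached) = self.entries.get(prompt) {
            self.hits += 1;
            return cached.clone();
        }
        let response = call(prompt); // the expensive network call
        self.entries.insert(prompt.to_string(), response.clone());
        response
    }
}

fn main() {
    let mut cache = ResponseCache::new();
    let fake_llm = |p: &str| format!("reply to: {p}");

    cache.get_or_compute("hi", fake_llm); // miss: computed and stored
    cache.get_or_compute("hi", fake_llm); // hit: served from the map
    assert_eq!(cache.hits, 1);
}
```

In a real agent the cache key would usually include the model and parameters as well as the prompt, since the same prompt at a different temperature is a different request.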
```jsonc
{
  "model_parameters": {
    "temperature": 0.1,   // Lower randomness for more consistent results
    "max_tokens": 2000,   // Adjust based on task complexity
    "stream": true        // Enable streaming responses
  },
  "max_steps": 15,        // Limit max steps to control costs
  "timeout_seconds": 300  // Set a reasonable timeout
}
```

Monitoring and debugging:

```bash
# Enable verbose logging
RUST_LOG=sage_core=debug,sage_cli=info cargo run

# Monitor token usage
sage run "Task description" --show-stats

# Performance profiling
RUST_LOG=trace cargo run --release
```

Common development commands:

```bash
# Build all crates
cargo build

# Build with optimizations
cargo build --release

# Run tests
cargo test

# Run with logging
RUST_LOG=debug cargo run
```

Project layout:

```text
sage-agent/
├── crates/
│   ├── sage-core/          # Core library
│   │   ├── src/agent/      # Agent execution logic
│   │   ├── src/llm/        # LLM client implementations
│   │   ├── src/tools/      # Tool system
│   │   └── src/ui/         # Terminal UI components
│   ├── sage-cli/           # Command-line interface
│   ├── sage-sdk/           # High-level SDK
│   └── sage-tools/         # Built-in tools collection
├── docs/                   # Comprehensive documentation
│   ├── user-guide/         # User documentation
│   ├── development/        # Developer documentation
│   ├── architecture/       # System architecture docs
│   ├── api/                # API reference
│   └── planning/           # Project planning and roadmap
├── examples/               # Usage examples
├── trajectories/           # Execution trajectory files (gitignored)
├── configs/                # Configuration templates and examples
└── Cargo.toml              # Workspace configuration
```
- Code Generation: Create files, functions, and entire modules
- Code Analysis: Understand and document existing codebases
- Refactoring: Modernize and improve code structure
- Testing: Generate and run test suites
- Documentation: Create comprehensive project documentation
- Automation: Automate repetitive development tasks
Sage Agent supports flexible configuration through JSON files and environment variables:
```json
{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "base_url": "https://api.openai.com/v1"
    },
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "base_url": "https://api.anthropic.com"
    }
  },
  "default_provider": "openai",
  "model_parameters": {
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 4000
  },
  "max_steps": 20,
  "working_directory": ".",
  "ui": {
    "enable_animations": true,
    "markdown_rendering": true
  },
  "trajectory": {
    "enabled": false,
    "directory": "trajectories",
    "auto_save": true,
    "save_interval_steps": 5
  }
}
```

Comprehensive documentation is available in the `docs/` directory:
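The `${OPENAI_API_KEY}` placeholders in the config are expanded from environment variables at load time. A sketch of that substitution (illustrative, not the actual sage-core loader, which may handle embedded placeholders and missing-variable errors differently):

```rust
// Expand a value of the form "${NAME}" using the process environment.
// In this sketch, plain values pass through and unset variables expand
// to an empty string.
use std::env;

fn expand_env(value: &str) -> String {
    if let Some(name) = value.strip_prefix("${").and_then(|s| s.strip_suffix('}')) {
        env::var(name).unwrap_or_default()
    } else {
        value.to_string()
    }
}

fn main() {
    // Plain values are returned unchanged.
    assert_eq!(expand_env("plain-value"), "plain-value");
    // Unset variables expand to empty in this sketch.
    assert_eq!(expand_env("${SAGE_DOC_UNSET_VAR}"), "");
}
```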
- User Guide - Installation, configuration, and usage
- Development Guide - Contributing and development setup
- Architecture Documentation - System design and architecture
- API Reference - Detailed API documentation
- Planning & Roadmap - Project roadmap and TODO lists
- Getting Started - New user guide
- Contributing Guide - How to contribute
- TODO Lists - Current development priorities
- MCP Integration Plan - Model Context Protocol support
- Documentation Consistency - Maintaining doc consistency
Import Errors:

```bash
# Try setting RUST_LOG
RUST_LOG=debug cargo run
```

API Key Issues:

```bash
# Verify API keys are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY

# Check configuration
sage --show-config
```

Permission Errors:

```bash
# Ensure proper permissions for file operations
chmod +x /path/to/your/project
```

Environment variables:

- `OPENAI_API_KEY`: OpenAI API key
- `ANTHROPIC_API_KEY`: Anthropic API key
- `GOOGLE_API_KEY`: Google Gemini API key
- `OPENROUTER_API_KEY`: OpenRouter API key
- Follow Rust official code style guidelines
- Add tests for new features
- Update documentation as needed
- Use appropriate type annotations
- Ensure all tests pass before committing
During our development process, we conducted extensive analysis comparing Rust vs TypeScript for AI agent development. Here are our key findings:
| Aspect | Rust | TypeScript | Winner |
|---|---|---|---|
| Concurrent Tool Execution | ~120ms | ~150ms | Rust (+25%) |
| LLM API Calls | 1-5 seconds | 1-5 seconds | Tie |
| Overall Agent Response | 1.12-5.12s | 1.15-5.15s | Negligible difference |
Key Insight: In AI agent scenarios, network I/O dominates performance, making language choice less critical.
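That insight can be checked with the table's own numbers: a 30 ms gap in tool execution is a small fraction of a response dominated by a 1-5 s LLM call.

```rust
// Back-of-envelope check using the benchmark figures above: 120 ms vs
// 150 ms tool execution, added to a 1 s or 5 s LLM round trip.
fn main() {
    let (rust_tools_ms, ts_tools_ms) = (120.0_f64, 150.0_f64);
    for llm_ms in [1000.0_f64, 5000.0] {
        let rust_total = llm_ms + rust_tools_ms;
        let ts_total = llm_ms + ts_tools_ms;
        let overhead_pct = (ts_total - rust_total) / rust_total * 100.0;
        println!("LLM {llm_ms} ms: end-to-end gap is {overhead_pct:.1}%");
    }
    // Worst case (1 s call): 30 / 1120, well under 3% end to end,
    // despite the 25% gap at the tool-execution level.
    assert!((1150.0 - 1120.0) / 1120.0 * 100.0 < 3.0);
}
```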
| Factor | Rust | TypeScript |
|---|---|---|
| Ecosystem Richness | Limited AI libraries | Rich AI/LLM ecosystem |
| Development Speed | Slower (compile times) | Faster (hot reload) |
| Team Onboarding | Steep learning curve | Familiar to most devs |
| Debugging | Complex (async + FFI) | Straightforward |
| Deployment | Complex (cross-platform) | Simple (Node.js) |
What Worked Well in Rust:

- Concurrent tool execution: Excellent async/await patterns
- Type safety: Compile-time guarantees prevented many bugs
- Memory efficiency: Zero-cost abstractions
- Clean architecture: Forced good design patterns

What Was Challenging:

- UI Integration: Complex FFI for terminal UI
- Ecosystem gaps: Missing AI-specific libraries
- Build complexity: Cross-platform compilation issues
- Development velocity: Slower iteration cycles
For AI Agent development, we recommend:
- TypeScript/Node.js for rapid prototyping and rich AI ecosystem
- Rust for performance-critical components (if needed)
- Hybrid approach for the best of both worlds
This Rust implementation demonstrates:
- Advanced concurrent programming patterns
- Clean architecture in systems programming
- Modern async Rust techniques
- Terminal UI development with React + Ink
Note: While this project is archived, we welcome discussions and learning exchanges! Please see our contributing guidelines for historical context.
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This Rust implementation maintains compatibility with the MIT License of the original Trae Agent project.
- Original Inspiration: This project is based on Trae Agent by ByteDance - a pioneering LLM-based agent for software engineering tasks
- Partial Inspiration: Augment Code - Advanced AI code assistant and context engine, providing valuable reference for agent tool system design
- Architecture Insights: Gemini CLI - Excellent reference for TypeScript-based AI agent architecture
- Built with Rust and modern async patterns
- Powered by leading LLM providers (Google, Anthropic, OpenAI, etc.)
- Inspired by the open-source community's commitment to intelligent development automation
- Special thanks to the Trae Agent contributors and maintainers for their foundational work
- Appreciation to the Augment Code team for their innovative work in AI-assisted development
- Gratitude to the Rust community for excellent async programming patterns and tools
This archived project serves as a comprehensive example of:
- Modern Rust development with async/await patterns
- Concurrent tool execution in AI agent systems
- Terminal UI development with React + Ink integration
- Clean architecture principles in systems programming
- LLM integration patterns and best practices
Sage Agent - A learning journey in AI agent architecture.
"Every archived project teaches us something valuable for the next one."