
HelloAgents V0.2.8 Release Notes

📅 Release Date: 2025-10-26
📦 Package: pip install hello-agents>=0.2.8
🔗 GitHub: https://github.com/jjyaoao/HelloAgents
📚 Documentation: https://github.com/jjyaoao/HelloAgents/tree/main/docs


🎯 Overview

HelloAgents V0.2.8 introduces enhanced tool observability and flexible LLM provider support, enabling developers to build more sophisticated agentic applications with better monitoring and deployment flexibility. This release implements key features from Chapter 14 (Deep Research Agent), providing improved tool call tracking and custom provider integration.

✨ Core Features

  • 🔍 ToolAwareSimpleAgent: Enhanced SimpleAgent with tool call monitoring and event callbacks
  • 🌐 Custom Provider Support: Connect to any OpenAI-compatible API endpoint
  • 🛠️ SearchTool Enhancements: Improved multi-backend search capabilities
  • 📊 Tool Call Tracking: Built-in tool execution history and event listeners
  • 🔄 Backward Compatible: All existing code continues to work seamlessly

🔧 Installation & Upgrade

New Installation

# Basic installation
pip install hello-agents>=0.2.8

# With search functionality
pip install hello-agents[search]

# With all features
pip install hello-agents[all]

Upgrade from Previous Versions

# Upgrade to latest version
pip install --upgrade hello-agents

# Or specify version
pip install hello-agents==0.2.8

Verify Installation

import hello_agents
print(hello_agents.__version__)  # Should print: 0.2.8

🆕 What's New

1. ToolAwareSimpleAgent - Tool Call Monitoring ✨

Overview: A new agent class that extends SimpleAgent with tool call observation capabilities, perfect for building applications that need to track and respond to tool executions.

Key Features:

  • Event Callbacks: Register listeners for tool call events
  • Execution History: Automatic tracking of all tool calls
  • Real-time Monitoring: React to tool executions as they happen
  • Zero Overhead: Same performance as SimpleAgent when no callback is registered

Quick Start:

from hello_agents import ToolAwareSimpleAgent, HelloAgentsLLM
from hello_agents.tools import SearchTool, ToolRegistry

# Create LLM and tools
llm = HelloAgentsLLM(model="gpt-4")
search_tool = SearchTool(backend="tavily")

# Create tool registry
registry = ToolRegistry()
registry.register_tool(search_tool)

# Define callback function
def on_tool_call(tool_name, arguments, result):
    print(f"🔧 Tool Called: {tool_name}")
    print(f"📥 Arguments: {arguments}")
    print(f"📤 Result: {result[:100]}...")  # First 100 chars

# Create ToolAwareSimpleAgent with callback
agent = ToolAwareSimpleAgent(
    name="Research Assistant",
    system_prompt="You are a helpful research assistant.",
    llm=llm,
    tool_registry=registry,
    tool_call_callback=on_tool_call  # Register callback
)

# Use agent normally
response = agent.run("What is the latest news about AI?")

Output:

🔧 Tool Called: search
📥 Arguments: {'input': 'latest AI news', 'backend': 'tavily'}
📤 Result: 🔍 Search Results: [1] OpenAI releases GPT-5...

Use Cases:

  • Progress Tracking: Show users what the agent is doing
  • Debugging: Monitor tool calls during development
  • Logging: Record all tool executions for analysis
  • UI Updates: Update frontend in real-time (e.g., Chapter 14 Deep Research UI)

Advanced Example - Multiple Callbacks:

from datetime import datetime

# Track tool call history
tool_history = []

def track_history(tool_name, arguments, result):
    tool_history.append({
        "tool": tool_name,
        "args": arguments,
        "result": result,
        "timestamp": datetime.now()
    })

def log_to_file(tool_name, arguments, result):
    with open("tool_calls.log", "a") as f:
        f.write(f"{datetime.now()}: {tool_name}({arguments})\n")

# Combine callbacks
def combined_callback(tool_name, arguments, result):
    track_history(tool_name, arguments, result)
    log_to_file(tool_name, arguments, result)

agent = ToolAwareSimpleAgent(
    name="Monitored Agent",
    llm=llm,
    tool_registry=registry,
    tool_call_callback=combined_callback
)
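
After a run, the history the callbacks collected can be replayed or summarized directly from the list built above, for example:

response = agent.run("What is the latest news about AI?")
for entry in tool_history:
    print(f"{entry['timestamp']:%H:%M:%S} {entry['tool']}({entry['args']})")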

2. Custom Provider Support - Flexible LLM Integration 🌐

Overview: HelloAgentsLLM now supports a custom provider type, allowing you to connect to any OpenAI-compatible API endpoint without a predefined provider configuration.

Key Features:

  • Universal Compatibility: Works with any OpenAI-compatible API
  • Flexible Configuration: Custom base_url and api_key
  • No Code Changes: Same HelloAgentsLLM interface
  • Perfect for: Local deployments, custom endpoints, proxy services

Quick Start:

from hello_agents import HelloAgentsLLM

# Connect to custom endpoint
llm = HelloAgentsLLM(
    provider="custom",
    model="your-model-name",
    base_url="https://your-api-endpoint.com/v1",
    api_key="your-api-key"
)

# Use normally
response = llm.chat("Hello, world!")
print(response)

Common Use Cases:

1. Local LLM Deployment (Ollama):

llm = HelloAgentsLLM(
    provider="custom",
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama"  # Ollama doesn't require real key
)
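
Before wiring the agent to a local endpoint, it can help to confirm the server is actually reachable. A minimal sketch using only the standard library; it assumes the endpoint exposes the usual OpenAI-compatible /models listing route:

import json
import urllib.request

def list_models(base_url="http://localhost:11434/v1"):
    """Return model IDs from an OpenAI-compatible endpoint (assumes the standard /models route)."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        return [m["id"] for m in json.load(resp)["data"]]

print(list_models())  # e.g. ['llama3.2', ...]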

2. LMStudio:

llm = HelloAgentsLLM(
    provider="custom",
    model="qwen-qwq-32b",
    base_url="http://localhost:1234/v1",
    api_key="lm-studio"
)

3. Custom Proxy Service:

llm = HelloAgentsLLM(
    provider="custom",
    model="gpt-4",
    base_url="https://your-proxy.com/v1",
    api_key="your-proxy-key"
)

4. Enterprise Internal API:

import os

llm = HelloAgentsLLM(
    provider="custom",
    model="company-llm-v1",
    base_url="https://internal-api.company.com/v1",
    api_key=os.getenv("COMPANY_API_KEY")
)

Environment Variable Configuration:

# .env file
LLM_PROVIDER=custom
LLM_MODEL_ID=your-model-name
LLM_BASE_URL=https://your-api-endpoint.com/v1
LLM_API_KEY=your-api-key

# Load from environment
from hello_agents import HelloAgentsLLM

llm = HelloAgentsLLM()  # Automatically loads from .env
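
If you prefer to load the file explicitly (for example when running scripts outside the project root), the python-dotenv package works as usual; this sketch assumes python-dotenv is installed:

from dotenv import load_dotenv
from hello_agents import HelloAgentsLLM

load_dotenv()  # loads .env from the current/parent directories; pass a path to be explicit
llm = HelloAgentsLLM()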

Supported Providers (Updated):

| Provider | Type | Auto-detect | Custom Config |
|----------|------|-------------|---------------|
| ModelScope | Cloud | ✅ | |
| OpenAI | Cloud | ✅ | |
| DeepSeek | Cloud | ✅ | |
| 通义千问 (Qwen) | Cloud | ✅ | |
| Kimi | Cloud | ✅ | |
| 智谱GLM (Zhipu GLM) | Cloud | ✅ | |
| Ollama | Local | ✅ | |
| vLLM | Local | ✅ | |
| Custom | Any | | ✅ |

3. SearchTool File Reorganization 📁

Overview: SearchTool has been renamed from search.py to search_tool.py for better consistency with other tool naming conventions.

Changes:

  • ✅ File renamed: search.py → search_tool.py
  • ✅ All imports updated automatically
  • ✅ Public API unchanged
  • ✅ Backward compatible

Import Paths (No changes needed):

# All these still work
from hello_agents.tools import SearchTool
from hello_agents import SearchTool
from hello_agents.tools.builtin.search_tool import SearchTool  # New path

Why This Change?

  • Consistent naming: search_tool.py, note_tool.py, rag_tool.py
  • Better code organization
  • Clearer module structure

🏗️ Architecture Improvements

Tool System Enhancements

Before (V0.2.7):

SimpleAgent
├── Tool Registry
└── Tools (no observability)

After (V0.2.8):

ToolAwareSimpleAgent
├── Tool Registry
├── Tools
└── Tool Call Callbacks
    ├── Event Listeners
    ├── Execution History
    └── Real-time Monitoring
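
Conceptually, the new layer is an observer hook around tool execution. The sketch below is illustrative only; the names and the execute signature are assumptions, not the library's actual internals:

from typing import Callable, Optional

ToolCallback = Callable[[str, dict, str], None]  # (tool_name, arguments, result)

class ObservedExecutor:
    """Illustrative wrapper: record every tool call and notify a listener."""

    def __init__(self, execute_fn, callback: Optional[ToolCallback] = None):
        self._execute = execute_fn     # the underlying tool-execution function
        self._callback = callback
        self.history: list[dict] = []  # execution history, newest last

    def __call__(self, tool_name: str, arguments: dict) -> str:
        result = self._execute(tool_name, arguments)
        self.history.append({"tool": tool_name, "args": arguments, "result": result})
        if self._callback is not None:
            self._callback(tool_name, arguments, result)
        return result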

LLM Provider System

Before (V0.2.7):

# Limited to predefined providers
SUPPORTED_PROVIDERS = [
    "modelscope", "openai", "deepseek",
    "dashscope", "kimi", "zhipu",
    "ollama", "vllm"
]

After (V0.2.8):

# Now includes custom provider
SUPPORTED_PROVIDERS = [
    "modelscope", "openai", "deepseek",
    "dashscope", "kimi", "zhipu",
    "ollama", "vllm", "custom"  # ✨ New
]
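
One practical consequence: "custom" has no preset to fall back on, so base_url and api_key must come from arguments or the environment. A small guard in your own bootstrap code makes the failure mode obvious (the variable names mirror the .env keys shown above):

import os

from hello_agents import HelloAgentsLLM

def make_llm() -> HelloAgentsLLM:
    if os.getenv("LLM_PROVIDER") == "custom" and not os.getenv("LLM_BASE_URL"):
        raise RuntimeError("provider='custom' requires LLM_BASE_URL (and LLM_API_KEY)")
    return HelloAgentsLLM()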

📚 Real-World Example: Deep Research Agent (Chapter 14)

Overview

The Deep Research Agent from Chapter 14 demonstrates the power of V0.2.8 features, combining ToolAwareSimpleAgent with custom provider support for a production-ready research assistant.

Architecture:

Deep Research Agent
├── Frontend (Vue 3)
│   └── Real-time Progress Display
│
├── Backend (FastAPI)
│   ├── DeepResearchAgent (ToolAwareSimpleAgent)
│   ├── SearchTool (Multi-backend)
│   ├── NoteTool (Progress tracking)
│   └── SSE Streaming
│
└── LLM (Custom Provider)
    ├── Ollama (Local)
    ├── LMStudio (Local)
    └── Cloud APIs (OpenAI, DeepSeek, etc.)

Key Features Enabled by V0.2.8:

  1. Tool Call Monitoring: Track search and note-taking in real-time
  2. Custom Provider: Support local and cloud LLMs seamlessly
  3. Progress Streaming: Update UI as research progresses

Code Snippet:

from hello_agents import ToolAwareSimpleAgent, HelloAgentsLLM
from hello_agents.tools import SearchTool, NoteTool, ToolRegistry

# Custom provider for local LLM
llm = HelloAgentsLLM(
    provider="custom",
    model="qwen-qwq-32b",
    base_url="http://localhost:1234/v1",
    api_key="lm-studio"
)

# Create tools
search_tool = SearchTool(backend="hybrid")
note_tool = NoteTool(workspace="./notes")

# Register tools
registry = ToolRegistry()
registry.register_tool(search_tool)
registry.register_tool(note_tool)

# Tool call callback for progress tracking
def on_tool_call(tool_name, arguments, result):
    # Send progress to frontend via SSE
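    # (send_sse_event is an application-defined helper, e.g. pushing onto an
    # SSE queue in the FastAPI backend; it is not part of hello-agents)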
    send_sse_event({
        "type": "tool_call",
        "tool": tool_name,
        "args": arguments,
        "result": result
    })

# Create research agent
agent = ToolAwareSimpleAgent(
    name="Deep Research Agent",
    system_prompt="You are a research assistant...",
    llm=llm,
    tool_registry=registry,
    tool_call_callback=on_tool_call  # ✨ V0.2.8 feature
)

# Run research
response = agent.run("Research: What is Datawhale?")

🔄 Migration Guide

From V0.2.7 to V0.2.8

No Breaking Changes: all existing code works without modification!

Optional Enhancements:

1. Add Tool Call Monitoring:

# Before (V0.2.7)
agent = SimpleAgent(name="Agent", llm=llm, tool_registry=registry)

# After (V0.2.8) - Optional enhancement
agent = ToolAwareSimpleAgent(
    name="Agent",
    llm=llm,
    tool_registry=registry,
    tool_call_callback=lambda tool, args, result: print(f"Called: {tool}")
)

2. Use Custom Provider:

# Before (V0.2.7) - Manual configuration
llm = HelloAgentsLLM(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama"
)

# After (V0.2.8) - Explicit custom provider
llm = HelloAgentsLLM(
    provider="custom",  # ✨ More explicit
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama"
)

💡 Best Practices

When to Use ToolAwareSimpleAgent

| Scenario | Use ToolAwareSimpleAgent? | Reason |
|----------|---------------------------|--------|
| Production Apps | ✅ Yes | Better observability |
| Debugging | ✅ Yes | Track tool calls |
| UI Integration | ✅ Yes | Real-time updates |
| Simple Scripts | ❌ No | SimpleAgent is simpler |
| Performance Critical | ❌ No | Tiny overhead from callbacks |
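
One more practical note: the callback runs inline with the agent's tool loop, and whether the library guards against exceptions raised inside callbacks is not documented here, so a defensive wrapper is cheap insurance that a logging failure cannot abort a run (reusing llm, registry, and on_tool_call from the examples above):

def safe(callback):
    """Wrap a tool callback so its own errors are reported but never propagate."""
    def wrapped(tool_name, arguments, result):
        try:
            callback(tool_name, arguments, result)
        except Exception as exc:  # broad on purpose: this is only a monitoring hook
            print(f"⚠️ tool callback failed: {exc}")
    return wrapped

agent = ToolAwareSimpleAgent(
    name="Agent",
    llm=llm,
    tool_registry=registry,
    tool_call_callback=safe(on_tool_call),
)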

Custom Provider Configuration

Best Practices:

# ✅ Good: Use environment variables
import os

llm = HelloAgentsLLM(
    provider="custom",
    model=os.getenv("LLM_MODEL_ID"),
    base_url=os.getenv("LLM_BASE_URL"),
    api_key=os.getenv("LLM_API_KEY")
)

# ❌ Avoid: Hardcoded credentials
llm = HelloAgentsLLM(
    provider="custom",
    model="gpt-4",
    base_url="https://api.openai.com/v1",
    api_key="sk-hardcoded-key"  # Security risk!
)

📊 Performance & Compatibility

Performance Impact

| Feature | Overhead | Notes |
|---------|----------|-------|
| ToolAwareSimpleAgent | <1% | Negligible when no callback is set |
| Custom Provider | 0% | Same as predefined providers |
| Tool Call Tracking | <5% | Only when the callback itself is heavy |

Compatibility

  • Python: 3.10+ (unchanged)
  • Dependencies: No new required dependencies
  • Backward Compatibility: 100% compatible with V0.2.7

📞 Support & Resources

  • GitHub: https://github.com/jjyaoao/HelloAgents
  • Documentation: https://github.com/jjyaoao/HelloAgents/tree/main/docs

🙏 Acknowledgments

Special thanks to:

  • Datawhale Community for continuous support
  • Chapter 14 Contributors for testing and feedback
  • All Users who reported issues and suggested improvements

Happy Building with HelloAgents V0.2.8! 🚀✨