
$\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment


Figure 1: τ²-bench allows users to interact with both the agent and the environment.
Figure 2: Trajectory of a conversation between an agent and a user.

🆕 What's New

🏆 Live Leaderboard (v0.2.0)

The τ²-bench leaderboard is now live at taubench.com!

  • 📊 Interactive Rankings: Compare model performance across all domains
  • 📱 Mobile-Friendly: View results on any device
  • 🔍 Detailed Analysis: Explore trajectories and conversation flows
  • 📥 Easy Submission: Submit your results directly through the interface

→ Visit the Leaderboard | → Submit Your Results

Overview

$\tau^2$-bench implements a simulation framework for evaluating customer service agents across various domains.

$\tau^2$-bench is the new iteration of the original $\tau$-bench, featuring code fixes and an additional telecom domain.

Each domain specifies:

  • a policy that the agent must follow
  • a set of tools that the agent can use
  • a set of tasks to evaluate the agent's performance
  • optionally, a set of tools that the user simulator can use
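Conceptually, a domain bundles these pieces together. A purely illustrative Python sketch (this is not the actual tau2 class; see src/tau2/domains/ for the real definitions):

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DomainSpec:  # hypothetical illustration only, not the tau2 API
    policy: str                       # natural-language policy the agent must follow
    agent_tools: dict[str, Callable]  # tools the agent can call
    tasks: list[dict]                 # evaluation tasks with success criteria
    user_tools: dict[str, Callable] = field(default_factory=dict)  # optional user-simulator tools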

The available domains are:

  • mock
  • airline
  • retail
  • telecom

All the information that an agent developer needs to build an agent for a domain can be accessed through the domain's API docs. See the View domain documentation section below for details.

Installation

  1. Clone the repository:

git clone https://github.com/sierra-research/tau2-bench
cd tau2-bench

  2. Create a new environment (optional):

$\tau^2$-bench requires Python 3.10 or higher. You may create and activate a new environment:

python -m venv .venv
source .venv/bin/activate

  3. Install tau2:

pip install -e .

This will enable you to run the tau2 command.

Note: If you use pip install . (without -e), you'll need to set the TAU2_DATA_DIR environment variable to point to your data directory:

export TAU2_DATA_DIR=/path/to/your/tau2-bench/data

Check your data directory setup:

After installation, you can verify that your data directory is correctly configured by running:

tau2 check-data

This command will check if the data directory exists and print instructions if it is missing.

To remove all the generated files and the virtual environment, run:

make clean

Quick Start

Setup LLM API keys

We use LiteLLM to manage LLM APIs, so you can use any LLM provider supported by LiteLLM.

To provide your API keys, copy .env.example to .env and edit it to include your keys.
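For example, a minimal .env for OpenAI and Anthropic models (LiteLLM reads the standard provider environment variables; add whichever keys your provider requires):

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...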

Run agent evaluation

To run a test evaluation on only 5 tasks with 1 trial per task, run:

tau2 run \
  --domain airline \
  --agent-llm gpt-4.1 \
  --user-llm gpt-4.1 \
  --num-trials 1 \
  --num-tasks 5

Results will be saved in data/tau2/simulations/.
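Each run produces a JSON file there. Since the result schema isn't documented in this README, a cautious way to explore it is to load a file and inspect its top-level structure rather than assume field names:

import glob
import json

# Print the top-level structure of each saved run without assuming its schema.
for path in glob.glob("data/tau2/simulations/*.json"):
    with open(path) as f:
        run = json.load(f)
    top = list(run.keys()) if isinstance(run, dict) else type(run).__name__
    print(path, "->", top)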

Command Line Interface

The tau2 command provides a unified interface for all functionality:

Running Benchmark

tau2 run \
  --domain <domain> \
  --agent-llm <llm_name> \
  --user-llm <llm_name> \
  --num-trials <trial_count> \
  --task-ids <task_ids> \
  --max-concurrency <concurrent_sims> \
  ...

Viewing Results

tau2 view

This tool allows you to:

  • Browse simulation files (in data/tau2/simulations/)
  • View agent performance metrics
  • View a particular simulation
  • View task details

View domain documentation

tau2 domain <domain>

Visit http://127.0.0.1:8004/redoc to see the domain policy and API documentation.

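The /redoc path suggests a FastAPI-style docs server, which by convention also exposes a machine-readable schema at /openapi.json; that endpoint is an assumption, not documented tau2 behavior. If it is present, you can list the domain's endpoints programmatically:

import requests  # pip install requests

# Fetch the OpenAPI schema and list the available routes.
schema = requests.get("http://127.0.0.1:8004/openapi.json").json()
for route in sorted(schema.get("paths", {})):
    print(route)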

Check data configuration

tau2 check-data

This command checks if your data directory is properly configured and all required files are present.

Leaderboard Submission

To submit your agent results to the τ²-bench leaderboard, you need to prepare a valid submission package that meets specific requirements.

Requirements for Valid Submissions

Your trajectory runs must follow these constraints:

  1. Complete domain coverage: Include results for all three domains:

    • retail
    • airline
    • telecom
  2. Consistent model configuration: All trajectory files must use:

    • The same agent LLM with identical arguments across all domains
    • The same user simulator LLM with identical arguments across all domains
  3. One result per domain: Each domain should appear exactly once in your submission

  4. All tasks completed: Run evaluation on all tasks within each domain (don't use --task-ids or --num-tasks filters)

Preparing Your Submission

Step 1: Run Evaluations

First, run your agent evaluation on all domains with consistent settings:

# Example: Run complete evaluation for all domains
tau2 run --domain retail --agent-llm gpt-4.1 --user-llm gpt-4.1 --num-trials 4 --save-to my_model_retail
tau2 run --domain airline --agent-llm gpt-4.1 --user-llm gpt-4.1 --num-trials 4 --save-to my_model_airline  
tau2 run --domain telecom --agent-llm gpt-4.1 --user-llm gpt-4.1 --num-trials 4 --save-to my_model_telecom

Important: Use identical --agent-llm, --user-llm, and their arguments across all runs.

Step 2: Prepare Submission Package

Use the submission preparation tool to create your leaderboard submission:

tau2 submit prepare data/tau2/simulations/my_model_*.json --output ./my_submission

This command will:

  • Verify all trajectory files are valid
  • Check that submission requirements are met
  • Compute performance metrics (Pass^k rates)
  • Prompt for required metadata (model name, organization, contact email)
  • Create a structured submission directory with:
    • submission.json: Metadata and metrics
    • trajectories/: Your trajectory files

Step 3: Validate Your Submission

Before submitting, validate your submission package:

tau2 submit validate ./my_submission

This will verify:

  • All required files are present
  • Trajectory files are valid
  • Domain coverage is complete
  • Model configurations are consistent

Additional Options

Skip Verification (if needed)

tau2 submit prepare data/tau2/simulations/my_model_*.json --output ./my_submission --no-verify

Verify Individual Trajectory Files

tau2 submit verify-trajs data/tau2/simulations/my_model_*.json

Submitting to the Leaderboard

Once your submission package is prepared and validated:

  1. Review the generated submission.json file
  2. Follow the submission guidelines in web/leaderboard/public/submissions/README.md to create a Pull Request
  3. Keep your trajectories/ directory for reference

The leaderboard will display your model's Pass^k success rates (k=1,2,3,4) across all domains.
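Pass^k estimates the probability that k independent trials of the same task all succeed. A sketch of the standard unbiased estimator used in the τ-bench line of work, where a task with c successful trials out of n contributes C(c, k) / C(n, k):

from math import comb

def pass_hat_k(successes_per_task: list[int], n: int, k: int) -> float:
    """Average over tasks of C(c, k) / C(n, k): the chance that k trials
    sampled without replacement from the n recorded trials all succeeded."""
    return sum(comb(c, k) / comb(n, k) for c in successes_per_task) / len(successes_per_task)

# Example: 3 tasks, 4 trials each, with 4, 2, and 3 successful trials.
print(pass_hat_k([4, 2, 3], n=4, k=2))  # ~0.556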

Experiments

Running Ablation Studies (No User, or Agent with Oracle Plan)

The telecom domain supports ablation studies.

  1. Running an LLM in no-user mode. In this mode, the LLM is given all the tools and the information upfront. Just choose llm_agent_solo as the agent and dummy_user as the user.
tau2 run \
  --domain telecom \
  --agent llm_agent_solo \
  --agent-llm gpt-4.1 \
  --user dummy_user \
  ...
  2. Running an LLM in oracle-plan mode. In this mode, the LLM is given an oracle plan ahead of time, alleviating the need for action planning. Just choose llm_agent_gt as the agent.
tau2 run \
  --domain telecom \
  --agent llm_agent_gt \
  --agent-llm gpt-4.1 \
  --user-llm gpt-4.1 \
  ...

Running Telecom Domain with Workflow Policy

To test the impact of policy format, we provide an additional "workflow" policy for the telecom domain. To run using this policy, use the telecom-workflow domain.

tau2 run \
  --domain telecom-workflow \
  --agent-llm gpt-4.1 \
  --user-llm gpt-4.1 \
  ...

Domains

For full details, see the domains README.

Basics

  • Code is located in src/tau2/domains/
  • Data is located in data/tau2/domains/
  • Each domain has its own configuration and task definitions

View domain-specific policy and API docs:

Run the following command to see the domain policy and API documentation.

tau2 env <domain>

Then visit http://127.0.0.1:8004/redoc

Environment CLI (beta)

An interactive command-line interface for directly querying and testing domain environments. Features:

  • Interactive query interface with domain-specific tools
  • Support for multiple domains (airline, mock, etc.)
  • Session management with history

To use:

make env-cli

Available commands:

  • :q - quit the program
  • :d - change domain
  • :n - start new session (clears history)

Example usage:

$ make env-cli

Welcome to the Environment CLI!
Connected to airline domain.

Query (:n new session, :d change domain, :q quit)> What flights are available from SF to LA tomorrow?
Assistant: Let me check the flight availability for you...
[Flight details will appear here]

The Environment CLI is useful for:

  • Testing domain tools and queries
  • Debugging environment responses
  • Exploring available domain functionality
  • Quick domain interaction without starting the full server stack

Run tests

To run the test suite, run:

make test

Config

To configure the framework, see the config file.

LLM Call Caching

LLM call caching is disabled by default.

To enable LLM call caching:

  • Make sure Redis is running.
  • Update the Redis config in config.py if necessary.
  • Set LLM_CACHE_ENABLED to True in config.py.
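To confirm Redis is reachable before flipping the flag, a quick check (assuming the stock localhost:6379 defaults; match whatever is set in config.py):

import redis  # pip install redis

# ping() returns True when the Redis server is up and reachable.
client = redis.Redis(host="localhost", port=6379)
print(client.ping())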

Evaluate Your Own Agent

For local or remote agent evaluation, see our agent developer guide.

Orchestration Sequence Diagram

sequenceDiagram
    participant O as Orchestrator
    participant A as Agent
    participant U as UserSimulator
    participant E as Environment

    Note over O: Initialize(task)
    rect rgb(100, 150, 150)
        O->>A: get_init_state_info(message_history)
        A->>O: agent_state_info
        O->>U: get_init_state_info(message_history)
        U->>O: user_state_info
        O->>E: set_state(initialization_data, initialization_actions, message_history)
    end
    Note over O: Start simulation
    loop Pass messages between Agent, User, and Environment

        alt Agent/Env to User
            rect rgb(200, 150, 150)
            O->>U: generate_next_message(msg, user_state_info)
            U-->>O: (user_msg, user_state_info)
            end
            Note over O: Check if user_msg is STOP
        else User/Env to Agent
            rect rgb(100, 200, 100)
            O->>A: generate_next_message(msg, agent_state_info)
            A-->>O: (assistant_msg, agent_state_info)
            Note over O: Check if too many errors
            end
        else User/Agent to Environment
            rect rgb(150, 150, 200)
            O->>E: get_response(tool_call)
            E-->>O: tool_message
            end
        end
        Note over O: Check if max turns reached.
    end
    Note over O: Return simulation run

Citation

@misc{barres2025tau2,
      title={$\tau^2$-Bench: Evaluating Conversational Agents in a Dual-Control Environment}, 
      author={Victor Barres and Honghua Dong and Soham Ray and Xujie Si and Karthik Narasimhan},
      year={2025},
      eprint={2506.07982},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.07982}, 
}
