Conversation

@elasticdotventures

No description provided.

@changeset-bot

changeset-bot bot commented Jun 14, 2025

⚠️ No Changeset found

Latest commit: c2f12c2

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

@elasticdotventures
Author

OPENAI_API_BASE Implementation Report

Executive Summary

This report details the implementation of OPENAI_API_BASE support in the task-master codebase. The implementation enhances the flexibility of the system by allowing developers to connect to alternative OpenAI-compatible API endpoints, improving compatibility with various deployment environments and third-party services.

1. Initial State Overview

Prior to this implementation, the task-master codebase had limited flexibility for OpenAI API endpoints:

  • The system used hardcoded API endpoints for OpenAI services (https://api.openai.com)
  • While Azure OpenAI support existed through a separate configuration mechanism, there was no general solution for other OpenAI-compatible endpoints
  • Developers who wanted to use self-hosted or alternative OpenAI-compatible services had to modify the code directly

This created limitations for users who needed to:

  • Connect to private OpenAI instances
  • Use compatible third-party proxy services
  • Deploy in environments with specific networking requirements

2. Implementation Changes

The following changes were made to support custom OpenAI API endpoints:

  1. Added OPENAI_API_BASE environment variable support in the environment configuration
  2. Extended the configuration manager to store and retrieve custom OpenAI base URLs
  3. Modified the AI services module to respect this configuration with proper precedence rules
  4. Updated documentation and example files to inform developers of this new capability

3. Technical Approach

The implementation follows a clear precedence logic to determine which base URL to use for OpenAI API calls:

Environment Variable Resolution

// Special handling for OpenAI base URL from environment
if (providerName?.toLowerCase() === 'openai' && !baseURL) {
  const envBaseURL = resolveEnvVariable('OPENAI_API_BASE', session, effectiveProjectRoot);
  if (envBaseURL) {
    baseURL = envBaseURL;
    log('debug', `Using OpenAI base URL from environment variable: ${baseURL}`);
  }
}

Configuration Precedence

When determining which base URL to use for OpenAI API calls, the system follows this order:

  1. Role-specific Base URL: First checks if there's a role-specific base URL defined in .taskmaster/config.json for the current role (main, research, fallback)
  2. Global OpenAI Base URL: If no role-specific URL exists, checks for a global OpenAI base URL in configuration
  3. Environment Variable: If neither configuration option exists, checks for the OPENAI_API_BASE environment variable
  4. Default Value: If none of these are found, uses the default OpenAI API endpoint

This approach is similar to how Azure OpenAI endpoints are handled, but generalized for any OpenAI-compatible service.

Integration with Existing Code

The implementation leverages existing utility functions:

  • resolveEnvVariable() for accessing environment variables with proper fallbacks
  • getOpenAIBaseURL() for retrieving the global configuration setting
  • getBaseUrlForRole() for retrieving role-specific settings
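A minimal sketch of how these helpers compose under the precedence rules above. The stub bodies and simplified signatures are illustrative assumptions, not the actual task-master implementations (the real `resolveEnvVariable`, for instance, also takes a session and project root, as shown in the snippet earlier):

```javascript
// Sketch only: the real helpers read .taskmaster/config.json and session
// state; plain objects stand in for them here so the sketch is runnable.
const DEFAULT_OPENAI_BASE_URL = 'https://api.openai.com/v1';

function getBaseUrlForRole(role, config) {
  // 1. Role-specific baseURL from models.<role> in config.json
  return config?.models?.[role]?.baseURL ?? null;
}

function getOpenAIBaseURL(config) {
  // 2. Global openaiBaseURL from the "global" section
  return config?.global?.openaiBaseURL ?? null;
}

function resolveEnvVariable(name, env) {
  // 3. Environment variable (signature simplified for this sketch)
  return env?.[name] ?? null;
}

function effectiveBaseURL(role, config, env) {
  return (
    getBaseUrlForRole(role, config) ??
    getOpenAIBaseURL(config) ??
    resolveEnvVariable('OPENAI_API_BASE', env) ??
    DEFAULT_OPENAI_BASE_URL // 4. Default OpenAI endpoint
  );
}
```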

4. Implementation Benefits

This implementation provides several key advantages:

  1. Enhanced Deployment Flexibility: Organizations can now deploy task-master in air-gapped environments using private API endpoints.

  2. Compatibility with Alternative Services: Support for third-party services that implement OpenAI-compatible APIs, including:

    • Local models (e.g., through LocalAI)
    • Self-hosted instances
    • Proxy services for rate limiting, caching, or cost management
  3. Developer Experience: Requires no code changes to use alternative endpoints - just simple configuration.

  4. Testing and Development: Makes testing against staging or development environments easier.

  5. Performance Optimization: Allows connecting to geographically closer endpoints for lower latency.

  6. Centralized Configuration: Provides consistent configuration patterns across different providers (OpenAI, Azure, etc.).

5. Developer Usage Guide

Setting a Custom OpenAI API Endpoint

There are multiple ways to configure a custom OpenAI API endpoint:

Option 1: Environment Variable (Preferred for Local Development)

Add to your .env file:

OPENAI_API_KEY="your_openai_api_key_here"
OPENAI_API_BASE="https://your-custom-endpoint.example.com"
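If you prefer exporting these in your shell rather than a .env file, the equivalent is:

```shell
# Same settings as shell exports (values are placeholders)
export OPENAI_API_KEY="your_openai_api_key_here"
export OPENAI_API_BASE="https://your-custom-endpoint.example.com"
```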

Option 2: Global Configuration

In .taskmaster/config.json:

{
  "global": {
    "openaiBaseURL": "https://your-custom-endpoint.example.com",
    // other global settings...
  },
  // remaining configuration...
}

Option 3: Role-Specific Configuration

For more advanced scenarios, configure different endpoints per role:

{
  "models": {
    "main": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseURL": "https://main-endpoint.example.com",
      // other settings...
    },
    "research": {
      "provider": "openai",
      "modelId": "gpt-4-turbo",
      "baseURL": "https://research-endpoint.example.com",
      // other settings...
    }
  }
}

Comparison with Azure Configuration

Unlike the Azure configuration, which requires both an API key and an endpoint, the OPENAI_API_BASE implementation:

  1. Works with your standard OpenAI API key
  2. Doesn't require Azure-specific authentication
  3. Simply redirects standard OpenAI API calls to your custom endpoint

Best Practices

  1. Always validate that your custom endpoint properly implements the OpenAI API specification
  2. Use HTTPS endpoints in production for security
  3. For environment-specific configurations, prefer environment variables over hardcoded URLs
  4. When troubleshooting connection issues, check logs for the detected base URL
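As a lightweight illustration of the first two practices, a preflight check could validate the configured URL before any request is made. This is a hypothetical helper, not part of the codebase; a follow-up GET to the endpoint's /models route would additionally confirm it speaks the OpenAI API, but is omitted to keep the sketch self-contained:

```javascript
// Hypothetical preflight validation for a custom base URL.
// Checks URL shape and protocol locally; does no network I/O.
function validateBaseURL(rawUrl, { requireHttps = true } = {}) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return { ok: false, reason: 'not a valid URL' };
  }
  if (requireHttps && url.protocol !== 'https:') {
    return { ok: false, reason: 'production endpoints should use HTTPS' };
  }
  return { ok: true };
}
```

Allowing `requireHttps: false` keeps the check usable against local development servers (e.g. a LocalAI instance on localhost).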

Conclusion

The implementation of OPENAI_API_BASE support enhances the flexibility and configurability of the task-master codebase, enabling a wider range of deployment scenarios while maintaining a clean, consistent configuration interface for developers. This change allows the system to better adapt to various infrastructure requirements without compromising its core functionality.

@Crunchyman-ralph
Collaborator

isn't baseURL inside config.json enough?

@Crunchyman-ralph Crunchyman-ralph changed the base branch from main to next July 9, 2025 11:08
@prime-optimal

For anyone finding this in the future who was struggling to get this working, this is what you have to do.

  • In config.json, change the provider of each model to "openai"
  • In config.json, for each provider, add a baseURL line for your custom OpenAI-compatible URL. In my case it was: "baseURL": "https://nano-gpt.com/api/v1"
  • Then edit your .env file and set OPENAI_API_KEY to your API key.

According to Claude Opus, each provider has its own API structure, so changing the base URL is not enough. Changing the provider to openai forces it to use the OpenAI-compatible structure.
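Concretely, that workaround amounts to a config.json along these lines (the nano-gpt URL is taken from the comment above; the model IDs are illustrative, and the API key goes in .env as OPENAI_API_KEY):

```json
{
  "models": {
    "main": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseURL": "https://nano-gpt.com/api/v1"
    },
    "research": {
      "provider": "openai",
      "modelId": "gpt-4o",
      "baseURL": "https://nano-gpt.com/api/v1"
    }
  }
}
```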

Collaborator

@Crunchyman-ralph Crunchyman-ralph left a comment

lgtm, does this imply changes in the .taskmaster/config.json? if so, we should probably update it in the readme, and eventually inside apps/docs (our documentation)

