# opencode-tama

Auto-discovers models from the Tama local AI server and provides OpenCode with model configuration.

## Features
- **Auto-detection**: Finds tama running on default ports (11434, 8080)
- **Model Discovery**: Queries `/tama/v1/opencode/models` for rich model metadata
- **Configuration Enhancement**: Adds model metadata (context limits, name, etc.)
- **Graceful Fallback**: Works even if tama is offline
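The auto-detection step can be sketched as a probe over the default ports. This is an illustrative sketch only; the function names (`discoveryURLs`, `detectTama`) and the timeout value are hypothetical, not the plugin's actual internals:

```typescript
// Hypothetical sketch of port auto-detection; the real plugin may differ.
const DEFAULT_PORTS = [11434, 8080];

// Build the discovery URL tried on each candidate port.
function discoveryURLs(ports: number[]): string[] {
  return ports.map((p) => `http://localhost:${p}/tama/v1/opencode/models`);
}

// Probe each URL until one answers; return the first reachable host, or null.
async function detectTama(ports: number[] = DEFAULT_PORTS): Promise<string | null> {
  for (const url of discoveryURLs(ports)) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(500) });
      if (res.ok) return url.replace("/tama/v1/opencode/models", "");
    } catch {
      // Port closed or timed out - try the next candidate.
    }
  }
  return null; // Graceful fallback: no tama found, plugin stays inert.
}
```

Returning `null` rather than throwing is what makes the graceful-fallback behavior possible: OpenCode keeps working even when tama is offline.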
## Installation

Add to your `opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-tama"]
}
```

Or install via npm:

```sh
npm install opencode-tama
```

Simply install the plugin - it will auto-detect tama and discover models.
## Configuration

If you want to use a custom tama instance:

```json
{
  "provider": {
    "tama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Tama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}
```

The plugin will still enhance this with auto-discovered models, merging them with any manually configured ones.
## Authentication

If your tama instance is gated behind a bearer token (e.g. a public endpoint fronted by a reverse proxy), set the token in one of two ways:

- `TAMA_TOKEN` environment variable (highest priority):

  ```sh
  export TAMA_TOKEN=your-token-here
  ```

- `apiKey` in your `opencode.json` provider options:

  ```json
  {
    "provider": {
      "tama": {
        "npm": "@ai-sdk/openai-compatible",
        "options": {
          "baseURL": "https://tama.example.com/v1",
          "apiKey": "your-token-here"
        }
      }
    }
  }
  ```

The token is sent as `Authorization: Bearer <token>` on both model discovery and inference requests. When unset, no auth header is sent (fine for localhost).
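The token precedence and header behavior described above can be sketched as two small helpers (the names `resolveToken` and `authHeaders` are hypothetical, not the plugin's API):

```typescript
// Hypothetical sketch: TAMA_TOKEN wins over apiKey from opencode.json.
interface ProviderOptions {
  baseURL?: string;
  apiKey?: string;
}

function resolveToken(
  env: Record<string, string | undefined>,
  options: ProviderOptions = {},
): string | undefined {
  // Environment variable has highest priority; apiKey is the fallback.
  return env.TAMA_TOKEN ?? options.apiKey;
}

// Build request headers: a bearer header when a token is set, none otherwise.
function authHeaders(token: string | undefined): Record<string, string> {
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```

Sending no header at all when no token is configured is what keeps plain localhost setups working without any auth configuration.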
## How It Works

- On opencode startup, the `config` hook is called
- The plugin checks for an existing `tama` provider or auto-detects one on the default ports
- It queries `GET /tama/v1/opencode/models` from tama
- Discovered models are merged into opencode's configuration
- Models appear in the `/models` list automatically
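The merge step can be sketched as a pure function over model maps; the `ModelInfo` shape and `mergeModels` name here are illustrative assumptions, not the plugin's real types:

```typescript
// Illustrative model metadata shape; the real discovery payload may carry more fields.
interface ModelInfo {
  name?: string;
  limit?: { context?: number; output?: number };
}

// Fold discovered models into the provider's model map without
// clobbering anything the user configured by hand.
function mergeModels(
  manual: Record<string, ModelInfo>,
  discovered: Record<string, ModelInfo>,
): Record<string, ModelInfo> {
  const merged: Record<string, ModelInfo> = { ...discovered };
  for (const [id, model] of Object.entries(manual)) {
    // Manual configuration wins on conflict, field by field.
    merged[id] = { ...discovered[id], ...model };
  }
  return merged;
}
```

Merging per field means a manually configured model still picks up discovered metadata (such as context limits) that the user did not specify themselves.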
## Requirements

- Tama running with `tama serve`
- OpenCode with plugin support
## License

MIT