$ indago scan --spec petstore.yaml --provider anthropic --use-llm-payloads
indago v0.9.0 — API security fuzzer
Target: https://petstore.example.com (14 endpoints)
Provider: anthropic (claude-sonnet-4-20250514)
[■■■■■■■■■■■■■■■■■■■■] 14/14 endpoints analyzed
LLM generated 847 payloads across 14 endpoints
FINDINGS SUMMARY
────────────────
HIGH 12 (SQLi: 4, IDOR: 3, Auth Bypass: 3, SSRF: 2)
MEDIUM 8 (Mass Assignment: 3, Data Exposure: 3, XSS: 2)
LOW 5 (Rate Limit: 3, Info Disclosure: 2)
Results: report.json (25 findings, curl commands included)
Indago is an API security fuzzer that uses LLMs to generate context-aware payloads instead of dumb wordlists. Point it at an API spec, and the LLM reads the endpoint names, parameter types, and business context to build attack payloads that actually make sense for your target. It supports local models too, so nothing has to leave your network.
# Pre-built binary (Linux amd64)
curl -L https://github.com/Su1ph3r/indago/releases/latest/download/indago-linux-amd64.tar.gz | tar xz
sudo mv indago /usr/local/bin/
# macOS (Apple Silicon)
curl -L https://github.com/Su1ph3r/indago/releases/latest/download/indago-darwin-arm64.tar.gz | tar xz
sudo mv indago /usr/local/bin/
# From source
go install github.com/Su1ph3r/indago/cmd/indago@latest

Indago reads OpenAPI/Swagger specs, Postman collections, HAR files, Burp exports, raw URL lists, and GraphQL endpoints.
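For a quick smoke test you only need a tiny spec. The file below is an illustrative, minimal OpenAPI 3 document (the endpoint and field names are made up for this example, not taken from indago); pair it with the dry-run flag shown later to preview coverage without sending traffic.

```shell
# Write a minimal OpenAPI 3 spec to scan against (illustrative content)
cat > petstore-min.yaml <<'EOF'
openapi: 3.0.3
info:
  title: Minimal Pet API
  version: "1.0"
servers:
  - url: https://petstore.example.com
paths:
  /pets/{petId}:
    get:
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single pet
EOF

# Preview what would be tested, no requests sent:
# indago scan --spec petstore-min.yaml --dry-run
```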
# Basic scan with LLM-powered payload generation
indago scan --spec api.yaml --provider anthropic --use-llm-payloads

# Scan with auth and business context hint
indago scan --spec api.yaml --provider openai --use-llm-payloads \
  --auth-header "Bearer $TOKEN" \
  --context "E-commerce API with payment processing and user accounts"

# GraphQL target
indago scan --url https://api.example.com --provider ollama \
  --llm-url http://localhost:11434/v1 --use-llm-payloads \
  --attacks graphql_depth,graphql_batch,graphql_alias

# Resume an interrupted scan
indago scan --spec api.yaml --resume .indago-checkpoint.json

# Dry run — see what would be tested, no requests sent
indago scan --spec api.yaml --dry-run

Indago writes reports as JSON, HTML, Markdown, SARIF, Nmap-style text, or Burp Suite XML.
# JSON report + HTML report in one pass
indago scan --spec api.yaml -f json -o results.json
indago scan --spec api.yaml -f html -o report.html

Every finding includes a curl command for reproduction.
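Because each finding carries its reproduction command, you can pull them out of the JSON report for manual retesting. The snippet below assumes a hypothetical report layout (a top-level findings array with a curl field) purely to show the pattern; check your actual report.json for the real field names.

```shell
# Illustrative report shape -- these field names are an assumption, not indago's documented schema
cat > sample-report.json <<'EOF'
{
  "findings": [
    {"title": "IDOR on /pets/{petId}", "curl": "curl -H 'Authorization: Bearer user-token' https://petstore.example.com/pets/2"},
    {"title": "SQLi in search", "curl": "curl 'https://petstore.example.com/search?q=%27%20OR%201=1--'"}
  ]
}
EOF

# Print one reproduction command per line
jq -r '.findings[].curl' sample-report.json
```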
Supported providers: OpenAI, Anthropic, Ollama, LM Studio.
# Pick one — set the relevant env var
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
# For local models, pass --llm-url instead
indago scan --spec api.yaml --provider lmstudio --llm-url http://localhost:1234/v1

The --llm-concurrency flag controls how many parallel LLM calls run during payload generation (default 8).
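Local models are usually the bottleneck during payload generation, so it can help to throttle LLM parallelism while leaving HTTP concurrency at its default. A sketch combining flags already shown above:

```shell
# Limit payload generation to 2 parallel LLM calls against a local Ollama server
indago scan --spec api.yaml \
  --provider ollama --llm-url http://localhost:11434/v1 \
  --use-llm-payloads --llm-concurrency 2
```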
Create ~/.indago.yaml or pass --config path/to/config.yaml.
provider:
  name: anthropic            # openai | anthropic | ollama | lmstudio
  model: claude-sonnet-4-20250514

scan:
  concurrency: 10            # parallel HTTP requests
  rate_limit: 10.0           # requests/sec
  timeout: 30s

attacks:
  enabled:                   # omit to run all
    - idor
    - sqli
    - auth_bypass

# Multi-auth differential analysis (detects BOLA/IDOR)
differential:
  enabled: true
  auth_contexts:
    - name: admin
      token: "admin-token"
      priority: 0
    - name: user
      token: "user-token"
      priority: 1

# Out-of-band callback detection
callback:
  enabled: true
  external_url: "https://your-callback-server.com"
  timeout: 30s

# Checkpoint long scans
checkpoint:
  file: .indago-checkpoint.json
  interval: 30s

Pre-built profiles ship in configs/: thorough.yaml, ci-quick.yaml, idor-focus.yaml, injection-focus.yaml.
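Profiles are ordinary config files, so they plug straight into --config. For example (assuming you run from a checkout of the repo so the configs/ paths resolve):

```shell
# Quick CI-oriented scan using the bundled profile, emitting a JSON report
indago scan --spec api.yaml --config configs/ci-quick.yaml -f json -o ci-results.json
```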
Indago has a terminal UI with real-time progress, a findings list, and keyboard-driven triage. Enable it with --interactive or run indago interactive.
MIT