Sandbox for exploring AI agent frameworks against local LLMs served through an OpenAI-compatible endpoint (LM Studio, llama.cpp, Ollama, vLLM). A single `LLMClient` wraps the provider so every framework (OpenAI Agents, CrewAI, AutoGen, LangGraph, MCP, smolagents) talks to the same backend.
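`LLMClient` itself lives in `src/common/llm.py` (see the layout below); under the hood it is the `openai` SDK pointed at a local base URL. A minimal sketch of that pattern, assuming LM Studio's default port, not the repo's actual implementation:

```python
# Not the repo's LLMClient; a minimal sketch of the underlying pattern.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local OpenAI-compatible server (LM Studio default)
    api_key="lm-studio",                  # local servers accept any non-empty key
)
resp = client.chat.completions.create(
    model="qwen/qwen3.5-4b",              # model id as served by the backend
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```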
Stack:

- Python: 3.13 (managed with `uv`)
- LLM access: `openai` SDK pointed at a local OpenAI-compatible server
- Frameworks: `openai-agents`, `autogen-agentchat`, `langgraph`, `langchain-*`, `smolagents`, `semantic-kernel`, `mcp`
- UI / notebooks: `gradio`, `marimo`, `jupyterlab`
- Tooling: `pydantic` + `pydantic-settings`, `playwright`, `pypdf`, `sendgrid`, `firecrawl` (via MCP)
```
.
├── main.py                   # smoke test: hits LLMClient with a single prompt
├── src/
│   ├── common/
│   │   ├── llm.py            # LLMClient + LLMProvider enum (local model registry)
│   │   └── utils.py          # JSON extraction, MCP result serialization, helpers
│   ├── config/
│   │   └── settings.py       # pydantic-settings, reads .env
│   ├── modules/
│   │   ├── 1_foundations/    # notebooks: prompting, tool use, structured output
│   │   ├── 2_openai/         # OpenAI Agents SDK
│   │   ├── 3_crewai/
│   │   ├── 4_autogen/
│   │   ├── 5_langgraph/
│   │   ├── 6_mcp/            # MCP servers/clients (Firecrawl, fetch, etc.)
│   │   └── 8_smolagents/
│   └── resources/
└── logs/
```
```bash
# 1. Install deps
uv sync

# 2. Configure environment
cp .env.example .env   # if present; otherwise create .env
```

Required `.env` keys:
```bash
BASE_URL_OPENAI=http://localhost:1234/v1   # LM Studio default
OPENAI_API_KEY=lm-studio                   # any non-empty string for local servers
LLM_MODEL=QWEN35                           # key from the LLMProvider enum

# Optional
FIRECRAWL_API_KEY=
PUSHOVER_BASE_URL=
PUSHOVER_USER_KEY=
PUSHOVER_APP_TOKEN=
```
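These keys are read by `src/config/settings.py`. Its exact shape isn't shown here; a plausible pydantic-settings sketch, with field names assumed to mirror the keys above, would be:

```python
# Hypothetical sketch of src/config/settings.py, assuming pydantic-settings v2.
# Field names are assumptions mirroring the .env keys above.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    base_url_openai: str = "http://localhost:1234/v1"  # matches BASE_URL_OPENAI (case-insensitive)
    openai_api_key: str = "lm-studio"
    llm_model: str = "QWEN35"
    firecrawl_api_key: str | None = None

settings = Settings()
```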
```bash
# Smoke test the local LLM
uv run python main.py

# Notebooks
uv run jupyter lab
uv run marimo edit
```

Minimal usage:

```python
from src.common.llm import LLMClient

client = LLMClient()
response = client.run_prompt(
    [{"role": "user", "content": "What is the capital of France?"}]
)
print(response)
```

For structured output, pass a Pydantic model to `client.run_structured(messages, response_model=MyModel)`.
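`MyModel` is any Pydantic model. A hypothetical example, assuming `run_structured` returns a parsed instance of the model you pass in:

```python
from pydantic import BaseModel

from src.common.llm import LLMClient

# Hypothetical response model for illustration
class Capital(BaseModel):
    country: str
    city: str

client = LLMClient()
result = client.run_structured(
    [{"role": "user", "content": "What is the capital of France?"}],
    response_model=Capital,
)
print(result.city)  # assumes run_structured returns a Capital instance
```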
Models live in the `LLMProvider` enum in `src/common/llm.py`. Add an entry mapping a stable key to the model id served by your local backend, then select it with `LLM_MODEL=<KEY>` or `LLMClient(model=LLMProvider.<KEY>)`.
```python
from enum import Enum

class LLMProvider(str, Enum):
    QWEN35 = "qwen/qwen3.5-4b"   # stable key -> model id as served by the backend
    # ...
    MY_MODEL = "vendor/model-id-as-served"
```
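Once added, the entry can be selected explicitly:

```python
from src.common.llm import LLMClient, LLMProvider

# equivalent to setting LLM_MODEL=MY_MODEL in .env
client = LLMClient(model=LLMProvider.MY_MODEL)
```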