A Python script that runs LLM API calls against multiple models and saves each response to an HTML file.
- Clone the repository
- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Copy `.env-example` to `.env`:

  ```
  cp .env-example .env
  ```

- Edit the `.env` file with your settings:
  - `BASE_URL` - API endpoint URL
  - `API_KEY` - your API key
  - `MODELS` - comma-separated list of models to test
  - `PROMPT` - the prompt to send to all models
  - `SYSTEM_PROMPT` - system prompt that defines the AI behavior
  - `TEMPERATURE` - temperature parameter for API calls (default: 0.3)
  - `FOLDER_NAME` - folder name where results are saved (default: `output`)
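A filled-in `.env` under these settings might look like the fragment below. Every value is an illustrative placeholder (the URL, key, and model names are assumptions, not defaults shipped with the project):

```
BASE_URL=https://api.example.com/v1
API_KEY=sk-...
MODELS=openai/gpt-5.2,some-vendor/some-model
PROMPT=Explain what an HTML file is in one paragraph.
SYSTEM_PROMPT=You are a concise technical assistant.
TEMPERATURE=0.3
FOLDER_NAME=output
```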
Run the script:

```
python llm_runner.py
```

The script will:
- Load configuration from the `.env` file
- Process each model sequentially
- Create HTML files in the `output/` directory
- Name each file after the model, with `/` replaced by `_`
For model `openai/gpt-5.2`, the output file will be `output/openai_gpt-5.2.html`.
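The flow above can be sketched in Python using only the standard library. This is a minimal illustration of the described behavior, not the actual contents of `llm_runner.py`: the function names, the HTML wrapper, and the assumption that `BASE_URL` is an OpenAI-compatible `/chat/completions` endpoint are all mine.

```python
import html
import json
import os
import urllib.request
from pathlib import Path


def output_path(model: str, folder: str = "output") -> Path:
    """Map a model name to its HTML file name: '/' becomes '_'."""
    return Path(folder) / (model.replace("/", "_") + ".html")


def call_model(base_url: str, api_key: str, model: str, prompt: str,
               system_prompt: str, temperature: float) -> str:
    """POST one chat-completion request to an OpenAI-compatible endpoint
    and return the assistant's reply text (assumed response shape)."""
    payload = json.dumps({
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    }).encode("utf-8")
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


def run_all() -> None:
    """Process each model from MODELS sequentially, writing one HTML
    file per model. Call after loading .env into the environment."""
    folder = os.environ.get("FOLDER_NAME", "output")
    Path(folder).mkdir(parents=True, exist_ok=True)
    for model in (m.strip() for m in os.environ["MODELS"].split(",") if m.strip()):
        text = call_model(
            os.environ["BASE_URL"],
            os.environ["API_KEY"],
            model,
            os.environ["PROMPT"],
            os.environ.get("SYSTEM_PROMPT", ""),
            float(os.environ.get("TEMPERATURE", "0.3")),
        )
        # Escape the reply so it renders safely inside the HTML page.
        output_path(model, folder).write_text(
            f"<html><body><pre>{html.escape(text)}</pre></body></html>",
            encoding="utf-8",
        )
```

For example, `output_path("openai/gpt-5.2")` yields `output/openai_gpt-5.2.html`, matching the naming rule described above.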