This project provides a comprehensive set of Google Sheets custom formulas and sidebar UI for interacting with multiple AI providers including OpenAI and OpenRouter. It is designed to be easily extensible, allowing new formulas and providers to be added as the AI ecosystem evolves.
- Multi-provider support: Full support for OpenAI and OpenRouter, with access to models from Anthropic (Claude), Google (Gemini), OpenAI (GPT), and many more through OpenRouter
- Sidebar UI: Manage API keys, browse formulas, and view available models from an easy-to-use sidebar
- Flexible API key storage: Store your API keys privately (per user) or share them in the sheet/code for team use
- Direct API access: Supports any model available in OpenAI's and OpenRouter's APIs 🧠
- Response caching: Avoid rate limits, reduce costs, and speed up results 🚀
- Customizable: Tweak and extend via the Apps Script editor ✍️
- No third-party servers: All communication is direct with providers 🔐
- Free and open-source: Licensed under MIT 🤝
Example: Using `=OPENAI()` to classify music genres in Google Sheets
- Open your Google Sheet.
- Go to Extensions -> Apps Script.
- Copy both files from the `dist` folder into the Apps Script editor:
  - `Main.js` (contains all functions and logic)
  - `UI.html` (contains the sidebar interface)
- Save the script.
- Refresh your Google Sheet and open the "LLMs for Sheets" menu.
- Click "Open LLM Sidebar" to access the unified interface.
- Set your API keys:
- OpenAI: Get from https://platform.openai.com/api-keys
- OpenRouter: Get from https://openrouter.ai/keys
- Use formulas like `=OPENAI("Hello, how are you?")` or `=OPENROUTER("Analyze this data", "anthropic/claude-3-sonnet")` in any cell.
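The keys you set in the steps above stay inside Apps Script properties and are never sent to a third-party server. A minimal illustrative sketch of that storage pattern (function and property names are assumptions, not the project's exact code):

```javascript
// Illustrative sketch only: per-user vs. shared key storage in Apps Script.
function saveOpenAiKey(key, shareWithEveryone) {
  // User properties are private to the current user; script properties are
  // shared by everyone using the script attached to this sheet.
  var store = shareWithEveryone
    ? PropertiesService.getScriptProperties()
    : PropertiesService.getUserProperties();
  store.setProperty('OPENAI_API_KEY', key);
}

function getOpenAiKey() {
  // Prefer the user's private key; fall back to a shared key if one exists.
  return PropertiesService.getUserProperties().getProperty('OPENAI_API_KEY') ||
    PropertiesService.getScriptProperties().getProperty('OPENAI_API_KEY');
}
```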
To compose complex prompts by concatenating multiple cells, you can use the `&` operator. For example:
=OPENAI("What is the difference between: " & A1 & " and " & A2)
=OPENROUTER("Analyze the trends in this data: " & A1:A10, "anthropic/claude-3-sonnet")
=OPENAI("prompt", [model], [temperature], [maxTokens])
: Generate text using any OpenAI model, defaulting to gpt-4o.
=OPENROUTER("prompt", [model], [temperature], [maxTokens])
: Access any OpenRouter model including Claude, Gemini, and more.
=CHATGPTSYSTEMPROMPT("prompt")
: Set the system prompt that guides AI behavior for all providers.
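Each of these formulas is a regular Apps Script custom function exposed to Sheets through a `@customfunction` JSDoc tag. A simplified sketch of the shape (the shipped `Main.js` also handles caching, sidebar settings, and error reporting; `callOpenAi` is a hypothetical helper, sketched later under the parameter reference):

```javascript
/**
 * Generate text with an OpenAI model. Simplified sketch, not the full implementation.
 *
 * @param {string} prompt The prompt text or a cell reference.
 * @param {string} [model] Model ID, e.g. "gpt-4o".
 * @param {number} [temperature] Creativity from 0.0 to 1.0 (default 0.0).
 * @param {number} [maxTokens] Maximum response length (default 150).
 * @return {string} The model's reply.
 * @customfunction
 */
function OPENAI(prompt, model, temperature, maxTokens) {
  if (!prompt) return 'EMPTY';
  return callOpenAi(prompt, model || 'gpt-4o', temperature || 0, maxTokens || 150); // hypothetical helper
}
```

Example calls from a sheet cell: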
=OPENAI("What is the capital of " & A1)
=OPENAI("Analyze this complex dataset", "gpt-4o", 0.3, 500)
=OPENROUTER("Translate to French: " & A1, "google/gemini-pro-1.5")
=OPENROUTER("Creative writing task", "anthropic/claude-3-sonnet", 0.7, 300)
📖 For comprehensive examples and use cases, see OPENAI_EXAMPLES.md
=OPENROUTER("Translate to Spanish: " & A1, "openai/gpt-4o")
=OPENROUTER("Generate code for: " & A1, "anthropic/claude-3-sonnet")
=OPENROUTER("Strategic analysis: " & A1:C10, "anthropic/claude-3-5-sonnet")
| Use Case | Recommended Formula | Notes |
|---|---|---|
| General OpenAI tasks | `=OPENAI()` | Uses gpt-4o by default |
| Most popular model | `=OPENROUTER(..., "google/gemini-pro-1.5")` | #1 ranked model on OpenRouter |
| Complex reasoning | `=OPENROUTER(..., "anthropic/claude-3-5-sonnet")` | #2 ranked, best for analysis |
| Code generation | `=OPENROUTER(..., "deepseek/deepseek-chat")` | #3 ranked, excellent for coding |
| Multilingual tasks | `=OPENROUTER(..., "qwen/qwen-2.5-72b-instruct")` | #4 ranked, great for languages |
The project is now organized into just 2 files for easy Apps Script import:
- `Main.js`: All functions, UI logic, shared utilities, and provider integrations (OpenAI + OpenRouter)
- `UI.html`: Consolidated sidebar interface with tabs for all functionality
The sidebar provides a single interface to manage all AI providers with tabbed sections:
- Settings Tab: Manage API keys and system prompts for all providers
- Formulas Tab: Browse available formulas with examples and copy buttons
- Models Tab: Live model directory with 100+ models, search functionality, and popularity rankings
- OpenRouter Tab: Dedicated interface for OpenRouter-specific features
- Live API Integration: Real-time model data from OpenRouter API
- Top 25 Most Popular: Displays trending models by actual usage
- Provider Filtering: Separate OpenAI and OpenRouter models
- Search Functionality: Find models by name, ID, or provider
- Detailed Model Cards: Pricing, context length, and unique identifiers
- Smart Sorting: Models ranked by popularity and performance
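The live model data comes from OpenRouter's public model listing endpoint; a minimal sketch of such a fetch (the sidebar's actual ranking and filtering logic may differ):

```javascript
// Sketch: pull the public OpenRouter model directory for the Models tab.
function fetchOpenRouterModels() {
  var response = UrlFetchApp.fetch('https://openrouter.ai/api/v1/models');
  var models = JSON.parse(response.getContentText()).data || [];
  // Keep only the fields the sidebar cards need (simplified).
  return models.map(function (m) {
    return { id: m.id, name: m.name, contextLength: m.context_length, pricing: m.pricing };
  });
}
```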
Sidebar: Manage your API key, system prompt, and default settings
Sidebar: Browse all available OpenAI models
Sidebar: View and copy available OpenAI formulas
After installation, you can access all features through the "LLMs For Sheets" menu:
- Open LLM Sidebar: Access the unified interface with tabs for Settings, Formulas, Models, and OpenRouter features
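The menu itself is created from a standard `onOpen` trigger; a minimal sketch (the sidebar handler name is an assumption):

```javascript
// Sketch: add the custom menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('LLMs For Sheets')
    .addItem('Open LLM Sidebar', 'showSidebar') // handler name assumed for illustration
    .addToUi();
}

function showSidebar() {
  var html = HtmlService.createHtmlOutputFromFile('UI').setTitle('LLMs for Sheets');
  SpreadsheetApp.getUi().showSidebar(html);
}
```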
Responses are cached to reduce API costs and improve performance:
- Default cache duration is 6 hours (21600 seconds)
- Cache can be:
  - Disabled by setting the duration to `0`
  - Set to indefinite with `-1`
  - Customized to any number of seconds
- Cached responses are unique per combination of prompt, model, max tokens, and temperature
- Cache is stored per user and not shared with other users
- You can modify cache duration in the sidebar settings
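Conceptually, the cache keys each response on the full parameter combination and stores it per user. A minimal sketch with `CacheService` (simplified; the shipped code also honors the 0/-1 duration settings):

```javascript
// Sketch: per-user response cache keyed on prompt + model + temperature + maxTokens.
function getCachedOrFetch(prompt, model, temperature, maxTokens, fetchFn) {
  var cache = CacheService.getUserCache(); // private to the current user
  var key = Utilities.base64Encode(Utilities.computeDigest(
    Utilities.DigestAlgorithm.MD5,
    [prompt, model, temperature, maxTokens].join('|')));
  var hit = cache.get(key);
  if (hit !== null) return hit;
  var result = fetchFn();
  cache.put(key, result, 21600); // default duration: 6 hours
  return result;
}
```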
Controls the randomness/creativity of responses:
- `0.0` (default): Most deterministic, consistent responses
- `0.7` (ChatGPT web default): More creative, varied responses
- Range is 0.0 to 1.0
- Lower values are better for:
- Classification tasks
- Factual queries
- Data analysis
- Higher values are better for:
- Creative writing
- Brainstorming
- Generating diverse ideas
Controls the length of responses:
- `150` (default): Short, concise responses
- `4096` (ChatGPT web default): Long, detailed responses
- Higher values allow for:
- More detailed explanations
- Longer text generation
- But may increase API costs
- Lower values are good for:
- Quick answers
- Classification tasks
- Reducing API costs
- Can be adjusted per formula call or set globally in sidebar
You can modify these settings either:
- In the sidebar (affects all future calls)
- Per formula call using optional parameters:
=OPENAI("Your prompt", "gpt-4o", 0.7, 300)
// gpt-4o model, 0.7 temperature, 300 tokens
=OPENROUTER("Your prompt", "anthropic/claude-3-sonnet", 0.2, 500)
// Claude 3 Sonnet model, 0.2 temperature, 500 tokens
`=OPENAI()` parameters:
- `prompt`: Your input text or cell reference
- `model`: OpenAI model ID (e.g., "gpt-4o-mini", "gpt-4o", "gpt-3.5-turbo")
- `temperature`: Response creativity level from 0.0 to 1.0 (default: 0.0)
- `maxTokens`: Maximum length of the response (default: 150)

`=OPENROUTER()` parameters:
- `prompt`: Your input text or cell reference
- `model`: OpenRouter model ID (e.g., "anthropic/claude-3-sonnet", "google/gemini-pro-1.5")
- `temperature`: Response creativity level from 0.0 to 1.0 (default: 0.0)
- `maxTokens`: Maximum length of the response (default: 150)
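These parameters map directly onto the provider's chat-completions request body. A minimal sketch of the OpenAI call (the OpenRouter call has the same shape against `https://openrouter.ai/api/v1/chat/completions`; `getOpenAiKey` is the illustrative helper sketched in the installation section):

```javascript
// Sketch: how prompt/model/temperature/maxTokens become a chat-completions request.
function callOpenAi(prompt, model, temperature, maxTokens) {
  var payload = {
    model: model,
    messages: [{ role: 'user', content: prompt }],
    temperature: temperature,
    max_tokens: maxTokens
  };
  var response = UrlFetchApp.fetch('https://api.openai.com/v1/chat/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + getOpenAiKey() },
    payload: JSON.stringify(payload),
    muteHttpExceptions: true
  });
  return JSON.parse(response.getContentText()).choices[0].message.content;
}
```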
- If you see `#ERROR!`: Check your API keys are set correctly for the provider you're using
- If you see `EMPTY`: The prompt was empty or invalid
- If you see rate limit errors: Try reducing requests or increasing cache duration
- If the response is cut off: Increase the `maxTokens` parameter
- For OpenRouter errors: Check that the model ID is valid using the Models tab in the sidebar
When sharing sheets with formulas:
- Each user needs their own API key unless you store it in the sheet
- Cache responses are per-user and not shared
- System prompts are also per-user unless set in the sheet
- Clone the repository:
  git clone https://github.com/reganmcgregor/appscript-llm.git
  cd appscript-llm
- Install dependencies:
  pnpm install
- Build the project:
  pnpm build
  This will:
  - Compile TypeScript to JavaScript
  - Generate the consolidated files in the `dist/` folder:
    - `Main.js`: All functions, UI logic, and provider integrations
    - `UI.html`: Unified sidebar interface
- Copy both files from `dist/` to your Google Apps Script editor
- `src/`: Source TypeScript and HTML files
  - `Main.ts`: All functions, UI logic, shared utilities, and provider integrations
  - `UI.html`: Consolidated sidebar interface
- `dist/`: Compiled JavaScript and HTML files (generated)
- `assets/`: Documentation images
- `package.json`: Project configuration and dependencies
pnpm build # Build the project
pnpm lint # Run linter
For end-user installation instructions, see the Installation section above.
This project was originally inspired by urvana/appscript-chatgpt, but has since been significantly rewritten and extended. See the commit history for details on original contributions.
Author: Regan McGregor
License: MIT
Originally forked from urvana/appscript-chatgpt by Patricio López Juri. Major rewrites and new features by Regan McGregor.
Feel free to contribute to this project!
If you have suggestions, want to request support for a new provider, or have ideas for new formulas, please open an issue or a pull request.
Note: Each file must have the exact filename as shown above. The full code for each file can be found in the `dist` folder of this repository.