A macOS AI assistant with chat, voice, local or cloud models, and approved automation.
Screenshots: the main assistant workspace with projects, threads, and the floating HUD; Settings and AI Studio for providers, models, memory, and advanced controls; and the voice HUD for dictation, push-to-talk, and voice-first tasks.
Open Assist is a personal assistant for macOS. You can type, speak, run local or cloud models, and let the assistant take actions on your Mac after approval.
Main things it can do:
- chat in a full assistant workspace with projects and threads
- use voice for dictation, push-to-talk, or live voice conversations
- connect local or cloud AI providers
- run approved actions in browsers and supported macOS apps
- save thread notes, checkpoints, and memory
- schedule recurring tasks
- control the assistant remotely from Telegram
- generate images from the assistant
- project-based conversations
- thread notes and checkpoints
- attachments, tool activity, and progress in one place
- plan mode and agentic mode
- hold-to-talk and continuous dictation
- Apple Speech, whisper.cpp, or cloud speech providers
- live voice conversations with the assistant
- transcript history and quick paste of the last transcript
- local models
- OpenAI
- Anthropic
- Gemini
- Groq
- OpenRouter
- Ollama
- GitHub Copilot backend support
- browser control with your real signed-in browser profile
- screenshot-based computer use
- direct app actions in Finder, Terminal, Calendar, System Settings, Reminders, Contacts, Notes, and Messages
- approval flow before important actions
- custom and imported skills
- image generation
- scheduled jobs
- Telegram remote control
- Sparkle app updates
- Download the latest release from GitHub Releases.
- Open Open Assist.app.
- Open Settings -> AI & Models -> AI Studio.
- Connect a provider or set up local AI.
- Open the assistant and try a prompt.
Example prompts:
- Help me write a short update for my team.
- Summarize these notes.
- Open Bluetooth settings.
- Open Settings -> AI & Models.
- Open AI Studio.
- Choose a local or cloud provider.
- Pick a model.
- Save your API key or finish sign-in if needed.
- Grant Microphone access.
- Grant Speech Recognition if you use Apple Speech.
- Grant Accessibility if you want reliable shortcuts and text insertion.
- Open Settings -> Speech & Input and choose a speech engine.
- Open Settings -> Automation.
- Allow Automation / Apple Events when macOS asks.
- Grant Screen Recording for screenshot-based computer use.
- Choose a browser profile if you want browser automation.
- Skills: add built-in, local, or GitHub-based skills per thread
- Scheduled Jobs: run recurring prompts on a cron schedule
- Telegram: pair a bot to use Open Assist from your phone
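Cron schedules use the standard five-field syntax (minute, hour, day of month, month, day of week). The expressions below are generic illustrations, not settings taken from the app:

```
0 9 * * 1-5    # every weekday at 09:00
*/30 * * * *   # every 30 minutes
0 0 1 * *      # the first of each month at midnight
```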
| Action | Default shortcut |
|---|---|
| Hold-to-talk | Option + Command + Space |
| Toggle continuous dictation | Control + Option + Command + Space |
| Paste last transcript | Option + Command + V |
You can change these in Settings.
- macOS 13.3 or newer
- Xcode 15+ if you want to build from source
- Microphone: voice input
- Speech Recognition: Apple Speech engine
- Accessibility: global shortcuts and direct text insertion
- Screen Recording: screenshot-based automation
- Automation / Apple Events: browser and app actions
- local use does not require an account
- no telemetry is enabled by default
- API keys and OAuth sessions are stored in macOS Keychain
- local settings, transcript history, and memory stay on your Mac
- if you use a cloud provider, your text or audio is sent to that provider
```
./build.sh
```

This creates:

```
dist/Open Assist.app
```

Run it with:

```
open "dist/Open Assist.app"
```

Install or package it with:

```
./build.sh --install
./build.sh --make-dmg
```

`--install` copies the app to `/Applications/Open Assist.app`; `--make-dmg` also creates `dist/Open Assist.dmg`.
```
export DEVELOPER_ID="Your Name (TEAMID)"
./build.sh --make-dmg
```
```
Scripts/notarize.sh
```

`Scripts/notarize.sh` expects these environment variables:

- `APPLE_ID`
- `APPLE_TEAM_ID`
- `APPLE_APP_PASSWORD`
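For example, exported in the shell before running the script (the values below are placeholders, not real credentials):

```shell
# Placeholder values -- substitute your own Apple developer details
export APPLE_ID="you@example.com"
export APPLE_TEAM_ID="ABCDE12345"
export APPLE_APP_PASSWORD="abcd-efgh-ijkl-mnop"  # app-specific password
# Then run the notarization script from the repo root:
# Scripts/notarize.sh
```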
Run package tests:

```
swift test
```

Run smoke and regression scripts:

```
Scripts/run-tests.sh
```

Run insertion reliability checks:

```
Scripts/run-insertion-reliability.sh --regression
```

If text insertion is not working as expected:
- turn on insertion diagnostics in app settings, or set `OPENASSIST_INSERTION_DIAGNOSTICS=1`
- default log path: `/tmp/openassist-insertion-diagnostics.log`
- optional custom log path: set `OPENASSIST_INSERTION_DIAGNOSTICS_PATH`
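As a concrete example, enabling diagnostics from a shell with a custom log location (the environment variable names come from the list above; the log path here is just an example):

```shell
# Enable insertion diagnostics and choose where the log is written
export OPENASSIST_INSERTION_DIAGNOSTICS=1
export OPENASSIST_INSERTION_DIAGNOSTICS_PATH="$HOME/Library/Logs/OpenAssist/insertion.log"
# Launch Open Assist from this shell so it sees these variables, reproduce
# the problem, then review the log (falls back to the default path if unset):
# tail -n 50 "${OPENASSIST_INSERTION_DIAGNOSTICS_PATH:-/tmp/openassist-insertion-diagnostics.log}"
```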
Crash logs, when present:
~/Library/Logs/OpenAssist/crash.log
Project layout:
| Path | Contents |
|---|---|
| `Sources/OpenAssist/` | Main app code |
| `Sources/OpenAssistObjCInterop/` | Objective-C interop helpers |
| `Resources/` | Icons, plist, entitlements, assets |
| `Scripts/` | Build, test, release, and helper scripts |
| `Tests/OpenAssistTests/` | XCTest coverage |
| `Docs/` | User-facing docs |
| `Wiki/` | Product and setup notes |
| `Vendor/Whisper/` | Bundled whisper.cpp XCFramework |
| `web/chat/` | React chat UI inside the assistant window |
Good places to start:
- `Sources/OpenAssist/App.swift`
- `Sources/OpenAssist/Services/`
- `Sources/OpenAssist/Views/`
- `Sources/OpenAssist/Assistant/`