A proxy that allows you to use Ollama as a coding assistant similar to GitHub Copilot.
## Features
- Works as a drop-in replacement for GitHub Copilot
- Compatible with multiple IDEs
- Uses local LLMs through Ollama for privacy and control
- Customizable model selection
## Prerequisites
- Go 1.20 or higher
- Ollama installed and running
- At least 16GB RAM (depending on the LLM model used)
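A quick way to confirm the prerequisites are in place (the memory check assumes a Linux host; on macOS use `sysctl hw.memsize` instead):
```sh
# Check the Go toolchain version (needs 1.20+)
go version

# Check that the Ollama CLI is installed
ollama --version

# Check available memory (Linux)
free -h
```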
## Installing Go
- Download Go from the official website:
```sh
# For Linux (adjust version as needed)
wget https://go.dev/dl/go1.25.0.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.25.0.linux-amd64.tar.gz

# For macOS (using Homebrew)
brew install go
```
- Add Go to your PATH in `~/.bashrc`, `~/.zshrc`, or equivalent:
```sh
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```
```sh
# Apply the changes
source ~/.bashrc  # or source ~/.zshrc
```
- Verify the installation:
```sh
go version
```

## Installing Ollama
Ensure Ollama is installed:
```sh
curl -fsSL https://ollama.com/install.sh | sh

# For macOS (using Homebrew)
brew install ollama
```
Pull the default model:
```sh
ollama pull qwen3-coder:30b
```
You can use other models as well, but `qwen3-coder:30b` is the default expected by ollama-copilot.
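To confirm the model is available and the Ollama API is reachable (Ollama serves on port 11434 by default):
```sh
# List locally available models; qwen3-coder:30b should appear
ollama list

# Query the Ollama HTTP API directly
curl http://localhost:11434/api/tags
```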
## Installing ollama-copilot
```sh
go install github.com/josuemontano/ollama-copilot@latest
```

## Usage
- Make sure Ollama is running:
```sh
ollama serve
```
- In a separate terminal, start ollama-copilot:
```sh
ollama-copilot
```
- Configure your IDE to use the proxy (see IDE Configuration below)
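To confirm the proxy came up, you can check that its default ports (listed in the options table below) are listening; this sketch assumes a Linux host with `ss` available:
```sh
# Show listening TCP sockets on the proxy's default ports (11435-11438)
ss -ltn | grep -E ':1143[5-8]'
```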
## Command-line options

| Flag | Default | Description |
|---|---|---|
| `--port` | `:11437` | HTTP port to listen on |
| `--proxy-port` | `:11438` | HTTP proxy port to listen on |
| `--port-ssl` | `:11436` | HTTPS port to listen on |
| `--proxy-port-ssl` | `:11435` | HTTPS proxy port to listen on |
| `--cert` | `""` | Certificate file path (`*.crt`) for HTTPS |
| `--key` | `""` | Key file path (`*.key`) for HTTPS |
| `--model` | `qwen3-coder:30b` | LLM model to use with Ollama |
| `--num-predict` | `200` | Maximum number of tokens to predict |
| `--prompt-template` | `<\|fim_prefix\|> {{.Prefix}} <\|fim_suffix\|>{{.Suffix}} <\|fim_middle\|>` | Fill-in-middle template for prompts |
| `--verbose` | `false` | Enable verbose logging mode |
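The HTTPS listeners are only used when `--cert` and `--key` are provided. For local experimentation, a self-signed pair can be generated with OpenSSL; the file names here are arbitrary:
```sh
# Generate a self-signed certificate/key pair valid for one year
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout ollama-copilot.key -out ollama-copilot.crt \
  -days 365 -subj "/CN=localhost"

# Start the proxy with the HTTPS listeners enabled
ollama-copilot --cert ollama-copilot.crt --key ollama-copilot.key
```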
Example with custom options:
```sh
ollama-copilot --model qwen2.5-coder:7b --num-predict 300 --verbose
```
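If you switch to a model family with different fill-in-middle tokens, adjust `--prompt-template` to match. As a sketch, CodeLlama-style models use `<PRE>`, `<SUF>`, and `<MID>` markers; verify the exact template against your model's documentation:
```sh
ollama-copilot --model codellama:7b-code \
  --prompt-template '<PRE> {{.Prefix}} <SUF>{{.Suffix}} <MID>'
```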
You can configure the Ollama host using environment variables:
```sh
OLLAMA_HOST="http://192.168.133.7:11434" ollama-copilot
```

## IDE Configuration

### Vim/Neovim
- Install copilot.vim
- Configure variables in your Neovim config:
```vim
let g:copilot_proxy = 'http://localhost:11435'
let g:copilot_proxy_strict_ssl = v:false
```

### VS Code
- Install the GitHub Copilot extension
- Configure settings (File > Preferences > Settings or Ctrl+,):
```json
{
  "github.copilot.advanced": {
    "debug.overrideProxyUrl": "http://localhost:11437"
  },
  "http.proxy": "http://localhost:11435",
  "http.proxyStrictSSL": false
}
```

### Zed
- Open settings (Ctrl+,)
- Set up edit completion proxying:
```json
{
  "features": {
    "edit_prediction_provider": "copilot"
  },
  "show_completions_on_input": true,
  "edit_predictions": {
    "copilot": {
      "proxy": "http://localhost:11435",
      "proxy_no_verify": true
    }
  }
}
```

### Emacs (Experimental)
- Install copilot-emacs
- Configure the proxy in your Emacs config:
```elisp
(use-package copilot
  :straight (:host github :repo "copilot-emacs/copilot.el" :files ("*.el")) ;; if you don't use straight.el, install another way
  :ensure t
  ;; :hook (prog-mode . copilot-mode)
  :bind (("C-<tab>" . copilot-accept-completion))
  :config
  ;; Use the HTTPS proxy port (11435 by default), not Ollama's own port (11434)
  (setq copilot-network-proxy '(:host "127.0.0.1" :port 11435 :rejectUnauthorized :json-false)))
```

## Troubleshooting
- If you encounter connection issues, make sure Ollama is running
- Verify that the correct ports (11435-11438 by default) are accessible
- Check logs by running with the `--verbose` flag (see the diagnostic snippet below)
- Ensure your Go path is correctly set up in your environment
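A combined diagnostic pass, assuming default ports and a local Ollama instance:
```sh
# Is Ollama itself reachable?
curl -s http://localhost:11434/api/version

# Are the proxy's default ports listening?
ss -ltn | grep -E ':1143[5-8]'

# Re-run the proxy with verbose logging to inspect incoming requests
ollama-copilot --verbose
```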