
Releases: ollama/ollama

v0.24.0

14 May 02:24
c28ddc0

Choose a tag to compare

v0.24.0 Pre-release

What's Changed

Full Changelog: v0.23.4...v0.24.0-rc0

v0.30.0

13 May 14:32

Choose a tag to compare

v0.30.0 Pre-release

This version of Ollama changes the architecture to directly support llama.cpp instead of building on top of GGML, and maintains compatibility with the GGUF file format. MLX is used to accelerate model inference on Apple Silicon.

While in pre-release we'd love feedback on:

  • Performance improvements or degradation
  • Errors or crashes that did not previously occur
  • Memory utilization improvements or degradation

Known issues:

  • laguna-xs.2 is not supported yet on this pre-release
  • llama3.2-vision is not supported yet on this pre-release

Installing:

Mac/Linux

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.30.0-rc15 sh

Windows

$env:OLLAMA_VERSION="0.30.0-rc15"; irm https://ollama.com/install.ps1 | iex

v0.23.4

13 May 20:40
3af1a00

Choose a tag to compare

What's Changed

  • ollama launch opencode now supports vision models with image inputs
  • Fixed formatting of Claude tool results when using local image paths
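The image-input support above follows Ollama's /api/chat convention of attaching base64-encoded image data to a message. A minimal sketch of building such a message (the helper name and stand-in bytes are hypothetical):

```python
import base64

def vision_message(prompt, image_bytes):
    """Build one chat message carrying an image, in the shape Ollama's
    /api/chat endpoint expects: base64-encoded image data in the
    message's "images" list."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {"role": "user", "content": prompt, "images": [encoded]}

# Stand-in bytes in place of a real image file's contents
msg = vision_message("What is in this picture?", b"\x89PNG\r\n")
print(sorted(msg))  # ['content', 'images', 'role']
```

A local image path would be read with `open(path, "rb").read()` and passed in as `image_bytes`.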

Full Changelog: v0.23.3...v0.23.4

v0.23.3

12 May 03:48
421faa0

Choose a tag to compare

What's Changed

Full Changelog: v0.23.2...v0.23.3

v0.23.2

07 May 20:23
f866e76

Choose a tag to compare

What's Changed

  • ollama launch no longer includes Claude Desktop due to the third-party integration being limited to Anthropic models.
  • Use ollama launch claude-desktop --restore to restore Claude Desktop to its normal state.
  • /api/show responses are now cached, reducing median latency by ~6.7x, which speeds up loading for integrations like VS Code.
  • Improved backup workflow when managing launch integrations
  • Cleaner image generation layout in the MLX runner
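The /api/show caching change above boils down to a TTL cache in front of an expensive metadata load. A minimal sketch of the idea, with entirely hypothetical names (not Ollama's actual implementation):

```python
import time

class ShowCache:
    """Tiny TTL cache illustrating cached /api/show responses:
    repeated lookups for the same model within the TTL skip the
    expensive metadata load."""

    def __init__(self, loader, ttl=60.0):
        self.loader = loader      # function: model name -> metadata dict
        self.ttl = ttl
        self._entries = {}        # model -> (timestamp, metadata)

    def show(self, model):
        now = time.monotonic()
        hit = self._entries.get(model)
        if hit and now - hit[0] < self.ttl:
            return hit[1]         # cache hit: loader not called
        data = self.loader(model)
        self._entries[model] = (now, data)
        return data

calls = []
def fake_loader(name):
    calls.append(name)            # stand-in for reading model metadata
    return {"model": name, "format": "gguf"}

cache = ShowCache(fake_loader)
cache.show("gemma4:31b")
cache.show("gemma4:31b")          # served from cache
print(len(calls))                 # 1
```

Because the second lookup is served from memory, the loader runs only once; that is the mechanism behind the latency reduction.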

Full Changelog: v0.23.1...v0.23.2

v0.23.1

05 May 17:13
15e6076

Choose a tag to compare

Gemma 4 MTP (Multi-Token Prediction) for the MLX runner

Gemma 4 MTP speculative decoding is now supported on Macs. This can give over a 2x speed increase for the Gemma 4 31B model on coding tasks.

ollama run gemma4:31b-coding-mtp-bf16
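The speedup comes from the draft-and-verify loop at the heart of speculative decoding: a cheap draft proposes several tokens at once, and the target model keeps the longest agreeing prefix, so multiple tokens land per target step. A toy sketch under that assumption (not the actual MTP implementation):

```python
def speculative_decode(draft, target, prompt, n_tokens):
    """Generate n_tokens after prompt: the draft proposes a batch of
    tokens, the target verifies them, and a disagreement is replaced
    by the target's own token before drafting again."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        accepted = []
        for tok in draft(out):
            expected = target(out + accepted)
            if tok == expected:
                accepted.append(tok)       # target agrees with draft
            else:
                accepted.append(expected)  # correct and stop this batch
                break
        out.extend(accepted)
    return "".join(out[len(prompt):][:n_tokens])

ALPHABET = "abcd"
def target(seq):   # deterministic stand-in "model": next token cycles
    return ALPHABET[len(seq) % len(ALPHABET)]
def draft(seq):    # drafts 3 tokens ahead; here it always agrees
    return [ALPHABET[(len(seq) + i) % len(ALPHABET)] for i in range(3)]

result = speculative_decode(draft, target, "ab", 6)
print(result)  # cdabcd
```

When the draft agrees often, as in coding tasks with predictable completions, each target step accepts several tokens, which is where the claimed 2x speed increase comes from.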

What's Changed

Full Changelog: v0.23.0...v0.23.1

v0.23.0

03 May 03:34
9ba5a04

Choose a tag to compare

Claude Desktop

Claude Desktop is now supported with Ollama Launch.

Claude Cowork and Claude Code are supported within the Claude Desktop App.

ollama launch claude-desktop

Claude Cowork (screenshot)

Claude Code (screenshot)

Claude Code can still be launched from the terminal with:

ollama launch claude

Not supported yet

  • Web Search (coming soon)
  • Extensions

What's Changed

  • Launch Claude Desktop with ollama launch claude-desktop
  • The Ollama app now surfaces featured models from server-driven recommendations
  • Fixed OpenClaw gateway timeout on Windows by enforcing IPv4 loopback (thanks @UniquePratham)
  • Hardened Metal initialization to gracefully handle ggml kernel compilation failures
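The Metal hardening described above amounts to a try-and-fall-back pattern: attempt kernel compilation, and degrade to the CPU backend instead of crashing when it fails. A sketch with hypothetical names (not the actual ggml code):

```python
def init_backend(compile_metal_kernels, cpu_backend):
    """Try to bring up the Metal backend; if kernel compilation fails,
    report the error and fall back to CPU instead of crashing."""
    try:
        return compile_metal_kernels()
    except RuntimeError as err:
        print(f"metal unavailable ({err}); falling back to CPU")
        return cpu_backend()

def broken_metal():
    raise RuntimeError("kernel compile failed")  # simulated failure

backend = init_backend(broken_metal, lambda: "cpu")
print(backend)  # cpu
```

The key design point is that a compilation failure is caught at initialization and reported, rather than surfacing later as a crash mid-inference.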

New Contributors

Full Changelog: v0.22.1...v0.23.0

v0.22.1

28 Apr 20:30
c7c2837

Choose a tag to compare

What's Changed

  • Updated the Gemma 4 renderer for thinking and tool calling improvements
  • Model recommendations are now updated without updating Ollama
  • Aligned the desktop app's launch page with ollama launch integrations
  • Fixed the Poolside integration title in ollama launch

Full Changelog: v0.22.0...v0.22.1

v0.22.0

28 Apr 15:00

Choose a tag to compare

New models

Full Changelog: v0.21.2...v0.22.0

v0.21.3

24 Apr 12:15
ea01af6

Choose a tag to compare

What's Changed

Full Changelog: v0.21.2...v0.21.3-rc0