ollama
Here are 269 public repositories matching this topic...
One script. 14 self-hosted services. Zero cloud dependency. Replaces Google Drive, Photos, Gmail, Bitwarden, Zapier, Vercel & ChatGPT on Ubuntu 24.04.
Updated Mar 9, 2026 - Shell
One-command, self-hosted deployment of an AI chat interface using OpenWebUI, Ollama, and Nginx with Docker, configured for local network access.
Updated Feb 22, 2026 - Shell
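A stack like the one above typically exposes Ollama's HTTP API on the local network. A minimal smoke test could look like the sketch below; the bind address `127.0.0.1:11434` is Ollama's default, and the model name `llama3.2` is an assumption — substitute whatever your deployment serves.

```shell
# Build a request body for Ollama's /api/generate endpoint.
# Assumes the default port 11434 and a model named "llama3.2"
# (both are assumptions -- adjust for your deployment).
OLLAMA_URL="http://127.0.0.1:11434"
MODEL="llama3.2"
PROMPT="Say hello in one word."

payload=$(printf '{"model": "%s", "prompt": "%s", "stream": false}' "$MODEL" "$PROMPT")
echo "$payload"

# Uncomment once the stack is running:
# curl -s "$OLLAMA_URL/api/generate" -d "$payload"
```

With `stream` set to `false`, the API returns a single JSON object instead of a stream of chunks, which is easier to handle from shell.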
🎧 AI-powered CLI tool that removes ads from podcasts and generates embedded chapters with transcripts. Uses Whisper + Ollama for 100% free local processing.
Updated Mar 20, 2026 - Shell
Docker Compose-based Ollama deployment configuration with support for NVIDIA GPU acceleration and custom models.
Updated Apr 1, 2026 - Shell
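A Compose file for GPU-accelerated Ollama usually hinges on a device reservation in the `deploy` section. Here is one hedged sketch that writes such a file; the service name, volume name, and port mapping are assumptions, and the host needs the NVIDIA Container Toolkit installed for the reservation to take effect.

```shell
# Write a minimal docker-compose.yml for Ollama with NVIDIA GPU access.
# Service/volume names are illustrative; requires the NVIDIA Container
# Toolkit on the host.
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama-data:
EOF

# Then bring the stack up:
# docker compose up -d
```

The named volume keeps pulled models across container restarts, so a `docker compose down`/`up` cycle does not re-download them.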
Build persistent digital lifeforms with local M4 brain, OpenClaw agent, Qwen core, and Git-based memory.
Updated Apr 4, 2026 - Shell
🤖 Modular Docker Compose stack for Ollama with GPU acceleration, dev container integration, and flexible deployment configurations 🚀
Updated Sep 30, 2025 - Shell
Homelab Setup Configurations of Applications, Networking, and more.
Updated Dec 9, 2024 - Shell
Unlock fast, local LLM inference on AMD-powered mini PCs, delivering 65-87 t/s for large models without cloud or subscription costs.
Updated Apr 4, 2026 - Shell
A tiny CLI that pipes a prompt and a file into Ollama.
Updated Sep 18, 2025 - Shell
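The "pipe a prompt and a file" pattern can be sketched in a few lines of shell. The function below just concatenates the instruction and the file contents; the `ask` name and the `llama3.2` model are hypothetical, not taken from the repository above.

```shell
# Combine an instruction and a file's contents into one prompt.
# The function name and model are assumptions for illustration.
ask() {
  prompt="$1"
  file="$2"
  combined=$(printf '%s\n\n%s' "$prompt" "$(cat "$file")")
  echo "$combined"
  # To actually query a local model, pipe the combined text in:
  # echo "$combined" | ollama run llama3.2
}

# Usage: ask "Summarize this script:" ./deploy.sh
```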
macOS-first CLI command compiler that turns natural language into one runnable zsh command, with Foundry/Web3 routing and automatic correction of Linux-style commands to macOS-compatible commands.
Updated Feb 7, 2026 - Shell
Modular post-install framework for Ubuntu on Intel MacBooks — standardize a fleet of old Macs as AI lab nodes
Updated Feb 28, 2026 - Shell
Red-team audited, defense-in-depth deployment standard for OpenClaw and Ollama on macOS Apple Silicon. Features 4-layer hardening including pf firewall anchors and POSIX isolation.
Updated Mar 3, 2026 - Shell
Enable accurate academic citation by searching, evaluating, and generating BibTeX entries with quality metrics using Semantic Scholar data.
Updated Apr 4, 2026 - Shell
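A citation workflow like the one above could start with a paper search against the Semantic Scholar Graph API. This is a minimal sketch, assuming the public search endpoint and `jq` for post-processing; the field list and query are illustrative only.

```shell
# Build a Semantic Scholar Graph API search URL as a first step
# toward generating a BibTeX entry. Query and fields are examples.
QUERY="attention is all you need"
FIELDS="title,year,authors,venue,externalIds"
ENCODED=$(printf '%s' "$QUERY" | sed 's/ /+/g')
URL="https://api.semanticscholar.org/graph/v1/paper/search?query=$ENCODED&fields=$FIELDS&limit=1"
echo "$URL"

# Fetch and inspect the top hit (requires network access and jq):
# curl -s "$URL" | jq '.data[0]'
```

The returned `externalIds` field typically includes a DOI, which is the natural key for a generated BibTeX entry.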
A sandboxed Docker environment for working with AI agents in n8n.
Updated Jan 17, 2026 - Shell
A simple Docker Compose setup to self-host Ollama and Open WebUI. Run your own private LLMs with GPU acceleration (NVIDIA/AMD) and complete data privacy. Easy to integrate with other services like n8n.
Updated Jan 27, 2026 - Shell