DEV Community

# ollama

Posts

- Building a Containerized GenAI Chatbot with Docker, Ollama, FastAPI & ChromaDB (2 min read)
- OpenClaw + Mac Mini: Run Your Own 24/7 AI Agent Locally (5 reactions, 2 min read)
- One‑Line Magic: Using Claude Code Through Ollama (1 min read)
- The Best of Both Worlds: Merging IBM’s Project Bob with Ollama’s Image Ecosystem (10 min read)
- How to Use Ollama Remotely (1 reaction, 4 min read)
- Open WebUI: Self-Hosted LLM Interface (13 min read)
- Running Local AI with Flox and Ollama (2 min read)
- Building Self-Refining AI Agents with Ollama & Langfuse (3 min read)
- AI-Powered Resume Generator: Architecture & Implementation (6 min read)
- Connecting Claude Code to Ollama to Use Local Models (3 reactions, 1 min read)
- This Might Be the Best Ollama Chat Client: OllaMan (1 reaction, 4 min read)
- Update of “Fun project of the week, Mermaid flowcharts generator!” — V2 and more… (10 min read)
- The Complete Guide to Local AI Coding in 2026 (3 reactions, 4 min read)
- The Ultimate LLM Inference Battle: vLLM vs. Ollama vs. ZML (1 reaction, 6 min read)
- Choosing the Right LLM for Cognee: Local Ollama Setup (3 min read)
- Diagnose & Fix Painfully Slow Ollama: 4 Essential Debugging Techniques + Fixes (20 reactions, 3 min read)
- Securely Exposing LM Studio with Nginx Proxy + Auth + Manage loaded models (3 min read)
- Building an AI-Powered Log Analyser with RAG (6 min read)
- Stop Paying OpenAI: Free Local AI in .NET with Ollama (7 reactions, 2 comments, 13 min read)
- A Complete Guide: Accessing Ollama on Windows from WSL for AI Agent Development (10 min read)
- AI Infrastructure on Consumer Hardware (5 reactions, 9 min read)
- Local LLM Hosting: Complete 2025 Guide - Ollama, vLLM, LocalAI, Jan, LM Studio & More (1 reaction, 19 min read)
- Using Ollama Web Search API in Python (2 reactions, 2 comments, 9 min read)
- Using Ollama Web Search API in Go (11 min read)
- 💻 Unlock RAG-Anything’s Power with Ollama on Your Machine (with Docling as Bonus) (7 comments, 8 min read)