Kubernetes and Argo CD manifests for running an Ollama backend together with the Open WebUI frontend. The manifests default to deploying the stack into an `ai` namespace and exposing the UI through an ingress.
- `ai/` – Kustomize base that provisions the Ollama DaemonSet, Open WebUI deployment, service, ingress, and the `ai` namespace.
- `ingress-lb/` – Kustomize base for a cluster `LoadBalancer` service that fronts an existing NGINX ingress controller (handy on MicroK8s).
- `app-ai-stack.yaml` / `app-ingress-lb.yaml` – Argo CD `Application` resources that point at the `ai` and `ingress-lb` folders in this repository.
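For orientation, the `ai/` base's `kustomization.yaml` likely looks something like the sketch below. Only `ollama-daemonset.yaml` and `openwebui-ingress.yaml` are named elsewhere in this README; the other resource file names are assumptions:

```yaml
# Hypothetical sketch of ai/kustomization.yaml -- resource file names
# other than the DaemonSet and ingress manifests are assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ai
resources:
  - namespace.yaml
  - ollama-daemonset.yaml
  - openwebui-deployment.yaml
  - openwebui-service.yaml
  - openwebui-ingress.yaml
```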
```sh
kubectl apply -k ai
kubectl apply -k ingress-lb
```

Update hostnames or other defaults in the manifests before applying if they differ from your environment (for example, the `ai.home.lan` host in `ai/openwebui-ingress.yaml`).
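The hostname to adjust sits in the ingress rule. A minimal sketch of the relevant part of `ai/openwebui-ingress.yaml` follows; the backend service name and port are assumptions, only the host value comes from this README:

```yaml
# Hypothetical excerpt of ai/openwebui-ingress.yaml; backend service
# name and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openwebui
  namespace: ai
spec:
  rules:
    - host: ai.home.lan   # change to a hostname that resolves in your network
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: open-webui
                port:
                  number: 8080
```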
- Commit this repository to your own Git remote and update the `spec.source.repoURL` values in both `app-*.yaml` files.
- Apply the Argo CD applications: `kubectl apply -f app-ai-stack.yaml` and `kubectl apply -f app-ingress-lb.yaml`.
- Sync the applications in the Argo CD UI/CLI; automated sync and self-heal are already enabled.
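An Argo CD `Application` pointing at the `ai` folder with automated sync and self-heal enabled generally looks like the sketch below. The repo URL is a placeholder you must replace, and the project and destination fields are assumptions:

```yaml
# Hypothetical sketch of app-ai-stack.yaml; repoURL is a placeholder
# and project/destination values are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ai-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/your-fork.git  # update spec.source.repoURL
    targetRevision: HEAD
    path: ai
  destination:
    server: https://kubernetes.default.svc
    namespace: ai
  syncPolicy:
    automated:
      selfHeal: true   # the README confirms automated sync + self-heal
```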
The Ollama DaemonSet uses a `hostPath` volume at `/var/lib/ollama` to persist downloaded models on each node. Ensure that path exists (or change it) on every node where the DaemonSet should run.
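In the DaemonSet's pod spec, such a `hostPath` volume is typically declared as in the fragment below. Only the host path `/var/lib/ollama` comes from this README; the container image and the in-container mount path (Ollama's default model directory) are assumptions:

```yaml
# Hypothetical excerpt of ai/ollama-daemonset.yaml; image and
# mountPath are assumptions, only the hostPath is from this README.
spec:
  containers:
    - name: ollama
      image: ollama/ollama
      volumeMounts:
        - name: models
          mountPath: /root/.ollama
  volumes:
    - name: models
      hostPath:
        path: /var/lib/ollama
        type: Directory   # kubelet requires the directory to already exist
```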
- Adjust resource requests/limits in `ai/ollama-daemonset.yaml` to match your cluster capacity.
- Configure TLS or external DNS for the Open WebUI ingress if exposing it beyond a trusted network.
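The requests/limits to tune live on the Ollama container spec. The numbers below are illustrative placeholders, not recommendations from this repository:

```yaml
# Hypothetical resources stanza for the ollama container in
# ai/ollama-daemonset.yaml; all values are placeholders to adjust.
resources:
  requests:
    cpu: "2"
    memory: 8Gi
  limits:
    cpu: "4"
    memory: 16Gi
```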