- One Intel NUC
- k3s `v1.31.9+k3s1`
- Cilium as CNI
fzf:

```sh
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install
```

k3s and the `config.yaml`:

```yaml
cluster-init: true
write-kubeconfig-mode: "0644"
flannel-backend: "none"
disable-kube-proxy: true
disable-network-policy: true
disable:
  - servicelb
  - traefik
```

```sh
curl -sfL https://get.k3s.io | sh -s - --config=/etc/rancher/k3s/config.yaml
```

cilium:

```sh
helm upgrade \
--install \
--create-namespace \
--namespace kube-system \
--debug \
--reuse-values \
-f cilium/values.yaml \
--version 1.16.4 \
cilium \
cilium/cilium
```
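The chart reference above assumes the Cilium Helm repo has been added:

```sh
helm repo add cilium https://helm.cilium.io/
helm repo update
```

The `-f cilium/values.yaml` file isn't shown here. Since the k3s config disables both kube-proxy and flannel, a minimal sketch would at least enable Cilium's kube-proxy replacement; the values below are assumptions, not the actual file:

```yaml
# Sketch of cilium/values.yaml (assumed values, not the real file).
# kube-proxy is disabled in k3s, so Cilium takes over its job:
kubeProxyReplacement: true
# Without kube-proxy, Cilium needs the API server endpoint directly:
k8sServiceHost: <NUC IP>
k8sServicePort: 6443
```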
This assumes that you have a `tls` directory locally.
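If it doesn't exist yet:

```sh
mkdir -p tls && cd tls
```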
- Run locally:

```sh
openssl genrsa -out nuc-admin.key 2048
openssl req -new -key nuc-admin.key -out nuc-admin.csr -subj /O=nuc-admin/CN=nuc-admin
cat nuc-admin.csr | base64 -w0 | wl-copy -p
```

- Run externally (e.g. on the NUC): create the following manifest; I gave it the name `nuc-admin.yaml`. Note that you can change `expirationSeconds` for longer validity; if you remove it completely you'll get the default 1 year validity from the built-in signer:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: nuc-admin
spec:
  groups:
    - nuc-admin
  request: <COPY-PASTE THE B64 ENCODED CSR HERE>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 108000
  usages:
    - client auth
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: nuc-admin
```

- Run externally:

```sh
kubectl apply -f nuc-admin.yaml
kubectl certificate approve nuc-admin
kubectl get csr nuc-admin -o jsonpath='{.status.certificate}' | base64 -d > admin.crt
```

- Locally: create a file called `nuc-admin.crt` in the `tls` directory and paste in the contents of `admin.crt` from the step above.
- Locally, finalize the kubeconfig:

```sh
kubectl config set-credentials nuc-admin --client-key nuc-admin.key --client-certificate nuc-admin.crt --embed-certs=true
kubectl config set-cluster <CLUSTER NAME> --server https://<NUC IP>:6443 --insecure-skip-tls-verify=true
kubectl config set-context nuc-admin --user=nuc-admin --cluster=<CLUSTER NAME>
kubectl config use-context nuc-admin
```
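A quick sanity check that the new context and client certificate work (using the names set above):

```sh
kubectl --context nuc-admin get nodes
# The group binding grants cluster-admin, so this should print "yes":
kubectl --context nuc-admin auth can-i '*' '*'
```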
- Install ArgoCD using Helm:

```sh
mkdir argo-cd
helm repo add argo https://argoproj.github.io/argo-helm
helm show values --version 7.3.2 argo/argo-cd > argo-cd/7.3.2-values.yaml
```

- Make relevant changes to the values file.
- Install:

```sh
helm upgrade \
--install \
--reuse-values \
--create-namespace \
--namespace argocd \
--values argo-cd/values.yaml \
--version 7.7.7 \
--debug \
argocd \
argo/argo-cd
```

- Get the password set for the built-in `admin` account:

```sh
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode ; echo
```

- At the moment I'm only port-forwarding to my cluster services, so to be able to initially browse to the ArgoCD UI I did the following:

```sh
kubectl port-forward svc/argocd-server -n argocd 4443:443
```

The UI is then reachable at https://localhost:4443. I'll change the way I expose services and applications in the cluster later on.
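Optionally, log in with the `argocd` CLI through the same port-forward (assumes the CLI is installed; `--insecure` because the certificate won't match localhost):

```sh
argocd login localhost:4443 --username admin --insecure
```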
- Install the `ApplicationSet` to install all applications (a sketch of the manifest follows below):

```sh
kubectl apply -f argo-cd/appset.yaml
```
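The contents of `argo-cd/appset.yaml` aren't shown here; a minimal sketch using a git directory generator, with the repo URL and paths as placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: apps
  namespace: argocd
spec:
  generators:
    # One Application per directory under apps/ in the GitOps repo:
    - git:
        repoURL: https://github.com/<USER>/<REPO>.git
        revision: HEAD
        directories:
          - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/<USER>/<REPO>.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```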
Install the system-upgrade-controller:

```sh
export SUC_VERSION="v0.14.2"
kubectl apply --force-conflicts --server-side -f https://github.com/rancher/system-upgrade-controller/releases/download/${SUC_VERSION}/crd.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/${SUC_VERSION}/system-upgrade-controller.yaml
```

Upgrade k3s using the system-upgrade-controller:
- Change the k3s version in the `Plan` manifest (see the sketch below).
- Apply the `Plan` manifest:

```sh
kubectl apply -f ./k3s/server-plan.yaml
```
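For reference, a server `Plan` along the lines of the upstream k3s example; treat this as a sketch, the actual `./k3s/server-plan.yaml` may differ:

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  # Only target control-plane (server) nodes:
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  # Bump this to roll out a new k3s version:
  version: v1.31.9+k3s1
```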
Assumes that the tailscale-operator has been installed and everything needed has been configured in your Tailscale account; see this link for more info on how to do this!

On services that shall be exposed over Tailscale, add the following to their Service object:

```yaml
...
annotations:
  tailscale.com/expose: "true"
  tailscale.com/hostname: prometheus
...
```
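In context, a complete Service might look like this (the prometheus name and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: prometheus
spec:
  selector:
    app: prometheus
  ports:
    - port: 9090
      targetPort: 9090
```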