This is my IaC for my personal projects.
TODO:
- longhorn ui setup
I use Hetzner as my cloud provider and create a Kubernetes cluster using k3s, hosted on non-dedicated servers.
This part is managed via Terraform and the terraform-hcloud-kube-hetzner module; it lives in the /hcloud_cluster folder.
- set up the Terraform variables:

  ```sh
  cp hcloud_cluster/terraform.tfvars.template hcloud_cluster/terraform.tfvars
  ```

  then fill the file with your values; each variable has a comment explaining how to obtain it
- follow the kube-hetzner module installation instructions
- run:

  ```sh
  terraform apply
  ```

  it will take a bit to create the cluster; once done you can get the kubeconfig with

  ```sh
  terraform output -raw kubeconfig > ./kubeconfig
  ```
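Once the kubeconfig is written, a quick sanity check (a minimal sketch, assuming the commands above were run from the repo root):

```sh
# point kubectl at the freshly created cluster
export KUBECONFIG="$(pwd)/kubeconfig"
# every control-plane and agent node should eventually report Ready
kubectl get nodes -o wide
```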
The Terraform setup creates:
- Cloudflare records for the Kubernetes API and the Grafana dashboard
- a control-plane node pool with 3 nodes (recommended server type at least `cpx21`, because 4GB of RAM is the minimum in most cases to handle the cluster well)
- an agent node pool for lightweight applications and core Kubernetes services (the nodes are called `agent-tender`, as in the support tender of boats)
- an autoscaler agent node pool for general purpose applications (called `agent-cruiser`, as in cruiser sailing boats)
- an autoscaler agent node pool for resource-intensive applications (called `agent-racer`)
- 2 Hetzner load balancers, one for the control plane and one for the agent nodes
- all nodes use `OpenSUSE MicroOS`
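A quick way to cross-check the pieces above (a sketch; the DNS name is a hypothetical placeholder, the real records live in Cloudflare):

```sh
# list the two Hetzner load balancers and their public IPs
hcloud load-balancer list
# hypothetical record name for the kubernetes api; compare the answer
# against the control-plane load balancer IP from the command above
dig +short kube.giuliopime.dev
```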
Kubernetes-wise (installed directly via the kube-hetzner Terraform module):
- `calico` as the CNI
- `nginx` as the ingress controller
- `longhorn` for efficient and scalable storage management:
  - used to have fast persistent storage for stuff like DBs
  - pools the NVMe storage of all the nodes and manages it together, giving you a simple StorageClass that you can use in your PVCs
  - will only use the storage of nodes with the label `node.longhorn.io/create-default-disk=true`
  - the default StorageClass name is `longhorn` (see the example after this list)
- `kured` for automatically rebooting nodes after kernel updates
- cluster autoscaler (bless it)
- SMB support: in the future I want to use Hetzner Storage Boxes for hosting Immich and other stuff
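As an example of how the Longhorn storage gets consumed (a minimal sketch, not taken from this repo: the node name, PVC name, and size are placeholders):

```sh
# only nodes carrying this label contribute their NVMe storage to longhorn
kubectl label node agent-tender-1 node.longhorn.io/create-default-disk=true

# a PVC bound to the default longhorn StorageClass, e.g. for a database
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
EOF
```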
Kubernetes resources are managed using ArgoCD and live in the /k8s-resources folder. To set it up:
- install ArgoCD in the cluster:

  ```sh
  kubectl create namespace argocd
  kubectl apply -k ./argocd-installation
  ```

  and on your local machine:

  ```sh
  brew install argocd
  ```
- configure two Nginx ingresses for HTTP/HTTPS and gRPC:

  ```sh
  kubectl apply -f ./argocd-installation/argocd-nginx-ingresses.yaml
  ```
- login via the CLI:

  ```sh
  argocd admin initial-password -n argocd
  ```

  use username `admin` and the password from the previous command to login:

  ```sh
  argocd login grpc.argocd.giuliopime.dev
  ```

  then change the password and delete the old one:

  ```sh
  argocd account update-password
  kubectl delete secret argocd-initial-admin-secret -n argocd
  ```
- access the web UI at argocd.giuliopime.dev using the credentials created in the previous step
- TODO: document how to set up this repository and sealed secrets
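To sanity-check the ArgoCD setup above (a sketch; namespace and resource names are the defaults created by the install manifests):

```sh
# core argocd pods should all be Running
kubectl get pods -n argocd
# the HTTP and gRPC ingresses created earlier
kubectl get ingress -n argocd
# requires the argocd login step above
argocd app list
```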