Storage provisioner: deploy as DaemonSet and support Node Affinity#22945
medyagh wants to merge 5 commits into
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: medyagh
With `volumeBindingMode: WaitForFirstConsumer`, a PersistentVolume (PV) is only provisioned, and the PVC bound, after a Pod using the PVC is successfully scheduled.
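For context, a StorageClass using late binding looks roughly like this (a sketch, not necessarily the exact minikube manifest):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath
volumeBindingMode: WaitForFirstConsumer
```

With `Immediate` (the default), the PV would be created as soon as the PVC appears, before the scheduler knows which node the consuming Pod will land on.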
@medyagh: The following tests failed.
PR needs rebase.
PR Description
Summary
This PR upgrades minikube's built-in dynamic storage-provisioner (hostpath provisioner) to properly support multi-node clusters.

The Problem:
Previously, the storage-provisioner ran as a single Pod on the control-plane node. In a multi-node cluster, hostPath volumes were therefore always backed by the control plane's filesystem, and Pods scheduled on other nodes could not correctly use the provisioned volumes.
The Solution:
- The provisioner now runs as a `DaemonSet` on every node in the cluster.
- Late binding (`WaitForFirstConsumer`): The `standard` StorageClass is updated to use `volumeBindingMode: WaitForFirstConsumer`. This delays volume provisioning until the Kubernetes Scheduler has assigned the consuming Pod to a specific target node.
- Each provisioned PV carries `pv.Spec.NodeAffinity` matching the node. This restricts the consuming pod (and all future pods using the PV) to only run on that specific node.
- Each replica registers a per-node provisioner name, `k8s.io/minikube-hostpath-<nodeName>`, so DaemonSet pod updates or crashes do not create orphaned PVs.

Detailed Changes
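The per-node behavior described in the solution above can be sketched in simplified Go. The struct and field names here are illustrative stand-ins, not the actual minikube code (which builds on `sigs.k8s.io/sig-storage-lib-external-provisioner` and `k8s.io/api` types):

```go
package main

import "fmt"

// provisionOptions is a stand-in for the library's provisioning request.
type provisionOptions struct {
	SelectedNode string // node the scheduler picked (set under WaitForFirstConsumer)
}

// hostPathProvisioner is a stand-in for the per-node provisioner.
type hostPathProvisioner struct {
	nodeName string // in the real code, read from the NODE_NAME env var
}

// shouldProvision sketches the per-node filtering: each DaemonSet replica
// only provisions PVCs that the scheduler assigned to its own node.
func (p *hostPathProvisioner) shouldProvision(opts provisionOptions) bool {
	return opts.SelectedNode == p.nodeName
}

// provisionerName sketches the per-node identity used to avoid orphaned PVs.
func (p *hostPathProvisioner) provisionerName() string {
	return "k8s.io/minikube-hostpath-" + p.nodeName
}

func main() {
	p := &hostPathProvisioner{nodeName: "multinode-m02"}
	fmt.Println(p.shouldProvision(provisionOptions{SelectedNode: "multinode-m02"})) // own node: provision
	fmt.Println(p.shouldProvision(provisionOptions{SelectedNode: "multinode-m03"})) // other node: skip
	fmt.Println(p.provisionerName())
}
```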
Core Provisioner Updates
- `go.mod`: Upgraded `sigs.k8s.io/sig-storage-lib-external-provisioner/v6` to `sigs.k8s.io/sig-storage-lib-external-provisioner/v13` (v13.0.0) to ensure compatibility with modern client-go v1.36 APIs.
- `pkg/storage/storage_provisioner.go`: Reads the provisioner's `nodeName` from the `NODE_NAME` environment variable; updates `Provision()` to ignore PVCs targeted at other nodes; adds `VolumeNodeAffinity` generation on provisioned PV specs.

Manifest & Build Configuration Updates
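A DaemonSet replica can learn which node it runs on via the standard downward-API pattern. The fragment below is a sketch of that pattern; the actual minikube template may differ in labels, flags, and image tag:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: storage-provisioner
  template:
    metadata:
      labels:
        app: storage-provisioner
    spec:
      containers:
        - name: storage-provisioner
          image: gcr.io/k8s-minikube/storage-provisioner:v6.0.1
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
```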
- `deploy/addons/storage-provisioner/storage-provisioner.yaml.tmpl`: Converted the workload template from `Pod` to `DaemonSet` and added coordination Lease RBAC rules.
- `deploy/addons/storageclass/storageclass.yaml`: Configured `volumeBindingMode: WaitForFirstConsumer`.
- `Makefile` / `deploy/storage-provisioner/Dockerfile`: Bumped the image tag to `v6.0.1` and migrated the custom `arch` build parameter to the standard Buildx `TARGETARCH`.

Test Suite Additions
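For reference, a PV carrying the node affinity that the provisioner generates looks roughly like this (illustrative names and sizes; `kubernetes.io/hostname` is the standard node label used for this kind of affinity):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
spec:
  capacity:
    storage: 1Gi
  accessModes: [ReadWriteOnce]
  hostPath:
    path: /tmp/hostpath-provisioner/default/example
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - multinode-m02
```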
- `test/integration/multinode_test.go`: Added the `validateStorageProvisionerNodeAffinity` integration test. It deploys a pod targeted at a worker node (`multinode-m02`), verifies that the local PV is successfully created, and asserts that the PV's `NodeAffinity` points to `multinode-m02`.
- `test/integration/testdata/`: Added `node-affinity-pvc.yaml` and `node-affinity-pod.yaml` test manifests.

Verification Results
Tested successfully against a local 2-node Docker driver cluster with the integration test suite:
Log Output: