
feat(chart): improve configurability of mounts for the longhorn-manager DaemonSet #11199

Open
pstark-code wants to merge 2 commits into longhorn:master from pstark-code:feature/configurable-host-path

Conversation


@pstark-code commented Jun 30, 2025

Which issue(s) this PR fixes:

Issue #11186

Additionally, this adds options for extra Volumes and extra VolumeMounts. Longhorn supports multiple disks, but the chart cannot mount multiple volumes into the container.

What this PR does / why we need it:

  • add support for a custom hostPath for the data volume attached to the longhorn-manager pod
  • add support for extra volumes entries and extra volumeMounts entries (see the sketch below).
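
A minimal sketch of how these values might look, assuming illustrative key names (`dataHostPath`, `extraVolumes`, `extraVolumeMounts`) rather than the chart's final API:

```yaml
# values.yaml (sketch; key names are illustrative, not the chart's actual API)
dataHostPath: /var/lib/longhorn      # default keeps the previously hard-coded path

extraVolumes:                        # additional volumes for the longhorn-manager pod
  - name: extra-disk
    hostPath:
      path: /var/mnt/extra-disk
      type: DirectoryOrCreate

extraVolumeMounts:                   # where those volumes appear in the container
  - name: extra-disk
    mountPath: /var/mnt/extra-disk
```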

Special notes for your reviewer:

This change does not require any adjustments for existing users that don't need this feature. It should be fully backward compatible.
Additional documentation or context

@mantissahz (Contributor)

Hi @pstark-code, please sign off the commit.

@pstark-code force-pushed the feature/configurable-host-path branch 2 times, most recently from 809b03e to f57e967 on June 30, 2025 at 10:51
@pstark-code (Author)

Signed off and rebased on top of the upstream default branch.

…-manager ds

- add configurable value in values.yaml with the default value set to
  the previously hard-coded value
- replace the hostPath in the DaemonSet with the new value

Notes:
- this will keep working for existing users without configuration
  changes

Signed-off-by: P. Stark <github@codechaos.ch>
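
A hedged sketch of the template change this commit describes, assuming the value is named `dataHostPath` (the actual key name in the chart may differ):

```yaml
# longhorn-manager DaemonSet template (excerpt, simplified)
volumes:
  - name: longhorn
    hostPath:
      # previously hard-coded as /var/lib/longhorn; the default in
      # values.yaml preserves that path for existing users
      path: {{ .Values.dataHostPath | default "/var/lib/longhorn" }}
```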
@pstark-code force-pushed the feature/configurable-host-path branch from f57e967 to 3ae957c on June 30, 2025 at 15:06
@pstark-code changed the title from "feat(chart): add support for a custom hostPath for the manager pod" to "feat(chart): improve configurability of mounts for the longhorn-manager DaemonSet" on Jun 30, 2025
@derekbit (Member)

The PR might not be enough since some paths in the longhorn-manager or other component images are hardcoded with the /var/lib/longhorn prefix.

@pstark-code (Author)

> The PR might not be enough since some paths in the longhorn-manager or other component images are hardcoded with the /var/lib/longhorn prefix.

This was brought up in multiple places. Please read my response. This change does not change the path it gets mounted at in the container. So for the Longhorn Manager software, nothing changes.

@pstark-code (Author)

I also added the ability to configure extra volumes to be mounted into the longhorn-manager container. This mirrors Longhorn's option to have multiple "disks".

…anager pod

Signed-off-by: P. Stark <github@codechaos.ch>
@pstark-code force-pushed the feature/configurable-host-path branch from 3ae957c to b1da145 on June 30, 2025 at 15:14
@derekbit (Member) commented Jun 30, 2025

> > The PR might not be enough since some paths in the longhorn-manager or other component images are hardcoded with the /var/lib/longhorn prefix.

> This was brought up in multiple places. Please read my response. This change does not change the path it gets mounted at in the container. So for the Longhorn Manager software, nothing changes.

The binaries stored in the host’s /var/lib/longhorn/engine-binaries are managed by the engineImage controller (code) and the EngineImage DaemonSet.

The change introduced in the PR makes the spec.volumes['longhorn'].hostPath.path of the longhorn-manager DaemonSet configurable. However, the EngineImage DaemonSet is not aware of this change, which results in the controllers being unable to locate the binaries at the expected path inside the container (/var/lib/longhorn/engine-binaries). Thus, updating the longhorn-manager code is still required.
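
A schematic illustration of the coupling described above (not the actual generated manifests; the custom path is hypothetical):

```yaml
# EngineImage DaemonSet (schematic): writes engine binaries to a fixed host path.
volumes:
  - name: engine-binaries
    hostPath:
      path: /var/lib/longhorn/engine-binaries

# longhorn-manager DaemonSet (schematic): with this PR the host side becomes
# configurable, but the manager still looks for the binaries at
# /var/lib/longhorn/engine-binaries inside its container, which now maps to
# /custom/longhorn/engine-binaries on the host -- a directory the
# EngineImage DaemonSet never populated.
volumes:
  - name: longhorn
    hostPath:
      path: /custom/longhorn   # hypothetical custom value
```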

@pstark-code (Author)

> The binaries stored in the host’s /var/lib/longhorn/engine-binaries are managed by the engineImage controller (code) and the EngineImage DaemonSet.

Ah! That explains why I couldn't figure out how they get there!

Okay, then I will remove the configurable hostPath and leave the extraMounts and extraVolumeMounts options, because that, together with the default defaultSettings.defaultDataPath, resolves all the issues I was looking to address.

Just out of curiosity: why are the engine-binary-fetching containers not part of the longhorn-manager pods? InitContainers, maybe? It seems unintuitive to have this tight coupling between two seemingly disparate DaemonSets.

@derekbit (Member) commented Jul 1, 2025

> Okay, then I will remove the configurable hostPath and leave the extraMounts and extraVolumeMounts options, because that, together with the default defaultSettings.defaultDataPath, resolves all the issues I was looking to address.

Is narrowing down the scope to extraMounts and extraVolumeMounts enough for addressing the issue?

> Just out of curiosity: why are the engine-binary-fetching containers not part of the longhorn-manager pods? InitContainers, maybe? It seems unintuitive to have this tight coupling between two seemingly disparate DaemonSets.

I think this is because Longhorn can have only one longhorn-manager DaemonSet but can support multiple EngineImage DaemonSets; Longhorn supports multiple engine images simultaneously.

@pstark-code (Author) commented Jul 1, 2025

> Is narrowing down the scope to extraMounts and extraVolumeMounts enough for addressing the issue?

I believe so. I wanted to resolve two things:

  1. I thought it was a silly limitation to require a very specific path on the host to contain Longhorn data.
  • As it turns out, there is a good reason not to change this: doing so would require far-reaching refactoring within the longhorn-manager, as far as I can tell.
  2. I was trying to work around a Talos-specific issue I was having.
  • Context: Talos runs the kubelet inside a container, so any paths that need to be visible to the kubelet need to be bind-mounted into that namespace. This gave me some issues with the new UserVolume mechanism for exposing extra disks to Talos nodes. The only reason I was trying to do that is that the longhorn chart did not have any other way to access extra storage.
  • The extra disks are already automatically available inside the Talos kubelet namespace at /var/mnt/${user-volume-name}. So if I can just specify another Volume with that hostPath, and then add a Longhorn disk with that path for storage, everything should work (or just set it via defaultSettings.defaultDataPath; see the sketch after this comment).

To summarize: yes, I think limiting the scope will be sufficient. In fact, I think making the hostPath of /var/lib/longhorn configurable is useless and would fundamentally (although temporarily) break Longhorn deployments.
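
For concreteness, a hedged sketch of the Talos workflow described in point 2 above, assuming illustrative key names for the extra-volume options and a hypothetical UserVolume named `extra`:

```yaml
# values.yaml (sketch; extraVolumes/extraVolumeMounts key names are illustrative)
extraVolumes:
  - name: talos-extra
    hostPath:
      path: /var/mnt/extra          # Talos exposes UserVolumes at /var/mnt/<name>
extraVolumeMounts:
  - name: talos-extra
    mountPath: /var/mnt/extra       # same path inside the container

defaultSettings:
  defaultDataPath: /var/mnt/extra   # or add a Longhorn disk at this path instead
```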

@shuo-wu (Contributor) commented Jul 25, 2025

> Talos runs the kubelet inside a container and so any paths that need to be visible to the kubelet need to be bind-mounted into that namespace.

  • If all paths used by longhorn-manager must be bind-mounted into the kubelet container, this change is probably not enough, since longhorn-manager hard-codes some paths when operating on files, e.g., the socket files.

> if I can just specify another Volume with that hostPath, and then add a longhorn disk with that path for storage, everything should work. (or just set it via the defaultSettings.defaultDataPath)

  • If you are trying to use a customized data path/disk path as a Longhorn Disk, the main ways are modifying the setting before launching Longhorn or editing the Longhorn Node after launching Longhorn. IIRC, changing the volume host path mounted by longhorn-manager won't automatically add it as a Longhorn Disk, because it's the InstanceManager Pod that is responsible for managing those data paths. Both routes are sketched below.
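
A sketch of both routes. The chart key `defaultSettings.defaultDataPath` and the `spec.disks` map on the `nodes.longhorn.io` resource are existing Longhorn interfaces; the disk name and path below are illustrative:

```yaml
# 1) Before launching Longhorn: set the default data path via the chart.
defaultSettings:
  defaultDataPath: /var/mnt/extra

# 2) After launching Longhorn: add a disk to the Node resource, e.g.
#    kubectl -n longhorn-system edit nodes.longhorn.io <node-name>
spec:
  disks:
    extra-disk:                 # illustrative disk name
      path: /var/mnt/extra
      allowScheduling: true
      storageReserved: 0
```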

