Releases: rook/rook
v1.18.8
Improvements
Rook v1.18.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- core: Add support for ceph tentacle (#16501, @subhamkrai)
- helm: Include exporter options in CephCluster (#16745, @michaeltchapman)
- toolbox: Merge rook-config-override ConfigMap into ceph.conf (#16731, @mheler)
- csi: ControllerPlugin/NodePlugin resource settings were reversed (#16735, @swills)
- osd: Allow snaptrim and snaptrim_wait PGs by the PDBs during node drains (#16713, @sp98)
- helm: Fix default pathType for HTTPRoute in the rook-ceph-cluster chart (#16724, @fancl20)
- pool: Retry if pool status is empty in the rados namespace controller (#16705, @parth-gr)
- namespace: Add retryOnConflict when updating status (#16661, @subhamkrai)
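As a sketch of the toolbox change above (#16731): Rook reads overrides from the `rook-config-override` ConfigMap and merges them into the `ceph.conf` used by the toolbox pod. A minimal example, assuming the standard `rook-ceph` namespace (the option shown is illustrative; any valid ceph.conf setting can go in the `config` key):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # rook-config-override is the ConfigMap Rook consults for ceph.conf overrides
  name: rook-config-override
  namespace: rook-ceph # adjust to your cluster namespace
data:
  config: |
    [global]
    # illustrative override; merged into the toolbox's ceph.conf
    osd_pool_default_size = 3
```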
v1.18.7
Improvements
Rook v1.18.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- pool: Retry pool status updates in the radosnamespace controller (#16700, @parth-gr)
- osd: Add device class label to the osd prepare pods (#16675, @parth-gr)
- external: Fix quote parsing and message in import-external-cluster.sh (#16646, @GanghyeonSeo)
- object: Fix user quotas being overwritten when obc bucketOwner is set (#16672, @jhoblitt)
- docs: Example of application migration between clusters (#16659, @travisn)
- mgr: Add hostNetwork field to Ceph Mgr spec (#16617, @Sunnatillo)
- osd: Add CephCluster `OSDMaxUpdatesInParallel` to tune OSD updates (#16655, @jhoblitt)
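The `OSDMaxUpdatesInParallel` setting from #16655 limits how many OSDs Rook updates concurrently. A hedged sketch follows; the field name is taken from the release note, but its exact location in the CephCluster spec is an assumption here, so verify it against the CephCluster CRD reference before use:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Assumed field path -- confirm against the CephCluster CRD docs.
  # Limits Rook to updating at most 2 OSDs at a time during upgrades.
  osdMaxUpdatesInParallel: 2
```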
v1.17.9
Improvements
Rook v1.17.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- pool: Retry pool status updates in the radosnamespace controller (#16700, @parth-gr)
- object: Fix user quotas being overwritten when OBC bucketOwner is set (#16672, @jhoblitt)
- mon: Wait for the canary pods to be terminated (#16619, @sp98)
- mon: Respond quickly to the mon canary pod deletion (#16629, @travisn)
- namespace: Blocklist `ip:nonce` in cleanup job (#16532, @Madhu-1)
- core: Fix typos in ObjectZoneSpec.ZoneGroup and ObjectZoneGroupSpec.Realm field descriptions (#16496, @jhoblitt)
v1.18.6
v1.18.5
Improvements
Rook v1.18.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- mon: Wait for the canary pods to be terminated before starting mon daemons (#16619, @sp98)
- mon: Trap the sigterm to respond quickly to the mon canary pod deletion (#16629, @travisn)
- osd: Set osd resources for specific device class (#16614, @travisn)
- mgr: Add required k8s label for endpointSlice (#16618, @subhamkrai)
- pool: Skip setting the target size ratio to 0 by default (#16609, @travisn)
- core: Fix ceph health check up status check (#16595, @parth-gr)
- osd: Allow OSD init keyring update to be best-effort instead of fail osd start (#16603, @BlaineEXE)
- osd: Add a timeout for osd create and update process (#16594, @parth-gr)
- core: Add missing labels to RBAC resources to prevent ArgoCD drift (#16507, @fullstackjam)
- mon: Update the clusterinfo with v2 port (#16540, @parth-gr)
- file: Allow overriding MDS cache memory limit. (#16556, @timbuchwaldt)
- osd: More detailed logging to check node topology conflicts (#16573, @OdedViner)
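For the per-device-class OSD resources change above (#16614), Rook's resources map accepts device-class-specific keys such as `osd-hdd`, `osd-ssd`, and `osd-nvme`. A sketch (the resource values are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  resources:
    # Applies only to OSDs whose device class is hdd;
    # other classes fall back to the generic "osd" key if set.
    osd-hdd:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        memory: "6Gi"
```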
v1.18.4
v1.18.3
Improvements
Rook v1.18.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- helm: Allow specifying the image tag and repository separately (#16512, @travisn)
- csi: Allow overriding volume settings in csi operator for nixos (#16395, @travisn)
- osd: Exclude down OSDs from main PDB when cluster is clean (#16112, @elias-dbx)
- build: Add csi operator image to images.txt (#16563, @travisn)
- csi: Update csi-operator version to v0.4.1 (#16560, @Madhu-1)
- rbdmirror: Fix mirroring monitoring settings for rados namespaces (#16520, @parth-gr)
- namespace: Blocklist `ip:nonce` in cleanup job (#16532, @Madhu-1)
- osd: Clean encrypted disks from other clusters (#16488, @sp98)
- csi: Avoid port conflict by removing liveness probe from the csi-operator (#16516, @Madhu-1)
- csi: Add labels to the csi-operator driver pod (#16514, @subhamkrai)
- helm: Refactoring to modernize templates (#16494, @consideRatio)
- osd: Updated blocking pdbs when drained node comes back online (#16506, @sp98)
- core: Use latest operator context to avoid reference to canceled context (#16493, @sp98)
- ci: Update latest k8s version to v1.34 (#16418, @obnoxxx)
- external: Fix ipv6 monitoring endpoint reconcile (#16468, @parth-gr)
- pool: Allow enableCrushUpdates to be nil (#16478, @travisn)
- mon: Fix mon health nil pointer exception with mons on PVC (#16484, @sp98)
- helm: Refactoring of rook-ceph's configmap to be easier to read and maintain (#16457, @consideRatio)
- nfs: Rotate nfs cephx key (#16456, @subhamkrai)
- external: Fixing rbd provisioner secret in import-external-cluster script (#16474, @rubentsirunyan)
- core: Add CRD Phase column to cephor, cephnfs, cephbn (#16541 #16542 #16543, @jhoblitt)
- object: Add status.{phase,observedGeneration} to cephbn (#16499, @jhoblitt)
- core: fix ObjectZoneSpec.ZoneGroup and ObjectZoneGroupSpec.Realm field descriptions (#16496, @jhoblitt)
v1.18.2
Improvements
Rook v1.18.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- helm: Upgrade requires either deletion of storage classes or removal of new storage class properties, see the Upgrade Guide (#16454, @parth-gr)
- csi: Set host networking on the csi controller pod if host networking is enforced (#16462, @travisn)
- csi: Fix cephx key deletion logic (#16452, @BlaineEXE)
- csi: Add multus network annotation to csi-operator (#16448, @subhamkrai)
- external: Fix secret values in import-external-cluster script (#16433, @rubentsirunyan)
- osd: Remove the osd bootstrap keyring that is not needed after creation (#16421, @parth-gr)
- core: Delete bootstrap keys not necessary for ceph daemons (#16372, @sp98)
- core: Allow rotation of the client.admin cephx key (#16271, @BlaineEXE)
- osd: Rotate lockbox keys for encrypted OSDs (#16409, @sp98)
v1.18.1
Improvements
Rook v1.18.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- csi: Set the cephfs kernel mount options when network encryption is enabled (#16399, @travisn)
- csi: Generate default crush topology labels for the csi operator settings (#16376, @travisn)
- csi: Create csi operator resources when operator settings configmap is updated (#16382, @subhamkrai)
- csi: Delete csi operator CR's when disabled (#16381, @subhamkrai)
- csi: Set csi operator as default if no settings found (#16405, @travisn)
- csi: Fix wrong use of daemon config for cephx status (#16396, @BlaineEXE)
- helm: Recreate storage classes with helm upgrades to add keep policy and new properties (#16373, @parth-gr)
- ci: Always initialize CSI driver names (#16393, @travisn)
v1.18.0
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- Kubernetes v1.29 is now the minimum version supported by Rook, with support through the upcoming K8s release v1.34.
- Helm versions 3.13 and newer are supported. Previously, only the latest version of helm was tested and the docs stated only helm version 3.x as a prerequisite. Rook now supports the six most recent minor versions of helm along with their patch updates.
- Rook now validates node topology during CephCluster creation to prevent misconfigured CRUSH hierarchies for OSDs. If child labels like `topology.rook.io/rack` are duplicated across zones, cluster creation will fail. The check applies only to new clusters without OSDs; clusters with existing OSDs will only log a warning and continue. If the check misfires for your topology, it can be suppressed by setting `ROOK_SKIP_OSD_TOPOLOGY_CHECK=true` in the `rook-ceph-operator-config` ConfigMap.
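If the topology validation blocks a cluster whose layout is intentional, the suppression described above is applied through the operator ConfigMap, for example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # The operator settings ConfigMap named in the release note
  name: rook-ceph-operator-config
  namespace: rook-ceph # operator namespace; adjust as needed
data:
  # Skips the OSD topology check during CephCluster creation
  ROOK_SKIP_OSD_TOPOLOGY_CHECK: "true"
```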
Features
- The Ceph CSI operator is now the default and recommended component for configuring CSI drivers for RBD, CephFS, and NFS volumes. The CSI operator has been factored out of Rook to run independently to manage the Ceph-CSI driver.
- During the upgrade and throughout the v1.18.x releases, Rook will automatically convert any Rook CSI settings to the new CSI operator CRs. This transition is expected to be completely transparent. In the future v1.19 release, Rook will relinquish direct control of these settings so advanced users can have more flexibility when configuring the CSI drivers. At that time, we will have a guide on configuring these new Ceph CSI operator CRs directly.
- During install, as mentioned in the Quickstart Guide, there is a new manifest to be created: csi-operator.yaml
- If installing with the helm chart, the Ceph CSI operator will automatically be installed by default with the new helm setting `csi.rookUseCsiOperator` in the rook-ceph chart.
- If a blocking issue is found, the previous CSI driver can be re-enabled by setting `ROOK_USE_CSI_OPERATOR: false` in operator.yaml or by applying the helm setting `csi.rookUseCsiOperator: false`.
- Ceph CSI v3.15 has a range of features and improvements for the RBD, CephFS, and NFS drivers. This release is supported both by the Ceph CSI operator and Rook's direct mode of configuration. Starting in the next release (at the end of the year), the Ceph CSI operator will be required to configure the CSI driver.
- CephX key rotation is now available as an experimental feature for the CephX authentication keys used by Ceph daemons and clients. Users will begin to see new cephx status items on some Rook resources in newly-deployed Rook clusters. Users can also find `spec.security.cephx` settings that allow initiating CephX key rotation for various Ceph components. Full documentation for key rotation can be found here.
- Ceph version v19.2.3+ is required for key rotation.
- The Ceph admin and mon keys cannot yet be rotated. Implementation is still in progress while in experimental mode.
- Add support for specifying the clusterID in the CephBlockPoolRadosNamespace and the CephFilesystemSubVolumeGroup CR.
- When a mon is being failed over, if the assigned node no longer exists, the mon is failed over immediately instead of waiting for a 20-minute timeout.
- Support for Ceph Tentacle v20 will be available as soon as it is released.
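Reverting to Rook's previous direct CSI driver management, as described in the Features list above, is done with the `ROOK_USE_CSI_OPERATOR` setting. Assuming the setting lives in the operator settings ConfigMap shipped in operator.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph # operator namespace; adjust as needed
data:
  # Disables the Ceph CSI operator and re-enables the legacy CSI driver management
  ROOK_USE_CSI_OPERATOR: "false"
```

With helm, the equivalent is the chart value `csi.rookUseCsiOperator: false` (e.g. `--set csi.rookUseCsiOperator=false` on the rook-ceph chart).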