
Conversation


@fabi200123 commented Sep 8, 2022

What type of PR is this?

/kind feature

What this PR does / why we need it:

It adds Windows support for the In-place Pod Vertical Scaling feature by resolving some of the existing Windows-specific TODOs.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

[FEATURE]: In-place Pod Vertical Scaling
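
For context, a minimal sketch of how an in-place resize is requested under this KEP: the new CPU/memory values are patched into the pod spec, and with the InPlacePodVerticalScaling feature gate enabled the kubelet applies them to the running container instead of recreating it (subject to the container's resize policy). The pod name, container name, and quantity below are made up for illustration.

```go
// Illustrative only: requesting an in-place resize by patching the pod's
// container resources. Not code from this PR.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that raises the CPU limit of container "app" in
	// pod "resize-demo"; the kubelet then resizes the running container in
	// place rather than restarting it, if the resize policy allows.
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"limits":{"cpu":"750m"}}}]}}`)

	pod, err := client.CoreV1().Pods("default").Patch(
		context.TODO(), "resize-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched pod:", pod.Name)
}
```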

vinaykul and others added 9 commits September 5, 2022 20:14
 1. Define ContainerResizePolicy and add it to Container struct.
 2. Add ResourcesAllocated and Resources fields to ContainerStatus struct.
 3. Define ResourcesResizeStatus and add it to PodStatus struct.
 4. Add InPlacePodVerticalScaling feature gate and drop disabled fields.
 5. ResizePolicy validation & defaulting and Resources mutability for CPU/Memory.
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources
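
To make the shape of these API changes concrete, here is a rough Go sketch following the names in the commit message. The real definitions in k8s.io/api/core/v1 use resource.Quantity and typed enums; this only shows where the new fields hang off the existing structs.

```go
// Illustrative only: simplified versions of the API additions described above.
package sketch

// ResourceName identifies a resource, e.g. "cpu" or "memory".
type ResourceName string

// ResourceList maps resource names to quantities (simplified to strings here).
type ResourceList map[ResourceName]string

// ResourceRequirements mirrors the existing requests/limits pair.
type ResourceRequirements struct {
	Limits   ResourceList
	Requests ResourceList
}

// ContainerResizePolicy states, per resource, whether a resize can be applied
// in place or requires a container restart (commit item 1).
type ContainerResizePolicy struct {
	ResourceName ResourceName
	Policy       string // e.g. "RestartNotRequired" or "Restart"
}

// Container gains the ResizePolicy list (commit item 1).
type Container struct {
	Name         string
	Image        string
	Resources    ResourceRequirements
	ResizePolicy []ContainerResizePolicy
}

// ContainerStatus reports both what the node has allocated and what the
// running container is actually configured with (commit item 2).
type ContainerStatus struct {
	Name               string
	ResourcesAllocated ResourceList
	Resources          *ResourceRequirements
}

// PodStatus carries the state of a pending or in-progress resize
// (commit item 3), e.g. "Proposed", "InProgress", "Deferred", "Infeasible".
type PodStatus struct {
	Resize string
}
```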
 1. Core Kubelet changes to implement In-place Pod Vertical Scaling.
 2. E2E tests for In-place Pod Vertical Scaling.
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources

Co-authored-by: Chen Wang <Chen.Wang1@ibm.com>
1. Refactor kubelet code and add missing tests (Derek's kubelet review)
2. Add a new hash over container fields without Resources field to allow feature gate toggling without restarting containers not using the feature.
3. Fix corner-case where resize A->B->A gets ignored
4. Remove spurious 'ContainerResources' in Update request.
5. Add guidance/expectation from runtime in handling UpdateContainerResources CRI API.
6. Add cgroup v2 support to pod resize E2E test.
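
Point 2 above is the subtle one: if the container hash included the Resources field, toggling the feature gate would change every container's hash and restart it. A minimal sketch of the idea (not the kubelet's actual hashing code, which uses its own deep-hash helper):

```go
// Illustrative only: hash the container definition with Resources cleared, so
// containers that never use in-place resize keep the same hash whether or not
// the InPlacePodVerticalScaling feature gate is on.
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"

	v1 "k8s.io/api/core/v1"
)

// hashContainerWithoutResources hashes a deep copy of the container whose
// Resources field has been zeroed out.
func hashContainerWithoutResources(c *v1.Container) uint64 {
	cc := c.DeepCopy()
	cc.Resources = v1.ResourceRequirements{}

	data, err := json.Marshal(cc)
	if err != nil {
		panic(err)
	}
	h := fnv.New64a()
	h.Write(data)
	return h.Sum64()
}

func main() {
	c := &v1.Container{Name: "app", Image: "nginx:1.23"}
	fmt.Printf("container hash: %x\n", hashContainerWithoutResources(c))
}
```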
Created the feature TODOs for Windows support
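
One hypothetical example of the kind of Windows-specific work those TODOs cover: unlike cgroup-based Linux limits, Windows expresses a CPU cap as a fraction of total node CPU in units of 1/10000, so a Kubernetes millicore limit has to be translated before the runtime can apply it. The helper and clamping below are illustrative, not the kubelet's actual Windows code.

```go
// Hypothetical sketch: mapping a Kubernetes CPU limit (millicores) to the
// Windows "CPU maximum" knob (parts per 10000 of total node CPU).
package main

import (
	"fmt"
	"runtime"
)

// cpuMaximum converts a CPU limit in millicores into Windows' per-10000 scale,
// relative to the number of logical processors on the node.
func cpuMaximum(milliCPU int64, cpuCount int) int64 {
	if milliCPU <= 0 || cpuCount <= 0 {
		return 10000 // no limit: allow full use of all CPUs
	}
	max := (milliCPU * 10000) / (int64(cpuCount) * 1000)
	if max < 1 {
		max = 1
	}
	if max > 10000 {
		max = 10000
	}
	return max
}

func main() {
	// e.g. a 500m limit on a 4-CPU node maps to 1250 (12.5% of total CPU).
	fmt.Println(cpuMaximum(500, runtime.NumCPU()))
}
```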
@vinaykul (Owner)

Let's pursue this directly with the main branch. I believe #112599 will supersede this PR.

@vinaykul closed this Oct 11, 2022
