Resolved compatibility issue between Kubelet PLEG and inplace VPA #123941
Conversation
/sig node
    if utilfeature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling) && isPodResizeInProgress(pod, &apiPodStatus) {
        // While resize is in progress, periodically call PLEG to update pod cache
        runningPod := kubecontainer.ConvertPodStatusToRunningPod(kl.getRuntime().Type(), podStatus)
        if err, _ := kl.pleg.UpdateCache(&runningPod, pod.UID); err != nil {
The only purpose of this logic was to update the cache. However, UpdateCache internally invokes runtime.GetPodStatus(), which retrieves the latest CRI status; the cache then stores that latest state, which can no longer be used for state comparison in a future Relist() loop.
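To make the concern concrete, here is a minimal, self-contained sketch (toy names only, not kubelet code) of why a cache that is refreshed to the latest runtime state outside of relist defeats the old-vs-new comparison on the next relist:

```go
package main

import "fmt"

// Toy stand-in for the PLEG cache: it should hold the state observed at the
// *previous* relist so the next relist can diff against it.
type podCache map[string]string // podUID -> last observed container state

// relist compares the cached (old) state with the freshly observed (new) state
// and emits an event only when they differ, then records the new state.
func relist(cache podCache, uid, observed string) (event string) {
	if cache[uid] != observed {
		event = fmt.Sprintf("state changed: %q -> %q", cache[uid], observed)
	}
	cache[uid] = observed
	return event
}

func main() {
	cache := podCache{"pod-1": "running(cpu=1)"}

	// Out-of-band refresh (what the UpdateCache call above effectively does):
	// the cache is overwritten with the *latest* runtime state before relist runs...
	cache["pod-1"] = "running(cpu=2)"

	// ...so the next relist sees no difference and the resize is never surfaced.
	fmt.Printf("event: %q\n", relist(cache, "pod-1", "running(cpu=2)")) // event: ""
}
```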
/cc @smarterclayton @bobbypage @liggitt @kubernetes/sig-node-pr-reviews
@Jeffwan: GitHub didn't allow me to request PR reviews from the following users: kubernetes/sig-node-pr-reviews. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
Force-pushed 8c90373 to e58fbfe (compare)
    oldPod := g.podRecords.getOld(pid)
    pod := g.podRecords.getCurrent(pid)

    var cachePodStatus *kubecontainer.PodStatus
We would also need to do this calculation in the evented PLEG, so make sure we don't miss it there.
That's a good point. I will cover it in an upcoming commit.
/assign @tallclair
/triage accepted
@Jeffwan Please fix the CI test failures, thanks. Since this is a bugfix, it would be great to get this use case covered by e2e tests.
pkg/kubelet/pleg/generic.go (outdated)
    newContainerStatus := podStatus.FindContainerStatusByContainerID(cid)
    if oldContainerStatus != nil && newContainerStatus != nil && !containerResourceSame(oldContainerStatus.Resources, newContainerStatus.Resources) {
        klog.V(5).InfoS("resize pods triggers the plegContainerUnknown event", "oldContainerStatus", oldContainerStatus, "newContainerStatus", newContainerStatus)
        return generateEvents(pid, cid.ID, oldState, plegContainerUnknown)
It might be better to consider the edge case where a resized container has exited at the same time (newState == exited).
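A sketch of one way to handle that edge case, assuming newState and plegContainerExited are in scope as in pkg/kubelet/pleg/generic.go; this illustrates the reviewer's suggestion and is not code from the PR:

```go
// If the resized container has also exited, report the ordinary state transition
// (which yields ContainerDied) rather than forcing plegContainerUnknown.
if newState == plegContainerExited {
	return generateEvents(pid, cid.ID, oldState, newState)
}
if oldContainerStatus != nil && newContainerStatus != nil &&
	!containerResourceSame(oldContainerStatus.Resources, newContainerStatus.Resources) {
	return generateEvents(pid, cid.ID, oldState, plegContainerUnknown)
}
```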
    for i := range events {
        // Filter out events that are not reliable and no other components use yet.
        if events[i].Type == ContainerChanged {
            continue
This prevents a ContainerChanged event from being sent when a container status is created, which is sometimes detected and converted to unknown at L99. Even if this event is sent, it doesn't seem to cause a problem at a glance. However, it would be better not to send an event when a container status is created, to avoid unexpected side effects, either by using another event (PodSync or a new one) for resizing or by checking the container status.
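As a purely illustrative example of the "another event" option, a dedicated event type could keep resize notifications separate from ContainerChanged; ContainerResized below is hypothetical and does not exist in the pleg package:

```go
// Hypothetical: a dedicated event type for in-place resizes, so filtering out
// ContainerChanged (as in the loop above) would not also drop resize signals.
const ContainerResized ContainerEventType = "ContainerResized"
```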
        return
    }
    if pod != nil {
        podStatus, err = g.runtime.GetPodStatus(ctx, pod.ID, pod.Name, pod.Namespace)
Do we have to call it for all pods? Wouldn't it be enough to get a pod status from the runtime only when the pod is being resized (InProgress)?
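A rough sketch of that suggestion; isPodResizeInProgress here is a hypothetical predicate over the cached status, not an existing helper in this file:

```go
// Only pay for the extra runtime query when a resize is actually in flight;
// all other pods keep the original, cheaper relist path.
if pod != nil && isPodResizeInProgress(cachePodStatus) { // hypothetical predicate
	podStatus, err = g.runtime.GetPodStatus(ctx, pod.ID, pod.Name, pod.Namespace)
}
```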
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@horacexd and I are still working on this, so removing the stale label.
Force-pushed 1f53437 to 1700047 (compare)
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Jeffwan. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Make the in-place VPA feature work with PLEG relist. Use the auxiliary runtime pod status and the PLEG cache pod status to distinguish a resized pod, and make sure it generates the correct PLEG event and reaches the event channel.
Co-authored-by: Lingyan Yin <yin.387@osu.edu>
Co-authored-by: Zewei Ding <horace.d@outlook.com>
Co-authored-by: Shengjie Xue <3150104939@zju.edu.cn>
Change-Id: I7a715b8525832f0c39ae0fa25dc42cbb3b9043f9
Force-pushed 1700047 to 74310c3 (compare)
@Jeffwan: The following tests failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
        return
    }
    if pod != nil {
        podStatus, err = g.runtime.GetPodStatus(ctx, pod.ID, pod.Name, pod.Namespace)
If I'm reading this correctly, this is going to call GetPodStatus() on every pod, on every relist loop? Won't that make the relist too expensive? I assume the logic that originally made this conditional was intentional, to avoid exactly that.
I think this needs a lot of scale and performance testing before we can proceed with this change.
PR needs rebase.
I'm proposing an alternative approach to this in #128518.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Make the in-place VPA feature work with PLEG relist. Use the auxiliary runtime pod status and the PLEG cache pod status to distinguish a resized pod, and make sure it generates the correct PLEG event and reaches the event channel.
What type of PR is this?
/kind bug
What this PR does / why we need it:
PLEG doesn't handle resized pods well. See #123940 for more details.
This is part of kubernetes/enhancements#4433
Which issue(s) this PR fixes:
Fixes
Special notes for your reviewer:
I have not added tests yet. I did some manual e2e tests. If the idea looks good to you, I will spend some time improving the test coverage.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: