[FG:InPlacePodVerticalScaling] reduce container resources in tests #128719
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed 1631e77 to 71d999a
Thanks for this change, it makes sense to do regardless.
Can we define constants with meaningful names to use for all these values instead? For example:
const (
	originalCPU       = 20
	originalCPULimit  = 30
	reducedCPU        = 10
	reducedCPULimit   = 20
	increasedCPU      = 30
	increasedCPULimit = 40
	// etc.
)

// offsetContainerCPUResources staggers values for multi-container test cases.
func offsetContainerCPUResources(index, value int64) int64 {
	return value + 2*index
}
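For illustration, a minimal sketch of how such constants could be turned into the millicpu strings the test cases use (formatCPU and the main wrapper are assumptions for this sketch, not existing framework helpers):

package main

import "fmt"

// formatCPU renders a millicpu value in the "<n>m" string form used in resource specs.
func formatCPU(milli int64) string {
	return fmt.Sprintf("%dm", milli)
}

// offsetContainerCPUResources staggers per-container values in multi-container test cases.
func offsetContainerCPUResources(index, value int64) int64 {
	return value + 2*index
}

func main() {
	const originalCPU = 20 // from the constants above
	// Second container (index 1) of a multi-container case: prints "22m".
	fmt.Println(formatCPU(offsetContainerCPUResources(1, originalCPU)))
}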
test/e2e/common/node/pod_resize.go (outdated)
{"name":"c1", "resources":{"requests":{"cpu":"14m","memory":"50Mi"},"limits":{"cpu":"14m","memory":"50Mi"}}},
{"name":"c2", "resources":{"requests":{"cpu":"15m","memory":"24Mi"},"limits":{"cpu":"15m","memory":"24Mi"}}},
Are there any issues with requesting increments of 1m CPU? I know there might be rounding issues in determining shares.
Actually, now that I think about it, we might need to investigate the way we determine if a resize is pending to account for rounding issues...
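For context, a rough sketch of where the rounding shows up in the millicpu-to-cgroup-v1-cpu.shares conversion (constants and the minimum are paraphrased from memory of the kubelet helpers, so treat the exact values as assumptions):

package main

import "fmt"

const (
	sharesPerCPU  = 1024 // cgroup v1 cpu.shares per full CPU
	milliCPUToCPU = 1000 // millicpu per full CPU
	minShares     = 2    // kernel-enforced minimum
)

// milliCPUToShares mirrors the kubelet-style conversion; integer division floors the result.
func milliCPUToShares(milliCPU int64) int64 {
	shares := (milliCPU * sharesPerCPU) / milliCPUToCPU
	if shares < minShares {
		return minShares
	}
	return shares
}

// sharesToMilliCPU is the reverse mapping, which also floors.
func sharesToMilliCPU(shares int64) int64 {
	return (shares * milliCPUToCPU) / sharesPerCPU
}

func main() {
	// 14m -> 14 shares -> 13m: the round trip can lose a millicpu, which is the kind
	// of rounding that could trip up an "is the resize still pending?" comparison.
	fmt.Println(milliCPUToShares(14), sharesToMilliCPU(milliCPUToShares(14)))
}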
I ran some tests around this: creating & resizing a pod with 20 containers requesting values of 1m - 20m, and didn't hit any rounding issues (though I did discover #128769)
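Roughly the kind of spot check described, as a hedged sketch (not the actual e2e test code; the image and container names are placeholders): build 20 containers requesting 1m through 20m.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	var containers []corev1.Container
	for i := 1; i <= 20; i++ {
		cpu := resource.MustParse(fmt.Sprintf("%dm", i))
		containers = append(containers, corev1.Container{
			Name:  fmt.Sprintf("c%d", i),
			Image: "registry.k8s.io/pause:3.9", // placeholder image
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
				Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
			},
		})
	}
	fmt.Printf("built %d containers with requests 1m..20m\n", len(containers))
}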
Force-pushed a26e7a3 to 56eb32c
Force-pushed 91caca3 to f98c25d
/assign @tallclair
},
{
	name: "Guaranteed QoS pod, one container - increase CPU & memory with an extended resource",
	containers: []e2epod.ResizableContainerInfo{
Can we parallelize this test case that modifies nodes for an extended resource? Are there any side effects to other tests?
Hmm, good catch. So far I don't see any side effects because we have just one test for an extended resource. I expect this to cause an issue in the future, though, if we add multiple similar tests.
Force-pushed f98c25d to 3d78f64
/test pull-kubernetes-node-kubelet-serial-podresize
Force-pushed 3d78f64 to 252bb8a
/test pull-kubernetes-node-kubelet-serial-podresize
Force-pushed 252bb8a to aa31fa3
/test pull-kubernetes-node-kubelet-serial-podresize
Reducing the container resources used in in-place pod resize tests will help us run tests in parallel.
Force-pushed aa31fa3 to 1b15876
/test pull-kubernetes-node-kubelet-serial-podresize
/lgtm
/approve
Thank you!
LGTM label has been added. Git tree hash: 573fad6699243992f1f5a37b8db199c2e1915ef5
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: AnishShah, tallclair
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Reducing the container resources used in in-place pod resize tests will help us run many subtests in parallel on a single node.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: