Fix queue not counting volcano.sh/vgpu-memory and volcano.sh/vgpu-number properly #4520
Conversation
Summary of Changes
Hello @archlitchi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've addressed a bug where the Volcano scheduler's queue was not accurately counting volcano.sh/vgpu-memory and volcano.sh/vgpu-number resources. This fix ensures that vGPU resources are properly accounted for when updating queue allocated resources, resolving the related issue #4168.
Highlights
- Enhanced Resource Accounting: I've introduced an AddQueueResource method to the Devices interface and implemented it for vGPU devices. This method calculates the actual vGPU memory and core usage from pod annotations, ensuring these resources are correctly added to the queue's allocated resources.
- Dynamic Resource Name Resolution: I've refactored the vGPU utility functions to use a new getConfig() helper. This change replaces hardcoded vGPU resource names with dynamically retrieved ones, improving flexibility and maintainability.
- Corrected Queue Status Updates: The updateQueueStatus function in the scheduler framework now leverages the new AddQueueResource method. This ensures that when a pod's resources are added to the queue's allocated pool, vGPU memory and number are accurately reflected.
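The interface extension described above can be sketched in isolation. This is a hedged illustration, not the actual Volcano code: the real `AddQueueResource` takes a `*v1.Pod`, while here a plain annotation map and the field names `ResourceMemoryName`/`ResourceCoreName` are assumptions made to keep the sketch self-contained.

```go
package main

import (
	"fmt"
	"strconv"
)

// Devices mirrors the scheduler's device-share interface; AddQueueResource is
// the method this PR adds. Signatures are simplified for the sketch.
type Devices interface {
	HasDeviceRequest(annotations map[string]string) bool
	AddQueueResource(annotations map[string]string) map[string]float64
}

// GPUDevices is an illustrative implementation that reads the vGPU usage
// recorded in pod annotations and reports it as extra queue resources.
type GPUDevices struct {
	ResourceMemoryName string // e.g. volcano.sh/vgpu-memory (assumed)
	ResourceCoreName   string // e.g. volcano.sh/vgpu-cores (assumed)
}

// Compile-time check that GPUDevices satisfies the interface.
var _ Devices = (*GPUDevices)(nil)

func (gs *GPUDevices) HasDeviceRequest(ann map[string]string) bool {
	_, ok := ann[gs.ResourceMemoryName]
	return ok
}

func (gs *GPUDevices) AddQueueResource(ann map[string]string) map[string]float64 {
	res := map[string]float64{}
	if gs == nil {
		return res // nil receiver: nothing to account
	}
	// Parse usage the device plugin recorded in annotations; keys that are
	// absent or malformed are simply skipped.
	if mem, err := strconv.ParseFloat(ann[gs.ResourceMemoryName], 64); err == nil {
		res[gs.ResourceMemoryName] += mem
	}
	if core, err := strconv.ParseFloat(ann[gs.ResourceCoreName], 64); err == nil {
		res[gs.ResourceCoreName] += core
	}
	return res
}

func main() {
	gs := &GPUDevices{ResourceMemoryName: "volcano.sh/vgpu-memory", ResourceCoreName: "volcano.sh/vgpu-cores"}
	ann := map[string]string{"volcano.sh/vgpu-memory": "2048"}
	fmt.Println(gs.AddQueueResource(ann))
}
```

The queue update path would then merge the returned map into the queue's allocated resources for every pod that has a device request.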
Code Review
This pull request fixes an issue with queue resource accounting for vGPU memory and number. It introduces a new interface method AddQueueResource to calculate the actual resources consumed by a pod with vGPU, and updates the queue status accordingly. The changes look mostly correct, but I've identified a potential issue with resource unit conversion that could lead to incorrect accounting, a code duplication issue, and a minor performance improvement suggestion.
/kind bug

/priority high
pkg/scheduler/framework/session.go (outdated)

```go
for status, tasks := range job.TaskStatusIndex {
	if api.AllocatedStatus(status) {
		for _, task := range tasks {
			node, ok1 := ssn.Nodes[task.NodeName]
```
The variable should not be named `ok1`; rename `ok1` to `ok`.
pkg/scheduler/framework/session.go (outdated)

```go
taskReq := task.Resreq
if ok1 {
	for _, sharedDevices := range node.Others {
		if sharedDevices.(api.Devices).HasDeviceRequest(task.Pod) {
```
I'm wondering why the Others field in NodeInfo is a map[string]interface{} rather than map[string]Devices. Will Others hold other interface types in the future? We cannot control other contributors, who may put other interface implementations into Others; if those implementations do not have a HasDeviceRequest method, there will be problems here.
Haha, the nodeInfo.Others field existed before I implemented the deviceshare API. It was william-wang's idea to put the device-related logic here, so we don't need to change the node_info struct.
If we don't modify NodeInfo, there should be a type-assertion check here.
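The check the reviewer asks for is Go's comma-ok type assertion: instead of `v.(api.Devices)`, which panics on a foreign value, the two-result form reports failure safely. A minimal sketch with stand-in types (the `Devices` interface and `fakeVGPU` here are illustrative, not the real Volcano types):

```go
package main

import "fmt"

// Devices stands in for the scheduler's device-share interface.
type Devices interface {
	HasDeviceRequest(podName string) bool
}

type fakeVGPU struct{}

func (fakeVGPU) HasDeviceRequest(string) bool { return true }

// safeDevices returns only the entries of others that actually implement
// Devices, using the comma-ok assertion so a foreign value cannot panic.
func safeDevices(others map[string]interface{}) []Devices {
	var devs []Devices
	for _, v := range others {
		if d, ok := v.(Devices); ok { // comma-ok: no panic on mismatch
			devs = append(devs, d)
		}
	}
	return devs
}

func main() {
	others := map[string]interface{}{
		"vgpu":  fakeVGPU{},
		"bogus": 42, // not a Devices implementation
	}
	devs := safeDevices(others)
	fmt.Println(len(devs)) // only the vGPU entry survives
}
```

With this pattern, a non-Devices value accidentally stored in Others is skipped instead of crashing the scheduler.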
```go
for _, deviceused := range val {
	for _, gsdevice := range gs.Device {
		if strings.Contains(deviceused.UUID, gsdevice.UUID) {
			res[getConfig().ResourceMemoryName] += float64(deviceused.Usedmem * 1000)
```
Sorry, I'm not so familiar with the previous code. Where is VolcanoVGPUNumber calculated? Can it be correctly recorded in the queue's allocated field?
volcanoVGPUNumber doesn't need to be calculated here; it is already counted correctly by the default behavior.
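So only memory (and cores) need the special accounting loop quoted above. Isolated as a sketch with stand-in types, the logic looks like the following; note that the `* 1000` scaling, which the review flags as a possible unit-conversion issue, is reproduced here as-is and should be treated as an assumption about the units involved:

```go
package main

import (
	"fmt"
	"strings"
)

// ContainerDevice records one device's usage parsed from pod annotations.
type ContainerDevice struct {
	UUID    string
	Usedmem int64 // unit assumed; the diff scales it by 1000 for the queue
}

// GPUDevice is one physical GPU tracked on the node.
type GPUDevice struct{ UUID string }

// addUsedMemory mirrors the loop in the diff: for every device the pod uses
// that belongs to this node, accumulate its memory into the result map.
func addUsedMemory(res map[string]float64, used []ContainerDevice, nodeDevs []GPUDevice, memName string) {
	for _, du := range used {
		for _, gd := range nodeDevs {
			if strings.Contains(du.UUID, gd.UUID) {
				res[memName] += float64(du.Usedmem * 1000)
			}
		}
	}
}

func main() {
	res := map[string]float64{}
	addUsedMemory(res,
		[]ContainerDevice{{UUID: "GPU-abc", Usedmem: 2}},
		[]GPUDevice{{UUID: "GPU-abc"}},
		"volcano.sh/vgpu-memory")
	fmt.Println(res["volcano.sh/vgpu-memory"])
}
```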
```go
func (gs *GPUDevices) AddQueueResource(pod *v1.Pod) map[string]float64 {
	klog.Infoln("AddQueueResource:Name=", pod.Name)
	res := map[string]float64{}
	if gs == nil {
```
Better to move this nil check to the top of the function.
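That is, hoist the nil guard above the logging so a nil receiver does no work at all. A minimal sketch (the stand-in `GPUDevices` type and the literal return value are illustrative only):

```go
package main

import "fmt"

// GPUDevices is a simplified stand-in for the scheduler's vGPU device state.
type GPUDevices struct{}

// AddQueueResource with the nil check hoisted to the top, as suggested:
// a nil receiver bails out before logging or doing any other work.
func (gs *GPUDevices) AddQueueResource(podName string) map[string]float64 {
	if gs == nil {
		return map[string]float64{}
	}
	fmt.Println("AddQueueResource: Name=", podName)
	return map[string]float64{"volcano.sh/vgpu-memory": 0}
}

func main() {
	var gs *GPUDevices // calling a method on a nil pointer receiver is legal in Go
	fmt.Println(len(gs.AddQueueResource("pod-a"))) // the guard fires first
}
```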
```go
}

func (gs *GPUDevices) AddQueueResource(pod *v1.Pod) map[string]float64 {
	klog.Infoln("AddQueueResource:Name=", pod.Name)
```
Should this log line be at klog level 3, i.e. klog.V(3)?
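The suggestion is to demote this per-pod line to leveled logging so it is silenced at the default verbosity (in klog that would be `klog.V(3).Infoln(...)`). Since klog is an external dependency, here is the same gating pattern sketched with only the standard library; `V` and the `verbosity` variable are stand-ins for klog's `-v` machinery:

```go
package main

import (
	"fmt"
	"os"
)

var verbosity = 0 // in klog this comes from the -v command-line flag

// V mimics klog.V: it reports whether the requested level is enabled, so
// debug chatter can stay in the code but be silent by default.
func V(level int) bool { return verbosity >= level }

func logAddQueueResource(podName string) {
	if V(3) { // the reviewer's suggestion: emit only at verbosity >= 3
		fmt.Fprintf(os.Stderr, "AddQueueResource: Name=%s\n", podName)
	}
}

func main() {
	logAddQueueResource("pod-a") // silent at the default verbosity
	verbosity = 3
	logAddQueueResource("pod-a") // now emitted
}
```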
Currently no, because you can't know exactly how much device memory a task uses before scheduling. For example, if a task allocates volcano.sh/vgpu-number: 1, it allocates a whole card, but different cards have different amounts of device memory, which can't be determined before scheduling.
Yes, but that happens during scheduling, so the task is still in the wrong state, because such a task shouldn't be enqueued.
But enqueueable is also a callback, and a plugin can also implement it.
@Monokaix The problem is not that we can't implement it; it's that we don't know the exact device memory a certain task uses before scheduling.
pkg/scheduler/framework/session.go (outdated)

```go
node, ok := ssn.Nodes[task.NodeName]
taskReq := task.Resreq
if ok {
	for _, sharedDevices := range node.Others {
```
sharedDevices is nil when the node has shared GPU resources; will it panic here?
In the current version, either node.Others is empty and this loop is not entered, or sharedDevices is not nil because we have initialized it, so a panic won't happen.
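Two language guarantees back this answer up: ranging over a nil or empty map yields zero iterations, and a comma-ok assertion on a nil entry fails cleanly instead of panicking. A small self-contained demonstration:

```go
package main

import "fmt"

func main() {
	var others map[string]interface{} // nil: e.g. a node with no shared devices
	n := 0
	for range others { // ranging over a nil map is legal and yields nothing
		n++
	}
	fmt.Println(n) // the loop body never runs, so no panic is possible

	// A panic could only come from a nil value *inside* the map being
	// asserted with the single-result form; the comma-ok form rejects it.
	others = map[string]interface{}{"vgpu": nil}
	if _, ok := others["vgpu"].(interface{ HasDeviceRequest() bool }); !ok {
		fmt.Println("nil entry rejected by comma-ok assertion")
	}
}
```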
pkg/scheduler/framework/session.go (outdated)

```go
for status, tasks := range job.TaskStatusIndex {
	if api.AllocatedStatus(status) {
		for _, task := range tasks {
			node, ok := ssn.Nodes[task.NodeName]
```
We can wrap this as a func.
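The extraction the reviewer suggests could look roughly like this; all types here (`Task`, `Devices`, `node`, `vgpu`) are simplified stand-ins for the real scheduler API, used only to show the shape of the helper:

```go
package main

import "fmt"

// Task is a stand-in for the scheduler's TaskInfo.
type Task struct {
	NodeName string
	Pod      string
}

// Devices stands in for the device-share interface this PR extends.
type Devices interface {
	HasDeviceRequest(pod string) bool
	AddQueueResource(pod string) map[string]float64
}

type node struct{ Others map[string]interface{} }

// addTaskDeviceResources folds one task's device usage into req; pulling the
// nested loops out of updateQueueStatus keeps that function readable.
func addTaskDeviceResources(req map[string]float64, t Task, nodes map[string]node) {
	n, ok := nodes[t.NodeName]
	if !ok {
		return // task not bound to a known node: nothing to add
	}
	for _, v := range n.Others {
		dev, ok := v.(Devices)
		if !ok || !dev.HasDeviceRequest(t.Pod) {
			continue
		}
		for name, val := range dev.AddQueueResource(t.Pod) {
			req[name] += val
		}
	}
}

// vgpu is a fake Devices implementation for the demonstration.
type vgpu struct{}

func (vgpu) HasDeviceRequest(string) bool { return true }
func (vgpu) AddQueueResource(string) map[string]float64 {
	return map[string]float64{"volcano.sh/vgpu-memory": 2000}
}

func main() {
	nodes := map[string]node{"n1": {Others: map[string]interface{}{"vgpu": vgpu{}}}}
	req := map[string]float64{}
	addTaskDeviceResources(req, Task{NodeName: "n1", Pod: "p"}, nodes)
	fmt.Println(req["volcano.sh/vgpu-memory"])
}
```

The caller would then invoke the helper once per allocated task inside the status-index loops.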
Signed-off-by: limengxuan <mengxuan.li@dynamia.ai>
/approve

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Monokaix. The full list of commands accepted by this bot can be found here. The pull request process is described here.
JesseStutler left a comment:
/lgtm
Thanks
/BUG
related issue: #4168