
Conversation

@archlitchi
Contributor

/BUG

related issue: #4168

@volcano-sh-bot volcano-sh-bot requested review from hwdef and k82cn August 8, 2025 03:38

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @archlitchi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've addressed a bug where the Volcano scheduler's queue was not accurately counting volcano.sh/vgpu-memory and volcano.sh/vgpu-number resources. This fix ensures that vGPU resources are properly accounted for when updating queue allocated resources, resolving the related issue #4168.

Highlights

  • Enhanced Resource Accounting: I've introduced an AddQueueResource method to the Devices interface and implemented it for vGPU devices. This method calculates the actual vGPU memory and core usage from pod annotations, ensuring these resources are correctly added to the queue's allocated resources.
  • Dynamic Resource Name Resolution: I've refactored the vGPU utility functions to use a new getConfig() helper. This change replaces hardcoded vGPU resource names with dynamically retrieved ones, improving flexibility and maintainability.
  • Corrected Queue Status Updates: The updateQueueStatus function in the scheduler framework now leverages the new AddQueueResource method. This ensures that when a pod's resources are added to the queue's allocated pool, vGPU memory and number are accurately reflected.
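The shape of the change described above can be sketched as follows. This is a simplified illustration, not the actual Volcano code: the `Pod`/`Devices` types are trimmed stand-ins for `v1.Pod` and `api.Devices`, and the annotation key is a placeholder for however the real deviceshare plugin records per-pod vGPU memory usage.

```go
package main

import "fmt"

// Pod is a trimmed stand-in for v1.Pod, for illustration only.
type Pod struct {
	Name        string
	Annotations map[string]string
}

// Devices is a trimmed stand-in for the scheduler's api.Devices
// interface, extended with the AddQueueResource method described above.
type Devices interface {
	HasDeviceRequest(pod *Pod) bool
	// AddQueueResource returns extra resource usage (e.g. vGPU memory)
	// to fold into the queue's allocated resources.
	AddQueueResource(pod *Pod) map[string]float64
}

// GPUDevices is a toy implementation: it reads device-memory usage from
// a hypothetical pod annotation and reports it under a vGPU resource name.
type GPUDevices struct{}

func (gs *GPUDevices) HasDeviceRequest(pod *Pod) bool {
	_, ok := pod.Annotations["volcano.sh/vgpu-memory"]
	return ok
}

func (gs *GPUDevices) AddQueueResource(pod *Pod) map[string]float64 {
	res := map[string]float64{}
	if gs == nil {
		return res
	}
	if mem, ok := pod.Annotations["volcano.sh/vgpu-memory"]; ok {
		var usedMem float64
		fmt.Sscanf(mem, "%f", &usedMem)
		res["volcano.sh/vgpu-memory"] = usedMem
	}
	return res
}

func main() {
	pod := &Pod{
		Name:        "demo",
		Annotations: map[string]string{"volcano.sh/vgpu-memory": "1024"},
	}
	var d Devices = &GPUDevices{}
	fmt.Println(d.AddQueueResource(pod))
}
```

The queue-status update path can then iterate over each node's devices and merge whatever each `AddQueueResource` returns into the queue's allocated resources, without the framework hard-coding vGPU resource names.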

@volcano-sh-bot volcano-sh-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Aug 8, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes an issue with queue resource accounting for vGPU memory and number. It introduces a new interface method AddQueueResource to calculate the actual resources consumed by a pod with vGPU, and updates the queue status accordingly. The changes look mostly correct, but I've identified a potential issue with resource unit conversion that could lead to incorrect accounting, a code duplication issue, and a minor performance improvement suggestion.

@archlitchi
Contributor Author

CC @JesseStutler

@JesseStutler
Member

/kind bug

@volcano-sh-bot volcano-sh-bot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 8, 2025
@JesseStutler
Member

/priority high
/cc

@archlitchi
Contributor Author

The result can be seen as follows:

[screenshot attachment]

for status, tasks := range job.TaskStatusIndex {
    if api.AllocatedStatus(status) {
        for _, task := range tasks {
            node, ok1 := ssn.Nodes[task.NodeName]
Member


The variable should not be named ok1: ok1 --> ok

            taskReq := task.Resreq
            if ok1 {
                for _, sharedDevices := range node.Others {
                    if sharedDevices.(api.Devices).HasDeviceRequest(task.Pod) {
Member


I'm wondering why the Others field in NodeInfo is a map[string]interface{} rather than map[string]Devices. Will Others hold other interface types in the future? We can't control other contributors who may put other interface implementations into Others; if those implementations don't have a HasDeviceRequest method, there will be problems here.

Contributor Author


Haha, the nodeInfo.Others field existed before I implemented the deviceshare API. It was william-wang's idea to put the device-related logic here, so we don't need to change the node_info struct.

Member


If we don't modify NodeInfo, there should be a type assertion check here.
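The suggested guard could look something like the sketch below. The types here (`Pod`, `Devices`, `fakeDevices`) are illustrative stand-ins for `v1.Pod`, `api.Devices`, and a real device implementation; the point is the comma-ok assertion instead of a bare `val.(api.Devices)` conversion, which would panic on a non-Devices value.

```go
package main

import "fmt"

// Pod is an illustrative stand-in for v1.Pod.
type Pod struct{ Name string }

// Devices is an illustrative stand-in for the scheduler's api.Devices.
type Devices interface {
	HasDeviceRequest(pod *Pod) bool
}

type fakeDevices struct{}

func (fakeDevices) HasDeviceRequest(*Pod) bool { return true }

func main() {
	// node.Others is typed map[string]interface{}, so nothing stops a
	// non-Devices value from being stored in it.
	others := map[string]interface{}{
		"gpushare": fakeDevices{},
		"oops":     42, // not a Devices implementation
	}
	pod := &Pod{Name: "demo"}
	for name, val := range others {
		// Comma-ok assertion: skip entries that don't implement Devices
		// instead of panicking on a bare val.(Devices) conversion.
		dev, ok := val.(Devices)
		if !ok {
			fmt.Printf("skipping %s: not a Devices implementation\n", name)
			continue
		}
		fmt.Println(name, dev.HasDeviceRequest(pod))
	}
}
```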

for _, deviceused := range val {
    for _, gsdevice := range gs.Device {
        if strings.Contains(deviceused.UUID, gsdevice.UUID) {
            res[getConfig().ResourceMemoryName] += float64(deviceused.Usedmem * 1000)
Member


Sorry, I'm not so familiar with the previous code: where is VolcanoVGPUNumber calculated? Can it be correctly recorded in the queue's allocated field?

Contributor Author


volcanoVGPUNumber doesn't need to be calculated here; it is already counted correctly by the default behavior.

func (gs *GPUDevices) AddQueueResource(pod *v1.Pod) map[string]float64 {
    klog.Infoln("AddQueueResource:Name=", pod.Name)
    res := map[string]float64{}
    if gs == nil {
Member


Better to move this nil check to the top of the function.

}

func (gs *GPUDevices) AddQueueResource(pod *v1.Pod) map[string]float64 {
klog.Infoln("AddQueueResource:Name=", pod.Name)
Member


Should this log at klog level 3 (klog.V(3)) instead?

@Monokaix
Member

Can Allocatable and Enqueueable be aware of vGPU resources? If queue.status has vGPU resources, we should also check the vGPU resource quota when allocating.

@archlitchi
Contributor Author

Can Allocatable and Enqueueable be aware of vGPU resources? If queue.status has vGPU resources, we should also check the vGPU resource quota when allocating.

Currently no, because you can't know exactly how much device memory a task will use before scheduling. For example, if a task requests volcano.sh/vgpu-number: 1, it allocates a whole card, but different cards have different amounts of device memory, which can't be determined before scheduling.

@Monokaix
Member

Currently no, because you can't know exactly how much device memory a task will use before scheduling. For example, if a task requests volcano.sh/vgpu-number: 1, it allocates a whole card, but different cards have different amounts of device memory, which can't be determined before scheduling.

The deviceshare plugin can be aware of that, right? A device-memory quota check in the queue is just like a task's memory-request check in the resource filter.

@archlitchi
Contributor Author

The deviceshare plugin can be aware of that, right? A device-memory quota check in the queue is just like a task's memory-request check in the resource filter.

Yes, but that happens during scheduling, so the task would still be in the wrong state; such a task shouldn't have been enqueued in the first place.

@Monokaix
Member

Yes, but that happens during scheduling, so the task would still be in the wrong state; such a task shouldn't have been enqueued in the first place.

But Enqueueable is also a callback, and a plugin can implement it as well.

@archlitchi
Contributor Author

@Monokaix The problem is not that we can't implement it; it's that we don't know the exact device memory a given task will use before scheduling.

node, ok := ssn.Nodes[task.NodeName]
taskReq := task.Resreq
if ok {
    for _, sharedDevices := range node.Others {
Member


sharedDevices is nil when the node has shared GPU resources; will it panic here?

Contributor Author


In the current version, either node.Others is empty and this loop is never entered, or sharedDevices is non-nil because we have initialized it, so a panic won't happen.

for status, tasks := range job.TaskStatusIndex {
    if api.AllocatedStatus(status) {
        for _, task := range tasks {
            node, ok := ssn.Nodes[task.NodeName]
Member


We can wrap this as a func.
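The suggested wrapping could look like the sketch below. The types (`Task`, `Job`) and the helper name `forEachAllocatedTask` are hypothetical stand-ins for the scheduler's `TaskInfo`/`JobInfo` and whatever name the real refactor would choose; the point is factoring the status/tasks double loop out so callers only supply the per-task body.

```go
package main

import "fmt"

// Task and Job are simplified stand-ins for the scheduler's
// TaskInfo and JobInfo types, for illustration only.
type Task struct {
	Name     string
	NodeName string
}

type Job struct {
	// TaskStatusIndex maps a status to the tasks currently in it.
	TaskStatusIndex map[string][]*Task
}

// allocatedStatus mimics api.AllocatedStatus for this sketch.
func allocatedStatus(status string) bool {
	return status == "Allocated" || status == "Running"
}

// forEachAllocatedTask wraps the repeated status/tasks double loop,
// as suggested in the review, so callers pass only the per-task body.
func forEachAllocatedTask(job *Job, fn func(task *Task)) {
	for status, tasks := range job.TaskStatusIndex {
		if allocatedStatus(status) {
			for _, task := range tasks {
				fn(task)
			}
		}
	}
}

func main() {
	job := &Job{TaskStatusIndex: map[string][]*Task{
		"Running": {{Name: "t1", NodeName: "n1"}},
		"Pending": {{Name: "t2"}},
	}}
	forEachAllocatedTask(job, func(t *Task) {
		fmt.Println("allocated task:", t.Name, "on", t.NodeName)
	})
}
```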

Signed-off-by: limengxuan <mengxuan.li@dynamia.ai>
@volcano-sh-bot volcano-sh-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 10, 2025
@Monokaix
Member

/approve

@volcano-sh-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Monokaix

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@volcano-sh-bot volcano-sh-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 11, 2025
Member

@JesseStutler JesseStutler left a comment


/lgtm
Thanks

