[bugfix] fix forward_context #5757
Conversation
Summary of Changes
Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a bug related to the forward_context mechanism by refactoring how input data, specifically position_ids, is prepared for GLM and Qwen models. By moving this data preparation into the _data_collator, the changes ensure that model inputs are correctly formatted before being passed to the model's forward pass, improving the robustness and correctness of the data pipeline. Additionally, it standardizes template configurations within the GRPO trainer.
Highlights
- Refactored Input Processing: The logic for handling position_ids and text_position_ids for GLM and Qwen templates has been moved from the forward_context method to the _data_collator method, centralizing data preparation.
- Streamlined forward_context: The custom forward_context implementations in GLM and Qwen templates have been simplified or removed, with the base class forward_context now primarily serving as a placeholder for model patching scenarios.
- Dependency on Transformers Version: The Qwen template now explicitly parses and checks the transformers library version to apply specific logic for position_ids handling during data collation.
- Trainer Configuration Update: The GRPOTrainer now explicitly disables padding_free and packing for its associated template, ensuring consistent data handling behavior during training.
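To make the refactor above concrete, here is a minimal, hypothetical sketch of the resulting pattern. The class and helper names (`BaseTemplate`, `QwenLikeTemplate`) and the simplified `_data_collator` / `_get_position_ids` bodies are illustrative stand-ins, not the project's actual definitions:

```python
from contextlib import nullcontext

# Hypothetical, simplified stand-ins for swift's template classes, only to
# illustrate where the position_ids preparation moved in this PR.
class BaseTemplate:
    padding_free = False
    is_training = True

    def _data_collator(self, batch, padding_to=None):
        # The real collator pads and stacks tensors; here we just merge dicts.
        return {k: [item[k] for item in batch] for k in batch[0]}

    def _get_position_ids(self, res):
        # Placeholder for the real helper that builds position ids.
        return list(range(len(res['input_ids'][0])))


class QwenLikeTemplate(BaseTemplate):

    def _data_collator(self, batch, padding_to=None):
        res = super()._data_collator(batch, padding_to=padding_to)
        # After this PR, position_ids are prepared here at collation time,
        # instead of inside a custom forward_context.
        if not self.padding_free and self.is_training:
            res['position_ids'] = self._get_position_ids(res)
        return res

    def forward_context(self, model, inputs):
        # ...so forward_context no longer mutates inputs; per the summary it
        # remains only as a placeholder hook for model-patching scenarios.
        return nullcontext()
```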
Code Review
This pull request refactors the forward_context logic by moving it into the _data_collator methods for GLMTemplate and Qwen2VLTemplate. This is a good change for consistency. However, this refactoring has introduced a critical bug in both swift/llm/template/template/glm.py and swift/llm/template/template/qwen.py. The code now attempts to access res['position_ids'] unconditionally, but this key is only created within a conditional block, which will lead to a KeyError in certain scenarios. My review includes suggestions to fix these critical issues.
swift/llm/template/template/glm.py

```python
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
position_ids = res['position_ids']
res['position_ids'] = position_ids[1:]
res['text_position_ids'] = text_position_ids = position_ids[0]
# https://github.com/huggingface/transformers/pull/40194
if text_position_ids.shape[0] == 1:
    res.update(get_packed_seq_params(text_position_ids))
return res
```
This block of code will raise a `KeyError` if `not self.padding_free and self.is_training` evaluates to False, because `res['position_ids']` will not be defined. The logic for processing `position_ids` should be moved inside the conditional block to ensure it only runs when `res['position_ids']` is available.
Original:

```python
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
position_ids = res['position_ids']
res['position_ids'] = position_ids[1:]
res['text_position_ids'] = text_position_ids = position_ids[0]
# https://github.com/huggingface/transformers/pull/40194
if text_position_ids.shape[0] == 1:
    res.update(get_packed_seq_params(text_position_ids))
return res
```

Suggested change:

```python
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
    position_ids = res['position_ids']
    res['position_ids'] = position_ids[1:]
    res['text_position_ids'] = text_position_ids = position_ids[0]
    # https://github.com/huggingface/transformers/pull/40194
    if text_position_ids.shape[0] == 1:
        res.update(get_packed_seq_params(text_position_ids))
return res
```
swift/llm/template/template/qwen.py (outdated)
```python
res = super()._data_collator(batch, padding_to=padding_to)
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
position_ids = res['position_ids']
res['position_ids'] = position_ids[1:]
res['text_position_ids'] = text_position_ids = position_ids[0]
if self.transformers_version >= version.parse('4.53.0.dev') and text_position_ids.shape[0] == 1:
    # https://github.com/huggingface/transformers/pull/40194
    res.update(get_packed_seq_params(text_position_ids))
```
This block of code will raise a `KeyError` if `not self.padding_free and self.is_training` evaluates to False, because `res['position_ids']` will not be defined. The logic for processing `position_ids` should be moved inside the conditional block to ensure it only runs when `res['position_ids']` is available.
Original:

```python
res = super()._data_collator(batch, padding_to=padding_to)
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
position_ids = res['position_ids']
res['position_ids'] = position_ids[1:]
res['text_position_ids'] = text_position_ids = position_ids[0]
if self.transformers_version >= version.parse('4.53.0.dev') and text_position_ids.shape[0] == 1:
    # https://github.com/huggingface/transformers/pull/40194
    res.update(get_packed_seq_params(text_position_ids))
```

Suggested change:

```python
if not self.padding_free and self.is_training:
    res['position_ids'] = self._get_position_ids(res)
    position_ids = res['position_ids']
    res['position_ids'] = position_ids[1:]
    res['text_position_ids'] = text_position_ids = position_ids[0]
    if self.transformers_version >= version.parse('4.53.0.dev') and text_position_ids.shape[0] == 1:
        # https://github.com/huggingface/transformers/pull/40194
        res.update(get_packed_seq_params(text_position_ids))
return res
```
/gemini review |
Code Review
This pull request refactors the handling of forward_context by moving the logic for processing packed position IDs from a runtime context hook into the _data_collator method for glm and qwen templates. This is a good simplification that improves code structure by centralizing data preparation. The changes are consistent and correct. I've added one suggestion to abstract the duplicated logic in glm.py and qwen.py into a shared helper function to improve maintainability.
```python
if 'position_ids' in res:
    position_ids = res['position_ids']
    res['position_ids'] = position_ids[1:]
    res['text_position_ids'] = text_position_ids = position_ids[0]
    # https://github.com/huggingface/transformers/pull/40194
    if text_position_ids.shape[0] == 1:
        res.update(get_packed_seq_params(text_position_ids))
```
This logic for processing `position_ids` appears to be duplicated in swift/llm/template/template/qwen.py. To improve maintainability and reduce code duplication, consider abstracting this block into a shared helper function. The function could take the `res` dictionary and an optional condition (for the `transformers_version` check in qwen.py) as arguments.
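One possible shape for such a helper, sketched under assumptions: the function name `_process_packed_position_ids` and the `packed_seq_condition` callback are hypothetical, while `get_packed_seq_params` is the existing project helper already used in both files:

```python
def _process_packed_position_ids(res, packed_seq_condition=None):
    # Hypothetical shared helper for glm.py and qwen.py. Splits the stacked
    # position_ids into model position_ids plus text_position_ids, then
    # attaches packed-seq params when the optional gate (e.g. qwen.py's
    # transformers version check) allows it.
    if 'position_ids' not in res:
        return res
    position_ids = res['position_ids']
    res['position_ids'] = position_ids[1:]
    res['text_position_ids'] = text_position_ids = position_ids[0]
    # https://github.com/huggingface/transformers/pull/40194
    if ((packed_seq_condition is None or packed_seq_condition())
            and text_position_ids.shape[0] == 1):
        # get_packed_seq_params is the project helper used in both files.
        res.update(get_packed_seq_params(text_position_ids))
    return res
```

qwen.py could then pass, for example, `packed_seq_condition=lambda: self.transformers_version >= version.parse('4.53.0.dev')`, while glm.py would call it with no gate.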