
fix: Set dtype on nested model config to avoid flash attention warning #309

Open

mitchsayre wants to merge 1 commit into QwenLM:main from mitchsayre:main

Conversation

@mitchsayre

Propagate the dtype to the nested code predictor config. This avoids the warning: "You are attempting to use Flash Attention 2 without specifying a torch dtype. This might lead to unexpected behaviour."

Related Issue: #136
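A minimal sketch of the fix described above, using hypothetical config class and field names (the actual classes live in the QwenLM codebase): the parent config's dtype is copied onto the nested code-predictor config before the model is built, so Flash Attention 2 sees an explicit dtype and does not emit the warning.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CodePredictorConfig:
    # Nested sub-config; dtype unset by default, which is what
    # triggers the Flash Attention 2 warning at load time.
    torch_dtype: Optional[str] = None


@dataclass
class ModelConfig:
    torch_dtype: str = "bfloat16"
    code_predictor_config: CodePredictorConfig = field(
        default_factory=CodePredictorConfig
    )


config = ModelConfig()
# The fix: propagate the parent dtype to the nested config
# before instantiating the model.
config.code_predictor_config.torch_dtype = config.torch_dtype
```

With the dtype set on both configs, the attention implementation is initialized with an explicit dtype instead of falling back to the default.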
