The model was not saved to the finetune/output folder I specified, and the weight files that were written look like this, which prevents me from running inference:
```
$ ls
__0_0.distcp  __1_0.distcp  __2_0.distcp  __3_0.distcp  train_params.yaml
```
How can I save the weights of a fully fine-tuned model to a specified path, ensuring that the saved model weight file follows the standard transformers structure?
LoRA fine-tuning worked without issue, but the following problem arose during full fine-tuning.
I used the following script for full fine-tuning: