Issues: meta-llama/llama-recipes
DeepSpeed support for Full Finetuning - FSDP performance is not as good as DeepSpeed (#536, opened May 23, 2024 by waterluck)
Recommendations to save, store & re-use results? [enhancement] (#598, opened Jul 16, 2024 by smach)
Llama 3.1 Code Interpreter file reference [enhancement] (#610, opened Jul 25, 2024 by jonatananselmo)
Clarification on Evaluation Results for Llama Guard 3 (#633, opened Aug 15, 2024 by sheli-kohan, 1 of 2 tasks)
Why set the label tokens the same as the input tokens (#637, opened Aug 19, 2024 by kaimoxuan123, 1 of 2 tasks)
The EOS and BOS token setting when continuing pretraining of Llama 3.1 (#648, opened Aug 28, 2024 by ShomyLiu)
Loss does not converge with FSDP CPU offloading [triaged] (#360, opened Jan 29, 2024 by hjlee1371, 1 of 2 tasks)
Inference with "FULL_STATE_DICT" checkpoint from FSDP fine-tuning (#699, opened Oct 2, 2024 by mathmax12)
Llama 3.2 fine-tuning generates a repeated pattern towards the end of one epoch (#735, opened Oct 18, 2024 by ruian1, 2 tasks done)