This folder contains finetuning and inference examples for Llama 2. For the full documentation on these examples, please refer to docs/inference.md.
Please refer to the main README.md for information on how to use the finetuning.py script. After installing the llama-recipes package through pip, you can also invoke the finetuning in two ways:

```
python -m llama_recipes.finetuning <parameters>

python examples/finetuning.py <parameters>
```
So far, we have provided the following inference examples:
- The inference script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models. It also demonstrates safety features to protect the user from toxic or harmful content.

- The vllm/inference.py script takes advantage of vLLM's paged attention concept for low latency.

- The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).

- A chat completion example highlighting the handling of chat dialogs.

- A Code Llama folder which provides examples for code completion and code infilling.
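As a rough illustration of what the chat completion example has to handle, a single Llama 2 chat turn can be rendered into a prompt string like this. This is a minimal sketch based on the published Llama 2 prompt format; the actual example handles full multi-turn dialogs and tokenization, and `format_turn` is a hypothetical helper, not part of the library:

```python
# Special markers from the Llama 2 chat prompt format: the user turn is
# wrapped in [INST] ... [/INST], and the system prompt is embedded in the
# first turn inside <<SYS>> ... <</SYS>>.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_turn(system_prompt: str, user_message: str) -> str:
    # Build the prompt for the first turn of a dialog: the system prompt
    # is folded into the user turn rather than sent as a separate message.
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message} {E_INST}"

prompt = format_turn("You are a helpful assistant.", "What is Llama 2?")
```

Later turns in a dialog repeat the `[INST] ... [/INST]` wrapping around each new user message, with prior model answers appended in between.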
For more in-depth information on inference, including safety checks and examples, see the inference documentation in docs/inference.md.
Note: The sensitive topics safety checker utilizes AuditNLG, which is an optional dependency. Please refer to the installation section of the main README.md for details.

Note: The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.
To show how to train a model on a custom dataset, we provide an example that generates a custom dataset in custom_dataset.py. The usage of the custom dataset is further described in the datasets README.
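To give a feel for the shape of such a file, here is a minimal sketch of a custom dataset entry point. The `get_custom_dataset(dataset_config, tokenizer, split)` signature and the tiny in-memory corpus are assumptions for illustration; see custom_dataset.py and the datasets README for the actual interface:

```python
# Hypothetical sketch of a custom dataset loader. A real implementation
# would read from disk or the Hugging Face hub and return a dataset whose
# rows carry input_ids, attention_mask, and labels.
def get_custom_dataset(dataset_config, tokenizer, split):
    # Tiny in-memory corpus standing in for a real data source.
    samples = {
        "train": ["Hello Llama!", "Fine-tuning is fun."],
        "validation": ["A held-out sample."],
    }
    # Tokenize each sample for the requested split.
    return [tokenizer.encode(text) for text in samples[split]]
```

The finetuning script would then be pointed at this file via its dataset configuration options, as described in the datasets README.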