Here you can find a collection of notebooks for AdapterHub and the adapters library.
The tables below list the notebooks provided by AdapterHub in this folder, showing the different ways of working with adapters.
As adapters is fully compatible with Hugging Face Transformers, you can also use the large collection of official and community notebooks there: 🤗 Transformers Notebooks.
Notebook | Description
---|---
1: Training an Adapter | How to train a task adapter for a Transformer model.
2: Using Adapters for Inference | How to download and use pre-trained adapters from AdapterHub.
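For orientation, here is a minimal sketch of the two basic workflows these notebooks cover, assuming the adapters library is installed; the checkpoint and adapter names are illustrative placeholders, not the ones used in the notebooks.

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# Notebook 1: add a new bottleneck adapter plus a prediction head and train only their weights
model.add_adapter("my_task", config="seq_bn")
model.add_classification_head("my_task", num_labels=2)
model.train_adapter("my_task")  # freezes the base model and activates the adapter

# Notebook 2: download a pre-trained adapter from the Hub and activate it for inference
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sst2", set_active=True)
```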
Notebook | Description
---|---
3: Adapter Fusion | How to combine multiple pre-trained adapters on a new task using `Fuse` composition.
4: Cross-lingual Transfer | How to perform zero-shot cross-lingual transfer between tasks using the MAD-X setup (`Stack` composition).
5: Parallel Adapter Inference | Using the `Parallel` composition block for inference.
6: Adapter Merging and Task Arithmetics | How to merge multiple adapters into a new one via task arithmetics.
7: Complex Adapter Configuration | How to flexibly combine multiple adapter methods in complex setups using `ConfigUnion`.
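As a rough sketch of the composition blocks referenced above (not taken from the notebooks), assuming adapters named "a", "b", "lang", and "task" have already been added to or loaded into `model`:

```python
import adapters.composition as ac
from adapters import ConfigUnion, PrefixTuningConfig, SeqBnConfig

# Notebook 3: add an AdapterFusion layer over two adapters and activate it
model.add_adapter_fusion(ac.Fuse("a", "b"))
model.active_adapters = ac.Fuse("a", "b")

# Notebook 4: MAD-X-style transfer, stacking a task adapter on top of a language adapter
model.active_adapters = ac.Stack("lang", "task")

# Notebook 5: run two adapters side by side on the same input
model.active_adapters = ac.Parallel("a", "b")

# Notebook 6: merge two adapters into a new one by weighted averaging
# (average_adapter is assumed here as the merging entry point)
model.average_adapter("merged", ["a", "b"], weights=[0.5, 0.5])

# Notebook 7: combine several adapter methods into a single setup via ConfigUnion
model.add_adapter("combined", config=ConfigUnion(PrefixTuningConfig(), SeqBnConfig()))
```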
Notebook | Description
---|---
Text Generation | How to train an adapter for language generation.
QLoRA Llama Finetuning | How to finetune a quantized Llama model using QLoRA.
Training a NER Adapter | How to train an adapter on a named entity recognition task.
Adapter Drop Training | How to train an adapter using AdapterDrop.
Inference example for id2label | How to use the `id2label` dictionary for inference.
NER on Wikiann | How to evaluate adapters on the WikiANN named entity recognition dataset.
Finetuning Whisper with Adapters | How to fine-tune Whisper using LoRA.
Adapter Training with ReFT | How to fine-tune using ReFT adapters.
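Several of the notebooks in this last table rely on LoRA-style training; here is a minimal sketch, assuming an illustrative checkpoint and hyperparameters rather than the ones used in the notebooks.

```python
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# Add a LoRA adapter (rank and scaling factor are illustrative) and train only its weights;
# the actual notebooks apply this to Llama (with quantization) and Whisper checkpoints.
model.add_adapter("lora_adapter", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("lora_adapter")
```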