Implementation for the different ML tasks on Kaggle platform with GPUs.
Updated Jul 31, 2025 - Jupyter Notebook
This repository provides a Jupyter notebook demonstrating parameter-efficient fine-tuning (PEFT) with LoRA on Hugging Face models.
In this project, I provide code and a Colaboratory notebook that facilitate fine-tuning of a 3B-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, via Hugging Face's PEFT library, so that it trains with fewer computational resources and fewer trainable parameters.
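The parameter savings these repositories rely on come from LoRA's core idea: freeze the pretrained weight matrix and learn only a low-rank update. A minimal NumPy sketch of that computation (all dimensions and values here are illustrative assumptions, not taken from any repository above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions for illustration only
d_in, d_out, r = 512, 512, 8           # r is the LoRA rank
alpha = 16                             # LoRA scaling hyperparameter

W = rng.normal(size=(d_in, d_out))     # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01  # trainable down-projection
B = np.zeros((r, d_out))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus scaled low-rank update: x W + (alpha/r) * x A B
    return x @ W + (alpha / r) * (x @ A) @ B

x = rng.normal(size=(4, d_in))
y = lora_forward(x)

# Because B starts at zero, the adapted layer initially matches the base layer
assert np.allclose(y, x @ W)

full_params = W.size                   # 262,144 frozen parameters
lora_params = A.size + B.size          # 8,192 trainable parameters
print(lora_params / full_params)       # → 0.03125
```

Only `A` and `B` receive gradients during training; here that is about 3% of the layer's parameters, which is why the fine-tuning runs fit on a single Colab or Kaggle GPU. Hugging Face's PEFT library wraps this same construction behind `LoraConfig` and `get_peft_model`.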
This repository contains notebooks and resources related to the Software Development Group Project (SDGP) machine learning component. Specifically, it includes two notebooks used for creating a dataset and fine-tuning a Mistral-7B-v0.1-Instruct model.
Fine-tuning BLIP-2 for image captioning using the Flickr8k dataset with PEFT (LoRA) for efficient training. Inference-ready notebook included.
In this project, I provide code and a Colaboratory notebook that facilitate fine-tuning of a 350M-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, via Hugging Face's PEFT library, so that it trains with fewer computational resources and fewer trainable parameters.
This notebook fine-tunes the FLAN-T5 model for dialogue summarization, comparing full fine-tuning with Parameter-Efficient Fine-Tuning (PEFT). It evaluates performance using ROUGE metrics, demonstrating PEFT's efficiency while achieving competitive results.
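The ROUGE comparison above reduces to n-gram overlap between a generated summary and a reference. A simplified ROUGE-1 F1 in pure Python (real evaluations, including the notebook's, would typically use a package such as `rouge_score`; this is a sketch of the metric itself):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 (simplified: whitespace tokens, no stemming)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams match in each direction, so P = R = F1 = 5/6
print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))
```

Scoring both the fully fine-tuned and the PEFT checkpoints with the same metric is what lets the notebook claim competitive results at a fraction of the training cost.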