Gen AI use case: dialogue summarization. This notebook, extracted from the course Generative AI with Large Language Models, explores how the wording of the input text (the prompt) affects the model's summaries.
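As an illustration of that prompt-engineering idea (a minimal sketch; the model name google/flan-t5-base and the example dialogue are assumptions, not taken from the notebook):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = (
    "#Person1#: Did you finish the report?\n"
    "#Person2#: Almost. I just need to add the charts."
)

# Same dialogue under two prompt templates; the instruction wording alone
# can noticeably change what the model produces.
prompts = [
    dialogue,  # no instruction at all
    f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:",
]

for p in prompts:
    inputs = tokenizer(p, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```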
Developed a text summarization model using a Seq2Seq architecture with attention to condense multi-turn dialogues from the SAMSum dataset into coherent, informative summaries.
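The repo's exact architecture isn't reproduced here; as a reference point, a minimal additive (Bahdanau-style) attention layer in PyTorch, the core piece that lets the decoder weight encoder states:

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    """Additive attention: scores each encoder state against the decoder state."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W_enc = nn.Linear(hidden_dim, hidden_dim)
        self.W_dec = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.W_enc(enc_outputs) + self.W_dec(dec_state).unsqueeze(1)
        ))                                            # (batch, src_len, 1)
        weights = torch.softmax(scores, dim=1)        # attention over source
        context = (weights * enc_outputs).sum(dim=1)  # (batch, hidden)
        return context, weights

attn = BahdanauAttention(hidden_dim=256)
ctx, w = attn(torch.randn(2, 256), torch.randn(2, 10, 256))
print(ctx.shape, w.shape)  # torch.Size([2, 256]) torch.Size([2, 10, 1])
```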
Fine-tuning FLAN-T5 with PPO and PEFT to generate less toxic text summaries. This notebook uses Meta AI's hate-speech classifier as the reward model and applies RLHF techniques for improved safety.
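A condensed sketch of that detox loop, assuming trl's pre-1.0 PPOTrainer API and facebook/roberta-hate-speech-dynabench-r4-target as the reward model (the score for the "nothate" label serves as the reward); the base model name and batch settings are placeholders, not the notebook's exact setup:

```python
import torch
from transformers import AutoTokenizer, pipeline
from trl import PPOConfig, PPOTrainer, AutoModelForSeq2SeqLMWithValueHead

model_name = "google/flan-t5-base"  # placeholder; the notebook starts from a PEFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
ppo_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)

# Reward model: the "nothate" score from Meta AI's hate-speech classifier.
toxicity_clf = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
    top_k=None,
)

def rewards_for(texts):
    rewards = []
    for scores in toxicity_clf(texts):
        nothate = next(s["score"] for s in scores if s["label"] == "nothate")
        rewards.append(torch.tensor(nothate))
    return rewards

config = PPOConfig(batch_size=4, mini_batch_size=4)
ppo_trainer = PPOTrainer(config, ppo_model, ref_model, tokenizer)

# One PPO step over a toy batch (the real loop iterates a dialogue dataloader).
prompt = "Summarize: #Person1#: The build broke. #Person2#: I'll fix it tonight."
query_tensors = [tokenizer(prompt, return_tensors="pt").input_ids.squeeze(0)] * 4
response_tensors = [ppo_trainer.generate(q, max_new_tokens=40).squeeze(0)
                    for q in query_tensors]
summaries = [tokenizer.decode(r, skip_special_tokens=True) for r in response_tensors]
ppo_trainer.step(query_tensors, response_tensors, rewards_for(summaries))
```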
This repository explores enhancing dialogue summarization with commonsense knowledge through the SICK framework, evaluating models on dialogue datasets to assess the impact of commonsense knowledge on summarization quality.
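A hedged sketch of the general idea only (not the repo's SICK implementation): each utterance is paired with a commonsense inference before summarization; commonsense_for and the <I>…</I> marker below are hypothetical placeholders for what SICK derives from a COMET-style commonsense model:

```python
def commonsense_for(utterance: str) -> str:
    # Hypothetical placeholder: a real system would query a commonsense
    # model such as COMET for inferences (intent, reaction, effect, ...).
    return "wants to coordinate plans"

def augment_dialogue(turns: list[str]) -> str:
    """Interleave each turn with a commonsense inference before summarizing."""
    lines = []
    for turn in turns:
        lines.append(turn)
        lines.append(f"<I> {commonsense_for(turn)} </I>")  # assumed marker format
    return "\n".join(lines)

print(augment_dialogue([
    "#Person1#: Are you free for lunch tomorrow?",
    "#Person2#: Sure, how about the new place on 5th?",
]))
```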
This notebook fine-tunes the FLAN-T5 model for dialogue summarization, comparing full fine-tuning with Parameter-Efficient Fine-Tuning (PEFT). It evaluates performance using ROUGE metrics, demonstrating PEFT's efficiency while achieving competitive results.
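A minimal LoRA configuration sketch showing why PEFT trains so few parameters; the rank, dropout, and target modules here are illustrative assumptions, not the notebook's exact hyperparameters:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model
import evaluate

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,                       # rank of the low-rank adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5's attention query/value projections
)
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # typically on the order of 1-2% of all parameters

# ROUGE is the metric used to compare PEFT against full fine-tuning.
rouge = evaluate.load("rouge")
print(rouge.compute(
    predictions=["the meeting was moved to friday"],
    references=["they rescheduled the meeting for friday"],
))
```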
Fine-tuning Pegasus and FLAN-T5 pre-trained language models on the DialogSum dataset for conversation summarization, to optimize the context window in RAG LLM pipelines.
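A sketch of that context-window use case, assuming an off-the-shelf google/flan-t5-base via the transformers summarization pipeline (the repo would load its own fine-tuned checkpoints): retrieved passages are compressed before being placed in the RAG prompt:

```python
from transformers import pipeline

# Assumed model; the repo would load its fine-tuned Pegasus/FLAN-T5 checkpoint.
summarizer = pipeline("summarization", model="google/flan-t5-base")

def compress_for_rag(passages, max_tokens_each=60):
    """Summarize each retrieved passage so more sources fit in the prompt."""
    summaries = [
        summarizer("Summarize: " + p, max_new_tokens=max_tokens_each)[0]["summary_text"]
        for p in passages
    ]
    return "\n".join(f"- {s}" for s in summaries)

passages = [
    "The quarterly report shows revenue grew 12% while costs stayed flat, "
    "driven mostly by the new subscription tier launched in March.",
    "Customer support tickets dropped by a third after the onboarding flow "
    "was redesigned to include an interactive tutorial.",
]
context = compress_for_rag(passages)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What drove revenue growth?"
print(prompt)
```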
Dialogue summarization application hosted on AWS, with CI/CD deployment using Docker and FastAPI. A model card was created on Hugging Face and the app is deployed on Hugging Face Spaces.
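A minimal FastAPI serving sketch; the route, request schema, and model name are assumptions for illustration, not the repo's actual app:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Dialogue Summarizer")
# Assumed model; the deployed app would load the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="google/flan-t5-base")

class DialogueIn(BaseModel):
    dialogue: str

@app.post("/summarize")
def summarize(req: DialogueIn) -> dict:
    summary = summarizer(req.dialogue, max_new_tokens=80)[0]["summary_text"]
    return {"summary": summary}

# Local run: uvicorn main:app --reload  (then POST JSON {"dialogue": "..."})
```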