Stars
An RLHF (reinforcement learning from human feedback) learning environment for Korean
Code for the NH AI/Big Data Strategy Report (23.04)
KoAlpaca: An open-source language model that understands Korean instructions
Reading list for instruction tuning. A trend that started with Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
A Unified Semi-Supervised Learning Codebase (NeurIPS'22)
Python package to calculate readability statistics of a text object - paragraphs, sentences, articles.
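The description matches the textstat package, so here is a minimal sketch under that assumption (the package is inferred, not named on this line):

```python
# Minimal sketch, assuming the package is textstat (inferred from the
# description; the package is not named on this line).
import textstat

text = "This is a short sample paragraph. It has two simple sentences."

print(textstat.flesch_reading_ease(text))   # higher score = easier to read
print(textstat.flesch_kincaid_grade(text))  # approximate US school grade level
```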
Must-read papers on prompt-based tuning for pre-trained language models.
BERT-based NLP template (+ WandB, Hydra)
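A generic Hydra entry point of the kind such templates use (a sketch only; the config path, name, and keys are hypothetical, not this template's actual layout):

```python
# Generic Hydra entry point; configs/train.yaml and the keys below are
# hypothetical examples, not this template's actual files.
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="configs", config_name="train", version_base=None)
def main(cfg: DictConfig) -> None:
    # Values are populated from the YAML config plus CLI overrides.
    print(cfg.model.name, cfg.optimizer.lr)

if __name__ == "__main__":
    main()
```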
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
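This description matches the transformers-interpret package; a minimal sketch under that assumption:

```python
# Minimal sketch, assuming the repo is transformers-interpret (inferred from
# the description; the package is not named on this line).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The advertised two lines: build an explainer, then call it on raw text.
explainer = SequenceClassificationExplainer(model, tokenizer)
print(explainer("I love this movie!"))  # (token, attribution score) pairs
```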
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
A Unified Library for Parameter-Efficient and Modular Transfer Learning
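A generic bottleneck-adapter module in plain PyTorch, to illustrate the parameter-efficient idea; this is a sketch of the technique, not the library's own API:

```python
# Generic bottleneck adapter (Houlsby-style): down-project, nonlinearity,
# up-project, residual. Illustrates the technique, not this library's API.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual connection keeps the frozen base model's behavior intact.
        return x + self.up(self.act(self.down(x)))

x = torch.randn(4, 16, 768)          # (batch, sequence, hidden)
print(BottleneckAdapter()(x).shape)  # torch.Size([4, 16, 768])
```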
A Python library for self-supervised learning on images.
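A generic SimCLR-style NT-Xent loss in plain PyTorch, illustrating the kind of contrastive objective such a library packages; a sketch of the technique, not the library's API:

```python
# Generic NT-Xent (SimCLR) contrastive loss; a sketch of the technique,
# not this library's API.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two batches of projected embeddings."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # two augmented views
print(nt_xent_loss(z1, z2))
```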
Code associated with the Don't Stop Pretraining ACL 2020 paper
A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank
NL-Augmenter: A Collaborative Repository of Natural Language Transformations
A PyTorch-based library for semi-supervised learning (NeurIPS'21)
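A generic FixMatch-style pseudo-labeling step in plain PyTorch, illustrating the family of algorithms such libraries implement; a sketch of the technique, not this library's API:

```python
# Generic FixMatch-style unlabeled loss: pseudo-label the weakly augmented
# view, train on the strongly augmented view. A sketch of the technique,
# not this library's API.
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=1)  # predictions on weak view
        conf, pseudo = probs.max(dim=1)              # confidence and pseudo-label
        mask = (conf >= threshold).float()           # keep only confident samples
    loss = F.cross_entropy(model(strong_batch), pseudo, reduction="none")
    return (loss * mask).mean()

model = torch.nn.Linear(10, 3)  # toy classifier
weak, strong = torch.randn(16, 10), torch.randn(16, 10)
# Low threshold so the toy example keeps some samples.
print(fixmatch_unlabeled_loss(model, weak, strong, threshold=0.5))
```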
Phrase-Based & Neural Unsupervised Machine Translation
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
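A minimal sketch of the library's high-level pipeline API (the default checkpoint is downloaded on first use):

```python
# Minimal sketch of the 🤗 Transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default checkpoint
print(classifier("This library makes state-of-the-art NLP easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```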
Transformer Implementation using PyTorch for Neural Machine Translation (Korean to English)
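A generic torch.nn.Transformer shape check to illustrate the model family this repo implements; a sketch, not the repo's own code:

```python
# Generic torch.nn.Transformer usage; a sketch of the model family, not this
# repo's code. Default layout is (sequence length, batch, d_model).
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)
src = torch.rand(32, 4, 512)  # encoded source sentence (e.g. Korean)
tgt = torch.rand(24, 4, 512)  # shifted target sentence (e.g. English)
print(model(src, tgt).shape)  # torch.Size([24, 4, 512])
```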