Stars
Lists of company-wise questions available on LeetCode Premium. Every CSV file in the companies directory corresponds to a list of questions on LeetCode for a specific company based on the LeetCode …
An open-source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi-modal AI that uses just a decoder to generate both text and images
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…
The repo for studying and sharing diffusion models.
lxe / llama-peft-tuner
Forked from zphang/minimal-llama. Tune LLaMA-7B on the Alpaca dataset using PEFT / LoRA, based on @zphang's https://github.com/zphang/minimal-llama scripts.
Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism
An Open-source Toolkit for LLM Development
Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models, which reportedly elevates model reasoning by at least 70%
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
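The core loop behind both Tree-of-Thoughts repos above is a propose–evaluate–prune search over partial solutions. Below is a minimal, illustrative sketch of that loop; `generate_thoughts` and `score` are hypothetical placeholders standing in for LLM calls, not the paper's actual implementation.

```python
# Hedged sketch of a breadth-first Tree-of-Thoughts search with beam pruning.
# `generate_thoughts` and `score` are hypothetical stand-ins for LLM calls.
def generate_thoughts(state: str, k: int = 3) -> list[str]:
    # In the real method, an LLM proposes k candidate next reasoning steps.
    return [f"{state} -> step{i}" for i in range(k)]

def score(state: str) -> float:
    # In the real method, an LLM rates how promising a partial solution is.
    return float(len(state))

def tree_of_thoughts(root: str, depth: int = 2, beam: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        candidates = [t for s in frontier for t in generate_thoughts(s)]
        # Keep only the most promising partial solutions (beam search).
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thoughts("problem"))
```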
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
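For context, a minimal sketch of LoRA fine-tuning with the PEFT library is shown below; the GPT-2 base model and the hyperparameters are illustrative assumptions, not recommendations.

```python
# Minimal LoRA sketch with 🤗 PEFT; model choice and hyperparameters are
# illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # low-rank dimension of the LoRA update
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```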
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
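A minimal sketch of what that looks like in practice, assuming a toy linear model and random data purely for illustration:

```python
# Device-agnostic training loop with 🤗 Accelerate; the toy model and
# random data are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # selects CPU/GPU/TPU and handles distribution
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=8)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward() under mixed precision/DDP
    optimizer.step()
```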
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
Fine-tune a T5 transformer model using PyTorch & 🤗 Transformers
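A hedged sketch of a single T5 training step with 🤗 Transformers; the t5-small checkpoint and the one hard-coded input/target pair are assumptions for illustration only.

```python
# One supervised training step for T5; checkpoint and example pair are
# illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

inputs = tokenizer("summarize: LoRA trains small adapter matrices.", return_tensors="pt")
labels = tokenizer("LoRA is parameter-efficient.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy
loss.backward()
optimizer.step()
```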
Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning".
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
An open-source framework for training large multimodal models.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
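A minimal sketch using the gpt4all Python bindings; the model filename is an assumed example from the public model catalog and is downloaded on first use.

```python
# Run a local LLM with gpt4all; the model filename is an assumption.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # runs fully on-device
with model.chat_session():
    print(model.generate("Explain LoRA in one sentence.", max_tokens=64))
```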
Code and documentation to train Stanford's Alpaca models, and generate the data.
OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.
🎁 [ChatGPT4MTevaluation] Error Analysis Prompt for MT Evaluation in ChatGPT
[ACL 2023] Reasoning with Language Model Prompting: A Survey
A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/
A modular RL library to fine-tune language models to human preferences
Adversarial Natural Language Inference Benchmark