- London, UK
- in/danieledalgrande
- @danidalgrande
Fine-tuning
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
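A minimal sketch of what PEFT enables, wrapping a causal LM with a LoRA adapter so only the low-rank weights train; the base checkpoint and hyperparameters below are illustrative assumptions, not library defaults.

```python
# Hedged sketch: attach a LoRA adapter to a Hugging Face model via PEFT.
# "gpt2" and the r/alpha/dropout values are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base checkpoint
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)  # only adapter weights remain trainable
model.print_trainable_parameters()    # typically a fraction of a percent of all weights
```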
Code and documentation to train Stanford's Alpaca models, and generate the data.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Instruction Tuning with GPT-4
A library for training and deploying machine learning models on Amazon SageMaker
Toolkit for running TensorFlow training scripts on SageMaker. Dockerfiles used for building SageMaker TensorFlow Containers are at https://github.com/aws/deep-learning-containers.
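For the two SageMaker entries above, a hedged sketch of launching a managed TensorFlow training job with the Python SDK; the IAM role, S3 path, instance type, and framework/Python versions are placeholders and assumptions.

```python
# Hedged sketch: submit a training script to a managed SageMaker instance.
# Role ARN, S3 URI, and versions below are placeholder assumptions.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",  # your local training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.g5.xlarge",
    framework_version="2.14",
    py_version="py310",
)
estimator.fit({"train": "s3://my-bucket/train"})  # placeholder S3 input channel
```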
Fast and memory-efficient exact attention
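A hedged sketch of flash-attention's core kernel call; the (batch, seqlen, nheads, headdim) layout and the fp16-on-CUDA requirement match the library, while the concrete shapes are arbitrary assumptions.

```python
# Hedged sketch: call the fused exact-attention kernel directly.
# Requires a CUDA GPU; inputs must be fp16 or bf16.
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
# Same result as standard softmax attention, without materializing
# the full seqlen x seqlen attention matrix.
out = flash_attn_func(q, k, v, causal=True)
```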
Efficient Triton Kernels for LLM Training
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
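A minimal sketch of DeepSpeed's entry point on a toy model; the config dict is an illustrative assumption, and such a script is normally run under the `deepspeed` launcher rather than plain `python`.

```python
# Hedged sketch: hand a model to DeepSpeed and get back an engine that owns
# the optimizer step and distributed plumbing. Run under the deepspeed launcher.
import torch
import deepspeed

model = torch.nn.Linear(10, 2)  # toy model, an illustrative assumption
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```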
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients. Published in Nature.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
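The JAX line is best shown rather than told; a small sketch composing its three core transforms (grad, vmap, jit) on a toy loss, with shapes chosen arbitrarily.

```python
# Hedged sketch: one pure function differentiated, vectorized, and compiled.
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)  # toy quadratic loss

grad_fn = jax.jit(jax.grad(loss))                 # compiled gradient w.r.t. w
batched_loss = jax.vmap(loss, in_axes=(None, 0))  # vectorize over a batch of x

w = jnp.ones(3)
xs = jnp.ones((4, 3))
print(grad_fn(w, xs[0]), batched_loss(w, xs))
```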
Official library for inference and pre-processing of Mistral models
An Open Source Toolkit For LLM Distillation
Train transformer language models with reinforcement learning.
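TRL's simplest trainer in a hedged sketch; its RL trainers (PPO, DPO, GRPO) follow the same pattern, and the model and dataset names below are illustrative assumptions drawn from the library's docs.

```python
# Hedged sketch: supervised fine-tuning with TRL's SFTTrainer.
# Model and dataset names are illustrative assumptions.
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(model="Qwen/Qwen2.5-0.5B", train_dataset=dataset)
trainer.train()
```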
Post-training with Tinker
Efficient few-shot learning with Sentence Transformers
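SetFit's few-shot flow in a hedged sketch; the checkpoint, dataset slice, and column mapping are assumptions for illustration, with API names as in recent setfit releases.

```python
# Hedged sketch: fine-tune a Sentence Transformer body plus classification
# head on a handful of labeled examples. Names are illustrative assumptions.
from datasets import load_dataset
from setfit import SetFitModel, Trainer

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
train_ds = load_dataset("sst2", split="train[:64]")  # a few labeled examples
trainer = Trainer(
    model=model,
    train_dataset=train_ds,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
preds = model.predict(["a gripping, beautifully shot film"])
```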
Tangle is a web app that lets users build and run machine learning pipelines without having to set up a development environment.