Stars
This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects.
Original code base for "On Pretraining Data Diversity for Self-Supervised Learning"
Code release for "Semi-supervised learning made simple with self-supervised clustering"
Code release for "Improved baselines for vision-language pre-training"
Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript
Code implementation of our NeurIPS 2023 paper: Vocabulary-free Image Classification
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training (a minimal usage sketch follows this list).
Library for automatic retraining and continual learning
[ICCV 2023 Oral] Official PyTorch implementation of our paper for semi-supervised continual learning "A soft nearest-neighbor framework for continual semi-supervised learning".
Source code for ACL 2022 paper "Self-contrastive Decorrelation for Sentence Embeddings".
Recent Advances in Vision and Language Pre-training (VLP)
COYO-700M: Large-scale Image-Text Pair Dataset
MATLAB/Simulink project built to model and control a magnetic bearing with different techniques
Small project on data processing: measurements taken with a laser Doppler vibrometer on a vibrating structure.
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
uploadcare / pillow-simd
Forked from python-pillow/Pillow. The friendly PIL fork.
Implementation of Imagen, Google's Text-to-Image Neural Network, in PyTorch
Official repository for the paper "Multimodal emotion recognition with modality-pairwise unsupervised contrastive loss"
The official PyTorch code for "Continual Attentive Fusion for Incremental Learning in Semantic Segmentation"
The official PyTorch code for "Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation"
Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022
Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)
Official repository for the paper "Self-Supervised Models are Continual Learners" (CVPR 2022)
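Several entries in this list build on 🤗 Transformers. As a point of reference, here is a minimal sketch of its pipeline API; the task name and default-model download are standard library behavior, while the input sentence is purely illustrative:

```python
# Minimal sketch of the 🤗 Transformers pipeline API.
# On first use, pipeline() downloads a default pretrained model for the task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("Self-supervised pretraining keeps getting better.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```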