Stars
Train transformer language models with reinforcement learning.
A PyTorch toolbox for domain generalization, domain adaptation and semi-supervised learning.
The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distillation-Oriented Trainer https://openaccess.thecvf.com/content…
Domain Generalization with MixStyle (ICLR'21)
You only need to configure one file to support model heterogeneity. GPU memory usage remains consistent whether serving a single client or multiple clients.
Official PyTorch code for the CVPR2022 Oral: Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization
AAAI 2023 accepted paper, FedALA: Adaptive Local Aggregation for Personalized Federated Learning
[ICLR 2023] Multimodal Federated Learning via Contrastive Representation Ensemble
[NeurIPS 2023] "FedFed: Feature Distillation against Data Heterogeneity in Federated Learning"
ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
[AAAI 2024] FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning
FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning
Code for "Federated Recommender with Additive Personalization"
[CVPR 2024] The official repository of our paper "LEAD: Learning Decomposition for Source-free Universal Domain Adaptation"
This is the official implementation of the ICCV 2023 paper "No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier".
This repository contains the source code for our MICCAI 2024 paper titled 'CAR-MFL: Cross-Modal Augmentation by Retrieval for Multimodal Federated Learning with Missing Modalities'
[AAAI 2024] Official implementation of "Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning"
Implementation of FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated Clients
Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [Accepted at CVPR 2023]
[ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baharan Mirzasoleiman
This paper is currently under review at IEEE TCSVT; the diffusion framework of the FedDiff algorithm will be disclosed.
A simple new method for dataset distillation called Randomized Truncated Backpropagation Through Time (RaT-BPTT)