Stars
Train transformer language models with reinforcement learning.
[NeurIPS 2023] "FedFed: Feature Distillation against Data Heterogeneity in Federated Learning"
A flexible Federated Learning Framework based on PyTorch, simplifying your Federated Learning research.
You only need to configure one file to support model heterogeneity, and GPU memory usage stays consistent whether you run a single client or multiple clients.
The code for "Federated Recommender with Additive Personalization"
[CVPR 2024] The official repository of our paper "LEAD: Learning Decomposition for Source-free Universal Domain Adaptation"
[ICLR 2023] Multimodal Federated Learning via Contrastive Representation Ensemble
[AAAI 2023] FedALA: Adaptive Local Aggregation for Personalized Federated Learning
This repository contains the source code for our MICCAI 2024 paper titled 'CAR-MFL: Cross-Modal Augmentation by Retrieval for Multimodal Federated Learning with Missing Modalities'
Implementation for FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated Clients
ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
[ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baharan Mirzasoleiman
A new simple method for dataset distillation called Randomized Truncated Backpropagation Through Time (RaT-BPTT)
This paper is currently under review at IEEE TCSVT; the diffusion-framework part of the FedDiff algorithm will be disclosed later.
High-Resolution Image Synthesis with Latent Diffusion Models
[AAAI 2024] Official implementation of "Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning"
[AAAI 2024] FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning
Official PyTorch code for CVPR2022 Oral: Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization
(AAAI-24) Federated Learning via Input-Output Collaborative Distillation
[ACML 2022] Towards Data-Free Knowledge Distillation
Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [Accepted at CVPR 2023]
The official implementation of [CVPR2022] Decoupled Knowledge Distillation (https://arxiv.org/abs/2203.08679) and [ICCV2023] DOT: A Distillation-Oriented Trainer (https://openaccess.thecvf.com/content…)