User profiles for Yushun Tang

Yushun Tang

Southern University of Science and Technology (SUSTech) & National University of …
Verified email at mail.sustech.edu.cn
Cited by 93

Neuro-modulated Hebbian learning for fully test-time adaptation

Y Tang, C Zhang, H Xu, S Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Fully test-time adaptation aims to adapt the network model based on sequential analysis of
input samples during the inference stage to address the cross-domain performance …

Self-correctable and adaptable inference for generalizable human pose estimation

Z Kan, S Chen, C Zhang, Y Tang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
A central challenge in human pose estimation, as well as in many other machine learning
and prediction tasks, is the generalization problem. The learned network does not have the …

Cross-modal concept learning and inference for vision-language models

Y Zhang, C Zhang, Y Tang, Z He - Neurocomputing, 2024 - Elsevier
Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP, establish the
correlation between texts and images, achieving remarkable success on various downstream …

BDC-Adapter: Brownian distance covariance for better vision-language reasoning

Y Zhang, C Zhang, Z Liao, Y Tang, Z He - arXiv preprint arXiv:2309.01256, 2023 - arxiv.org
Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP and ALIGN, have
introduced a new paradigm for learning transferable visual representations. Recently, there …

Dual-path adversarial lifting for domain shift correction in online test-time adaptation

Y Tang, S Chen, Z Lu, X Wang, Z He - European Conference on Computer …, 2025 - Springer
Transformer-based methods have achieved remarkable success in various machine
learning tasks. How to design efficient test-time adaptation methods for transformer models …

Learning visual conditioning tokens to correct domain shift for fully test-time adaptation

Y Tang, S Chen, Z Kan, Y Zhang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Fully test-time adaptation aims to adapt the network model based on sequential analysis of
input samples during the inference stage to address the cross-domain performance …

Cross-inferential networks for source-free unsupervised domain adaptation

Y Tang, Q Guo, Z He - 2023 IEEE International Conference on …, 2023 - ieeexplore.ieee.org
One central challenge in source-free unsupervised domain adaptation (UDA) is the lack of
an effective approach to evaluate the prediction results of the adapted network model in the …

Domain-Conditioned Transformer for Fully Test-time Adaptation

Y Tang, S Chen, J Jia, Y Zhang, Z He - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
Fully test-time adaptation aims to adapt a network model online based on sequential analysis
of input samples during the inference stage. We observe that, when applying a transformer …

Concept-Guided Prompt Learning for Generalization in Vision-Language Models

Y Zhang, C Zhang, K Yu, Y Tang, Z He - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Contrastive Language-Image Pretraining (CLIP) model has exhibited remarkable efficacy in
establishing cross-modal connections between texts and images, yielding impressive …

Window-based Channel Attention for Wavelet-enhanced Learned Image Compression

H Xu, B Hai, Y Tang, Z He - Proceedings of the Asian …, 2024 - openaccess.thecvf.com
Learned Image Compression (LIC) models have achieved superior rate-distortion performance
compared to traditional codecs. Existing LIC models use CNN, Transformer, or Mixed CNN-…