User profile for Yushun Tang
Yushun Tang — Southern University of Science and Technology (SUSTech) & National University of … Verified email at mail.sustech.edu.cn — Cited by 93
Neuro-modulated hebbian learning for fully test-time adaptation
Fully test-time adaptation aims to adapt the network model based on sequential analysis of
input samples during the inference stage to address the cross-domain performance …
Self-correctable and adaptable inference for generalizable human pose estimation
A central challenge in human pose estimation, as well as in many other machine learning
and prediction tasks, is the generalization problem. The learned network does not have the …
Cross-modal concept learning and inference for vision-language models
Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP, establish the
correlation between texts and images, achieving remarkable success on various downstream …
Bdc-adapter: Brownian distance covariance for better vision-language reasoning
Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP and ALIGN, have
introduced a new paradigm for learning transferable visual representations. Recently, there …
Dual-path adversarial lifting for domain shift correction in online test-time adaptation
Transformer-based methods have achieved remarkable success in various machine
learning tasks. How to design efficient test-time adaptation methods for transformer models …
Learning visual conditioning tokens to correct domain shift for fully test-time adaptation
Fully test-time adaptation aims to adapt the network model based on sequential analysis of
input samples during the inference stage to address the cross-domain performance …
Cross-inferential networks for source-free unsupervised domain adaptation
One central challenge in source-free unsupervised domain adaptation (UDA) is the lack of
an effective approach to evaluate the prediction results of the adapted network model in the …
Domain-Conditioned Transformer for Fully Test-time Adaptation
Fully test-time adaptation aims to adapt a network model online based on sequential analysis
of input samples during the inference stage. We observe that, when applying a transformer …
Concept-Guided Prompt Learning for Generalization in Vision-Language Models
Contrastive Language-Image Pretraining (CLIP) model has exhibited remarkable efficacy in
establishing cross-modal connections between texts and images, yielding impressive …
Window-based Channel Attention for Wavelet-enhanced Learned Image Compression
Learned Image Compression (LIC) models have achieved superior rate-distortion performance
compared with traditional codecs. Existing LIC models use CNN, Transformer, or mixed CNN-…