Neural Image Compression Using Masked Sparse Visual Representation
W Jiang, W Wang, Y Chen
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Abstract
We study neural image compression based on the Sparse Visual Representation (SVR), where images are embedded into a discrete latent space spanned by learned visual codebooks. By sharing the codebooks with the decoder, the encoder transmits integer codeword indices, which are efficient and robust across platforms, and the decoder retrieves the embedded latent feature from these indices for reconstruction. Previous SVR-based compression lacks an effective mechanism for rate-distortion tradeoffs: one can pursue either high reconstruction quality or low transmission bitrate, but not a controllable balance between them. We propose a Masked Adaptive Codebook learning (M-AdaCode) method that applies masks to the latent feature subspace to balance bitrate and reconstruction quality. A set of semantic-class-dependent basis codebooks is learned, whose features are combined with adaptive weights to generate a rich latent feature for high-quality reconstruction. The combining weights are derived adaptively from each input image, providing fidelity information at additional transmission cost. By masking out unimportant weights in the encoder and recovering them in the decoder, we trade reconstruction quality for transmission bits, with the masking rate controlling the balance between bitrate and distortion. Experiments on the standard JPEG-AI dataset demonstrate the effectiveness of our M-AdaCode approach.
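The weight-masking idea above can be illustrated with a minimal sketch. The names, shapes, and masking rule below are assumptions for illustration, not the authors' implementation: each basis codebook contributes a latent feature, per-image weights combine them, and the encoder keeps only the largest weights (a higher masking rate means fewer transmitted weights, hence fewer bits, at the cost of reconstruction fidelity).

```python
import numpy as np

# Hypothetical sketch of M-AdaCode-style weight masking (all names/shapes assumed).
K = 8   # number of semantic-class-dependent basis codebooks (assumed)
D = 16  # latent feature dimension (assumed)

rng = np.random.default_rng(0)
basis_features = rng.normal(size=(K, D))  # feature retrieved from each basis codebook
weights = rng.random(K)
weights /= weights.sum()                  # per-image adaptive combining weights

def mask_weights(w, keep):
    """Encoder side: keep only the `keep` largest weights, zero out the rest."""
    masked = np.zeros_like(w)
    top = np.argsort(w)[-keep:]
    masked[top] = w[top]
    return masked

def combine(w, feats):
    """Weighted combination of basis-codebook features into one latent feature."""
    return w @ feats

full = combine(weights, basis_features)  # unmasked, high-fidelity latent
for keep in (K, 4, 1):                   # fewer kept weights -> fewer bits, more distortion
    approx = combine(mask_weights(weights, keep), basis_features)
    err = np.linalg.norm(full - approx)
    print(f"keep={keep}: latent reconstruction error {err:.3f}")
```

In the paper the decoder additionally recovers the masked weights from the transmitted ones; this sketch only shows the encoder-side tradeoff that the masking rate exposes.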