Journal of Visual Communication and Image Representation, Volume 53, May 2018
- Marjan Firouznia, Karim Faez, Hamidreza Amindavar, Javad Alikhani Koupaei:
Chaotic particle filter for visual object tracking. 1-12
- Ling Wang, Hong Cheng, Zicheng Liu:
A set-to-set nearest neighbor approach for robust and efficient face recognition with image sets. 13-19
- Heng Liu, Zilin Fu, Jungong Han, Ling Shao, Hongshen Liu:
Single satellite imagery simultaneous super-resolution and colorization using multi-task deep neural networks. 20-30
- Muwei Jian, Qiang Qi, Junyu Dong, Yilong Yin, Kin-Man Lam:
Integrating QDWD with pattern distinctness and local contrast for underwater saliency detection. 31-41
- Randa Atta, Mohammad Ghanbari:
A high payload steganography mechanism based on wavelet packet transformation and neutrosophic set. 42-54
- Michel Melo Silva, Washington L. S. Ramos, Felipe C. Chamone, João P. K. Ferreira, Mario F. M. Campos, Erickson R. Nascimento:
Making a long story short: A multi-importance fast-forwarding egocentric videos with the emphasis on relevant objects. 55-64
- Xiaofeng Wang, Didong Zhou, Nengliang Zeng, Xina Yu, Shaolin Hu:
Super-resolution image reconstruction using surface fitting with hierarchical structure. 65-75
- Yufan Liu, Li Yang, Mai Xu, Zulin Wang:
Rate control schemes for panoramic video coding. 76-85
- Satendra Pal Singh, Gaurav Bhatnagar:
A new robust watermarking system in integer DCT domain. 86-101
- Jianhua Zhang, Yanzhu Zhao, Shengyong Chen:
Object-level saliency: Fusing objectness estimation and saliency detection into a uniform framework. 102-112
- Yingyue Xu, Xiaopeng Hong, Xin Liu, Guoying Zhao:
Saliency detection via bi-directional propagation. 113-121
- Dragomir M. El Mezeni, Lazar V. Saranovac:
Enhanced local tone mapping for detail preserving reproduction of high dynamic range images. 122-133
- Dongdong Hou, Chuan Qin, Nenghai Yu, Weiming Zhang:
Reversible visual transformation via exploring the correlations within color images. 134-145
- Kharittha Thongkor, Thumrongrat Amornraksa, Edward J. Delp:
Digital watermarking for camera-captured images based on just noticeable distortion and Wiener filtering. 146-160
- Xiaobo Shen, Yunhao Yuan, Fumin Shen, Yang Xu, Quan-Sen Sun:
A novel multi-view dimensionality reduction and recognition framework with applications to face recognition. 161-170
- Wenqian Dong, Song Xiao, Yongxu Li:
Hyperspectral pansharpening based on guided filter and Gaussian filter. 171-179
- Weihao Gan, Ming-Sui Lee, Chi-Hao Wu, C.-C. Jay Kuo:
Online object tracking via motion-guided convolutional neural network (MGNet). 180-191
- Rafael Pagés, Konstantinos Amplianitis, David S. Monaghan, Jan Ondrej, Aljoscha Smolic:
Affordable content creation for free-viewpoint video and VR/AR applications. 192-201
- Toqeer Mahmood, Zahid Mehmood, Mohsin Shah, Tanzila Saba:
A robust technique for copy-move forgery detection and localization in digital images via stationary wavelet and discrete cosine transform. 202-214
- Ming Zhang, Yunhe Wu, Yue Du, Lei Fang, Yu Pang:
Saliency detection integrating global and local information. 215-223
- Qianting Ma, Dexing Kong:
A new variational model for joint restoration and segmentation based on the Mumford-Shah model. 224-234
- Fei Kou, Zhengguo Li, Changyun Wen, Weihai Chen:
Edge-preserving smoothing pyramid based multi-scale exposure fusion. 235-244
- Jiong Yang, Junsong Yuan:
Temporally enhanced image object proposals for online video object and action detections. 245-256
- Gee-Sern Jison Hsu, Arul-Murugan Ambikapathi, Sheng-Luen Chung, Hung-Cheng Shie:
Robust cross-pose face recognition using landmark oriented depth warping. 273-280
- Radu-Mihai Coliban, Mihai Ivanovici:
Reducing the oversegmentation induced by quasi-flat zones for multivariate images. 281-293