19th CBMI 2022: Graz, Austria
- CBMI 2022: International Conference on Content-based Multimedia Indexing, Graz, Austria, September 14 - 16, 2022. ACM 2022, ISBN 978-1-4503-9720-9
Best Papers
- Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara: Retrieval-Augmented Transformer for Image Captioning. 1-7
- Sohail Ahmed Khan, Duc-Tien Dang-Nguyen: Hybrid Transformer Network for Deepfake Detection. 8-14
- Ly-Duyen Tran, Naushad Alam, Yvette Graham, Linh Khanh Vo, Nghiem Tuong Diep, Binh T. Nguyen, Liting Zhou, Cathal Gurrin: An Exploration into the Benefits of the CLIP model for Lifelog Retrieval. 15-22
Multimedia understanding and classification
- Lam Pham, Dat Ngo, Tho Nguyen, Phu X. Nguyen, Truong Van Hoang, Alexander Schindler: An Audio-Visual Dataset and Deep Learning Frameworks for Crowded Scene Classification. 23-28
- Jiang Zhou, Kevin McGuinness, Joseph Antony, Noel E. O'Connor: A Fine Grained Quality Assessment of Video Anomaly Detection. 29-35
- Mohammad Rehan, Hazem Wannous, Jafar Alkheir, Kinda Aboukassem: Learning Co-occurrence Features Across Spatial and Temporal Domains for Hand Gesture Recognition. 36-42
Image analysis and enrichment
- Konstantinos Chatzistavros, Theodora Pistola, Sotiris Diplaris, Konstantinos Ioannidis, Stefanos Vrochidis, Ioannis Kompatsiaris: Sentiment analysis on 2D images of urban and indoor spaces using deep learning architectures. 43-49
- Mathias Glistrup, Stevan Rudinac, Björn Þór Jónsson: Urban Image Geo-Localization Using Open Data on Public Spaces. 50-56
- Mohamed Amine Kerkouri, Marouane Tliba, Aladine Chetouani, Alessandro Bruno: A domain adaptive deep learning solution for scanpath prediction of paintings. 57-63
Multimedia Indexing and Retrieval
- Nicola Messina, Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Fabrizio Falchi, Giuseppe Amato, Rita Cucchiara: ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval. 64-70
- Amorntip Prayoonwong, Ke-Long Zeng, Chih-Yi Chiu: Improving Nearest Neighbor Indexing by Multitask Learning. 71-76
- Omar Seddati, Stéphane Dupont, Saïd Mahmoudi, Thierry Dutoit: Towards Human Performance on Sketch-Based Image Retrieval. 77-83
- Varsha Devi, Philippe Mulhem, Georges Quénot: Analysis of the Complementarity of Latent and Concept Spaces for Cross-Modal Video Search. 84-90
Image processing and reconstruction
- Juhwan Lee, Jongha Lee, Sangwook Yoo: Real-time deblurring network for face AR applications. 91-96
- Ailin Chen, Rui Jesus, Márcia Vilarigues: Hyperspectral Image Reconstruction of Heritage Artwork Using RGB Images and Deep Neural Networks. 97-102
- Ilias Koulalis, Nikolaos I. Dourvas, Theocharis Triantafyllidis, Konstantinos Ioannidis, Stefanos Vrochidis, Ioannis Kompatsiaris: A survey for image based methods in construction: from images to digital twins. 103-110
Special Session: Computer-Assisted Clinical Applications
- Nicolas Martin, Jean-Pierre Chevallet, Georges Quénot: Segmenting partially annotated medical images. 111-115
- Adnane Ait Nasser, Moulay A. Akhloufi: Chest Diseases Classification Using CXR and Deep Ensemble Learning. 116-120
- Takfarines Guergueb, Moulay A. Akhloufi: Skin Cancer Detection using Ensemble Learning and Grouping of Deep Models. 121-125
Special Session: Learning from scarce data challenges in the media domain
- Fabio Carrara, Lorenzo Pasco, Claudio Gennaro, Fabrizio Falchi: Learning to Detect Fallen People in Virtual Worlds. 126-130
- Werner Bailer, Hannes Fassold: Few-shot Object Detection as a Semi-supervised Learning Problem. 131-135
- Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato: Deep Features for CBIR with Scarce Data using Hebbian Learning. 136-141
Special Session: Multimodal Signal processing technologies for Protecting people and environment against Natural Disasters
- Maria Eirini Pegia, Anastasia Moumtzidou, Ilias Gialampoukidis, Björn Þór Jónsson, Stefanos Vrochidis, Ioannis Kompatsiaris: BiasUNet: Learning Change Detection over Sentinel-2 Image Pairs. 142-148
- Rafik Ghali, Moulay A. Akhloufi, Wided Souidène Mseddi, Marwa Jmal: Wildfire Segmentation using Deep-RegSeg Semantic Segmentation Architecture. 149-154
- Arief Setyanto, Kusrini Kusrini, Gardyas Bidari Adninda, Renindya Kartikakirana, Rhisa Aidilla Suprapto, Arif Laksito, I Made Artha Agastya, Krishna Chandramouli, Andrea Majlingova, Yvonne Brodrechtová, Konstantinos P. Demestichas, Ebroul Izquierdo: Ecological Impact Assessment Framework for areas affected by Natural Disasters. 155-161
Posters
- Yuchen Qian, Kohei Yamamoto, Keiji Yanai: StyleGAN-based CLIP-guided Image Shape Manipulation. 162-166
- Abel Kahsay Gebreslassie, Jenny Benois-Pineau, Akka Zemmari: Streaming learning with Move-to-Data approach for image classification. 167-173
- Seán Cummins, Lorin Sweeney, Alan F. Smeaton: Analysing the Memorability of a Procedural Crime-Drama TV Series, CSI. 174-180
- Frédéric Rayar, Mathieu Delalandre, Van-Hao Le: A large-scale TV video and metadata database for French political content analysis and fact-checking. 181-185
- Björn Þór Jónsson, Aaron Duane, Nikolaj Mertz: Relational Database Performance for Multimedia: A Case Study. 186-190
- Isabel Kütemeyer, Mathias Lux: The Potential of Webcam Based Real Time Eye-Tracking to Reduce Rendering Cost. 191-195
- Benjamin Chamand, Philippe Joly: Self-Supervised Spiking Neural Networks applied to Digit Classification. 196-200
Demos
- Syed Zohaib Hassan, Pegah Salehi, Michael Alexander Riegler, Miriam Sinkerud Johnson, Gunn Astrid Baugerud, Pål Halvorsen, Saeed Shafiee Sabet: A Virtual Reality Talking Avatar for Investigative Interviews of Maltreat Children. 201-204
- Helmut Neuschmied, Florian Krebs, Stefan Ladstätter, Elisabeth Eder, Mohamed Redouane Berrazouane, Georg Thallinger: A Toolchain for Extracting and Visualising Road Traffic Data. 205-208