Shri Narayanan
Person information
- affiliation: University of Southern California, Signal Analysis and Interpretation Lab, Los Angeles, USA
2020 – today
- 2024
- [j160]Miran Oh, Dani Byrd, Louis Goldstein, Shrikanth S. Narayanan:
Vertical larynx actions and intergestural timing stability in Hausa ejectives and implosives. Phonetica 81(6): 559-597 (2024) - [j159]Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemyslaw Kazienko, Stanislaw Saganowski, Shrikanth Narayanan:
Scaling Representation Learning From Ubiquitous ECG With State-Space Models. IEEE J. Biomed. Health Informatics 28(10): 5877-5889 (2024) - [c684]Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu Pham, Shrikanth Narayanan, Michael Pazzani:
ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization. CVPR Workshops 2024: 5105-5112 - [c683]Kleanthis Avramidis, Melinda Y. Chang, Rahul Sharma, Mark S. Borchert, Shrikanth Narayanan:
Evaluating Atypical Gaze Patterns through Vision Models: The Case of Cortical Visual Impairment. EMBC 2024: 1-5 - [c682]Tiantian Feng, Shrikanth Narayanan:
Understanding Stress, Burnout, and Behavioral Patterns in Medical Residents Using Large-scale Longitudinal Wearable Recordings. EMBC 2024: 1-7 - [c681]Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth Narayanan:
Knowledge-guided EEG Representation Learning. EMBC 2024: 1-6 - [c680]Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan:
Audio-Visual Child-Adult Speaker Classification in Dyadic Interactions. ICASSP 2024: 8090-8094 - [c679]Shanti Stewart, Kleanthis Avramidis, Tiantian Feng, Shrikanth Narayanan:
Emotion-Aligned Contrastive Learning Between Images and Music. ICASSP 2024: 8135-8139 - [c678]Sabyasachee Baruah, Shrikanth Narayanan:
Character Attribute Extraction from Movie Scripts Using LLMs. ICASSP 2024: 8270-8275 - [c677]Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan:
Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization. ICASSP 2024: 8396-8400 - [c676]Tiantian Feng, Rajat Hebbar, Shrikanth Narayanan:
TRUST-SER: On The Trustworthiness Of Fine-Tuning Pre-Trained Speech Embeddings For Speech Emotion Recognition. ICASSP 2024: 11201-11205 - [c675]Tiantian Feng, Shrikanth Narayanan:
Foundation Model Assisted Automatic Speech Emotion Recognition: Transcribing, Annotating, and Augmenting. ICASSP 2024: 12116-12120 - [c674]Keith Burghardt, Ashwin Rao, Georgios Chochlakis, Sabyasachee Baruah, Siyi Guo, Zihao He, Andrew Rojecki, Shrikanth Narayanan, Kristina Lerman:
Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts. ICWSM 2024: 164-176 - [c673]Parsa Hejabi, Akshay Kiran Padte, Preni Golazizian, Rajat Hebbar, Jackson Trager, Georgios Chochlakis, Aditya Kommineni, Ellie Graeden, Shrikanth Narayanan, Benjamin A. T. Graham, Morteza Dehghani:
CVAT-BWV: A Web-Based Video Annotation Platform for Police Body-Worn Video. IJCAI 2024: 8674-8678 - [i143]Benjamin A. T. Graham, Lauren Brown, Georgios Chochlakis, Morteza Dehghani, Raquel Delerme, Brittany Friedman, Ellie Graeden, Preni Golazizian, Rajat Hebbar, Parsa Hejabi, Aditya Kommineni, Mayagüez Salinas, Michael Sierra-Arévalo, Jackson Trager, Nicholas Weller, Shrikanth Narayanan:
A Multi-Perspective Machine Learning Approach to Evaluate Police-Driver Interaction in Los Angeles. CoRR abs/2402.01703 (2024) - [i142]Tiantian Feng, Shrikanth Narayanan:
Understanding Stress, Burnout, and Behavioral Patterns in Medical Residents Using Large-scale Longitudinal Wearable Recordings. CoRR abs/2402.09028 (2024) - [i141]Tiantian Feng, Daniel Yang, Digbalay Bose, Shrikanth Narayanan:
Can Text-to-image Model Assist Multi-modal Learning for Visual Recognition with Visual Modality Missing? CoRR abs/2402.09036 (2024) - [i140]Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth Narayanan:
Knowledge-guided EEG Representation Learning. CoRR abs/2403.03222 (2024) - [i139]Alice Baird, Rachel Manzelli, Panagiotis Tzirakis, Chris Gagne, Haoqi Li, Sadie Allen, Sander Dieleman, Brian Kulis, Shrikanth S. Narayanan, Alan Cowen:
The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data. CoRR abs/2403.14048 (2024) - [i138]Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan:
The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition. CoRR abs/2403.17125 (2024) - [i137]Tiantian Feng, Xuan Shi, Rahul Gupta, Shrikanth S. Narayanan:
TI-ASU: Toward Robust Automatic Speech Understanding through Text-to-speech Imputation Against Missing Speech Modality. CoRR abs/2404.17983 (2024) - [i136]Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu H. Pham, Shrikanth Narayanan, Michael Pazzani:
ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization. CoRR abs/2404.18831 (2024) - [i135]Anfeng Xu, Kevin Huang, Tiantian Feng, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan:
Exploring Speech Foundation Models for Speaker Diarization in Child-Adult Dyadic Interactions. CoRR abs/2406.07890 (2024) - [i134]Jihwan Lee, Aditya Kommineni, Tiantian Feng, Kleanthis Avramidis, Xuan Shi, Sudarsana Kadiri, Shrikanth Narayanan:
Toward Fully-End-to-End Listened Speech Decoding from EEG Signals. CoRR abs/2406.08644 (2024) - [i133]Tiantian Feng, Dimitrios Dimitriadis, Shrikanth Narayanan:
Can Synthetic Audio From Generative Foundation Models Assist Audio Recognition and Speech Modeling? CoRR abs/2406.08800 (2024) - [i132]Tuo Zhang, Tiantian Feng, Yibin Ni, Mengqin Cao, Ruying Liu, Katharine Butler, Yanjun Weng, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr:
Creating a Lens of Chinese Culture: A Multimodal Dataset for Chinese Pun Rebus Art Understanding. CoRR abs/2406.10318 (2024) - [i131]Angelly Cabrera, Kleanthis Avramidis, Shrikanth Narayanan:
Early Detection of Coffee Leaf Rust Through Convolutional Neural Networks Trained on Low-Resolution Images. CoRR abs/2407.14737 (2024) - [i130]Tiantian Feng, Tuo Zhang, Salman Avestimehr, Shrikanth S. Narayanan:
ModalityMirror: Improving Audio Classification in Modality Heterogeneity Federated Learning with Multimodal Distillation. CoRR abs/2408.15803 (2024) - [i129]Georgios Chochlakis, Niyantha Maruthu Pandiyan, Kristina Lerman, Shrikanth Narayanan:
Larger Language Models Don't Care How You Think: Why Chain-of-Thought Prompting Fails in Subjective Tasks. CoRR abs/2409.06173 (2024) - [i128]Tiantian Feng, Anfeng Xu, Xuan Shi, Somer Bishop, Shrikanth Narayanan:
Egocentric Speaker Classification in Child-Adult Dyadic Interactions: From Sensing to Computational Modeling. CoRR abs/2409.09340 (2024) - [i127]Zhonghao Shi, Harshvardhan Srivastava, Xuan Shi, Shrikanth Narayanan, Maja J. Mataric:
Personalized Speech Recognition for Children with Test-Time Adaptation. CoRR abs/2409.13095 (2024) - [i126]Aditya Kommineni, Digbalay Bose, Tiantian Feng, So Hyun Kim, Helen Tager-Flusberg, Somer Bishop, Catherine Lord, Sudarsana Kadiri, Shrikanth Narayanan:
Towards Child-Inclusive Clinical Video Understanding for Autism Spectrum Disorder. CoRR abs/2409.13606 (2024) - [i125]Hong Nguyen, Sean Foley, Kevin Huang, Xuan Shi, Tiantian Feng, Shrikanth Narayanan:
Speech2rtMRI: Speech-Guided Diffusion Model for Real-time MRI Video of the Vocal Tract during Speech. CoRR abs/2409.15525 (2024) - [i124]Aditya Ashvin, Rimita Lahiri, Aditya Kommineni, Somer Bishop, Catherine Lord, Sudarsana Reddy Kadiri, Shrikanth Narayanan:
Evaluation of state-of-the-art ASR Models in Child-Adult Interactions. CoRR abs/2409.16135 (2024) - [i123]Girish Narayanswamy, Xin Liu, Kumar Ayush, Yuzhe Yang, Xuhai Xu, Shun Liao, Jake Garrison, Shyam Tailor, Jake Sunshine, Yun Liu, Tim Althoff, Shrikanth Narayanan, Pushmeet Kohli, Jiening Zhan, Mark Malhotra, Shwetak N. Patel, Samy Abdel-Ghaffar, Daniel McDuff:
Scaling Wearable Foundation Models. CoRR abs/2410.13638 (2024) - [i122]Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan:
Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors. CoRR abs/2410.13776 (2024) - [i121]Anne-Maria Laukkanen, Sudarsana Reddy Kadiri, Shrikanth Narayanan, Paavo Alku:
Can a Machine Distinguish High and Low Amount of Social Creak in Speech? CoRR abs/2410.17028 (2024) - [i120]Sabyasachee Baruah, Shrikanth Narayanan:
CHATTER: A Character Attribution Dataset for Narrative Understanding. CoRR abs/2411.05227 (2024) - [i119]Tiantian Feng, Anfeng Xu, Rimita Lahiri, Helen Tager-Flusberg, So Hyun Kim, Somer Bishop, Catherine Lord, Shrikanth Narayanan:
Can Generic LLMs Help Analyze Child-adult Interactions Involving Children with Autism in Clinical Observation? CoRR abs/2411.10761 (2024) - 2023
- [j158]Raghuveer Peri, Krishna Somandepalli, Shrikanth Narayanan:
A study of bias mitigation strategies for speaker recognition. Comput. Speech Lang. 79: 101481 (2023) - [j157]Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth S. Narayanan, Theodora Chaspari:
Modeling inter-individual differences in ambulatory-based multimodal signals via metric learning: a case study of personalized well-being estimation of healthcare workers. Frontiers Digit. Health 5 (2023) - [j156]Chi-Chun Lee, Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan:
An Engineering View on Emotions and Speech: From Analysis and Predictive Models to Responsible Human-Centered Applications. Proc. IEEE 111(10): 1142-1158 (2023) - [j155]Rahul Sharma, Krishna Somandepalli, Shrikanth Narayanan:
Cross Modal Video Representations for Weakly Supervised Active Speaker Localization. IEEE Trans. Multim. 25: 7825-7836 (2023) - [c672]Tiantian Feng, Shrikanth Narayanan:
PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models. ACII 2023: 1-8 - [c671]Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan:
Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models. ACII 2023: 1-8 - [c670]Sabyasachee Baruah, Shrikanth Narayanan:
Character Coreference Resolution in Movie Screenplays. ACL (Findings) 2023: 10300-10313 - [c669]Mohammad Rostami, Digbalay Bose, Shrikanth Narayanan, Aram Galstyan:
Domain Adaptation for Sentiment Analysis Using Robust Internal Representations. EMNLP (Findings) 2023: 11484-11498 - [c668]Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan:
Designing and Evaluating Speech Emotion Recognition Systems: A Reality Check Case Study with IEMOCAP. ICASSP 2023: 1-5 - [c667]Victor Ardulov, Shrikanth Narayanan:
Navigating and Reaching Therapeutic Goals with Dynamical Systems in Conversation-Based Interventions. ICASSP 2023: 1-5 - [c666]Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan:
Signal Processing Grand Challenge 2023 - E-Prevention: Sleep Behavior as an Indicator of Relapses in Psychotic Patients. ICASSP 2023: 1-2 - [c665]Kleanthis Avramidis, Tiantian Feng, Digbalay Bose, Shrikanth Narayanan:
Multimodal Estimation Of Change Points Of Physiological Arousal During Driving. ICASSP Workshops 2023: 1-5 - [c664]Kleanthis Avramidis, Shanti Stewart, Shrikanth Narayanan:
On the Role of Visual Context in Enriching Music Representations. ICASSP 2023: 1-5 - [c663]Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan:
Contextually-Rich Human Affect Perception Using Multimodal Scene Information. ICASSP 2023: 1-5 - [c662]Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan:
Leveraging Label Correlations in a Multi-Label Setting: a Case Study in Emotion. ICASSP 2023: 1-5 - [c661]Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan:
Using Emotion Embeddings to Transfer Knowledge between Emotions, Languages, and Annotation Formats. ICASSP 2023: 1-5 - [c660]Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan:
A Dataset for Audio-Visual Sound Event Detection in Movies. ICASSP 2023: 1-5 - [c659]Rimita Lahiri, Md. Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan:
A Context-Aware Computational Approach for Measuring Vocal Entrainment in Dyadic Conversations. ICASSP 2023: 1-5 - [c658]Ravi Pranjal, Ranjana Seshadri, Rakesh Kumar Sanath Kumar Kadaba, Tiantian Feng, Shrikanth S. Narayanan, Theodora Chaspari:
Toward Privacy-Enhancing Ambulatory-Based Well-Being Monitoring: Investigating User Re-Identification Risk in Multimodal Data. ICASSP 2023: 1-5 - [c657]Xuan Shi, Erica Cooper, Xin Wang, Junichi Yamagishi, Shrikanth Narayanan:
Can Knowledge of End-to-End Text-to-Speech Models Improve Neural Midi-to-Audio Synthesis Systems? ICASSP 2023: 1-5 - [c656]Tuo Zhang, Tiantian Feng, Samiul Alam, Sunwoo Lee, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr:
FedAudio: A Federated Learning Benchmark for Audio Tasks. ICASSP 2023: 1-5 - [c655]Homa Hosseinmardi, Amir Ghasemian, Kristina Lerman, Shrikanth Narayanan, Emilio Ferrara:
Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction. ICHI 2023: 91-100 - [c654]Shrikanth Narayanan:
Bridging Speech Science and Technology - Now and Into the Future. INTERSPEECH 2023: 1 - [c653]Reed Blaylock, Shrikanth Narayanan:
Beatboxing Kick Drum Kinematics. INTERSPEECH 2023: 2583-2587 - [c652]Thomas Melistas, Lefteris Kapelonis, Nikolaos Antoniou, Petros Mitseas, Dimitris Sgouropoulos, Theodoros Giannakopoulos, Athanasios Katsamanis, Shrikanth Narayanan:
Cross-Lingual Features for Alzheimer's Dementia Detection from Speech. INTERSPEECH 2023: 3008-3012 - [c651]Rimita Lahiri, Tiantian Feng, Rajat Hebbar, Catherine Lord, So Hyun Kim, Shrikanth Narayanan:
Robust Self Supervised Speech Embeddings for Child-Adult Classification in Interactions involving Children with Autism. INTERSPEECH 2023: 3557-3561 - [c650]Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan:
Understanding Spoken Language Development of Children with ASD Using Pre-trained Speech Embeddings. INTERSPEECH 2023: 4633-4637 - [c649]Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan:
FedMultimodal: A Benchmark for Multimodal Federated Learning. KDD 2023: 4035-4045 - [c648]Digbalay Bose, Rajat Hebbar, Tiantian Feng, Krishna Somandepalli, Anfeng Xu, Shrikanth Narayanan:
MM-AU: Towards Multimodal Understanding of Advertisement Videos. ACM Multimedia 2023: 86-95 - [c647]Rajat Hebbar, Digbalay Bose, Shrikanth Narayanan:
SEAR: Semantically-grounded Audio Representations. ACM Multimedia 2023: 2785-2794 - [c646]Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan:
MovieCLIP: Visual Scene Recognition in Movies. WACV 2023: 2082-2091 - [i118]Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan:
A dataset for Audio-Visual Sound Event Detection in Movies. CoRR abs/2302.07315 (2023) - [i117]Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan:
Contextually-rich human affect perception using multimodal scene information. CoRR abs/2303.06904 (2023) - [i116]Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan:
Designing and Evaluating Speech Emotion Recognition Systems: A reality check case study with IEMOCAP. CoRR abs/2304.00860 (2023) - [i115]Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan:
Signal Processing Grand Challenge 2023 - e-Prevention: Sleep Behavior as an Indicator of Relapses in Psychotic Patients. CoRR abs/2304.08614 (2023) - [i114]Tiantian Feng, Rajat Hebbar, Shrikanth Narayanan:
TrustSER: On the Trustworthiness of Fine-tuning Pre-trained Speech Embeddings For Speech Emotion Recognition. CoRR abs/2305.11229 (2023) - [i113]Keith Burghardt, Ashwin Rao, Siyi Guo, Zihao He, Georgios Chochlakis, Sabyasachee Baruah, Andrew Rojecki, Shri Narayanan, Kristina Lerman:
Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts. CoRR abs/2305.11867 (2023) - [i112]Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan:
Understanding Spoken Language Development of Children with ASD Using Pre-trained Speech Embeddings. CoRR abs/2305.14117 (2023) - [i111]Tuo Zhang, Tiantian Feng, Samiul Alam, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr:
GPT-FL: Generative Pre-trained Model-Assisted Federated Learning. CoRR abs/2306.02210 (2023) - [i110]Tiantian Feng, Shrikanth Narayanan:
PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models. CoRR abs/2306.05350 (2023) - [i109]Tiantian Feng, Digbalay Bose, Xuan Shi, Shrikanth Narayanan:
Unlocking Foundation Models for Privacy-Enhancing Speech Understanding: An Early Study on Low Resource Speech Training Leveraging Label-guided Synthetic Speech Content. CoRR abs/2306.07791 (2023) - [i108]Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan:
FedMultimodal: A Benchmark For Multimodal Federated Learning. CoRR abs/2306.09486 (2023) - [i107]Tiantian Feng, Brandon M. Booth, Shrikanth Narayanan:
Learning Behavioral Representations of Routines From Large-scale Unlabeled Wearable Time-series Data Streams using Hawkes Point Process. CoRR abs/2307.04445 (2023) - [i106]Shanti Stewart, Tiantian Feng, Kleanthis Avramidis, Shrikanth Narayanan:
Emotion-Aligned Contrastive Learning Between Images and Music. CoRR abs/2308.12610 (2023) - [i105]Digbalay Bose, Rajat Hebbar, Tiantian Feng, Krishna Somandepalli, Anfeng Xu, Shrikanth Narayanan:
MM-AU: Towards Multimodal Understanding of Advertisement Videos. CoRR abs/2308.14052 (2023) - [i104]Tiantian Feng, Shrikanth Narayanan:
Foundation Model Assisted Automatic Speech Emotion Recognition: Transcribing, Annotating, and Augmenting. CoRR abs/2309.08108 (2023) - [i103]Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan:
Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization. CoRR abs/2309.09405 (2023) - [i102]Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemyslaw Kazienko, Stanislaw Saganowski, Shrikanth Narayanan:
Scaling Representation Learning from Ubiquitous ECG with State-Space Models. CoRR abs/2309.15292 (2023) - [i101]Samiul Alam, Tuo Zhang, Tiantian Feng, Hui Shen, Zhichao Cao, Dong Zhao, JeongGil Ko, Kiran Somasundaram, Shrikanth S. Narayanan, Salman Avestimehr, Mi Zhang:
FedAIoT: A Federated Learning Benchmark for Artificial Intelligence of Things. CoRR abs/2310.00109 (2023) - [i100]Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan:
Audio-visual child-adult speaker classification in dyadic interactions. CoRR abs/2310.01867 (2023) - [i99]Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan:
Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models. CoRR abs/2311.03551 (2023) - [i98]Hong Nguyen, Cuong V. Nguyen, Shrikanth Narayanan, Benjamin Y. Xu, Michael Pazzani:
Explainable Severity ranking via pairwise n-hidden comparison: a case study of glaucoma. CoRR abs/2312.02541 (2023) - 2022
- [j154]Zane Durante, Victor Ardulov, Manoj Kumar, Jennifer Gongola, Thomas D. Lyon, Shrikanth Narayanan:
Causal indicators for assessing the truthfulness of child speech in forensic interviews. Comput. Speech Lang. 71: 101263 (2022) - [j153]Prashanth Gurunath Shivakumar, Shrikanth Narayanan:
End-to-end neural systems for automatic children speech recognition: An empirical study. Comput. Speech Lang. 72: 101289 (2022) - [j152]Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu Jeong Han, Shinji Watanabe, Shrikanth Narayanan:
A review of speaker diarization: Recent advances with deep learning. Comput. Speech Lang. 72: 101317 (2022) - [j151]Zhuohao Chen, Nikolaos Flemotomos, Karan Singla, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan:
An automated quality evaluation framework of psychotherapy conversations with local quality estimates. Comput. Speech Lang. 75: 101380 (2022) - [j150]Gábor Mihály Tóth, Tim Hempel, Krishna Somandepalli, Shri Narayanan:
Studying Large-Scale Behavioral Differences in Auschwitz-Birkenau with Simulation of Gendered Narratives. Digit. Humanit. Q. 16(3) (2022) - [j149]Björn W. Schuller, Yonina C. Eldar, Maja Pantic, Shrikanth Narayanan, Tuomas Virtanen, Jianhua Tao:
Editorial: Intelligent Signal Analysis for Contagious Virus Diseases. IEEE J. Sel. Top. Signal Process. 16(2): 159-163 (2022) - [j148]Anil Ramakrishna, Rahul Gupta, Shrikanth Narayanan:
Joint Multi-Dimensional Model for Global and Time-Series Annotations. IEEE Trans. Affect. Comput. 13(1): 473-484 (2022) - [j147]James Gibson, David C. Atkins, Torrey A. Creed, Zac E. Imel, Panayiotis G. Georgiou, Shrikanth Narayanan:
Multi-Label Multi-Task Deep Learning for Behavioral Coding. IEEE Trans. Affect. Comput. 13(1): 508-518 (2022) - [j146]Md. Nasir, Brian R. Baucom, Craig J. Bryan, Shrikanth Narayanan, Panayiotis G. Georgiou:
Modeling Vocal Entrainment in Conversational Speech Using Deep Unsupervised Learning. IEEE Trans. Affect. Comput. 13(3): 1651-1663 (2022) - [j145]Krishna Somandepalli, Rajat Hebbar, Shrikanth Narayanan:
Robust Character Labeling in Movie Videos: Data Resources and Self-Supervised Feature Adaptation. IEEE Trans. Multim. 24: 3355-3368 (2022) - [c645]Aggelina Chatziagapi, Dimitris Sgouropoulos, Constantinos Karouzos, Thomas Melistas, Theodoros Giannakopoulos, Athanasios Katsamanis, Shrikanth Narayanan:
Audio and ASR-based Filled Pause Detection. ACII 2022: 1-7 - [c644]Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios. EMNLP (Findings) 2022: 5787-5795 - [c643]Tiantian Feng, Hanieh Hashemi, Murali Annavaram, Shrikanth S. Narayanan:
Enhancing Privacy Through Domain Adaptive Noise Injection For Speech Emotion Recognition. ICASSP 2022: 7702-7706 - [c642]Kleanthis Avramidis, Mohammad Rostami, Melinda Chang, Shrikanth Narayanan:
Automating Detection of Papilledema in Pediatric Fundus Images with Explainable Machine Learning. ICIP 2022: 3973-3977 - [c641]Tiantian Feng, Shrikanth Narayanan:
Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling. INTERSPEECH 2022: 5050-5054 - [c640]Tiantian Feng, Raghuveer Peri, Shrikanth Narayanan:
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition on Federated Learning. INTERSPEECH 2022: 5055-5059 - [c639]Nikolaos Flemotomos, Shrikanth Narayanan:
Multimodal Clustering with Role Induced Constraints for Speaker Diarization. INTERSPEECH 2022: 5075-5079 - [i97]Tiantian Feng, Shrikanth Narayanan:
Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling. CoRR abs/2203.08810 (2022) - [i96]Rahul Sharma, Shrikanth Narayanan:
Audio visual character profiles for detecting background characters in entertainment media. CoRR abs/2203.11368 (2022) - [i95]Nicholas Mehlman, Anirudh Sreeram, Raghuveer Peri, Shrikanth Narayanan:
Mel Frequency Spectral Domain Defenses against Adversarial Attacks on Speech Recognition Systems. CoRR abs/2203.15283 (2022) - [i94]Rahul Sharma, Shrikanth Narayanan:
Using Active Speaker Faces for Diarization in TV shows. CoRR abs/2203.15961 (2022) - [i93]Nikolaos Flemotomos, Shrikanth Narayanan:
Multimodal Clustering with Role Induced Constraints for Speaker Diarization. CoRR abs/2204.00657 (2022) - [i92]Tiantian Feng, Raghuveer Peri, Shrikanth Narayanan:
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning. CoRR abs/2204.02500 (2022) - [i91]Victor Ardulov, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan:
Local dynamic mode of Cognitive Behavioral Therapy. CoRR abs/2205.09752 (2022) - [i90]Kleanthis Avramidis, Mohammad Rostami, Melinda Chang, Shrikanth Narayanan:
Automating Detection of Papilledema in Pediatric Fundus Images with Explainable Machine Learning. CoRR abs/2207.04565 (2022) - [i89]Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, Shrikanth Narayanan:
VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations. CoRR abs/2208.09021 (2022) - [i88]Rahul Sharma, Shrikanth Narayanan:
Unsupervised active speaker detection in media content using cross-modal information. CoRR abs/2209.11896 (2022) - [i87]Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan:
MovieCLIP: Visual Scene Recognition in Movies. CoRR abs/2210.11065 (2022) - [i86]Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios. CoRR abs/2210.14254 (2022) - [i85]Tuo Zhang, Tiantian Feng, Samiul Alam, Sunwoo Lee, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr:
FedAudio: A Federated Learning Benchmark for Audio Tasks. CoRR abs/2210.15707 (2022) - [i84]Kleanthis Avramidis, Tiantian Feng, Digbalay Bose, Shrikanth Narayanan:
Multimodal Estimation of Change Points of Physiological Arousal in Drivers. CoRR abs/2210.15826 (2022) - [i83]Kleanthis Avramidis, Shanti Stewart, Shrikanth Narayanan:
On the Role of Visual Context in Enriching Music Representations. CoRR abs/2210.15828 (2022) - [i82]Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan:
Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion. CoRR abs/2210.15842 (2022) - [i81]Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan:
Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats. CoRR abs/2211.00171 (2022) - [i80]Rimita Lahiri, Md. Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan:
A Context-Aware Computational Approach for Measuring Vocal Entrainment in Dyadic Conversations. CoRR abs/2211.03279 (2022) - [i79]Xuan Shi, Erica Cooper, Xin Wang, Junichi Yamagishi, Shrikanth Narayanan:
Can Knowledge of End-to-End Text-to-Speech Models Improve Neural MIDI-to-Audio Synthesis Systems? CoRR abs/2211.13868 (2022) - [i78]Rahul Sharma, Shrikanth Narayanan:
Audio-Visual Activity Guided Cross-Modal Identity Association for Active Speaker Detection. CoRR abs/2212.00539 (2022) - [i77]Tiantian Feng, Rajat Hebbar, Nicholas Mehlman, Xuan Shi, Aditya Kommineni, Shrikanth Narayanan:
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness. CoRR abs/2212.09006 (2022) - [i76]Tiantian Feng, Shrikanth Narayanan:
Exploring Workplace Behaviors through Speaking Patterns using Large-scale Multimodal Wearable Recordings: A Study of Healthcare Providers. CoRR abs/2212.09090 (2022) - 2021
- [j144]Sandeep Nallan Chakravarthula, Brian R. W. Baucom, Shrikanth Narayanan, Panayiotis G. Georgiou:
An analysis of observation length requirements for machine understanding of human behaviors from spoken language. Comput. Speech Lang. 66: 101162 (2021) - [j143]Arindam Jati, Chin-Cheng Hsu, Monisankha Pal, Raghuveer Peri, Wael AbdAlmageed, Shrikanth Narayanan:
Adversarial attack and defense strategies for deep speaker recognition systems. Comput. Speech Lang. 68: 101199 (2021) - [j142]Haoqi Li, Brian R. Baucom, Shrikanth Narayanan, Panayiotis G. Georgiou:
Unsupervised speech representation learning for behavior modeling using triplet enhanced contextualized networks. Comput. Speech Lang. 70: 101226 (2021) - [j141]Rajat Hebbar, Pavlos Papadopoulos, Ramon Reyes, Alexander F. Danvers, Angelina J. Polsinelli, Suzanne A. Moseley, David A. Sbarra, Matthias R. Mehl, Shrikanth Narayanan:
Deep multiple instance learning for foreground speech localization in ambient audio from wearable devices. EURASIP J. Audio Speech Music. Process. 2021(1): 1-8 (2021) - [j140]Krishna Somandepalli, Tanaya Guha, Victor R. Martinez, Naveen Kumar, Hartwig Adam, Shrikanth Narayanan:
Computational Media Intelligence: Human-Centered Machine Analysis of Media. Proc. IEEE 109(5): 891-910 (2021) - [j139]Colin Vaz, Shrikanth Narayanan:
Extending the Beta divergence to complex values. Pattern Recognit. Lett. 144: 105-111 (2021) - [j138]Shao-Yen Tseng, Shrikanth Narayanan, Panayiotis G. Georgiou:
Multimodal Embeddings From Language Models for Emotion Recognition in the Wild. IEEE Signal Process. Lett. 28: 608-612 (2021) - [j137]Arindam Jati, Amrutha Nadarajan, Raghuveer Peri, Karel Mundnich, Tiantian Feng, Benjamin Girault, Shrikanth Narayanan:
Temporal Dynamics of Workplace Acoustic Scenes: Egocentric Analysis and Prediction. IEEE ACM Trans. Audio Speech Lang. Process. 29: 756-769 (2021) - [j136]Monisankha Pal, Manoj Kumar, Raghuveer Peri, Tae Jin Park, So Hyun Kim, Catherine Lord, Somer Bishop, Shrikanth Narayanan:
Meta-Learning With Latent Space Clustering in Generative Adversarial Network for Speaker Diarization. IEEE ACM Trans. Audio Speech Lang. Process. 29: 1204-1219 (2021) - [j135]Mari Ganesh Kumar, Shrikanth Narayanan, Mriganka Sur, Hema A. Murthy:
Evidence of Task-Independent Person-Specific Signatures in EEG Using Subspace Techniques. IEEE Trans. Inf. Forensics Secur. 16: 2856-2871 (2021) - [j134]Krishna Somandepalli, Shrikanth Narayanan:
Generalized Multiview Shared Subspace Learning Using View Bootstrapping. IEEE Trans. Signal Process. 69: 4774-4786 (2021) - [c638]Tiantian Feng, Shrikanth Narayanan:
Privacy and Utility Preserving Data Transformation for Speech Emotion Recognition. ACII 2021: 1-7 - [c637]Shen Yan, Hsien-Te Kao, Kristina Lerman, Shrikanth Narayanan, Emilio Ferrara:
Mitigating the Bias of Heterogeneous Human Behavior in Affective Computing. ACII 2021: 1-8 - [c636]Sabyasachee Baruah, Sandeep Nallan Chakravarthula, Shrikanth Narayanan:
Annotation and Evaluation of Coreference Resolution in Screenplays. ACL/IJCNLP (Findings) 2021: 2004-2010 - [c635]Rajat Hebbar, Krishna Somandepalli, Raghuveer Peri, Ruchir Travadi, Tracy Tuplin, Fernando Rivera, Shrikanth Narayanan:
A Computational Tool to Study Vocal Participation of Women in UN-ITU Meetings. CBMI 2021: 1-4 - [c634]Dillon Knox, Timothy Greer, Benjamin Ma, Emily Kuo, Krishna Somandepalli, Shrikanth Narayanan:
Loss Function Approaches for Multi-label Music Tagging. CBMI 2021: 1-4 - [c633]Zhuohao Chen, Nikolaos Flemotomos, Victor Ardulov, Torrey A. Creed, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
Feature Fusion Strategies for End-to-End Evaluation of Cognitive Behavior Therapy Sessions. EMBC 2021: 1836-1839 - [c632]Monisankha Pal, Arindam Jati, Raghuveer Peri, Chin-Cheng Hsu, Wael AbdAlmageed, Shrikanth Narayanan:
Adversarial Defense for Deep Speaker Recognition Using Hybrid Adversarial Training. ICASSP 2021: 6164-6168 - [c631]Tae Jin Park, Manoj Kumar, Shrikanth Narayanan:
Multi-Scale Speaker Diarization with Neural Affinity Score Fusion. ICASSP 2021: 7173-7177 - [c630]Amr Gaballah, Abhishek Tiwari, Shrikanth Narayanan, Tiago H. Falk:
Context-Aware Speech Stress Detection in Hospital Workers Using Bi-LSTM Classifiers. ICASSP 2021: 8348-8352 - [c629]Young-Kyung Kim, Rimita Lahiri, Md. Nasir, So Hyun Kim, Somer Bishop, Catherine Lord, Shrikanth S. Narayanan:
Analyzing Short Term Dynamic Speech Features for Understanding Behavioral Traits of Children with Autism Spectrum Disorder. Interspeech 2021: 2916-2920 - [c628]Haoqi Li, Yelin Kim, Cheng-Hao Kuo, Shrikanth S. Narayanan:
Acted vs. Improvised: Domain Adaptation for Elicitation Approaches in Audio-Visual Emotion Recognition. Interspeech 2021: 3395-3399 - [c627]Miran Oh, Dani Byrd, Shrikanth S. Narayanan:
Leveraging Real-Time MRI for Illuminating Linguistic Velum Action. Interspeech 2021: 3964-3968 - [c626]Keith Burghardt, Nazgol Tavabi, Emilio Ferrara, Shrikanth Narayanan, Kristina Lerman:
Having a Bad Day? Detecting the Impact of Atypical Events Using Wearable Sensors. SBP-BRiMS 2021: 257-267 - [c625]Suchitra Krishnamachari, Manoj Kumar, So Hyun Kim, Catherine Lord, Shrikanth Narayanan:
Developing Neural Representations for Robust Child-Adult Diarization. SLT 2021: 590-597 - [c624]Prashanth Gurunath Shivakumar, Naveen Kumar, Panayiotis G. Georgiou, Shrikanth Narayanan:
RNN Based Incremental Online Spoken Language Understanding. SLT 2021: 989-996 - [i75]Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu Jeong Han, Shinji Watanabe, Shrikanth Narayanan:
A Review of Speaker Diarization: Recent Advances with Deep Learning. CoRR abs/2101.09624 (2021) - [i74]Prashanth Gurunath Shivakumar, Panayiotis G. Georgiou, Shrikanth Narayanan:
Confusion2vec 2.0: Enriching Ambiguous Spoken Language Representations with Subwords. CoRR abs/2102.02270 (2021) - [i73]Yongwan Lim, Shrikanth S. Narayanan, Krishna S. Nayak:
Attention-gated convolutional neural networks for off-resonance correction of spiral real-time MRI. CoRR abs/2102.07271 (2021) - [i72]Yongwan Lim, Asterios Toutios, Yannick Bliesener, Ye Tian, Sajan Goud Lingala, Colin Vaz, Tanner Sorensen, Miran Oh, Sarah Harper, Weiyi Chen, Yoon-Jeong Lee, Johannes Töger, Mairym Lloréns Monteserín, Caitlin Smith, Bianca Godinez, Louis Goldstein, Dani Byrd, Krishna S. Nayak, Shrikanth S. Narayanan:
A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images. CoRR abs/2102.07896 (2021) - [i71]Prashanth Gurunath Shivakumar, Shrikanth Narayanan:
End-to-End Neural Systems for Automatic Children Speech Recognition: An Empirical Study. CoRR abs/2102.09918 (2021) - [i70]Nikolaos Flemotomos, Victor R. Martinez, Zhuohao Chen, Karan Singla, Victor Ardulov, Raghuveer Peri, Derek D. Caperton, James Gibson, Michael J. Tanana, Panayiotis G. Georgiou, Jake Van Epps, Sarah P. Lord, Tad Hirsch, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
"Am I A Good Therapist?" Automated Evaluation Of Psychotherapy Skills Using Speech And Language Technologies. CoRR abs/2102.11265 (2021) - [i69]Nikolaos Flemotomos, Victor R. Martinez, Zhuohao Chen, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan:
Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations. CoRR abs/2102.11573 (2021) - [i68]Nauman Dawalatabad, Jilt Sebastian, Jom Kuriakose, C. Chandra Sekhar, Shrikanth Narayanan, Hema A. Murthy:
Front-end Diarization for Percussion Separation in Taniavartanam of Carnatic Music Concerts. CoRR abs/2103.03215 (2021) - [i67]Haoqi Li, Yelin Kim, Cheng-Hao Kuo, Shrikanth Narayanan:
Acted vs. Improvised: Domain Adaptation for Elicitation Approaches in Audio-Visual Emotion Recognition. CoRR abs/2104.01978 (2021) - [i66]Haoqi Li, Brian R. Baucom, Shrikanth Narayanan, Panayiotis G. Georgiou:
Unsupervised Speech Representation Learning for Behavior Modeling using Triplet Enhanced Contextualized Networks. CoRR abs/2104.03899 (2021) - [i65]Zhuohao Chen, Nikolaos Flemotomos, Karan Singla, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan:
An Automated Quality Evaluation Framework of Psychotherapy Conversations with Local Quality Estimates. CoRR abs/2106.07922 (2021) - [i64]Anirudh Sreeram, Nicholas Mehlman, Raghuveer Peri, Dillon Knox, Shrikanth Narayanan:
Perceptual-based deep-learning denoiser as a defense against adversarial attacks on ASR systems. CoRR abs/2107.05222 (2021) - [i63]Prashanth Gurunath Shivakumar, Somer Bishop, Catherine Lord, Shrikanth Narayanan:
Phone Duration Modeling for Speaker Age Estimation in Children. CoRR abs/2109.01568 (2021) - [i62]Sabyasachee Baruah, Krishna Somandepalli, Shrikanth Narayanan:
Representation of professions in entertainment media: Insights into frequency and sentiment trends through computational text analysis. CoRR abs/2110.03873 (2021) - [i61]Justin Olah, Sabyasachee Baruah, Digbalay Bose, Shrikanth Narayanan:
Cross Domain Emotion Recognition using Few Shot Knowledge Transfer. CoRR abs/2110.05021 (2021) - [i60]Digbalay Bose, Krishna Somandepalli, Souvik Kundu, Rimita Lahiri, Jonathan Gratch, Shrikanth Narayanan:
Understanding of Emotion Perception from Art. CoRR abs/2110.06486 (2021) - [i59]Tiantian Feng, Hanieh Hashemi, Rajat Hebbar, Murali Annavaram, Shrikanth S. Narayanan:
Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings. CoRR abs/2112.13416 (2021) - 2020
- [j133]Rini A. Sharon, Shrikanth S. Narayanan, Mriganka Sur, Hema A. Murthy:
Neural Speech Decoding During Audition, Imagination and Production. IEEE Access 8: 149714-149729 (2020) - [j132]Manoj Kumar, So Hyun Kim, Catherine Lord, Thomas D. Lyon, Shrikanth Narayanan:
Leveraging Linguistic Context in Dyadic Interactions to Improve Automatic Speech Recognition for Children. Comput. Speech Lang. 63: 101101 (2020) - [j131]Jangwon Kim, Asterios Toutios, Sungbok Lee, Shrikanth S. Narayanan:
Vocal tract shaping of emotional speech. Comput. Speech Lang. 64: 101100 (2020) - [j130]Vinesh Ravuri, Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth S. Narayanan, Theodora Chaspari:
Investigating Group-Specific Models of Hospital Workers' Well-Being: Implications for Algorithmic Bias. Int. J. Semantic Comput. 14(4): 477-499 (2020) - [j129]Hsien-Te Kao, Shen Yan, Homa Hosseinmardi, Shrikanth Narayanan, Kristina Lerman, Emilio Ferrara:
User-Based Collaborative Filtering Mobile Health System. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4(4): 165:1-165:17 (2020) - [j128]Shen Yan, Homa Hosseinmardi, Hsien-Te Kao, Shrikanth Narayanan, Kristina Lerman, Emilio Ferrara:
Affect Estimation with Wearable Sensors. J. Heal. Informatics Res. 4(3): 261-294 (2020) - [j127]Tae Jin Park, Kyu Jeong Han, Manoj Kumar, Shrikanth Narayanan:
Auto-Tuning Spectral Clustering for Speaker Diarization Using Normalized Maximum Eigengap. IEEE Signal Process. Lett. 27: 381-385 (2020) - [c623]Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan:
Screenplay Quality Assessment: Can We Predict Who Gets Nominated? NUSE@ACL 2020: 11-16 - [c622]Karan Singla, Zhuohao Chen, David C. Atkins, Shrikanth Narayanan:
Towards end-2-end learning for predicting behavior codes from spoken utterances in psychotherapy conversations. ACL 2020: 3797-3803 - [c621]George Hadjiantonis, Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth Narayanan, Theodora Chaspari:
Dynamical systems modeling of day-to-day signal-based patterns of emotional self-regulation and stress spillover in highly-demanding health professions. EMBC 2020: 284-287 - [c620]Tiantian Feng, Shrikanth Narayanan:
Modeling Human Movement Behavior Among Nursing Profession. EMBC 2020: 4256-4260 - [c619]Victor R. Martinez, Krishna Somandepalli, Yalda T. Uhls, Shrikanth Narayanan:
Joint Estimation and Analysis of Risk Behavior Ratings in Movie Scripts. EMNLP (1) 2020: 4780-4790 - [c618]Timothy Greer, Karel Mundnich, Matthew E. Sachs, Shrikanth Narayanan:
The Role of Annotation Fusion Methods in the Study of Human-Reported Emotion Experience During Music Listening. ICASSP 2020: 776-780 - [c617]Tiantian Feng, Shrikanth S. Narayanan:
Modeling Behavioral Consistency in Large-Scale Wearable Recordings of Human Bio-Behavioral Signals. ICASSP 2020: 1011-1015 - [c616]Tiantian Feng, Brandon M. Booth, Shrikanth S. Narayanan:
Modeling Behavior as Mutual Dependency between Physiological Signals and Indoor Location in Large-Scale Wearable Sensor Study. ICASSP 2020: 1016-1020 - [c615]Jiaxi Wang, Karel Mundnich, Allison T. Knoll, Pat Levitt, Shrikanth Narayanan:
Bringing in the Outliers: A Sparse Subspace Clustering Approach to Learn a Dictionary of Mouse Ultrasonic Vocalizations. ICASSP 2020: 3432-3436 - [c614]Brandon M. Booth, Shrikanth S. Narayanan:
Trapezoidal Segment Sequencing: A Novel Approach for Fusion of Human-Produced Continuous Annotations. ICASSP 2020: 4512-4516 - [c613]Monisankha Pal, Manoj Kumar, Raghuveer Peri, Tae Jin Park, So Hyun Kim, Catherine Lord, Somer Bishop, Shrikanth Narayanan:
Speaker Diarization Using Latent Space Clustering in Generative Adversarial Network. ICASSP 2020: 6504-6508 - [c612]Sandeep Nallan Chakravarthula, Md. Nasir, Shao-Yen Tseng, Haoqi Li, Tae Jin Park, Brian R. Baucom, Craig J. Bryan, Shrikanth Narayanan, Panayiotis G. Georgiou:
Automatic Prediction of Suicidal Risk in Military Couples Using Multimodal Interaction Cues from Couples Conversations. ICASSP 2020: 6539-6543 - [c611]Raghuveer Peri, Monisankha Pal, Arindam Jati, Krishna Somandepalli, Shrikanth Narayanan:
Robust Speaker Recognition Using Unsupervised Adversarial Invariance. ICASSP 2020: 6614-6618 - [c610]Rimita Lahiri, Manoj Kumar, Somer Bishop, Shrikanth Narayanan:
Learning Domain Invariant Representations for Child-Adult Classification from Speech. ICASSP 2020: 6749-6753 - [c609]Haoqi Li, Ming Tu, Jing Huang, Shrikanth Narayanan, Panayiotis G. Georgiou:
Speaker-Invariant Affective Representation Learning via Adversarial Training. ICASSP 2020: 7144-7148 - [c608]S. Ashwin Hebbar, Rahul Sharma, Krishna Somandepalli, Asterios Toutios, Shrikanth Narayanan:
Vocal Tract Articulatory Contour Detection in Real-Time Magnetic Resonance Images Using Spatio-Temporal Context. ICASSP 2020: 7354-7358 - [c607]Victor Ardulov, Zane Durante, Shanna Williams, Thomas D. Lyon, Shrikanth Narayanan:
Identifying Truthful Language in Child Interviews. ICASSP 2020: 8074-8078 - [c606]Nithin Rao Koluguri, Manoj Kumar, So Hyun Kim, Catherine Lord, Shrikanth Narayanan:
Meta-Learning for Robust Child-Adult Classification from Speech. ICASSP 2020: 8094-8098 - [c605]Karan Singla, Shrikanth Narayanan:
Multitask Learning for Darpa Lorelei's Situation Frame Extraction Task. ICASSP 2020: 8149-8153 - [c604]Zhuohao Chen, James Gibson, Ming-Chang Chiu, Qiaohong Hu, Tara K. Knight, Daniella Meeker, James A. Tulsky, Kathryn I. Pollak, Shrikanth Narayanan:
Automated Empathy Detection for Oncology Encounters. ICHI 2020: 1-8 - [c603]Shrikanth Shri Narayanan:
Human-centered Multimodal Machine Intelligence. ICMI 2020: 4-5 - [c602]Brandon M. Booth, Shrikanth S. Narayanan:
Fifty Shades of Green: Towards a Robust Measure of Inter-annotator Agreement for Continuous Signals. ICMI 2020: 204-212 - [c601]Anil Ramakrishna, Shrikanth Narayanan:
Sentence Level Estimation of Psycholinguistic Norms Using Joint Multidimensional Annotations. INTERSPEECH 2020: 601-605 - [c600]Xiaoyi Qin, Ming Li, Hui Bu, Wei Rao, Rohan Kumar Das, Shrikanth Narayanan, Haizhou Li:
The INTERSPEECH 2020 Far-Field Speaker Verification Challenge. INTERSPEECH 2020: 3456-3460 - [c599]Pavlos Papadopoulos, Shrikanth Narayanan:
Exploiting Conic Affinity Measures to Design Speech Enhancement Systems Operating in Unseen Noise Conditions. INTERSPEECH 2020: 4029-4033 - [c598]Danai Xezonaki, Georgios Paraskevopoulos, Alexandros Potamianos, Shrikanth Narayanan:
Affective Conditioning on Hierarchical Attention Networks Applied to Depression Detection from Transcribed Clinical Interviews. INTERSPEECH 2020: 4556-4560 - [c597]Dillon Knox, Timothy Greer, Benjamin Ma, Emily Kuo, Krishna Somandepalli, Shrikanth Narayanan:
MediaEval 2020 Emotion and Theme Recognition in Music Task: Loss Function Approaches for Multi-label Music Tagging. MediaEval 2020 - [c596]Tanaya Guha, Vlad Hosu, Dietmar Saupe, Bastian Goldlücke, Naveen Kumar, Weisi Lin, Victor R. Martinez, Krishna Somandepalli, Shrikanth Narayanan, Wen-Huang Cheng, Kree McLaughlin, Hartwig Adam, John See, Lai-Kuan Wong:
ATQAM/MAST'20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends. ACM Multimedia 2020: 4758-4760 - [c595]Nikolaos Flemotomos, Panayiotis G. Georgiou, Shrikanth Narayanan:
Linguistically Aided Speaker Diarization Using Speaker Role Information. Odyssey 2020: 117-124 - [c594]Raghuveer Peri, Haoqi Li, Krishna Somandepalli, Arindam Jati, Shrikanth Narayanan:
An Empirical Analysis of Information Encoded in Disentangled Neural Speaker Representations. Odyssey 2020: 194-201 - [c593]Nazgol Tavabi, Homa Hosseinmardi, Jennifer L. Villatte, Andrés Abeliuk, Shrikanth Narayanan, Emilio Ferrara, Kristina Lerman:
Learning Behavioral Representations from Wearable Sensors. SBP-BRiMS 2020: 245-254 - [i58]Xiaoyi Qin, Ming Li, Hui Bu, Rohan Kumar Das, Wei Rao, Shrikanth Narayanan, Haizhou Li:
The FFSVC 2020 Evaluation Plan. CoRR abs/2002.00387 (2020) - [i57]Raghuveer Peri, Haoqi Li, Krishna Somandepalli, Arindam Jati, Shrikanth Narayanan:
An empirical analysis of information encoded in disentangled neural speaker representations. CoRR abs/2002.03520 (2020) - [i56]Tae Jin Park, Kyu Jeong Han, Manoj Kumar, Shrikanth Narayanan:
Auto-Tuning Spectral Clustering for Speaker Diarization Using Normalized Maximum Eigengap. CoRR abs/2003.02405 (2020) - [i55]Rahul Sharma, Krishna Somandepalli, Shrikanth Narayanan:
Crossmodal learning for audio-visual speech event localization. CoRR abs/2003.04358 (2020) - [i54]Jiaxi Wang, Karel Mundnich, Allison T. Knoll, Pat Levitt, Shrikanth Narayanan:
Bringing in the outliers: A sparse subspace clustering approach to learn a dictionary of mouse ultrasonic vocalizations. CoRR abs/2003.05897 (2020) - [i53]Zhuohao Chen, Karan Singla, David C. Atkins, Zac E. Imel, Shrikanth Narayanan:
A Label Proportions Estimation Technique for Adversarial Domain Adaptation in Text Classification. CoRR abs/2003.07444 (2020) - [i52]Karel Mundnich, Brandon M. Booth, Michelle L'Hommedieu, Tiantian Feng, Benjamin Girault, Justin L'Hommedieu, Mackenzie Wildman, Sophia Skaaden, Amrutha Nadarajan, Jennifer L. Villatte, Tiago H. Falk, Kristina Lerman, Emilio Ferrara, Shrikanth Narayanan:
TILES-2018: A longitudinal physiologic and behavioral data set of hospital workers. CoRR abs/2003.08474 (2020) - [i51]Tae Jin Park, Kyu Jeong Han, Jing Huang, Xiaodong He, Bowen Zhou, Panayiotis G. Georgiou, Shrikanth Narayanan:
Speaker Diarization with Lexical Information. CoRR abs/2004.06756 (2020) - [i50]Anil Ramakrishna, Rahul Gupta, Shrikanth Narayanan:
Joint Multi-Dimensional Model for Global and Time-Series Annotations. CoRR abs/2005.03117 (2020) - [i49]Krishna Somandepalli, Shrikanth Narayanan:
Generalized Multi-view Shared Subspace Learning using View Bootstrapping. CoRR abs/2005.06038 (2020) - [i48]Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan:
Screenplay Quality Assessment: Can We Predict Who Gets Nominated? CoRR abs/2005.06123 (2020) - [i47]Zhuohao Chen, Nikolaos Flemotomos, Victor Ardulov, Torrey A. Creed, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
Feature Fusion Strategies for End-to-End Evaluation of Cognitive Behavior Therapy Sessions. CoRR abs/2005.07809 (2020) - [i46]Xiaoyi Qin, Ming Li, Hui Bu, Wei Rao, Rohan Kumar Das, Shrikanth Narayanan, Haizhou Li:
The INTERSPEECH 2020 Far-Field Speaker Verification Challenge. CoRR abs/2005.08046 (2020) - [i45]Anil Ramakrishna, Shrikanth Narayanan:
Sentence level estimation of psycholinguistic norms using joint multidimensional annotations. CoRR abs/2005.10232 (2020) - [i44]Danai Xezonaki, Georgios Paraskevopoulos, Alexandros Potamianos, Shrikanth Narayanan:
Affective Conditioning on Hierarchical Networks applied to Depression Detection from Transcribed Clinical Interviews. CoRR abs/2006.08336 (2020) - [i43]Zhuohao Chen, James Gibson, Ming-Chang Chiu, Qiaohong Hu, Tara K. Knight, Daniella Meeker, James A. Tulsky, Kathryn I. Pollak, Shrikanth Narayanan:
Automated Empathy Detection for Oncology Encounters. CoRR abs/2007.00809 (2020) - [i42]Monisankha Pal, Manoj Kumar, Raghuveer Peri, Tae Jin Park, So Hyun Kim, Catherine Lord, Somer Bishop, Shrikanth Narayanan:
Meta-learning with Latent Space Clustering in Generative Adversarial Network for Speaker Diarization. CoRR abs/2007.09635 (2020) - [i41]Mari Ganesh Kumar, Shrikanth Narayanan, Mriganka Sur, Hema A. Murthy:
Evidence of Task-Independent Person-Specific Signatures in EEG using Subspace Techniques. CoRR abs/2007.13517 (2020) - [i40]Manoj Kumar, Tae Jin Park, Somer Bishop, Shrikanth Narayanan:
Designing Neural Speaker Embeddings with Meta Learning. CoRR abs/2007.16196 (2020) - [i39]Keith Burghardt, Nazgol Tavabi, Emilio Ferrara, Shrikanth Narayanan, Kristina Lerman:
Having a Bad Day? Detecting the Impact of Atypical Life Events Using Wearable Sensors. CoRR abs/2008.01723 (2020) - [i38]Arindam Jati, Chin-Cheng Hsu, Monisankha Pal, Raghuveer Peri, Wael AbdAlmageed, Shrikanth Narayanan:
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems. CoRR abs/2008.07685 (2020) - [i37]Victor R. Martinez, Krishna Somandepalli, Karan Singla, Anil Ramakrishna, Yalda T. Uhls, Shrikanth Narayanan:
Victim or Perpetrator? Analysis of Violent Characters Portrayals from Movie Scripts. CoRR abs/2008.08225 (2020) - [i36]Krishna Somandepalli, Rajat Hebbar, Shrikanth Narayanan:
Multi-Face: Self-supervised Multiview Adaptation for Robust Face Clustering in Videos. CoRR abs/2008.11289 (2020)
2010 – 2019
- 2019
- [j126]Ruchir Travadi, Shrikanth S. Narayanan:
Efficient estimation and model generalization for the total variability model. Comput. Speech Lang. 53: 43-64 (2019) - [j125]Michael I. Proctor, Rachel Walker, Caitlin Smith, Tünde Szalay, Louis Goldstein, Shrikanth Narayanan:
Articulatory characterization of English liquid-final rimes. J. Phonetics 77 (2019) - [j124]Karel Mundnich, Brandon M. Booth, Benjamin Girault, Shrikanth S. Narayanan:
Generating labels for regression of subjective constructs using triplet embeddings. Pattern Recognit. Lett. 128: 385-392 (2019) - [j123]Ruchir Travadi, Shrikanth S. Narayanan:
Total Variability Layer in Deep Neural Network Embeddings for Speaker Verification. IEEE Signal Process. Lett. 26(6): 893-897 (2019) - [c592]Victor R. Martinez, Krishna Somandepalli, Karan Singla, Anil Ramakrishna, Yalda T. Uhls, Shrikanth S. Narayanan:
Violence Rating Prediction from Movie Scripts. AAAI 2019: 671-678 - [c591]Victor R. Martinez, Anil Ramakrishna, Ming-Chang Chiu, Karan Singla, Shrikanth S. Narayanan:
A system for the 2019 Sentiment, Emotion and Cognitive State Task of DARPA's LORELEI project. ACII 2019: 1-6 - [c590]Abhishek Tiwari, Jennifer L. Villatte, Shrikanth Narayanan, Tiago H. Falk:
Prediction of Psychological Flexibility with multi-scale Heart Rate Variability and Breathing Features in an "in-the-wild" Setting. ACII Workshops 2019: 297-303 - [c589]Brandon M. Booth, Shrikanth S. Narayanan:
Trapezoidal Segmented Regression: A Novel Continuous-scale Real-time Annotation Approximation Algorithm. ACII 2019: 600-606 - [c588]Benjamin Ma, Timothy Greer, Matthew E. Sachs, Assal Habibi, Jonas T. Kaplan, Shrikanth Narayanan:
Predicting Human-Reported Enjoyment Responses in Happy and Sad Music. ACII 2019: 607-613 - [c587]Theodoros Giannakopoulos, Spiros Dimopoulos, Georgios Pantazopoulos, Aggelina Chatziagapi, Dimitris Sgouropoulos, Athanasios Katsamanis, Alexandros Potamianos, Shrikanth S. Narayanan:
Using Oliver API for emotion-aware movie content characterization. CBMI 2019: 1-4 - [c586]Abhishek Tiwari, Raymundo Cassani, Shrikanth S. Narayanan, Tiago H. Falk:
A Comparative Study of Stress and Anxiety Estimation in Ecological Settings Using a Smart-shirt and a Smart-bracelet. EMBC 2019: 2213-2216 - [c585]Tiantian Feng, Shrikanth S. Narayanan:
Imputing Missing Data In Large-Scale Multivariate Biomedical Wearable Recordings Using Bidirectional Recurrent Neural Networks With Temporal Activation Regularization. EMBC 2019: 2529-2534 - [c584]Mari Ganesh Kumar, M. S. Saranya, Shrikanth S. Narayanan, Mriganka Sur, Hema A. Murthy:
Subspace techniques for task-independent EEG person identification. EMBC 2019: 4545-4548 - [c583]Abhishek Tiwari, Shrikanth S. Narayanan, Tiago H. Falk:
Stress and Anxiety Measurement "In-the-Wild" Using Quality-aware Multi-scale HRV Features. EMBC 2019: 7056-7059 - [c582]Abhishek Tiwari, Shrikanth S. Narayanan, Tiago H. Falk:
Breathing Rate Complexity Features for "In-the-Wild" Stress and Anxiety Measurement. EUSIPCO 2019: 1-5 - [c581]Taruna Agrawal, Rahul Gupta, Shrikanth S. Narayanan:
On Evaluating CNN Representations for Low Resource Medical Image Classification. ICASSP 2019: 1363-1367 - [c580]Che-Wei Huang, Shrikanth S. Narayanan:
On Role and Location of Normalization before Model-based Data Augmentation in Residual Blocks for Classification Tasks. ICASSP 2019: 3322-3326 - [c579]Timothy Greer, Karan Singla, Benjamin Ma, Shrikanth S. Narayanan:
Learning Shared Vector Representations of Lyrics and Chords in Music. ICASSP 2019: 3951-3955 - [c578]Krishna Somandepalli, Shrikanth S. Narayanan:
Reinforcing Self-expressive Representation with Constraint Propagation for Face Clustering in Movies. ICASSP 2019: 4065-4069 - [c577]Rini A. Sharon, Shrikanth S. Narayanan, Mriganka Sur, Hema A. Murthy:
An Empirical Study of Speech Processing in the Brain by Analyzing the Temporal Syllable Structure in Speech-input Induced EEG. ICASSP 2019: 4090-4094 - [c576]Rajat Hebbar, Krishna Somandepalli, Shrikanth S. Narayanan:
Robust Speech Activity Detection in Movie Audio: Data Resources and Experimental Evaluation. ICASSP 2019: 4105-4109 - [c575]Zhuohao Chen, Karan Singla, James Gibson, Dogan Can, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Improving the Prediction of Therapist Behaviors in Addiction Counseling by Exploiting Class Confusions. ICASSP 2019: 6605-6609 - [c574]Amrutha Nadarajan, Krishna Somandepalli, Shrikanth S. Narayanan:
Speaker Agnostic Foreground Speech Detection from Audio Recordings in Workplace Settings from Wearable Recorders. ICASSP 2019: 6765-6769 - [c573]Nikolaos Flemotomos, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Role Specific Lattice Rescoring for Speaker Role Recognition from Speech Recognition Outputs. ICASSP 2019: 7330-7334 - [c572]Karel Mundnich, Benjamin Girault, Shrikanth S. Narayanan:
Bluetooth Based Indoor Localization Using Triplet Embeddings. ICASSP 2019: 7570-7574 - [c571]Tiantian Feng, Shrikanth S. Narayanan:
Discovering Optimal Variable-length Time Series Motifs in Large-scale Wearable Recordings of Human Bio-behavioral Signals. ICASSP 2019: 7615-7619 - [c570]Brandon M. Booth, Tiantian Feng, Abhishek Jangalwa, Shrikanth S. Narayanan:
Toward Robust Interpretable Human Movement Pattern Analysis in a Workplace Setting. ICASSP 2019: 7630-7634 - [c569]Shen Yan, Homa Hosseinmardi, Hsien-Te Kao, Shrikanth S. Narayanan, Kristina Lerman, Emilio Ferrara:
Estimating Individualized Daily Self-Reported Affect with Wearable Sensors. ICHI 2019: 1-9 - [c568]Rahul Sharma, Krishna Somandepalli, Shrikanth S. Narayanan:
Toward Visual Voice Activity Detection for Unconstrained Videos. ICIP 2019: 2991-2995 - [c567]Aggelina Chatziagapi, Georgios Paraskevopoulos, Dimitris Sgouropoulos, Georgios Pantazopoulos, Malvina Nikandrou, Theodoros Giannakopoulos, Athanasios Katsamanis, Alexandros Potamianos, Shrikanth Narayanan:
Data Augmentation Using GANs for Speech Emotion Recognition. INTERSPEECH 2019: 171-175 - [c566]Tae Jin Park, Kyu Jeong Han, Jing Huang, Xiaodong He, Bowen Zhou, Panayiotis G. Georgiou, Shrikanth Narayanan:
Speaker Diarization with Lexical Information. INTERSPEECH 2019: 391-395 - [c565]Tae Jin Park, Manoj Kumar, Nikolaos Flemotomos, Monisankha Pal, Raghuveer Peri, Rimita Lahiri, Panayiotis G. Georgiou, Shrikanth Narayanan:
The Second DIHARD Challenge: System Description for USC-SAIL Team. INTERSPEECH 2019: 998-1002 - [c564]Md. Nasir, Sandeep Nallan Chakravarthula, Brian R. W. Baucom, David C. Atkins, Panayiotis G. Georgiou, Shrikanth Narayanan:
Modeling Interpersonal Linguistic Coordination in Conversations Using Word Mover's Distance. INTERSPEECH 2019: 1423-1427 - [c563]Victor R. Martinez, Nikolaos Flemotomos, Victor Ardulov, Krishna Somandepalli, Simon B. Goldberg, Zac E. Imel, David C. Atkins, Shrikanth Narayanan:
Identifying Therapist and Client Personae for Therapeutic Alliance Estimation. INTERSPEECH 2019: 1901-1905 - [c562]Krishna Somandepalli, Naveen Kumar, Arindam Jati, Panayiotis G. Georgiou, Shrikanth Narayanan:
Multiview Shared Subspace Learning Across Speakers and Speech Commands. INTERSPEECH 2019: 2320-2324 - [c561]Arindam Jati, Raghuveer Peri, Monisankha Pal, Tae Jin Park, Naveen Kumar, Ruchir Travadi, Panayiotis G. Georgiou, Shrikanth Narayanan:
Multi-Task Discriminative Training of Hybrid DNN-TVM Model for Speaker Verification with Noisy and Far-Field Speech. INTERSPEECH 2019: 2463-2467 - [c560]Timothy Greer, Benjamin Ma, Matthew E. Sachs, Assal Habibi, Shrikanth S. Narayanan:
A Multimodal View into Music's Effect on Human Neural, Physiological, and Emotional Experience. ACM Multimedia 2019: 167-175 - [c559]Timothy Greer, Shrikanth Narayanan:
Using Shared Vector Representations of Words and Chords in Music for Genre Classification. SMM 2019 - [c558]Shrikanth Narayanan:
Understanding affective expressions and experiences through behavioral machine intelligence. SMM 2019 - [i35]Taruna Agrawal, Rahul Gupta, Shrikanth S. Narayanan:
On evaluating CNN representations for low resource medical image classification. CoRR abs/1903.11176 (2019) - [i34]Karel Mundnich, Brandon M. Booth, Benjamin Girault, Shrikanth S. Narayanan:
Generating Labels for Regression of Subjective Constructs using Triplet Embeddings. CoRR abs/1904.01643 (2019) - [i33]Krishna Somandepalli, Naveen Kumar, Ruchir Travadi, Shrikanth S. Narayanan:
Multimodal Representation Learning using Deep Multiset Canonical Correlation. CoRR abs/1904.01775 (2019) - [i32]Md. Nasir, Sandeep Nallan Chakravarthula, Brian R. Baucom, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Modeling Interpersonal Linguistic Coordination in Conversations using Word Mover's Distance. CoRR abs/1904.06002 (2019) - [i31]Victor R. Martinez, Anil Ramakrishna, Ming-Chang Chiu, Karan Singla, Shrikanth S. Narayanan:
A system for the 2019 Sentiment, Emotion and Cognitive State Task of DARPAs LORELEI project. CoRR abs/1905.00472 (2019) - [i30]Kunal Dhawan, Colin Vaz, Ruchir Travadi, Shrikanth S. Narayanan:
Towards Adapting NMF Dictionaries Using Total Variability Modeling for Noise-Robust Acoustic Features. CoRR abs/1907.06859 (2019) - [i29]Shih-Fu Chang, Alexander G. Hauptmann, Louis-Philippe Morency, Sameer K. Antani, Dick C. A. Bulterman, Carlos Busso, Joyce Yue Chai, Julia Hirschberg, Ramesh C. Jain, Ketan Mayer-Patel, Reuven Meth, Raymond J. Mooney, Klara Nahrstedt, Shrikanth S. Narayanan, Prem Natarajan, Sharon L. Oviatt, Balakrishnan Prabhakaran, Arnold W. M. Smeulders, Hari Sundaram, Zhengyou Zhang, Michelle X. Zhou:
Report of 2017 NSF Workshop on Multimedia Challenges, Opportunities and Research Roadmaps. CoRR abs/1908.02308 (2019) - [i28]Prashanth Gurunath Shivakumar, Shao-Yen Tseng, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Behavior Gated Language Models. CoRR abs/1909.00107 (2019) - [i27]Vidhyasaharan Sethu, Emily Mower Provost, Julien Epps, Carlos Busso, Nicholas Cummins, Shrikanth S. Narayanan:
The Ambiguous World of Emotion Representation. CoRR abs/1909.00360 (2019) - [i26]Shao-Yen Tseng, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multimodal Embeddings from Language Models. CoRR abs/1909.04302 (2019) - [i25]Prashanth Gurunath Shivakumar, Naveen Kumar, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Incremental Online Spoken Language Understanding. CoRR abs/1910.10287 (2019) - [i24]Monisankha Pal, Manoj Kumar, Raghuveer Peri, Tae Jin Park, So Hyun Kim, Catherine Lord, Somer Bishop, Shrikanth S. Narayanan:
Speaker diarization using latent space clustering in generative adversarial network. CoRR abs/1910.11398 (2019) - [i23]Nithin Rao Koluguri, Manoj Kumar, So Hyun Kim, Catherine Lord, Shrikanth S. Narayanan:
Meta-learning for robust child-adult classification from speech. CoRR abs/1910.11400 (2019) - [i22]Monisankha Pal, Manoj Kumar, Raghuveer Peri, Shrikanth S. Narayanan:
A study of semi-supervised speaker diarization system using gan mixture model. CoRR abs/1910.11416 (2019) - [i21]Rimita Lahiri, Manoj Kumar, Somer Bishop, Shrikanth S. Narayanan:
Learning Domain Invariant Representations for Child-Adult Classification from Speech. CoRR abs/1910.11472 (2019) - [i20]Raghuveer Peri, Monisankha Pal, Arindam Jati, Krishna Somandepalli, Shrikanth S. Narayanan:
Robust speaker recognition using unsupervised adversarial invariance. CoRR abs/1911.00940 (2019) - [i19]Haoqi Li, Ming Tu, Jing Huang, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Speaker-invariant Affective Representation Learning via Adversarial Training. CoRR abs/1911.01533 (2019) - [i18]Arindam Jati, Amrutha Nadarajan, Karel Mundnich, Shrikanth S. Narayanan:
Characterizing dynamically varying acoustic scenes from egocentric audio recordings in workplace setting. CoRR abs/1911.03843 (2019) - [i17]Nazgol Tavabi, Homa Hosseinmardi, Jennifer L. Villatte, Andrés Abeliuk, Shrikanth S. Narayanan, Emilio Ferrara, Kristina Lerman:
Learning Behavioral Representations from Wearable Sensors. CoRR abs/1911.06959 (2019) - [i16]Nikolaos Flemotomos, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Language Aided Speaker Diarization Using Speaker Role Information. CoRR abs/1911.07994 (2019) - [i15]Sandeep Nallan Chakravarthula, Brian R. Baucom, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
An analysis of observation length requirements in spoken language for machine understanding of human behaviors. CoRR abs/1911.09515 (2019) - 2018
- [j122]Vikram Ramanarayanan, Sam Tilsen, Michael I. Proctor, Johannes Töger, Louis Goldstein, Krishna S. Nayak, Shrikanth S. Narayanan:
Analysis of speech production real-time MRI. Comput. Speech Lang. 52: 1-22 (2018) - [j121]Nikolaos Malandrakis, Anil Ramakrishna, Victor R. Martinez, Tanner Sorensen, Dogan Can, Shrikanth S. Narayanan:
The ELISA Situation Frame extraction for low resource languages pipeline for LoReHLT'2016. Mach. Transl. 32(1-2): 127-142 (2018) - [j120]Benjamin Parrell, Shrikanth S. Narayanan:
Explaining Coronal Reduction: Prosodic Structure and Articulatory Posture. Phonetica 75(2): 151-181 (2018) - [j119]Tanaya Guha, Zhaojun Yang, Ruth B. Grossman, Shrikanth S. Narayanan:
A Computational Study of Expressive Facial Dynamics in Children with Autism. IEEE Trans. Affect. Comput. 9(1): 14-20 (2018) - [j118]Rahul Gupta, Kartik Audhkhasi, Zach Jacokes, Agata Rozga, Shrikanth S. Narayanan:
Modeling Multiple Time Series Annotations as Noisy Distortions of the Ground Truth: An Expectation-Maximization Approach. IEEE Trans. Affect. Comput. 9(1): 76-89 (2018) - [j117]Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Acoustic Denoising Using Dictionary Learning With Spectral and Temporal Regularization. IEEE ACM Trans. Audio Speech Lang. Process. 26(5): 967-980 (2018) - [j116]Krishna Somandepalli, Naveen Kumar, Tanaya Guha, Shrikanth S. Narayanan:
Unsupervised Discovery of Character Dictionaries in Animation Movies. IEEE Trans. Multim. 20(3): 539-551 (2018) - [j115]Benjamin Girault, Antonio Ortega, Shrikanth S. Narayanan:
Irregularity-Aware Graph Fourier Transforms. IEEE Trans. Signal Process. 66(21): 5746-5761 (2018) - [c557]Karan Singla, Dogan Can, Shrikanth S. Narayanan:
A Multi-task Approach to Learning Multilingual Representations. ACL (2) 2018: 214-220 - [c556]Hsien-Te Kao, Homa Hosseinmardi, Shen Yan, Michelle Hasan, Shrikanth S. Narayanan, Kristina Lerman, Emilio Ferrara:
Discovering Latent Psychological Structures from Self-Report Assessments of Hospital Workers. BESC 2018: 156-161 - [c555]Brandon M. Booth, Taylor J. Seamans, Shrikanth S. Narayanan:
An Evaluation of EEG-based Metrics for Engagement Assessment of Distance Learners. EMBC 2018: 307-310 - [c554]Brandon M. Booth, Karel Mundnich, Shrikanth S. Narayanan:
A Novel Method for Human Bias Correction of Continuous-Time Annotations. ICASSP 2018: 3091-3095 - [c553]Rahul Gupta, Saurabh Sahu, Carol Y. Espy-Wilson, Shrikanth S. Narayanan:
Semi-Supervised and Transfer Learning Approaches for Low Resource Sentiment Classification. ICASSP 2018: 5109-5113 - [c552]Manoj Kumar, Pavlos Papadopoulos, Ruchir Travadi, Daniel Bone, Shrikanth S. Narayanan:
Improving Semi-Supervised Classification for Low-Resource Speech Interaction Applications. ICASSP 2018: 5149-5153 - [c551]Dogan Can, Victor R. Martinez, Pavlos Papadopoulos, Shrikanth S. Narayanan:
Pykaldi: A Python Wrapper for Kaldi. ICASSP 2018: 5889-5893 - [c550]Che-Wei Huang, Shrikanth S. Narayanan:
Shaking Acoustic Spectral Sub-Bands Can Better Regularize Learning in Affective Computing. ICASSP 2018: 6827-6831 - [c549]Shrikanth S. Narayanan:
A Multimodal Approach to Understanding Human Vocal Expressions and Beyond. ICMI 2018: 1 - [c548]Victor Ardulov, Madelyn Mendlen, Manoj Kumar, Neha Anand, Shanna Williams, Thomas D. Lyon, Shrikanth S. Narayanan:
Multimodal Interaction Modeling of Child Forensic Interviewing. ICMI 2018: 179-185 - [c547]Krishna Somandepalli, Victor R. Martinez, Naveen Kumar, Shrikanth S. Narayanan:
Multimodal Representation of Advertisements Using Segment-level Autoencoders. ICMI 2018: 418-422 - [c546]Rajat Hebbar, Krishna Somandepalli, Shrikanth S. Narayanan:
Improving Gender Identification in Movie Audio Using Cross-Domain Data. INTERSPEECH 2018: 282-286 - [c545]Jilt Sebastian, Manoj Kumar, Pavan Kumar D. S., Mathew Magimai-Doss, Hema A. Murthy, Shrikanth S. Narayanan:
Denoising and Raw-waveform Networks for Weakly-Supervised Gender Identification on Noisy Speech. INTERSPEECH 2018: 292-296 - [c544]Pavlos Papadopoulos, Colin Vaz, Shrikanth S. Narayanan:
Exploring the Relationship between Conic Affinity of NMF Dictionaries and Speech Enhancement Metrics. INTERSPEECH 2018: 1146-1150 - [c543]Nikolaos Flemotomos, Pavlos Papadopoulos, James Gibson, Shrikanth S. Narayanan:
Combined Speaker Clustering and Role Recognition in Conversational Speech. INTERSPEECH 2018: 1378-1382 - [c542]Nikolaos Flemotomos, Victor R. Martinez, James Gibson, David C. Atkins, Torrey A. Creed, Shrikanth S. Narayanan:
Language Features for Automated Evaluation of Cognitive Behavior Psychotherapy Sessions. INTERSPEECH 2018: 1908-1912 - [c541]Anil Ramakrishna, Timothy Greer, David C. Atkins, Shrikanth S. Narayanan:
Computational Modeling of Conversational Humor in Psychotherapy. INTERSPEECH 2018: 2344-2348 - [c540]Manoj Kumar, Pooja Chebolu, So Hyun Kim, Kassandra Martinez, Catherine Lord, Shrikanth S. Narayanan:
A Knowledge Driven Structural Segmentation Approach for Play-Talk Classification During Autism Assessment. INTERSPEECH 2018: 2763-2767 - [c539]Karan Singla, Zhuohao Chen, Nikolaos Flemotomos, James Gibson, Dogan Can, David C. Atkins, Shrikanth S. Narayanan:
Using Prosodic and Lexical Information for Learning Utterance-level Behaviors in Psychotherapy. INTERSPEECH 2018: 3413-3417 - [c538]Md. Nasir, Brian R. Baucom, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Towards an Unsupervised Entrainment Distance in Conversational Speech Using Deep Neural Networks. INTERSPEECH 2018: 3423-3427 - [c537]Che-Wei Huang, Shrikanth S. Narayanan:
Stochastic Shake-Shake Regularization for Affective Learning from Speech. INTERSPEECH 2018: 3658-3662 - [c536]Brandon M. Booth, Karel Mundnich, Shrikanth S. Narayanan:
Fusing Annotations with Majority Vote Triplet Embeddings. AVEC@MM 2018: 83-89 - [c535]Tiantian Feng, Amrutha Nadarajan, Colin Vaz, Brandon M. Booth, Shrikanth S. Narayanan:
TILES audio recorder: an unobtrusive wearable solution to track audio activity. WearSys@MobiSys 2018: 33-38 - [c534]Christos Baziotis, Athanasiou Nikolaos, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth S. Narayanan, Alexandros Potamianos:
NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning. SemEval@NAACL-HLT 2018: 245-255 - [c533]Nikolaos Flemotomos, Zhuohao Chen, David C. Atkins, Shrikanth S. Narayanan:
Role Annotated Speech Recognition for Conversational Interactions. SLT 2018: 1036-1043 - [i14]Christos Baziotis, Nikos Athanasiou, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth S. Narayanan, Alexandros Potamianos:
NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning. CoRR abs/1804.06658 (2018) - [i13]Che-Wei Huang, Shrikanth S. Narayanan:
Shaking Acoustic Spectral Sub-bands Can Better Regularize Learning in Affective Computing. CoRR abs/1804.06779 (2018) - [i12]Md. Nasir, Brian R. Baucom, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Towards an Unsupervised Entrainment Distance in Conversational Speech using Deep Neural Networks. CoRR abs/1804.08782 (2018) - [i11]Rahul Gupta, Saurabh Sahu, Carol Y. Espy-Wilson, Shrikanth S. Narayanan:
Semi-supervised and Transfer learning approaches for low resource sentiment classification. CoRR abs/1806.02863 (2018) - [i10]Victor Ardulov, Manoj Kumar, Shanna Williams, Thomas D. Lyon, Shrikanth S. Narayanan:
Measuring Conversational Productivity in Child Forensic Interviews. CoRR abs/1806.03357 (2018) - [i9]Che-Wei Huang, Shrikanth S. Narayanan:
Normalization Before Shaking Toward Learning Symmetrically Distributed Representation Without Margin in Speech Emotion Recognition. CoRR abs/1808.00876 (2018) - [i8]Homa Hosseinmardi, Amir Ghasemian, Shrikanth S. Narayanan, Kristina Lerman, Emilio Ferrara:
Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction. CoRR abs/1808.10867 (2018) - [i7]James Gibson, David C. Atkins, Torrey A. Creed, Zac E. Imel, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multi-label Multi-task Deep Learning for Behavioral Coding. CoRR abs/1810.12349 (2018) - 2017
- [j114]Adela C. Timmons, Theodora Chaspari, Sohyun C. Han, Laura Perrone, Shrikanth S. Narayanan, Gayla Margolin:
Using Multimodal Wearable Technology to Detect Conflict among Couples. Computer 50(3): 50-59 (2017) - [j113]Daniel Bone, Chi-Chun Lee, Theodora Chaspari, James Gibson, Shrikanth S. Narayanan:
Signal Processing and Machine Learning for Mental Health Research and Clinical Applications [Perspectives]. IEEE Signal Process. Mag. 34(5): 196-195 (2017) - [j112]James Gibson, Athanasios Katsamanis, Francisco Romero, Bo Xiao, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multiple Instance Learning for Behavioral Coding. IEEE Trans. Affect. Comput. 8(1): 81-94 (2017) - [j111]Zhaojun Yang, Shrikanth S. Narayanan:
Modeling Dynamics of Expressive Body Gestures In Dyadic Interactions. IEEE Trans. Affect. Comput. 8(3): 369-381 (2017) - [c532]Tad Hirsch, Kritzia Merced, Shrikanth S. Narayanan, Zac E. Imel, David C. Atkins:
Designing Contestability: Interaction Design, Machine Learning, and Mental Health. Conference on Designing Interactive Systems 2017: 95-99 - [c531]Zhaojun Yang, Boqing Gong, Shrikanth S. Narayanan:
Weighted geodesic flow kernel for interpersonal mutual influence modeling and emotion recognition in dyadic interactions. ACII 2017: 236-241 - [c530]Theodora Chaspari, Adela C. Timmons, Brian R. Baucom, Laura Perrone, Katherine J. W. Baucom, Panayiotis G. Georgiou, Gayla Margolin, Shrikanth S. Narayanan:
Exploring sparse representation measures of physiological synchrony for romantic couples. ACII 2017: 267-272 - [c529]Brandon M. Booth, Asem M. Ali, Shrikanth S. Narayanan, Ian Bennett, Aly A. Farag:
Toward active and unobtrusive engagement assessment of distance learners. ACII 2017: 470-476 - [c528]Anil Ramakrishna, Victor R. Martinez, Nikolaos Malandrakis, Karan Singla, Shrikanth S. Narayanan:
Linguistic analysis of differences in portrayal of movie characters. ACL (1) 2017: 1669-1678 - [c527]Shrikanth S. Narayanan:
Understanding individual-level speech variability: From novel speech production data to robust speaker recognition. CISS 2017: 1 - [c526]Taruna Agrawal, Rahul Gupta, Shrikanth S. Narayanan:
Multimodal detection of fake social media use through a fusion of classification and pairwise ranking systems. EUSIPCO 2017: 1045-1049 - [c525]Zisis Iason Skordilis, Asterios Toutios, Johannes Töger, Shrikanth S. Narayanan:
Estimation of vocal tract area function from volumetric Magnetic Resonance Imaging. ICASSP 2017: 924-928 - [c524]Ramasubramanian Balasubramanian, Theodora Chaspari, Shrikanth S. Narayanan:
A knowledge-driven framework for ECG representation and interpretation for wearable applications. ICASSP 2017: 1018-1022 - [c523]Sabyasachee Baruah, Rahul Gupta, Shrikanth S. Narayanan:
A knowledge transfer and boosting approach to the prediction of affect in movies. ICASSP 2017: 2876-2880 - [c522]Theodora Chaspari, Sohyun C. Han, Daniel Bone, Adela C. Timmons, Laura Perrone, Gayla Margolin, Shrikanth S. Narayanan:
Quantifying regulation mechanisms in dating couples through a dynamical systems model of acoustic and physiological arousal. ICASSP 2017: 3001-3005 - [c521]Benjamin Girault, Shrikanth S. Narayanan, Antonio Ortega:
Towards a definition of local stationarity for graph signals. ICASSP 2017: 4139-4143 - [c520]Benjamin Girault, Shrikanth S. Narayanan, Antonio Ortega, Paulo Gonçalves, Eric Fleury:
Grasp: A matlab toolbox for graph signal processing. ICASSP 2017: 6574-6575 - [c519]Che-Wei Huang, Shrikanth S. Narayanan:
Deep convolutional recurrent neural network with attention mechanism for robust speech emotion recognition. ICME 2017: 583-588 - [c518]Daniel Bone, Julia Mertens, Emily Zane, Sungbok Lee, Shrikanth S. Narayanan, Ruth B. Grossman:
Acoustic-Prosodic and Physiological Response to Stressful Interactions in Children with Autism Spectrum Disorder. INTERSPEECH 2017: 147-151 - [c517]Rachel Alexander, Tanner Sorensen, Asterios Toutios, Shrikanth S. Narayanan:
VCV Synthesis Using Task Dynamics to Animate a Factor-Based Articulatory Model. INTERSPEECH 2017: 244-248 - [c516]Krishna Somandepalli, Asterios Toutios, Shrikanth S. Narayanan:
Semantic Edge Detection for Tracking Vocal Tract Air-Tissue Boundaries in Real-Time Magnetic Resonance Images. INTERSPEECH 2017: 631-635 - [c515]Tanner Sorensen, Zisis Iason Skordilis, Asterios Toutios, Yoon-Chul Kim, Yinghua Zhu, Jangwon Kim, Adam C. Lammert, Vikram Ramanarayanan, Louis Goldstein, Dani Byrd, Krishna S. Nayak, Shrikanth S. Narayanan:
Database of Volumetric and Real-Time Vocal Tract MRI for Speech Science. INTERSPEECH 2017: 645-649 - [c514]Tanner Sorensen, Asterios Toutios, Johannes Töger, Louis Goldstein, Shrikanth S. Narayanan:
Test-Retest Repeatability of Articulatory Strategies Using Real-Time Magnetic Resonance Imaging. INTERSPEECH 2017: 994-998 - [c513]Qinyi Luo, Rahul Gupta, Shrikanth S. Narayanan:
Transfer Learning Between Concepts for Human Behavior Modeling: An Application to Sincerity and Deception Prediction. INTERSPEECH 2017: 1462-1466 - [c512]Ruchir Travadi, Shrikanth S. Narayanan:
A Distribution Free Formulation of the Total Variability Model. INTERSPEECH 2017: 1576-1580 - [c511]Pavlos Papadopoulos, Ruchir Travadi, Colin Vaz, Nikolaos Malandrakis, Ulf Hermjakob, Nima Pourdamghani, Michael Pust, Boliang Zhang, Xiaoman Pan, Di Lu, Ying Lin, Ondrej Glembek, Murali Karthick Baskar, Martin Karafiát, Lukás Burget, Mark Hasegawa-Johnson, Heng Ji, Jonathan May, Kevin Knight, Shrikanth S. Narayanan:
Team ELISA System for DARPA LORELEI Speech Evaluation 2016. INTERSPEECH 2017: 2053-2057 - [c510]Nikolaos Malandrakis, Ondrej Glembek, Shrikanth S. Narayanan:
Extracting Situation Frames from Non-English Speech: Evaluation Framework and Pilot Results. INTERSPEECH 2017: 2123-2127 - [c509]Nimisha Patil, Timothy Greer, Reed Blaylock, Shrikanth S. Narayanan:
Comparison of Basic Beatboxing Articulations Between Expert and Novice Artists Using Real-Time Magnetic Resonance Imaging. INTERSPEECH 2017: 2277-2281 - [c508]Reed Blaylock, Nimisha Patil, Timothy Greer, Shrikanth S. Narayanan:
Sounds of the Human Vocal Tract. INTERSPEECH 2017: 2287-2291 - [c507]Manoj Kumar, Daniel Bone, Kelly McWilliams, Shanna Williams, Thomas D. Lyon, Shrikanth S. Narayanan:
Multi-Scale Context Adaptation for Improving Child Automatic Speech Recognition in Child-Adult Spoken Interactions. INTERSPEECH 2017: 2730-2734 - [c506]Rahul Gupta, Saurabh Sahu, Carol Y. Espy-Wilson, Shrikanth S. Narayanan:
An Affect Prediction Approach Through Depression Severity Parameter Incorporation in Neural Networks. INTERSPEECH 2017: 3122-3126 - [c505]Karel Mundnich, Md. Nasir, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Exploiting Intra-Annotator Rating Consistency Through Copeland's Method for Estimation of Ground Truth Labels in Couples' Therapy. INTERSPEECH 2017: 3167-3171 - [c504]James Gibson, Dogan Can, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Attention Networks for Modeling Behaviors in Addiction Counseling. INTERSPEECH 2017: 3251-3255 - [c503]Md. Nasir, Brian R. Baucom, Craig J. Bryan, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Complexity in Speech and its Relation to Emotional Bond in Therapist-Patient Interactions During Suicide Risk Assessment Interviews. INTERSPEECH 2017: 3296-3300 - [c502]Pavlos Papadopoulos, Ruchir Travadi, Shrikanth S. Narayanan:
Global SNR Estimation of Speech Signals for Unknown Noise Conditions Using Noise Adapted Non-Linear Regression. INTERSPEECH 2017: 3842-3846 - [c501]Athanasia Kolovou, Filippos Kokkinos, Aris Fergadis, Pinelopi Papalampidi, Elias Iosif, Nikolaos Malandrakis, Elisavet Palogiannidi, Haris Papageorgiou, Shrikanth S. Narayanan, Alexandros Potamianos:
Tweester at SemEval-2017 Task 4: Fusion of Semantic-Affective and pairwise classification models for sentiment analysis in Twitter. SemEval@ACL 2017: 675-682 - [i6]Che-Wei Huang, Shrikanth S. Narayanan:
Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition. CoRR abs/1706.02901 (2017) - 2016
- [j110]Rahul Gupta, Kartik Audhkhasi, Sungbok Lee, Shrikanth S. Narayanan:
Detecting paralinguistic events in audio stream using context in features and probabilistic decisions. Comput. Speech Lang. 36: 72-92 (2016) - [j109]Ming Li, Jangwon Kim, Adam C. Lammert, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Speaker verification based on the fusion of speech acoustics and inverted articulatory signals. Comput. Speech Lang. 36: 196-211 (2016) - [j108]Vikram Ramanarayanan, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories. Comput. Speech Lang. 36: 330-346 (2016) - [j107]Rahul Gupta, Daniel Bone, Sungbok Lee, Shrikanth S. Narayanan:
Analysis of engagement behavior in children during dyadic interactions using prosodic cues. Comput. Speech Lang. 37: 47-66 (2016) - [j106]Naveen Kumar, Fatemeh Fazel, Milica Stojanovic, Shrikanth S. Narayanan:
Online rate adjustment for adaptive random access compressed sensing of time-varying fields. EURASIP J. Adv. Signal Process. 2016: 48 (2016) - [j105]Angeliki Metallinou, Zhaojun Yang, Chi-Chun Lee, Carlos Busso, Sharon Carnicke, Shrikanth S. Narayanan:
The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations. Lang. Resour. Evaluation 50(3): 497-521 (2016) - [j104]Abe Kazemzadeh, James Gibson, Panayiotis G. Georgiou, Sungbok Lee, Shrikanth S. Narayanan:
A Socratic epistemology for verbal emotional intelligence. PeerJ Comput. Sci. 2: e40 (2016) - [j103]Bo Xiao, Che-Wei Huang, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A technology prototype system for rating therapist empathy from audio recordings in addiction counseling. PeerJ Comput. Sci. 2: e59 (2016) - [j102]Florian Eyben, Klaus R. Scherer, Björn W. Schuller, Johan Sundberg, Elisabeth André, Carlos Busso, Laurence Y. Devillers, Julien Epps, Petri Laukka, Shrikanth S. Narayanan, Khiet P. Truong:
The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing. IEEE Trans. Affect. Comput. 7(2): 190-202 (2016) - [j101]Pavlos Papadopoulos, Andreas Tsiartas, Shrikanth S. Narayanan:
Long-Term SNR Estimation of Speech Signals in Known and Unknown Channel Conditions. IEEE ACM Trans. Audio Speech Lang. Process. 24(12): 2495-2506 (2016) - [j100]Theodora Chaspari, Andreas Tsiartas, Panagiotis Tsilifis, Shrikanth S. Narayanan:
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations. IEEE Trans. Signal Process. 64(12): 3077-3092 (2016) - [c500]Theodora Chaspari, Sohyun C. Han, Daniel Bone, Adela C. Timmons, Laura Perrone, Gayla Margolin, Shrikanth S. Narayanan:
Dynamical Systems Modeling of Acoustic and Physiological Arousal in Young Couples. AAAI Spring Symposia 2016 - [c499]Daniel Bone, James Gibson, Theodora Chaspari, Dogan Can, Shrikanth S. Narayanan:
Speech and language processing for mental health research and care. ACSSC 2016: 831-835 - [c498]Tad Hirsch, Geoff Gray, James Gibson, Shrikanth S. Narayanan, Zac E. Imel, David S. Atkins:
Developing an Automated Report Card for Addiction Counseling: The Counselor Observer Ratings Expert for MI (CORE-MI). AMIA 2016 - [c497]Theodora Chaspari, Andreas Tsiartas, Leah I. Stein Duker, Sharon A. Cermak, Shrikanth S. Narayanan:
EDA-gram: Designing electrodermal activity fingerprints for visualization and feature extraction. EMBC 2016: 403-406 - [c496]Benjamin Girault, Paulo Gonçalves, Shrikanth S. Narayanan, Antonio Ortega:
Localization bounds for the graph translation. GlobalSIP 2016: 331-335 - [c495]Zhaojun Yang, Shrikanth S. Narayanan:
Lightly-supervised utterance-level emotion identification using latent topic modeling of multimodal words. ICASSP 2016: 2767-2771 - [c494]Adarsh Tadimari, Naveen Kumar, Tanaya Guha, Shrikanth S. Narayanan:
Opening big in box office? Trailer content can help. ICASSP 2016: 2777-2781 - [c493]Ankit Goyal, Naveen Kumar, Tanaya Guha, Shrikanth S. Narayanan:
A multimodal mixture-of-experts model for dynamic emotion prediction in movies. ICASSP 2016: 2822-2826 - [c492]Colin Vaz, Dimitrios Dimitriadis, Samuel Thomas, Shrikanth S. Narayanan:
CNMF-based acoustic features for noise-robust ASR. ICASSP 2016: 5735-5739 - [c491]Rahul Gupta, Theodora Chaspari, Jangwon Kim, Naveen Kumar, Daniel Bone, Shrikanth S. Narayanan:
Pathological speech processing: State-of-the-art, current challenges, and future directions. ICASSP 2016: 6470-6474 - [c490]Zhaojun Yang, Shrikanth S. Narayanan:
Analyzing Temporal Dynamics of Dyadic Synchrony in Affective Interactions. INTERSPEECH 2016: 42-46 - [c489]Johannes Töger, Yongwan Lim, Sajan Goud Lingala, Shrikanth S. Narayanan, Krishna S. Nayak:
Sensitivity of Quantitative RT-MRI Metrics of Vocal Tract Dynamics to Image Reconstruction Settings. INTERSPEECH 2016: 165-169 - [c488]Sarah Harper, Louis Goldstein, Shrikanth S. Narayanan:
L2 Acquisition and Production of the English Rhotic Pharyngeal Gesture. INTERSPEECH 2016: 208-212 - [c487]Adam C. Lammert, Christine H. Shadle, Shrikanth S. Narayanan, Thomas F. Quatieri:
Investigation of Speed-Accuracy Tradeoffs in Speech Production Using Real-Time Magnetic Resonance Imaging. INTERSPEECH 2016: 460-464 - [c486]Tanner Sorensen, Asterios Toutios, Louis Goldstein, Shrikanth S. Narayanan:
Characterizing Vocal Tract Dynamics Across Speakers Using Real-Time MRI. INTERSPEECH 2016: 465-469 - [c485]Sajan Goud Lingala, Asterios Toutios, Johannes Töger, Yongwan Lim, Yinghua Zhu, Yoon-Chul Kim, Colin Vaz, Shrikanth S. Narayanan, Krishna S. Nayak:
State-of-the-Art MRI Protocol for Comprehensive Assessment of Vocal Tract Structure and Function. INTERSPEECH 2016: 475-479 - [c484]Rahul Gupta, Nishant Nath, Taruna Agrawal, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Laughter Valence Prediction in Motivational Interviewing Based on Lexical and Acoustic Cues. INTERSPEECH 2016: 505-509 - [c483]Md. Nasir, Brian R. Baucom, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Complexity in Prosody: A Nonlinear Dynamical Systems Approach for Dyadic Conversations; Behavior and Outcomes in Couples Therapy. INTERSPEECH 2016: 893-897 - [c482]Bo Xiao, Dogan Can, James Gibson, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Behavioral Coding of Therapist Language in Addiction Counseling Using Recurrent Neural Networks. INTERSPEECH 2016: 908-912 - [c481]Colin Vaz, Asterios Toutios, Shrikanth S. Narayanan:
Convex Hull Convolutive Non-Negative Matrix Factorization for Uncovering Temporal Patterns in Multivariate Time-Series Data. INTERSPEECH 2016: 963-967 - [c480]Reed Blaylock, Louis Goldstein, Shrikanth S. Narayanan:
Velum Control for Oral Sounds. INTERSPEECH 2016: 1084-1088 - [c479]Daniel Bone, Somer Bishop, Rahul Gupta, Sungbok Lee, Shrikanth S. Narayanan:
Acoustic-Prosodic and Turn-Taking Features in Interactions with Children with Neurodevelopmental Disorders. INTERSPEECH 2016: 1185-1189 - [c478]Che-Wei Huang, Shrikanth S. Narayanan:
Attention Assisted Discovery of Sub-Utterance Structure in Speech Emotion Recognition. INTERSPEECH 2016: 1387-1391 - [c477]Rahul Gupta, Shrikanth S. Narayanan:
Predicting Affective Dimensions Based on Self Assessed Depression Severity. INTERSPEECH 2016: 1427-1431 - [c476]James Gibson, Dogan Can, Bo Xiao, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A Deep Learning Approach to Modeling Empathy in Addiction Counseling. INTERSPEECH 2016: 1447-1451 - [c475]Asterios Toutios, Tanner Sorensen, Krishna Somandepalli, Rachel Alexander, Shrikanth S. Narayanan:
Articulatory Synthesis Based on Real-Time Magnetic Resonance Imaging Data. INTERSPEECH 2016: 1492-1496 - [c474]Anil Ramakrishna, Rahul Gupta, Ruth B. Grossman, Shrikanth S. Narayanan:
An Expectation Maximization Approach to Joint Modeling of Multidimensional Ratings Derived from Multiple Annotators. INTERSPEECH 2016: 1555-1559 - [c473]Yongwan Lim, Sajan Goud Lingala, Asterios Toutios, Shrikanth S. Narayanan, Krishna S. Nayak:
Improved Depiction of Tissue Boundaries in Vocal Tract Real-Time MRI Using Automatic Off-Resonance Correction. INTERSPEECH 2016: 1765-1769 - [c472]Brandon M. Booth, Rahul Gupta, Pavlos Papadopoulos, Ruchir Travadi, Shrikanth S. Narayanan:
Automatic Estimation of Perceived Sincerity from Spoken Language. INTERSPEECH 2016: 2021-2025 - [c471]Naveen Kumar, Md. Nasir, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Robust Multichannel Gender Classification from Speech in Movie Audio. INTERSPEECH 2016: 2233-2237 - [c470]Asterios Toutios, Sajan Goud Lingala, Colin Vaz, Jangwon Kim, John H. Esling, Patricia A. Keating, Matthew Gordon, Dani Byrd, Louis Goldstein, Krishna S. Nayak, Shrikanth S. Narayanan:
Illustrating the Production of the International Phonetic Alphabet Sounds Using Fast Real-Time Magnetic Resonance Imaging. INTERSPEECH 2016: 2428-2432 - [c469]Mairym Lloréns Monteserín, Shrikanth S. Narayanan, Louis Goldstein:
Perceptual Lateralization of Coda Rhotic Production in Puerto Rican Spanish. INTERSPEECH 2016: 2443-2447 - [c468]Manoj Kumar, Rahul Gupta, Daniel Bone, Nikolaos Malandrakis, Somer Bishop, Shrikanth S. Narayanan:
Objective Language Feature Analysis in Children with Neurodevelopmental Disorders During Autism Assessment. INTERSPEECH 2016: 2721-2725 - [c467]Pavlos Papadopoulos, Colin Vaz, Shrikanth S. Narayanan:
Noise Aware and Combined Noise Models for Speech Denoising in Unknown Noise Conditions. INTERSPEECH 2016: 2866-2869 - [c466]Ruchir Travadi, Shrikanth S. Narayanan:
Non-Iterative Parameter Estimation for Total Variability Model Using Randomized Singular Value Decomposition. INTERSPEECH 2016: 3221-3225 - [c465]Che-Wei Huang, Shrikanth S. Narayanan:
Flow of Renyi information in deep neural networks. MLSP 2016: 1-6 - [c464]Krishna Somandepalli, Rahul Gupta, Md. Nasir, Brandon M. Booth, Sungbok Lee, Shrikanth S. Narayanan:
Online Affect Tracking with Multimodal Kalman Filters. AVEC@ACM Multimedia 2016: 59-66 - [c463]Che-Wei Huang, Shrikanth S. Narayanan:
Comparison of feature-level and kernel-level data fusion methods in multi-sensory fall detection. MMSP 2016: 1-6 - [c461]Naveen Kumar, Tanaya Guha, Che-Wei Huang, Colin Vaz, Shrikanth S. Narayanan:
Novel affective features for multiscale prediction of emotion in music. MMSP 2016: 1-5 - [c460]Shrikanth S. Narayanan:
Understanding individual-level speech variability: From novel speech production data to robust speaker recognition. Odyssey 2016 - [c459]Elisavet Palogiannidi, Athanasia Kolovou, Fenia Christopoulou, Filippos Kokkinos, Elias Iosif, Nikolaos Malandrakis, Haris Papageorgiou, Shrikanth S. Narayanan, Alexandros Potamianos:
Tweester at SemEval-2016 Task 4: Sentiment Analysis in Twitter Using Semantic-Affective Model Adaptation. SemEval@NAACL-HLT 2016: 155-163 - [i5]Benjamin Girault, Paulo Gonçalves, Shrikanth S. Narayanan, Antonio Ortega:
Localization bounds for the graph translation. CoRR abs/1609.08820 (2016) - [i4]Rahul Gupta, Shrikanth S. Narayanan:
Inferring object rankings based on noisy pairwise comparisons from multiple annotators. CoRR abs/1612.04413 (2016) - 2015
- [j99]Urbashi Mitra, Sunav Choudhary, Franz S. Hover, Robert Hummel, Naveen Kumar, Shrikanth S. Narayanan, Milica Stojanovic, Gaurav S. Sukhatme:
Structured sparse methods for active ocean observation systems with communication constraints. IEEE Commun. Mag. 53(11): 88-96 (2015) - [j98]Jangwon Kim, Naveen Kumar, Andreas Tsiartas, Ming Li, Shrikanth S. Narayanan:
Automatic intelligibility classification of sentence-level pathological speech. Comput. Speech Lang. 29(1): 132-144 (2015) - [j97]Maarten Van Segbroeck, Ruchir Travadi, Shrikanth S. Narayanan:
Rapid Language Identification. IEEE ACM Trans. Audio Speech Lang. Process. 23(7): 1118-1129 (2015) - [j96]Theodora Chaspari, Andreas Tsiartas, Leah I. Stein, Sharon A. Cermak, Shrikanth S. Narayanan:
Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries. IEEE Trans. Biomed. Eng. 62(3): 960-971 (2015) - [j95]Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Head Motion Modeling for Human Behavior Analysis in Dyadic Interaction. IEEE Trans. Multim. 17(7): 1107-1119 (2015) - [c458]Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Modeling head motion entrainment for prediction of couples' behavioral characteristics. ACII 2015: 91-97 - [c457]Angeliki Metallinou, Athanasios Katsamanis, Martin Wöllmer, Florian Eyben, Björn W. Schuller, Shrikanth S. Narayanan:
Context-sensitive learning for enhanced audiovisual emotion classification (Extended abstract). ACII 2015: 463-469 - [c456]Anil Ramakrishna, Nikolaos Malandrakis, Elizabeth Staruk, Shrikanth S. Narayanan:
A quantitative analysis of gender differences in movies using psycholinguistic normatives. EMNLP 2015: 1996-2001 - [c455]Dogan Can, Shrikanth S. Narayanan:
A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices. EMNLP 2015: 2388-2397 - [c454]Rahul Gupta, Naveen Kumar, Shrikanth S. Narayanan:
Affect prediction in music using boosted ensemble of filters. EUSIPCO 2015: 11-15 - [c453]Tanaya Guha, Zhaojun Yang, Anil Ramakrishna, Ruth B. Grossman, Darren Hedley, Sungbok Lee, Shrikanth S. Narayanan:
On quantifying facial expression-related atypicality of children with Autism Spectrum Disorder. ICASSP 2015: 803-807 - [c452]Theodora Chaspari, Brian R. Baucom, Adela C. Timmons, Andreas Tsiartas, Larissa Borofsky Del Piero, Katherine J. W. Baucom, Panayiotis G. Georgiou, Gayla Margolin, Shrikanth S. Narayanan:
Quantifying EDA synchrony through joint sparse representation: A case-study of couples' interactions. ICASSP 2015: 817-821 - [c451]Md. Nasir, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Redundancy analysis of behavioral coding for couples therapy and improved estimation of behavior from noisy annotations. ICASSP 2015: 1886-1890 - [c450]Rahul Gupta, Kartik Audhkhasi, Shrikanth S. Narayanan:
A mixture of experts approach towards intelligibility classification of pathological speech. ICASSP 2015: 1986-1990 - [c449]Zhaojun Yang, Shrikanth S. Narayanan:
Modeling mutual influence of multimodal behavior in affective dyadic interactions. ICASSP 2015: 2234-2238 - [c448]Tanaya Guha, Naveen Kumar, Shrikanth S. Narayanan, Stacy L. Smith:
Computationally deconstructing movie narratives: An informatics approach. ICASSP 2015: 2264-2268 - [c447]Samuel Thomas, George Saon, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Improvements to the IBM speech activity detection system for the DARPA RATS program. ICASSP 2015: 4500-4504 - [c446]Tanaya Guha, Che-Wei Huang, Naveen Kumar, Yan Zhu, Shrikanth S. Narayanan:
Gender Representation in Cinematic Content: A Multimodal Approach. ICMI 2015: 31-34 - [c445]Yoon-Jeong Lee, Louis Goldstein, Shrikanth S. Narayanan:
Systematic variation in the articulation of the Korean liquid across prosodic positions. ICPhS 2015 - [c444]Alexsandro R. Meireles, Louis Goldstein, Reed Blaylock, Shrikanth S. Narayanan:
Gestural coordination of Brazilian Portuguese nasal vowels in CV syllables: A real-time MRI study. ICPhS 2015 - [c443]Michael I. Proctor, Chi Yhun Lo, Shrikanth S. Narayanan:
Articulation of English vowels in running speech: A real-time MRI study. ICPhS 2015 - [c442]Asterios Toutios, Shrikanth S. Narayanan:
Factor analysis of vocal-tract outlines derived from real-time magnetic resonance imaging data. ICPhS 2015 - [c441]Naveen Kumar, Shrikanth S. Narayanan:
A discriminative reliability-aware classification model with applications to intelligibility classification in pathological speech. INTERSPEECH 2015: 90-94 - [c440]Dogan Can, David C. Atkins, Shrikanth S. Narayanan:
A dialog act tagging approach to behavioral coding: a case study of addiction counseling conversations. INTERSPEECH 2015: 339-343 - [c439]Zisis Iason Skordilis, Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan:
Experimental assessment of the tongue incompressibility hypothesis during speech production. INTERSPEECH 2015: 384-388 - [c438]Matthew P. Black, Daniel Bone, Zisis Iason Skordilis, Rahul Gupta, Wei Xia, Pavlos Papadopoulos, Sandeep Nallan Chakravarthula, Bo Xiao, Maarten Van Segbroeck, Jangwon Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Automated evaluation of non-native English pronunciation quality: combining knowledge- and data-driven features at multiple time scales. INTERSPEECH 2015: 493-497 - [c437]Jangwon Kim, Md. Nasir, Rahul Gupta, Maarten Van Segbroeck, Daniel Bone, Matthew P. Black, Zisis Iason Skordilis, Zhaojun Yang, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Automatic estimation of parkinson's disease severity from diverse speech tasks. INTERSPEECH 2015: 914-918 - [c436]Chi-Chun Lee, Daniel Bone, Shrikanth S. Narayanan:
An analysis of the relationship between signal-derived vocal arousal score and human emotion production and perception. INTERSPEECH 2015: 1304-1308 - [c435]Daniel Bone, Matthew P. Black, Anil Ramakrishna, Ruth B. Grossman, Shrikanth S. Narayanan:
Acoustic-prosodic correlates of 'awkward' prosody in story retellings from adolescents with autism. INTERSPEECH 2015: 1616-1620 - [c434]Colin Vaz, Shrikanth S. Narayanan:
Learning a speech manifold for signal subspace speech denoising. INTERSPEECH 2015: 1735-1739 - [c433]Ruchir Travadi, Shrikanth S. Narayanan:
Ensemble of Gaussian mixture localized neural networks with application to phone recognition. INTERSPEECH 2015: 1903-1907 - [c432]James Gibson, Nikolaos Malandrakis, Francisco Romero, David C. Atkins, Shrikanth S. Narayanan:
Predicting therapist empathy in motivational interviews using language features inspired by psycholinguistic norms. INTERSPEECH 2015: 1947-1951 - [c431]Nikolaos Malandrakis, Shrikanth S. Narayanan:
Therapy language analysis using automatically generated psycholinguistic norms. INTERSPEECH 2015: 1952-1956 - [c430]Rahul Gupta, Theodora Chaspari, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Analysis and modeling of the role of laughter in motivational interviewing based psychotherapy conversations. INTERSPEECH 2015: 1962-1966 - [c429]Bo Xiao, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Analyzing speech rate entrainment and its relation to therapist empathy in drug addiction counseling. INTERSPEECH 2015: 2489-2493 - [c428]Md. Nasir, Wei Xia, Bo Xiao, Brian R. Baucom, Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Still together?: the role of acoustic features in predicting marital outcome. INTERSPEECH 2015: 2499-2503 - [c427]Taruna Agrawal, Rahul Gupta, Shrikanth S. Narayanan:
Retrieving Social Images using Relevance Filtering and Diverse Selection. MediaEval 2015 - [c426]Rahul Gupta, Shrikanth S. Narayanan:
Predicting Affect in Music Using Regression Methods on Low Level Features. MediaEval 2015 - [c425]Shrikanth S. Narayanan:
Keynote speech 4: Extraction of linguistic and paralinguistic information from audio-visual data. O-COCOSDA/CASLRE 2015: 1-2 - [i3]Abe Kazemzadeh, James Gibson, Panayiotis G. Georgiou, Sungbok Lee, Shrikanth S. Narayanan:
A Socratic epistemology for verbal emotional intelligence. PeerJ Prepr. 3: e1292 (2015) - 2014
- [j94]Daniel Bone, Ming Li, Matthew P. Black, Shrikanth S. Narayanan:
Intoxicated speech detection: A fusion framework with speaker-normalized hierarchical functionals and GMM supervectors. Comput. Speech Lang. 28(2): 375-391 (2014) - [j93]Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Computing vocal entrainment: A signal-derived PCA-based quantification scheme with application to affect analysis in married couple interactions. Comput. Speech Lang. 28(2): 518-539 (2014) - [j92]Ming Li, Shrikanth S. Narayanan:
Simplified supervised i-vector modeling with application to robust and efficient language identification and speaker verification. Comput. Speech Lang. 28(4): 940-958 (2014) - [j91]Adam C. Lammert, Louis Goldstein, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Gestural Control in the English Past-Tense Suffix: An Articulatory Study Using Real-Time MRI. Phonetica 71(4): 229-248 (2014) - [j90]Daniel Bone, Chi-Chun Lee, Shrikanth S. Narayanan:
Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features. IEEE Trans. Affect. Comput. 5(2): 201-213 (2014) - [j89]Kartik Audhkhasi, Andreas M. Zavou, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Theoretical Analysis of Diversity in an Ensemble of Automatic Speech Recognition Systems. IEEE ACM Trans. Audio Speech Lang. Process. 22(3): 711-726 (2014) - [j88]Zhaojun Yang, Angeliki Metallinou, Shrikanth S. Narayanan:
Analysis and Predictive Modeling of Body Language Behavior in Dyadic Interactions From Multimodal Interlocutor Cues. IEEE Trans. Multim. 16(6): 1766-1778 (2014) - [c424]Andreas Tsiartas, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Classification of clean and noisy bilingual movie audio for speech-to-speech translation corpora design. ICASSP 2014: 121-125 - [c423]Zhaojun Yang, Angeliki Metallinou, Engin Erzin, Shrikanth S. Narayanan:
Analysis of interaction attitudes using data-driven hand gesture phrases. ICASSP 2014: 699-703 - [c422]Theodora Chaspari, Matthew S. Goodwin, Oliver Wilder-Smith, Amanda Gulsrud, Charlotte A. Mucchetti, Connie Kasari, Shrikanth S. Narayanan:
A non-homogeneous poisson process model of Skin Conductance Responses integrated with observed regulatory behaviors for Autism intervention. ICASSP 2014: 1611-1615 - [c421]Rahul Gupta, Kartik Audhkhasi, Shrikanth S. Narayanan:
Training ensemble of diverse classifiers on feature subsets. ICASSP 2014: 2927-2931 - [c420]Dogan Can, James Gibson, Colin Vaz, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Barista: A framework for concurrent speech processing by usc-sail. ICASSP 2014: 3306-3310 - [c419]James Gibson, Shrikanth S. Narayanan:
Learning multiple concepts with incremental diverse density. ICASSP 2014: 4558-4562 - [c418]Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Power-spectral analysis of head motion signal for behavioral modeling in human interaction. ICASSP 2014: 4593-4597 - [c417]Prashanth Gurunath Shivakumar, Ming Li, Vedant Dhandhania, Shrikanth S. Narayanan:
Simplified and supervised i-vector modeling for speaker age regression. ICASSP 2014: 4833-4837 - [c416]Nikolaos Malandrakis, Alexandros Potamianos, Kean J. Hsu, Kalina N. Babeva, Michelle C. Feng, Gerald C. Davison, Shrikanth S. Narayanan:
Affective language model adaptation via corpus selection. ICASSP 2014: 4838-4842 - [c415]Naveen Kumar, Maarten Van Segbroeck, Kartik Audhkhasi, Peter Drotár, Shrikanth S. Narayanan:
Fusion of diverse denoising systems for robust automatic speech recognition. ICASSP 2014: 5557-5561 - [c414]Colin Vaz, Andreas Tsiartas, Shrikanth S. Narayanan:
Energy-constrained minimum variance response filter for robust vowel spectral estimation. ICASSP 2014: 6275-6279 - [c413]Naveen Kumar, Shrikanth S. Narayanan:
Hull detection based on largest empty sector angle with application to analysis of realtime MR images. ICASSP 2014: 6617-6621 - [c412]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Semi-supervised term-weighted value rescoring for keyword search. ICASSP 2014: 7869-7873 - [c411]Pavlos Papadopoulos, Andreas Tsiartas, James Gibson, Shrikanth S. Narayanan:
A supervised signal-to-noise ratio estimation of speech signals. ICASSP 2014: 8237-8241 - [c410]Shrikanth S. Narayanan, Ayush Jaiswal, Yao-Yi Chiang, Yanhui Geng, Craig A. Knoblock, Pedro A. Szekely:
Integration and Automation of Data Preparation and Data Mining. ICDM Workshops 2014: 1076-1085 - [c409]Zhaojun Yang, Antonio Ortega, Shrikanth S. Narayanan:
Gesture dynamics modeling for attitude analysis using graph based transform. ICIP 2014: 1515-1519 - [c408]Jiun-Yu Kao, Antonio Ortega, Shrikanth S. Narayanan:
Graph-based approach for motion capture data representation and analysis. ICIP 2014: 2061-2065 - [c407]Shrikanth S. Narayanan:
Behavioral informatics from multimodal human interaction cues. SLAM@INTERSPEECH 2014: 1 - [c406]Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan:
Motor control primitives arising from a learned dynamical systems model of speech articulation. INTERSPEECH 2014: 150-154 - [c405]Jangwon Kim, Sungbok Lee, Shrikanth S. Narayanan:
Estimation of the movement trajectories of non-crucial articulators based on the detection of crucial moments and physiological constraints. INTERSPEECH 2014: 164-168 - [c404]Rahul Gupta, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Predicting client's inclination towards target behavior change in motivational interviewing and investigating the role of laughter. INTERSPEECH 2014: 208-212 - [c403]Bo Xiao, Daniel Bone, Maarten Van Segbroeck, Zac E. Imel, David C. Atkins, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Modeling therapist empathy through prosody in drug addiction counseling. INTERSPEECH 2014: 213-217 - [c402]Daniel Bone, Chi-Chun Lee, Alexandros Potamianos, Shrikanth S. Narayanan:
An investigation of vocal arousal dynamics in child-psychologist interactions using synchrony measures and a conversation-based model. INTERSPEECH 2014: 218-222 - [c401]Jangwon Kim, Donna Erickson, Sungbok Lee, Shrikanth S. Narayanan:
A study of invariant properties and variation patterns in the converter/distributor model for emotional speech. INTERSPEECH 2014: 413-417 - [c400]Che-Wei Huang, Bo Xiao, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Unsupervised speaker diarization using Riemannian manifold clustering. INTERSPEECH 2014: 567-571 - [c399]James Gibson, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Comparing time-frequency representations for directional derivative features. INTERSPEECH 2014: 612-615 - [c398]Andrés Benítez, Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan:
A real-time MRI study of articulatory setting in second language speech. INTERSPEECH 2014: 701-705 - [c397]Maarten Van Segbroeck, Ruchir Travadi, Colin Vaz, Jangwon Kim, Matthew P. Black, Alexandros Potamianos, Shrikanth S. Narayanan:
Classification of cognitive load from speech using an i-vector framework. INTERSPEECH 2014: 751-755 - [c396]Colin Vaz, Dimitrios Dimitriadis, Shrikanth S. Narayanan:
Enhancing audio source separability using spectro-temporal regularization with NMF. INTERSPEECH 2014: 855-859 - [c395]Abhay Prasad, Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Selection of optimal vocal tract regions using real-time magnetic resonance imaging for robust voice activity detection. INTERSPEECH 2014: 1539-1543 - [c394]Sriram Ganapathy, Kyu Jeong Han, Samuel Thomas, Mohamed Kamal Omar, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Robust language identification using convolutional neural network features. INTERSPEECH 2014: 1846-1850 - [c393]Zhaojun Yang, Shrikanth S. Narayanan:
Analysis of emotional effect on speech-body gesture interplay. INTERSPEECH 2014: 1934-1938 - [c392]Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Joint filtering and factorization for recovering latent structure from noisy speech data. INTERSPEECH 2014: 2365-2369 - [c391]Rahul Gupta, Sankaranarayanan Ananthakrishnan, Zhaojun Yang, Shrikanth S. Narayanan:
Variable Span disfluency detection in ASR transcripts. INTERSPEECH 2014: 2892-2896 - [c390]Maarten Van Segbroeck, Ruchir Travadi, Shrikanth S. Narayanan:
UBM fused total variability modeling for language identification. INTERSPEECH 2014: 3027-3031 - [c389]Ruchir Travadi, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Modified-prior i-vector estimation for language identification of short duration utterances. INTERSPEECH 2014: 3037-3041 - [c388]Naveen Kumar, Rahul Gupta, Tanaya Guha, Colin Vaz, Maarten Van Segbroeck, Jangwon Kim, Shrikanth S. Narayanan:
Affective Feature Design and Predicting Continuous Affective Dimensions from Music. MediaEval 2014 - [c387]Naveen Kumar, Shrikanth S. Narayanan:
Detection of Musical Event Drop from Crowdsourced Annotations Using a Noisy Channel Model. MediaEval 2014 - [c386]Rahul Gupta, Nikolaos Malandrakis, Bo Xiao, Tanaya Guha, Maarten Van Segbroeck, Matthew Black, Alexandros Potamianos, Shrikanth S. Narayanan:
Multimodal Prediction of Affective Dimensions and Depression in Human-Computer Interactions. AVEC@MM 2014: 33-40 - [c385]Kalliopi Zervanou, Nikolaos Malandrakis, Shrikanth S. Narayanan:
SAIL-GRS: Grammar Induction for Spoken Dialogue Systems using CF-IRF Rule Similarity. SemEval@COLING 2014: 508-511 - [c384]Nikolaos Malandrakis, Michael Falcone, Colin Vaz, Jesse James Bisogni, Alexandros Potamianos, Shrikanth S. Narayanan:
SAIL: Sentiment Analysis using Semantic Similarity and Contrast Features. SemEval@COLING 2014: 512-516 - [c383]Prashanth Gurunath Shivakumar, Alexandros Potamianos, Sungbok Lee, Shrikanth S. Narayanan:
Improving speech recognition for children using acoustic adaptation and pronunciation modeling. WOCCI 2014: 15-19 - 2013
- [j87]Abe Kazemzadeh, Sungbok Lee, Shrikanth S. Narayanan:
Fuzzy Logic Models for the Meaning of Emotion Words. IEEE Comput. Intell. Mag. 8(2): 34-49 (2013) - [j86]Björn W. Schuller, Stefan Steidl, Anton Batliner, Felix Burkhardt, Laurence Devillers, Christian A. Müller, Shrikanth S. Narayanan:
Paralinguistics in speech and language - State-of-the-art and the challenge. Comput. Speech Lang. 27(1): 4-39 (2013) - [j85]Ming Li, Kyu Jeong Han, Shrikanth S. Narayanan:
Automatic speaker age and gender recognition using acoustic and prosodic level information fusion. Comput. Speech Lang. 27(1): 151-167 (2013) - [j84]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Unsupervised data processing for classifier-based speech translator. Comput. Speech Lang. 27(2): 438-454 (2013) - [j83]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Enriching machine-mediated speech-to-speech translation using contextual information. Comput. Speech Lang. 27(2): 492-508 (2013) - [j82]JongHo Shin, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Enabling effective design of multimodal interfaces for speech-to-speech translation system: An empirical study of longitudinal user behaviors over time and user strategies for coping with errors. Comput. Speech Lang. 27(2): 554-571 (2013) - [j81]Andreas Tsiartas, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
High-quality bilingual subtitle document alignments with application to spontaneous speech translation. Comput. Speech Lang. 27(2): 572-591 (2013) - [j80]Angeliki Metallinou, Athanasios Katsamanis, Shrikanth S. Narayanan:
Tracking continuous emotional trends of participants during affective dyadic interactions using body language and speech information. Image Vis. Comput. 31(2): 137-152 (2013) - [j79]Kartik Audhkhasi, Shrikanth S. Narayanan:
A Globally-Variant Locally-Constant Model for Fusion of Labels from Multiple Diverse Experts without Using Reference Labels. IEEE Trans. Pattern Anal. Mach. Intell. 35(4): 769-783 (2013) - [j78]Shrikanth S. Narayanan, Panayiotis G. Georgiou:
Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language. Proc. IEEE 101(5): 1203-1233 (2013) - [j77]Gaël Richard, Shiva Sundaram, Shrikanth S. Narayanan:
An Overview on Perceptually Motivated Audio Indexing and Classification. Proc. IEEE 101(9): 1939-1954 (2013) - [j76]Matthew P. Black, Athanasios Katsamanis, Brian R. Baucom, Chi-Chun Lee, Adam C. Lammert, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features. Speech Commun. 55(1): 1-21 (2013) - [j75]Adam C. Lammert, Louis Goldstein, Shrikanth S. Narayanan, Khalil Iskarous:
Statistical methods for estimation of direct and differential kinematics of the vocal tract. Speech Commun. 55(1): 147-161 (2013) - [j74]Carlos Busso, Soroosh Mariooryad, Angeliki Metallinou, Shrikanth S. Narayanan:
Iterative Feature Normalization Scheme for Automatic Emotion Detection from Speech. IEEE Trans. Affect. Comput. 4(4): 386-397 (2013) - [j73]Nikos Malandrakis, Alexandros Potamianos, Elias Iosif, Shrikanth S. Narayanan:
Distributional Semantic Models for Affective Text Analysis. IEEE Trans. Speech Audio Process. 21(11): 2379-2392 (2013) - [j72]Theodosis Moschopoulos, Elias Iosif, Leeda Demetropoulou, Alexandros Potamianos, Shrikanth S. Narayanan:
Toward the Automatic Extraction of Policy Networks Using Web Links and Documents. IEEE Trans. Knowl. Data Eng. 25(10): 2404-2417 (2013) - [j71]Yinghua Zhu, Yoon-Chul Kim, Michael I. Proctor, Shrikanth S. Narayanan, Krishna S. Nayak:
Dynamic 3-D Visualization of Vocal Tract Shaping During Speech. IEEE Trans. Medical Imaging 32(5): 838-848 (2013) - [c382]Abhinav Sethy, Stanley F. Chen, Ebru Arisoy, Bhuvana Ramabhadran, Kartik Audhkhasi, Shrikanth S. Narayanan, Paul Vozila:
Joint training of interpolated exponential n-gram models. ASRU 2013: 25-30 - [c381]Angeliki Metallinou, Shrikanth S. Narayanan:
Annotation and processing of continuous emotional attributes: Challenges and opportunities. FG 2013: 1-8 - [c380]Samuel Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
On-line genre classification of TV programs using audio content. ICASSP 2013: 798-802 - [c379]Jangwon Kim, Adam C. Lammert, Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Spatial and temporal alignment of multimodal human speech production data: Real time imaging, flesh point tracking and audio. ICASSP 2013: 3637-3641 - [c378]Theodora Chaspari, Daniel Bone, James Gibson, Chi-Chun Lee, Shrikanth S. Narayanan:
Using physiology and language cues for modeling verbal response latencies of children with ASD. ICASSP 2013: 3702-3706 - [c377]Zhaojun Yang, Angeliki Metallinou, Shrikanth S. Narayanan:
Toward body language generation in dyadic interaction settings from interlocutor multimodal cues. ICASSP 2013: 3761-3765 - [c376]Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Data driven modeling of head motion towards analysis of behaviors in couple interactions. ICASSP 2013: 3766-3770 - [c375]Qun Feng Tan, Shrikanth S. Narayanan:
Combining window predictions efficiently - A new imputation approach for noise robust automatic speech recognition. ICASSP 2013: 7054-7057 - [c374]Maarten Van Segbroeck, Shrikanth S. Narayanan:
A robust frontend for ASR: Combining denoising, noise masking and feature normalization. ICASSP 2013: 7097-7101 - [c373]Ming Li, Andreas Tsiartas, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Speaker verification using simplified and supervised i-vector modeling. ICASSP 2013: 7199-7203 - [c372]Andreas Tsiartas, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A study on the effect of prosodic emphasis transfer on overall speech translation quality. ICASSP 2013: 8396-8400 - [c371]Nikos Malandrakis, Alexandros Potamianos, Shrikanth S. Narayanan:
Continuous models of affect from text using n-grams. ICASSP 2013: 8500-8504 - [c370]James Gibson, Bo Xiao, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
An audio-visual approach to learning salient behaviors in couples' problem solving discussions. ICME Workshops 2013: 1-4 - [c369]Angeliki Metallinou, Ruth B. Grossman, Shrikanth S. Narayanan:
Quantifying atypicality in affective facial expressions of children with autism spectrum disorders. ICME 2013: 1-6 - [c368]Emily Mower Provost, Irene Zhu, Shrikanth S. Narayanan:
Using emotional noise to uncloud audio-visual emotion perceptual evaluation. ICME 2013: 1-6 - [c367]Bo Xiao, Panayiotis G. Georgiou, Chi-Chun Lee, Brian R. Baucom, Shrikanth S. Narayanan:
Head motion synchrony and its correlation to affectivity in dyadic interactions. ICME 2013: 1-6 - [c366]Dogan Can, Shrikanth S. Narayanan:
On the computation of document frequency statistics from spoken corpora using factor automata. INTERSPEECH 2013: 6-10 - [c365]Rahul Gupta, Kartik Audhkhasi, Sungbok Lee, Shrikanth S. Narayanan:
Paralinguistic event detection from speech using probabilistic time-series smoothing and masking. INTERSPEECH 2013: 173-177 - [c364]Daniel Bone, Theodora Chaspari, Kartik Audhkhasi, James Gibson, Andreas Tsiartas, Maarten Van Segbroeck, Ming Li, Sungbok Lee, Shrikanth S. Narayanan:
Classifying language-related developmental disorders from speech cues: the promise and the potential confounds. INTERSPEECH 2013: 182-186 - [c363]Michael I. Proctor, Louis Goldstein, Adam C. Lammert, Dani Byrd, Asterios Toutios, Shrikanth S. Narayanan:
Velic coordination in French nasals: a real-time magnetic resonance imaging study. INTERSPEECH 2013: 577-581 - [c362]Maarten Van Segbroeck, Andreas Tsiartas, Shrikanth S. Narayanan:
A robust frontend for VAD: exploiting contextual, discriminative and spectral cues of human voice. INTERSPEECH 2013: 704-708 - [c361]Andreas Tsiartas, Theodora Chaspari, Nassos Katsamanis, Prasanta Kumar Ghosh, Ming Li, Maarten Van Segbroeck, Alexandros Potamianos, Shrikanth S. Narayanan:
Multi-band long-term signal variability features for robust voice activity detection. INTERSPEECH 2013: 718-722 - [c360]James Gibson, Maarten Van Segbroeck, Antonio Ortega, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Spectro-temporal directional derivative features for automatic speech recognition. INTERSPEECH 2013: 872-875 - [c359]Adam C. Lammert, Vikram Ramanarayanan, Michael I. Proctor, Shrikanth S. Narayanan:
Vocal tract cross-distance estimation from real-time MRI using region-of-interest analysis. INTERSPEECH 2013: 959-962 - [c358]Fang-Ying Hsieh, Louis Goldstein, Dani Byrd, Shrikanth S. Narayanan:
Truncation of pharyngeal gesture in English diphthong [aɪ]. INTERSPEECH 2013: 968-972 - [c357]Zhaojun Yang, Vikram Ramanarayanan, Dani Byrd, Shrikanth S. Narayanan:
The effect of word frequency and lexical class on articulatory-acoustic coupling. INTERSPEECH 2013: 973-977 - [c356]Samuel Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Annotation and classification of Political advertisements. INTERSPEECH 2013: 1092-1096 - [c355]Yinghua Zhu, Asterios Toutios, Shrikanth S. Narayanan, Krishna S. Nayak:
Faster 3D vocal tract real-time MRI using constrained reconstruction. INTERSPEECH 2013: 1292-1296 - [c354]Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan:
A two-step technique for MRI audio enhancement using dictionary learning and wavelet packet analysis. INTERSPEECH 2013: 1312-1315 - [c353]Kyu Jeong Han, Sriram Ganapathy, Ming Li, Mohamed Kamal Omar, Shrikanth S. Narayanan:
TRAP language identification system for RATS phase II evaluation. INTERSPEECH 2013: 1502-1506 - [c352]Ming Li, Jangwon Kim, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Speaker verification based on fusion of acoustic and articulatory information. INTERSPEECH 2013: 1614-1618 - [c351]Caitlin Smith, Michael I. Proctor, Khalil Iskarous, Louis Goldstein, Shrikanth S. Narayanan:
Stable articulatory tasks and their variable formation: Tamil retroflex consonants. INTERSPEECH 2013: 2006-2009 - [c350]Vikram Ramanarayanan, Adam C. Lammert, Louis Goldstein, Shrikanth S. Narayanan:
Articulatory settings facilitate mechanically advantageous motor control of vocal tract articulators. INTERSPEECH 2013: 2010-2013 - [c349]Daniel Bone, Chi-Chun Lee, Theodora Chaspari, Matthew P. Black, Marian E. Williams, Sungbok Lee, Pat Levitt, Shrikanth S. Narayanan:
Acoustic-prosodic, turn-taking, and language cues in child-psychologist interactions for varying social demand. INTERSPEECH 2013: 2400-2404 - [c348]Daniel Bone, Chi-Chun Lee, Vikram Ramanarayanan, Shrikanth S. Narayanan, Renske S. Hoedemaker, Peter C. Gordon:
Analyzing eye-voice coordination in rapid automatized naming. INTERSPEECH 2013: 2425-2429 - [c347]Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan:
Analyzing the structure of parent-moderated narratives from children with ASD using an entity-based approach. INTERSPEECH 2013: 2430-2434 - [c346]Asterios Toutios, Shrikanth S. Narayanan:
Articulatory synthesis of French connected speech from EMA data. INTERSPEECH 2013: 2738-2742 - [c345]Bo Xiao, Panayiotis G. Georgiou, Zac E. Imel, David C. Atkins, Shrikanth S. Narayanan:
Modeling therapist empathy and vocal entrainment in drug addiction counseling. INTERSPEECH 2013: 2861-2865 - [c344]Kartik Audhkhasi, Andreas M. Zavou, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Empirical link between hypothesis diversity and fusion performance in an ensemble of automatic speech recognition systems. INTERSPEECH 2013: 3082-3086 - [c343]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Information theoretic acoustic feature selection for acoustic-to-articulatory inversion. INTERSPEECH 2013: 3177-3181 - [c342]Andreas Tsiartas, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Toward transfer of acoustic cues of emphasis across languages. INTERSPEECH 2013: 3483-3486 - [c341]Nikolaos Malandrakis, Abe Kazemzadeh, Alexandros Potamianos, Shrikanth S. Narayanan:
SAIL: A hybrid approach to sentiment analysis. SemEval@NAACL-HLT 2013: 438-442 - [c340]Fabrizio Morbini, Kartik Audhkhasi, Kenji Sagae, Ron Artstein, Dogan Can, Panayiotis G. Georgiou, Shrikanth S. Narayanan, Anton Leuski, David R. Traum:
Which ASR should I choose for my dialogue system? SIGDIAL Conference 2013: 394-403 - [c339]Nikolaos Malandrakis, Elias Iosif, Vassiliki Prokopi, Alexandros Potamianos, Shrikanth S. Narayanan:
DeepPurple: Lexical, String and Affective Feature Fusion for Sentence-Level Semantic Similarity Estimation. *SEM@NAACL-HLT 2013: 103-108 - [i2]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Generalized Ambiguity Decomposition for Understanding Ensemble Diversity. CoRR abs/1312.7463 (2013) - [i1]Meinard Müller, Shrikanth S. Narayanan, Björn W. Schuller:
Computational Audio Analysis (Dagstuhl Seminar 13451). Dagstuhl Reports 3(11): 1-28 (2013)
- 2012
- [j70]Urbashi Mitra, B. Adar Emken, Sangwon Lee, Ming Li, Viktor Rozgic, Gautam Thatte, Harshvardhan Vathsangam, Daphney-Stavroula Zois, Murali Annavaram, Shrikanth S. Narayanan, Marco Levorato, Donna Spruijt-Metz, Gaurav S. Sukhatme:
KNOWME: a case study in wireless body area sensor network design. IEEE Commun. Mag. 50(5): 116-125 (2012) - [j69]Julien Epps, Roddy Cowie, Shrikanth S. Narayanan, Björn W. Schuller, Jianhua Tao:
Emotion and mental state recognition from speech. EURASIP J. Adv. Signal Process. 2012: 15 (2012) - [j68]Jorge F. Silva, Shrikanth S. Narayanan:
On signal representations within the Bayes decision framework. Pattern Recognit. 45(5): 1853-1865 (2012) - [j67]Angeliki Metallinou, Martin Wöllmer, Athanasios Katsamanis, Florian Eyben, Björn W. Schuller, Shrikanth S. Narayanan:
Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification. IEEE Trans. Affect. Comput. 3(2): 184-198 (2012) - [j66]Qun Feng Tan, Shrikanth S. Narayanan:
Novel Variations of Group Sparse Regularization Techniques With Applications to Noise Robust Automatic Speech Recognition. IEEE Trans. Speech Audio Process. 20(4): 1337-1346 (2012) - [j65]Gautam Thatte, Ming Li, Sangwon Lee, B. Adar Emken, Shrikanth S. Narayanan, Urbashi Mitra, Donna Spruijt-Metz, Murali Annavaram:
KNOWME: An Energy-Efficient Multimodal Body Area Network for Physical Activity Monitoring. ACM Trans. Embed. Comput. Syst. 11(S2): 48:1-48:24 (2012) - [j64]Jorge F. Silva, Shrikanth S. Narayanan:
Complexity-Regularized Tree-Structured Partition for Mutual Information Estimation. IEEE Trans. Inf. Theory 58(3): 1940-1952 (2012) - [c338]Hao Wang, Dogan Can, Abe Kazemzadeh, François Bar, Shrikanth S. Narayanan:
A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle. ACL (System Demonstrations) 2012: 115-120 - [c337]Ozan Cakmak, Abe Kazemzadeh, Serdar Yildirim, Shrikanth S. Narayanan:
Using interval type-2 fuzzy logic to analyze Turkish emotion words. APSIPA 2012: 1-4 - [c336]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Composite-DBN for recognition of environmental contexts. APSIPA 2012: 1-4 - [c335]Jangwon Kim, Prasanta Kumar Ghosh, Sungbok Lee, Shrikanth S. Narayanan:
A study of emotional information present in articulatory movements estimated using acoustic-to-articulatory inversion. APSIPA 2012: 1-4 - [c334]Chi-Chun Lee, Athanasios Katsamanis, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Using measures of vocal entrainment to inform outcome-related behaviors in marital conflicts. APSIPA 2012: 1-5 - [c333]Ming Li, Charley Lu, Anne Wang, Shrikanth S. Narayanan:
Speaker verification using Lasso based sparse total variability supervector with PLDA modeling. APSIPA 2012: 1-4 - [c332]Emily Mower Provost, Shrikanth S. Narayanan:
Simplifying emotion classification through emotion distillation. APSIPA 2012: 1-4 - [c331]Vikram Ramanarayanan, Prasanta Kumar Ghosh, Adam C. Lammert, Shrikanth S. Narayanan:
Exploiting speech production information for automatic speech and speaker modeling and recognition - possibilities and new opportunities. APSIPA 2012: 1-6 - [c330]Bo Xiao, Dogan Can, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
Analyzing the language of therapist empathy in Motivational Interview based psychotherapy. APSIPA 2012: 1-4 - [c329]Samuel Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Supervised acoustic topic model with a consequent classifier for unstructured audio classification. CBMI 2012: 1-6 - [c328]Björn W. Schuller, Simone Hantke, Felix Weninger, Wenjing Han, Zixing Zhang, Shrikanth S. Narayanan:
Automatic recognition of emotion evoked by general sound events. ICASSP 2012: 341-344 - [c327]Naveen Kumar, Qun Feng Tan, Shrikanth S. Narayanan:
Object classification in sidescan sonar images with sparse representation techniques. ICASSP 2012: 1333-1336 - [c326]Ming Li, Angeliki Metallinou, Daniel Bone, Shrikanth S. Narayanan:
Speaker states recognition using latent factor analysis based Eigenchannel factor vector modeling. ICASSP 2012: 1937-1940 - [c325]Rahul Gupta, Chi-Chun Lee, Shrikanth S. Narayanan:
Classification of emotional content of sighs in dyadic human interactions. ICASSP 2012: 2265-2268 - [c324]Angeliki Metallinou, Athanasios Katsamanis, Shrikanth S. Narayanan:
A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs. ICASSP 2012: 2401-2404 - [c323]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Analyzing quality of crowd-sourced speech transcriptions of noisy audio for acoustic model adaptation. ICASSP 2012: 4137-4140 - [c322]Martin Wöllmer, Angeliki Metallinou, Nassos Katsamanis, Björn W. Schuller, Shrikanth S. Narayanan:
Analyzing the memory of BLSTM Neural Networks for enhanced emotion classification in dyadic spoken interactions. ICASSP 2012: 4157-4160 - [c321]Theodora Chaspari, Emily Mower Provost, Athanasios Katsamanis, Shrikanth S. Narayanan:
An acoustic analysis of shared enjoyment in ECA interactions of children with autism. ICASSP 2012: 4485-4488 - [c320]Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
Creating ensemble of diverse maximum entropy models. ICASSP 2012: 4845-4848 - [c319]Matthew P. Black, Shrikanth S. Narayanan:
Improvements in predicting children's overall reading ability by modeling variability in evaluators' subjective judgments. ICASSP 2012: 5069-5072 - [c318]Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Multimodal detection of salient behaviors of approach-avoidance in dyadic interactions. ICMI 2012: 141-144 - [c317]Abe Kazemzadeh, James Gibson, Juanchen Li, Sungbok Lee, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A Sequential Bayesian Dialog Agent for Computational Ethnography. INTERSPEECH 2012: 238-241 - [c316]Kartik Audhkhasi, Angeliki Metallinou, Ming Li, Shrikanth S. Narayanan:
Speaker Personality Classification Using Systems Based on Acoustic-Lexical Cues and an Optimal Tree-Structured Bayesian Network. INTERSPEECH 2012: 262-265 - [c315]Jangwon Kim, Naveen Kumar, Andreas Tsiartas, Ming Li, Shrikanth S. Narayanan:
Intelligibility classification of pathological speech using fusion of multiple high level descriptors. INTERSPEECH 2012: 534-537 - [c314]Chi-Chun Lee, Athanasios Katsamanis, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Based on Isolated Saliency or Causal Integration? Toward a Better Understanding of Human Annotation Process using Multiple Instance Learning and Sequential Probability Ratio Test. INTERSPEECH 2012: 619-622 - [c313]Daniel Bone, Matthew P. Black, Chi-Chun Lee, Marian E. Williams, Pat Levitt, Sungbok Lee, Shrikanth S. Narayanan:
Spontaneous-Speech Acoustic-Prosodic Features of Children with Autism and the Interacting Psychologist. INTERSPEECH 2012: 1043-1046 - [c312]Christina Hagedorn, Michael I. Proctor, Louis Goldstein, Maria Luisa Gorno-Tempini, Shrikanth S. Narayanan:
Characterizing Covert Articulation in Apraxic Speech Using real-time MRI. INTERSPEECH 2012: 1051-1054 - [c311]Daniel Bone, Chi-Chun Lee, Shrikanth S. Narayanan:
A Robust Unsupervised Arousal Rating Framework using Prosody with Cross-Corpora Evaluation. INTERSPEECH 2012: 1175-1178 - [c310]Theodora Chaspari, Chi-Chun Lee, Shrikanth S. Narayanan:
Interplay between verbal response latency and physiology of children with autism during ECA interactions. INTERSPEECH 2012: 1319-1322 - [c309]Assaf Israel, Michael I. Proctor, Louis Goldstein, Khalil Iskarous, Shrikanth S. Narayanan:
Emphatic segments and emphasis spread in Lebanese Arabic: a Real-time Magnetic Resonance Imaging Study. INTERSPEECH 2012: 2178-2181 - [c308]Dogan Can, Panayiotis G. Georgiou, David C. Atkins, Shrikanth S. Narayanan:
A Case Study: Detecting Counselor Reflections in Psychotherapy for Addictions using Linguistic Features. INTERSPEECH 2012: 2254-2257 - [c307]Priti Aggarwal, Ron Artstein, Jillian Gerten, Athanasios Katsamanis, Shrikanth S. Narayanan, Angela Nazarian, David R. Traum:
The Twins Corpus of Museum Visitor Questions. LREC 2012: 2355-2361 - [c306]Naveen Kumar, Andreas Tsiartas, Shrikanth S. Narayanan:
Features for comparing tune similarity of songs across different languages. MMSP 2012: 331-336 - [c305]Fabrizio Morbini, Kartik Audhkhasi, Ron Artstein, Maarten Van Segbroeck, Kenji Sagae, Panayiotis G. Georgiou, David R. Traum, Shrikanth S. Narayanan:
A reranking approach for recognition and classification of speech input in conversational dialogue systems. SLT 2012: 49-54 - [c304]Rahul Gupta, Chi-Chun Lee, Daniel Bone, Agata Rozga, Sungbok Lee, Shrikanth S. Narayanan:
Acoustical analysis of engagement behavior in children. WOCCI 2012: 25-31
- 2011
- [j63]Serdar Yildirim, Shrikanth S. Narayanan, Alexandros Potamianos:
Detecting emotional state of a child in a conversational computer game. Comput. Speech Lang. 25(1): 29-44 (2011) - [j62]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Joint source-filter optimization for robust glottal source estimation in the presence of shimmer and jitter. Speech Commun. 53(1): 98-109 (2011) - [j61]Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53(9-10): 1162-1171 (2011) - [j60]Joseph Tepperman, Sungbok Lee, Shrikanth S. Narayanan, Abeer Alwan:
A Generative Student Model for Scoring Word Reading Skills. IEEE Trans. Speech Audio Process. 19(2): 348-360 (2011) - [j59]Prasanta Kumar Ghosh, Andreas Tsiartas, Shrikanth S. Narayanan:
Robust Voice Activity Detection Using Long-Term Signal Variability. IEEE Trans. Speech Audio Process. 19(3): 600-613 (2011) - [j58]Matthew P. Black, Joseph Tepperman, Shrikanth S. Narayanan:
Automatic Prediction of Children's Reading Ability for High-Level Literacy Assessment. IEEE ACM Trans. Audio Speech Lang. Process. 19(4): 1015-1028 (2011) - [j57]Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan:
A Framework for Automatic Human Emotion Classification Using Emotion Profiles. IEEE Trans. Speech Audio Process. 19(5): 1057-1070 (2011) - [j56]Qun Feng Tan, Panayiotis G. Georgiou, Shrikanth Narayanan:
Enhanced Sparse Imputation Techniques for a Robust Speech Recognition Front-End. IEEE ACM Trans. Audio Speech Lang. Process. 19(8): 2418-2429 (2011) - [j55]Alexandros Potamianos, Diego Giuliani, Shrikanth S. Narayanan, Kay Berkling:
Introduction to the special issue on speech and language processing of children's speech for child-machine interaction applications. ACM Trans. Speech Lang. Process. 7(4): 11:1-11:3 (2011) - [j54]Matthew Black, Abe Kazemzadeh, Joseph Tepperman, Shrikanth S. Narayanan:
Automatically assessing the ABCs: Verification of children's spoken letter-names and letter-sounds. ACM Trans. Speech Lang. Process. 7(4): 15:1-15:17 (2011) - [j53]Gautam Thatte, Ming Li, Sangwon Lee, B. Adar Emken, Murali Annavaram, Shrikanth S. Narayanan, Donna Spruijt-Metz, Urbashi Mitra:
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection. IEEE Trans. Signal Process. 59(4): 1843-1857 (2011) - [c303]Abe Kazemzadeh, Sungbok Lee, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Emotion Twenty Questions: Toward a Crowd-Sourced Theory of Emotions. ACII (2) 2011: 1-10 - [c302]Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Affective State Recognition in Married Couples' Interactions Using PCA-Based Vocal Entrainment Measures with Multiple Instance Learning. ACII (2) 2011: 31-41 - [c301]Panayiotis G. Georgiou, Matthew Black, Adam C. Lammert, Brian R. Baucom, Shrikanth S. Narayanan:
"That's Aggravating, Very Aggravating": Is It Possible to Classify Behaviors in Couple Interactions Using Automatically Derived Lexical Features? ACII (1) 2011: 87-96 - [c300]Athanasios Katsamanis, James Gibson, Matthew P. Black, Shrikanth S. Narayanan:
Multiple Instance Learning for Classification of Human Behavior Observations. ACII (1) 2011: 145-154 - [c299]Abe Kazemzadeh, James Gibson, Panayiotis G. Georgiou, Sungbok Lee, Shrikanth S. Narayanan:
EMO20Q Questioner Agent. ACII (2) 2011: 313-314 - [c298]Samuel Kim, Ming Li, Sangwon Lee, Urbashi Mitra, B. Adar Emken, Donna Spruijt-Metz, Murali Annavaram, Shrikanth S. Narayanan:
Modeling high-level descriptions of real-life physical activities using latent topic modeling of multimodal sensor signals. EMBC 2011: 6033-6036 - [c297]Naveen Kumar, Adam C. Lammert, Brendan J. Englot, Franz S. Hover, Shrikanth S. Narayanan:
Directional descriptors using zernike moment phases for object orientation estimation in underwater sonar images. ICASSP 2011: 1025-1028 - [c296]Ming Li, Shrikanth S. Narayanan:
Robust talking face video verification using joint factor analysis and sparse representation on GMM mean shifted supervectors. ICASSP 2011: 1481-1484 - [c295]Angeliki Metallinou, Athanassios Katsamanis, Yun Wang, Shrikanth S. Narayanan:
Tracking changes in continuous emotion states using body language and prosodic cues. ICASSP 2011: 2288-2291 - [c294]Viktor Rozgic, Bo Xiao, Athanasios Katsamanis, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Estimation of ordinal approach-avoidance labels in dyadic interactions: Ordinal logistic regression approach. ICASSP 2011: 2368-2371 - [c293]Emily Mower, Shrikanth S. Narayanan:
A hierarchical static-dynamic framework for emotion classification. ICASSP 2011: 2372-2375 - [c292]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
A subject-independent acoustic-to-articulatory inversion. ICASSP 2011: 4624-4627 - [c291]Kartik Audhkhasi, Shrikanth S. Narayanan:
Emotion classification from speech using evaluator reliability-weighted combination of ranked lists. ICASSP 2011: 4956-4959 - [c290]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Accurate transcription of broadcast news speech using multiple noisy transcribers and unsupervised reliability metrics. ICASSP 2011: 4980-4983 - [c289]Bo Xiao, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Overlapped speech detection using long-term spectro-temporal similarity in stereo recording. ICASSP 2011: 5216-5219 - [c288]Andreas Tsiartas, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Bilingual audio-subtitle extraction using automatic segmentation of movie audio. ICASSP 2011: 5624-5627 - [c287]Carlos Busso, Angeliki Metallinou, Shrikanth S. Narayanan:
Iterative feature normalization for emotional speech detection. ICASSP 2011: 5692-5695 - [c286]Emily Mower, Matthew P. Black, Elisa Flores, Marian E. Williams, Shrikanth S. Narayanan:
Rachel: Design of an emotionally targeted interactive agent for children with autism. ICME 2011: 1-6 - [c285]Vikram Ramanarayanan, Athanasios Katsamanis, Shrikanth S. Narayanan:
Automatic Data-Driven Learning of Articulatory Primitives from Real-Time MRI Data Using Convolutive NMF with Sparseness Constraints. INTERSPEECH 2011: 61-64 - [c284]Matthew Black, Panayiotis G. Georgiou, Athanasios Katsamanis, Brian R. Baucom, Shrikanth S. Narayanan:
"You made me do it": Classification of Blame in Married Couples' Interactions by Fusing Automatically Derived Speech and Language Information. INTERSPEECH 2011: 89-92 - [c283]Yoon-Chul Kim, Michael I. Proctor, Shrikanth S. Narayanan, Krishna S. Nayak:
Visualization of Vocal Tract Shape Using Interleaved Real-Time MRI of Multiple Scan Planes. INTERSPEECH 2011: 269-272 - [c282]Michael I. Proctor, Adam C. Lammert, Athanasios Katsamanis, Louis M. Goldstein, Christina Hagedorn, Shrikanth S. Narayanan:
Direct Estimation of Articulatory Kinematics from Real-Time Magnetic Resonance Image Sequences. INTERSPEECH 2011: 281-284 - [c281]Shrikanth S. Narayanan, Erik Bresch, Prasanta Kumar Ghosh, Louis Goldstein, Athanasios Katsamanis, Yoon Kim, Adam C. Lammert, Michael I. Proctor, Vikram Ramanarayanan, Yinghua Zhu:
A Multimodal Real-Time MRI Articulatory Corpus for Speech Research. INTERSPEECH 2011: 837-840 - [c280]Matthew Black, Daniel Bone, Marian E. Williams, Phillip Gorrindo, Pat Levitt, Shrikanth S. Narayanan:
The USC CARE Corpus: Child-Psychologist Interactions of Children with Autism Spectrum Disorders. INTERSPEECH 2011: 1497-1500 - [c279]James Gibson, Athanasios Katsamanis, Matthew P. Black, Shrikanth S. Narayanan:
Automatic Identification of Salient Acoustic Instances in Couples' Behavioral Interactions Using Diverse Density Support Vector Machines. INTERSPEECH 2011: 1561-1564 - [c278]Abe Kazemzadeh, Sungbok Lee, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Determining what Questions to Ask, with the Help of Spectral Graph Theory. INTERSPEECH 2011: 2053-2056 - [c277]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Enhancements to the Training Process of Classifier-Based Speech Translator via Topic Modeling. INTERSPEECH 2011: 2109-2112 - [c276]Bo Xiao, Viktor Rozgic, Athanasios Katsamanis, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Acoustic and Visual Cues of Turn-Taking Dynamics in Dyadic Interactions. INTERSPEECH 2011: 2441-2444 - [c275]Carlos Molina, Sungbok Lee, Shrikanth S. Narayanan, Néstor Becerra Yoma:
A Study of the Effectiveness of Articulatory Strokes for Phonemic Recognition. INTERSPEECH 2011: 2513-2516 - [c274]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Analysis of Inter-Articulator Correlation in Acoustic-to-Articulatory Inversion Using Generalized Smoothness Criterion. INTERSPEECH 2011: 2685-2688 - [c273]Ming Li, Xiang Zhang, Yonghong Yan, Shrikanth S. Narayanan:
Speaker Verification Using Sparse Representations on Total Variability i-vectors. INTERSPEECH 2011: 2729-2732 - [c272]Adam C. Lammert, Michael I. Proctor, Athanasios Katsamanis, Shrikanth S. Narayanan:
Morphological Variation in the Adult Vocal Tract: A Modeling Study of its Potential Acoustic Impact. INTERSPEECH 2011: 2813-2816 - [c271]Athanasios Katsamanis, Erik Bresch, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Validating rt-MRI Based Articulatory Representations via Articulatory Recognition. INTERSPEECH 2011: 2841-2844 - [c270]Jangwon Kim, Sungbok Lee, Shrikanth S. Narayanan:
An Exploratory Study of the Relations Between Perceived Emotion Strength and Articulatory Kinematics. INTERSPEECH 2011: 2961-2964 - [c269]Nikos Malandrakis, Alexandros Potamianos, Elias Iosif, Shrikanth S. Narayanan:
Kernel Models for Affective Lexicon Creation. INTERSPEECH 2011: 2977-2980 - [c268]Emily Mower, Chi-Chun Lee, James Gibson, Theodora Chaspari, Marian E. Williams, Shrikanth S. Narayanan:
Analyzing the Nature of ECA Interactions in Children with Autism. INTERSPEECH 2011: 2989-2993 - [c267]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Reliability-Weighted Acoustic Model Adaptation Using Crowd Sourced Transcriptions. INTERSPEECH 2011: 3045-3048 - [c266]Chi-Chun Lee, Athanasios Katsamanis, Matthew P. Black, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
An Analysis of PCA-Based Vocal Entrainment Measures in Married Couples' Affective Spoken Interactions. INTERSPEECH 2011: 3101-3104 - [c265]Daniel Bone, Matthew Black, Ming Li, Angeliki Metallinou, Sungbok Lee, Shrikanth S. Narayanan:
Intoxicated Speech Detection by Fusion of Speaker Normalized Hierarchical Features and GMM Supervectors. INTERSPEECH 2011: 3217-3220 - [c264]Erdem Unal, Elaine Chew, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A Perplexity Based Cover Song Matching System for Short Length Queries. ISMIR 2011: 43-48 - [c263]Panayiotis G. Georgiou, Matthew P. Black, Shrikanth S. Narayanan:
Behavioral signal processing for understanding (distressed) dyadic interactions: some recent developments. J-HGBU@MM 2011: 7-12 - [c262]Nikos Malandrakis, Alexandros Potamianos, Elias Iosif, Shrikanth S. Narayanan:
EmotiWord: Affective Lexicon Creation with Application to Interaction and Multimedia Data. MUSCLE 2011: 30-41
- 2010
- [j52]JongHo Shin, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Towards modeling user behavior in interactions mediated through an automated bidirectional speech translation system. Comput. Speech Lang. 24(2): 232-256 (2010) - [j51]Dhaval Shah, Kyu Jeong Han, Shrikanth S. Narayanan:
Robust Multimodal Person Recognition Using Low-Complexity Audio-Visual Feature Fusion Approaches. Int. J. Semantic Comput. 4(2): 155-179 (2010) - [j50]Viktor Rozgic, Kyu Jeong Han, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multimodal Speaker Segmentation and Identification in Presence of Overlapped Speech Segments. J. Multim. 5(4): 322-331 (2010) - [j49]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Bark Frequency Transform Using an Arbitrary Order Allpass Filter. IEEE Signal Process. Lett. 17(6): 543-546 (2010) - [j48]Dongrui Wu, Christopher G. Courtney, Brent J. Lance, Shrikanth S. Narayanan, Michael E. Dawson, Kelvin S. Oie, Thomas D. Parsons:
Optimal Arousal Identification and Classification for Affective Computing Using Physiological Signals: Virtual Reality Stroop Task. IEEE Trans. Affect. Comput. 1(2): 109-118 (2010) - [j47]Jorge F. Silva, Shrikanth S. Narayanan:
Nonproduct data-dependent partitions for mutual information estimation: strong consistency and applications. IEEE Trans. Signal Process. 58(7): 3497-3511 (2010) - [j46]Girish Varatkar, Shrikanth S. Narayanan, Naresh R. Shanbhag, Douglas L. Jones:
Stochastic Networked Computation. IEEE Trans. Very Large Scale Integr. Syst. 18(10): 1421-1432 (2010) - [c261]Samuel Kim, Shiva Sundaram, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Acoustic stopwords for unstructured audio information retrieval. EUSIPCO 2010: 1277-1280 - [c260]Samuel Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan, Shiva Sundaram:
Using naïve text queries for robust audio information retrieval. ICASSP 2010: 2406-2409 - [c259]Angeliki Metallinou, Sungbok Lee, Shrikanth S. Narayanan:
Decision level combination of multiple modalities for recognition and analysis of emotional expression. ICASSP 2010: 2462-2465 - [c258]Angeliki Metallinou, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Visual emotion recognition using compact facial representations and viseme information. ICASSP 2010: 2474-2477 - [c257]Jangwon Kim, Sungbok Lee, Shrikanth S. Narayanan:
An exploratory study of manifolds of emotional speech. ICASSP 2010: 5142-5145 - [c256]Chi-Chun Lee, Shrikanth S. Narayanan:
Predicting interruptions in dyadic spoken interactions. ICASSP 2010: 5250-5253 - [c255]Andreas Tsiartas, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Language model adaptation using WWW documents obtained by utterance-based queries. ICASSP 2010: 5406-5409 - [c254]Dongrui Wu, Thomas D. Parsons, Emily Mower, Shrikanth S. Narayanan:
Speech emotion estimation in 3D space. ICME 2010: 737-742 - [c253]Ming Li, Shrikanth S. Narayanan:
Robust ECG Biometrics by Fusing Temporal and Cepstral Information. ICPR 2010: 1326-1329 - [c252]Dongrui Wu, Thomas D. Parsons, Shrikanth S. Narayanan:
Acoustic feature analysis in speech emotion primitives estimation. INTERSPEECH 2010: 785-788 - [c251]Chi-Chun Lee, Matthew Black, Athanasios Katsamanis, Adam C. Lammert, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Quantification of prosodic entrainment in affective spontaneous spoken interactions of married couples. INTERSPEECH 2010: 793-796 - [c250]Emily Mower, Kyu Jeong Han, Sungbok Lee, Shrikanth S. Narayanan:
A cluster-profile representation of emotion using agglomerative hierarchical clustering. INTERSPEECH 2010: 797-800 - [c249]Daniel Bone, Samuel Kim, Sungbok Lee, Shrikanth S. Narayanan:
A study of intra-speaker and inter-speaker affective variability using electroglottograph and inverse filtered glottal waveforms. INTERSPEECH 2010: 913-916 - [c248]Jangwon Kim, Sungbok Lee, Shrikanth S. Narayanan:
A study of interplay between articulatory movement and prosodic characteristics in emotional speech production. INTERSPEECH 2010: 1173-1176 - [c247]Adam C. Lammert, Michael I. Proctor, Shrikanth S. Narayanan:
Data-driven analysis of realtime vocal tract MRI using correlated image regions. INTERSPEECH 2010: 1572-1575 - [c246]Michael I. Proctor, Daniel Bone, Athanasios Katsamanis, Shrikanth S. Narayanan:
Rapid semi-automatic segmentation of real-time magnetic resonance images for parametric vocal tract analysis. INTERSPEECH 2010: 1576-1579 - [c245]Yoon-Chul Kim, Shrikanth S. Narayanan, Krishna S. Nayak:
Improved real-time MRI of oral-velar coordination using a golden-ratio spiral view order. INTERSPEECH 2010: 1580-1583 - [c244]Erik Bresch, Athanasios Katsamanis, Louis Goldstein, Shrikanth S. Narayanan:
Statistical multi-stream modeling of real-time MRI articulatory speech data. INTERSPEECH 2010: 1584-1587 - [c243]Sungbok Lee, Shrikanth S. Narayanan:
Vocal tract contour analysis of emotional speech by the functional data curve representation. INTERSPEECH 2010: 1600-1603 - [c242]Viktor Rozgic, Bo Xiao, Athanasios Katsamanis, Brian R. Baucom, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A new multichannel multi modal dyadic interaction database. INTERSPEECH 2010: 1982-1985 - [c241]Vikram Ramanarayanan, Dani Byrd, Louis Goldstein, Shrikanth S. Narayanan:
Investigating articulatory setting - pauses, ready position, and rest - using real-time MRI. INTERSPEECH 2010: 1994-1997 - [c240]Matthew Black, Athanasios Katsamanis, Chi-Chun Lee, Adam C. Lammert, Brian R. Baucom, Andrew Christensen, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Automatic classification of married couples' behavior using audio features. INTERSPEECH 2010: 2030-2033 - [c239]Martin Wöllmer, Angeliki Metallinou, Florian Eyben, Björn W. Schuller, Shrikanth S. Narayanan:
Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional LSTM modeling. INTERSPEECH 2010: 2362-2365 - [c238]Kartik Audhkhasi, Shrikanth S. Narayanan:
Data-dependent evaluator modeling and its application to emotional valence classification from speech. INTERSPEECH 2010: 2366-2369 - [c237]Qun Feng Tan, Kartik Audhkhasi, Panayiotis G. Georgiou, Emil Ettelaie, Shrikanth S. Narayanan:
Automatic speech recognition system channel modeling. INTERSPEECH 2010: 2442-2445 - [c236]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Hierarchical classification for speech-to-speech translation. INTERSPEECH 2010: 2530-2533 - [c235]Kyu Jeong Han, Shrikanth S. Narayanan:
An improved cluster model selection method for agglomerative hierarchical speaker clustering using incremental Gaussian mixture models. INTERSPEECH 2010: 2658-2661 - [c234]Chi-Sang Jung, Kyu Jeong Han, Hyunson Seo, Shrikanth S. Narayanan, Hong-Goo Kang:
A variable frame length and rate algorithm based on the spectral kurtosis measure for speaker verification. INTERSPEECH 2010: 2754-2757 - [c233]Björn W. Schuller, Stefan Steidl, Anton Batliner, Felix Burkhardt, Laurence Devillers, Christian A. Müller, Shrikanth S. Narayanan:
The INTERSPEECH 2010 paralinguistic challenge. INTERSPEECH 2010: 2794-2797 - [c232]Prasanta Kumar Ghosh, Andreas Tsiartas, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Robust voice activity detection in stereo recording with crosstalk. INTERSPEECH 2010: 3098-3101 - [c231]Jorge F. Silva, Shrikanth S. Narayanan:
A near-optimal (minimax) tree-structured partition for mutual information estimation. ISIT 2010: 1418-1422 - [c230]Jorge F. Silva, Shrikanth S. Narayanan:
On data-driven histogram-based estimation for mutual information. ISIT 2010: 1423-1427 - [c229]William R. Swartout, David R. Traum, Ron Artstein, Dan Noren, Paul E. Debevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth S. Narayanan, Diane Piepol:
Ada and Grace: Toward Realistic and Engaging Virtual Museum Guides. IVA 2010: 286-300 - [c228]Samuel Kim, Shiva Sundaram, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
An N-gram model for unstructured audio signals toward information retrieval. MMSP 2010: 477-480 - [c227]Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan:
Robust representations for out-of-domain emotions using Emotion Profiles. SLT 2010: 25-30 - [c226]William R. Swartout, David R. Traum, Ron Artstein, Dan Noren, Paul E. Debevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth S. Narayanan, Diane Piepol, H. Chad Lane, Jackie Morie, Priti Aggarwal, Matt Liewer, Jen-Yuan Chiang, Jillian Gerten, Selina Chu, Kyle White:
Virtual Museum Guides demonstration. SLT 2010: 163-164
2000 – 2009
- 2009
- [j45]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Combining lexical, syntactic and prosodic cues for improved online dialog act tagging. Comput. Speech Lang. 23(4): 407-422 (2009) - [j44]Dani Byrd, Stephen J. Tobin, Erik Bresch, Shrikanth S. Narayanan:
Timing effects of syllable structure and stress on nasals: A real-time MRI examination. J. Phonetics 37(1): 97-110 (2009) - [j43]Patti Price, Joseph Tepperman, Markus Iseli, Thao Duong, Matthew Black, Shizhen Wang, Christy Kim Boscardin, Margaret Heritage, P. David Pearson, Shrikanth S. Narayanan, Abeer Alwan:
Assessment of emerging reading skills in young native speakers and language learners. Speech Commun. 51(10): 968-984 (2009) - [j42]Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Pitch Contour Stylization Using an Optimal Piecewise Polynomial Approximation. IEEE Signal Process. Lett. 16(9): 810-813 (2009) - [j41]Serdar Yildirim, Shrikanth S. Narayanan:
Automatic Detection of Disfluency Boundaries in Spontaneous Speech of Children Using Audio-Visual Information. IEEE Trans. Speech Audio Process. 17(1): 2-12 (2009) - [j40]Abhinav Sethy, Panayiotis G. Georgiou, Bhuvana Ramabhadran, Shrikanth S. Narayanan:
An Iterative Relative Entropy Minimization-Based Data Selection Approach for n-Gram Model Adaptation. IEEE Trans. Speech Audio Process. 17(1): 13-23 (2009) - [j39]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Unsupervised Adaptation of Categorical Prosody Models for Prosody Labeling and Speech Recognition. IEEE Trans. Speech Audio Process. 17(1): 138-149 (2009) - [j38]Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection. IEEE Trans. Speech Audio Process. 17(4): 582-596 (2009) - [j37]Ozlem Kalinli, Shrikanth S. Narayanan:
Prominence Detection Using Auditory Attention Cues and Task-Dependent High Level Information. IEEE Trans. Speech Audio Process. 17(5): 1009-1024 (2009) - [j36]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Environmental Sound Recognition With Time-Frequency Audio Features. IEEE Trans. Speech Audio Process. 17(6): 1142-1158 (2009) - [j35]Erik Bresch, Shrikanth S. Narayanan:
Region Segmentation in the Frequency Domain Applied to Upper Airway Real-Time Magnetic Resonance Images. IEEE Trans. Medical Imaging 28(3): 323-338 (2009) - [j34]Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan:
Human Perception of Audio-Visual Synthetic Character Emotion Expression in the Presence of Ambiguous and Conflicting Information. IEEE Trans. Multim. 11(5): 843-855 (2009) - [j33]Jorge F. Silva, Shrikanth S. Narayanan:
Discriminative wavelet packet filter bank selection for pattern recognition. IEEE Trans. Signal Process. 57(5): 1796-1810 (2009) - [c225]Emily Mower, Angeliki Metallinou, Chi-Chun Lee, Abe Kazemzadeh, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Interpreting ambiguous emotional expressions. ACII 2009: 1-8 - [c224]Kartik Audhkhasi, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Lattice-based lexical cues for word fragment detection in conversational speech. ASRU 2009: 568-573 - [c223]Gautam Thatte, Viktor Rozgic, Ming Li, Sabyasachi Ghosh, Urbashi Mitra, Shrikanth S. Narayanan, Murali Annavaram, Donna Spruijt-Metz:
Optimal time-resource allocation for activity-detection via multimodal sensing. BODYNETS 2009: 14 - [c222]Gautam Thatte, Viktor Rozgic, Ming Li, Sabyasachi Ghosh, Urbashi Mitra, Shrikanth S. Narayanan, Murali Annavaram, Donna Spruijt-Metz:
Optimal Allocation of Time-Resources for Multihypothesis Activity-Level Detection. DCOSS 2009: 273-286 - [c221]Yoon-Chul Kim, Shrikanth S. Narayanan, Krishna S. Nayak:
Accelerated 3D MRI of vocal tract shaping using compressed sensing and parallel imaging. ICASSP 2009: 389-392 - [c220]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
A semi-supervised learning approach to online audio background detection. ICASSP 2009: 1629-1632 - [c219]Samuel Kim, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
A robust harmony structure modeling scheme for classical music opus identification. ICASSP 2009: 1961-1964 - [c218]Tsuneo Kato, Sungbok Lee, Shrikanth S. Narayanan:
An analysis of articulatory-acoustic data based on articulatory strokes. ICASSP 2009: 4493-4496 - [c217]Andreas Tsiartas, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Robust word boundary detection in spontaneous speech using acoustic and lexical cues. ICASSP 2009: 4785-4788 - [c216]Matthew Black, Joseph Tepperman, Abe Kazemzadeh, Sungbok Lee, Shrikanth S. Narayanan:
Automatic pronunciation verification of english letter-names for early literacy assessment of preliterate children. ICASSP 2009: 4861-4864 - [c215]Shiva Sundaram, Shrikanth S. Narayanan:
A divide-and-conquer approach to Latent Perceptual Indexing of audio for large Web 2.0 applications. ICME 2009: 466-469 - [c214]Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Emotion recognition using a hierarchical binary decision tree approach. INTERSPEECH 2009: 320-323 - [c213]Andreas Tsiartas, Prasanta Kumar Ghosh, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Context-driven automatic bilingual movie subtitle alignment. INTERSPEECH 2009: 444-447 - [c212]Emily Nava, Joseph Tepperman, Louis Goldstein, Maria Luisa Zubizarreta, Shrikanth S. Narayanan:
Connecting rhythm and prominence in automatic ESL pronunciation scoring. INTERSPEECH 2009: 684-687 - [c211]Joseph Tepperman, Erik Bresch, Yoon-Chul Kim, Sungbok Lee, Louis Goldstein, Shrikanth S. Narayanan:
An articulatory analysis of phonological transfer using real-time MRI. INTERSPEECH 2009: 700-703 - [c210]Kyu Jeong Han, Shrikanth S. Narayanan:
Improved speaker diarization of meeting speech with recurrent selection of representative speech segments and participant interaction pattern modeling. INTERSPEECH 2009: 1067-1070 - [c209]Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan:
Evaluating evaluators: a case study in understanding the benefits and pitfalls of multi-evaluator modeling. INTERSPEECH 2009: 1583-1586 - [c208]Matthew Black, Joseph Tepperman, Sungbok Lee, Shrikanth S. Narayanan:
Predicting children's reading ability using evaluator-informed features. INTERSPEECH 2009: 1895-1898 - [c207]Ozlem Kalinli, Shrikanth S. Narayanan:
Continuous speech recognition using attention shift decoding with soft decision. INTERSPEECH 2009: 1927-1930 - [c206]Chi-Chun Lee, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions. INTERSPEECH 2009: 1983-1986 - [c205]Jangwon Kim, Sungbok Lee, Shrikanth S. Narayanan:
A detailed study of word-position effects on emotion expression in speech. INTERSPEECH 2009: 1987-1990 - [c204]Kyu Jeong Han, Shrikanth S. Narayanan:
Signature cluster model selection for incremental Gaussian mixture cluster modeling in agglomerative hierarchical speaker clustering. INTERSPEECH 2009: 2547-2550 - [c203]Joseph Tepperman, Louis Goldstein, Sungbok Lee, Shrikanth S. Narayanan:
Automatically rating pronunciation through articulatory phonology. INTERSPEECH 2009: 2771-2774 - [c202]Prasanta Kumar Ghosh, Shrikanth S. Narayanan, Pierre L. Divenyi, Louis Goldstein, Elliot Saltzman:
Estimation of articulatory gesture patterns from speech acoustics. INTERSPEECH 2009: 2803-2806 - [c201]Shrikanth S. Narayanan, Jorge F. Silva:
Histogram-based estimation for the divergence revisited. ISIT 2009: 468-472 - [c200]Dhaval Shah, Kyu Jeong Han, Shrikanth S. Narayanan:
A Low-Complexity Dynamic Face-Voice Feature Fusion Approach to Multimodal Person Recognition. ISM 2009: 24-31 - [c199]Ozlem Kalinli, Shiva Sundaram, Shrikanth S. Narayanan:
Saliency-driven unstructured acoustic scene classification using latent perceptual indexing. MMSP 2009: 1-6 - [c198]Samuel Kim, Shrikanth S. Narayanan, Shiva Sundaram:
Acoustic topic model for audio information retrieval. WASPAA 2009: 37-40 - [c197]Matthew Black, Jeannette N. Chang, Jonathan Chang, Shrikanth S. Narayanan:
Comparison of child-human and child-computer interactions based on manual annotations. WOCCI 2009: 49-54 - [c196]Serdar Yildirim, Shrikanth S. Narayanan:
Recognizing child's emotional state in problem-solving child-machine interactions. WOCCI 2009: 61-64 - [c195]Matteo Gerosa, Diego Giuliani, Shrikanth S. Narayanan, Alexandros Potamianos:
A review of ASR technologies for children's speech. WOCCI 2009: 89-96
- 2008
- [j32]Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, Shrikanth S. Narayanan:
IEMOCAP: interactive emotional dyadic motion capture database. Lang. Resour. Evaluation 42(4): 335-359 (2008) - [j31]Erik Bresch, Yoon-Chul Kim, Krishna S. Nayak, Dani Byrd, Shrikanth Narayanan:
Seeing speech: Capturing vocal tract shaping using real-time magnetic resonance imaging [Exploratory DSP]. IEEE Signal Process. Mag. 25(3): 123-132 (2008) - [j30]Joseph Tepperman, Shrikanth S. Narayanan:
Using Articulatory Representations to Detect Segmental Errors in Nonnative Pronunciation. IEEE Trans. Speech Audio Process. 16(1): 8-22 (2008) - [j29]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Automatic Prosodic Event Detection Using Acoustic, Lexical, and Syntactic Evidence. IEEE Trans. Speech Audio Process. 16(1): 216-228 (2008) - [j28]Erdem Unal, Elaine Chew, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Challenging Uncertainty in Query by Humming Systems: A Fingerprinting Approach. IEEE Trans. Speech Audio Process. 16(2): 359-371 (2008) - [j27]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework. IEEE Trans. Speech Audio Process. 16(4): 797-811 (2008) - [j26]Kyu Jeong Han, Samuel Kim, Shrikanth S. Narayanan:
Strategies to Improve the Robustness of Agglomerative Hierarchical Clustering Under Data Source Variation for Speaker Diarization. IEEE Trans. Speech Audio Process. 16(8): 1590-1601 (2008) - [j25]Chartchai Meesookho, Urbashi Mitra, Shrikanth S. Narayanan:
On Energy-Based Acoustic Source Localization for Sensor Networks. IEEE Trans. Signal Process. 56(1): 365-377 (2008) - [j24]Jorge F. Silva, Shrikanth S. Narayanan:
Upper Bound Kullback-Leibler Divergence for Transient Hidden Markov Models. IEEE Trans. Signal Process. 56(9): 4176-4188 (2008) - [c194]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Enriching Spoken Language Translation with Dialog Acts. ACL (2) 2008: 225-228 - [c193]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Mitigation of Data Sparsity in Classifier-Based Translation. SPSCTPA@COLING 2008: 1-4 - [c192]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Environmental sound recognition using MP-based features. ICASSP 2008: 1-4 - [c191]Shiva Sundaram, Shrikanth S. Narayanan:
Audio retrieval by latent perceptual indexing. ICASSP 2008: 49-52 - [c190]Shrikanth S. Narayanan, Girish Varatkar, Douglas L. Jones, Naresh R. Shanbhag:
Computation as estimation: Estimation-theoretic IC design improves robustness and reduces power consumption. ICASSP 2008: 1421-1424 - [c189]Emily Mower, Sungbok Lee, Maja J. Mataric, Shrikanth S. Narayanan:
Human perception of synthetic character emotions in the presence of conflicting and congruent vocal and facial expressions. ICASSP 2008: 2201-2204 - [c188]Ozlem Kalinli, Shrikanth S. Narayanan:
A top-down auditory attention model for learning task dependent influences on prominence detection in speech. ICASSP 2008: 3981-3984 - [c187]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
A novel algorithm for unsupervised prosodic language model adaptation. ICASSP 2008: 4181-4184 - [c186]Kyu Jeong Han, Shrikanth S. Narayanan:
Novel inter-cluster distance measure combining GLR and ICR for improved agglomerative hierarchical speaker clustering. ICASSP 2008: 4373-4376 - [c185]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Fine-grained pitch accent and boundary tone labeling with parametric F0 features. ICASSP 2008: 4545-4548 - [c184]Murtaza Bulut, Sungbok Lee, Shrikanth S. Narayanan:
Recognition for synthesis: Automatic parameter selection for resynthesis of emotional speech from neutral speech. ICASSP 2008: 4629-4632 - [c183]Sankaranarayanan Ananthakrishnan, Prasanta Kumar Ghosh, Shrikanth S. Narayanan:
Automatic classification of question turns in spontaneous speech using lexical and prosodic evidence. ICASSP 2008: 5005-5008 - [c182]Vivek Kumar Rangarajan Sridhar, Shrikanth S. Narayanan, Srinivas Bangalore:
Modeling the intonation of discourse segments for improved online dialog ACT tagging. ICASSP 2008: 5033-5036 - [c181]Matteo Gerosa, Shrikanth S. Narayanan:
Investigating automatic assessment of reading comprehension in young children. ICASSP 2008: 5057-5060 - [c180]Michael Grimm, Kristian Kroschel, Shrikanth S. Narayanan:
The Vera am Mittag German audio-visual emotional speech database. ICME 2008: 865-868 - [c179]Emily Mower, Sungbok Lee, Maja J. Mataric, Shrikanth S. Narayanan:
Joint-processing of audio-visual signals in human perception of conflicting synthetic character emotions. ICME 2008: 961-964 - [c178]Samuel Kim, Erdem Unal, Shrikanth S. Narayanan:
Music fingerprint extraction for classical music cover song identification. ICME 2008: 1261-1264 - [c177]Shiva Sundaram, Shrikanth S. Narayanan:
Classification of sound clips by two schemes: Using onomatopoeia and semantic labels. ICME 2008: 1341-1344 - [c176]Kyu Jeong Han, Shrikanth S. Narayanan:
Agglomerative hierarchical speaker clustering using incremental Gaussian mixture cluster modeling. INTERSPEECH 2008: 20-23 - [c175]Carlos Busso, Shrikanth S. Narayanan:
The expression and perception of emotions: comparing assessments of self versus others. INTERSPEECH 2008: 257-260 - [c174]Ozlem Kalinli, Shrikanth S. Narayanan:
Combining task-dependent information with auditory attention cues for prominence detection in speech. INTERSPEECH 2008: 1064-1067 - [c173]Carlos Busso, Shrikanth S. Narayanan:
Scripted dialogs versus improvisation: lessons learned about emotional elicitation techniques from the IEMOCAP database. INTERSPEECH 2008: 1670-1673 - [c172]Chi-Chun Lee, Sungbok Lee, Shrikanth S. Narayanan:
An analysis of multimodal cues of interruption in dyadic spoken interactions. INTERSPEECH 2008: 1678-1681 - [c171]Joseph Tepperman, Shrikanth S. Narayanan:
Better nonnative intonation scores through prosodic theory. INTERSPEECH 2008: 1813-1816 - [c170]Joseph Tepperman, Shrikanth S. Narayanan:
Tree grammars as models of prosodic structure. INTERSPEECH 2008: 2286-2289 - [c169]Sungbok Lee, Tsuneo Kato, Shrikanth S. Narayanan:
Relation between geometry and kinematics of articulatory trajectory associated with emotional speech production. INTERSPEECH 2008: 2290-2293 - [c168]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Factored translation models for enriching spoken language translation with prosody. INTERSPEECH 2008: 2723-2726 - [c167]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Towards unsupervised training of the classifier-based speech translator. INTERSPEECH 2008: 2739-2742 - [c166]Abe Kazemzadeh, Sungbok Lee, Shrikanth S. Narayanan:
An interval type-2 fuzzy logic system to translate between emotion-related vocabularies. INTERSPEECH 2008: 2747-2750 - [c165]Matthew Black, Joseph Tepperman, Sungbok Lee, Shrikanth S. Narayanan:
Estimation of children's reading ability by fusion of automatic pronunciation verification and fluency detection. INTERSPEECH 2008: 2779-2782 - [c164]Matthew Black, Joseph Tepperman, Abe Kazemzadeh, Sungbok Lee, Shrikanth S. Narayanan:
Pronunciation verification of English letter-sounds in preliterate children. INTERSPEECH 2008: 2783-2786 - [c163]Erik Bresch, Daylen Riggs, Louis M. Goldstein, Dani Byrd, Sungbok Lee, Shrikanth S. Narayanan:
An analysis of vocal tract shaping in English sibilant fricatives using real-time magnetic resonance imaging. INTERSPEECH 2008: 2823-2826 - [c162]Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan:
Selection of Emotionally Salient Audio-Visual Features for Modeling Human Evaluations of Synthetic Character Emotion Displays. ISM 2008: 190-195 - [c161]Angeliki Metallinou, Sungbok Lee, Shrikanth S. Narayanan:
Audio-Visual Emotion Recognition Using Gaussian Mixture Models for Face and Voice. ISM 2008: 250-257 - [c160]Viktor Rozgic, Kyu Jeong Han, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multimodal Speaker Segmentation in Presence of Overlapped Speech Segments. ISM 2008: 679-684 - [c159]Kyu Jeong Han, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
The SAIL speaker diarization system for analysis of spontaneous meetings. MMSP 2008: 966-971 - [c158]Samuel Kim, Shrikanth S. Narayanan:
Dynamic chroma feature vectors with applications to cover song identification. MMSP 2008: 984-987 - [c157]Vivek Kumar Rangarajan Sridhar, Shrikanth S. Narayanan, Srinivas Bangalore:
Incorporating discourse context in spoken language translation through dialog acts. SLT 2008: 269-272 - [c156]Matthew P. Black, Jeannette N. Chang, Shrikanth S. Narayanan:
An empirical analysis of user uncertainty in problem-solving child-machine interactions. WOCCI 2008: 1 - [c155]Vassiliki Farantouri, Alexandros Potamianos, Shrikanth S. Narayanan:
Linguistic analysis of spontaneous children speech. WOCCI 2008: 4 - [c154]Joseph Tepperman, Matteo Gerosa, Shrikanth S. Narayanan:
A generative model for scoring children's reading comprehension. WOCCI 2008: 16
- 2007
- [j23]Soonil Kwon, Shrikanth S. Narayanan:
Robust speaker identification based on selective use of feature vectors. Pattern Recognit. Lett. 28(1): 85-89 (2007) - [j22]Michael Grimm, Kristian Kroschel, Emily Mower, Shrikanth S. Narayanan:
Primitives-based evaluation and estimation of emotions in speech. Speech Commun. 49(10-11): 787-800 (2007) - [j21]Dagen Wang, Shrikanth S. Narayanan:
An Acoustic Measure for Word Prominence in Spontaneous Speech. IEEE Trans. Speech Audio Process. 15(2): 690-701 (2007) - [j20]Carlos Busso, Zhigang Deng, Michael Grimm, Ulrich Neumann, Shrikanth S. Narayanan:
Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis. IEEE Trans. Speech Audio Process. 15(3): 1075-1086 (2007) - [j19]Dagen Wang, Shrikanth S. Narayanan:
Robust Speech Rate Estimation for Spontaneous Speech. IEEE Trans. Speech Audio Process. 15(8): 2190-2201 (2007) - [j18]Carlos Busso, Shrikanth S. Narayanan:
Interrelation Between Speech and Facial Gestures in Emotional Utterances: A Single Subject Study. IEEE Trans. Speech Audio Process. 15(8): 2331-2347 (2007) - [c153]Kyu Jeong Han, Samuel Kim, Shrikanth S. Narayanan:
Robust speaker clustering strategies to data source variation for improved speaker diarization. ASRU 2007: 262-267 - [c152]Ozlem Kalinli, Shrikanth S. Narayanan:
Early auditory processing inspired features for robust automatic speech recognition. EUSIPCO 2007: 2385-2389 - [c151]Abhinav Sethy, Shrikanth S. Narayanan, Bhuvana Ramabhadran:
Data Driven Approach for Language Model Adaptation using Stepwise Relative Entropy Minimization. ICASSP (4) 2007: 177-180 - [c150]Shiva Sundaram, Shrikanth S. Narayanan:
Discriminating Two Types of Noise Sources using Cortical Representation and Dimension Reduction Technique. ICASSP (1) 2007: 213-216 - [c149]Jorge F. Silva, Vivek Kumar Rangarajan Sridhar, Viktor Rozgic, Shrikanth S. Narayanan:
Information Theoretic Analysis of Direct Articulatory Measurements for Phonetic Discrimination. ICASSP (4) 2007: 457-460 - [c148]Carlos Busso, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Real-Time Monitoring of Participants' Interaction in a Meeting using Audio-Visual Sensors. ICASSP (2) 2007: 685-688 - [c147]Shiva Sundaram, Shrikanth S. Narayanan:
Analysis of Audio Clustering using Word Descriptions. ICASSP (2) 2007: 769-772 - [c146]Jorge F. Silva, Shrikanth S. Narayanan:
Optimal Wavelet Packets Decomposition Based on a Rate-Distortion Optimality Criterion. ICASSP (3) 2007: 817-820 - [c145]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Improved Speech Recognition using Acoustic and Lexical Correlates of Pitch Accent in a N-Best Rescoring Framework. ICASSP (4) 2007: 873-876 - [c144]Michael Grimm, Kristian Kroschel, Shrikanth S. Narayanan:
Support Vector Regression for Automatic Recognition of Spontaneous Emotions in Speech. ICASSP (4) 2007: 1085-1088 - [c143]Murtaza Bulut, Sungbok Lee, Shrikanth S. Narayanan:
A Statistical Approach for Modeling Prosody Features using POS Tags for Emotional Speech Synthesis. ICASSP (4) 2007: 1237-1240 - [c142]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Exploiting prosodic features for dialog act tagging in a discriminative modeling framework. INTERSPEECH 2007: 150-153 - [c141]Matthew Black, Joseph Tepperman, Sungbok Lee, Patti Price, Shrikanth S. Narayanan:
Automatic detection and classification of disfluent reading miscues in young children's speech for the purpose of assessment. INTERSPEECH 2007: 206-209 - [c140]Murtaza Bulut, Sungbok Lee, Shrikanth S. Narayanan:
Analysis of emotional speech prosody in terms of part of speech tags. INTERSPEECH 2007: 626-629 - [c139]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Prosody-enriched lattices for improved syllable recognition. INTERSPEECH 2007: 1813-1816 - [c138]Kyu Jeong Han, Shrikanth S. Narayanan:
A robust stopping criterion for agglomerative hierarchical clustering in a speaker diarization system. INTERSPEECH 2007: 1853-1856 - [c137]Ozlem Kalinli, Shrikanth S. Narayanan:
A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. INTERSPEECH 2007: 1941-1944 - [c136]Joseph Tepperman, Abe Kazemzadeh, Shrikanth S. Narayanan:
A text-free approach to assessing nonnative intonation. INTERSPEECH 2007: 2169-2172 - [c135]Joseph Tepperman, Matthew Black, Patti Price, Sungbok Lee, Abe Kazemzadeh, Matteo Gerosa, Margaret Heritage, Abeer Alwan, Shrikanth S. Narayanan:
A Bayesian network classifier for word-level reading assessment. INTERSPEECH 2007: 2185-2188 - [c134]Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan:
Using neutral speech models for emotional speech analysis. INTERSPEECH 2007: 2225-2228 - [c133]Prasanta Kumar Ghosh, Antonio Ortega, Shrikanth S. Narayanan:
Pitch period estimation using multipulse model and wavelet transform. INTERSPEECH 2007: 2761-2764 - [c132]Jorge F. Silva, Shrikanth S. Narayanan:
Universal Consistency of Data-Driven Partitions for Divergence Estimation. ISIT 2007: 2021-2025 - [c131]Alexandros Potamianos, Shrikanth S. Narayanan:
A review of the acoustic and linguistic properties of children's speech. MMSP 2007: 22-25 - [c130]Abeer Alwan, Yijian Bai, Matthew Black, Larry Casey, Matteo Gerosa, Margaret Heritage, Markus Iseli, Barbara Jones, Abe Kazemzadeh, Sungbok Lee, Shrikanth S. Narayanan, Patti Price, Joseph Tepperman, Shizhen Wang:
A System for Technology Based Assessment of Language and Literacy in Young Children: the Role of Multiple Information Sources. MMSP 2007: 26-30 - [c129]Carlos Busso, Shrikanth S. Narayanan:
Joint Analysis of the Emotional Fingerprint in the Face and Speech: A single subject study. MMSP 2007: 43-47 - [c128]Samuel Kim, Panayiotis G. Georgiou, Sungbok Lee, Shrikanth S. Narayanan:
Real-time Emotion Detection System using Speech: Multi-modal Fusion of Different Timescale Features. MMSP 2007: 48-51 - [c127]Viktor Rozgic, Carlos Busso, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Multimodal Meeting Monitoring: Improvements on Speaker Tracking and Segmentation through a Modified Mixture Particle Filter. MMSP 2007: 60-65 - [c126]Shiva Sundaram, Shrikanth S. Narayanan:
Experiments in Automatic Genre Classification of Full-length Music Tracks using Audio Activity Rate. MMSP 2007: 98-102 - [c125]JongHo Shin, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Analyzing the Multimodal Behaviors of Users of a Speech-to-Speech Translation Device by using Concept Matching Scores. MMSP 2007: 259-263 - [c124]Erdem Ünal, Panayiotis G. Georgiou, Shrikanth S. Narayanan, Elaine Chew:
Statistical Modeling and Retrieval of Polyphonic Music. MMSP 2007: 405-409 - [c123]Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, Shrikanth S. Narayanan:
Exploiting Acoustic and Syntactic Features for Prosody Labeling in a Maximum Entropy Framework. HLT-NAACL 2007: 1-8 - [c122]Emily Mower, David Feil-Seifer, Maja J. Mataric, Shrikanth S. Narayanan:
Investigating Implicit Cues for User State Estimation in Human-Robot Interaction Using Physiological Measurements. RO-MAN 2007: 1125-1130 - [c121]David R. Traum, Antonio Roque, Anton Leuski, Panayiotis G. Georgiou, Jillian Gerten, Bilyana Martinovski, Shrikanth Narayanan, Susan Robinson, Ashish Vaswani:
Hassan: A Virtual Human for Tactical Questioning. SIGdial 2007: 71-74
- 2006
- [j17]Naveen Srinivasamurthy, Antonio Ortega, Shrikanth S. Narayanan:
Efficient scalable encoding for distributed speech recognition. Speech Commun. 48(8): 888-902 (2006) - [j16]Abhinav Sethy, Shrikanth S. Narayanan, S. Parthasarthy:
A split lexicon approach for improved recognition of spoken names. Speech Commun. 48(9): 1126-1136 (2006) - [j15]Jorge F. Silva, Shrikanth S. Narayanan:
Average divergence distance as a statistical discrimination measure for hidden Markov models. IEEE Trans. Speech Audio Process. 14(3): 890-906 (2006) - [j14]Zhigang Deng, Ulrich Neumann, John P. Lewis, Tae-Yong Kim, Murtaza Bulut, Shrikanth S. Narayanan:
Expressive Facial Animation Synthesis by Learning Speech Coarticulation and Expression Spaces. IEEE Trans. Vis. Comput. Graph. 12(6): 1523-1534 (2006) - [c120]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Content Analysis for Acoustic Environment Classification in Mobile Robots. AAAI Fall Symposium: Aurally Informed Performance 2006: 16-21 - [c119]Shiva Sundaram, Shrikanth S. Narayanan:
Vector-based Representation and Clustering of Audio Using Onomatopoeia Words. AAAI Fall Symposium: Aurally Informed Performance 2006: 55- - [c118]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Efficient Rotation Invariant Retrieval of Shapes with Applications in Medical Databases. CBMS 2006: 673-678 - [c117]Alireza A. Dibazar, Theodore W. Berger, Shrikanth S. Narayanan:
Pathological Voice Assessment. EMBC 2006: 1669-1673 - [c116]Abhinav Sethy, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Text data acquisition for domain-specific language models. EMNLP 2006: 382-389 - [c115]Michael Grimm, Emily Mower Provost, Kristian Kroschel, Shrikanth S. Narayanan:
Combining categorical and primitives-based emotion recognition. EUSIPCO 2006: 1-5 - [c114]Vivek Rangarajan, Shrikanth S. Narayanan:
Analysis of disfluent repetitions in spontaneous speech recognition. EUSIPCO 2006: 1-5 - [c113]David Sündermann, Harald Höge, Antonio Bonafonte, Hermann Ney, Alan W. Black, Shrikanth S. Narayanan:
Text-Independent Voice Conversion Based on Unit Selection. ICASSP (1) 2006: 81-84 - [c112]Chuping Liu, Qian-Jie Fu, Shrikanth S. Narayanan:
Smooth Gmm Based Multi-Talker Spectral Conversion for Spectrally Degraded Speech. ICASSP (5) 2006: 141-144 - [c111]Matteo Gerosa, Sungbok Lee, Diego Giuliani, Shrikanth S. Narayanan:
Analyzing Children's Speech: An Acoustic Study of Consonants and Consonant-Vowel Transition. ICASSP (1) 2006: 393-396 - [c110]Shrikanth S. Narayanan, Panayiotis G. Georgiou, Abhinav Sethy, Dagen Wang, Murtaza Bulut, Shiva Sundaram, Emil Ettelaie, Sankaranarayanan Ananthakrishnan, Horacio Franco, Kristin Precoda, Dimitra Vergyri, Jing Zheng, Wen Wang, Venkata Ramana Rao Gadde, Martin Graciarena, Victor Abrash, Michael W. Frandsen, Colleen Richey:
Speech Recognition Engineering Issues in Speech to Speech Translation System Design for Low Resource Languages and Domains. ICASSP (5) 2006: 1209-1212 - [c109]Selina Chu, Shrikanth S. Narayanan, C.-C. Jay Kuo, Maja J. Mataric:
Where am I? Scene Recognition for Mobile Robots using Audio Features. ICME 2006: 885-888 - [c108]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
Combining acoustic, lexical, and syntactic evidence for automatic unsupervised prosody labeling. INTERSPEECH 2006 - [c107]Emil Ettelaie, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Cross-lingual dialog model for speech to speech translation. INTERSPEECH 2006 - [c106]Matteo Gerosa, Diego Giuliani, Shrikanth S. Narayanan:
Acoustic analysis and automatic recognition of spontaneous children's speech. INTERSPEECH 2006 - [c105]Abe Kazemzadeh, Joseph Tepperman, Jorge F. Silva, Hong You, Sungbok Lee, Abeer Alwan, Shrikanth S. Narayanan:
Automatic detection of voice onset time contrasts for use in pronunciation assessment. INTERSPEECH 2006 - [c104]Sungbok Lee, Erik Bresch, Jason Adams, Abe Kazemzadeh, Shrikanth S. Narayanan:
A study of emotional speech articulation using a fast magnetic resonance imaging technique. INTERSPEECH 2006 - [c103]Antonio Roque, Anton Leuski, Vivek Kumar Rangarajan Sridhar, Susan Robinson, Ashish Vaswani, Shrikanth S. Narayanan, David R. Traum:
Radiobot-CFF: a spoken dialogue system for military training. INTERSPEECH 2006 - [c102]Joseph Tepperman, Jorge F. Silva, Abe Kazemzadeh, Hong You, Sungbok Lee, Abeer Alwan, Shrikanth S. Narayanan:
Pronunciation verification of children's speech for automatic literacy assessment. INTERSPEECH 2006 - [c101]Joseph Tepperman, David R. Traum, Shrikanth S. Narayanan:
"yeah right": sarcasm recognition for spoken dialogue systems. INTERSPEECH 2006 - [c100]Jorge F. Silva, Shrikanth S. Narayanan:
Upper Bound Kullback-Leibler Divergence for Hidden Markov Models with Application as Discrimination Measure for Speech Recognition. ISIT 2006: 2299-2303 - [c99]JongHo Shin, Panayiotis G. Georgiou, Shrikanth Narayanan:
User modeling in a speech translation driven mediated interaction setting. HCM@MM 2006: 75-80 - [c98]Abe Kazemzadeh, Sungbok Lee, Shrikanth Narayanan:
Using model trees for evaluating dialog error conditions based on acoustic information. HCM@MM 2006: 109-114 - [c97]Shiva Sundaram, Shrikanth S. Narayanan:
An attribute-based approach to audio description applied to segmenting vocal sections in popular music songs. MMSP 2006: 103-107 - [c96]Abhinav Sethy, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Selecting relevant text subsets from web-data for building topic specific language models. HLT-NAACL 2006 - [c95]Vivek Kumar Rangarajan Sridhar, Shrikanth S. Narayanan, Srinivas Bangalore:
Acoustic-Syntactic Maximum Entropy Model for Automatic prosody Labeling. SLT 2006: 74-77
- 2005
- [j13]Carlos Busso, Zhigang Deng, Ulrich Neumann, Shrikanth S. Narayanan:
Natural head motion synthesis driven by acoustic prosodic features. Comput. Animat. Virtual Worlds 16(3-4): 283-290 (2005) - [j12]Erdem Unal, Shrikanth S. Narayanan, Hsuan-Huei Shih, Elaine Chew, C.-C. Jay Kuo:
Creating data resources for designing usercentric frontends for query-by-humming systems. Multim. Syst. 10(6): 475-483 (2005) - [j11]Athanasios Mouchtaris, Shrikanth S. Narayanan, Chris Kyriakakis:
Multichannel audio synthesis by subband-based spectral conversion and parameter adaptation. IEEE Trans. Speech Audio Process. 13(2): 263-274 (2005) - [j10]Chul Min Lee, Shrikanth S. Narayanan:
Toward detecting emotions in spoken dialogs. IEEE Trans. Speech Audio Process. 13(2): 293-303 (2005) - [j9]Alexandros Potamianos, Shrikanth S. Narayanan, Giuseppe Riccardi:
Adaptive categorical understanding for spoken dialogue systems. IEEE Trans. Speech Audio Process. 13(3): 321-329 (2005) - [j8]S. Kwon, Shri Narayanan:
Unsupervised Speaker Indexing Using Generic Models. IEEE Trans. Speech Audio Process. 13(5-2): 1004-1013 (2005) - [c94]Robert S. Belvin, Emil Ettelaie, Sudeep Gandhe, Panayiotis G. Georgiou, Kevin Knight, Daniel Marcu, Scott Millward, Shrikanth S. Narayanan, Howard Neely, David R. Traum:
Transonics: A Practical Speech-to-Speech Translator for English-Farsi Medical Dialogs. ACL 2005: 89-92 - [c93]Sankaranarayanan Ananthakrishnan, Shrikanth S. Narayanan:
An Automatic Prosody Recognizer using a Coupled Multi-Stream Acoustic Model and a Syntactic-Prosodic Language Model. ICASSP (1) 2005: 269-272 - [c92]Dagen Wang, Shrikanth S. Narayanan:
An Unsupervised Quantitative Measure for Word Prominence in Spontaneous Speech. ICASSP (1) 2005: 377-380 - [c91]Shrikanth S. Narayanan, Dagen Wang:
Speech Rate Estimation via Temporal Correlation and Selected Sub-Band Correlation. ICASSP (1) 2005: 413-416 - [c90]Joseph Tepperman, Shrikanth S. Narayanan:
Automatic Syllable Stress Detection Using Prosodic Features for Pronunciation Evaluation of Language Learners. ICASSP (1) 2005: 937-940 - [c89]Carlos Busso, Sergi Hernanz, Chi-Wei Chu, Soonil Kwon, Sung Lee, Panayiotis G. Georgiou, Isaac Cohen, Shrikanth S. Narayanan:
Smart room: participant and speaker localization and identification. ICASSP (2) 2005: 1117-1120 - [c88]Abhinav Sethy, Shrikanth S. Narayanan, Nicolaus Mote, W. Lewis Johnson:
Modeling and automating detection of errors in Arabic language learner speech. INTERSPEECH 2005: 177-180 - [c87]Sungbok Lee, Serdar Yildirim, Abe Kazemzadeh, Shrikanth S. Narayanan:
An articulatory study of emotional speech production. INTERSPEECH 2005: 497-500 - [c86]Hong You, Abeer Alwan, Abe Kazemzadeh, Shrikanth S. Narayanan:
Pronunciation variations of Spanish-accented English spoken by young children. INTERSPEECH 2005: 749-752 - [c85]Murtaza Bulut, Carlos Busso, Serdar Yildirim, Abe Kazemzadeh, Chul Min Lee, Sungbok Lee, Shrikanth S. Narayanan:
Investigating the role of phoneme-level modifications in emotional speech resynthesis. INTERSPEECH 2005: 801-804 - [c84]Abhinav Sethy, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Building topic specific language models from webdata using competitive models. INTERSPEECH 2005: 1293-1296 - [c83]Abe Kazemzadeh, Hong You, Markus Iseli, Barbara Jones, Xiaodong Cui, Margaret Heritage, Patti Price, Elaine Andersen, Shrikanth S. Narayanan, Abeer Alwan:
TBALL data collection: the making of a young children's speech corpus. INTERSPEECH 2005: 1581-1584 - [c82]Serdar Yildirim, Chul Min Lee, Sungbok Lee, Alexandros Potamianos, Shrikanth S. Narayanan:
Detecting Politeness and frustration state of a child in a conversational computer game. INTERSPEECH 2005: 2209-2212 - [c81]Dagen Wang, Shrikanth S. Narayanan:
Piecewise linear stylization of pitch via wavelet analysis. INTERSPEECH 2005: 3277-3280 - [c80]David R. Traum, William R. Swartout, Jonathan Gratch, Stacy Marsella, Patrick G. Kenny, Eduard H. Hovy, Shri Narayanan, Ed Fast, Bilyana Martinovski, Rahul Baghat, Susan Robinson, Andrew Marshall, Dagen Wang, Sudeep Gandhe, Anton Leuski:
Dealing with Doctors: A Virtual Human for Non-team Interaction. SIGDIAL Workshop 2005: 232-236
- 2004
- [j7]Ying Li, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Adaptive speaker identification with audiovisual cues for movie content analysis. Pattern Recognit. Lett. 25(7): 777-791 (2004) - [j6]Sadaoki Furui, Mary E. Beckman, Julia Hirschberg, Shuichi Itahashi, Tatsuya Kawahara, Satoshi Nakamura, Shrikanth S. Narayanan:
Introduction to the Special Issue on Spontaneous Speech Processing. IEEE Trans. Speech Audio Process. 12(4): 349-350 (2004) - [j5]Ying Li, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Content-based movie analysis and indexing based on audiovisual cues. IEEE Trans. Circuits Syst. Video Technol. 14(8): 1073-1085 (2004) - [c79]Shri Narayanan, S. Ananthakrishnan, Robert S. Belvin, Emil Ettelaie, Sudeep Gandhe, Shadi Ganjavi, Panayiotis G. Georgiou, C. M. Hein, S. Kadambe, Kevin Knight, Daniel Marcu, Howard Neely, Naveen Srinivasamurthy, David R. Traum, Dagen Wang:
The Transonics Spoken Dialogue Translator: An Aid for English-Persian Doctor-Patient Interviews. AAAI Technical Report (4) 2004: 97-103 - [c78]Farhad Farahani, Panayiotis G. Georgiou, Shrikanth S. Narayanan:
Speaker identification using supra-segmental pitch pattern dynamics. ICASSP (1) 2004: 89-92 - [c77]Naveen Srinivasamurthy, Antonio Ortega, Shrikanth S. Narayanan:
Enhanced standard compliant distributed speech recognition (Aurora encoder) using rate allocation. ICASSP (1) 2004: 485-488 - [c76]Dagen Wang, Shrikanth S. Narayanan:
A multi-pass linear fold algorithm for sentence boundary detection using prosodic cues. ICASSP (1) 2004: 525-532 - [c75]Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth S. Narayanan:
Analysis of emotion recognition using facial expressions, speech and multimodal information. ICMI 2004: 205-211 - [c74]Naveen Srinivasamurthy, Kyu Jeong Han, Shrikanth S. Narayanan:
Robust speech recognition over packet networks: an overview. INTERSPEECH 2004: 621-624 - [c73]Jorge F. Silva, Shrikanth S. Narayanan:
A statistical discrimination measure for hidden Markov models based on divergence. INTERSPEECH 2004: 657-660 - [c72]Panayiotis G. Georgiou, Shrikanth S. Narayanan, Hooman Shirani Mehr:
Context dependent statistical augmentation of persian transcripts. INTERSPEECH 2004: 853-856 - [c71]Chul Min Lee, Serdar Yildirim, Murtaza Bulut, Abe Kazemzadeh, Carlos Busso, Zhigang Deng, Sungbok Lee, Shrikanth S. Narayanan:
Emotion recognition based on phoneme classes. INTERSPEECH 2004: 889-892 - [c70]Abhinav Sethy, Shrikanth S. Narayanan, Bhuvana Ramabhadran:
Measuring convergence in language model estimation using relative entropy. INTERSPEECH 2004: 1057-1060 - [c69]Heiga Zen, Tadashi Kitamura, Murtaza Bulut, Shrikanth S. Narayanan, Ryosuke Tsuzuki, Keiichi Tokuda:
Constructing emotional speech synthesizers with limited speech database. INTERSPEECH 2004: 1185-1188 - [c68]Soonil Kwon, Shrikanth S. Narayanan:
Speaker model quantization for unsupervised speaker indexing. INTERSPEECH 2004: 1517-1520 - [c67]Simona Montanari, Serdar Yildirim, Elaine Andersen, Shrikanth S. Narayanan:
Reference marking in children's computer-directed speech: an integrated analysis of discourse and gestures. INTERSPEECH 2004: 1841-1844 - [c66]Kyu Jeong Han, Shrikanth S. Narayanan, Naveen Srinivasamurthy:
A distributed speech recognition system in multi-user environments. INTERSPEECH 2004: 2121-2124 - [c65]Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Zhigang Deng, Sungbok Lee, Shrikanth S. Narayanan, Carlos Busso:
An acoustic study of emotions expressed in speech. INTERSPEECH 2004: 2193-2196 - [c64]W. Lewis Johnson, Carole R. Beal, Anna Fowles-Winkler, Ursula Lauper, Stacy Marsella, Shrikanth S. Narayanan, Dimitra Papachristou, Hannes Högni Vilhjálmsson:
Tactical Language Training System: An Interim Report. Intelligent Tutoring Systems 2004: 336-345 - [c63]Robert S. Melvin, Win May, Shrikanth S. Narayanan, Panayiotis G. Georgiou, Shadi Ganjavi:
Creation of a Doctor-Patient Dialogue Corpus Using Standardized Patients. LREC 2004 - [c62]Erdem Unal, Shrikanth S. Narayanan, Elaine Chew:
A statistical approach to retrieval under user-dependent uncertainty in query-by-humming systems. Multimedia Information Retrieval 2004: 113-118 - [c61]Zhigang Deng, Shri Narayanan, Carlos Busso, Ulrich Neumann:
Audio-based head motion synthesis for Avatar-based telepresence systems. ETP@MM 2004: 24-30
- 2003
- [j4]Athanasios Mouchtaris, Shrikanth S. Narayanan, Chris Kyriakakis:
Virtual Microphones for Multichannel Audio Resynthesis. EURASIP J. Adv. Signal Process. 2003(10): 968-979 (2003) - [j3]Alexandros Potamianos, Shrikanth S. Narayanan:
Robust recognition of children's speech. IEEE Trans. Speech Audio Process. 11(6): 603-616 (2003) - [c60]Serdar Yildirim, Shrikanth S. Narayanan:
An information-theoretic analysis of developmental changes in speech. ICASSP (1) 2003: 480-483 - [c59]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Multidimensional humming transcription using a statistical approach for query by humming systems. ICASSP (5) 2003: 541-544 - [c58]Abhinav Sethy, Shrikanth S. Narayanan:
Split-lexicon based hierarchical recognition of speech using syllable and word level acoustic units. ICASSP (1) 2003: 772-775 - [c57]Ying Li, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Audiovisual-based adaptive speaker identification. ICASSP (5) 2003: 812-815 - [c56]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
A statistical multidimensional humming transcription using phone level hidden Markov models for query by humming systems. ICME 2003: 61-64 - [c55]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Multidimensional humming transcription using a statistical approach for query by humming systems. ICME 2003: 385-388 - [c54]Ying Li, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Audiovisual-based adaptive speaker identification. ICME 2003: 565-568 - [c53]Chul Min Lee, Shrikanth S. Narayanan:
Emotion recognition using a data-driven fuzzy inference system. INTERSPEECH 2003: 157-160 - [c52]Naveen Srinivasamurthy, Antonio Ortega, Shrikanth S. Narayanan:
Towards optimal encoding for classification with applications to distributed speech recognition. INTERSPEECH 2003: 1113-1116 - [c51]Shiva Sundaram, Shrikanth S. Narayanan:
An empirical text transformation method for spontaneous speech synthesizers. INTERSPEECH 2003: 1221-1224 - [c50]Soonil Kwon, Shrikanth S. Narayanan:
A method for on-line speaker indexing using generic reference models. INTERSPEECH 2003: 2653-2656 - [c49]Naveen Srinivasamurthy, Shrikanth S. Narayanan:
Language-adaptive persian speech recognition. INTERSPEECH 2003: 3137-3140 - [c48]Erdem Unal, Shrikanth S. Narayanan, Hsuan-Huei Shih, Elaine Chew, C.-C. Jay Kuo:
Creating data resources for designing user-centric frontends for query by humming systems. Multimedia Information Retrieval 2003: 116-121 - [c47]Sriram Mahadevan, Shrikanth S. Narayanan:
Handling real-time scheduling exceptions using decision support systems. SMC 2003: 931-936
- 2002
- [j2]Shrikanth S. Narayanan, Alexandros Potamianos:
Creating conversational interfaces for children. IEEE Trans. Speech Audio Process. 10(2): 65-78 (2002) - [c46]Athanasios Mouchtaris, Shrikanth S. Narayanan, Chris Kyriakakis:
Efficient multichannel audio resynthesis by subband-based spectral conversion. EUSIPCO 2002: 1-4 - [c45]Ying Li, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Identification of speakers in movie dialogs using audiovisual cues. ICASSP 2002: 2093-2096 - [c44]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
A statistical approach to humming recognition. ICASSP 2002: 4175 - [c43]Athanasios Mouchtaris, Shrikanth S. Narayanan, Chris Kyriakakis:
Multiresolution spectral conversion for multichannel audio resynthesis. ICME (2) 2002: 273-276 - [c42]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
An HMM-based approach to humming transcription. ICME (1) 2002: 337-340 - [c41]Chul Min Lee, Shrikanth S. Narayanan, Roberto Pieraccini:
Classifying emotions in human-machine spoken dialogs. ICME (1) 2002: 737-740 - [c40]Abhinav Sethy, Shrikanth S. Narayanan:
Refined speech segmentation for concatenative speech synthesis. INTERSPEECH 2002: 149-152 - [c39]Chul Min Lee, Shrikanth S. Narayanan, Roberto Pieraccini:
Combining acoustic and language information for emotion recognition. INTERSPEECH 2002: 873-876 - [c38]Murtaza Bulut, Shrikanth S. Narayanan, Ann K. Syrdal:
Expressive speech synthesis using a concatenative synthesizer. INTERSPEECH 2002: 1265-1268 - [c37]JongHo Shin, Shrikanth S. Narayanan, Laurie Gerber, Abe Kazemzadeh, Dani Byrd:
Analysis of user behavior under error conditions in spoken dialogs. INTERSPEECH 2002: 2069-2072 - [c36]Soonil Kwon, Shrikanth S. Narayanan:
Speaker change detection using a new weighted distance measure. INTERSPEECH 2002: 2537-2540 - [c35]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
Comparison of dictionary-based approaches to automatic repeating melody extraction. Storage and Retrieval for Media Databases 2002: 306-317
- 2001
- [c34]Dawn Dutton, Selina Chu, James Hubbell, Marilyn A. Walker, Shrikanth S. Narayanan:
Just (all) the facts, ma'am. CHI Extended Abstracts 2001: 133-134 - [c33]Richard C. Rose, Sarangarajan Parthasarathy, Bojana Gajic, Aaron E. Rosenberg, Shrikanth S. Narayanan:
On the implementation of ASR algorithms for hand-held wireless mobile devices. ICASSP 2001: 17-20 - [c32]Hsuan-Huei Shih, Shrikanth S. Narayanan, C.-C. Jay Kuo:
A Dictionary Approach To Repetitive Pattern Finding In Music. ICME 2001 - [c31]Marilyn A. Walker, John S. Aberdeen, Julie E. Boland, Elizabeth Owen Bratt, John S. Garofolo, Lynette Hirschman, Audrey N. Le, Sungbok Lee, Shrikanth S. Narayanan, Kishore Papineni, Bryan L. Pellom, Joseph Polifroni, Alexandros Potamianos, P. Prabhu, Alexander I. Rudnicky, Gregory A. Sanders, Stephanie Seneff, David Stallard, Steve Whittaker:
DARPA communicator dialog travel planning systems: the june 2000 data collection. INTERSPEECH 2001: 1371-1374 - [c30]Naveen Srinivasamurthy, Antonio Ortega, Shrikanth S. Narayanan:
Efficient scalable speech compression for scalable speech recognition. INTERSPEECH 2001: 1845-1848 - [c29]Sudha Arunachalam, Dylan Gould, Elaine Andersen, Dani Byrd, Shrikanth S. Narayanan:
Politeness and frustration language in child-machine interactions. INTERSPEECH 2001: 2675-2678 - [c28]Dawn Dutton, Marilyn A. Walker, Selina Chu, James Hubbell, Shrikanth S. Narayanan:
Amount of Information Presented in a Complex List: Effects on User Performance. HLT 2001
- 2000
- [j1]Shrikanth S. Narayanan, Abeer Alwan:
Noise source models for fricative consonants. IEEE Trans. Speech Audio Process. 8(3): 328-344 (2000) - [c27]Giuseppe Di Fabbrizio, Shrikanth S. Narayanan, P. Ruscitti, Candace A. Kamm, Bruce Buntschuh, James Hubbell, Jeremy H. Wright, Janna S. Hamaker:
Unifying Conversational Multimedia Interfaces for Accessing Network Services Across Communication Devices. IEEE International Conference on Multimedia and Expo (II) 2000: 653-656 - [c26]Esther Levin, Shrikanth S. Narayanan, Roberto Pieraccini, Konstantin Biatov, Enrico Bocchieri, Giuseppe Di Fabbrizio, Wieland Eckert, Sungbok Lee, A. Pokrovsky, Mazin G. Rahim, P. Ruscitti, Marilyn A. Walker:
The AT&t-DARPA communicator mixed-initiative spoken dialog system. INTERSPEECH 2000: 122-125 - [c25]Shrikanth S. Narayanan, Giuseppe Di Fabbrizio, Candace A. Kamm, James Hubbell, Bruce Buntschuh, P. Ruscitti, Jerry H. Wright:
Effects of dialog initiative and multi-modal presentation strategies on large directory information access. INTERSPEECH 2000: 636-639 - [c24]Giuseppe Di Fabbrizio, Shrikanth S. Narayanan:
Web-based monitoring, logging and reporting tools for multi-service multi-modal systems. INTERSPEECH 2000: 736-739 - [c23]Mazin G. Rahim, Roberto Pieraccini, Wieland Eckert, Esther Levin, Giuseppe Di Fabbrizio, Giuseppe Riccardi, Candace A. Kamm, Shrikanth S. Narayanan:
A spoken dialogue system for conference/workshop services. INTERSPEECH 2000: 1041-1044
1990 – 1999
- 1999
- [c22]Shrikanth S. Narayanan, Alexandros Potamianos, Haohong Wang:
Multimodal systems for children: building a prototype. EUROSPEECH 1999: 1727-1730 - [c21]Alexandros Potamianos, Giuseppe Riccardi, Shrikanth S. Narayanan:
Categorical understanding using statistical ngram models. EUROSPEECH 1999: 2027-2030
- 1998
- [c20]Marilyn A. Walker, Jeanne C. Fromer, Shrikanth S. Narayanan:
Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email. COLING-ACL 1998: 1345-1351 - [c19]Alexandros Potamianos, Shrikanth S. Narayanan:
Spoken dialog systems for children. ICASSP 1998: 197-200 - [c18]Shrikanth S. Narayanan, Mani Subramaniam, Benjamin Stern, Barbara Hollister, Chih-mei Lin:
Probing the relationship between qualitative and quantitative performance measures for voice-enabled telecommunication services. ICASSP 1998: 3769-3772 - [c17]Bruce Buntschuh, Candace A. Kamm, Giuseppe Di Fabbrizio, Alicia Abella, Mehryar Mohri, Shrikanth S. Narayanan, Ilija Zeljkovic, R. Doug Sharp, Jeremy H. Wright, S. Marcus, J. Shaffer, R. Duncan, Jay G. Wilpon:
VPQ: a spoken language interface to large scale directory information. ICSLP 1998 - [c16]Giuseppe Riccardi, Alexandros Potamianos, Shrikanth S. Narayanan:
Language model adaptation for spoken language systems. ICSLP 1998
- 1997
- [c15]Ilija Zeljkovic, Shrikanth S. Narayanan:
Novel filler acoustic models for connected digit recognition. EUROSPEECH 1997: 283-286 - [c14]Carol Y. Espy-Wilson, Shrikanth S. Narayanan, Suzanne Boyce, Abeer Alwan:
Acoustic modelling of American English /r/. EUROSPEECH 1997: 393-396 - [c13]Sungbok Lee, Alexandros Potamianos, Shrikanth S. Narayanan:
Analysis of children's speech: duration, pitch and formants. EUROSPEECH 1997: 473-476 - [c12]Shrikanth S. Narayanan, Abeer Alwan, Yong Song:
New results in vowel production: MRI, EPG, and acoustic data. EUROSPEECH 1997: 1007-1010 - [c11]Ilija Zeljkovic, Shrikanth S. Narayanan, Alexandros Potamianos:
Unsupervised HMM adaptation based on speech-silence discrimination. EUROSPEECH 1997: 2055-2058 - [c10]Chih-mei Lin, Shrikanth S. Narayanan, E. Russell Ritenour:
Database management and analysis for spoken dialog systems: methodology and tools. EUROSPEECH 1997: 2199-2202 - [c9]Candace A. Kamm, Shrikanth S. Narayanan, Dawn Dutton, E. Russell Ritenour:
Evaluating spoken dialog systems for telecommunication services. EUROSPEECH 1997: 2203-2206 - [c8]Alexandros Potamianos, Shrikanth S. Narayanan, Sungbok Lee:
Automatic speech recognition for children. EUROSPEECH 1997: 2371-2374
- 1996
- [c7]Shrikanth S. Narayanan, Abeer Alwan:
Parametric hybrid source models for voiced and voiceless fricative consonants. ICASSP 1996: 377-380 - [c6]Philbert Bangayan, Abeer Alwan, Shrikanth S. Narayanan:
From MRI and acoustic data to articulatory synthesis: a case study of the lateral approximants in american English. ICSLP 1996: 793-796 - [c5]Shrikanth S. Narayanan, Abigail Kaun, Dani Byrd, Peter Ladefoged, Abeer Alwan:
Liquids in tamil. ICSLP 1996: 797-800 - [c4]Ilija Zeljkovic, Shrikanth S. Narayanan:
Improved HMM phone and triphone models for realtime ASR telephony applications. ICSLP 1996: 1105-1108
- 1994
- [c3]Shrikanth S. Narayanan, Homayoun Shahri, Donald J. Youtkus, Minsky Luo:
Fast and Efficient Techniques for Motion Estimation Using Subband Analysis. ICIP (3) 1994: 265-269 - [c2]Shrikanth S. Narayanan, Abeer Alwan, Katherine Haker:
An MRI study of fricative consonants. ICSLP 1994: 627-630
- 1993
- [c1]Shrikanth S. Narayanan, Abeer A. Alwan:
Strange attractors and chaotic dynamics in the production of voiced and voiceless fricatives. EUROSPEECH 1993: 77-80