Tatsuya Kawahara
Person information
- affiliation: Kyoto University, School of Informatics, Japan
2020 – today
- 2024
- [j68]Kenta Yamamoto, Koji Inoue, Tatsuya Kawahara:
Character expression of a conversational robot for adapting to user personality. Adv. Robotics 38(4): 256-266 (2024) - [j67]Li Tao, Shunsuke Nakamura, Xueting Wang, Tatsuya Kawahara, Gen Tamura, Toshihiko Yamasaki:
A large-scale television advertising dataset for detailed impression analysis. Multim. Tools Appl. 83(7): 18779-18802 (2024) - [j66]Hao Shi, Masato Mimura, Tatsuya Kawahara:
Waveform-Domain Speech Enhancement Using Spectrogram Encoding for Robust Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 32: 3049-3060 (2024) - [j65]Sei Ueno, Akinobu Lee, Tatsuya Kawahara:
Refining Synthesized Speech Using Speaker Information and Phone Masking for Data Augmentation of Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 32: 3924-3933 (2024) - [c309]Koji Inoue, Bing'er Jiang, Erik Ekstedt, Tatsuya Kawahara, Gabriel Skantze:
Multilingual Turn-taking Prediction Using Voice Activity Projection. LREC/COLING 2024: 11873-11883 - [c308]Kazuki Shimada, Kengo Uchida, Yuichiro Koyama, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, Tatsuya Kawahara:
Zero- and Few-Shot Sound Event Localization and Detection. ICASSP 2024: 636-640 - [c307]Wangjin Zhou, Zhengdong Yang, Chenhui Chu, Sheng Li, Raj Dabre, Yi Zhao, Tatsuya Kawahara:
MOS-FAD: Improving Fake Audio Detection Via Automatic Mean Opinion Score Prediction. ICASSP 2024: 876-880 - [c306]Yuan Gao, Hao Shi, Chenhui Chu, Tatsuya Kawahara:
Enhancing Two-Stage Finetuning for Speech Emotion Recognition Using Adapters. ICASSP 2024: 11316-11320 - [c305]Hao Shi, Kazuki Shimada, Masato Hirano, Takashi Shibuya, Yuichiro Koyama, Zhi Zhong, Shusuke Takahashi, Tatsuya Kawahara, Yuki Mitsufuji:
Diffusion-Based Speech Enhancement with Joint Generative and Predictive Decoders. ICASSP 2024: 12951-12955 - [c304]Yahui Fu, Chenhui Chu, Tatsuya Kawahara:
StyEmp: Stylizing Empathetic Response Generation via Multi-Grained Prefix Encoder and Personality Reinforcement. SIGDIAL 2024: 172-185 - [e2]Tatsuya Kawahara, Vera Demberg, Stefan Ultes, Koji Inoue, Shikib Mehri, David M. Howcroft, Kazunori Komatani:
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2024, Kyoto, Japan, September 18 - 20, 2024. Association for Computational Linguistics 2024, ISBN 979-8-89176-161-2 [contents] - [i49]Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze:
An Analysis of User Behaviors for Objectively Evaluating Spoken Dialogue Systems. CoRR abs/2401.04867 (2024) - [i48]Koji Inoue, Bing'er Jiang, Erik Ekstedt, Tatsuya Kawahara, Gabriel Skantze:
Real-time and Continuous Turn-taking Prediction Using Voice Activity Projection. CoRR abs/2401.04868 (2024) - [i47]Yahui Fu, Haiyue Song, Tianyu Zhao, Tatsuya Kawahara:
Enhancing Personality Recognition in Dialogue by Data Augmentation and Heterogeneous Conversational Graph Networks. CoRR abs/2401.05871 (2024) - [i46]Wangjin Zhou, Zhengdong Yang, Chenhui Chu, Sheng Li, Raj Dabre, Yi Zhao, Tatsuya Kawahara:
MOS-FAD: Improving Fake Audio Detection Via Automatic Mean Opinion Score Prediction. CoRR abs/2401.13249 (2024) - [i45]Zi Haur Pang, Yahui Fu, Divesh Lala, Keiko Ochi, Koji Inoue, Tatsuya Kawahara:
Acknowledgment of Emotional States: Generating Validating Responses for Empathetic Dialogue. CoRR abs/2402.12770 (2024) - [i44]Haruki Kawai, Divesh Lala, Koji Inoue, Keiko Ochi, Tatsuya Kawahara:
Evaluation of a semi-autonomous attentive listening system with takeover prompting. CoRR abs/2402.14863 (2024) - [i43]Hao Shi, Tatsuya Kawahara:
Investigation of Adapter for Automatic Speech Recognition in Noisy Environment. CoRR abs/2402.18275 (2024) - [i42]Koji Inoue, Bing'er Jiang, Erik Ekstedt, Tatsuya Kawahara, Gabriel Skantze:
Multilingual Turn-taking Prediction Using Voice Activity Projection. CoRR abs/2403.06487 (2024) - [i41]Yahui Fu, Chenhui Chu, Tatsuya Kawahara:
StyEmp: Stylizing Empathetic Response Generation via Multi-Grained Prefix Encoder and Personality Reinforcement. CoRR abs/2408.02271 (2024) - [i40]Yuka Ko, Sheng Li, Chao-Han Huck Yang, Tatsuya Kawahara:
Benchmarking Japanese Speech Recognition on ASR-LLM Setups with Multi-Pass Augmented Generative Error Correction. CoRR abs/2408.16180 (2024) - [i39]Hao Shi, Yuan Gao, Zhaoheng Ni, Tatsuya Kawahara:
Serialized Speech Information Guidance with Overlapped Encoding Separation for Multi-Speaker Automatic Speech Recognition. CoRR abs/2409.00815 (2024) - [i38]Ryuichi Sumida, Koji Inoue, Tatsuya Kawahara:
Should RAG Chatbots Forget Unimportant Conversations? Exploring Importance and Forgetting with Psychological Insights. CoRR abs/2409.12524 (2024) - [i37]Sota Kobuki, Katie Seaborn, Seiki Tokunaga, Kosuke Fukumori, Shun Hidaka, Kazuhiro Tamura, Koji Inoue, Tatsuya Kawahara, Mihoko Otake-Matsuura:
Robotic Backchanneling in Online Conversation Facilitation: A Cross-Generational Study. CoRR abs/2409.16899 (2024) - 2023
- [j64]Keiko Ochi, Koji Inoue, Divesh Lala, Tatsuya Kawahara, Hirokazu Kumazaki:
Effect of attentive listening robot on pleasure and arousal change in psychiatric daycare. Adv. Robotics 37(21): 1382-1391 (2023) - [j63]Yahui Fu, Koji Inoue, Divesh Lala, Kenta Yamamoto, Chenhui Chu, Tatsuya Kawahara:
Dual variational generative model and auxiliary retrieval for empathetic response generation by conversational robot. Adv. Robotics 37(21): 1406-1418 (2023) - [j62]Kenta Yamamoto, Koji Inoue, Tatsuya Kawahara:
Character expression for spoken dialogue systems with semi-supervised learning using Variational Auto-Encoder. Comput. Speech Lang. 79: 101469 (2023) - [j61]Soky Kak, Sheng Li, Chenhui Chu, Tatsuya Kawahara:
Finetuning Pretrained Model with Embedding of Domain and Language Information for ASR of Very Low-Resource Settings. Int. J. Asian Lang. Process. 33(4): 2350024:1-2350024:17 (2023) - [j60]Hirofumi Inaguma, Tatsuya Kawahara:
Alignment Knowledge Distillation for Online Streaming Attention-Based Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 31: 1371-1385 (2023) - [c303]Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Toshinori Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai:
I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue. CHI Extended Abstracts 2023: 166:1-166:7 - [c302]Soky Kak, Sheng Li, Chenhui Chu, Tatsuya Kawahara:
Domain and Language Adaptation Using Heterogeneous Datasets for Wav2vec2.0-Based Speech Recognition of Low-Resource Language. ICASSP 2023: 1-5 - [c301]Hao Shi, Masato Mimura, Longbiao Wang, Jianwu Dang, Tatsuya Kawahara:
Time-Domain Speech Enhancement Assisted by Multi-Resolution Frequency Encoder and Decoder. ICASSP 2023: 1-5 - [c300]Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze:
Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors. ICMI Companion 2023: 86-90 - [c299]Jaeyoung Lee, Masato Mimura, Tatsuya Kawahara:
Embedding Articulatory Constraints for Low-resource Speech Recognition Based on Large Pre-trained Model. INTERSPEECH 2023: 1394-1398 - [c298]Yuan Gao, Chenhui Chu, Tatsuya Kawahara:
Two-stage Finetuning of Wav2vec 2.0 for Speech Emotion Recognition with ASR and Gender Pretraining. INTERSPEECH 2023: 3637-3641 - [c297]Sanae Yamashita, Koji Inoue, Ao Guo, Shota Mochizuki, Tatsuya Kawahara, Ryuichiro Higashinaka:
RealPersonaChat: A Realistic Persona Chat Corpus with Interlocutors' Own Personalities. PACLIC 2023: 852-861 - [c296]Sota Kobuki, Katie Seaborn, Seiki Tokunaga, Kosuke Fukumori, Shun Hidaka, Kazuhiro Tamura, Koji Inoue, Tatsuya Kawahara, Mihoko Otake-Matsuura:
Robotic Backchanneling in Online Conversation Facilitation: A Cross-Generational Study. RO-MAN 2023: 71-76 - [c295]Yahui Fu, Koji Inoue, Chenhui Chu, Tatsuya Kawahara:
Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation. SIGDIAL 2023: 645-656 - [i36]Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Toshinori Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai:
I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue. CoRR abs/2303.00146 (2023) - [i35]Hao Shi, Masato Mimura, Longbiao Wang, Jianwu Dang, Tatsuya Kawahara:
Time-domain Speech Enhancement Assisted by Multi-resolution Frequency Encoder and Decoder. CoRR abs/2303.14593 (2023) - [i34]Hao Shi, Kazuki Shimada, Masato Hirano, Takashi Shibuya, Yuichiro Koyama, Zhi Zhong, Shusuke Takahashi, Tatsuya Kawahara, Yuki Mitsufuji:
Diffusion-Based Speech Enhancement with Joint Generative and Predictive Decoders. CoRR abs/2305.10734 (2023) - [i33]Yahui Fu, Koji Inoue, Chenhui Chu, Tatsuya Kawahara:
Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation. CoRR abs/2308.00085 (2023) - [i32]Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze:
Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors. CoRR abs/2308.11020 (2023) - [i31]Kazuki Shimada, Kengo Uchida, Yuichiro Koyama, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, Tatsuya Kawahara:
Zero- and Few-shot Sound Event Localization and Detection. CoRR abs/2309.09223 (2023) - 2022
- [j59]Koji Inoue, Divesh Lala, Tatsuya Kawahara:
Can a robot laugh with you?: Shared laughter generation for empathetic spoken dialogue. Frontiers Robotics AI 9 (2022) - [j58]Yicheng Du, Robin Scheibler, Masahito Togami, Kazuyoshi Yoshii, Tatsuya Kawahara:
Computationally-Efficient Overdetermined Blind Source Separation Based on Iterative Source Steering. IEEE Signal Process. Lett. 29: 927-931 (2022) - [j57]Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Mathieu Fontaine, Kazuyoshi Yoshii, Tatsuya Kawahara:
Autoregressive Moving Average Jointly-Diagonalizable Spatial Covariance Analysis for Joint Source Separation and Dereverberation. IEEE ACM Trans. Audio Speech Lang. Process. 30: 2368-2382 (2022) - [c294]Divesh Lala, Koji Inoue, Tatsuya Kawahara, Kei Sawada:
Backchannel Generation Model for a Third Party Listener Agent. HAI 2022: 114-122 - [c293]Yuanchao Li, Catherine Lai, Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners. HRI 2022: 875-879 - [c292]Heran Zhang, Masato Mimura, Tatsuya Kawahara, Kenkichi Ishizuka:
Selective Multi-Task Learning For Speech Emotion Recognition Using Corpora Of Different Styles. ICASSP 2022: 7707-7711 - [c291]Sei Ueno, Tatsuya Kawahara:
Phone-Informed Refinement of Synthesized Mel Spectrogram for Data Augmentation in Speech Recognition. ICASSP 2022: 8572-8576 - [c290]Hao Shi, Longbiao Wang, Sheng Li, Jianwu Dang, Tatsuya Kawahara:
Monaural Speech Enhancement Based on Spectrogram Decomposition for Convolutional Neural Network-sensitive Feature Extraction. INTERSPEECH 2022: 221-225 - [c289]Soky Kak, Sheng Li, Masato Mimura, Chenhui Chu, Tatsuya Kawahara:
Leveraging Simultaneous Translation for Enhancing Transcription of Low-resource Language via Cross Attention Mechanism. INTERSPEECH 2022: 1362-1366 - [c288]Jumon Nozaki, Tatsuya Kawahara, Kenkichi Ishizuka, Taiichi Hashimoto:
End-to-end Speech-to-Punctuated-Text Recognition. INTERSPEECH 2022: 1811-1815 - [c287]Seiya Kawano, Muteki Arioka, Akishige Yuguchi, Kenta Yamamoto, Koji Inoue, Tatsuya Kawahara, Satoshi Nakamura, Koichiro Yoshino:
Multimodal Persuasive Dialogue Corpus using Teleoperated Android. INTERSPEECH 2022: 2308-2312 - [c286]Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Non-autoregressive Error Correction for CTC-based ASR with Phone-conditioned Masked LM. INTERSPEECH 2022: 3889-3893 - [c285]Haruki Kawai, Yusuke Muraki, Kenta Yamamoto, Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Simultaneous Job Interview System Using Multiple Semi-autonomous Agents. SIGDIAL 2022: 107-110 - [i30]Jumon Nozaki, Tatsuya Kawahara, Kenkichi Ishizuka, Taiichi Hashimoto:
End-to-end Speech-to-Punctuated-Text Recognition. CoRR abs/2207.03169 (2022) - [i29]Hayato Futami, Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Distilling the Knowledge of BERT for CTC-based ASR. CoRR abs/2209.02030 (2022) - [i28]Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Non-autoregressive Error Correction for CTC-based ASR with Phone-conditioned Masked LM. CoRR abs/2209.04062 (2022) - [i27]Yuanchao Li, Catherine Lai, Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners. CoRR abs/2211.08526 (2022) - 2021
- [j56]Tatsuya Kawahara, Naoyuki Muramatsu, Kenta Yamamoto, Divesh Lala, Koji Inoue:
Semi-autonomous avatar enabling unconstrained parallel conversations -seamless hybrid of WOZ and autonomous dialogue systems-. Adv. Robotics 35(11): 657-663 (2021) - [j55]Soky Kak, Masato Mimura, Tatsuya Kawahara, Chenhui Chu, Sheng Li, Chenchen Ding, Sethserey Sam:
TriECCC: Trilingual Corpus of the Extraordinary Chambers in the Courts of Cambodia for Speech Recognition and Translation Studies. Int. J. Asian Lang. Process. 31(3&4): 2250007:1-2250007:21 (2021) - [c284]Soky Kak, Sheng Li, Masato Mimura, Chenhui Chu, Tatsuya Kawahara:
On the Use of Speaker Information for Automatic Speech Recognition in Speaker-imbalanced Corpora. APSIPA ASC 2021: 433-437 - [c283]Hao Shi, Longbiao Wang, Sheng Li, Cunhang Fan, Jianwu Dang, Tatsuya Kawahara:
Spectrograms Fusion-based End-to-end Robust Automatic Speech Recognition. APSIPA ASC 2021: 438-442 - [c282]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
An End-To-End Model from Speech to Clean Transcript for Parliamentary Meetings. APSIPA ASC 2021: 465-470 - [c281]Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Data Augmentation for ASR Using TTS Via a Discrete Representation. ASRU 2021: 68-75 - [c280]Hayato Futami, Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
ASR Rescoring and Confidence Estimation with Electra. ASRU 2021: 380-387 - [c279]Hirofumi Inaguma, Yosuke Higuchi, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe:
ORTHROS: non-autoregressive end-to-end speech translation With dual-decoder. ICASSP 2021: 7503-7507 - [c278]Hirofumi Inaguma, Tatsuya Kawahara:
StableEmit: Selection Probability Discount for Reducing Emission Latency of Streaming Monotonic Attention ASR. Interspeech 2021: 1817-1821 - [c277]Hirofumi Inaguma, Tatsuya Kawahara:
VAD-Free Streaming Hybrid CTC/Attention ASR for Unsegmented Recording. Interspeech 2021: 4049-4053 - [c276]Hirofumi Inaguma, Tatsuya Kawahara, Shinji Watanabe:
Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation. NAACL-HLT 2021: 1872-1881 - [c275]Soky Kak, Masato Mimura, Tatsuya Kawahara, Sheng Li, Chenchen Ding, Chenhui Chu, Sethserey Sam:
Khmer Speech Translation Corpus of the Extraordinary Chambers in the Courts of Cambodia (ECCC). O-COCOSDA 2021: 122-127 - [c274]Tianyu Zhao, Tatsuya Kawahara:
Multi-Referenced Training for Dialogue Response Generation. SIGDIAL 2021: 190-201 - [c273]Etsuko Ishii, Genta Indra Winata, Samuel Cahyawijaya, Divesh Lala, Tatsuya Kawahara, Pascale Fung:
ERICA: An Empathetic Android Companion for Covid-19 Quarantine. SIGDIAL 2021: 257-260 - [c272]Koji Inoue, Hiromi Sakamoto, Kenta Yamamoto, Divesh Lala, Tatsuya Kawahara:
A multi-party attentive listening robot which stimulates involvement from side participants. SIGDIAL 2021: 261-264 - [i26]Hirofumi Inaguma, Tatsuya Kawahara:
Alignment Knowledge Distillation for Online Streaming Attention-based Speech Recognition. CoRR abs/2103.00422 (2021) - [i25]Hirofumi Inaguma, Tatsuya Kawahara, Shinji Watanabe:
Source and Target Bidirectional Knowledge Distillation for End-to-end Speech Translation. CoRR abs/2104.06457 (2021) - [i24]Tatsuya Kawahara, Koji Inoue, Divesh Lala:
Intelligent Conversational Android ERICA Applied to Attentive Listening and Job Interview. CoRR abs/2105.00403 (2021) - [i23]Etsuko Ishii, Genta Indra Winata, Samuel Cahyawijaya, Divesh Lala, Tatsuya Kawahara, Pascale Fung:
ERICA: An Empathetic Android Companion for Covid-19 Quarantine. CoRR abs/2106.02325 (2021) - [i22]Hirofumi Inaguma, Tatsuya Kawahara:
StableEmit: Selection Probability Discount for Reducing Emission Latency of Streaming Monotonic Attention ASR. CoRR abs/2107.00635 (2021) - [i21]Hirofumi Inaguma, Tatsuya Kawahara:
VAD-free Streaming Hybrid CTC/Attention ASR for Unsegmented Recording. CoRR abs/2107.07509 (2021) - [i20]Hirofumi Inaguma, Yosuke Higuchi, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe:
Non-autoregressive End-to-end Speech Translation with Parallel Autoregressive Rescoring. CoRR abs/2109.04411 (2021) - [i19]Hayato Futami, Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
ASR Rescoring and Confidence Estimation with ELECTRA. CoRR abs/2110.01857 (2021) - 2020
- [j54]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Hiroaki Nanjo:
Cross-Lingual Transfer Learning of Non-Native Acoustic Modeling for Pronunciation Error Detection and Diagnosis. IEEE ACM Trans. Audio Speech Lang. Process. 28: 391-401 (2020) - [j53]Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Kazuyoshi Yoshii, Tatsuya Kawahara:
Fast Multichannel Nonnegative Matrix Factorization With Directivity-Aware Jointly-Diagonalizable Spatial Covariance Matrices for Blind Source Separation. IEEE ACM Trans. Audio Speech Lang. Process. 28: 2610-2625 (2020) - [c271]Tianyu Zhao, Divesh Lala, Tatsuya Kawahara:
Designing Precise and Robust Dialogue Response Evaluators. ACL 2020: 26-33 - [c270]Masaya Wake, Masahito Togami, Kazuyoshi Yoshii, Tatsuya Kawahara:
Integration of Semi-Blind Speech Source Separation and Voice Activity Detection for Flexible Spoken Dialogue. APSIPA 2020: 775-780 - [c269]Masahito Togami, Yoshiki Masuyama, Tatsuya Komatsu, Kazuyoshi Yoshii, Tatsuya Kawahara:
Computer-Resource-Aware Deep Speech Separation with a Run-Time-Specified Number of BLSTM Layers. APSIPA 2020: 788-793 - [c268]Jeongwoo Woo, Masato Mimura, Kazuyoshi Yoshii, Tatsuya Kawahara:
End-to-end Music-mixed Speech Recognition. APSIPA 2020: 800-804 - [c267]Shuying Zhang, Tianyu Zhao, Tatsuya Kawahara:
Topic-relevant Response Generation using Optimal Transport for an Open-domain Dialog System. COLING 2020: 4067-4077 - [c266]Yicheng Du, Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Mathieu Fontaine, Kazuyoshi Yoshii, Tatsuya Kawahara:
Semi-supervised Multichannel Speech Separation Based on a Phone- and Speaker-Aware Deep Generative Model of Speech Spectrograms. EUSIPCO 2020: 870-874 - [c265]Hiroshi Ishiguro, Tatsuya Kawahara, Yutaka Nakamura:
Autonomous Dialogue Technologies in Symbiotic Human-robot Interaction. HRI (Companion) 2020: 650-651 - [c264]Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Prediction of Shared Laughter for Human-Robot Dialogue. ICMI Companion 2020: 62-66 - [c263]Koji Inoue, Kohei Hara, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Job Interviewer Android with Elaborate Follow-up Question Generation. ICMI 2020: 324-332 - [c262]Han Feng, Sei Ueno, Tatsuya Kawahara:
End-to-End Speech Emotion Recognition Combined with Acoustic-to-Word ASR Model. INTERSPEECH 2020: 501-505 - [c261]Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara:
CTC-Synchronous Training for Monotonic Attention Model. INTERSPEECH 2020: 571-575 - [c260]Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara:
Enhancing Monotonic Multihead Attention for Streaming ASR. INTERSPEECH 2020: 2137-2141 - [c259]Kohei Matsuura, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Generative Adversarial Training Data Adaptation for Very Low-Resource Automatic Speech Recognition. INTERSPEECH 2020: 2737-2741 - [c258]Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Distilling the Knowledge of BERT for Sequence-to-Sequence ASR. INTERSPEECH 2020: 3635-3639 - [c257]Viet-Trung Dang, Tianyu Zhao, Sei Ueno, Hirofumi Inaguma, Tatsuya Kawahara:
End-to-End Speech-to-Dialog-Act Recognition. INTERSPEECH 2020: 3910-3914 - [c256]Kenta Yamamoto, Koji Inoue, Tatsuya Kawahara:
Semi-Supervised Learning for Character Expression of Spoken Dialogue Systems. INTERSPEECH 2020: 4188-4192 - [c255]Kenta Yamamoto, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
A Character Expression Model Affecting Spoken Dialogue Behaviors. IWSDS 2020: 3-13 - [c254]Sota Isonishi, Koji Inoue, Divesh Lala, Katsuya Takanashi, Tatsuya Kawahara:
Response Generation to Out-of-Database Questions for Example-Based Dialogue Systems. IWSDS 2020: 303-317 - [c253]Kohei Matsuura, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language. LREC 2020: 2622-2628 - [c252]Koji Inoue, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
An Attentive Listening System with Android ERICA: Comparison of Autonomous and WOZ Interactions. SIGdial 2020: 118-127 - [i18]Kohei Matsuura, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language. CoRR abs/2002.06675 (2020) - [i17]Tianyu Zhao, Divesh Lala, Tatsuya Kawahara:
Designing Precise and Robust Dialogue Response Evaluators. CoRR abs/2004.04908 (2020) - [i16]Viet-Trung Dang, Tianyu Zhao, Sei Ueno, Hirofumi Inaguma, Tatsuya Kawahara:
End-to-end speech-to-dialog-act recognition. CoRR abs/2004.11419 (2020) - [i15]Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara:
CTC-synchronous Training for Monotonic Attention Model. CoRR abs/2005.04712 (2020) - [i14]Kohei Matsuura, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Generative Adversarial Training Data Adaptation for Very Low-resource Automatic Speech Recognition. CoRR abs/2005.09256 (2020) - [i13]Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara:
Enhancing Monotonic Multihead Attention for Streaming ASR. CoRR abs/2005.09394 (2020) - [i12]Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Distilling the Knowledge of BERT for Sequence-to-Sequence ASR. CoRR abs/2008.03822 (2020) - [i11]Tianyu Zhao, Tatsuya Kawahara:
Multi-Referenced Training for Dialogue Response Generation. CoRR abs/2009.07117 (2020) - [i10]Hirofumi Inaguma, Yosuke Higuchi, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe:
Orthros: Non-autoregressive End-to-end Speech Translation with Dual-decoder. CoRR abs/2010.13047 (2020) - [i9]Laurence Devillers, Tatsuya Kawahara, Roger K. Moore, Matthias Scheutz:
Spoken Language Interaction with Virtual Agents and Robots (SLIVAR): Towards Effective and Ethical Interaction (Dagstuhl Seminar 20021). Dagstuhl Reports 10(1): 1-51 (2020)
2010 – 2019
- 2019
- [j52]Yuanchao Li, Carlos Toshinori Ishi, Koji Inoue, Shizuka Nakamura, Tatsuya Kawahara:
Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human-robot interaction. Adv. Robotics 33(20): 1030-1041 (2019) - [j51]Tianyu Zhao, Tatsuya Kawahara:
Joint dialog act segmentation and recognition in human conversations using attention to dialog context. Comput. Speech Lang. 57: 108-127 (2019) - [j50]Kazuki Shimada, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Unsupervised Speech Enhancement Based on Multichannel NMF-Informed Beamforming for Noise-Robust Automatic Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 27(5): 960-971 (2019) - [j49]Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Kazuyoshi Yoshii, Tatsuya Kawahara:
Semi-Supervised Multichannel Speech Enhancement With a Deep Speech Prior. IEEE ACM Trans. Audio Speech Lang. Process. 27(12): 2197-2212 (2019) - [c251]Soky Kak, Sheng Li, Tatsuya Kawahara, Sopheap Seng:
Multi-lingual Transformer Training for Khmer Automatic Speech Recognition. APSIPA 2019: 1893-1896 - [c250]Hirofumi Inaguma, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe:
Multilingual End-to-End Speech Translation. ASRU 2019: 570-577 - [c249]Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, Shinji Watanabe:
Transfer Learning of Language-independent End-to-end ASR with Language Model Fusion. ICASSP 2019: 6096-6100 - [c248]Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Multi-speaker Sequence-to-sequence Speech Synthesis for Data Augmentation in Acoustic-to-word Speech Recognition. ICASSP 2019: 6161-6165 - [c247]Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Smooth Turn-taking by a Robot Using an Online Continuous Model to Generate Turn-taking Cues. ICMI 2019: 226-234 - [c246]Divesh Lala, Graham Wilcock, Kristiina Jokinen, Tatsuya Kawahara:
ERICA and WikiTalk. IJCAI 2019: 6533-6535 - [c245]Sheng Li, Chenchen Ding, Xugang Lu, Peng Shen, Tatsuya Kawahara, Hisashi Kawai:
End-to-End Articulatory Attribute Modeling for Low-Resource Multilingual Speech Recognition. INTERSPEECH 2019: 2145-2149 - [c244]Sheng Li, Xugang Lu, Chenchen Ding, Peng Shen, Tatsuya Kawahara, Hisashi Kawai:
Investigating Radical-Based End-to-End Speech Recognition Systems for Chinese Dialects and Japanese. INTERSPEECH 2019: 2200-2204 - [c243]Yuanchao Li, Tianyu Zhao, Tatsuya Kawahara:
Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning. INTERSPEECH 2019: 2803-2807 - [c242]Kohei Hara, Koji Inoue, Katsuya Takanashi, Tatsuya Kawahara:
Turn-Taking Prediction Based on Detection of Transition Relevance Place. INTERSPEECH 2019: 4170-4174 - [c241]Divesh Lala, Shizuka Nakamura, Tatsuya Kawahara:
Analysis of Effect and Timing of Fillers in Natural Turn-Taking. INTERSPEECH 2019: 4175-4179 - [c240]Sheng Li, Raj Dabre, Xugang Lu, Peng Shen, Tatsuya Kawahara, Hisashi Kawai:
Improving Transformer-Based Speech Recognition Systems with Compressed Structure and Speech Attributes Augmentation. INTERSPEECH 2019: 4400-4404 - [c239]Koki Tanaka, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
End-to-end Modeling for Selection of Utterance Constructional Units via System Internal States. IWSDS 2019: 15-27 - [c238]Koji Inoue, Divesh Lala, Kenta Yamamoto, Katsuya Takanashi, Tatsuya Kawahara:
Engagement-Based Adaptive Behaviors for Laboratory Guide in Human-Robot Dialogue. IWSDS 2019: 129-139 - [c237]Koji Inoue, Kohei Hara, Divesh Lala, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
A Job Interview Dialogue System with Autonomous Android ERICA. IWSDS 2019: 291-297 - [i8]Kazuki Shimada, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Unsupervised Speech Enhancement Based on Multichannel NMF-Informed Beamforming for Noise-Robust Automatic Speech Recognition. CoRR abs/1903.09341 (2019) - [i7]Tianyu Zhao, Tatsuya Kawahara:
Content Word-based Sentence Decoding and Evaluating for Open-domain Neural Response Generation. CoRR abs/1905.13438 (2019) - [i6]Tianyu Zhao, Tatsuya Kawahara:
Effective Incorporation of Speaker Information in Utterance Encoding in Dialog. CoRR abs/1907.05599 (2019) - [i5]Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Improving OOV Detection and Resolution with External Language Models in Acoustic-to-Word ASR. CoRR abs/1909.09993 (2019) - [i4]Hirofumi Inaguma, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe:
Multilingual End-to-End Speech Translation. CoRR abs/1910.00254 (2019) - 2018
- [j48]Maryam Sadat Mirzaei, Kourosh Meshgi, Tatsuya Kawahara:
Exploiting automatic speech recognition errors to enhance partial and synchronized caption for facilitating second language listening. Comput. Speech Lang. 49: 17-36 (2018) - [j47]Toshiyuki Hagiya, Toshiharu Horiuchi, Tomonori Yazaki, Tatsuya Kawahara:
Typing Tutor: Individualized Tutoring in Text Entry for Older Adults Based on Statistical Input Stumble Detection. J. Inf. Process. 26: 362-374 (2018) - [j46]Yoshiaki Bando, Katsutoshi Itoyama, Masashi Konyo, Satoshi Tadokoro, Kazuhiro Nakadai, Kazuyoshi Yoshii, Tatsuya Kawahara, Hiroshi G. Okuno:
Speech Enhancement Based on Bayesian Low-Rank and Sparse Decomposition of Multichannel Magnitude Spectrograms. IEEE ACM Trans. Audio Speech Lang. Process. 26(2): 215-230 (2018) - [j45]Kousuke Itakura, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Bayesian Multichannel Audio Source Separation Based on Integrated Source and Spatial Models. IEEE ACM Trans. Audio Speech Lang. Process. 26(4): 831-846 (2018) - [c236]Kouhei Sekiguchi, Yoshiaki Bando, Kazuyoshi Yoshii, Tatsuya Kawahara:
Bayesian Multichannel Speech Enhancement with a Deep Speech Prior. APSIPA 2018: 1233-1239 - [c235]Kenta Yamamoto, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Dialogue Behavior Control Model for Expressing a Character of Humanoid Robots. APSIPA 2018: 1732-1737 - [c234]Kazuyoshi Yoshii, Koichi Kitamura, Yoshiaki Bando, Eita Nakamura, Tatsuya Kawahara:
Independent Low-Rank Tensor Analysis for Audio Source Separation. EUSIPCO 2018: 1657-1661 - [c233]Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Statistical Speech Enhancement Based on Probabilistic Integration of Variational Autoencoder and Non-Negative Matrix Factorization. ICASSP 2018: 716-720 - [c232]Kazuki Shimada, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Unsupervised Beamforming Based on Multichannel Nonnegative Matrix Factorization for Noisy Speech Recognition. ICASSP 2018: 5734-5738 - [c231]Sei Ueno, Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara:
Acoustic-to-Word Attention-Based Model Complemented with Character-Level CTC-Based Model. ICASSP 2018: 5804-5808 - [c230]Hirofumi Inaguma, Masato Mimura, Koji Inoue, Kazuyoshi Yoshii, Tatsuya Kawahara:
An End-to-End Approach to Joint Social Signal Detection and Automatic Speech Recognition. ICASSP 2018: 6214-6218 - [c229]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Hiroaki Nanjo:
Efficient Learning of Articulatory Models Based on Multi-Label Training and Label Correction for Pronunciation Learning. ICASSP 2018: 6239-6243 - [c228]Tatsuya Kawahara, Koji Inoue, Divesh Lala, Katsuya Takanashi:
Audio-Visual Conversation Analysis by Smart Posterboard and Humanoid Robot. ICASSP 2018: 6573-6577 - [c227]Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Evaluation of Real-time Deep Learning Turn-taking Models for Multiple Dialogue Scenarios. ICMI 2018: 78-86 - [c226]Koji Inoue, Divesh Lala, Katsuya Takanashi, Tatsuya Kawahara:
Engagement Recognition in Spoken Dialogue via Neural Network by Aggregating Different Annotators' Models. INTERSPEECH 2018: 616-620 - [c225]Kohei Hara, Koji Inoue, Katsuya Takanashi, Tatsuya Kawahara:
Prediction of Turn-taking Using Multitask Learning with Prediction of Backchannels and Fillers. INTERSPEECH 2018: 991-995 - [c224]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Forward-Backward Attention Decoder. INTERSPEECH 2018: 2232-2236 - [c223]Sei Ueno, Takafumi Moriya, Masato Mimura, Shinsuke Sakai, Yusuke Shinohara, Yoshikazu Yamaguchi, Yushi Aono, Tatsuya Kawahara:
Encoder Transfer for Attention-based Acoustic-to-word Speech Recognition. INTERSPEECH 2018: 2424-2428 - [c222]Sheng Li, Xugang Lu, Ryoichi Takashima, Peng Shen, Tatsuya Kawahara, Hisashi Kawai:
Improving CTC-based Acoustic Model with Very Deep Residual Time-delay Neural Networks. INTERSPEECH 2018: 3708-3712 - [c221]Toshiyuki Hagiya, Keiichiro Hoashi, Tatsuya Kawahara:
Voice Input Tutoring System for Older Adults using Input Stumble Detection. IUI 2018: 415-419 - [c220]Tatsuya Kawahara:
Spoken Dialogue System for a Human-like Conversational Robot ERICA. IWSDS 2018: 65-75 - [c219]Ryosuke Nakanishi, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Generating Fillers Based on Dialog Act Pairs for Smooth Turn-Taking by Humanoid Robot. IWSDS 2018: 91-101 - [c218]Koji Inoue, Divesh Lala, Katsuya Takanashi, Tatsuya Kawahara:
Latent Character Model for Engagement Recognition Based on Multimodal Behaviors. IWSDS 2018: 119-130 - [c217]Tianyu Zhao, Tatsuya Kawahara:
A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System. SIGDIAL Conference 2018: 201-208 - [c216]Sheng Li, Xugang Lu, Ryoichi Takashima, Peng Shen, Tatsuya Kawahara, Hisashi Kawai:
Improving Very Deep Time-Delay Neural Network With Vertical-Attention For Effectively Training CTC-Based ASR Systems. SLT 2018: 77-83 - [c215]Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Improving OOV Detection and Resolution with External Language Models in Acoustic-to-Word ASR. SLT 2018: 212-218 - [c214]Masato Mimura, Sei Ueno, Hirofumi Inaguma, Shinsuke Sakai, Tatsuya Kawahara:
Leveraging Sequence-to-Sequence Speech Synthesis for Enhancing Acoustic-to-Word Speech Recognition. SLT 2018: 477-484 - [i3]Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, Shinji Watanabe:
Transfer learning of language-independent end-to-end ASR with language model fusion. CoRR abs/1811.02134 (2018) - 2017
- [j44]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Jinsong Zhang:
Articulatory Modeling for Pronunciation Error Detection without Non-Native Training Data Based on DNN Transfer Learning. IEICE Trans. Inf. Syst. 100-D(9): 2174-2182 (2017) - [j43]Toshiyuki Hagiya, Toshiharu Horiuchi, Tomonori Yazaki, Tsuneo Kato, Tatsuya Kawahara:
Assistive Typing Application for Older Adults Based on Input Stumble Detection. J. Inf. Process. 25: 417-425 (2017) - [c213]Tatsuya Kawahara:
Automatic meeting transcription system for the Japanese parliament (diet). APSIPA 2017: 1006-1010 - [c212]Yuanchao Li, Carlos Toshinori Ishi, Nigel G. Ward, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Emotion recognition by combining prosody and sentiment analysis for expressing reactive emotion by humanoid robot. APSIPA 2017: 1356-1359 - [c211]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Cross-domain speech recognition using nonparallel corpora with cycle-consistent adversarial networks. ASRU 2017: 134-140 - [c210]Sheng Li, Xugang Lu, Peng Shen, Ryoichi Takashima, Tatsuya Kawahara, Hisashi Kawai:
Incremental training and constructing the very deep convolutional residual network acoustic models. ASRU 2017: 222-227 - [c209]Divesh Lala, Yuanchao Li, Tatsuya Kawahara:
Utterance Behavior of Users While Playing Basketball with a Virtual Teammate. ICAART (1) 2017: 28-38 - [c208]Kousuke Itakura, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Bayesian multichannel nonnegative matrix factorization for audio source separation and localization. ICASSP 2017: 551-555 - [c207]Sheng Li, Xugang Lu, Shinsuke Sakai, Masato Mimura, Tatsuya Kawahara:
Semi-supervised ensemble DNN acoustic model training. ICASSP 2017: 5270-5274 - [c206]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Jinsong Zhang:
Effective articulatory modeling for pronunciation error detection of L2 learner without non-native training data. ICASSP 2017: 5815-5819 - [c205]Tianyu Zhao, Tatsuya Kawahara:
Joint Learning of Dialog Act Segmentation and Recognition in Spoken Dialog Using Neural Networks. IJCNLP(1) 2017: 704-712 - [c204]Hirofumi Inaguma, Koji Inoue, Masato Mimura, Tatsuya Kawahara:
Social Signal Detection in Spontaneous Dialogue Using Bidirectional LSTM-CTC. INTERSPEECH 2017: 1691-1695 - [c203]Shizuka Nakamura, Ryosuke Nakanishi, Katsuya Takanashi, Tatsuya Kawahara:
Analysis of the Relationship Between Prosodic Features of Fillers and its Forms or Occurrence Positions. INTERSPEECH 2017: 1726-1730 - [c202]Masato Mimura, Yoshiaki Bando, Kazuki Shimada, Shinsuke Sakai, Kazuyoshi Yoshii, Tatsuya Kawahara:
Combined Multi-Channel NMF-Based Robust Beamforming for Noisy Speech Recognition. INTERSPEECH 2017: 2451-2455 - [c201]Pierrick Milhorat, Divesh Lala, Koji Inoue, Tianyu Zhao, Masanari Ishida, Katsuya Takanashi, Shizuka Nakamura, Tatsuya Kawahara:
A Conversational Dialogue Manager for the Humanoid Robot ERICA. IWSDS 2017: 119-131 - [c200]Masaya Wake, Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Semi-Blind speech enhancement based on recurrent neural network for source separation and dereverberation. MLSP 2017: 1-6 - [c199]Divesh Lala, Pierrick Milhorat, Koji Inoue, Masanari Ishida, Katsuya Takanashi, Tatsuya Kawahara:
Attentive listening system with backchanneling, response generation and flexible turn-taking. SIGDIAL Conference 2017: 127-136 - [c198]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Hiroaki Nanjo:
Transfer Learning based Non-native Acoustic Modeling for Pronunciation Error Detection. SLaTE 2017: 42-46 - [c197]Maryam Sadat Mirzaei, Kourosh Meshgi, Tatsuya Kawahara:
Detecting listening difficulty for second language learners using Automatic Speech Recognition errors. SLaTE 2017: 156-160 - [i2]Divesh Lala, Koji Inoue, Pierrick Milhorat, Tatsuya Kawahara:
Detection of social signals for recognizing engagement in human-robot interaction. CoRR abs/1709.10257 (2017) - [i1]Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara:
Statistical Speech Enhancement Based on Probabilistic Integration of Variational Autoencoder and Non-Negative Matrix Factorization. CoRR abs/1710.11439 (2017) - 2016
- [j42]Sheng Li, Yuya Akita, Tatsuya Kawahara:
Semi-Supervised Acoustic Model Training by Discriminative Data Selection From Multiple ASR Systems' Hypotheses. IEEE ACM Trans. Audio Speech Lang. Process. 24(9): 1524-1534 (2016) - [c196]Maryam Sadat Mirzaei, Kourosh Meshgi, Tatsuya Kawahara:
Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties. CL4LC@COLING 2016 2016: 192-201 - [c195]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Jinsong Zhang:
Multi-lingual and multi-task DNN learning for articulatory error detection. APSIPA 2016: 1-4 - [c194]Sheng Li, Yuya Akita, Tatsuya Kawahara:
Data selection from multiple ASR systems' hypotheses for unsupervised acoustic model training. ICASSP 2016: 5875-5879 - [c193]Hirofumi Inaguma, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Prediction of ice-breaking between participants using prosodic features in the first meeting dialogue. ASSP4MI@ICMI 2016: 11-15 - [c192]Koji Inoue, Divesh Lala, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara:
Annotation and analysis of listener's engagement based on multi-modal behaviors. MA3HMI@ICMI 2016: 25-32 - [c191]Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, Tatsuya Kawahara:
Multimodal interaction with the autonomous Android ERICA. ICMI 2016: 417-418 - [c190]Tatsuya Kawahara, Takashi Yamaguchi, Koji Inoue, Katsuya Takanashi, Nigel G. Ward:
Prediction and Generation of Backchannel Form for Attentive Listening Systems. INTERSPEECH 2016: 2890-2894 - [c189]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Joint Optimization of Denoising Autoencoder and DNN Acoustic Model Based on Multi-Target Learning for Noisy Speech Recognition. INTERSPEECH 2016: 3803-3807 - [c188]Richeng Duan, Tatsuya Kawahara, Masatake Dantsuji, Jinsong Zhang:
Pronunciation error detection using DNN articulatory model based on multi-lingual and multi-task learning. ISCSLP 2016: 1-5 - [c187]Sheng Li, Xugang Lu, Shinsuke Mori, Yuya Akita, Tatsuya Kawahara:
Confidence estimation for speech recognition systems using conditional random fields trained with partially annotated data. ISCSLP 2016: 1-5 - [c186]Divesh Lala, Tatsuya Kawahara:
Managing Dialog and Joint Actions for Virtual Basketball Teammates. IVA 2016: 355-358 - [c185]Dylan F. Glas, Takashi Minato, Carlos Toshinori Ishi, Tatsuya Kawahara, Hiroshi Ishiguro:
ERICA: The ERATO Intelligent Conversational Android. RO-MAN 2016: 22-29 - [c184]Koji Inoue, Pierrick Milhorat, Divesh Lala, Tianyu Zhao, Tatsuya Kawahara:
Talking with ERICA, an autonomous android. SIGDIAL Conference 2016: 212-215 - [p1]Tatsuya Kawahara:
Smart Posterboard: Multi-modal Sensing and Analysis of Poster Conversations. Human-Harmonized Information Technology (1) 2016: 239-270 - 2015
- [j41]Koichiro Yoshino, Tatsuya Kawahara:
Conversational system for information navigation based on POMDP with user focus tracking. Comput. Speech Lang. 34(1): 275-291 (2015) - [j40]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Reverberant speech recognition combining deep neural networks and deep autoencoders augmented with a phone-class feature. EURASIP J. Adv. Signal Process. 2015: 62 (2015) - [j39]Sheng Li, Yuya Akita, Tatsuya Kawahara:
Automatic Lecture Transcription Based on Discriminative Data Selection for Lightly Supervised Acoustic Model Training. IEICE Trans. Inf. Syst. 98-D(8): 1545-1552 (2015) - [c183]Koichiro Yoshino, Tatsuya Kawahara:
News Navigation System Based on Proactive Dialogue Strategy. IWSDS 2015: 15-25 - [c182]Yuya Akita, Nobuhiro Kuwahara, Tatsuya Kawahara:
Automatic classification of usability of ASR result for real-time captioning of lectures. APSIPA 2015: 19-22 - [c181]Tatsuya Kawahara, Takashi Yamaguchi, Miki Uesato, Koichiro Yoshino, Katsuya Takanashi:
Synchrony in prosodic and linguistic features between backchannels and preceding utterances in attentive listening. APSIPA 2015: 392-395 - [c180]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Deep autoencoders augmented with phone-class feature for reverberant speech recognition. ICASSP 2015: 4365-4369 - [c179]Yuya Akita, Yizheng Tong, Tatsuya Kawahara:
Language model adaptation for academic lectures using character recognition result of presentation slides. ICASSP 2015: 5431-5435 - [c178]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Speech dereverberation using long short-term memory. INTERSPEECH 2015: 2435-2439 - [c177]Sheng Li, Xugang Lu, Yuya Akita, Tatsuya Kawahara:
Ensemble speaker modeling using speaker adaptive training deep neural network for speaker adaptation. INTERSPEECH 2015: 2892-2896 - [c176]Koji Inoue, Yukoh Wakabayashi, Hiromasa Yoshimoto, Katsuya Takanashi, Tatsuya Kawahara:
Enhanced speaker diarization with detection of backchannels using eye-gaze information in poster conversations. INTERSPEECH 2015: 3086-3090 - [c175]Sheng Li, Yuya Akita, Tatsuya Kawahara:
Discriminative data selection for lightly supervised training of acoustic model using closed caption texts. INTERSPEECH 2015: 3526-3530 - [c174]Tetsuro Sasada, Shinsuke Mori, Tatsuya Kawahara, Yoko Yamakata:
Named Entity Recognizer Trainable from Partially Annotated Data. PACLING 2015: 148-160 - [c173]Maryam Sadat Mirzaei, Tatsuya Kawahara:
ASR technology to empower partial and synchronized caption for L2 listening development. SLaTE 2015: 65-70 - 2014
- [j38]Mijit Ablimit, Tatsuya Kawahara, Askar Hamdulla:
Lexicon optimization based on discriminative learning for automatic speech recognition of agglutinative language. Speech Commun. 60: 78-87 (2014) - [j37]Tony Tung, Randy Gomez, Tatsuya Kawahara, Takashi Matsuyama:
Multiparty Interaction Understanding Using Smart Multimodal Digital Signage. IEEE Trans. Hum. Mach. Syst. 44(5): 625-637 (2014) - [c172]Katsuhito Sudoh, Masaaki Nagata, Shinsuke Mori, Tatsuya Kawahara:
Japanese-to-English patent translation system based on domain-adapted word segmentation and post-ordering. AMTA 2014: 234-248 - [c171]Masato Mimura, Tatsuya Kawahara:
Unsupervised speaker adaptation of DNN-HMM by selecting similar speakers for lecture transcription. APSIPA 2014: 1-4 - [c170]Yukoh Wakabayashi, Koji Inoue, Hiromasa Yoshimoto, Tatsuya Kawahara:
Speaker diarization based on audio-visual integration for smart posterboard. APSIPA 2014: 1-4 - [c169]Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara:
Exploring deep neural networks and deep autoencoders in reverberant speech recognition. HSCMA 2014: 197-201 - [c168]Koji Inoue, Yukoh Wakabayashi, Hiromasa Yoshimoto, Tatsuya Kawahara:
Speaker diarization using eye-gaze information in multi-party conversations. INTERSPEECH 2014: 562-566 - [c167]Sheng Li, Yuya Akita, Tatsuya Kawahara:
Corpus and transcription system of Chinese Lecture Room. ISCSLP 2014: 442-445 - [c166]Koichiro Yoshino, Tatsuya Kawahara:
Information Navigation System Based on POMDP that Tracks User Focus. SIGDIAL Conference 2014: 32-40 - 2013
- [j36]Shinsuke Sakai, Tatsuya Kawahara:
Admissible Stopping in Viterbi Beam Search for Unit Selection Speech Synthesis. IEICE Trans. Inf. Syst. 96-D(6): 1359-1367 (2013) - [j35]Hiromitsu Nishizaki, Tomoyosi Akiba, Kiyoaki Aikawa, Tatsuya Kawahara, Tomoko Matsui:
Evaluation Framework Design of Spoken Term Detection Study at the NTCIR-9 IR for Spoken Documents Task. Inf. Media Technol. 8(1): 59-80 (2013) - [j34]Graham Neubig, Taro Watanabe, Shinsuke Mori, Tatsuya Kawahara:
Substring-based machine translation. Mach. Transl. 27(2): 139-166 (2013) - [c165]Tatsuya Kawahara:
Smart posterboard: Multi-modal sensing and analysis of poster conversations. APSIPA 2013: 1-5 - [c164]Tony Tung, Randy Gomez, Tatsuya Kawahara, Takashi Matsuyama:
Multi-party Human-Machine Interaction Using a Smart Multimodal Digital Signage. HCI (4) 2013: 408-415 - [c163]Koichiro Yoshino, Shinsuke Mori, Tatsuya Kawahara:
Incorporating semantic information to selection of web texts for language model of spoken dialogue system. ICASSP 2013: 8252-8256 - [c162]Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai, Ui-Hyun Kim, Hiroshi G. Okuno, Tatsuya Kawahara:
Hands-free human-robot communication robust to speaker's radial position. ICRA 2013: 4329-4334 - [c161]Koichiro Yoshino, Shinsuke Mori, Tatsuya Kawahara:
Predicate Argument Structure Analysis using Partially Annotated Corpora. IJCNLP 2013: 957-961 - [c160]Tatsuya Kawahara, Soichiro Hayashi, Katsuya Takanashi:
Estimation of interest and comprehension level of audience through multi-modal behaviors in poster conversations. INTERSPEECH 2013: 1882-1885 - [c159]Nigel G. Ward, Steven D. Werner, David G. Novick, Elizabeth Shriberg, Catharine Oertel, Louis-Philippe Morency, Tatsuya Kawahara:
The Similar Segments in Social Speech Task. MediaEval 2013 - [c158]Tomoyosi Akiba, Hiromitsu Nishizaki, Kiyoaki Aikawa, Xinhui Hu, Yoshiaki Itoh, Tatsuya Kawahara, Seiichi Nakagawa, Hiroaki Nanjo, Yoichi Yamashita:
Overview of the NTCIR-10 SpokenDoc-2 Task. NTCIR 2013 - 2012
- [j33]Graham Neubig, Yuya Akita, Shinsuke Mori, Tatsuya Kawahara:
A monotonic statistical machine translation approach to speaking style transformation. Comput. Speech Lang. 26(5): 349-370 (2012) - [j32]Graham Neubig, Masato Mimura, Shinsuke Mori, Tatsuya Kawahara:
Bayesian Learning of a Language Model from Continuous Speech. IEICE Trans. Inf. Syst. 95-D(2): 614-625 (2012) - [j31]Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, Tatsuya Kawahara:
Joint Phrase Alignment and Extraction for Statistical Machine Translation. Inf. Media Technol. 7(2): 793-804 (2012) - [j30]Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, Tatsuya Kawahara:
Joint Phrase Alignment and Extraction for Statistical Machine Translation. J. Inf. Process. 20(2): 512-523 (2012) - [c157]Graham Neubig, Taro Watanabe, Shinsuke Mori, Tatsuya Kawahara:
Machine Translation without Words through Substring Alignment. ACL (1) 2012: 165-174 - [c156]Cheongjae Lee, Tatsuya Kawahara:
Hybrid vector space model for flexible voice search. APSIPA 2012: 1-4 - [c155]Koichiro Yoshino, Shinsuke Mori, Tatsuya Kawahara:
Language modeling for spoken dialogue system based on sentence transformation and filtering using predicate-argument structures. APSIPA 2012: 1-4 - [c154]Koichiro Yoshino, Shinsuke Mori, Tatsuya Kawahara:
Language Modeling for Spoken Dialogue System based on Filtering using Predicate-Argument Structures. COLING 2012: 2993-3002 - [c153]Tony Tung, Randy Gomez, Tatsuya Kawahara, Takashi Matsuyama:
Group Dynamics and Multimodal Interaction Modeling Using a Smart Digital Signage. ECCV Workshops (1) 2012: 362-371 - [c152]Randy Gomez, Tatsuya Kawahara, Keisuke Nakamura, Kazuhiro Nakadai:
Multi-party human-robot interaction with distant-talking speech recognition. HRI 2012: 439-446 - [c151]Tatsuya Kawahara:
Transcription System Using Automatic Speech Recognition for the Japanese Parliament (Diet). IAAI 2012: 2224-2228 - [c150]Mijit Ablimit, Tatsuya Kawahara, Askar Hamdulla:
Discriminative approach to lexical entry selection for automatic speech recognition of agglutinative language. ICASSP 2012: 5009-5012 - [c149]Tatsuya Kawahara, Takuma Iwatate, Katsuya Takanashi:
Prediction of Turn-Taking by Combining Prosodic and Eye-Gaze Information in Poster Conversations. INTERSPEECH 2012: 727-730 - [c148]Tomoko Nariai, Kazuyo Tanaka, Tatsuya Kawahara:
Comparative Analysis of Intensity between Native Speakers and Japanese Speakers of English. INTERSPEECH 2012: 895-898 - [c147]Tatsuya Kawahara, Randy Gomez:
Dereverberation based on Wavelet Packet Filtering for Robust Automatic Speech Recognition. INTERSPEECH 2012: 1243-1246 - [c146]Yuya Akita, Makoto Watanabe, Tatsuya Kawahara:
Automatic Transcription of Lecture Speech using Language Model Based on Speaking-Style Transformation of Proceeding Texts. INTERSPEECH 2012: 2326-2329 - [c145]Tomoyosi Akiba, Hiromitsu Nishizaki, Kiyoaki Aikawa, Tatsuya Kawahara, Tomoko Matsui:
Designing an Evaluation Framework for Spoken Term Detection and Spoken Document Retrieval at the NTCIR-9 SpokenDoc Task. LREC 2012: 3527-3534 - [c144]Tatsuya Kawahara:
Multi-modal Sensing and Analysis of Poster Conversations: Toward Smart Posterboard. SIGDIAL Conference 2012: 1-9 - 2011
- [j29]Shinsuke Sakai, Tatsuya Kawahara, Hisashi Kawai:
Probabilistic Concatenation Modeling for Corpus-Based Speech Synthesis. IEICE Trans. Inf. Syst. 94-D(10): 2006-2014 (2011) - [j28]Teruhisa Misu, Komei Sugiura, Tatsuya Kawahara, Kiyonori Ohtake, Chiori Hori, Hideki Kashioka, Hisashi Kawai, Satoshi Nakamura:
Modeling spoken decision support dialogue and optimization of its dialogue strategy. ACM Trans. Speech Lang. Process. 7(3): 10:1-10:18 (2011) - [c143]Teruhisa Misu, Komei Sugiura, Tatsuya Kawahara, Kiyonori Ohtake, Chiori Hori, Hideki Kashioka, Satoshi Nakamura:
Online Learning of Bayes Risk-Based Optimization of Dialogue Management for Document Retrieval Systems with Speech Interface. IWSDS 2011: 29-52 - [c142]Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, Tatsuya Kawahara:
An Unsupervised Model for Joint Phrase Alignment and Extraction. ACL 2011: 632-641 - [c141]Makoto Watanabe, Naoko Tosa, Tatsuya Kawahara:
Japanese Painting Study Tool: A System for Creating Nihonga Portraits. Culture and Computing 2011: 159-160 - [c140]Randy Gomez, Tatsuya Kawahara:
Denoising Using Optimized Wavelet Filtering for Automatic Speech Recognition. INTERSPEECH 2011: 1673-1676 - [c139]Yuya Akita, Tatsuya Kawahara:
Automatic Comma Insertion of Lecture Transcripts Based on Multiple Annotations. INTERSPEECH 2011: 2889-2892 - [c138]Cheongjae Lee, Tatsuya Kawahara, Alexander I. Rudnicky:
Combining Slot-based Vector Space Model for Voice Book Search. IWSDS 2011: 31-39 - [c137]Tomoyosi Akiba, Hiromitsu Nishizaki, Kiyoaki Aikawa, Tatsuya Kawahara, Tomoko Matsui:
Overview of the IR for Spoken Documents Task in NTCIR-9 Workshop. NTCIR 2011 - [c136]Koichiro Yoshino, Shinsuke Mori, Tatsuya Kawahara:
Spoken Dialogue System based on Information Extraction using Similarity of Predicate Argument Structures. SIGDIAL Conference 2011: 59-66 - 2010
- [j27]Takahiro Shinozaki, Sadaoki Furui, Tatsuya Kawahara:
Gaussian Mixture Optimization Based on Efficient Cross-Validation. IEEE J. Sel. Top. Signal Process. 4(3): 540-547 (2010) - [j26]David Cournapeau, Shinji Watanabe, Atsushi Nakamura, Tatsuya Kawahara:
Online Unsupervised Classification With Model Comparison in the Variational Bayes Framework for Voice Activity Detection. IEEE J. Sel. Top. Signal Process. 4(6): 1071-1083 (2010) - [j25]Teruhisa Misu, Tatsuya Kawahara:
Bayes risk-based dialogue management for document retrieval system with speech interface. Speech Commun. 52(1): 61-71 (2010) - [j24]Kentaro Ishizuka, Shoko Araki, Tatsuya Kawahara:
Speech Activity Detection for Multi-Party Conversation Analyses Based on Likelihood Ratio Test on Spatial Magnitude. IEEE Trans. Speech Audio Process. 18(6): 1354-1365 (2010) - [j23]Yuya Akita, Tatsuya Kawahara:
Statistical Transformation of Language and Pronunciation Models for Spontaneous Speech Recognition. IEEE Trans. Speech Audio Process. 18(6): 1539-1549 (2010) - [j22]Randy Gomez, Tatsuya Kawahara:
Robust Speech Recognition Based on Dereverberation Parameter Optimization Using Acoustic Model Likelihood. IEEE Trans. Speech Audio Process. 18(7): 1708-1716 (2010) - [c135]Randy Gomez, Tatsuya Kawahara, Kazuhiro Nakadai:
Robust hands-free Automatic Speech Recognition for human-machine interaction. Humanoids 2010: 138-143 - [c134]David Cournapeau, Shinji Watanabe, Atsushi Nakamura, Tatsuya Kawahara:
Using online model comparison in the Variational Bayes framework for online unsupervised Voice Activity Detection. ICASSP 2010: 4462-4465 - [c133]Randy Gomez, Tatsuya Kawahara:
Optimizing spectral subtraction and wiener filtering for robust speech recognition in reverberant and noisy conditions. ICASSP 2010: 4566-4569 - [c132]Graham Neubig, Yuya Akita, Shinsuke Mori, Tatsuya Kawahara:
Improved statistical models for SMT-based speaking style transformation. ICASSP 2010: 5206-5209 - [c131]Yuya Akita, Masato Mimura, Graham Neubig, Tatsuya Kawahara:
Semi-automated update of automatic transcription system for the Japanese national congress. INTERSPEECH 2010: 338-341 - [c130]Randy Gomez, Tatsuya Kawahara:
An improved wavelet-based dereverberation for robust automatic speech recognition. INTERSPEECH 2010: 578-581 - [c129]Tatsuya Kawahara, Norihiro Katsumaru, Yuya Akita, Shinsuke Mori:
Classroom note-taking system for hearing impaired students using automatic speech recognition adapted to lectures. INTERSPEECH 2010: 626-629 - [c128]Yoshiaki Itoh, Hiromitsu Nishizaki, Xinhui Hu, Hiroaki Nanjo, Tomoyosi Akiba, Tatsuya Kawahara, Seiichi Nakagawa, Tomoko Matsui, Yoichi Yamashita, Kiyoaki Aikawa:
Constructing Japanese test collections for spoken term detection. INTERSPEECH 2010: 677-680 - [c127]Graham Neubig, Masato Mimura, Shinsuke Mori, Tatsuya Kawahara:
Learning a language model from continuous speech. INTERSPEECH 2010: 1053-1056 - [c126]Tatsuya Kawahara, Kouhei Sumi, Zhi-Qiang Chang, Katsuya Takanashi:
Detection of hot spots in poster conversations based on reactive tokens of audience. INTERSPEECH 2010: 3042-3045 - [c125]Tatsuya Kawahara:
Automatic transcription of parliamentary meetings and classroom lectures - A sustainable approach and real system evaluations -. ISCSLP 2010: 1-6 - [c124]Koichiro Yoshino, Tatsuya Kawahara:
Spoken Dialogue System Based on Information Extraction from Web Text. IWSDS 2010: 196-197
2000 – 2009
- 2009
- [j21]Hongcui Wang, Tatsuya Kawahara:
Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System. IEICE Trans. Inf. Syst. 92-D(12): 2462-2468 (2009) - [j20]Tomoyosi Akiba, Kiyoaki Aikawa, Yoshiaki Itoh, Tatsuya Kawahara, Hiroaki Nanjo, Hiromitsu Nishizaki, Norihito Yasuda, Yoichi Yamashita, Katunobu Itou:
Construction of a Test Collection for Spoken Document Retrieval from Lecture Audio Data. Inf. Media Technol. 4(2): 485-497 (2009) - [j19]Tomoyosi Akiba, Kiyoaki Aikawa, Yoshiaki Itoh, Tatsuya Kawahara, Hiroaki Nanjo, Hiromitsu Nishizaki, Norihito Yasuda, Yoichi Yamashita, Katunobu Itou:
Construction of a Test Collection for Spoken Document Retrieval from Lecture Audio Data. J. Inf. Process. 17: 82-94 (2009) - [j18]Hongcui Wang, Christopher J. Waple, Tatsuya Kawahara:
Computer Assisted Language Learning system based on dynamic question generation and error prediction for automatic speech recognition. Speech Commun. 51(10): 995-1005 (2009) - [c123]Tatsuya Kawahara:
New perspectives on spoken language understanding: Does machine need to fully understand speech? ASRU 2009: 46-50 - [c122]Tatsuya Kawahara, Masato Mimura, Yuya Akita:
Language model transformation applied to lightly supervised training of acoustic model for congress meetings. ICASSP 2009: 3853-3856 - [c121]Shinsuke Sakai, Tatsuya Kawahara, Tohru Shimizu, Satoshi Nakamura:
Optimal learning of P-Layer additive F0 models with cross-validation. ICASSP 2009: 4245-4248 - [c120]Yuya Akita, Masato Mimura, Tatsuya Kawahara:
Automatic transcription system for meetings of the Japanese national congress. INTERSPEECH 2009: 84-87 - [c119]Kouhei Sumi, Tatsuya Kawahara, Jun Ogata, Masataka Goto:
Acoustic event detection for spotting "hot spots" in podcasts. INTERSPEECH 2009: 1143-1146 - [c118]Randy Gomez, Tatsuya Kawahara:
Optimization of dereverberation parameters based on likelihood of speech recognizer. INTERSPEECH 2009: 1223-1226 - [c117]Graham Neubig, Shinsuke Mori, Tatsuya Kawahara:
A WFST-based log-linear framework for speaking-style transformation. INTERSPEECH 2009: 1495-1498 - [c116]Tatsuya Kawahara, Hongcui Wang, Yasushi Tsubota, Masatake Dantsuji:
Japanese CALL system based on dynamic question generation and error prediction for ASR. SLaTE 2009 - [c115]Kazunori Komatani, Tatsuya Kawahara, Hiroshi G. Okuno:
A Model of Temporally Changing User Behaviors in a Deployed Spoken Dialogue System. UMAP 2009: 409-414 - 2008
- [j17]David Cournapeau, Tatsuya Kawahara:
Voice Activity Detection Based on High Order Statistics and Online EM Algorithm. IEICE Trans. Inf. Syst. 91-D(12): 2854-2861 (2008) - [c114]Teruhisa Misu, Tatsuya Kawahara:
Bayes Risk-based Dialogue Management for Document Retrieval System with Speech Interface. COLING (Posters) 2008: 59-62 - [c113]Takahiro Shinozaki, Tatsuya Kawahara:
GMM and HMM training by aggregated EM algorithm with increased ensemble sizes for robust parameter estimation. ICASSP 2008: 4405-4408 - [c112]David Cournapeau, Tatsuya Kawahara:
Using variational bayes free energy for unsupervised voice activity detection. ICASSP 2008: 4429-4432 - [c111]Shinichi Sakai, Tatsuya Kawahara, Shun Nakamura:
Admissible stopping in Viterbi beam search for unit selection in concatenative speech synthesis. ICASSP 2008: 4613-4616 - [c110]Tatsuya Kawahara, Yusuke Nemoto, Yuya Akita:
Automatic lecture transcription by exploiting presentation slide information for language model adaptation. ICASSP 2008: 4929-4932 - [c109]Hongcui Wang, Tatsuya Kawahara:
Effective error prediction using decision tree for ASR grammar network in CALL system. ICASSP 2008: 5069-5072 - [c108]Kentaro Ishizuka, Shoko Araki, Tatsuya Kawahara:
Statistical speech activity detection based on spatial power distribution for analyses of poster presentations. INTERSPEECH 2008: 99-102 - [c107]Kazunori Komatani, Tatsuya Kawahara, Hiroshi G. Okuno:
Predicting ASR errors by exploiting barge-in rate of individual users for spoken dialogue systems. INTERSPEECH 2008: 183-186 - [c106]Tatsuya Kawahara, Hisao Setoguchi, Katsuya Takanashi, Kentaro Ishizuka, Shoko Araki:
Multi-modal recording, analysis and indexing of poster sessions. INTERSPEECH 2008: 1622-1625 - [c105]Tatsuya Kawahara, Masayoshi Toyokura, Teruhisa Misu, Chiori Hori:
Detection of feeling through back-channels in spoken dialogue. INTERSPEECH 2008: 1696 - [c104]Hongcui Wang, Tatsuya Kawahara:
A Japanese CALL system based on dynamic question generation and error prediction for ASR. INTERSPEECH 2008: 1737-1740 - [c103]Tetsuro Sasada, Shinsuke Mori, Tatsuya Kawahara:
Extracting word-pronunciation pairs from comparable set of text and speech. INTERSPEECH 2008: 1821-1824 - [c102]Takahiro Shinozaki, Sadaoki Furui, Tatsuya Kawahara:
Aggregated cross-validation and its efficient application to Gaussian mixture optimization. INTERSPEECH 2008: 2382-2385 - [c101]Tomoyosi Akiba, Kiyoaki Aikawa, Yoshiaki Itoh, Tatsuya Kawahara, Hiroaki Nanjo, Hiromitsu Nishizaki, Norihito Yasuda, Yoichi Yamashita, Katunobu Itou:
Test Collections for Spoken Document Retrieval from Lecture Audio Data. LREC 2008 - 2007
- [j16]Ian R. Lane, Tatsuya Kawahara, Tomoko Matsui, Satoshi Nakamura:
Out-of-Domain Utterance Detection Using Classification Confidences of Multiple Topics. IEEE Trans. Speech Audio Process. 15(1): 150-161 (2007) - [c100]Takahiro Shinozaki, Tatsuya Kawahara:
HMM training based on CV-EM and CV Gaussian mixture optimization. ASRU 2007: 318-322 - [c99]Yuya Akita, Tatsuya Kawahara:
Topic-Independent Speaking-Style Transformation of Language Model for Spontaneous Speech Recognition. ICASSP (4) 2007: 33-36 - [c98]Tatsuya Kawahara, Masahiro Saikou, Katsuya Takanashi:
Automatic Detection of Sentence and Clause Units using Local Syntactic Dependency. ICASSP (4) 2007: 125-128 - [c97]Teruhisa Misu, Tatsuya Kawahara:
Speech-Based Interactive Information Guidance System using Question-Answering Technique. ICASSP (4) 2007: 145-148 - [c96]Hisao Setoguchi, Katsuya Takanashi, Tatsuya Kawahara:
Multi-modal conversational analysis of poster presentations using multiple sensors. TMR@ICMI 2007: 43-46 - [c95]Teruhisa Misu, Tatsuya Kawahara:
An Interactive Framework for Document Retrieval and Presentation with Question-Answering Function in Restricted Domain. IEA/AIE 2007: 126-134 - [c94]Kazunori Komatani, Tatsuya Kawahara, Hiroshi G. Okuno:
Analyzing temporal transition of real user's behaviors in a spoken dialogue system. INTERSPEECH 2007: 142-145 - [c93]Yuya Akita, Yusuke Nemoto, Tatsuya Kawahara:
PLSA-based topic detection in meetings for adaptation of lexicon and language model. INTERSPEECH 2007: 602-605 - [c92]Takahiro Shinozaki, Tatsuya Kawahara:
Gaussian mixture optimization for HMM based on efficient cross-validation. INTERSPEECH 2007: 2061-2064 - [c91]Christopher J. Waple, Hongcui Wang, Tatsuya Kawahara, Yasushi Tsubota, Masatake Dantsuji:
Evaluating and optimizing Japanese tutor system featuring dynamic question generation and interactive guidance. INTERSPEECH 2007: 2177-2180 - [c90]Teruhisa Misu, Tatsuya Kawahara:
Bayes risk-based optimization of dialogue management for document retrieval system with speech interface. INTERSPEECH 2007: 2705-2708 - [c89]David Cournapeau, Tatsuya Kawahara:
Evaluation of real-time voice activity detection based on high order statistics. INTERSPEECH 2007: 2945-2948 - [c88]Hiroaki Kokubo, Nobuo Hataoka, Akinobu Lee, Tatsuya Kawahara, Kiyohiro Shikano:
Real-Time Continuous Speech Recognition System on SH-4A Microprocessor. MMSP 2007: 35-38 - [e1]Sadaoki Furui, Tatsuya Kawahara:
IEEE Workshop on Automatic Speech Recognition & Understanding, ASRU 2007, Kyoto, Japan, December 9-13, 2007. IEEE 2007, ISBN 978-1-4244-1746-9 [contents] - 2006
- [j15]Ian R. Lane, Tatsuya Kawahara:
Verification of Speech Recognition Results Incorporating In-domain Confidence and Discourse Coherence Measures. IEICE Trans. Inf. Syst. 89-D(3): 931-938 (2006) - [j14]Carlos Troncoso, Tatsuya Kawahara:
Trigger-Based Language Model Adaptation for Automatic Transcription of Panel Discussions. IEICE Trans. Inf. Syst. 89-D(3): 1024-1031 (2006) - [j13]Teruhisa Misu, Tatsuya Kawahara:
Dialogue strategy to clarify user's queries for document retrieval system with speech interface. Speech Commun. 48(9): 1137-1150 (2006) - [c87]Ryoji Hamabe, Kiyotaka Uchimoto, Tatsuya Kawahara, Hitoshi Isahara:
Detection of Quotations and Inserted Clauses and Its Application to Dependency Structure Analysis in Spontaneous Japanese. ACL 2006 - [c86]Yuya Akita, Tatsuya Kawahara:
Efficient Estimation of Language Model Statistics of Spontaneous Speech Via Statistical Transformation Model. ICASSP (1) 2006: 1049-1052 - [c85]Yuya Akita, Masahiro Saikou, Hiroaki Nanjo, Tatsuya Kawahara:
Sentence boundary detection of spontaneous Japanese using statistical language model and support vector machines. INTERSPEECH 2006 - [c84]David Cournapeau, Tatsuya Kawahara, Kenji Mase, Tomoji Toriyama:
Voice activity detector based on enhanced cumulant of LPC residual and on-line EM algorithm. INTERSPEECH 2006 - [c83]Ryoji Hamabe, Kiyotaka Uchimoto, Tatsuya Kawahara, Hitoshi Isahara:
Detection of quotations and inserted clauses and its application to dependency structure analysis in spontaneous Japanese. INTERSPEECH 2006 - [c82]Yusuke Kida, Tatsuya Kawahara:
Evaluation of voice activity detection by combining multiple features with weight adaptation. INTERSPEECH 2006 - [c81]Teruhisa Misu, Tatsuya Kawahara:
A bootstrapping approach for developing language model of new spoken dialogue systems by selecting web texts. INTERSPEECH 2006 - [c80]Shinsuke Sakai, Tatsuya Kawahara:
Decision tree-based training of probabilistic concatenation models for corpus-based speech synthesis. INTERSPEECH 2006 - [c79]Christopher J. Waple, Yasushi Tsubota, Masatake Dantsuji, Tatsuya Kawahara:
Prototyping a call system for students of Japanese using dynamic diagram generation and interactive hints. INTERSPEECH 2006 - [c78]Kiyotaka Uchimoto, Ryoji Hamabe, Takehiko Maruyama, Katsuya Takanashi, Tatsuya Kawahara, Hitoshi Isahara:
Dependency-structure Annotation to Corpus of Spontaneous Japanese. LREC 2006: 635-638 - [c77]Hiroaki Kokubo, Nobuo Hataoka, Akinobu Lee, Tatsuya Kawahara, Kiyohiro Shikano:
Embedded Julius: Continuous Speech Recognition Software for Microprocessor. MMSP 2006: 378-381 - 2005
- [j12]Yuya Akita, Tatsuya Kawahara:
Language Model Adaptation Based on PLSA of Topics and Speakers for Automatic Transcription of Panel Discussions. IEICE Trans. Inf. Syst. 88-D(3): 439-445 (2005) - [j11]Ian R. Lane, Tatsuya Kawahara, Tomoko Matsui, Satoshi Nakamura:
Dialogue Speech Recognition by Combining Hierarchical Topic Classification and Language Model Switching. IEICE Trans. Inf. Syst. 88-D(3): 446-454 (2005) - [j10]Yuya Akita, Tatsuya Kawahara:
Unsupervised speaker indexing of discussions using anchor models. Syst. Comput. Jpn. 36(9): 25-33 (2005) - [j9]Masafumi Nishida, Tatsuya Kawahara:
Speaker model selection based on the Bayesian information criterion applied to unsupervised speaker indexing. IEEE Trans. Speech Audio Process. 13(4): 583-592 (2005) - [j8]Kazunori Komatani, Shinichi Ueno, Tatsuya Kawahara, Hiroshi G. Okuno:
User Modeling in Spoken Dialogue Systems to Generate Flexible Guidance. User Model. User Adapt. Interact. 15(1-2): 169-183 (2005) - [c76]Yuya Akita, Tatsuya Kawahara:
Generalized Statistical Modeling of Pronunciation Variations using Variable-length Phone Context. ICASSP (1) 2005: 689-692 - [c75]Ian R. Lane, Tatsuya Kawahara:
Incorporating Dialogue Context and Topic Clustering in Out-of-Domain Detection. ICASSP (1) 2005: 1045-1048 - [c74]Hiroaki Nanjo, Tatsuya Kawahara:
A New ASR Evaluation Measure and Minimum Bayes-Risk Decoding for Open-domain Speech Understanding. ICASSP (1) 2005: 1053-1056 - [c73]Ian R. Lane, Tatsuya Kawahara:
Utterance verification incorporating in-domain confidence and discourse coherence measures. INTERSPEECH 2005: 421-424 - [c72]Hiroaki Nanjo, Teruhisa Misu, Tatsuya Kawahara:
Minimum Bayes-risk decoding considering word significance for information retrieval system. INTERSPEECH 2005: 561-564 - [c71]Teruhisa Misu, Tatsuya Kawahara:
Dialogue strategy to clarify user's queries for document retrieval system with speech interface. INTERSPEECH 2005: 637-640 - [c70]Carlos Troncoso, Tatsuya Kawahara:
Trigger-based language model adaptation for automatic meeting transcription. INTERSPEECH 2005: 1297-1300 - [c69]Yusuke Kida, Tatsuya Kawahara:
Voice activity detection based on optimally weighted combination of multiple features. INTERSPEECH 2005: 2621-2624 - [c68]Teruhisa Misu, Tatsuya Kawahara:
Speech-based Information Retrieval System with Clarification Dialogue Strategy. HLT/EMNLP 2005: 1003-1010 - 2004
- [j7]Sadaoki Furui, Mary E. Beckman, Julia Hirschberg, Shuichi Itahashi, Tatsuya Kawahara, Satoshi Nakamura, Shrikanth S. Narayanan:
Introduction to the Special Issue on Spontaneous Speech Processing. IEEE Trans. Speech Audio Process. 12(4): 349-350 (2004) - [j6]Hiroaki Nanjo, Tatsuya Kawahara:
Language model and speaking rate adaptation for spontaneous presentation speech recognition. IEEE Trans. Speech Audio Process. 12(4): 391-400 (2004) - [j5]Tatsuya Kawahara, Masahiro Hasegawa, Kazuya Shitaoka, Tasuku Kitade, Hiroaki Nanjo:
Automatic indexing of lecture presentations using unsupervised learning of presumed discourse markers. IEEE Trans. Speech Audio Process. 12(4): 409-419 (2004) - [c67]Kazunori Komatani, Teruhisa Misu, Tatsuya Kawahara, Hiroshi G. Okuno:
Efficient Confirmation Strategy for Large-scale Text Retrieval Systems with Spoken Dialogue Interface. COLING 2004 - [c66]Kazuya Shitaoka, Kiyotaka Uchimoto, Tatsuya Kawahara, Hitoshi Isahara:
Dependency Structure Analysis and Sentence Boundary Detection in Spontaneous Japanese. COLING 2004 - [c65]Masafumi Nishida, Tatsuya Kawahara:
Speaker indexing and adaptation using speaker clustering based on statistical model selection. ICASSP (1) 2004: 353-356 - [c64]Hiroaki Nanjo, Tasuku Kitade, Tatsuya Kawahara:
Automatic indexing of key sentences for lecture archives using statistics of presumed discourse markers. ICASSP (1) 2004: 449-452 - [c63]Ian R. Lane, Tatsuya Kawahara, Tomoko Matsui, Satoshi Nakamura:
Out-of-domain detection based on confidence measures from multiple topic classification. ICASSP (1) 2004: 757-760 - [c62]Akinobu Lee, Kiyohiro Shikano, Tatsuya Kawahara:
Real-time word confidence scoring using local posterior probabilities on tree trellis search. ICASSP (1) 2004: 793-796 - [c61]Yuya Akita, Masahiro Hasegawa, Tatsuya Kawahara:
Automatic audio archiving system for panel discussions. ICME 2004: 1859-1862 - [c60]Kazunori Komatani, Ryosuke Ito, Tatsuya Kawahara, Hiroshi G. Okuno:
Recognition of Emotional States in Spoken Dialogue with a Robot. IEA/AIE 2004: 413-423 - [c59]Teruhisa Misu, Tatsuya Kawahara, Kazunori Komatani:
Confirmation strategy for document retrieval systems with spoken dialog interface. INTERSPEECH 2004: 45-48 - [c58]Yuya Akita, Tatsuya Kawahara:
Language model adaptation based on PLSA of topics and speakers. INTERSPEECH 2004: 1045-1048 - [c57]Tatsuya Kawahara, Kiyotaka Uchimoto, Hitoshi Isahara, Kazuya Shitaoka:
Dependency structure analysis and sentence boundary detection in spontaneous Japanese. INTERSPEECH 2004: 1353-1356 - [c56]Tatsuya Kawahara, Masatake Dantsuji, Yasushi Tsubota:
Practical use of English pronunciation system for Japanese students in the CALL classroom. INTERSPEECH 2004: 1689-1692 - [c55]Tasuku Kitade, Tatsuya Kawahara, Hiroaki Nanjo:
Automatic extraction of key sentences from oral presentations using statistical measure based on discourse markers. INTERSPEECH 2004: 2169-2172 - [c54]Tatsuya Kawahara, Ian Richard Lane, Tomoko Matsui, Satoshi Nakamura:
Topic classification and verification modeling for out-of-domain utterance detection. INTERSPEECH 2004: 2197-2200 - [c53]Ian Richard Lane, Tatsuya Kawahara, Shinichi Ueno:
Example-based training of dialogue planning incorporating user and situation models. INTERSPEECH 2004: 2837-2840 - [c52]Tatsuya Kawahara, Kazuya Shitaoka, Hiroaki Nanjo:
Automatic transformation of lecture transcription into document style using statistical framework. INTERSPEECH 2004: 2881-2884 - [c51]Tatsuya Kawahara, Akinobu Lee, Kazuya Takeda, Katsunobu Itou, Kiyohiro Shikano:
Recent progress of open-source LVCSR engine Julius and Japanese model repository. INTERSPEECH 2004: 3069-3072 - [c50]Shinichi Ueno, Fumihiro Adachi, Kazunori Komatani, Tatsuya Kawahara, Hiroshi G. Okuno:
Bus Information System Based on User Models and Dynamic Generation of VoiceXML Scripts. JSAI Workshops 2004: 46-60 - 2003
- [c49]Yoji Kiyota, Sadao Kurohashi, Teruhisa Misu, Kazunori Komatani, Tatsuya Kawahara, Fuyuko Kido:
Dialog Navigator: A Spoken Dialog Q-A System based on Large Text Knowledge Base. ACL (Companion) 2003: 149-152 - [c48]Kazunori Komatani, Shinichi Ueno, Tatsuya Kawahara, Hiroshi G. Okuno:
Flexible Guidance Generation Using User Model in Spoken Dialogue Systems. ACL 2003: 256-263 - [c47]Masafumi Nishida, Tatsuya Kawahara:
Unsupervised speaker indexing using speaker model selection based on Bayesian information criterion. ICASSP (1) 2003: 172-175 - [c46]Ian R. Lane, Tatsuya Kawahara, Tomoko Matsui:
Language model switching based on topic detection for dialog speech recognition. ICASSP (1) 2003: 616-619 - [c45]Ian R. Lane, Tatsuya Kawahara, Tomoko Matsui, Satoshi Nakamura:
Hierarchical topic classification for dialog speech recognition based on language model switching. INTERSPEECH 2003: 429-432 - [c44]Kazunori Komatani, Shinichi Ueno, Tatsuya Kawahara, Hiroshi G. Okuno:
User modeling in spoken dialogue systems for flexible guidance generation. INTERSPEECH 2003: 745-748 - [c43]Tatsuya Kawahara, Ryosuke Ito, Kazunori Komatani:
Spoken dialogue system for queries on appliance manuals using hierarchical confirmation strategy. INTERSPEECH 2003: 1701-1704 - [c42]Masafumi Nishida, Tatsuya Kawahara:
Speaker model selection using Bayesian information criterion for speaker indexing and speaker adaptation. INTERSPEECH 2003: 1849-1852 - [c41]Yuya Akita, Tatsuya Kawahara:
Unsupervised speaker indexing using anchor models and automatic transcription of discussions. INTERSPEECH 2003: 2985-2988 - [c40]Kazunori Komatani, Fumihiro Adachi, Shinichi Ueno, Tatsuya Kawahara, Hiroshi G. Okuno:
Flexible Spoken Dialogue System based on User Models and Dynamic Generation of VoiceXML Scripts. SIGDIAL Workshop 2003: 87-96 - 2002
- [c39]Kazunori Komatani, Tatsuya Kawahara, Ryosuke Ito, Hiroshi G. Okuno:
Efficient Dialogue Strategy to Find Users' Intended Items from Information Query Results. COLING 2002 - [c38]Tatsuya Kawahara, Masahiro Hasegawa:
Automatic indexing of lecture speech by extracting topic-independent discourse markers. ICASSP 2002: 1-4 - [c37]Hiroaki Nanjo, Tatsuya Kawahara:
Speaking-rate dependent decoding and adaptation for spontaneous lecture speech recognition. ICASSP 2002: 725-728 - [c36]Yoko Yamakata, Tatsuya Kawahara, Hiroshi G. Okuno:
Belief network based disambiguation of object reference in spoken dialogue system for robot. INTERSPEECH 2002: 177-180 - [c35]Antoine Raux, Tatsuya Kawahara:
Automatic intelligibility assessment and diagnosis of critical pronunciation errors for computer-assisted pronunciation learning. INTERSPEECH 2002: 737-740 - [c34]Kazunori Imoto, Yasushi Tsubota, Antoine Raux, Tatsuya Kawahara, Masatake Dantsuji:
Modeling and automatic detection of English sentence stress for computer-assisted English prosody learning system. INTERSPEECH 2002: 749-752 - [c33]Yasushi Tsubota, Tatsuya Kawahara, Masatake Dantsuji:
Recognition and verification of English by Japanese students for computer-assisted language learning system. INTERSPEECH 2002: 1205-1208 - [c32]Kozo Okuda, Tatsuya Kawahara, Satoshi Nakamura:
Speaking rate compensation based on likelihood criterion in acoustic model training and decoding. INTERSPEECH 2002: 2589-2592 - [c31]Akinobu Lee, Tatsuya Kawahara, Kazuya Takeda, Masato Mimura, Atsushi Yamada, Akinori Ito, Katsunobu Itou, Kiyohiro Shikano:
Continuous Speech Recognition Consortium an Open Repository for CSR Tools and Models. LREC 2002 - 2001
- [c30]Akinobu Lee, Tatsuya Kawahara, Kiyohiro Shikano:
Gaussian mixture selection using context-independent HMM. ICASSP 2001: 69-72 - [c29]Kazunori Komatani, Katsuaki Tanaka, Hiroaki Kashima, Tatsuya Kawahara:
Domain-independent spoken dialogue platform using key-phrase spotting based on combined language model. INTERSPEECH 2001: 1319-1322 - [c28]Akinobu Lee, Tatsuya Kawahara, Kiyohiro Shikano:
Julius - an open source real-time large vocabulary recognition engine. INTERSPEECH 2001: 1691-1694 - [c27]Hiroaki Nanjo, Kazuomi Kato, Tatsuya Kawahara:
Speaking rate dependent acoustic modeling for spontaneous lecture speech recognition. INTERSPEECH 2001: 2531-2534 - 2000
- [c26]Kazunori Komatani, Tatsuya Kawahara:
Flexible Mixed-Initiative Dialogue Management using Concept-Level Confidence Measures of Speech Recognizer Output. COLING 2000: 467-473 - [c25]Akinobu Lee, Tatsuya Kawahara, Kazuya Takeda, Kiyohiro Shikano:
A new phonetic tied-mixture model for efficient decoding. ICASSP 2000: 1269-1272 - [c24]Hiroya Fujisaki, Katsuhiko Shirai, Shuji Doshita, Seiichi Nakagawa, Keikichi Hirose, Shuichi Itahashi, Tatsuya Kawahara, Sumio Ohno, Hideaki Kikuchi, Kenji Abe, Shinya Kiriyama:
Overview of an intelligent system for information retrieval based on human-machine dialogue through spoken language. INTERSPEECH 2000: 70-73 - [c23]Kazuomi Kato, Hiroaki Nanjo, Tatsuya Kawahara:
Automatic transcription of lecture speech using topic-independent language modeling. INTERSPEECH 2000: 162-165 - [c22]Kazunori Imoto, Masatake Dantsuji, Tatsuya Kawahara:
Modelling of the perception of English sentence stress for computer-assisted language learning. INTERSPEECH 2000: 175-178 - [c21]Tatsuya Kawahara, Akinobu Lee, Tetsunori Kobayashi, Kazuya Takeda, Nobuaki Minematsu, Shigeki Sagayama, Katsunobu Itou, Akinori Ito, Mikio Yamamoto, Atsushi Yamada, Takehito Utsuro, Kiyohiro Shikano:
Free software toolkit for Japanese large vocabulary continuous speech recognition. INTERSPEECH 2000: 476-479 - [c20]Yasushi Tsubota, Masatake Dantsuji, Tatsuya Kawahara:
Computer-assisted English vowel learning system for Japanese speakers using cross language formant structures. INTERSPEECH 2000: 566-569 - [c19]Kazunori Komatani, Tatsuya Kawahara:
Generating effective confirmation and guidance using two-level confidence measures for dialogue systems. INTERSPEECH 2000: 648 - [c18]Hiroaki Nanjo, Akinobu Lee, Tatsuya Kawahara:
Automatic diagnosis of recognition errors in large vocabulary continuous speech recognition systems. INTERSPEECH 2000: 1027-1030 - [c17]Katsunobu Itou, Kiyohiro Shikano, Tatsuya Kawahara, Kazuya Takeda, Atsushi Yamada, Akinori Ito, Takehito Utsuro, Tetsunori Kobayashi, Nobuaki Minematsu, Mikio Yamamoto, Shigeki Sagayama, Akinobu Lee:
IPA Japanese Dictation Free Software Project. LREC 2000
1990 – 1999
- 1999
- [c16]Tatsuya Kawahara, Shuji Doshita:
Topic independent language model for key-phrase detection and verification. ICASSP 1999: 685-688 - 1998
- [j4]Tatsuya Kawahara, Chin-Hui Lee, Biing-Hwang Juang:
Flexible speech understanding based on combined key-phrase detection and verification. IEEE Trans. Speech Audio Process. 6(6): 558-568 (1998) - [c15]Chul-Ho Jo, Tatsuya Kawahara, Shuji Doshita, Masatake Dantsuji:
Automatic pronunciation error detection and guidance for foreign language learning. ICSLP 1998 - [c14]Tatsuya Kawahara, Kentaro Ishizuka, Shuji Doshita, Chin-Hui Lee:
Speaking-style dependent lexicalized filler model for key-phrase detection and verification. ICSLP 1998 - [c13]Tatsuya Kawahara, Tetsunori Kobayashi, Kazuya Takeda, Nobuaki Minematsu, Katsunobu Itou, Mikio Yamamoto, Atsushi Yamada, Takehito Utsuro, Kiyohiro Shikano:
Sharable software repository for Japanese large vocabulary continuous speech recognition. ICSLP 1998 - [c12]Akinobu Lee, Tatsuya Kawahara, Shuji Doshita:
An efficient two-pass search algorithm using word trellis index. ICSLP 1998 - [c11]Felix C. M. Quimbo, Tatsuya Kawahara, Shuji Doshita:
Prosodic analysis of fillers and self-repair in Japanese speech. ICSLP 1998 - 1997
- [j3]Shigenori Maeda, Masaaki Okamoto, Tatsuya Kawahara, Michihiko Minoh, Katsuo Ikeda, Shuji Doshita:
Individual identification by integrating facial image, walking image, and vocal features. Syst. Comput. Jpn. 28(6): 77-84 (1997) - [c10]Hirokazu Masataki, Yoshinori Sagisaka, Kazuya Hisaki, Tatsuya Kawahara:
Task adaptation using MAP estimation in N-gram language modeling. ICASSP 1997: 783-786 - [c9]Tatsuya Kawahara, Chin-Hui Lee, Biing-Hwang Juang:
Combining key-phrase detection and subword-based verification for flexible speech understanding. ICASSP 1997: 1159-1162 - 1996
- [c8]Tatsuya Kawahara, Norihide Kitaoka, Shuji Doshita:
Concept-based phrase spotting approach for spontaneous speech understanding. ICASSP 1996: 291-298 - [c7]Tatsuya Kawahara, Chin-Hui Lee, Biing-Hwang Juang:
Key-phrase detection and verification for flexible speech understanding. ICSLP 1996: 861-864 - 1994
- [j2]Tatsuya Kawahara, Shuji Doshita, Shinji Matsumoto:
Continuous speech recognition based on A* search with word-pair constraint as heuristics. Syst. Comput. Jpn. 25(7): 101-110 (1994) - [c6]Tatsuya Kawahara, Masahiro Araki, Shuji Doshita:
Heuristic search integrating syntactic, semantic and dialog-level constraints. ICASSP (2) 1994: 25-28 - [c5]Tatsuya Kawahara, Toshihiko Munetsugu, Norihide Kitaoka, Shuji Doshita:
Keyword and phrase spotting with heuristic language model. ICSLP 1994: 815-818 - 1992
- [c4]Tatsuya Kawahara, Shuji Doshita:
HMM based on pair-wise Bayes classifiers. ICASSP 1992: 365-368 - 1991
- [j1]Tatsuya Kawahara, Shuji Doshita, Shigeyoshi Kitazawa:
Speaker-independent consonant recognition by integrating discriminant analysis and HMM. Syst. Comput. Jpn. 22(7): 84-95 (1991) - [c3]Tatsuya Kawahara, Shuji Doshita:
Phoneme recognition by combining discriminant analysis and HMM. ICASSP 1991: 557-560 - [c2]Pascale Fung, Tatsuya Kawahara, Shuji Doshita:
Unsupervised speaker normalization by speaker Markov model converter for speaker-independent speech recognition. EUROSPEECH 1991: 1111-1114 - 1990
- [c1]Tatsuya Kawahara, Toru Ogawa, Shigeyoshi Kitazawa, Shuji Doshita:
Phoneme recognition by combining Bayesian linear discriminations of selected pairs of classes. ICSLP 1990: 229-232