Tsubasa Ochiai
2020 – today
- 2024
- [j8] Tsubasa Ochiai, Kazuma Iwamoto, Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki, Shigeru Katagiri: Rethinking Processing Distortions: Disentangling the Impact of Speech Enhancement Errors on Speech Recognition Performance. IEEE ACM Trans. Audio Speech Lang. Process. 32: 3589-3602 (2024)
- [c49] Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldrich Plchot, Takanori Ashihara, Shoko Araki, Jan Cernocký: Probing Self-Supervised Learning Models With Target Speech Extraction. ICASSP Workshops 2024: 535-539
- [c48] Keigo Wakayama, Tsubasa Ochiai, Marc Delcroix, Masahiro Yasuda, Shoichiro Saito, Shoko Araki, Akira Nakayama: Online Target Sound Extraction with Knowledge Distillation from Partially Non-Causal Teacher. ICASSP 2024: 561-565
- [c47] Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldrich Plchot, Shoko Araki, Jan Cernocký: Target Speech Extraction with Pre-Trained Self-Supervised Learning Models. ICASSP 2024: 10421-10425
- [c46] Hanako Segawa, Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Rintaro Ikeshita, Shoko Araki, Takeshi Yamada, Shoji Makino: Neural Network-Based Virtual Microphone Estimation with Virtual Microphone and Beamformer-Level Multi-Task Loss. ICASSP 2024: 11021-11025
- [c45] Kazuma Iwamoto, Tsubasa Ochiai, Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki, Shigeru Katagiri: How Does End-To-End Speech Recognition Training Impact Speech Enhancement Artifacts? ICASSP 2024: 11031-11035
- [c44] Carlos Hernandez-Olivan, Marc Delcroix, Tsubasa Ochiai, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki: Interaural Time Difference Loss for Binaural Target Sound Extraction. IWAENC 2024: 210-214
- [i36] Marvin Tammen, Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Shoko Araki, Simon Doclo: Array Geometry-Robust Attention-Based Neural Beamformer for Moving Speakers. CoRR abs/2402.03058 (2024)
- [i35] Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldrich Plchot, Shoko Araki, Jan Cernocký: Target Speech Extraction with Pre-trained Self-supervised Learning Models. CoRR abs/2402.13199 (2024)
- [i34] Junyi Peng, Marc Delcroix, Tsubasa Ochiai, Oldrich Plchot, Takanori Ashihara, Shoko Araki, Jan Cernocký: Probing Self-supervised Learning Models with Target Speech Extraction. CoRR abs/2402.13200 (2024)
- [i33] Tsubasa Ochiai, Kazuma Iwamoto, Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki, Shigeru Katagiri: Rethinking Processing Distortions: Disentangling the Impact of Speech Enhancement Errors on Speech Recognition Performance. CoRR abs/2404.14860 (2024)
- [i32] Hiroshi Sato, Takafumi Moriya, Masato Mimura, Shota Horiguchi, Tsubasa Ochiai, Takanori Ashihara, Atsushi Ando, Kentaro Shinayama, Marc Delcroix: SpeakerBeam-SS: Real-time Target Speaker Extraction with Lightweight Conv-TasNet and State Space Modeling. CoRR abs/2407.01857 (2024)
- [i31] Carlos Hernandez-Olivan, Marc Delcroix, Tsubasa Ochiai, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki: Interaural time difference loss for binaural target sound extraction. CoRR abs/2408.00344 (2024)
- [i30] Carlos Hernandez-Olivan, Marc Delcroix, Tsubasa Ochiai, Daisuke Niizumi, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki: SoundBeam meets M2D: Target Sound Extraction with Audio Foundation Model. CoRR abs/2409.12528 (2024)
- [i29] Takanori Ashihara, Takafumi Moriya, Shota Horiguchi, Junyi Peng, Tsubasa Ochiai, Marc Delcroix, Kohei Matsuura, Hiroshi Sato: Investigation of Speaker Representation for Target-Speaker Speech Processing. CoRR abs/2410.11243 (2024)
- 2023
- [j7] Takafumi Moriya, Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takahiro Shinozaki: Streaming End-to-End Target-Speaker Automatic Speech Recognition and Activity Detection. IEEE Access 11: 13906-13917 (2023)
- [j6] Katerina Zmolíková, Marc Delcroix, Tsubasa Ochiai, Keisuke Kinoshita, Jan Cernocký, Dong Yu: Neural Target Speech Extraction: An overview. IEEE Signal Process. Mag. 40(3): 8-29 (2023)
- [j5] Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, Keisuke Kinoshita, Yasunori Ohishi, Shoko Araki: SoundBeam: Target Sound Extraction Conditioned on Sound-Class Labels and Enrollment Clues for Increased Performance and Continuous Learning. IEEE ACM Trans. Audio Speech Lang. Process. 31: 121-136 (2023)
- [j4] Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Shoko Araki: Mask-Based Neural Beamforming for Moving Speakers With Self-Attention-Based Tracking. IEEE ACM Trans. Audio Speech Lang. Process. 31: 835-848 (2023)
- [c43] Hiroshi Sato, Ryo Masumura, Tsubasa Ochiai, Marc Delcroix, Takafumi Moriya, Takanori Ashihara, Kentaro Shinayama, Saki Mizuno, Mana Ihori, Tomohiro Tanaka, Nobukatsu Hojo: Downstream Task Agnostic Speech Enhancement with Self-Supervised Representation Loss. INTERSPEECH 2023: 854-858
- [c42] Takafumi Moriya, Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takanori Ashihara, Kohei Matsuura, Tomohiro Tanaka, Ryo Masumura, Atsunori Ogawa, Taichi Asami: Knowledge Distillation for Neural Transducer-based Target-Speaker ASR: Exploiting Parallel Mixture/Single-Talker Speech Data. INTERSPEECH 2023: 899-903
- [c41] Shoko Araki, Ayako Yamamoto, Tsubasa Ochiai, Kenichi Arai, Atsunori Ogawa, Tomohiro Nakatani, Toshio Irino: Impact of Residual Noise and Artifacts in Speech Enhancement Errors on Intelligibility of Human and Machine. INTERSPEECH 2023: 2503-2507
- [i28] Katerina Zmolíková, Marc Delcroix, Tsubasa Ochiai, Keisuke Kinoshita, Jan Cernocký, Dong Yu: Neural Target Speech Extraction: An Overview. CoRR abs/2301.13341 (2023)
- [i27] Hiroshi Sato, Ryo Masumura, Tsubasa Ochiai, Marc Delcroix, Takafumi Moriya, Takanori Ashihara, Kentaro Shinayama, Saki Mizuno, Mana Ihori, Tomohiro Tanaka, Nobukatsu Hojo: Downstream Task Agnostic Speech Enhancement with Self-Supervised Representation Loss. CoRR abs/2305.14723 (2023)
- 2022
- [c40] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Naoyuki Kamo, Takafumi Moriya: Learning to Enhance or Not: Neural Network-Based Switching of Enhanced and Observed Signals for Overlapping Speech Recognition. ICASSP 2022: 6287-6291
- [c39] Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Katerina Zmolíková, Hiroshi Sato, Tomohiro Nakatani: Listen only to me! How well can target speech extraction handle false alarms? INTERSPEECH 2022: 216-220
- [c38] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Takafumi Moriya, Naoki Makishima, Mana Ihori, Tomohiro Tanaka, Ryo Masumura: Strategies to Improve Robustness of Target Speech Extraction to Enrollment Variations. INTERSPEECH 2022: 996-1000
- [c37] Takafumi Moriya, Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takahiro Shinozaki: Streaming Target-Speaker ASR with Neural Transducer. INTERSPEECH 2022: 2673-2677
- [c36] Martin Kocour, Katerina Zmolíková, Lucas Ondel, Jan Svec, Marc Delcroix, Tsubasa Ochiai, Lukás Burget, Jan Cernocký: Revisiting joint decoding based multi-talker speech recognition with DNN acoustic model. INTERSPEECH 2022: 4955-4959
- [c35] Kazuma Iwamoto, Tsubasa Ochiai, Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki, Shigeru Katagiri: How bad are artifacts?: Analyzing the impact of speech enhancement errors on ASR. INTERSPEECH 2022: 5418-5422
- [c34] Jan Svec, Katerina Zmolíková, Martin Kocour, Marc Delcroix, Tsubasa Ochiai, Ladislav Mosner, Jan Honza Cernocký: Analysis of Impact of Emotions on Target Speech Extraction and Speech Separation. IWAENC 2022: 1-5
- [c33] Yasunori Ohishi, Marc Delcroix, Tsubasa Ochiai, Shoko Araki, Daiki Takeuchi, Daisuke Niizumi, Akisato Kimura, Noboru Harada, Kunio Kashino: ConceptBeam: Concept Driven Target Speech Extraction. ACM Multimedia 2022: 4252-4260
- [i26] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Naoyuki Kamo, Takafumi Moriya: Learning to Enhance or Not: Neural Network-Based Switching of Enhanced and Observed Signals for Overlapping Speech Recognition. CoRR abs/2201.03881 (2022)
- [i25] Kazuma Iwamoto, Tsubasa Ochiai, Marc Delcroix, Rintaro Ikeshita, Hiroshi Sato, Shoko Araki, Shigeru Katagiri: How Bad Are Artifacts?: Analyzing the Impact of Speech Enhancement Errors on ASR. CoRR abs/2201.06685 (2022)
- [i24] Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, Keisuke Kinoshita, Yasunori Ohishi, Shoko Araki: SoundBeam: Target sound extraction conditioned on sound-class labels and enrollment clues for increased performance and continuous learning. CoRR abs/2204.03895 (2022)
- [i23] Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Katerina Zmolíková, Hiroshi Sato, Tomohiro Nakatani: Listen only to me! How well can target speech extraction handle false alarms? CoRR abs/2204.04811 (2022)
- [i22] Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Shoko Araki: Mask-based Neural Beamforming for Moving Speakers with Self-Attention-based Tracking. CoRR abs/2205.03568 (2022)
- [i21] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Takafumi Moriya, Naoki Makishima, Mana Ihori, Tomohiro Tanaka, Ryo Masumura: Strategies to Improve Robustness of Target Speech Extraction to Enrollment Variations. CoRR abs/2206.08174 (2022)
- [i20] Yasunori Ohishi, Marc Delcroix, Tsubasa Ochiai, Shoko Araki, Daiki Takeuchi, Daisuke Niizumi, Akisato Kimura, Noboru Harada, Kunio Kashino: ConceptBeam: Concept Driven Target Speech Extraction. CoRR abs/2207.11964 (2022)
- [i19] Jan Svec, Katerina Zmolíková, Martin Kocour, Marc Delcroix, Tsubasa Ochiai, Ladislav Mosner, Jan Cernocký: Analysis of impact of emotions on target speech extraction and speech separation. CoRR abs/2208.07091 (2022)
- [i18] Takafumi Moriya, Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takahiro Shinozaki: Streaming Target-Speaker ASR with Neural Transducer. CoRR abs/2209.04175 (2022)
- 2021
- [c32] Julio Wissing, Benedikt T. Boenninghoff, Dorothea Kolossa, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Christopher Schymura: Data Fusion for Audiovisual Speaker Localization: Extending Dynamic Stream Weights to the Spatial Domain. ICASSP 2021: 4705-4709
- [c31] Takafumi Moriya, Takanori Ashihara, Tomohiro Tanaka, Tsubasa Ochiai, Hiroshi Sato, Atsushi Ando, Yusuke Ijima, Ryo Masumura, Yusuke Shinohara: Simpleflat: A Simple Whole-Network Pre-Training Approach for RNN Transducer-Based End-to-End Speech Recognition. ICASSP 2021: 5664-5668
- [c30] Marc Delcroix, Katerina Zmolíková, Tsubasa Ochiai, Keisuke Kinoshita, Tomohiro Nakatani: Speaker Activity Driven Neural Speech Extraction. ICASSP 2021: 6099-6103
- [c29] Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Rintaro Ikeshita, Keisuke Kinoshita, Shoko Araki: Neural Network-Based Virtual Microphone Estimator. ICASSP 2021: 6114-6118
- [c28] Wangyou Zhang, Christoph Böddeker, Shinji Watanabe, Tomohiro Nakatani, Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Naoyuki Kamo, Reinhold Haeb-Umbach, Yanmin Qian: End-to-End Dereverberation, Beamforming, and Speech Recognition with Improved Numerical Stability and Advanced Frontend. ICASSP 2021: 6898-6902
- [c27] Christoph Böddeker, Wangyou Zhang, Tomohiro Nakatani, Keisuke Kinoshita, Tsubasa Ochiai, Marc Delcroix, Naoyuki Kamo, Yanmin Qian, Reinhold Haeb-Umbach: Convolutive Transfer Function Invariant SDR Training Criteria for Multi-Channel Reverberant Speech Separation. ICASSP 2021: 8428-8432
- [c26] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Takafumi Moriya, Naoyuki Kamo: Should We Always Separate?: Switching Between Enhanced and Observed Signals for Overlapping Speech Recognition. Interspeech 2021: 1149-1153
- [c25] Takafumi Moriya, Tomohiro Tanaka, Takanori Ashihara, Tsubasa Ochiai, Hiroshi Sato, Atsushi Ando, Ryo Masumura, Marc Delcroix, Taichi Asami: Streaming End-to-End Speech Recognition for Hybrid RNN-T/Attention Architecture. Interspeech 2021: 1787-1791
- [c24] Christopher Schymura, Benedikt T. Bönninghoff, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa: PILOT: Introducing Transformers for Probabilistic Sound Event Localization. Interspeech 2021: 2117-2121
- [c23] Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, Keisuke Kinoshita, Shoko Araki: Few-Shot Learning of New Sound Classes for Target Sound Extraction. Interspeech 2021: 3500-3504
- [c22] Hiroshi Sato, Tsubasa Ochiai, Keisuke Kinoshita, Marc Delcroix, Tomohiro Nakatani, Shoko Araki: Multimodal Attention Fusion for Target Speaker Extraction. SLT 2021: 778-784
- [i17] Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani, Rintaro Ikeshita, Keisuke Kinoshita, Shoko Araki: Neural Network-based Virtual Microphone Estimator. CoRR abs/2101.04315 (2021)
- [i16] Marc Delcroix, Katerina Zmolíková, Tsubasa Ochiai, Keisuke Kinoshita, Tomohiro Nakatani: Speaker activity driven neural speech extraction. CoRR abs/2101.05516 (2021)
- [i15] Hiroshi Sato, Tsubasa Ochiai, Keisuke Kinoshita, Marc Delcroix, Tomohiro Nakatani, Shoko Araki: Multimodal Attention Fusion for Target Speaker Extraction. CoRR abs/2102.01326 (2021)
- [i14] Wangyou Zhang, Christoph Böddeker, Shinji Watanabe, Tomohiro Nakatani, Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Naoyuki Kamo, Reinhold Haeb-Umbach, Yanmin Qian: End-to-End Dereverberation, Beamforming, and Speech Recognition with Improved Numerical Stability and Advanced Frontend. CoRR abs/2102.11525 (2021)
- [i13] Julio Wissing, Benedikt T. Boenninghoff, Dorothea Kolossa, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Christopher Schymura: Data Fusion for Audiovisual Speaker Localization: Extending Dynamic Stream Weights to the Spatial Domain. CoRR abs/2102.11588 (2021)
- [i12] Christopher Schymura, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa: Exploiting Attention-based Sequence-to-Sequence Architectures for Sound Event Localization. CoRR abs/2103.00417 (2021)
- [i11] Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Takafumi Moriya, Naoyuki Kamo: Should We Always Separate?: Switching Between Enhanced and Observed Signals for Overlapping Speech Recognition. CoRR abs/2106.00949 (2021)
- [i10] Christopher Schymura, Benedikt T. Bönninghoff, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa: PILOT: Introducing Transformers for Probabilistic Sound Event Localization. CoRR abs/2106.03903 (2021)
- [i9] Marc Delcroix, Jorge Bennasar Vázquez, Tsubasa Ochiai, Keisuke Kinoshita, Shoko Araki: Few-shot learning of new sound classes for target sound extraction. CoRR abs/2106.07144 (2021)
- [i8] Martin Kocour, Katerina Zmolíková, Lucas Ondel, Jan Svec, Marc Delcroix, Tsubasa Ochiai, Lukás Burget, Jan Cernocký: Revisiting joint decoding based multi-talker speech recognition with DNN acoustic model. CoRR abs/2111.00009 (2021)
- 2020
- [c21] Christopher Schymura, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa: Exploiting Attention-based Sequence-to-Sequence Architectures for Sound Event Localization. EUSIPCO 2020: 231-235
- [c20] Christopher Schymura, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa: A Dynamic Stream Weight Backprop Kalman Filter for Audiovisual Speaker Tracking. ICASSP 2020: 581-585
- [c19] Marc Delcroix, Tsubasa Ochiai, Katerina Zmolíková, Keisuke Kinoshita, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki: Improving Speaker Discrimination of Target Speech Extraction With Time-Domain Speakerbeam. ICASSP 2020: 691-695
- [c18] Tsubasa Ochiai, Marc Delcroix, Rintaro Ikeshita, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki: Beam-TasNet: Time-domain Audio Separation Network Meets Frequency-domain Beamformer. ICASSP 2020: 6384-6388
- [c17] Tomohiro Nakatani, Riki Takahashi, Tsubasa Ochiai, Keisuke Kinoshita, Rintaro Ikeshita, Marc Delcroix, Shoko Araki: DNN-supported Mask-based Convolutional Beamforming for Simultaneous Denoising, Dereverberation, and Source Separation. ICASSP 2020: 6399-6403
- [c16] Keisuke Kinoshita, Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani: Improving Noise Robust Automatic Speech Recognition with Single-Channel Time-Domain Enhancement Network. ICASSP 2020: 7009-7013
- [c15] Takafumi Moriya, Tsubasa Ochiai, Shigeki Karita, Hiroshi Sato, Tomohiro Tanaka, Takanori Ashihara, Ryo Masumura, Yusuke Shinohara, Marc Delcroix: Self-Distillation for Improving CTC-Transformer-Based ASR Systems. INTERSPEECH 2020: 546-550
- [c14] Tsubasa Ochiai, Marc Delcroix, Yuma Koizumi, Hiroaki Ito, Keisuke Kinoshita, Shoko Araki: Listen to What You Want: Neural Network-Based Universal Sound Selector. INTERSPEECH 2020: 1441-1445
- [i7] Marc Delcroix, Tsubasa Ochiai, Katerina Zmolíková, Keisuke Kinoshita, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki: Improving speaker discrimination of target speech extraction with time-domain SpeakerBeam. CoRR abs/2001.08378 (2020)
- [i6] Keisuke Kinoshita, Tsubasa Ochiai, Marc Delcroix, Tomohiro Nakatani: Improving noise robust automatic speech recognition with single-channel time-domain enhancement network. CoRR abs/2003.03998 (2020)
- [i5] Tsubasa Ochiai, Marc Delcroix, Yuma Koizumi, Hiroaki Ito, Keisuke Kinoshita, Shoko Araki: Listen to What You Want: Neural Network-based Universal Sound Selector. CoRR abs/2006.05712 (2020)
- [i4] Christoph Böddeker, Wangyou Zhang, Tomohiro Nakatani, Keisuke Kinoshita, Tsubasa Ochiai, Marc Delcroix, Naoyuki Kamo, Yanmin Qian, Shinji Watanabe, Reinhold Haeb-Umbach: Convolutive Transfer Function Invariant SDR training criteria for Multi-Channel Reverberant Speech Separation. CoRR abs/2011.15003 (2020)
2010 – 2019
- 2019
- [j3] Katerina Zmolíková, Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Tomohiro Nakatani, Lukás Burget, Jan Cernocký: SpeakerBeam: Speaker Aware Neural Network for Target Speaker Extraction in Speech Mixtures. IEEE J. Sel. Top. Signal Process. 13(4): 800-814 (2019)
- [c13] Marc Delcroix, Katerina Zmolíková, Tsubasa Ochiai, Keisuke Kinoshita, Shoko Araki, Tomohiro Nakatani: Compact Network for Speakerbeam Target Speaker Extraction. ICASSP 2019: 6965-6969
- [c12] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Tomohiro Nakatani: A Unified Framework for Neural Speech Separation and Extraction. ICASSP 2019: 6975-6979
- [c11] Marc Delcroix, Shinji Watanabe, Tsubasa Ochiai, Keisuke Kinoshita, Shigeki Karita, Atsunori Ogawa, Tomohiro Nakatani: End-to-End SpeakerBeam for Single Channel Target Speech Recognition. INTERSPEECH 2019: 451-455
- [c10] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Tomohiro Nakatani: Multimodal SpeakerBeam: Single Channel Target Speech Extraction with Audio-Visual Speaker Clues. INTERSPEECH 2019: 2718-2722
- 2018
- [c9] Tsubasa Ochiai, Shinji Watanabe, Shigeru Katagiri, Takaaki Hori, John R. Hershey: Speaker Adaptation for Multichannel End-to-End Speech Recognition. ICASSP 2018: 6707-6711
- [c8] Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, Tsubasa Ochiai: ESPnet: End-to-End Speech Processing Toolkit. INTERSPEECH 2018: 2207-2211
- [i3] Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, Tsubasa Ochiai: ESPnet: End-to-End Speech Processing Toolkit. CoRR abs/1804.00015 (2018)
- 2017
- [j2] Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey, Xiong Xiao: Unified Architecture for Multichannel End-to-End Speech Recognition With Neural Beamforming. IEEE J. Sel. Top. Signal Process. 11(8): 1274-1288 (2017)
- [c7] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Taichi Asami, Shigeru Katagiri, Tomohiro Nakatani: Cumulative moving averaged bottleneck speaker vectors for online speaker adaptation of CNN-based acoustic models. ICASSP 2017: 5175-5179
- [c6] Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Shigeru Katagiri: Automatic node selection for Deep Neural Networks using Group Lasso regularization. ICASSP 2017: 5485-5489
- [c5] Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey: Multichannel End-to-end Speech Recognition. ICML 2017: 2632-2641
- [c4] Tsubasa Ochiai, Shinji Watanabe, Shigeru Katagiri: Does speech enhancement work with end-to-end ASR objectives?: Experimental analysis of multichannel end-to-end ASR. MLSP 2017: 1-6
- [i2] Tsubasa Ochiai, Shinji Watanabe, Takaaki Hori, John R. Hershey: Multichannel End-to-end Speech Recognition. CoRR abs/1703.04783 (2017)
- 2016
- [j1] Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Xugang Lu, Chiori Hori, Hisashi Kawai, Shigeru Katagiri: Speaker Adaptive Training Localizing Speaker Modules in DNN for Hybrid DNN-HMM Speech Recognizers. IEICE Trans. Inf. Syst. 99-D(10): 2431-2443 (2016)
- [c3] Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Xugang Lu, Hisashi Kawai, Shigeru Katagiri: Bottleneck linear transformation network adaptation for speaker adaptive training-based hybrid DNN-HMM speech recognizer. ICASSP 2016: 5015-5019
- [i1] Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Shigeru Katagiri: Automatic Node Selection for Deep Neural Networks using Group Lasso Regularization. CoRR abs/1611.05527 (2016)
- 2015
- [c2] Tsubasa Ochiai, Shigeki Matsuda, Hideyuki Watanabe, Xugang Lu, Chiori Hori, Shigeru Katagiri: Speaker adaptive training for deep neural networks embedding linear transformation networks. ICASSP 2015: 4605-4609
- 2014
- [c1] Tsubasa Ochiai, Shigeki Matsuda, Xugang Lu, Chiori Hori, Shigeru Katagiri: Speaker Adaptive Training using Deep Neural Networks. ICASSP 2014: 6349-6353