Ryo Ishii
2020 – today
- 2024
- [j12] Koya Ito, Yoko Ishii, Ryo Ishii, Shin'ichiro Eitoku, Kazuhiro Otsuka: Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors. IEEE Access 12: 96769-96782 (2024)
- [j11] Yukiko I. Nakano, Fumio Nihei, Ryo Ishii, Ryuichiro Higashinaka: Selecting Iconic Gesture Forms Based on Typical Entity Images. J. Inf. Process. 32: 196-205 (2024)
- [c60] Takato Hayashi, Ryusei Kimura, Ryo Ishii, Fumio Nihei, Atsushi Fukayama, Shogo Okada: Rapport Prediction Using Pairwise Learning in Dyadic Conversations Among Strangers and Among Friends. HCI (22) 2024: 17-28
- [c59] Kenta Hama, Atsushi Otsuka, Ryo Ishii: Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model. HCI (20) 2024: 338-346
- [c58] Ryo Ishii, Shin'ichiro Eitoku, Shohei Matsuo, Motohiro Makiguchi, Ayami Hoshi, Louis-Philippe Morency: Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style. ICMI Companion 2024: 88-90
- [i4] Atsushi Otsuka, Kazuya Matsuo, Ryo Ishii, Narichika Nomoto, Hiroaki Sugiyama: User-Specific Dialogue Generation with User Profile-Aware Pre-Training Model and Parameter-Efficient Fine-Tuning. CoRR abs/2409.00887 (2024)
- [i3] Takato Hayashi, Ryusei Kimura, Ryo Ishii, Shogo Okada: Investigating Role of Big Five Personality Traits in Audio-Visual Rapport Estimation. CoRR abs/2410.11861 (2024)
- 2023
- [j10] Takato Hayashi, Candy Olivia Mawalim, Ryo Ishii, Akira Morikawa, Atsushi Fukayama, Takao Nakamura, Shogo Okada: A Ranking Model for Evaluation of Conversation Partners Based on Rapport Levels. IEEE Access 11: 73024-73035 (2023)
- [j9] Atsushi Ito, Yukiko I. Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Estimating and Visualizing Persuasiveness of Participants in Group Discussions. J. Inf. Process. 31: 34-44 (2023)
- [c57] Atsushi Otsuka, Kenta Hama, Narichika Nomoto, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Learning User Embeddings with Generating Context of Posted Social Network Service Texts. HCI (15) 2023: 106-115
- [c56] Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Atsushi Fukayama, Takao Nakamura: Whether Contribution of Features Differ Between Video-Mediated and In-Person Meetings in Important Utterance Estimation. ICASSP 2023: 1-5
- [c55] Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency: Continual Learning for Personalized Co-Speech Gesture Generation. ICCV 2023: 20836-20846
- [c54] Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka: Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features. ICMI 2023: 336-344
- [c53] Ryo Ishii, Fumio Nihei, Yoko Ishii, Atsushi Otsuka, Kazuya Matsuo, Narichika Nomoto, Atsushi Fukayama, Takao Nakamura: Prediction of Love-Like Scores After Speed Dating Based on Pre-obtainable Personal Characteristic Information. INTERACT (4) 2023: 551-556
- [c52] Shunichi Kinoshita, Toshiki Onishi, Naoki Azuma, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: A Study of Prediction of Listener's Comprehension Based on Multimodal Information. IVA 2023: 30:1-30:4
- [c51] Ryo Ishii, Akira Morikawa, Shin'ichiro Eitoku, Atsushi Fukayama, Takao Nakamura: How Far ahead Can Model Predict Gesture Pose from Speech and Spoken Text? IVA 2023: 39:1-39:3
- [c50] Toshiki Onishi, Naoki Azuma, Shunichi Kinoshita, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Prediction of Various Backchannel Utterances Based on Multimodal Information. IVA 2023: 47:1-47:4
- [c49] Chihiro Takayama, Shinichirou Eitoku, Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Atsushi Fukayama: Investigating the effect of video extraction summarization techniques on the accuracy of impression conveyance in group dialogue. OZCHI 2023: 140-147
- 2022
- [j8] Toshiki Onishi, Arisa Yamauchi, Asahi Ogushi, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Modeling Japanese Praising Behavior by Analyzing Audio and Visual Behaviors. Frontiers Comput. Sci. 4: 815128 (2022)
- [c48] Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Kyosuke Nishida, Ryo Masumura, Atsushi Fukayama, Takao Nakamura: Dialogue Acts Aided Important Utterance Detection Based on Multiparty and Multimodal Information. INTERSPEECH 2022: 1086-1090
- [c47] Asahi Ogushi, Toshiki Onishi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Analysis of praising skills focusing on utterance contents. INTERSPEECH 2022: 2743-2747
- [c46] Atsushi Ito, Yukiko I. Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Predicting Persuasiveness of Participants in Multiparty Conversations. IUI Companion 2022: 85-88
- [c45] Akira Morikawa, Ryo Ishii, Hajime Noto, Atsushi Fukayama, Takao Nakamura: Determining most suitable listener backchannel type for speaker's utterance. IVA 2022: 26:1-26:3
- [c44] Toshiki Onishi, Asahi Ogushi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: A Comparison of Praising Skills in Face-to-Face and Remote Dialogues. LREC 2022: 5805-5812
- 2021
- [j7] Bo Yang, Ryo Ishii, Zheng Wang, Tsutomu Kaizuka, Toshiyuki Sugimachi, Toshiaki Sakurai, Tetsuo Maki, Kimihiko Nakano: Evaluation of Driver Assistance System Presenting Information of Other Vehicles through Peripheral Vision at Unsignalized Intersections. Int. J. Intell. Transp. Syst. Res. 19(1): 230-239 (2021)
- [j6] Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono: Methods for Efficiently Constructing Text-dialogue-agent System using Existing Anime Characters. J. Inf. Process. 29: 30-44 (2021)
- [c43] Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas B. Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency: Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. ACL/IJCNLP (1) 2021: 4170-4187
- [c42] Chihiro Takayama, Mitsuhiro Goto, Shinichirou Eitoku, Ryo Ishii, Hajime Noto, Shiro Ozawa, Takao Nakamura: How People Distinguish Individuals from their Movements: Toward the Realization of Personalized Agents. HAI 2021: 66-74
- [c41] Ryo Ishii, Shiro Kumano, Ryuichiro Higashinaka, Shiro Ozawa, Tetsuya Kinebuchi: Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (41) 2021: 44-57
- [c40] Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency: Multimodal and Multitask Approach to Listener's Backchannel Prediction: Can Prediction of Turn-changing and Turn-management Willingness Improve Backchannel Modeling? IVA 2021: 131-138
- [i2] Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas B. Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency: Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. CoRR abs/2106.13213 (2021)
- 2020
- [c39] Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, Louis-Philippe Morency: No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures. EMNLP (Findings) 2020: 1884-1895
- [c38] Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono: Methods of Efficiently Constructing Text-Dialogue-Agent System Using Existing Anime Character. HCI (45) 2020: 328-347
- [c37] Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata: Analyzing Nonverbal Behaviors along with Praising. ICMI 2020: 609-613
- [c36] Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency: Can Prediction of Turn-management Willingness Improve Turn-changing Modeling? IVA 2020: 28:1-28:8
- [c35] Ryo Ishii, Chaitanya Ahuja, Yukiko I. Nakano, Louis-Philippe Morency: Impact of Personality on Nonverbal Behavior Generation. IVA 2020: 29:1-29:8
- [i1] Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas B. Allen, Louis-Philippe Morency: Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study. CoRR abs/2012.02359 (2020)
2010 – 2019
- 2019
- [j5] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation. Multimodal Technol. Interact. 3(4): 70 (2019)
- [c34] Ryo Masumura, Mana Ihori, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Takanobu Oba, Ryuichiro Higashinaka: Improving Speech-Based End-of-Turn Detection Via Cross-Modal Representation Learning with Punctuated Text Data. ASRU 2019: 1062-1069
- [c33] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (14) 2019: 45-53
- [c32] Fumio Nihei, Yukiko I. Nakano, Ryuichiro Higashinaka, Ryo Ishii: Determining Iconic Gesture Forms based on Entity Image Representation. ICMI 2019: 419-425
- [c31] Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Head-Nod Generation Using Utterance Text Considering Personality Traits. IWSDS 2019: 299-306
- 2018
- [c30] Ryo Ishii, Ryuichiro Higashinaka, Kyosuke Nishida, Taichi Katayama, Nozomi Kobayashi, Junji Tomita: Automatically Generating Head Nods with Linguistic Information. HCI (14) 2018: 383-391
- [c29] Takahiro Matsumoto, Mitsuhiro Goto, Ryo Ishii, Tomoki Watanabe, Tomohiro Yamada, Michita Imai: Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective. HRI 2018: 270-278
- [c28] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. ICMI 2018: 31-39
- [c27] Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Generating Body Motions using Spoken Language in Dialogue. IVA 2018: 87-92
- [c26] Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Generation System of Virtual Agent's Motion using Natural Language. IVA 2018: 357-358
- [c25] Ryo Ishii, Ryuichiro Higashinaka, Junji Tomita: Predicting Nods by using Dialogue Acts in Dialogue. LREC 2018
- [c24] Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Generation of Head Nods using Utterance Texts. RO-MAN 2018: 1143-1149
- [c23] Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono: Neural Dialogue Context Online End-of-Turn Detection. SIGDIAL Conference 2018: 224-228
- 2017
- [j4] Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato: Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations. IEEE Trans. Multim. 19(1): 107-122 (2017)
- [c22] Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Computational model of idiosyncratic perception of others' emotions. ACII 2017: 42-49
- [c21] Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Comparing empathy perceived by interlocutors in multiparty conversation and external observers. ACII 2017: 50-57
- [c20] Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings. HAI 2017: 181-187
- [c19] Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Analyzing gaze behavior during turn-taking for estimating empathy skill level. ICMI 2017: 365-373
- [c18] Ryo Masumura, Taichi Asami, Hirokazu Masataki, Ryo Ishii, Ryuichiro Higashinaka: Online End-of-Turn Detection from Speech Based on Stacked Time-Asynchronous Sequential Networks. INTERSPEECH 2017: 1661-1665
- 2016
- [j3] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(1): 4:1-4:31 (2016)
- [j2] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6(2): 20:1-20:20 (2016)
- [c17] Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. ICMI 2016: 209-216
- 2015
- [c16] Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato: Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision. FG 2015: 1-8
- [c15] Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Predicting next speaker based on head movement in multi-party meetings. ICASSP 2015: 2319-2323
- [c14] Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. ICMI 2015: 99-106
- [c13] Ryo Ishii, Shiro Ozawa, Akira Kojima, Kazuhiro Otsuka, Yuki Hayashi, Yukiko I. Nakano: Design and Evaluation of Mirror Interface MIOSS to Overlay Remote 3D Spaces. INTERACT (4) 2015: 319-326
- 2014
- [c12] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings. ICASSP 2014: 694-698
- [c11] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis of Timing Structure of Eye Contact in Turn-changing. GazeIn@ICMI 2014: 15-20
- [c10] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. ICMI 2014: 18-25
- 2013
- [j1] Ryo Ishii, Yukiko I. Nakano, Toyoaki Nishida: Gaze awareness in conversational agents: Estimating a user's conversational engagement from eye gaze. ACM Trans. Interact. Intell. Syst. 3(2): 11:1-11:25 (2013)
- [c9] Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Ryo Ishii, Junji Yamato: Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality. ACII 2013: 588-593
- [c8] Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato: Predicting next speaker and timing from gaze transition patterns in multi-party meetings. ICMI 2013: 79-86
- [c7] Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato: MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. ICMI 2013: 389-396
- [p1] Ryo Ishii, Ryota Ooko, Yukiko I. Nakano, Toyoaki Nishida: Effectiveness of Gaze-Based Engagement Estimation in Conversational Agents. Eye Gaze in Intelligent User Interfaces 2013: 85-110
- 2011
- [c6] Ryo Ishii, Shiro Ozawa, Takafumi Mukouchi, Norihiko Matsuura: MoPaCo: Pseudo 3D Video Communication System. HCI (12) 2011: 131-140
- [c5] Ryo Ishii, Shiro Ozawa, Harumi Kawamura, Akira Kojima: MoPaCo: High telepresence video communication system using motion parallax with monocular camera. ICCV Workshops 2011: 463-464
- [c4] Ryota Ooko, Ryo Ishii, Yukiko I. Nakano: Estimating a User's Conversational Engagement Based on Head Pose Information. IVA 2011: 262-268
- 2010
- [c3] Yukiko I. Nakano, Ryo Ishii: Estimating user's engagement from eye-gaze behaviors in human-agent conversations. IUI 2010: 139-148
2000 – 2009
- 2008
- [c2] Ryo Ishii, Yukiko I. Nakano: Estimating User's Conversational Engagement Based on Gaze Behaviors. IVA 2008: 200-207
- 2006
- [c1] Ryo Ishii, Toshimitsu Miyajima, Kinya Fujita, Yukiko I. Nakano: Avatar's Gaze Control to Facilitate Conversational Turn-Taking in Virtual-Space Multi-user Voice Chat System. IVA 2006: 458
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt in for them to become active. All settings here are stored as cookies in your web browser. For more information, see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
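As a minimal illustration only (not dblp's actual code), a browser-side lookup of this kind against Unpaywall's public v2 REST endpoint could look like the following TypeScript sketch; the DOI and contact email are placeholders:

```typescript
// Sketch: ask Unpaywall for the best open access location of one article.
// Placeholder DOI and email; the v2 API expects an ?email= parameter so
// callers identify themselves.
async function findOpenAccessLink(doi: string): Promise<string | null> {
  const resp = await fetch(`https://api.unpaywall.org/v2/${doi}?email=you@example.org`);
  if (!resp.ok) return null; // unknown DOI or service unavailable
  const data = await resp.json();
  // best_oa_location is null when no open access copy is known.
  return data.best_oa_location?.url ?? null;
}

findOpenAccessLink("10.1234/placeholder").then((url) => console.log(url ?? "no OA copy"));
```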
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
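For illustration, checking the Internet Archive's public availability endpoint for a dead link might look like this sketch (the target URL is a placeholder):

```typescript
// Sketch: query the Wayback Machine availability API for the closest archived
// snapshot of a URL that no longer resolves.
async function findArchivedCopy(deadUrl: string): Promise<string | null> {
  const resp = await fetch(
    `https://archive.org/wayback/available?url=${encodeURIComponent(deadUrl)}`
  );
  if (!resp.ok) return null;
  const data = await resp.json();
  const closest = data.archived_snapshots?.closest;
  return closest?.available ? closest.url : null; // a web.archive.org/web/... link
}
```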
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
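As an illustrative sketch only, reference data could be pulled from two of these services roughly as follows (placeholder DOI; Crossref's /works endpoint and the OpenCitations COCI references endpoint are both public and keyless):

```typescript
// Sketch: load reference data for one DOI from Crossref and OpenCitations (COCI).
async function loadReferences(doi: string) {
  const crossref = await fetch(`https://api.crossref.org/works/${doi}`)
    .then((r) => (r.ok ? r.json() : null));
  const coci = await fetch(`https://opencitations.net/index/coci/api/v1/references/${doi}`)
    .then((r) => (r.ok ? r.json() : null));
  return {
    // Crossref: structured reference entries, when the publisher deposited them.
    crossrefRefs: crossref?.message?.reference ?? [],
    // COCI: rows of citing/cited DOI pairs; keep the cited DOIs.
    cociRefs: ((coci ?? []) as { cited: string }[]).map((row) => row.cited),
  };
}
```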
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
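Purely as an illustration, the citing-articles lookup against the OpenCitations COCI endpoint could be sketched like this (placeholder DOI):

```typescript
// Sketch: list DOIs of works that cite the given DOI, via OpenCitations COCI.
async function loadCitingDois(doi: string): Promise<string[]> {
  const resp = await fetch(`https://opencitations.net/index/coci/api/v1/citations/${doi}`);
  if (!resp.ok) return [];
  const rows: { citing: string }[] = await resp.json();
  return rows.map((row) => row.citing);
}
```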
OpenAlex data
Load additional information about publications from openalex.org.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
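For illustration, fetching a work record from the public OpenAlex API by DOI might look like this sketch (placeholder DOI; the fields read in the comment follow OpenAlex's documented work schema):

```typescript
// Sketch: load one work from OpenAlex by DOI and inspect a few common fields.
async function loadOpenAlexWork(doi: string) {
  const resp = await fetch(`https://api.openalex.org/works/doi:${doi}`);
  if (!resp.ok) return null;
  const work = await resp.json();
  // Examples of additional information carried by the record:
  // work.cited_by_count, work.open_access?.oa_url, work.publication_year
  return work;
}
```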