Yoshiki Obinata
2020 – today
- 2024
- [j4] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic environmental state recognition with pre-trained vision-language models and black-box optimization. Adv. Robotics 38(18): 1255-1264 (2024)
- [j3] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Naoto Tsukamoto, Kei Okada, Masayuki Inaba: Reflex-based open-vocabulary navigation without prior knowledge using omnidirectional camera and multiple vision-language models. Adv. Robotics 38(18): 1307-1317 (2024)
- [j2] Naoaki Kanazawa, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Real-world cooking robot system from recipes based on food state recognition using foundation models and PDDL. Adv. Robotics 38(18): 1318-1334 (2024)
- [j1] Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Continuous Object State Recognition for Cooking Robots Using Pre-Trained Vision-Language Models and Black-Box Optimization. IEEE Robotics Autom. Lett. 9(5): 4059-4066 (2024)
- [c7] Yoshiki Obinata, Haoyu Jia, Kento Kawaharazuka, Naoaki Kanazawa, Kei Okada: Remote Life Support Robot Interface System for Global Task Planning and Local Action Expansion Using Foundation Models. Humanoids 2024: 738-743
- [c6] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic State Recognition with Image-to-Text Retrieval Task of Pre-Trained Vision-Language Model and Black-Box Optimization. Humanoids 2024: 934-940
- [i14] Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Continuous Object State Recognition for Cooking Robots Using Pre-Trained Vision-Language Models and Black-box Optimization. CoRR abs/2403.08239 (2024)
- [i13] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Naoto Tsukamoto, Kei Okada, Masayuki Inaba: Reflex-Based Open-Vocabulary Navigation without Prior Knowledge Using Omnidirectional Camera and Multiple Vision-Language Models. CoRR abs/2408.11380 (2024)
- [i12] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization. CoRR abs/2409.17519 (2024)
- [i11] Naoaki Kanazawa, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Real-World Cooking Robot System from Recipes Based on Food State Recognition Using Foundation Models and PDDL. CoRR abs/2410.02874 (2024)
- [i10] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic State Recognition with Image-to-Text Retrieval Task of Pre-Trained Vision-Language Model and Black-Box Optimization. CoRR abs/2410.22707 (2024)
- 2023
- [c5] Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Daily Assistive View Control Learning of Low-Cost Low-Rigidity Robot via Large-Scale Vision-Language Model. Humanoids 2023: 1-6
- [c4] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic Applications of Pre-Trained Vision-Language Models to Various Recognition Behaviors. Humanoids 2023: 1-8
- [c3] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: VQA-based Robotic State Recognition Optimized with Genetic Algorithm. ICRA 2023: 8306-8311
- [c2] Yoshiki Obinata, Kento Kawaharazuka, Naoaki Kanazawa, Naoya Yamaguchi, Naoto Tsukamoto, Iori Yanokura, Shingo Kitagawa, Koki Shinjo, Kei Okada, Masayuki Inaba: Semantic Scene Difference Detection in Daily Life Patroling by Mobile Robots Using Pre-Trained Large-Scale Vision-Language Model. IROS 2023: 3228-3233
- [c1] Aiko Ichikura, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba: A method for Selecting Scenes and Emotion-based Descriptions for a Robot's Diary. RO-MAN 2023: 1683-1688
- [i9] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: VQA-based Robotic State Recognition Optimized with Genetic Algorithm. CoRR abs/2303.05052 (2023)
- [i8] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Robotic Applications of Pre-Trained Vision-Language Models to Various Recognition Behaviors. CoRR abs/2303.05674 (2023)
- [i7] Yoshiki Obinata, Naoaki Kanazawa, Kento Kawaharazuka, Iori Yanokura, Soonhyo Kim, Kei Okada, Masayuki Inaba: Foundation Model based Open Vocabulary Task Planning and Executive System for General Purpose Service Robots. CoRR abs/2308.03357 (2023)
- [i6] Naoaki Kanazawa, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Recognition of Heat-Induced Food State Changes by Time-Series Use of Vision-Language Model for Cooking Robot. CoRR abs/2309.01528 (2023)
- [i5] Aiko Ichikura, Kento Kawaharazuka, Yoshiki Obinata, Koki Shinjo, Kei Okada, Masayuki Inaba: Automatic Diary Generation System including Information on Joint Experiences between Humans and Robots. CoRR abs/2309.01948 (2023)
- [i4] Aiko Ichikura, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba: A method for Selecting Scenes and Emotion-based Descriptions for a Robot's Diary. CoRR abs/2309.01951 (2023)
- [i3] Yoshiki Obinata, Kento Kawaharazuka, Naoaki Kanazawa, Naoya Yamaguchi, Naoto Tsukamoto, Iori Yanokura, Shingo Kitagawa, Koki Shinjo, Kei Okada, Masayuki Inaba: Semantic Scene Difference Detection in Daily Life Patroling by Mobile Robots using Pre-Trained Large-Scale Vision-Language Model. CoRR abs/2309.16552 (2023)
- [i2] Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba: Binary State Recognition by Robots using Visual Question Answering of Pre-Trained Vision-Language Model. CoRR abs/2310.16405 (2023)
- [i1] Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba: Daily Assistive View Control Learning of Low-Cost Low-Rigidity Robot via Large-Scale Vision-Language Model. CoRR abs/2312.07451 (2023)
last updated on 2024-12-10 20:51 CET by the dblp team
all metadata released as open data under CC0 1.0 license