Code for 'Chasing Ghosts: Instruction Following as Bayesian State Tracking' published at NeurIPS 2019. Updated Jan 10, 2020 · C++
Code and utilities for creating a Vision-and-Language Navigation (VLN) simulator environment from a physical space.
[ECCV 2022] Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation
Code for ORAR Agent for Vision and Language Navigation on Touchdown and map2seq
Contrastive-VisionVAE-Follower is a model for the multimodal task of Vision-and-Language Navigation (VLN).
Codebase of ACL 2023 Findings "Aerial Vision-and-Dialog Navigation"
[IROS 24] Official repository of "Mind the Error! Detection and Localization of Instruction Errors in Vision-and-Language Navigation". We present the first dataset, R2R-IE-CE, to benchmark instruction errors in VLN, and propose a method, IEDL.
Reading list for research topics in embodied vision
This work investigates automatic hyperparameter tuning for planners such as DWA and TEB; our navigation framework LE-Nav can adjust the hyperparameters of any optimization-based planner.
Dexbotic: Open-Source Vision-Language-Action Toolbox
📚 This repository collects arXiv papers on VLN, VLA, world models, SLAM, Gaussian Splatting, nonlinear optimization, and related topics. It is updated automatically every day! The 10 most recent papers are posted in the issues section.