Josiah Wong

2025  BEHAVIOR Challenge

2025  BRS: Whole-Body Manipulation for Everyday Tasks

2024  Digital Cousins for Robot Learning

2024  BVS: Customizable Simulated Dataset Generation

2022  BEHAVIOR-1K: 1,000-Task Embodied AI Benchmark

2022  OSCAR: Data-Driven Operational Space Control

2021  robomimic: Benchmarking Robot Manipulation Tasks

2021  MoMaRT: Mobile Manipulation Robot Teleoperation

2021  iGibson 1.0: Simulating Large Interactive Scenes

2021  MART: Multi-Arm Robot Teleoperation

2020  robosuite: Robot Learning Simulation Framework


BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation
Abstract

The systematic evaluation and understanding of computer vision models under varying conditions require large amounts of data with comprehensive and customized labels, which real-world vision datasets rarely satisfy. While current synthetic data generators offer a promising alternative, particularly for embodied AI tasks, they often fall short for computer vision tasks due to low asset and rendering quality, limited diversity, and unrealistic physical properties. We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and assets to generate fully customized synthetic data for systematic evaluation of computer vision models, based on the newly developed embodied AI benchmark, BEHAVIOR-1K. BVS supports a large number of adjustable parameters at the scene level (e.g., lighting, object placement), the object level (e.g., joint configuration, attributes such as “filled” and “folded”), and the camera level (e.g., field of view, focal length). Researchers can arbitrarily vary these parameters during data generation to perform controlled experiments. We showcase three example application scenarios: systematically evaluating the robustness of models across different continuous axes of domain shift, evaluating scene understanding models on the same set of images, and training and evaluating simulation-to-real transfer for a novel vision task: unary and binary state prediction.
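To make the controlled-sweep idea concrete, below is a minimal, self-contained Python sketch of how a BVS-style experiment over the three parameter levels might be specified. The SceneParams, ObjectParams, and CameraParams dataclasses and the sweep helper are hypothetical illustrations of the scene-, object-, and camera-level knobs described in the abstract; they are not BVS's actual API.

# Illustrative only: these dataclasses and the sweep helper are hypothetical,
# sketching the scene-/object-/camera-level parameters described in the
# abstract. They are not the actual BVS API.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class SceneParams:
    lighting_intensity: float   # scene-level knob, e.g. a light-power multiplier
    object_density: float       # fraction of candidate placements filled


@dataclass(frozen=True)
class ObjectParams:
    joint_openness: float       # articulation state in [0, 1]
    filled: bool                # unary state such as "filled"


@dataclass(frozen=True)
class CameraParams:
    fov_deg: float              # field of view in degrees
    focal_length_mm: float


def sweep(lighting, openness, fovs):
    """Enumerate configurations along continuous axes of domain shift."""
    for lux, joint, fov in product(lighting, openness, fovs):
        yield (
            SceneParams(lighting_intensity=lux, object_density=0.5),
            ObjectParams(joint_openness=joint, filled=True),
            CameraParams(fov_deg=fov, focal_length_mm=35.0),
        )


if __name__ == "__main__":
    # Vary one axis at a time (here: lighting) to probe model robustness.
    configs = sweep(lighting=[0.25, 0.5, 1.0, 2.0],
                    openness=[0.0],
                    fovs=[60.0])
    for scene, obj, cam in configs:
        print(scene, obj, cam)  # stand-in for rendering + evaluation

Holding all but one axis fixed, as in the lighting sweep above, mirrors the controlled robustness experiments the abstract describes: each configuration would be rendered with full labels, and model performance plotted against the varied axis.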


CVPR 2024 (Highlight)


[Paper] [Website] [Code]


Yunhao Ge*, Yihe Tang*, Jiashu Xu*, Cem Gokmen*, Chengshu Li, Wensi Ai, Benjamin Jose Martinez, Arman Aydin, Mona Anvari, Ayush K Chakravarthy, Hong-Xing Yu, Josiah Wong, Sanjana Srivastava, Sharon Lee, Shengxin Zha, Laurent Itti, Yunzhu Li, Roberto Martín-Martín, Miao Liu, Pengchuan Zhang, Ruohan Zhang, Li Fei-Fei, Jiajun Wu

(* denotes equal contribution)