This guide provides instructions for setting up the environment, preparing datasets, and running training or evaluation for the 3DS-VLA policy.
- Download CoppeliaSim: Download the CoppeliaSim source from the PerAct repository.
- Set Environment Variable: Set the CoppeliaSim directory path in your environment variables:

```bash
export COPELLISM_DIR=/path/to/copellism
```

- Install Environment: Initialize the Conda environment:

```bash
bash 0-env.sh
```
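A minimal sketch of the setup sequence, assuming a bash shell; persisting the variable in `~/.bashrc` is optional and not part of the repo's instructions:

```bash
# Point the environment variable at your CoppeliaSim directory.
export COPELLISM_DIR=/path/to/copellism
# Optionally persist it across shells.
echo 'export COPELLISM_DIR=/path/to/copellism' >> ~/.bashrc

# Create the Conda environment via the provided script.
bash 0-env.sh
```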
To train or test the model with our data, please download the data and model first and make sure the files are placed in the following directories (a hedged placement sketch follows the list):
- Place `RLBench.zip` here and unzip it. This is the training dataset.
- Place `train_json_single.zip` here and unzip it. This is the training JSON folder.
- Place `checkpoint-478000.pth` here.
- Place `llama_model_weights` here.
- Place `sam_vit_h_4b8939.pth` here.
- Place `groundingdino_swint_ogc.pth` here.
- Place `demos.zip` under `./3ds-vla/` and unzip it.
- Place `checkpoint-9.pth` under `./3ds-vla/exp/pretrain1`.
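The "here" targets above refer to specific folders in the repository that are not spelled out in this excerpt. The sketch below only illustrates the unzip/placement commands: the `data/` directory is a hypothetical placeholder, while `./3ds-vla/` and `exp/pretrain1` come from the list above.

```bash
# Placement sketch -- the real target folders are the ones linked as "here" above;
# data/ is a hypothetical placeholder.
cd 3ds-vla                               # repository root (assumed name)
unzip RLBench.zip -d data/               # training dataset
unzip train_json_single.zip -d data/     # training JSON folder
unzip demos.zip                          # demos go under ./3ds-vla/
mkdir -p exp/pretrain1
mv checkpoint-9.pth exp/pretrain1/       # pretrained checkpoint
# checkpoint-478000.pth, llama_model_weights, sam_vit_h_4b8939.pth, and
# groundingdino_swint_ogc.pth go into their respective linked folders.
```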
To perform fine-tuning with the provided dataset, run:

```bash
bash 2-finetune.sh
```

To perform evaluation with the provided dataset (demos.zip), run:

```bash
bash 3-TestinSim.sh
```

However, if you want to collect your own test dataset, use the Line 3 command in `3-TestinSim.sh` before evaluating the model. The evaluation is built on the PerAct repo.
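The evaluation runs CoppeliaSim through RLBench, which generally needs an X display. If you evaluate on a headless server, one common workaround (an assumption about your setup, not part of this repo's instructions) is to wrap the script with `xvfb-run`:

```bash
# Hypothetical: provide a virtual X display for CoppeliaSim on a headless machine.
xvfb-run -a -s "-screen 0 1400x900x24" bash 3-TestinSim.sh
```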
If you want to collect your own training dataset in RLBench, run:

```bash
bash 1-collect-data.sh
```

The pipeline first collects raw data within the RLBench simulator, followed by object mask extraction. It then generates the training JSON metadata and finally generates the point clouds.
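The four stages above run in sequence. The sketch below is only an outline of that flow with hypothetical script names; the actual entry points are whatever `1-collect-data.sh` invokes.

```bash
# Hypothetical outline of the data-collection pipeline (script names are placeholders).
set -e                              # stop if any stage fails
python collect_rlbench_demos.py     # 1. collect raw demos in the RLBench simulator
python extract_object_masks.py      # 2. extract object masks for the collected frames
python build_train_json.py          # 3. generate the training JSON metadata
python generate_point_clouds.py     # 4. generate point clouds for training
```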
The repo is built on PerAct, RLBench, and LLaMA-Adapter. Thanks to the authors of these amazing works.