
PoseFlex

Shape-guided Configuration-aware Learning for Endoscopic-image-based Pose Estimation of Flexible Robotic Instruments, ECCV'24

In this work, we propose a novel shape-guided configuration-aware learning framework for image-based flexible robot pose estimation. Inspired by the recent advances in 2D-3D joint representation learning, we leverage the 3D shape prior of the flexible robot to enhance its image-based shape representation. We first extract the part-level geometry representation of the 3D shape prior, then adapt this representation to the image by querying the image features corresponding to different robot parts. Furthermore, we present an effective mechanism to dynamically deform the shape prior. It aims to mitigate the shape difference between the adopted shape prior and the flexible robot depicted in the image. This more expressive shape guidance boosts the image-based robot representation and can be effectively used for flexible robot pose refinement.
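
As a rough conceptual illustration only (not the actual PoseFlex architecture, which is defined in network.py), the part-level querying idea can be pictured as a set of per-part shape tokens attending to an image feature map:

import torch
import torch.nn as nn

class PartFeatureQuery(nn.Module):
    # Conceptual sketch only: per-part shape tokens query an image feature map
    # via cross-attention. The real network in network.py differs.
    def __init__(self, dim=256, num_parts=4):
        super().__init__()
        self.part_tokens = nn.Parameter(torch.randn(num_parts, dim))  # one learnable query per robot part
        self.attn = nn.MultiheadAttention(dim, num_heads=4)

    def forward(self, img_feat):
        # img_feat: (B, C, H, W) feature map from an image backbone
        B, C, H, W = img_feat.shape
        tokens = img_feat.flatten(2).permute(2, 0, 1)              # (H*W, B, C)
        queries = self.part_tokens.unsqueeze(1).expand(-1, B, -1)  # (num_parts, B, C)
        part_feat, _ = self.attn(queries, tokens, tokens)          # attend over image locations
        return part_feat.permute(1, 0, 2)                          # (B, num_parts, C)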

Install

conda create -n poseflex python=3.8
conda activate poseflex
pip install -r requirements.txt

Other dependencies:

# install pointnet++
git clone https://github.com/erikwijmans/Pointnet2_PyTorch
cd Pointnet2_PyTorch
pip install -e .
cd ..

# Ninja to load the C++ extension
pip install Ninja

Requirements: PyTorch >= 1.7.0 and < 1.11.0; Python >= 3.7; CUDA >= 9.0; GCC >= 4.9; torchvision.

Test the environment with the following command:

python network.py
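
If network.py fails to run, a quick sanity check of the core dependencies can help isolate the problem (the pointnet2_ops import name below is an assumption and may differ depending on how Pointnet2_PyTorch was installed):

import torch

print("torch:", torch.__version__)             # expect >= 1.7.0 and < 1.11.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)       # expect CUDA >= 9.0

try:
    import pointnet2_ops                       # CUDA extension from Pointnet2_PyTorch (name assumed)
    print("pointnet2_ops: OK")
except ImportError as err:
    print("pointnet2_ops missing:", err)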

Data

We provide the test data here. Please download it and place it in the ./data folder:

└── PoseFlex
    └── data
        └── test
            ├── image
            │   ├── 1.jpg
            │   ├── 2.jpg
            │   └── …
            ├── mask
            │   ├── 1.png
            │   ├── 2.png
            │   └── …
            ├── rot
            │   ├── d_L.npy
            │   └── …
            └── pose_prior

Test the data loading with the following command:

python dataset.py

Train

To train a new model, organize your own training/validation data as described above. Then modify the data_paths variable and the data_index within the get_dataloader function in dataset.py (see the illustrative sketch below).
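
The exact structure of data_paths and get_dataloader is defined in dataset.py; the snippet below uses purely illustrative placeholder values, not the file's real contents:

# Illustrative placeholders only -- match the actual structure in dataset.py.
data_paths = {
    "train": "./data/my_train",   # path to your training split
    "val":   "./data/my_val",     # path to your validation split
    "test":  "./data/test",       # provided test split
}
# Inside get_dataloader, point data_index at the samples of the chosen split.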

Change the configuration in sft.yml and run:

python train.py --config=sft.yml

The trained model will be saved in ./exps/.

Test

You can download the pre-trained model here and place it under ./exps/PoseFlexModel/.... Change the configuration in test.yml and run:

python eval.py --config=test.yml

Results will be saved in ./results/. Note: when model_type is set to PoseRefine, pose_prior is required as the prior for deforming the CAD model.

Others

If torch.arctan2 is unavailable in your version of PyTorch, you can replace it with the onnx_atan2 function in pt3d.py.
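
The function in pt3d.py is the reference; for orientation only, an atan2 built from torch.atan (the usual construction for older PyTorch or ONNX-friendly code) looks roughly like this:

import math
import torch

def onnx_atan2_sketch(y, x):
    # Quadrant-corrected arctangent built from torch.atan; a stand-in for
    # torch.arctan2 on older PyTorch versions (sketch, not the repo's code).
    safe_x = torch.where(x == 0, torch.full_like(x, 1e-12), x)  # avoid division by zero
    ans = torch.atan(y / safe_x)
    # torch.atan only covers (-pi/2, pi/2); shift the left half-plane by +/- pi.
    ans = torch.where((x < 0) & (y >= 0), ans + math.pi, ans)
    ans = torch.where((x < 0) & (y < 0), ans - math.pi, ans)
    return ans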
