Pose and Joint-Aware Action Recognition

Code and pre-processed data for the paper "Pose and Joint-Aware Action Recognition", accepted at WACV 2022

[Paper] [Video]

Set-up environment

  • Tested with Python version 3.7.11

Follow one of the following options to set up the environment:

  • A) Install from the conda environment file: conda env create -f environment.yml
  • B) The code mainly requires the following packages: pytorch, torchvision, opencv-python, matplotlib, wandb, tqdm, joblib, scipy, scikit-learn
    • Install the packages one step at a time:
    • conda create -n pose_action python=3.7
    • conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge
    • pip install opencv-python matplotlib wandb tqdm joblib scipy scikit-learn
  • C) Make an account on wandb and make the required changes to train.py L36; a sketch of this change follows the list.
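The exact change needed in train.py L36 is not reproduced in this README; the following is a minimal sketch, assuming the usual wandb set-up, of pointing the run at your own account (the entity/project/name values are placeholders, not taken from the repository):

import wandb

# Placeholders below are hypothetical; substitute your own wandb account details.
wandb.init(
    entity="your-wandb-username",   # your wandb account or team
    project="pose_action",          # any project name you like
    name="jhmdb_baseline",          # run name, analogous to the --name option
)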

Prepare data

  • mkdir data
  • mkdir metadata
----data
	----JHMDB
		----openpose_COCO_3
			---....npy
			---....npy
			---....npy
	----HMDB51
		----openpose_COCO_3
			---....npy
			---....npy
			---....npy

----metadata
	----JHMDB
		----.pkl
		----.pkl
		----.pkl
	----HMDB51
		----.pkl
		----.pkl
		----.pkl
  • Download data from here. Extract the tar files, preserving the folder structure data/$dataset/openpose_COCO_3/.
  • Download metadata from here. Extract the tar files to data/metadata. A short sanity check of the resulting layout is sketched below.
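As a quick sanity check after extraction (a sketch, assuming the directory layout shown in the tree above is rooted at the repository; adjust the metadata path if you extracted it under data/metadata instead):

import glob, os

# Count the extracted files for each dataset; paths follow the tree above.
for dataset in ("JHMDB", "HMDB51"):
    npy = glob.glob(os.path.join("data", dataset, "openpose_COCO_3", "*.npy"))
    pkl = glob.glob(os.path.join("metadata", dataset, "*.pkl"))
    print("{}: {} heatmap .npy files, {} metadata .pkl files".format(dataset, len(npy), len(pkl)))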

Training scripts

  • Example : bash sample_scripts/hmdb.sh
  • Example : bash sample_scripts/jhmdb.sh
  • Example : bash sample_scripts/le2i.sh
Raw heatmaps

We also provide the raw heatmaps here; they were extracted with OpenPose. See the function final_extract_hmdb in utils.py for an example of extracting pose data from these heatmaps.
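The extraction logic in final_extract_hmdb is not reproduced here; the sketch below shows, under the assumption that each .npy file stores per-joint heatmaps as an array of shape (num_joints, H, W), the generic way to turn such heatmaps into (x, y, confidence) joint coordinates with a per-channel argmax:

import numpy as np

def heatmap_to_joints(heatmap_path):
    # Assumed layout: (num_joints, H, W); check the actual files before relying on this.
    heatmaps = np.load(heatmap_path)
    num_joints, h, w = heatmaps.shape
    joints = np.zeros((num_joints, 3), dtype=np.float32)
    for j in range(num_joints):
        idx = np.argmax(heatmaps[j])
        y, x = np.unravel_index(idx, (h, w))   # row/column of the strongest response
        joints[j] = (x, y, heatmaps[j, y, x])  # x, y and the peak confidence
    return joints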

Citation

If you find this repository useful in your work, please cite us!

@InProceedings{Shah_2022_WACV,
    author    = {Shah, Anshul and Mishra, Shlok and Bansal, Ankan and Chen, Jun-Cheng and Chellappa, Rama and Shrivastava, Abhinav},
    title     = {Pose and Joint-Aware Action Recognition},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {3850-3860}
}

Local machine to OSS (恒源云 Hengyuan Cloud)

Open the provided link; it contains the OSS command-line instructions for uploading data. Go to the OSS command installation section, download the exe file, and rename it to oss.exe.

On the local Windows machine, open PowerShell and change into the directory containing the downloaded oss.exe:

cd H:\
.\oss.exe login
Username:150---
Password:lzy2---
Login succeeded
.\oss.exe cp dataset.zip oss://

OSS to instance

Start the instance's JupyterLab and copy the data from OSS into /hy-tmp on the server:

oss login
Username:150.....
Password:lzy.....
Login succeeded
oss cp oss://dataset.zip /hy-tmp/
cd /hy-tmp/

Unzip

unzip dataset.zip

Changes to the original PoseAction code

git clone https://github.com/anshulbshah/PoseAction.git

Modify --name in opt.

Comment out the wandb calls in train and trains.

The HMDB51 file names do not contain '-'.

The batch size of 128 in the HMDB51 training script is too large; change it to 32.

In models.py, change the parameter-count print to: print('Number of parameters requiring grad : {} '.format(count_parameters(enc)))
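count_parameters is not shown in this README; assuming it follows the common PyTorch idiom, it would look like the sketch below, which is what the modified print statement reports for the encoder enc:

# Common PyTorch idiom (an assumption about the repository's helper):
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)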

Issues

libgthread-2.0.so.0: cannot open shared object file: No such file or directory

This error typically appears when importing opencv-python (cv2) on a minimal system image; install the missing GLib library:

sudo apt-get install libglib2.0-0

Connecting to the server terminal locally

In PyCharm's Tools menu, select Start SSH session (see the reference link).
