This is our project for building a video action recognition system.
Dataset preparation steps:
- Make the `video-action-recognition/data` directory.
- Download HMDB51 into the `data` directory (about 2 GB in total: roughly 7,000 clips across 51 action classes). Use `unrar x xxx.rar` to extract all of the .rar files in this dataset (a batch-extraction sketch follows this list). Afterwards, `video-action-recognition/data` should have a directory tree like this:

  ```
  data
  └── HMDB51
      ├── split
      │   ├── README
      │   ├── testTrainMulti_7030_splits
      │   └── test_train_splits.rar
      └── video
          ├── brush_hair
          ├── cartwheel
          ├── catch
          ...
  ```
- Run `dataset/dataset_list_maker.py` to create the annotation list files (a hypothetical sketch of such a script also follows this list):

  ```
  python dataset/dataset_list_maker.py data/HMDB51/
  ```
- Finally, the `video-action-recognition/data` directory tree will look like this:

  ```
  data
  └── HMDB51
      ├── meta.txt
      ├── split
      │   ├── README
      │   ├── testTrainMulti_7030_splits
      │   └── test_train_splits.rar
      ├── test_list.txt
      ├── train_list.txt
      └── video
          ├── brush_hair
          ├── cartwheel
          ├── catch
          ...
  ```
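HMDB51 is typically distributed as one .rar archive per action class, so extracting them by hand is tedious. The snippet below is a minimal batch-extraction sketch, not part of this repository: it assumes the `unrar` binary is on the PATH and that the per-class archives were placed under `data/HMDB51/video`; each archive is extracted next to itself, which should reproduce the tree shown above.

```python
#!/usr/bin/env python3
"""Batch-extract every .rar file found under data/HMDB51 (illustrative sketch).

Assumptions (not verified against this repository):
  * the `unrar` command-line tool is installed and on PATH,
  * the per-class archives were placed under data/HMDB51/video.
Each archive is extracted into its own parent directory.
"""
import subprocess
from pathlib import Path

ROOT = Path("data/HMDB51")

for rar in sorted(ROOT.rglob("*.rar")):
    dest = rar.parent
    print(f"extracting {rar} -> {dest}")
    # `unrar x` keeps the directory structure stored in the archive;
    # -o+ silently overwrites files that already exist.
    subprocess.run(["unrar", "x", "-o+", str(rar), f"{dest}/"], check=True)
```

Note that this loop also picks up `split/test_train_splits.rar` and re-extracts it in place, which should simply recreate the split files shown in the tree above.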
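For reference, here is a rough sketch of what an annotation list maker could look like. This is not the repository's `dataset/dataset_list_maker.py`; it only illustrates one plausible behaviour and assumes an output format of `<relative_video_path> <label_id>` per line for `train_list.txt`/`test_list.txt` and `<label_id> <class_name>` per line for `meta.txt`. Train/test membership is taken from the official split-1 files in `split/testTrainMulti_7030_splits`, where a flag of 1 marks a training clip, 2 a test clip, and 0 an unused clip.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an annotation-list maker for HMDB51.

This is NOT the repository's dataset/dataset_list_maker.py; it only
illustrates one plausible implementation. Assumed output format:
  meta.txt        -> "<label_id> <class_name>" per line
  train_list.txt  -> "<relative_video_path> <label_id>" per line
  test_list.txt   -> "<relative_video_path> <label_id>" per line
"""
import sys
from pathlib import Path


def main(root: Path) -> None:
    video_dir = root / "video"
    split_dir = root / "split" / "testTrainMulti_7030_splits"
    classes = sorted(p.name for p in video_dir.iterdir() if p.is_dir())

    # meta.txt: integer id -> class name
    with open(root / "meta.txt", "w") as f:
        for idx, name in enumerate(classes):
            f.write(f"{idx} {name}\n")

    train, test = [], []
    for idx, name in enumerate(classes):
        # official per-class split file, e.g. brush_hair_test_split1.txt
        split_file = split_dir / f"{name}_test_split1.txt"
        for line in split_file.read_text().splitlines():
            parts = line.split()
            if len(parts) < 2:
                continue
            clip, flag = parts[0], parts[1]
            entry = f"video/{name}/{clip} {idx}\n"
            if flag == "1":
                train.append(entry)
            elif flag == "2":
                test.append(entry)

    (root / "train_list.txt").write_text("".join(train))
    (root / "test_list.txt").write_text("".join(test))


if __name__ == "__main__":
    main(Path(sys.argv[1]))  # e.g. python dataset_list_maker.py data/HMDB51/
```

The actual script may use a different list format or a different split; check its output before training.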
Resnet_a models:
- Go to the project directory: `cd xxx/video-action-recognition`
- Run TensorBoard: `bash tensorboard/tensorboard.sh [port]`, e.g. `bash tensorboard/tensorboard.sh 7788`
- Start training with `bash experiments/scripts/train_resnet_a.sh` (an optional sanity-check sketch for the annotation lists follows below)
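Before launching a long training run, it can help to sanity-check the generated annotation lists. The snippet below is an optional, illustrative check that assumes the `<relative_video_path> <label_id>` line format sketched above; it counts clips and classes per list and reports entries whose video files are missing.

```python
#!/usr/bin/env python3
"""Quick sanity check of the annotation lists (illustrative only).

Assumes each line of train_list.txt / test_list.txt is
"<video_path_relative_to_data/HMDB51> <label_id>".
"""
from collections import Counter
from pathlib import Path

ROOT = Path("data/HMDB51")

for name in ("train_list.txt", "test_list.txt"):
    counts = Counter()
    missing = 0
    for line in (ROOT / name).read_text().splitlines():
        if not line.strip():
            continue
        path, label = line.rsplit(" ", 1)
        counts[int(label)] += 1
        if not (ROOT / path).exists():
            missing += 1
    print(f"{name}: {sum(counts.values())} clips, "
          f"{len(counts)} classes, {missing} missing files")
```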