
MotionXpert

    __ ( }       __  __       _   _            __   __                _   
  '---. _`---,  |  \/  |     | | (_)           \ \ / /               | |  
  ___/ /        | \  / | ___ | |_ _  ___  _ __  \ V / _ __   ___ _ __| |_  
/,---'\\        | |\/| |/ _ \| __| |/ _ \| '_ \  > < | '_ \ / _ \ '__| __|
      //        | |  | | (_) | |_| | (_) | | | |/ . \| |_) |  __/ |  | |_ 
     '==        |_|  |_|\___/ \__|_|\___/|_| |_/_/ \_\ .__/ \___|_|   \__|
                                                      | |                  
                                                      |_|                   

Install

Create and activate a virtual environment, then install the requirements:

$ conda create -n motion2text python=3.7
$ conda activate motion2text
$ pip install -r requirements.txt

If the installation of language_evaluation fails, install it from the GitHub source code instead.
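A minimal sketch of a source install, assuming the package is the bckim92/language-evaluation repository (confirm the exact repository this project uses):

$ pip install git+https://github.com/bckim92/language-evaluation.git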

Prepare

Dataset

Config File

The template config file for pretraining:

TASK.PRETRAIN_SETTING can be set to either STAGCN or Attention. STAGCN is the implementation of Spatial Temporal Attention Graph Convolutional Networks with Mechanics-Stream for skeleton-based action recognition, and Attention is our own implementation.

TASK.PRETRAIN_DIFFERENCE can be set to true or false. If it is true, the model will also use the difference information.

HIDDEN_CHANNEL: 32
OUT_CHANNEL: 128
TRANSFORMATION:
  REDUCTION_POLICY: 'TIME_POOL'
TASK:
  PRETRAIN: true
  PRETRAIN_SETTING: 'Attention'
  PRETRAIN_DIFFERENCE: true
DATA: 
  TRAIN: '{The PATH of the pretrain training dataset}'
  TEST: '{The PATH of the pretrain testing dataset}'
  BATCH_SIZE: 16
OPTIMIZER:
  LR: 1e-4
  MAX_EPOCH: 50
  WARMUP_STEPS: 5000
BRANCH: 1
LOGDIR: ./results/pretrain
args:
  eval_multi: false

The template config file for finetuning:

HIDDEN_CHANNEL: 32
OUT_CHANNEL: 128
TRANSFORMATION:
  REDUCTION_POLICY: 'TIME_POOL'
TASK:
  PRETRAIN: false
  PRETRAIN_SETTING: 'Attention'
  PRETRAIN_DIFFERENCE: true
WEIGHT_PATH: '{The PATH of MotionExpert}/MotionExpert/results/pretrain/pretrain_checkpoints/checkpoint_epoch_00008.pth'
DATA: 
  TRAIN: '{The PATH of the finetune training dataset}'
  TEST: '{The PATH of the finetune testing dataset}'
  BATCH_SIZE: 16
OPTIMIZER:
  LR: 1e-4
  MAX_EPOCH: 50
  WARMUP_STEPS: 5000
BRANCH: 1
LOGDIR: ./results/finetune
args:
  eval_multi: false

Create the results directory inside {The PATH of MotionExpert}/MotionExpert.

Pretrain

Step 1: Create the pretrain directory under results.

Step 2: Put the config.yaml (for example, the template config file for pretraining above) in the pretrain directory.
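A sketch of Steps 1 and 2 as shell commands, assuming you are in {The PATH of MotionExpert}/MotionExpert and your filled-in pretrain config is saved as config.yaml in the current directory:

$ mkdir -p results/pretrain
$ cp config.yaml results/pretrain/config.yaml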

Step 3: After pretraining, the pretrain_checkpoints directory will be created automatically, as shown below:

Motion Expert
    | - results
        | - pretrain
            | -  pretrain_checkpoints
                | - ...
            | -  config.yaml 

For users:

If training is interrupted, it will resume from the last saved epoch the next time it is launched.

For developers:

If you want to restart the whole training process, delete the entire pretrain_checkpoints directory; otherwise, training will resume from the last epoch.
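For example, assuming the LOGDIR ./results/pretrain from the template config:

$ rm -rf results/pretrain/pretrain_checkpoints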

Finetuning

Step 1: Create the finetune directory under results.

Step 2: Create the pretrain_checkpoints directory inside it.

Step 3: Put the pretrained checkpoint file (for example, checkpoint_epoch_00008.pth) in the pretrain_checkpoints directory.

Step 4: Put the config.yaml (for example, the template config file for finetuning above) in the finetune directory.
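A sketch of Steps 1 to 4 as shell commands, assuming the checkpoint was produced by the pretrain run above and your filled-in finetune config is saved as config.yaml in the current directory:

$ mkdir -p results/finetune/pretrain_checkpoints
$ cp results/pretrain/pretrain_checkpoints/checkpoint_epoch_00008.pth results/finetune/pretrain_checkpoints/
$ cp config.yaml results/finetune/config.yaml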

Step 5: After finetuning, the checkpoints directory will be created automatically, as shown below:

Motion Expert
    | - results
        | - finetune
            | - checkpoints
                | ...
            | - pretrain_checkpoints
                | - checkpoint_epoch_00008.pth
            | - config.yaml 

Additionally, if you are finetuning from an existing checkpoint, you will have to further create a folder called pretrain_checkpoints, and put the desired checkpoint into that folder.

For developers:

If you want to restart the whole training process, delete the entire checkpoints directory; otherwise, training will resume from the last epoch.
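For example, assuming the LOGDIR ./results/finetune from the template config:

$ rm -rf results/finetune/checkpoints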

Build

Template command:

$ torchrun --nproc_per_node <specify_how_many_gpus_to_run> main.py --cfg_file <path_to_cfg_file>

or, if the above yields the error "Error detected multiple processes in same device":

$ python -m torch.distributed.launch --nproc_per_node <specify_how_many_gpus_to_run> main.py --cfg_file <path_to_cfg_file>

Run pretrain setting

$ python -m torch.distributed.launch --nproc_per_node 1 main.py --cfg_file {The PATH of MotionExpert}/MotionExpert/results/pretrain/config.yaml

Run finetune setting

$ python -m torch.distributed.launch --nproc_per_node 1 main.py --cfg_file {The PATH of MotionExpert}/MotionExpert/results/finetune/config.yaml 

Submodule - VideoAlignment

We use VideoAlignment as a submodule to handle the branch 2 alignment code.

To fetch the submodule after you git clone this repo, run the following:

$ cd VideoAlignment
$ git submodule init
$ git submodule update
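Alternatively, a standard git option (not specific to this repository) fetches submodules at clone time; the URL below assumes the MotionXperts/MotionExpert repository name:

$ git clone --recurse-submodules https://github.com/MotionXperts/MotionExpert.git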

If you need to update the VideoAlignment submodule when you run git pull:

$ git submodule update
$ git pull
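With a recent git version, a single standard command can pull and update submodules together:

$ git pull --recurse-submodules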

Boxing

$ CUDA_VISIBLE_DEVICES=6 torchrun --nproc_per_node=1 --master_port=29051 evaluation.py --cfg_file /home/andrewchen/MotionExpert_v2/MotionExpert/results/finetune_skeleton_boxing/config_err.yaml --ckpt /home/andrewchen/MotionExpert_v2/MotionExpert/results/finetune_skeleton_boxing/checkpoints/checkpoint_epoch_00020.pth > output/finetune_skeleton_boxing

All you need to know in SportTech

Development Log

Reference
