This repo contains a PyTorch implementation of the Attentive Transformation (AT)-based normalization method.
Spatial and channel attentive importance responses are captured by a channel-wise attentive branch and a spatial-wise attentive branch. Pixel-dependent learnable parameters are then generated by transforming the pixel-wise response map, which is obtained by combining the channel-wise and spatial-wise attentive response maps. The workflow of the AT-based normalization method is illustrated below:
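The combination described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the repo's actual module: the function name, the sigmoid gating, and the way the response map is turned into scale/shift parameters are all our own assumptions.

```python
import numpy as np

def at_norm_sketch(x, eps=1e-5):
    """Illustrative sketch of AT-based normalization (not the repo's code).

    x: feature map of shape (N, C, H, W).
    """
    # Standard normalization (batch-norm style statistics).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # Channel-wise attentive branch: global average pooling + sigmoid gate -> (N, C, 1, 1).
    chan = 1.0 / (1.0 + np.exp(-x.mean(axis=(2, 3), keepdims=True)))

    # Spatial-wise attentive branch: channel-pooled map + sigmoid gate -> (N, 1, H, W).
    spat = 1.0 / (1.0 + np.exp(-x.mean(axis=1, keepdims=True)))

    # Pixel-wise response map: combine the two branches by broadcasting -> (N, C, H, W).
    response = chan * spat

    # Transform the response into pixel-dependent scale and shift (assumed form).
    gamma = 1.0 + response
    beta = response
    return gamma * x_hat + beta

x = np.random.randn(2, 4, 8, 8)
y = at_norm_sketch(x)
assert y.shape == x.shape
```

The key point the sketch shows is that, unlike plain batch normalization, the affine parameters `gamma` and `beta` vary per pixel because they are derived from the attentive response map.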
pip3 install -r requirements.txt
A dataset folder should look like this:
└── original_image_path
    ├── Image
    │   └── PET_001
    │       ├── PET_001_PET_1.npy
    │       ├── PET_001_PET_2.npy
    │       ├── PET_001_PET_3.npy
    │       ......
    └── Mask
        └── PET_001
            ├── PET_001_Mask_1.npy
            ├── PET_001_Mask_2.npy
            ├── PET_001_Mask_3.npy
            ......
The dataset is preprocessed and saved as individual 2D slices in ".npy" format.
Please modify "original_image_path" with your own configuration.
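Given the layout above, each image slice can be paired with its mask by filename. A small sketch of such a helper (the function name and the `_PET_` to `_Mask_` substitution are assumptions inferred from the filenames shown, not code from this repo):

```python
import os

def mask_path_for(image_path):
    """Map an image slice path to its mask path, assuming the layout above.

    e.g. original_image_path/Image/PET_001/PET_001_PET_1.npy
      -> original_image_path/Mask/PET_001/PET_001_Mask_1.npy
    """
    head, fname = os.path.split(image_path)
    patient_dir, patient = os.path.split(head)
    root, _ = os.path.split(patient_dir)           # strip the trailing "Image"
    mask_fname = fname.replace("_PET_", "_Mask_")  # assumed naming convention
    return os.path.join(root, "Mask", patient, mask_fname)

print(mask_path_for("original_image_path/Image/PET_001/PET_001_PET_1.npy"))
```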
We use argparse for ease of running our code. For detailed information, run:
python3 ./main.py --help
# Here we use BN as an example
python3 ./main.py --struct_name BN --batch-size 10 --epochs 100
python3 ./main.py --struct_name AT --original_image_path xxx --batch-size 10 --epochs 100
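For reference, the flags used above could be declared roughly as follows. This is a hedged sketch of the CLI implied by the example commands; the actual defaults, help strings, and any additional options in `main.py` may differ.

```python
import argparse

def build_parser():
    # Sketch of the CLI suggested by the example commands; defaults are assumptions.
    p = argparse.ArgumentParser(
        description="Train segmentation with a chosen normalization method")
    p.add_argument("--struct_name", default="AT",
                   help="normalization variant, e.g. BN or AT")
    p.add_argument("--original_image_path", default="./data",
                   help="root directory of the preprocessed dataset")
    p.add_argument("--batch-size", type=int, default=10, dest="batch_size")
    p.add_argument("--epochs", type=int, default=100)
    return p

args = build_parser().parse_args(
    ["--struct_name", "BN", "--batch-size", "10", "--epochs", "100"])
print(args.struct_name, args.batch_size, args.epochs)
```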
- https://github.com/milesial/Pytorch-UNet
- https://github.com/dvlab-research/AttenNorm
- https://github.com/gbup-group/IEBN
- https://github.com/anthonymlortiz/lcn
@article{qiao2021atnorm,
  title={Improving Breast Tumor Segmentation in PET via Attentive Transformation Based Normalization},
  author={Qiao, Xiaoya and Jiang, Chunjuan and Li, Panli and Yuan, Yuan and Zeng, Qinglong and Bi, Lei and Song, Shaoli and Kim, Jinman and Feng, David Dagan and Huang, Qiu},
  year={2021}
}