<div align="center">

# PLOP: Learning without Forgetting for Continual Semantic Segmentation

[![Paper](http://img.shields.io/badge/paper-arxiv.2011.11390-B31B1B.svg)](https://arxiv.org/abs/2011.11390)
[![Conference](https://img.shields.io/badge/CVPR-2021-important)](https://arxiv.org/abs/2011.11390)

</div>

This repository contains all of our code. It is a modified version of [Cermelli et al.'s repository](https://github.com/fcdl94/MiB).

```
@inproceedings{douillard2021plop,
  title={PLOP: Learning without Forgetting for Continual Semantic Segmentation},
  author={Douillard, Arthur and Chen, Yifu and Dapogny, Arnaud and Cord, Matthieu},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```

# Requirements

You need to install the following libraries:
- Python (3.6)
- Pytorch (1.7.1)
- torchvision (0.4.0)
- tensorboardX (1.8)
- apex (0.1)
- matplotlib (3.3.1)
- numpy (1.17.2)
- [inplace-abn](https://github.com/mapillary/inplace_abn) (1.0.7)

Note also that apex seems to work only with some CUDA versions; therefore, install Pytorch with CUDA 9.2 or 10.0:

```
conda install -y pytorch torchvision cudatoolkit=9.2 -c pytorch
cd apex
pip3 install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
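
To quickly check that apex is importable after building it (a minimal sanity check, not part of the original instructions):

```
python -c "from apex import amp; print('apex OK')"
```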

# Dataset

Two scripts are available to download ADE20k and Pascal-VOC 2012; see the `data` folder. For Cityscapes, you need to download it yourself, as you must request permission from the dataset holders; rest assured, it is only a formality, and you should receive the download link by email within a few days.

# How to perform training

The most important file is `run.py`, which is in charge of starting the training or test procedure. To run it, simply use the following command:

> python -m torch.distributed.launch --nproc_per_node=\<num_GPUs\> run.py --data_root \<data_folder\> --name \<exp_name\> .. other args ..

By default, a pretrained backbone is used; it is looked up in the `pretrained` folder of the project. As stated in the paper, we used the pretrained models released by the authors of In-place ABN, which can be found [here](https://github.com/mapillary/inplace_abn#training-on-imagenet-1k). Since these models were trained on multiple GPUs, every key of the network's state dict carries a "module." prefix; be sure to remove it to be compatible with this code (simply rename each key with `key = key[7:]`, as in the sketch below). If you don't want to use a pretrained backbone, use --no-pretrained.
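
A minimal sketch of that renaming step (the file names below are hypothetical, and whether the weights are nested under a `state_dict` key may depend on the checkpoint you downloaded):

```
import torch

# Hypothetical file name; use the checkpoint you actually downloaded.
ckpt = torch.load("pretrained/resnet101_iabn_sync.pth", map_location="cpu")

# Some checkpoints wrap the weights in a "state_dict" entry.
state_dict = ckpt.get("state_dict", ckpt)

# Strip the "module." prefix (7 characters) added by (Distributed)DataParallel.
state_dict = {
    (key[7:] if key.startswith("module.") else key): value
    for key, value in state_dict.items()
}

torch.save(state_dict, "pretrained/resnet101_iabn_sync_renamed.pth")
```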

There are many options (you can see them all with the --help option), but we arranged the code to make it straightforward to test the reported methods. Leaving all other parameters at their defaults, you can replicate the experiments by setting the following options:
- data folder: --data_root \<data_root\>
- dataset: --dataset voc (Pascal-VOC 2012) | ade (ADE20K)
- task: --task \<task\>, where tasks are
  - 15-5, 15-5s, 19-1 (VOC), 100-50, 100-10, 50, 100-50b, 100-10b, 50b (ADE, where b indicates a different class order)
- step (each step is run separately): --step \<N\>, where N is the step number, starting from 0
- (Pascal-VOC only) disjoint is the default setup; to enable the overlapped setup, use: --overlapped
- learning rate: --lr 0.01 (for step 0) | 0.001 (for step > 0)
- batch size: --batch_size \<24/num_GPUs\>
- epochs: --epochs 30 (Pascal-VOC 2012) | 60 (ADE20K)
- method: --method \<method name\>, where names are
  - FT, LWF, LWF-MC, ILT, EWC, RW, PI, MIB

For all other details, please refer to the information provided by the --help option.

#### Example commands

LwF on the 100-50 setting of ADE20K, step 0:
> python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset ade --name LWF --task 100-50 --step 0 --lr 0.01 --epochs 60 --method LWF

MIB on the 50b setting of ADE20K, step 2:
> python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset ade --name MIB --task 50b --step 2 --lr 0.001 --epochs 60 --method MIB

LWF-MC on the 15-5 disjoint setting of VOC, step 1:
> python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset voc --name LWF-MC --task 15-5 --step 1 --lr 0.001 --epochs 30 --method LWF-MC

PLOP on the 15-1 overlapped setting of VOC, step 1:
> python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset voc --name PLOP --task 15-5s --overlapped --step 1 --lr 0.001 --epochs 30 --method FT --pod local --pod_factor 0.01 --pod_logits --pseudo entropy --threshold 0.001 --classif_adaptive_factor --init_balanced --pod_options "{\"switch\": {\"after\": {\"extra_channels\": \"sum\", \"factor\": 0.0005, \"type\": \"local\"}}}"

Once you have trained the model, you can see the results on tensorboard (we perform the test after the whole training), or you can test it by using the same script and parameters but adding the flag

> --test

which will skip the whole training procedure and evaluate the model on the test data.
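
For instance, to evaluate the LwF model trained in the first example above (same arguments, with --test appended):

> python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset ade --name LWF --task 100-50 --step 0 --lr 0.01 --epochs 60 --method LWF --test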

Or, more simply, you can use one of the provided scripts, which launch every step of a continual training. For example:

```
bash scripts/plop_15-1.sh
```

Note that you will need to modify those scripts to point to the folder where your data is stored.
