
Commit da091e1

update ReResNet V2
1 parent 0b9addf commit da091e1

File tree

8 files changed: +583 -558 lines

.gitignore (+1)

@@ -113,3 +113,4 @@ data
 trash/
 
 experiments
+work_dirs

GETTING_STARTED.md (+35 -144)

@@ -4,7 +4,6 @@ This page provides basic tutorials about the usage of ReDet.
 For installation instructions, please see [INSTALL.md](INSTALL.md).
 
 
-
 ## Prepare DOTA dataset.
 It is recommended to symlink the dataset root to `ReDet/data`.
 
@@ -15,12 +14,12 @@ First, make sure your initial data are in the following structure.
 data/dota15
 ├── train
 │   ├──images
-│   └── labelTxt
+│   └──labelTxt
 ├── val
-│   ├── images
-│   └── labelTxt
+│   ├──images
+│   └──labelTxt
 └── test
-   └── images
+   └──images
 ```
 Split the original images and create COCO format json.
 ```
@@ -30,11 +29,11 @@ Then you will get data in the following structure
 ```
 dota15_1024
 ├── test1024
-│   ├── DOTA_test1024.json
-│   └── images
+│   ├──DOTA_test1024.json
+│   └──images
 └── trainval1024
-    ├── DOTA_trainval1024.json
-    └── images
+    ├──DOTA_trainval1024.json
+    └──images
 ```
 For data preparation with data augmentation, refer to "DOTA_devkit/prepare_dota1_5_v2.py"
 
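For context on the split step in the hunks above: DOTA images are large (up to roughly 4000×4000 pixels), so they are tiled into overlapping 1024×1024 patches before training and testing. Below is a minimal sketch of that sliding-window idea; the `split_image` helper and the patch-naming scheme are hypothetical simplifications, and the real DOTA_devkit scripts additionally rescale images and clip and remap each polygon annotation into its patch.

```python
# Hypothetical sketch of the sliding-window split behind the DOTA_devkit
# preparation scripts (names and defaults are illustrative; the real scripts
# also rescale images and clip/remap polygon annotations into each patch).
import os
from PIL import Image

def split_image(img_path, out_dir, subsize=1024, gap=200):
    """Tile one large image into subsize x subsize patches overlapping by gap pixels."""
    img = Image.open(img_path)
    w, h = img.size
    stride = subsize - gap
    base = os.path.splitext(os.path.basename(img_path))[0]
    for left in range(0, max(w - gap, 1), stride):
        for top in range(0, max(h - gap, 1), stride):
            patch = img.crop((left, top, min(left + subsize, w), min(top + subsize, h)))
            # encode the offset in the name so detections can be merged back later
            patch.save(os.path.join(out_dir, f'{base}__1__{left}___{top}.png'))
```

The overlap keeps an object that straddles one patch border whole in a neighbouring patch; the per-patch detections are later merged back into whole-image results.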
@@ -47,16 +46,15 @@ First, make sure your initial data are in the following structure.
 data/HRSC2016
 ├── Train
 │   ├──AllImages
-│   └── Annotations
+│   └──Annotations
 └── Test
 │   ├──AllImages
-│   └── Annotations
+│   └──Annotations
 ```
 
 Then you need to convert HRSC2016 to DOTA's format, i.e.,
 rename `AllImages` to `images`, convert xml `Annotations` to DOTA's `txt` format.
-Here we provide a script from s2anet: [HRSC2DOTA.py](https://github.com/csuhan/s2anet/blob/original_version/DOTA_devkit/HRSC2DOTA.py). It will be added to this repo later.
-After that, your `data/HRSC2016` should contain the following folders.
+Here we provide a script from s2anet: [HRSC2DOTA.py](https://github.com/csuhan/s2anet/blob/original_version/DOTA_devkit/HRSC2DOTA.py). Now, your `data/HRSC2016` should contain the following folders.
 
 ```
 data/HRSC2016
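The XML-to-txt conversion mentioned in this hunk reduces to rotating each box's four corners about its center and writing DOTA-style `x1 y1 x2 y2 x3 y3 x4 y4 category difficult` lines. A rough sketch follows, assuming HRSC2016's usual `mbox_cx`/`mbox_cy`/`mbox_w`/`mbox_h`/`mbox_ang` fields (angle in radians) and its single `ship` category; the linked HRSC2DOTA.py is the authoritative implementation.

```python
# Rough sketch of the HRSC2016 -> DOTA annotation conversion (illustration only;
# the linked HRSC2DOTA.py is the authoritative script). Assumes the usual
# HRSC2016 mbox_* fields with the angle in radians, and the single 'ship' class.
import math
import xml.etree.ElementTree as ET

def rbox_to_poly(cx, cy, w, h, ang):
    """Rotate the 4 corners of a (cx, cy, w, h) box by ang around its center."""
    cos_a, sin_a = math.cos(ang), math.sin(ang)
    dx, dy = w / 2.0, h / 2.0
    corners = [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]
    poly = []
    for x, y in corners:
        poly += [cx + x * cos_a - y * sin_a, cy + x * sin_a + y * cos_a]
    return poly

def convert_annotation(xml_path, txt_path):
    root = ET.parse(xml_path).getroot()
    with open(txt_path, 'w') as f:
        for obj in root.iter('HRSC_Object'):
            cx, cy, w, h, ang = (float(obj.find(tag).text) for tag in
                                 ('mbox_cx', 'mbox_cy', 'mbox_w', 'mbox_h', 'mbox_ang'))
            # DOTA line format: x1 y1 x2 y2 x3 y3 x4 y4 category difficult
            line = ' '.join(f'{v:.1f}' for v in rbox_to_poly(cx, cy, w, h, ang))
            f.write(line + ' ship 0\n')
```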
@@ -90,10 +88,6 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]
 
 # multi-gpu testing
 ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}]
-
-# If you want to test ReDet under Cyclic group C_4 (default C_8), you need to pass the ENV: Orientation=4
-# See mmdet/models/backbones/re_resnet.py for details
-Orientation=4 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]
 ```
 
 Optional arguments:
@@ -103,7 +97,7 @@ Examples:
 
 Assume that you have already downloaded the checkpoints to `work_dirs/`.
 
-1. Test ReDet.
+1. Test ReDet with 1 GPU.
 ```shell
 python tools/test.py configs/ReDet/ReDet_re50_refpn_1x_dota15.py \
     work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_re50_refpn_1x_dota15-7f2d6dda.pth \
@@ -117,7 +111,7 @@ python tools/test.py configs/ReDet/ReDet_re50_refpn_1x_dota15.py \
     4 --out work_dirs/ReDet_re50_refpn_1x_dota15/results.pkl
 ```
 
-3. Parse the results.pkl to the format needed for [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html)
+3. Parse results for [DOTA evaluation](https://captain-whu.github.io/DOTA/evaluation.html)
 ```
 python tools/parse_results.py --config configs/ReDet/ReDet_re50_refpn_1x_dota15.py --type OBB
 ```
@@ -134,6 +128,28 @@ python tools/test.py configs/ReDet/ReDet_re50_refpn_3x_hrsc2016.py \
 python DOTA_devkit/hrsc2016_evaluation.py
 ```
 
+### Convert ReResNet+ReFPN to standard PyTorch layers
+
+We provide a [script](tools/convert_ReDet_to_torch.py) to convert the pre-trained weights of ReResNet+ReFPN to standard PyTorch layers. Take ReDet on DOTA-v1.5 as an example.
+
+1. Download the pretrained weights [here](https://drive.google.com/file/d/1AjG3-Db_hmZF1YSKRVnq8j_yuxzualRo/view?usp=sharing) and convert them to standard PyTorch layers.
+```
+python tools/convert_ReDet_to_torch.py configs/ReDet/ReDet_re50_refpn_1x_dota15.py \
+    work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_re50_refpn_1x_dota15-7f2d6dda.pth \
+    work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_r50_fpn_1x_dota15.pth
+```
+
+2. Use standard ResNet+FPN as the backbone of ReDet and test it on DOTA-v1.5.
+```
+mkdir work_dirs/ReDet_r50_fpn_1x_dota15
+
+bash ./tools/dist_test.sh configs/ReDet/ReDet_r50_fpn_1x_dota15.py \
+    work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_r50_fpn_1x_dota15.pth 8 \
+    --out work_dirs/ReDet_r50_fpn_1x_dota15/results.pkl
+
+# Submit the parsed results to the evaluation server.
+python tools/parse_results.py --config configs/ReDet/ReDet_r50_fpn_1x_dota15.py
+```
 
 ### Demo of inference in a large size image.
 
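Some background on why the conversion added in this hunk is possible: ReResNet and ReFPN are built from e2cnn equivariant modules, and e2cnn can export such a module, once it is in eval mode, to an equivalent standard PyTorch layer whose kernels are the expanded equivariant filters. Below is a minimal sketch of that export idea, assuming e2cnn's documented API; the repo's actual converter is `tools/convert_ReDet_to_torch.py`.

```python
# Sketch only: export an e2cnn equivariant conv to a plain torch.nn.Conv2d.
# Assumes the e2cnn API; ReDet's real converter is tools/convert_ReDet_to_torch.py.
import e2cnn.nn as enn
from e2cnn import gspaces

gspace = gspaces.Rot2dOnR2(N=8)                              # cyclic group C_8, as in ReResNet
in_type = enn.FieldType(gspace, 3 * [gspace.trivial_repr])   # RGB input
out_type = enn.FieldType(gspace, 8 * [gspace.regular_repr])  # 8 regular fields
econv = enn.R2Conv(in_type, out_type, kernel_size=3, padding=1)

econv.eval()           # export is only defined for eval mode
conv = econv.export()  # -> a standard torch.nn.Conv2d with expanded kernels
print(type(conv), tuple(conv.weight.shape))
```

Because the exported kernels are exactly the expanded equivariant filters, testing with the converted ResNet+FPN backbone preserves the rotation equivariance learned during training, which is what the README changelog entry below refers to.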
@@ -159,10 +175,6 @@ to the GPU num, e.g., 0.01 for 4 GPUs and 0.04 for 16 GPUs.
 
 ```shell
 python tools/train.py ${CONFIG_FILE}
-
-# If you want to train a model under Cyclic group C_4 (default C_8), you need to pass the ENV: Orientation=4
-# See mmdet/models/backbones/re_resnet.py for details
-Orientation=4 python tools/train.py ${CONFIG_FILE}
 ```
 
 If you want to specify the working directory in the command, you can add an argument `--work_dir ${YOUR_WORK_DIR}`.
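The hunk header above quotes the linear scaling rule from the surrounding text: the learning rate scales with the GPU count (0.01 for 4 GPUs, 0.04 for 16 GPUs, i.e. 0.0025 per GPU). In mmdetection v1.x-style configs this is a single field; a hypothetical sketch with illustrative values:

```python
# Hypothetical mmdet v1.x-style optimizer config illustrating the linear
# scaling rule quoted above (0.0025 per GPU: 0.01 for 4 GPUs, 0.04 for 16).
gpus = 4
optimizer = dict(
    type='SGD',
    lr=0.0025 * gpus,
    momentum=0.9,
    weight_decay=0.0001)
```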
@@ -199,124 +211,3 @@ You can check [slurm_train.sh](tools/slurm_train.sh) for full arguments and environment variables.
 If you have just multiple machines connected with ethernet, you can refer to
 pytorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
 Usually it is slow if you do not have high speed networking like infiniband.
-
-
-## How-to
-
-### Use my own datasets
-
-The simplest way is to convert your dataset to existing dataset formats (COCO or PASCAL VOC).
-
-Here we show an example of adding a custom dataset of 5 classes, assuming it is also in COCO format.
-
-In `mmdet/datasets/my_dataset.py`:
-
-```python
-from .coco import CocoDataset
-
-
-class MyDataset(CocoDataset):
-
-    CLASSES = ('a', 'b', 'c', 'd', 'e')
-```
-
-In `mmdet/datasets/__init__.py`:
-
-```python
-from .my_dataset import MyDataset
-```
-
-Then you can use `MyDataset` in config files, with the same API as CocoDataset.
-
-
-It is also fine if you do not want to convert the annotation format to COCO or PASCAL format.
-Actually, we define a simple annotation format, and all existing datasets are
-processed to be compatible with it, either online or offline.
-
-The annotation of a dataset is a list of dicts; each dict corresponds to an image.
-There are 3 fields, `filename` (relative path), `width` and `height`, for testing,
-and an additional field `ann` for training. `ann` is also a dict containing at least 2 fields:
-`bboxes` and `labels`, both of which are numpy arrays. Some datasets may provide
-annotations like crowd/difficult/ignored bboxes; we use `bboxes_ignore` and `labels_ignore`
-to cover them.
-
-Here is an example.
-```
-[
-    {
-        'filename': 'a.jpg',
-        'width': 1280,
-        'height': 720,
-        'ann': {
-            'bboxes': <np.ndarray, float32> (n, 4),
-            'labels': <np.ndarray, float32> (n, ),
-            'bboxes_ignore': <np.ndarray, float32> (k, 4),
-            'labels_ignore': <np.ndarray, float32> (k, ) (optional field)
-        }
-    },
-    ...
-]
-```
-
-There are two ways to work with custom datasets.
-
-- online conversion
-
-  You can write a new Dataset class inherited from `CustomDataset` and overwrite two methods,
-  `load_annotations(self, ann_file)` and `get_ann_info(self, idx)`,
-  like [CocoDataset](mmdet/datasets/coco.py) and [VOCDataset](mmdet/datasets/voc.py).
-
-- offline conversion
-
-  You can convert the annotation format to the expected format above and save it to
-  a pickle or json file, like [pascal_voc.py](tools/convert_datasets/pascal_voc.py).
-  Then you can simply use `CustomDataset`.
-
-### Develop new components
-
-We basically categorize model components into 4 types.
-
-- backbone: usually an FCN network to extract feature maps, e.g., ResNet, MobileNet.
-- neck: the component between backbones and heads, e.g., FPN, PAFPN.
-- head: the component for specific tasks, e.g., bbox prediction and mask prediction.
-- roi extractor: the part for extracting RoI features from feature maps, e.g., RoI Align.
-
-Here we show how to develop new components with an example of MobileNet.
-
-1. Create a new file `mmdet/models/backbones/mobilenet.py`.
-
-```python
-import torch.nn as nn
-
-from ..registry import BACKBONES
-
-
-@BACKBONES.register_module
-class MobileNet(nn.Module):
-
-    def __init__(self, arg1, arg2):
-        pass
-
-    def forward(self, x):  # should return a tuple
-        pass
-```
-
-2. Import the module in `mmdet/models/backbones/__init__.py`.
-
-```python
-from .mobilenet import MobileNet
-```
-
-3. Use it in your config file.
-
-```python
-model = dict(
-    ...
-    backbone=dict(
-        type='MobileNet',
-        arg1=xxx,
-        arg2=xxx),
-    ...
-```
-
-For more information on how it works, you can refer to [TECHNICAL_DETAILS.md](TECHNICAL_DETAILS.md) (TODO).
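The "offline conversion" route in the removed text above is short enough to illustrate: build the middle-format list of dicts and dump it to a pickle that `CustomDataset` can load. A hypothetical minimal example, with paths, boxes and labels invented for illustration:

```python
# Hypothetical example of the offline conversion described above:
# write the middle-format annotations to a pickle for CustomDataset.
import mmcv
import numpy as np

annotations = [{
    'filename': 'a.jpg',
    'width': 1280,
    'height': 720,
    'ann': {
        # one box in (x1, y1, x2, y2) order; labels are 1-based in mmdet v1.x
        'bboxes': np.array([[10., 20., 200., 150.]], dtype=np.float32),
        'labels': np.array([1], dtype=np.int64),
        'bboxes_ignore': np.zeros((0, 4), dtype=np.float32),
    },
}]

# mmcv.dump picks the serialization format from the extension (.pkl here)
mmcv.dump(annotations, 'data/my_dataset/train_annotations.pkl')
```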

README.md (+4 -2)

@@ -18,9 +18,11 @@ More precisely, we incorporate rotation-equivariant networks into the detector t
 Based on the rotation-equivariant features, we also present Rotation-invariant RoI Align (RiRoI Align), which adaptively extracts rotation-invariant features from equivariant features according to the orientation of RoI.
 Extensive experiments on several challenging aerial image datasets DOTA-v1.0, DOTA-v1.5 and HRSC2016, show that our method can achieve state-of-the-art performance on the task of aerial object detection.
 Compared with previous best results, our ReDet gains 1.2, 3.5 and 2.6 mAP on DOTA-v1.0, DOTA-v1.5 and HRSC2016 respectively while reducing the number of parameters by 60% (313 Mb vs. 121 Mb).
+
 ## Changelog
 
-* **2021-04-13**. Update our [pretrained ReResNet](https://drive.google.com/file/d/1FshfREfLZaNl5FcaKrH0lxFyZt50Uyu2/view) and fix by [this commit](https://github.com/csuhan/ReDet/commit/88f8170db12a34ec342ab61571db217c9589888d). For the users that can not reach our reported mAP, please download it and train again.
+* **2022-03-28**. Speed up ReDet now! We convert the pre-trained weights of ReResNet+ReFPN to standard PyTorch layers (see [GETTING_STARTED.md](GETTING_STARTED.md)). In the testing phase, you can directly use ResNet+FPN as the backbone of ReDet without compromising its rotation equivariance. Besides, you can also convert ReResNet to standard ResNet with [this script](https://github.com/csuhan/ReDet/blob/ReDet_mmcls/tools/convert_re_resnet_to_torch.py).
+* **2021-04-13**. Update our [pretrained ReResNet](https://drive.google.com/file/d/1FshfREfLZaNl5FcaKrH0lxFyZt50Uyu2/view) and fix by [this commit](https://github.com/csuhan/ReDet/commit/88f8170db12a34ec342ab61571db217c9589888d). If you cannot reach the reported mAP, please download it and try again.
 * **2021-03-09**. Code released.
 
 ## Benchmark and model zoo
@@ -64,7 +66,7 @@ Please see [GETTING_STARTED.md](GETTING_STARTED.md) for the basic usage.
 
 ## Citation
 
-```
+```BibTeX
 @InProceedings{han2021ReDet,
     author = {Han, Jiaming and Ding, Jian and Xue, Nan and Xia, Gui-Song},
     title = {ReDet: A Rotation-equivariant Detector for Aerial Object Detection},
