# GETTING_STARTED.md

This page provides basic tutorials about the usage of ReDet.
For installation instructions, please see [INSTALL.md](INSTALL.md).

## Prepare DOTA dataset.

It is recommended to symlink the dataset root to `ReDet/data`.
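A minimal sketch of creating that symlink (the source path is a placeholder -- point it at wherever you extracted the dataset):

```python
# Link an existing DOTA-v1.5 extraction into ReDet/data.
# "/path/to/dota15" is a placeholder, not a real path in this repo.
import os

dota_root = "/path/to/dota15"
link_path = os.path.join("ReDet", "data", "dota15")

os.makedirs(os.path.dirname(link_path), exist_ok=True)
if not os.path.islink(link_path) and not os.path.exists(link_path):
    os.symlink(dota_root, link_path)
```

The equivalent shell one-liner (`ln -s /path/to/dota15 ReDet/data/dota15`) works just as well; the symlink keeps large datasets out of the repo tree.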

First, make sure your initial data are in the following structure.

```
data/dota15
├── train
│   ├──images
│   └──labelTxt
├── val
│   ├──images
│   └──labelTxt
└── test
    └──images
```

Split the original images and create COCO format json (see the split scripts under `DOTA_devkit/`).
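Conceptually, the split step tiles each large DOTA image into overlapping fixed-size patches, so objects that straddle a patch border survive intact in at least one patch. A minimal sketch of the sliding-window computation (the 1024 patch size matches the `1024` folder names below, but the 200-pixel overlap is an assumption; the real `DOTA_devkit` scripts also crop the images, clip annotations, and write the COCO json):

```python
def sliding_windows(width, height, patch=1024, gap=200):
    """Top-left corners of overlapping patches covering a width x height image.

    Consecutive windows advance by (patch - gap) pixels; an extra window is
    clamped to the border when the grid would otherwise fall short of it.
    """
    stride = patch - gap
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    if xs[-1] + patch < width:            # clamp a final window to the right edge
        xs.append(width - patch)
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    if ys[-1] + patch < height:           # clamp a final window to the bottom edge
        ys.append(height - patch)
    return [(x, y) for y in ys for x in xs]

# A 2000x2000 image yields a 3x3 grid of 1024x1024 windows.
corners = sliding_windows(2000, 2000)
```

Every window stays inside the image bounds, so each patch can be cropped and saved directly.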

Then you will get data in the following structure:

```
dota15_1024
├── test1024
│   ├──DOTA_test1024.json
│   └──images
└── trainval1024
    ├──DOTA_trainval1024.json
    └──images
```
For data preparation with data augmentation, refer to `DOTA_devkit/prepare_dota1_5_v2.py`.

## Prepare HRSC2016 dataset.

First, make sure your initial data are in the following structure.

```
data/HRSC2016
├── Train
│   ├──AllImages
│   └──Annotations
└── Test
    ├──AllImages
    └──Annotations
```

Then you need to convert HRSC2016 to DOTA's format, i.e.,
rename `AllImages` to `images` and convert the xml `Annotations` to DOTA's `txt` format.
Here we provide a script from s2anet: [HRSC2DOTA.py](https://github.com/csuhan/s2anet/blob/original_version/DOTA_devkit/HRSC2DOTA.py).
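HRSC2016 stores each ship as a rotated box (center, size, angle) in xml, while DOTA's txt format lists the four corner points followed by the class name and a difficulty flag. A minimal sketch of that conversion (the `mbox_*` tag names follow HRSC2016's annotation schema as we understand it; the linked HRSC2DOTA.py is the authoritative version):

```python
import math
import xml.etree.ElementTree as ET

def rbox_to_corners(cx, cy, w, h, ang):
    """Return the 4 corners of a rotated box as a flat [x1, y1, ..., x4, y4] list."""
    cos_a, sin_a = math.cos(ang), math.sin(ang)
    pts = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        pts += [cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a]
    return pts

def hrsc_xml_to_dota_lines(xml_text, cls="ship", difficulty=0):
    """Convert one HRSC2016 annotation document to DOTA-format txt lines."""
    root = ET.fromstring(xml_text)
    lines = []
    for obj in root.iter("HRSC_Object"):
        vals = [float(obj.findtext(t)) for t in
                ("mbox_cx", "mbox_cy", "mbox_w", "mbox_h", "mbox_ang")]
        coords = " ".join(f"{c:.1f}" for c in rbox_to_corners(*vals))
        lines.append(f"{coords} {cls} {difficulty}")
    return lines

sample = """<Annotation><HRSC_Objects><HRSC_Object>
  <mbox_cx>100</mbox_cx><mbox_cy>50</mbox_cy>
  <mbox_w>40</mbox_w><mbox_h>20</mbox_h><mbox_ang>0</mbox_ang>
</HRSC_Object></HRSC_Objects></Annotation>"""
print(hrsc_xml_to_dota_lines(sample))
# -> ['80.0 40.0 120.0 40.0 120.0 60.0 80.0 60.0 ship 0']
```

One txt file per image, one line per object, is all the DOTA toolchain expects.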
Now, your `data/HRSC2016` should contain the following folders.

### Convert ReResNet+ReFPN to standard Pytorch layers

We provide a [script](tools/convert_ReDet_to_torch.py) to convert the pre-trained weights of ReResNet+ReFPN to standard Pytorch layers. Take ReDet on DOTA-v1.5 as an example.

1. Download the pretrained weights [here](https://drive.google.com/file/d/1AjG3-Db_hmZF1YSKRVnq8j_yuxzualRo/view?usp=sharing) and convert them to standard Pytorch layers.
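The conversion is essentially checkpoint surgery on a saved `state_dict`. The sketch below illustrates only the renaming/pruning half with plain dicts standing in for tensors; every key name and suffix in it is hypothetical, and the real `tools/convert_ReDet_to_torch.py` additionally has to expand each rotation-equivariant filter basis into an ordinary convolution kernel, which requires the equivariant layers themselves.

```python
# Hypothetical sketch: drop equivariant-wrapper entries and rename the
# remaining keys so they line up with standard ResNet+FPN parameter names.
# The suffixes and key patterns below are illustrative, not the script's own.
def strip_equivariant_wrappers(state_dict, drop_suffixes=(".filter", ".expanded_bias")):
    converted = {}
    for key, value in state_dict.items():
        if key.endswith(drop_suffixes):   # cached export-only tensors
            continue
        converted[key.replace(".conv.weights", ".weight")] = value
    return converted

checkpoint = {
    "backbone.layer1.conv.weights": [0.1, 0.2],   # stand-in for a tensor
    "backbone.layer1.conv.filter": [9.9],         # wrapper cache, dropped
}
print(strip_equivariant_wrappers(checkpoint))
# -> {'backbone.layer1.weight': [0.1, 0.2]}
```

The converted dict can then be saved and loaded into a plain ResNet+FPN backbone for testing.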
# README.md

More precisely, we incorporate rotation-equivariant networks into the detector to extract rotation-equivariant features, which can accurately predict the orientation and lead to a huge reduction of network parameters.
Based on the rotation-equivariant features, we also present Rotation-invariant RoI Align (RiRoI Align), which adaptively extracts rotation-invariant features from equivariant features according to the orientation of RoI.
Extensive experiments on several challenging aerial image datasets, DOTA-v1.0, DOTA-v1.5 and HRSC2016, show that our method can achieve state-of-the-art performance on the task of aerial object detection.
Compared with previous best results, our ReDet gains 1.2, 3.5 and 2.6 mAP on DOTA-v1.0, DOTA-v1.5 and HRSC2016 respectively, while reducing the number of parameters by 60% (313 Mb vs. 121 Mb).

## Changelog

* **2022-03-28**. Speed up ReDet now! We convert the pre-trained weights of ReResNet+ReFPN to standard Pytorch layers (see [GETTING_STARTED.md](GETTING_STARTED.md)). In the testing phase, you can directly use ResNet+FPN as the backbone of ReDet without compromising its rotation equivariance. Besides, you can also convert ReResNet to standard ResNet with [this script](https://github.com/csuhan/ReDet/blob/ReDet_mmcls/tools/convert_re_resnet_to_torch.py).
* **2021-04-13**. Update our [pretrained ReResNet](https://drive.google.com/file/d/1FshfREfLZaNl5FcaKrH0lxFyZt50Uyu2/view) and fix by [this commit](https://github.com/csuhan/ReDet/commit/88f8170db12a34ec342ab61571db217c9589888d). If you cannot reach the reported mAP, please download it and try again.
* **2021-03-09**. Code released.

## Benchmark and model zoo

Please see [GETTING_STARTED.md](GETTING_STARTED.md) for the basic usage.

## Citation

```BibTeX
@InProceedings{han2021ReDet,
  author    = {Han, Jiaming and Ding, Jian and Xue, Nan and Xia, Gui-Song},
  title     = {ReDet: A Rotation-equivariant Detector for Aerial Object Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}
```