
Commit f4810ac

ppwwyyxx authored and facebook-github-bot committed on Jan 6, 2020

update docs

Summary: Pull Request resolved: fairinternal/detectron2#365
Differential Revision: D19292963
Pulled By: ppwwyyxx
fbshipit-source-id: b209bdedfeb81f8aacf8239a0a31b99fc26fd2c5

1 parent 0243191 · commit f4810ac

File tree: 12 files changed, +35 −28 lines

.github/ISSUE_TEMPLATE/questions-help-support.md (+1 −1)

@@ -1,5 +1,5 @@
 ---
-name: "❓How to Use Detectron2"
+name: "❓How to do something?"
 about: How to do X with detectron2? How detectron2 does X?
 
 ---

GETTING_STARTED.md (+2 −1)

@@ -26,7 +26,8 @@ python demo/demo.py --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_
 The configs are made for training, therefore we need to specify `MODEL.WEIGHTS` to a model from model zoo for evaluation.
 This command will run the inference and show visualizations in an OpenCV window.
 
-For details of the command line arguments, see `demo.py -h`. Some common ones are:
+For details of the command line arguments, see `demo.py -h` or look at its source code
+to understand its behavior. Some common arguments are:
 * To run __on your webcam__, replace `--input files` with `--webcam`.
 * To run __on a video__, replace `--input files` with `--video-input video.mp4`.
 * To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`.

INSTALL.md (+5 −5)

@@ -23,14 +23,14 @@ After having the above dependencies, run:
 git clone https://github.com/facebookresearch/detectron2.git
 cd detectron2
 pip install -e .
+# (add --user if you don't have permission)
 
 # or if you are on macOS
 # MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ pip install -e .
-
-# or, as an alternative to `pip install`, use
-# python setup.py build develop
 ```
-Note: you often need to rebuild detectron2 after reinstalling PyTorch.
+
+To __rebuild__ detectron2, `rm -rf build/ **/*.so` then `pip install -e .`.
+You often need to rebuild detectron2 after reinstalling PyTorch.
 
 ### Common Installation Issues
 
@@ -60,7 +60,7 @@ Undefined C++ symbols in `detectron2/_C*.so`.
 Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ run time.
 This can happen with old anaconda.
 
-Try `conda update libgcc`. Then remove the files you built (`build/`, `**/*.so`) and rebuild them.
+Try `conda update libgcc`. Then rebuild detectron2.
 </details>
 
 <details>

demo/README.md (+1 −1)

@@ -1,7 +1,7 @@
 
 ## Detectron2 Demo
 
-We provide a command line tools for running a simple demo.
+We provide a command line tool to run a simple demo of builtin models.
 The usage is explained in [GETTING_STARTED.md](../GETTING_STARTED.md).
 
 See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-)

detectron2/config/defaults.py (+1 −1)

@@ -30,7 +30,7 @@
 # to be loaded to the model. You can find available models in the model zoo.
 _C.MODEL.WEIGHTS = ""
 
-# Values to be used for image normalization (BGR order).
+# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR).
 # To train on images of different number of channels, just set different mean & std.
 # Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
 _C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]
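The comment change above clarifies that these normalization constants are in BGR channel order. As a minimal pure-Python sketch of the per-channel normalization these values imply (the `normalize_bgr` helper is hypothetical; the mean values come from the config above, and `[1.0, 1.0, 1.0]` is assumed for the std, matching detectron2's `MODEL.PIXEL_STD` default at the time):

```python
# Default ImageNet mean pixel values in BGR order, as in _C.MODEL.PIXEL_MEAN.
PIXEL_MEAN = [103.530, 116.280, 123.675]
PIXEL_STD = [1.0, 1.0, 1.0]  # assumed default for MODEL.PIXEL_STD

def normalize_bgr(pixel):
    """Normalize one BGR pixel: (value - mean) / std, per channel."""
    return [(v - m) / s for v, m, s in zip(pixel, PIXEL_MEAN, PIXEL_STD)]

print(normalize_bgr(PIXEL_MEAN))  # a mean-valued pixel maps to [0.0, 0.0, 0.0]
```

Note that a pixel read in RGB order would normalize incorrectly, which is why the comment ties the constants to `INPUT.FORMAT`.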

detectron2/evaluation/coco_evaluation.py (+7 −1)

@@ -44,7 +44,13 @@ def __init__(self, dataset_name, cfg, distributed, output_dir=None):
             cfg (CfgNode): config instance
             distributed (True): if True, will collect results from all ranks for evaluation.
                 Otherwise, will evaluate the results in the current process.
-            output_dir (str): optional, an output directory to dump results.
+            output_dir (str): optional, an output directory to dump all
+                results predicted on the dataset. The dump contains two files:
+
+                1. "instance_predictions.pth" a file in torch serialization
+                   format that contains all the raw original predictions.
+                2. "coco_instances_results.json" a json file in COCO's result
+                   format.
         """
         self._tasks = self._tasks_from_config(cfg)
         self._distributed = distributed
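The expanded docstring names the two dump files. As a hedged illustration of the second one, the entries of `coco_instances_results.json` follow COCO's published detection-result format: a flat JSON list of dicts with `image_id`, `category_id`, `bbox` (as `[x, y, width, height]` in absolute pixels), and `score`. The sample values below are made up; only the standard library is used:

```python
import json
import os
import tempfile

# One detection in COCO result format (sample values are illustrative).
result = {"image_id": 42, "category_id": 1, "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.9}

path = os.path.join(tempfile.mkdtemp(), "coco_instances_results.json")
with open(path, "w") as f:
    json.dump([result], f)  # the file holds a flat list of such dicts

with open(path) as f:
    loaded = json.load(f)
print(loaded[0]["score"])  # 0.9
```

A file in this shape can be fed directly to the official `pycocotools` evaluation API, which is what makes the dump useful outside detectron2.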
detectron2/modeling/proposal_generator/__init__.py (+1 −1)

@@ -1,3 +1,3 @@
 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
 from .build import PROPOSAL_GENERATOR_REGISTRY, build_proposal_generator
-from .rpn import RPN_HEAD_REGISTRY, build_rpn_head
+from .rpn import RPN_HEAD_REGISTRY, build_rpn_head, RPN

detectron2/modeling/proposal_generator/rpn.py (+2 −2)

@@ -133,8 +133,8 @@ def forward(self, images, features, gt_instances=None):
                 Each `Instances` stores ground-truth instances for the corresponding image.
 
         Returns:
-            proposals: list[Instances] or None
-            loss: dict[Tensor]
+            proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits"
+            loss: dict[Tensor] or None
         """
         gt_boxes = [x.gt_boxes for x in gt_instances] if gt_instances is not None else None
         del gt_instances

detectron2/modeling/proposal_generator/rrpn.py (+1 −14)

@@ -28,20 +28,7 @@ def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]):
         self.box2box_transform = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS)
 
     def forward(self, images, features, gt_instances=None):
-        """
-        Args:
-            images (ImageList): input images of length `N`
-            features (dict[str: Tensor]): input data as a mapping from feature
-                map name to tensor. Axis 0 represents the number of images `N` in
-                the input data; axes 1-3 are channels, height, and width, which may
-                vary between feature maps (e.g., if a feature pyramid is used).
-            gt_instances (list[Instances], optional): a length `N` list of `Instances`s.
-                Each `Instances` stores ground-truth instances for the corresponding image.
-
-        Returns:
-            proposals: list[Instances] or None
-            loss: dict[Tensor]
-        """
+        # same signature as RPN.forward
         gt_boxes = [x.gt_boxes for x in gt_instances] if gt_instances is not None else None
         del gt_instances
         features = [features[f] for f in self.in_features]

detectron2/structures/boxes.py (+1 −1)

@@ -120,7 +120,7 @@ class Boxes:
     (support indexing, `to(device)`, `.device`, and iteration over all boxes)
 
     Attributes:
-        tensor: float matrix of Nx4.
+        tensor (torch.Tensor): float matrix of Nx4.
     """
 
     BoxSizeType = Union[List[int], Tuple[int, int]]

docs/tutorials/models.md (+3)

@@ -96,3 +96,6 @@ from detectron2.utils.events import EventStorage
 with EventStorage() as storage:
   losses = model(inputs)
 ```
+
+Another small thing to remember: detectron2 models do not support `model.to(device)` or `model.cpu()`.
+The device is defined in `cfg.MODEL.DEVICE` and cannot be changed afterwards.

projects/DensePose/doc/GETTING_STARTED.md (+10)

@@ -11,6 +11,16 @@ Please see [Apply Net](TOOL_APPLY_NET.md) for more details on the tool.
 
 ## Training
 
+First, prepare the [dataset](http://densepose.org/#dataset) into the following structure under the directory you'll run training scripts:
+<pre>
+datasets/coco/
+  annotations/
+    densepose_{train,minival,valminusminival}2014.json
+    <a href="densepose/densepose_minival2014_100.json">densepose_minival2014_100.json</a> (optional, for testing only)
+  {train,val}2014/
+    # image files that are mentioned in the corresponding json
+</pre>
+
 To train a model one can use the [train_net.py](../train_net.py) script.
 This script was used to train all DensePose models in [Model Zoo](MODEL_ZOO.md).
 For example, to launch end-to-end DensePose-RCNN training with ResNet-50 FPN backbone
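The dataset layout added in this hunk can be sanity-checked before launching training. A small standard-library sketch (the `missing_entries` helper is hypothetical; the required paths are transcribed from the tree above, expanding its `{...}` shorthand):

```python
from pathlib import Path

# Entries the DensePose docs expect under datasets/coco/ (expanded from the tree).
REQUIRED = [
    "annotations/densepose_train2014.json",
    "annotations/densepose_minival2014.json",
    "annotations/densepose_valminusminival2014.json",
    "train2014",
    "val2014",
]

def missing_entries(root="datasets/coco"):
    """Return the expected files/dirs that do not exist under `root`."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]
```

An empty return value means the layout matches; otherwise the list names what still has to be downloaded or moved into place.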
