docs/en/user_guides/finetune.md (+2 −2)
@@ -12,7 +12,7 @@ Take the finetuning process on Cityscapes Dataset as an example, the users need
## Inherit base configs
- To release the burden and reduce bugs in writing the whole configs, MMDetection V2.0 support inheriting configs from multiple existing configs. To finetune a Mask RCNN model, the new config needs to inherit
+ To release the burden and reduce bugs in writing the whole configs, MMDetection V3.0 supports inheriting configs from multiple existing configs. To finetune a Mask RCNN model, the new config needs to inherit
`_base_/models/mask-rcnn_r50_fpn.py` to build the basic structure of the model. To use the Cityscapes Dataset, the new config can also simply inherit `_base_/datasets/cityscapes_instance.py`. For runtime settings such as logger settings, the new config needs to inherit `_base_/default_runtime.py`. For training schedules, the new config can inherit `_base_/schedules/schedule_1x.py`. These configs are in the `configs` directory and the users can also choose to write the whole contents rather than use inheritance.
```python
@@ -56,7 +56,7 @@ model = dict(
## Modify dataset
- The users may also need to prepare the dataset and write the configs about dataset, refer to [Customize Datasets](../advanced_guides/customize_dataset.md) for more detail. MMDetection V3.0 already supports VOC, WIDERFACE, COCO, LIVS, OpenImages, DeepFashion and Cityscapes Dataset.
+ The users may also need to prepare the dataset and write the dataset configs; refer to [Customize Datasets](../advanced_guides/customize_dataset.md) for more detail. MMDetection V3.0 already supports VOC, WIDERFACE, COCO, LVIS, OpenImages, DeepFashion, Objects365, and Cityscapes Dataset.
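The `_base_` inheritance described above boils down to field-wise overriding: the child config only restates the fields it changes (for example `num_classes`, since Cityscapes instance segmentation has 8 classes versus COCO's 80). A minimal sketch of the merge semantics, assuming a simplified recursive dict merge rather than MMEngine's full `Config` machinery:

```python
def merge(base, override):
    """Recursively merge `override` into `base`, child values winning.

    Simplified illustration of config inheritance; the real MMEngine
    Config also handles `_delete_` keys, multiple bases, etc.
    """
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # merge nested dicts
        else:
            out[key] = value  # override scalars and non-dict values
    return out

# Hypothetical fragment of an inherited model config (COCO defaults).
base_model = {"roi_head": {"bbox_head": {"num_classes": 80, "loss_weight": 1.0}}}

# The finetuning config only restates what changes for Cityscapes.
cityscapes_override = {"roi_head": {"bbox_head": {"num_classes": 8}}}

merged = merge(base_model, cityscapes_override)
```

Fields that the child config does not mention (here `loss_weight`) keep their base values after the merge.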
docs/en/user_guides/inference.md (+5 −5)
@@ -5,7 +5,7 @@ This note will show how to inference, which means using trained models to detect
In MMDetection, a model is defined by a [configuration file](config.md) and existing model parameters are saved in a checkpoint file.
- To start with, we recommend [Faster RCNN](../../../configs/faster_rcnn) with this [configuration file](../../../configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth). It is recommended to download the checkpoint file to `checkpoints` directory.
+ To start with, we recommend [Faster RCNN](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth). It is recommended to download the checkpoint file to the `checkpoints` directory.
@@ -84,14 +84,14 @@ for frame in track_iter_progress(video_reader):
cv2.destroyAllWindows()
```
- A notebook demo can be found in [demo/inference_demo.ipynb](../../../demo/inference_demo.ipynb).
+ A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/3.x/demo/inference_demo.ipynb).
Note: `inference_detector` only supports single-image inference for now.
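Because `inference_detector` handles one image at a time, a batch of images can be processed with a simple loop over single-image calls. A minimal sketch of that pattern; the real call would be `mmdet.apis.inference_detector(model, img)`, but a stub detector stands in here so the loop itself is runnable without MMDetection installed:

```python
def detect_images(run_detector, image_paths):
    """Apply a single-image detector to each image path in turn.

    `run_detector` stands in for a call like
    `lambda img: inference_detector(model, img)` from mmdet.apis.
    """
    return [run_detector(path) for path in image_paths]


# Stub detector for illustration only: records which image it was given.
def stub_detector(path):
    return {"image": path, "detections": []}


results = detect_images(stub_detector, ["demo1.jpg", "demo2.jpg"])
```

The same wrapper works unchanged once `stub_detector` is replaced by a closure over a model built with `init_detector`.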
## Demos
We also provide three demo scripts, implemented with high-level APIs and supporting functionality code.
- Source codes are available [here](../../../demo).
+ Source codes are available [here](https://github.com/open-mmlab/mmdetection/blob/3.x/demo).