Merged
53 commits
a54ef7c
Add doc of modify loss (#3777)
yhcao6 Sep 29, 2020
c2584e1
improve the function of simple_test_bboxes (#3853)
yuzhj Sep 30, 2020
bb514fa
Clean background_labels in the dense heads (#3221)
ZwwWayne Sep 30, 2020
b3f1e05
fix rpn transforming bug in two stage networks (#3754)
yuzhj Sep 30, 2020
b347bf2
[Refactor] refactor get_subset_by_classes in dataloader for training …
ZwwWayne Sep 30, 2020
c2da97c
Fix nonzero in NMS for PyTorch 1.6.0 (#3867)
shinya7y Oct 1, 2020
5b18b94
Support TTA of ATSS, FCOS, YOLOv3 (#3844)
shinya7y Oct 1, 2020
a15edd6
[Docs] Fix typo in docs/tutorials/new_dataset.md (#3876)
Oct 3, 2020
d40e19b
[Docs] Remove duplicate content in docs/config.md (#3875)
Oct 3, 2020
ced1c57
[Enhance]: Convert mask to bool before using it as img's index for ro…
Oct 3, 2020
4a67d66
Fix typo in bbox_flip (#3886)
yeliudev Oct 6, 2020
31dbbf7
fix the API change bug of PAA (#3883)
ZwwWayne Oct 7, 2020
aebbaff
fix cv2 import error of ligGL.so.1 (#3891)
aboettcher Oct 8, 2020
8753583
[enhance]: Improve documentation of modules and dataset customization…
ZwwWayne Oct 10, 2020
6a77952
support to use pytorch 1.6 in docker (#3905)
ZwwWayne Oct 10, 2020
24d6635
Add missing notes in data customization (#3906)
ZwwWayne Oct 10, 2020
547eb8d
[Fix]: fix mask rcnn training stuck problem when there is no positive…
ZwwWayne Oct 11, 2020
44a7ef2
Bump to v2.5.0 (#3879)
ZwwWayne Oct 11, 2020
0a7628d
Added `generate_inputs_and_wrap_model` function for pytorch2onnx (#3857)
RyanXLi Oct 12, 2020
9282ff1
typo (#3917)
yuzhj Oct 13, 2020
502cee2
Edit mmdet.core.export docstring (#3912)
RyanXLi Oct 13, 2020
92b5892
supports for HungarianMatchAssigner, add bbox_cxcywh_to_xyxy and bbox…
v-qjqs Oct 14, 2020
970581a
format box-wise related giou calculating as a function and implement …
v-qjqs Oct 14, 2020
609378f
supports for BboxGIoU2D and re-implements giou_loss using bbox_gious
v-qjqs Oct 15, 2020
96cb868
remove unnecessary
v-qjqs Oct 15, 2020
d6c4955
reformat
v-qjqs Oct 15, 2020
ce17002
reformat docstring
v-qjqs Oct 15, 2020
47c5c10
reformat
v-qjqs Oct 15, 2020
7c760c7
rename
v-qjqs Oct 15, 2020
59654c4
supports for giou calculating in BboxOverlaps2D, and re-implements gi…
v-qjqs Oct 15, 2020
409af1e
fix sabl validating bug in cascade_rcnn (#3913)
yuzhj Oct 15, 2020
dafe3ed
reformat
v-qjqs Oct 15, 2020
4cd0e3d
move giou related unit test from test_losses.py to test_iou2d_calcula…
v-qjqs Oct 15, 2020
0a8a50f
reformat
v-qjqs Oct 15, 2020
4829de2
Avoid division by zero in PAA head when num_pos=0
v-qjqs Oct 15, 2020
153386c
Merge branch 'master' of https://github.com/open-mmlab/mmdetection in…
v-qjqs Oct 15, 2020
25a3ac1
[Fix]: Avoid division by zero in PAA head when num_pos=0 (#3938)
ZwwWayne Oct 16, 2020
8563c12
explicitly add mode in giou_loss
v-qjqs Oct 16, 2020
df46b0f
Merge branch 'master' of https://github.com/open-mmlab/mmdetection in…
v-qjqs Oct 16, 2020
d6b1386
Add supports for giou calculation in BboxOverlaps2D, and re-implement…
v-qjqs Oct 16, 2020
90b6334
Add supports for giou calculation in BboxOverlaps2D, and add iou_calc…
v-qjqs Oct 16, 2020
a429da0
rename hungarian_match_assigner as hungarian_assigner
v-qjqs Oct 16, 2020
abd7f95
fix init
v-qjqs Oct 16, 2020
5e18f0b
reformat docstring
v-qjqs Oct 16, 2020
9703c71
Avoid division by zero in PAA head when num_pos=0
v-qjqs Oct 16, 2020
5d07dda
fix cpu (#3948)
OceanPang Oct 18, 2020
07638a0
add mode for iou_calculator and make giou cost as a default case
v-qjqs Oct 18, 2020
2ee42cb
make mode as a param in iou_calculator
v-qjqs Oct 19, 2020
5577771
Merge branch 'detr' of https://github.com/open-mmlab/mmdetection into…
v-qjqs Oct 19, 2020
776bf53
reformat docsting
v-qjqs Oct 19, 2020
8e7af6c
Merge branch 'bbox_giou2d' of https://github.com/v-qjqs/mmdetection i…
v-qjqs Oct 19, 2020
aa3dc20
Merge branch 'master' of https://github.com/open-mmlab/mmdetection in…
v-qjqs Oct 19, 2020
659f2ed
make iou_mode outside of iou_calculator
v-qjqs Oct 19, 2020
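The commits above add GIoU support to `BboxOverlaps2D`, re-implement `giou_loss` on top of it, and introduce `HungarianAssigner` together with box-format helpers such as `bbox_cxcywh_to_xyxy`. Below is a minimal sketch of the element-wise generalized IoU these changes build on (boxes in `(x1, y1, x2, y2)` format; an illustration only, not the exact MMDetection implementation):

```python
import torch


def giou(bboxes1, bboxes2, eps=1e-6):
    """Element-wise GIoU between two (N, 4) tensors of xyxy boxes (sketch only)."""
    area1 = (bboxes1[:, 2] - bboxes1[:, 0]) * (bboxes1[:, 3] - bboxes1[:, 1])
    area2 = (bboxes2[:, 2] - bboxes2[:, 0]) * (bboxes2[:, 3] - bboxes2[:, 1])

    # intersection and union
    lt = torch.max(bboxes1[:, :2], bboxes2[:, :2])
    rb = torch.min(bboxes1[:, 2:], bboxes2[:, 2:])
    wh = (rb - lt).clamp(min=0)
    overlap = wh[:, 0] * wh[:, 1]
    union = (area1 + area2 - overlap).clamp(min=eps)
    ious = overlap / union

    # smallest box enclosing both boxes
    enclose_lt = torch.min(bboxes1[:, :2], bboxes2[:, :2])
    enclose_rb = torch.max(bboxes1[:, 2:], bboxes2[:, 2:])
    enclose_wh = (enclose_rb - enclose_lt).clamp(min=0)
    enclose_area = (enclose_wh[:, 0] * enclose_wh[:, 1]).clamp(min=eps)

    return ious - (enclose_area - union) / enclose_area
```

The corresponding GIoU loss is then simply `1 - giou`.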
5 changes: 4 additions & 1 deletion .github/workflows/build.yml
@@ -49,7 +49,9 @@ jobs:
- name: Install PyTorch
run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
- name: Install MMCV
run: pip install mmcv-full==latest+torch${{matrix.torch}}+cpu -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
run: |
pip install mmcv-full==latest+torch${{matrix.torch}}+cpu -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
python -c 'import mmcv; print(mmcv.__version__)'
- name: Install unittest dependencies
run: pip install -r requirements/tests.txt -r requirements/optional.txt
- name: Build and install
@@ -118,6 +120,7 @@ jobs:
run: |
pip install mmcv-full==${{matrix.mmcv}} -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
pip install -r requirements.txt
python -c 'import mmcv; print(mmcv.__version__)'
- name: Build and install
run: |
rm -rf .eggs
5 changes: 1 addition & 4 deletions README.md
@@ -109,12 +109,11 @@ Some other methods are also supported in [projects using MMDetection](./docs/pro

Please refer to [install.md](docs/install.md) for installation and dataset preparation.


## Getting Started

Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMDetection.
We provide [colab tutorial](demo/MMDet_Tutorial.ipynb) for beginners.
There are also tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), and [adding new modules](docs/tutorials/new_modules.md).
There are also tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), [customizing models](docs/tutorials/customize_models.md), and [customizing runtime settings](docs/tutorials/customize_runtime.md).

For trouble shooting, please refer to [trouble_shooting.md](docs/trouble_shooting.md)

@@ -127,7 +126,6 @@ We appreciate all contributions to improve MMDetection. Please refer to [CONTRIB
MMDetection is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedbacks.
We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors.


## Citation

If you use this toolbox or benchmark in your research, please cite this project.
@@ -146,7 +144,6 @@ If you use this toolbox or benchmark in your research, please cite this project.
}
```


## Contact

This repo is currently maintained by Kai Chen ([@hellock](http://github.com/hellock)), Yuhang Cao ([@yhcao6](https://github.com/yhcao6)), Wenwei Zhang ([@ZwwWayne](https://github.com/ZwwWayne)),
2 changes: 1 addition & 1 deletion configs/yolo/yolov3_d53_mstrain-608_273e_coco.py
@@ -47,7 +47,7 @@
min_bbox_size=0,
score_thr=0.05,
conf_thr=0.005,
nms=dict(type='nms', iou_thr=0.45),
nms=dict(type='nms', iou_threshold=0.45),
max_per_img=100)
# dataset settings
dataset_type = 'CocoDataset'
6 changes: 3 additions & 3 deletions docker/Dockerfile
@@ -1,4 +1,4 @@
ARG PYTORCH="1.5"
ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"

@@ -8,12 +8,12 @@ ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0+PTX"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"

RUN apt-get update && apt-get install -y git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 \
RUN apt-get update && apt-get install -y ffmpeg git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

# Install MMCV
RUN pip install mmcv-full==latest+torch1.5.0+cu101 -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html
RUN pip install mmcv-full==latest+torch1.6.0+cu101 -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html

# Install MMDetection
RUN conda clean --all
5 changes: 5 additions & 0 deletions docs/api.rst
@@ -19,6 +19,11 @@ bbox
.. automodule:: mmdet.core.bbox
:members:

export
^^^^^^^^^^
.. automodule:: mmdet.core.export
:members:

mask
^^^^^^^^^^
.. automodule:: mmdet.core.mask
56 changes: 55 additions & 1 deletion docs/changelog.md
@@ -1,5 +1,60 @@
## Changelog

### v2.5.0 (5/10/2020)

#### Highlights

- Support new methods: [YOLACT](https://arxiv.org/abs/1904.02689), [CentripetalNet](https://arxiv.org/abs/2003.09119).
- Add more documentation for easier and clearer usage.

#### Backwards Incompatible Changes

**FP16 related methods are imported from mmcv instead of mmdet. (#3766, #3822)**
Mixed precision training utils in `mmdet.core.fp16` are moved to `mmcv.runner`, including `force_fp32`, `auto_fp16`, `wrap_fp16_model`, and `Fp16OptimizerHook`. A deprecation warning will be raised if users attempt to import those methods from `mmdet.core.fp16`, and the old imports will be removed in v2.8.0.
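As a quick reference, the new import path looks like the following (a minimal sketch based on the names listed above):

```python
# FP16 utilities now come from mmcv.runner instead of mmdet.core.fp16.
from mmcv.runner import Fp16OptimizerHook, auto_fp16, force_fp32, wrap_fp16_model
```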

**[0, N-1] represents foreground classes and N indicates background classes for all models. (#3221)**
Before v2.5.0, the background label for RPN is 0, and N for other heads. Now the behavior is consistent for all models. Thus `self.background_labels` in `dense_heads` is removed and all heads use `self.num_classes` to indicate the class index of background labels.
This change has no effect on the pre-trained models in the v2.x model zoo, but will affect the training of all models with RPN heads. Two-stage detectors whose RPN head uses softmax will be affected because the order of categories is changed.
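For illustration, a minimal sketch of the new labeling convention (hypothetical head code, not copied from MMDetection):

```python
import torch

num_classes = 80                # foreground classes use labels 0..num_classes-1
background_label = num_classes  # background is now num_classes in every head

# initialize all samples as background; the assigner then fills in foreground labels
labels = torch.full((1000,), background_label, dtype=torch.long)
```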

**Only call `get_subset_by_classes` when `test_mode=True` and `self.filter_empty_gt=True` (#3695)**
Function `get_subset_by_classes` in dataset is refactored and only filters out images when `test_mode=True` and `self.filter_empty_gt=True`.
In the original implementation, `get_subset_by_classes` is not related to the flag `self.filter_empty_gt` and is called whenever the classes are set during initialization, no matter whether `test_mode` is `True` or `False`. This brings ambiguous behavior and potential bugs in many cases. After v2.5.0, if `filter_empty_gt=False`, no matter whether the classes are specified in a dataset, the dataset will use all the images in the annotations. If `filter_empty_gt=True` and `test_mode=True`, no matter whether the classes are specified, the dataset will call `get_subset_by_classes` to check the images and filter out images containing no GT boxes. Therefore, users are responsible for the data filtering/cleaning process of the test dataset.
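A minimal config sketch of the resulting behavior (hypothetical classes and paths):

```python
# Hypothetical dataset config: filter_empty_gt now explicitly controls whether
# images without GT boxes are filtered, independent of whether classes is set.
classes = ('person', 'bicycle', 'car')      # hypothetical subset of classes
data = dict(
    train=dict(
        type='CocoDataset',
        ann_file='path/to/train.json',      # hypothetical path
        classes=classes,
        filter_empty_gt=True),              # filter images without GT boxes
    test=dict(
        type='CocoDataset',
        ann_file='path/to/test.json',
        classes=classes,
        filter_empty_gt=False))             # keep all annotated images
```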

#### New Features

- Test time augmentation for single stage detectors (#3844, #3638)
- Support to show the name of experiments during training (#3764)
- Add `Shear`, `Rotate`, `Translate` Augmentation (#3656, #3619, #3687)
- Add image-only transformations including `Contrast`, `Equalize`, `Color`, and `Brightness`. (#3643)
- Support [YOLACT](https://arxiv.org/abs/1904.02689) (#3456)
- Support [CentripetalNet](https://arxiv.org/abs/2003.09119) (#3390)
- Support PyTorch 1.6 in docker (#3905)

#### Bug Fixes

- Fix the bug of training ATSS when there is no ground truth boxes (#3702)
- Fix the bug of using Focal Loss when `num_pos` is 0 (#3702)
- Fix the label index mapping in dataset browser (#3708)
- Fix Mask R-CNN training stuck problem when there are no positive rois (#3713)
- Fix the bug of `self.rpn_head.test_cfg` in `RPNTestMixin` by using `self.rpn_head` in rpn head (#3808)
- Fix deprecated `Conv2d` from mmcv.ops (#3791)
- Fix device bug in RepPoints (#3836)
- Fix SABL validating bug (#3849)
- Use `https://download.openmmlab.com/mmcv/dist/index.html` for installing MMCV (#3840)
- Fix nonzero in NMS for PyTorch 1.6.0 (#3867)
- Fix the API change bug of PAA (#3883)
- Fix typo in bbox_flip (#3886)
- Fix cv2 import error of libGL.so.1 in Dockerfile (#3891)

#### Improvements

- Change to use `mmcv.utils.collect_env` for collecting environment information to avoid duplicate code (#3779)
- Update checkpoint file names to v2.0 models in documentation (#3795)
- Update tutorials for changing runtime settings (#3778) and modifying loss (#3777)
- Improve the function of `simple_test_bboxes` in SABL (#3853)
- Convert mask to bool before using it as img's index for robustness and speedup (#3870)
- Improve documentation of modules and dataset customization (#3821)

### v2.4.0 (5/9/2020)

**Highlights**
@@ -57,7 +112,6 @@ This change influences all the test APIs in MMDetection and downstream codebases
- Update urls to download.openmmlab.com (#3665)
- Support non-mask training for CocoDataset (#3711)


### v2.3.0 (5/8/2020)

**Highlights**
2 changes: 1 addition & 1 deletion docs/config.md
@@ -429,7 +429,7 @@ model = dict(
neck=dict(...))
```

The `_delete_=True` would replace all old keys in `backbone` field with new keys new keys.
The `_delete_=True` would replace all old keys in `backbone` field with new keys.

### Use intermediate variables in configs

5 changes: 3 additions & 2 deletions docs/install.md
@@ -72,7 +72,8 @@ pip install mmcv-full

| MMDetection version | MMCV version |
|:-------------------:|:-------------------:|
| master | mmcv-full>=1.1.1, <=1.2|
| master | mmcv-full>=1.1.5, <=1.2|
| 2.5.0 | mmcv-full>=1.1.5, <=1.2|
| 2.4.0 | mmcv-full>=1.1.1, <=1.2|
| 2.3.0 | mmcv-full==1.0.5|
| 2.3.0rc0 | mmcv-full>=1.0.2 |
@@ -147,7 +148,7 @@ Note: We set `use_torchvision=True` on-the-fly in CPU mode for `RoIPool` and `Ro
We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection/blob/master/docker/Dockerfile) to build an image.

```shell
# build an image with PyTorch 1.5, CUDA 10.1
# build an image with PyTorch 1.6, CUDA 10.1
docker build -t mmdetection docker/
```

@@ -1,8 +1,10 @@
# Tutorial 2: Adding New Dataset
# Tutorial 2: Customize Datasets

## Customize datasets by reorganizing data
## Support new data format

### Reorganize dataset to existing format
To support a new data format, you can either convert them to existing formats (COCO format or PASCAL format) or directly convert them to the middle format. You could also choose to convert them offline (before training, by a script) or online (implement a new dataset and do the conversion at training). In MMDetection, we recommend converting the data into COCO format and doing the conversion offline, so you only need to modify the config's data annotation paths and classes after converting your data.

### Reorganize new data formats to existing format

The simplest way is to convert your dataset to existing dataset formats (COCO or PASCAL VOC).

@@ -42,7 +44,8 @@ The annotation json files in COCO format has the following necessary keys:
```

There are three necessary keys in the json file:
- `images`: contains a list of images with theire informations like `file_name`, `height`, `width`, and `id`.

- `images`: contains a list of images with their information such as `file_name`, `height`, `width`, and `id`.
- `annotations`: contains the list of instance annotations.
- `categories`: contains the list of category names and their IDs.

@@ -80,7 +83,12 @@ data = dict(

We use this way to support CityScapes dataset. The script is in [cityscapes.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/convert_datasets/cityscapes.py) and we also provide the finetuning [configs](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes).

### Reorganize dataset to middle format
**Note**

1. For instance segmentation datasets, **MMDetection only supports evaluating mask AP of datasets in COCO format for now**.
2. It is recommended to convert the data offline before training, so you can still use `CocoDataset` and only need to modify the path of annotations and the training classes; a minimal conversion sketch is given below.
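A minimal offline-conversion sketch (hypothetical input format and helper name; the CityScapes script linked above is the real-world reference):

```python
import mmcv


def convert_to_coco(raw_items, classes, out_file):
    """Hypothetical converter: each raw item has filename, width, height,
    xyxy bboxes, and integer labels indexing into `classes`."""
    images, annotations = [], []
    ann_id = 0
    for img_id, item in enumerate(raw_items):
        images.append(
            dict(id=img_id, file_name=item['filename'],
                 width=item['width'], height=item['height']))
        for bbox, label in zip(item['bboxes'], item['labels']):
            x1, y1, x2, y2 = bbox
            annotations.append(
                dict(id=ann_id, image_id=img_id, category_id=int(label),
                     bbox=[x1, y1, x2 - x1, y2 - y1],  # COCO boxes are xywh
                     area=(x2 - x1) * (y2 - y1), iscrowd=0))
            ann_id += 1
    categories = [dict(id=i, name=name) for i, name in enumerate(classes)]
    mmcv.dump(
        dict(images=images, annotations=annotations, categories=categories),
        out_file)
```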

### Reorganize new data format to middle format

It is also fine if you do not want to convert the annotation format to COCO or PASCAL format.
Actually, we define a simple annotation format and all existing datasets are
@@ -94,7 +102,9 @@ annotations like crowd/difficult/ignored bboxes, we use `bboxes_ignore` and `lab
to cover them.

Here is an example.
```

```python

[
{
'filename': 'a.jpg',
@@ -208,14 +218,19 @@ dataset_A_train = dict(
)
```

## Customize datasets by mixing dataset
## Customize datasets by dataset wrappers

MMDetection also supports many dataset wrappers to mix the dataset or modify the dataset distribution for training.
Currently it supports three dataset wrappers as below:

MMDetection also supports to mix dataset for training.
Currently it supports to concat and repeat datasets.
- `RepeatDataset`: simply repeat the whole dataset.
- `ClassBalancedDataset`: repeat the dataset in a class-balanced manner.
- `ConcatDataset`: concatenate datasets.

### Repeat dataset

We use `RepeatDataset` as a wrapper to repeat the dataset. For example, suppose the original dataset is `Dataset_A`; to repeat it, the config looks like the following

```python
dataset_A_train = dict(
type='RepeatDataset',
@@ -234,6 +249,7 @@ We use `ClassBalancedDataset` as wrapper to repeat the dataset based on category
frequency. The dataset to repeat needs to implement the function `self.get_cat_ids(idx)`
to support `ClassBalancedDataset`; a sketch of such an implementation is given after the config below.
For example, to repeat `Dataset_A` with `oversample_thr=1e-3`, the config looks like the following

```python
dataset_A_train = dict(
type='ClassBalancedDataset',
@@ -245,6 +261,7 @@ dataset_A_train = dict(
)
)
```
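A minimal sketch of such a `get_cat_ids` implementation (hypothetical dataset class, assuming annotations in the middle format described earlier):

```python
import numpy as np

from mmdet.datasets import DATASETS, CustomDataset


@DATASETS.register_module()
class Dataset_A(CustomDataset):
    """Hypothetical dataset; only get_cat_ids is sketched here."""

    def get_cat_ids(self, idx):
        # Return the category indices present in the idx-th image so that
        # ClassBalancedDataset can compute per-category repeat factors.
        labels = self.data_infos[idx]['ann']['labels']
        return np.unique(labels).astype(np.int64).tolist()
```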

You may refer to [source code](../../mmdet/datasets/dataset_wrappers.py) for details.

### Concatenate dataset
@@ -260,7 +277,9 @@ There are three ways to concatenate the dataset.
pipeline=train_pipeline
)
```

If the concatenated dataset is used for test or evaluation, this manner supports evaluating each dataset separately. To test the concatenated datasets as a whole, you can set `separate_eval=False` as below.

```python
dataset_A_train = dict(
type='Dataset_A',
@@ -287,6 +306,7 @@ There are three ways to concatenate the dataset.
test = dataset_A_test
)
```

If the concatenated dataset is used for test or evaluation, this manner also supports evaluating each dataset separately.

3. We also support defining `ConcatDataset` explicitly as the following.
@@ -304,12 +324,13 @@ There are three ways to concatenate the dataset.
datasets=[dataset_A_val, dataset_B_val],
separate_eval=False))
```

This manner allows users to evaluate all the datasets as a single one by setting `separate_eval=False`.

**Note:**
1. The option `separate_eval=False` assumes the datasets use `self.data_infos` during evaluation. Therefore, COCO datasets do not support this behavior since COCO datasets do not fully rely on `self.data_infos` for evaluation. Combining different types of ofdatasets and evaluating them as a whole is not tested thus is not suggested.
2. Evaluating `ClassBalancedDataset` and `RepeatDataset` is not supported thus evaluating concatenated datasets of these types is also not supported.

1. The option `separate_eval=False` assumes the datasets use `self.data_infos` during evaluation. Therefore, COCO datasets do not support this behavior since COCO datasets do not fully rely on `self.data_infos` for evaluation. Combining different types of datasets and evaluating them as a whole is not tested, thus it is not suggested.
2. Evaluating `ClassBalancedDataset` and `RepeatDataset` is not supported, thus evaluating concatenated datasets of these types is also not supported.

A more complex example that repeats `Dataset_A` and `Dataset_B` by N and M times, respectively, and then concatenates the repeated datasets is as follows.

@@ -353,12 +374,12 @@ data = dict(

```

### Modify classes of existing dataset
## Modify Dataset Classes

With existing dataset types, we can modify the class names of them to train subset of the dataset.
With existing dataset types, we can modify their class names to train a subset of the annotations.
For example, if you want to train only three classes of the current dataset,
you can modify the classes of the dataset.
The dataset will subtract subset of the data which contains at least one class in the `classes`.
The dataset will filter out the ground truth boxes of other classes automatically.

```python
classes = ('person', 'bicycle', 'car')
@@ -378,10 +399,17 @@ car
```

Users can set the classes as a file path; the dataset will load it and convert it to a list automatically.

```python
classes = 'path/to/classes.txt'
data = dict(
train=dict(classes=classes),
val=dict(classes=classes),
test=dict(classes=classes))
```

**Note**:

- Before MMDetection v2.5.0, the dataset filtered out the empty GT images automatically if the classes were set, and there was no way to disable that through the config. This was an undesirable behavior and introduced confusion because if the classes were not set, the dataset only filtered the empty GT images when `filter_empty_gt=True` and `test_mode=False`. After MMDetection v2.5.0, we decouple the image filtering process and the classes modification, i.e., the dataset will only filter empty GT images when `filter_empty_gt=True` and `test_mode=False`, no matter whether the classes are set. Thus, setting the classes only influences the annotations of classes used for training and users could decide whether to filter empty GT images by themselves.
- Since the middle format only has box labels and does not contain the class names, when using `CustomDataset`, users cannot filter out the empty GT images through configs and can only do this offline.
- The features for setting dataset classes and dataset filtering will be refactored to be more user-friendly in v2.6.0 or v2.7.0 (depending on the progress).