
Conversation

@myownskyW7 (Collaborator)

No description provided.

@myownskyW7 myownskyW7 requested review from ZwwWayne and hellock August 22, 2020 08:18
@hellock (Member) commented Aug 22, 2020

Unit tests are missing.

return decoded_bboxes


def generat_buckets(proposals, bucket_num, scale_factor=1.0):
Member:

Follow the naming conventions: bucket_num -> num_buckets

scale_factor (float): Scale factor to rescale proposals.
offset_topk (int): Topk buckets are used to generate \
bucket fine regression targets. Defaults to 2.
offset_allow (float): Offset allowance to generate \
Member:

We may consider a more descriptive variable name.

# generate bucket labels and weight
side_num = int(np.ceil(bucket_num / 2.0))
labels = torch.cat([
l_label[:, 0][:, None], r_label[:, 0][:, None], t_label[:, 0][:, None],
Member:

labels = torch.stack([l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1)
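The suggested `torch.stack` form is equivalent to the `torch.cat`-with-`[:, None]` pattern above; a quick check with placeholder tensors (names mirror the snippet, values are made up):

```python
import torch

# Placeholder tensors standing in for l_label, r_label, t_label, d_label.
l_label = torch.arange(6).reshape(3, 2)
r_label = l_label + 10
t_label = l_label + 20
d_label = l_label + 30

# Original pattern: index column 0, add a trailing axis, then concatenate.
cat_version = torch.cat([
    l_label[:, 0][:, None], r_label[:, 0][:, None],
    t_label[:, 0][:, None], d_label[:, 0][:, None]], dim=-1)

# Suggested pattern: stack the 1-D columns along a new last axis.
stack_version = torch.stack(
    [l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1)

print(torch.equal(cat_version, stack_version))  # both are shape (3, 4)
```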

self.reg_pos_conv_xs[i].init_weights()
self.reg_pos_conv_ys[i].init_weights()
nn.init.normal_(self.reg_conv_att_x.weight, 0, 0.01)
nn.init.constant_(self.reg_conv_att_x.bias, 0)
Member:

Use the init methods in mmcv.cnn
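As a sketch of the suggested change: `mmcv.cnn` provides init helpers such as `normal_init`, which fold the pair of raw `nn.init` calls above into one line. The helper below imitates mmcv's version so the snippet is self-contained; in the PR one would instead `from mmcv.cnn import normal_init`:

```python
import torch.nn as nn

def normal_init(module, mean=0.0, std=1.0, bias=0.0):
    # Imitates mmcv.cnn.normal_init: Gaussian weight, constant bias.
    nn.init.normal_(module.weight, mean, std)
    if hasattr(module, 'bias') and module.bias is not None:
        nn.init.constant_(module.bias, bias)

# Stand-in for self.reg_conv_att_x; the two nn.init calls become one line.
conv = nn.Conv2d(4, 4, 3)
normal_init(conv, std=0.01)
```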

self.reg_pre_convs[i].init_weights()
for i in range(self.reg_pos_num):
self.reg_pos_conv_xs[i].init_weights()
self.reg_pos_conv_ys[i].init_weights()
Member:

ConvModules are initialized during construction by default.

reg_fy = (reg_fy * reg_fy_att).sum(dim=3)
return reg_fx, reg_fy

def direction_feature_extractor(self, reg_x):
Member:

Use terms that are consistent with the paper.

reg_fy = torch.transpose(reg_fy, 1, 2)
return reg_fx.contiguous(), reg_fy.contiguous()

def reg_pred(self, x, offfset_fcs, cls_fcs):
Member:

Docstring.
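For example, a Google-style docstring in the repo's convention could look like this (the argument descriptions are guesses from context, not taken from the PR):

```python
def reg_pred(self, x, offfset_fcs, cls_fcs):
    """Predict bucketing offsets and bucket classification scores.

    Args:
        x (Tensor): Input feature.
        offfset_fcs (nn.ModuleList): FC layers for fine offset regression.
        cls_fcs (nn.ModuleList): FC layers for bucket classification.

    Returns:
        tuple[Tensor, Tensor]: Predicted offsets and bucket
            classification scores.
    """
```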

bucket_offset_targets, bucket_offset_weights)

def _bucket_target_single(self, pos_proposals, neg_proposals,
pos_gt_bboxes, pos_gt_labels, cfg):
Member:

Docstring.

@codecov

codecov bot commented Aug 29, 2020

Codecov Report

Merging #3603 into master will decrease coverage by 0.17%.
The diff coverage is 57.77%.


@@            Coverage Diff             @@
##           master    #3603      +/-   ##
==========================================
- Coverage   61.31%   61.14%   -0.18%     
==========================================
  Files         213      216       +3     
  Lines       14531    15187     +656     
  Branches     2441     2520      +79     
==========================================
+ Hits         8910     9286     +376     
- Misses       5177     5442     +265     
- Partials      444      459      +15     
Flag Coverage Δ
#unittests 61.14% <57.77%> (-0.18%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmdet/core/bbox/__init__.py 100.00% <ø> (ø)
mmdet/models/dense_heads/sabl_retina_head.py 30.41% <30.41%> (ø)
mmdet/core/bbox/coder/bucketing_bbox_coder.py 62.40% <62.40%> (ø)
mmdet/core/bbox/transforms.py 64.36% <73.68%> (+2.60%) ⬆️
mmdet/models/roi_heads/bbox_heads/sabl_head.py 78.92% <78.92%> (ø)
mmdet/core/bbox/coder/__init__.py 100.00% <100.00%> (ø)
mmdet/models/dense_heads/__init__.py 100.00% <100.00%> (ø)
mmdet/models/roi_heads/bbox_heads/__init__.py 100.00% <100.00%> (ø)
mmdet/models/detectors/cornernet.py 94.87% <0.00%> (-5.13%) ⬇️
mmdet/models/dense_heads/corner_head.py 74.31% <0.00%> (-1.92%) ⬇️
... and 5 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update dfbb6d6...03cb69e.

Args:
num_buckets (int): Number of buckets.
scale_factor (int): Scale factor of proposals to generate buckets.
offset_topk (int): Topk buckets are used to generate \
@ZwwWayne (Collaborator) Aug 30, 2020:

The '\' is unnecessary here in the arguments; we can remove it for simplicity.

self.cls_ignore_neighbor = cls_ignore_neighbor

def encode(self, bboxes, gt_bboxes):
"""Get bucketing estimation and fine regression targets during
Collaborator:

It would be better to use a standard docstring here rather than a single summary line.

return bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, d_buckets


def label2onehot(labels, num_labels):
Member:

F.one_hot
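The suggestion refers to `torch.nn.functional.one_hot`; a minimal sketch of the replacement (the original `label2onehot` may return float, in which case a trailing `.float()` would be needed, since `F.one_hot` returns int64):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])
onehot = F.one_hot(labels, num_classes=4)
# tensor([[1, 0, 0, 0],
#         [0, 0, 1, 0],
#         [0, 1, 0, 0]])
```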

reg_post_kernel=3,
reg_pre_num=2,
reg_post_num=1,
num_classes=80,
Member:

Make num_classes positional.

@hellock hellock merged commit 26562a1 into open-mmlab:master Sep 1, 2020


3 participants