1 Introduction
Deep residual networks (ResNets) [1] consist of many stacked “Residual Units”.
Each unit (Fig. 1(a)) can be expressed in a general form:
$$y_l = h(x_l) + \mathcal{F}(x_l, \mathcal{W}_l), \qquad (1)$$
$$x_{l+1} = f(y_l), \qquad (2)$$
where xl and xl+1 are input and output of the l-th unit, and F is a residual
function. In [1], h(xl ) = xl is an identity mapping and f is a ReLU [2] function.
ResNets with over 100 layers have shown state-of-the-art accuracy on several challenging recognition tasks in the ImageNet [3] and MS COCO [4] competitions. The central idea of ResNets is to learn the additive residual function $\mathcal{F}$ with respect to $h(x_l)$, with a key choice of using an identity mapping $h(x_l) = x_l$. This is realized by attaching an identity skip connection ("shortcut").
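As a minimal illustration of Eqs. (1)-(2), the sketch below writes the original (post-activation) Residual Unit as a PyTorch-style module. The class name, the two-layer 3×3 residual branch, and the fixed channel count are illustrative assumptions, not the reference implementation.

```python
import torch.nn as nn

class OriginalResidualUnit(nn.Module):
    """Post-activation unit of Eqs. (1)-(2): x_{l+1} = ReLU(x_l + F(x_l, W_l))."""

    def __init__(self, channels):
        super().__init__()
        # Residual branch F: two 3x3 convolutions, each followed by BN,
        # with a ReLU in between (the conventional post-activation order).
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = x + self.residual(x)   # h(x_l) = x_l: identity shortcut, Eq. (1)
        return self.relu(y)        # f = ReLU applied after the addition, Eq. (2)
```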
In this paper, we analyze deep residual networks by focusing on creating
a “direct” path for propagating information—not only within a residual unit,
but through the entire network. Our derivations reveal that if both $h(x_l)$ and $f(y_l)$ are identity mappings, the signal can be directly propagated from any unit to any other unit, in both the forward and backward passes. Our experiments empirically show that training in general becomes easier when the architecture is closer to these two conditions.
Fig. 1. Left: (a) original Residual Unit in [1]; (b) proposed Residual Unit. The grey arrows indicate the easiest paths for the information to propagate, corresponding to the additive term "$x_l$" in Eq. (4) (forward propagation) and the additive term "1" in Eq. (5) (backward propagation). Right: training curves on CIFAR-10 of 1001-layer ResNets. Solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left). The proposed unit makes ResNet-1001 easier to train.

To understand the role of skip connections, we analyze and compare various types of $h(x_l)$. We find that the identity mapping $h(x_l) = x_l$ chosen in [1]
achieves the fastest error reduction and lowest training loss among all variants
we investigated, whereas skip connections of scaling, gating [5–7], and 1 × 1
convolutions all lead to higher training loss and error. These experiments suggest
that keeping a “clean” information path (indicated by the grey arrows in Figs. 1,
2 and 4) is helpful for easing optimization.
To construct an identity mapping f (yl ) = yl , we view the activation func-
tions (ReLU and BN [8]) as “pre-activation” of the weight layers, in contrast
to conventional wisdom of “post-activation”. This point of view leads to a new
residual unit design, shown in Fig. 1(b). Based on this unit, we present competitive results on CIFAR-10/100 with a 1001-layer ResNet, which is much easier
to train and generalizes better than the original ResNet in [1]. We further report
improved results on ImageNet using a 200-layer ResNet, for which the counter-
part of [1] starts to overfit. These results suggest that there is much room to
exploit the dimension of network depth, a key to the success of modern deep
learning.
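For comparison, here is a sketch of the proposed full pre-activation unit of Fig. 1(b), under the same assumptions as the previous sketch: BN and ReLU are moved in front of each convolution, and nothing follows the addition, so both $h$ and $f$ are identity mappings.

```python
import torch.nn as nn

class PreActResidualUnit(nn.Module):
    """Full pre-activation unit of Fig. 1(b): x_{l+1} = x_l + F(x_l, W_l)."""

    def __init__(self, channels):
        super().__init__()
        # BN and ReLU act as "pre-activation" of each weight layer.
        self.residual = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # The addition is the last operation: both h and f are identities,
        # so the shortcut path carries the signal unchanged.
        return x + self.residual(x)
```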
2 Analysis of Deep Residual Networks

In Eqs. (1) and (2), $x_l$ is the input feature to the $l$-th Residual Unit. $\mathcal{W}_l = \{W_{l,k} \,|\, 1 \le k \le K\}$ is a set of weights (and biases) associated with the $l$-th Residual Unit, and $K$ is the number of layers in a Residual Unit ($K$ is 2 or 3 in [1]). $\mathcal{F}$ denotes the residual function, e.g., a stack of two 3×3 convolutional layers in [1]. The function $f$ is the operation after element-wise addition, and in [1] $f$ is ReLU. The function $h$ is set as an identity mapping: $h(x_l) = x_l$.¹
If $f$ is also an identity mapping: $x_{l+1} \equiv y_l$, we can put Eq. (2) into Eq. (1) and obtain:

$$x_{l+1} = x_l + \mathcal{F}(x_l, \mathcal{W}_l). \qquad (3)$$

Recursively ($x_{l+2} = x_{l+1} + \mathcal{F}(x_{l+1}, \mathcal{W}_{l+1}) = x_l + \mathcal{F}(x_l, \mathcal{W}_l) + \mathcal{F}(x_{l+1}, \mathcal{W}_{l+1})$, etc.) we will have:

$$x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i), \qquad (4)$$

for any deeper unit $L$ and any shallower unit $l$. Equation (4) exhibits some nice properties. (i) The feature $x_L$ of any deeper unit $L$ can be represented as the feature $x_l$ of any shallower unit $l$ plus a residual function in a form of $\sum_{i=l}^{L-1} \mathcal{F}$, indicating that the model is in a residual fashion between any units $L$ and $l$. (ii) The feature $x_L = x_0 + \sum_{i=0}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i)$, of any deep unit $L$, is the summation of the outputs of all preceding residual functions (plus $x_0$). This is in contrast to a "plain network" where a feature $x_L$ is a series of matrix-vector products, say, $\prod_{i=0}^{L-1} W_i x_0$ (ignoring BN and ReLU).
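Equation (4) can be checked numerically with a toy scalar chain; the choice of a tanh residual function and the constants below are arbitrary assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50                                   # number of Residual Units
w = rng.normal(scale=0.1, size=L)        # toy scalar "weights" W_i

def F(x, wi):
    """Toy scalar residual function (stand-in for the conv stack)."""
    return np.tanh(wi * x)

# Forward pass with identity shortcuts: x_{i+1} = x_i + F(x_i, W_i).
x = [1.0]
for i in range(L):
    x.append(x[i] + F(x[i], w[i]))

# Eq. (4): x_L equals x_l plus the sum of all intermediate residual outputs.
l = 10
assert np.isclose(x[L], x[l] + sum(F(x[i], w[i]) for i in range(l, L)))
```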
Equation (4) also leads to nice backward propagation properties. Denoting the loss function as $\mathcal{E}$, from the chain rule of backpropagation [9] we have:

$$\frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\,\frac{\partial x_L}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i)\right). \qquad (5)$$

Equation (5) indicates that the gradient $\frac{\partial \mathcal{E}}{\partial x_l}$ can be decomposed into two additive terms: a term of $\frac{\partial \mathcal{E}}{\partial x_L}$ that propagates information directly without concerning any weight layers, and another term of $\frac{\partial \mathcal{E}}{\partial x_L}\big(\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} \mathcal{F}\big)$ that propagates through the weight layers. The additive term of $\frac{\partial \mathcal{E}}{\partial x_L}$ ensures that information is directly propagated back to any shallower unit $l$. Equation (5) also suggests that it is unlikely for the gradient $\frac{\partial \mathcal{E}}{\partial x_l}$ to be canceled out for a mini-batch, because in general the term $\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} \mathcal{F}$ cannot be always $-1$ for all samples in a mini-batch. This implies that the gradient of a layer does not vanish even when the weights are arbitrarily small.
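The non-vanishing gradient implied by Eq. (5) can be illustrated with the same kind of toy scalar chain. For $x_{i+1} = x_i + F(x_i)$ the Jacobian of one unit is $1 + F'(x_i)$, and the product of these factors expands into the additive "1" plus weight-dependent terms of Eq. (5); a plain chain $x_{i+1} = F(x_i)$ has only the bare product. The tanh residual function is again just an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
L, x0 = 50, 1.0
w = rng.normal(scale=0.01, size=L)        # arbitrarily small weights

def dF(x, wi):
    """Derivative of the toy residual F(x) = tanh(wi * x) w.r.t. x."""
    return wi * (1.0 - np.tanh(wi * x) ** 2)

x_res, x_plain = x0, x0
grad_res, grad_plain = 1.0, 1.0
for i in range(L):
    # Residual chain x_{i+1} = x_i + F(x_i): per-unit Jacobian is 1 + F'(x_i).
    grad_res *= 1.0 + dF(x_res, w[i])
    # Plain chain x_{i+1} = F(x_i): per-unit Jacobian is just F'(x_i).
    grad_plain *= dF(x_plain, w[i])
    x_res = x_res + np.tanh(w[i] * x_res)
    x_plain = np.tanh(w[i] * x_plain)

print(grad_res)    # stays near 1: the shortcut keeps the gradient alive
print(grad_plain)  # product of 50 tiny factors: vanishes with small weights
```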
¹ It is noteworthy that there are Residual Units for increasing dimensions and reducing feature map sizes [1] in which $h$ is not identity. In this case the following derivations do not hold strictly, but as there are only a very few such units (two on CIFAR and three on ImageNet, depending on image sizes [1]), we expect that they do not have the exponential impact that we present in Sect. 3. One may also think of our derivations as applied to all Residual Units within the same feature map size.
Discussions. Equations (4) and (5) suggest that the signal can be directly prop-
agated from any unit to another, both forward and backward. The foundation
of Eq. (4) is two identity mappings: (i) the identity skip connection h(xl ) = xl ,
and (ii) the condition that f is an identity mapping.
These directly propagated information flows are represented by the grey arrows in Figs. 1, 2 and 4. The above two conditions are true when these grey arrows cover no operations (except addition) and thus are "clean". In the following
two sections we separately investigate the impacts of the two conditions.
3 On the Importance of Identity Skip Connections

Consider a simple modification, $h(x_l) = \lambda_l x_l$, that breaks the identity shortcut:

$$x_{l+1} = \lambda_l x_l + \mathcal{F}(x_l, \mathcal{W}_l), \qquad (6)$$

where $\lambda_l$ is a modulating scalar (for simplicity we still assume $f$ is an identity mapping). Recursively applying this formulation we obtain:

$$x_L = \Big(\prod_{i=l}^{L-1} \lambda_i\Big) x_l + \sum_{i=l}^{L-1} \hat{\mathcal{F}}(x_i, \mathcal{W}_i), \qquad (7)$$
where the notation F̂ absorbs the scalars into the residual functions. Similar to
Eq. (5), we have backpropagation of the following form:
$$\frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(\Big(\prod_{i=l}^{L-1} \lambda_i\Big) + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} \hat{\mathcal{F}}(x_i, \mathcal{W}_i)\right). \qquad (8)$$
Unlike Eq. (5), in Eq. (8) the first additive term is modulated by a factor $\prod_{i=l}^{L-1} \lambda_i$. For an extremely deep network ($L$ is large), if $\lambda_i > 1$ for all $i$, this factor can be exponentially large; if $\lambda_i < 1$ for all $i$, this factor can be exponentially small and vanish, which blocks the backpropagated signal from the shortcut and forces it to flow through the weight layers. This results in optimization difficulties, as we show by experiments.
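As a quick numerical illustration of the factor $\prod_{i=l}^{L-1} \lambda_i$ in Eq. (8), assume a constant $\lambda$ over the 54 Residual Units of ResNet-110 (the unit count used in the experiments below); the constant-$\lambda$ simplification is only for illustration.

```python
n_units = 54                        # Residual Units in ResNet-110
for lam in (0.5, 0.9, 1.0, 1.1, 2.0):
    factor = lam ** n_units         # prod of lambda_i with a constant lambda
    print(f"lambda = {lam:4.2f}: factor over {n_units} units = {factor:.3e}")
```

With $\lambda = 0.5$ the factor is on the order of $10^{-17}$, and with $\lambda = 2$ it is on the order of $10^{16}$, which is why only $\lambda = 1$ (identity) keeps the shortcut signal intact.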
In the above analysis, the original identity skip connection in Eq. (3) is replaced with a simple scaling $h(x_l) = \lambda_l x_l$. If the skip connection $h(x_l)$ represents more complicated transforms (such as gating and 1 × 1 convolutions), in Eq. (8) the first term becomes $\prod_{i=l}^{L-1} h'_i$, where $h'$ is the derivative of $h$. This product may also impede information propagation and hamper the training procedure, as witnessed in the following experiments.

3.1 Experiments on Skip Connections
Fig. 2. Various types of shortcut connections used in Table 1. The grey arrows indicate
the easiest paths for the information to propagate. The shortcut connections in (b–f)
are impeded by different components. For simplifying illustrations we do not display
the BN layers, which are adopted right after the weight layers for all units here.
Table 1. Classification error on the CIFAR-10 test set using ResNet-110 [1], with
different types of shortcut connections applied to all Residual Units. We report “fail”
when the test error is higher than 20 %.
² See also: people.idsia.ch/∼rupesh/very deep learning/ by [6,7].
Fig. 3. Training curves on CIFAR-10 of various shortcuts. Solid lines denote test error
(y-axis on the right), and dashed lines denote training loss (y-axis on the left).
In the shortcut-only gating case, the shortcut path is scaled by $1 - g(x)$, where $g(x) = \sigma(W_g x + b_g)$ is the gating function. When the initialized bias $b_g$ is very negative (e.g., $-6$), the value of $1 - g(x)$ is close to 1 and the shortcut connection is nearly an identity mapping. Therefore, the result (6.91 %, Table 1) is much closer to the ResNet-110 baseline.
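The effect of the bias initialization can be seen by evaluating the gate at initialization, when $W_g x$ is near zero and $g \approx \sigma(b_g)$; the snippet below simply prints the resulting shortcut scale $1 - \sigma(b_g)$ (a minimal sketch, not the training code).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for b_g in (0.0, -2.0, -4.0, -6.0):
    shortcut_scale = 1.0 - sigmoid(b_g)   # scale applied to the shortcut at init
    print(f"b_g = {b_g:5.1f}:  1 - g = {shortcut_scale:.4f}")
# b_g = -6.0 gives 1 - g ~ 0.9975, i.e. a nearly identity shortcut.
```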
1×1 Convolutional Shortcut. Next we experiment with 1 × 1 convolutional
shortcut connections that replace the identity. This option has been investigated
in [1] (known as option C) on a 34-layer ResNet (16 Residual Units) and shows
good results, suggesting that 1 × 1 shortcut connections could be useful. But we
find that this is not the case when there are many Residual Units. The 110-layer
ResNet has a poorer result (12.22 %, Table 1) when using 1 × 1 convolutional
shortcuts. Again, the training error becomes higher (Fig. 3(d)). When stacking
so many Residual Units (54 for ResNet-110), even the shortest path may still
impede signal propagation. We witnessed similar phenomena on ImageNet with
ResNet-101 when using 1 × 1 convolutional shortcuts.
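In code, this variant only changes the shortcut branch of the post-activation unit sketched in the Introduction (cf. Fig. 2(e)); the standalone class below is a hypothetical sketch with assumed names, not the exact configuration used in the experiments.

```python
import torch.nn as nn

class Conv1x1ShortcutUnit(nn.Module):
    """Variant of Fig. 2(e): the identity shortcut is replaced by a 1x1 conv."""

    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(      # same two-layer 3x3 branch as before
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # h is now a learned 1x1 convolution instead of the identity mapping.
        self.shortcut = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.shortcut(x) + self.residual(x))
```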
Dropout Shortcut. Last we experiment with dropout [11] (at a ratio of 0.5)
which we adopt on the output of the identity shortcut (Fig. 2(f)). The network
fails to converge to a good solution. Dropout statistically imposes a scale of λ
with an expectation of 0.5 on the shortcut, and similar to constant scaling by
0.5, it impedes signal propagation.
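A minimal sketch of this expected scaling, assuming the classical (non-inverted) dropout of [11] in which kept activations are not rescaled at training time:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(1_000_000)              # a constant signal on the shortcut
keep = rng.random(x.shape) < 0.5    # keep each element with probability 0.5
dropped = keep * x                  # classical dropout: no rescaling of kept units

print(dropped.mean())               # ~0.5: an expected scale of 0.5 on the shortcut,
                                    # analogous to constant scaling with lambda = 0.5
```

(Modern "inverted" dropout implementations rescale kept units by 1/(1−p) during training, but the multiplicative Bernoulli disturbance on the shortcut remains.)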
Table 2. Classification error (%) on the CIFAR-10 test set using different activation
functions.
Fig. 4. Various usages of activation in Table 2. All these units consist of the same
components—only the orders are different.
3.2 Discussions
As indicated by the grey arrows in Fig. 2, the shortcut connections are the most
direct paths for the information to propagate. Multiplicative manipulations (scal-
ing, gating, 1 × 1 convolutions, and dropout) on the shortcuts can hamper infor-
mation propagation and lead to optimization problems.
It is noteworthy that the gating and 1 × 1 convolutional shortcuts introduce
more parameters, and should have stronger representational abilities than iden-
tity shortcuts. In fact, the shortcut-only gating and 1 × 1 convolution cover the
solution space of identity shortcuts (i.e., they could be optimized as identity
shortcuts). However, their training error is higher than that of identity short-
cuts, indicating that the degradation of these models is caused by optimization
issues, instead of representational abilities.
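The claim that a 1 × 1 convolutional shortcut covers the identity solution is easy to verify: setting its kernel to the channel-wise identity matrix reproduces $h(x) = x$ exactly. A minimal check, assuming PyTorch and an arbitrary channel count:

```python
import torch
import torch.nn as nn

channels = 16
conv1x1 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
with torch.no_grad():
    # Set the 1x1 kernel to the channel-wise identity matrix.
    conv1x1.weight.copy_(torch.eye(channels).view(channels, channels, 1, 1))

x = torch.randn(2, channels, 8, 8)
assert torch.allclose(conv1x1(x), x, atol=1e-6)   # behaves exactly like identity
```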
4 On the Usage of Activation Functions

If the after-addition activation $\hat{f}$ is instead viewed as the pre-activation of the next Residual Unit, the formulation becomes:

$$x_{l+1} = x_l + \mathcal{F}(\hat{f}(x_l), \mathcal{W}_l). \qquad (9)$$

It is easy to see that Eq. (9) is similar to Eq. (4), and can enable a backward formulation similar to Eq. (5). For this new Residual Unit as in Eq. (9), the new after-addition activation becomes an identity mapping.
Table 3. Classification error (%) on the CIFAR-10/100 test set using the original
Residual Units and our pre-activation Residual Units.
Fig. 6. Training curves on CIFAR-10. Left: BN after addition (Fig. 4(b)) using ResNet-
110. Right: pre-activation unit (Fig. 4(e)) on ResNet-164. Solid lines denote test error,
and dashed lines denote training loss.
4.2 Analysis
Ease of Optimization. The impact of $f$ = ReLU is not severe when the ResNet has fewer layers: after some training, the weights are adjusted such that $y_l$ is more frequently above zero and $f$ does not truncate it ($x_l$ is always non-negative due to the previous ReLU, so $y_l$ is below zero only when the magnitude of $\mathcal{F}$ is very negative). The truncation, however, is more frequent when there are 1000 layers.
Reducing Overfitting. Another impact of using the proposed pre-activation
unit is on regularization, as shown in Fig. 6 (right). The pre-activation version
reaches slightly higher training loss at convergence, but produces lower test
error. This phenomenon is observed on ResNet-110, ResNet-110(1-layer), and
ResNet-164 on both CIFAR-10 and 100. This is presumably caused by BN’s reg-
ularization effect [8]. In the original Residual Unit (Fig. 4(a)), although the BN
normalizes the signal, this is soon added to the shortcut and thus the merged
signal is not normalized. This unnormalized signal is then used as the input of
the next weight layer. On the contrary, in our pre-activation version, the inputs
to all weight layers have been normalized.
5 Results
³ The ResNet-200 has 16 more 3-layer bottleneck Residual Units than ResNet-152, which are added on the feature map of 28×28.
On CIFAR-10, our ResNet-1001 (10.2M parameters) has a 4.62 % test error (4.69 ± 0.20 across runs).
The original ResNet-152 [1] has top-1 error of 21.3 % on a 320×320 crop, and
our pre-activation counterpart has 21.1 %. The gain is not big on ResNet-152
because this model has not shown severe generalization difficulties. However,
the original ResNet-200 has an error rate of 21.8 %, higher than the baseline
ResNet-152. But we find that the original ResNet-200 has lower training error
than ResNet-152, suggesting that it suffers from overfitting.
Our pre-activation ResNet-200 has an error rate of 20.7 %, which is 1.1 %
lower than the baseline ResNet-200 and also lower than the two versions of
ResNet-152. When using the scale and aspect ratio augmentation of [19,20],
our ResNet-200 has a result better than Inception v3 [19] (Table 5). Concurrent
with our work, an Inception-ResNet-v2 model [21] achieves a single-crop result
of 19.9 %/4.9 %. We expect our observations and the proposed Residual Unit
will help this type and generally other types of ResNets.
Computational Cost. Our models' computational complexity is linear in depth (so a 1001-layer net is roughly 10× as costly as a 100-layer net). On CIFAR, ResNet-1001 takes about 27 h to train on 2 GPUs; on ImageNet, ResNet-200 takes about 3 weeks to train on 8 GPUs (on par with VGG nets [22]).
Table 5. Comparisons of single-crop error on the ILSVRC 2012 validation set. All
ResNets are trained using the same hyper-parameters and implementations as [1]. Our Residual Units are the full pre-activation version (Fig. 4(e)). †: code/model available at
https://github.com/facebook/fb.resnet.torch/tree/master/pretrained, using scale and
aspect ratio augmentation in [20].
6 Conclusions
This paper investigates the propagation formulations behind the connection
mechanisms of deep residual networks. Our derivations imply that identity short-
cut connections and identity after-addition activation are essential for making
information propagation smooth. Ablation experiments demonstrate phenom-
ena that are consistent with our derivations. We also present 1000-layer deep
networks that can be easily trained and achieve improved accuracy.
References
1. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition.
In: CVPR (2016)
2. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann
machines. In: ICML (2010)
3. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z.,
Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large
scale visual recognition challenge. IJCV 115, 211–252 (2015)
4. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P.,
Zitnick, C.L.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla,
T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014, Part V. LNCS, vol. 8693, pp.
740–755. Springer, Heidelberg (2014)
5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9,
1735–1780 (1997)
6. Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. In: ICML Work-
shop (2015)
7. Srivastava, R.K., Greff, K., Schmidhuber, J.: Training very deep networks. In: NIPS
(2015)
8. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by
reducing internal covariate shift. In: ICML (2015)
9. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W.,
Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural
Comput. 1, 541–551 (1989)
10. Krizhevsky, A.: Learning multiple layers of features from tiny images. Technical
report (2009)
11. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.:
Improving neural networks by preventing co-adaptation of feature detectors (2012).
arXiv:1207.0580
12. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network
learning by exponential linear units (ELUs). In: ICLR (2016)
13. Graham, B.: Fractional max-pooling (2014). arXiv:1412.6071
14. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplic-
ity: the all convolutional net (2014). arXiv:1412.6806
15. Lin, M., Chen, Q., Yan, S.: Network in network. In: ICLR (2014)
16. Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z.: Deeply-supervised nets. In:
AISTATS (2015)
17. Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C., Bengio, Y.: FitNets:
hints for thin deep nets. In: ICLR (2015)
18. Mishkin, D., Matas, J.: All you need is a good init. In: ICLR (2016)
19. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the incep-
tion architecture for computer vision. In: CVPR (2016)
20. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D.,
Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR (2015)
21. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, Inception-ResNet and the impact
of residual connections on learning (2016). arXiv:1602.07261
22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale
image recognition. In: ICLR (2015)
23. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: ICCV (2015)