Authors: Di Xie, Jiang Xiong, Shiliang Pu
Where published: CVPR 2017
ArXiv: 1703.01827
Abstract URL: http://arxiv.org/abs/1703.01827v3
Deep neural networks are difficult to train, and this predicament becomes worse
as the depth increases. The essence of the problem lies in the magnitude of
backpropagated errors, which leads to the gradient vanishing or exploding
phenomenon. We show that a variant of regularizer that utilizes orthonormality
among different filter banks can alleviate this problem. Moreover, we design a
backward error modulation mechanism based on the quasi-isometry assumption
between two consecutive parametric layers. Equipped with these two ingredients,
we propose several novel optimization solutions that can be used to train a
specifically structured (repeated triple modules of Conv-BN-ReLU) extremely deep
convolutional neural network (CNN) WITHOUT any shortcuts/identity mappings,
from scratch. Experiments show that our proposed solutions achieve distinct
improvements for 44-layer and 110-layer plain networks on both the CIFAR-10 and
ImageNet datasets. Moreover, we can successfully train plain CNNs to match the
performance of their residual counterparts. In addition, we propose new
principles for designing network structures based on the insights evoked by
orthonormality. Combined with residual structure, we achieve comparable
performance on the ImageNet dataset.
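The orthonormality regularizer mentioned in the abstract can be illustrated with a short sketch: flatten each convolutional filter bank into a matrix W and penalize how far its Gram matrix deviates from the identity. The PyTorch code below is a minimal sketch of that idea, not the paper's implementation; the helper names (`orthonormality_penalty`, `total_orthonormality_loss`) and the penalty weight `lam` are illustrative assumptions.

```python
# Minimal sketch of an orthonormality penalty on convolutional filter banks,
# assuming a PyTorch model. The weight `lam` and the exact normalization are
# illustrative choices, not the settings reported in the paper.
import torch
import torch.nn as nn


def orthonormality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Penalize deviation of a layer's filter bank from an orthonormal set.

    Each filter is flattened to a row of W (shape: out_channels x fan_in);
    the penalty ||W W^T - I||_F^2 is zero when the filters are mutually
    orthogonal with unit norm.
    """
    w = conv.weight.reshape(conv.out_channels, -1)    # (out_channels, fan_in)
    gram = w @ w.t()                                  # (out_channels, out_channels)
    eye = torch.eye(conv.out_channels, device=w.device, dtype=w.dtype)
    return ((gram - eye) ** 2).sum()


def total_orthonormality_loss(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Sum the penalty over all Conv2d layers; added to the task loss during training."""
    penalty = sum(orthonormality_penalty(m)
                  for m in model.modules() if isinstance(m, nn.Conv2d))
    return lam * penalty
```

In training, such a term would simply be added to the classification loss before backpropagation, encouraging the filters within each bank to stay close to an orthonormal frame.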