[notes] ImageNet Classification with Deep Convolutional Neural Networks

Paper:
ImageNet Classification with Deep Convolutional Neural Networks


Achievements:
The model described by Krizhevsky et al. achieved top-1 and top-5 test error rates of 37.5% and 17.0% when classifying the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes.


Model Architecture:


Model architecture plot: (not reproduced in these notes; see the architecture figure in the paper)


The network contains eight learned layers: five convolutional and three fully-connected.
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU. The kernels of the third convolutional layer are connected to all kernel maps in the second layer.
 
Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.
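A minimal single-GPU sketch of this eight-layer stack, assuming PyTorch (the layer names, padding values, and the 227x227 input size are my additions; the paper's two-GPU split and the restricted connectivity of the second, fourth, and fifth convolutional layers are not reproduced):

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Single-GPU approximation of the eight-layer architecture described above."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            # conv1: 96 kernels of 11x11, stride 4, then ReLU, response normalization, overlapping max-pooling
            nn.Conv2d(3, 96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # conv2: 256 kernels of 5x5, then ReLU, response normalization, overlapping max-pooling
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # conv3-conv5: 3x3 kernels with no pooling or normalization in between
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            # three fully-connected layers; dropout on the first two
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

if __name__ == "__main__":
    x = torch.randn(1, 3, 227, 227)   # the paper reports 224x224 crops; 227 makes the size arithmetic exact
    print(AlexNetSketch()(x).shape)   # torch.Size([1, 1000])
```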


Interesting Points:
ReLU Nonlinearity: training speed-up; a convolutional network with ReLUs reaches 25% training error on CIFAR-10 about six times faster than an equivalent network with tanh neurons.
Overlapping Pooling: improves accuracy and helps against overfitting; it reduces the top-1 and top-5 error rates by 0.4% and 0.3%, and models with overlapping pooling are observed during training to be slightly more difficult to overfit (see the sketch below).
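
A small sketch of the overlapping-pooling point, assuming PyTorch (the tensor shape is an arbitrary example): the paper pools with a 3x3 window and stride 2, so neighbouring pooling windows overlap, whereas traditional pooling uses a 2x2 window with stride 2.

```python
import torch
import torch.nn as nn

x = torch.relu(torch.randn(1, 96, 55, 55))  # e.g. a ReLU feature map the size of conv1's output

overlapping = nn.MaxPool2d(kernel_size=3, stride=2)  # z=3, s=2: windows overlap (the paper's choice)
traditional = nn.MaxPool2d(kernel_size=2, stride=2)  # z=s=2: non-overlapping windows

print(overlapping(x).shape)  # torch.Size([1, 96, 27, 27])
print(traditional(x).shape)  # torch.Size([1, 96, 27, 27])
# Both give the same output size here, but in the overlapping case each input unit can
# contribute to several pooling windows, which the paper links to the small accuracy
# gain and the slightly reduced tendency to overfit.
```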

Dropout: prevents overfitting by reducing complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is therefore forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons.
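
A minimal dropout sketch, assuming PyTorch. The paper applies dropout with probability 0.5 to the first two fully-connected layers and multiplies the outputs by 0.5 at test time; nn.Dropout instead rescales the surviving activations by 1/(1-p) during training, which is equivalent in expectation.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4096)

drop.train()
y = drop(x)
print((y == 0).float().mean())  # roughly 0.5: each unit is zeroed with probability 0.5
print(y.max())                  # surviving units are scaled to 2.0 (i.e. 1/(1-p))

drop.eval()
print(drop(x))                  # identity at evaluation time: all neurons participate
```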


