
Table 3 The performance of different networks on MNIST

From: A multi-scale learning network with depthwise separable convolutions

Methods         Acc (%)   Number of training parameters
GoogLeNet       97.98     6640
AlexNet         97.80     864
MobileNet       94.59     114
Our method 1    98.99     246
Our method 2    99.03     246

Note: all networks were trained for 5000 iterations with a batch size of 128.
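The parameter savings reported above come from replacing standard convolutions with depthwise separable convolutions, which factor a convolution into a per-channel spatial filter followed by a 1x1 pointwise mixing step. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that parameter reduction, using illustrative channel sizes (32 to 64) and a 3x3 kernel that are assumptions rather than values from the paper.

```python
# Minimal sketch: parameter count of a standard 3x3 convolution vs. a
# depthwise separable convolution. Channel sizes are illustrative only.
import torch.nn as nn


def count_params(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)


in_ch, out_ch, k = 32, 64, 3

# Standard convolution: one k x k kernel per (input channel, output channel) pair.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1, bias=False)

# Depthwise separable convolution: a per-channel k x k depthwise filter
# (groups=in_ch) followed by a 1x1 pointwise convolution that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

print("standard conv parameters:      ", count_params(standard))             # 32*64*3*3 = 18432
print("depthwise separable parameters:", count_params(depthwise_separable))  # 32*3*3 + 32*64 = 2336
```

For this illustrative layer the depthwise separable version uses roughly an eighth of the parameters of the standard convolution, which is the same kind of trade-off the table reports: MobileNet and the proposed methods reach comparable or better accuracy on MNIST with far fewer training parameters than GoogLeNet.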