MobileNet v2 models for Keras.
MobileNetV2 is a general architecture and can be used for multiple use cases. Depending on the use case, it can use different input layer sizes and different width factors. Narrower models reduce the number of multiply-adds and thereby lower inference cost on mobile devices.
MobileNetV2 is very similar to the original MobileNet, except that it uses inverted residual blocks with bottlenecking features. It has a drastically lower parameter count than the original MobileNet. MobileNets support any input size greater than 32 x 32, with larger image sizes offering better performance.
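As a minimal sketch of this flexibility (the 160 x 160 resolution and `include_top=False` below are illustrative choices, not requirements), a MobileNetV2 backbone can be instantiated at a non-default input size:

```python
from tensorflow.keras.applications import MobileNetV2

# Illustrative: any input size greater than 32 x 32 is accepted.
# include_top=False drops the ImageNet classification head so the
# network can be used as a feature extractor at this resolution.
backbone = MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights="imagenet",
)
backbone.summary()
```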
The number of parameters and the number of multiply-adds can be modified by using the alpha parameter, which increases/decreases the number of filters in each layer. By altering the image size and alpha parameter, all 22 models from the paper can be built, with ImageNet weights provided.
The paper demonstrates the performance of MobileNets using alpha values of 0.35, 0.5, 0.75, 1.0 (also called 100% MobileNet), 1.3, and 1.4. For alpha values up to 1.0, weights are provided for 5 different input image sizes (224, 192, 160, 128, and 96); the 1.3 and 1.4 variants are provided at 224 only.
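For example, one of the smaller published checkpoints can be loaded directly with its ImageNet weights (the alpha=0.35, 96 x 96 combination below is just one of the available checkpoints):

```python
from tensorflow.keras.applications import MobileNetV2

# alpha scales the number of filters in each layer; 96 x 96 is the
# smallest input size for which ImageNet weights are published.
small_model = MobileNetV2(
    input_shape=(96, 96, 3),
    alpha=0.35,
    weights="imagenet",
)
print(small_model.count_params())
```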
The following table describes the performance of MobileNetV2 on various input sizes (MACs stands for multiply-adds, i.e. multiply-accumulate operations):

| Classification Checkpoint | MACs (M) | Parameters (M) | Top 1 Accuracy (%) | Top 5 Accuracy (%) |
|---------------------------|----------|----------------|--------------------|--------------------|
| [mobilenet_v2_1.4_224]    | 582      | 6.06           | 75.0               | 92.5               |
| [mobilenet_v2_1.3_224]    | 509      | 5.34           | 74.4               | 92.1               |
| [mobilenet_v2_1.0_224]    | 300      | 3.47           | 71.8               | 91.0               |
| [mobilenet_v2_1.0_192]    | 221      | 3.47           | 70.7               | 90.1               |
| [mobilenet_v2_1.0_160]    | 154      | 3.47           | 68.8               | 89.0               |
| [mobilenet_v2_1.0_128]    | 99       | 3.47           | 65.3               | 86.9               |
| [mobilenet_v2_1.0_96]     | 56       | 3.47           | 60.3               | 83.2               |
| [mobilenet_v2_0.75_224]   | 209      | 2.61           | 69.8               | 89.6               |
| [mobilenet_v2_0.75_192]   | 153      | 2.61           | 68.7               | 88.9               |
| [mobilenet_v2_0.75_160]   | 107      | 2.61           | 66.4               | 87.3               |
| [mobilenet_v2_0.75_128]   | 69       | 2.61           | 63.2               | 85.3               |
| [mobilenet_v2_0.75_96]    | 39       | 2.61           | 58.8               | 81.6               |
| [mobilenet_v2_0.5_224]    | 97       | 1.95           | 65.4               | 86.4               |
| [mobilenet_v2_0.5_192]    | 71       | 1.95           | 63.9               | 85.4               |
| [mobilenet_v2_0.5_160]    | 50       | 1.95           | 61.0               | 83.2               |
| [mobilenet_v2_0.5_128]    | 32       | 1.95           | 57.7               | 80.8               |
| [mobilenet_v2_0.5_96]     | 18       | 1.95           | 51.2               | 75.8               |
| [mobilenet_v2_0.35_224]   | 59       | 1.66           | 60.3               | 82.9               |
| [mobilenet_v2_0.35_192]   | 43       | 1.66           | 58.2               | 81.2               |
| [mobilenet_v2_0.35_160]   | 30       | 1.66           | 55.7               | 79.1               |
| [mobilenet_v2_0.35_128]   | 20       | 1.66           | 50.8               | 75.0               |
| [mobilenet_v2_0.35_96]    | 11       | 1.66           | 45.5               | 70.4               |
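As a rough sanity check of the parameter column, the models can be instantiated without downloading weights and their parameters counted; this is only a sketch, and the resulting figures track the table closely but may differ slightly depending on rounding and how the classification head is counted:

```python
from tensorflow.keras.applications import MobileNetV2

# Instantiate each width multiplier at the default 224 x 224 input size
# (the parameter count does not depend on the input resolution).
for alpha in (0.35, 0.5, 0.75, 1.0, 1.3, 1.4):
    model = MobileNetV2(alpha=alpha, weights=None)
    print(f"alpha={alpha}: {model.count_params() / 1e6:.2f} M parameters")
```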
Reference paper: MobileNetV2: Inverted Residuals and Linear Bottlenecks (CVPR 2018), https://arxiv.org/abs/1801.04381
Functions
MobileNetV2(...): Instantiates the MobileNetV2 architecture.
decode_predictions(...): Decodes the prediction of an ImageNet model.
preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
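A minimal end-to-end sketch combining these three functions is shown below; it assumes a recent TensorFlow release where load_img and img_to_array live under tensorflow.keras.utils, and "elephant.jpg" is a placeholder image path:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.utils import img_to_array, load_img

model = MobileNetV2(weights="imagenet")  # default alpha=1.0, 224 x 224 input

# "elephant.jpg" is a placeholder path; substitute any RGB image.
img = load_img("elephant.jpg", target_size=(224, 224))
x = img_to_array(img)          # (224, 224, 3) float array
x = np.expand_dims(x, axis=0)  # add the batch dimension
x = preprocess_input(x)        # scale pixel values to [-1, 1]

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])
```

decode_predictions returns, for each image in the batch, a list of (ImageNet ID, class name, score) tuples.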