input_shape
Optional shape tuple, to be specified if you would
like to use a model with an input image resolution that is not
(224, 224, 3).
It should have exactly 3 input channels.
You can also omit this option if you would like
to infer input_shape from an input_tensor.
If you choose to include both input_tensor and input_shape, then
input_shape will be used if they match; if the shapes
do not match, an error will be raised.
E.g. (160, 160, 3) would be one valid value.
alpha
Controls the width of the network. This is known as the
depth multiplier in the MobileNetV3 paper, but the name is kept for
consistency with MobileNetV1 in Keras.
If alpha < 1.0, proportionally decreases the number
of filters in each layer.
If alpha > 1.0, proportionally increases the number
of filters in each layer.
If alpha = 1.0, the default number of filters from
the paper is used at each layer.
minimalistic
In addition to large and small models, this module also
contains so-called minimalistic models. These models have the same
per-layer dimensions as MobileNetV3; however, they don't
utilize any of the advanced blocks (squeeze-and-excite units, hard-swish,
and 5x5 convolutions). While these models are less efficient on CPU, they
are much more performant on GPU/DSP.
include_top
Boolean, whether to include the fully-connected
layer at the top of the network. Defaults to True.
weights
String, one of None (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor
Optional Keras tensor (i.e. output of
layers.Input())
to use as image input for the model.
pooling
String, optional pooling mode for feature extraction
when include_top is False.
None means that the output of the model
will be the 4D tensor output of the
last convolutional block.
avg means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a
2D tensor.
max means that global max pooling will
be applied.
classes
Integer, optional number of classes to classify images
into, only to be specified if include_top is True, and
if no weights argument is specified.
dropout_rate
Fraction of the input units to drop on the last layer.
classifier_activation
A str or callable. The activation function to use
on the "top" layer. Ignored unless include_top=True. Set
classifier_activation=None to return the logits of the "top" layer.
When loading pretrained weights, classifier_activation can only
be None or "softmax".
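As a rough illustration of how several of the arguments above fit together, the sketch below instantiates a randomly initialized model at a non-default resolution. The specific values (160x160 input, alpha=0.75, 10 classes) are arbitrary choices made for the example, not recommendations.

```python
import tensorflow as tf

# Illustrative instantiation with non-default arguments; all values here are
# example choices. weights=None (random initialization) is what permits a
# custom `classes` count, and classifier_activation=None makes the top layer
# emit logits instead of softmax probabilities.
model = tf.keras.applications.MobileNetV3Small(
    input_shape=(160, 160, 3),   # must have exactly 3 channels
    alpha=0.75,                  # width multiplier: fewer filters per layer
    include_top=True,
    weights=None,                # random initialization
    classes=10,
    classifier_activation=None,  # top layer returns logits
)
model.summary()
```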
Call arguments:
inputs: A floating point numpy.array or a tf.Tensor, 4D with 3 color
channels, with values in the range [0, 255].
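Because rescaling happens inside the model, calling it can be sketched roughly as below; the random batch is only a stand-in for real image data.

```python
import numpy as np
import tensorflow as tf

# Stand-in batch of 8 RGB images at the default 224x224 resolution, with raw
# pixel values in [0, 255]; no separate preprocessing step is applied here.
images = np.random.uniform(0, 255, size=(8, 224, 224, 3)).astype("float32")

model = tf.keras.applications.MobileNetV3Small(weights="imagenet")
preds = model(images)   # shape (8, 1000): ImageNet class probabilities
```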
Note: each Keras Application expects a specific kind of input preprocessing. For MobileNetV3, input preprocessing is included as part of the model (as a Rescaling layer), and thus tf.keras.applications.mobilenet_v3.preprocess_input is actually a pass-through function. MobileNetV3 models expect their inputs to be float tensors of pixels with values in the [0, 255] range.

Returns:
A keras.Model instance.
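For feature extraction or transfer learning, one common pattern is to drop the top layer and apply pooling, roughly as in the sketch below. The 5-class head, the frozen backbone, and the compile settings are assumptions made for the example, not part of this API.

```python
import tensorflow as tf

# Sketch of using the network as a frozen feature extractor.
base = tf.keras.applications.MobileNetV3Small(
    input_shape=(224, 224, 3),
    include_top=False,       # drop the ImageNet classification head
    weights="imagenet",
    pooling="avg",           # global average pooling -> 2D feature tensor
)
base.trainable = False       # freeze the pretrained backbone

inputs = tf.keras.Input(shape=(224, 224, 3))
features = base(inputs, training=False)
outputs = tf.keras.layers.Dense(5, activation="softmax")(features)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```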