# tff.learning.models.save

[View source on GitHub](https://github.com/tensorflow/federated/blob/v0.87.0)

Serializes `model` as a TensorFlow SavedModel to `path`.

    tff.learning.models.save(
        model: tff.learning.models.VariableModel,
        path: str,
        input_type=None
    ) -> None

The resulting SavedModel will contain the default serving signature, which
can be used with the TFLite converter to create a TFLite flatbuffer for
inference.

**Note:** The model returned by [`tff.learning.models.load`](../../../tff/learning/models/load) will *not* be the same Python type as the saved model. If the model serialized using this method is a subclass of [`tff.learning.models.VariableModel`](../../../tff/learning/models/VariableModel), that subclass is *not* returned. All method behavior is retained, but the Python type does not cross serialization boundaries. The return type of `metric_finalizers` will be an `OrderedDict` of `str` to [`tff.tensorflow.computation`](../../../tff/tensorflow/computation) (annotated TFF computations), which may differ from that of the model before serialization.

#### Args

| Argument     | Description |
|--------------|-------------|
| `model`      | The [`tff.learning.models.VariableModel`](../../../tff/learning/models/VariableModel) to save. |
| `path`       | The `str` directory path to serialize the model to. |
| `input_type` | An optional structure of [`tf.TensorSpec`](https://www.tensorflow.org/api_docs/python/tf/TensorSpec)s representing the expected input of `model.predict_on_batch`, overriding `model.input_spec`. Typically this is similar to `model.input_spec`, with any example labels removed. If `None`, defaults to `model.input_spec['x']` if the input spec is a mapping, otherwise to `model.input_spec[0]`. |

Last updated 2024-09-20 UTC.
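The default behavior for `input_type` described above can be sketched with a small helper. Note that `default_input_type` is a hypothetical name used here for illustration, not part of the TFF API; it only mirrors the documented fallback logic:

```python
from collections.abc import Mapping


def default_input_type(input_spec):
    """Hypothetical sketch of the documented default for `input_type`.

    If the model's input spec is a mapping (e.g. an OrderedDict with 'x'
    for features and 'y' for labels), the features entry 'x' is used;
    otherwise the first element of the sequence-like spec is used.
    """
    if isinstance(input_spec, Mapping):
        return input_spec['x']
    return input_spec[0]


# With a mapping-style spec, the 'x' (features) entry is selected:
mapping_spec = {'x': 'features_spec', 'y': 'labels_spec'}
print(default_input_type(mapping_spec))  # features_spec

# With a tuple-style spec, the first element is selected:
tuple_spec = ('features_spec', 'labels_spec')
print(default_input_type(tuple_spec))  # features_spec
```

In practice the entries would be `tf.TensorSpec` objects rather than strings; the strings here only make the selection logic visible.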