tf.keras.layers.Lambda
Wraps arbitrary expressions as a `Layer` object.

Inherits From: `Layer`, `Module`
tf.keras.layers.Lambda(
    function, output_shape=None, mask=None, arguments=None, **kwargs
)
The `Lambda` layer exists so that arbitrary expressions can be used as a `Layer` when constructing `Sequential` and Functional API models. `Lambda` layers are best suited for simple operations or quick experimentation. For more advanced use cases, follow this guide (https://www.tensorflow.org/guide/keras/custom_layers_and_models) for subclassing `tf.keras.layers.Layer`.

Warning: `tf.keras.layers.Lambda` layers have (de)serialization limitations!
The main reason to subclass `tf.keras.layers.Layer` instead of using a `Lambda` layer is saving and inspecting a Model. `Lambda` layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their `get_config` method. Models that rely on subclassed Layers are also often easier to visualize and reason about.
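For comparison, here is a minimal sketch of a portable subclassed layer; the `Scale` class and its `factor` argument are illustrative, not part of the Keras API:

import tensorflow as tf

class Scale(tf.keras.layers.Layer):
  """Multiplies its input by a fixed factor."""

  def __init__(self, factor=2.0, **kwargs):
    super().__init__(**kwargs)
    self.factor = factor

  def call(self, inputs):
    return inputs * self.factor

  def get_config(self):
    # Returning the constructor arguments lets Keras recreate the layer
    # from its config, with no Python bytecode involved.
    config = super().get_config()
    config.update({'factor': self.factor})
    return config

A model built from such layers can be saved and reloaded in a different environment, as long as the class is importable there.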
Examples:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda

model = tf.keras.Sequential()

# Add an x -> x^2 layer.
model.add(Lambda(lambda x: x ** 2))

# Add a layer that returns the concatenation of the
# positive part of the input and the opposite of the
# negative part.
def antirectifier(x):
  x -= K.mean(x, axis=1, keepdims=True)
  x = K.l2_normalize(x, axis=1)
  pos = K.relu(x)
  neg = K.relu(-x)
  return K.concatenate([pos, neg], axis=1)

model.add(Lambda(antirectifier))
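The same callable also works in the Functional API; a minimal sketch, with an arbitrarily chosen input shape:

inputs = tf.keras.Input(shape=(4,))
outputs = Lambda(antirectifier)(inputs)
functional_model = tf.keras.Model(inputs, outputs)
functional_model(tf.random.normal((2, 4)))  # output shape: (2, 8)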
Variables:

While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For instance, consider the following layer:

scale = tf.Variable(1.)
scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)

Because `scale_layer` does not directly track the `scale` variable, it will not appear in `scale_layer.trainable_weights` and will therefore not be trained if `scale_layer` is used in a Model.

A better pattern is to write a subclassed Layer:

class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(ScaleLayer, self).__init__()
    self.scale = tf.Variable(1.)

  def call(self, inputs):
    return inputs * self.scale
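As a quick check (the exact variable reprs will vary by TensorFlow version), the subclassed layer tracks its variable while the Lambda does not:

print(scale_layer.trainable_weights)   # [] -- `scale` is not tracked
print(ScaleLayer().trainable_weights)  # [<tf.Variable 'Variable:0' shape=() ...>]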
In general, `Lambda` layers can be convenient for simple stateless computation, but anything more complex should use a subclassed `Layer` instead.
Args:

`function`: The function to be evaluated. Takes the input tensor as its first argument.

`output_shape`: Expected output shape from the function. This argument can be inferred if not explicitly provided. Can be a tuple or a function. If a tuple, it specifies only the shape from the first (non-batch) dimension onward; the sample dimension is assumed to be either the same as the input's (`output_shape = (input_shape[0],) + output_shape`) or, if the input's sample dimension is `None`, `None` as well (`output_shape = (None,) + output_shape`). If a function, it specifies the entire shape as a function of the input shape: `output_shape = f(input_shape)`.

`mask`: Either `None` (indicating no masking), a callable with the same signature as the `compute_mask` layer method, or a tensor that will be returned as the output mask regardless of the input.

`arguments`: Optional dictionary of keyword arguments to be passed to the function.
Input shape: Arbitrary. Use the keyword argument `input_shape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Specified by the `output_shape` argument.
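As a sketch of `output_shape` given as a function, assuming the `antirectifier` defined above (which doubles the feature dimension by concatenating along axis 1):

Lambda(antirectifier,
       output_shape=lambda input_shape: (input_shape[0], input_shape[1] * 2))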