Wraps arbitrary expressions as a Layer object.
tf.keras.layers.Lambda(
    function, output_shape=None, mask=None, arguments=None, **kwargs
)
The Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. Lambda layers are best suited for simple operations or quick experimentation. For more advanced use cases, follow this guide for subclassing tf.keras.layers.Layer.
The main reason to subclass tf.keras.layers.Layer instead of using a Lambda layer is saving and inspecting a Model. Lambda layers are saved by serializing the Python bytecode, which is fundamentally non-portable. They should only be loaded in the same environment where they were saved. Subclassed layers can be saved in a more portable way by overriding their get_config method. Models that rely on subclassed Layers are also often easier to visualize and reason about.
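As a minimal sketch of that pattern (the PowerLayer name and its power argument are illustrative, not part of the API), a subclassed layer can expose its constructor arguments through get_config so it can be recreated from its config rather than from serialized bytecode:

import tensorflow as tf

class PowerLayer(tf.keras.layers.Layer):
  """Illustrative subclassed layer that raises its inputs to a fixed power."""

  def __init__(self, power=2, **kwargs):
    super(PowerLayer, self).__init__(**kwargs)
    self.power = power

  def call(self, inputs):
    return inputs ** self.power

  def get_config(self):
    # Returning the constructor arguments makes the layer portable:
    # it can be rebuilt later via PowerLayer.from_config(config).
    config = super(PowerLayer, self).get_config()
    config.update({'power': self.power})
    return config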
Examples:
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda

model = tf.keras.Sequential()

# add a x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))

# add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part
def antirectifier(x):
  x -= K.mean(x, axis=1, keepdims=True)
  x = K.l2_normalize(x, axis=1)
  pos = K.relu(x)
  neg = K.relu(-x)
  return K.concatenate([pos, neg], axis=1)

model.add(Lambda(antirectifier))
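The same layers work in the Functional API. A minimal sketch, reusing the imports and the antirectifier function from the snippet above (the input size of 16 is arbitrary, chosen only for illustration):

inputs = tf.keras.Input(shape=(16,))
x = Lambda(lambda t: t ** 2)(inputs)
outputs = Lambda(antirectifier)(x)
functional_model = tf.keras.Model(inputs, outputs)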
Variables:
While it is possible to use Variables with Lambda layers, this practice is discouraged as it can easily lead to bugs. For instance, consider the following layer:
scale = tf.Variable(1.)
scale_layer = tf.keras.layers.Lambda(lambda x: x * scale)
Because scale_layer does not directly track the scale variable, it will not appear in scale_layer.trainable_weights and will therefore not be trained if scale_layer is used in a Model.
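A quick check illustrates the problem (a sketch; the empty list is the expected result given the behavior described above):

_ = scale_layer(tf.ones((2, 2)))      # call the layer once so it is built
# The externally created `scale` variable is not tracked by the Lambda layer:
print(scale_layer.trainable_weights)  # expected output: []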
A better pattern is to write a subclassed Layer:
class ScaleLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(ScaleLayer, self).__init__()
    self.scale = tf.Variable(1.)

  def call(self, inputs):
    return inputs * self.scale
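Used the same way, the subclassed layer does track its variable (a short usage sketch):

better_scale = ScaleLayer()
_ = better_scale(tf.ones((2, 2)))
# The `scale` variable now appears in trainable_weights and will be trained:
print(better_scale.trainable_weights)  # expected: a list containing the `scale` variable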
In general, Lambda layers can be convenient for simple stateless computation, but anything more complex should use a subclassed Layer instead.
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of
integers, does not include the samples axis) when using this layer as the
first layer in a model.
Output shape: Specified by the output_shape argument.
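When the output shape is given explicitly rather than inferred, output_shape describes the per-sample shape and excludes the batch dimension. A minimal sketch of the tuple form (the sizes 16 and 8 are illustrative):

inputs = tf.keras.Input(shape=(16,))
# Keep only the first 8 features; output_shape gives the per-sample output shape.
outputs = tf.keras.layers.Lambda(lambda x: x[:, :8], output_shape=(8,))(inputs)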