tf.keras.layers.TimeDistributed
This wrapper allows applying a layer to every temporal slice of an input.
Inherits From: Wrapper, Layer, Module
tf.keras.layers.TimeDistributed(
    layer, **kwargs
)
Every input should be at least 3D, and the dimension of index one of the
first input will be considered to be the temporal dimension.
Consider a batch of 32 video samples, where each sample is a 128x128 RGB
image with channels_last data format, across 10 timesteps.
The batch input shape is (32, 10, 128, 128, 3).
You can then use TimeDistributed to apply the same Conv2D layer to each
of the 10 timesteps, independently:
inputs = tf.keras.Input(shape=(10, 128, 128, 3))
conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3))
outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs)
outputs.shape
TensorShape([None, 10, 126, 126, 64])
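The time dimension passes through unchanged, while each spatial dimension shrinks from 128 to 126 because Conv2D defaults to 'valid' padding, and 64 is the filter count. A minimal sketch of that arithmetic (the helper name conv_output_size is illustrative, not part of the API):

```python
# For 'valid' (no padding) convolution with stride 1, each spatial
# dimension shrinks by kernel_size - 1, so 128 -> 126 with a 3x3 kernel.
def conv_output_size(n, kernel_size, stride=1):
    # Standard output-length formula for 'valid' padding.
    return (n - kernel_size) // stride + 1

print(conv_output_size(128, 3))  # -> 126
```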
Because TimeDistributed applies the same instance of Conv2D to each of
the timesteps, the same set of weights is used at each timestep.
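The weight-sharing behavior can be sketched in plain NumPy, with a single weight matrix standing in for the wrapped layer (the names here are illustrative, not TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 10, 8))   # (batch, time, features)
w = rng.standard_normal((8, 4))        # one shared weight set for all timesteps

# Applying the same weights to every temporal slice...
per_step = np.stack([x[:, t, :] @ w for t in range(x.shape[1])], axis=1)

# ...is equivalent to merging the batch and time dimensions, applying the
# layer once, and splitting the result back, which is essentially what
# TimeDistributed does internally.
merged = (x.reshape(-1, 8) @ w).reshape(32, 10, 4)
assert np.allclose(per_step, merged)
```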
Args

layer: a tf.keras.layers.Layer instance.

Call arguments

inputs: Input tensor of shape (batch, time, ...), or nested tensors,
each of which has shape (batch, time, ...).

training: Python boolean indicating whether the layer should behave in
training mode or in inference mode. This argument is passed to the
wrapped layer (only if the layer supports this argument).

mask: Binary tensor of shape (samples, timesteps) indicating whether
a given timestep should be masked. This argument is passed to the
wrapped layer (only if the layer supports this argument).

Raises

ValueError: If not initialized with a tf.keras.layers.Layer instance.
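For sequences padded to a common length, the mask marks which timesteps carry real data. A small NumPy sketch of building such a (samples, timesteps) mask (the sample lengths are hypothetical):

```python
import numpy as np

# Two samples with true lengths 3 and 5, padded to 5 timesteps.
lengths = np.array([3, 5])
timesteps = 5

# True where the timestep index is within the sample's real length.
mask = np.arange(timesteps)[None, :] < lengths[:, None]
print(mask.astype(int))
# [[1 1 1 0 0]
#  [1 1 1 1 1]]
```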
Last updated 2024-01-23 UTC.