1D Convolutional LSTM.
Inherits From: RNN, Layer, Operation
tf.keras.layers.ConvLSTM1D(
    filters,
    kernel_size,
    strides=1,
    padding='valid',
    data_format=None,
    dilation_rate=1,
    activation='tanh',
    recurrent_activation='sigmoid',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros',
    unit_forget_bias=True,
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.0,
    recurrent_dropout=0.0,
    seed=None,
    return_sequences=False,
    return_state=False,
    go_backwards=False,
    stateful=False,
    **kwargs
)
Similar to an LSTM layer, but the input transformations and recurrent transformations are both convolutional.
| Args | |
|---|---|
| filters | int, the dimension of the output space (the number of filters in the convolution). | 
| kernel_size | int or tuple/list of 1 integer, specifying the size of the convolution window. | 
| strides | int or tuple/list of 1 integer, specifying the stride length
of the convolution. `strides > 1` is incompatible with `dilation_rate > 1`. | 
| padding | string, `"valid"` or `"same"` (case-insensitive). `"valid"` means no padding. `"same"` results in padding evenly to
the left/right of the input such that the output has the
same length as the input. | 
| data_format | string, either `"channels_last"` or `"channels_first"`.
The ordering of the dimensions in the inputs. `"channels_last"` corresponds to inputs with shape `(batch, steps, features)` while `"channels_first"` corresponds to inputs with shape `(batch, features, steps)`. It defaults to the `image_data_format` value found in your Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be `"channels_last"`. | 
| dilation_rate | int or tuple/list of 1 integer, specifying the dilation rate to use for dilated convolution. | 
| activation | Activation function to use. By default, the hyperbolic tangent
activation function is applied (`tanh(x)`). | 
| recurrent_activation | Activation function to use for the recurrent step. | 
| use_bias | Boolean, whether the layer uses a bias vector. | 
| kernel_initializer | Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. | 
| recurrent_initializer | Initializer for the `recurrent_kernel` weights
matrix, used for the linear transformation of the recurrent state. | 
| bias_initializer | Initializer for the bias vector. | 
| unit_forget_bias | Boolean. If `True`, add 1 to the bias of
the forget gate at initialization.
Use in combination with `bias_initializer="zeros"`.
This is recommended in Jozefowicz et al., 2015. | 
| kernel_regularizer | Regularizer function applied to the `kernel` weights
matrix. | 
| recurrent_regularizer | Regularizer function applied to the `recurrent_kernel` weights matrix. | 
| bias_regularizer | Regularizer function applied to the bias vector. | 
| activity_regularizer | Regularizer function applied to the output of the layer (its "activation"). | 
| kernel_constraint | Constraint function applied to the `kernel` weights
matrix. | 
| recurrent_constraint | Constraint function applied to the `recurrent_kernel` weights matrix. | 
| bias_constraint | Constraint function applied to the bias vector. | 
| dropout | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. | 
| recurrent_dropout | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. | 
| seed | Random seed for dropout. | 
| return_sequences | Boolean. Whether to return the last output
in the output sequence, or the full sequence. Default: False. | 
| return_state | Boolean. Whether to return the last state in addition
to the output. Default: False. | 
| go_backwards | Boolean (default: False).
If `True`, process the input sequence backwards and return the
reversed sequence. | 
| stateful | Boolean (default False). If True, the last state
for each sample at index i in a batch will be used as initial
state for the sample of index i in the following batch. | 
| unroll | Boolean (default: False).
If `True`, the network will be unrolled,
else a symbolic loop will be used.
Unrolling can speed up an RNN,
although it tends to be more memory-intensive.
Unrolling is only suitable for short sequences. | 
Input shape:
- If `data_format="channels_first"`: 4D tensor with shape: `(samples, time, channels, rows)`
- If `data_format="channels_last"`: 4D tensor with shape: `(samples, time, rows, channels)`
Output shape:
- If `return_state`: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each a 3D tensor with shape: `(samples, filters, new_rows)` if `data_format='channels_first'` or shape: `(samples, new_rows, filters)` if `data_format='channels_last'`. `rows` values might have changed due to padding.
- If `return_sequences`: 4D tensor with shape: `(samples, timesteps, filters, new_rows)` if `data_format='channels_first'` or shape: `(samples, timesteps, new_rows, filters)` if `data_format='channels_last'`.
- Else, 3D tensor with shape: `(samples, filters, new_rows)` if `data_format='channels_first'` or shape: `(samples, new_rows, filters)` if `data_format='channels_last'`.
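The shapes above can be checked with a minimal sketch; the dimensions below (4 samples, 10 timesteps, 32 spatial rows, 3 channels) are arbitrary illustrations, not values from this page:

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: 4 samples, 10 timesteps, 32 spatial rows,
# 3 channels (data_format="channels_last").
inputs = np.random.random((4, 10, 32, 3)).astype("float32")

# Default return_sequences=False: a single 3D output tensor.
layer = tf.keras.layers.ConvLSTM1D(filters=16, kernel_size=3, padding="same")
print(layer(inputs).shape)  # (4, 32, 16) -> (samples, new_rows, filters)

# return_sequences=True: the full 4D sequence of per-timestep outputs.
seq_layer = tf.keras.layers.ConvLSTM1D(
    filters=16, kernel_size=3, padding="same", return_sequences=True
)
print(seq_layer(inputs).shape)  # (4, 10, 32, 16)
```

With `padding="same"`, `new_rows` equals the input's spatial size; with `"valid"` it shrinks by `kernel_size - 1`.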
References:
- Shi et al., 2015 (the current implementation does not include the feedback loop on the cells output).
Methods
from_config
@classmethod
from_config(
    config
)
Creates a layer from its config.
This method is the reverse of get_config,
capable of instantiating the same layer from the config
dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by set_weights).
| Args | |
|---|---|
| config | A Python dictionary, typically the output of get_config. | 
| Returns | |
|---|---|
| A layer instance. | 
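For instance, a config produced by `get_config` can rebuild an equivalent, freshly initialized layer; the hyperparameters below are arbitrary illustrations:

```python
import tensorflow as tf

layer = tf.keras.layers.ConvLSTM1D(filters=8, kernel_size=3, padding="same")
config = layer.get_config()

# from_config is the inverse of get_config: it restores the same
# hyperparameters, but not weights or layer connectivity.
clone = tf.keras.layers.ConvLSTM1D.from_config(config)
print(clone.get_config() == config)  # True
```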
get_initial_state
get_initial_state(
    batch_size
)
inner_loop
inner_loop(
    sequences, initial_state, mask, training=False
)
reset_state
reset_state()
reset_states
reset_states()
symbolic_call
symbolic_call(
    *args, **kwargs
)