Inherits From: RNNCell, Layer, Layer, Module
tf.compat.v1.nn.rnn_cell.BasicLSTMCell(
num_units,
forget_bias=1.0,
state_is_tuple=True,
activation=None,
reuse=None,
name=None,
dtype=None,
**kwargs
)
Basic LSTM recurrent network cell.
The implementation is based on: http://arxiv.org/abs/1409.2329.
We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.
It does not allow cell clipping or a projection layer, and it does not use peephole connections: it is the basic baseline.
For advanced models, please use the full tf.compat.v1.nn.rnn_cell.LSTMCell that follows.
Note that this cell is not optimized for performance. Please use
tf.compat.v1.keras.layers.CuDNNLSTM for better performance on GPU, or
tf.raw_ops.LSTMBlockCell for better performance on CPU.
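The update this cell computes can be sketched in plain numpy. This is an illustrative reimplementation, not TensorFlow's code; it assumes the standard basic-LSTM formulation with gates ordered (i, j, f, o) and a single fused kernel of shape [input_dim + num_units, 4 * num_units], and it shows where forget_bias enters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def basic_lstm_step(inputs, c, h, kernel, bias, forget_bias=1.0):
    """One step of a basic LSTM cell (illustrative sketch).

    inputs: [batch, input_dim], c/h: [batch, num_units],
    kernel: [input_dim + num_units, 4 * num_units], bias: [4 * num_units].
    """
    # One fused matmul over the concatenated input and previous hidden state.
    gates = np.concatenate([inputs, h], axis=-1) @ kernel + bias
    # Split into input gate, candidate ("j"), forget gate, output gate.
    i, j, f, o = np.split(gates, 4, axis=-1)
    # forget_bias is added to the forget-gate pre-activation, so with
    # zero-initialized biases the cell starts out mostly remembering:
    # sigmoid(0 + 1.0) ~= 0.73 of the old cell state is kept.
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_c, new_h
```

With an all-zero kernel and bias, the forget gate evaluates to sigmoid(forget_bias), which is why a positive default of 1.0 reduces forgetting early in training.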
Methods
apply
apply(
*args, **kwargs
)
get_initial_state
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
get_losses_for
get_losses_for(
inputs
)
Retrieves losses relevant to a specific set of inputs.
| Args | |
|---|---|
| `inputs` | Input tensor or list/tuple of input tensors. |

| Returns | |
|---|---|
| List of loss tensors of the layer that depend on `inputs`. |
get_updates_for
get_updates_for(
inputs
)
Retrieves updates relevant to a specific set of inputs.
| Args | |
|---|---|
| `inputs` | Input tensor or list/tuple of input tensors. |

| Returns | |
|---|---|
| List of update ops of the layer that depend on `inputs`. |
zero_state
zero_state(
batch_size, dtype
)
Return zero-filled state tensor(s).
| Args | |
|---|---|
| `batch_size` | int, float, or unit Tensor representing the batch size. |
| `dtype` | the data type to use for the state. |

| Returns | |
|---|---|
| If `state_size` is an int or `TensorShape`, then the return value is an N-D tensor of shape `[batch_size, state_size]` filled with zeros. If `state_size` is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes `[batch_size, s]` for each `s` in `state_size`. |
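The shape logic of a zero state can be sketched without TensorFlow. The helper below is a hypothetical numpy stand-in, assuming the int-or-nested-structure `state_size` contract described above (with `state_is_tuple=True`, a BasicLSTMCell's state_size is effectively the pair `(num_units, num_units)` for the cell and hidden states):

```python
import numpy as np

def zero_state_sketch(batch_size, state_size, dtype=np.float32):
    """Return zero-filled state arrays mirroring zero_state's contract.

    An int state_size yields one [batch_size, state_size] array of zeros;
    a nested list/tuple yields the same structure of 2-D zero arrays.
    """
    if isinstance(state_size, (list, tuple)):
        # Recurse, preserving the container type of the nested structure.
        return type(state_size)(
            zero_state_sketch(batch_size, s, dtype) for s in state_size
        )
    return np.zeros((batch_size, state_size), dtype=dtype)
```

For example, `zero_state_sketch(32, (128, 128))` produces a `(c, h)` pair of two zero arrays of shape `[32, 128]`, matching the tuple state this cell uses when `state_is_tuple=True`.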