float, The bias added to forget gates (see above).
input_size
Deprecated and unused.
activation
Activation function of the inner states.
layer_norm
If True, layer normalization will be applied.
norm_gain
float, The layer normalization gain initial value. If
layer_norm has been set to False, this argument will be ignored.
norm_shift
float, The layer normalization shift initial value. If
layer_norm has been set to False, this argument will be ignored.
dropout_keep_prob
unit Tensor or float between 0 and 1 representing the recurrent dropout
keep probability. If it is a float equal to 1.0, no dropout will be
applied.
dropout_prob_seed
(optional) integer, the randomness seed.
reuse
(optional) Python boolean describing whether to reuse variables
in an existing scope. If not True, and the existing scope already has
the given variables, an error is raised.
Attributes
graph
Deprecated. This property will be removed in a future version; stop using
it, because tf.layers layers no longer track their graph.
output_size
Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size
size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers
or TensorShapes.
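As a hedged illustration of what these attributes report for this cell (assuming TensorFlow 1.x; num_units=128 is an arbitrary choice for the sketch):

    import tensorflow as tf

    cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=128)

    print(cell.output_size)  # 128: size of the per-step output
    print(cell.state_size)   # LSTMStateTuple(c=128, h=128): a tuple of sizes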
Methods
get_initial_state(inputs=None, batch_size=None, dtype=None)
zero_state(batch_size, dtype)
Return zero-filled state tensor(s).
Args
batch_size
int, float, or unit Tensor representing the batch size.
dtype
the data type to use for the state.
Returns
If state_size is an int or TensorShape, then the return value is an
N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is
a nested list or tuple (of the same structure) of 2-D tensors with
the shapes [batch_size, s] for each s in state_size.
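A short usage sketch of zero_state (assuming TensorFlow 1.x; the batch size and num_units are illustrative). Because this cell's state_size is a tuple (an LSTMStateTuple), the result is a matching tuple of 2-D zero tensors:

    import tensorflow as tf

    cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=128)

    # state_size is LSTMStateTuple(c=128, h=128), so zero_state returns an
    # LSTMStateTuple of zero tensors, each of shape [batch_size, 128].
    initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)
    # initial_state.c.shape == initial_state.h.shape == (32, 128)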
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.contrib.rnn.LayerNormBasicLSTMCell\n\n\u003cbr /\u003e\n\n|------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/rnn/python/ops/rnn_cell.py#L1342-L1463) |\n\nLSTM unit with layer normalization and recurrent dropout.\n\nInherits From: [`RNNCell`](../../../tf/nn/rnn_cell/RNNCell) \n\n tf.contrib.rnn.LayerNormBasicLSTMCell(\n num_units, forget_bias=1.0, input_size=None, activation=tf.math.tanh,\n layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0,\n dropout_prob_seed=None, reuse=None\n )\n\nThis class adds layer normalization and recurrent dropout to a\nbasic LSTM unit. Layer normalization implementation is based on:\n\n\u003chttps://arxiv.org/abs/1607.06450\u003e\n\n\"Layer Normalization\"\nJimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton\n\nand is applied before the internal nonlinearities.\nRecurrent dropout is base on:\n\n\u003chttps://arxiv.org/abs/1603.05118\u003e\n\n\"Recurrent Dropout without Memory Loss\"\nStanislau Semeniuta, Aliaksei Severyn, Erhardt Barth.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `num_units` | int, The number of units in the LSTM cell. |\n| `forget_bias` | float, The bias added to forget gates (see above). |\n| `input_size` | Deprecated and unused. |\n| `activation` | Activation function of the inner states. |\n| `layer_norm` | If `True`, layer normalization will be applied. |\n| `norm_gain` | float, The layer normalization gain initial value. If `layer_norm` has been set to `False`, this argument will be ignored. |\n| `norm_shift` | float, The layer normalization shift initial value. If `layer_norm` has been set to `False`, this argument will be ignored. |\n| `dropout_keep_prob` | unit Tensor or float between 0 and 1 representing the recurrent dropout probability value. If float and 1.0, no dropout will be applied. |\n| `dropout_prob_seed` | (optional) integer, the randomness seed. |\n| `reuse` | (optional) Python boolean describing whether to reuse variables in an existing scope. If not `True`, and the existing scope already has the given variables, an error is raised. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `graph` | DEPRECATED FUNCTION \u003cbr /\u003e | **Warning:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Stop using this property because tf.layers layers no longer track their graph. |\n| `output_size` | Integer or TensorShape: size of outputs produced by this cell. |\n| `scope_name` | \u003cbr /\u003e |\n| `state_size` | size(s) of state(s) used by this cell. \u003cbr /\u003e It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. 
|\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `get_initial_state`\n\n[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/ops/rnn_cell_impl.py#L281-L309) \n\n get_initial_state(\n inputs=None, batch_size=None, dtype=None\n )\n\n### `zero_state`\n\n[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/ops/rnn_cell_impl.py#L311-L340) \n\n zero_state(\n batch_size, dtype\n )\n\nReturn zero-filled state tensor(s).\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ||\n|--------------|---------------------------------------------------------|\n| `batch_size` | int, float, or unit Tensor representing the batch size. |\n| `dtype` | the data type to use for the state. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ||\n|---|---|\n| If `state_size` is an int or TensorShape, then the return value is a `N-D` tensor of shape `[batch_size, state_size]` filled with zeros. \u003cbr /\u003e If `state_size` is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of `2-D` tensors with the shapes `[batch_size, s]` for each s in `state_size`. ||\n\n\u003cbr /\u003e"]]