# tf.contrib.rnn.IndyGRUCell

Independently Gated Recurrent Unit cell.

Inherits From: [`LayerRNNCell`](../../../tf/contrib/rnn/LayerRNNCell)

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/rnn/python/ops/rnn_cell.py#L3155-L3275)

    tf.contrib.rnn.IndyGRUCell(
        num_units, activation=None, reuse=None, kernel_initializer=None,
        bias_initializer=None, name=None, dtype=None
    )

Based on IndRNNs (https://arxiv.org/abs/1803.04831) and similar to GRUCell,
yet with the \(U_r\), \(U_z\), and \(U\) matrices in equations 5, 6, and
8 of http://arxiv.org/abs/1406.1078 respectively replaced by diagonal
matrices, i.e. a Hadamard product with a single vector:

$$r_j = \sigma\left([\mathbf W_r \mathbf x]_j + [\mathbf u_r \circ \mathbf h_{(t-1)}]_j\right)$$
$$z_j = \sigma\left([\mathbf W_z \mathbf x]_j + [\mathbf u_z \circ \mathbf h_{(t-1)}]_j\right)$$
$$\tilde{h}^{(t)}_j = \phi\left([\mathbf W \mathbf x]_j + [\mathbf u \circ \mathbf r \circ \mathbf h_{(t-1)}]_j\right)$$

where \(\circ\) denotes the Hadamard operator. This means that each IndyGRU
node sees only its own state, as opposed to seeing all states in the same
layer.
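To make the diagonal recurrence concrete, here is a minimal NumPy sketch of one IndyGRU step (the helper names are illustrative and bias terms are omitted; this is not the TensorFlow implementation). Because the recurrent weights \(\mathbf u_r\), \(\mathbf u_z\), \(\mathbf u\) are vectors rather than matrices, unit *j* reads only \(h_{(t-1),j}\):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def indy_gru_step(x, h_prev, W_r, W_z, W, u_r, u_z, u):
    """One IndyGRU step: recurrent weights u_r, u_z, u are vectors
    (diagonal recurrence matrices), so unit j only reads h_prev[j]."""
    r = sigmoid(W_r @ x + u_r * h_prev)          # reset gate (eq. 5, diagonal U_r)
    z = sigmoid(W_z @ x + u_z * h_prev)          # update gate (eq. 6, diagonal U_z)
    h_tilde = np.tanh(W @ x + u * (r * h_prev))  # candidate state (eq. 8, Hadamard products)
    return z * h_prev + (1.0 - z) * h_tilde      # standard GRU interpolation

# Perturbing a single state unit changes only that unit's output,
# since no unit sees any other unit's state.
rng = np.random.default_rng(0)
n_in, n_units = 3, 5
W_r, W_z, W = (rng.standard_normal((n_units, n_in)) for _ in range(3))
u_r, u_z, u = (rng.standard_normal(n_units) for _ in range(3))
x = rng.standard_normal(n_in)
h = rng.standard_normal(n_units)
h_bumped = h.copy()
h_bumped[0] += 1.0
out = indy_gru_step(x, h, W_r, W_z, W, u_r, u_z, u)
out_bumped = indy_gru_step(x, h_bumped, W_r, W_z, W, u_r, u_z, u)
```

With a full GRU, bumping `h[0]` would perturb every output unit through the dense recurrent matrices; here only `out[0]` changes.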
| Args | |
|---|---|
| `num_units` | int, The number of units in the GRU cell. |
| `activation` | Nonlinearity to use. Default: `tanh`. |
| `reuse` | (optional) Python boolean describing whether to reuse variables in an existing scope. If not `True`, and the existing scope already has the given variables, an error is raised. |
| `kernel_initializer` | (optional) The initializer to use for the weight matrices applied to the input. |
| `bias_initializer` | (optional) The initializer to use for the bias. |
| `name` | String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require `reuse=True` in such cases. |
| `dtype` | Default dtype of the layer (default of `None` means use the type of the first input). Required when `build` is called before `call`. |
| Attributes | |
|---|---|
| `graph` | DEPRECATED. This property will be removed in a future version. Instructions for updating: stop using this property because `tf.layers` layers no longer track their graph. |
| `output_size` | Integer or TensorShape: size of outputs produced by this cell. |
| `scope_name` | |
| `state_size` | Size(s) of state(s) used by this cell. Can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes. |
## Methods

### `get_initial_state`

[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/ops/rnn_cell_impl.py#L281-L309)

    get_initial_state(
        inputs=None, batch_size=None, dtype=None
    )

### `zero_state`

[View source](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/ops/rnn_cell_impl.py#L311-L340)

    zero_state(
        batch_size, dtype
    )

Return zero-filled state tensor(s).

| Args | |
|---|---|
| `batch_size` | int, float, or unit Tensor representing the batch size. |
| `dtype` | The data type to use for the state. |

| Returns |
|---|
| If `state_size` is an int or TensorShape, then the return value is an N-D tensor of shape `[batch_size, state_size]` filled with zeros. If `state_size` is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes `[batch_size, s]` for each `s` in `state_size`. |
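The return contract above can be sketched in NumPy (a hypothetical stand-in, not the TensorFlow implementation): an int `state_size` yields one zero tensor, while a nested structure is mirrored recursively.

```python
import numpy as np

def zero_state(batch_size, state_size, dtype=np.float32):
    """Mimic the zero_state contract: zeros of shape [batch_size, state_size],
    recursing over nested lists/tuples of state sizes."""
    if isinstance(state_size, (list, tuple)):
        # Preserve the nesting structure (list stays list, tuple stays tuple).
        return type(state_size)(
            zero_state(batch_size, s, dtype) for s in state_size
        )
    return np.zeros((batch_size, state_size), dtype=dtype)

flat = zero_state(32, 10)           # one array of shape (32, 10), all zeros
nested = zero_state(32, (10, 20))   # tuple of arrays (32, 10) and (32, 20)
```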
Last updated 2020-10-01 UTC.