tf.compat.v1.keras.layers.CuDNNLSTM
Fast LSTM implementation backed by cuDNN.
Inherits From: RNN, Layer, Module
tf.compat.v1.keras.layers.CuDNNLSTM(
units,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros',
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
return_sequences=False,
return_state=False,
go_backwards=False,
stateful=False,
**kwargs
)
More information about cuDNN can be found on the NVIDIA
developer website.
Can only be run on GPU.
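The layer's parameter count can be sketched with a bit of arithmetic (an illustration based on the cuDNN RNN weight format, not taken from this page): cuDNN kernels keep separate input-side and recurrent-side bias vectors, so the bias contributes 8 * units parameters rather than the 4 * units of the standard Keras LSTM.

```python
def cudnn_lstm_param_count(input_dim: int, units: int) -> int:
    """Approximate parameter count for a cuDNN-backed LSTM layer.

    cuDNN stores separate input and recurrent biases, so the bias term
    is 2 * 4 * units rather than the 4 * units of the standard LSTM.
    """
    kernel = 4 * units * input_dim        # input -> 4 gates (i, f, c, o)
    recurrent_kernel = 4 * units * units  # hidden state -> 4 gates
    biases = 8 * units                    # two bias vectors per gate
    return kernel + recurrent_kernel + biases

# e.g. input_dim=16, units=32:
# 4*32*16 + 4*32*32 + 8*32 = 2048 + 4096 + 256 = 6400
```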
Args
  units: Positive integer, dimensionality of the output space.
  kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
  unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to True will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.
  recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
  bias_initializer: Initializer for the bias vector.
  kernel_regularizer: Regularizer function applied to the kernel weights matrix.
  recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix.
  bias_regularizer: Regularizer function applied to the bias vector.
  activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
  kernel_constraint: Constraint function applied to the kernel weights matrix.
  recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix.
  bias_constraint: Constraint function applied to the bias vector.
  return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  return_state: Boolean. Whether to return the last state in addition to the output.
  go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch.
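The interaction of return_sequences and return_state can be summarized with a small shape helper (a sketch of the standard Keras RNN output convention, not code from this page): the main output is either the last step or the full sequence, and with return_state=True the final hidden state h and cell state c are returned as well.

```python
def lstm_output_shapes(batch, timesteps, units,
                       return_sequences=False, return_state=False):
    """Output shape(s) of an LSTM-style layer for the given flags.

    Main output: (batch, units) for the last step, or
    (batch, timesteps, units) for the full sequence. With
    return_state=True, the final h and c states (each (batch, units))
    are appended.
    """
    out = (batch, timesteps, units) if return_sequences else (batch, units)
    if return_state:
        return [out, (batch, units), (batch, units)]
    return out

# lstm_output_shapes(8, 10, 32)                        -> (8, 32)
# lstm_output_shapes(8, 10, 32, return_sequences=True) -> (8, 10, 32)
```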
Methods
get_losses_for
View source
get_losses_for(
inputs=None
)
reset_states
View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer.
Can only be used when the RNN layer is constructed with stateful=True.
Args:
  states: Numpy arrays that contain the values for the initial states, which will be fed to the cell at the first time step. When the value is None, zero-filled numpy arrays will be created based on the cell state size.
Raises
  AttributeError: When the RNN layer is not stateful.
  ValueError: When the batch size of the RNN layer is unknown.
  ValueError: When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise.
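The documented error conditions can be illustrated with a small validation sketch (a hypothetical mimic of the checks reset_states performs, not the actual Keras source):

```python
import numpy as np

def validate_reset_states(states, batch_size, state_sizes,
                          stateful=True, dtype=np.float32):
    """Sketch of the reset_states checks described above.

    Raises AttributeError when the layer is not stateful, and
    ValueError for an unknown batch size or incompatible state arrays.
    """
    if not stateful:
        raise AttributeError("Layer must be stateful to reset states.")
    if batch_size is None:
        raise ValueError("The batch size of the RNN layer is unknown.")
    if states is None:
        # None -> zero-filled arrays, one per state tensor (h and c).
        return [np.zeros((batch_size, size), dtype=dtype)
                for size in state_sizes]
    for s, size in zip(states, state_sizes):
        if s.shape != (batch_size, size) or s.dtype != dtype:
            raise ValueError("State array is not compatible with the "
                             "RNN layer state (size or dtype mismatch).")
    return list(states)
```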
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2023-10-06 UTC.