tf.contrib.rnn.NASCell

Neural Architecture Search (NAS) recurrent network cell.

Inherits From: LayerRNNCell

This implements the recurrent cell from the paper:

Barret Zoph and Quoc V. Le. "Neural Architecture Search with Reinforcement Learning." Proc. ICLR 2017. https://arxiv.org/abs/1611.01578

The class uses an optional projection layer; a usage sketch follows the argument list below.

Args

num_units int, the number of units in the NAS cell.
num_proj (optional) int, the output dimensionality for the projection matrices. If None, no projection is performed.
use_bias (optional) bool, whether to use biases within the cell. Defaults to False.
reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
**kwargs Additional keyword arguments.
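The snippet below is a minimal usage sketch, assuming TensorFlow 1.x (where tf.contrib is available); the unit counts and tensor shapes are illustrative, not prescribed by the API:

```python
import tensorflow as tf

# Minimal sketch (TF 1.x with tf.contrib). Shapes are illustrative.
cell = tf.contrib.rnn.NASCell(num_units=64, num_proj=32, use_bias=True)

# A batch of 8 sequences, 10 steps long, 16 features per step.
inputs = tf.placeholder(tf.float32, [8, 10, 16])

# Unroll the cell over time; with num_proj=32 the cell output is
# projected, so `outputs` has shape [8, 10, 32].
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```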

Attributes

graph DEPRECATED FUNCTION. This property is deprecated and will be removed in a future version.

output_size Integer or TensorShape: size of outputs produced by this cell.
scope_name

state_size size(s) of state(s) used by this cell.

It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes.
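The following sketch illustrates how num_proj changes state_size and output_size, assuming the contrib implementation, in which the state is an LSTM-style (c, h) tuple; the values in the comments are what the namedtuple repr would show:

```python
import tensorflow as tf

cell = tf.contrib.rnn.NASCell(num_units=64)
print(cell.state_size)   # LSTMStateTuple(c=64, h=64): both parts use num_units
print(cell.output_size)  # 64

proj_cell = tf.contrib.rnn.NASCell(num_units=64, num_proj=32)
print(proj_cell.state_size)   # LSTMStateTuple(c=64, h=32): h is projected to num_proj
print(proj_cell.output_size)  # 32
```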

Methods

get_initial_state

Return an initial state for this cell. If inputs is given, the batch size and dtype are inferred from it; otherwise batch_size and dtype must be supplied. The base implementation delegates to zero_state.

zero_state

Return zero-filled state tensor(s).

Args
batch_size int, float, or unit Tensor representing the batch size.
dtype the data type to use for the state.

Returns
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with shape [batch_size, s] for each s in state_size.
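Below is a short sketch of zero_state seeding an unrolled run, again assuming TensorFlow 1.x; the batch size and shapes are illustrative:

```python
import tensorflow as tf

cell = tf.contrib.rnn.NASCell(num_units=64)

# Zero-filled initial state for a batch of 8. For this cell the state is
# a (c, h) tuple, so this returns two tensors, each of shape [8, 64].
init_state = cell.zero_state(batch_size=8, dtype=tf.float32)

# The zero state can seed an unrolled run of the cell; dtype is taken
# from the provided initial state.
inputs = tf.placeholder(tf.float32, [8, 10, 16])
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, initial_state=init_state)
```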