An autoregressively masked dense layer. Analogous to tf.layers.dense.
Main aliases: tfp.experimental.substrates.numpy.bijectors.masked_dense
tfp.substrates.numpy.bijectors.masked_dense(
    inputs, units, num_blocks=None, exclusive=False,
    kernel_initializer=None, reuse=None, name=None, *args, **kwargs
)
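A minimal usage sketch (not part of the original docs) of how two masked_dense calls might be stacked to form the hidden and output layers of a MADE conditioner over 3-dimensional inputs. The import path follows the numpy-substrate convention; how the substrate rewrites tf.layers.dense depends on the TFP version, so treat this as illustrative rather than a guaranteed recipe.

import numpy as np
import tensorflow_probability.substrates.numpy as tfp

x = np.random.uniform(size=[16, 3]).astype(np.float32)  # batch of 3-D events

# First layer: exclusive=True zeroes the mask diagonal so hidden units in
# block i never see input i (required for the first layer of a MADE).
h = tfp.bijectors.masked_dense(
    inputs=x, units=9, num_blocks=3, exclusive=True, name='made_hidden')

# Later layers: exclusive=False keeps the diagonal, preserving the
# autoregressive property established by the first layer.
params = tfp.bijectors.masked_dense(
    inputs=h, units=3 * 2, num_blocks=3, exclusive=False, name='made_params')
# `params` could then be split into shift and log-scale parameters for an
# autoregressive flow.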
See [Germain et al. (2015)][1] for a detailed explanation.
Args

inputs: Tensor input.
units: Python int scalar representing the dimensionality of the output space.
num_blocks: Python int scalar representing the number of blocks for the MADE masks.
exclusive: Python bool scalar representing whether to zero the diagonal of the mask, used for the first layer of a MADE (see the mask sketch after this table).
kernel_initializer: Initializer function for the weight matrix. If None (default), weights are initialized using a Glorot normal initializer (tf.glorot_normal_initializer).
reuse: Python bool scalar representing whether to reuse the weights of a previous layer by the same name.
name: Python str used to describe ops managed by this function.
*args: tf.layers.dense arguments.
**kwargs: tf.layers.dense keyword arguments.
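The interplay of num_blocks and exclusive is easiest to see in the mask itself. The sketch below is a hypothetical reconstruction of a MADE-style block mask, written only to illustrate the two arguments; it is not the library's internal mask-building helper, and the actual degree-ordering convention used internally may differ.

import numpy as np

def made_block_mask(num_blocks, n_in, n_out, exclusive):
    # Assign each input/output unit to one of `num_blocks` contiguous blocks.
    in_block = np.arange(n_in) * num_blocks // n_in
    out_block = np.arange(n_out) * num_blocks // n_out
    # Output unit j may depend on input unit i only if its block is strictly
    # greater (exclusive=True, first MADE layer) or greater-or-equal
    # (exclusive=False, later layers) than the input's block.
    if exclusive:
        return (out_block[:, None] > in_block[None, :]).astype(np.float32)
    return (out_block[:, None] >= in_block[None, :]).astype(np.float32)

# With 3 inputs, 3 outputs, and 3 blocks:
#   exclusive=True  -> strictly lower-triangular mask (zero diagonal)
#   exclusive=False -> lower-triangular mask (diagonal kept)
print(made_block_mask(3, 3, 3, exclusive=True))
print(made_block_mask(3, 3, 3, exclusive=False))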
Raises

NotImplementedError: if the rightmost dimension of inputs is unknown prior to graph execution.
References
[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE:
Masked Autoencoder for Distribution Estimation. In International
Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509