# tfp.bijectors.real_nvp_default_template
Build a scale-and-shift function using a multi-layer neural network.
```python
tfp.bijectors.real_nvp_default_template(
    hidden_layers,
    shift_only=False,
    activation=tf.nn.relu,
    name=None,
    *args,
    **kwargs
)
```
This will be wrapped in a `make_template` to ensure the variables are only created once. It takes the `d`-dimensional input `x[0:d]` and returns the `D-d` dimensional outputs `loc` ('mu') and `log_scale` ('alpha').
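A minimal usage sketch, following the pattern in the `tfp.bijectors.RealNVP` documentation (note that the template-based variable creation relies on `tf.compat.v1` machinery, so graph-mode or compat-v1 behavior may be required in newer TF versions):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

# A 3-D flow: the first 2 (masked) dimensions pass through unchanged and
# parameterize the shift/log-scale applied to the remaining dimension.
nvp = tfd.TransformedDistribution(
    distribution=tfd.MultivariateNormalDiag(loc=[0., 0., 0.]),
    bijector=tfb.RealNVP(
        num_masked=2,
        shift_and_log_scale_fn=tfb.real_nvp_default_template(
            hidden_layers=[512, 512])))

x = nvp.sample()   # Draw a sample from the transformed distribution.
nvp.log_prob(x)    # Evaluate its log density.
```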
The default template does not support conditioning and will raise an exception if `condition_kwargs` are passed to it. To use conditioning in the Real NVP bijector, implement a conditioned shift/scale template that handles the `condition_kwargs`, as sketched below.
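The sketch below is one illustrative way to write such a template. It assumes the `(x, output_units, **condition_kwargs)` calling convention used for `shift_and_log_scale_fn`; the keyword name `conditional_input`, the layer sizes, and the helper name are hypothetical choices, not part of the library.

```python
import tensorflow as tf

def make_conditioned_template(hidden_units=(512, 512)):
  """Sketch of a shift/scale template that accepts a conditioning tensor."""
  # Build the hidden layers once so their variables are reused across calls,
  # mirroring what make_template does for the default template.
  hidden = [tf.keras.layers.Dense(u, activation=tf.nn.relu)
            for u in hidden_units]
  final = {}  # Output layer is built lazily, once `output_units` is known.

  def _fn(x, output_units, conditional_input=None, **unused_condition_kwargs):
    if conditional_input is not None:
      # Concatenate the conditioning tensor onto the masked inputs.
      x = tf.concat([x, conditional_input], axis=-1)
    for layer in hidden:
      x = layer(x)
    if 'layer' not in final:
      final['layer'] = tf.keras.layers.Dense(2 * output_units)
    params = final['layer'](x)
    # Split into `shift` ('mu') and `log_scale` ('alpha').
    shift, log_scale = tf.split(params, 2, axis=-1)
    return shift, log_scale

  return _fn
```

How the conditioning tensor is threaded through at call time (e.g. via the bijector's `**condition_kwargs`) depends on how the bijector is invoked; only the shape contract above is assumed here.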
Args

| Argument | Description |
|---|---|
| `hidden_layers` | Python `list`-like of non-negative integer scalars indicating the number of units in each hidden layer. Default: `[512, 512]`. |
| `shift_only` | Python `bool` indicating whether only the `shift` term shall be computed (i.e. a NICE bijector). Default: `False`. |
| `activation` | Activation function (callable). Explicitly setting to `None` implies a linear activation. |
| `name` | A name for ops managed by this function. Default: `'real_nvp_default_template'`. |
| `*args` | `tf.layers.dense` arguments. |
| `**kwargs` | `tf.layers.dense` keyword arguments. |
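For instance, combining `shift_only=True` with `RealNVP` gives an additive, NICE-style coupling; a minimal sketch, assuming the `shift_only` behavior described above:

```python
import tensorflow_probability as tfp

tfb = tfp.bijectors

# NICE-style additive coupling: only the shift term is learned, so no
# rescaling is applied to the unmasked dimensions.
nice = tfb.RealNVP(
    num_masked=2,
    shift_and_log_scale_fn=tfb.real_nvp_default_template(
        hidden_layers=[256, 256], shift_only=True))
```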
Returns

| Return | Description |
|---|---|
| `shift` | `Float`-like `Tensor` of shift terms ('mu' in [Papamakarios et al. (2016)][1]). |
| `log_scale` | `Float`-like `Tensor` of log(scale) terms ('alpha' in [Papamakarios et al. (2016)][1]). |
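These two terms are consumed by the coupling layer roughly as follows; this is a schematic sketch of the Real NVP affine coupling, not the library implementation:

```python
import tensorflow as tf

# Schematic affine coupling: the unmasked part x1 = x[..., d:] is transformed
# using the shift ('mu') and log_scale ('alpha') produced by the template.
def affine_coupling_forward(x1, shift, log_scale):
  return x1 * tf.exp(log_scale) + shift
```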
Raises

| Exception | Description |
|---|---|
| `NotImplementedError` | If the rightmost dimension of `inputs` is unknown prior to graph execution, or if `condition_kwargs` is not empty. |
References
[1]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked
Autoregressive Flow for Density Estimation. In Neural Information
Processing Systems, 2017. https://arxiv.org/abs/1705.07057