tfp.substrates.numpy.bijectors.MaskedAutoregressiveFlow

Affine MaskedAutoregressiveFlow bijector.

Inherits From: Bijector

The affine autoregressive flow [(Papamakarios et al., 2017)][3] provides a relatively simple framework for user-specified (deep) architectures to learn a distribution over continuous events. Regarding terminology:

'Autoregressive models decompose the joint density as a product of conditionals, and model each conditional in turn. Normalizing flows transform a base density (e.g. a standard Gaussian) into the target density by an invertible transformation with tractable Jacobian.' [(Papamakarios et al., 2017)][3]

In other words, the 'autoregressive property' is equivalent to the decomposition p(x) = prod{ p(x[perm[i]] | x[perm[0:i]]) : i=0, ..., d-1 }, where perm is some permutation of {0, ..., d-1}. In the simple case where the permutation is the identity, this reduces to p(x) = prod{ p(x[i] | x[0:i]) : i=0, ..., d-1 }; for example, with d = 3, p(x) = p(x[0]) p(x[1] | x[0]) p(x[2] | x[0], x[1]).

In TensorFlow Probability, 'normalizing flows' are implemented as tfp.bijectors.Bijectors. The forward 'autoregression' is implemented using a tf.while_loop and a deep neural network (DNN) with masked weights such that the autoregressive property is automatically met in the inverse.

A TransformedDistribution using MaskedAutoregressiveFlow(...) uses the (expensive) forward-mode calculation to draw samples and the (cheap) reverse-mode calculation to compute log-probabilities. Conversely, a TransformedDistribution using Invert(MaskedAutoregressiveFlow(...)) uses the (expensive) forward-mode calculation to compute log-probabilities and the (cheap) reverse-mode calculation to compute samples. See 'Examples' below for more details.

Given a shift_and_log_scale_fn, the forward and inverse transformations are (a sequence of) affine transformations. A 'valid' shift_and_log_scale_fn must compute each shift (aka loc or 'mu' in [Germain et al. (2015)][1]) and log(scale) (aka 'alpha' in [Germain et al. (2015)][1]) such that each are broadcastable with the arguments to forward and inverse, i.e., such that the calculations in forward, inverse [below] are possible.

For convenience, tfp.bijectors.AutoregressiveNetwork is offered as a possible shift_and_log_scale_fn function. It implements the MADE architecture [(Germain et al., 2015)][1]. MADE is a feed-forward network that computes a shift and log(scale) using masked dense layers in a deep neural network. Weights are masked to ensure the autoregressive property. It is possible that this architecture is suboptimal for your task. To build alternative networks, either change the arguments to tfp.bijectors.AutoregressiveNetwork or use some other architecture, e.g., using tf.keras.layers.
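
For instance, the following sketch (names and sizes are arbitrary, and it assumes the TensorFlow backend) shows the shapes involved: with params=2, the network maps an event of size d to a [..., d, 2] tensor holding one (shift, log(scale)) pair per scalar event.

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

made = tfb.AutoregressiveNetwork(params=2, event_shape=[5], hidden_units=[16, 16])
out = made(tf.zeros([3, 5]))                        # shape [3, 5, 2]
shift, log_scale = tf.unstack(out, num=2, axis=-1)  # each has shape [3, 5]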

Assuming shift_and_log_scale_fn has valid shape and autoregressive semantics, the forward transformation is

def forward(x):
  y = tf.zeros_like(x)
  event_size = x.shape[-event_ndims:].num_elements()
  for _ in range(event_size):
    shift, log_scale = shift_and_log_scale_fn(y)
    y = x * tf.exp(log_scale) + shift
  return y

and the inverse transformation is

def inverse(y):
  shift, log_scale = shift_and_log_scale_fn(y)
  return (y - shift) / tf.exp(log_scale)

Notice that the inverse does not need a for-loop. This is because in the forward pass each calculation of shift and log_scale is based on the y calculated so far (not x). In the inverse, y is fully known, so a single evaluation of shift_and_log_scale_fn sees the same y that the forward pass sees on its last iteration, i.e., the 'last' y used to compute shift and log_scale. (Roughly speaking, this also proves the transform is bijective.)
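
As a concrete, non-normative sketch (reusing the tf/tfb aliases above; the event size of 3 is arbitrary), the round trip can be checked numerically. forward runs the loop described above; inverse is a single evaluation of the network:

scale_shift_net = tfb.AutoregressiveNetwork(
    params=2, event_shape=[3], hidden_units=[16])
maf_bijector = tfb.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=scale_shift_net)

x = tf.random.normal([5, 3])
y = maf_bijector.forward(x)       # expensive: loops event_size (= 3) times
x_rec = maf_bijector.inverse(y)   # cheap: one pass through the network
# x_rec should match x up to floating-point error.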

The bijector_fn argument allows specifying a more general coupling relation, such as the LSTM-inspired activation from [4], or Neural Spline Flow [5]. It must logically operate on each element of the input individually, and still obey the 'autoregressive property' described above. The forward transformation is

def forward(x):
  y = tf.zeros_like(x)
  event_size = x.shape[-event_ndims:].num_elements()
  for _ in range(event_size):
    bijector = bijector_fn(y)
    y = bijector.forward(x)
  return y

and the inverse transformation is

def inverse(y):
  bijector = bijector_fn(y)
  return bijector.inverse(y)
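
For example (an illustrative sketch, not from the original docs; names and sizes are arbitrary), a shift-only coupling can be expressed as a bijector_fn built from an AutoregressiveNetwork and tfb.Shift, reusing the tf/tfb aliases above:

made_shift = tfb.AutoregressiveNetwork(params=1, event_shape=[3], hidden_units=[32])

def bijector_fn(y, **condition_kwargs):
  shift = made_shift(y)[..., 0]  # one shift parameter per scalar event
  return tfb.Shift(shift)        # elementwise; does not alter the rank of its input

flow = tfb.MaskedAutoregressiveFlow(bijector_fn=bijector_fn)

Note that because bijector_fn here is a plain Python function, the variables of made_shift are not automatically tracked by flow; see 'Variable Tracking' below.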

Examples

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

dims = 2

# A common choice for a normalizing flow is to use a Gaussian for the base
# distribution.  (However, any continuous distribution would work.) Here, we
# use `tfd.Sample` to create a joint Gaussian distribution with diagonal
# covariance for the base distribution (note that in the Gaussian case,
# `tfd.MultivariateNormalDiag` could also be used.)
maf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512])))

x = maf.sample()  # Expensive; uses `tf.while_loop`, no Bijector caching.
maf.log_prob(x)   # Almost free; uses Bijector caching.
# Cheap; no `tf.while_loop` despite no Bijector caching.
maf.log_prob(tf.zeros(dims))

# [Papamakarios et al. (2017)][3] also describe an Inverse Autoregressive
# Flow [(Kingma et al., 2016)][2]:
iaf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.Invert(tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512]))))

x = iaf.sample()  # Cheap; no `tf.while_loop` despite no Bijector caching.
iaf.log_prob(x)   # Almost free; uses Bijector caching.
# Expensive; uses `tf.while_loop`, no Bijector caching.
iaf.log_prob(tf.zeros(dims))

# In many (if not most) cases the default `shift_and_log_scale_fn` will be a
# poor choice.  Here's an example of using a 'shift only' version and with a
# different number/depth of hidden layers.
made = tfb.AutoregressiveNetwork(params=1, hidden_units=[32])
maf_no_scale_hidden2 = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        lambda y: (made(y)[..., 0], None),
        is_constant_jacobian=True))
maf_no_scale_hidden2._made = made  # Ensure `made` is tracked.
# NOTE: The last line ensures that maf_no_scale_hidden2.trainable_variables
# will include all variables from `made`.

Variable Tracking

A tfb.MaskedAutoregressiveFlow instance saves a reference to the values passed as shift_and_log_scale_fn and bijector_fn to its constructor. Thus, for most values passed as shift_and_log_scale_fn or bijector_fn, variables referenced by those values will be found and tracked by the tfb.MaskedAutoregressiveFlow instance. Please see the tf.Module documentation for further details.

However, if the value passed to shift_and_log_scale_fn or bijector_fn is a Python function, then tfb.MaskedAutoregressiveFlow cannot automatically track variables used inside shift_and_log_scale_fn or bijector_fn. To get tfb.MaskedAutoregressiveFlow to track such variables, either:

  1. Replace the Python function with a tf.Module, tf.keras.Layer, or other callable object through which tf.Module can find variables.

  2. Or, add a reference to the variables to the tfb.MaskedAutoregressiveFlow instance by setting an attribute -- for example:

made1 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
made2 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
maf = tfb.MaskedAutoregressiveFlow(lambda y: (made1(y)[..., 0], made2(y)[..., 0] + 1.))
maf._made_variables = made1.variables + made2.variables

References

[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509

[2]: Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934

[3]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. In Neural Information Processing Systems, 2017. https://arxiv.org/abs/1705.07057

[4]: Diederik P. Kingma, Tim Salimans, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934

[5]: Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural Spline Flows. In Neural Information Processing Systems, 2019. https://arxiv.org/abs/1906.04032

Args

shift_and_log_scale_fn Python callable which computes shift and log_scale from the inverse domain (y). Calculation must respect the 'autoregressive property' (see class docstring). Suggested default: tfb.AutoregressiveNetwork(params=2, hidden_units=...). Typically the function contains tf.Variables. Returning None for either (or both) of shift and log_scale is equivalent to (but more efficient than) returning zero. If shift_and_log_scale_fn returns a single Tensor, the returned value will be unstacked to get the shift and log_scale: tf.unstack(shift_and_log_scale_fn(y), num=2, axis=-1).
bijector_fn Python callable which returns a tfb.Bijector which transforms an event tensor, with the signature (input, **condition_kwargs) -> bijector. The bijector must operate on scalar events and must not alter the rank of its input. The bijector_fn will be called with Tensors from the inverse domain (y). Calculation must respect the 'autoregressive property' (see class docstring).
is_constant_jacobian Python bool. Default: False. When True, the implementation assumes log_scale does not depend on the forward domain (x) or inverse domain (y) values. (No validation is made; is_constant_jacobian=False is always safe but possibly computationally inefficient.)
validate_args Python bool indicating whether arguments should be checked for correctness.
unroll_loop Python bool indicating whether the tf.while_loop in _forward should be replaced with a static for loop. Requires that the final dimension of x be known at graph construction time. Defaults to False.
event_ndims Python integer, the intrinsic dimensionality of this bijector. 1 corresponds to a simple vector autoregressive bijector as implemented by tfp.bijectors.AutoregressiveNetwork; 2 might be useful for a 2D convolutional shift_and_log_scale_fn; and so on.
name Python str, name given to ops managed by this object.

Raises

ValueError If both or none of shift_and_log_scale_fn and bijector_fn are specified.

Attributes

dtype

forward_min_event_ndims Returns the minimal number of dimensions bijector.forward operates on.

Multipart bijectors return structured ndims, which indicates the expected structure of their inputs. Some multipart bijectors, notably Composites, may return structures of None.

graph_parents Returns this Bijector's graph_parents as a Python list.
inverse_min_event_ndims Returns the minimal number of dimensions bijector.inverse operates on.

Multipart bijectors return structured event_ndims, which indicates the expected structure of their outputs. Some multipart bijectors, notably Composites, may return structures of None.

is_constant_jacobian Returns true iff the Jacobian matrix is not a function of x.

name Returns the string name of this Bijector.
parameters Dictionary of parameters used to instantiate this Bijector.
trainable_variables

validate_args Returns True if Tensor arguments will be validated.
variables

Methods

forward

Returns the forward Bijector evaluation, i.e., Y = g(X).

Args
x Tensor (structure). The input to the 'forward' evaluation.
name The name to give this op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor (structure).

Raises
TypeError if self.dtype is specified and x.dtype is not self.dtype.
NotImplementedError if _forward is not implemented.

forward_dtype

Returns the dtype returned by forward for the provided input.

forward_event_ndims

Returns the number of event dimensions produced by forward.

forward_event_shape

Shape of a single sample from a single batch as a TensorShape.

Same meaning as forward_event_shape_tensor. May be only partially defined.

Args
input_shape TensorShape (structure) indicating event-portion shape passed into forward function.

Returns
forward_event_shape_tensor TensorShape (structure) indicating event-portion shape after applying forward. Possibly unknown.

forward_event_shape_tensor

Shape of a single sample from a single batch as an int32 1D Tensor.

Args
input_shape Tensor, int32 vector (structure) indicating event-portion shape passed into forward function.
name The name to give this op.

Returns
forward_event_shape_tensor Tensor, int32 vector (structure) indicating event-portion shape after applying forward.

forward_log_det_jacobian

Returns the (log o det o Jacobian o forward)(x).

Mathematically, returns: log(det(dY/dX))(X). (Recall that: Y=g(X).)

Args
x Tensor (structure). The input to the 'forward' Jacobian determinant evaluation.
event_ndims Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to self.forward_min_event_ndims. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has rank(x) - event_ndims dimensions. Multipart bijectors require structured event_ndims, such that rank(x[i]) - event_ndims[i] is the same for all elements i of the structured input. Furthermore, the first event_ndims[i] dimensions of each x[i].shape must be the same for all i (broadcasting is not allowed).
name The name to give this op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor (structure), if this bijector is injective. If not injective this is not implemented.

Raises
TypeError if x's dtype is incompatible with the expected output dtype.
NotImplementedError if neither _forward_log_det_jacobian nor {_inverse, _inverse_log_det_jacobian} are implemented, or this is a non-injective bijector.

inverse

Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).

Args
y Tensor (structure). The input to the 'inverse' evaluation.
name The name to give this op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y.

Raises
TypeError if y's structured dtype is incompatible with the expected output dtype.
NotImplementedError if _inverse is not implemented.

inverse_dtype

Returns the dtype returned by inverse for the provided input.

inverse_event_ndims

Returns the number of event dimensions produced by inverse.

inverse_event_shape

Shape of a single sample from a single batch as a TensorShape.

Same meaning as inverse_event_shape_tensor. May be only partially defined.

Args
output_shape TensorShape (structure) indicating event-portion shape passed into inverse function.

Returns
inverse_event_shape_tensor TensorShape (structure) indicating event-portion shape after applying inverse. Possibly unknown.

inverse_event_shape_tensor

Shape of a single sample from a single batch as an int32 1D Tensor.

Args
output_shape Tensor, int32 vector (structure) indicating event-portion shape passed into inverse function.
name The name to give this op.

Returns
inverse_event_shape_tensor Tensor, int32 vector (structure) indicating event-portion shape after applying inverse.

inverse_log_det_jacobian

Returns the (log o det o Jacobian o inverse)(y).

Mathematically, returns: log(det(dX/dY))(Y). (Recall that: X=g^{-1}(Y).)

Note that forward_log_det_jacobian is the negative of this function, evaluated at g^{-1}(y).

Args
y Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation.
event_ndims Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to self.inverse_min_event_ndims. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has rank(y) - event_ndims dimensions. Multipart bijectors require structured event_ndims, such that rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. Furthermore, the first event_ndims[i] dimensions of each y[i].shape must be the same for all i (broadcasting is not allowed).
name The name to give this op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
ildj Tensor, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction of g to the ith partition Di.

Raises
TypeError if y's dtype is incompatible with the expected inverse-dtype.
NotImplementedError if _inverse_log_det_jacobian is not implemented.

__call__

Applies or composes the Bijector, depending on input type.

This is a convenience function which applies the Bijector instance in three different ways, depending on the input:

  1. If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
  2. If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
  3. Otherwise, return self.forward(input)

Args
value A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor.
name Python str name given to ops created by this function.
**kwargs Additional keyword arguments passed into the created tfd.TransformedDistribution, tfb.Bijector, or self.forward.

Returns
composition A tfd.TransformedDistribution if the input was a tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a (structure of) Tensor computed by self.forward.

Examples

sigmoid = tfb.Reciprocal()(
    tfb.AffineScalar(shift=1.)(
      tfb.Exp()(
        tfb.AffineScalar(scale=-1.))))
# ==> `tfb.Chain([
#         tfb.Reciprocal(),
#         tfb.AffineScalar(shift=1.),
#         tfb.Exp(),
#         tfb.AffineScalar(scale=-1.),
#      ])`  # ie, `tfb.Sigmoid()`

log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`

tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])

__eq__

Return self==value.

__ne__

Return self!=value.