A piecewise rational quadratic spline, as developed in [1].
Inherits From: AutoCompositeTensorBijector, Bijector
tfp.substrates.numpy.bijectors.RationalQuadraticSpline(
    bin_widths,
    bin_heights,
    knot_slopes,
    range_min=-1,
    validate_args=False,
    name=None
)
This transformation represents a monotonically increasing piecewise rational
quadratic function. Outside of the bounds of knot_x/knot_y (the knot locations
implied by range_min together with the cumulative sums of bin_widths and
bin_heights), the transform behaves as an identity function.
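For example, a minimal sketch of direct usage (the three-bin parameter values
below are arbitrary illustrations, not defaults; the TF substrate is shown, and
the numpy substrate mirrors the same API):
import tensorflow_probability as tfp
tfb = tfp.bijectors
# Three bins spanning [-1, 1] on both axes: bin widths and heights each sum to 2.
spline = tfb.RationalQuadraticSpline(
    bin_widths=[0.5, 1.0, 0.5],
    bin_heights=[0.8, 0.4, 0.8],
    knot_slopes=[0.5, 2.0],  # Slopes at the two internal knots.
    range_min=-1.)
ys = spline.forward([-0.5, 0.0, 3.0])  # 3.0 is outside [-1, 1], so it maps to itself.
xs = spline.inverse(ys)  # Recovers the inputs.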
Typically this bijector will be used as part of a chain, with splines for
trailing x dimensions conditioned on some of the earlier x dimensions, and
with the inverse then solved first for unconditioned dimensions, then using
conditioning derived from those inverses, and so forth. For example, if we
split a 15-D xs vector into 3 components, we may implement a forward and
inverse as follows:
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
nsplits = 3
class SplineParams(tf.Module):
  def __init__(self, nbins=32, interval_width=2, range_min=-1,
               min_bin_width=1e-3, min_slope=1e-3):
    self._nbins = nbins
    self._interval_width = interval_width  # Sum of bin widths.
    self._range_min = range_min  # Position of first knot.
    self._min_bin_width = min_bin_width  # Bin width lower bound.
    self._min_slope = min_slope  # Lower bound for slopes at internal knots.
    self._built = False
    self._bin_widths = None
    self._bin_heights = None
    self._knot_slopes = None
  def __call__(self, x, nunits):
    if not self._built:
      def _bin_positions(x):
        out_shape = tf.concat((tf.shape(x)[:-1], (nunits, self._nbins)), 0)
        x = tf.reshape(x, out_shape)
        return tf.math.softmax(x, axis=-1) * (
              self._interval_width - self._nbins * self._min_bin_width
              ) + self._min_bin_width
      def _slopes(x):
        out_shape = tf.concat((
          tf.shape(x)[:-1], (nunits, self._nbins - 1)), 0)
        x = tf.reshape(x, out_shape)
        return tf.math.softplus(x) + self._min_slope
      self._bin_widths = tf.keras.layers.Dense(
        nunits * self._nbins, activation=_bin_positions, name='w')
      self._bin_heights = tf.keras.layers.Dense(
        nunits * self._nbins, activation=_bin_positions, name='h')
      self._knot_slopes = tf.keras.layers.Dense(
        nunits * (self._nbins - 1), activation=_slopes, name='s')
      self._built = True
    return tfb.RationalQuadraticSpline(
      bin_widths=self._bin_widths(x),
      bin_heights=self._bin_heights(x),
      knot_slopes=self._knot_slopes(x),
      range_min=self._range_min)
xs = np.random.randn(3, 15).astype(np.float32)  # Keras won't Dense(.)(vec).
splines = [SplineParams() for _ in range(nsplits)]
def spline_flow():
  stack = tfb.Identity()
  for i in range(nsplits):
    stack = tfb.RealNVP(5 * i, bijector_fn=splines[i])(stack)
  return stack
ys = spline_flow().forward(xs)
ys_inv = spline_flow().inverse(ys)  # ys_inv ~= xs
For a one-at-a-time autoregressive flow as in [1], it would be profitable to
implement a mask over xs to parallelize either the inverse or the forward
pass and implement the other using a tf.while_loop. See
tfp.bijectors.MaskedAutoregressiveFlow for support doing so (paired with
tfp.bijectors.Invert depending which direction should be parallel).
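As a hedged sketch of that pattern (make_spline below is a hypothetical helper
standing in for masked layers that map the conditioning inputs to spline
parameters):
def bijector_fn(x, **condition_kwargs):
  # Hypothetical: build a RationalQuadraticSpline whose bin widths, heights,
  # and slopes are computed from the conditioning inputs x.
  return make_spline(x)
maf = tfb.MaskedAutoregressiveFlow(bijector_fn=bijector_fn)
flow = tfb.Invert(maf)  # If the forward pass should be the parallel direction.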
References
[1]: Conor Durkan, Artur Bekasov, Iain Murray, George Papamakarios. Neural Spline Flows. arXiv preprint arXiv:1906.04032, 2019. https://arxiv.org/abs/1906.04032
Methods
copy
copy(
    **override_parameters_kwargs
)
Creates a copy of the bijector.
| Args | |
|---|---|
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |

| Returns | |
|---|---|
| `bijector` | A new instance of `type(self)` initialized from the union of `self.parameters` and `override_parameters_kwargs`, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
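For example, reusing the spline instance from the usage sketch above:
spline_with_checks = spline.copy(validate_args=True)  # Same knots, plus argument validation.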
experimental_batch_shape
experimental_batch_shape(
    x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct
transformations it represents on events of a given size. For example: the
bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events
(event_ndims = 0), because applying it to a scalar event produces
two scalar outputs, the result of two different scaling transformations.
The same bijector has batch shape [] for vector events, because applying
it to a vector produces (via elementwise multiplication) a single vector
output.
Bijectors that operate independently on multiple state parts, such as
tfb.JointMap, must broadcast to a coherent batch shape. Some events may
not be valid: for example, the bijector
tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not
produce a valid batch shape when event_ndims = [0, 0], since the batch
shapes of the two parts are inconsistent. The same bijector
does define valid batch shapes of [], [2], and [3] if event_ndims
is [1, 1], [0, 1], or [1, 0], respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the
batch shape of a bijector with non-constant Jacobian is expected to equal
the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x
or y of the specified ndims.
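For example, mirroring the tfb.Scale description above:
scale = tfb.Scale([1., 2.])
scale.experimental_batch_shape(x_event_ndims=0)  # ==> TensorShape([2])
scale.experimental_batch_shape(x_event_ndims=1)  # ==> TensorShape([])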
| Args | |
|---|---|
| `x_event_ndims` | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None. |
| `y_event_ndims` | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None. |

| Returns | |
|---|---|
| `batch_shape` | TensorShape batch shape of this bijector for a value with the given event rank. May be unknown or partially defined. |
experimental_batch_shape_tensor
experimental_batch_shape_tensor(
    x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct
transformations it represents on events of a given size. For example: the
bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events
(event_ndims = 0), because applying it to a scalar event produces
two scalar outputs, the result of two different scaling transformations.
The same bijector has batch shape [] for vector events, because applying
it to a vector produces (via elementwise multiplication) a single vector
output.
Bijectors that operate independently on multiple state parts, such as
tfb.JointMap, must broadcast to a coherent batch shape. Some events may
not be valid: for example, the bijector
tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not
produce a valid batch shape when event_ndims = [0, 0], since the batch
shapes of the two parts are inconsistent. The same bijector
does define valid batch shapes of [], [2], and [3] if event_ndims
is [1, 1], [0, 1], or [1, 0], respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the
batch shape of a bijector with non-constant Jacobian is expected to equal
the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x
or y of the specified ndims.
| Args | |
|---|---|
| `x_event_ndims` | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None. |
| `y_event_ndims` | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None. |

| Returns | |
|---|---|
| `batch_shape_tensor` | Integer Tensor batch shape of this bijector for a value with the given event rank. |
experimental_compute_density_correction
experimental_compute_density_correction(
    x, tangent_space, backward_compat=False, **kwargs
)
Density correction for this transformation wrt the tangent space, at x.
Subclasses of Bijector may call the most specific applicable
method of TangentSpace, based on whether the transformation is
dimension-preserving, coordinate-wise, a projection, or something
more general. The backward-compatible assumption is that the
transformation is dimension-preserving (goes from R^n to R^n).
| Args | |
|---|---|
| `x` | Tensor (structure). The point at which to calculate the density. |
| `tangent_space` | TangentSpace or one of its subclasses. The tangent to the support manifold at x. |
| `backward_compat` | bool specifying whether to assume that the Bijector is dimension-preserving. |
| `**kwargs` | Optional keyword arguments forwarded to tangent space methods. |

| Returns | |
|---|---|
| `density_correction` | Tensor representing the density correction, in log space, under the transformation that this Bijector denotes. |

| Raises | |
|---|---|
| `TypeError` | if backward_compat is False but no method of TangentSpace has been called explicitly. |
forward
forward(
    x, name='forward', **kwargs
)
Returns the forward Bijector evaluation, i.e., Y = g(X).
| Args | |
|---|---|
| `x` | Tensor (structure). The input to the 'forward' evaluation. |
| `name` | The name to give this op. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| | Tensor (structure). |

| Raises | |
|---|---|
| `TypeError` | if self.dtype is specified and x.dtype is not self.dtype. |
| `NotImplementedError` | if _forward is not implemented. |
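For example, with the elementwise exponential bijector:
tfb.Exp().forward([0., 1.])  # ==> [1., 2.71828...]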
forward_dtype
forward_dtype(
    dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)
Returns the dtype returned by forward for the provided input.
forward_event_ndims
forward_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by forward.
| Args | |
|---|---|
| `event_ndims` | Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.forward_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.forward_min_event_ndims. |
| `**kwargs` | Optional keyword arguments forwarded to nested bijectors. |

| Returns | |
|---|---|
| `forward_event_ndims` | Structure of integers and/or None values matching self.inverse_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints. |
forward_event_shape
forward_event_shape(
    input_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as forward_event_shape_tensor. May be only partially defined.
| Args | |
|---|---|
| `input_shape` | TensorShape (structure) indicating event-portion shape passed into forward function. |

| Returns | |
|---|---|
| `forward_event_shape` | TensorShape (structure) indicating event-portion shape after applying forward. Possibly unknown. |
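For example, for a shape-preserving bijector and a reshaping one:
tfb.Exp().forward_event_shape([5])  # ==> TensorShape([5])
tfb.Reshape([2, 3]).forward_event_shape([6])  # ==> TensorShape([2, 3])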
forward_event_shape_tensor
forward_event_shape_tensor(
    input_shape, name='forward_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
| Args | |
|---|---|
| `input_shape` | Tensor, int32 vector (structure) indicating event-portion shape passed into forward function. |
| `name` | The name to give this op. |

| Returns | |
|---|---|
| `forward_event_shape_tensor` | Tensor, int32 vector (structure) indicating event-portion shape after applying forward. |
forward_log_det_jacobian
forward_log_det_jacobian(
    x, event_ndims=None, name='forward_log_det_jacobian', **kwargs
)
Returns the forward log det Jacobian, i.e., log(det(dY/dX))(X).
| Args | |
|---|---|
| `x` | Tensor (structure). The input to the 'forward' Jacobian determinant evaluation. |
| `event_ndims` | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.forward_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(x[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.forward_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.forward_min_event_ndims). |
| `name` | The name to give this op. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| | Tensor (structure), if this bijector is injective. If not injective this is not implemented. |

| Raises | |
|---|---|
| `TypeError` | if x's dtype is incompatible with the expected output dtype. |
| `NotImplementedError` | if neither _forward_log_det_jacobian nor {_inverse, _inverse_log_det_jacobian} are implemented, or this is a non-injective bijector. |
| `ValueError` | if the value of event_ndims is not valid for this bijector. |
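For example, the event_ndims reduction for the elementwise tfb.Exp, whose log
det Jacobian at x is x itself:
b = tfb.Exp()
b.forward_log_det_jacobian([0., 1.], event_ndims=0)  # ==> [0., 1.], per element.
b.forward_log_det_jacobian([0., 1.], event_ndims=1)  # ==> 1., summed over the event.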
inverse
inverse(
    y, name='inverse', **kwargs
)
Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).
| Args | |
|---|---|
| `y` | Tensor (structure). The input to the 'inverse' evaluation. |
| `name` | The name to give this op. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| | Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y. |

| Raises | |
|---|---|
| `TypeError` | if y's structured dtype is incompatible with the expected output dtype. |
| `NotImplementedError` | if _inverse is not implemented. |
inverse_dtype
inverse_dtype(
    dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)
Returns the dtype returned by inverse for the provided input.
inverse_event_ndims
inverse_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by inverse.
| Args | |
|---|---|
| `event_ndims` | Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.inverse_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.inverse_min_event_ndims. |
| `**kwargs` | Optional keyword arguments forwarded to nested bijectors. |

| Returns | |
|---|---|
| `inverse_event_ndims` | Structure of integers and/or None values matching self.forward_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints. |
inverse_event_shape
inverse_event_shape(
    output_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as inverse_event_shape_tensor. May be only partially defined.
| Args | |
|---|---|
| `output_shape` | TensorShape (structure) indicating event-portion shape passed into inverse function. |

| Returns | |
|---|---|
| `inverse_event_shape` | TensorShape (structure) indicating event-portion shape after applying inverse. Possibly unknown. |
inverse_event_shape_tensor
inverse_event_shape_tensor(
    output_shape, name='inverse_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
| Args | |
|---|---|
| `output_shape` | Tensor, int32 vector (structure) indicating event-portion shape passed into inverse function. |
| `name` | The name to give this op. |

| Returns | |
|---|---|
| `inverse_event_shape_tensor` | Tensor, int32 vector (structure) indicating event-portion shape after applying inverse. |
inverse_log_det_jacobian
inverse_log_det_jacobian(
    y, event_ndims=None, name='inverse_log_det_jacobian', **kwargs
)
Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: log(det(dX/dY))(Y). (Recall that: X=g^{-1}(Y).)
Note that forward_log_det_jacobian is the negative of this function,
evaluated at g^{-1}(y).
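For example, checking this identity with tfb.Exp, whose inverse log det
Jacobian at y is -log(y):
b = tfb.Exp()
y = b.forward(2.)  # ==> e**2
b.inverse_log_det_jacobian(y, event_ndims=0)  # ==> -2.
-b.forward_log_det_jacobian(b.inverse(y), event_ndims=0)  # ==> -2., as expected.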
| Args | |
|---|---|
| `y` | Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation. |
| `event_ndims` | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.inverse_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.inverse_min_event_ndims). |
| `name` | The name to give this op. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `ildj` | Tensor, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction of g to the ith partition Di. |

| Raises | |
|---|---|
| `TypeError` | if y's dtype is incompatible with the expected inverse-dtype. |
| `NotImplementedError` | if _inverse_log_det_jacobian is not implemented. |
| `ValueError` | if the value of event_ndims is not valid for this bijector. |
parameter_properties
@classmethod
parameter_properties(
    dtype=tf.float32
)
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the bijector's
Tensor-valued constructor arguments.
| Args | |
|---|---|
| `dtype` | Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low) must be instantiated with the same dtype as the values to be transformed. |

| Returns | |
|---|---|
| `parameter_properties` | A `str -> tfp.python.internal.parameter_properties.ParameterProperties` dict mapping constructor argument names to `ParameterProperties` instances. |
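For example (the exact keys are an assumption based on the Tensor-valued
constructor arguments of this class):
props = tfb.RationalQuadraticSpline.parameter_properties()
# Expected to include entries such as 'bin_widths', 'bin_heights', and 'knot_slopes'.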
__call__
__call__(
    value, name=None, **kwargs
)
Applies or composes the Bijector, depending on input type.
This is a convenience function which applies the Bijector instance in
three different ways, depending on the input:
- If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
- If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
- Otherwise, return self.forward(input).
| Args | |
|---|---|
| `value` | A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor. |
| `name` | Python str name given to ops created by this function. |
| `**kwargs` | Additional keyword arguments passed into the created tfd.TransformedDistribution, tfb.Bijector, or self.forward. |

| Returns | |
|---|---|
| `composition` | A tfd.TransformedDistribution if the input was a tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a (structure of) Tensor computed by self.forward. |
Examples
sigmoid = tfb.Reciprocal()(
    tfb.Shift(shift=1.)(
      tfb.Exp()(
        tfb.Scale(scale=-1.))))
# ==> `tfb.Chain([
#         tfb.Reciprocal(),
#         tfb.Shift(shift=1.),
#         tfb.Exp(),
#         tfb.Scale(scale=-1.),
#      ])`  # ie, `tfb.Sigmoid()`
log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`
tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
__eq__
__eq__(
    other
)
Return self==value.
__getitem__
__getitem__(
    slices
)
__iter__
__iter__()