Bijector which applies a structure of bijectors in parallel.
Inherits From: Composition, AutoCompositeTensorBijector, Bijector, AutoCompositeTensor
tfp.bijectors.JointMap(
    bijectors=None, validate_args=False, parameters=None, name=None
)
This is the "structured" counterpart to Chain. Whereas Chain applies an
  ordered sequence, JointMap applies a structure of transformations to a
  matching structure of inputs.
Example Use:
  exp = Exp()
  scale = Scale(2.)
  parallel = JointMap({'a': exp, 'b': scale})
  x = {'a': 1., 'b': 2.}
  parallel.forward(x)
  # = {'a': exp.forward(x['a']), 'b': scale.forward(x['b'])}
  # = {'a': tf.exp(1.), 'b': 2. * 2.}
  parallel.inverse(x)
  # = {'a': exp.inverse(x['a']), 'b': scale.inverse(x['b'])}
  # = {'a': tf.math.log(1.), 'b': 2. / 2.}
The bijectors argument need not be a dictionary; it can be a list, tuple, list of dictionaries, or any other structure supported by tf.nest.map_structure.
If every element of bijectors is a CompositeTensor, the resulting JointMap bijector is a CompositeTensor as well. If any element of bijectors is not a CompositeTensor, then a non-CompositeTensor _JointMap instance is created instead. Bijector subclasses that inherit from JointMap will also inherit from CompositeTensor.
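For instance, because Exp and Scale are both CompositeTensors, the JointMap above can cross a tf.function boundary as a traced argument. A minimal sketch (the apply_fwd helper is hypothetical, not part of the API):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors

jm = tfb.JointMap({'a': tfb.Exp(), 'b': tfb.Scale(2.)})

@tf.function
def apply_fwd(bij, x):
  # `bij` can be passed as an argument because JointMap is a CompositeTensor.
  return bij.forward(x)

apply_fwd(jm, {'a': 1., 'b': 2.})  # ==> {'a': tf.exp(1.), 'b': 4.}
```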
| Raises | |
|---|---|
| ValueError | if bijectors have different dtypes. | 
| Attributes | |
|---|---|
| bijectors | |
| dtype | |
| forward_min_event_ndims | Returns the minimal number of dimensions bijector.forward operates on. Multipart bijectors return structured ndims. | 
| graph_parents | Returns this Bijector's graph_parents as a Python list. | 
| inverse_min_event_ndims | Returns the minimal number of dimensions bijector.inverse operates on. Multipart bijectors return structured ndims. | 
| is_constant_jacobian | Returns true iff the Jacobian matrix is not a function of x. | 
| name | Returns the string name of this Bijector. | 
| name_scope | Returns a tf.name_scope instance for this class. | 
| non_trainable_variables | Sequence of non-trainable variables owned by this module and its submodules. | 
| parameters | Dictionary of parameters used to instantiate this Bijector. | 
| submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). | 
| trainable_variables | Sequence of trainable variables owned by this module and its submodules. | 
| validate_args | Returns True if Tensor arguments will be validated. | 
| validate_event_size | |
| variables | Sequence of variables owned by this module and its submodules. | 
Methods
copy
copy(
    **override_parameters_kwargs
)
Creates a copy of the bijector.
| Args | |
|---|---|
| **override_parameters_kwargs | String/value dictionary of initialization arguments to override with new values. | 
| Returns | |
|---|---|
| bijector | A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). | 
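A minimal sketch, reusing the JointMap from the example above (and assuming tfb = tfp.bijectors):

```python
jm = tfb.JointMap({'a': tfb.Exp(), 'b': tfb.Scale(2.)})
# Same constituent bijectors, but with argument validation enabled:
jm_validated = jm.copy(validate_args=True)
```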
experimental_batch_shape
experimental_batch_shape(
    x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct
transformations it represents on events of a given size. For example: the
bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events
(event_ndims = 0), because applying it to a scalar event produces
two scalar outputs, the result of two different scaling transformations.
The same bijector has batch shape [] for vector events, because applying
it to a vector produces (via elementwise multiplication) a single vector
output.
Bijectors that operate independently on multiple state parts, such as
tfb.JointMap, must broadcast to a coherent batch shape. Some events may
not be valid: for example, the bijector
tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not
produce a valid batch shape when event_ndims = [0, 0], since the batch
shapes of the two parts are inconsistent. The same bijector
does define valid batch shapes of [], [2], and [3] if event_ndims
is [1, 1], [0, 1], or [1, 0], respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the
batch shape of a bijector with non-constant Jacobian is expected to equal
the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x
or y of the specified ndims.
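For instance, the batch shapes described above can be queried directly. A minimal sketch, assuming tfb = tfp.bijectors:

```python
jm = tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])])
jm.experimental_batch_shape(x_event_ndims=[1, 1])  # ==> TensorShape([])
jm.experimental_batch_shape(x_event_ndims=[0, 1])  # ==> TensorShape([2])
jm.experimental_batch_shape(x_event_ndims=[1, 0])  # ==> TensorShape([3])
```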
| Args | |
|---|---|
| x_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None. | 
| y_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None. | 
| Returns | |
|---|---|
| batch_shape | TensorShape batch shape of this bijector for a value with the given event rank. May be unknown or partially defined. | 
experimental_batch_shape_tensor
experimental_batch_shape_tensor(
    x_event_ndims=None, y_event_ndims=None
)
Returns the batch shape of this bijector for inputs of the given rank.
The batch shape of a bijector describes the set of distinct
transformations it represents on events of a given size. For example: the
bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events
(event_ndims = 0), because applying it to a scalar event produces
two scalar outputs, the result of two different scaling transformations.
The same bijector has batch shape [] for vector events, because applying
it to a vector produces (via elementwise multiplication) a single vector
output.
Bijectors that operate independently on multiple state parts, such as
tfb.JointMap, must broadcast to a coherent batch shape. Some events may
not be valid: for example, the bijector
tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not
produce a valid batch shape when event_ndims = [0, 0], since the batch
shapes of the two parts are inconsistent. The same bijector
does define valid batch shapes of [], [2], and [3] if event_ndims
is [1, 1], [0, 1], or [1, 0], respectively.
Since transforming a single event produces a scalar log-det-Jacobian, the
batch shape of a bijector with non-constant Jacobian is expected to equal
the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims)
or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x
or y of the specified ndims.
| Args | |
|---|---|
| x_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None. | 
| y_event_ndims | Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None. | 
| Returns | |
|---|---|
| batch_shape_tensor | Integer Tensor batch shape of this bijector for a value with the given event rank. | 
experimental_compute_density_correction
experimental_compute_density_correction(
    x, tangent_space, backward_compat=False, **kwargs
)
Density correction for this transformation wrt the tangent space, at x.
Subclasses of Bijector may call the most specific applicable
method of TangentSpace, based on whether the transformation is
dimension-preserving, coordinate-wise, a projection, or something
more general. The backward-compatible assumption is that the
transformation is dimension-preserving (goes from R^n to R^n).
| Args | |
|---|---|
| x | Tensor (structure). The point at which to calculate the density. | 
| tangent_space | TangentSpace or one of its subclasses. The tangent to the support manifold at x. | 
| backward_compat | bool specifying whether to assume that the Bijector is dimension-preserving. | 
| **kwargs | Optional keyword arguments forwarded to tangent space methods. | 
| Returns | |
|---|---|
| density_correction | Tensor representing the density correction, in log space, under the transformation that this Bijector denotes. | 
| Raises | |
|---|---|
| TypeError | if backward_compat is False but no method of TangentSpace has been called explicitly. | 
forward
forward(
    x, name='forward', **kwargs
)
Returns the forward Bijector evaluation, i.e., Y = g(X).
| Args | |
|---|---|
| x | Tensor (structure). The input to the 'forward' evaluation. | 
| name | The name to give this op. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| Tensor (structure). | 
| Raises | |
|---|---|
| TypeError | if self.dtype is specified and x.dtype is not self.dtype. | 
| NotImplementedError | if _forward is not implemented. | 
forward_dtype
forward_dtype(
    dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)
Returns the dtype returned by forward for the provided input.
forward_event_ndims
forward_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by forward.
| Args | |
|---|---|
| event_ndims | Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.forward_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.forward_min_event_ndims. | 
| **kwargs | Optional keyword arguments forwarded to nested bijectors. | 
| Returns | |
|---|---|
| forward_event_ndims | Structure of integers and/or None values matching self.inverse_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints. | 
forward_event_shape
forward_event_shape(
    input_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as forward_event_shape_tensor. May be only partially defined.
| Args | |
|---|---|
| input_shape | TensorShape (structure) indicating event-portion shape passed into the forward function. | 
| Returns | |
|---|---|
| forward_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying forward. Possibly unknown. | 
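A minimal sketch: both parts of the example JointMap preserve shape, so the input structure maps through unchanged:

```python
jm = tfb.JointMap({'a': tfb.Exp(), 'b': tfb.Scale(2.)})
jm.forward_event_shape({'a': tf.TensorShape([3]), 'b': tf.TensorShape([])})
# ==> {'a': TensorShape([3]), 'b': TensorShape([])}
```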
forward_event_shape_tensor
forward_event_shape_tensor(
    input_shape, name='forward_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
| Args | |
|---|---|
| input_shape | Tensor, int32 vector (structure) indicating event-portion shape passed into the forward function. | 
| name | Name to give to the op. | 
| Returns | |
|---|---|
| forward_event_shape_tensor | Tensor, int32 vector (structure) indicating event-portion shape after applying forward. | 
forward_log_det_jacobian
forward_log_det_jacobian(
    x, event_ndims=None, name='forward_log_det_jacobian', **kwargs
)
Returns the forward log-det-Jacobian, i.e., log(det(dY/dX))(X).
| Args | |
|---|---|
| x | Tensor (structure). The input to the 'forward' Jacobian determinant evaluation. | 
| event_ndims | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.forward_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(x[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.forward_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.forward_min_event_ndims). | 
| name | The name to give this op. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| Tensor (structure), if this bijector is injective. If not injective, this is not implemented. | 
| Raises | |
|---|---|
| TypeError | if y's dtype is incompatible with the expected output dtype. | 
| NotImplementedError | if neither _forward_log_det_jacobian nor {_inverse, _inverse_log_det_jacobian} are implemented, or this is a non-injective bijector. | 
| ValueError | if the value of event_ndims is not valid for this bijector. | 
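For example, with the JointMap from the class-level example, specifying scalar event_ndims for each part sums the parts' log-det-Jacobians into a single scalar per event. A minimal sketch:

```python
exp = tfb.Exp()
scale = tfb.Scale(2.)
jm = tfb.JointMap({'a': exp, 'b': scale})
x = {'a': 1., 'b': 2.}

jm.forward_log_det_jacobian(x, event_ndims={'a': 0, 'b': 0})
# = exp.forward_log_det_jacobian(1., event_ndims=0)
#   + scale.forward_log_det_jacobian(2., event_ndims=0)
# = 1. + log(2.)
```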
inverse
inverse(
    y, name='inverse', **kwargs
)
Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).
| Args | |
|---|---|
| y | Tensor (structure). The input to the 'inverse' evaluation. | 
| name | The name to give this op. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y. | 
| Raises | |
|---|---|
| TypeError | if y's structured dtype is incompatible with the expected output dtype. | 
| NotImplementedError | if _inverse is not implemented. | 
inverse_dtype
inverse_dtype(
    dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)
Returns the dtype returned by inverse for the provided input.
inverse_event_ndims
inverse_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by inverse.
| Args | |
|---|---|
| event_ndims | Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.inverse_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.inverse_min_event_ndims. | 
| **kwargs | Optional keyword arguments forwarded to nested bijectors. | 
| Returns | |
|---|---|
| inverse_event_ndims | Structure of integers and/or None values matching self.forward_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints. | 
inverse_event_shape
inverse_event_shape(
    output_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as inverse_event_shape_tensor. May be only partially defined.
| Args | |
|---|---|
| output_shape | TensorShape (structure) indicating event-portion shape passed into the inverse function. | 
| Returns | |
|---|---|
| inverse_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying inverse. Possibly unknown. | 
inverse_event_shape_tensor
inverse_event_shape_tensor(
    output_shape, name='inverse_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
| Args | |
|---|---|
| output_shape | Tensor, int32 vector (structure) indicating event-portion shape passed into the inverse function. | 
| name | Name to give to the op. | 
| Returns | |
|---|---|
| inverse_event_shape_tensor | Tensor, int32 vector (structure) indicating event-portion shape after applying inverse. | 
inverse_log_det_jacobian
inverse_log_det_jacobian(
    y, event_ndims=None, name='inverse_log_det_jacobian', **kwargs
)
Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: log(det(dX/dY))(Y). (Recall that: X=g^{-1}(Y).)
Note that forward_log_det_jacobian is the negative of this function,
evaluated at g^{-1}(y).
| Args | |
|---|---|
| y | Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation. | 
| event_ndims | Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.inverse_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.inverse_min_event_ndims). | 
| name | The name to give this op. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| ildj | Tensor, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction of g to the ith partition Di. | 
| Raises | |
|---|---|
| TypeError | if x's dtype is incompatible with the expected inverse-dtype. | 
| NotImplementedError | if _inverse_log_det_jacobian is not implemented. | 
| ValueError | if the value of event_ndims is not valid for this bijector. | 
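A minimal sketch of the negation identity noted above, reusing jm and x from the forward_log_det_jacobian example:

```python
y = jm.forward(x)  # {'a': tf.exp(1.), 'b': 4.}
ildj = jm.inverse_log_det_jacobian(y, event_ndims={'a': 0, 'b': 0})
fldj = jm.forward_log_det_jacobian(x, event_ndims={'a': 0, 'b': 0})
# ildj == -fldj, since x == jm.inverse(y).
```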
parameter_properties
@classmethod
parameter_properties(
    dtype=tf.float32
)
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the bijector's
Tensor-valued constructor arguments.
| Args | |
|---|---|
| dtype | Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low) must be instantiated with the same dtype as the values to be transformed. | 
| Returns | |
|---|---|
| parameter_properties | A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances. | 
with_name_scope
@classmethod
with_name_scope(
    method
)
Decorator to automatically enter the module name scope.
```python
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```
Using the above module would produce tf.Variables and tf.Tensors whose
names included the module name:
```python
mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>
```
| Args | |
|---|---|
| method | The method to wrap. | 
| Returns | |
|---|---|
| The original method wrapped such that it enters the module's name scope. | 
__call__
__call__(
    value, name=None, **kwargs
)
Applies or composes the Bijector, depending on input type.
This is a convenience function which applies the Bijector instance in
three different ways, depending on the input:
- If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
- If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
- Otherwise, return self.forward(input).
| Args | |
|---|---|
| value | A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor. | 
| name | Python str name given to ops created by this function. | 
| **kwargs | Additional keyword arguments passed into the created tfd.TransformedDistribution, tfb.Bijector, or self.forward. | 
| Returns | |
|---|---|
| composition | A tfd.TransformedDistribution if the input was a tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a (structure of) Tensor computed by self.forward. | 
Examples
sigmoid = tfb.Reciprocal()(
    tfb.Shift(shift=1.)(
      tfb.Exp()(
        tfb.Scale(scale=-1.))))
# ==> `tfb.Chain([
#         tfb.Reciprocal(),
#         tfb.Shift(shift=1.),
#         tfb.Exp(),
#         tfb.Scale(scale=-1.),
#      ])`  # i.e., `tfb.Sigmoid()`
log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`
tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
__eq__
__eq__(
    other
)
Return self==value.
__getitem__
__getitem__(
    slices
)
__iter__
__iter__()