Affine MaskedAutoregressiveFlow bijector.
Inherits From: Bijector
tfp.substrates.numpy.bijectors.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=None,
    bijector_fn=None,
    is_constant_jacobian=False,
    validate_args=False,
    unroll_loop=False,
    event_ndims=1,
    name=None
)
The affine autoregressive flow [(Papamakarios et al., 2017)][3] provides a relatively simple framework for user-specified (deep) architectures to learn a distribution over continuous events. Regarding terminology,
'Autoregressive models decompose the joint density as a product of conditionals, and model each conditional in turn. Normalizing flows transform a base density (e.g. a standard Gaussian) into the target density by an invertible transformation with tractable Jacobian.' [(Papamakarios et al., 2017)][3]
In other words, the 'autoregressive property' is equivalent to the decomposition,

  p(x) = prod{ p(x[perm[i]] | x[perm[0:i]]) : i=0, ..., d }

where perm is some permutation of {0, ..., d}. In the simple case where the permutation is the identity, this reduces to:

  p(x) = prod{ p(x[i] | x[0:i]) : i=0, ..., d }.
In TensorFlow Probability, 'normalizing flows' are implemented as
tfp.bijectors.Bijectors. The forward 'autoregression' is implemented
using a tf.while_loop and a deep neural network (DNN) with masked weights
such that the autoregressive property is automatically met in the inverse.
A TransformedDistribution using MaskedAutoregressiveFlow(...) uses the
(expensive) forward-mode calculation to draw samples and the (cheap)
reverse-mode calculation to compute log-probabilities. Conversely, a
TransformedDistribution using Invert(MaskedAutoregressiveFlow(...)) uses
the (expensive) forward-mode calculation to compute log-probabilities and the
(cheap) reverse-mode calculation to compute samples. See 'Example Use'
[below] for more details.
Given a shift_and_log_scale_fn, the forward and inverse transformations are
(a sequence of) affine transformations. A 'valid' shift_and_log_scale_fn
must compute each shift (aka loc or 'mu' in [Germain et al. (2015)][1])
and log(scale) (aka 'alpha' in [Germain et al. (2015)][1]) such that each
is broadcastable with the arguments to forward and inverse, i.e., such
that the calculations in forward, inverse [below] are possible.
For convenience, tfp.bijectors.AutoregressiveNetwork is offered as a
possible shift_and_log_scale_fn function. It implements the MADE
architecture [(Germain et al., 2015)][1]. MADE is a feed-forward network that
computes a shift and log(scale) using masked dense layers in a deep
neural network. Weights are masked to ensure the autoregressive property. It
is possible that this architecture is suboptimal for your task. To build
alternative networks, either change the arguments to
tfp.bijectors.AutoregressiveNetwork or use some other architecture, e.g.,
using tf.keras.layers.
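For instance, a hypothetical sketch of adjusting the masked network itself (assuming `tf` and `tfb = tfp.bijectors` as in the Examples section below; the layer sizes and activation are purely illustrative, not recommendations):

deeper_made = tfb.AutoregressiveNetwork(
    params=2, hidden_units=[64, 64, 64], activation=tf.nn.relu)
bijector = tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=deeper_made)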
Assuming shift_and_log_scale_fn has valid shape and autoregressive
semantics, the forward transformation is
  def forward(x):
    y = zeros_like(x)
    event_size = x.shape[-event_ndims:].num_elements()
    for _ in range(event_size):
      shift, log_scale = shift_and_log_scale_fn(y)
      y = x * tf.exp(log_scale) + shift
    return y
and the inverse transformation is
  def inverse(y):
    shift, log_scale = shift_and_log_scale_fn(y)
    return (y - shift) / tf.exp(log_scale)
Notice that the inverse does not need a for-loop. This is because in the forward pass each calculation of shift and log_scale is based on the y computed so far (not x). In the inverse, y is fully known from the start, so the shift and log_scale computed from it are the same as those used in the final pass of the forward loop, i.e., the values computed from the 'last' y. (Roughly speaking, this also shows the transform is bijective.)
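In practice this means inverse(forward(x)) recovers x up to numerical error; a minimal sketch checking this, assuming `tf` and `tfb` as in the Examples section below:

bij = tfb.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
        params=2, hidden_units=[16, 16]))
x = tf.random.normal([3, 2])   # A batch of three 2-dimensional events.
y = bij.forward(x)             # Runs the `event_size`-iteration loop.
x_recovered = bij.inverse(y)   # Single pass; should match `x` closely.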
The bijector_fn argument allows specifying a more general coupling relation,
such as the LSTM-inspired activation from [4], or Neural Spline Flow [5]. It
must logically operate on each element of the input individually, and still
obey the 'autoregressive property' described above. The forward
transformation is
  def forward(x):
    y = zeros_like(x)
    event_size = x.shape[-event_ndims:].num_elements()
    for _ in range(event_size):
      bijector = bijector_fn(y)
      y = bijector.forward(x)
    return y
and the inverse transformation is

  def inverse(y):
    bijector = bijector_fn(y)
    return bijector.inverse(y)
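For example, the shift-only transformation shown later via `shift_and_log_scale_fn` can equivalently be written with `bijector_fn`; a minimal, hypothetical sketch (assuming `tfb = tfp.bijectors`):

# `made(y)[..., 0]` yields one parameter (the shift) per event element,
# respecting the autoregressive property.
made = tfb.AutoregressiveNetwork(params=1, hidden_units=[32])
shift_only = tfb.MaskedAutoregressiveFlow(
    bijector_fn=lambda y: tfb.Shift(made(y)[..., 0]),
    is_constant_jacobian=True)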
Examples
tfd = tfp.distributions
tfb = tfp.bijectors

dims = 2

# A common choice for a normalizing flow is to use a Gaussian for the base
# distribution. (However, any continuous distribution would work.) Here, we
# use `tfd.Sample` to create a joint Gaussian distribution with diagonal
# covariance for the base distribution (note that in the Gaussian case,
# `tfd.MultivariateNormalDiag` could also be used.)
maf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512])))

x = maf.sample()  # Expensive; uses `tf.while_loop`, no Bijector caching.
maf.log_prob(x)   # Almost free; uses Bijector caching.
# Cheap; no `tf.while_loop` despite no Bijector caching.
maf.log_prob(tf.zeros(dims))
# [Papamakarios et al. (2017)][3] also describe an Inverse Autoregressive
# Flow [(Kingma et al., 2016)][2]:
iaf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.Invert(tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512]))))

x = iaf.sample()  # Cheap; no `tf.while_loop` despite no Bijector caching.
iaf.log_prob(x)   # Almost free; uses Bijector caching.
# Expensive; uses `tf.while_loop`, no Bijector caching.
iaf.log_prob(tf.zeros(dims))
# In many (if not most) cases the default `shift_and_log_scale_fn` will be a
# poor choice. Here's an example of using a 'shift only' version with a
# different number/depth of hidden layers.
made = tfb.AutoregressiveNetwork(params=1, hidden_units=[32])

maf_no_scale_hidden2 = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        lambda y: (made(y)[..., 0], None),
        is_constant_jacobian=True))
maf_no_scale_hidden2._made = made  # Ensure maf_no_scale_hidden2.trainable
# NOTE: The last line ensures that maf_no_scale_hidden2.trainable_variables
# will include all variables from `made`.
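Any of the flows above can be fit by maximum likelihood. A minimal sketch, assuming the TensorFlow (rather than numpy) substrate and a hypothetical `batch` Tensor of training data with shape `[batch_size, dims]`:

optimizer = tf.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(batch):
  # Minimize the negative log-likelihood of the batch under the flow.
  with tf.GradientTape() as tape:
    loss = -tf.reduce_mean(maf.log_prob(batch))
  gradients = tape.gradient(loss, maf.trainable_variables)
  optimizer.apply_gradients(zip(gradients, maf.trainable_variables))
  return loss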
Variable Tracking
A tfb.MaskedAutoregressiveFlow instance saves a reference to the values
passed as shift_and_log_scale_fn and bijector_fn to its constructor.
Thus, for most values passed as shift_and_log_scale_fn or bijector_fn,
variables referenced by those values will be found and tracked by the
tfb.MaskedAutoregressiveFlow instance. Please see the tf.Module
documentation for further details.
However, if the value passed to shift_and_log_scale_fn or bijector_fn is a
Python function, then tfb.MaskedAutoregressiveFlow cannot automatically
track variables used inside shift_and_log_scale_fn or bijector_fn. To get
tfb.MaskedAutoregressiveFlow to track such variables, either:
- Replace the Python function with a tf.Module, tf.keras.Layer, or other callable object through which tf.Module can find variables.
- Or, add a reference to the variables to the tfb.MaskedAutoregressiveFlow instance by setting an attribute -- for example:

  made1 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
  made2 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
  maf = tfb.MaskedAutoregressiveFlow(lambda y: (made1(y), made2(y) + 1.))
  maf._made_variables = made1.variables + made2.variables

References
[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509
[2]: Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934
[3]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. In Neural Information Processing Systems, 2017. https://arxiv.org/abs/1705.07057
[4]: Diederik P. Kingma, Tim Salimans, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934
[5]: Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural Spline Flows, 2019. https://arxiv.org/abs/1906.04032
Args
  shift_and_log_scale_fn: Python callable which computes shift and log_scale from the inverse domain (y). Calculation must respect the 'autoregressive property' (see class docstring). Suggested default: tfb.AutoregressiveNetwork(params=2, hidden_units=...). Typically the function contains tf.Variables. Returning None for either (both) shift, log_scale is equivalent to (but more efficient than) returning zero. If shift_and_log_scale_fn returns a single Tensor, the returned value will be unstacked to get the shift and log_scale: tf.unstack(shift_and_log_scale_fn(y), num=2, axis=-1).
  bijector_fn: Python callable which returns a tfb.Bijector which transforms event tensor with the signature (input, **condition_kwargs) -> bijector. The bijector must operate on scalar events and must not alter the rank of its input. The bijector_fn will be called with Tensors from the inverse domain (y). Calculation must respect the 'autoregressive property' (see class docstring).
  is_constant_jacobian: Python bool. Default: False. When True the implementation assumes log_scale does not depend on the forward domain (x) or inverse domain (y) values. (No validation is made; is_constant_jacobian=False is always safe but possibly computationally inefficient.)
  validate_args: Python bool indicating whether arguments should be checked for correctness.
  unroll_loop: Python bool indicating whether the tf.while_loop in _forward should be replaced with a static for loop. Requires that the final dimension of x be known at graph construction time. Defaults to False.
  event_ndims: Python integer, the intrinsic dimensionality of this bijector. 1 corresponds to a simple vector autoregressive bijector as implemented by tfp.bijectors.AutoregressiveNetwork, 2 might be useful for a 2D convolutional shift_and_log_scale_fn and so on.
  name: Python str, name given to ops managed by this object.

Raises

  ValueError: If both or none of shift_and_log_scale_fn and bijector_fn are specified.

Methods
copy

copy(
    **override_parameters_kwargs
)

Creates a copy of the bijector.

Args
  **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns
  bijector: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).

experimental_batch_shape

experimental_batch_shape(
    x_event_ndims=None, y_event_ndims=None
)

Returns the batch shape of this bijector for inputs of the given rank.

The batch shape of a bijector describes the set of distinct transformations it represents on events of a given size. For example: the bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events (event_ndims = 0), because applying it to a scalar event produces two scalar outputs, the result of two different scaling transformations. The same bijector has batch shape [] for vector events, because applying it to a vector produces (via elementwise multiplication) a single vector output.

Bijectors that operate independently on multiple state parts, such as tfb.JointMap, must broadcast to a coherent batch shape. Some events may not be valid: for example, the bijector tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not produce a valid batch shape when event_ndims = [0, 0], since the batch shapes of the two parts are inconsistent. The same bijector does define valid batch shapes of [], [2], and [3] if event_ndims is [1, 1], [0, 1], or [1, 0], respectively.

Since transforming a single event produces a scalar log-det-Jacobian, the batch shape of a bijector with non-constant Jacobian is expected to equal the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims) or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x or y of the specified ndims.

Args
  x_event_ndims: Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None.
  y_event_ndims: Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None.

Returns
  batch_shape: TensorShape batch shape of this bijector for a value with the given event rank. May be unknown or partially defined.
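For instance, a small hedged sketch of the scalar- vs. vector-event cases described above (assuming `tfb = tfp.bijectors`):

scale = tfb.Scale([1., 2.])
scale.experimental_batch_shape(x_event_ndims=0)  # ==> TensorShape([2])
scale.experimental_batch_shape(x_event_ndims=1)  # ==> TensorShape([])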
experimental_batch_shape_tensor

experimental_batch_shape_tensor(
    x_event_ndims=None, y_event_ndims=None
)

Returns the batch shape of this bijector for inputs of the given rank.

The batch shape of a bijector describes the set of distinct transformations it represents on events of a given size. For example: the bijector tfb.Scale([1., 2.]) has batch shape [2] for scalar events (event_ndims = 0), because applying it to a scalar event produces two scalar outputs, the result of two different scaling transformations. The same bijector has batch shape [] for vector events, because applying it to a vector produces (via elementwise multiplication) a single vector output.

Bijectors that operate independently on multiple state parts, such as tfb.JointMap, must broadcast to a coherent batch shape. Some events may not be valid: for example, the bijector tfb.JointMap([tfb.Scale([1., 2.]), tfb.Scale([1., 2., 3.])]) does not produce a valid batch shape when event_ndims = [0, 0], since the batch shapes of the two parts are inconsistent. The same bijector does define valid batch shapes of [], [2], and [3] if event_ndims is [1, 1], [0, 1], or [1, 0], respectively.

Since transforming a single event produces a scalar log-det-Jacobian, the batch shape of a bijector with non-constant Jacobian is expected to equal the shape of forward_log_det_jacobian(x, event_ndims=x_event_ndims) or inverse_log_det_jacobian(y, event_ndims=y_event_ndims), for x or y of the specified ndims.

Args
  x_event_ndims: Optional Python int (structure) number of dimensions in a probabilistic event passed to forward; this must be greater than or equal to self.forward_min_event_ndims. If None, defaults to self.forward_min_event_ndims. Mutually exclusive with y_event_ndims. Default value: None.
  y_event_ndims: Optional Python int (structure) number of dimensions in a probabilistic event passed to inverse; this must be greater than or equal to self.inverse_min_event_ndims. Mutually exclusive with x_event_ndims. Default value: None.

Returns
  batch_shape_tensor: integer Tensor batch shape of this bijector for a value with the given event rank.

experimental_compute_density_correction

experimental_compute_density_correction(
    x, tangent_space, backward_compat=False, **kwargs
)

Density correction for this transformation wrt the tangent space, at x.

Subclasses of Bijector may call the most specific applicable method of TangentSpace, based on whether the transformation is dimension-preserving, coordinate-wise, a projection, or something more general. The backward-compatible assumption is that the transformation is dimension-preserving (goes from R^n to R^n).

Args
  x: Tensor (structure). The point at which to calculate the density.
  tangent_space: TangentSpace or one of its subclasses. The tangent to the support manifold at x.
  backward_compat: bool specifying whether to assume that the Bijector is dimension-preserving.
  **kwargs: Optional keyword arguments forwarded to tangent space methods.

Returns
  density_correction: Tensor representing the density correction---in log space---under the transformation that this Bijector denotes.

Raises
  TypeError: if backward_compat is False but no method of TangentSpace has been called explicitly.
forward

forward(
    x, name='forward', **kwargs
)

Returns the forward Bijector evaluation, i.e., Y = g(X).

Args
  x: Tensor (structure). The input to the 'forward' evaluation.
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.

Returns
  Tensor (structure).

Raises
  TypeError: if self.dtype is specified and x.dtype is not self.dtype.
  NotImplementedError: if _forward is not implemented.

forward_dtype

forward_dtype(
    dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)

Returns the dtype returned by forward for the provided input.

forward_event_ndims

forward_event_ndims(
    event_ndims, **kwargs
)

Returns the number of event dimensions produced by forward.

Args
  event_ndims: Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.forward_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.forward_min_event_ndims.
  **kwargs: Optional keyword arguments forwarded to nested bijectors.

Returns
  forward_event_ndims: Structure of integers and/or None values matching self.inverse_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints.

forward_event_shape

forward_event_shape(
    input_shape
)

Shape of a single sample from a single batch as a TensorShape.

Same meaning as forward_event_shape_tensor. May be only partially defined.

Args
  input_shape: TensorShape (structure) indicating event-portion shape passed into forward function.

Returns
  forward_event_shape_tensor: TensorShape (structure) indicating event-portion shape after applying forward. Possibly unknown.

forward_event_shape_tensor

forward_event_shape_tensor(
    input_shape, name='forward_event_shape_tensor'
)

Shape of a single sample from a single batch as an int32 1D Tensor.

Args
  input_shape: Tensor, int32 vector (structure) indicating event-portion shape passed into forward function.
  name: name to give to the op.

Returns
  forward_event_shape_tensor: Tensor, int32 vector (structure) indicating event-portion shape after applying forward.

forward_log_det_jacobian

forward_log_det_jacobian(
    x, event_ndims=None, name='forward_log_det_jacobian', **kwargs
)

Returns the forward_log_det_jacobian.

Args
  x: Tensor (structure). The input to the 'forward' Jacobian determinant evaluation.
  event_ndims: Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.forward_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.forward_min_event_ndims).
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.

Returns
  Tensor (structure), if this bijector is injective. If not injective this is not implemented.

Raises
  TypeError: if y's dtype is incompatible with the expected output dtype.
  NotImplementedError: if neither _forward_log_det_jacobian nor {_inverse, _inverse_log_det_jacobian} are implemented, or this is a non-injective bijector.
  ValueError: if the value of event_ndims is not valid for this bijector.
inverse

inverse(
    y, name='inverse', **kwargs
)

Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).

Args
  y: Tensor (structure). The input to the 'inverse' evaluation.
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.

Returns
  Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y.

Raises
  TypeError: if y's structured dtype is incompatible with the expected output dtype.
  NotImplementedError: if _inverse is not implemented.

inverse_dtype

inverse_dtype(
    dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)

Returns the dtype returned by inverse for the provided input.

inverse_event_ndims

inverse_event_ndims(
    event_ndims, **kwargs
)

Returns the number of event dimensions produced by inverse.

Args
  event_ndims: Structure of Python and/or Tensor ints, and/or None values. The structure should match that of self.inverse_min_event_ndims, and all non-None values must be greater than or equal to the corresponding value in self.inverse_min_event_ndims.
  **kwargs: Optional keyword arguments forwarded to nested bijectors.

Returns
  inverse_event_ndims: Structure of integers and/or None values matching self.forward_min_event_ndims. These are computed using 'prefer static' semantics: if any inputs are None, some or all of the outputs may be None, indicating that the output dimension could not be inferred (conversely, if all inputs are non-None, all outputs will be non-None). If all input event_ndims are Python ints, all of the (non-None) outputs will be Python ints; otherwise, some or all of the outputs may be Tensor ints.

inverse_event_shape

inverse_event_shape(
    output_shape
)

Shape of a single sample from a single batch as a TensorShape.

Same meaning as inverse_event_shape_tensor. May be only partially defined.

Args
  output_shape: TensorShape (structure) indicating event-portion shape passed into inverse function.

Returns
  inverse_event_shape_tensor: TensorShape (structure) indicating event-portion shape after applying inverse. Possibly unknown.

inverse_event_shape_tensor

inverse_event_shape_tensor(
    output_shape, name='inverse_event_shape_tensor'
)

Shape of a single sample from a single batch as an int32 1D Tensor.

Args
  output_shape: Tensor, int32 vector (structure) indicating event-portion shape passed into inverse function.
  name: name to give to the op.

Returns
  inverse_event_shape_tensor: Tensor, int32 vector (structure) indicating event-portion shape after applying inverse.

inverse_log_det_jacobian

inverse_log_det_jacobian(
    y, event_ndims=None, name='inverse_log_det_jacobian', **kwargs
)

Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: log(det(dX/dY))(Y). (Recall that: X = g^{-1}(Y).)

Note that forward_log_det_jacobian is the negative of this function, evaluated at g^{-1}(y).

Args
  y: Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation.
  event_ndims: Optional number of dimensions in the probabilistic events being transformed; this must be greater than or equal to self.inverse_min_event_ndims. If event_ndims is specified, the log Jacobian determinant is summed to produce a scalar log-determinant for each event. Otherwise (if event_ndims is None), no reduction is performed. Multipart bijectors require structured event_ndims, such that the batch rank rank(y[i]) - event_ndims[i] is the same for all elements i of the structured input. In most cases (with the exception of tfb.JointMap) they further require that event_ndims[i] - self.inverse_min_event_ndims[i] is the same for all elements i of the structured input. Default value: None (equivalent to self.inverse_min_event_ndims).
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.

Returns
  ildj: Tensor, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction of g to the ith partition Di.

Raises
  TypeError: if x's dtype is incompatible with the expected inverse-dtype.
  NotImplementedError: if _inverse_log_det_jacobian is not implemented.
  ValueError: if the value of event_ndims is not valid for this bijector.

parameter_properties

@classmethod
parameter_properties(
    dtype=tf.float32
)

Returns a dict mapping constructor arg names to property annotations.

This dict should include an entry for each of the bijector's Tensor-valued constructor arguments.

Args
  dtype: Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low) must be instantiated with the same dtype as the values to be transformed.

Returns
  parameter_properties: A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances.

__call__

__call__(
    value, name=None, **kwargs
)

Applies or composes the Bijector, depending on input type.
This is a convenience function which applies the Bijector instance in three different ways, depending on the input:

- If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
- If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
- Otherwise, return self.forward(input).

Args
  value: A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor.
  name: Python str name given to ops created by this function.
  **kwargs: Additional keyword arguments passed into the created tfd.TransformedDistribution, tfb.Bijector, or self.forward.

Returns
  composition: A tfd.TransformedDistribution if the input was a tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a (structure of) Tensor computed by self.forward.

Examples

  sigmoid = tfb.Reciprocal()(
      tfb.Shift(shift=1.)(
        tfb.Exp()(
          tfb.Scale(scale=-1.))))
  # ==> `tfb.Chain([
  #         tfb.Reciprocal(),
  #         tfb.Shift(shift=1.),
  #         tfb.Exp(),
  #         tfb.Scale(scale=-1.),
  #      ])`  # ie, `tfb.Sigmoid()`

  log_normal = tfb.Exp()(tfd.Normal(0, 1))
  # ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`

  tfb.Exp()([-1., 0., 1.])
  # ==> tf.exp([-1., 0., 1.])
__eq__

__eq__(
    other
)

Return self == value.

__getitem__

__getitem__(
    slices
)

__iter__

__iter__()