Posterior predictive of a variational Gaussian process.
Inherits From: GaussianProcess, AutoCompositeTensorDistribution, Distribution, AutoCompositeTensor
tfp.distributions.VariationalGaussianProcess(
    kernel,
    index_points,
    inducing_index_points,
    variational_inducing_observations_loc,
    variational_inducing_observations_scale,
    mean_fn=None,
    observation_noise_variance=None,
    predictive_noise_variance=None,
    cholesky_fn=None,
    use_whitening_transform=False,
    jitter=1e-06,
    validate_args=False,
    allow_nan_stats=False,
    name='VariationalGaussianProcess'
)
This distribution implements the variational Gaussian process (VGP), as
described in [Titsias, 2009][1] and [Hensman, 2013][2]. The VGP is an
inducing point-based approximation of an exact GP posterior
(see Mathematical Details, below). Ultimately, this Distribution class
represents a marginal distribution over function values at a
collection of index_points. It is parameterized by
- a kernel function,
- a mean function,
- the (scalar) observation noise variance of the normal likelihood,
- a set of index points,
- a set of inducing index points, and
- the parameters of the (full-rank, Gaussian) variational posterior distribution over function values at the inducing points, conditional on some observations.
A VGP is "trained" by selecting any kernel parameters, the locations of the
inducing index points, and the variational parameters. [Titsias, 2009][1] and
[Hensman, 2013][2] describe a variational lower bound on the marginal log
likelihood of observed data, which this class offers through the
variational_loss method (this is the negative lower bound, for convenience
when plugging into a TF Optimizer's minimize function).
Training may be done in minibatches.
[Titsias, 2009][1] describes a closed form for the optimal variational
parameters in the case of sufficiently small observational data (i.e.,
small enough to fit in memory but big enough to warrant approximating the GP
posterior). A method to compute these optimal parameters in terms of the full
observational data set is provided as a staticmethod,
optimal_variational_posterior. It returns the optimal variational location and
scale parameters.
Mathematical Details
Notation
We will in general be concerned with three collections of index points, and it will help to give them names:
- x[1], ..., x[N]: observation index points -- locations of our observed data.
- z[1], ..., z[M]: inducing index points -- locations of the "summarizing" inducing points
- t[1], ..., t[P]: predictive index points -- locations where we are making posterior predictions based on observations and the variational parameters.
To lighten notation, we'll use X, Z, T to denote the above collections.
Similarly, we'll denote by f(X) the collection of function values at each of
the x[i], and by Y, the collection of (noisy) observed data at each x[i].
We'll denote kernel matrices generated from pairs of index points as K_tt, K_xt, K_tz, etc., e.g.,
         | k(t[1], z[1])    k(t[1], z[2])  ...  k(t[1], z[M]) |
  K_tz = | k(t[2], z[1])    k(t[2], z[2])  ...  k(t[2], z[M]) |
         |      ...              ...                 ...      |
         | k(t[P], z[1])    k(t[P], z[2])  ...  k(t[P], z[M]) |
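For concreteness, such a matrix can be computed with a TFP positive-semidefinite kernel's matrix method. The following is a minimal sketch with arbitrary example points; any kernel from tfp.math.psd_kernels would do:
import numpy as np
import tensorflow_probability as tfp
tfk = tfp.math.psd_kernels

# P = 5 predictive points and M = 3 inducing points, each 1-dimensional.
t = np.linspace(-1., 1., 5)[..., np.newaxis]   # shape [5, 1]
z = np.linspace(-2., 2., 3)[..., np.newaxis]   # shape [3, 1]
kernel = tfk.ExponentiatedQuadratic()
K_tz = kernel.matrix(t, z)  # shape [5, 3]; K_tz[i, j] = k(t[i], z[j])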
Preliminaries
A Gaussian process is an indexed collection of random variables, any finite
collection of which are jointly Gaussian. Typically, the index set is some
finite-dimensional, real vector space, and indeed we make this assumption in
what follows. The GP may then be thought of as a distribution over functions
on the index set. Samples from the GP are functions on the whole index set;
these can't be represented in finite compute memory, so one typically works
with the marginals at a finite collection of index points. The properties of
the GP are entirely determined by its mean function m and covariance
function k. The generative process, assuming a mean-zero normal likelihood
with stddev sigma, is
  f ~ GP(m, k)
  Y[i] | f(X) ~ Normal(f(x[i]), sigma),   i = 1, ..., N
In finite terms (i.e., marginalizing out all but a finite number of the f(X) values), we can write
  f(X) ~ MVN(loc=m(X), cov=K_xx)
  Y[i] | f(X) ~ Normal(f(x[i]), sigma),   i = 1, ..., N
Posterior inference is possible in analytical closed form but becomes intractable as data sizes get large. See [Rasmussen, 2006][3] for details.
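For concreteness, here is a minimal sketch of the finite-dimensional generative process above in TFP (assuming an arbitrary exponentiated-quadratic kernel, a zero mean function, and sigma = 0.1):
import numpy as np
import tensorflow_probability as tfp
tfd = tfp.distributions
tfk = tfp.math.psd_kernels

X = np.linspace(-3., 3., 50)[..., np.newaxis]             # N = 50 index points
gp = tfd.GaussianProcess(tfk.ExponentiatedQuadratic(), index_points=X)
f_X = gp.sample()                                          # f(X) ~ MVN(m(X), K_xx)
Y = tfd.Normal(loc=f_X, scale=np.float64(0.1)).sample()    # Y[i] | f(X) ~ Normal(f(x[i]), sigma)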
The VGP
The VGP is an inducing point-based approximation of an exact GP posterior, where two approximating assumptions have been made:
- function values at non-inducing points are mutually independent conditioned on function values at the inducing points,
- the (expensive) posterior over function values at inducing points, conditional on observations, is replaced with an arbitrary (learnable) full-rank Gaussian distribution, q(f(Z)) = MVN(loc=m, scale=S), where m and S are parameters to be chosen by optimizing an evidence lower bound (ELBO).
The posterior predictive distribution becomes
  q(f(T)) = integral df(Z) p(f(T) | f(Z)) q(f(Z))
          = MVN(loc = A @ m, scale = B^(1/2))
where
  A = K_tz @ K_zz^-1
  B = K_tt - A @ (K_zz - S S^T) A^T
The approximate posterior predictive distribution q(f(T)) is what the
VariationalGaussianProcess class represents.
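The following NumPy sketch spells these moments out directly, mirroring the formulas above (dense inverses are used purely for exposition; the class itself uses more numerically careful Cholesky-based computations):
import numpy as np

def vgp_predictive_moments(K_tt, K_tz, K_zz, m, S):
  # A = K_tz @ K_zz^-1  (exposition only; prefer Cholesky solves in practice).
  A = K_tz @ np.linalg.inv(K_zz)
  loc = A @ m                                  # predictive mean
  B = K_tt - A @ (K_zz - S @ S.T) @ A.T        # predictive covariance
  return loc, B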
Model selection in this framework entails choosing the kernel parameters, inducing point locations, and variational parameters. We do this by optimizing a variational lower bound on the marginal log likelihood of observed data. The lower bound takes the following form (see [Titsias, 2009][1] and [Hensman, 2013][2] for details on the derivation):
  L(Z, m, S, Y) = (
      MVN(loc=(K_xz @ K_zz^-1) @ m, scale_diag=sigma).log_prob(Y) -
      (Tr(K_xx - K_xz @ K_zz^-1 @ K_zx) +
       Tr(S @ S^T @ K_zz^-1 @ K_zx @ K_xz @ K_zz^-1)) / (2 * sigma^2) -
      KL(q(f(Z)) || p(f(Z))))
where in the final KL term, p(f(Z)) is the GP prior on inducing point
function values. This variational lower bound can be computed on minibatches
of the full data set (X, Y). A method to compute the negative variational
lower bound is implemented as VariationalGaussianProcess.variational_loss.
Optimal variational parameters
As described in [Titsias, 2009][1], a closed form optimum for the variational
location and scale parameters, m and S, can be computed when the
observational data are not prohibitively voluminous. The
optimal_variational_posterior staticmethod computes this optimal variational
posterior distribution over inducing point function values in terms of the GP
parameters (mean and kernel functions), inducing point locations, observation
index points, and observations. Note that the inducing index point locations
must still be optimized separately, since the optimal variational parameters
are merely known functions of them. The optimal parameters are computed as follows:
  C = (K_zz + sigma^-2 K_zx @ K_xz)^-1
  optimal Gaussian covariance: K_zz @ C @ K_zz
  optimal Gaussian location: sigma^-2 K_zz @ C @ K_zx @ Y
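In NumPy-like pseudocode (again a sketch of the formulas only; the actual staticmethod works with Cholesky factors and LinearOperators for numerical stability):
import numpy as np

def optimal_variational_params(K_zz, K_zx, Y, sigma):
  # C = (K_zz + sigma^-2 K_zx @ K_xz)^-1, with K_xz = K_zx.T (dense inverse for exposition only).
  C = np.linalg.inv(K_zz + (K_zx @ K_zx.T) / sigma**2)
  optimal_loc = (K_zz @ C @ K_zx @ Y) / sigma**2   # optimal Gaussian location
  optimal_cov = K_zz @ C @ K_zz                    # optimal Gaussian covariance
  return optimal_loc, optimal_cov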
Usage Examples
Here's an example of defining and training a VariationalGaussianProcess on some toy generated data.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
# We'll use double precision throughout for better numerics.
dtype = np.float64
# Generate noisy data from a known function.
f = lambda x: np.exp(-x[..., 0]**2 / 20.) * np.sin(1. * x[..., 0])
true_observation_noise_variance_ = dtype(1e-1) ** 2
num_training_points_ = 100
x_train_ = np.concatenate(
    [np.random.uniform(-6., 0., [num_training_points_ // 2 , 1]),
    np.random.uniform(1., 10., [num_training_points_ // 2 , 1])],
    axis=0).astype(dtype)
y_train_ = (f(x_train_) +
            np.random.normal(
                0., np.sqrt(true_observation_noise_variance_),
                [num_training_points_]).astype(dtype))
# Create kernel with trainable parameters, and trainable observation noise
# variance variable. Each of these is constrained to be positive.
amplitude = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='amplitude')
length_scale = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='length_scale')
kernel = tfk.ExponentiatedQuadratic(
    amplitude=amplitude,
    length_scale=length_scale)
observation_noise_variance = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='observation_noise_variance')
# Create trainable inducing point locations and variational parameters.
num_inducing_points_ = 20
inducing_index_points = tf.Variable(
    np.linspace(-5., 5., num_inducing_points_)[..., np.newaxis],
    dtype=dtype, name='inducing_index_points')
variational_inducing_observations_loc = tf.Variable(
    np.zeros([num_inducing_points_], dtype=dtype),
    name='variational_inducing_observations_loc')
variational_inducing_observations_scale = tf.Variable(
    np.eye(num_inducing_points_, dtype=dtype),
    name='variational_inducing_observations_scale')
# These are the index point locations over which we'll construct the
# (approximate) posterior predictive distribution.
num_predictive_index_points_ = 500
index_points_ = np.linspace(-13, 13,
                            num_predictive_index_points_,
                            dtype=dtype)[..., np.newaxis]
# Construct our variational GP Distribution instance.
vgp = tfd.VariationalGaussianProcess(
    kernel,
    index_points=index_points_,
    inducing_index_points=inducing_index_points,
    variational_inducing_observations_loc=
        variational_inducing_observations_loc,
    variational_inducing_observations_scale=
        variational_inducing_observations_scale,
    observation_noise_variance=observation_noise_variance)
# For training, we use some simplistic numpy-based minibatching.
batch_size = 64
optimizer = tf.optimizers.Adam(learning_rate=.1)
@tf.function
def optimize(x_train_batch, y_train_batch):
  with tf.GradientTape() as tape:
    # Create the loss function we want to optimize.
    loss = vgp.variational_loss(
        observations=y_train_batch,
        observation_index_points=x_train_batch,
        kl_weight=float(batch_size) / float(num_training_points_))
  grads = tape.gradient(loss, vgp.trainable_variables)
  optimizer.apply_gradients(zip(grads, vgp.trainable_variables))
  return loss
num_iters = 10000
num_logs = 10
for i in range(num_iters):
  batch_idxs = np.random.randint(num_training_points_, size=[batch_size])
  x_train_batch = x_train_[batch_idxs, ...]
  y_train_batch = y_train_[batch_idxs]
  loss = optimize(x_train_batch, y_train_batch)
  if i % (num_iters / num_logs) == 0 or i + 1 == num_iters:
    print(i, loss.numpy())
# Generate a plot with
#   - the posterior predictive mean
#   - training data
#   - inducing index points (plotted vertically at the mean of the variational
#     posterior over inducing point function values)
#   - 50 posterior predictive samples
num_samples = 50
samples = vgp.sample(num_samples).numpy()
mean = vgp.mean().numpy()
inducing_index_points_ = inducing_index_points.numpy()
variational_loc = variational_inducing_observations_loc.numpy()
plt.figure(figsize=(15, 5))
plt.scatter(inducing_index_points_[..., 0], variational_loc,
            marker='x', s=50, color='k', zorder=10)
plt.scatter(x_train_[..., 0], y_train_, color='#00ff00', zorder=9)
plt.plot(np.tile(index_points_, (num_samples)),
          samples.T, color='r', alpha=.1)
plt.plot(index_points_, mean, color='k')
plt.plot(index_points_, f(index_points_), color='b')
Here we use the same data setup, but compute the optimal variational
parameters instead of training them.
# We'll use double precision throughout for better numerics.
dtype = np.float64
# Generate noisy data from a known function.
f = lambda x: np.exp(-x[..., 0]**2 / 20.) * np.sin(1. * x[..., 0])
true_observation_noise_variance_ = dtype(1e-1) ** 2
num_training_points_ = 1000
x_train_ = np.random.uniform(-10., 10., [num_training_points_, 1])
y_train_ = (f(x_train_) +
            np.random.normal(
                0., np.sqrt(true_observation_noise_variance_),
                [num_training_points_]))
# Create kernel with trainable parameters, and trainable observation noise
# variance variable. Each of these is constrained to be positive.
amplitude = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='amplitude')
length_scale = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='length_scale')
kernel = tfk.ExponentiatedQuadratic(
    amplitude=amplitude,
    length_scale=length_scale)
observation_noise_variance = tfp.util.TransformedVariable(
    1., tfb.Softplus(), dtype=dtype, name='observation_noise_variance')
# Create trainable inducing point locations and variational parameters.
num_inducing_points_ = 10
inducing_index_points = tf.Variable(
    np.linspace(-10., 10., num_inducing_points_)[..., np.newaxis],
    dtype=dtype, name='inducing_index_points')
variational_loc, variational_scale = (
    tfd.VariationalGaussianProcess.optimal_variational_posterior(
        kernel=kernel,
        inducing_index_points=inducing_index_points,
        observation_index_points=x_train_,
        observations=y_train_,
        observation_noise_variance=observation_noise_variance))
# These are the index point locations over which we'll construct the
# (approximate) posterior predictive distribution.
num_predictive_index_points_ = 500
index_points_ = np.linspace(-13, 13,
                            num_predictive_index_points_,
                            dtype=dtype)[..., np.newaxis]
# Construct our variational GP Distribution instance.
vgp = tfd.VariationalGaussianProcess(
    kernel,
    index_points=index_points_,
    inducing_index_points=inducing_index_points,
    variational_inducing_observations_loc=variational_loc,
    variational_inducing_observations_scale=variational_scale,
    observation_noise_variance=observation_noise_variance)
# For training, we use some simplistic numpy-based minibatching.
batch_size = 64
optimizer = tf.optimizers.Adam(learning_rate=.05, beta_1=.5, beta_2=.99)
@tf.function
def optimize(x_train_batch, y_train_batch):
  with tf.GradientTape() as tape:
    # Create the loss function we want to optimize.
    loss = vgp.variational_loss(
        observations=y_train_batch,
        observation_index_points=x_train_batch,
        kl_weight=float(batch_size) / float(num_training_points_))
  grads = tape.gradient(loss, vgp.trainable_variables)
  optimizer.apply_gradients(zip(grads, vgp.trainable_variables))
  return loss
num_iters = 300
num_logs = 10
for i in range(num_iters):
  batch_idxs = np.random.randint(num_training_points_, size=[batch_size])
  x_train_batch_ = x_train_[batch_idxs, ...]
  y_train_batch_ = y_train_[batch_idxs]
  loss = optimize(x_train_batch_, y_train_batch_)
  if i % (num_iters / num_logs) == 0 or i + 1 == num_iters:
    print(i, loss.numpy())
# Generate a plot with
#   - the posterior predictive mean
#   - training data
#   - inducing index points (plotted vertically at the mean of the
#     variational posterior over inducing point function values)
#   - 50 posterior predictive samples
num_samples = 50
samples_ = vgp.sample(num_samples).numpy()
mean_ = vgp.mean().numpy()
inducing_index_points_ = inducing_index_points.numpy()
variational_loc_ = variational_loc.numpy()
plt.figure(figsize=(15, 5))
plt.scatter(inducing_index_points_[..., 0], variational_loc_,
            marker='x', s=50, color='k', zorder=10)
plt.scatter(x_train_[..., 0], y_train_, color='#00ff00', alpha=.1, zorder=9)
plt.plot(np.tile(index_points_, num_samples),
          samples_.T, color='r', alpha=.1)
plt.plot(index_points_, mean_, color='k')
plt.plot(index_points_, f(index_points_), color='b')
References
[1]: Titsias, M. "Variational Model Selection for Sparse Gaussian Process Regression", 2009. http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf
[2]: Hensman, J., Lawrence, N. "Gaussian Processes for Big Data", 2013. https://arxiv.org/abs/1309.6835
[3]: Carl Rasmussen, Chris Williams. "Gaussian Processes For Machine Learning", 2006. http://www.gaussianprocess.org/gpml/
[4]: Hensman, J., Matthews, A. G., Filippone, M., Ghahramani, Z. "MCMC for Variationally Sparse Gaussian Processes", 2015. https://arxiv.org/abs/1506.04000
| Raises | |
|---|---|
| ValueError | if mean_fn is not None and is not callable. | 
| Attributes | |
|---|---|
| allow_nan_stats | Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean of Student's t for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined. | 
| batch_shape | Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. | 
| cholesky_fn | |
| dtype | The DType of Tensors handled by this Distribution. | 
| event_shape | Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown. | 
| experimental_shard_axis_names | The list or structure of lists of active shard axis names. | 
| index_points | |
| inducing_index_points | |
| jitter | DEPRECATED FUNCTION | 
| kernel | |
| marginal_fn | |
| mean_fn | |
| name | Name prepended to all ops created by this Distribution. | 
| name_scope | Returns a tf.name_scope instance for this class. | 
| non_trainable_variables | Sequence of non-trainable variables owned by this module and its submodules. | 
| observation_noise_variance | |
| parameters | Dictionary of parameters used to instantiate this Distribution. | 
| predictive_noise_variance | |
| reparameterization_type | Describes how samples from the distribution are reparameterized. Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED. | 
| submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). | 
| trainable_variables | Sequence of trainable variables owned by this module and its submodules. | 
| validate_args | Python bool indicating possibly expensive checks are enabled. | 
| variables | Sequence of variables owned by this module and its submodules. | 
| variational_inducing_observations_loc | |
| variational_inducing_observations_scale | |
Methods
batch_shape_tensor
batch_shape_tensor(
    name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
| Args | |
|---|---|
| name | name to give to the op | 
| Returns | |
|---|---|
| batch_shape | Tensor. | 
cdf
cdf(
    value, name='cdf', **kwargs
)
Cumulative distribution function.
Given random variable X, the cumulative distribution function cdf is:
cdf(x) := P[X <= x]
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| cdf | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
copy
copy(
    **override_parameters_kwargs
)
Creates a deep copy of the distribution.
| Args | |
|---|---|
| **override_parameters_kwargs | String/value dictionary of initialization arguments to override with new values. | 
| Returns | |
|---|---|
| distribution | A new instance of type(self)initialized from the union
of self.parameters and override_parameters_kwargs, i.e.,dict(self.parameters, **override_parameters_kwargs). | 
covariance
covariance(
    name='covariance', **kwargs
)
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-k, vector-valued distribution, it is calculated
as,
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E
denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g.,
matrix-valued, Wishart), Covariance shall return a (batch of) matrices
under some vectorization of the events, i.e.,
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices,
0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function
mapping indices of this distribution's event dimensions to indices of a
length-k' vector.
| Args | |
|---|---|
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| covariance | Floating-point Tensorwith shape[B1, ..., Bn, k', k']where the firstndimensions are batch coordinates andk' = reduce_prod(self.event_shape). | 
cross_entropy
cross_entropy(
    other, name='cross_entropy'
)
Computes the (Shannon) cross entropy.
Denote this distribution (self) by P and the other distribution by
Q. Assuming P, Q are absolutely continuous with respect to
one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon)
cross entropy is defined as:
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
other types with built-in registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL
| Args | |
|---|---|
| other | tfp.distributions.Distributioninstance. | 
| name | Python strprepended to names of ops created by this function. | 
| Returns | |
|---|---|
| cross_entropy | self.dtypeTensorwith shape[B1, ..., Bn]representingndifferent calculations of (Shannon) cross entropy. | 
entropy
entropy(
    name='entropy', **kwargs
)
Shannon entropy in nats.
event_shape_tensor
event_shape_tensor(
    name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
| Args | |
|---|---|
| name | name to give to the op | 
| Returns | |
|---|---|
| event_shape | Tensor. | 
experimental_default_event_space_bijector
experimental_default_event_space_bijector(
    *args, **kwargs
)
Bijector mapping the reals (R**n) to the event space of the distribution.
Distributions with continuous support may implement
_default_event_space_bijector which returns a subclass of
tfp.bijectors.Bijector that maps R**n to the distribution's event space.
For example, the default bijector for the Beta distribution
is tfp.bijectors.Sigmoid(), which maps the real line to [0, 1], the
support of the Beta distribution. The default bijector for the
CholeskyLKJ distribution is tfp.bijectors.CorrelationCholesky, which
maps R^(k * (k-1) // 2) to the submanifold of k x k lower triangular
matrices with ones along the diagonal.
The purpose of experimental_default_event_space_bijector is
to enable gradient descent in an unconstrained space for Variational
Inference and Hamiltonian Monte Carlo methods. Some effort has been made to
choose bijectors such that the tails of the distribution in the
unconstrained space are between Gaussian and Exponential.
For distributions with discrete event space, or for which TFP currently
lacks a suitable bijector, this function returns None.
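A minimal sketch of the idea, using the Beta example mentioned above:
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

beta = tfd.Beta(concentration1=2., concentration0=3.)
bijector = beta.experimental_default_event_space_bijector()
unconstrained = tf.constant(0.7)
constrained = bijector.forward(unconstrained)  # maps into (0, 1), Beta's support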
| Args | |
|---|---|
| *args | Passed to implementation _default_event_space_bijector. | 
| **kwargs | Passed to implementation _default_event_space_bijector. | 
| Returns | |
|---|---|
| event_space_bijector | Bijectorinstance orNone. | 
experimental_fit
@classmethod
experimental_fit(
    value, sample_ndims=1, validate_args=False, **init_kwargs
)
Instantiates a distribution that maximizes the likelihood of x.
| Args | |
|---|---|
| value | a Tensorvalid sample from this distribution family. | 
| sample_ndims | Positive intTensor number of leftmost dimensions ofvaluethat index i.i.d. samples.
Default value:1. | 
| validate_args | Python bool, defaultFalse. WhenTrue, distribution
parameters are checked for validity despite possibly degrading runtime
performance. WhenFalse, invalid inputs may silently render incorrect
outputs.
Default value:False. | 
| **init_kwargs | Additional keyword arguments passed through to cls.__init__. These take precedence in case of collision with the
fitted parameters; for example,tfd.Normal.experimental_fit([1., 1.], scale=20.)returns a Normal
distribution withscale=20.rather than the maximum likelihood
parameterscale=0.. | 
| Returns | |
|---|---|
| maximum_likelihood_instance | instance of clswith parameters that
maximize the likelihood ofvalue. | 
experimental_local_measure
experimental_local_measure(
    value, backward_compat=False, **kwargs
)
Returns a log probability density together with a TangentSpace.
A TangentSpace allows us to calculate the correct push-forward
density when we apply a transformation to a Distribution on
a strict submanifold of R^n (typically via a Bijector in the
TransformedDistribution subclass). The density correction uses
the basis of the tangent space.
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| backward_compat | boolspecifying whether to fall back to returningFullSpaceas the tangent space, and representing R^n with the standard
 basis. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| log_prob | a Tensorrepresenting the log probability density, of shapesample_shape(x) + self.batch_shapewith values of typeself.dtype. | 
| tangent_space | a TangentSpaceobject (by defaultFullSpace)
representing the tangent space to the manifold atvalue. | 
| Raises | |
|---|---|
| UnspecifiedTangentSpaceError | if backward_compat is False and the _experimental_tangent_space attribute has not been defined. | 
experimental_sample_and_log_prob
experimental_sample_and_log_prob(
    sample_shape=(), seed=None, name='sample_and_log_prob', **kwargs
)
Samples from this distribution and returns the log density of the sample.
The default implementation simply calls sample and log_prob:
def _sample_and_log_prob(self, sample_shape, seed, **kwargs):
  x = self.sample(sample_shape=sample_shape, seed=seed, **kwargs)
  return x, self.log_prob(x, **kwargs)
However, some subclasses may provide more efficient and/or numerically stable implementations.
| Args | |
|---|---|
| sample_shape | integer Tensordesired shape of samples to draw.
Default value:(). | 
| seed | PRNG seed; see tfp.random.sanitize_seedfor details.
Default value:None. | 
| name | name to give to the op.
Default value: 'sample_and_log_prob'. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| samples | a Tensor, or structure ofTensors, with prepended dimensionssample_shape. | 
| log_prob | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
get_marginal_distribution
get_marginal_distribution(
    index_points=None
)
Compute the marginal of this GP over function values at index_points.
| Args | |
|---|---|
| index_points | (nested) Tensor representing finite (batch of) vector(s) of points in the index set over which the GP is defined. Shape (or the shape of each nested component) has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims (or its corresponding nested component) and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn. | 
| Returns | |
|---|---|
| marginal | a Normal distribution with vector event shape. | 
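A hedged usage sketch (assuming the trained `vgp`, `dtype`, and imports from the Usage Examples above; `x_new_` is a hypothetical set of new index points):
x_new_ = np.linspace(-5., 5., 100, dtype=dtype)[..., np.newaxis]
marginal = vgp.get_marginal_distribution(index_points=x_new_)
# `marginal` is a multivariate normal over function values at `x_new_`.
marginal_mean = marginal.mean()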
is_scalar_batch
is_scalar_batch(
    name='is_scalar_batch'
)
Indicates that batch_shape == [].
| Args | |
|---|---|
| name | Python strprepended to names of ops created by this function. | 
| Returns | |
|---|---|
| is_scalar_batch | boolscalarTensor. | 
is_scalar_event
is_scalar_event(
    name='is_scalar_event'
)
Indicates that event_shape == [].
| Args | |
|---|---|
| name | Python strprepended to names of ops created by this function. | 
| Returns | |
|---|---|
| is_scalar_event | boolscalarTensor. | 
kl_divergence
kl_divergence(
    other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence.
Denote this distribution (self) by p and the other distribution by
q. Assuming p, q are absolutely continuous with respect to reference
measure r, the KL divergence is defined as:
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .]
denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
other types with built-in registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL
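For instance, a KL divergence between two of the registered multivariate normal types (a minimal sketch, independent of the VGP itself):
import tensorflow_probability as tfp
tfd = tfp.distributions

p = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 1.])
q = tfd.MultivariateNormalDiag(loc=[1., 0.], scale_diag=[2., 1.])
kl_pq = p.kl_divergence(q)  # scalar Tensor holding KL[p || q]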
| Args | |
|---|---|
| other | tfp.distributions.Distributioninstance. | 
| name | Python strprepended to names of ops created by this function. | 
| Returns | |
|---|---|
| kl_divergence | self.dtypeTensorwith shape[B1, ..., Bn]representingndifferent calculations of the Kullback-Leibler
divergence. | 
log_cdf
log_cdf(
    value, name='log_cdf', **kwargs
)
Log cumulative distribution function.
Given random variable X, the cumulative distribution function cdf is:
log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields
a more accurate answer than simply taking the logarithm of the cdf when
x << -1.
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| logcdf | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
log_prob
log_prob(
    value, name='log_prob', **kwargs
)
Log probability density/mass function.
Additional documentation from GaussianProcess:
kwargs:
- index_points: optional float Tensor representing a finite collection, or batch of collections, of points in the index set over which this GP is defined. The shape (or shape of each nested component) has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal self.kernel.feature_ndims (or its corresponding nested component) and e is the number of index points in each batch. Ultimately, this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn. If not specified, self.index_points is used. Default value: None.
- is_missing: optional bool Tensor of shape [..., e], where e is the number of index points in each batch. Represents a batch of Boolean masks. When is_missing is not None, the returned log-prob is for the marginal distribution, in which all dimensions for which is_missing is True have been marginalized out. The batch dimensions of is_missing must broadcast with the sample and batch dimensions of value and of this Distribution. Default value: None.
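A hedged sketch of the is_missing kwarg (assuming the `vgp`, `num_predictive_index_points_`, and imports from the Usage Examples above):
y_sample = vgp.sample()
# Marginalize out the first index point when evaluating the log density.
mask = np.zeros([num_predictive_index_points_], dtype=bool)
mask[0] = True
lp_marginal = vgp.log_prob(y_sample, is_missing=mask)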
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| log_prob | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
log_survival_function
log_survival_function(
    value, name='log_survival_function', **kwargs
)
Log survival function.
Given random variable X, the survival function is defined:
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log
survival function, which are more accurate than 1 - cdf(x) when x >> 1.
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| Tensorof shapesample_shape(x) + self.batch_shapewith values of typeself.dtype. | 
mean
mean(
    name='mean', **kwargs
)
Mean.
mode
mode(
    name='mode', **kwargs
)
Mode.
optimal_variational_posterior
@staticmethod
optimal_variational_posterior(
    kernel, inducing_index_points, observation_index_points, observations,
    observation_noise_variance, mean_fn=None, cholesky_fn=None, jitter=1e-06,
    name=None
)
Model selection for optimal variational hyperparameters.
Given the full training set (parameterized by observations and
observation_index_points), compute the optimal variational
location and scale for the VGP. This is based on the method suggested
in [Titsias, 2009][1].
| Args | |
|---|---|
| kernel | PositiveSemidefiniteKernel-like instance representing the
GP's covariance function. | 
| inducing_index_points | float Tensor of locations of inducing points in the index set. Shape has the form [b1, ..., bB, e2, f1, ..., fF], just like observation_index_points. The batch shape components needn't be identical to those of observation_index_points, but must be broadcast compatible with them. | 
| observation_index_points | float Tensor representing finite (batch of) vector(s) of points where observations are defined. Shape has the form [b1, ..., bB, e1, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e1 is the number (size) of index points in each batch (we denote it e1 to distinguish it from the number of inducing index points, denoted e2 below). | 
| observations | float Tensor representing collection, or batch of collections, of observations corresponding to observation_index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of observation_index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, observation_index_points, etc.). | 
| observation_noise_variance | floatTensorrepresenting the variance
of the noise in the Normal likelihood distribution of the model. May be
batched, in which case the batch shape must be broadcastable with the
shapes of all other batched parameters (kernel.batch_shape,index_points, etc.).
Default value:0. | 
| mean_fn | Python callablethat acts on index points to produce a (batch
of) vector(s) of mean values at those index points. Takes aTensorof
shape[b1, ..., bB, e, f1, ..., fF]and returns aTensorwhose shape
is (broadcastable with)[b1, ..., bB, e].
Default value:Noneimplies constant zero function. | 
| cholesky_fn | Callable which takes a single (batch) matrix argument and
returns a Cholesky-like lower triangular factor.  Default value: None,
in which casemake_cholesky_with_jitter_fnis used with thejitterparameter. | 
| jitter | floatscalarTensoradded to the diagonal of the covariance
matrix to ensure positive definiteness of the covariance matrix.
Default value:1e-6. | 
| name | Python strname prefixed to Ops created by this class.
Default value: "optimal_variational_posterior". | 
| Returns | |
|---|---|
| loc, scale: Tuple representing the variational location and scale. | 
| Raises | |
|---|---|
| ValueError | if mean_fn is not None and is not callable. | 
References
[1]: Titsias, M. "Variational Model Selection for Sparse Gaussian Process Regression", 2009. http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf
param_shapes
@classmethod
param_shapes(
    sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). (deprecated)
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution so that a particular shape is
returned for that instance's call to sample().
Subclasses should override class method _param_shapes.
| Args | |
|---|---|
| sample_shape | Tensoror python list/tuple. Desired shape of a call tosample(). | 
| name | name to prepend ops with. | 
| Returns | |
|---|---|
| dictof parameter name toTensorshapes. | 
param_static_shapes
@classmethod
param_static_shapes(
    sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. (deprecated)
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution so that a particular shape is
returned for that instance's call to sample(). Assumes that the sample's
shape is known statically.
Subclasses should override class method _param_shapes to return
constant-valued tensors when constant values are fed.
| Args | |
|---|---|
| sample_shape | TensorShapeor python list/tuple. Desired shape of a call
tosample(). | 
| Returns | |
|---|---|
| dictof parameter name toTensorShape. | 
| Raises | |
|---|---|
| ValueError | if sample_shapeis aTensorShapeand is not fully defined. | 
parameter_properties
@classmethod
parameter_properties(
    dtype=tf.float32, num_classes=None
)
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the distribution's
Tensor-valued constructor arguments.
Distribution subclasses are not required to implement
_parameter_properties, so this method may raise NotImplementedError.
Providing a _parameter_properties implementation enables several advanced
features, including:
- Distribution batch slicing (sliced_distribution = distribution[i:j]).
- Automatic inference of _batch_shapeand_batch_shape_tensor, which must otherwise be computed explicitly.
- Automatic instantiation of the distribution within TFP's internal property tests.
- Automatic construction of 'trainable' instances of the distribution using appropriate bijectors to avoid violating parameter constraints. This enables the distribution family to be used easily as a surrogate posterior in variational inference.
In the future, parameter property annotations may enable additional
functionality; for example, returning Distribution instances from
tf.vectorized_map.
| Args | |
|---|---|
| dtype | Optional float dtypeto assume for continuous-valued parameters.
Some constraining bijectors require advance knowledge of the dtype
because certain constants (e.g.,tfb.Softplus.low) must be
instantiated with the same dtype as the values to be transformed. | 
| num_classes | Optional intTensornumber of classes to assume when
inferring the shape of parameters for categorical-like distributions.
Otherwise ignored. | 
| Returns | |
|---|---|
| parameter_properties | A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances. | 
| Raises | |
|---|---|
| NotImplementedError | if the distribution class does not implement _parameter_properties. | 
posterior_predictive
posterior_predictive(
    observations, predictive_index_points=None, **kwargs
)
Return the posterior predictive distribution associated with this distribution.
Returns the posterior predictive distribution p(Y' | X, Y, X') where:
- X' is predictive_index_points
- X is self.index_points
- Y is observations
This is equivalent to using the
GaussianProcessRegressionModel.precompute_regression_model method.
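A hedged usage sketch (names here are illustrative, not from the examples above): assume a VGP `vgp_at_obs` whose index_points are the observation locations, observed targets `y_obs_` at those points, and new prediction locations `x_new_`:
gprm = vgp_at_obs.posterior_predictive(
    observations=y_obs_,                 # observations at vgp_at_obs.index_points
    predictive_index_points=x_new_)      # X', where we want predictions
predictive_mean = gprm.mean()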
| Args | |
|---|---|
| observations | float Tensor representing collection, or batch of collections, of observations corresponding to self.index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of self.index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). | 
| predictive_index_points | (nested) Tensor representing finite collection, or batch of collections, of points in the index set over which the GP is defined. Shape (or shape of each nested component) has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims (or its corresponding nested component) and e is the number (size) of predictive index points in each batch. The batch shape must be broadcastable with this distribution's batch_shape.
Default value: None. | 
| **kwargs | Any other keyword arguments to pass / override. | 
| Returns | |
|---|---|
| gprm | An instance of Distribution that represents the posterior predictive. | 
prob
prob(
    value, name='prob', **kwargs
)
Probability density/mass function.
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| prob | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
quantile
quantile(
    value, name='quantile', **kwargs
)
Quantile function. Aka 'inverse cdf' or 'percent point function'.
Given random variable X and p in [0, 1], the quantile is:
quantile(p) := x such that P[X <= x] == p
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| quantile | a Tensorof shapesample_shape(x) + self.batch_shapewith
values of typeself.dtype. | 
sample
sample(
    sample_shape=(), seed=None, name='sample', **kwargs
)
Generate samples of the specified shape.
Note that a call to sample() without arguments will generate a single
sample.
| Args | |
|---|---|
| sample_shape | 0D or 1D int32Tensor. Shape of the generated samples. | 
| seed | PRNG seed; see tfp.random.sanitize_seedfor details. | 
| name | name to give to the op. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| samples | a Tensorwith prepended dimensionssample_shape. | 
stddev
stddev(
    name='stddev', **kwargs
)
Standard deviation.
Standard deviation is defined as,
stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E
denotes expectation, and stddev.shape = batch_shape + event_shape.
| Args | |
|---|---|
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| stddev | Floating-point Tensorwith shape identical tobatch_shape + event_shape, i.e., the same shape asself.mean(). | 
surrogate_posterior_expected_log_likelihood
surrogate_posterior_expected_log_likelihood(
    observations,
    observation_index_points=None,
    log_likelihood_fn=None,
    quadrature_size=10,
    name=None
)
Compute the expected log likelihood term in the ELBO, using quadrature.
In variational inference, we're interested in maximizing the ELBO, or equivalently minimizing the negative ELBO, which looks like
  -ELBO = -E_{q(z)} log p(x | z) + KL(q(z) || p(z))
where q(z) is the variational, or "surrogate", posterior over latents z,
p(x | z) is the likelihood of some data x conditional on latents z,
and p(z) is the prior over z.
In the specific case of the VariationalGaussianProcess model, the
surrogate posterior q(z) is such that the above expectation factorizes
into a sum over 1-dimensional integrals of the log likelihood times a
certain Gaussian distribution (a 1-dimensional marginal of the full
variational GP). This means we can get a really good estimate of the
likelihood term using Gauss-Hermite quadrature, which is what this method
does. In the particular case of a Gaussian likelihood, we can actually get
an exact answer with 3 quadrature points (we could also work it out
analytically, but it's still exact and a bit simpler to just have one
implementation for all likelihoods).
The observation_index_points argument is optional and, if omitted, defaults
to the index_points of this class (i.e., the predictive locations).
Example: binary classification
  def log_prob(observations, f):
    # Parameterize a collection of independent Bernoulli random variables
    # with logits given by the passed-in function values `f`. Return the
    # joint log probability of the (binary) `observations` under that
    # model.
    berns = tfd.Independent(tfd.Bernoulli(logits=f),
                            reinterpreted_batch_ndims=1)
    return berns.log_prob(observations)
  # Compute the expected log likelihood using Gauss-Hermite quadrature.
  recon = vgp.surrogate_posterior_expected_log_likelihood(
      observations,
      observation_index_points,
      log_likelihood_fn=log_prob,
      quadrature_size=20)
  loss = -recon + vgp.surrogate_posterior_kl_divergence_prior()  # negative ELBO
| Args | |
|---|---|
| observations | observed data at the given observation_index_points; must be acceptable inputs to the given log_likelihood_fn callable. | 
| observation_index_points | float Tensor representing finite collection, or batch of collections, of points in the index set for which some data has been observed. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal self.kernel.feature_ndims, and e is the number (size) of index points in each batch. [b1, ..., bB, e] must be broadcastable with the shape of observations, and [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters of this VariationalGaussianProcess instance (kernel.batch_shape, index_points, etc.). | 
| log_likelihood_fn | A callable which takes a set of observed data and function values (i.e., events under this GP model at the observation_index_points) and returns the log likelihood of those data conditioned on those function values. Default value is None, which implies a Normal likelihood and 3 quadrature points. | 
| quadrature_size | Number of grid points to use in the Gauss-Hermite quadrature scheme. Default is 10 (arbitrarily), or 3 if log_likelihood_fn is None (implying a Gaussian likelihood, for which 3 points give an exact answer). | 
| name | Python str name prefixed to Ops created by this class.
Default value: "surrogate_posterior_expected_log_likelihood". | 
| Returns | |
|---|---|
| surrogate_posterior_expected_log_likelihood | the value of the expected log likelihood of the given observed data under the surrogate posterior model of latent function values and given likelihood model. | 
surrogate_posterior_kl_divergence_prior
surrogate_posterior_kl_divergence_prior(
    name=None
)
The KL divergence between the surrogate posterior and the GP prior.
| Args | |
|---|---|
| name | Python strname prefixed to Ops created by this class.
Default value: "surrogate_posterior_kl_divergence_prior". | 
| Returns | |
|---|---|
| kl_divergence | the value of the KL divergence between the surrogate posterior implied by this VariationalGaussianProcess instance and the prior, which is an unconditional GP with the same kernel and prior mean_fn. | 
survival_function
survival_function(
    value, name='survival_function', **kwargs
)
Survival function.
Given random variable X, the survival function is defined:
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| Tensorof shapesample_shape(x) + self.batch_shapewith values of typeself.dtype. | 
unnormalized_log_prob
unnormalized_log_prob(
    value, name='unnormalized_log_prob', **kwargs
)
Potentially unnormalized log probability density/mass function.
This function is similar to log_prob, but does not require that the
return value be normalized.  (Normalization here refers to the total
integral of probability being one, as it should be by definition for any
probability distribution.)  This is useful, for example, for distributions
where the normalization constant is difficult or expensive to compute.  By
default, this simply calls log_prob.
| Args | |
|---|---|
| value | floatordoubleTensor. | 
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| unnormalized_log_prob | a Tensorof shapesample_shape(x) + self.batch_shapewith values of typeself.dtype. | 
variance
variance(
    name='variance', **kwargs
)
Variance.
Variance is defined as,
Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E
denotes expectation, and Var.shape = batch_shape + event_shape.
| Args | |
|---|---|
| name | Python strprepended to names of ops created by this function. | 
| **kwargs | Named arguments forwarded to subclass implementation. | 
| Returns | |
|---|---|
| variance | Floating-point Tensorwith shape identical tobatch_shape + event_shape, i.e., the same shape asself.mean(). | 
variational_loss
variational_loss(
    observations,
    observation_index_points=None,
    log_likelihood_fn=None,
    quadrature_size=3,
    kl_weight=1.0,
    name='variational_loss'
)
Variational loss for the VGP.
Given observations and observation_index_points, compute the
negative variational lower bound as specified in [Hensman, 2013][1].
| Args | |
|---|---|
| observations | float Tensor representing collection, or batch of collections, of observations corresponding to observation_index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of observation_index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, observation_index_points, etc.). | 
| observation_index_points | float Tensor representing finite (batch of) vector(s) of points where observations are defined. Shape has the form [b1, ..., bB, e1, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e1 is the number (size) of index points in each batch (we denote it e1 to distinguish it from the number of inducing index points, denoted e2). If set to None, uses index_points as the origin for observations.
Default value: None. | 
| log_likelihood_fn | log likelihood function. | 
| quadrature_size | num quadrature grid points. | 
| kl_weight | Amount by which to scale the KL divergence loss between prior and posterior. Default value: 1. | 
| name | Python str name prefixed to Ops created by this class.
Default value: 'variational_loss'. | 
| Returns | |
|---|---|
| loss | Scalar tensor representing the negative variational lower bound.
Can be directly used in a tf.Optimizer. | 
References
[1]: Hensman, J., Lawrence, N. "Gaussian Processes for Big Data", 2013 https://arxiv.org/abs/1309.6835
with_name_scope
@classmethod
with_name_scope(
    method
)
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose
names included the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
# <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
# <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>
| Args | |
|---|---|
| method | The method to wrap. | 
| Returns | |
|---|---|
| The original method wrapped such that it enters the module's name scope. | 
__getitem__
__getitem__(
    slices
)
Slices the batch axes of this distribution, returning a new instance.
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]
x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]
| Args | |
|---|---|
| slices | slices from the [] operator | 
| Returns | |
|---|---|
| dist | A new tfd.Distribution instance with sliced parameters. | 
__iter__
__iter__()