
tfp.substrates.numpy.distributions.GaussianProcessRegressionModel

Posterior predictive distribution in a conjugate GP regression model.

Inherits From: GaussianProcess, Distribution

This class represents the distribution over function values at a set of points in some index set, conditioned on noisy observations at some other set of points. More specifically, we assume a Gaussian process prior, f ~ GP(m, k) with IID normal noise on observations of function values. In this model posterior inference can be done analytically. This Distribution is parameterized by

• the mean and covariance functions of the GP prior,
• the set of (noisy) observations and index points to which they correspond,
• the set of index points at which the resulting posterior predictive distribution over function values is defined,
• the observation noise variance,
• jitter, to compensate for numerical instability of Cholesky decomposition,

in addition to the usual params like validate_args and allow_nan_stats.

Mathematical Details

Gaussian process regression (GPR) assumes a Gaussian process (GP) prior and a normal likelihood as a generative model for data. Given GP mean function m, covariance kernel k, and observation noise variance v, we have

f ~ GP(m, k)

(y[i] | f, x[i]) ~ Normal(f(x[i]), v),   i = 1, ..., N   (iid)

where y[i] are the noisy observations of function values at points x[i].

In practice, f is an infinite object (e.g., a function over R^n) which can't be realized on a finite machine, but fortunately the marginal distribution over function values at a finite set of points is just a multivariate normal with mean and covariance given by the mean and covariance functions applied at our finite set of points (see [Rasmussen and Williams, 2006] for a more extensive discussion of these facts).

We spell out the generative model in detail below, but first, a digression on notation. In what follows we drop the indices on vectorial objects such as x[i], it being implied that we are generally considering finite collections of index points and corresponding function values and noisy observations thereof. Thus x should be considered to stand for a collection of index points (indeed, themselves often vectorial). Furthermore:

• f(x) refers to the collection of function values at the index points in the collection x,
• m(t) refers to the collection of values of the mean function at the index points in the collection t, and
• k(x, t) refers to the matrix whose entries are values of the kernel function k at all pairs of index points from x and t.

With these conventions in place, we may write

(f(x) | x) ~ MVN(m(x), k(x, x))

(y | f(x), x) ~ Normal(f(x), v)

When we condition on observed data y at the points x, we can derive the posterior distribution over function values f(x) at those points. We can then compute the posterior predictive distribution over function values f(t) at a new set of points t, conditional on those observed data.

(f(t) | t, x, y) ~ MVN(loc, cov)

where

loc = m(t) + k(t, x) @ inv(k(x, x) + v * I) @ (y - m(x))
cov = k(t, t) - k(t, x) @ inv(k(x, x) + v * I) @ k(x, t)

where I is the identity matrix of appropriate dimension. Finally, the distribution over noisy observations at the new set of points t is obtained by adding IID noise from Normal(0., observation_noise_variance).
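As a concrete illustration, below is a minimal NumPy sketch of these two formulas; the squared-exponential kernel and the helper names are illustrative assumptions, not part of this API:

import numpy as np

def rbf(a, b, length_scale=1.):
  # Squared-exponential kernel matrix k(a, b) for 1-D index points.
  return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)

def posterior_predictive_params(x, y, t, v, m=np.zeros_like):
  # loc = m(t) + k(t, x) @ inv(k(x, x) + v * I) @ (y - m(x))
  # cov = k(t, t) - k(t, x) @ inv(k(x, x) + v * I) @ k(x, t)
  kxx_plus_vI = rbf(x, x) + v * np.eye(len(x))
  ktx = rbf(t, x)
  loc = m(t) + ktx @ np.linalg.solve(kxx_plus_vI, y - m(x))
  cov = rbf(t, t) - ktx @ np.linalg.solve(kxx_plus_vI, ktx.T)
  return loc, cov

x = np.linspace(-1., 1., 20)
y = np.sin(3. * x) + .1 * np.random.randn(20)
t = np.linspace(-1., 1., 50)
loc, cov = posterior_predictive_params(x, y, t, v=.01)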

Examples

Draw joint samples from the posterior predictive distribution in a GP regression model

import numpy as np
from tensorflow_probability.python.internal.backend.numpy.compat import v2 as tf
import tensorflow_probability as tfp; tfp = tfp.substrates.numpy

tfb = tfp.bijectors
tfd = tfp.distributions
psd_kernels = tfp.math.psd_kernels

# Generate noisy observations from a known function at some random points.
observation_noise_variance = .5
f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)
observation_index_points = np.random.uniform(-1., 1., 50)[..., np.newaxis]
observations = (f(observation_index_points) +
                np.random.normal(0., np.sqrt(observation_noise_variance)))

index_points = np.linspace(-1., 1., 100)[..., np.newaxis]

kernel = psd_kernels.MaternFiveHalves()

gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance)

samples = gprm.sample(10)
# ==> 10 independently drawn, joint samples at `index_points`.

Above, we have used the kernel with default parameters, which are unlikely to be good. Instead, we can train the kernel hyperparameters on the data, as in the next example.

Optimize model parameters via maximum marginal likelihood

Here we learn the kernel parameters as well as the observation noise variance by gradient descent on the negative log marginal likelihood.

# Suppose we have some data from a known function. Note the index points in
# general have shape `[b1, ..., bB, f1, ..., fF]` (here we assume `F == 1`),
# so we need to explicitly consume the feature dimensions (just the last one
# here).
f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)

observation_index_points = np.random.uniform(-1., 1., 50)[..., np.newaxis]
observations = f(observation_index_points) + np.random.normal(0., .05, 50)

# Define a kernel with trainable parameters. Note we use TransformedVariable
# to apply a positivity constraint.
amplitude = tfp.util.TransformedVariable(
    1., tfb.Exp(), dtype=tf.float64, name='amplitude')
length_scale = tfp.util.TransformedVariable(
    1., tfb.Exp(), dtype=tf.float64, name='length_scale')
kernel = psd_kernels.ExponentiatedQuadratic(
    amplitude=amplitude,
    length_scale=length_scale)

observation_noise_variance = tfp.util.TransformedVariable(
    np.exp(-5), tfb.Exp(), name='observation_noise_variance')

# We'll use an unconditioned GP to train the kernel parameters.
gp = tfd.GaussianProcess(
    kernel=kernel,
    index_points=observation_index_points,
    observation_noise_variance=observation_noise_variance)

optimizer = tf.optimizers.Adam(learning_rate=.05, beta_1=.5, beta_2=.99)

@tf.function
def optimize():
  with tf.GradientTape() as tape:
    loss = -gp.log_prob(observations)
  grads = tape.gradient(loss, gp.trainable_variables)
  optimizer.apply_gradients(zip(grads, gp.trainable_variables))
  return loss

# We can construct the posterior at a new set of `index_points` using the same
# kernel (with the same parameters, which we'll optimize below).
index_points = np.linspace(-1., 1., 100)[..., np.newaxis]
gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance)

# First train the model, then draw and plot posterior samples.
for i in range(1000):
  neg_log_likelihood_ = optimize()
  if i % 100 == 0:
    print("Step {}: NLL = {}".format(i, neg_log_likelihood_))

print("Final NLL = {}".format(neg_log_likelihood_))

samples = gprm.sample(10).numpy()
# ==> 10 independently drawn, joint samples at `index_points`.

import matplotlib.pyplot as plt
plt.scatter(np.squeeze(observation_index_points), observations)
plt.plot(np.stack([index_points[:, 0]]*10).T, samples.T, c='r', alpha=.2)
Marginalization of model hyperparameters

Here we use TensorFlow Probability's MCMC functionality to perform marginalization of the model hyperparameters: kernel params as well as observation noise variance.

f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)
observation_index_points = np.random.uniform(-1., 1., 25)[..., np.newaxis]
observations = np.random.normal(f(observation_index_points), .05)

gaussian_process_model = tfd.JointDistributionSequential([
    tfd.LogNormal(np.float64(0.), np.float64(1.)),
    tfd.LogNormal(np.float64(0.), np.float64(1.)),
    tfd.LogNormal(np.float64(0.), np.float64(1.)),
    lambda noise_variance, length_scale, amplitude: tfd.GaussianProcess(
        kernel=psd_kernels.ExponentiatedQuadratic(amplitude, length_scale),
        index_points=observation_index_points,
        observation_noise_variance=noise_variance)
])

initial_chain_states = [
    1e-1 * tf.ones([], dtype=np.float64, name='init_amplitude'),
    1e-1 * tf.ones([], dtype=np.float64, name='init_length_scale'),
    1e-1 * tf.ones([], dtype=np.float64, name='init_obs_noise_variance')
]

# Since HMC operates over unconstrained space, we need to transform the
# samples so they live in real-space.
unconstraining_bijectors = [
    tfp.bijectors.Softplus(),
    tfp.bijectors.Softplus(),
    tfp.bijectors.Softplus(),
]

def unnormalized_log_posterior(*args):
  return gaussian_process_model.log_prob(*args, x=observations)

num_results = 200
@tf.function
def run_mcmc():
  return tfp.mcmc.sample_chain(
      num_results=num_results,
      num_burnin_steps=500,
      num_steps_between_results=3,
      current_state=initial_chain_states,
      kernel=tfp.mcmc.TransformedTransitionKernel(
          inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
              target_log_prob_fn=unnormalized_log_posterior,
              step_size=[np.float64(.15)],
              num_leapfrog_steps=3),
          bijector=unconstraining_bijectors),
      trace_fn=lambda _, pkr: pkr.inner_results.is_accepted)

[
    amplitudes,
    length_scales,
    observation_noise_variances
], is_accepted = run_mcmc()

print("Acceptance rate: {}".format(np.mean(is_accepted)))

# Now we can sample from the posterior predictive distribution at a new set
# of index points.
index_points = np.linspace(-1., 1., 200)[..., np.newaxis]
gprm = tfd.GaussianProcessRegressionModel(
    # Batch of `num_results` kernels parameterized by the MCMC samples.
    kernel=psd_kernels.ExponentiatedQuadratic(amplitudes, length_scales),
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variances)
samples = gprm.sample()

# Plot posterior samples and their mean, target function, and observations.
plt.plot(np.stack([index_points[:, 0]]*num_results).T,
         samples.numpy().T,
         c='r',
         alpha=.01)
plt.plot(index_points[:, 0], np.mean(samples, axis=0), c='k')
plt.plot(index_points[:, 0], f(index_points))
plt.scatter(observation_index_points[:, 0], observations)

References

[1] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

Args
kernel PositiveSemidefiniteKernel-like instance representing the GP's covariance function.
index_points float Tensor representing finite collection, or batch of collections, of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn.
observation_index_points float Tensor representing finite collection, or batch of collections, of points in the index set for which some data has been observed. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims, and e is the number (size) of index points in each batch. [b1, ..., bB, e] must be broadcastable with the shape of observations, and [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
observations float Tensor representing collection, or batch of collections, of observations corresponding to observation_index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of observation_index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
observation_noise_variance float Tensor representing the variance of the noise in the Normal likelihood distribution of the model. May be batched, in which case the batch shape must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). Default value: 0.
predictive_noise_variance float Tensor representing the variance in the posterior predictive model. If None, we simply re-use observation_noise_variance for the posterior predictive noise. If set explicitly, however, we use this value. This allows us, for example, to omit predictive noise variance (by setting this to zero) to obtain noiseless posterior predictions of function values, conditioned on noisy observations; see the sketch after this parameter list.
mean_fn Python callable that acts on index_points to produce a collection, or batch of collections, of mean values at index_points. Takes a Tensor of shape [b1, ..., bB, f1, ..., fF] and returns a Tensor whose shape is broadcastable with [b1, ..., bB]. Default value: None implies the constant zero function.
cholesky_fn Callable which takes a single (batch) matrix argument and returns a Cholesky-like lower triangular factor. Default value: None, in which case make_cholesky_with_jitter_fn is used with the jitter parameter.
jitter float scalar Tensor added to the diagonal of the covariance matrix to ensure positive definiteness of the covariance matrix. This argument is ignored if cholesky_fn is set. Default value: 1e-6.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs. Default value: False.
allow_nan_stats Python bool. When True, statistics (e.g., mean, mode, variance) use the value NaN to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined. Default value: False.
name Python str name prefixed to Ops created by this class. Default value: 'GaussianProcessRegressionModel'.
_conditional_kernel Internal parameter -- do not use.
_conditional_mean_fn Internal parameter -- do not use.
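For illustration, a minimal sketch of the predictive_noise_variance behavior, reusing the variables from the first example above (setting it to zero is the illustrative part):

# Posterior over latent function values f(t): zero predictive noise,
# but still conditioned on noisy observations.
noiseless_gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance,
    predictive_noise_variance=0.)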

Raises
ValueError if either

• only one of observations and observation_index_points is given, or
• mean_fn is not None and not callable.

Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
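For example, using the Cauchy distribution, whose mean is undefined:

cauchy = tfd.Cauchy(loc=0., scale=1.)
cauchy.mean()   # ==> nan, since `allow_nan_stats` defaults to `True` here.

strict_cauchy = tfd.Cauchy(loc=0., scale=1., allow_nan_stats=False)
# strict_cauchy.mean() raises instead, since the mean is undefined.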

batch_shape Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

cholesky_fn

dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

experimental_shard_axis_names The list or structure of lists of active shard axis names.
index_points

jitter

kernel

marginal_fn

mean_fn

name Name prepended to all ops created by this Distribution.
observation_index_points

observation_noise_variance

observations

parameters Dictionary of parameters used to instantiate this Distribution.
predictive_noise_variance

reparameterization_type Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

trainable_variables

validate_args Python bool indicating possibly expensive checks are enabled.
variables

Methods

batch_shape_tensor


Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args
name name to give to the op

Returns
batch_shape Tensor.

cdf


Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy


Creates a deep copy of the distribution.

Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.

Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
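For example, a small sketch reusing the gprm instance from the examples above:

# Same model, but with a larger jitter on the covariance diagonal.
gprm2 = gprm.copy(jitter=1e-4)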

covariance


Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).

cross_entropy


Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Other types with built-in cross-entropy registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL, Normal

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

entropy


Shannon entropy in nats.

event_shape_tensor


Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args
name name to give to the op

Returns
event_shape Tensor.

experimental_default_event_space_bijector


Bijector mapping the reals (R**n) to the event space of the distribution.

Distributions with continuous support may implement _default_event_space_bijector which returns a subclass of tfp.bijectors.Bijector that maps R**n to the distribution's event space. For example, the default bijector for the Beta distribution is tfp.bijectors.Sigmoid(), which maps the real line to [0, 1], the support of the Beta distribution. The default bijector for the CholeskyLKJ distribution is tfp.bijectors.CorrelationCholesky, which maps R^(k * (k-1) // 2) to the submanifold of k x k lower triangular matrices with ones along the diagonal.

The purpose of experimental_default_event_space_bijector is to enable gradient descent in an unconstrained space for Variational Inference and Hamiltonian Monte Carlo methods. Some effort has been made to choose bijectors such that the tails of the distribution in the unconstrained space are between Gaussian and Exponential.

For distributions with discrete event space, or for which TFP currently lacks a suitable bijector, this function returns None.
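A brief sketch of the Beta example mentioned above:

beta = tfd.Beta(concentration1=2., concentration0=3.)
bij = beta.experimental_default_event_space_bijector()
# `bij` acts like `tfb.Sigmoid()`, mapping the real line into (0, 1):
x = bij(0.)  # ==> 0.5, a valid point in the Beta support.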

Args
*args Passed to implementation _default_event_space_bijector.
**kwargs Passed to implementation _default_event_space_bijector.

Returns
event_space_bijector Bijector instance or None.

experimental_fit


Instantiates a distribution that maximizes the likelihood of x.

Args
value a Tensor valid sample from this distribution family.
sample_ndims Positive int Tensor number of leftmost dimensions of value that index i.i.d. samples. Default value: 1.
validate_args Python bool, default False. When True, distribution parameters are checked for validity despite possibly degrading runtime performance. When False, invalid inputs may silently render incorrect outputs. Default value: False.
**init_kwargs Additional keyword arguments passed through to cls.__init__. These take precedence in case of collision with the fitted parameters; for example, tfd.Normal.experimental_fit([1., 1.], scale=20.) returns a Normal distribution with scale=20. rather than the maximum-likelihood parameter scale=0.

Returns
maximum_likelihood_instance instance of cls with parameters that maximize the likelihood of value.

experimental_local_measure


Returns a log probability density together with a TangentSpace.

A TangentSpace allows us to calculate the correct push-forward density when we apply a transformation to a Distribution on a strict submanifold of R^n (typically via a Bijector in the TransformedDistribution subclass). The density correction uses the basis of the tangent space.

Args
value float or double Tensor.
backward_compat bool specifying whether to fall back to returning FullSpace as the tangent space, and representing R^n with the standard basis.
**kwargs Named arguments forwarded to subclass implementation.

Returns
log_prob a Tensor representing the log probability density, of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
tangent_space a TangentSpace object (by default FullSpace) representing the tangent space to the manifold at value.

Raises
UnspecifiedTangentSpaceError if backward_compat is False and the _experimental_tangent_space attribute has not been defined.

experimental_sample_and_log_prob


Samples from this distribution and returns the log density of the sample.

The default implementation simply calls sample and log_prob:

def _sample_and_log_prob(self, sample_shape, seed, **kwargs):
  x = self.sample(sample_shape=sample_shape, seed=seed, **kwargs)
  return x, self.log_prob(x, **kwargs)

However, some subclasses may provide more efficient and/or numerically stable implementations.

Args
sample_shape integer Tensor desired shape of samples to draw. Default value: ().
seed PRNG seed; see tfp.random.sanitize_seed for details. Default value: None.
name name to give to the op. Default value: 'sample_and_log_prob'.
**kwargs Named arguments forwarded to subclass implementation.

Returns
samples a Tensor, or structure of Tensors, with prepended dimensions sample_shape.
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

get_marginal_distribution


Compute the marginal of this GP over function values at index_points.

Args
index_points float Tensor representing finite (batch of) vector(s) of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn.

Returns
marginal a Normal or MultivariateNormalLinearOperator distribution, according to whether index_points consists of one or many index points, respectively.
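For instance, a sketch reusing gprm and index_points from the examples above; the resulting marginal should agree with the loc and cov formulas in the Mathematical Details section:

marginal = gprm.get_marginal_distribution(index_points)
# ==> A MultivariateNormalLinearOperator over the index points;
# `marginal.mean()` and `marginal.covariance()` give `loc` and `cov`.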

is_scalar_batch


Indicates that batch_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_batch bool scalar Tensor.

is_scalar_event


Indicates that event_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_event bool scalar Tensor.

kl_divergence


Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Other types with built-in KL-divergence registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL, Normal

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

log_cdf


Log cumulative distribution function.

Given random variable X, the log cumulative distribution function is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob


Log probability density/mass function.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_survival_function


Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

Mean.

mode

Mode.

param_shapes


Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.

Returns
dict of parameter name to Tensor shapes.

param_static_shapes


param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().

Returns
dict of parameter name to TensorShape.

Raises
ValueError if sample_shape is a TensorShape and is not fully defined.

parameter_properties


Returns a dict mapping constructor arg names to property annotations.

This dict should include an entry for each of the distribution's Tensor-valued constructor arguments.

Distribution subclasses are not required to implement _parameter_properties, so this method may raise NotImplementedError. Providing a _parameter_properties implementation enables several advanced features, including:

• Distribution batch slicing (sliced_distribution = distribution[i:j]).
• Automatic inference of _batch_shape and _batch_shape_tensor, which must otherwise be computed explicitly.
• Automatic instantiation of the distribution within TFP's internal property tests.
• Automatic construction of 'trainable' instances of the distribution using appropriate bijectors to avoid violating parameter constraints. This enables the distribution family to be used easily as a surrogate posterior in variational inference.

In the future, parameter property annotations may enable additional functionality; for example, returning Distribution instances from tf.vectorized_map.

Args
dtype Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low) must be instantiated with the same dtype as the values to be transformed.
num_classes Optional int Tensor number of classes to assume when inferring the shape of parameters for categorical-like distributions. Otherwise ignored.

Returns
parameter_properties A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances.

Raises
NotImplementedError if the distribution class does not implement _parameter_properties.

posterior_predictive


Return the posterior predictive distribution associated with this distribution.

Returns the posterior predictive distribution p(Y' | X, Y, X') where:

• X' is predictive_index_points,
• X is self.index_points, and
• Y is observations.

This is equivalent to using the GaussianProcessRegressionModel.precompute_regression_model method.
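A minimal sketch, reusing gprm from the first example above; note that the observations passed here correspond to self.index_points (not observation_index_points), and the names below are illustrative:

new_observations = np.random.normal(size=100)  # Values at the 100 `index_points`.
predictive_index_points = np.linspace(1., 2., 50)[..., np.newaxis]
posterior = gprm.posterior_predictive(
    observations=new_observations,
    predictive_index_points=predictive_index_points)
samples = posterior.sample(5)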

Args
observations float Tensor representing collection, or batch of collections, of observations corresponding to self.index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of self.index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.).
predictive_index_points float Tensor representing finite collection, or batch of collections, of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of predictive index points in each batch. The batch shape must be broadcastable with this distribution's batch_shape. Default value: None.
**kwargs Any other keyword arguments to pass / override.

Returns
gprm An instance of Distribution that represents the posterior predictive.

precompute_regression_model


Returns a GaussianProcessRegressionModel with precomputed quantities.

This differs from the constructor by precomputing quantities associated with observations in a non-tape-safe way. index_points is the only parameter that is allowed to vary (i.e., to be a Variable or to change after initialization).

Specifically:

• We make observation_index_points and observations mandatory parameters.
• We precompute kernel(observation_index_points, observation_index_points) along with any other associated quantities relating to the kernel, observations and observation_index_points.

A typical use case would be optimizing kernel hyperparameters for a GaussianProcess, and then computing the posterior predictive with respect to those optimized hyperparameters and observation/index-point pairs, as in the sketch below.
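A minimal sketch of that use case, assuming the trained kernel and data from the optimization example above:

gprm_fixed = tfd.GaussianProcessRegressionModel.precompute_regression_model(
    kernel=kernel,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance,
    index_points=index_points)
# The expensive quantities tied to the observations are computed once, up
# front; `index_points` may still change (e.g., be a `tf.Variable`).
samples = gprm_fixed.sample(10)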

Args
kernel PositiveSemidefiniteKernel-like instance representing the GP's covariance function.
observation_index_points float Tensor representing finite collection, or batch of collections, of points in the index set for which some data has been observed. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims, and e is the number (size) of index points in each batch. [b1, ..., bB, e] must be broadcastable with the shape of observations, and [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
observations float Tensor representing collection, or batch of collections, of observations corresponding to observation_index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of observation_index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
index_points float Tensor representing finite collection, or batch of collections, of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn.
observation_noise_variance float Tensor representing the variance of the noise in the Normal likelihood distribution of the model. May be batched, in which case the batch shape must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). Default value: 0.
predictive_noise_variance float Tensor representing the variance in the posterior predictive model. If None, we simply re-use observation_noise_variance for the posterior predictive noise. If set explicitly, however, we use this value. This allows us, for example, to omit predictive noise variance (by setting this to zero) to obtain noiseless posterior predictions of function values, conditioned on noisy observations.
mean_fn Python callable that acts on index_points to produce a collection, or batch of collections, of mean values at index_points. Takes a Tensor of shape [b1, ..., bB, f1, ..., fF] and returns a Tensor whose shape is broadcastable with [b1, ..., bB]. Default value: None implies the constant zero function.
cholesky_fn Callable which takes a single (batch) matrix argument and returns a Cholesky-like lower triangular factor. Default value: None, in which case make_cholesky_with_jitter_fn is used with the jitter parameter.
jitter float scalar Tensor added to the diagonal of the covariance matrix to ensure positive definiteness of the covariance matrix. Default value: 1e-6.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs. Default value: False.
allow_nan_stats Python bool. When True, statistics (e.g., mean, mode, variance) use the value NaN to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined. Default value: False.
name Python str name prefixed to Ops created by this class. Default value: 'PrecomputedGaussianProcessRegressionModel'.

Returns
An instance of GaussianProcessRegressionModel with precomputed quantities associated with observations.

prob


Probability density/mass function.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile


Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample


Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed PRNG seed; see tfp.random.sanitize_seed for details.
name name to give to the op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
samples a Tensor with prepended dimensions sample_shape.

stddev


Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function


Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

unnormalized_log_prob


Potentially unnormalized log probability density/mass function.

This function is similar to log_prob, but does not require that the return value be normalized. (Normalization here refers to the total integral of probability being one, as it should be by definition for any probability distribution.) This is useful, for example, for distributions where the normalization constant is difficult or expensive to compute. By default, this simply calls log_prob.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
unnormalized_log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance


Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

__getitem__


Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.stateless_normal([5, 3, 2, 2], seed=[1, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.stateless_normal([4, 1, 3, 1], seed=[2, 3])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args
slices slices from the [] operator

Returns
dist A new tfd.Distribution instance with sliced parameters.
