tf.contrib.distributions.matrix_diag_transform


Transform the diagonal of a [batch-]matrix, leaving the rest of the matrix unchanged.
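
For intuition, a minimal sketch (assuming TensorFlow 1.x, where tf.contrib is available) showing that only the diagonal entries are changed:

import tensorflow as tf

matrix = tf.constant([[-1., 2.],
                      [ 3., -4.]])
# Softplus is applied only to the diagonal entries (-1. and -4.); the
# off-diagonal entries (2. and 3.) pass through unchanged.
transformed = tf.contrib.distributions.matrix_diag_transform(
    matrix, transform=tf.nn.softplus)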

Create a trainable covariance defined by a Cholesky factor:

# Transform a network layer's output into a batch of 2 x 2 matrices.
matrix_values = tf.contrib.layers.fully_connected(activations, 4)
matrix = tf.reshape(matrix_values, (batch_size, 2, 2))

# Make the diagonal positive. If the upper triangle were zero, this would be a
# valid Cholesky factor.
chol = tf.contrib.distributions.matrix_diag_transform(
    matrix, transform=tf.nn.softplus)

# LinearOperatorLowerTriangular ignores the upper triangle.
operator = tf.linalg.LinearOperatorLowerTriangular(chol)
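
To make concrete what the operator represents, a sketch (continuing from chol above; not part of the original example) that recovers the covariance implied by the Cholesky factor:

# Zero out the upper triangle, which the operator ignores, to get the
# lower-triangular Cholesky factor L explicitly.
lower = tf.linalg.band_part(chol, -1, 0)
# The covariance parameterized by this factor is L @ L^T.
covariance = tf.matmul(lower, lower, adjoint_b=True)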

Example of heteroskedastic 2-D linear regression:

import tensorflow_probability as tfp

tfd = tfp.distributions

# Get a trainable Cholesky factor.
matrix_values = tf.contrib.layers.fully_connected(activations, 4)
matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
chol = tf.contrib.distributions.matrix_diag_transform(
    matrix, transform=tf.nn.softplus)

# Get a trainable mean.
mu = tf.contrib.layers.fully_connected(activations, 2)

# This is a fully trainable multivariate normal!
dist = tfd.MultivariateNormalTriL(mu, chol)

# Negative log-likelihood loss. Minimizing this "trains" mu and chol, so that
# dist becomes a distribution predicting labels as multivariate Gaussians.
loss = -tf.reduce_mean(dist.log_prob(labels))
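
One possible way to minimize this loss, in the same graph-mode style as the examples above (the optimizer and learning rate here are illustrative choices, not part of the documented API):

optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
train_op = optimizer.minimize(loss)  # Updates the weights that produce mu and chol.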

Args:
  matrix: Rank-R Tensor, R >= 2, where the last two dimensions are equal.
  transform: Element-wise function mapping Tensors to Tensors. Applied to the
    diagonal of matrix. If None, matrix is returned unchanged. Defaults to
    None.
  name: A name to give created ops. Defaults to "matrix_diag_transform".

Returns:
  A Tensor with the same shape and dtype as matrix.
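
As a quick illustration of the defaults above (batch_size is just a placeholder, as in the earlier examples): with transform=None the input comes back unchanged, and the result always has the input's shape and dtype.

m = tf.random_normal([batch_size, 2, 2])
unchanged = tf.contrib.distributions.matrix_diag_transform(m, transform=None)
# unchanged has the same shape ([batch_size, 2, 2]) and dtype as m.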