shift is a numeric Tensor and scale is a LinearOperator.
If X is a scalar then the forward transformation is: scale * X + shift
where * denotes the scalar product.
Before being premultiplied by scale, the input X undergoes the following
transformation:
1. If there are no sample dims, we call X = tf.expand_dims(X, 0), i.e.,
   new_sample_shape = [1]. Otherwise do nothing.
2. The sample shape is flattened to have one dimension, i.e.,
   new_sample_shape = [n] where n = tf.reduce_prod(old_sample_shape).
3. The sample dim is cyclically rotated left by 1, i.e.,
   new_shape = [B1,...,Bb, k, n] where n is as above, k is the
   event_shape, and B1,...,Bb are the batch shapes for each of b batch
   dimensions.
(For more details see shape.make_batch_of_event_sample_matrices.)
The result of the above transformation is that X can be regarded as a batch
of matrices where each column is a draw from the distribution. After
premultiplying by scale, we take the inverse of this procedure. The input
Y also undergoes the same transformation before/after premultiplying by
inv(scale).
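As a rough sketch of the reshaping described above (the library's own
implementation lives in shape.make_batch_of_event_sample_matrices; the helper
name below is hypothetical), the three steps look roughly like this for an
input whose dimensions are ordered sample_shape + batch_shape + event_shape:

import tensorflow as tf

def as_batch_of_matrices(x, sample_ndims):
  # Hypothetical illustration only, not the library's internal helper.
  # 1. Ensure there is at least one sample dimension.
  if sample_ndims == 0:
    x = tf.expand_dims(x, 0)
    sample_ndims = 1
  # 2. Flatten all sample dimensions into a single dimension of size n.
  shape = tf.shape(x)
  n = tf.reduce_prod(shape[:sample_ndims])
  x = tf.reshape(x, tf.concat([[n], shape[sample_ndims:]], axis=0))
  # 3. Cyclically rotate left by one dimension:
  #    [n, B1, ..., Bb, k] -> [B1, ..., Bb, k, n].
  perm = tf.concat([tf.range(1, tf.rank(x)), [0]], axis=0)
  return tf.transpose(x, perm=perm)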
Example Use:
import tensorflow as tf
# Assumes `AffineLinearOperator` is in scope, e.g. imported from
# tfp.bijectors in TensorFlow Probability releases that ship it.

linalg = tf.linalg

x = [1., 2, 3]

shift = [-1., 0., 1]
diag = [1., 2, 3]
scale = linalg.LinearOperatorDiag(diag)
affine = AffineLinearOperator(shift, scale)
# In this case, `forward` is equivalent to:
# y = scale @ x + shift
y = affine.forward(x)  # [0., 4, 10]

shift = [2., 3, 1]
tril = [[1., 0, 0],
        [2, 1, 0],
        [3, 2, 1]]
scale = linalg.LinearOperatorLowerTriangular(tril)
affine = AffineLinearOperator(shift, scale)
# In this case, `forward` is equivalent to:
# np.squeeze(np.matmul(tril, np.expand_dims(x, -1)), -1) + shift
y = affine.forward(x)  # [3., 7, 11]
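The inverse applies inv(scale) to (y - shift); as a quick check with the
lower-triangular example above:

x_recovered = affine.inverse(y)  # [1., 2, 3]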
Args
shift
Floating-point Tensor.
scale
Subclass of LinearOperator. Represents the (batch) positive
definite matrix M in R^{k x k}.
validate_args
Python bool indicating whether arguments should be
checked for correctness.
name
Python str name given to ops managed by this object.
Raises
TypeError
if scale is not a LinearOperator.
TypeError
if shift.dtype does not match scale.dtype.
ValueError
if not scale.is_non_singular.
Attributes
dtype
dtype of Tensors transformable by this bijector.
forward_min_event_ndims
Returns the minimal number of dimensions bijector.forward operates on.
graph_parents
Returns this Bijector's graph_parents as a Python list.
inverse_min_event_ndims
Returns the minimal number of dimensions bijector.inverse operates on.
is_constant_jacobian
Returns true iff the Jacobian matrix is not a function of x.
name
Returns the string name of this Bijector.
scale
The scale LinearOperator in Y = scale @ X + shift.
shift
The shift Tensor in Y = scale @ X + shift.
validate_args
Returns True if Tensor arguments will be validated.
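As a quick illustration of these attributes (values assume the Example Use
bijectors above):

affine.forward_min_event_ndims  # 1: the bijector operates on vector events
affine.is_constant_jacobian     # True: the Jacobian does not depend on x
affine.shift                    # the shift Tensor passed at construction
affine.scale                    # the LinearOperator passed at construction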
forward_log_det_jacobian
Returns log(det(dY/dX))(X), where Y = g(X).
Args
x
Tensor. The input to the "forward" Jacobian determinant evaluation.
event_ndims
Number of dimensions in the probabilistic events being
transformed. Must be greater than or equal to
self.forward_min_event_ndims. The result is summed over the final
dimensions to produce a scalar Jacobian determinant for each event,
i.e., the result has x.shape.ndims - event_ndims dimensions.
name
The name to give this op.
Returns
Tensor, if this bijector is injective; if not injective, this is not
implemented.
Raises
TypeError
if self.dtype is specified and y.dtype is not
self.dtype.
NotImplementedError
if neither _forward_log_det_jacobian
nor {_inverse, _inverse_log_det_jacobian} are implemented, or
this is a non-injective bijector.
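A small usage sketch, assuming affine was built from the diagonal scale in
the Example Use section, i.e. AffineLinearOperator([-1., 0., 1],
linalg.LinearOperatorDiag([1., 2, 3])):

fldj = affine.forward_log_det_jacobian([1., 2, 3], event_ndims=1)
# Constant for an affine map: log|det(diag([1., 2, 3]))| = log 6 ≈ 1.792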
inverse_log_det_jacobian
Returns log(det(dX/dY))(Y), where X = g^{-1}(Y).
Note that forward_log_det_jacobian is the negative of this function,
evaluated at g^{-1}(y).
Args
y
Tensor. The input to the "inverse" Jacobian determinant evaluation.
event_ndims
Number of dimensions in the probabilistic events being
transformed. Must be greater than or equal to
self.inverse_min_event_ndims. The result is summed over the final
dimensions to produce a scalar Jacobian determinant for each event,
i.e., the result has y.shape.ndims - event_ndims dimensions.
name
The name to give this op.
Returns
Tensor, if this bijector is injective.
If not injective, returns the tuple of local log det
Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction
of g to the ith partition Di.
Raises
TypeError
if self.dtype is specified and y.dtype is not
self.dtype.
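And the corresponding inverse quantity, under the same assumptions as the
forward_log_det_jacobian sketch above:

ildj = affine.inverse_log_det_jacobian([0., 4, 10], event_ndims=1)
# -log|det(diag([1., 2, 3]))| = -log 6 ≈ -1.792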