LinearOperator that wraps a [batch] matrix.
```python
tf.linalg.LinearOperatorFullMatrix(
    matrix, is_non_singular=None, is_self_adjoint=None,
    is_positive_definite=None, is_square=None,
    name='LinearOperatorFullMatrix'
)
```
This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape `[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is an `M x N` matrix.
```python
# Create a 2 x 2 linear operator.
matrix = [[1., 2.], [3., 4.]]
operator = LinearOperatorFullMatrix(matrix)

operator.to_dense()
==> [[1., 2.]
     [3., 4.]]

operator.shape
==> [2, 2]

operator.log_abs_determinant()
==> scalar Tensor

x = ... Shape [2, 4] Tensor

operator.matmul(x)
==> Shape [2, 4] Tensor

# Create a [2, 3] batch of 4 x 4 linear operators.
matrix = tf.random.normal(shape=[2, 3, 4, 4])
operator = LinearOperatorFullMatrix(matrix)
```
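The single-matrix part of the example above can be run as follows (a minimal sketch; assumes TensorFlow 2.x, where the class is available as `tf.linalg.LinearOperatorFullMatrix`):

```python
import tensorflow as tf

# Wrap a fixed 2 x 2 matrix as a linear operator.
matrix = [[1., 2.], [3., 4.]]
operator = tf.linalg.LinearOperatorFullMatrix(matrix)

dense = operator.to_dense()              # the wrapped matrix, shape [2, 2]
logdet = operator.log_abs_determinant()  # scalar Tensor: log|det(A)|

x = tf.ones(shape=[2, 4])                # compatible right-hand side
y = operator.matmul(x)                   # shape [2, 4]
```

Here `det(A) = 1*4 - 2*3 = -2`, so `log_abs_determinant()` returns `log(2)`.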
Shape compatibility

This operator acts on [batch] matrices with compatible shape. `x` is a batch matrix with compatible shape for `matmul` and `solve` if

```
operator.shape = [B1,...,Bb] + [M, N],  with b >= 0
x.shape =        [B1,...,Bb] + [N, R],  with R >= 0.
```
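For instance, a batch of operators applies batch-wise to a compatible batch of matrices (a sketch, assuming TensorFlow 2.x):

```python
import tensorflow as tf

# A [2, 3] batch of 4 x 4 operators: operator.shape = [2, 3] + [4, 4].
matrix = tf.random.normal(shape=[2, 3, 4, 4])
operator = tf.linalg.LinearOperatorFullMatrix(matrix)

# x must share the batch shape [2, 3] and have N = 4 rows; R = 5 here.
x = tf.random.normal(shape=[2, 3, 4, 5])
y = operator.matmul(x)  # shape [2, 3, 4, 5] = [B1, B2] + [M, R]
```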
Performance

LinearOperatorFullMatrix has exactly the same performance as would be achieved by using standard TensorFlow matrix ops. Intelligent choices are made based on the following initialization hints:

- If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a Cholesky factorization is used for the determinant and solve.
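For example, supplying those hints on a symmetric positive-definite matrix (a sketch; the Cholesky path is internal, the flags merely authorize it):

```python
import tensorflow as tf

# Build a symmetric positive-definite matrix: A = B @ B^T + I.
b = tf.random.normal(shape=[3, 3])
spd = tf.matmul(b, b, adjoint_b=True) + tf.eye(3)

# With these hints (and a real dtype), determinant and solve may
# use a Cholesky factorization internally.
operator = tf.linalg.LinearOperatorFullMatrix(
    spd, is_self_adjoint=True, is_positive_definite=True)

rhs = tf.random.normal(shape=[3, 2])
solution = operator.solve(rhs)  # solves spd @ solution = rhs
logdet = operator.log_abs_determinant()
```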
In all cases, suppose `operator` is a `LinearOperatorFullMatrix` of shape `[M, N]`, and `x.shape = [N, R]`. Then

- `operator.matmul(x)` is `O(M * N * R)`.
- If `M = N`, `operator.solve(x)` is `O(N^3 * R)`.

If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and `[B1,...,Bb, N, R]`, every operation increases in complexity by `B1 * ... * Bb`.
Matrix property hints
This LinearOperator is initialized with boolean flags of the form `is_X`, for `X = non_singular, self_adjoint, positive_definite, square`.
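As a sketch, the flags are passed at construction and exposed back as properties of the operator:

```python
import tensorflow as tf

matrix = [[2., 0.], [0., 3.]]
operator = tf.linalg.LinearOperatorFullMatrix(
    matrix,
    is_non_singular=True,
    is_self_adjoint=True,
    is_positive_definite=True,
    is_square=True)

# Each hint is readable back as a property; None means "unknown".
assert operator.is_self_adjoint
assert operator.is_positive_definite
assert operator.is_non_singular
```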