tf.raw_ops.MatrixSolveLs

Solves one or more linear least-squares problems.

Compat aliases for migration

See Migration guide for more details.

tf.compat.v1.raw_ops.MatrixSolveLs

matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form real or complex matrices of size [M, N]. rhs is a tensor of the same type as matrix and shape [..., M, K]. The output is a tensor of shape [..., N, K] where each output matrix solves each of the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.
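
For concreteness, here is a minimal usage sketch (the shapes and values are made up) that solves a single 3x2 least-squares problem with the op:

```python
import tensorflow as tf

matrix = tf.constant([[1., 0.],
                      [1., 1.],
                      [1., 2.]], dtype=tf.float64)       # [M=3, N=2]
rhs = tf.constant([[6.], [0.], [0.]], dtype=tf.float64)  # [M=3, K=1]

# l2_regularizer must be a scalar float64 tensor.
x = tf.raw_ops.MatrixSolveLs(
    matrix=matrix,
    rhs=rhs,
    l2_regularizer=tf.constant(0.0, dtype=tf.float64),
    fast=True)
print(x.shape)  # (2, 1), i.e. [..., N, K]
```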

We use the following notation for (complex) matrix and right-hand sides in the batch:

matrix = \(A \in \mathbb{C}^{m \times n}\), rhs = \(B \in \mathbb{C}^{m \times k}\), output = \(X \in \mathbb{C}^{n \times k}\), l2_regularizer = \(\lambda \in \mathbb{R}\).

If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^H A + \lambda I)^{-1} A^H B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k}} \|A Z - B\|_F^2 + \lambda \|Z\|_F^2\). If \(m < n\) then output is computed as \(X = A^H (A A^H + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k}} \|Z\|_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) < \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.
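
As a sanity check on the \(m \ge n\) formula above, the following sketch (assumed small random inputs) compares the fast-path result against the normal-equations expression \(X = (A^H A + \lambda I)^{-1} A^H B\) built from standard tf.linalg ops:

```python
import tensorflow as tf

tf.random.set_seed(0)
m, n, k, lam = 5, 3, 2, 0.1
A = tf.random.normal([m, n], dtype=tf.float64)
B = tf.random.normal([m, k], dtype=tf.float64)

x_op = tf.raw_ops.MatrixSolveLs(
    matrix=A, rhs=B,
    l2_regularizer=tf.constant(lam, dtype=tf.float64),
    fast=True)

# Normal-equations reference: (A^H A + lambda I)^{-1} A^H B.
gram = tf.linalg.matmul(A, A, adjoint_a=True) + lam * tf.eye(n, dtype=tf.float64)
x_ref = tf.linalg.solve(gram, tf.linalg.matmul(A, B, adjoint_a=True))

print(tf.reduce_max(tf.abs(x_op - x_ref)).numpy())  # ~0 up to rounding error
```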

If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when A is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False, then l2_regularizer is ignored.
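
Below is a sketch of the slow path on a deliberately rank-deficient matrix, where the fast path is unreliable; the expectation (an assumption of this example) is that fast=False agrees with the minimum-norm solution returned by np.linalg.lstsq, consistent with the numpy compatibility note at the end of this page:

```python
import numpy as np
import tensorflow as tf

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])          # rank 1, so the fast path is not reliable
B = np.array([[1.], [2.], [3.]])

x_tf = tf.raw_ops.MatrixSolveLs(
    matrix=tf.constant(A), rhs=tf.constant(B),
    l2_regularizer=tf.constant(0.0, dtype=tf.float64),  # ignored when fast=False
    fast=False)

x_np, *_ = np.linalg.lstsq(A, B, rcond=None)  # also the minimum-norm solution
print(np.allclose(x_tf.numpy(), x_np))        # True
```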

Args

matrix: A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is [..., M, N].
rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K].
l2_regularizer: A Tensor of type float64. Scalar tensor.
fast: An optional bool. Defaults to True.
name: A name for the operation (optional).

Returns

A Tensor. Has the same type as matrix.

numpy compatibility

Equivalent to np.linalg.lstsq