Multiply matrix "a" by matrix "b".
tf.compat.v1.sparse_matmul(
a: _atypes.TensorFuzzingAnnotation[TV_SparseMatMul_Ta],
b: _atypes.TensorFuzzingAnnotation[TV_SparseMatMul_Tb],
transpose_a: bool = False,
transpose_b: bool = False,
a_is_sparse: bool = False,
b_is_sparse: bool = False,
name=None
) -> _atypes.TensorFuzzingAnnotation[_atypes.Float32]
The inputs must be two-dimensional matrices and the inner dimension of "a" must
match the outer dimension of "b". Both "a" and "b" must be Tensors, not
SparseTensors. This op is optimized for the case where at least one of "a" or
"b" is sparse, in the sense that they have a large proportion of zero values.
The breakeven for using this versus a dense matrix multiply on one platform was
30% zero values in the sparse matrix.
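For illustration, a minimal usage sketch (not part of the original reference); the shapes, the sparsity level, and the choice of a_is_sparse are arbitrary:

```python
import numpy as np
import tensorflow as tf

# Illustrative only: "a" is mostly zeros, so a_is_sparse=True requests the
# optimized code path. Both inputs are ordinary dense Tensors.
a = np.random.rand(1024, 256).astype(np.float32)
a[a < 0.7] = 0.0  # roughly 70% zeros, well past the ~30% breakeven noted above
b = np.random.rand(256, 128).astype(np.float32)

product = tf.compat.v1.sparse_matmul(a, b, a_is_sparse=True)  # float32, shape [1024, 128]
```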
The gradient computation of this operation will only take advantage of sparsity in the input gradient when that gradient comes from a Relu.
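A hedged sketch of that case (not from the original reference): when the op's output feeds a Relu, the gradient arriving at this op during backprop is produced by the Relu's backward pass and is zero wherever the Relu was inactive. tf.gradients requires graph mode, hence the disable_eager_execution call.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # tf.gradients needs graph mode

a = tf.compat.v1.placeholder(tf.float32, [512, 256])
b = tf.compat.v1.placeholder(tf.float32, [256, 128])

# The Relu after sparse_matmul makes the incoming gradient sparse during
# backprop, which is the situation described above.
y = tf.nn.relu(tf.compat.v1.sparse_matmul(a, b, a_is_sparse=True))
grad_a, grad_b = tf.gradients(tf.reduce_sum(y), [a, b])
```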
Returns
---
A Tensor of type float32.