tf.sparse.sparse_dense_matmul(
    sp_a, b, adjoint_a=False, adjoint_b=False, name=None
)

Multiply SparseTensor (or dense Matrix) "A" (of rank 2) by dense matrix (or SparseTensor) "B". Please note that one and only one of the inputs MUST be a SparseTensor and the other MUST be a dense matrix.
The following input format is recommended (but not required) for optimal performance:

- If adjoint_a == false: A should be sorted in lexicographically increasing order. Use sparse.reorder if you're not sure.
- If adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).
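If you are unsure whether sp_a's indices are sorted, tf.sparse.reorder puts them into canonical row-major order. A minimal sketch with made-up values:

```python
import tensorflow as tf

# Indices deliberately out of lexicographic (row-major) order.
sp = tf.sparse.SparseTensor(indices=[[1, 0], [0, 2]],
                            values=[2.0, 1.0],
                            dense_shape=[3, 4])

# tf.sparse.reorder sorts the indices and permutes the values to match.
sp_sorted = tf.sparse.reorder(sp)
# sp_sorted.indices -> [[0, 2], [1, 0]]; sp_sorted.values -> [1.0, 2.0]
```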
|Args||
|sp_a|SparseTensor (or dense Matrix) A, of rank 2.|
|b|dense Matrix (or SparseTensor) B, with the same dtype as sp_a.|
|adjoint_a|Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).|
|adjoint_b|Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).|
|name|A name prefix for the returned tensors (optional).|
Returns a dense matrix (pseudo-code in dense np.matrix notation):

    A = A.H if adjoint_a else A
    B = B.H if adjoint_b else B

    if A is sparse:
      return A*B
    else:
      return (B.H * A.H).H
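A minimal usage sketch (the values are arbitrary, chosen only for illustration):

```python
import tensorflow as tf

# Sparse A with dense form [[1., 0.], [0., 2.]].
sp_a = tf.sparse.SparseTensor(indices=[[0, 0], [1, 1]],
                              values=[1.0, 2.0],
                              dense_shape=[2, 2])
b = tf.constant([[3.0, 4.0],
                 [5.0, 6.0]])

# Dense result of A @ B.
c = tf.sparse.sparse_dense_matmul(sp_a, b)
# c -> [[3., 4.], [10., 12.]]
```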
Using tf.nn.embedding_lookup_sparse for sparse multiplication:

It's not obvious, but you can consider embedding_lookup_sparse as another sparse-dense multiplication. In some situations, you may prefer to use embedding_lookup_sparse even though you're not dealing with embeddings. There are two questions to ask in the decision process: Do you need gradients computed as sparse too? Is your sparse data represented as two SparseTensors, ids and values? There is more explanation about the data format below. If you answer yes to either of these questions, consider using tf.nn.embedding_lookup_sparse.
The following explains the differences between the expected SparseTensors. For example, if the dense form of your sparse data has shape [3, 5] and values:

    [[  a      ]
     [b       c]
     [    d    ]]

SparseTensor format expected by sparse_dense_matmul, sp_a (indices, values):

    [0, 1]: a
    [1, 0]: b
    [1, 4]: c
    [2, 2]: d

SparseTensor format expected by embedding_lookup_sparse, sp_ids and sp_weights:

    [0, 0]: 1    [0, 0]: a
    [1, 0]: 0    [1, 0]: b
    [1, 1]: 4    [1, 1]: c
    [2, 0]: 2    [2, 0]: d
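To make the two formats concrete, here is a sketch that builds both representations of the [3, 5] example above and checks that the two ops agree; the numeric values a=1, b=2, c=3, d=4 and the random dense matrix are assumptions for illustration only:

```python
import tensorflow as tf

a, b, c, d = 1.0, 2.0, 3.0, 4.0  # placeholder values for the example

# Format for tf.sparse.sparse_dense_matmul: one SparseTensor of values.
sp_a = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 0], [1, 4], [2, 2]],
    values=[a, b, c, d],
    dense_shape=[3, 5])

# Format for tf.nn.embedding_lookup_sparse: two SparseTensors,
# column ids and weights, one row per example.
sp_ids = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 0], [1, 1], [2, 0]],
    values=tf.constant([1, 0, 4, 2], dtype=tf.int64),
    dense_shape=[3, 2])
sp_weights = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 0], [1, 1], [2, 0]],
    values=[a, b, c, d],
    dense_shape=[3, 2])

params = tf.random.normal([5, 8])  # dense matrix B, shape [5, 8]

# With combiner="sum", the lookup computes the same weighted row sums.
out1 = tf.sparse.sparse_dense_matmul(sp_a, params)
out2 = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights,
                                     combiner="sum")
```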
Deciding when to use sparse_dense_matmul vs. matmul(a_is_sparse=True):

There are a number of questions to ask in the decision process, including:

- Will the SparseTensor A fit in memory if densified?
- Is the column count of the product large (>> 1)?
- Is the density of A larger than approximately 15%?

If the answer to several of these questions is yes, consider converting the SparseTensor to a dense one and using tf.matmul with a_is_sparse=True.
This operation tends to perform well when A is more sparse, when the column size of the product is small (e.g. matrix-vector multiplication), and when sp_a.dense_shape takes on large values.
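As a sketch of the densified alternative described above (shapes and values are illustrative):

```python
import tensorflow as tf

sp_a = tf.sparse.SparseTensor(indices=[[0, 0], [1, 1]],
                              values=[1.0, 2.0],
                              dense_shape=[2, 2])
b = tf.constant([[3.0], [5.0]])

# Option 1: keep A sparse; tends to win when A is very sparse or very large.
c_sparse = tf.sparse.sparse_dense_matmul(sp_a, b)

# Option 2: densify A and hint tf.matmul that it is mostly zeros; can win
# when A fits in memory, its density exceeds ~15%, or the product is wide.
a_dense = tf.sparse.to_dense(sp_a)
c_dense = tf.matmul(a_dense, b, a_is_sparse=True)
```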
Below is a rough speed comparison between sparse_dense_matmul, labeled 'sparse', and matmul(a_is_sparse=True), labeled 'dense'. For purposes of the comparison, the time spent converting a SparseTensor to a dense Tensor is not included, so it is overly conservative with respect to the time ratio.
CPU: Intel Ivybridge with HyperThreading (6 cores), dL1:32KB dL2:256KB dL3:12MB
GPU: NVidia Tesla k40c
Compiled with: -c opt --config=cuda --copt=-mavx

tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks

A sparse [m, k] with % nonzero