I have a CSR-format sparse matrix A of shape 50×10000, and a np.array v of length 50, for which I would like to calculate the product v.dot(A). How do I do this efficiently?
Of course, running v.dot(A) directly is not a good idea, because scipy.sparse matrices don't play well with NumPy operations. Unfortunately, to my knowledge, scipy.sparse has no function for left-multiplying a matrix by a vector.
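For concreteness, the identity the transpose trick relies on is v·A = (Aᵀv)ᵀ. Here is a minimal sketch (with scaled-down shapes, not my real 50×10000 data) checking that against the dense product:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Scaled-down stand-ins for the real 50 x 10000 problem
A = sparse.random(50, 100, density=0.1, format="csr", random_state=0)
v = rng.standard_normal(50)

# Left multiplication via the transpose identity: v.A == (A^T v)^T
left = A.T.dot(v)          # 1-D array of length 100

# Dense reference computed with plain NumPy
ref = v.dot(A.toarray())

print(np.allclose(left, ref))
```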
I've tried the following methods, but all of them seem quite slow:
- Transposing A and using the standard `.dot` method

  I transpose A and then use the `.dot` method, which multiplies A.T by v as a column vector.
  ```
  >>> A = sparse.csr_matrix([[1, 2, 0, 0],
  ...                        [0, 0, 3, 4]])
  >>> v = np.array([1, 1])
  >>> A.T.dot(v)
  array([1, 2, 3, 4], dtype=int32)
  ```

- Transposing v and using the `multiply` and `sum` methods
  I use the `csr_matrix.multiply()` method, which performs element-wise multiplication, and then sum over the rows.
  ```
  >>> vt = v[np.newaxis].T
  >>> A.multiply(vt).sum(axis=0)
  matrix([[1, 2, 3, 4]], dtype=int32)
  ```

- Turning v into a sparse matrix and using the `.dot` method
  I tried several construction methods; all seemed costly. This is the most readable example (though probably not the most efficient):
  ```
  >>> sparse_v = sparse.csr_matrix(v)
  >>> sparse_v.dot(A).todense()
  matrix([[1, 2, 3, 4]], dtype=int32)
  ```

Method 1 is by far the fastest, but the transpose step is still very time-consuming. Isn't there a better way to perform left multiplication on a sparse matrix?
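For reference, here is the rough timing harness I would use to compare the three approaches at the real problem size (the method labels are mine, and absolute timings will of course vary with machine and matrix density):

```python
import timeit
import numpy as np
from scipy import sparse

# Random stand-in for the real data: 50 x 10000 CSR matrix, length-50 vector
A = sparse.random(50, 10000, density=0.01, format="csr", random_state=0)
v = np.random.default_rng(0).standard_normal(50)
vt = v[np.newaxis].T
sparse_v = sparse.csr_matrix(v)

# The three candidate methods, each normalized to return a 1-D array
methods = {
    "transpose + dot": lambda: A.T.dot(v),
    "multiply + sum":  lambda: np.asarray(A.multiply(vt).sum(axis=0)).ravel(),
    "sparse v + dot":  lambda: sparse_v.dot(A).toarray().ravel(),
}

# Check all methods agree with the dense reference, then time them
ref = v.dot(A.toarray())
for name, fn in methods.items():
    assert np.allclose(fn(), ref), name
    total = timeit.timeit(fn, number=100)
    print(f"{name}: {total * 10:.3f} ms per call")
```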