
I have a matrix A:

1 3 1
7 5 2
4 3 7
8 2 1
3 9 6
4 5 2

and a matrix B:

2 9 1
4 3 8
9 7 3
4 4 2
6 5 7
2 9 2

I want to compute C:

1*2 + 3*9 + 1*1
7*4 + 5*3 + 2*8
4*9 + 3*7 + 7*3
8*4 + 2*4 + 1*2
3*6 + 9*5 + 6*7
4*2 + 5*9 + 2*2

How can I express this purely using matrix operations? I realize I can do this using the element-wise (dotted) operators, but I am interested in pure matrix operations. For example, when I have two vectors x and y, I vastly prefer x'*y over sum(x.*y). Hence I am interested in how to do the above using matrix operations as well.

2 Answers


If you do not want to use element-wise operators, you can get the same result by performing a matrix multiplication with the transpose of the second operand and then extracting the diagonal. With the 6x3 matrices above, A*B' is 6x6, and its diagonal holds the six row-wise dot products.

Like so: C = diag(A * B')
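A quick check with the matrices from the question (reshaped to 6x3, as implied by the products listed):

```octave
A = [1 3 1; 7 5 2; 4 3 7; 8 2 1; 3 9 6; 4 5 2];
B = [2 9 1; 4 3 8; 9 7 3; 4 4 2; 6 5 7; 2 9 2];

% A*B' is 6x6: entry (i,j) is the dot product of row i of A with
% row j of B. The row-wise products we want sit on the diagonal.
C = diag(A * B');
disp(C')   % 30 59 78 42 105 57
```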

I'm not quite sure how Octave optimizes this, but it appears to be only slightly slower than the element-wise approach (at least for this data set).

function test(func, n, a, b)
  for i = 1:n
    func(a, b);
  endfor
endfunction

octave> tic; test(@(a, b) sum(a.*b, 2), 100000, A, B); toc
Elapsed time is 2.843 seconds.
octave> tic; test(@(a, b) diag(a*b'), 100000, A, B); toc
Elapsed time is 3.2 seconds.

CAUTION: A real-life problem shows this to be far slower than the element-wise approach:

octave:100> size(yy)
ans =
   5000     10
octave:101> size(expected)
ans =
   5000     10
octave:102> tic; diag(yy * expected'); toc;
Elapsed time is 0.5447 seconds.
octave:103> tic; sum(yy .* expected, 2); toc;
Elapsed time is 0.0016899 seconds.
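The slowdown has a simple explanation: for N-by-d inputs, yy * expected' materializes the full N-by-N matrix of pairwise dot products even though only the N diagonal entries are kept, whereas the element-wise version touches each element exactly once. A rough sketch of the operation counts for the sizes above:

```octave
N = 5000; d = 10;

% diag(yy * expected') computes every pairwise dot product first:
ops_matrix = N * N * d;    % 2.5e8 multiplications, plus N^2 memory

% sum(yy .* expected, 2) does one multiplication per element:
ops_elementwise = N * d;   % 5e4 multiplications

printf("ratio: %d\n", ops_matrix / ops_elementwise);   % ratio: 5000
```

So the pure-matrix form is only competitive when the number of rows is small.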

7 Comments

thanks, that seems like it. i wonder if octave optimizes it internally to do an efficient computation (?)
@EitanT: Because he wants to compute the sums of the rows of the products of the elements in the two matrices. In other words, he wants a vector of length N, where N is the number of rows in the original matrices. If you don't take the diagonal of the product matrix, you get an NxN result.
thank you both; i'll accept this as an authoritative answer; it is also consistent with the opinion of the folks from math.stackexchange
my actual problem showed this to be far slower. i've attempted to edit the answer with the info. this is still the correct answer, but the performance difference must be noted by anybody adopting it.

You should use vector operations

C = sum(A .* B, 2);

The .* operator multiplies matrices element by element, while sum(<matrix>, 2) sums along the second dimension, i.e. it returns a column vector containing the sum of each row of the matrix passed as its first argument.
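For instance, with the matrices from the question (a minimal sketch, using the 6x3 shape implied by the products listed):

```octave
A = [1 3 1; 7 5 2; 4 3 7; 8 2 1; 3 9 6; 4 5 2];
B = [2 9 1; 4 3 8; 9 7 3; 4 4 2; 6 5 7; 2 9 2];

% Multiply element-wise, then sum each row (dimension 2):
C = sum(A .* B, 2);
disp(C')   % 30 59 78 42 105 57
```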

15 Comments

thank you for your response, but please note the last para of my question.
If you're worried about performance, this is about as good as it can get. These are still vector operations and are very fast.
yes, i understand; but a sophisticated implementation could effectively do this as an optimization of a pure matrix operations specification.
-1: You can do it faster without the sum: use matrix multiplication on the transpose instead of elementwise.
@Phonon My downvote is locked, but if agksmehx finds your solution to be consistently faster you deserve an upvote and I will stand corrected. Edit your answer somehow and I will change my vote to an upvote instead.
