Contrary to your first sentence, a and b are not the same size. But let's focus on your example.
So you want this: 2 dot products, one for each row of a and b.
np.array([np.dot(x,y) for x,y in zip(a,b)])
or, to avoid building a list by appending:
X = np.zeros((2,2))
for i in range(2):
    X[i,...] = np.dot(a[i],b[i])
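To make these snippets runnable, here's some made-up data with the shapes under discussion (the question's actual a and b aren't reproduced here):

import numpy as np

# hypothetical inputs: a is a (2,2,2) nested list, b a (2,2) one
a = [[[1., 2.], [3., 4.]], [[5., 6.], [7., 8.]]]
b = [[1., 0.], [0., 1.]]
print(np.array([np.dot(x,y) for x,y in zip(a,b)]))
# [[1. 3.]
#  [6. 8.]]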
the dot product can be expressed with einsum (matrix index notation) as
[np.einsum('ij,j->i',x,y) for x,y in zip(a,b)]
so the next step is to add an index for that first dimension:
np.einsum('kij,kj->ki',a,b)
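With the made-up a and b above, that one-liner reproduces the looped result:

print(np.einsum('kij,kj->ki',a,b))
# [[1. 3.]
#  [6. 8.]]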
I'm quite familiar with einsum, but it still took a bit of trial and error to figure out what you want. Now that the problem is clear, I can compute it in several other ways:
A, B = np.array(a), np.array(b)

np.multiply(A,B[:,np.newaxis,:]).sum(axis=2)
(A*B[:,None,:]).sum(2)
np.dot(A,B.T)[0,...]
np.tensordot(b,a,(-1,-1))[:,0,:]
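For instance, the elementwise-multiply-and-sum version checks out against einsum on the made-up arrays:

print(np.allclose((A*B[:,None,:]).sum(2), np.einsum('kij,kj->ki',A,B)))
# True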
I find it helpful to work with arrays that have different sizes. For example, if A were (2,3,4) and B (2,4), it would be more obvious that the dot sum has to be on the last dimension.
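For instance (Abig and Bbig are just arange-filled stand-ins for such shapes):

Abig = np.arange(24.).reshape(2,3,4)
Bbig = np.arange(8.).reshape(2,4)
print(np.einsum('kij,kj->ki',Abig,Bbig).shape)    # (2, 3)
print((Abig*Bbig[:,None,:]).sum(2).shape)         # (2, 3)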
Another numpy iteration tool is np.nditer. einsum uses this (in C). http://docs.scipy.org/doc/numpy/reference/arrays.nditer.html
it = np.nditer([A, B, None], flags=['external_loop'],
               op_axes=[[0,1,2], [0,-1,1], None])
for x,y,w in it:
    # x, y are shape (2,)
    w[...] = np.dot(x,y)
it.operands[2][...,0]
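With the made-up A and B, that last expression matches the einsum result:

print(np.allclose(it.operands[2][...,0], np.einsum('kij,kj->ki',A,B)))
# True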
Avoiding that [...,0] step requires a more elaborate setup.
C = np.zeros((2,2))
it = np.nditer([A, B, C], flags=['external_loop','reduce_ok'],
               op_axes=[[0,1,2], [0,-1,1], [0,1,-1]],
               op_flags=[['readonly'],['readonly'],['readwrite']])
for x,y,w in it:
    w[...] = np.dot(x,y)
    # w[...] += x*y
print(C)
# array([[  7.,  15.],
#        [ 14.,  32.]])
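(The [[ 7., 15.],[ 14., 32.]] shown presumably comes from the question's original arrays; the made-up a and b used here would give [[1., 3.],[6., 8.]] instead.) Either way, C agrees with the einsum result:

print(np.allclose(C, np.einsum('kij,kj->ki',A,B)))
# True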