Given an $m \times n$ matrix $M$ with entries $M_{ij}$, we get an $n \times m$ matrix $M^T$, the transpose, whose $ij$th entry is $M_{ji}$, i.e. $(M^T)_{ij} = M_{ji}$.

# 10. Dot product, 2


### Transposition

You may have noticed that the definition of the dot product looks a lot like matrix multiplication. In fact, it is a special case of matrix multiplication:

$$v_1 w_1 + \cdots + v_n w_n = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}.$$

Technically, the matrix product gives a 1-by-1 matrix whose unique entry is the dot product, but let's not be too pedantic.
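As a quick numerical sanity check of this identity, here is a sketch using NumPy (assuming it's available): the dot product of two vectors agrees with the 1-by-$n$ times $n$-by-1 matrix product, which yields a 1-by-1 matrix.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Dot product as a sum of componentwise products: v_1 w_1 + ... + v_n w_n.
dot = np.dot(v, w)

# The same number computed as a matrix product: a 1-by-3 row vector
# times a 3-by-1 column vector gives a 1-by-1 matrix.
row = v.reshape(1, -1)   # row vector, shape (1, 3)
col = w.reshape(-1, 1)   # column vector, shape (3, 1)
product = row @ col      # shape (1, 1)

assert product.shape == (1, 1)
assert np.isclose(product[0, 0], dot)
```

The `reshape` calls make the "column vector turned on its side" explicit; NumPy's 1-D arrays are neither rows nor columns until you give them a shape.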

Here, we took the column vector $v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$ and turned it on its side to get a row vector, which we call the *transpose* of $v$, written $v^T = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}$.

More generally, you can transpose a matrix:

The transpose of the 2-by-2 matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ is the 2-by-2 matrix $\begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$.

The transpose of the 2-by-3 matrix $\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$ is the 3-by-2 matrix $\begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$.

So the rows of $M$ become the columns of $M^T$.
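The rows-become-columns rule is easy to check entrywise on the 2-by-3 example above, sketched here with NumPy (assuming it's available):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # the 2-by-3 matrix from the text

MT = M.T                    # its 3-by-2 transpose

assert MT.shape == (3, 2)

# Entry (i, j) of M^T is entry (j, i) of M: rows of M become columns of M^T.
for i in range(MT.shape[0]):
    for j in range(MT.shape[1]):
        assert MT[i, j] == M[j, i]
```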

With all this in place, we observe that the dot product $v \cdot w$ is the matrix product $v^T w$.

The transpose of a product satisfies $(AB)^T = B^T A^T$.

Writing out the $ij$th entry of the transpose of the product $AB$ in index notation, we get

$$\bigl((AB)^T\bigr)_{ij} = (AB)_{ji} = \sum_k A_{jk} B_{ki}.$$

Similarly expanding $B^T A^T$, its $ij$th entry is

$$\bigl(B^T A^T\bigr)_{ij} = \sum_k (B^T)_{ik} (A^T)_{kj} = \sum_k B_{ki} A_{jk}.$$

The two expressions differ only by the order of the factors $A_{jk}$ and $B_{ki}$.

The order of these factors doesn't matter: $A_{jk}$ and $B_{ki}$ are just numbers (entries of $A$ and $B$), so they commute. This is one reason index notation is so convenient: it converts expressions involving noncommuting objects like matrices into expressions involving commuting quantities (numbers).
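The identity and the index computation behind it can both be checked numerically; here is a sketch with NumPy (assuming it's available), using small concrete matrices chosen for illustration:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2-by-3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])             # 3-by-2

lhs = (A @ B).T                    # (AB)^T, a 2-by-2 matrix

# Entry (i, j) via the index formula: sum over k of B_{ki} A_{jk}.
m, n = lhs.shape
rhs = np.array([[sum(B[k, i] * A[j, k] for k in range(A.shape[1]))
                 for j in range(n)]
                for i in range(m)])

# The index formula agrees with (AB)^T and with the matrix product B^T A^T.
assert np.array_equal(lhs, rhs)
assert np.array_equal(rhs, B.T @ A.T)
```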