6 Understanding the relation between row rank and column rank
We have seen that the kernel of a matrix is a measure for the uniqueness of solutions. But how can we compute this kernel? Let $A$ be an $m \times n$ matrix with rows $a_1, \dots, a_m$. A vector $x \in \mathbb{R}^n$ is in the kernel if and only if $Ax = 0$. Explicitly this is the case if
$$a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n = 0$$
for all $i = 1, \dots, m$. The left-hand side is the standard scalar product of the vector $x$ with the $i$-th row vector $a_i$. We thus have the description that the kernel of $A$ is given by all vectors in $\mathbb{R}^n$ which are orthogonal to the vectors $a_1, \dots, a_m$. Formally,
$$\ker(A) = \{ x \in \mathbb{R}^n \mid \langle a_i, x \rangle = 0 \text{ for all } i = 1, \dots, m \},$$
where $\langle \cdot, \cdot \rangle$ is the standard scalar product of $\mathbb{R}^n$.
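As a quick numerical illustration (a sketch using NumPy; the matrix below is an arbitrary made-up example, not one from the text), we can extract a basis of the kernel from the SVD and check that every kernel vector is orthogonal to every row:

```python
import numpy as np

# An arbitrary example matrix (rank 1, so the kernel is 2-dimensional).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# The rows of Vt in the SVD that correspond to (numerically) zero
# singular values span the kernel of A.
U, s, Vt = np.linalg.svd(A)
null_mask = np.zeros(A.shape[1], dtype=bool)
null_mask[:len(s)] = s < 1e-10
null_mask[len(s):] = True          # singular values beyond min(m, n) are zero
kernel_basis = Vt[null_mask]       # each row is a kernel vector

# Every kernel vector is orthogonal to every row of A,
# i.e. <a_i, v> = 0 for all rows a_i and kernel vectors v.
print(np.allclose(A @ kernel_basis.T, 0))  # True
```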
Consider the linear system of equations $Ax = b$ that we encountered before, represented by a matrix $A$ together with its kernel $\ker(A)$. If we draw the kernel and the span of the rows, we see that they are indeed orthogonal to each other.
If we have $b \in \operatorname{im}(A)$, by definition we find an $x \in \mathbb{R}^n$ such that $Ax = b$. In the last section we defined the kernel and found that the difference of two solutions is contained in the kernel. On the other hand, we can add any element $v$ from $\ker(A)$ to $x$ and obtain a new solution $x' = x + v$ with $Ax' = b$: if we have $Ax = b$ and $Av = 0$, then $A(x + v) = Ax + Av = b + 0 = b$. In the picture this corresponds to moving the point $x$ parallel to the kernel.
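This computation is easy to check numerically; a minimal sketch with NumPy (the matrix, the solution, and the kernel vector are made-up illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
x = np.array([1.0, 1.0, 1.0])
b = A @ x                       # x solves Ax = b by construction

v = np.array([2.0, -1.0, 0.0])  # A v = 0, so v lies in the kernel
assert np.allclose(A @ v, 0)

# Adding a kernel element yields another solution: A(x + v) = Ax + Av = b.
print(np.allclose(A @ (x + v), b))  # True
```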
If we pick the right element in the kernel, we can move $x$ into the span of the rows $R := \operatorname{span}(a_1, \dots, a_m)$ and get a solution $x_R \in R$ with $A x_R = b$. We can do this for any element in the image of $A$. Hence we find for every element $b \in \operatorname{im}(A)$ an element $x_R \in R$ with $A x_R = b$.
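In practice, one way to obtain this special solution is the Moore–Penrose pseudoinverse: applied to a consistent system, it returns the minimum-norm solution, which lies in the span of the rows. A sketch with the same made-up example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
x = np.array([1.0, 1.0, 1.0])
b = A @ x  # b is in the image of A by construction

# The pseudoinverse yields the unique solution in the row space.
x_R = np.linalg.pinv(A) @ b

# It solves the system ...
assert np.allclose(A @ x_R, b)
# ... and it is a linear combination of the rows of A: solving
# A^T c = x_R in the least-squares sense leaves no residual.
coeffs, *_ = np.linalg.lstsq(A.T, x_R, rcond=None)
print(np.allclose(A.T @ coeffs, x_R))  # True
```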
The picture suggests even more: whenever we have another solution $x' \in R$ with $Ax' = b$, then the difference $x_R - x'$ has to lie in the kernel of $A$. Since $x_R - x'$ is also in $R$, it has to lie in the intersection of the kernel and $R$. By the picture this intersection is zero, which implies $x_R = x'$. If we look closely, we find a familiar reason for this: the kernel is orthogonal to the span of the rows.
Formally we can express this observation as follows. We know that $v := x_R - x'$ is in $R$, hence we can write it as a linear combination $v = \sum_{i=1}^m \lambda_i a_i$ of the rows of $A$. Next we can exploit that elements in the kernel are orthogonal to the rows, by computing the scalar product of $v$ with itself, expanded as the linear combination of the rows. That is,
$$\langle v, v \rangle = \Big\langle v, \sum_{i=1}^m \lambda_i a_i \Big\rangle = \sum_{i=1}^m \lambda_i \langle v, a_i \rangle = 0,$$
since $v \in \ker(A)$ and hence $\langle v, a_i \rangle = 0$ for all $i$. Hence we get $\langle v, v \rangle = 0$ and thus $v = x_R - x' = 0$.
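Numerically, the same fact appears as follows: projecting a kernel vector orthogonally onto the row space gives zero (a sketch with the made-up example matrix from before; `pinv(A) @ A` is the orthogonal projector onto the row space):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Orthogonal projector onto the row space R = span of the rows of A.
P_R = np.linalg.pinv(A) @ A

# A vector lying in both ker(A) and R is orthogonal to itself, hence
# zero: projecting a kernel vector onto R must give the zero vector.
v = np.array([2.0, -1.0, 0.0])  # a kernel vector: A v = 0
assert np.allclose(A @ v, 0)
print(np.allclose(P_R @ v, 0))  # True
```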
All together, we showed that given some $b$ in the image of $A$, i.e. some linear combination of the columns, there exists a unique solution $x_R \in R$ with $A x_R = b$ such that $x_R$ is a linear combination of the rows. So the matrix $A$ yields a 1-1 correspondence between the space $R$ of linear combinations of the rows and the span of the columns $\operatorname{im}(A)$. Formally, the restriction of $A$ to $R$,
$$A|_R \colon R \to \operatorname{im}(A), \quad x \mapsto Ax,$$
gives a bijection (a 1-1 map or correspondence) between $R$ and $\operatorname{im}(A)$. Furthermore, the correspondence preserves the structure of $R$ and $\operatorname{im}(A)$. This means that we can transport all calculations and relations between elements from $R$ to $\operatorname{im}(A)$ and back using the above restriction of $A$ (and its inverse). Hence both spaces have the same properties. In particular $R$ and $\operatorname{im}(A)$, i.e. the span of the rows and the span of the columns, have the same dimension. The dimension of $R$ is the row rank of $A$ and the dimension of $\operatorname{im}(A)$ is the column rank. Hence the row rank equals the column rank.
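The conclusion can be spot-checked numerically (a sketch; the matrices are random rank-deficient examples, and `np.linalg.matrix_rank` computes the rank from singular values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build rank-r matrices as a product of an (m x r) and an (r x n) factor,
# then compare the column rank of A with that of A^T (the row rank of A).
for m, n, r in [(4, 5, 2), (5, 3, 1), (4, 4, 3)]:
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == r

print("row rank == column rank for all test matrices")
```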