40. Images

The image of a linear map $f:{\mathbf{R}}^{n}\to {\mathbf{R}}^{m}$ is the set of vectors $b\in {\mathbf{R}}^{m}$ such that $b=f(v)$ for some $v\in {\mathbf{R}}^{n}$ . It is written as $\mathrm{im}(f)$ .
If you think of applying a map as "following light rays" (like in some earlier examples), you can think of the image as the shadow your map casts.
If the map $f$ is the vertical projection $f\left(\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right)=\left(\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill 0\hfill \end{array}\right)$ then the image of $f$ is the $xy$ plane. That is $$\mathrm{im}(f)=\{\left(\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill 0\hfill \end{array}\right):x,y\in \mathbf{R}\}.$$
Consider the 3-by-2 matrix $A=\left(\begin{array}{cc}\hfill 1\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill \end{array}\right)$ . The image of the corresponding linear map is the set of all vectors of the form $$A\left(\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \end{array}\right)=\left(\begin{array}{c}\hfill x+y\hfill \\ \hfill 2x\hfill \\ \hfill y\hfill \end{array}\right).$$ We studied this example earlier and even drew a picture of its image: it is the grey plane in the figure below. (There's a slight "videographic typo" (i.e. a mistake) in the video; see if you can spot it.)
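As a quick sanity check (a sketch using numpy, which is not part of these notes), we can confirm that the image of this map is a plane: the image is spanned by the columns of $A$, so its dimension is the rank of $A$.

```python
import numpy as np

# The 3-by-2 matrix from the example above.
A = np.array([[1, 1],
              [2, 0],
              [0, 1]])

# The image of A is spanned by its columns, so its dimension
# is the rank of A.  Rank 2 means the image is a plane in R^3.
print(np.linalg.matrix_rank(A))  # → 2
```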
Remarks

$0\in \mathrm{im}(f)$ because $0=f(0)$ .

If $f$ is invertible then $\mathrm{im}(f)={\mathbf{R}}^{m}$ . This is because if $b\in {\mathbf{R}}^{m}$ then $b=f({f}^{-1}(b))$ , so $b\in \mathrm{im}(f)$ .
Image is a subspace
The image of $f$ is a subspace of ${\mathbf{R}}^{m}$ .
If ${b}_{1},{b}_{2}\in \mathrm{im}(f)$ then so is ${b}_{1}+{b}_{2}$ . To see this, observe that if ${b}_{1},{b}_{2}\in \mathrm{im}(f)$ then ${b}_{1}=f({v}_{1})$ and ${b}_{2}=f({v}_{2})$ for some ${v}_{1},{v}_{2}\in {\mathbf{R}}^{n}$ . This means that ${b}_{1}+{b}_{2}=f({v}_{1})+f({v}_{2})=f({v}_{1}+{v}_{2})$ (since $f$ is linear), so ${b}_{1}+{b}_{2}\in \mathrm{im}(f)$ .
Similarly, $\lambda {b}_{1}=\lambda f({v}_{1})=f(\lambda {v}_{1})$ (since $f$ is linear), so $\lambda {b}_{1}\in \mathrm{im}(f)$ .
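The two closure properties above come straight from linearity, and we can watch them hold numerically. Here is a small check (using numpy; the matrix $A$ is the earlier 3-by-2 example, and the vectors $v_1,v_2$ are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[1, 1],
              [2, 0],
              [0, 1]])
v1 = np.array([1, 2])    # arbitrary example vectors in R^2
v2 = np.array([3, -1])

# Closure under addition: f(v1) + f(v2) = f(v1 + v2),
# so the sum of two image vectors is again in the image.
assert np.array_equal(A @ v1 + A @ v2, A @ (v1 + v2))

# Closure under scalar multiplication: lambda * f(v1) = f(lambda * v1).
assert np.array_equal(5 * (A @ v1), A @ (5 * v1))
print("both subspace properties hold")
```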
Relation with simultaneous equations
$Av=b$ has a solution if and only if $b\in \mathrm{im}(f)$ where $f(v)=Av$ .
This is a tautology from the definition of image! $Av=b$ has a solution if and only if there is a $v$ such that $f(v)=Av=b$ .
So putting this together with the last lecture, we see that $Av=b$ has a solution if and only if $b\in \mathrm{im}(f)$ and, if it has a solution, then the space of solutions is a translate of $\mathrm{ker}(f)$ .
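We can test whether $b\in\mathrm{im}(f)$ numerically. The sketch below (using numpy; the helper name `has_solution` is my own, not from the notes) uses the fact that $b$ lies in the column space of $A$ exactly when appending $b$ as an extra column does not increase the rank:

```python
import numpy as np

def has_solution(A, b):
    """Return True if Av = b has a solution, i.e. b is in im(A).

    b lies in the column space of A exactly when appending b as an
    extra column does not increase the rank.
    """
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

A = np.array([[1, 1],
              [2, 0],
              [0, 1]])
print(has_solution(A, np.array([2, 2, 1])))  # b = A(1,1)^T, so True
print(has_solution(A, np.array([0, 0, 1])))  # not in the grey plane: False
```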
Rank
The rank of a linear map/matrix is the dimension of its image.
(Rank-nullity theorem) If $A$ is an $m$-by-$n$ matrix (or $f:{\mathbf{R}}^{n}\to {\mathbf{R}}^{m}$ is a linear map) then $$\mathrm{rank}(A)+\mathrm{nullity}(A)=n.$$ Here $n$ is the number of columns of $A$ (or the dimension of the domain of $f$ ).
The 3-by-3 matrix $A=\left(\begin{array}{ccc}\hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right)$ sends everything to zero, so its image is a single point (the origin), which has dimension zero, so $\mathrm{rank}(A)=0$ . The kernel is the set of things which map to zero, and since everything maps to zero the kernel is ${\mathbf{R}}^{3}$ . Therefore the nullity (the dimension of the kernel) is three. Note that $0+3=3$ and $n=3$ ($A$ is a 3-by-3 matrix), so the rank-nullity theorem holds.
The 3-by-3 matrix $B=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right)$ sends $\left(\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right)$ to $\left(\begin{array}{c}\hfill x\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right)$ , so its image is the $x$ axis. Therefore the rank (the dimension of the image) is 1. The nullity is the number of free variables ($B$ is already in reduced echelon form), which is 2 ($B$ has one leading entry, so two of the three variables are free). Again, $1+2=3$ , which is good. We can see that as the rank goes up, the nullity goes down (as required by the rank-nullity theorem).
The matrix $C=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right)$ is the vertical projection to the $xy$ plane, so its rank is 2 (image is the $xy$ plane). Its nullity is 1 (one free variable). Again, $2+1=3$ .
The 3-by-3 identity matrix $I=\left(\begin{array}{ccc}\hfill 1\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 1\hfill \end{array}\right)$ has rank 3 (for any $v$ we have $v=Iv$ , so every vector is in the image) and nullity 0 (only the origin maps to the origin). Again, $3+0=3$ .
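The four examples above can be checked in one loop. This sketch (using numpy and scipy, which are assumptions of mine, not part of the notes) computes the rank and the nullity independently, the latter as the number of vectors in a basis of the kernel, and confirms that they always sum to $n=3$:

```python
import numpy as np
from scipy.linalg import null_space

# The four 3-by-3 examples from above: the zero matrix, B, C, and I.
for M in [np.zeros((3, 3)), np.diag([1., 0, 0]),
          np.diag([1., 1, 0]), np.eye(3)]:
    rank = np.linalg.matrix_rank(M)
    # null_space returns an orthonormal basis of ker(M); the number of
    # basis vectors (columns) is the nullity.
    nullity = null_space(M).shape[1]
    print(rank, nullity, rank + nullity)  # rank + nullity is always 3
```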
The ranknullity theorem is basically saying that the map $f$ is taking ${\mathbf{R}}^{n}$ , crushing down some of the dimensions (those in the kernel), and mapping the rest faithfully onto the image (so the $n$ dimensions of ${\mathbf{R}}^{n}$ either contribute to the kernel or to the image).
Proof of the rank-nullity theorem
The nullity of $A$ is the number of free variables of $A$ when you put it into reduced echelon form. If we can show that the rank is the number of dependent variables then we're done: there are $n$ variables which are either free (contributing to kernel) or dependent (contributing to rank). Recall that the dependent variables correspond to the columns with leading entries in reduced echelon form.
So we need to show that the rank is the number of leading entries of $A$ in reduced echelon form.
First step: we prove that the rank doesn't change when we do a row operation. Suppose we start with a matrix $A$ , do a row operation to get a matrix ${A}^{\prime}$ . We know there is an elementary matrix $E$ such that ${A}^{\prime}=EA$ . This tells us immediately that $\mathrm{im}(A)$ and $\mathrm{im}({A}^{\prime})$ have the same dimension: $b\mapsto Eb$ gives us an "isomorphism" (invertible linear map) from the image of $A$ to the image of ${A}^{\prime}$ .
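Here is a numerical illustration of the first step (a sketch using numpy; the particular matrices are my own choices). We apply the elementary matrix for "subtract twice row 1 from row 2" and check that the rank is unchanged:

```python
import numpy as np

A = np.array([[1, 1],
              [2, 0],
              [0, 1]])

# The elementary matrix E for the row operation
# "subtract 2 * (row 1) from row 2".
E = np.array([[1, 0, 0],
              [-2, 1, 0],
              [0, 0, 1]])

A_prime = E @ A  # the matrix after the row operation

# E is invertible, so b -> Eb maps im(A) isomorphically onto im(A'),
# and the two images have the same dimension.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_prime))  # → 2 2
```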
As the rank doesn't change under row operations, we may assume without loss of generality that $A$ is in reduced echelon form.
Second step: if $A$ is in reduced echelon form then it has $k$ nonzero rows (for some $k$ ) followed by $m-k$ zero rows. Now:

The number $k$ is the number of leading entries (because each nonzero row has a leading entry and each zero row doesn't).

Recall that $Av=b$ has a solution if and only if ${b}_{k+1}={b}_{k+2}=\mathrm{\cdots}={b}_{m}=0$ : these are the necessary and sufficient conditions for solving the simultaneous equations. If $A$ has a zero row then $b$ has to have a zero in that row, and if all these higher $b$ s are zero then the other rows of $A$ just give us equations which determine the dependent variables.
Since the image of $A$ is the set of $b$ for which $Av=b$ has a solution, this means that $\mathrm{im}(A)$ is the set of $b$ for which ${b}_{k+1}=\mathrm{\cdots}={b}_{m}=0$ , i.e. those $b$ of the form $\left(\begin{array}{c}\hfill {b}_{1}\hfill \\ \hfill \mathrm{\vdots}\hfill \\ \hfill {b}_{k}\hfill \\ \hfill 0\hfill \\ \hfill \mathrm{\vdots}\hfill \\ \hfill 0\hfill \end{array}\right)$ . This is a $k$-dimensional space, so we see that the rank equals $k$ , the number of leading entries.
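The whole argument can be watched on our running example. This sketch (using sympy, an assumption on my part, not part of the notes) computes the reduced echelon form, counts the leading entries, and checks that this count equals the rank:

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [2, 0],
               [0, 1]])

# rref() returns the reduced echelon form together with the indices of
# the pivot columns, i.e. the columns containing a leading entry.
R, pivots = A.rref()
k = len(pivots)          # the number of leading entries

print(R)                 # two nonzero rows followed by one zero row
print(k, A.rank())       # the rank equals the number of leading entries
```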
This completes the proof of the rank-nullity theorem.