This lecture is intended as a foretaste of things to come. We will develop extra layers of abstraction, and this will allow us to apply the ideas of linear algebra in new contexts.
The key operations we've used on vectors are addition and rescaling:
$$+\colon\mathbf{R}^n\times\mathbf{R}^n\to\mathbf{R}^n,\qquad \cdot\colon\mathbf{R}\times\mathbf{R}^n\to\mathbf{R}^n.$$
These operations are all we need to state the definitions of "linear map" and "linear subspace". But we can add and rescale things much more general than column vectors in $\mathbf{R}^n$.
Definition:
A (real) vector space is a set $V$ with operations $+\colon V\times V\to V$ and $\cdot\colon\mathbf{R}\times V\to V$. The elements of $V$ don't have to be elements of $\mathbf{R}^n$, and $+$ and $\cdot$ don't have to be addition and rescaling of vectors in $\mathbf{R}^n$: they could be any other objects and operations which behave in a similar manner, by which I mean that the following conditions hold (for all $u,v,w\in V$ and $s,t\in\mathbf{R}$):

- $u+v=v+u$ and $(u+v)+w=u+(v+w)$.
- There is a distinguished element $0\in V$ with $v+0=v$, and each $v\in V$ has an inverse $-v$ with $v+(-v)=0$.
- $s\cdot(t\cdot v)=(st)\cdot v$ and $1\cdot v=v$.
- $s\cdot(u+v)=s\cdot u+s\cdot v$ and $(s+t)\cdot v=s\cdot v+t\cdot v$.
So a vector space is a set with operations that satisfy these conditions. In particular, $\mathbf{R}^n$ equipped with vector addition and rescaling is an example of a vector space. Indeed, it is a theorem that any finite-dimensional vector space is equivalent to $\mathbf{R}^n$ for some $n$. But if you allow yourself to consider infinite-dimensional vector spaces there are more interesting examples.
Example:
Let $V$ be the set of all functions from $\mathbf{R}$ to $\mathbf{R}$. Given functions $f,g\in V$, define $f+g$ to be the function whose value at $x$ is $f(x)+g(x)$, and given a scalar $s\in\mathbf{R}$ we define $s\cdot f$ to be the function whose value at $x$ is $sf(x)$. This gives $V$ the structure of a vector space. It's not the same as $\mathbf{R}^n$ for any $n$: it is actually an infinite-dimensional vector space.
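If you like to see such definitions in code, here is a minimal Python sketch of these pointwise operations (the names `add` and `scale` are my own, not standard):

```python
import math

# A sketch of the vector space of all functions R -> R:
# "vectors" are ordinary Python functions, with + and * defined pointwise.

def add(f, g):
    """The function whose value at x is f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(s, f):
    """The function whose value at x is s * f(x)."""
    return lambda x: s * f(x)

h = add(math.sin, scale(2.0, math.cos))
print(h(0.5))                               # equals sin(0.5) + 2*cos(0.5)
print(math.sin(0.5) + 2.0 * math.cos(0.5))  # same number
```

Notice that `add` and `scale` only ever use the addition and rescaling of real numbers, applied pointwise.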
Example:
Inside our previous example we can find some linear subspaces. For example, the set of continuous functions $C^0(\mathbf{R})$ is a subspace. Inside that we have another subspace: the space of once continuously-differentiable functions $C^1(\mathbf{R})$. Inside that we have the subspace of twice continuously-differentiable functions $C^2(\mathbf{R})$. And so on. Inside all of these we have the infinitely-differentiable functions $C^\infty(\mathbf{R})$, and, inside that, the space of analytic functions $C^\omega(\mathbf{R})$ (infinitely-differentiable functions whose Taylor series converges on a neighbourhood of the origin). This gives us an infinite nested sequence of subspaces:
$$V\supset C^0(\mathbf{R})\supset C^1(\mathbf{R})\supset C^2(\mathbf{R})\supset\cdots\supset C^\infty(\mathbf{R})\supset C^\omega(\mathbf{R}).$$
Inside the space of analytic functions, we have the space of polynomials (which we already met).
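All of these inclusions are strict. For instance, $f(x)=|x|$ is continuous but not once continuously-differentiable; here is a quick sympy check (my own illustration, not part of the lecture) that its one-sided difference quotients at $0$ disagree:

```python
import sympy as sp

h = sp.symbols('h', real=True)

# |x| is continuous everywhere, but the one-sided difference quotients
# at 0 have different limits, so |x| has no derivative there: it lies
# in C^0(R) but not in C^1(R).
right = sp.limit(sp.Abs(h) / h, h, 0, '+')
left = sp.limit(sp.Abs(h) / h, h, 0, '-')
print(right, left)  # 1 -1
```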
Example:
Differentiation defines a linear map
$$D\colon C^1(\mathbf{R})\to C^0(\mathbf{R}),\qquad D(f)=f'.$$
In other words, you start with a once continuously-differentiable function and differentiate it to get a continuous function. To show that it's linear, all we need to do is check that
$$D(f+g)=D(f)+D(g)\qquad\text{and}\qquad D(s\cdot f)=s\cdot D(f).$$
What is the kernel of $D$? It consists of the functions whose derivative is zero, in other words the constant functions. That is, $\ker(D)$ is the 1-dimensional subspace of constant functions.
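We can spot-check this with a computer algebra system. The sketch below uses sympy to verify linearity on a couple of concrete functions and to solve $f'=0$; of course this illustrates the claims rather than proving them:

```python
import sympy as sp

x, s = sp.symbols('x s')
D = lambda f: sp.diff(f, x)   # differentiation as a map on expressions

f = sp.sin(x)
g = x * sp.exp(x)

# Linearity: D(f + g) = D(f) + D(g) and D(s*f) = s*D(f).
print(sp.simplify(D(f + g) - (D(f) + D(g))))  # 0
print(sp.simplify(D(s * f) - s * D(f)))       # 0

# Kernel: the general solution of f' = 0 is a constant.
u = sp.Function('u')
print(sp.dsolve(sp.Eq(u(x).diff(x), 0), u(x)))  # Eq(u(x), C1)
```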
An eigenfunction of $D$ (with eigenvalue $\lambda$) is a function $f$ such that $D(f)=\lambda f$, that is $f'=\lambda f$. This is a differential equation for $f$; its solutions are $f(x)=Ce^{\lambda x}$. This is why the exponential function is so important: it's an eigenfunction of differentiation. Similarly, the eigenfunctions of $D^2$ are the solutions of $f''=\lambda f$, that is $f(x)=Ae^{\sqrt{\lambda}x}+Be^{-\sqrt{\lambda}x}$. This is a 2-dimensional eigenspace.
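Again sympy can recover these families by solving the differential equations symbolically (a sketch; I assume $\lambda>0$ so that $\sqrt{\lambda}$ is real):

```python
import sympy as sp

x = sp.symbols('x')
lam = sp.symbols('lambda', positive=True)
f = sp.Function('f')

# Eigenfunctions of D: the solutions of f' = lambda*f ...
print(sp.dsolve(sp.Eq(f(x).diff(x), lam * f(x)), f(x)))
# Eq(f(x), C1*exp(lambda*x))  -- a 1-parameter family

# ... and of D^2: the solutions of f'' = lambda*f.
print(sp.dsolve(sp.Eq(f(x).diff(x, 2), lam * f(x)), f(x)))
# Eq(f(x), C1*exp(-sqrt(lambda)*x) + C2*exp(sqrt(lambda)*x))  -- 2 parameters
```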
Algebraic numbers
Example:
Here is an example from number theory. We have come across the idea that your vectors can have complex coefficients or real coefficients, but we can work much more generally by requiring our coefficients to live in some coefficient field $K$. In this example, we will take $K=\mathbf{Q}$ (the rational numbers), but you could imagine all sorts of things (the integers modulo 5, the 17-adic numbers, and goodness knows what else). The only difference this makes in the definition of a vector space is that rescaling can only be done by elements of $K$; that is, the rescaling map is $\cdot\colon K\times V\to V$.
A number $x\in\mathbf{C}$ is algebraic if there exists a polynomial $p(t)=a_nt^n+\cdots+a_1t+a_0$ with $a_0,\ldots,a_n\in\mathbf{Q}$ for which $p(x)=0$.
For example:

- $\sqrt{2}$ is an algebraic number because it's a root of $t^2-2$.
- $i$ is algebraic because it's a root of $t^2+1$.
- $\pi$ and $e$ are not algebraic (i.e. they're transcendental).
The set of all algebraic numbers is called $\bar{\mathbf{Q}}$.
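Computer algebra systems can certify that a given number is algebraic by exhibiting a polynomial for it. A small sympy illustration (not part of the lecture):

```python
import sympy as sp
from sympy.polys.polyerrors import NotAlgebraic

t = sp.symbols('t')

# sqrt(2) and i are algebraic: sympy finds rational polynomials they satisfy.
print(sp.minimal_polynomial(sp.sqrt(2), t))  # t**2 - 2
print(sp.minimal_polynomial(sp.I, t))        # t**2 + 1

# pi is transcendental, so no such polynomial exists and sympy gives up.
try:
    sp.minimal_polynomial(sp.pi, t)
except NotAlgebraic as err:
    print(err)
```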
Lemma:
$\bar{\mathbf{Q}}$ is a $\mathbf{Q}$-vector space.
We need to show that if $x,y$ are algebraic numbers and $s$ is a rational number then $x+y$ and $sx$ are algebraic numbers.
To see that $sx\in\bar{\mathbf{Q}}$, note that there is a polynomial $p(t)=a_nt^n+\cdots+a_1t+a_0$ with $a_i\in\mathbf{Q}$ and $p(x)=0$. Now $sx$ satisfies $q(sx)=0$, where $q(t):=p(t/s)=\frac{a_n}{s^n}t^n+\cdots+\frac{a_1}{s}t+a_0$ again has rational coefficients, so $sx\in\bar{\mathbf{Q}}$. (If $s=0$ then $sx=0$, which is a root of $t$.) Note that we really need $s\in\mathbf{Q}$ (or else the coefficients $a_i/s^i$ are not rational), so this is only a $\mathbf{Q}$-vector space (not an $\mathbf{R}$-vector space).
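Here is a quick sympy sanity check of this rescaling trick, for the illustrative choices $p(t)=t^2-2$, $x=\sqrt{2}$ and $s=3/5$ (my choices; any others would do):

```python
import sympy as sp

t = sp.symbols('t')

# p has rational coefficients and p(x) = 0:
p = t**2 - 2
x = sp.sqrt(2)
s = sp.Rational(3, 5)

# q(t) = p(t/s) still has rational coefficients, and q(s*x) = p(x) = 0.
q = sp.expand(p.subs(t, t / s))
print(q)                              # 25*t**2/9 - 2
print(sp.simplify(q.subs(t, s * x)))  # 0
```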
To show that $x+y$ is algebraic is much harder: we can't prove it here. In the words of Pierre Samuel (Algebraic Theory of Numbers), "The reader will have to exert himself to show that $\sqrt{2}+\sqrt[3]{3}$ is an algebraic integer, and will be convinced that the steps which lead to a proof that this number is algebraic may not be easily generalised." The nicest proof uses the theory of modules over rings.
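Although the general proof is beyond us here, a computer algebra system can exhibit the required polynomial in any specific case. For instance, sympy finds a degree-6 rational polynomial vanishing at $\sqrt{2}+\sqrt[3]{3}$:

```python
import sympy as sp

t = sp.symbols('t')

# sympy exhibits a degree-6 polynomial with rational coefficients
# vanishing at sqrt(2) + cbrt(3), certifying that this sum is algebraic.
x = sp.sqrt(2) + sp.cbrt(3)
p = sp.minimal_polynomial(x, t)
print(p)                        # t**6 - 6*t**4 - 6*t**3 + 12*t**2 - 36*t + 1
print(sp.expand(p.subs(t, x)))  # 0
```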
In fact, slightly more is true: the product of two algebraic numbers is also an algebraic number, so $\bar{\mathbf{Q}}$ is a field. One of the most complicated objects in mathematics is the group $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$, the Galois group of $\bar{\mathbf{Q}}$. This is the set of invertible $\mathbf{Q}$-linear maps $g\colon\bar{\mathbf{Q}}\to\bar{\mathbf{Q}}$ which preserve the product ($g(xy)=g(x)g(y)$).
The elements of $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$ are like infinitely large matrices with rational entries (because $\bar{\mathbf{Q}}$ is infinite-dimensional over $\mathbf{Q}$). One way people study this enormously complicated group is using Galois representations: associating to each $g\in\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$ a finite-dimensional matrix $\rho(g)$ such that $\rho(gh)=\rho(g)\rho(h)$. Constructing Galois representations is a very difficult task, but the payoff can be enormous. Galois representations played an important role in Wiles's proof of Fermat's last theorem, and continue to play an important role in modern number theory.
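As a toy illustration of the flavour of these ideas (my own example, far simpler than a genuine Galois representation): complex conjugation is an element of $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$, and on the 2-dimensional $\mathbf{Q}$-subspace with basis $(1,i)$ it acts by the rational matrix $\begin{pmatrix}1&0\\0&-1\end{pmatrix}$:

```python
import sympy as sp

# Complex conjugation g(a + b*i) = a - b*i is Q-linear and multiplicative,
# so it belongs to the Galois group.  Restricted to the 2-dimensional
# Q-vector space with basis (1, i), it acts by a 2x2 rational matrix.
rho = sp.Matrix([[1, 0], [0, -1]])

a, b, c, d = sp.symbols('a b c d', rational=True)
x = a + b * sp.I
y = c + d * sp.I
g = lambda z: sp.conjugate(z)

# Multiplicativity: g(x*y) = g(x)*g(y).
print(sp.simplify(sp.expand(g(x * y) - g(x) * g(y))))  # 0

# In coordinates, g sends (a, b) to (a, -b):
print(rho * sp.Matrix([a, b]))  # Matrix([[a], [-b]])
```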