Optional: Proof of classification of irreps of SU(3)

Recap

This video is optional.

We now turn to the proof of the classification theorem for irreps of SU(3). The classification theorem said the following:

Theorem:

For any pair of nonnegative integers k and l there exists an irreducible SU(3) representation, unique up to isomorphism, whose weight diagram is obtained by:

  1. taking the images of the weight lambda = k L_1 + l L_2 under the action of the Weyl group,

  2. taking the convex hull of these points to get a polygon P,

  3. taking the lattice points in P of the form lambda + r, where r is in the root lattice (the lattice of integer linear combinations of the L_i - L_j),

  4. assigning multiplicities to these lattice points according to a prescription described in detail earlier.
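
Steps (1)-(3) can be carried out in code as a sanity check. The Python sketch below (function names are my own, not part of the course) works in the integer coordinates (a, b) for the weight a L_1 + b L_2, using L_3 = -L_1 - L_2, so the Weyl group S_3 acts by permuting the triple (a, b, 0) up to multiples of (1, 1, 1). Convexity is an affine notion, so hull membership can be tested in these coordinates; the multiplicities of step (4) are not computed.

```python
from itertools import permutations

def cross(o, a, b):
    """2D cross product of the vectors a - o and b - o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def in_hull(q, hull):
    """Is q inside (or on the boundary of) the ccw convex polygon `hull`?"""
    if len(hull) == 1:
        return q == hull[0]
    return all(cross(hull[i], hull[(i + 1) % len(hull)], q) >= 0
               for i in range(len(hull)))

def weight_support(k, l):
    """Steps 1-3: Weyl orbit of lambda = k*L1 + l*L2, its convex hull P,
    then the points of lambda + (root lattice) lying in P."""
    # Step 1: S3 permutes the triple (k, l, 0); reduce modulo (1, 1, 1).
    orbit = {(p[0] - p[2], p[1] - p[2]) for p in permutations((k, l, 0))}
    hull = convex_hull(list(orbit))                      # Step 2
    # Step 3: (a, b) lies in lambda + root lattice iff (a-k) + (b-l) is
    # divisible by 3 (the root lattice is spanned by (1, -1) and (1, 2),
    # an index-3 sublattice of the weight lattice).
    xs = [p[0] for p in orbit]
    ys = [p[1] for p in orbit]
    return {(a, b)
            for a in range(min(xs), max(xs) + 1)
            for b in range(min(ys), max(ys) + 1)
            if ((a - k) + (b - l)) % 3 == 0 and in_hull((a, b), hull)}
```

For example, weight_support(1, 0) returns the three weights {(1, 0), (0, 1), (-1, -1)} of the standard representation, and weight_support(2, 1) (highest weight 2 L_1 + L_2 = L_1 - L_3, the adjoint representation) returns the seven distinct weights of its diagram.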

In this video and the next, we will show that for each weight lambda there is a unique irrep with "highest weight" lambda. We won't prove the multiplicity formula, but we will show that the weight diagram is "supported" on the set of lattice points described in (3) (i.e. these are precisely the weights which can occur).

Highest weight subrepresentations

First step

First, we will prove the following:

Theorem:

Given any complex representation R from SU(3) to GL(V), if lambda is a highest weight for R then there is an irreducible subrepresentation whose weight diagram is supported on the set of lattice points defined by point (3) of the previous theorem.

What does highest weight mean now? To answer this question, we first need to pick a line through the origin which doesn't contain any of the other points of the triangular lattice (i.e. integer linear combinations of L_1, L_2). Such a line is said to be "irrational" with respect to the lattice. It's not quite enough to pick a line of irrational slope, as the lattice contains vectors like (1/2, sqrt(3)/2), but if you pick a line whose slope is a nonzero rational multiple of pi (or something else algebraically independent from sqrt(3)) then it will work.
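
Irrationality with respect to the lattice can be tested exactly for slopes of the form p + q sqrt(3) with p, q rational, by matching the rational and sqrt(3)-parts of the line equation. A small Python sketch (the normalisation L_1 = (1, 0), L_2 = (-1/2, sqrt(3)/2) is an assumption, chosen so that lattice points are (a - b/2, (b/2) sqrt(3))):

```python
from fractions import Fraction

def lattice_points_on_line(p, q, N=10):
    """Nonzero lattice points a*L1 + b*L2 with |a|, |b| <= N lying on the
    line y = (p + q*sqrt(3)) * x through the origin.

    A lattice point is (x, y) = (a - b/2, (b/2)*sqrt(3)).  Substituting
    into the line equation and comparing rational and sqrt(3) parts
    (sqrt(3) is irrational) gives the exact conditions
        p * x == 0   and   b/2 == q * x.
    """
    p, q = Fraction(p), Fraction(q)
    hits = []
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            if (a, b) == (0, 0):
                continue
            x = Fraction(a) - Fraction(b, 2)   # the (rational) x-coordinate
            if p * x == 0 and Fraction(b, 2) == q * x:
                hits.append((a, b))
    return hits

# Slope sqrt(3) is irrational but is a lattice direction: L1 + L2 = (1/2, sqrt(3)/2).
print((1, 1) in lattice_points_on_line(0, 1))   # True
# A generic slope such as 1 + sqrt(3) misses every nonzero lattice point.
print(lattice_points_on_line(1, 1))             # []
```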

Now move this line parallel to itself to the right until we leave the weight diagram.

Definition:

The final weight we hit before we leave the weight diagram is called a highest weight. The figure illustrates this for the weight diagram of the adjoint representation.

A line of irrational slope determines a unique highest weight

This defines a unique highest weight because an irrational line can contain at most one lattice point.
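
Numerically, sweeping the line to the right and recording the last weight it touches is the same as maximising the dot product with the line's rightward normal. A sketch (the Euclidean normalisation L_1 = (1, 0), L_2 = (-1/2, sqrt(3)/2) and the sweep angle 0.2 are my assumptions; the angle just needs to be generic enough that no two weights of the diagram tie):

```python
import math

# Weights of the adjoint representation: the six roots Li - Lj plus 0,
# written as (a, b) meaning a*L1 + b*L2 (using L3 = -L1 - L2).
ADJOINT = [(0, 0), (1, -1), (-1, 1), (2, 1), (-2, -1), (1, 2), (-1, -2)]

def euclid(w):
    """(a, b) -> Euclidean coordinates, assuming L1 = (1, 0), L2 = (-1/2, sqrt(3)/2)."""
    a, b = w
    return (a - b / 2, b * math.sqrt(3) / 2)

def highest_weight(weights, angle=0.2):
    """The last weight hit by a line perpendicular to direction `angle` as it
    moves to the right: it maximises the dot product with the rightward
    normal (cos angle, sin angle)."""
    nx, ny = math.cos(angle), math.sin(angle)
    return max(weights, key=lambda w: euclid(w)[0] * nx + euclid(w)[1] * ny)

print(highest_weight(ADJOINT))   # (2, 1), i.e. L1 - L3 = 2*L1 + L2
```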

Remark:

Of course, we made a choice when we defined "highest weight" for SU(2) representations: we picked the furthest weight to the right instead of the left. But now we additionally have to pick this irrational line.

Proof

The idea of the proof is exactly the same as for SU(2). If lambda is our highest weight then we pick a highest weight vector v in W_(lambda). We apply R_*^C(E_{i j}) to v for those E_{i j} which lie to the left of our chosen line. From now on we drop R_*^C from the notation.

Remark:

Our irrational line splits our roots L_i minus L_j into two groups. We call those on the left of the line negative and those on the right positive. It's the negative root vectors E_{i j} we're interested in. In the example above, E_{2 1}, E_{3 1} and E_{3 2} are the negative root vectors.
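
The split can be computed the same way as the highest weight: a root is negative exactly when its dot product with the line's rightward normal is negative. A quick sketch (same assumed normalisation L_1 = (1, 0), L_2 = (-1/2, sqrt(3)/2), and an assumed generic sweep angle):

```python
import math

# The six roots Li - Lj in (a, b) coordinates (a*L1 + b*L2, with L3 = -L1 - L2),
# labelled by the corresponding root vector E_{ij}.
ROOTS = {"E12": (1, -1), "E21": (-1, 1), "E13": (2, 1),
         "E31": (-2, -1), "E23": (1, 2), "E32": (-1, -2)}

def is_negative(root, angle=0.2):
    """Is this root to the left of the irrational line?  Equivalently, is the
    dot product of its Euclidean coordinates with the rightward normal
    negative?  (Normalisation L1 = (1, 0), L2 = (-1/2, sqrt(3)/2) assumed.)"""
    a, b = root
    x, y = a - b / 2, b * math.sqrt(3) / 2
    return x * math.cos(angle) + y * math.sin(angle) < 0

negative = sorted(name for name, r in ROOTS.items() if is_negative(r))
print(negative)   # ['E21', 'E31', 'E32'] -- matching the example in the text
```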

We apply all possible combinations of the negative root vectors to our highest weight vector, for example something like: E_{3 1} E_{2 1}^2 E_{3 1} E_{3 2} v. This is the analogue of applying powers of Y for sl(2, C) representations. We obtain a set of vectors.
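
For the standard representation on C^3 this process can be carried out completely: E_{i j} acts as the elementary matrix with a 1 in position (i, j), the highest weight vector is e_1 (weight L_1), and applying the negative root vectors to e_1 already reaches the whole standard basis. A minimal Python sketch:

```python
def E(i, j, n=3):
    """Elementary matrix: in the standard representation on C^n, the
    Lie algebra element E_{ij} acts as the matrix itself."""
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(n)]
            for r in range(n)]

def apply_matrix(M, v):
    """Matrix-vector product, with vectors as tuples."""
    return tuple(sum(M[r][c] * v[c] for c in range(len(v)))
                 for r in range(len(M)))

NEG = [E(2, 1), E(3, 1), E(3, 2)]   # the negative root vectors

# Close the highest weight vector e1 = (1, 0, 0) under the negative root vectors.
vectors = {(1, 0, 0)}
frontier = {(1, 0, 0)}
while frontier:
    new = {apply_matrix(M, w) for M in NEG for w in frontier} - {(0, 0, 0)}
    frontier = new - vectors
    vectors |= frontier

print(sorted(vectors))   # the whole standard basis of C^3
```

Here the span U is all of C^3, consistent with the standard representation being irreducible.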

Lemma:

The set of vectors we get this way will span an irreducible subrepresentation U.

Proof:

We need to check that if we apply H_theta or E_{i j} to this set of vectors then we get something in U. It is clear that if we apply a negative E_{i j} then we get something else in U. Moreover, each of our vectors is in a weight space (each time we apply a negative root vector to v we just move it to a different weight space), so H_theta preserves the subspace U.

The only tricky bit is to show that E_{1 2}, E_{1 3} and E_{2 3} (i.e. positive root vectors) preserve U.

We proceed by induction. Our set of vectors consists of things like: v; E_{2 1} v, E_{3 1} v and E_{3 2} v; E_{2 1}^2 v, E_{2 1} E_{3 1} v, etc. The next line will consist of things we get by applying three negative root vectors, then the next line involves four negative root vectors, etc. Take the inductive hypothesis:

  • If w is obtained from v by applying at most k negative root vectors then E_{1 2} w, E_{1 3} w and E_{2 3} w are contained in U.

This is true when k = 0 because all three positive root vectors send v to a vector in W_(lambda + r) where r is a positive root, and because lambda is a highest weight, W_(lambda + r) = 0.

The inductive step works as follows (we'll just do the example of applying E_{1 3}). Take the vector E_{i_1 j_1} E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v and apply E_{1 3}. We have three cases: E_{i_1 j_1} is one of E_{3 1}, E_{3 2} or E_{2 1}. For each case we need to check that E_{1 3} E_{i_1 j_1} E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v is in U.

In the case E_{i_1 j_1} = E_{3 1}, we have E_{1 3} E_{3 1} = [E_{1 3}, E_{3 1}] + E_{3 1} E_{1 3}, so E_{1 3} E_{3 1} E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v = [E_{1 3}, E_{3 1}] E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v + E_{3 1} E_{1 3} E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v.

The second term here is contained in U because E_{1 3} E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v is in U by the inductive hypothesis (it is E_{1 3} applied to a vector obtained from v by k negative root vectors), and E_{3 1} preserves U.

For the first term, we have [E_{1 3}, E_{3 1}] = H_{1 3}, and E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v is a weight vector, so it is an eigenvector of H_{1 3}. Therefore [E_{1 3}, E_{3 1}] E_{i_2 j_2} ... E_{i_{k+1} j_{k+1}} v is a scalar multiple of a vector in U, hence is in U.
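
The commutation relation used in this step is easy to verify with explicit 3x3 matrices, since E_{i j} is the elementary matrix with a 1 in position (i, j) and, as used in the text, [E_{1 3}, E_{3 1}] = H_{1 3} = diag(1, 0, -1). A quick check:

```python
def E(i, j):
    """Elementary 3x3 matrix with a 1 in position (i, j)."""
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

def mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def comm(A, B):
    """The commutator [A, B] = AB - BA."""
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[r][c] - BA[r][c] for c in range(3)] for r in range(3)]

H13 = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]   # diag(1, 0, -1)
print(comm(E(1, 3), E(3, 1)) == H13)       # True: [E_{13}, E_{31}] = H_{13}
```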

The other two cases are similar: I'll leave them as an exercise.

We have now proved that for any SU(3) representation, if we pick a highest weight then we get a highest weight subrepresentation. We now need to prove that there is only one irrep with a given highest weight.