# Optional: Proof of classification of irreps of SU(3)

## Recap

This video is optional.

We now turn to the proof of the classification theorem for irreps of $SU(3)$ . The classification theorem said the following:

Theorem:

For any pair of nonnegative integers $k,\ell$ there exists an irreducible $SU(3)$ representation, unique up to isomorphism, whose weight diagram is obtained by:

1. taking the images of $\lambda=kL_{1}+\ell L_{2}$ under the action of the Weyl group,

2. taking the convex hull of these points to get a polygon $P$ ,

3. taking the lattice points in $P$ of the form $\lambda+r$ where $r$ is in the root lattice (the lattice of integer linear combinations of $L_{i}-L_{j}$ ),

4. assigning multiplicities to these lattice points according to a prescription described in detail earlier.

In this video and the next, we will show that for each weight $\lambda$ there is a unique irrep with "highest weight" $\lambda$ . We won't prove the multiplicity formula, but we will show that the weight diagram is "supported" on the set of lattice points described in (3) (i.e. these are precisely the weights which can occur).
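Steps (1)–(2) of the recipe are easy to experiment with. Here is a minimal Python sketch, in coordinates of my own choosing (not fixed in the lecture): a weight $aL_{1}+bL_{2}+cL_{3}$ is stored as $(a,b,c)$, normalised using $L_{1}+L_{2}+L_{3}=0$ by subtracting the last coordinate, and the Weyl group $S_{3}$ acts by permuting the three coordinates. For the adjoint representation, $\lambda=L_{1}-L_{3}=2L_{1}+L_{2}$, i.e. $k=2$, $\ell=1$.

```python
from itertools import permutations

def weyl_orbit(k, l):
    """Weyl orbit of the weight k*L1 + l*L2 for SU(3).

    A weight a*L1 + b*L2 + c*L3 is stored as (a, b, c); since
    L1 + L2 + L3 = 0 we normalise by subtracting the last coordinate.
    The Weyl group S_3 acts by permuting the three coordinates.
    """
    orbit = set()
    for p in permutations((k, l, 0)):
        orbit.add((p[0] - p[2], p[1] - p[2], 0))
    return sorted(orbit)

print(len(weyl_orbit(1, 0)))  # 3: the triangle of the standard representation
print(len(weyl_orbit(2, 1)))  # 6: the hexagon of the adjoint representation
```

Taking the convex hull of the output then gives the polygon $P$ of step (2).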

## Highest weight subrepresentations

### First step

First, we will prove the following:

Theorem:

Given any complex representation $R\colon SU(3)\to GL(V)$ , if $\lambda$ is a highest weight for $R$ then there is an irreducible subrepresentation whose weight diagram is supported on the set of lattice points defined by point (3) of the previous theorem.

What does highest weight mean now? To answer this question, we first need to pick a line through the origin which doesn't contain any of the other points of the triangular lattice (i.e. integer linear combinations of $L_{1},L_{2}$ ). Such a line is said to be "irrational" with respect to the lattice; it's not quite enough to pick a line of irrational slope, as the lattice contains vectors like $(1/2,\sqrt{3}/2)$ , but if you pick a line with slope a rational multiple of $\pi$ (or something else algebraically independent from $\sqrt{3}$ ) then it will work.

Now move this line parallel to itself to the right until it leaves the weight diagram.

Definition:

The final weight we hit before we leave the weight diagram is called a highest weight. The figure illustrates this for the weight diagram of the adjoint representation.

This defines a unique highest weight because the line can contain at most one lattice point.
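To make the choice concrete, here is a small Python sketch; the planar coordinates and the direction $(1,\pi)$ are my own illustrative choices, not fixed in the lecture. Ordering weights by their projection onto a normal direction to the irrational line, the highest weight is simply the maximum. If two distinct lattice points had equal projection onto $(1,\pi)$, then $\pi$ would have to lie in $\mathbf{Q}(\sqrt{3})$, contradicting its transcendence.

```python
import math

def highest_weight(weights, direction=(1.0, math.pi)):
    """Pick the weight with the largest projection onto `direction`.

    Weights are points of the triangular lattice spanned by (1, 0)
    and (1/2, sqrt(3)/2); the direction (1, pi) is a sample normal to
    an "irrational" line, so distinct lattice points have distinct
    projections and the maximum is unique.
    """
    return max(weights, key=lambda w: w[0] * direction[0] + w[1] * direction[1])

# Weight diagram of the adjoint representation: the six roots plus 0.
s = math.sqrt(3) / 2
adjoint = [(0, 0), (1, 0), (-1, 0), (0.5, s), (-0.5, -s), (-0.5, s), (0.5, -s)]
print(highest_weight(adjoint))  # (0.5, s): the root L1 - L3 in these coordinates
```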

Remark:

Of course, we made a choice when we defined "highest weight" for $SU(2)$ representations: we picked the furthest weight to the right instead of the left. But now we additionally have to pick this irrational line.

### Proof

The idea of the proof is exactly the same as for $SU(2)$ . If $\lambda$ is our highest weight then we pick a highest weight vector $v\in W_{\lambda}$ . We apply $R_{*}^{\mathbf{C}}(E_{ij})$ to $v$ for those $E_{ij}$ whose roots $L_{i}-L_{j}$ lie to the left of our chosen line. From now on we drop $R_{*}^{\mathbf{C}}$ from the notation.

Remark:

Our irrational line splits our roots $L_{i}-L_{j}$ into two groups. We call those on the left of the line negative and those on the right positive. It's the negative root vectors $E_{ij}$ we're interested in. In the example above, $E_{21}$ , $E_{31}$ and $E_{32}$ are the negative root vectors.
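The splitting is easy to compute. Below is a sketch in one standard planar coordinate system (my own choice, as is the direction $(1,\pi)$ standing in for a normal to the irrational line): the root of $E_{ij}$ is $L_{i}-L_{j}$, and we classify it by the sign of its projection.

```python
import math

# Roots L_i - L_j of sl(3, C) in planar coordinates (my own choice),
# keyed by the corresponding root vector E_ij.
s = math.sqrt(3) / 2
roots = {
    "E12": (1.0, 0.0),  "E21": (-1.0, 0.0),   # L1 - L2, L2 - L1
    "E13": (0.5, s),    "E31": (-0.5, -s),    # L1 - L3, L3 - L1
    "E23": (-0.5, s),   "E32": (0.5, -s),     # L2 - L3, L3 - L2
}

def sign(v, direction=(1.0, math.pi)):
    """Classify a root by the sign of its projection onto `direction`."""
    return "positive" if v[0] * direction[0] + v[1] * direction[1] > 0 else "negative"

negative = sorted(name for name, v in roots.items() if sign(v) == "negative")
print(negative)  # ['E21', 'E31', 'E32']
```

This reproduces the example in the remark: $E_{21}$, $E_{31}$ and $E_{32}$ are the negative root vectors.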

We apply all possible combinations of the negative root vectors to our highest weight vector, for example something like $E_{31}(E_{21})^{2}E_{31}E_{32}v$ . This is the analogue of applying powers of $Y$ for $\mathfrak{sl}(2,\mathbf{C})$ representations. We obtain a set of vectors.

Lemma:

The set of vectors we get this way will span an irreducible subrepresentation $U$ .

Proof:

We need to check that if we apply $H_{\theta}$ or $E_{ij}$ to this set of vectors then we get something in $U$ . It is clear that if we apply a negative $E_{ij}$ then we get something else in $U$ . Moreover, each of our vectors is in a weight space (each time we apply a negative root vector to $v$ we just move it to a different weight space), so $H_{\theta}$ preserves the subspace $U$ .
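The weight-shifting fact is easiest to see in the standard representation on $\mathbf{C}^{3}$ , where $e_{j}$ spans the weight space $W_{L_{j}}$ and $E_{ij}e_{j}=e_{i}$ ; a small sketch in plain Python:

```python
# In the standard representation of sl(3, C) on C^3, the basis vector
# e_j spans the weight space W_{L_j}, and the elementary matrix E_ij
# (entry 1 in row i, column j) sends e_j to e_i: applying a root
# vector shifts the weight by the root L_i - L_j.
def E(i, j):
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

def apply(m, v):
    """Multiply a 3x3 matrix by a vector in C^3."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

e1, e2 = [1, 0, 0], [0, 1, 0]
print(apply(E(2, 1), e1) == e2)  # True: E_21 maps W_{L_1} into W_{L_2}
```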

The only tricky bit is to show that $E_{12}$ , $E_{13}$ and $E_{23}$ (i.e. positive root vectors) preserve $U$ .

We proceed by induction. Our set of vectors consists of things like: $$v;\qquad E_{21}v,\quad E_{31}v,\quad E_{32}v;\qquad E_{21}^{2}v,\quad E_{21}E_{31}v,\quad\cdots$$ The next line will consist of things we get by applying three negative root vectors, then the next line involves four negative root vectors, etc. Take the inductive hypothesis:

• If $w$ is obtained from $v$ by applying at most $k$ negative root vectors then $E_{12}w$ , $E_{13}w$ and $E_{23}w$ are contained in $U$ .

This is true when $k=0$ because all three positive root vectors send $v$ to a vector in $W_{\lambda+r}$ where $r$ is a positive root, and because $\lambda$ is a highest weight, $W_{\lambda+r}=0$ .

The inductive step works as follows (we'll just do the example of applying $E_{13}$ ). Take the vector $E_{i_{1}j_{1}}\cdots E_{i_{k+1}j_{k+1}}v$ and apply $E_{13}$ . We have three cases: $E_{i_{1}j_{1}}$ is one of $E_{31}$ , $E_{32}$ or $E_{21}$ . For each case we need to check that $E_{13}E_{i_{1}j_{1}}\cdots E_{i_{k+1}j_{k+1}}v\in U.$

In the case $E_{i_{1}j_{1}}=E_{31}$ , we have $E_{13}E_{31}=[E_{13},E_{31}]+E_{31}E_{13},$ so $$E_{13}E_{31}E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v=[E_{13},E_{31}]E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v+E_{31}E_{13}E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v.$$

The second term here is contained in $U$ because $E_{13}E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v\in U$ by the inductive hypothesis (it is $E_{13}$ applied to a vector obtained from $v$ by only $k$ negative root vectors), and $E_{31}$ preserves $U$ .

We have $[E_{13},E_{31}]=H_{13}$ , and $E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v$ is a weight vector, so it is an eigenvector of $H_{13}$ . Therefore $[E_{13},E_{31}]E_{i_{2}j_{2}}\cdots E_{i_{k+1}j_{k+1}}v\in U.$
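The commutator identity can be checked directly on the elementary matrices; a quick Python sketch (assuming, as is standard, that $H_{13}$ denotes $E_{11}-E_{33}$):

```python
# Root vectors of sl(3, C) as 3x3 elementary matrices E_ij (entry 1 in
# row i, column j); a direct check of the commutator used above:
# [E_13, E_31] = H_13 = E_11 - E_33.
def E(i, j):
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def sub(a, b):
    return [[a[r][c] - b[r][c] for c in range(3)] for r in range(3)]

comm = sub(matmul(E(1, 3), E(3, 1)), matmul(E(3, 1), E(1, 3)))
H13 = sub(E(1, 1), E(3, 3))
print(comm == H13)  # True: the commutator is diag(1, 0, -1)
```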

The other two cases are similar: I'll leave them as an exercise.

We have now proved that for any $SU(3)$ representation, if we pick a highest weight then we get a highest weight subrepresentation. We now need to prove that there is only one irrep with a given highest weight.