# (Sketch) proof of classification theorem

## Classification theorem

This time, we will sketch how you would prove the classification theorem for Dynkin diagrams of compact semisimple groups.

Theorem:

If $R$ is the root system of a compact semisimple Lie group then the Dynkin diagram of $R$ is a disjoint union of diagrams from the following list, namely the diagrams of types $A_{n}$, $B_{n}$, $C_{n}$, $D_{n}$, $E_{6}$, $E_{7}$, $E_{8}$, $F_{4}$ and $G_{2}$. (The diagrams on the list are connected; disjoint unions of Dynkin diagrams correspond to taking products of the corresponding Lie algebras.)

## Proof

### Positive definiteness of the Killing form

The idea of the proof is to use the fact that $v\cdot v>0$ unless $v=0$ .

Remark:

This comes from the fact that the dual Killing form $K^{*}(\alpha,\alpha)$ is positive definite on $\mathfrak{h}_{\mathbf{R}}^{*}$ , the dual of $i$ times the Lie algebra of the maximal torus. However, we're now just talking about root systems, so this is simply a fact about the Euclidean dot product.

We will use this as follows. Write elements of $\mathbf{R}^{n}$ (i.e. of $\mathfrak{h}^{*}_{\mathbf{R}}$ ) in terms of the basis given by the (unit vectors pointing along the) simple roots, that is $v=\begin{pmatrix}v_{1}\\ \vdots\\ v_{n}\end{pmatrix}=\sum_{i=1}^{n}v_{i}\hat{\alpha}_{i}$ where $\alpha_{1},\ldots,\alpha_{n}$ are the simple roots and $\hat{\alpha}_{i}$ is the unit vector $\alpha_{i}/|\alpha_{i}|$ .

The dot product $v\cdot v$ is: $v\cdot v=\sum_{i}v_{i}\hat{\alpha}_{i}\cdot\sum_{j}v_{j}\hat{\alpha}_{j}=\sum_{i}\sum_{j}v_{i}v_{j}\,\hat{\alpha}_{i}\cdot\hat{\alpha}_{j}.$

We can write this as a matrix product $\begin{pmatrix}v_{1}&\cdots&v_{n}\end{pmatrix}Q\begin{pmatrix}v_{1}\\ \vdots\\ v_{n}\end{pmatrix},$ where $Q$ is the matrix whose $ij$ entry is $Q_{ij}=\hat{\alpha}_{i}\cdot\hat{\alpha}_{j}.$
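Concretely, $Q$ is the Gram matrix of the unit vectors $\hat{\alpha}_{i}$, and $v^{T}Qv$ reproduces $v\cdot v$. Here is a minimal Python sketch, assuming the standard planar realisation of the $A_{2}$ simple roots at an angle of $120^{\circ}$ (the concrete coordinates and helper names are my own choices, not from the text):

```python
import math

# An assumed concrete choice of simple roots for A_2 in the plane:
# alpha_1 = (1, 0), alpha_2 = (-1/2, sqrt(3)/2), at 120 degrees.
alpha = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def unit(v):
    # hat(alpha) = alpha / |alpha|
    n = math.sqrt(dot(v, v))
    return tuple(vi / n for vi in v)

hats = [unit(a) for a in alpha]

# Q_ij = hat(alpha_i) . hat(alpha_j), the Gram matrix of the unit roots.
Q = [[dot(hats[i], hats[j]) for j in range(2)] for i in range(2)]

def quadratic_form(Q, v):
    # v^T Q v, matching the matrix-product expression above.
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

print(Q[0][1])                        # approximately -0.5 (cos of 120 degrees)
print(quadratic_form(Q, (1.0, 1.0)))  # approximately 1.0, positive as expected
```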

### The entries of Q

The diagonal entries of $Q$ are $1$ s, because the $\hat{\alpha}_{i}$ are unit vectors. The off-diagonal entries of $Q$ are nonpositive, because $\alpha\cdot\beta\leq 0$ for simple roots $\alpha$ and $\beta$ as we saw last time.

Moreover, $Q_{ij}=\hat{\alpha}_{i}\cdot\hat{\alpha}_{j}=\frac{\alpha_{i}\cdot\alpha_{j}}{\sqrt{\alpha_{i}\cdot\alpha_{i}}\sqrt{\alpha_{j}\cdot\alpha_{j}}}=-\sqrt{\frac{\alpha_{i}\cdot\alpha_{j}}{\alpha_{i}\cdot\alpha_{i}}\cdot\frac{\alpha_{i}\cdot\alpha_{j}}{\alpha_{j}\cdot\alpha_{j}}}$

Here, we take the negative square root because we know that $Q_{ij}$ is nonpositive. Recall from before that $n_{\alpha_{i}\alpha_{j}}=\frac{2\alpha_{i}\cdot\alpha_{j}}{\alpha_{i}\cdot\alpha_{i}},\qquad n_{\alpha_{j}\alpha_{i}}=\frac{2\alpha_{i}\cdot\alpha_{j}}{\alpha_{j}\cdot\alpha_{j}},$ so $Q_{ij}=-\frac{1}{2}\sqrt{n_{\alpha_{i}\alpha_{j}}n_{\alpha_{j}\alpha_{i}}}.$

We showed that $n_{\alpha_{i}\alpha_{j}}n_{\alpha_{j}\alpha_{i}}=4\cos^{2}\phi\in\{0,1,2,3\}$ where $\phi$ is the angle between $\alpha_{i}$ and $\alpha_{j}$. Therefore the off-diagonal entries of $Q$ can only be $0$, $-1/2$, $-1/\sqrt{2}$, or $-\sqrt{3}/2$.

### Q makes sense for any Dynkin-like graph

Observe that this matrix $Q$ makes sense for any Dynkin graph (i.e. a graph in which every pair of nodes is connected by 0, 1, 2 or 3 edges), whether or not it is on our list in the theorem: we just write down the matrix whose diagonal entries are $1$s and whose $ij$ entry is $0$, $-1/2$, $-1/\sqrt{2}$, or $-\sqrt{3}/2$ according to whether the number of edges connecting vertex $i$ and vertex $j$ is $0$, $1$, $2$ or $3$.
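This recipe is easy to mechanise. Here is a Python sketch (the function name `dynkin_Q` and the edge encoding are ad hoc conventions of mine): given the number of vertices and the edge multiplicities, build $Q$ with $1$s on the diagonal and the values above off the diagonal.

```python
import math

# Off-diagonal entry of Q as a function of the number of edges (0-3)
# joining two vertices, as derived above.
ENTRY = {0: 0.0, 1: -0.5, 2: -1.0 / math.sqrt(2), 3: -math.sqrt(3) / 2}

def dynkin_Q(n, edges):
    """Build Q for a Dynkin-like graph on n vertices.

    `edges` maps an unordered pair (i, j) to its edge multiplicity (1-3);
    pairs not listed are unconnected.
    """
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for (i, j), m in edges.items():
        Q[i][j] = Q[j][i] = ENTRY[m]
    return Q

# A_2: two vertices joined by a single edge.
print(dynkin_Q(2, {(0, 1): 1}))  # [[1.0, -0.5], [-0.5, 1.0]]
```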

Example:

The root system $A_{2}$ has $Q=\begin{pmatrix}1&-1/2\\ -1/2&1\end{pmatrix}$ .

Example:

The triangle Dynkin diagram shown below (three vertices, all connected by single edges) gives the matrix $Q=\begin{pmatrix}1&-1/2&-1/2\\ -1/2&1&-1/2\\ -1/2&-1/2&1\end{pmatrix}$ .

### Q is positive definite only for graphs on our list

Proposition:

Start with any Dynkin graph. The corresponding matrix $Q$ is positive definite (i.e. $v^{T}Qv>0$ unless $v=0$ ) if and only if the Dynkin graph is on our list from the statement of the classification theorem.

We will not prove this: it will be a fun in-depth project for those who want it. I'll just sketch how it goes.
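Positive definiteness of a concrete $Q$ can be tested mechanically, for instance via Sylvester's criterion (all leading principal minors positive). A self-contained Python sketch, with the determinant computed by Gaussian elimination (all names here are my own, not from the text):

```python
def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def is_positive_definite(Q):
    # Sylvester's criterion: every leading principal minor is positive.
    n = len(Q)
    return all(det([row[:k] for row in Q[:k]]) > 0 for k in range(1, n + 1))

A2 = [[1.0, -0.5], [-0.5, 1.0]]
triangle = [[1.0, -0.5, -0.5],
            [-0.5, 1.0, -0.5],
            [-0.5, -0.5, 1.0]]
print(is_positive_definite(A2))        # True
print(is_positive_definite(triangle))  # False
```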

Example:

Consider the matrix $Q$ associated to the triangular Dynkin graph above. We have $\begin{pmatrix}1&1&1\end{pmatrix}\begin{pmatrix}1&-1/2&-1/2\\ -1/2&1&-1/2\\ -1/2&-1/2&1\end{pmatrix}\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}=0,$ so $Q$ is not positive definite.
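This computation is easy to verify numerically. A minimal Python check of the matrix-vector calculation above (the helper name `quad` is ad hoc):

```python
# Q for the triangle graph, as in the example above.
Q = [[1.0, -0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]

def quad(Q, v):
    # The quadratic form v^T Q v.
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

# Each row sums to 1 - 1/2 - 1/2 = 0, so the all-ones vector gives 0.
print(quad(Q, (1.0, 1.0, 1.0)))  # 0.0, so Q is not positive definite
```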

Remark:

Usually, people multiply $Q$ by $2$ to get rid of all these factors of $1/2$ .

If you have a graph which contains a subgraph whose matrix $Q$ is not positive definite, then the matrix of your graph also fails to be positive definite. (Exercise.)
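One way to see this: take a vector witnessing the failure on the subgraph and extend it by zeros on the remaining vertices; the quadratic form then only sees the principal submatrix belonging to the subgraph. A small Python illustration, using a hypothetical 4-vertex graph that contains the triangle as a subgraph (the graph and helper names are my own examples):

```python
# A 4-vertex graph containing the triangle on vertices 0, 1, 2,
# plus an extra vertex 3 joined to vertex 0 by a single edge.
Q = [[1.0, -0.5, -0.5, -0.5],
     [-0.5, 1.0, -0.5, 0.0],
     [-0.5, -0.5, 1.0, 0.0],
     [-0.5, 0.0, 0.0, 1.0]]

def quad(Q, v):
    # The quadratic form v^T Q v.
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

# Pad the triangle's null vector (1, 1, 1) with a zero for the new vertex:
# the result only involves the triangle's 3x3 principal submatrix.
print(quad(Q, (1.0, 1.0, 1.0, 0.0)))  # 0.0, so the bigger Q is not positive definite either
```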

Then you need to find a suitably large collection of graphs whose matrices are not positive definite. For example, we have seen that the triangle gives a matrix which is not positive definite; a similar argument (using the all-ones vector) shows that the matrix associated to any closed polygonal graph is not positive definite. This means that our Dynkin graph cannot have any cycles.
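The polygon claim can be checked the same way: in a closed polygon every row of $Q$ sums to $1-\frac{1}{2}-\frac{1}{2}=0$, so the all-ones vector is annihilated by the quadratic form. A quick Python sketch (the helper names are ad hoc):

```python
def cycle_Q(n):
    # Closed polygon on n vertices: single edges i -- (i+1) mod n.
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        j = (i + 1) % n
        Q[i][j] = Q[j][i] = -0.5
    return Q

def quad(Q, v):
    # The quadratic form v^T Q v.
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

# The all-ones vector pairs to zero for every polygon size.
for n in range(3, 8):
    print(n, quad(cycle_Q(n), [1.0] * n))  # always 0.0
```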

Here are some more graphs whose matrix is not positive definite:

These are all obtained by adding extra vertices to the graphs from the statement of the theorem. The strategy is to make a big list of "bad graphs" $\tilde{A}_{n}$ , $\tilde{B}_{n}$ , ..., $\tilde{G}_{2}$ and then show that any graph which doesn't contain a bad graph must be one of $A_{n}$ , $B_{n}$ , ..., $G_{2}$ .