# (Sketch) proof of classification theorem

## Classification theorem

This time, we will sketch how you would prove the classification theorem for Dynkin diagrams of compact semisimple groups.

Theorem:

If R is the root system of a compact semisimple Lie group, then the Dynkin diagram of R is a disjoint union of diagrams from the following list. (The diagrams on the list are connected; disjoint unions of Dynkin diagrams correspond to taking products of the corresponding Lie algebras.)

## Proof

### Positive definiteness of the Killing form

The idea of the proof is to use the fact that v dot v is positive unless v = 0.

Remark:

This comes from the fact that the dual Killing form K star is positive definite on little h R dual (the dual of i times the Lie algebra of the maximal torus), i.e. K star of alpha with alpha is positive unless alpha = 0. However, we're now just talking about root systems, so this is simply a fact about the Euclidean dot product.

We will use this as follows. Write elements of R n (i.e. of little h R dual) in terms of the basis given by the unit vectors pointing along the simple roots, that is, v equals (v_1, ..., v_n) equals the sum from i = 1 to n of v_i hat alpha_i, where alpha_1, ..., alpha_n are the simple roots and hat alpha_i is the unit vector alpha_i over the length of alpha_i.

The dot product is then: v dot v equals (sum over i of v_i hat alpha_i) dot (sum over j of v_j hat alpha_j), which equals the sum over i and j of v_i v_j hat alpha_i dot hat alpha_j.

We can write this as the matrix product v transpose Q v, that is, the row vector (v_1, ..., v_n) times Q times the column vector (v_1, ..., v_n), where Q is the matrix whose i j entry is Q_{i j} equals hat alpha_i dot hat alpha_j.
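To make this concrete, here is a small numerical sketch of the identity v dot v equals v transpose Q v. The explicit coordinates for the simple roots of A_2 are a hypothetical choice made for illustration (any realization of the root system would do):

```python
import numpy as np

# Hypothetical concrete choice: the two simple roots of A_2 in R^2,
# both of unit length, at an angle of 120 degrees.
alpha = [np.array([1.0, 0.0]),
         np.array([-0.5, np.sqrt(3) / 2])]
hat = [a / np.linalg.norm(a) for a in alpha]  # hat alpha_i = alpha_i / |alpha_i|

# Q_{ij} = hat alpha_i . hat alpha_j
Q = np.array([[np.dot(hat[i], hat[j]) for j in range(2)] for i in range(2)])

# Check v . v == v^T Q v for a sample coefficient vector (v_1, v_2) = (2, 3)
coeffs = np.array([2.0, 3.0])
v = sum(c * h for c, h in zip(coeffs, hat))
print(np.dot(v, v))        # both prints give (approximately) 7
print(coeffs @ Q @ coeffs)
```

Both quantities agree, as the computation above predicts.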

### The entries of Q

The diagonal entries of Q are 1s, because the hat alpha_i are unit vectors. The off-diagonal entries of Q are nonpositive, because alpha dot beta is less than or equal to zero for simple roots alpha and beta as we saw last time.

Moreover, Q_{i j} equals hat alpha_i dot hat alpha_j, which equals alpha_i dot alpha_j over (square root of alpha_i dot alpha_i times square root of alpha_j dot alpha_j), which equals minus the square root of (alpha_i dot alpha_j over alpha_i dot alpha_i times alpha_i dot alpha_j over alpha_j dot alpha_j).

Here, we're taking the negative square root because we know that Q_{i j} is nonpositive. Recall from before that n_{alpha_i alpha_j} equals 2 alpha_i dot alpha_j over alpha_i dot alpha_i and n_{alpha_j alpha_i} equals 2 alpha_i dot alpha_j over alpha_j dot alpha_j, so Q_{i j} equals minus a half times the square root of n_{alpha_i alpha_j} n_{alpha_j alpha_i}.

We showed that n_{alpha_i alpha_j} times n_{alpha_j alpha_i} equals 4 cos squared phi, which is 0, 1, 2 or 3, where phi is the angle between alpha_i and alpha_j. Therefore the off-diagonal entries of Q can only be 0, minus a half, minus 1 over square root of 2, or minus root 3 over 2.
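As a quick sanity check (a tiny illustrative script, not part of the proof), one can tabulate minus a half times the square root of the product for each of the four allowed values:

```python
import numpy as np

# n_{alpha_i alpha_j} n_{alpha_j alpha_i} = 4 cos^2(phi) can only be 0, 1, 2 or 3,
# so Q_{ij} = -(1/2) sqrt(that product) takes exactly four values:
# 0, -1/2, -1/sqrt(2), -sqrt(3)/2.
for product in range(4):
    print(product, -np.sqrt(product) / 2)
```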

### Q makes sense for any Dynkin-like graph

Observe that this matrix Q makes sense for any Dynkin graph (i.e. a graph where every pair of nodes is connected by 0, 1, 2 or 3 edges), whether it's on our list in the theorem or not: we just write down a matrix whose diagonal entries are 1s and whose i j entry is 0, minus a half, minus 1 over root 2, or minus root 3 over 2 according to whether the number of edges connecting vertex i and vertex j is 0, 1, 2 or 3.
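A short script along these lines builds Q from any such graph (the function name and the edge-dictionary encoding are our own conventions, not from the text):

```python
import numpy as np

def dynkin_Q(n, edges):
    """Matrix Q for a Dynkin graph on vertices 0, ..., n-1.

    `edges` maps an unordered pair (i, j) to the number of edges
    joining those vertices (0, 1, 2 or 3).  Q has 1s on the diagonal,
    and Q_ij = -sqrt(m)/2 when i and j are joined by m edges, which
    gives exactly the values 0, -1/2, -1/sqrt(2), -sqrt(3)/2.
    """
    Q = np.eye(n)
    for (i, j), m in edges.items():
        Q[i, j] = Q[j, i] = -np.sqrt(m) / 2
    return Q

# Two vertices joined by a single edge (the A_2 diagram):
print(dynkin_Q(2, {(0, 1): 1}))
# Two vertices joined by a triple edge (the G_2 diagram); its
# eigenvalues 1 -/+ sqrt(3)/2 are both positive.
print(np.linalg.eigvalsh(dynkin_Q(2, {(0, 1): 3})))
```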

Example:

The root system A_2 has Q equals 1, minus a half; minus a half, 1.

Example:

The triangle Dynkin diagram shown below (three vertices, all connected by single edges) gives the matrix Q equals 1, minus a half, minus a half; minus a half, 1, minus a half; minus a half, minus a half, 1.

### Q is positive definite only for graphs on our list

Proposition:

Start with any Dynkin graph. The corresponding matrix Q is positive definite (i.e. v transpose Q v is positive unless v = 0) if and only if the Dynkin graph is on our list from the statement of the classification theorem.

We will not prove this: it will be a fun in-depth project for those who want it. I'll just sketch how it goes.

Example:

Consider the matrix Q associated to the triangular Dynkin graph above. We have row vector (1, 1, 1) times Q times column vector (1, 1, 1) equals 3 times 1 plus 6 times minus a half, which is 0, so Q is not positive definite.
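We can verify this numerically (a quick sketch; the matrix is just the one written in the example above):

```python
import numpy as np

# Q for the triangle graph: three vertices, each pair joined by one edge.
Q = np.array([[1.0, -0.5, -0.5],
              [-0.5, 1.0, -0.5],
              [-0.5, -0.5, 1.0]])

v = np.array([1.0, 1.0, 1.0])
print(v @ Q @ v)              # 0.0, so Q is not positive definite
print(np.linalg.eigvalsh(Q))  # eigenvalues 0, 3/2, 3/2 (up to rounding)
```

Every row of Q sums to zero, so the all-ones vector is in the kernel of Q, which is another way of seeing that Q fails to be positive definite.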

Remark:

Usually, people multiply Q by 2 to get rid of all these factors of a half.

If you have a graph which contains a subgraph whose matrix Q is not positive definite, then the matrix of your graph also fails to be positive definite. (Exercise)

Then you need to find a suitably large collection of graphs whose matrix is not positive definite. For example, we've seen that the triangle gives a matrix which is not positive definite; a similar argument shows that the matrix associated to any closed polygonal graph is not positive definite. This means that our Dynkin graph cannot have any cycles.
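The all-ones trick from the triangle example works for any closed polygon; a quick numerical check (`cycle_Q` is a hypothetical helper name, not from the text):

```python
import numpy as np

def cycle_Q(n):
    """Q for the n-cycle: vertex i joined to i+1 (mod n) by a single edge."""
    Q = np.eye(n)
    for i in range(n):
        j = (i + 1) % n
        Q[i, j] = Q[j, i] = -0.5
    return Q

# Each row of Q sums to 1 - 1/2 - 1/2 = 0, so the all-ones vector v
# satisfies Q v = 0, hence v^T Q v = 0 for every cycle length.
for n in range(3, 8):
    v = np.ones(n)
    print(n, v @ cycle_Q(n) @ v)   # always 0.0
```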

Here are some more graphs whose matrix is not positive definite:

These are all obtained by adding extra vertices to the graphs from the statement of the theorem. The strategy is to make a big list of "bad graphs" tilde A_n, tilde B_n, ..., tilde G_2 and then show that any graph which doesn't contain a bad graph must be one of A_n, B_n, ..., G_2.
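For instance, one of the bad graphs is tilde D_4: a central vertex joined to four outer vertices by single edges. A numerical sketch of why its matrix fails to be positive definite (the coefficient vector with 2 at the centre and 1 at each leaf is a null vector of Q):

```python
import numpy as np

# Q for the bad graph tilde D_4: vertex 0 is the centre, joined to
# vertices 1, 2, 3, 4 by single edges.
Q = np.eye(5)
for i in range(1, 5):
    Q[0, i] = Q[i, 0] = -0.5

# Coefficients 2 at the centre, 1 at each leaf: v^T Q v = 0,
# so Q is not positive definite.
v = np.array([2.0, 1.0, 1.0, 1.0, 1.0])
print(v @ Q @ v)   # 0.0
```

By the subgraph exercise above, any graph containing tilde D_4 (or any other bad graph) then also fails to have a positive definite matrix.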