New representations from old

We've now encountered four representations of SU(2) and we've claimed there are irreps of every complex dimension. To see this, we will need some recipes for constructing new representations out of old representations.

Direct sum

Here's one recipe we already know.

Definition:

Given two representations R from G to G L V and S from G to G L W, we can construct a representation R direct sum S from G to G L V direct sum W by setting R direct sum S of g equal to the block-diagonal matrix with diagonal blocks R of g and S of g.
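
If you like to see this in coordinates, here is a minimal numerical sketch (the matrices R_g and S_g below are made up purely for illustration): in a basis of V direct sum W obtained by concatenating a basis of V with a basis of W, R direct sum S of g is the block-diagonal matrix built out of R of g and S of g.

```python
import numpy as np

# Made-up matrices standing in for R(g) and S(g) for some fixed g:
# say R is 2-dimensional and S is 3-dimensional.
R_g = np.array([[0, -1],
                [1,  0]])
S_g = np.eye(3)

# (R ⊕ S)(g) acts on V ⊕ W by the block-diagonal matrix with blocks
# R(g) and S(g); the off-diagonal blocks are zero.
RS_g = np.block([
    [R_g,              np.zeros((2, 3))],
    [np.zeros((3, 2)), S_g],
])
print(RS_g)   # a 5x5 block-diagonal matrix
```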

Remark:

This is of no use to us because we're looking for irreducible representations, and a direct sum is not irreducible unless R or S is the zero-representation (because V and W sit inside V direct sum W as subrepresentations).

Tensor product

Definition:

Given two representations R from G to G L V and S from G to G L W, we can construct a representation R tensor S from G to G L V tensor W as follows.

  • The vector space V tensor W is constructed by taking a basis e_1 to e_m of V and a basis f_1 to f_n of W and using the symbols e_i tensor f_j as a basis of V tensor W. This is mn-dimensional. The vectors in V tensor W are things like e_1 tensor f_1 or e_1 tensor f_2 minus a half e_3 tensor f_5.

  • R tensor S of g is the linear map which acts as follows on tensors v tensor w: R tensor S of g applied to v tensor w equals (R of g applied to v) tensor (S of g applied to w) It's enough to specify what it does on such tensors because all our basis vectors have this form.
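
For the computationally inclined, here is a minimal sketch of this definition (the function name tensor_product_rep and the example matrices are invented for illustration): it builds the matrix of R tensor S of g column by column from the matrices of R of g and S of g, ordering the basis e_i tensor f_j with the index i varying slowest. With that ordering the result is the Kronecker product of the two matrices, which numpy computes as np.kron.

```python
import numpy as np

def tensor_product_rep(R_g, S_g):
    """Matrix of (R tensor S)(g), given the matrices R(g) and S(g).

    The basis of V tensor W is e_i ⊗ f_j, ordered with i varying slowest:
    e_1⊗f_1, ..., e_1⊗f_n, e_2⊗f_1, ..., e_m⊗f_n.  The column indexed by
    (i, j) holds the coefficients of R(g)e_i ⊗ S(g)f_j.
    """
    m, n = R_g.shape[0], S_g.shape[0]
    out = np.zeros((m * n, m * n), dtype=complex)
    for i in range(m):
        for j in range(n):
            # R(g)e_i is column i of R_g; S(g)f_j is column j of S_g.
            # Their tensor product, expanded in the basis e_k⊗f_l, is the
            # outer product of these two columns, flattened to a vector.
            out[:, i * n + j] = np.outer(R_g[:, i], S_g[:, j]).reshape(m * n)
    return out

# With this basis ordering, the result coincides with the Kronecker product:
R_g = np.array([[1, 2], [3, 4]], dtype=complex)   # a made-up "R(g)"
S_g = np.array([[5, 6], [7, 8]], dtype=complex)   # a made-up "S(g)"
assert np.allclose(tensor_product_rep(R_g, S_g), np.kron(R_g, S_g))
```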

Example:

Take R = S to be the standard representation of SU(2) (so V equals W equals C 2). Pick the standard basis e_1, e_2 of V and f_1, f_2 of W (these are the same bases, I'm just keeping different letters for clarity). A basis for C 2 tensor C 2 is given by e_1 tensor f_1, e_1 tensor f_2, e_2 tensor f_1, e_2 tensor f_2. Let g be the matrix alpha, beta, minus beta bar, alpha bar in SU(2). For the standard representation, R of g equals S of g equals the matrix alpha, beta, minus beta bar, alpha bar. This means R of g applied to e_1 equals alpha e_1 minus beta bar e_2 and R of g applied to e_2 equals beta e_1 plus alpha bar e_2 (similarly for S with e's replaced by f's).

Let's calculate: R tensor S of g applied to e_1 tensor f_1 equals (R of g applied to e_1) tensor (S of g applied to f_1), which equals (alpha e_1 minus beta bar e_2) tensored with (alpha f_1 minus beta bar f_2).

Multiplying out the brackets, we get: alpha squared e_1 tensor f_1 minus alpha beta bar e_1 tensor f_2 minus alpha beta bar e_2 tensor f_1 plus beta bar squared e_2 tensor f_2.

This means that the first column of the 4-by-4 matrix R tensor S of g is alpha squared, minus alpha beta bar, minus alpha beta bar, beta bar squared, with the other columns still to be computed.
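
As a numerical sanity check of this column (a sketch; the random choice of alpha and beta is just one convenient way to produce an element of SU(2)):

```python
import numpy as np

# A random element of SU(2): pick alpha, beta with |alpha|^2 + |beta|^2 = 1.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
x /= np.linalg.norm(x)
alpha, beta = x[0] + 1j * x[1], x[2] + 1j * x[3]

g = np.array([[alpha, beta],
              [-np.conj(beta), np.conj(alpha)]])

# In the basis e_1⊗f_1, e_1⊗f_2, e_2⊗f_1, e_2⊗f_2, the matrix of (R⊗S)(g)
# is the Kronecker product of R(g) = g with S(g) = g.
RtensorS_g = np.kron(g, g)

# Its first column should be (alpha^2, -alpha*beta_bar, -alpha*beta_bar, beta_bar^2).
expected = np.array([alpha**2,
                     -alpha * np.conj(beta),
                     -alpha * np.conj(beta),
                     np.conj(beta)**2])
assert np.allclose(RtensorS_g[:, 0], expected)
```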

You can figure out the second column by computing R tensor S of g applied to e_1 tensor f_2, etc. (This will be an exercise).

Remark:

It will turn out that the tensor product does not usually give an irreducible representation (indeed we will have a lot of fun later decomposing tensor products into subrepresentations), but it is a much more interesting recipe for representations than direct summation.

Symmetric powers

Given a representation R from G to G L V, take R tensored with itself to the power n from G to G L V tensored with itself to the power n. This is not irreducible (at least when n is at least 2 and V has dimension at least 2): we will produce a subrepresentation consisting of symmetric tensors.

Example:

Take V equals C 2 to be the standard representation and n = 2. We found a basis e_1 tensor e_1, e_1 tensor e_2, e_2 tensor e_1, e_2 tensor e_2 of C 2 tensor squared. The tensors e_1 tensor e_1 and e_2 tensor e_2 are symmetric in the sense that when I switch the two factors I get the same tensor back. The other two basis tensors are not, but the combination e_1 tensor e_2 plus e_2 tensor e_1 is symmetric, because if I switch the two factors in each term then I get e_2 tensor e_1 plus e_1 tensor e_2, which is the same combination back again. By contrast, e_1 tensor e_2 minus e_2 tensor e_1 is antisymmetric: we'll talk more about that at a later date.
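
Here is a quick computational check of this (a sketch, using the basis ordering above): the "switch the two factors" operation is itself a linear map on C 2 tensor C 2, and the symmetric and antisymmetric combinations are exactly the vectors it fixes and negates.

```python
import numpy as np

# The "switch the two factors" map on C^2 ⊗ C^2, in the basis
# e_1⊗e_1, e_1⊗e_2, e_2⊗e_1, e_2⊗e_2: it fixes the first and last
# basis vectors and swaps the middle two.
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

sym  = np.array([0, 1,  1, 0])   # e_1⊗e_2 + e_2⊗e_1
anti = np.array([0, 1, -1, 0])   # e_1⊗e_2 - e_2⊗e_1

assert np.array_equal(swap @ sym, sym)     # symmetric: unchanged
assert np.array_equal(swap @ anti, -anti)  # antisymmetric: changes sign
```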

We will now see that the symmetric tensors span a subrepresentation, called Sym n V. Here's the idea. Given any tensor, I can produce something symmetric in a canonical way, as illustrated by the following example.

Example:

Suppose V equals C 3 with basis e_1, e_2, e_3. Consider V tensor-cubed. To symmetrise e_1 tensor e_2 tensor e_3 in V tensor-cubed, we take (e_1 tensor e_2 tensor e_3 plus e_2 tensor e_1 tensor e_3 plus e_1 tensor e_3 tensor e_2 plus e_3 tensor e_2 tensor e_1 plus e_2 tensor e_3 tensor e_1 plus e_3 tensor e_1 tensor e_2) all divided by 6.

This is just summing all six permutations of the three factors and dividing by the number of permutations (in this case 6) so that if we start with a symmetric tensor then we get the same tensor back.

Definition:

Define the averaging map Av from (V to the nth tensor power) to (V to the nth tensor power) by Av of v_1 tensor dot dot dot tensor v_n equals one over n factorial times the sum over permutations sigma of v_(sigma of 1) tensor dot dot dot tensor v_(sigma of n). In other words, you take all possible permutations of the factors and then take the average of these.
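
Here is a minimal computational sketch of this definition (the function name averaging_map is invented for illustration): a tensor in the nth tensor power of C d is stored as an n-dimensional array of coefficients, permuting the tensor factors becomes a permutation of the array's axes, and Av is the average over all of these. As a sanity check, applying Av twice is the same as applying it once, because the output of Av is already symmetric.

```python
import numpy as np
from itertools import permutations
from math import factorial

def averaging_map(d, n):
    """Matrix of Av on the n-th tensor power of C^d.

    Basis tensors e_{i_1}⊗...⊗e_{i_n} are ordered lexicographically in the
    multi-index (i_1, ..., i_n).  A coefficient vector is reshaped into an
    n-dimensional array, so permuting the tensor factors is a transposition
    of the array's axes.
    """
    dim = d ** n
    columns = []
    for basis_vector in np.eye(dim):
        T = basis_vector.reshape((d,) * n)
        avg = sum(np.transpose(T, sigma) for sigma in permutations(range(n)))
        columns.append((avg / factorial(n)).reshape(dim))
    return np.column_stack(columns)

# Av is a projection: averaging an already-averaged tensor changes nothing.
Av = averaging_map(d=2, n=3)
assert np.allclose(Av @ Av, Av)
```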

Definition:

Sym n of V inside V to the nth tensor power is the image of the averaging map, i.e. the set of all averaged tensors, which are symmetric by construction.

Lemma:

If R from G to G L V is a representation of G then Sym n V is a subrepresentation of V to the nth tensor power.

Proof:

We will first show that the averaging map is a morphism of representations from V to the nth tensor power to itself. Then we'll show that the image of a morphism is a subrepresentation.

Recall that a morphism of representations is a linear map L from V to W such that L compose R of g equals S of g compose L for all g in G. We are therefore trying to show that Av applied to (R of g to the nth tensor power applied to v_1 tensor dot dot dot tensor v_n) equals R of g to the nth tensor power applied to Av of v_1 tensor dot dot dot tensor v_n.

From the definition of the tensor product and the averaging map, we have Av applied to (R of g to the nth tensor power applied to v_1 tensor dot dot dot tensor v_n) equals Av applied to (R of g v_1 tensor dot dot dot tensor R of g v_n), which equals one over n factorial times the sum over permutations sigma of R of g v_(sigma of 1) tensor dot dot dot tensor R of g v_(sigma of n).

On the other hand, R of g to the nth tensor power applied to Av of v_1 tensor dot dot dot tensor v_n equals R of g to the nth tensor power applied to one over n factorial times the sum over permutations sigma of v_(sigma of 1) tensor dot dot dot tensor v_(sigma of n), which equals one over n factorial times the sum over permutations sigma of R of g v_(sigma of 1) tensor dot dot dot tensor R of g v_(sigma of n).

These are identical, so we see that the averaging map is a morphism of representations.
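
Here is a numerical illustration of the morphism property for the standard representation of SU(2) and n = 3 (a sketch; the element g is chosen at random and Av is built as in the earlier sketch):

```python
import numpy as np
from itertools import permutations
from math import factorial

d, n = 2, 3
dim = d ** n

# The averaging map on (C^2)^{⊗3}, built as in the earlier sketch.
def average(vec):
    T = vec.reshape((d,) * n)
    avg = sum(np.transpose(T, sigma) for sigma in permutations(range(n)))
    return (avg / factorial(n)).reshape(dim)

Av = np.column_stack([average(e) for e in np.eye(dim)])

# A random element g of SU(2): |alpha|^2 + |beta|^2 = 1 by construction.
rng = np.random.default_rng(1)
x = rng.normal(size=4)
x /= np.linalg.norm(x)
alpha, beta = x[0] + 1j * x[1], x[2] + 1j * x[3]
g = np.array([[alpha, beta],
              [-np.conj(beta), np.conj(alpha)]])

# The matrix of R(g)^{⊗3} on (C^2)^{⊗3} is the triple Kronecker product.
Rg_cubed = np.kron(np.kron(g, g), g)

# The morphism property: Av composed with R(g)^{⊗3} equals R(g)^{⊗3} composed with Av.
assert np.allclose(Av @ Rg_cubed, Rg_cubed @ Av)
```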

We will now prove a separate lemma telling us that the image of a morphism of representations is a subrepresentation, which will complete the proof.

Lemma:

If L from V to W is a morphism of representations from R from G to G L V to S from G to G L W, then its image, the set of L of v in W as v runs over V, is a subrepresentation of W.

Proof:

Take L of v in the image of L. Apply S of g to it. We have S of g applied to L of v equals L of (R of g applied to v) because L is a morphism, so S of g applied to L of v is again in the image of L. Since the image of L is also a linear subspace of W (because L is linear), this tells us that the image of L is a subrepresentation.

Remark:

In the case of the symmetric power this is telling us that Sym n V is a subrepresentation of V to the nth tensor power.

Example:

Take V equals C 2, the standard representation of SU(2). Consider Sym 3 of C 2. By averaging the basis elements of V tensor-cubed we end up with a basis for Sym 3 C 2: e_1 tensor e_1 tensor e_1, a third of (e_1 tensor e_1 tensor e_2 plus e_1 tensor e_2 tensor e_1 plus e_2 tensor e_1 tensor e_1), a third of (e_1 tensor e_2 tensor e_2 plus e_2 tensor e_1 tensor e_2 plus e_2 tensor e_2 tensor e_1), e_2 tensor e_2 tensor e_2.
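
You can check this computationally (a sketch, rebuilding Av as in the earlier sketches): the image of Av on C 2 tensor-cubed is 4-dimensional, and averaging e_1 tensor e_1 tensor e_2 gives exactly the second basis vector listed above.

```python
import numpy as np
from itertools import permutations
from math import factorial

d, n = 2, 3
dim = d ** n   # (C^2)^{⊗3} is 8-dimensional

# The averaging map on (C^2)^{⊗3}, built as in the earlier sketches.
def average(vec):
    T = vec.reshape((d,) * n)
    avg = sum(np.transpose(T, sigma) for sigma in permutations(range(n)))
    return (avg / factorial(n)).reshape(dim)

Av = np.column_stack([average(e) for e in np.eye(dim)])

# The image of Av, i.e. Sym^3(C^2), is 4-dimensional.
assert np.linalg.matrix_rank(Av) == 4

# Averaging e_1⊗e_1⊗e_2 (basis index 0*4 + 0*2 + 1 = 1, reading the three
# factor indices as digits in base 2) gives
# (e_1⊗e_1⊗e_2 + e_1⊗e_2⊗e_1 + e_2⊗e_1⊗e_1)/3.
expected = np.zeros(dim)
expected[[1, 2, 4]] = 1 / 3   # indices of e_1⊗e_1⊗e_2, e_1⊗e_2⊗e_1, e_2⊗e_1⊗e_1
assert np.allclose(Av[:, 1], expected)
```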

Remark:

This is a 4-dimensional representation, and it will turn out to be irreducible. You can label these basis elements with monomials: e_1 cubed, e_1 squared times e_2, e_1 times e_2 squared, e_2 cubed. Given a homogeneous monomial M in e_1 and e_2 of degree n, there's a unique way to write down a symmetric tensor whose monomials reduce to M when you remove the tensor symbols. You can therefore think of Sym n V as the space of homogeneous polynomials of degree n in the basis elements of V.
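
Here is a small sketch of that last point (the index conventions are invented for illustration): "removing the tensor symbols" amounts to collecting, for each monomial e_1 to the a times e_2 to the b, the total coefficient of all index patterns with a factors of e_1 and b factors of e_2.

```python
import numpy as np
from itertools import product
from collections import Counter

# A symmetric tensor in (C^2)^{⊗3}, stored as its array of coefficients
# T[i_1, i_2, i_3], with indices 0 and 1 standing for e_1 and e_2.  This one
# is the averaged basis element (e_1⊗e_1⊗e_2 + e_1⊗e_2⊗e_1 + e_2⊗e_1⊗e_1)/3.
T = np.zeros((2, 2, 2))
T[0, 0, 1] = T[0, 1, 0] = T[1, 0, 0] = 1 / 3

# "Remove the tensor symbols": collect the coefficient of each monomial
# e_1^a e_2^b by summing over the index patterns with a zeros and b ones.
poly = Counter()
for idx in product(range(2), repeat=3):
    a, b = idx.count(0), idx.count(1)
    poly[(a, b)] += float(T[idx])

print({key: coeff for key, coeff in poly.items() if coeff != 0})
# only (2, 1) survives: this tensor corresponds to the monomial e_1^2 e_2
```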

Remark:

It's an exercise to see that Sym n C 2 is (n+1)-dimensional. These will turn out to be our irreducible representations of SU(2).

Pre-class exercise

Exercise:

Let R from SU(2) to G L 2 C and S from SU(2) to G L 2 C be two copies of the standard representation. Figure out the full 4-by-4 matrix R tensor S of a, b; minus b bar, a bar.