New representations from old

We've now encountered four representations of SU(2) and we've claimed there are irreps of every complex dimension. To see this, we will need some recipes for constructing new representations out of old representations.

Direct sum

Here's one recipe we already know.

Definition:

Given two representations R: G → GL(V) and S: G → GL(W), we can construct a representation R ⊕ S: G → GL(V ⊕ W) by setting

(R ⊕ S)(g) = ( R(g)   0   )
             (  0    S(g) )

(a block matrix).
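In matrix terms this is just placing R(g) and S(g) as diagonal blocks of a bigger matrix. Here is a minimal numpy sketch (the helper name direct_sum is mine):

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix with A in the top-left and B in the bottom-right."""
    m, n = A.shape[0], B.shape[0]
    out = np.zeros((m + n, m + n), dtype=complex)
    out[:m, :m] = A
    out[m:, m:] = B
    return out

A = np.array([[0, 1], [1, 0]], dtype=complex)   # R(g), say
B = np.eye(3, dtype=complex)                    # S(g), say
M = direct_sum(A, B)                            # (R ⊕ S)(g), a 5x5 block matrix
# The construction respects products, as a homomorphism must:
assert np.allclose(direct_sum(A @ A, B @ B), M @ M)
```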

Remark:

This is no use for us because we're looking for irreducible representations, and this is not irreducible unless R or S is the zero-representation (because R and S are subrepresentations).

Tensor product

Definition:

Given two representations R: G → GL(V) and S: G → GL(W), we can construct a representation R ⊗ S: G → GL(V ⊗ W) as follows.

  • The vector space V ⊗ W is constructed by taking a basis e1, …, em of V and a basis f1, …, fn of W and using the symbols ei ⊗ fj as a basis of V ⊗ W. This is mn-dimensional. The vectors in V ⊗ W are things like e1 ⊗ f1 or e1 ⊗ f2 - (1/2) e3 ⊗ f5.

  • (R ⊗ S)(g) is the linear map which acts as follows on tensors v ⊗ w: (R ⊗ S)(g)(v ⊗ w) = (R(g)v) ⊗ (S(g)w). It's enough to specify what it does on such tensors because all our basis vectors have this form.
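In the ordered basis e1 ⊗ f1, …, em ⊗ fn, the matrix of (R ⊗ S)(g) is the Kronecker product of the matrices of R(g) and S(g); numpy's np.kron uses exactly this ordering. A quick sketch of the defining property, with random matrices standing in for R(g) and S(g):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))   # R(g) on V, with dim V = 3
S = rng.standard_normal((2, 2))   # S(g) on W, with dim W = 2
v = rng.standard_normal(3)
w = rng.standard_normal(2)

RS = np.kron(R, S)                # (R ⊗ S)(g), a 6x6 matrix
# v ⊗ w in the basis e_i ⊗ f_j (i outer, j inner) is np.kron(v, w), and
# (R ⊗ S)(g)(v ⊗ w) = (R(g)v) ⊗ (S(g)w):
assert np.allclose(RS @ np.kron(v, w), np.kron(R @ v, S @ w))
```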

Example:

Take R = S to be the standard representation of SU(2) (so V = W = 𝐂²). Pick the standard basis e1, e2 of V and f1, f2 of W (these are the same bases, I'm just keeping different letters for clarity). A basis for 𝐂² ⊗ 𝐂² is given by e1 ⊗ f1, e1 ⊗ f2, e2 ⊗ f1, e2 ⊗ f2. Let

g = ( a    b )
    ( -b̄   ā ) ∈ SU(2).

For the standard representation, R(g) = S(g) = g. This means R(g)e1 = a e1 - b̄ e2 and R(g)e2 = b e1 + ā e2 (similarly for S with e's replaced by f's).

Let's calculate (R ⊗ S)(g)(e1 ⊗ f1) = (R(g)e1) ⊗ (S(g)f1) = (a e1 - b̄ e2) ⊗ (a f1 - b̄ f2).

Multiplying out brackets, we get: a² e1 ⊗ f1 - ab̄ e1 ⊗ f2 - ab̄ e2 ⊗ f1 + b̄² e2 ⊗ f2.

This means that the first column of the 4-by-4 matrix (R ⊗ S)(g) is

( a²    ? ? ? )
( -ab̄   ? ? ? )
( -ab̄   ? ? ? )
( b̄²    ? ? ? ).

You can figure out the second column by computing (R ⊗ S)(g)(e1 ⊗ f2), etc. (This will be an exercise.)
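This column can be sanity-checked by machine: a sketch, picking a and b with |a|² + |b|² = 1 so that g really lies in SU(2).

```python
import numpy as np

a, b = 0.6, 0.8j                       # |a|^2 + |b|^2 = 1
g = np.array([[a, b], [-np.conj(b), np.conj(a)]])
M = np.kron(g, g)                      # (R ⊗ S)(g) in the basis e1⊗f1, e1⊗f2, e2⊗f1, e2⊗f2

first_col = M[:, 0]
expected = np.array([a**2, -a * np.conj(b), -a * np.conj(b), np.conj(b)**2])
assert np.allclose(first_col, expected)
```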

Remark:

It will turn out that the tensor product does not usually give an irreducible representation (indeed we will have a lot of fun later decomposing tensor products into subrepresentations), but it is a much more interesting recipe for representations than direct summation.

Symmetric powers

Given a representation R: G → GL(V), take its n-th tensor power R^{⊗n}: G → GL(V^{⊗n}). This is not irreducible: we will produce a subrepresentation consisting of symmetric tensors.

Example:

Take V = 𝐂² to be the standard representation and n = 2. We found a basis e1 ⊗ e1, e1 ⊗ e2, e2 ⊗ e1, e2 ⊗ e2 of (𝐂²)^{⊗2}. The tensors e1 ⊗ e1 and e2 ⊗ e2 are symmetric in the sense that when I switch the two factors I get the same tensor back. The other two basis tensors are not, but the combination e1 ⊗ e2 + e2 ⊗ e1 is symmetric because if I switch all factors in all monomials then I get e2 ⊗ e1 + e1 ⊗ e2, which is the same combination back again. By contrast, e1 ⊗ e2 - e2 ⊗ e1 is antisymmetric: we'll talk more about that at a later date.
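For n = 2 this is easy to check by machine: storing a tensor Σ Tij ei ⊗ ej as its 2×2 coefficient array T, switching the two factors is transposing T. A small numpy sketch:

```python
import numpy as np

e1, e2 = np.eye(2)
tensor = np.multiply.outer              # (v ⊗ w)_ij = v_i w_j

sym = tensor(e1, e2) + tensor(e2, e1)   # e1⊗e2 + e2⊗e1
alt = tensor(e1, e2) - tensor(e2, e1)   # e1⊗e2 - e2⊗e1
assert np.allclose(sym.T, sym)          # symmetric: unchanged by the switch
assert np.allclose(alt.T, -alt)         # antisymmetric: changes sign
```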

We will now see that the symmetric tensors span a subrepresentation, called Symⁿ(V). Here's the idea. Given any tensor, I can produce something symmetric in a canonical way, as illustrated by the following example.

Example:

Suppose V = 𝐂³ with basis e1, e2, e3. Consider V^{⊗3}. To symmetrise e1 ⊗ e2 ⊗ e3 ∈ V^{⊗3}, we take

(1/6)(e1 ⊗ e2 ⊗ e3 + e2 ⊗ e1 ⊗ e3 + e1 ⊗ e3 ⊗ e2 + e3 ⊗ e2 ⊗ e1 + e2 ⊗ e3 ⊗ e1 + e3 ⊗ e1 ⊗ e2).

This is just summing all six permutations of the three factors and dividing by the number of permutations (in this case 6) so that if we start with a symmetric tensor then we get the same tensor back.

Definition:

Define the averaging map Av: V^{⊗n} → V^{⊗n} by

Av(v1 ⊗ ⋯ ⊗ vn) = (1/n!) Σ_{σ ∈ Sₙ} v_{σ(1)} ⊗ ⋯ ⊗ v_{σ(n)}.

In other words, you take all possible permutations of the factors and then take the average of these.
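If we store a tensor in V^{⊗n} as an n-dimensional coefficient array, permuting the tensor factors is permuting the array's axes, so Av is a short function. A numpy sketch (the name Av follows the definition above):

```python
import numpy as np
from itertools import permutations
from math import factorial

def Av(T):
    """Average of T over all permutations of its tensor factors (= array axes)."""
    n = T.ndim
    return sum(np.transpose(T, p) for p in permutations(range(n))) / factorial(n)

e1, e2 = np.eye(2)
# Av(e1 ⊗ e2) = (1/2)(e1 ⊗ e2 + e2 ⊗ e1):
T = Av(np.multiply.outer(e1, e2))
assert np.allclose(T, (np.multiply.outer(e1, e2) + np.multiply.outer(e2, e1)) / 2)
# A symmetric tensor is sent to itself, so averaging twice changes nothing:
assert np.allclose(Av(T), T)
```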

Definition:

Symⁿ(V) ⊆ V^{⊗n} is the image of the averaging map, i.e. the set of all averaged tensors, which are symmetric by construction.

Lemma:

If R: G → GL(V) is a representation of G then Symⁿ(V) is a subrepresentation of V^{⊗n}.

Proof:

We will first show that the averaging map is a morphism of representations V^{⊗n} → V^{⊗n}. Then we'll show that the image of a morphism is a subrepresentation.

Recall that a morphism of representations is a map L: V → W such that L ∘ R(g) = S(g) ∘ L for all g ∈ G. We are therefore trying to show that

Av(R(g)^{⊗n}(v1 ⊗ ⋯ ⊗ vn)) = R(g)^{⊗n} Av(v1 ⊗ ⋯ ⊗ vn).

From the definition of the tensor product and the averaging map, we have

Av(R(g)^{⊗n}(v1 ⊗ ⋯ ⊗ vn)) = Av(R(g)v1 ⊗ ⋯ ⊗ R(g)vn) = (1/n!) Σ_{σ ∈ Sₙ} R(g)v_{σ(1)} ⊗ ⋯ ⊗ R(g)v_{σ(n)}.

On the other hand,

R(g)^{⊗n} Av(v1 ⊗ ⋯ ⊗ vn) = R(g)^{⊗n} (1/n!) Σ_{σ ∈ Sₙ} v_{σ(1)} ⊗ ⋯ ⊗ v_{σ(n)} = (1/n!) Σ_{σ ∈ Sₙ} R(g)v_{σ(1)} ⊗ ⋯ ⊗ R(g)v_{σ(n)}.

These are identical, so we see that the averaging map is a morphism of representations.
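For n = 2 the morphism property can be verified by machine: a tensor Σ Tij ei ⊗ ej is a 2×2 coefficient array T, on which R(g)^{⊗2} acts as T ↦ g T gᵀ and the averaging map acts as T ↦ (T + Tᵀ)/2. A sketch with a random tensor:

```python
import numpy as np

a, b = 0.6, 0.8j                       # |a|^2 + |b|^2 = 1, so g ∈ SU(2)
g = np.array([[a, b], [-np.conj(b), np.conj(a)]])

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2))        # a random tensor in V ⊗ V, V = C^2
av = lambda T: (T + T.T) / 2           # the averaging map for n = 2
# Av ∘ R(g)^{⊗2} = R(g)^{⊗2} ∘ Av:
assert np.allclose(av(g @ T @ g.T), g @ av(T) @ g.T)
```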

We will now prove a separate lemma telling us that the image of a morphism of representations is a subrepresentation, which will complete the proof.

Lemma:

If L: V → W is a morphism of representations from R: G → GL(V) to S: G → GL(W) then its image {L(v) ∈ W : v ∈ V} is a subrepresentation of W.

Proof:

Take L(v) in the image of L . Apply S(g) to it. We have S(g)L(v)=L(R(g)v) because L is a morphism, so S(g)L(v) is in the image of L . This tells us that the image of L is a subrepresentation.

Remark:

In the case of the symmetric power this is telling us that Symⁿ(V) is a subrepresentation of V^{⊗n}.

Example:

Take V = 𝐂², the standard representation of SU(2). Consider Sym³(𝐂²). By averaging the basis elements of V^{⊗3} we end up with a basis for Sym³(𝐂²):

e1 ⊗ e1 ⊗ e1,
(1/3)(e1 ⊗ e1 ⊗ e2 + e1 ⊗ e2 ⊗ e1 + e2 ⊗ e1 ⊗ e1),
(1/3)(e1 ⊗ e2 ⊗ e2 + e2 ⊗ e1 ⊗ e2 + e2 ⊗ e2 ⊗ e1),
e2 ⊗ e2 ⊗ e2.

Remark:

This is a 4-dimensional representation, and it will turn out to be irreducible. You can label the elements here with polynomials: e1³, e1²e2, e1e2², e2³. Given a homogeneous monomial M in e1 and e2 of degree n, there's a unique way to write down a symmetric tensor whose monomials reduce to M when you remove the tensor symbols. You can therefore think of Symⁿ(V) as the space of homogeneous polynomials of degree n in the basis elements of V.

Remark:

It's an exercise to see that Symⁿ(𝐂²) is (n+1)-dimensional. These will turn out to be our irreducible representations of SU(2).
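Since Symⁿ(V) is the image of the averaging map, its dimension is the rank of Av as a matrix on the 2ⁿ-dimensional space (𝐂²)^{⊗n}. A numpy sketch of this dimension count (the helper name av_matrix is mine):

```python
import numpy as np
from itertools import permutations
from math import factorial

def av_matrix(n, d=2):
    """Matrix of the averaging map on the d**n-dimensional space V^{⊗n}, V = C^d."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for p in permutations(range(n)):
        # Permutation operator: each column is a basis tensor (viewed as a
        # d x ... x d array, indexed by the last axis) with its factors permuted by p.
        P = np.eye(dim).reshape((d,) * n + (dim,))
        M += np.transpose(P, tuple(p) + (n,)).reshape(dim, dim)
    return M / factorial(n)

# dim Sym^n(C^2) = rank of the averaging map = n + 1:
for n in range(1, 5):
    assert np.linalg.matrix_rank(av_matrix(n)) == n + 1
```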

Pre-class exercise

Exercise:

Let R: SU(2) → GL(2, 𝐂) and S: SU(2) → GL(2, 𝐂) be two copies of the standard representation. Figure out the full 4-by-4 matrix

(R ⊗ S) ( a    b )
        ( -b̄   ā ).