# New representations from old


We've now encountered four representations of $SU(2)$ and we've claimed there are irreps of every complex dimension. To see this, we will need some recipes for constructing new representations out of old representations.

## Direct sum

Here's one recipe we already know.

Definition:

Given two representations $R\colon G\to GL(V)$ and $S\colon G\to GL(W)$ , we can construct a representation $R\oplus S\colon G\to GL(V\oplus W)$ by setting $(R\oplus S)(g)=\begin{pmatrix}R(g)&0\\ 0&S(g)\end{pmatrix}$ (a block matrix).
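In coordinates, the block-matrix recipe is easy to implement. Here is a minimal numpy sketch (the helper name `direct_sum` is our own, not standard):

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix diag(A, B), i.e. (R ⊕ S)(g) when A = R(g), B = S(g)."""
    m, n = A.shape[0], B.shape[0]
    out = np.zeros((m + n, m + n), dtype=complex)
    out[:m, :m] = A   # R(g) acts on the V-block
    out[m:, m:] = B   # S(g) acts on the W-block
    return out
```

One can check numerically that this really is a homomorphism: `direct_sum(A1 @ A2, B1 @ B2)` agrees with `direct_sum(A1, B1) @ direct_sum(A2, B2)`.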

Remark:

This is of no use to us because we're looking for irreducible representations, and $R\oplus S$ is not irreducible unless $R$ or $S$ is the zero representation (because $R$ and $S$ are subrepresentations of $R\oplus S$).

## Tensor product

Definition:

Given two representations $R\colon G\to GL(V)$ and $S\colon G\to GL(W)$ , we can construct a representation $R\otimes S\colon G\to GL(V\otimes W)$ as follows.

• The vector space $V\otimes W$ is constructed by taking a basis $e_{1},\ldots,e_{m}$ of $V$ and a basis $f_{1},\ldots,f_{n}$ of $W$ and using the symbols $e_{i}\otimes f_{j}$ as a basis of $V\otimes W$ . This is $mn$ -dimensional. The vectors in $V\otimes W$ are things like $e_{1}\otimes f_{1}$ or $e_{1}\otimes f_{2}-\frac{1}{2}e_{3}\otimes f_{5}$ .

• $(R\otimes S)(g)$ is the linear map which acts as follows on tensors $v\otimes w$ : $(R\otimes S)(g)(v\otimes w)=(R(g)v)\otimes(S(g)w).$ It's enough to specify what it does on such tensors because all our basis vectors have this form.
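In matrix form, $(R\otimes S)(g)$ is the Kronecker product of the matrices $R(g)$ and $S(g)$. A short numpy sketch (assuming you are happy identifying tensors with Kronecker-product matrices):

```python
import numpy as np

def tensor_rep(A, B):
    """(R ⊗ S)(g) as a matrix, given A = R(g) and B = S(g).
    np.kron orders the basis as e_1⊗f_1, e_1⊗f_2, ..., e_m⊗f_n,
    which matches the convention in the text."""
    return np.kron(A, B)
```

The homomorphism property $(R\otimes S)(gh)=(R\otimes S)(g)(R\otimes S)(h)$ corresponds to the Kronecker-product identity `np.kron(A1 @ A2, B1 @ B2) == np.kron(A1, B1) @ np.kron(A2, B2)`, which you can verify numerically on random matrices.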

Example:

Take $R=S$ to be the standard representation of $SU(2)$ (so $V=W=\mathbf{C}^{2}$ ). Pick the standard basis $e_{1}$ , $e_{2}$ of $V$ and $f_{1}$ , $f_{2}$ of $W$ (these are the same bases, I'm just keeping different letters for clarity). A basis for $\mathbf{C}^{2}\otimes\mathbf{C}^{2}$ is given by $e_{1}\otimes f_{1}$ , $e_{1}\otimes f_{2}$ , $e_{2}\otimes f_{1}$ , $e_{2}\otimes f_{2}$ . Let $g=\begin{pmatrix}a&b\\ -\bar{b}&\bar{a}\end{pmatrix}\in SU(2)$ . For the standard representation, $R(g)=S(g)=\begin{pmatrix}a&b\\ -\bar{b}&\bar{a}\end{pmatrix}$ . This means $R(g)e_{1}=ae_{1}-\bar{b}e_{2}$ and $R(g)e_{2}=be_{1}+\bar{a}e_{2}$ (similarly for $S$ with $e$ 's replaced by $f$ 's).

Let's calculate: $(R\otimes S)(g)(e_{1}\otimes f_{1})=(R(g)e_{1})\otimes(S(g)f_{1})=(ae_{1}-\bar{b}e_{2})\otimes(af_{1}-\bar{b}f_{2}).$

Multiplying out the brackets, we get: $a^{2}\,e_{1}\otimes f_{1}-a\bar{b}\,e_{1}\otimes f_{2}-a\bar{b}\,e_{2}\otimes f_{1}+\bar{b}^{2}\,e_{2}\otimes f_{2}.$

This means that the first column of the 4-by-4 matrix $(R\otimes S)(g)$ is $\begin{pmatrix}a^{2}&?&?&?\\ -a\bar{b}&?&?&?\\ -a\bar{b}&?&?&?\\ \bar{b}^{2}&?&?&?\end{pmatrix}.$

You can figure out the second column by computing $(R\otimes S)(g)(e_{1}\otimes f_{2})$ , etc. (This will be an exercise).
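As a sanity check on the first column (without spoiling the full exercise), a few lines of numpy confirm the computation above for a sample choice of $a$ and $b$:

```python
import numpy as np

# A sample SU(2) element: a, b with |a|^2 + |b|^2 = 1.
a, b = 0.6 + 0.0j, 0.8j
g = np.array([[a, b], [-np.conj(b), np.conj(a)]])

# (R ⊗ S)(g) in the basis e1⊗f1, e1⊗f2, e2⊗f1, e2⊗f2.
RS = np.kron(g, g)

# The first column should be (a^2, -a·conj(b), -a·conj(b), conj(b)^2).
expected = np.array([a**2, -a * np.conj(b), -a * np.conj(b), np.conj(b)**2])
assert np.allclose(RS[:, 0], expected)
```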

Remark:

It will turn out that the tensor product does not usually give an irreducible representation (indeed we will have a lot of fun later decomposing tensor products into subrepresentations), but it is a much more interesting recipe for representations than direct summation.

## Symmetric powers

Given a representation $R\colon G\to GL(V)$ , take $R^{\otimes n}\colon G\to GL(V^{\otimes n})$ . For $n\geq 2$ this is not irreducible: we will produce a subrepresentation consisting of symmetric tensors.

Example:

Take $V=\mathbf{C}^{2}$ to be the standard representation and $n=2$ . We found a basis $e_{1}\otimes e_{1}$ , $e_{1}\otimes e_{2}$ , $e_{2}\otimes e_{1}$ , $e_{2}\otimes e_{2}$ of $(\mathbf{C}^{2})^{\otimes 2}$ . The tensors $e_{1}\otimes e_{1}$ and $e_{2}\otimes e_{2}$ are symmetric in the sense that when I switch the two factors I get the same tensor back. The other two basis tensors are not, but the combination $e_{1}\otimes e_{2}+e_{2}\otimes e_{1}$ is symmetric because if I switch all factors in all monomials then I get $e_{2}\otimes e_{1}+e_{1}\otimes e_{2}$ , which is the same combination back again. By contrast, $e_{1}\otimes e_{2}-e_{2}\otimes e_{1}$ is antisymmetric: we'll talk more about that at a later date.

We will now see that the symmetric tensors span a subrepresentation, called $\mathrm{Sym}^{n}(V)$ . Here's the idea. Given any tensor, I can produce something symmetric in a canonical way, as illustrated by the following example.

Example:

Suppose $V=\mathbf{C}^{3}$ with basis $e_{1}$ , $e_{2}$ , $e_{3}$ . Consider $V^{\otimes 3}$ . To symmetrise $e_{1}\otimes e_{2}\otimes e_{3}\in V^{\otimes 3}$ , we take $\frac{1}{6}(e_{1}\otimes e_{2}\otimes e_{3}+e_{2}\otimes e_{1}\otimes e_{3}+e_{1}\otimes e_{3}\otimes e_{2}+e_{3}\otimes e_{2}\otimes e_{1}+e_{2}\otimes e_{3}\otimes e_{1}+e_{3}\otimes e_{1}\otimes e_{2}).$

This is just summing all six permutations of the three factors and dividing by the number of permutations (in this case 6) so that if we start with a symmetric tensor then we get the same tensor back.

Definition:

Define the averaging map $\mathrm{Av}\colon V^{\otimes n}\to V^{\otimes n}$ by $\mathrm{Av}(v_{1}\otimes\cdots\otimes v_{n})=\frac{1}{n!}\sum_{\sigma\in S_{n}}v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(n)}$ . In other words, you take all possible permutations of the factors and then take the average of these.
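The averaging map is easy to compute: represent a tensor in $V^{\otimes n}$ as an $n$-dimensional array of shape $(d,\ldots,d)$, so that permuting tensor factors is permuting array axes. A sketch (the helper name `average` is ours):

```python
import itertools
import numpy as np

def average(T):
    """Av on V^{⊗n}: T is an array of shape (d, ..., d) with n axes;
    average T over all n! permutations of its axes."""
    n = T.ndim
    perms = list(itertools.permutations(range(n)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

# The example above: symmetrise e1 ⊗ e2 ⊗ e3 in (C^3)^{⊗3}.
T = np.zeros((3, 3, 3))
T[0, 1, 2] = 1.0
S = average(T)
# Each of the six permutations of the indices (0, 1, 2) gets coefficient 1/6.
assert np.isclose(S[0, 1, 2], 1 / 6) and np.isclose(S[2, 0, 1], 1 / 6)
```

Note that `average` applied twice gives the same answer as applying it once: starting from a symmetric tensor returns that tensor, exactly as the text says.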

Definition:

$\mathrm{Sym}^{n}(V)\subset V^{\otimes n}$ is the image of the averaging map, i.e. the set of all averaged tensors, which are symmetric by construction.

Lemma:

If $R\colon G\to GL(V)$ is a representation of $G$ then $\mathrm{Sym}^{n}(V)$ is a subrepresentation of $V^{\otimes n}$ .

Proof:

We will first show that the averaging map is a morphism of representations $V^{\otimes n}\to V^{\otimes n}$ . Then we'll show that the image of a morphism is a subrepresentation.

Recall that a morphism of representations is a map $L\colon V\to W$ such that $L\circ R(g)=S(g)\circ L$ for all $g\in G$ . We are therefore trying to show that $\mathrm{Av}(R(g)^{\otimes n}(v_{1}\otimes\cdots\otimes v_{n}))=R(g)^{\otimes n}\mathrm{Av}(v_{1}\otimes\cdots\otimes v_{n}).$

From the definition of the tensor product and the averaging map, we have $\mathrm{Av}(R(g)^{\otimes n}(v_{1}\otimes\cdots\otimes v_{n}))=\mathrm{Av}(R(g)v_{1}\otimes\cdots\otimes R(g)v_{n})=\frac{1}{n!}\sum_{\sigma\in S_{n}}R(g)v_{\sigma(1)}\otimes\cdots\otimes R(g)v_{\sigma(n)}.$

On the other hand, $R(g)^{\otimes n}\mathrm{Av}(v_{1}\otimes\cdots\otimes v_{n})=R(g)^{\otimes n}\frac{1}{n!}\sum_{\sigma\in S_{n}}v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(n)}=\frac{1}{n!}\sum_{\sigma\in S_{n}}R(g)v_{\sigma(1)}\otimes\cdots\otimes R(g)v_{\sigma(n)}.$

These are identical, so we see that the averaging map is a morphism of representations.
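This morphism property can also be checked numerically: averaging then acting by $R(g)^{\otimes n}$ agrees with acting first and averaging second. A self-contained sketch for $n=3$ and the standard representation of $SU(2)$ (the helpers `average` and `act` are ours):

```python
import itertools
import numpy as np

def average(T):
    """Av: average a tensor (array of shape (d, ..., d)) over all axis permutations."""
    n = T.ndim
    perms = list(itertools.permutations(range(n)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

def act(g, T):
    """Apply g^{⊗3} to a tensor T in (C^2)^{⊗3}: g on each factor."""
    return np.einsum('ai,bj,ck,ijk->abc', g, g, g, T)

# A sample SU(2) element and a random tensor in (C^2)^{⊗3}.
a, b = 0.6, 0.8j
g = np.array([[a, b], [-np.conj(b), np.conj(a)]])
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2)) + 1j * rng.standard_normal((2, 2, 2))

# Av ∘ g^{⊗3} = g^{⊗3} ∘ Av: the averaging map is a morphism.
assert np.allclose(average(act(g, T)), act(g, average(T)))
```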

We will now prove a separate lemma telling us that the image of a morphism of representations is a subrepresentation, which will complete the proof.

Lemma:

If $L\colon V\to W$ is a morphism of representations from $R\colon G\to GL(V)$ to $S\colon G\to GL(W)$ then its image $\{L(v)\in W\ :\ v\in V\}$ is a subrepresentation of $W$ .

Proof:

Take $L(v)$ in the image of $L$ . Apply $S(g)$ to it. We have $S(g)L(v)=L(R(g)v)$ because $L$ is a morphism, so $S(g)L(v)$ is in the image of $L$ . This tells us that the image of $L$ is a subrepresentation.

Remark:

In the case of the symmetric power this is telling us that $\mathrm{Sym}^{n}(V)$ is a subrepresentation of $V^{\otimes n}$ .

Example:

Take $V=\mathbf{C}^{2}$ , the standard representation of $SU(2)$ . Consider $\mathrm{Sym}^{3}(\mathbf{C}^{2})$ . By averaging the basis elements of $V^{\otimes 3}$ we end up with a basis for $\mathrm{Sym}^{3}(\mathbf{C}^{2})$ : $e_{1}\otimes e_{1}\otimes e_{1}$ , $\frac{1}{3}(e_{1}\otimes e_{1}\otimes e_{2}+e_{1}\otimes e_{2}\otimes e_{1}+e_{2}\otimes e_{1}\otimes e_{1})$ , $\frac{1}{3}(e_{1}\otimes e_{2}\otimes e_{2}+e_{2}\otimes e_{1}\otimes e_{2}+e_{2}\otimes e_{2}\otimes e_{1})$ , $e_{2}\otimes e_{2}\otimes e_{2}$ .

Remark:

This is a 4-dimensional representation, and it will turn out to be irreducible. You can label the elements here with polynomials: $e_{1}^{3}$ , $e_{1}^{2}e_{2}$ , $e_{1}e_{2}^{2}$ , $e_{2}^{3}$ . Given a homogeneous monomial $M$ in $e_{1}$ and $e_{2}$ of degree $n$ , there's a unique way to write down a symmetric tensor whose monomials reduce to $M$ when you remove the tensor symbols. You can therefore think of $\mathrm{Sym}^{n}(V)$ as the space of homogeneous polynomials of degree $n$ in the basis elements of $V$ .

Remark:

It's an exercise to see that $\mathrm{Sym}^{n}(\mathbf{C}^{2})$ is $(n+1)$ -dimensional. These will turn out to be our irreducible representations of $SU(2)$ .
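The dimension count can be verified numerically (a check, not a proof): average every basis tensor of $(\mathbf{C}^{d})^{\otimes n}$ and compute the dimension of the span of the results. The helper names below are ours.

```python
import itertools
import numpy as np

def average(T):
    """Av: average a tensor over all permutations of its axes."""
    n = T.ndim
    perms = list(itertools.permutations(range(n)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

def sym_dim(d, n):
    """dim Sym^n(C^d), computed as the rank of the image of Av on basis tensors."""
    vecs = []
    for idx in itertools.product(range(d), repeat=n):
        T = np.zeros((d,) * n)
        T[idx] = 1.0                # the basis tensor e_{i1} ⊗ ... ⊗ e_{in}
        vecs.append(average(T).ravel())
    return int(np.linalg.matrix_rank(np.array(vecs)))
```

For $d=2$ this returns $n+1$, matching the exercise; for instance `sym_dim(2, 3)` gives the 4-dimensional representation above.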

## Pre-class exercise

Exercise:

Let $R\colon SU(2)\to GL(2,\mathbf{C})$ and $S\colon SU(2)\to GL(2,\mathbf{C})$ be two copies of the standard representation. Figure out the full 4-by-4 matrix $(R\otimes S)\begin{pmatrix}a&b\\ -\bar{b}&\bar{a}\end{pmatrix}$ .