Lie groups and Lie algebras – 1

Sources:

Maggiore: “A Modern Introduction to Quantum Field Theory”

Robert Gilmore: “Lie Groups, Physics, and Geometry”

Ballentine: “Quantum Mechanics: A Modern Development”

These posts are going to have a lot of questions in them because I have a lot of questions about these topics.  Their relation to physics, for example, is something I’m not ready to state in my own words.

We start with a Lie group G and choose a representation for it.  Representation theory is something I’ll write about more when I have more to say; for now it suffices to imagine choosing a matrix group which is isomorphic to G.  We also have to choose charts for G to give it a differentiable structure, and these charts also parameterize the operators of the matrix group.  Following the notation of Maggiore, if R is a representation from G to the appropriate matrix group and \alpha denotes the coordinates of a chart, then D_R(\alpha) is a group element in that representation.  Let’s just stay in the same chart for the purposes of this post, so as we range over \alpha we obtain different group elements for a fixed representation.

The Lie algebra is, on the one hand, the tangent space at the identity element of G.  Why there?  You’re welcome to construct it anywhere else.  As Gilmore says, “A Lie group is homogeneous – every point looks locally like every other point.”  The points of G can be identified with diffeomorphisms of G: for g\in G define \phi_g to be left multiplication by g, so for an element h\in G, \phi_g(h)=gh, which is another point of G.  So we can shift all the points of G around to all the other points of G.  As we’ll end up ‘expanding’ at the identity, this choice might also simplify matters.  Are there any better reasons?  I’m not sure.

Gilmore talks about the Lie algebra as a linearization of the Lie group.  In what way, I wonder?  I’m fairly sure, from the readings, that we’re linearizing functions like \phi_g, which are in correspondence with elements of G, which are in turn points of the manifold.  It’s because of this multitude of ways of looking at G that things can get confusing.  At the moment this construction looks different from the usual construction of a tangent space, i.e. one without the algebra structure.

I’m going to work through Gilmore’s example of SL(2,\mathbb{R}).  Recall these are the 2×2 matrices with real entries and determinant 1.  The determinant condition means that only three real numbers are required to specify an element, so SL(2,\mathbb{R}) is a three-dimensional manifold.

Question: what does it look like?  Any nice embeddings in \mathbb{R}^3?

In what follows I’m going to think about linearizing functions like \phi_g(\alpha), where \alpha = (a, b, c) are the coordinates of the chart.

\phi_g(\alpha) = \begin{bmatrix}  1+a & b \\  c & (1+bc)/(1+a)  \end{bmatrix}
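
A quick aside of mine, not from Gilmore: it’s easy to check with sympy that this parameterization automatically has determinant 1 and that \alpha=(0,0,0) gives the identity.  The name phi below is just mine.

from sympy import symbols, Matrix, simplify, eye

a, b, c = symbols('a b c', real=True)
phi = Matrix([[1 + a, b],
              [c, (1 + b*c)/(1 + a)]])    # Gilmore's parameterization
print(simplify(phi.det()))                      # prints 1 for all a, b, c
print(phi.subs({a: 0, b: 0, c: 0}) == eye(2))   # True: alpha = 0 is the identity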

Gilmore does some magic: he lets the parameters become infinitesimals and expands to first order, (a,b,c)\to (\delta a, \delta b, \delta c).  Heuristically we’re doing an expansion of some function \phi_g about the identity I, so I’m inclined to follow the ideas of Maggiore and Ballentine here,

\phi_g(\alpha)\approx I+\alpha^{i}\,\partial_i\phi_g(\alpha)\big|_{\alpha = 0}+\mathcal{O}(\alpha^2)

The partial derivatives, evaluated at the identity (\alpha=0), are the basis vectors for the Lie algebra.  That seems similar in a sense to our tangent space construction,

\partial_a\phi_g(\alpha)\big|_{\alpha=0}=\begin{bmatrix} 1&0\\ 0 & -1\end{bmatrix}=X_a

\partial_b\phi_g(\alpha)\big|_{\alpha=0}=\begin{bmatrix} 0&1\\ 0 & 0\end{bmatrix}=X_b

\partial_c\phi_g(\alpha)\big|_{\alpha=0}=\begin{bmatrix} 0&0\\ 1& 0\end{bmatrix}=X_c
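
These derivatives are easy to verify symbolically.  Here’s a short sympy sketch (my own check, assuming the parameterization above; phi and at_identity are names I made up):

from sympy import symbols, Matrix

a, b, c = symbols('a b c', real=True)
phi = Matrix([[1 + a, b],
              [c, (1 + b*c)/(1 + a)]])
at_identity = {a: 0, b: 0, c: 0}
# Differentiate entrywise, then evaluate at alpha = 0 (the identity).
X_a = phi.diff(a).subs(at_identity)   # [[1, 0], [0, -1]]
X_b = phi.diff(b).subs(at_identity)   # [[0, 1], [0,  0]]
X_c = phi.diff(c).subs(at_identity)   # [[0, 0], [1,  0]]
print(X_a, X_b, X_c)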

Loosely, we’re taking elements ‘close’ to the identity and expanding them, in a sense not so removed from tangent vectors to surfaces in a calculus class.  Given a point on a surface and a tangent plane, we can ‘approximate’ the location of another point by using those tangent vectors.  But since those vectors don’t reside in the surface, it’s only an approximation.  It seems like something similar is happening here.  For group elements close to the identity we can write them, approximately, as

g\approx I+aX_a+bX_b+cX_c

As long as a, b, and c are ‘small.’ As I write this it occurs to me that this construction is probably more similar to another method of constructing the tangent space, the one using curves in the manifold.  More on that later.
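
Here’s a small check of just how approximate this is (my own aside): the determinant of I+aX_a+bX_b+cX_c comes out to 1-a^2-bc, so the approximation only fails to land in SL(2,\mathbb{R}) at second order in the parameters.

from sympy import symbols, Matrix, eye, expand

a, b, c = symbols('a b c', real=True)
X_a = Matrix([[1, 0], [0, -1]])
X_b = Matrix([[0, 1], [0, 0]])
X_c = Matrix([[0, 0], [1, 0]])
g_approx = eye(2) + a*X_a + b*X_b + c*X_c
print(expand(g_approx.det()))   # -a**2 - b*c + 1, i.e. det = 1 only to first order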

With our basis elements we can now form, via vector addition and scalar multiplication, any element of our Lie algebra X\in \mathfrak{sl}(2,\mathbb{R}).  What about reconstructing group elements, as mentioned above?  Take X in the Lie algebra and let \epsilon\in\mathbb{R} be a small real number. Then, as we’ve been saying, an element close to I can be written as I+\epsilon X.

I don’t follow Gilmore here; from the above we go to

\lim_{k\to\infty} \left (I+\frac{1}{k} X\right )^k = \sum_{n=0}^{\infty}\frac{X^n}{n!}=EXP(X)

I’m fine with everything on that line; it’s getting to that line through iterations of (I+\epsilon X) that I’m not sure of.  Certainly there exists k such that 1/k < \epsilon, but are we intentionally using smaller and smaller epsilons?  Sure, we’d have to (otherwise we’d blow up), but it would be nice if this passage were clearer.
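
For what it’s worth, the limit is easy to watch numerically.  A rough sketch of my own, with an arbitrarily chosen traceless X, where scipy’s expm plays the role of EXP:

import numpy as np
from scipy.linalg import expm

X = np.array([[0.3, 1.2],
              [0.7, -0.3]])   # an arbitrary traceless real 2x2 matrix
target = expm(X)
for k in (10, 100, 1000, 10000):
    approx = np.linalg.matrix_power(np.eye(2) + X/k, k)
    print(k, np.max(np.abs(approx - target)))   # error shrinks roughly like 1/k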

We arrive, regardless, at the commonly stated fact that if you exponentiate an element of the Lie algebra you end up with an element of the Lie group.  If we take an arbitrary element X of \mathfrak{sl}(2,\mathbb{R}) we have

X=\begin{bmatrix} a&b\\c&-a\end{bmatrix}

exponentiating,

EXP(X)=\sum_{n=0}^{\infty}\frac{1}{n!}\begin{bmatrix}a&b\\c&-a\end{bmatrix}^n

Notice the following pattern (define \theta^2=a^2+bc),

X^0=I,\quad X^1=X,\quad X^2=\theta^2 I,\quad X^3=\theta^2 X,\quad\ldots

So our series can be split up: I+X+X^2/2!+X^3/3!+\ldots becomes

I(1+\theta^2/2!+\theta^4/4!+...)+X(1+\theta^2/3!+\theta^4/5!+...)

which motivates us to multiply the second term by \theta/\theta and recognize the power series for the hyperbolic cosine and sine,

EXP(X)=I\cosh(\theta)+\frac{X}{\theta}\sinh(\theta)

or

\begin{bmatrix}\cosh\theta + a\sinh(\theta)/\theta&b\sinh(\theta)/\theta\\c\sinh(\theta)/\theta&\cosh\theta-a\sinh(\theta)/\theta\end{bmatrix}

This is certainly an element of SL(2,\mathbb{R}): its determinant is \cosh^2\theta-(a^2+bc)\sinh^2(\theta)/\theta^2=\cosh^2\theta-\sinh^2\theta=1.
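
Here’s a numerical check of the closed form (my own aside, with arbitrarily chosen a, b, c such that a^2+bc>0): the cosh/sinh expression agrees with the matrix exponential and has determinant 1.

import numpy as np
from scipy.linalg import expm

a, b, c = 0.4, 0.9, 0.5           # arbitrary values with a**2 + b*c > 0
theta = np.sqrt(a**2 + b*c)
X = np.array([[a, b], [c, -a]])
closed_form = np.cosh(theta)*np.eye(2) + (np.sinh(theta)/theta)*X
print(np.allclose(closed_form, expm(X)))   # True
print(np.linalg.det(closed_form))          # 1.0, up to rounding

(I believe that when a^2+bc<0, \theta becomes imaginary and the hyperbolic functions turn into ordinary sines and cosines, so the same formula still works.)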

Gilmore goes on to discuss some properties of the Lie algebra.  Closure of G under multiplication implies that the Lie algebra is closed under linear combinations, and closure of G under the group commutator implies that \mathfrak{g} is closed under the matrix commutator.  For example, for g_1,g_2\in G we have g_1g_2g^{-1}_1g^{-1}_2\in G.  Suppose g_1, g_2 are two group elements close to the identity, so that g_1=EXP(\epsilon X), g^{-1}_1=EXP(-\epsilon X), g_2=EXP(\delta Y), and so on.  We can rewrite the above commutator as,

g_1 g_2 g^{-1}_1 g^{-1}_2=EXP(\epsilon X)\,EXP(\delta Y)\,EXP(-\epsilon X)\,EXP(-\delta Y)

And expand each exponential,

(I+\epsilon X+...)(I+\delta Y+...)(I-\epsilon X+...)(I-\delta Y+...)

Multiplying,

I+\epsilon\delta(XY-YX)-\delta^2Y^2-\epsilon^2X^2+\mathcal{O}(\mbox{cubes})

So if the commutator of group elements is to belong to the group, then the commutator XY-YX must belong to the Lie algebra.

Question: why toss out the \delta ^2 Y^2 terms and such?  They’re the same order as the commutator.
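
Playing with this numerically suggests an answer (my own check, not Gilmore’s): if you keep the full exponentials instead of truncating each factor at first order, the \epsilon^2 X^2 and \delta^2 Y^2 pieces cancel between the factors, and the difference between the group commutator and I+\epsilon\delta[X,Y] falls off at third order.  A sketch using the \mathfrak{sl}(2,\mathbb{R}) generators, with \epsilon=\delta:

import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])   # X_b
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # X_c
comm = X @ Y - Y @ X                     # the matrix commutator (this is X_a)
for eps in (1e-1, 1e-2, 1e-3):
    g = expm(eps*X) @ expm(eps*Y) @ expm(-eps*X) @ expm(-eps*Y)
    residual = g - np.eye(2) - eps*eps*comm
    print(eps, np.max(np.abs(residual)))
# The residual falls off like eps**3: with the exponentials kept in full,
# the squared terms cancel and only the commutator survives at second order.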

There’s a lot packed into this commutator, apparently.  One quick and reasonable result is that if G is abelian then [X,Y]=0, which makes sense.  Using the arguments above, the commutator of group elements is the identity, so [X,Y] must vanish (although does that mean X^2 and Y^2 must vanish also?).

I want to wrap up this post with a brief bit about structure constants.  If we take the generators X^i of the Lie algebra and take their commutators,

[X^i, X^j]={C^{ij}}_kX^k

where the {C^{ij}}_k are called the structure constants.  Gilmore makes an interesting comment that, despite the fact that the structure constants transform as a tensor, this fact is seldom useful.  It’s easy and a bit of fun to find all the structure constants for \mathfrak{sl}(2,\mathbb{R}):

[X^a, X^b]=2X^b

[X^a, X^c]=-2X^c

[X^b, X^c]=X^a

From these we have {C^{ab}}_b=2, {C^{ac}}_c=-2, {C^{bc}}_a=1; the rest either follow from antisymmetry in the upper indices or vanish.
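
And just to double-check those, a quick sympy sketch of my own:

from sympy import Matrix

X_a = Matrix([[1, 0], [0, -1]])
X_b = Matrix([[0, 1], [0, 0]])
X_c = Matrix([[0, 0], [1, 0]])

def comm(A, B):
    return A*B - B*A

print(comm(X_a, X_b) == 2*X_b)    # True
print(comm(X_a, X_c) == -2*X_c)   # True
print(comm(X_b, X_c) == X_a)      # True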

My intention for follow-up posts is to practice calculating some other Lie algebras and structure constants, if nothing else to gain some calculational fluency as I try to understand the concepts.  Constructive criticism is appreciated.

4 Responses to Lie groups and Lie algebras – 1

  1. sheabrowne says:

    I’m glad you’re posting this. I get confused about the language when different people are talking about Lie groups and Lie algebras. I think that, as I read in preparation for posting this comment, I’ve figured out where my confusion has come from. I’ve heard it mentioned in several places that a Lie group is a continuous group that is also a differentiable manifold, which now makes sense because we need to be able to construct a tangent space in order to define the Lie algebra. I have a couple of questions:

    1: It might be too early in the morning, but I don’t see how g_1 g_2 g^{-1}_1 g^{-1}_2 is the commutator of group elements. I’m curious because in Peskin & Schroeder (ch. 3, Dirac Field), they constructed the commutator for the group formed by Lorentz transformations, then looked for representations whose generators of the Lie algebra (basis vectors of the tangent space) also satisfied the same commutation relations. To me this implies that the commutation relations of the Lie group will be the same as those of the Lie algebra…. is this true?

    2: I also don’t get the need for the repeated application of (I+\epsilon X) to recover the exponential? Physicists are notoriously sloppy, but we consider \lim_{\epsilon \rightarrow 0} (I+\epsilon X) = e^{X} … I see this all the time in field theory texts/notes. Are we just being sloppy?

  2. Shea,
    I understand, I likewise experience confusion. I was thinking about your comments this morning and it occurs to me that sometimes it might be good to take the definition of a Lie group quite literally. Take SO(3) for example. It’s a 3D manifold. So locally it looks like 3-space. Your office could be inside of SO(3) and you wouldn’t know it. Presumably as you walk around you’d notice something (though I can’t say that I know anything about the geometry of SO(3)). It’s just that when we learned about manifolds and surfaces we started with the nicest smoothest examples because those are the easiest. When we start learning about groups we start with the easiest groups which are finite and discrete. SO(3) is like 3-space where the 3-tuples are given a product operation which would be isomorphic to multiplication of rotation matrices. However we never really think of 3-space that way.

    Question: What does SO(3) look like?

    Onto your other points. Let G be the group of actions on the Rubik’s Cube. Consider the elements R: a clockwise turn of the face facing right (relative to you) and F: a clockwise turn of the face on the front of the cube (relative to you). So the commutator of R and F would be FRF’R’ (where I’ve used the primes to mean inverse or counterclockwise turns). If G were abelian this would equal the identity, so in some sense this measures how far from abelian the group is, just like the commutator you know and love from physics. In fact, this particular commutator is rather well known for the Rubik’s cube, it’s called the Z-commutator (the pattern it makes is sort of like a Z) and, by using conjugates, can generate a lot of useful actions for restoring the cube. This group commutator really is the same idea as the commutator for a Lie algebra. Both are measuring non-commutativity. I’m not sure how decent an explanation that was other than saying it’s defined that way (but the definition is motivated by good reasons).

    In general the commutation relations won’t be the same. I suspect the deal here is that both the groups and algebras we’re looking at can be expressed with matrices, which have a natural product and addition rule. I wonder, also, what ‘same’ means. The elements of the algebra needn’t be the same as the elements of the group. For example, notice that the generators of the Lie algebra \mathfrak{sl}_2 are traceless rather than determinant 1 (X_b and X_c even have determinant zero), so they don’t belong to SL(2,\mathbb{R}), and vice versa.

    For the second point, I think you meant \lim_{\epsilon\to 0}(1+\epsilon X)^{1/\epsilon}. In any event, certainly that goes to exp(X). I was caught up in Gilmore’s motivation: the idea of starting with an element close to the identity, of the form I+\epsilon X, and then using group multiplication to move it ‘away’ from I. There’s another way of doing this I’m sure you’ve seen, which is to set up a little differential equation that ends up yielding exp(X). I’m just trying to find both (1) good motivation and (2) sound foundations for why we would exponentiate.

    As I write this I wonder if it would be better to just start asking about sequences of elements; certainly (I+(1/k)X)^k for k\in\mathbb{N} is a sequence of matrices (though not necessarily a sequence in G itself). Does it converge? Does it converge in G? To what does it converge in G? There are non-compact Lie groups. Are those the ones where you can’t reconstruct the group from the algebra?

    Unfortunately I’ve got to run. I’ll think about this and write more later. I’m really looking forward to these exchanges.

  3. I was thinking a bit more about the commutators. What’s special about the Lie group G is that its algebraic structure, in particular its commutator, induces a commutator on the tangent space. There seem to be plenty of vector spaces you could put a Lie bracket on where the bracket is not induced by a group. I’ll definitely be following this up with a more rigorous post, but after some more sloppy calculations 🙂

  4. Pingback: Lie groups and Lie algebras – 2 | because0fbeauty
