37. Eigenapplications, 3: Dynamics

We now turn to dynamics. Let $v$ be a vector and $A$ be a matrix. Consider the sequence $v, Av, A^2v, A^3v, \ldots$. We'll investigate what happens to this sequence $A^nv$ as $n \to \infty$.

Example: Fibonacci sequence

Let $A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$ and $v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. We get $Av = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $A^2v = \begin{pmatrix} 2 \\ 3 \end{pmatrix}$, $A^3v = \begin{pmatrix} 3 \\ 5 \end{pmatrix}$, $A^4v = \begin{pmatrix} 5 \\ 8 \end{pmatrix}$, and, more generally, $A^nv = \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix}$, where $F_0 = 1$, $F_1 = 1$, $F_2 = 2$, $F_3 = 3$, $F_4 = 5$, $F_5 = 8$, $F_6 = 13, \ldots$ is the Fibonacci sequence.

Why are we getting the Fibonacci numbers? The formula $A^nv = \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix}$ holds for $n = 0$, since $A^0v = v = \begin{pmatrix} F_0 \\ F_1 \end{pmatrix}$. Supposing it holds for some value of $n$, we prove it for all values of $n$ by induction: $A^{n+1}v = A\,A^nv = A\begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_n + F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_{n+2} \end{pmatrix}$, where we used the recursive formula $F_{n+2} = F_{n+1} + F_n$ which defines the Fibonacci sequence.
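
If you want to see this on a computer, here is a minimal sketch in Python with NumPy (neither of which these notes assume) that simply iterates $A$ on $v$ and prints the resulting pairs:

```python
# Iterate A on v = (1, 1) and watch the Fibonacci pairs (F_n, F_{n+1}) appear.
import numpy as np

A = np.array([[0, 1],
              [1, 1]])
w = np.array([1, 1])          # this is v = A^0 v

for n in range(8):
    print(f"A^{n} v = {w}")   # (1, 1), (1, 2), (2, 3), (3, 5), (5, 8), ...
    w = A @ w                 # multiply by A once more
```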

As $n \to \infty$, both entries of the vector tend to infinity, but they do so in a particular way:

Theorem:

We have $\lim_{n \to \infty} \frac{F_{n+1}}{F_n} = \frac{1 + \sqrt{5}}{2}$. This expression is the "golden ratio", approximately $1.618$.

Proof:

Write $v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ as $\alpha u_1 + \beta u_2$, where $u_1$ and $u_2$ are the $\lambda_1$- and $\lambda_2$-eigenvectors of $A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$. We'll figure out what these eigenvectors and eigenvalues are later.

Now $A^nv = A^n(\alpha u_1 + \beta u_2) = \alpha A^n u_1 + \beta A^n u_2$. We have $Au_1 = \lambda_1 u_1$, $A^2u_1 = \lambda_1 Au_1 = \lambda_1^2 u_1$, and by induction we get $A^nu_1 = \lambda_1^n u_1$ and similarly $A^nu_2 = \lambda_2^n u_2$. Therefore $\begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = A^nv = \alpha\lambda_1^n u_1 + \beta\lambda_2^n u_2$.
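
Here is a hedged numerical check of this step (again Python/NumPy, with `np.linalg.eig` doing the eigenvector computation that we do by hand below): write $v$ in the eigenbasis and compare $A^nv$ with $\alpha\lambda_1^n u_1 + \beta\lambda_2^n u_2$.

```python
# Check numerically that A^n v = alpha * lambda_1^n * u_1 + beta * lambda_2^n * u_2.
import numpy as np

A = np.array([[0., 1.],
              [1., 1.]])
v = np.array([1., 1.])

eigvals, eigvecs = np.linalg.eig(A)    # columns of eigvecs are eigenvectors u_1, u_2
coeffs = np.linalg.solve(eigvecs, v)   # (alpha, beta) with v = alpha u_1 + beta u_2

n = 10
lhs = np.linalg.matrix_power(A, n) @ v
rhs = eigvecs @ (coeffs * eigvals**n)  # alpha lambda_1^n u_1 + beta lambda_2^n u_2
print(lhs, rhs, np.allclose(lhs, rhs)) # the two sides agree (up to rounding)
```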

I claim that $\lambda_1 = \frac{1+\sqrt{5}}{2} \approx 1.618$ and $\lambda_2 = \frac{1-\sqrt{5}}{2} \approx -0.618$. Therefore:

  • $\lambda_1 > 1$, so $\lambda_1^n \to \infty$,

  • $\lambda_2^n \to 0$ as $n \to \infty$. Note that $\lambda_2$ is negative, so its powers keep switching sign, but its absolute value is less than 1, so the absolute values of its powers get smaller and smaller as $n \to \infty$. (The short numerical check after this list illustrates both points.)
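
A quick numerical illustration of both bullet points (a sketch, taking the claimed values of $\lambda_1$ and $\lambda_2$ on trust for now):

```python
# Powers of lambda_1 blow up; powers of lambda_2 alternate in sign and shrink to 0.
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2   # approximately  1.618
lam2 = (1 - np.sqrt(5)) / 2   # approximately -0.618
for n in range(0, 21, 4):
    print(n, lam1**n, lam2**n)
```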

Therefore $\lim_{n\to\infty} \frac{F_{n+1}}{F_n}$ is the limit of the slopes of the vectors $\alpha\lambda_1^n u_1 + \beta\lambda_2^n u_2$. The $\lambda_2^n$ term is going to zero, so in the limit we just get the slope of the vector $\alpha\lambda_1^n u_1$, which is just a rescaling of $u_1$. Since rescaling doesn't change the slope, we get that $\lim_{n\to\infty} \frac{F_{n+1}}{F_n}$ equals the slope of $u_1$.

We therefore need to figure out the slope of $u_1$ (and verify the claim about the eigenvalues). The characteristic polynomial of $A$ is $\det\begin{pmatrix} -t & 1 \\ 1 & 1-t \end{pmatrix} = t^2 - t - 1$, whose roots are $\frac{1 \pm \sqrt{5}}{2}$ as required. The eigenvectors are $\begin{pmatrix} 1 \\ \frac{1 \pm \sqrt{5}}{2} \end{pmatrix}$, so $u_1$ (corresponding to the plus sign) has slope $\frac{1+\sqrt{5}}{2}$, as required.
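
If you prefer to let the computer do this calculation, here is a sketch using `np.linalg.eig` to confirm the eigenvalues and the slope of $u_1$ (up to rounding):

```python
# Confirm the eigenvalues (1 ± sqrt(5))/2 and that the slope of u_1 is the golden ratio.
import numpy as np

A = np.array([[0., 1.],
              [1., 1.]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                 # the two roots of t^2 - t - 1 (in some order)

i = np.argmax(eigvals)         # pick out lambda_1 = (1 + sqrt(5))/2
u1 = eigvecs[:, i]
print(u1[1] / u1[0])           # slope of u_1: approximately 1.6180...
```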

Here's a picture of the eigenlines (orthogonal to one another because the matrix $A$ is symmetric) and the positions of $v, Av, A^2v, \ldots$. You can see that these vectors get closer and closer to the $u_1$-eigenline (and stretched out in the $u_1$-direction). They move from side to side of this axis because the sign of $\lambda_2$ is negative. So $A^nv$ gets more and more parallel to $u_1$ as $n \to \infty$.

[Figure: the vectors $A^nv$ approach the $u_1$-direction as $n \to \infty$.]
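
You can reproduce the behaviour in this picture numerically; the sketch below (same Python/NumPy conventions as before) prints the angle between $A^nv$ and the $u_1$-eigenline, which shrinks towards zero:

```python
# The angle between A^n v and the u_1-eigenline tends to 0 as n grows.
import numpy as np

A = np.array([[0., 1.],
              [1., 1.]])
v = np.array([1., 1.])
eigvals, eigvecs = np.linalg.eig(A)
u1 = eigvecs[:, np.argmax(eigvals)]   # eigenvector for lambda_1

w = v.copy()
for n in range(12):
    cosine = abs(w @ u1) / (np.linalg.norm(w) * np.linalg.norm(u1))
    angle = np.degrees(np.arccos(np.clip(cosine, 0.0, 1.0)))
    print(n, angle)                   # angle (in degrees) between A^n v and the eigenline
    w = A @ w
```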

Arnold cat map

Here's another nice example, due to Vladimir Arnold. Consider $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$. This has eigenvalues $\frac{3 \pm \sqrt{5}}{2}$: one of these is bigger than 1, the other is positive but less than 1. Here are the eigenlines, with a square $S$ drawn on (whose sides are parallel to the eigenlines). We also draw $A(S)$ and $A^2(S)$. We can see that the square gets stretched in the $u_1$-direction and squashed in the $u_2$-direction (because $\lambda_1 > 1$ and $\lambda_2 < 1$). In the limit, $A^n(S)$ gets thinner and thinner and closer to the $u_1$-eigenline.

[Figure: the cat map squashes rectangles in the $u_2$-direction and stretches them in the $u_1$-direction.]
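
Here is a sketch of the stretching and squashing factors: apply $A$ to a square with sides parallel to the eigenlines and measure how the side lengths change (the numbers that come out are $\lambda_1^n$ and $\lambda_2^n$):

```python
# Apply A to a unit square with sides along u_1 and u_2: one side is stretched by
# lambda_1 ≈ 2.618 per step, the other squashed by lambda_2 ≈ 0.382 per step.
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
eigvals, eigvecs = np.linalg.eig(A)
idx = np.argsort(eigvals)[::-1]                  # put lambda_1 > 1 first
u1, u2 = eigvecs[:, idx[0]], eigvecs[:, idx[1]]  # unit eigenvectors

corners = np.array([0 * u1, u1, u1 + u2, u2])    # corners of the square S
for n in range(4):
    moved = corners @ np.linalg.matrix_power(A, n).T  # A^n applied to each corner
    side1 = np.linalg.norm(moved[1] - moved[0])       # length of the u_1-side
    side2 = np.linalg.norm(moved[3] - moved[0])       # length of the u_2-side
    print(n, side1, side2)                            # lambda_1^n and lambda_2^n
```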

This is called the Arnold cat map because of the following strange phenomenon. Take an infinite grid of squares in $\mathbf{R}^2$, take a picture of a cat, and put it into every square. Apply $A$ to this grid of cats. The cats will get stretched and squashed in the eigendirections. Pick one of our original squares and look at what's there. We see a bunch of cats all chopped up and stretched and squished back into that square in some way. Now repeat, and repeat. What we see in our square is absolute carnage for a long time. But, amazingly, at some point our cat reappears, looking almost identical to how it did to begin with. This is not because of any periodicity: $A^n$ is not the identity for any $n > 0$. This is instead an instance of "Poincaré recurrence": a phenomenon in dynamical systems which goes way beyond anything we're discussing in this course.
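
If you want to try the cat experiment yourself, the usual computer version discretizes the picture into an $N \times N$ grid of pixels and applies $A$ modulo $N$. A word of caution about this sketch: in the discretized setting the exact reappearance happens because the map becomes a permutation of finitely many pixels, so it is only a finite analogue of the recurrence phenomenon described above; the grid size $N$ and the iteration cap below are arbitrary choices.

```python
# Discretized cat map: apply A = [[2, 1], [1, 1]] mod N to every pixel of an N x N
# grid and count how many steps it takes for every pixel to return to its start.
import numpy as np

N = 101                                  # arbitrary grid size
xs, ys = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

x, y = xs.copy(), ys.copy()
for n in range(1, 100_000):
    x, y = (2 * x + y) % N, (x + y) % N  # one application of A, modulo N
    if np.array_equal(x, xs) and np.array_equal(y, ys):
        print(f"every pixel is back where it started after {n} steps")
        break
```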