We have $\lim_{n\to\infty}\frac{F_{n+1}}{F_n}=\frac{1+\sqrt{5}}{2}$. This expression is the "golden ratio" $1.618\cdots$.
37. Eigenapplications, 3: Dynamics
We now turn to dynamics. Let $v$ be a vector and $A$ be a matrix. Consider the sequence $v, Av, A^2v, A^3v, \ldots$. We'll investigate what happens to this sequence $A^nv$ as $n\to\infty$.
Example: Fibonacci sequence
Let $A=\begin{pmatrix}0&1\\1&1\end{pmatrix}$ and $v=\begin{pmatrix}1\\1\end{pmatrix}$. We get $Av=\begin{pmatrix}1\\2\end{pmatrix}$, $A^2v=\begin{pmatrix}2\\3\end{pmatrix}$, $A^3v=\begin{pmatrix}3\\5\end{pmatrix}$, $A^4v=\begin{pmatrix}5\\8\end{pmatrix}$, and, more generally, $A^nv=\begin{pmatrix}F_n\\F_{n+1}\end{pmatrix}$ where $F_0=1$, $F_1=1$, $F_2=2$, $F_3=3$, $F_4=5$, $F_5=8$, $F_6=13,\ldots$ is the Fibonacci sequence.
Why are we getting the Fibonacci numbers? Suppose the formula $A^nv=\begin{pmatrix}F_n\\F_{n+1}\end{pmatrix}$ is true for some value of $n$; we'll prove it's true for all values of $n$ by induction (the base case $n=0$ is just $A^0v=v=\begin{pmatrix}F_0\\F_1\end{pmatrix}$): $$A^{n+1}v=AA^nv=A\begin{pmatrix}F_n\\F_{n+1}\end{pmatrix}=\begin{pmatrix}F_{n+1}\\F_n+F_{n+1}\end{pmatrix}=\begin{pmatrix}F_{n+1}\\F_{n+2}\end{pmatrix},$$ where we used the recursive formula $F_{n+2}=F_{n+1}+F_n$ which defines the Fibonacci sequence.
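To see this concretely, here is a minimal numerical sketch (assuming Python with NumPy; `A` and `v` are the matrix and vector defined above):

```python
import numpy as np

# The matrix A and starting vector v from the example above.
A = np.array([[0, 1],
              [1, 1]])
v = np.array([1, 1])

# Repeatedly apply A and print the resulting vectors (F_n, F_{n+1}).
w = v
for n in range(8):
    print(n, w)   # consecutive pairs of Fibonacci numbers
    w = A @ w
```

Each application of $A$ replaces $(F_n, F_{n+1})$ by $(F_{n+1}, F_n+F_{n+1})$, which is exactly the Fibonacci recursion.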
As $n\to\infty$, both entries of the vector tend to infinity, but they do so in a particular way:
Write $v=\begin{pmatrix}1\\1\end{pmatrix}$ as $\alpha u_1+\beta u_2$, where $u_1$ and $u_2$ are the $\lambda_1$- and $\lambda_2$-eigenvectors of $A=\begin{pmatrix}0&1\\1&1\end{pmatrix}$. We'll figure out what these eigenvectors and eigenvalues are later.
Now $A^nv=A^n(\alpha u_1+\beta u_2)=\alpha A^nu_1+\beta A^nu_2$. We have $Au_1=\lambda_1u_1$, $A^2u_1=\lambda_1Au_1=\lambda_1^2u_1$, and by induction we get $$A^nu_1=\lambda_1^nu_1,\qquad A^nu_2=\lambda_2^nu_2.$$ Therefore $\begin{pmatrix}F_n\\F_{n+1}\end{pmatrix}=A^nv=\alpha\lambda_1^nu_1+\beta\lambda_2^nu_2$.
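As a quick sanity check, this decomposition can be verified numerically (a sketch assuming NumPy; `alpha` and `beta` are the coefficients of $v=\alpha u_1+\beta u_2$, computed by solving a linear system):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 1.0])

# Columns of U are eigenvectors u_1, u_2; lam holds lambda_1, lambda_2.
lam, U = np.linalg.eig(A)

# Solve v = alpha*u_1 + beta*u_2 for the coefficients alpha, beta.
alpha, beta = np.linalg.solve(U, v)

n = 10
lhs = np.linalg.matrix_power(A, n) @ v                          # A^n v
rhs = alpha * lam[0]**n * U[:, 0] + beta * lam[1]**n * U[:, 1]  # alpha*lambda_1^n*u_1 + beta*lambda_2^n*u_2
print(lhs, rhs)   # the two sides agree up to rounding error
```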
I claim that $\lambda_1=\frac{1+\sqrt{5}}{2}\approx 1.618\cdots$ and $\lambda_2=\frac{1-\sqrt{5}}{2}\approx -0.618\cdots$. Therefore:
- $\lambda_1>1$, so $\lambda_1^n\to\infty$,
- $\lambda_2^n\to 0$ as $n\to\infty$. Note that $\lambda_2$ is negative, so its powers keep switching sign, but its absolute value is less than 1, so the absolute values of its powers get smaller and smaller as $n\to\infty$.
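To see these two behaviours side by side, here is a tiny numerical illustration (a sketch in plain Python, using the claimed values of $\lambda_1$ and $\lambda_2$):

```python
import math

lam1 = (1 + math.sqrt(5)) / 2   # approximately  1.618
lam2 = (1 - math.sqrt(5)) / 2   # approximately -0.618

for n in [1, 5, 10, 20]:
    # lam1**n blows up; lam2**n alternates in sign but shrinks towards 0.
    print(n, lam1**n, lam2**n)
```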
Therefore $\lim_{n\to\infty}\frac{F_{n+1}}{F_n}$ is the limit of the slopes of the vectors $\alpha\lambda_1^nu_1+\beta\lambda_2^nu_2$, and the $\lambda_2^n$ term is going to zero, so in the limit we just get the slope of the vector $\alpha\lambda_1^nu_1$, which is just a rescaling of $u_1$. Since rescaling doesn't change the slope, we get $$\lim_{n\to\infty}\frac{F_{n+1}}{F_n}=\text{slope of }u_1.$$
We therefore need to figure out the slope of $u_1$ (and verify the claim about the eigenvalues). The characteristic polynomial of $A$ is $\det\begin{pmatrix}-t&1\\1&1-t\end{pmatrix}=t^2-t-1$, whose roots are $\frac{1\pm\sqrt{5}}{2}$, as required. The eigenvectors are $\begin{pmatrix}1\\\frac{1\pm\sqrt{5}}{2}\end{pmatrix}$, so $u_1$ (corresponding to the plus sign) has slope $\frac{1+\sqrt{5}}{2}$. Therefore $\lim_{n\to\infty}\frac{F_{n+1}}{F_n}=\frac{1+\sqrt{5}}{2}$, the golden ratio.
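The same computation can be checked numerically (a sketch assuming NumPy; "slope" below means the ratio of the second coordinate to the first, as in the argument above):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

lam, U = np.linalg.eig(A)
i = np.argmax(lam)                    # index of the larger eigenvalue lambda_1
u1 = U[:, i]
print(lam[i], (1 + np.sqrt(5)) / 2)   # both approximately 1.618...
print(u1[1] / u1[0])                  # slope of u_1: also the golden ratio

# Compare with the ratio of consecutive Fibonacci numbers.
w = np.array([1.0, 1.0])
for n in range(15):
    w = A @ w
print(w[1] / w[0])                    # F_{n+1} / F_n, already close to 1.618...
```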
Here's a picture of the eigenlines (orthogonal to one another because the matrix $A$ is symmetric) and the positions of $v, Av, A^2v, \ldots$. You can see that these vectors get closer and closer to the $u_1$-eigenline (and stretched out in the $u_1$-direction). They move from side to side of this axis because the sign of $\lambda_2$ is negative. So $A^nv$ gets more and more parallel to $u_1$ as $n\to\infty$.

Arnold cat map
Here's another nice example, due to Vladimir Arnold. Consider $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$. This has eigenvalues $\frac{3\pm\sqrt{5}}{2}$: one of these is bigger than 1, the other is positive but less than 1. Here are the eigenlines, with a square $S$ drawn on (whose sides are parallel to the eigenlines). We also draw $A(S)$ and $A^2(S)$. We can see that it gets stretched in the $u_1$-direction and squashed in the $u_2$-direction (because $\lambda_1>1$ and $\lambda_2<1$). In the limit, $A^n(S)$ gets thinner and thinner and closer to the $u_1$-eigenline.
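Here is a small numerical sketch of this stretching and squashing (assuming NumPy; the square is represented just by two side vectors lying along the eigendirections $u_1$ and $u_2$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

lam, U = np.linalg.eig(A)
print(lam)               # approximately 2.618 and 0.382, i.e. (3 +/- sqrt(5)) / 2
print(np.linalg.det(A))  # 1.0: lambda_1 * lambda_2 = 1, so the map preserves area

# Unit side vectors of a square S aligned with the eigendirections u_1, u_2.
side1, side2 = U[:, np.argmax(lam)], U[:, np.argmin(lam)]
for n in range(1, 4):
    An = np.linalg.matrix_power(A, n)
    # Side lengths of A^n(S): one grows like lambda_1^n, the other shrinks like lambda_2^n.
    print(n, np.linalg.norm(An @ side1), np.linalg.norm(An @ side2))
```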

This is called the Arnold cat map because of the following strange phenomenon. Take an infinite grid of squares in $\mathbf{R}^2$, take a picture of a cat, and put it into every square. Apply $A$ to this grid of cats. The cats will get stretched and squashed in the eigendirections. Pick one of our original squares and look at what's there. We see a bunch of cats all chopped up and stretched and squished back into that square in some way. Now repeat, and repeat. What we see in our square is absolute carnage for a long time. But, amazingly, at some point, our cat reappears almost identically to how it looked to begin with. This is not because of any periodicity: $A^n$ is not the identity for any $n>0$. This is instead an instance of "Poincaré recurrence": a phenomenon in dynamical systems which goes way beyond anything we're discussing in this course.