Schrödinger equation 3: Complex conjugate, Phase rotation, and Energy

• March 17, 2013

In this third episode, we cover what a complex conjugate is, how to compute how fast a wavefunction’s phase is rotating (given dpsi/dt), and roughly what energy is in terms of a wavefunction.  That’s even more information than last time, so please don’t hesitate to ask any questions!

Schrödinger equation 2: Wavefunctions, Magnitude, and Phase

• December 27, 2012

I’ve finally gotten Episode 2 on how to solve the Schrödinger equation out! Be sure to ask any questions you might have, so that I can address them in the next episode, and subscribe to be sure you don’t miss it. If you like/dislike the episode, please like/dislike the video, and feel free to share it if you really enjoyed it.

You can solve the Schrödinger equation

• September 29, 2012

This has taken a huge amount of work, and I still had to cut corners to make the deadline, but hopefully you like it. I’m entering the YouTube Next EDU Guru competition, and I had to get it ready by October 1st. Apologies for the low volume and stupid thumbnail that I can’t change to something more representative of the video.

I Speak for Myself

• June 5, 2012

It’s been far too long since I’ve made a video.  This has been in preparation for a while now.

Art & Science

• March 3, 2012

This is the output of a half-finished program.  I thought it looked surprisingly artful, so I gave it a name, and here it is.

Ruin by Neil G. Dickson

Inventor IDE Alpha 6!

• September 27, 2011

After 2 years, 1 month, and 1 day, I’m finally back in the swing of things with a new release of Inventor IDE!  Version Alpha 6 is planned to be the last major alpha release.  I will probably make a few smaller updates as 6a, 6b, etc.  It now contains the preliminary performance analysis tool, which I’ll explain in due time.  More mundane additions include Find/Replace/Find All References, plus fixes for a bunch of obscure-but-problematic bugs.

Plot made by the performance analysis tool in Inventor IDE Alpha 6

Jacobi eigenvalue algorithm, Schur decomposition, and Wikipedia

• July 13, 2011

Me?  Writing a blog post?  Well, you see, it all started with this Wikipedia page on the “Jacobi method for complex Hermitian matrices”.  I wanted to apply the Jacobi eigenvalue algorithm to diagonalize complex matrices, since it had worked well for real symmetric matrices, but the page on that algorithm only covers real symmetric matrices, not their complex equivalent, Hermitian matrices.  I was glad to see there was an additional page supposedly explaining how to do something similar for complex Hermitian matrices, but alas, it’s explained in such complicated, ambiguous terms that it’s at best unreadable, and at worst wrong.  I implemented two different interpretations of the page’s content, and both failed on the simplest possible example…

…so I derived my own in a much simpler way, and figured out a more general algorithm.  Other people have undoubtedly found the exact same method, but the papers I’d found on the topic didn’t seem to state their methods much more clearly than the Wikipedia page.  The issue is that I derived it in a completely different way than what’s on Wikipedia.  It’s not a matter of “fixing a mistake” in the page; by Wikipedia’s rules, this counts as original research (even though others have probably done the same), so it’s not allowed on Wikipedia.  The only other suggestion people had was to write a blog post to make sure that a correct version is online somewhere, so here it is.

In a nutshell, the Jacobi eigenvalue algorithm works by repeatedly choosing the largest off-diagonal element and rotating the two vectors associated with that element such that the element becomes zero.  Do this enough times and all off-diagonal elements will be quite close to zero (if the matrix is Hermitian), leaving the eigenvalues on the diagonal.  The eigenvectors come from applying the rotations to the identity matrix.  The only thing necessary to do this is to know the eigenvectors V_{\rm sub} that diagonalize a 2×2 matrix, namely

M_{\rm sub} = \left[\begin{array}{cc} M_{ii} & M_{ij} \\ M_{ji} & M_{jj}\end{array}\right]

where M is the matrix we’re trying to diagonalize, and the lower off-diagonal element M_{ji} is the one we’re trying to eliminate.  If M is Hermitian, M_{ii} , M_{jj} are real, and M_{ji}=M_{ij}^*.  However, I won’t assume that M is Hermitian, since even if it’s not, this same approach can still be used to get what’s known as the Schur decomposition, namely M=VUV^*, where U is upper triangular and the eigenvalues of M are on the diagonal of U.  If M is Hermitian, the Schur decomposition is the same as the eigendecomposition, so M = VDV^* with D diagonal.  In the non-Hermitian case, it doesn’t give the eigenvectors, but those can be more easily computed afterward.  I’ll split this into two cases.
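Before diving into the cases, here’s a quick numerical illustration of what the Schur decomposition promises.  This is just a sketch assuming NumPy and SciPy, with SciPy’s schur used only as an independent reference, not as the method described below:

import numpy as np
from scipy.linalg import schur

# Illustrate the target property: M = V U V*, with U upper triangular and
# the eigenvalues of M on the diagonal of U.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, V = schur(M, output='complex')             # SciPy's complex Schur decomposition
print(np.allclose(M, V @ U @ V.conj().T))     # True: M = V U V*
print(np.allclose(np.tril(U, -1), 0))         # True: U is upper triangular
print(np.sort_complex(np.diag(U)))            # matches (up to round-off)...
print(np.sort_complex(np.linalg.eigvals(M)))  # ...the eigenvalues of M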

Case 1: |M_{ij}|<10^{14}|M_{ii}-M_{jj}|

This case is where you don’t have two diagonal elements that are almost identical compared to the off-diagonal element.  If you did, dividing by their difference could cause chaos, madness, and megalomania… or just invalid results.  We start by subtracting ((M_{ii}+M_{jj})/2) I from our 2×2 matrix, since we’re only interested in its (pseudo-)eigenvectors V_{\rm sub}, and multiples of I can be diagonalized using any eigenvectors, so adding/subtracting multiples of I won’t change V_{\rm sub}.  Then, we divide the 2×2 matrix by (M_{ii}-M_{jj})/2, which also doesn’t change the (pseudo-)eigenvectors.  This gives us

\left[\begin{array}{cc} 1 & \frac{2M_{ij}}{M_{ii}-M_{jj}} \\ \frac{2M_{ji}}{M_{ii}-M_{jj}} & -1\end{array}\right]

For simplicity, we’ll just relabel this as

\left[\begin{array}{cc} 1 & b \\ a & -1\end{array}\right]

Now, if we want to make a=0 by multiplying by a unitary matrix V_{\rm sub} on the right and its conjugate transpose V_{\rm sub}^* on the left, a bit of trial and error can give you that

V_{\rm sub} = L^{-1}\left[\begin{array}{cc} 1+\sqrt{1+ab} & -a^* \\ a & 1+\left(\sqrt{1+ab}\right)^*\end{array}\right] \\ L=\sqrt{|a|^2 + |1+\sqrt{1+ab}|^2}

is a solution to that problem.  Note that the square roots are square roots of complex numbers, not necessarily real numbers.  You can verify that this works in the original case by multiplying out V_{\rm sub}^*M_{\rm sub}V_{\rm sub} and confirming that the lower off-diagonal element is zero.  If M_{\rm sub} is Hermitian, both off-diagonal elements should be zero.  Case 1 complete.
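If you’d rather let a computer do the multiplying out, here’s a minimal numerical check of the Case 1 formula, assuming NumPy (the variable names are mine):

import numpy as np

# Build Vsub for a random complex 2x2 M_sub and confirm that the lower
# off-diagonal of Vsub* M_sub Vsub vanishes (up to round-off).
rng = np.random.default_rng(0)
Msub = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

d = Msub[0, 0] - Msub[1, 1]
b = 2 * Msub[0, 1] / d           # scaled upper off-diagonal
a = 2 * Msub[1, 0] / d           # scaled lower off-diagonal, the one to eliminate
c = np.sqrt(1 + a * b)           # principal square root of a complex number
L = np.sqrt(abs(a)**2 + abs(1 + c)**2)
Vsub = np.array([[1 + c, -np.conj(a)],
                 [a, 1 + np.conj(c)]]) / L

R = Vsub.conj().T @ Msub @ Vsub
print(abs(R[1, 0]))                                  # round-off level: eliminated
print(np.allclose(Vsub.conj().T @ Vsub, np.eye(2)))  # True: Vsub is unitary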

Case 2: |M_{ij}|\ge 10^{14}|M_{ii}-M_{jj}|

This case is where you do have two diagonal elements that are almost identical compared to the off-diagonal element.  Simple solution: pretend that they’re exactly equal, so that after subtracting ((M_{ii}+M_{jj})/2) I as before, the diagonal elements are exactly zero.  You now have

\left[\begin{array}{cc} 0 & b \\ a & 0\end{array}\right]

The zeros make things much simpler, as we can now find

V_{\rm sub}=L^{-1} \left[ \begin{array}{cc} \sqrt{b} & -\sqrt{a}^* \\ \sqrt{a} & \sqrt{b}^* \end{array} \right] \\ L=\sqrt{|a|+|b|}

This time, since we’ve assumed that the diagonal elements are exactly equal, the rotation probably won’t make the off-diagonal element exactly zero, but it’ll probably be negligibly small.  If that element is ever the largest at some later point, it’ll just get picked again to be eliminated, probably without the diagonal elements being so close the second time around.

Again, you don’t have to take my word for it.  You can easily verify this solution by just multiplying it out.
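Here’s the same kind of check for Case 2, again assuming NumPy with variable names of my choosing:

import numpy as np

# With zero diagonal, the square-root rotation eliminates the lower
# off-diagonal element, and Vsub is still unitary.
a = 0.3 - 1.2j                   # lower off-diagonal
b = -0.7 + 0.4j                  # upper off-diagonal
Msub = np.array([[0, b],
                 [a, 0]])

L = np.sqrt(abs(a) + abs(b))
Vsub = np.array([[np.sqrt(b), -np.conj(np.sqrt(a))],
                 [np.sqrt(a),  np.conj(np.sqrt(b))]]) / L

R = Vsub.conj().T @ Msub @ Vsub
print(abs(R[1, 0]))                                  # round-off level: eliminated
print(np.allclose(Vsub.conj().T @ Vsub, np.eye(2)))  # True: Vsub is unitary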

How did you get these?

If you’re wondering how I figured these out (after a weekend of trial and error with different methods), I knew that if the off-diagonal element was non-zero (which it must be, else there’s nothing to do), the second component of each vector of V must be non-zero.  I temporarily ignored the restriction of having unit vectors and set both of those components to 1, but kept the restriction that the two vectors be complex orthogonal (i.e. v_1^*v_2 = 0), giving me

\left[\begin{array}{cc} x^* & 1 \\ -x^{-1} & 1\end{array}\right]\left[\begin{array}{cc} 1 & b \\ a & -1\end{array}\right]\left[\begin{array}{cc} x & -x^{-1*} \\ 1 & 1\end{array}\right]=\left[\begin{array}{cc} ... & ... \\ ax-2-bx^{-1} & ...\end{array}\right]

I know I want that off-diagonal element zero, so

ax-2-bx^{-1}=0 \\ ax^2-2x-b=0 \\ x = \frac{1\pm\sqrt{1+ab}}{a}

Since the scales of the two vectors are arbitrary, including the relative scales, I scaled one by a, and the other by (1+\sqrt{1+ab})^*, taking the + sign.  Then I just normalized them both (they happen to have the same length, L), and that’s the solution above.  I did the same for case 2, only it was even simpler.  Easy… it just took a weekend to figure out what the easy way was.
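Spelling that scaling step out explicitly (my rearrangement, using the + sign and the same symbols as above): with x = \frac{1+\sqrt{1+ab}}{a}, the two columns become

v_1 = a\left[\begin{array}{c} x \\ 1\end{array}\right] = \left[\begin{array}{c} 1+\sqrt{1+ab} \\ a\end{array}\right], \qquad v_2 = \left(1+\sqrt{1+ab}\right)^*\left[\begin{array}{c} -x^{-1*} \\ 1\end{array}\right] = \left[\begin{array}{c} -a^* \\ \left(1+\sqrt{1+ab}\right)^*\end{array}\right]

which are exactly the columns of V_{\rm sub} above, each of length L.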

Final Algorithm to Compute Schur Decomposition

V = identity matrix
while (there is an ij such that i>j and |M[i][j]| > 10^-8*|M[i][i]-M[j][j]|) {
    select ij with maximum |M[i][j]|/|M[i][i]-M[j][j]|, possibly infinite
    // Case 1
    if (|M[i][j]| < 10^14*|M[i][i]-M[j][j]|) {
        // Scale the 2x2 sub-block on rows/columns (j,i) to [[1, b],[a, -1]],
        // so that a is the scaled version of M[i][j], the element to eliminate
        a = 2*M[i][j]/(M[j][j]-M[i][i])
        b = 2*M[j][i]/(M[j][j]-M[i][i])
        c = sqrt(1+a*b)   // principal square root of a complex number
        L = sqrt(|a|^2 + |1+c|^2)
        Vsub = {{1+c, -conj(a)}, {a, 1+conj(c)}}/L
    }
    }
    // Case 2: the diagonal elements are nearly identical, so treat them as equal
    else {
        a = M[i][j]
        b = M[j][i]
        L = sqrt(|a|+|b|)
        Vsub = {{sqrt(b), -conj(sqrt(a))}, {sqrt(a), conj(sqrt(b))}}/L
    }
    // Rotate columns j and i of M (sub-block index 0 is j, index 1 is i)
    for (each row of M) {
        newj = M[row][j]*Vsub[0][0]+M[row][i]*Vsub[1][0]
        newi = M[row][j]*Vsub[0][1]+M[row][i]*Vsub[1][1]
        M[row][j] = newj
        M[row][i] = newi
    }
    // Rotate rows j and i of M
    for (each column of M) {
        newj = M[j][col]*conj(Vsub[0][0])+M[i][col]*conj(Vsub[1][0])
        newi = M[j][col]*conj(Vsub[0][1])+M[i][col]*conj(Vsub[1][1])
        M[j][col] = newj
        M[i][col] = newi
    }
    // Rotate columns j and i of V
    for (each row of V) {
        newj = V[row][j]*Vsub[0][0]+V[row][i]*Vsub[1][0]
        newi = V[row][j]*Vsub[0][1]+V[row][i]*Vsub[1][1]
        V[row][j] = newj
        V[row][i] = newi
    }
}
// Diagonal of M now has the eigenvalues of the input M
// If M was Hermitian, V now has the eigenvectors of the input M as columns,
//                     in the order that the eigenvalues appear in M
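For reference, here’s the pseudocode translated into runnable Python/NumPy.  This is my translation under the conventions above (the function name jacobi_schur and the max_rotations safety cap are mine, not from the original), a sketch rather than a definitive implementation:

import numpy as np

def jacobi_schur(M, rel_tol=1e-8, case2_ratio=1e14, max_rotations=100000):
    # Returns (U, V) with M ~= V @ U @ V.conj().T and U upper triangular.
    U = np.array(M, dtype=complex)
    n = U.shape[0]
    V = np.eye(n, dtype=complex)
    for _ in range(max_rotations):
        # Select the lower-triangle element with the largest ratio
        # |M[i][j]|/|M[i][i]-M[j][j]| (possibly infinite) among those
        # still above the relative tolerance.
        best_ratio, bi, bj = 0.0, -1, -1
        for i in range(1, n):
            for j in range(i):
                off = abs(U[i, j])
                diff = abs(U[i, i] - U[j, j])
                if off > rel_tol * diff:
                    ratio = off / diff if diff > 0 else np.inf
                    if ratio > best_ratio:
                        best_ratio, bi, bj = ratio, i, j
        if bi < 0:
            break  # converged
        i, j = bi, bj
        if abs(U[i, j]) < case2_ratio * abs(U[i, i] - U[j, j]):
            # Case 1: scale the (j,i) sub-block to [[1, b],[a, -1]]
            a = 2 * U[i, j] / (U[j, j] - U[i, i])
            b = 2 * U[j, i] / (U[j, j] - U[i, i])
            c = np.sqrt(1 + a * b)
            L = np.sqrt(abs(a)**2 + abs(1 + c)**2)
            Vsub = np.array([[1 + c, -np.conj(a)],
                             [a, 1 + np.conj(c)]]) / L
        else:
            # Case 2: treat the nearly-identical diagonal elements as equal
            a, b = U[i, j], U[j, i]
            L = np.sqrt(abs(a) + abs(b))
            Vsub = np.array([[np.sqrt(b), -np.conj(np.sqrt(a))],
                             [np.sqrt(a),  np.conj(np.sqrt(b))]]) / L
        # Rotate columns (j,i) of U, then rows (j,i), then columns of V.
        U[:, [j, i]] = U[:, [j, i]] @ Vsub
        U[[j, i], :] = Vsub.conj().T @ U[[j, i], :]
        V[:, [j, i]] = V[:, [j, i]] @ Vsub
    return U, V

Usage would look like U, V = jacobi_schur(M); if the iteration converges, np.allclose(M, V @ U @ V.conj().T) should hold with np.tril(U, -1) negligible, and for a Hermitian input the columns of V should be the eigenvectors, as noted above.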
 