When we want to propagate a wave packet in time or calculate some expectation value, we usually end up having to apply an operator to a wave function. When using a computer, this obviously has to be done numerically.
Here, we briefly derive one of the most widely used numerical schemes for wave packet propagation, the so-called DVR (Discrete Variable Representation) method, sometimes also called the pseudospectral method. We will demonstrate the basic principle of the method, without detailed proofs, using the common Fourier method (plane wave expansion). After reading this document, you should have a basic understanding of how the DVR method works and when it can be applied.
The DVR method is relatively general and has been studied extensively by J.C. Light, for example here. Another recommended reference for the more gory details is David Tannor's book ''Introduction to Quantum Mechanics: A Time-Dependent Perspective''.
Before you get lost in the details of the DVR method, let me outline as briefly as possible the line of thought that runs through this exposition. The DVR method goes in four steps:

1. Choose a basis (the Finite Basis Representation, FBR) that works well for the non-local part of the Hamiltonian, typically the kinetic energy.
2. This basis, together with the number of basis functions, defines a unique quadrature grid in coordinate space; the values of the wave function at these grid points form the Discrete Variable Representation (DVR), which is connected to the FBR by a simple transformation.
3. Operators that are functions of the position operator only are applied in the DVR as a pointwise multiplication on the grid.
4. Whenever the non-local operators have to be applied, transform back to the FBR, where they are simple (ideally diagonal).
If you found this trivial enough, you can stop reading here; the rest is just gory details. Otherwise, we go through this in a bit more detail using the common Fourier method as an example, before briefly discussing the DVR for the classical orthogonal polynomials and for general basis functions.
The DVR method thus works as follows: you start with a Hamiltonian that is a sum of one or more operators that are local in space and a non-local operator (typically the kinetic energy or other derivatives). You then choose a basis expansion that works well for this non-local operator (e.g., by diagonalizing it). The basis expansion, together with the number of basis functions, defines a unique quadrature grid in coordinate space. The DVR method then works well if the local operators have certain properties with respect to this expansion.
Below is a list of DVR methods currently implemented in WavePacket:
Operator | Basis that diagonalizes the operator | Real-space grid | Requirements for A(x) |
---|---|---|---|
Kinetic energy, -d²/dx² | plane waves with period L | equally spaced grid of total length L | must be band-limited and have period L |
Angular momentum, L̂² | spherical harmonics Y_lm | Gauss-Jacobi quadrature points | expandable as a low-order polynomial of x = cos θ |
Harmonic oscillator, -d²/dx² + x² | harmonic oscillator eigenstates | Gauss-Hermite quadrature points | expandable as a low-order polynomial of x |
We want to show the basic idea of the DVR method for the example of an expansion in plane waves. We assume periodic boundary conditions, with a single period going from x = 0 to x = L. (Note that periodic boundary conditions are always implicit in a plane wave expansion with discrete momenta.)
We now wish to apply an operator Â to a wave function ψ, giving ψ' = Â ψ, where the operator is just a function of the position operator, Â = A(x̂).
The generic way to do so is to expand ψ, ψ', and Â in a (preferably orthonormal) basis. For our example, we use plane waves, φ_n(x) = e^(i k_n x) / √L. With the periodic interval being [0, L], the wave vectors are given by k_n = (-1)^n ⌈n/2⌉ 2π/L, where ⌈x⌉ denotes the smallest integer that is larger than or equal to x.
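As a small illustration (a NumPy sketch, independent of the WavePacket code itself; the function name is just for this example), the wave vectors defined by this formula can be generated as follows:

```python
import numpy as np

def plane_wave_vectors(N, L):
    """Wave vectors k_n = (-1)^n * ceil(n/2) * 2*pi/L for n = 0 ... N-1."""
    n = np.arange(N)
    return (-1.0) ** n * np.ceil(n / 2) * 2 * np.pi / L

# Example: N = 8 plane waves on an interval of length L = 2*pi gives
# k_n = 0, -1, 1, -2, 2, -3, 3, -4 (in units of 2*pi/L)
print(plane_wave_vectors(8, 2 * np.pi))
```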
Using the orthonormality of the basis functions,

<φ_n|φ_m> = ∫_0^L φ_n*(x) φ_m(x) dx = δ_nm,

we can write (assuming N even for definiteness)

ψ(x) = Σ_{n=0}^{N-1} f_n φ_n(x),   ψ'(x) = Σ_{n=0}^{N-1} f'_n φ_n(x),   a_nm = <φ_n|Â|φ_m>,

where the coefficients are related by f'_n = Σ_m a_nm f_m. Note that the sets of coefficients f_n, f'_n, a_nm give a unique numerical representation of the states and the operator. In practice, we always have to truncate the expansion at some order N, which is why this expansion is called a Finite Basis Representation (FBR). The representation is exact if ψ(x), ψ'(x), A(x) have no Fourier components beyond the cut-off, which we tacitly assume in the following.
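To make the FBR concrete, here is a small NumPy sketch (not taken from WavePacket; the test state ψ, the operator A(x), and the auxiliary integration grid are arbitrary illustrations). The coefficients f_n and the matrix elements a_nm are computed by numerical projection, and the operator is applied as a matrix-vector product:

```python
import numpy as np

L, N = 2 * np.pi, 16
n = np.arange(N)
k = (-1.0) ** n * np.ceil(n / 2) * 2 * np.pi / L    # wave vectors k_n

# Fine auxiliary grid, used here only to evaluate the projection integrals numerically
x = np.linspace(0.0, L, 4096, endpoint=False)
dx = x[1] - x[0]
phi = np.exp(1j * np.outer(k, x)) / np.sqrt(L)      # basis functions phi_n(x), one per row

psi = np.exp(np.cos(x))                             # a smooth periodic test state
A = 1.0 + 0.5 * np.cos(x)                           # a local operator A(x)

f = phi.conj() @ psi * dx                           # FBR coefficients f_n = <phi_n|psi>
a = (phi.conj() * A) @ phi.T * dx                   # matrix elements a_nm = <phi_n|A|phi_m>
f_prime = a @ f                                     # operator applied in the FBR: O(N^2)

# Reconstruct psi'(x) from the coefficients and compare with the exact A(x)*psi(x)
psi_prime = f_prime @ phi
print(np.max(np.abs(psi_prime - A * psi)))          # small, since A*psi is nearly band-limited
```

Note that both setting up the matrix a_nm and applying it scale quadratically with N; the DVR introduced next avoids the explicit matrix.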
Now, there are three small problems with this scheme that are resolved by introducing the DVR.
Instead of representing our state ψ by the coefficient vector f_n, we can store the values of the wave function at the equally spaced grid points x_k = kL/N (k = 0...N-1) without losing any information. To see this, we write out the expansion at the grid points:

F_k := ψ(x_k) = (1/√L) Σ_{n=0}^{N-1} f_n e^(i k_n x_k).

Although this is not immediately apparent, this is actually a Discrete Fourier Transform (DFT) up to a scaling factor of L^(-1/2). To show this, let us have a look at the exponent: i k_n x_k = 2πi k (-1)^n ⌈n/2⌉ / N. If we add 2πik to all negative exponents (which does not change the sum, since k is an integer), these exponents take on all values 2πikn/N with n ranging from 0 to N-1, which happens to be the definition of a DFT. The two vectors f_n and F_k are thus a Fourier-transform pair and encode the same information (this is essentially the Nyquist sampling theorem).
If we know the wave function at the points x_k, we can easily transform back by simply reverting the scaling and then doing an inverse DFT. It follows that

f_n = (√L / N) Σ_{k=0}^{N-1} ψ(x_k) e^(-i k_n x_k).
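This forward/backward relation can be checked numerically. In the following sketch (illustrative NumPy code, not WavePacket's implementation) the transformation matrix is written out explicitly instead of using an FFT, to stay close to the formulas above:

```python
import numpy as np

L, N = 2 * np.pi, 8
n = np.arange(N)
k = (-1.0) ** n * np.ceil(n / 2) * 2 * np.pi / L    # wave vectors k_n
xg = np.arange(N) * L / N                            # DVR grid points x_k = k L / N

# Transformation matrix from FBR coefficients f_n to grid values psi(x_k)
U = np.exp(1j * np.outer(xg, k)) / np.sqrt(L)        # U[k, n] = phi_n(x_k)

rng = np.random.default_rng(0)
f = rng.normal(size=N) + 1j * rng.normal(size=N)     # arbitrary FBR coefficients

F = U @ f                                            # grid values psi(x_k)
f_back = np.sqrt(L) / N * (np.exp(-1j * np.outer(k, xg)) @ F)   # inverse transform

print(np.max(np.abs(f_back - f)))                    # agrees with f to machine precision
```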
We can compare this with the generic way of calculating the coefficients, which would be to evaluate the bracket <φ_n|ψ> = (1/√L) ∫_0^L e^(-i k_n x) ψ(x) dx. By equating these two expressions, we find that

∫_0^L e^(-i k_n x) ψ(x) dx = Δx Σ_{k=0}^{N-1} e^(-i k_n x_k) ψ(x_k)

with Δx = L/N, provided the function ψ(x) has only Fourier components between k_{N-1} and k_{N-2}, or equivalently

∫_0^L f(x) dx = Δx Σ_{k=0}^{N-1} f(x_k)

if f is built up only of plane waves with wave vectors |k| < 2πN/L ≈ 2|k_{N-1}|.
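A quick numerical check of this statement (again an illustrative sketch; the test functions are arbitrary): the rectangle rule on the equally spaced grid is exact for a band-limited function, but fails as soon as a component at the wave vector 2πN/L is present, because that component is aliased onto k = 0.

```python
import numpy as np

L, N = 2 * np.pi, 16
xg = np.arange(N) * L / N        # equally spaced grid
dx = L / N

# Band-limited test function: wave vectors only up to 8*(2*pi/L), below the limit 2*pi*N/L
f_ok = 2 + np.cos(3 * 2 * np.pi * xg / L) * np.sin(5 * 2 * np.pi * xg / L)
print(dx * np.sum(f_ok) - 2 * L)          # exact integral is 2*L; difference is at rounding level

# A component at exactly k = 2*pi*N/L is aliased onto k = 0, and the rule fails:
f_bad = np.cos(N * 2 * np.pi * xg / L)
print(dx * np.sum(f_bad))                 # gives L, although the exact integral is 0
```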
So we have found an alternative representation F_k, F'_k, A_kl for our states and the operator, which can be calculated from the FBR by a discrete Fourier transform. Since the relevant quantities are now given at discrete coordinate points, this representation is called the Discrete Variable Representation (DVR). It should be pointed out that the operator takes a particularly pleasant form, A_kl = δ_kl A_k = δ_kl A(x_k), reflecting the diagonality of Â in the coordinate basis.
Note that the basis set φn used for the FBR uniquely defines the position of the grid points. For the plane wave expansion, the Nyquist theorem guarantees that we lose no information if we use an equally-spaced grid in coordinate space. For other (polynomial) expansions, there exist equivalent theorems that guarantee that no information is lost if we choose certain quadrature points for the grid (see below).
For the DVR method, we now simply take the exact relation

ψ'(x) = A(x) ψ(x)

and apply it at each grid point, i.e.,

ψ'(x_k) = A(x_k) ψ(x_k),

or, in a more technical style,

F'_k = Σ_l A_kl F_l = A(x_k) F_k.
While this seems like an obvious thing to do, it is important to understand that this is an approximation. What we have stated here is a procedure in which the DVR of ψ' is obtained by multiplying the DVR of ψ with the (diagonal) DVR of the operator Â. This works only in special cases, which we discuss in the next section.
Note that the application of the operator is now an element-wise product of two vectors, which scales linearly with the number of basis functions / grid points N. The computational cost enters when switching to the DVR and back to the FBR. If this is done by a matrix multiplication, we again have a scaling with N² and no speed gain. However, for special cases such as the plane-wave expansion used here, this switching can be done by a fast transform (FFT) that scales like N ln N. Thus, a plane-wave expansion coupled with the DVR method can be significantly faster than an expansion in a more suitable basis, even though the latter would require fewer basis functions for an accurate representation.
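For a one-dimensional Hamiltonian of the form Ĥ = T̂ + V(x̂), the complete scheme then looks roughly as follows. This is a minimal NumPy sketch, not WavePacket code; it uses NumPy's FFT index ordering, which contains the same set of wave vectors as the k_n defined above, just in a different order. The potential, mass, and grid parameters are arbitrary illustrations.

```python
import numpy as np

# Minimal sketch of the Fourier/DVR scheme: the potential acts pointwise on the
# grid (DVR), the kinetic energy acts diagonally in the plane-wave basis (FBR),
# and the FFT switches between the two representations in O(N log N).
hbar, mass = 1.0, 1.0
L, N = 20.0, 256
x = np.arange(N) * L / N                     # equally spaced DVR grid
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wave vectors of the plane-wave FBR

V = 0.5 * x**2                               # some local potential V(x) (illustrative)
psi = np.exp(-(x - L / 2) ** 2)              # some localized wave packet (not normalized)

def apply_hamiltonian(psi):
    """Return H psi for H = -hbar^2/(2m) d^2/dx^2 + V(x)."""
    t_psi = np.fft.ifft(hbar**2 * k**2 / (2 * mass) * np.fft.fft(psi))  # kinetic part via FFT
    return t_psi + V * psi                                              # potential part: pointwise product

h_psi = apply_hamiltonian(psi)
energy = np.real(np.sum(np.conj(psi) * h_psi) / np.sum(np.abs(psi) ** 2))
print(energy)   # expectation value <H> of the (unnormalized) packet
```

Each application of the Hamiltonian costs two FFTs plus two element-wise products, which is the core operation of grid-based propagation schemes.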
To obtain a condition under which the DVR method is actually exact, let us calculate an expansion coefficient f'_n in two different ways.
We can stay in the FBR the whole time, in which case we get

f'_n = Σ_m a_nm f_m = <φ_n|Â|ψ> = (1/√L) ∫_0^L e^(-i k_n x) A(x) ψ(x) dx.

The other way is to use ψ in the DVR, apply the operator using the DVR method, and transform the resulting ψ' back to the FBR. Using the transformations above, we get

f'_n = (√L / N) Σ_{k=0}^{N-1} e^(-i k_n x_k) F'_k = (1/√L) Δx Σ_{k=0}^{N-1} e^(-i k_n x_k) A(x_k) ψ(x_k).

Thus, the DVR method is equivalent to the replacement

∫_0^L e^(-i k_n x) A(x) ψ(x) dx  →  Δx Σ_{k=0}^{N-1} e^(-i k_n x_k) A(x_k) ψ(x_k).
As we found two sections earlier, this replacement is fine if the integrand is sufficiently band-limited. The plane-wave factor e^(-i k_n x) contributes wave vectors up to |k_n| ≤ |k_{N-1}|. Correspondingly, the product A(x)ψ(x) must be representable by plane waves up to a maximum wave vector of ≈ |k_{N-1}|.
Note that for plane waves, this is essentially the same requirement as if we had stayed in the FBR the whole time: there, we would require that ψ'(x) = A(x) ψ(x) fits into the finite basis, i.e., has only plane-wave components with wave vectors ≤ |k_{N-1}|. What differs between a pure FBR and the DVR method is the form of the error when there are not enough basis functions.
The general idea is similar to the discussion above.
We have a wave function whose argument is restricted to some interval [a, b], and expand our state in a set of basis functions φ_n(x) = [w(x)]^(1/2) X_n(x), where X_n(x) is an orthogonal polynomial that fulfills

∫_a^b X_n(x) X_m(x) w(x) dx = δ_nm

with the associated weight function w(x) (we take the polynomials to be normalized accordingly). We assume that we are happy with the first N polynomials, i.e., those up to degree N-1. In this case, we can use the result of Gaussian quadrature, which states that there exist N points x_i and weights w_i such that the identity

∫_a^b f(x) w(x) dx = Σ_{i=1}^{N} w_i f(x_i)

is exact if f is a polynomial of degree 2N-1 or less. Obviously, this allows us to introduce the DVR: from the knowledge of the wave function ψ at the quadrature points x_i, we can exactly calculate the expansion coefficients <φ_n|ψ> by a summation over the grid.
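As a concrete example of this quadrature statement (an illustrative NumPy sketch, not WavePacket code), take Gauss-Legendre quadrature with weight w(x) = 1 on [-1, 1], which is also the first entry in the tables below: N points integrate a random polynomial of degree 2N-1 exactly.

```python
import numpy as np

# Gauss-Legendre quadrature (weight w(x) = 1 on [-1, 1]):
# N points integrate any polynomial of degree <= 2N - 1 exactly.
N = 5
xi, wi = np.polynomial.legendre.leggauss(N)

rng = np.random.default_rng(1)
p = np.polynomial.Polynomial(rng.normal(size=2 * N))   # random polynomial of degree 2N - 1

quad = np.sum(wi * p(xi))                    # N-point quadrature
exact = p.integ()(1.0) - p.integ()(-1.0)     # exact integral over [-1, 1]
print(quad - exact)                          # zero up to rounding errors
```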
If we go through all the steps of the DVR method again, we find that it amounts to the replacement

∫_a^b X_n(x) [A(x) ψ(x) / √w(x)] w(x) dx  →  Σ_{i=1}^{N} w_i X_n(x_i) A(x_i) ψ(x_i) / √w(x_i),

where n can run from 0 to N-1. Again, the DVR method is exact if the product A(x)ψ(x)/[w(x)]^(1/2) can be represented by a polynomial up to a certain degree (N in this case). And again, this is similar to the requirement if we had stayed in the FBR (where ψ'(x)/[w(x)]^(1/2) would need to be a polynomial of degree ≤ N-1).
Depending on the choice of the polynomial and weight function, there are different quadratures. Implemented in WavePacket are the following three (Gauss-Legendre is implemented within the more general Gauss-Jacobi scheme).
DVR | Gauss-Legendre |
---|---|
[a,b] | x = cos θ ∈ [-1,1] |
w(x) | 1 |
X_n(x) | Legendre polynomials P_n(x) |
In rotational problems, we usually expand in spherical harmonics Y_lm, where, in the case of cylindrical symmetry, m remains constant. For m = 0, the spherical harmonics are just normalized and scaled Legendre polynomials as a function of x = cos θ. The maximum angular momentum l determines the maximum degree of the polynomial that we need to include in the expansion.
DVR | Gauss-Jacobi (Gauss-Gegenbauer) |
---|---|
[a,b] | x = cos θ ∈ [-1,1] |
w(x) | (1-x²)^m |
X_n(x) | Jacobi polynomials P_n^(m,m)(x), or equivalently Gegenbauer polynomials C_n^(m+1/2)(x) |
For spherical harmonics expansions where m is fixed but nonzero, the spherical harmonics Y_lm are, apart from normalization and a complex phase that drops out of all integrals, just associated Legendre functions P_n^m(x = cos θ). These can be written as (1-x²)^(m/2) times a polynomial of degree n-m. This splitting into a weight function and a polynomial leads straight to the use of Jacobi polynomials (actually Gegenbauer polynomials as a special case, but those are supposedly even less well known).
Note that we do not actually need to play around with the Jacobi polynomials; we merely need them to calculate the quadrature points. The maximum degree N of the polynomial, which is associated with the number of required grid points, is determined by the maximum angular momentum l_max through N = l_max - m.
For examples where the Legendre/Jacobi grids are used, see the Molecular Rotation Demos
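If you want to see what such a grid looks like, the quadrature points for fixed m can be obtained from standard routines, e.g., scipy.special.roots_jacobi with α = β = m, matching the weight (1-x²)^m from the table above. The sketch below is only an illustration; the counting convention (one grid point per basis function Y_lm with l = m ... l_max) follows the standard DVR construction and is not necessarily identical to WavePacket's internal bookkeeping.

```python
import numpy as np
from scipy.special import roots_jacobi

# Quadrature points for a fixed-m spherical harmonics expansion: Gauss-Jacobi
# points for alpha = beta = m, i.e., weight (1 - x^2)^m with x = cos(theta).
m, l_max = 2, 10
n_points = l_max - m + 1                 # one point per basis function Y_lm, l = m ... l_max
x, w = roots_jacobi(n_points, m, m)      # nodes x_i = cos(theta_i) and weights w_i
theta = np.arccos(x)                     # corresponding angular grid
print(np.degrees(theta))
```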
DVR | Gauss-Hermite |
---|---|
[a,b] | [-∞, ∞] |
w(x) | e^(-x²) |
X_n(x) | Hermite polynomials H_n(x) |
Hermite polynomials occur naturally in the treatment of harmonic oscillators, with some scaling of the coordinate involved, x → √(mω/ħ) x (with m, ω being the mass and angular frequency of the harmonic oscillator). The highest degree of the polynomial is determined by the maximum quantum number that is used in the expansion.
This grid is used for example in the Double well Demo
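The corresponding quadrature points and weights are again available from standard libraries, e.g., numpy.polynomial.hermite.hermgauss; the following sketch (illustrative only, not WavePacket code) also verifies the quadrature on a simple Gaussian integral. For a physical oscillator, the coordinate would still have to be rescaled by √(mω/ħ) as mentioned above.

```python
import numpy as np

# Gauss-Hermite quadrature (weight exp(-x^2) on the real line): the grid points
# used for an expansion in harmonic oscillator eigenstates.
N = 8
x, w = np.polynomial.hermite.hermgauss(N)   # N nodes and weights

# Check: the integral of x^2 * exp(-x^2) over the real line is sqrt(pi)/2
print(np.sum(w * x**2), np.sqrt(np.pi) / 2)
```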
The DVR method can be extended to arbitrary basis sets. What you need is a procedure that creates a quadrature scheme for any number N of basis functions. The quadrature has to fulfill

∫ f(x) dx = Σ_{i=1}^{N} w_i f(x_i)

for all f(x) that can be expanded in the first 2N or so basis functions. This is discussed in detail in the paper by J. C. Light mentioned in the introduction.
Note, however, that there are two caveats when doing so. The first is that such a quadrature has to exist, and you actually have to find it.
Secondly, when deriving the exactness conditions, we always put the product A(x)ψ(x) into the quadrature scheme. Now, polynomial and plane-wave basis sets are well suited for multiplication: if you multiply two polynomials of degrees n_1 and n_2, you get a polynomial of degree n_1 + n_2, and similarly, if you have two functions that are band-limited with maximum wave vectors |k_1|, |k_2|, the product is band-limited with maximum wave vector |k_1| + |k_2|. This directly translates into the convenient property that the DVR method for these basis expansions is roughly as accurate as if we had stayed in the FBR (more precisely: there is no error if ψ, ψ', A can be represented with N basis functions).
If, however, your basis converges slowly with respect to multiplication, i.e., if the product φkφl requires much more than the first k+l basis functions for a faithful representation, then this basis is not well-suited for the DVR method.
Wiki: Demos.DoubleWell.Bound1D
Wiki: Demos.MolRotation.Main
Wiki: Demos.MolRotation.PendularStates
Wiki: Demos.MolRotation.Test_DVR
Wiki: Demos.MolVibration.H3+
Wiki: Numerics.Fourier_ABS
Wiki: Numerics.Main
Wiki: Numerics.Schroedinger
Wiki: Numerics.TISE