Better than Slater-determinants: center-of-mass free basis sets for few-electron quantum dots


Error analysis of eigenenergies of the standard configuration interaction (CI) method (right black lines). The left colored lines are obtained by explicitly handling all spurious states. The arrows point out the increasing error of the CI approach with increasing center-of-mass admixing.

Solving the interacting many-body Schrödinger equation is a hard problem. Even restricting the spatial domain to a two-dimensional plane does not lead to analytic solutions; the trouble-makers are the mutual particle-particle interactions. In the following we consider electrons in a quasi two-dimensional electron gas (2DEG), which are further confined either by a magnetic field or by an external harmonic-oscillator potential. For two electrons, this problem is solvable for specific values of the Coulomb interaction due to a hidden symmetry in the Hamiltonian; see the review by A. Turbiner and our application to two interacting electrons in a magnetic field.

For three and more electrons (to my knowledge) no analytical solutions are known. One standard computational approach is the configuration interaction (CI) method: the Hamiltonian is diagonalized in a variational trial space of Slater-determinantal states. Each Slater determinant consists of products of single-particle orbitals. Due to computer resource constraints, only a certain number of Slater determinants can be included in the basis set. One possibility is to include only trial states up to a certain excitation level of the non-interacting problem.
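To get a feeling for the growth of the basis (a generic counting argument with my own illustrative numbers, not figures from our paper): distributing N electrons over M single-particle spin-orbitals yields

\[ \dim \mathcal{H}_{\mathrm{CI}} = \binom{M}{N}, \qquad \text{e.g.}\quad \binom{40}{4} = 91390 \]

Slater determinants for four electrons in 40 orbitals, before any symmetry reduction – and every added orbital shell multiplies the count further.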

The use of Slater determinants as the CI basis set introduces severe distortions in the eigenenergy spectrum due to the intrusion of spurious states, as we will discuss next. Spurious states have been extensively analyzed in the few-body problems arising in nuclear physics but have rarely been mentioned in solid-state physics, where they do arise in quantum-dot systems. The basic defect of the Slater-determinantal CI method is that it brings along center-of-mass excitations. During the diagonalization, the center-of-mass excitations get admixed via the Coulomb interaction and lead to an inflated basis size as well as a loss of precision for the eigenenergies of the excited states. Increasing the basis set does not uniformly reduce the error across the spectrum, since the enlarged CI basis set brings along states of high center-of-mass excitation. The cut-off energy then restricts the remaining basis size for the relative part.

The cleaner and leaner way is to separate the center-of-mass excitations from the relative-coordinate excitations, since the Coulomb interaction acts only along the relative coordinates. In fact, the center-of-mass part can be split off and solved analytically in many cases. The construction of the relative-coordinate basis states requires group-theoretical methods and is carried out for four electrons in our article Interacting electrons in a magnetic field in a center-of-mass free basis. For three electrons, the importance of a spurious-state free basis set was emphasized by R. Laughlin and is one of the design principles behind the Laughlin wave function.
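To see why the separation works, here is the textbook case of two electrons in a two-dimensional harmonic trap (a standard result, quoted in center-of-mass coordinates R = (r₁+r₂)/2, P = p₁+p₂ and relative coordinates r = r₁−r₂, p = (p₁−p₂)/2):

\[ H = \sum_{i=1,2}\left(\frac{\mathbf{p}_i^2}{2m} + \frac{m\omega^2}{2}\mathbf{r}_i^2\right) + \frac{e^2}{4\pi\varepsilon_0 |\mathbf{r}_1-\mathbf{r}_2|} = \underbrace{\frac{\mathbf{P}^2}{2M} + \frac{M\omega^2}{2}\mathbf{R}^2}_{\text{center of mass}} + \underbrace{\frac{\mathbf{p}^2}{2\mu} + \frac{\mu\omega^2}{2}\mathbf{r}^2 + \frac{e^2}{4\pi\varepsilon_0 r}}_{\text{relative motion}}, \]

with M = 2m and μ = m/2. The Coulomb term lives entirely in the relative part, so every center-of-mass excitation kept in the basis enlarges the diagonalization without improving the interacting eigenstates.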

Slow or fast transfer: bottleneck states in light-harvesting complexes


Light-harvesting complex II, crystal structure 1RWT from Liu et al. (Nature 2004, vol. 428, p. 287), rendered with VMD. The labels denote the designation of the chlorophyll sites (601-614). Chlorophylls 601 and 605-609 are of chlorophyll b type, the others of type a.

In the previous post I described some of the computational challenges for modeling energy transfer in the light harvesting complex II (LHCII) found in spinach. Here, I discuss the results we have obtained for the dynamics and choreography of excitonic energy transfer through the chlorophyll network. Compared to the Fenna-Matthews-Olson complex, LHCII has twice as many chlorophylls per monomeric unit (labeled 601-614 with chlorophyll a and b types).
Previous studies of exciton dynamics had to stick to simple exponential-decay models based on either Redfield or Förster theory for describing the transfer from the Chl b to the Chl a sites. The results are neither satisfying nor conclusive, since the transfer time differs widely depending on the method chosen (tens of picoseconds vs. picoseconds!).


Exciton dynamics in LHCII computed with various methods. HEOM denotes the most accurate method, while Redfield and Förster approximations fail.

Resolving the discrepancies between the various approximate methods requires a more accurate approach. With the accelerated HEOM at hand, we revisited the problem and calculated the transfer rates. We find slower rates than given by the Redfield expressions. A combined Förster-Redfield description is possible in hindsight by using HEOM to identify a suitable cut-off parameter (Mcr = 30 cm⁻¹ in this specific case).

Since the energy transfer is driven by the coupling of electronic degrees of freedom to vibrational ones, it is important to assess how the vibrational mode distribution affects the transfer. In particular, it has been proposed that specifically tuned vibrational modes might promote a fast relaxation. We find no strong impact of such modes on the transfer; rather we see (independent of the detailed vibrational structure) several bottleneck states, which act as a transient reservoir for the exciton flux. The details and distribution of the bottleneck states depend strongly on the parameters of the electronic couplings and differ for the two most commonly discussed LHCII models, proposed by Novoderezhkin/Marin/van Grondelle and by Müh/Madjet/Renger – both are considered in the article Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alán Aspuru-Guzik).
Again, the correct assignment of the bottleneck states requires using HEOM and looking beyond the approximate rate equations.

High-performance OpenCL code for modeling energy transfer in spinach

With the increasing computational power of massively-parallel computers, more accurate modeling of the energy-transfer dynamics in larger and more complex photosynthetic systems (= light-harvesting complexes) becomes feasible – provided we choose the right algorithms and tools.


OpenCL cross-platform performance for tracking energy transfer in the light-harvesting complex II found in spinach, see Fig. 1 in the article. Shorter runtimes mean higher performance. The program code was originally written for massively-parallel GPUs, but also performs well on the AMD Opteron setup. The Intel MIC OpenCL variant does not reach the peak performance (a different data layout seems to be required to benefit from autovectorization).

The diverse hardware found in high-performance computing (HPC) seemingly requires rewriting program code from scratch depending on whether we are targeting multi-core CPU systems, integrated many-core platforms (Xeon Phi/MIC), or graphics processing units (GPUs).

To avoid fragmenting our open quantum-system dynamics workhorse (see the previous GPU-HEOM posts) across the various HPC platforms, we have transferred the GPU-HEOM CUDA code to the Open Compute Language (OpenCL). The resulting QMaster tool is described in our just-published article Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alán Aspuru-Guzik). This post details the computational challenges and lessons learnt; the application to the light-harvesting complex II found in spinach will be the topic of the next post.

In my experience, it is not uncommon to develop a nice GPU application, for instance with CUDA, which later on is scaled up to handle bigger problem sizes. With increasing problem size the memory demands grow, and even the 12 GB provided by the Kepler K40 are finally exhausted. Upon reaching this point, two options are possible: (a) to distribute the memory across different GPU devices, or (b) to switch to architectures which provide more device memory. Option (a) requires substantial changes to existing program code to manage the distributed memory access, while option (b) in combination with OpenCL requires (in the best case) only adapting the kernel-launch configuration to the different platforms.
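To make the memory wall concrete, here is a back-of-the-envelope estimate (my own illustrative numbers, using the standard counting of auxiliary density operators for a hierarchy with K exponential bath terms truncated at depth L, stored as complex-double N×N matrices):

\[ N_{\mathrm{ADO}} = \binom{K+L}{L}, \qquad \text{memory} \approx N_{\mathrm{ADO}} \times N^2 \times 16~\text{bytes}. \]

Taking N = 14 sites (one LHCII monomer), one bath term per site (K = 14) and depth L = 12 gives \(\binom{26}{12} \approx 9.7\times 10^6\) auxiliary matrices, i.e. roughly 30 GB – well past a single 12 GB K40.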


The OpenCL device fission extension allows us to investigate the scaling of the QMaster code with the number of CPU cores. We observe a linear scaling up to 48 cores.

QMaster implements an extension of the hierarchical equation of motion (HEOM) method originally proposed by Tanimura and Kubo, which involves many (small) matrix-matrix multiplications. For GPU applications, the use of local memory and the optimal thread grids for fast matrix-matrix multiplications have been described before and are used in QMaster (and the publicly available GPU-HEOM tool on nanohub.org). While for GPUs the best performance is achieved by using shared/local memory and assigning one thread to each matrix element, the multi-core CPU OpenCL variant performs better with fewer threads, each getting more work done. Therefore, for the CPU machines we use a thread grid which computes one complete matrix product per thread (this is somewhat similar to the “naive” approach given in NVIDIA’s OpenCL programming guide, chapter 2.5), as sketched below. This strategy did not work very well for the Xeon Phi/MIC OpenCL case, which requires additional data-structure changes, as we learnt from discussions with the distributed-algorithms and HPC experts in the group of Prof. Reinefeld at the Zuse Institute in Berlin.
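To illustrate the CPU-friendly layout, here is a minimal OpenCL sketch of the one-matrix-product-per-work-item strategy (hypothetical kernel and variable names, plain float2 complex arithmetic – not the actual QMaster kernel):

// Minimal sketch: each work-item computes one full n x n complex matrix
// product C[m] = A[m] * B[m] out of a batch of products.
// float2 stores (real, imag); matrices are row-major and densely packed.
__kernel void batched_matmul_per_item(__global const float2 *A,
                                      __global const float2 *B,
                                      __global float2 *C,
                                      const int n)
{
    const int m   = get_global_id(0); // index of the matrix pair for this work-item
    const int off = m * n * n;        // offset of matrix m in the flat batch arrays

    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float2 sum = (float2)(0.0f, 0.0f);
            for (int k = 0; k < n; ++k) {
                const float2 a = A[off + i * n + k];
                const float2 b = B[off + k * n + j];
                sum.x += a.x * b.x - a.y * b.y; // complex multiply-accumulate
                sum.y += a.x * b.y + a.y * b.x;
            }
            C[off + i * n + j] = sum;
        }
}

On the GPU the same batch would instead be covered by one work-item per matrix element, with tiles staged in local memory; only the work decomposition changes, not the arithmetic.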
The good performance and scaling across the 64-core AMD Opteron workstation positively surprised us and lays the groundwork for investigating the validity of approximations to the energy-transfer equations in the spinach light-harvesting system – the topic of the next post.

Flashback to the 80s: filling space with the first quasicrystals

This post provides a historical and conceptual perspective on the theoretical discovery of non-periodic 3d space-fillings by Peter Kramer, which were later found experimentally and are now called quasicrystals. See also these previous blog entries for more quasicrystal references, and more background material here.
The following post is written by Peter Kramer.

When sorting out old texts and figures of mine from 1981, published in Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells, Acta Cryst. A38, 257-264 (1982), I came across the figure of a regular pentagon of edge length L, which I denote as p(L). In the left figure its red-colored edges are star-extended up to their intersections. Straight connection of these intersection points creates a larger blue pentagon. Its edges are scaled up by τ², with τ the golden-section number, so we call the larger pentagon p(τ²L). This blue pentagon is composed of the old red one plus ten isosceles triangles with golden proportion of their edge lengths. Five of them have edges t₁(L): (L, τL, τL), five have edges t₂(L): (τL, τL, τ²L). We find from Fig. 1 that these golden triangles may be composed face-to-face into their τ-extended copies as t₁(τL) = t₁(L) + t₂(L) and t₂(τL) = t₁(L) + 2 t₂(L).

Moreover we realize from the figure that the pentagon p(τ²L) can also be composed from golden triangles, as p(τ²L) = t₁(τL) + 3 t₂(τL) = 4 t₁(L) + 7 t₂(L).

This suggests that the golden triangles t₁, t₂ can serve as elementary cells of a triangle tiling to cover any range of the plane and provide the building blocks of a quasicrystal. Indeed we did prove this long-range property of the triangle tiling (see Planar patterns with fivefold symmetry as sections of periodic structures in 4-space).
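In modern terminology, the two composition rules define a substitution (inflation) matrix, and a quick check (my own bookkeeping, not from the original paper) confirms the τ² inflation:

\[ \begin{pmatrix} t_1(\tau L) \\ t_2(\tau L) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} t_1(L) \\ t_2(L) \end{pmatrix}, \qquad \lambda_\pm = \frac{3 \pm \sqrt{5}}{2} = \tau^{\pm 2}, \]

so the leading eigenvalue τ² is exactly the area scale factor expected when all edge lengths grow by τ, and the irrational ratio of tile frequencies in the corresponding eigenvector is incompatible with a periodic arrangement of the two tiles.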

An icosahedral tiling from star extension of the dodecahedron.

Star extension of the dodecahedron d(L) to the icosahedron i(τ²L) and further to d(τ³L) and i(τ⁵L), shown in Fig. 3 of the 1982 paper. The vertices of these polyhedra are marked by filled circles; extensions of edges are shown except for d(L).

In the same paper, I generalized the star extension from the 2D pentagon to the 3D dodecahedron d(L) of edge length L (see next figure) by the following prescription:

  • star extend the edges of this dodecahedron to their intersections
  • connect these intersections to form an icosahedron

The next star extension produces a larger dodecahedron d(τ³L), with edges scaled by τ³. In the composition of the larger dodecahedron I found four elementary polyhedral shapes, shown below. Even more amusing, I also resurrected the paper models I constructed in 1981 to actually demonstrate the complete space filling!
These four polyhedra compose their copies by scaling with τ³. As in the 2D case, arbitrary regions of 3D space can be covered by the four tiles.

The four elementary cells shown in the 1982 paper, Fig. 4. The four shapes are named dodecahedron (d), skene (s), aetos (a) and tristomos (t). The paper models from 1981 are still around in 2014 and complete enough to fill 3D space without gaps. You can spot all shapes (d, s, a, t) in various scalings, and they all systematically and gaplessly fill the large dodecahedron shell at the back of the table.
Quasiperiodicity.

The only feature missing for quasicrystals is aperiodic long-range order, which eventually leads to sharp diffraction patterns of 5- or 10-fold point symmetry, forbidden for old-style crystals. In my construction shown here I strictly preserved central icosahedral symmetry. Non-periodicity then followed because full icosahedral symmetry and periodicity in 3D are incompatible.

In 1983 we found a powerful alternative construction of icosahedral tilings, independent of the assumption of central symmetry: the projection method from 6D hyperspace (On periodic and non-periodic space fillings of Eᵐ obtained by projection). This projection establishes the quasiperiodicity of the tilings, analyzed in line with the work Zur Theorie der fast periodischen Funktionen (i-iii) [On the theory of almost periodic functions] of Harald Bohr from 1925, as a variant of aperiodicity (more background material here).

Tutorial #1: simulate 2d spectra of light-harvesting complexes with GPU-HEOM @ nanoHub

The computation and prediction of two-dimensional (2d) echo spectra of photosynthetic complexes is a daunting task and requires enormous computational resources – if done without drastic simplifications. However, such computations are absolutely required to test and validate our understanding of energy transfer in photosynthesis. You can find some background material in the recently published lecture notes Modelling excitonic-energy transfer in light-harvesting complexes (arxiv version) of the Latin American School of Physics Marcos Moshinsky.
The ability to compute 2d spectra of photosynthetic complexes without resorting to strong approximations is, to my knowledge, an exclusive privilege of the hierarchical equations of motion (HEOM) method, due to its superior performance on massively-parallel graphics processing units (GPUs). You can find some background material on the GPU performance in the two conference talks Christoph Kreisbeck and I presented at the GTC 2014 conference (recorded talk, slides) and at the first nanoHub users meeting.



Computed 2d spectra for the FMO complex for 0 picosecond delay time (upper panel) and 1 ps (lower panel). The GPU-HEOM computation takes about 40 min on the nanohub.org platform and includes all six Liouville pathways and averages over 4 spatial orientations.

  1. log in at nanoHub.org (it’s free!)
  2. switch to the gpuheompop tool
  3. click the Launch Tool button (java required)
  4. for this tutorial we use the example input for “FMO coherence, 1 peak spectral density”.
    You can select this preset from the Example selector.
  5. we stick with the provided Exciton System parameters and only change the temperature to 77 K to compare the results with our published data.
  6. in the Spectral Density tab, leave all parameters at the suggested values
  7. to compute 2d spectra, switch to the Calculation mode tab
  8. for compute: choose “two-dimensional spectra”. This brings up input masks for setting the directions of all dipole vectors; we stick with the provided values. However, we select Rotational averaging: “four shot rotational average” and activate all six Liouville pathways by setting ground st[ate] bleach reph[asing], stim[ulated] emission reph[asing], and excited st[ate] abs[orption] to yes, as well as their non-rephasing counterparts (attention! this might require resizing the input mask by pulling at the lower right corner)
  9. That’s all! Hit the Simulate button and your job will be executed on the Carter GPU cluster at Purdue University. The simulation takes about 40 minutes of GPU time, which is orders of magnitude faster than any other published method with the same accuracy. You can close and reopen your session in between.
  10. Voilà: your first FMO spectrum appears.
  11. Now it’s time to change parameters. What happens at higher temperature?
  12. If you like the results or use them in your work for comparison, we (and the folks at nanoHub who generously develop and provide the nanoHub platform and GPU computation time) appreciate a citation. To make this step easy, a DOI number and reference information are listed at the bottom of the About tab of the tool page.

With GPU-HEOM we – and now you (!) – can not only calculate the 2d echo spectra of the Fenna-Matthews-Olson (FMO) complex, but also reveal the strong link between the continuum part of the vibrational spectral density and the prevalence of long-lasting electronic coherences, as described in my previous posts.

GPU and cloud computing conferences in 2014

Two conferences related to GPU and cloud computing are currently open for registration. I will be attending and presenting at both; please email me if you want to get in touch at the meetings.

Oscillations in two-dimensional spectroscopy


Transition from electronic coherence to a vibrational mode made visible by Short Time Fourier Transform (see text).

Over the last years, a debate has been going on whether the long-lasting oscillatory signals observed in two-dimensional spectra reflect vibrational or electronic coherences, and how the functioning of the molecule is affected. Christoph Kreisbeck and I have performed a detailed theoretical analysis of oscillations in the Fenna-Matthews-Olson (FMO) complex and in a model three-site system. As explained in a previous post, the prerequisites for long-lasting electronic coherences are two features of the continuous part of the vibronic mode density: (i) a small slope towards zero frequency, and (ii) a coupling at the excitonic eigenenergy differences (ΔE) for relaxation. Both requirements are met by the mode density of the FMO complex, and the computationally demanding calculation of two-dimensional spectra of the FMO complex indeed predicts long-lasting cross-peak oscillations with a period matching h/ΔE at room temperature (see our article Long-Lived Electronic Coherence in Dissipative Exciton-Dynamics of Light-Harvesting Complexes or arXiv version). The persistence of the oscillations stems from a robust mechanism and does not require adding any additional vibrational modes at energies ΔE (the general background mode density is enough to support the relaxation toward a thermal state).

But what happens if, in addition to the background vibronic mode density, additional vibronic modes are placed in the vicinity of the frequencies related to electronic coherences? This fine-tuning model is sometimes discussed in the literature as an alternative mechanism for long-lasting oscillations of vibronic nature. Again, the answer requires actually computing two-dimensional spectra and carefully analyzing the possible chains of laser-molecule interactions. Due to the special way two-dimensional spectra are measured, the observed signal is a superposition of at least three pathways, which have different sensitivities for distinguishing electronic and vibronic coherences. Being theoretical physicists now pays off, since we can calculate and analyze the three pathways separately (see our recent publication Disentangling Electronic and Vibronic Coherences in Two-Dimensional Echo Spectra or arXiv version). One of the pathways leads to an enhancement of vibronic signals, while the combination of the remaining two diminishes electronic coherences otherwise clearly visible within each of them.

Our conclusion is that estimates of decoherence times from two-dimensional spectroscopy might actually underestimate the persistence of electronic coherences, which help the transport through the FMO network. The fine-tuning and addition of specific vibrational modes leaves its marks at certain spots of the two-dimensional spectra, but does not destroy the electronic coherence, which is still there, as a Short Time Fourier Transform of the signal reveals.
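As a footnote, a quick worked example of the period h/ΔE mentioned above (my own illustrative number; ΔE = 150 cm⁻¹ is a typical excitonic splitting scale, not a value from the article):

\[ T = \frac{h}{\Delta E} = \frac{1}{c\,\tilde\nu} \approx \frac{1}{(3\times 10^{10}\,\mathrm{cm\,s^{-1}})\,(150\,\mathrm{cm^{-1}})} \approx 0.22\ \mathrm{ps}, \]

so cross-peak oscillations surviving for a picosecond correspond to several full periods.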

Computational physics on GPUs: writing portable code


Runtime in seconds for our GPU-HEOM code on various hardware and software platforms.


I am preparing my presentation for the simGPU meeting next week in Freudenstadt, Germany, and performed some benchmarks.
In the previous post I described how to get an OpenCL program running on a smartphone with GPU. By now Christoph Kreisbeck and I are getting ready to release our first smartphone GPU app for exciton dynamics in photosynthetic complexes, more about that in a future entry.
Getting the same OpenCL kernel running on laptop GPUs, workstation GPUs and CPUs, and smartphones/tablets is a bit tricky, due to the different initialisation procedures and the differences in the optimal block sizes for the thread grid. In addition, on a smartphone the local memory is even smaller than on a desktop GPU, and double-precision floating-point support is missing. The situation reminds me a bit of the “earlier days” of GPU programming in 2008.
Besides being a proof of concept, I see writing portable code as a sort of insurance against further changes of hardware (albeit always with the goal of sticking with the massively parallel programming paradigm). I am also amazed how fast smartphones are gaining computational power through GPUs!

Same comparison for smaller memory consumption. Note the drop in OpenCL performance for the NVIDIA K20c GPU.


Here are some considerations and observations:

  1. Standard CUDA code can be ported to OpenCL within a reasonable time-frame. I found the following resources helpful:
    AMD’s porting remarks
    Matt Scarpino’s OpenCL blog
  2. The comparison of OpenCL vs. CUDA performance for the same algorithm can reveal some surprises on NVIDIA GPUs. While on our C2050 GPU the OpenCL version runs a bit faster than the CUDA version for the same problem, on a K20c system the OpenCL program can take several times longer than the CUDA code for certain problem sizes (with no changes in the basic algorithm or workgroup sizes).
  3. The comparison with a CPU version running on 8 cores of the Intel Xeon machine is possible and shows clearly that the GPU code is always faster, but requires a certain minimal system size to reach its full performance.
  4. I am looking forward to running the same code on the Intel Xeon Phi systems now available with OpenCL drivers, see also this blog.

[Update June 22, 2013: I updated the graphs to show the 8-core results using Intel’s latest OpenCL SDK. This brings the CPU runtimes down by a factor of 2! Meanwhile I am eagerly awaiting the possibility to run the same code on the Xeon Phis...]

Computational physics on the smartphone GPU


Screenshot of the interacting many-body simulation on the Nexus 4 GPU.


[Update August 2013: Google has removed the OpenCL library with Android 4.3. You can find an interesting discussion here. Google seems to push for its own RenderScript protocol. I will not work with RenderScript, since my priorities are platform independence and sticking with widely adopted standards to avoid fragmentation of my code base.]
I recently got hold of a Nexus 4 smartphone, which features a GPU (Qualcomm Adreno 320) and conveniently ships with an already-installed OpenCL library. With minimal changes I got the previously discussed many-body program code related to the fractional quantum Hall effect up and running. No rooting of the phone is required to run the code example. Please use the following recipe at your own risk; I don’t accept any liabilities. Here is what I did:

  1. Download and unpack the Android SDK from Google for cross-compilation (my host computer runs Mac OS X).
  2. Download and unpack the Android NDK from Google to build minimal C/C++ programs without Java (no real app).
  3. Install the standalone toolchain from the Android NDK. I used the following command for my installation:

    /home/tkramer/android-ndk-r8d/build/tools/make-standalone-toolchain.sh \
    --install-dir=/home/tkramer/android-ndk-standalone
  4. Put the OpenCL programs and source code in an extra directory, as described in my previous post
  5. Change one line in the cl.hpp header: instead of including <GL/gl.h>, change to <GLES/gl.h>. Note: I am using the “old” cl.hpp bindings 1.1; further changes might be required for the newer bindings, see for instance this helpful blog
  6. Transfer the OpenCL library from the phone to a subdirectory lib/ inside your source code. To do so, append the path to your SDK tools and use the adb command:

    export PATH=/home/tkramer/adt-bundle-mac-x86_64-20130219/sdk/platform-tools:$PATH
    adb pull /system/lib/libOpenCL.so
  7. Cross-compile your program. I used the following script; please feel free to provide shorter versions. Adjust the include directories and library directories for your installation.

    rm plasma_disk_gpu
    /home/tkramer/android-ndk-standalone/bin/arm-linux-androideabi-g++ -v -g \
    -DCL_USE_DEPRECATED_OPENCL_1_1_APIS -DGPU \
    -I. \
    -I/home/tkramer/android-ndk-standalone/include/c++/4.6 \
    -I/home/tkramer/android-ndk-r8d/platforms/android-5/arch-arm/usr/include \
    -Llib \
    -march=armv7-a -mfloat-abi=softfp -mfpu=neon \
    -fpic -fsigned-char -fdata-sections -funwind-tables -fstack-protector \
    -ffunction-sections -fdiagnostics-show-option -fPIC \
    -fno-strict-aliasing -fno-omit-frame-pointer -fno-rtti \
    -lOpenCL \
    -o plasma_disk_gpu plasma_disk.cpp
  8. Copy the executable to the data dir of your phone to be able to run it. This can be done without rooting the phone with the nice SSHDroid app, which by default transfers to /data . Don’t forget to copy the kernel .cl files:

    scp -P 2222 integrate_eom_kernel.cl root@192.168.0.NNN:
    scp -P 2222 plasma_disk_gpu root@192.168.0.NNN:
  9. ssh into your phone and run the GPU program:
    ssh -p 2222 root@192.168.0.NNN
    ./plasma_disk_gpu 64 16
  10. Check the resulting data files. You can copy them for example to the Download path of the storage and use gnuplot (droidplot app) to plot them.

A short note about runtimes. On the Nexus 4 device the program runs for about 12 seconds, on a MacBook Pro with NVIDIA GT650M it completes in 2 seconds (in the example above the equations of motion for 16*64=1024 interacting particles are integrated). For larger particle numbers the phone often locks up.

An alternative way to transfer files to the device is to connect via USB cable and to install the Android Terminal Emulator app. Then run:

cd /data/data/jackpal.androidterm
mkdir gpu
chmod 777 gpu

On the host computer use adb to transfer the compiled program and the .cl kernel and start a shell to run the kernel

adb push integrate_eom_kernel.cl /data/data/jackpal.androidterm/gpu/
adb push plasma_disk_gpu /data/data/jackpal.androidterm/gpu/

You can either run the program within the terminal emulator or use the adb shell

adb shell
cd /data/data/jackpal.androidterm/gpu/
./plasma_disk_gpu 64 16

Let’s see in how many years today’s desktop GPUs can be found in smartphones, and which computational physics codes can be run!

Computational physics & GPU programming: exciton lab for light-harvesting complexes (GPU-HEOM) goes live on nanohub.org


User interface of the GPU-HEOM tool for light-harvesting complexes at nanohub.org.

Christoph Kreisbeck and I are happy to announce the public availability of the Exciton Dynamics Lab for Light-Harvesting Complexes (GPU-HEOM) hosted on nanohub.org. You need to register a user account (it’s free), and then you are ready to use GPU-HEOM for the Frenkel exciton model of light-harvesting complexes. In release 1.0 we support

  • calculating population dynamics 
  • tracking coherences between two eigenstates
  • obtaining absorption spectra
  • two-dimensional echo spectra (including excited state absorption)
  • … and all this for general vibronic spectral densities parametrized by shifted Lorentzians (one common form of such a parametrization is sketched below).
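For orientation, a frequently used shifted-Lorentzian (shifted Drude-Lorentz) parametrization reads as follows; the symbols λₖ (coupling), νₖ (width) and Ωₖ (peak position) are generic placeholders, so please consult the tool’s supporting document for the exact convention implemented:

\[ J(\omega) = \sum_k \left[ \frac{\nu_k \lambda_k\, \omega}{\nu_k^2 + (\omega - \Omega_k)^2} + \frac{\nu_k \lambda_k\, \omega}{\nu_k^2 + (\omega + \Omega_k)^2} \right]. \]

Each term adds one tunable peak, so a handful of shifted Lorentzians can already approximate a measured spectral density quite closely.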

I will post some more entries here describing how to use the tool for understanding how the spectral density affects the lifetime of electronic coherences (see also this blog entry).
In the supporting document section you find details of the implemented method and the assumptions underlying the tool. We appreciate your feedback for further improving the tool.
We are grateful for the support of Prof. Gerhard Klimeck, Purdue University, director of the Network for Computational Nanotechnology, in bringing GPU computing to nanohub (I believe our tool is the first GPU-enabled one at nanohub).

If you want to refer to the tool you can cite it as:

Christoph Kreisbeck; Tobias Kramer (2013), “Exciton Dynamics Lab for Light-Harvesting Complexes (GPU-HEOM),” https://nanohub.org/resources/gpuheompop. (DOI:10.4231/D3RB6W248).

and you find further references in the supporting documentation.

I very much encourage my colleagues developing computer programs for theoretical physics and chemistry to make them available on platforms such as nanohub.org. In my view, it greatly facilitates the comparison of different approaches and is in the spirit of advancing science by sharing knowledge and providing reproducible data sets.

Good or bad vibrations for the Fenna-Matthews-Olson complex?


Time-evolution of the coherence for the FMO complex (eigenstates 1 and 5), calculated with GPU-HEOM by Kreisbeck and Kramer, J. Phys. Chem. Lett. 3, 2828 (2012).


Due to its known structure and relative simplicity, the Fenna-Matthews-Olson complex of green sulfur bacteria provides an interesting test-case for our understanding of excitonic energy transfer in a light-harvesting complex.

The experimental pump-probe spectra (discussed in my previous post catching and tracking light: following the excitations in the Fenna-Matthews-Olson complex) show long-lasting oscillatory components, and this finding has been a puzzle for theoreticians and led to a refinement of the well-established models. These models show a reasonable agreement with the data, and the rate equations explain the relaxation and transfer of excitonic energy to the reaction center.

However, the rate equations are based on estimates for the relaxation and dephasing rates. As Christoph Kreisbeck and I discuss in our article Long-Lived Electronic Coherence in Dissipative Exciton-Dynamics of Light-Harvesting Complexes (arxiv version), an exact calculation with GPU-HEOM following the best data for the Hamiltonian allows one to determine where the simple approach is insufficient and to identify a key factor supporting electronic coherence:


Important features in the spectral density of the FMO complex related to the persistence of cross-peak oscillations in 2d spectra.


It’s the vibronic spectral density – redrawn (in a different unit convention, multiplied by ω²) from the article by M. Wendling from the group of Prof. Rienk van Grondelle. We undertook a major effort to stay in our calculations as close to the measured shape of the spectral density as the GPU-HEOM method allows. By comparing results for different forms of the spectral density, we identify how the different parts of the spectral density lead to distinct signatures in the oscillatory coherences, as illustrated in the figure on the right. To get long-lasting oscillations and finally to relax, three ingredients are important:

  1. a small slope towards zero frequency, which suppresses the pure dephasing.
  2. a high plateau in the region of the exciton energy differences, so that these transitions are well coupled. This leads to relaxation.
  3. the peaked structures induce a “very-long-lasting” oscillatory component, which is shown in the first figure. In our analysis we find that this is a persistent, but rather small (<0.01) modulation.

2d spectra are smart objects

FMO spectrum calculated with GPU-HEOM

FMO spectrum calculated with GPU-HEOM for a 3 peak approximation of the measured spectral density, including disorder averaging but no excited state absorption.

The calculation of 2d echo spectra requires considerable computational resources. Since theoretically calculated 2d spectra are needed to check how well theory and experiment coincide, I conclude by showing a typical spectrum we obtain (including static disorder, but no excited-state absorption for this example). One interesting finding is that 2d spectra are able to differentiate between the different spectral densities. For example, for a single-peak Drude-Lorentz spectral density (sometimes chosen for computational convenience), the wrong peaks oscillate and the lifetime of cross-peak oscillations is short (and becomes even shorter with longer vibronic memory). But this is for the experts only; see the supporting information of our article.

Are vibrations good or bad? Probably both… The pragmatic answer is that the FMO complex lives in an interesting parameter regime. The exact calculations within the Frenkel exciton model do confirm the well-known dissipative energy-transfer picture. But on the other hand, the specific spectral density of the FMO complex supports long-lived coherences (at least if the light source is a laser beam), which require considerable theoretical and experimental efforts to be described and measured. Whether the observed coherence has any biological relevance is an entirely different topic… maybe the green sulfur bacteria are just enjoying a glimpse into Schrödinger’s world of probabilistic uncertainty.

Computational physics & GPU programming: interacting many-body simulation with OpenCL

Trajectories in a two-dimensional interacting plasma simulation, reproducing the density and pair-distribution function of a Laughlin state relevant for the quantum Hall effect. Figure taken from Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system.

In the second example of my series on GPU programming for scientists, I discuss a short OpenCL program which you can compile and run on the CPU and on GPUs of various vendors. This gives me the opportunity to perform some cross-platform benchmarks for a classical plasma simulation. You can expect dramatic (several-hundred-fold) speed-ups on GPUs for this type of system. This is one of the reasons why molecular-dynamics codes can gain quite a lot by incorporating the massively parallel-programming paradigm in their algorithmic foundations.

The Open Computing Language (OpenCL) is relatively similar to its CUDA counterpart; in practice, the setup of an OpenCL kernel requires some housekeeping work, which might make the code look a bit more involved. I have based my interacting-electrons calculation of transport in the Hall effect on an OpenCL code. Another example is An OpenCL implementation for the solution of the time-dependent Schrödinger equation on GPUs and CPUs (arxiv version) by C. Ó Broin and L.A.A. Nikolopoulos.

Now to the coding of a two-dimensional plasma simulation, which is inspired by Laughlin’s mapping of a many-body wave function to an interacting classical ersatz dynamics (for some context see my short review Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system on the arxiv).


Computational physics & GPU programming: Solving the time-dependent Schrödinger equation

I start my series on the physics of GPU programming with a relatively simple example, which makes use of a mix of library calls and well-documented GPU kernels. The run-time of the split-step algorithm described here is about 280 seconds for the CPU version (Intel(R) Xeon(R) CPU E5420 @ 2.50GHz) vs. 10 seconds for the GPU version (NVIDIA(R) Tesla C1060 GPU), resulting in a 28-fold speed-up! On a C2070 the run time is less than 5 seconds, yielding an 80-fold speedup.


Autocorrelation function C(t) of a Gaussian wavepacket in a uniform force field. I compare the GPU and CPU results using the wavepacket code.

The description of coherent electron transport in quasi two-dimensional electron gases requires solving the Schrödinger equation in the presence of a potential landscape. As discussed in my post Time to find eigenvalues without diagonalization, our approach using wavepackets allows one to obtain the scattering matrix over a wide range of energies from a single wavepacket run, without the need to diagonalize a matrix. In the following I discuss the basic example of propagating a wavepacket and obtaining the autocorrelation function, which in turn determines the spectrum. I programmed the GPU code in 2008 as a first test to evaluate the potential of GPGPU programming for my research. At that time double-precision floating-point support was lacking and the fast Fourier transform (FFT) implementations were little developed. Starting with CUDA 3.0, the program runs fine in double precision, and my group used the algorithm for calculating electron flow through nanodevices. The CPU version was used for our articles in Physica Scripta Wave packet approach to transport in mesoscopic systems and in Physical Review B Phase shifts and phase π-jumps in four-terminal waveguide Aharonov-Bohm interferometers, among others.
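For reference, the split-step method rests on the standard symmetric Trotter factorization of the short-time propagator (textbook form, with the kinetic term T = p²/2m applied in momentum space between FFTs and the potential V acting pointwise in position space):

\[ e^{-\mathrm{i} H \Delta t/\hbar} \approx e^{-\mathrm{i} V \Delta t/(2\hbar)}\; e^{-\mathrm{i} T \Delta t/\hbar}\; e^{-\mathrm{i} V \Delta t/(2\hbar)} + O(\Delta t^{3}). \]

Both exponentials are diagonal in their respective representations, so each time step costs two FFTs and a few pointwise multiplications – exactly the operations GPUs excel at.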
Here, I consider a very simple example: the propagation of a Gaussian wavepacket in a uniform potential V(x,y) = -Fx, for which the autocorrelation function of the initial state
⟨x,y|ψ(t=0)⟩ = 1/(a√π) exp(-(x²+y²)/(2a²))
is known in analytic form:
⟨ψ(t=0)|ψ(t)⟩ = 2a²m/(2a²m+iℏt) exp(-a²F²t²/(4ℏ²) - iF²t³/(24ℏm)).

The physics of GPU programming


Me pointing at the GPU Resonance cluster at SEAS Harvard with 32x448=14336 processing cores. Just imagine how tightly integrated this setup is compared to 3584 quad-core computers. Picture courtesy of Academic Computing, SEAS Harvard.

From discussions I learn that, while many physicists have heard of graphics processing units as fast computers, resistance to using them is widespread. One of the reasons is that physics has been relying on computers for a long time, and tons of old, well-trusted codes are lying around which are not easily ported to the GPU. Interestingly, the adoption of GPUs happens much faster in biology, medical imaging, and engineering.
I view GPU computing as a great opportunity to investigate new physics, and my feeling is that today’s methods optimized for serial processors may need to be replaced by a different set of standard methods which scale better with massively parallel processors. In 2008 I dived into GPU programming for a couple of reasons:

  1. As a “model-builder” the GPU allows me to reconsider previous limitations and simplifications of models and use the GPU power to solve the extended models.
  2. The turn-around time is incredibly fast. Compared to queues in conventional clusters where I wait for days or weeks, I get back results worth 10000 CPU-hours of compute time the very same day. This in turn further facilitates the model-building process.
  3. Some people complain about the strict synchronization requirements when running GPU codes. In my view this is an advantage, since essentially no messaging overhead exists.
  4. If you want to develop high-performance algorithms, it is not good enough to convert library calls to GPU library calls. You might get speed-ups of about 2-4. However, if you invest the time and develop your own know-how, you can expect much higher speed-ups of around 100 times or more, as seen in the applications I discussed in this blog before.

This summer I will lecture about GPU programming at several places, and thus I plan to write a series of GPU-related posts. I do have a complementary background in mathematical physics and special functions, which I find very useful in relation to GPU programming, since new physical models require a stringent mathematical foundation and numerical studies.

Catching and tracking light: following the excitations in the Fenna-Matthews-Olson complex


The animation shows how peaks in the 2d echo-spectra oscillate and change for various delay times. For a full explanation, see Modelling of Oscillations in Two-Dimensional Echo-Spectra of the Fenna-Matthews-Olson Complex by B. Hein, C. Kreisbeck, T. Kramer, M. Rodríguez, New J. of Phys. 14, 023018 (2012), open access.

Efficient and fast transport of electric current is a basic requirement for the functioning of nanodevices and biological systems. A neat example is the energy-transport of a light-induced excitation in the Fenna-Matthews-Olson complex of green sulfur bacteria. This process has been elucidated by pump-probe spectroscopy. The resulting spectra contain an enormous amount of information about the couplings of the different pigments and the pathways taken by the excitation. The basic guide to a 2d echo-spectrum is as follows:
You can find peaks of high intensity along the diagonal line, which roughly represent a more common absorption spectrum. If you delay the pump and probe pulses by several picoseconds, you will find a new set of peaks along a horizontal axis, which indicates that the energy of the excitation gets redistributed and the system relaxes and transfers part of the energy to vibrational motion. This process is nicely visible in the spectra recorded by Brixner et al.
A lot of excitement and activity on photosynthetic complexes was triggered by experiments of Engel et al., showing that besides the relaxation process, periodic oscillations are also visible in the spectra for more than a picosecond.

What is causing the oscillations in the peak amplitudes of 2d echo-spectra in the Fenna-Matthews Olson complex?

A purely classical transport picture should not show such oscillations: the excitation instead hops around the complex without interference. Could the observed oscillations point to a different transport mechanism, possibly related to the quantum-mechanical simultaneous superposition of several transport paths?

The initial answer from the theoretical side was no, since within simplified models the thermalization occurs fast and without oscillations. It turned out that the simple calculations are a bit too simplistic to describe the system accurately, and exact solutions are required. But exact solutions (even for simple models) are difficult to obtain. Known exact methods such as DMRG work reliably only at very low temperatures (-273 °C), which are not directly applicable to biological systems. Other schemes use the famous path integrals but are too slow to calculate the pump-probe signals.

Our contribution to the field is to provide an exact computation of the 2d echo-spectra at the relevant temperatures and to see the difference to the simpler models, in order to quantify how much coherence is preserved. On the method-development side, the computational challenge is to speed up the calculations several hundred times in order to get results within days of computational run-time. We achieved this by developing a method which we call GPU-hierarchical equations of motion (GPU-HEOM). The hierarchical equations of motion are a nice scheme to propagate a density matrix under consideration of non-Markovian effects and strong couplings to the environment. The HEOM scheme was developed by Kubo, Tanimura, and Ishizaki (Prof. Tanimura has posted some material on HEOM here).
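Schematically (my condensed notation, suppressing the detailed prefactors; see the references above for the full equations), the hierarchy couples the reduced density matrix ρ₀ to auxiliary matrices ρₙ labeled by an index vector n over the exponential bath terms:

\[ \dot{\rho}_{\mathbf{n}} = \left(-\frac{\mathrm{i}}{\hbar}[H,\,\cdot\,] - \sum_k n_k \nu_k\right)\rho_{\mathbf{n}} + \sum_k \mathcal{A}_k\, \rho_{\mathbf{n}+\mathbf{e}_k} + \sum_k n_k\, \mathcal{B}_k\, \rho_{\mathbf{n}-\mathbf{e}_k}, \]

where the νₖ are bath decay rates and 𝒜ₖ, ℬₖ are system-bath superoperators. All auxiliary matrices are propagated simultaneously, which is why the method maps so well onto thousands of GPU threads.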

However, the original computational method suffers from the same problems as path-integral calculations and is rather slow (though the HEOM method can be made faster and applied to electronic systems by using smart filtering, as done by Prof. YiJing Yan). The GPU part in GPU-HEOM stands for graphics processing units. Using our GPU adaptation of the hierarchical equations (see details in Kreisbeck et al. [JCTC 7, 2166 (2011)]) allowed us to cut down computational times dramatically and made it possible to perform a systematic study of the oscillations and the influence of temperature and disorder in our recent article Hein et al. [New J. of Phys. 14, 023018 (2012), open access].

The Nobel Prize 2011 in Chemistry: press releases, false balance, and lack of research in scientific writing

To get this clear from the beginning: with this posting I am not questioning the great achievement of Prof. Dan Shechtman, who discovered what is now known as quasicrystals in the lab. Shechtman clearly deserves the prize for such an important experiment, demonstrating that five-fold symmetry exists in real materials.

My concern is the poor quality of research and reporting on the subject of quasicrystals, starting with the press release of the Swedish Academy of Sciences, and the lessons to be learned about trusting these press releases and the reporting in scientific magazines. To provide some background: with the announcement of the Nobel prize, a press release is put online by the Swedish Academy which not only announces the prize winner, but also contains two PDFs with background information: one for the “popular press” and another for people with a more “scientific background”. Even more dangerously, the Swedish Academy has started a multimedia endeavor of pushing its views around the world in YouTube channels and numerous multimedia interviews with its own members (what about asking an external expert for an interview?).

Before the internet age, journalists got the names of the prize winners but did not have immediate access to a “ready to print” explanation of the subject at hand. I remember that local journalists would call the universities and ask a professor familiar with the topic for advice, or at least get the phone number of somebody familiar with it. Not any more. This year showed that the background information prepared in advance by the committee is taken over by the media outlets basically unchanged. So far it looks like business as usual. But what if the story as told by the press release is not correct? Does anybody still have the time and resources for some basic fact checking, for example by calling people familiar with the topic, or by consulting the archives of their newspaper/magazine to dig out what was written when the discovery was made many years ago? Should we rely on the professor who writes the press releases and trust that this person adheres to the scientific and ethical standards of writing?

For me, the unfiltered and unchecked usage of press releases by the media and even by scientific magazines shows a decay in the quality of scientific reporting. It also generates a uniform and self-referencing universe, which enters as “sources” in online encyclopedias and in the end becomes a “self-generated” truth. However, it is not that difficult to break this circle, for example by

  1. digging out review articles on the topic and looking up encyclopedias for the topic of quasicrystals, see for example: Pentagonal and Icosahedral Order in Rapidly Cooled Metals by David R. Nelson and Bertrand I. Halperin, Science 19 July 1985:233-238, where the authors write: “Independent of these experimental developments, mathematicians and some physicists had been exploring the consequences of the discovery by Penrose in 1974 of some remarkable, aperiodic, two-dimensional tilings with fivefold symmetry (7). Several authors suggested that these unusual tesselations of space might have some relevance to real materials (8, 9). MacKay (8) optically Fourier-transformed a two-dimensional Penrose pattern and found a tenfold symmetric diffraction pattern not unlike that shown for Al-Mn in Fig. 2. Three-dimensional generalizations of the Penrose patterns, based on the icosahedron, have been proposed (8-10). The generalization that appears to be most closely related to the experiments on Al-Mn was discovered by Kramer and Neri (11) and, independently, by Levine and Steinhardt (12).”
  2. identifying experts from step 1 and asking for their opinion
  3. checking the newspaper and magazine archives. Maybe there exists already a well researched article?
  4. correcting mistakes. After all, mistakes do happen – also in “press releases” by the Nobel committee – but there is always the option to send out a correction or to amend the published materials. See for example the letter in Science by David R. Nelson
    Icosahedral Crystals in Perspective, Science 13 July 1990:111 again on the history of quasicrystals:
    “[...] The threedimensional generalization of the Penrose tiling most closely related to the experiments was discovered by Peter Kramer and R. Neri (3) independently of Steinhardt and Levine (4). The paper by Kramer and Neri was submitted for publication almost a year before the paper of Shechtman et al. These are not obscure references: [...]

Since I am working in theoretical physics, I find it important to point out that – in contrast to the story invented by the Nobel committee – the theoretical structure of quasicrystals was published and available in the relevant journal of crystallography at the time the experimental paper got published. This sequence of events is well documented, as shown above and in other review articles and books.
I am just amazed how the press release of the Nobel committee creates an alternate universe with a false history of the theoretical and experimental publication records. It gives false credit for the first theoretical work on three-dimensional quasicrystals and, at least in my view, does not adhere to the scientific and ethical standards of scientific writing.

Prof. Sven Lidin, the author of the two press releases of the Swedish Academy, was contacted as early as October 7 about his inaccurate and unbalanced account of the history of quasicrystals. In my view, a huge responsibility rests on the originator of the “story” which was put in the wild by Prof. Lidin, and I believe he and the committee members are aware of their power, since they actively use all available electronic media channels to push their complete “press package” out. Until today no corrections or updates have been distributed. Rather, you can watch on YouTube the (false) story getting repeated over and over again. In my view this example shows science reporting in its worst incarnation and undermines the credibility and integrity of science.

Quasicrystals: anticipating the unexpected

The following guest entry is contributed by Peter Kramer

Dan Shechtman received the Nobel prize in Chemistry 2011 for the experimental discovery of quasicrystals. Congratulations! The press release stresses the unexpected nature of the discovery and the struggles of Dan Shechtman to convince his fellow experimentalists. To this end I want to contribute a personal perspective:

From the viewpoint of theoretical physics, the existence of icosahedral quasicrystals as later discovered by Shechtman was not quite so unexpected. Beginning in 1981 with Acta Cryst. A 38 (1982), pp. 257-264, and continuing with Roberto Neri in Acta Cryst. A 40 (1984), pp. 580-587, we worked out and published the building plan for icosahedral quasicrystals. Looking back, it is a strange and lucky coincidence that, unknown to me, during the same time Dan Shechtman and coworkers discovered icosahedral quasicrystals in their seminal experiments and brought the theoretical concept of three-dimensional non-periodic space-fillings to life.

More about the fascinating history of quasicrystals can be found in a short review: gateways towards quasicrystals and on my homepage.

Time to find eigenvalues without diagonalization

Solving the stationary Schrödinger equation (H-E)Ψ=0 can in principle be reduced to solving a matrix equation. This eigenvalue problem requires calculating matrix elements of the Hamiltonian with respect to a set of basis functions and diagonalizing the resulting matrix. In practice this time-consuming diagonalization step is replaced by a recursive method, which yields the eigenfunction for a specific eigenvalue.

A very different approach is followed by wavepacket methods. It is possible to propagate a wavepacket without determining the eigenfunctions beforehand. For a given Hamiltonian, we solve the time-dependent Schrödinger equation (iℏ∂ₜ-H)Ψ=0 for an almost arbitrary initial state Ψ(t=0) (initial value problem).

The reformulation of the determination of eigenstates as an initial value problem has a couple of computational advantages:

  • results can be obtained for the whole range of energies represented by the wavepacket, whereas a recursive scheme yields only one eigenenergy
  • the wavepacket motion yields direct insight into the pathways and allows us to develop an intuitive understanding of the transport choreography of a quantum system
  • solving the time-dependent Schrödinger equation can be efficiently implemented using Graphics Processing Units (GPU), resulting in a large (>20-fold) speedup compared to CPU code

The zebra-stripe pattern along the horizontal axis shows Aharonov-Bohm oscillations in the conductance of a half-circular nanodevice due to the changing magnetic flux. The vertical axis denotes the Fermi energy, which can be tuned experimentally. For details see our paper in Physical Review B.

The determination of transmissions now requires calculating the Fourier transform of correlation functions ⟨Ψ(t=0)|Ψ(t)⟩. This method has been pioneered by Prof. Eric J. Heller, Harvard University, and I have written an introductory article for the Latin American School of Physics 2010 (arxiv version).
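Schematically (standard form of the wavepacket spectral method; normalization factors omitted), the energy-resolved information follows from

\[ S(E) \;\propto\; \int_{-\infty}^{\infty} \mathrm{d}t\; e^{\mathrm{i} E t/\hbar}\, \langle \Psi(t=0)|\Psi(t)\rangle, \]

so a single propagation delivers S(E) for every energy E contained in the initial wavepacket at once.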

Recently, Christoph Kreisbeck has done detailed calculations of the gate-voltage dependence of the conductance in Aharonov-Bohm nanodevices, taking full advantage of the simultaneous probing of a range of Fermi energies with one single wavepacket. A very clean experimental realization of the device was achieved by Sven Buchholz, Prof. Saskia Fischer, and Prof. Ulrich Kunze (RU Bochum), based on a semiconductor material grown by Dr. Dirk Reuter and Prof. Andreas Wieck (RU Bochum). The details, including a comparison of experimental and theoretical results shown in the left figure, are published in Physical Review B (arxiv version).

Cosmic topology from the Möbius strip

Fig 1. The Möbius twist.

The following article is contributed by Peter Kramer.

Einstein’s fundamental theory of gravitation, generalizing Newton’s, relates the interaction of masses to the curvature of space. Modern cosmology from the big bang to black holes results from Einstein’s field equations for this relation. These differential equations by themselves do not yet settle the large-scale structure and connection of the cosmos. Theoretical physicists in recent years have tried to infer information on the large-scale cosmology from the cosmic microwave background radiation (CMBR) observed by satellites. In the frame of large-scale cosmology, the usual objects of astronomy from solar systems to galaxy clusters are smoothed out, and conditions imprinted in the early stage of the universe dominate.

Fig 2: The planar Möbius crystal cm

In mathematical language one speaks of cosmic topology. Topology is often considered to be esoteric. Here we present topology starting from the familiar experience with the twisted Möbius strip. On the one hand, this strip can be seen as a rectangular crystallographic lattice cell whose copies tile the plane, see Fig. 2. The Möbius strip is represented as a rectangular cell, located between the two vertical arrows, of a planar crystal. A horizontal dashed line through the center indicates a glide-reflection line. A glide reflection is a translation along the dashed line by the horizontal length of the cell, followed by a reflection in this line. The crystallographic symbol for this planar crystal is cm. In three-dimensional space the planar Möbius crystal (top panel of Fig. 1) is twisted (middle panel of Fig. 1). The twist is a translation along the dashed line, combined with a rotation by 180 degrees around that line. A final bending (bottom panel of Fig. 1) of the dashed line and a smooth gluing of the arrowed edges yields the familiar Möbius strip.

Fig 3: Cubic twist N3.

Given this Möbius template in two dimensions, we pass to manifolds of dimension three. We present in Fig. 3 a new cubic manifold named N3. Three cubes are twisted from an initial one. A twist here is a translation along one of the three perpendicular directions, combined with a right-handed rotation by 90 degrees around this direction. To follow the rotations, note the color on the faces. The three neighbor cubes can be glued to the initial one. If the cubes are replaced by their spherical counterparts on the three-sphere, the three new cubes can be glued pairwise with one another, with face gluings indicated by heavy lines. The complete tiling of the three-sphere comprises 8 cubes and is called the 8-cell. The gluings shown here generate the so-called fundamental group of a single spherical cube on the three-sphere with symbol N3. This spherical cube is a candidate for the cosmic topology inferred from the cosmic microwave background radiation. A second cubic twist with a different gluing and fundamental group is shown in Fig. 4. Here, the three twists combine translations along the three directions with different rotations.

The key idea in cosmic topology is to pass from a topological manifold to its eigen- or normal modes. For the Möbius strip, these eigenmodes are seen best in the planar crystal representation of Fig. 2. The eigenmodes can be taken as sine or cosine waves of wavelength λ which repeat their values from edge to edge of the cell. It is clear that the horizontal wavelength λ of these modes has as upper bound the length L of the rectangle. The full Euclidean plane allows for infinite wavelengths, and so the eigenmodes of the Möbius strip obey a selection rule that characterizes the topology. Moreover, the eigenmodes of the Möbius strip must respect its twisted connection.
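As a minimal sketch of such a selection rule (assuming the cell of Fig. 2 occupies 0 ≤ x ≤ L and is glued by the glide reflection (x, y) → (x + L, −y)): an eigenmode must satisfy

\[ \psi(x+L,\,-y)=\psi(x,y), \]

so a separable mode \psi(x,y)=e^{ikx}f(y) with transverse parity f(-y)=\pm f(y) requires e^{ikL}=\pm 1. Modes with even f repeat from edge to edge, k=2\pi n/L, i.e. λ = L/n with the upper bound L quoted above; modes with odd f pick up an extra sign, k=(2n+1)\pi/L, and close only after two trips around the strip. The twisted connection thus ties the horizontal wavelength to the transverse parity of the mode.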

Fig 4: Cubic twist N2.

Similarly, the eigenmodes of the spherical cubes in Fig. 3 must repeat themselves when going from cube to neighbor cube. It is intuitively clear that the cubic eigenmodes must have a wavelength smaller than the edge length of the cubes. The wavelengths of the eigenmodes of the full three-sphere are bounded by the equator length of the three-sphere. Seen on a single cube, the different twists and gluings of the manifolds N2 and N3 shown in Figs. 3 and 4 pose different boundary value problems for the cubic eigenmodes.

Besides these spherical cubic manifolds, there are several other competing polyhedral topologies with multiple connectivity (nontrivial homotopy). Among them are the famous Platonic polyhedra. Each of them gives rise to a Platonic tessellation of the three-sphere. Everitt has analyzed all their possible gluings in his article Three-manifolds from Platonic solids, Topology and its Applications, vol. 138 (2004), pp. 253-263. In my contribution Platonic topology and CMB fluctuations: Homotopy, anisotropy, and multipole selection rules, Class. Quantum Grav., vol. 27 (2010), 095013 (freely available on the arxiv) I display them and present a full analysis of their corresponding eigenmodes and selection rules.

Since terrestrial observations measure the incoming radiation in terms of its spherical multipoles as functions of the incident direction, the eigenmodes must be transformed to a multipole expansion, as done in my work. New and finer data on the CMB radiation are expected from the Planck spacecraft launched in 2009. These data, in conjunction with the theoretical models, will advance our understanding of cosmic space and possible twists in its topology.

Hot spot: the quantum Hall effect in graphene

Hall potential in a graphene device due to interactions and equipotential boundary conditions at the contacts.

An interesting and unfinished chapter of condensed matter theory concerns the quantum Hall effect. Especially the integer quantum Hall effect (IQHE) is actually not very well understood. The fancy cousin of the IQHE is the fractional quantum Hall effect (FQHE). The FQHE is easier to handle since there is agreement about the Hamiltonian to be solved (although the solutions are difficult to obtain): the quantum version of the very Hamiltonian used for the classical Hall effect, namely that of interacting electrons in a magnetic field. This Hamiltonian still lacks the specification of the boundary conditions, which can completely alter the results for open, current-carrying systems (as in the classical Hall effect) compared to interacting electrons in a box.
Surprisingly, no agreement about the Hamiltonian underlying the IQHE exists. It was once hoped that one could completely neglect interactions and still obtain a theoretical model describing the experiments. But if we throw out the interactions, we throw out the Hall effect itself. Thus we have to come up with the correct self-consistent solution for a mean-field potential which incorporates the interactions and the Hall effect.

Is it possible to understand the integer quantum Hall effect without including interactions – and if yes, what does the effectively non-interacting Hamiltonian look like?

Starting from a microscopic theory, we constructed the self-consistent solution of the Hall potential for the classical Hall effect in our previous post. Two indispensable factors cause the emergence of the Hall potential:

  1. repulsive electronic interactions and
  2. equipotential boundary conditions at the contacts.

The Hall potential which emerges from our simulations has been directly imaged in GaAs Hall-devices under conditions of a quantized conductance by electro-optical methods and by scanning probe microscopy using a single electron transistor. Imaging requires relatively high currents in order to resolve the Hall potential clearly.

In graphene the dielectric constant is 12 times smaller than in GaAs, and thus the Coulomb repulsion between electrons is stronger (which should help to generate the Hall potential). The observation of the FQHE in two-terminal devices has led the authors of the FQHE measurements to conjecture that hot spots are also present in graphene devices [Du, Skachko, Duerr, Luican, Andrei, Nature 462, 192-195 (2009)].

These observations are extremely important, since the widely used theoretical model of edge-state transport of effectively non-interacting electrons is not readily compatible with these findings. In the edge-state model, conductance quantization relies on the counter-propagation of two currents along the device borders, whereas the potential shown here supports only a unidirectional current from source to drain diagonally across the device.

Moreover, the construction of asymptotic scattering states is not possible, since no transverse lead-eigenbasis exists at the contacts. Electrons moving strictly along one side of the device from one contact to the other would locally increase the electron density within the contact and violate the metallic boundary condition (see our recent paper on the self-consistent calculation of electric potentials in Hall devices [Phys. Rev. B 81, 205306 (2010)]).

Are there models which support a unidirectional current and at the same time yield a conductance quantized in units of the conductance quantum?

We put forward the injection model of the quantum Hall effect, where we take the Hall potential as the self-consistent mean-field solution of the interacting and current-carrying device. On this potential we construct the local density of states (LDOS) next to the injection hot spot and calculate the resulting current flow. In our model, the conductivity of the sample is completely determined by the injection processes at the source contact, where the high electric field of the hot spot leads to a fast transport of electrons into the device. The LDOS is broadened due to the presence of the electric Hall field during the injection, and not due to disorder. Our model is described in detail in our paper Theory of the quantum Hall effect in finite graphene devices [Phys. Rev. B 81, 081410(R) (2010), free arxiv version], and the LDOS in a conventional semiconductor in electric and magnetic fields is given in a previous paper on electron propagation in crossed magnetic and electric fields. The tricky part is to prove the correct quantization, since the absence of any translational symmetries in the Hall potential precludes the use of “Gedankenexperimente” relying on periodic boundary conditions or fancy loop topologies.

In order to propel the theoretical models forward, we need more experimental images of the Hall potential in a device, especially in the vicinity of the contacts. Experiments with graphene devices, where the Hall potential sits close to the surface, could help to establish the potential distribution and to settle the question of which Hamiltonian is applicable for the quantum Hall effects. Is there anybody out there ready to take up this challenge?

Trilobites revived: fragile Rydberg molecules, Coulomb Green’s function, Lambert’s theorem

The trilobite state

The trilobite Rydberg molecule can be modeled by the Coulomb Green’s function, which represents the quantized version of Lambert’s orbit determination problem.

The recent experimental observation of giant Rydberg molecules by Bendkowsky, Butscher, Nipper, Shaffer, Löw, and Pfau [theoretically studied by Greene and coworkers, see for example Phys. Rev. Lett. 85, 2458 (2000)] shows Coulombic forces at work at large atomic distances to form a fragile molecule. The simplest approach to Rydberg molecules employs the Fermi contact potential (also called a zero-range potential), where the Coulomb Green’s function plays a central role. The quantum mechanical expression for the Coulomb Green’s function was derived in position space by Hostler and in momentum space by Schwinger. It does not provide immediate insight into the peculiar nodal structure shown on the left, and thus it is time again to look for a semiclassical interpretation, which requires translating an astronomical theorem into the Schrödinger world, one of my favorite topics.

Johann Heinrich Lambert was a true “Universalgelehrter” [universal scholar], exchanging letters with Kant about philosophy, devising a new color pyramid, proving that π is an irrational number, and doing physics. His career did not proceed without difficulties, since he had to educate himself after working hours in his father’s tailor shop. After a long journey Lambert ended up at the academy in Berlin (and Euler chose to “escape” to St. Petersburg).

Lambert followed in Kepler’s footsteps and tackled one of the most challenging problems of the time: the determination of celestial orbits from observations. In 1761 Lambert solved the problem of orbit determination from two position measurements. Lambert’s theorem is a cornerstone of astronavigation (see for example the determination of Sputnik’s orbit using radar range measurements and Lambert’s theorem). Orbit determination from angular information alone (without known distances) is a different problem and requires more observations.

Lambert poses the following question [Insigniores orbitae cometarum proprietates (Augsburg, 1761), p. 120, Lemma XXV, Problema XL]: Data longitudine axis maioris & situ foci F nec non situ punctorum N, M, construere ellipsin [Given the length of the semi-major axis, the location of one focal point, the points N,M, construct the two possible elliptical orbits connecting both points.]

Lambert's construction of two ellipses.

Lambert’s construction to find all possible trajectories from N to M and to map them to a fictitious 1D motion from n to m.

Lambert finds the two elliptic orbits [Fig. XXI] with an ingenious construction: he maps the rather complicated two-dimensional problem to the fictitious motion along a degenerate linear ellipse. Some physicists may know how to relate the three-dimensional Kepler problem to a four-dimensional oscillator via the Kustaanheimo–Stiefel transformation [see for example The harmonic oscillator in modern physics by Moshinsky and Smirnov]. But Lambert’s quite different procedure has its advantages for constructing the semiclassical Coulomb Green’s function, as we will see in a moment.

Shown are two ellipses with the same lengths of the semimajor axes, ½A1B1 = ½A2B2, and a common focus located at F. The centers of the two ellipses are denoted by C1 and C2. Lambert’s lemma allows one to relate the motion from N to M on both ellipses to a common collinear motion on the degenerate linear ellipse Fb, where the points n and m are chosen such that the time of flight (TOF) along nm equals the TOF along the elliptical arc NM on the first ellipse. On the second ellipse the TOF along the arc NB2M equals the TOF along nbm. The points n and m are found by marking the point G halfway between N and M. Then the major axis Fb = A1B1 = A2B2 of the linear ellipse is drawn starting at F and running through G. On this line the point g is placed at the distance Fg = ½(FN + FM). Finally, n and m are given by the intersection points of a circle around g with radius GN = GM. This construction shows that the sum of lengths of the shaded triangle, α± = FN + FM ± NM, is equal to α± = Fn + Fm ± nm. The travel time depends only on the distances entering α±, and all calculations of travel times etc. reduce to one-dimensional integrations along the fictitious linear ellipse.

Lambert found all four possible trajectories from N to M which have the same energy (i.e. the same semimajor axis a), regardless of their eccentricity (i.e. angular momentum). The elimination of the angular momentum from Kepler’s equation is a tremendous achievement, and the expression for the action is converted from Kepler’s form

  • [Kepler] W(r,r′;E) = √(μ a K_c) [ξ + ε sin(ξ) − ξ′ − ε sin(ξ′)], with eccentricity ε and eccentric anomaly ξ, to
  • [Lambert] W(r,r′;E) = √(μ a K_c) [γ + sin(γ) − δ − sin(δ)], with
    sin²(γ/2) = (r + r′ + |r′ − r|)/(4a) and sin²(δ/2) = (r + r′ − |r′ − r|)/(4a).

The derivation is also discussed in detail in our paper [Kanellopoulos, Kleber, Kramer: Use of Lambert’s Theorem for the n-Dimensional Coulomb Problem, Phys. Rev. A 80, 012101 (2009), free arxiv version here]. The Coulomb problem of the hydrogen atom is equivalent to the gravitational Kepler problem, since both are subject to a 1/r potential. Some readers might have seen the equation for the action in Gutzwiller’s nice book Chaos in classical and quantum mechanics, eq. (1.14). It is worthwhile to point out that the series solution given by Lambert (and Gutzwiller) for the time of flight can easily be summed and is denoted today by an inverse sine function (for hyperbolic motion a hyperbolic sine, a function later introduced by Riccati and Lambert). Again, the key point is Lambert’s introduction of the linear fictitious ellipse, which avoids integrating along elliptical arcs.

The surprising conclusion: the nodal pattern of the hydrogen atom can be viewed as resulting from a double-slit interference along two principal ellipses. The interference determines the eigenenergies and the eigenstates. Even the notoriously difficult-to-calculate van Vleck-Pauli-Morette (VVPM) determinant can be expressed in short closed form with the help of Lambert’s theorem, and our result works even in higher dimensions. The analytic form of the action and the VVPM determinant becomes essential for our continuation of the classical action into the forbidden region, which corresponds to a tunneling process; see the last part of our paper.

Lambert is definitely a fascinating person. Wouldn’t it be nice to discuss philosophy, life, and science with him?

Determining the affinities of electrons OR: seeing semiclassics in action

Electron trajectories for photodetachment in an electric field.

Negatively charged ions are an interesting species, having managed to bind one more electron than charge neutrality grants them [for a recent review see T. Andersen: Atomic negative ions: structure, dynamics and collisions, Physics Reports 394, 157-313 (2004)]. The precise determination of the usually small binding energy is best done by shining a laser beam of known wavelength on the ions and detecting at which laser frequency the electron gets detached from the atomic core.

For some ions (oxygen, sulfur, or hydrogen fluoride and many more) the most precise values given at NIST are obtained by Christophe Blondel and collaborators with an ingenious apparatus based on an idea by Demkov, Kondratovich, and Ostrovskii in Pis’ma Zh. Eksp. Teor. Fiz. 34, 425 (1981) [JETP Lett. 34, 403 (1981)]: the photodetachment microscope. Here, in addition to the laser energy, the energy of the released electron is measured via a virtual double-slit experiment. The ions are placed in an electric field, which makes the electronic wave running against the field direction turn back and interfere with the wave train emitted along the field direction. The electric-field-induced double slit leads to the build-up of a circular interference pattern of millimeter size (!) on the detector, shown in the left figure (the animation was kindly provided by C. Blondel, W. Chaibi, C. Delsart, C. Drag, F. Goldfarb & S. Kröger; see their original paper The electron affinities of O, Si, and S revisited with the photodetachment microscope, Eur. Phys. J. D 33, 335-342 (2005)).

Observed time-dependent emergence of the interference pattern in an electric field. Video shown with kind permission of C. Blondel et al. (see text for full credit)

I view this experiment as one of the best illustrations of how quantum and classical mechanics are related via the classical actions along trajectories. The two possible parabolic trajectories underlying the quantum mechanical interference pattern were described by Galileo Galilei in his Discourses & Mathematical Demonstrations Concerning Two New Sciences Pertaining to Mechanics & Local Motions, in proposition 8: Le ampiezze de i tiri cacciati con l’istesso impeto, e per angoli egualmente mancanti, o eccedenti l’angolo semiretto, sono eguali [the ranges of shots fired with the same impetus, at angles equally falling short of or exceeding the half-right angle, are equal]. Ironically, the “old-fashioned” parabolic motion was removed from the latest Gymnasium curriculum in Baden-Württemberg to make space for modern quantum physics.
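In modern notation Galileo’s proposition follows from the range formula of projectile motion; for launch speed v and launch angle θ measured from the horizontal,

\[ R(\theta)=\frac{v^2}{g}\,\sin 2\theta \quad\Rightarrow\quad R(45^\circ+\alpha)=\frac{v^2}{g}\,\cos 2\alpha=R(45^\circ-\alpha) , \]

so two launch angles symmetric about the half-right angle give equal range: the field-free analogue of the two interfering parabolic paths in the photodetachment microscope.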

At the low energies of the electrons, their paths are easily deflected by the magnetic field of the Earth and thus require either excellent shielding of the field or an active compensation, which was achieved recently by Chaibi, Peláez, Blondel, Drag, and Delsart in Eur. Phys. J. D 58, 29-37 (2010). The new paper demonstrates nicely the focusing effect of the combined electric and magnetic fields, which Christian Bracher, John Delos, Manfred Kleber, and I have analyzed in detail, and where one encounters some of the seven elementary catastrophes, since the magnetic field allows one to select the number of interfering paths.

We have predicted similar fringes for the case of matter waves in the gravitational field around us originating from trapped Bose-Einstein condensates (BEC), but we are not aware of an experimental observation of similar clarity as in the case of the photodetachment microscope.

Mathematically, the very same Green’s function describes both phenomena, photodetachment and atom lasers. For me this universality demonstrates nicely how mathematical physics allows us to understand phenomena within a language suitable for many applications.

Interactions: from galaxies to the nanoscale

Microscopic model of a Hall bar: (a) device model, (b) phenomenological potential, (c) GPU result.

For a while we have explored the usage of General Purpose Graphics Processing Units (GPGPU) for electronic transport calculations in nanodevices, where we want to include all electron-electron and electron-donor interactions. The GPU allows us to drastically (250-fold!) boost the performance of N-body codes, and we manage to propagate 10,000 particles over several million time-steps within days. While GPU methods are by now rather popular within the astrophysics crowd, we haven’t seen many GPU applications for electronic transport in nanodevices. Besides the change from astronomical units to atomic ones, gravitational forces are always attractive, whereas electrons feel the attraction of the donor charges and the repulsion of the other electrons. Furthermore, a magnetic field is present, leading to deflections. Last, the space where electrons can spread out is limited by the device borders. In total, the force on the kth electron is given by

\vec{F}_{k}=\frac{e^2}{4\pi\epsilon_0\epsilon}\sum_{l=1}^{N_{\rm donor}}\frac{\vec{r}_l-\vec{r}_k}{|\vec{r}_l-\vec{r}_k|^3}-\frac{e^2}{4\pi\epsilon_0\epsilon}\sum_{\substack{l=1\\l\ne k}}^{N_{\rm elec}}\frac{\vec{r}_l-\vec{r}_k}{|\vec{r}_l-\vec{r}_k|^3}-e\,\dot{\vec{r}}_k\times\vec{B},

where the first sum runs over the (attractive) donor charges and the second over the other (repulsive) electrons.
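As a schematic illustration (not our production GPU kernel: a numpy sketch in reduced units, with all parameters illustrative), the force evaluation might look like:

```python
import numpy as np

# Schematic O(N^2) force evaluation for electrons in a Hall bar:
# electron-donor attraction, electron-electron repulsion, and the
# magnetic deflection for B along z. Units and prefactors are illustrative.
def forces(r_e, v_e, r_d, B, coulomb=1.0):
    """r_e: (N,2) electron positions, v_e: (N,2) velocities,
    r_d: (M,2) donor positions, B: magnetic field along z."""
    F = np.zeros_like(r_e)
    # electron-donor attraction: force points toward the donors
    d = r_d[None, :, :] - r_e[:, None, :]                 # (N,M,2)
    F += coulomb*np.sum(d/np.linalg.norm(d, axis=2, keepdims=True)**3, axis=1)
    # electron-electron repulsion: force points away from the other electrons
    d = r_e[None, :, :] - r_e[:, None, :]                 # (N,N,2)
    dist = np.linalg.norm(d, axis=2, keepdims=True)
    np.fill_diagonal(dist[:, :, 0], np.inf)               # exclude l == k
    F -= coulomb*np.sum(d/dist**3, axis=1)
    # Lorentz force -e v x B for charge -e (2D cross product with B e_z)
    F += -1.0*np.stack([v_e[:, 1]*B, -v_e[:, 0]*B], axis=1)
    return F
```

On the GPU the same pairwise sums are tiled over thread blocks; the numpy version only illustrates the bookkeeping of the three force contributions.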

Our recent paper in Physical Review B (also freely available on the arxiv) gives the first microscopic description of the classical Hall effect, where interactions are everything: without interactions there is no Hall field and no drift transport. The role and importance of the interactions are surprisingly sparsely discussed in the literature, probably due to a lack of computational means to move beyond phenomenological models. A notable exception is the very first paper on the Hall effect by Edwin Hall, where he writes “the phenomena observed indicate that two currents, parallel and in the same direction, tend to repel each other”. Note that this repulsion works throughout the device; therefore electrons do not pile up at the upper edge, but rather a complete redistribution of the electronic density takes place, yielding the potential shown in the figure.

Another important part of our simulation of the classical Hall effect is the modeling of the electron sources and sinks: the contacts at the left and right ends of the device. We have developed a feed-in and removal model of the contacts which keeps each contact at the same (externally enforced) potential during the course of the simulation.

Mind-boggling is the fact that the very same “classical Hall potential” has also been observed in conjunction with a plateau of the integer quantum Hall effect (IQHE) [Knott et al., Semicond. Sci. Technol. 10, 117 (1995)]. Despite these observations, many theoretical models of the integer quantum Hall effect do not consider the interactions between the electrons. In our classical model, the Hall potential for non-interacting electrons differs dramatically from the solution shown above, and transport then (and only then) proceeds along the lower and upper edges. However, the edge-current solution is not compatible with the contact potential model described above, where an external reservoir enforces equipotentials within each contact.