Together with Christoph Kreisbeck and Rafael A. Molina I have contributed a blog entry to the News and Views section of the Journal of Physics describing our most recent work on an Aharonov–Bohm interferometer with an embedded quantum dot (article, arxiv). Can you spot Schrödinger’s cat in the result?
Splitting the heat: the quantum limits of thermal energy flow
With the ever-shrinking size of electronic transistors, the quantum mechanical nature of electrons becomes more visible. For instance, two electrons with the same spin orientation and velocity cannot be at the same location (Pauli blocking). At low temperatures, electronic waves travel many micrometers completely coherently, reflected only by the geometry of the confinement. A tight confinement leads to a larger separation of the quantized energy levels and restricts the lateral spread of the electrons to specific eigenmodes of a nanowire.
The distribution of the electronic current into the various eigenmodes is then given by the geometrical scattering properties of the device interior, which are conveniently computed using wave packets. The ballistic electrons entering a nanodevice carry along charge and thermal energy. The maximum amount of thermal energy Q per unit time which can be transported through a single channel between two reservoirs at different temperatures is limited to Q ≤ π^{2} k_{B}^{2} (T_{2}^{2} − T_{1}^{2})/(3h) [h denotes Planck’s constant and k_{B} Boltzmann’s constant]. This has implications for computing devices, since it restricts the cooling rate (Pendry 1982).
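As a back-of-the-envelope check, the single-channel bound above is easy to evaluate numerically (a sketch; the formula and constants are from the text, the example temperatures are arbitrary):

```python
import math

# Physical constants (CODATA exact values)
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def max_heat_flow(t_hot, t_cold):
    """Upper bound on the thermal energy flow (W) through a single
    ballistic channel between reservoirs at t_hot and t_cold (K):
    Q <= pi^2 k_B^2 (T_hot^2 - T_cold^2) / (3 h)."""
    return math.pi**2 * K_B**2 * (t_hot**2 - t_cold**2) / (3 * H)

# Example: a channel connecting reservoirs at 1 K and 0.1 K
q = max_heat_flow(1.0, 0.1)  # on the order of a picowatt
```

For cryogenic electronics this sub-picowatt scale per channel illustrates why the bound matters for cooling rates.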
In a collaboration with the novel materials group at Humboldt University (Prof. S.F. Fischer, Dr. C. Riha, Dr. O. Chiatti, S. Buchholz), and using wafers produced in the lab of A. Wieck and D. Reuter (Bochum, Paderborn), C. Kreisbeck and I have compared theoretical expectations with experimental data for the thermal energy and charge currents in multiterminal nanorings (AIP Advances 2016, open access). Our findings highlight the influence of the device geometry on both charge and thermal energy transfer and demonstrate the usefulness of the time-dependent wave-packet algorithm for finding the eigenstates over a whole range of temperatures.
Metadata analysis of 80,000 arxiv:physics/astro-ph articles reveals biased moderation
Have you ever thought of arXiv moderation in astro-ph being a problem? Did you experience a delay of more than 5 months from the submission of your preprint to the arXiv to it becoming publicly visible? Did this happen without any explanation or reaction from the arXiv moderators, despite the same article being published after peer review in the Astrophysical Journal Letters?
Chances are high that your answer is no; to be precise, the odds are 81404/81440 = 99.9558 percent that this did not happen to you. Lucky you! Now let me tell you about the other 36/81440 = 0.0442 percent. My computer-based analysis of the last 80,000 deposited arxiv:astro-ph articles shows interesting results about the moderation patterns in astrophysics. To repeat the analysis:
- get the arXiv metadata, which is available (good!) from the arXiv itself. I used the excellent metha tools from Martin Czygan to download all metadata from the astro-ph and quant-ph sections since 5/2014.
- parse the resulting 200 MB XML file, for instance with Mathematica. To get the delay from submission to arXiv publication, I took the time difference between the submission date stamp (oldest XMLElement[{http://purl.org/dc/elements/1.1/, date}]) and the arXiv identifier, which encodes the year and month of public visibility.
- Example: the article arxiv:1604.00876 went public in April 2016, 5 months after submission to the arXiv (November 5, 2015) and after publication in the Astrophysical Journal Letters (where the total processing time from submission to online publication, including peer review, was 1.5 months).
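The month arithmetic behind such a delay estimate can be sketched as follows (a simplified illustration: it handles only post-2007 identifiers of the form YYMM.NNNNN and counts whole months):

```python
from datetime import date

def months_public_delay(arxiv_id, submitted):
    """Months between a submission date and the public visibility
    encoded in a (post-2007 style) arXiv identifier 'YYMM.NNNNN'."""
    yymm = arxiv_id.split(".")[0]
    year, month = 2000 + int(yymm[:2]), int(yymm[2:4])
    return (year - submitted.year) * 12 + (month - submitted.month)

# arxiv:1604.00876: submitted November 5, 2015, public April 2016
delay = months_public_delay("1604.00876", date(2015, 11, 5))  # 5 months
```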
The analysis shows different patterns of moderation for the two sections I considered, quant-ph and astro-ph. It reveals problematic moderation effects in the arXiv astro-ph section:
- Completely suitable articles are blocked, most of them peer reviewed and published, for instance in the Astrophysical Journal, the Astrophysical Journal Letters, or the Monthly Notices of the Royal Astronomical Society.
- This might indicate a moderation biased against specific persons and subjects. In contrast to scientific journals with their named editors, the arXiv moderation is opaque and anonymous. The metadata analysis shows that the moderation of the astro-ph and quant-ph sections follows very different guidelines, with astro-ph having a strong bias toward blocking valid contributions.
- It makes the astro-ph arXiv less usable as a medium for the rapid dissemination of cutting-edge research via preprints.
- This hurts careers and citation histories, and it encourages plagiarism: new scientific findings are more easily plagiarized by other groups, since no arXiv-timestamped preprint establishes precedence.
- If we, the scientists, want a publicly funded arXiv, we must ensure that it is operated according to scientific standards which serve the public. This excludes the biased blocking of valid and publicly funded research.
- Finally, the arXiv was not put in place to be a backup server for all journals, but rather to provide a space to share upcoming scientific publications without months of delay.
I will be happy to share comments I receive about similar cases. I am not talking about dubious articles or non-scientific theories, but about standard peer-reviewed contributions published in established physics journals, which should be on the astrophysical preprint arXiv.
Here follows the list of all articles which were delayed by more than 3 months on arxiv:physics/astro-ph (out of a total of 81,440 deposited articles), and, where known, where the peer-reviewed article got published. I cannot exclude factors other than moderation for the delay, but I can definitely confirm incorrect moderation as the cause in the 2 cases I have experienced myself. Interestingly, the same analysis on arxiv:physics/quant-ph did not reveal such a moderation bias against peer-reviewed articles. This gives hope that the astrophysics section could recover and return to 100 percent flawless operation. Then the arXiv would fulfil its own pledge on accountability and good scientific practice (principles of the arXiv’s operation).
http://arxiv.org/abs/1406.4846
http://arxiv.org/abs/1009.0420 The Astrophysical Journal
http://arxiv.org/abs/1101.3547 Publications of Astronomical Society of Japan
http://arxiv.org/abs/1505.07311
http://arxiv.org/abs/1506.01700 Journal of Astrophysics and Astronomy
http://arxiv.org/abs/1506.01957
http://arxiv.org/abs/1509.08311 EPJ Web of Conferences
http://arxiv.org/abs/1511.00541
http://arxiv.org/abs/1512.02176
http://arxiv.org/abs/1501.03478 Astrophysics and Space Sciences
http://arxiv.org/abs/1602.07334 The Astrophysical Journal
http://arxiv.org/abs/1602.07511 Physical Review C
http://arxiv.org/abs/1503.03442
http://arxiv.org/abs/1603.09573 Monthly Notices of the Royal Astronomical Society
http://arxiv.org/abs/1604.00876 The Astrophysical Journal Letters
http://arxiv.org/abs/1604.00878
http://arxiv.org/abs/1604.00291 Journal of Statistical Mechanics: Theory and Experiment
http://arxiv.org/abs/1604.07760 The Astrophysical Journal Letters
http://arxiv.org/abs/1605.02095
http://arxiv.org/abs/1606.00369
Predicting comets: a matter of perspective
In general, any cometary activity is difficult to predict, and many comets are known for sudden changes in brightness, break-ups, and simple disappearances. Fortunately, the Rosetta target comet 67P/Churyumov–Gerasimenko (67P/CG) is much more amenable to theoretical predictions. The OSIRIS and NAVCAM images show light reflected from a highly structured dust coma within the space probe’s orbit (ca. 20–150 km).
Is it possible to predict the dust coma and tail of comets?
Starting in 2014 we have been working on a dust forecast for 67P/CG, see the previous blog entries. We now had the chance to check how well our predictions hold by comparing the model outcome to an image sequence from the OSIRIS camera during one rotation period of 67P/CG on April 12, 2015, published by Vincent et al. in A&A 587, A14 (2016) (arxiv version, there Fig. 13).
Our results appeared in Kramer & Noack, Astrophysical Journal Letters, 823, L11 (preprint, images). We obtain a surprisingly high correlation coefficient (average 80%, maximum 90%) between theory and observation if we stick to the following minimal-assumption model:
- dust is emitted from the entire sunlit nucleus, not only from localized active areas. We refer to this as the “homogeneous activity model”.
- dust enters space with a finite velocity (on average) along the surface normal. This implies that a rapid acceleration takes place close to the surface.
- photographed “jets” depend strongly on the observing geometry: if multiple concave areas align along the line of sight, a high imaged intensity results, which is not necessarily caused by a single main emission source. As an exemplary case, we analysed the brightest points in the Rosetta image taken on April 12, 2015, 12:12, and looked at all contributing factors along the line of sight (yellow line) from the camera to the comet. The observed jet actually results from multiple sources, plus contributions from all sunlit surface areas.
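The line-of-sight effect can be illustrated with a toy column-brightness integration (purely illustrative density values, not the OSIRIS data): several aligned, moderate dust concentrations along one ray can outshine a single brighter but isolated source.

```python
def imaged_intensity(density_samples, step):
    """Column brightness for one camera pixel: integrate the dust
    density along the line of sight (simple Riemann sum)."""
    return sum(density_samples) * step

# Toy example: ray A crosses three aligned concave source regions,
# ray B crosses one stronger but isolated source (arbitrary units).
ray_a = [0.1, 0.9, 0.1, 0.8, 0.1, 0.9, 0.1]   # aligned sources
ray_b = [0.1, 0.1, 0.1, 1.5, 0.1, 0.1, 0.1]   # single source
bright_a = imaged_intensity(ray_a, 1.0)
bright_b = imaged_intensity(ray_b, 1.0)
```

Ray A yields the brighter pixel even though no single source along it dominates, which is the geometric point made above.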
What are the implications of the theoretical model?
If dust is emitted from all sunlit areas of 67P/CG, this implies a more homogeneous surface erosion of the illuminated nucleus and leaves less room for compositional heterogeneities. And finally: it makes the dust coma much more predictable, while still allowing for additional (but unpredictable) spontaneous outbreak events of 20–40 min duration. Interestingly, a reanalysis of the comet Halley flyby by Crifo et al. (Earth, Moon, and Planets 90, 227–238 (2002)) also points to a more homogeneous emission pattern as compared to localized sources.
Apply now for the 621st WE-Heraeus-Seminar: From Photosynthesis to Photovoltaics: Theoretical Approaches for Modelling Supramolecular Complexes and Molecular Crystals
The conference focuses on new theoretical and experimental developments in the field of photoactive molecules. If you are working in this field, we encourage you to attend the meeting. Find out more, and see who has already agreed to participate, at photosynthesis2016.wordpress.com.
Please submit your abstract within the next two weeks by filling out the linked template from the conference site.
We (the scientific organizers, Tobias Kramer and Jörg Megow) are grateful for the full support by the Wilhelm and Else Heraeus Foundation, which covers the accommodation for all successful applicants.
Weathering the dust around comet 67P/Churyumov–Gerasimenko
Comet 67P/Churyumov–Gerasimenko is past its perihelion and is currently visible in telescopes in the morning hours. The picture was taken from Tenerife by the Bradford robotic telescope, where I submitted the request. The tail extends hundreds of thousands of kilometers into space and consists of dust particles emitted from the cometary nucleus, which measures just a few kilometers. In a recent work just published in the Astrophysical Journal Letters (arxiv version), we have explored how dust which does not make it into space whirls around the cometary nucleus. The model assumes that dust particles are emitted from the porous mantle, hover over the cometary surface for some time (<6 h), and then fall back onto the surface, delayed by the drag of gas molecules moving away from the nucleus. As in the predictions for the cometary coma discussed previously, we stick to a minimal-assumption model with a homogeneous surface activity of gas and dust emission.
The movements of 40,000 dust particles are tracked, and the average dust transport is computed within a volumetric grid of 300 m sized boxes. Besides the gas–dust interaction, we also incorporate the rotation of the comet, which leads to a directional transport.
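The volumetric averaging step can be sketched as a simple binning of particle positions into 300 m boxes (a minimal illustration; the actual code also accumulates velocities to obtain the transport direction):

```python
def bin_particles(positions, box=300.0):
    """Count dust particles in cubic boxes of side `box` (meters);
    keys are integer grid indices (i, j, k)."""
    counts = {}
    for x, y, z in positions:
        key = (int(x // box), int(y // box), int(z // box))
        counts[key] = counts.get(key, 0) + 1
    return counts

# Three particles; the first two share the box at the origin
grid = bin_particles([(10.0, 20.0, 5.0), (250.0, 40.0, 60.0), (400.0, 0.0, 0.0)])
```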
The Rosetta mission dropped Philae over the small lobe of 67P/CG, and Philae took a sequence of approach images which reveal structures resembling wind tails behind boulders on the comet. This allowed Mottola et al. (Science 349.6247 (2015): aab0232) to derive information about the direction of the impinging particles which hit the surface unless sheltered by a boulder. Our model predicts a dust transport in line with the observed directions in the descent region; it will be interesting to see how wind tails at other locations match the prediction. We put an interactive 3D dust-stream model online to visualize the dust flux predicted by the homogeneous surface model.
Day and night at comet 67P/Churyumov–Gerasimenko
Comet 67P/Churyumov–Gerasimenko has passed its nearest distance to the sun and its tail has been observed from earth. The comet emits dust and displays spectacular but short-lived outbreaks of localized jet activity. Very detailed OSIRIS pictures of the near-surface dust emission, ready for stereo viewing, have been posted by Brian May. The pictures also allow one to have a look at the prediction from the homogeneous dust emission model discussed previously. When you direct your attention in Brian May’s pictures to the background activity, you find patterns very similar to those expected from the homogeneous emission model. This activity is dimmer but steadily blows dust off the nucleus. Matthias Noack and I have generated and uploaded a visualization of the dust data obtained from the homogeneous activity model. In contrast to localized activity models, collimated jets arise from a bundle of co-propagating dust trajectories emanating from concave surface areas. The underlying topographical shape model is a uniform triangle remesh of Mattias Malmer’s excellent work based on the release of Rosetta’s NAVCAM images via the Rosetta blog. The following video takes you on a flight around 67P/CG, with 16 hours condensed into 90 seconds.
The video is a side-by-side stereoscopic 3D rendering of 67P/Churyumov–Gerasimenko and the dust cloud, which can be viewed in 3D with a simple cardboard viewer. While the observer is encircling the nucleus, day and night pass and different parts of the comet are illuminated.
In the homogeneous activity model each sunlit triangle emits dust with an initial velocity component along the surface normal. The dust is then additionally dragged along within the outward-streaming gas, which is also incorporated in the model. In contrast to compact dust particles, the gas molecules also diffuse in lateral directions, and thus the gas by itself does not help to collimate the jets. The Rosetta mission, with its long-term observation program, offers fascinating ways to perform a reality check on various models of cometary activity, which differ considerably in the underlying physics and in the assumptions about the original distribution and lift-off conditions of the dust that eventually forms the beautiful tails of comets.
Source:
Homogeneous dust emission and jet structure near active cometary nuclei: the case of 67P/Churyumov–Gerasimenko by Tobias Kramer, Matthias Noack, Daniel Baum, Hans-Christian Hege, Eric J. Heller.
PS:
For red-cyan glasses try our 3D video on youtube (Flash player required; watch out for the settings and 3D options, 1080p HD recommended).
Dusting off cometary surfaces: collimated jets despite a homogeneous emission pattern.
Knowledge of GPGPU techniques is helpful for the rapid building and testing of scientific models. For example, the beautiful pictures taken by the ESA/Rosetta spacecraft of comet 67P/Churyumov–Gerasimenko reveal jets of dust particles emitted from the comet. Wouldn’t it be nice to have a fast method to simulate thousands of dust particles around the comet and to find out if the peculiar shape of this space potato already influences the dust trajectories through its gravitational potential? At the Zuse Institute in Berlin we joined forces between the distributed algorithms and visual data analysis groups to test this idea. But first an accurate shape model of comet 67P/CG is required. As published on his blog, Mattias Malmer has done amazing work to extract a shape model from the published navigation camera images.
- Starting from the shape model by Mattias Malmer, we obtain a remeshed model with fewer triangles on the surface (we use about 20,000 triangles). The key property of the new mesh is a homogeneous coverage of the cometary surface with almost equally sized triangles. We don’t want better resolution and adaptive mesh sizes at areas with more complex features; rather, we are considering a homogeneous emission pattern without isolated activity regions. This is best modeled by mesh cells of equal area. Will this prescription nevertheless yield collimated dust jets? We’ll see…
- To compute the gravitational potential of such a surface we follow this nice article by J. T. Conway. The calculation later on stays in the rotating frame anchored to the comet; thus the centrifugal and Coriolis forces need to be included in addition.
- To accelerate the method, OpenCL comes to the rescue and lets one compute many trajectories in parallel. What is required are physical conditions for the starting positions of the dust as it flies off the surface. We put one dust particle at the center of each surface triangle and set the initial velocity along the normal direction, typically to 2 or 4 m/s. This ensures that most particles are able to escape and do not fall back onto the comet.
- To visualize the resulting point clouds of dust particles we have programmed an OpenGL visualization tool. We compute the rotation and the direction of sunlight on the comet to cast shadows, and we add activity profiles to the comet surface to mask out dust originating from the dark side of the comet.
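The force evaluation in the rotating frame can be sketched as follows. This is a deliberately crude stand-in: point masses placed at the triangle centroids instead of Conway’s exact polyhedron potential, with placeholder mass values and rotation rate.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(point, centroids, masses):
    """Approximate gravitational acceleration at `point` by summing
    point masses at the triangle centroids (crude stand-in for the
    exact potential of the triangulated shape)."""
    ax = ay = az = 0.0
    for (cx, cy, cz), m in zip(centroids, masses):
        dx, dy, dz = cx - point[0], cy - point[1], cz - point[2]
        r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        ax += G * m * dx / r3
        ay += G * m * dy / r3
        az += G * m * dz / r3
    return (ax, ay, az)

def rotating_frame_terms(pos, vel, omega_z):
    """Centrifugal and Coriolis accelerations for rotation about z:
    a = omega^2 (x, y, 0) - 2 omega x v."""
    cx = omega_z**2 * pos[0] + 2 * omega_z * vel[1]
    cy = omega_z**2 * pos[1] - 2 * omega_z * vel[0]
    return (cx, cy, 0.0)
```

In the real code each trajectory integrates these accelerations in parallel on the GPU; the sketch only shows the physics entering each time step.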
This is what we get for May 3, 2015. The ESA/NAVCAM image is taken verbatim from the Rosetta blog.
Read more about the physics and results in our arxiv article T. Kramer et al.: Homogeneous Dust Emission and Jet Structure near Active Cometary Nuclei: The Case of 67P/Churyumov–Gerasimenko (submitted for publication), and grab the code to compute your own dust trajectories with OpenCL at github.org/noma/covis.
The shape of the universe
The following post is contributed by Peter Kramer.
The new Planck data on the cosmic microwave background (CMB) have come in. For cosmic topology, the data sets contain interesting information related to the size and shape of the universe. The curvature of three-dimensional space leads to a classification into hyperbolic, flat, or spherical cases. Sometimes in the popular literature, the three cases are said to imply an infinite (hyperbolic, flat) or finite (spherical) size of the universe. This statement is not correct: topology supports a much wider zoo of possible universes. For instance, there are finite hyperbolic spaces, as depicted in the figure (taken from Group actions on compact hyperbolic manifolds and closed geodesics, arxiv version). The figure also shows the resulting geodesics, the paths of light through such a hyperbolic, finite-sized universe. The start and end points must be identified and lead to a smooth connection.
Recent observational data seem to suggest a spherical space. Still, this does not resolve the issue of the size of the universe.
Instead of the fully filled three-sphere, smaller parts of the sphere can already be closed topologically and thus lead to a smaller-sized universe. A systematic exploration of such smaller but still spherical universes is given in my recent article
Topology of Platonic Spherical Manifolds: From Homotopy to Harmonic Analysis.
In physics, it is important to give specific predictions for observations of the topology, for instance by predicting the ratio of the different angular modes of the cosmic microwave background. It is shown that this is indeed possible: for instance, in a cubic (still spherical!) universe, the squared 4th and 6th multipole orders are tied together in the proportion 7 : 4, see Table 5. On p. 35 of the Planck collaboration article the authors call for models yielding such predictions as possible explanations for the observed anisotropy and the ratio of high and low multipole moments.
When two electrons collide. Visualizing the Pauli blockade.
From time to time I get asked about the implications of the Pauli exclusion principle for quantum mechanical wave-packet simulations.
I start with the simplest antisymmetric case: a two-particle state given by the Slater determinant of two Gaussian wave packets with perpendicular momentum directions:
φ_{a}(x,y) = e^{−[(x−o)^{2}+(y−o)^{2}]/(2a^{2})−ikx+iky} and φ_{b}(x,y) = e^{−[(x+o)^{2}+(y−o)^{2}]/(2a^{2})+ikx+iky}
This yields the twoelectron wave function
ψ(x1,y1,x2,y2) = φ_{a}(x1,y1)·φ_{b}(x2,y2) − φ_{a}(x2,y2)·φ_{b}(x1,y1)
The probability to find one of the two electrons at a specific point in space is given by integrating the absolute value squared of the wave function over one coordinate set.
The resulting single-particle density (snapshots at specific values of the displacement o) is shown in the animation for two different values of the momentum k (we assume that both electrons are in the same spin state).
For small values of k the two electrons come close in phase space (that is, in momentum and position). The animation shows how the density deviates from a simple addition of the probabilities of two independent electrons.
If the two electrons already differ by a large relative momentum, their distance in phase space is large even if they come close in position space. Then the resulting single-particle density looks similar to the sum of two independent probabilities.
The probability to find the two electrons simultaneously at the same place is zero in both cases, but this is not directly visible from the single-particle density (which reflects the probability to find either of the electrons at a specific position).
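The exchange antisymmetry and the Pauli zero at coinciding coordinates can be checked numerically with a minimal sketch of the Slater determinant above (arbitrary values for the displacement o, width a, and momentum k; wave functions left unnormalized):

```python
import cmath

O, A, K = 1.0, 1.0, 2.0  # displacement, width, momentum (arbitrary units)

def phi_a(x, y):
    # Gaussian centered at (O, O), momentum along (-k, k)
    return cmath.exp(-((x - O)**2 + (y - O)**2) / (2 * A**2) + 1j * K * (-x + y))

def phi_b(x, y):
    # Gaussian centered at (-O, O), momentum along (k, k)
    return cmath.exp(-((x + O)**2 + (y - O)**2) / (2 * A**2) + 1j * K * (x + y))

def psi(x1, y1, x2, y2):
    """Slater determinant of the two Gaussian packets (unnormalized)."""
    return phi_a(x1, y1) * phi_b(x2, y2) - phi_a(x2, y2) * phi_b(x1, y1)

# Antisymmetry under particle exchange, and the Pauli zero at coincidence
v = psi(0.3, -0.2, 0.5, 0.1)
w = psi(0.5, 0.1, 0.3, -0.2)   # exchanged coordinates: equals -v
z = psi(0.4, 0.4, 0.4, 0.4)    # both electrons at the same point: zero
```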
For further reading, see this article [arxiv version].
The impact of scientific publications – some personal observations
I will resume posting about algorithm development for computational physics. To put these efforts in a more general context, I start with some observations about the current publication ranking model and explore alternatives and supplements in the next posts.
Working in academic institutions involves serving on hiring committees as well as being assessed by colleagues, which requires measuring the impact of my own and others’ scientific contributions.
In the internet age it has become common practice to look at various performance indices, such as the h-index or the number of “first author” and “senior author” articles. Often it is the responsibility of the applicant to submit this data in an electronic spreadsheet format suitable for an easy ranking of all candidates. The indices are only one consideration for the final decision, albeit in my experience an important one due to their perceived unbiased and statistical nature. The funding of whole university departments and the careers of young scientists are tied to the performance indices.
I did reflect on the usefulness of impact factors while collecting them for various reports; here are some personal observations:
- Looking at the (very likely rather incomplete) citation count of my father, I find it interesting that, for instance, a 49-year-old contribution by P. Kramer and M. Moshinsky on group-theoretical methods for few-body systems gains most of its citations per year after almost 5 decades. This timescale is well beyond any short-term hiring or funding decisions based on performance indices. From colleagues I hear of similar cases.
- A high h-index can be a sign of a narrow research field, since the h-index is best built up by sticking to the same specialized topic for a long time, and this encourages serialized publications. I find it interesting that, on the other hand, important contributions have been made by people working outside the field to which they contributed. The discovery of three-dimensional quasicrystals discussed here provides a good example: the canonical condensed matter theory did not envision this paradigmatic change; rather, the study of group-theoretical methods in nuclear physics provided the seeds.
- The full-text search provided by the search engines offers fascinating options to scan through previously forgotten chapters and books, but it also bypasses the systematic classification schemes previously developed and curated by colleagues in mathematics and theoretical physics. It is interesting to note that, for instance, the AMS short reviews are not done anonymously and are most often of excellent quality. The non-curated search, on the other hand, leads to a down-ranking of books and review articles, which contain a broader and deeper exposition of a scientific topic. Libraries with real books grouped by topic are deserted these days, and online services and expert reviews did in general not gain a larger audience or an expert community to write reports. One exception might be the public discussion of possible scientific misconduct and retracted publications.
- Another side effect: searching the internet for specific topics diminishes the opportunity to accidentally stumble upon an interesting article lacking those keywords, for instance while scanning through a paper volume of a journal in search of a specific article. I recall that many faculty members went to the library every Monday and looked at all the incoming journals to stay up to date about the general developments in physics and chemistry. Today we get email alerts about citation counts or specific subfields, but no alert contains a suggestion of what other article might pique our intellectual curiosity – and looking at the rather stupid shopping recommendations generated by online warehouses, I don’t expect this to happen anytime soon.
- On a positive note: since all text sources are treated equally, no “high-impact journals” are preferred. In my experience as a referee for journals with all sorts of impact numbers, the interesting contributions are not necessarily published in or submitted to highly ranked journals.
To sum up, the assessment of manuscripts, of the contributions of colleagues, and of my own articles requires humans to read and process them carefully – all of which takes a lot of time and consideration. It can take decades before publications become alive and well cited. Citation counts of the last 10 years can be poor indicators of the long-term importance of a contribution. Counting statistics provide some gratification by showing immediate interest and are the (less personal) substitute for the old-fashioned postcards requesting reprints. People working in theoretical physics are often closely related by collaboration distance, which provides yet another (much more fun!) metric. You can check your Erdős number (mine is 4) or Einstein number (3, thanks to working with Marcos Moshinsky) at the AMS website.
How can we improve the current situation and maintain a well-curated and relevant library of scientific contributions – in particular for numerical results and methods? One possibility is to make a larger portion of the materials surrounding a publication available. In computational physics it is of interest to test and recalculate published results shown in journals. The nanohub.org platform is in my view a best-practice case for providing supplemental information on demand and for ensuring the long-term availability and usefulness of scientific results by keeping the computational tools running and updated. It has been a pleasure and an excellent experience for me to work with the nanohub team to maintain our open quantum dynamics tool. Another way is to provide and test background materials in research blogs. I will try out different approaches in the next posts.
Better than Slater determinants: center-of-mass free basis sets for few-electron quantum dots
Solving the interacting many-body Schrödinger equation is a hard problem. Even restricting the spatial domain to a two-dimensional plane does not lead to analytic solutions; the troublemakers are the mutual particle–particle interactions. In the following we consider electrons in a quasi two-dimensional electron gas (2DEG), which are further confined either by a magnetic field or by an external harmonic oscillator confinement potential. For two electrons, this problem is solvable for specific values of the Coulomb interaction due to a hidden symmetry in the Hamiltonian; see the review by A. Turbiner and our application to two interacting electrons in a magnetic field.
For three and more electrons, (to my knowledge) no analytical solutions are known. One standard computational approach is the configuration interaction (CI) method: diagonalizing the Hamiltonian in a variational trial space of Slater-determinantal states. Each Slater determinant consists of products of single-particle orbitals. Due to computer resource constraints, only a certain number of Slater determinants can be included in the basis set. One possibility is to include only trial states up to a certain excitation level of the non-interacting problem.
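The resource constraint is easy to quantify: the number of Slater determinants grows combinatorially with the number of orbitals kept (a sketch; the example numbers are illustrative, not those of a specific calculation):

```python
from math import comb

def n_slater_determinants(n_orbitals, n_electrons):
    """Number of Slater determinants built from a fixed set of
    single-particle orbitals: all ways to occupy n_electrons of them."""
    return comb(n_orbitals, n_electrons)

# Example: 4 electrons distributed over 30 spin-orbitals
size = n_slater_determinants(30, 4)  # 27,405 determinants
```

Doubling the orbital count multiplies this size by roughly 2^4 for four electrons, which is why a truncation criterion is unavoidable.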
The usage of Slater determinants as a CI basis set introduces severe distortions in the eigenenergy spectrum due to the intrusion of spurious states, as we discuss next. Spurious states have been extensively analyzed in the few-body problems arising in nuclear physics, but have rarely been mentioned in solid-state physics, where they do arise in quantum-dot systems. The basic defect of the Slater-determinantal CI method is that it brings along center-of-mass excitations. During the diagonalization, the center-of-mass excitations enter alongside the Coulomb interaction and lead to an inflated basis size as well as a loss of precision for the eigenenergies of the excited states. Increasing the basis set does not uniformly reduce the error across the spectrum, since the enlarged CI basis set brings along states of high center-of-mass excitation. The cutoff energy then restricts the remaining basis size for the relative part.
The cleaner and leaner way is to separate the center-of-mass excitations from the relative-coordinate excitations, since the Coulomb interaction acts only along the relative coordinates. In fact, the center-of-mass part can be split off and solved analytically in many cases. The construction of the relative-coordinate basis states requires group-theoretical methods and is carried out for four electrons in Interacting electrons in a magnetic field in a center-of-mass free basis (arxiv:1410.4768). For three electrons, the importance of a spurious-state free basis set was emphasized by R. Laughlin and is a design principle behind the Laughlin wave function.
Slow or fast transfer: bottleneck states in light-harvesting complexes
In the previous post I described some of the computational challenges of modeling energy transfer in the light-harvesting complex II (LHCII) found in spinach. Here, I discuss the results we have obtained for the dynamics and choreography of excitonic energy transfer through the chlorophyll network. Compared to the Fenna–Matthews–Olson complex, LHCII has twice as many chlorophylls per monomeric unit (labeled 601–614, of chlorophyll a and b types).
Previous studies of exciton dynamics had to stick to simple exponential decay models based on either Redfield or Förster theory for describing the transfer from the Chl b to the Chl a sites. The results are neither satisfying nor conclusive, since the transfer time differs widely depending on the method chosen (tens of picoseconds vs. picoseconds!).
Resolving the discrepancies between the various approximate methods requires a more accurate approach. With the accelerated HEOM at hand, we revisited the problem and calculated the transfer rates. We find slower rates than given by the Redfield expressions. A combined Förster–Redfield description is possible in hindsight by using HEOM to identify a suitable cutoff parameter (Mcr = 30/cm in this specific case).
Since the energy transfer is driven by the coupling of electronic degrees of freedom to vibrational ones, it is important to assess how the vibrational mode distribution affects the transfer. In particular, it has been proposed that specifically tuned vibrational modes might promote a fast relaxation. We find no strong impact of such modes on the transfer; rather we see (independent of the detailed vibrational structure) several bottleneck states, which act as a transient reservoir for the exciton flux. The details and distribution of the bottleneck states depend strongly on the parameters of the electronic couplings and differ between the two most commonly discussed LHCII models, proposed by Novoderezhkin/Marin/van Grondelle and by Müh/Madjet/Renger – both are considered in the article Scalable high-performance algorithm for the simulation of exciton dynamics. Application to the light-harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik).
Again, the correct assignment of the bottleneck states requires using HEOM and looking beyond the approximate rate equations.
High-performance OpenCL code for modeling energy transfer in spinach
With increasing computational power of massively parallel computers, a more accurate modeling of the energy-transfer dynamics in larger and more complex photosynthetic systems (= light-harvesting complexes) becomes feasible – provided we choose the right algorithms and tools.
The diverse character of hardware found in high-performance computers (HPC) seemingly requires rewriting program code from scratch depending on whether we are targeting multi-core CPU systems, integrated many-core platforms (Xeon Phi/MIC), or graphics processing units (GPUs).
To avoid the fragmentation of our open quantum-system dynamics workhorse (see the previous GPU-HEOM posts) across the various HPC platforms, we have transferred the GPU-HEOM CUDA code to the Open Computing Language (OpenCL). The resulting QMaster tool is described in our just published article Scalable high-performance algorithm for the simulation of exciton dynamics. Application to the light-harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik). This post details the computational challenges and lessons learnt; the application to the light-harvesting complex II found in spinach will be the topic of the next post.
In my experience, it is not uncommon to develop a nice GPU application, for instance with CUDA, which later on is scaled up to handle bigger problem sizes. With increasing problem size the memory demands increase as well, and even the 12 GB provided by the Kepler K40 are finally exhausted. Upon reaching this point, two options are possible: (a) to distribute the memory across different GPU devices, or (b) to switch to architectures which provide more device memory. Option (a) requires substantial changes to existing program code to manage the distributed memory access, while option (b) in combination with OpenCL requires (in the best case) only adapting the kernel-launch configuration to the different platforms.
QMaster implements an extension of the hierarchical equations of motion (HEOM) method originally proposed by Tanimura and Kubo, which involves many (small) matrix-matrix multiplications. For GPU applications, the usage of local memory and the optimal thread grids for fast matrix-matrix multiplications have been described before and are used in QMaster (and the publicly available GPU-HEOM tool on nanohub.org). While for GPUs the best performance is achieved by using shared/local memory and assigning one thread to each matrix element, the multi-core CPU OpenCL variant performs better with fewer threads, each doing more work. Therefore, on CPU machines we use a thread grid which computes one complete matrix product per thread (this is somewhat similar to following the "naive" approach given in NVIDIA's OpenCL programming guide, chapter 2.5). This strategy did not work very well for the Xeon Phi/MIC OpenCL case, which requires additional data-structure changes, as we learnt from discussions with the distributed-algorithms and HPC experts in the group of Prof. Reinefeld at the Zuse Institute in Berlin.
The good performance and scaling across the 64 cores of the AMD Opteron workstation positively surprised us and lays the groundwork for investigating the validity of approximations to the energy-transfer equations in the spinach light-harvesting system, the topic of the next post.
Flashback to the 80s: filling space with the first quasicrystals
This post provides a historical and conceptual perspective on the theoretical discovery of non-periodic 3D space fillings by Peter Kramer, later experimentally found and now called quasicrystals. See also these previous blog entries for more quasicrystal references, and more background material here.
The following post is written by Peter Kramer.
When sorting out old texts and figures of mine from 1981, published in Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells (Acta Cryst. A 38, 257-264 (1982)), I came across the figure of a regular pentagon of edge length L, which I denoted as p(L). In the left figure its red-colored edges are star-extended up to their intersections. Straight connection of these intersection points creates a larger blue pentagon. Its edges are scaled up by τ^{2}, with τ the golden section number, so the larger pentagon we call p(τ^{2}L). This blue pentagon is composed of the old red one plus ten isosceles triangles with golden proportion of their edge lengths. Five of them have edges t_{1}(L): (L, τL, τL), five have edges t_{2}(L): (τL, τL, τ^{2}L). We find from Fig. 1 that these golden triangles may be composed face-to-face into their τ-extended copies as t_{1}(τL) = t_{1}(L) + t_{2}(L) and t_{2}(τL) = t_{1}(L) + 2 t_{2}(L).
Moreover, we realize from the figure that also the pentagon p(τ^{2}L) can be composed from golden triangles as p(τ^{2}L) = t_{1}(τL) + 3 t_{2}(τL) = 4 t_{1}(L) + 7 t_{2}(L).
This suggests that the golden triangles t_{1}, t_{2} can serve as elementary cells of a triangle tiling to cover any range of the plane and provide the building blocks of a quasicrystal. Indeed, we did prove this long-range property of the triangle tiling (see Planar patterns with fivefold symmetry as sections of periodic structures in 4-space).
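The two composition rules and the pentagon count can be checked by simple tile bookkeeping; here is a minimal sketch (the matrix below merely encodes the rules quoted above):

```python
import numpy as np

# Tile-count bookkeeping for the golden triangles: a patch is a vector
# (n1, n2) of t_1(L) and t_2(L) counts. The two composition rules
#   t_1(τL) = t_1(L) + t_2(L),   t_2(τL) = t_1(L) + 2 t_2(L)
# are encoded in the (symmetric) substitution matrix M.
M = np.array([[1, 1],
              [1, 2]])

# Pentagon p(τ²L) = t_1(τL) + 3 t_2(τL), counted in the unscaled tiles:
pentagon = M @ np.array([1, 3])
print(pentagon)  # [4 7]  ->  p(τ²L) = 4 t_1(L) + 7 t_2(L)

# Consistency check: areas scale by τ² under the extension, and indeed the
# leading eigenvalue of M is τ², with τ the golden section number.
tau = (1 + np.sqrt(5)) / 2
print(np.linalg.eigvalsh(M).max(), tau**2)
```

Iterating M counts how many tiles of each type appear after repeated extension, which is the arithmetic behind covering arbitrarily large regions.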
An icosahedral tiling from star extension of the dodecahedron.
In the same paper, I generalized the star extension from the 2D pentagon to the 3D dodecahedron d(L) of edge length L (see next figure) by the following prescription:
- star-extend the edges of this dodecahedron to their intersections
- connect these intersections to form an icosahedron
The next star extension produces a larger dodecahedron d(τ^{3}L), with edges scaled by τ^{3}. In the composition of the larger dodecahedron I found four elementary polyhedral shapes, shown below. Even more amusing, I also resurrected the paper models I constructed in 1981 to actually demonstrate the complete space filling!
These four polyhedra compose their copies by scaling with τ^{3}. As in the 2D case, arbitrary regions of 3D can be covered by the four tiles.
The four elementary cells shown in the 1982 paper, Fig. 4. The four shapes are named dodecahedron (d), skene (s), aetos (a) and tristomos (t). The paper models from 1981 are still around in 2014 and complete enough to fill 3D space without gaps. You can spot all shapes (d, s, a, t) in various scalings, and they all systematically and gaplessly fill the large dodecahedron shell on the back of the table.
Quasiperiodicity.
The only feature missing for quasicrystals is aperiodic long-range order, which eventually leads to sharp diffraction patterns with 5- or 10-fold point symmetries forbidden for old-style crystals. In the construction shown here I strictly preserved central icosahedral symmetry. Non-periodicity then followed because full icosahedral symmetry and periodicity in 3D are incompatible.
In 1983 we found a powerful alternative construction of icosahedral tilings, independent of the assumption of central symmetry: the projection method from 6D hyperspace (On periodic and non-periodic space fillings of E^{m} obtained by projection). This projection establishes the quasiperiodicity of the tilings, analyzed in line with the work Zur Theorie der fastperiodischen Funktionen (I-III) of Harald Bohr from 1925, as a variant of aperiodicity (more background material here).
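The simplest setting in which to see how projection produces quasiperiodicity is one dimension lower: projecting the 2D square lattice onto a line of inverse golden-section slope yields the aperiodic Fibonacci chain. A minimal sketch of this 1D analogue (a toy illustration only; the paper itself works with the 6D hypercubic lattice):

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2  # golden section number

# Cut-and-project in closed form: the n-th interval type of the chain obtained
# by projecting the square lattice onto a line of slope 1/tau (1 = long, 0 = short).
def projected(n_max):
    n = np.arange(1, n_max + 1)
    return np.floor((n + 1) / tau) - np.floor(n / tau)

# The same chain from the substitution rule 1 -> 10, 0 -> 1, the 1D analogue
# of composing tiles into scaled copies of themselves.
def substituted(iterations):
    word = [1]
    for _ in range(iterations):
        word = [c for letter in word for c in ([1, 0] if letter else [1])]
    return word

w = substituted(10)          # 144 letters, a Fibonacci number
assert np.array_equal(projected(len(w)), w)
print(w[:10])  # [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
```

The agreement of the two constructions is the 1D shadow of the statement above: the projection produces exactly the aperiodic order that the inflation rules generate.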
Tutorial #1: simulate 2d spectra of light-harvesting complexes with GPU-HEOM @ nanoHUB
The computation and prediction of two-dimensional (2d) echo spectra of photosynthetic complexes is a daunting task and requires enormous computational resources – if done without drastic simplifications. However, such computations are absolutely required to test and validate our understanding of energy transfer in photosynthesis. You can find some background material in the recently published lecture notes on Modelling excitonic-energy transfer in light-harvesting complexes (arXiv version) of the Latin American School of Physics Marcos Moshinsky.
The ability to compute 2d spectra of photosynthetic complexes without resorting to strong approximations is, to my knowledge, an exclusive privilege of the Hierarchical Equations of Motion (HEOM) method due to its superior performance on massively parallel graphics processing units (GPUs). You can find some background material on the GPU performance in the two conference talks Christoph Kreisbeck and I presented at the GTC 2014 conference (recorded talk, slides) and the first nanoHUB users meeting.
- log in on nanoHub.org (it's free!)
- switch to the gpuheompop tool
- click the Launch Tool button (Java required)
- for this tutorial we use the example input for "FMO coherence, 1 peak spectral density". You can select this preset from the Example selector.
- we stick with the provided Exciton System parameters and only change the temperature to 77 K to compare the results with our published data
- in the Spectral Density tab, leave all parameters at the suggested values
- to compute 2d spectra, switch to the Calculation mode tab
- for compute: choose "two-dimensional spectra". This brings up input masks for setting the directions of all dipole vectors; we stick with the provided values. However, we select Rotational averaging: "four shot rotational average" and activate all six Liouville pathways by setting ground st[ate] bleach reph[asing], stim[ulated] emission reph[asing], and excited st[ate] abs[orption] to yes, as well as their non-rephasing counterparts (attention! this might require resizing the input mask by pulling at the lower right corner)
- That's all! Hit the Simulate button and your job will be executed on the carter GPU cluster at Purdue University. The simulation takes about 40 minutes of GPU time, which is orders of magnitude faster than any other published method with the same accuracy. You can close and reopen your session in between.
- Voilà: your first FMO spectrum appears.
- Now it's time to change parameters. What happens at higher temperature?
- If you like the results or use them in your work for comparison, we (and the folks at nanoHub who generously develop and provide the nanoHub platform and GPU computation time) appreciate a citation. To make this step easy, a DOI number and reference information is listed at the bottom of the About tab of the tool page.
With GPU-HEOM we (and now you!) can not only calculate the 2d echo spectra of the Fenna-Matthews-Olson (FMO) complex, but also reveal the strong link between the continuum part of the vibrational spectral density and the prevalence of long-lasting electronic coherences, as written in my previous posts.
GPU and cloud computing conferences in 2014
Two conferences related to GPU and cloud computing are currently open for registration. I will be attending and presenting at both; please email me if you want to get in touch at the meetings.
- April 9-11, 2014: talk about the GPU-HEOM tool at the first nanoHUB users conference, Phoenix, Arizona
- March 24-27, 2014: talk about GPU computing for understanding energy transfer in photosynthesis at the GPU Technology Conference 2014, San Jose, California
Oscillations in two-dimensional spectroscopy
Over the last years, a debate has been going on whether the long-lasting oscillatory signals observed in two-dimensional spectra reflect vibrational or electronic coherences, and how the functioning of the molecule is affected. Christoph Kreisbeck and I have performed a detailed theoretical analysis of oscillations in the Fenna-Matthews-Olson (FMO) complex and in a model three-site system. As explained in a previous post, the prerequisites for long-lasting electronic coherences are two features of the continuous part of the vibronic mode density: (i) a small slope towards zero frequency, and (ii) a coupling to the excitonic eigenenergy differences (ΔE) for relaxation. Both requirements are met by the mode density of the FMO complex, and the computationally demanding calculation of two-dimensional spectra of the FMO complex indeed predicts long-lasting cross-peak oscillations with a period matching h/ΔE at room temperature (see our article Long-Lived Electronic Coherence in Dissipative Exciton Dynamics of Light-Harvesting Complexes or the arXiv version). The persistence of the oscillations stems from a robust mechanism and does not require adding any additional vibrational modes at energies ΔE (the general background mode density is enough to support the relaxation toward a thermal state). But what happens if, in addition to the background vibronic mode density, additional vibronic modes are placed within the vicinity of the frequencies related to electronic coherences? This fine-tuning model is sometimes discussed in the literature as an alternative mechanism for long-lasting oscillations of vibronic nature. Again, the answer requires actually computing two-dimensional spectra and carefully analyzing the possible chain of laser-molecule interactions. Due to the special way two-dimensional spectra are measured, the observed signal is a superposition of at least three pathways, which have different sensitivity for distinguishing electronic and vibronic coherences.
Being a theoretical physicist now pays off, since we have calculated and analyzed the three pathways separately (see our recent publication Disentangling Electronic and Vibronic Coherences in Two-Dimensional Echo Spectra or the arXiv version). One of the pathways leads to an enhancement of vibronic signals, while the combination of the remaining two diminishes electronic coherences otherwise clearly visible within each of them. Our conclusion is that estimates of decoherence times from two-dimensional spectroscopy might actually underestimate the persistence of electronic coherences, which are helping the transport through the FMO network. The fine tuning and addition of specific vibrational modes leaves its marks at certain spots of the two-dimensional spectra, but does not destroy the electronic coherence, which is still there, as a short-time Fourier transform of the signal reveals.
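To illustrate the last point: a short-time Fourier transform separates a short-lived from a long-lived oscillatory component even when their frequencies are close. A minimal numpy sketch on a synthetic cross-peak trace (frequencies, amplitudes and decay times below are purely illustrative, not fitted to FMO data):

```python
import numpy as np

# Synthetic cross-peak trace: a short-lived "electronic" oscillation (200 fs
# period, 600 fs decay) plus a weaker persistent "vibronic" one (350 fs period).
dt = 1.0                                   # sampling step in fs
t = np.arange(0.0, 2000.0, dt)
sig = (np.exp(-t / 600.0) * np.cos(2 * np.pi * t / 200.0)
       + 0.3 * np.cos(2 * np.pi * t / 350.0))

# Short-time Fourier transform: Hann-windowed frames hopped along the trace.
win, hop = 1024, 128
w = np.hanning(win)
frames = [sig[i:i + win] * w for i in range(0, len(sig) - win, hop)]
power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
freqs = np.fft.rfftfreq(win, d=dt)

# Early frames are dominated by the electronic frequency, late frames by the
# surviving vibronic component.
f_early = freqs[np.argmax(power[0])]
f_late = freqs[np.argmax(power[-1])]
```

The dominant frequency in the late windows identifies which component survives, which is the kind of diagnostic referred to above.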
Computational physics on GPUs: writing portable code
I am preparing my presentation for the simGPU meeting next week in Freudenstadt, Germany, and performed some benchmarks.
In the previous post I described how to get an OpenCL program running on a smartphone with a GPU. By now Christoph Kreisbeck and I are getting ready to release our first smartphone GPU app for exciton dynamics in photosynthetic complexes; more about that in a future entry.
Getting the same OpenCL kernel running on laptop GPUs, workstation GPUs and CPUs, and smartphones/tablets is a bit tricky, due to different initialisation procedures and the differences in the optimal block sizes for the thread grid. In addition, on a smartphone the local memory is even smaller than on a desktop GPU and double-precision floating-point support is missing. The situation reminds me a bit of the "earlier days" of GPU programming in 2008.
Besides being a proof of concept, I see writing portable code as a sort of insurance with respect to further changes of hardware (however always with the goal to stick with the massively parallel programming paradigm). I am also amazed how fast smartphones are gaining computational power through GPUs!
Here are some considerations and observations:
- Standard CUDA code can be ported to OpenCL within a reasonable timeframe. I found the following resources helpful: AMD's porting remarks and Matt Scarpino's OpenCL blog.
- The comparison of OpenCL vs. CUDA performance for the same algorithm can reveal some surprises on NVIDIA GPUs. While on our C2050 GPU OpenCL works a bit faster for the same problem compared to the CUDA version, on a K20c system for certain problem sizes the OpenCL program can take several times longer than the CUDA code (no changes in the basic algorithm or work-group sizes).
- The comparison with a CPU version running on 8 cores of the Intel Xeon machine is possible and shows clearly that the GPU code is always faster, but requires a certain minimal system size to show its full performance.
- I am looking forward to running the same code on the Intel Xeon Phi systems now available with OpenCL drivers; see also this blog.
[Update June 22, 2013: I updated the graphs to show the 8-core results using Intel's latest OpenCL SDK. This brings the CPU runtimes down by a factor of 2! Meanwhile I am eagerly awaiting the possibility to run the same code on the Xeon Phis…]
Computational physics on the smartphone GPU
[Update August 2013: Google has removed the OpenCL library with Android 4.3. You can find an interesting discussion here. Google seems to push for its own RenderScript protocol. I will not work with RenderScript, since my priorities are platform independence and sticking with widely adopted standards to avoid fragmentation of my code base.]
I recently got hold of a Nexus 4 smartphone, which features a GPU (Qualcomm Adreno 320) and conveniently ships with an already installed OpenCL library. With minimal changes I got the previously discussed many-body program code related to the fractional quantum Hall effect up and running. No rooting of the phone is required to run the code example. Please use the following recipe at your own risk; I don't accept any liabilities. Here is what I did:
- Download and unpack the Android SDK from Google for cross-compilation (my host computer runs Mac OS X).
- Download and unpack the Android NDK from Google to build minimal C/C++ programs without Java (no real app).

- Install the standalone toolchain from the Android NDK. I used the following command for my installation:

/home/tkramer/android-ndk-r8d/build/tools/make-standalone-toolchain.sh \
  --install-dir=/home/tkramer/android-ndk-standalone
- Put the OpenCL programs and source code in an extra directory, as described in my previous post
- Change one line in the cl.hpp header: instead of including <GL/gl.h>, change to <GLES/gl.h>. Note: I am using the "old" cl.hpp bindings 1.1; further changes might be required for the newer bindings, see for instance this helpful blog

- Transfer the OpenCL library from the phone to a subdirectory lib/ inside your source code. To do so, append the path to your SDK tools and use the adb command:

export PATH=/home/tkramer/adt-bundle-mac-x86_64-20130219/sdk/platform-tools:$PATH
adb pull /system/lib/libOpenCL.so

- Cross-compile your program. I used the following script; please feel free to provide shorter versions. Adjust the include directories and library directories for your installation.

rm plasma_disk_gpu
/home/tkramer/android-ndk-standalone/bin/arm-linux-androideabi-g++ -v -g \
  -DCL_USE_DEPRECATED_OPENCL_1_1_APIS -DGPU \
  -I. \
  -I/home/tkramer/android-ndk-standalone/include/c++/4.6 \
  -I/home/tkramer/android-ndk-r8d/platforms/android-5/arch-arm/usr/include \
  -Llib \
  -march=armv7-a -mfloat-abi=softfp -mfpu=neon \
  -fpic -fsigned-char -fdata-sections -funwind-tables -fstack-protector \
  -ffunction-sections -fdiagnostics-show-option -fPIC \
  -fno-strict-aliasing -fno-omit-frame-pointer -fno-rtti \
  -lOpenCL \
  -o plasma_disk_gpu plasma_disk.cpp

- Copy the executable to the data dir of your phone to be able to run it. This can be done without rooting the phone with the nice SSHDroid app, which by default transfers to /data. Don't forget to copy the kernel .cl files:

scp -P 2222 integrate_eom_kernel.cl root@192.168.0.NNN:
scp -P 2222 plasma_disk_gpu root@192.168.0.NNN:
- ssh into your phone and run the GPU program:

ssh -p 2222 root@192.168.0.NNN
./plasma_disk_gpu 64 16
- Check the resulting data files. You can copy them for example to the Download path of the storage and use gnuplot (the droidplot app) to plot them.
A short note about runtimes: on the Nexus 4 device the program runs for about 12 seconds, while on a MacBook Pro with an NVIDIA GT 650M it completes in 2 seconds (in the example above the equations of motion for 16*64=1024 interacting particles are integrated). For larger particle numbers the phone often locks up.
An alternative way to transfer files to the device is to connect via USB cable and to install the Android Terminal Emulator app. Next, create a working directory on the device:

cd /data/data/jackpal.androidterm
mkdir gpu
chmod 777 gpu
On the host computer, use adb to transfer the compiled program and the .cl kernel:

adb push integrate_eom_kernel.cl /data/data/jackpal.androidterm/gpu/
adb push plasma_disk_gpu /data/data/jackpal.androidterm/gpu/
You can either run the program within the terminal emulator or use the adb shell
adb shell
cd /data/data/jackpal.androidterm/gpu/
./plasma_disk_gpu 64 16
Let's see in how many years today's desktop GPUs can be found in smartphones, and which computational physics codes can be run!
Computational physics & GPU programming: exciton lab for light-harvesting complexes (GPU-HEOM) goes live on nanohub.org
Christoph Kreisbeck and I are happy to announce the public availability of the Exciton Dynamics Lab for Light-Harvesting Complexes (GPU-HEOM), hosted on nanohub.org. You need to register a user account (it's free), and then you are ready to use GPU-HEOM for the Frenkel exciton model of light-harvesting complexes. In release 1.0 we support
- calculating population dynamics
- tracking coherences between two eigenstates
- obtaining absorption spectra
- two-dimensional echo spectra (including excited state absorption)
- … and all this for general vibronic spectral densities parametrized by shifted Lorentzians.
I will post some more entries here describing how to use the tool for understanding how the spectral density affects the lifetime of electronic coherences (see also this blog entry).
In the supporting document section you will find details of the implemented method and the assumptions underlying the tool. We appreciate your feedback for further improving the tool.
We are grateful for the support of Prof. Gerhard Klimeck, Purdue University, director of the Network for Computational Nanotechnology, in bringing GPU computing to nanohub (I believe our tool is the first GPU-enabled one at nanohub).
If you want to refer to the tool you can cite it as:
Christoph Kreisbeck; Tobias Kramer (2013), “Exciton Dynamics Lab for LightHarvesting Complexes (GPUHEOM),” https://nanohub.org/resources/gpuheompop. (DOI:10.4231/D3RB6W248).
and you find further references in the supporting documentation.
I very much encourage my colleagues developing computer programs for theoretical physics and chemistry to make them available on platforms such as nanohub.org. In my view, it greatly facilitates the comparison of different approaches and is in the spirit of advancing science by sharing knowledge and providing reproducible data sets.
Good or bad vibrations for the Fenna-Matthews-Olson complex?
Due to its known structure and relative simplicity, the Fenna-Matthews-Olson complex of green sulfur bacteria provides an interesting test case for our understanding of excitonic energy transfer in a light-harvesting complex.
The experimental pump-probe spectra (discussed in my previous post Catching and tracking light: following the excitations in the Fenna-Matthews-Olson complex) show long-lasting oscillatory components, and this finding has been a puzzle for theoreticians and led to a refinement of the well-established models. These models show a reasonable agreement with the data, and the rate equations explain the relaxation and transfer of excitonic energy to the reaction center.
However, the rate equations are based on estimates for the relaxation and dephasing rates. As Christoph Kreisbeck and I discuss in our article Long-Lived Electronic Coherence in Dissipative Exciton Dynamics of Light-Harvesting Complexes (arXiv version), an exact calculation with GPU-HEOM following the best data for the Hamiltonian allows one to determine where the simple approach is insufficient and to identify a key factor supporting electronic coherence:
It's the vibronic spectral density – redrawn (in a different unit convention, multiplied by ω^{2}) from the article by M. Wendling from the group of Prof. Rienk van Grondelle. We undertook a major effort to follow the measured shape of the spectral density in our calculations as closely as the GPU-HEOM method allows. By comparing results for different forms of the spectral density, we identify how the different parts of the spectral density lead to distinct signatures in the oscillatory coherences. This is illustrated in the figure on the right-hand side. To get long-lasting oscillations and finally to relax, three ingredients are important:
- a small slope towards zero frequency, which suppresses the pure dephasing.
- a high plateau in the region where the exciton energy differences are well coupled. This leads to relaxation.
- the peaked structures induce a "very-long-lasting" oscillatory component, which is shown in the first figure. In our analysis we find that this is a persistent, but rather small (<0.01) modulation.
2d spectra are smart objects
The calculation of 2d echo spectra requires considerable computational resources. Since theoretically calculated 2d spectra are needed to check how well theory and experiment coincide, I conclude by showing a typical spectrum we obtain (including static disorder, but no excited state absorption for this example). One interesting finding is that 2d spectra are able to differentiate between the different spectral densities. For example, for a single-peak Drude-Lorentz spectral density (sometimes chosen for computational convenience), the wrong peaks oscillate and the lifetime of cross-peak oscillations is short (and becomes even shorter with longer vibronic memory). But this is for the experts only; see the supporting information of our article.
Are vibrations good or bad? Probably both… The pragmatic answer is that the FMO complex lives in an interesting parameter regime. The exact calculations within the Frenkel exciton model do confirm the well-known dissipative energy transfer picture. But on the other hand, the specific spectral density of the FMO complex supports long-lived coherences (at least if the light source is a laser beam), which require considerable theoretical and experimental efforts to be described and measured. Whether the observed coherence has any biological relevance is an entirely different topic… maybe the green sulfur bacteria are just enjoying a glimpse into Schrödinger's world of probabilistic uncertainty.
Computational physics & GPU programming: interacting many-body simulation with OpenCL
In the second example of my series on GPU programming for scientists, I discuss a short OpenCL program which you can compile and run on the CPU and the GPUs of various vendors. This gives me the opportunity to perform some cross-platform benchmarks for a classical plasma simulation. You can expect dramatic (several-hundred-fold) speedups on GPUs for this type of system. This is one of the reasons why molecular dynamics codes can gain quite a lot by incorporating the massively parallel programming paradigm in their algorithmic foundations.
The Open Computing Language (OpenCL) is relatively similar to its CUDA counterpart; in practice the setup of an OpenCL kernel requires some housekeeping work, which might make the code look a bit more involved. I have based my interacting-electrons calculation of transport in the Hall effect on an OpenCL code. Another example is An OpenCL implementation for the solution of the time-dependent Schrödinger equation on GPUs and CPUs (arXiv version) by C. Ó Broin and L.A.A. Nikolopoulos.
Now to the coding of a two-dimensional plasma simulation, which is inspired by Laughlin's mapping of a many-body wave function to an interacting classical ersatz dynamics (for some context see my short review Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system on the arXiv).
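The structure of the computation can be sketched in a few lines of numpy before looking at the OpenCL version: every particle feels the sum of pairwise repulsions, and the GPU later assigns one thread per particle to evaluate exactly this sum (the force law, particle number and step size below are illustrative, not those of the actual plasma code):

```python
import numpy as np

# Minimal O(N^2) pairwise-force step for a 2D plasma of like charges; in 2D a
# logarithmic pair potential gives a force falling off as 1/r (illustrative units).
def forces(pos):
    d = pos[:, None, :] - pos[None, :, :]            # (N, N, 2) displacements r_i - r_j
    r2 = (d ** 2).sum(axis=-1) + np.eye(len(pos))    # pad diagonal to avoid 0/0
    return (d / r2[..., None]).sum(axis=1)           # F_i = sum_j (r_i - r_j)/|r_ij|^2

rng = np.random.default_rng(1)
pos = rng.standard_normal((64, 2))                   # 64 particles in the plane
vel = np.zeros_like(pos)
dt = 1e-3
for _ in range(100):                                 # semi-implicit Euler steps
    vel += dt * forces(pos)
    pos += dt * vel
```

The double sum over particle pairs is what dominates the runtime and what maps so naturally onto the GPU: the outer sum becomes the thread index, the inner sum the per-thread loop.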
Computational physics & GPU programming: Solving the time-dependent Schrödinger equation
I start my series on the physics of GPU programming with a relatively simple example, which makes use of a mix of library calls and well-documented GPU kernels. The runtime of the split-step algorithm described here is about 280 seconds for the CPU version (Intel(R) Xeon(R) CPU E5420 @ 2.50GHz) vs. 10 seconds for the GPU version (NVIDIA(R) Tesla C1060 GPU), resulting in a 28-fold speedup! On a C2070 the run time is less than 5 seconds, yielding an 80-fold speedup.
The description of coherent electron transport in quasi-two-dimensional electron gases requires solving the Schrödinger equation in the presence of a potential landscape. As discussed in my post Time to find eigenvalues without diagonalization, our approach using wavepackets allows one to obtain the scattering matrix over a wide range of energies from a single wavepacket run, without the need to diagonalize a matrix. In the following I discuss the basic example of propagating a wavepacket and obtaining the autocorrelation function, which in turn determines the spectrum. I programmed the GPU code in 2008 as a first test to evaluate the potential of GPGPU programming for my research. At that time double-precision floating-point support was lacking and the fast Fourier transform (FFT) implementations were little developed. Starting with CUDA 3.0, the program runs fine in double precision and my group used the algorithm for calculating electron flow through nanodevices. The CPU version was used for our articles in Physica Scripta, Wave packet approach to transport in mesoscopic systems, and in Physical Review B, Phase shifts and phase π-jumps in four-terminal waveguide Aharonov-Bohm interferometers, among others.
Here, I consider a very simple example, the propagation of a Gaussian wavepacket in a uniform potential V(x,y)=Fx, for which the autocorrelation function of the initial state
⟨x,y|ψ(t=0)⟩ = 1/(a√π) exp(−(x^{2}+y^{2})/(2a^{2}))
is known in analytic form:
⟨ψ(t=0)|ψ(t)⟩ = 2a^{2}m/(2a^{2}m+iℏt) exp(−a^{2}F^{2}t^{2}/(4ℏ^{2}) − iF^{2}t^{3}/(24ℏm)).
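As a cross-check of the analytic formula, here is a minimal numpy sketch of the split-step (Strang) propagation for this example; the GPU program follows the same scheme with the FFTs executed by GPU library calls (grid size, time step and parameters below are illustrative, in units with ℏ = m = 1):

```python
import numpy as np

# Split-step propagation of a 2D Gaussian in the linear potential V(x,y) = F*x,
# compared with the analytic autocorrelation quoted above (hbar = m = 1).
hbar = m = 1.0
a, F = 1.0, 0.5

N, L = 256, 40.0                                   # grid points, box size
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")

psi0 = np.exp(-(X**2 + Y**2) / (2 * a**2)) / (a * np.sqrt(np.pi))

dt, steps = 0.01, 200
expV = np.exp(-1j * F * X * dt / (2 * hbar))                # half potential step
expT = np.exp(-1j * hbar * (KX**2 + KY**2) * dt / (2 * m))  # full kinetic step

psi = psi0.astype(complex)
for _ in range(steps):
    psi = expV * np.fft.ifft2(expT * np.fft.fft2(expV * psi))

t = steps * dt
dx = L / N
c_num = np.sum(psi0.conj() * psi) * dx * dx
c_ana = 2 * a**2 * m / (2 * a**2 * m + 1j * hbar * t) * np.exp(
    -(a * F * t) ** 2 / (4 * hbar**2) - 1j * F**2 * t**3 / (24 * hbar * m)
)
print(abs(c_num - c_ana))  # small: Strang splitting is O(dt^2) accurate
```

The half-potential/full-kinetic/half-potential pattern is what makes the scheme second-order accurate in dt, and the FFT pair per step is exactly the part that profits from the GPU.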
The physics of GPU programming
From discussions I learn that while many physicists have heard of Graphics Processing Units as fast computers, resistance to using them is widespread. One of the reasons is that physics has been relying on computers for a long time, and tons of old, well-trusted codes are lying around which are not easily ported to the GPU. Interestingly, the adoption of GPUs happens much faster in biology, medical imaging, and engineering.
I view GPU computing as a great opportunity to investigate new physics, and my feeling is that today's methods, optimized for serial processors, may need to be replaced by a different set of standard methods which scale better with massively parallel processors. In 2008 I dived into GPU programming for a couple of reasons:
- As a "model-builder", the GPU allows me to reconsider previous limitations and simplifications of models and to use the GPU power to solve the extended models.
- The turnaround time is incredibly fast. Compared to queues in conventional clusters, where I wait for days or weeks, I get back results with 10000 CPU hours of compute time the very same day. This in turn further facilitates the model-building process.
- Some people complain about the strict synchronization requirements when running GPU codes. In my view this is an advantage, since essentially no messaging overhead exists.
- If you want to develop high-performance algorithms, it is not good enough to convert library calls to GPU library calls. You might get speedups of about 2-4. However, if you invest the time and develop your own know-how, you can expect much higher speedups of around 100 times or more, as seen in the applications I discussed in this blog before.
This summer I will lecture about GPU programming at several places, and thus I plan to write a series of GPU-related posts. I do have a complementary background in mathematical physics and special functions, which I find very useful in relation to GPU programming, since new physical models require a stringent mathematical foundation and numerical studies.
Catching and tracking light: following the excitations in the Fenna-Matthews-Olson complex
Efficient and fast transport of electric current is a basic requirement for the functioning of nanodevices and biological systems. A neat example is the energy transport of a light-induced excitation in the Fenna-Matthews-Olson complex of green sulfur bacteria. This process has been elucidated by pump-probe spectroscopy. The resulting spectra contain an enormous amount of information about the couplings of the different pigments and the pathways taken by the excitation. The basic guide to a 2D echo-spectrum is as follows:
You can find peaks of high intensity along the diagonal line which roughly represent a more common absorption spectrum. If you delay the pump and probe pulses by several picoseconds, you will find a new set of peaks along a horizontal line, which indicates that the energy of the excitation gets redistributed: the system relaxes and transfers part of the energy to vibrational motion. This process is nicely visible in the spectra recorded by Brixner et al.
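As a toy illustration of this peak pattern (my own sketch with illustrative energies and rates, not the actual FMO data or response-function calculation), one can mimic a 2D spectrum with Lorentzian peaks: two diagonal peaks at the pigment excitation energies, and a cross peak below the diagonal that grows with the pump-probe delay as population relaxes downhill:

```python
import numpy as np

def lorentzian2d(w1, w3, c1, c3, width):
    """Product of two Lorentzian line shapes centered at (c1, c3)."""
    return 1.0 / ((1 + ((w1 - c1) / width) ** 2) * (1 + ((w3 - c3) / width) ** 2))

def toy_2d_spectrum(delay, e1=1.0, e2=1.4, width=0.05, rate=1.0):
    """Toy 2D echo-spectrum of a two-pigment model (illustrative numbers only).

    Diagonal peaks sit at the excitation energies (e1, e1) and (e2, e2);
    a cross peak at (e2, e1) grows with the pump-probe delay as energy
    relaxes from the upper to the lower pigment."""
    w = np.linspace(0.8, 1.6, 201)
    w1, w3 = np.meshgrid(w, w, indexing="ij")   # pump (w1) and probe (w3) axes
    transferred = 1.0 - np.exp(-rate * delay)   # population transferred so far
    s = lorentzian2d(w1, w3, e1, e1, width)                       # lower diagonal peak
    s += (1 - transferred) * lorentzian2d(w1, w3, e2, e2, width)  # upper diagonal peak decays
    s += transferred * lorentzian2d(w1, w3, e2, e1, width)        # cross peak grows
    return w, s

w, early = toy_2d_spectrum(delay=0.0)   # pump and probe nearly coincident
w, late = toy_2d_spectrum(delay=5.0)    # several relaxation times later
```

Real 2D spectra are computed from third-order response functions; this sketch only reproduces the qualitative picture described above.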
A lot of excitement and activity on photosynthetic complexes was triggered by experiments of Engel et al. showing that, besides the relaxation process, periodic oscillations in the peak amplitudes are visible for more than a picosecond.
What is causing the oscillations in the peak amplitudes of 2D echo-spectra in the Fenna-Matthews-Olson complex?
A purely classical transport picture should not show such oscillations; the excitation would instead hop around the complex without interference. Could the observed oscillations point to a different transport mechanism, possibly related to the quantum-mechanical simultaneous superposition of several transport paths?
The initial answer from the theoretical side was no, since within simplified models the thermalization occurs fast and without oscillations. It turned out that the simple calculations are a bit too simplistic to describe the system accurately, and exact solutions are required. But exact solutions (even for simple models) are difficult to obtain. Known exact methods such as DMRG work reliably only at very low temperatures (near -273 °C) and are therefore not directly applicable to biological systems. Other schemes use the famous path integrals but are too slow to calculate the pump-probe signals.
Our contribution to the field is to provide an exact computation of the 2D echo-spectra at the relevant temperatures and to see the difference to the simpler models in order to quantify how much coherence is preserved. On the method-development side, the computational challenge is to speed up the calculations several hundred times in order to get results within days of computational runtime. We achieved this by developing a method which we call GPU-hierarchical equations of motion (GPU-HEOM). The hierarchical equations of motion are a nice scheme to propagate a density matrix under consideration of non-Markovian effects and strong couplings to the environment. The HEOM scheme was developed by Kubo, Tanimura, and Ishizaki (Prof. Tanimura has posted some material on HEOM here).
However, the original computational method suffers from the same problems as path-integral calculations and is rather slow (though the HEOM method can be made faster and applied to electronic systems by using smart filtering, as done by Prof. YiJing Yan). The GPU part in GPU-HEOM stands for Graphics Processing Units. Our GPU adaptation of the hierarchical equations (see details in Kreisbeck et al. [JCTC 7, 2166 (2011)]) allowed us to cut down computational times dramatically and made it possible to perform a systematic study of the oscillations and of the influence of temperature and disorder in our recent article Hein et al. [New J. Phys. 14, 023018 (2012), open access].
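To make the structure of the hierarchy concrete, here is a minimal sketch (not our GPU-HEOM code): a two-level system coupled to a Drude-Lorentz bath, keeping a single exponential term of the bath correlation function in the high-temperature approximation. All parameter values are illustrative.

```python
import numpy as np

# Two-level system, units with hbar = 1 (illustrative parameters)
H = np.array([[1.0, 0.2], [0.2, 0.0]], dtype=complex)   # site energies + coupling
V = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)  # system-bath coupling operator

gamma = 1.0   # Drude bath relaxation rate
lam = 0.1     # reorganization energy (weak coupling)
beta = 1.0    # inverse temperature
# single-exponential bath correlation C(t) = c * exp(-gamma * t),
# high-temperature limit (Matsubara terms dropped):
c = lam * gamma * (1.0 / np.tan(beta * gamma / 2.0) - 1j)

DEPTH = 6     # hierarchy truncation depth
rho = [np.zeros((2, 2), dtype=complex) for _ in range(DEPTH + 1)]
rho[0][0, 0] = 1.0   # excitation starts on site 1; rho[1:] are auxiliary matrices

def deriv(rho):
    """Right-hand side of the (unscaled) hierarchical equations of motion."""
    d = []
    for n in range(DEPTH + 1):
        r = -1j * (H @ rho[n] - rho[n] @ H) - n * gamma * rho[n]
        if n < DEPTH:   # coupling to the next-deeper auxiliary matrix
            r += -1j * (V @ rho[n + 1] - rho[n + 1] @ V)
        if n > 0:       # coupling to the previous hierarchy level
            r += -1j * n * (c * V @ rho[n - 1] - np.conj(c) * rho[n - 1] @ V)
        d.append(r)
    return d

def rk4_step(rho, dt):
    """One fourth-order Runge-Kutta step for the whole hierarchy."""
    k1 = deriv(rho)
    k2 = deriv([r + 0.5 * dt * k for r, k in zip(rho, k1)])
    k3 = deriv([r + 0.5 * dt * k for r, k in zip(rho, k2)])
    k4 = deriv([r + dt * k for r, k in zip(rho, k3)])
    return [r + dt / 6.0 * (a + 2 * b + 2 * c_ + d_)
            for r, a, b, c_, d_ in zip(rho, k1, k2, k3, k4)]

dt = 0.01
for _ in range(1000):          # propagate to t = 10
    rho = rk4_step(rho, dt)
population = rho[0][0, 0].real  # site-1 population of the physical density matrix
```

The expensive part in realistic calculations is exactly what is truncated here: many bath exponentials and deep hierarchies multiply the number of auxiliary matrices, and propagating all of them in parallel is what the GPU is good at.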
The Nobel Prize 2011 in Chemistry: press releases, false balance, and lack of research in scientific writing
To get this clear from the beginning: with this posting I am not questioning the great achievement of Prof. Dan Shechtman, who discovered in the lab what is now known as quasicrystals. Shechtman clearly deserves the prize for such an important experiment demonstrating that fivefold symmetry exists in real materials.
My concern is the poor quality of research and reporting on the subject of quasicrystals, starting with the press release of the Swedish Academy of Science, and the lessons to be learned about trusting these press releases and the reporting in scientific magazines. To provide some background: with the announcement of the Nobel prize, a press release is put online by the Swedish Academy which not only announces the prize winners but also contains two PDFs with background information: one for the “popular press” and another one for people with a more “scientific background”. Even more dangerously, the Swedish Academy has started a multimedia endeavor of pushing its views around the world via YouTube channels and numerous multimedia interviews with its own members (what about asking an external expert for an interview?).
Before the internet age, journalists got the names of the prize winners but did not immediately have access to a “ready to print” explanation of the subject at hand. I remember that local journalists would call the universities and ask a professor who is familiar with the topic for advice, or at least get the phone number of somebody familiar with it. Not any more. This year showed that the background information prepared in advance by the committee is taken over by the media outlets basically unchanged. So far it looks like business as usual. But what if the story as told by the press release is not correct? Does anybody still have time and resources for some basic fact checking, for example by calling people familiar with the topic, or by consulting the archives of their newspaper or magazine to dig out what was written when the discovery was made many years ago? Should we rely on the professor who writes the press releases and trust that this person adheres to the scientific and ethical standards of writing?
For me, the unfiltered and unchecked usage of press releases by the media and even by scientific magazines shows a decay in the quality of scientific reporting. It also generates a uniformity and a self-referencing universe, which enters as “sources” in online encyclopedias and in the end becomes a “self-generated” truth. However, it is not that difficult to break this circle, for example by:
- digging out review articles on the topic and looking up encyclopedias for the topic of quasicrystals, see for example: Pentagonal and Icosahedral Order in Rapidly Cooled Metals by David R. Nelson and Bertrand I. Halperin, Science 19 July 1985: 233-238, where the authors write: “Independent of these experimental developments, mathematicians and some physicists had been exploring the consequences of the discovery by Penrose in 1974 of some remarkable, aperiodic, two-dimensional tilings with fivefold symmetry (7). Several authors suggested that these unusual tesselations of space might have some relevance to real materials (8, 9). MacKay (8) optically Fourier-transformed a two-dimensional Penrose pattern and found a tenfold symmetric diffraction pattern not unlike that shown for Al-Mn in Fig. 2. Three-dimensional generalizations of the Penrose patterns, based on the icosahedron, have been proposed (8-10). The generalization that appears to be most closely related to the experiments on Al-Mn was discovered by Kramer and Neri (11) and, independently, by Levine and Steinhardt (12).”
- identifying experts from step 1 and asking for their opinion
- checking the newspaper and magazine archives. Maybe a well-researched article already exists?
- correcting mistakes. After all, mistakes do happen, also in “press releases” by the Nobel committee, but there is always the option to send out a correction or to amend the published materials. See for example the letter in Science by David R. Nelson, Icosahedral Crystals in Perspective, Science 13 July 1990:111, again on the history of quasicrystals:
“[…] The three-dimensional generalization of the Penrose tiling most closely related to the experiments was discovered by Peter Kramer and R. Neri (3) independently of Steinhardt and Levine (4). The paper by Kramer and Neri was submitted for publication almost a year before the paper of Shechtman et al. These are not obscure references: […]”
Since I am working in theoretical physics, I find it important to point out that, in contrast to the story invented by the Nobel committee, the theoretical structure of quasicrystals was actually published and available in the relevant journal of crystallography at the time the experimental paper got published. This sequence of events is well documented, as shown above and in other review articles and books.
I am just amazed how the press release of the Nobel committee creates an alternate universe with a false history of the theoretical and experimental publication record. It gives false credit for the first theoretical work on three-dimensional quasicrystals and, at least in my view, does not adhere to the scientific and ethical standards of scientific writing.
Prof. Sven Lidin, the author of the two press releases of the Swedish Academy, has been contacted as early as October 7 about his inaccurate and unbalanced account of the history of quasicrystals. In my view, a huge responsibility rests on the originator of the “story” that Prof. Lidin put into the wild, and I believe he and the committee members are aware of their power, since they actively use all available electronic media channels to push their complete “press package” out. Until today, no corrections or updates have been distributed. Instead, you can watch the (false) story being repeated over and over again on YouTube. In my view, this example shows science reporting in its worst incarnation and undermines the credibility and integrity of science.
Quasicrystals: anticipating the unexpected
The following guest entry is contributed by Peter Kramer
Dan Shechtman received the Nobel prize in Chemistry 2011 for the experimental discovery of quasicrystals. Congratulations! The press release stresses the unexpected nature of the discovery and the struggles of Dan Shechtman to convince fellow experimentalists. To this end I want to contribute a personal perspective:
From the viewpoint of theoretical physics, the existence of icosahedral quasicrystals, as later discovered by Shechtman, was not quite so unexpected. Beginning in 1981 with Acta Cryst. A 38 (1982), pp. 257-264, and continuing with Roberto Neri in Acta Cryst. A 40 (1984), pp. 580-587, we worked out and published the building plan for icosahedral quasicrystals. Looking back, it is a strange and lucky coincidence that, unknown to me, during the same time Dan Shechtman and coworkers discovered icosahedral quasicrystals in their seminal experiments and brought the theoretical concept of three-dimensional non-periodic space-fillings to life.
More about the fascinating history of quasicrystals can be found in a short review, Gateways towards quasicrystals, and on my homepage.
Time to find eigenvalues without diagonalization
Solving the stationary Schrödinger equation (H − E)Ψ = 0 can in principle be reduced to solving a matrix equation. This eigenvalue problem requires calculating matrix elements of the Hamiltonian with respect to a set of basis functions and diagonalizing the resulting matrix. In practice, this time-consuming diagonalization step is replaced by a recursive method, which yields the eigenfunction for a specific eigenvalue.
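As a baseline for comparison, the matrix route can be sketched in a few lines (my own illustrative example, using a finite-difference basis and the 1D harmonic oscillator, whose exact eigenvalues E_n = n + 1/2 are known in units with hbar = m = omega = 1):

```python
import numpy as np

# Discretize the 1D Hamiltonian H = -1/2 d^2/dx^2 + x^2/2 on a grid.
N = 400
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# kinetic energy as a tridiagonal finite-difference matrix
T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / (2.0 * dx ** 2)
V = np.diag(0.5 * x ** 2)   # harmonic potential on the diagonal
H = T + V

# the time-consuming step: full diagonalization, O(N^3)
energies, states = np.linalg.eigh(H)
```

The O(N^3) cost of `eigh` is exactly what the wavepacket approach below avoids.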
A very different approach is followed by wavepacket methods. It is possible to propagate a wavepacket without determining the eigenfunctions beforehand. For a given Hamiltonian, we solve the time-dependent Schrödinger equation (i∂_t − H)Ψ = 0 for an almost arbitrary initial state Ψ(t=0) (an initial value problem).
The reformulation of the determination of eigenstates as an initial value problem has a couple of computational advantages:
- results can be obtained for the whole range of energies represented by the wavepacket, whereas a recursive scheme yields only one eigenenergy
- the wavepacket motion yields direct insight into the pathways and allows us to develop an intuitive understanding of the transport choreography of a quantum system
- solving the time-dependent Schrödinger equation can be efficiently implemented on Graphics Processing Units (GPUs), resulting in a large (>20-fold) speedup compared to CPU code
The determination of transmissions then requires calculating the Fourier transform of the correlation function ⟨Ψ(t=0)|Ψ(t)⟩. This method has been pioneered by Prof. Eric J. Heller, Harvard University, and I have written an introductory article for the Latin American School of Physics 2010 (arxiv version).
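The idea can be sketched in a 1D toy model (illustrative parameters, not the nanodevice calculation): propagate a displaced Gaussian in a harmonic well with the split-operator method, record the autocorrelation ⟨Ψ(0)|Ψ(t)⟩, and read the eigenenergies E_n = n + 1/2 off the peaks of its Fourier transform:

```python
import numpy as np

# grid and initial Gaussian wavepacket (units with hbar = m = omega = 1)
N = 256
x = np.linspace(-10.0, 10.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid for the FFT

psi0 = np.exp(-0.5 * (x - 1.0) ** 2) / np.pi ** 0.25  # displaced ground state
V = 0.5 * x ** 2                                       # harmonic confinement

dt, steps = 0.05, 4000
expV = np.exp(-0.5j * dt * V)        # half-step potential propagator
expT = np.exp(-0.5j * dt * k ** 2)   # full-step kinetic propagator

psi = psi0.copy()
corr = np.empty(steps, dtype=complex)
for n in range(steps):
    # split-operator step: e^{-iV dt/2} e^{-iT dt} e^{-iV dt/2}
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    corr[n] = np.sum(np.conj(psi0) * psi) * dx   # autocorrelation <psi0|psi(t)>

# S(E) = |sum_t C(t) exp(iEt)| peaks at the eigenenergies contained in psi0
t = dt * np.arange(1, steps + 1)
E = np.linspace(0.0, 3.0, 601)
S = np.abs(np.exp(1j * np.outer(E, t)) @ corr)
ground = E[np.argmax(S)]                         # strongest peak: E_0 = 0.5
mask = E > 1.0
excited = E[mask][np.argmax(S[mask])]            # next peak: E_1 = 1.5
```

One propagation delivers all eigenenergies the wavepacket overlaps with; for transmissions through a device, the same correlation function is recorded with the wavepacket launched in a lead.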
Recently, Christoph Kreisbeck has done detailed calculations of the gate-voltage dependence of the conductance in Aharonov-Bohm nanodevices, taking full advantage of the simultaneous probing of a range of Fermi energies with one single wavepacket. A very clean experimental realization of the device was achieved by Sven Buchholz, Prof. Saskia Fischer, and Prof. Ulrich Kunze (RU Bochum), based on a semiconductor material grown by Dr. Dirk Reuter and Prof. Andreas Wieck (RU Bochum). The details, including a comparison of experimental and theoretical results shown in the left figure, are published in Physical Review B (arxiv version).
Cosmic topology from the Möbius strip
The following article is contributed by Peter Kramer.
Einstein’s fundamental theory of gravitation, generalizing Newton’s, relates the interaction of masses to the curvature of space. Modern cosmology, from the big bang to black holes, results from Einstein’s field equations for this relation. These differential equations by themselves do not yet settle the large-scale structure and connection of the cosmos. Theoretical physicists have in recent years tried to infer information on the large-scale cosmology from the cosmic microwave background radiation (CMBR) observed by satellite missions. In the frame of large-scale cosmology, the usual objects of astronomy from solar systems to galaxy clusters are smoothed out, and conditions imprinted in the early stage of the universe dominate.
In mathematical language one speaks of cosmic topology. Topology is often considered to be esoteric. Here we present topology from the familiar experience with the twisted Möbius strip. This strip on one hand can be seen as a rectangular crystallographic lattice cell whose copies tile the plane, see Fig. 2. The Möbius strip is represented as a rectangular cell, located between the two vertical arrows, of a planar crystal. A horizontal dashed line through the center indicates a glide-reflection line. A glide reflection is a translation along the dashed line by the horizontal length of the cell, followed by a reflection in this line. The crystallographic symbol for this planar crystal is cm. In three-dimensional space the planar Möbius crystal (top panel of Fig. 1) is twisted (middle panel of Fig. 1). The twist is a translation along the dashed line, combined with a rotation by 180 degrees around that line. A final bending (bottom panel of Fig. 1) of the dashed line and a smooth gluing of the arrowed edges yields the familiar Möbius strip.
Given this Möbius template in two dimensions, we pass to manifolds of dimension three. We present in Fig. 3 a new cubic manifold named N3. Three cubes are twisted from an initial one. A twist here is a translation along one of the three perpendicular directions, combined with a right-hand rotation by 90 degrees around this direction. To follow the rotations, note the colors on the faces. The three neighbor cubes can be glued to the initial one. If the cubes are replaced by their spherical counterparts on the three-sphere, the three new cubes can be glued pairwise with one another, with face gluings indicated by heavy lines. The complete tiling of the three-sphere comprises 8 cubes and is called the 8-cell. The gluings shown here generate the so-called fundamental group of a single spherical cube on the three-sphere with symbol N3. This spherical cube is a candidate for the cosmic topology inferred from the cosmic microwave background radiation. A second cubic twist, with a different gluing and fundamental group, is shown in Fig. 4. Here, the three twists combine translations along the three directions with different rotations.
The key idea in cosmic topology is to pass from a topological manifold to its eigen- or normal modes. For the Möbius strip, these eigenmodes are best seen in the planar crystal representation of Fig. 2. The eigenmodes can be taken as sine or cosine waves which repeat their values from edge to edge of the cell. It is clear that the horizontal wavelength of these modes has the length L of the rectangle as an upper bound. The full Euclidean plane allows for infinite wavelengths, and so the eigenmodes of the Möbius strip obey a selection rule that characterizes the topology. Moreover, the eigenmodes of the Möbius strip must respect its twisted connection.
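This selection rule can be checked directly in a small sketch (my own illustration, with an illustrative cell length L): a mode even in y satisfies the glide-reflection condition ψ(x + L, −y) = ψ(x, y) for wavenumbers k = 2πn/L, while a mode odd in y needs the shifted values k = (2n+1)π/L, so that the sign picked up under the reflection is compensated by the translation:

```python
import numpy as np

L = 2.0 * np.pi   # horizontal length of the rectangular cell (illustrative)

def glide_defect(mode, n=200):
    """Maximum violation of the glide-reflection condition
    psi(x + L, -y) = psi(x, y) on a sample grid."""
    x = np.linspace(0.0, L, n)
    y = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, y)
    return np.max(np.abs(mode(X + L, -Y) - mode(X, Y)))

# even in y: allowed for integer wavenumbers k = 2*pi*n/L
even_allowed = lambda X, Y: np.cos((2 * np.pi / L) * X) * np.cos(np.pi * Y)
# odd in y: needs the half-integer shift k = (2n+1)*pi/L
odd_allowed = lambda X, Y: np.cos((np.pi / L) * X) * np.sin(np.pi * Y)
# odd in y with an integer wavenumber violates the condition
odd_forbidden = lambda X, Y: np.cos((2 * np.pi / L) * X) * np.sin(np.pi * Y)
```

The allowed modes leave the defect at machine precision; the forbidden combination fails by an O(1) amount, which is the selection rule in action.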
Similarly, the eigenmodes of the spherical cubes in Fig. 3 must repeat themselves when going from cube to neighbor cube. It is intuitively clear that the cubic eigenmodes must have a wavelength smaller than the edge length of the cubes. The wavelengths of the eigenmodes of the full three-sphere are bounded by the equator length of the three-sphere. Seen on a single cube, the different twists and gluings of the manifolds N2 and N3 shown in Figs. 3 and 4 pose different boundary value problems for the cubic eigenmodes.
Besides these spherical cubic manifolds, there are several other competing polyhedral topologies with multiple connection, or homotopy. Among them are the famous Platonic polyhedra. Each of them gives rise to a Platonic tessellation of the three-sphere. Everitt has analyzed all their possible gluings in his article Three-manifolds from Platonic solids in Topology and its Applications, vol. 138 (2004), pp. 253-263. In my contribution Platonic topology and CMB fluctuations: Homotopy, anisotropy, and multipole selection rules, Class. Quant. Grav., vol. 27 (2010), 095013 (freely available on the arxiv), I display them and present a full analysis of their corresponding eigenmodes and selection rules.
Since terrestrial observations measure the incoming radiation in terms of its spherical multipoles as functions of their incident direction, the eigenmodes must be transformed to a multipole expansion as done in my work. New and finer data on the CMB radiation are expected from the Planck spacecraft launched in 2009. These data, in conjunction with the theoretical models, will promote our understanding of cosmic space and possible twists in its topology.