Quantizing comets: semiclassical methods in action

One key aspect of theoretical physics is that the (rather few!) basic equations are not tied to a narrow field of applications: insights from celestial mechanics are equally relevant for the dynamics of quantum objects. I have written before about how to interpret the Coulomb Green's function of hydrogen or Rydberg molecules in terms of Lambert's theorem of cometary orbit determination.

Potential landscape around Jupiter (rotating frame) showing the unstable saddle points (Lagrange points), where comets and asteroids can enter or escape to join Jupiter for a while.

Equally interesting is the dynamics of small celestial bodies in the vicinity of a parent body (the "restricted three-body problem"). Near the Lagrange points, the gravitational attraction of the sun and the parent body and the centrifugal force (in a coordinate system moving with the parent body around the sun) cancel, and small objects can be trapped.
The comet Shoemaker-Levy 9 (SL9) is a prime example: SL9 was captured by Jupiter, broke apart at a close approach, and finally the string of fragments collided with Jupiter in 1994.
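The trapping mechanism can be made quantitative with the effective potential of the restricted three-body problem. The following sketch (my illustration here, not the code behind the figures) locates the L1 and L2 saddle points of the Sun-Jupiter system numerically:

```python
# Effective potential (gravity + centrifugal) in the frame corotating with
# Jupiter; distances in units of the Sun-Jupiter separation, mu is Jupiter's
# mass fraction. L1/L2 appear as maxima of the potential along the Sun-Jupiter axis.
import numpy as np

mu = 9.537e-4  # m_Jupiter / (m_Sun + m_Jupiter)

def omega_eff(x, y):
    r1 = np.hypot(x + mu, y)        # distance to the Sun at (-mu, 0)
    r2 = np.hypot(x - (1 - mu), y)  # distance to Jupiter at (1 - mu, 0)
    return -(1 - mu) / r1 - mu / r2 - 0.5 * (x**2 + y**2)

# scan the Sun-Jupiter axis around Jupiter and pick out the local maxima
x = np.linspace(0.9, 1.1, 20001)
pot = omega_eff(x, 0.0)
idx = np.where((pot[1:-1] > pot[:-2]) & (pot[1:-1] > pot[2:]))[0] + 1
print(x[idx])  # ~0.932 and ~1.069: the L1 and L2 saddle points
```

Objects whose Jacobi energy lies just above the potential at these saddle points can pass through the "gates" at L1 or L2 and become temporarily trapped, as SL9 did.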

What would have happened to SL9 if Jupiter were contracted to a point mass?

Since SL9 was once captured, it should also have been released again. Indeed, in 2014 SL9 would have left Jupiter, as shown by the numerically integrated JPL orbit. To illustrate and simplify the transient dynamics, I assumed in a recent publication that Jupiter moves on a circular orbit and that SL9, Jupiter, and the sun lie in a plane. In reality the changing distance from the sun can open and close the entry points, and in conjunction with the precise location of the comet an escape or trapping becomes feasible:

Dynamics around Jupiter (located at (0,0)) for two slightly different initial kinetic energies. The shaded area indicates the energetically allowed regions. On the right: after encircling Jupiter many times, the object escapes (or, if you reverse time, becomes trapped). (C) Tobias Kramer.

The Shoemaker-Levy 9 case has been studied extensively in the astronomical literature (see for instance the orbital analysis by Benner et al.). So what new insights are there for quantum objects? The goal is not to claim that comets have to be treated as quantum mechanical objects, but to realize that exactly the same dynamics seen in celestial mechanics guides electrons in magnetic fields through waveguides. For the details I refer you to my article Transient capture of electrons in magnetic fields, or: comets in the restricted three-body problem, but I want to close by showing the electronic eigenfunctions, which display a real quantum feature absent in the classical case: electrons can tunnel through the forbidden area and thus will always escape from the parent body:

Spectrum and gallery of eigenstates for the quantized version of the celestial dynamics around Lagrange points. The quantum case describes the motion of an electron in a magnetic field. (C) Tobias Kramer.

Added on March 1, 2020: another case of a transient capture, this time around Earth: the small body 2020 CD3:

Visualization of the orbit of small body 2020 CD3, ephemeris taken from the JPL Horizons system (the uncertainties of the 1 year backward integration are several hours), geocentric ecliptical coordinates. The inset shows the close encounters with Earth (blue disk) on April 4, 2019 and February 13, 2020.

Visitors from other stars

Star chart (open source stellarium program) showing the location of comet 2I/Borisov on Dec 1st, 2019, 5:36 UTC for comparison with the corresponding image in the gallery of telescope.org (C) The Open University.

This year has brought the first two visitors from another stellar system far away. The giveaway signature of these visits is their strongly hyperbolic orbits, which means that the objects are not gravitationally bound to our sun. They must have undertaken the long journey from another star to us. Many fascinating questions arise: are these distant worlds, which we will never get to visit, similar to our solar system? Which elements are formed there, and how much water is around?
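The unbound character follows directly from the eccentricity: for e > 1 the specific orbital energy is positive, and the excess speed far from the sun can be estimated from rounded published orbital elements (the exact values are refined continuously):

```python
# Excess speed of an object on a hyperbolic orbit (eccentricity e > 1):
# the specific orbital energy -GM/(2a) is positive, so the object is unbound.
import math

GM_SUN = 1.32712440018e20  # m^3/s^2
AU = 1.495978707e11        # m

def v_infinity(e, q_au):
    """Excess speed (km/s) far from the Sun for eccentricity e, perihelion q (au)."""
    a = q_au * AU / (1 - e)     # semimajor axis, negative for e > 1
    energy = -GM_SUN / (2 * a)  # specific orbital energy, > 0 if unbound
    return math.sqrt(2 * energy) / 1e3

print(v_infinity(1.20, 0.256))  # 1I/'Oumuamua: ~26 km/s
print(v_infinity(3.36, 2.007))  # 2I/Borisov:   ~32 km/s
```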

While the first visitor 1I/'Oumuamua (I = interstellar) did not show much activity (besides a surprisingly large change in its orbital acceleration, presumably caused by sublimating ices), the second one, 2I/Borisov, is more active and can be analyzed for its chemical composition.
I submitted a request for an image to the fine online observatory operated by The Open University on the Teide (Tenerife, Spain), and tonight the COAST 14″ telescope took a 120-second exposure of 2I/Borisov. You can find the corresponding image in the newest image gallery of telescope.org, all images there (C) The Open University. Can you spot the comet? It is the tiny elongated speck in the lower right quadrant (the image covers about 1/2 degree of the sky), or have a look at the finder chart shown here.

In addition to the hyperbolic orbit, both objects showed a non-gravitational acceleration (discussed here before for the solar-system comet 67P/Churyumov-Gerasimenko). Both interstellar comets show a rather large extra acceleration compared to Jupiter-family comets.

Non-gravitational acceleration of Jupiter-family comets (taken from the JPL Small-Body Database), compared with the two visiting comets (preliminary orbit data from JPL).

Random scattering: the small scale structure of the universe

Branched flow
Emergence of branched electron flow from weak scattering across a random potential. (C) Tobias Kramer

Our universe displays various concentrations of matter and is not a homogeneous density soup of particles, as assumed in the simplest cosmological models where isotropy is postulated (the cosmological principle). Interestingly, recent supernova data (see Colin et al., Evidence for anisotropy of cosmic acceleration, A&A 631, L13, also on the arxiv) shows deviations in the angular distribution of matter as seen from Earth. One possibility (also not taken into account in the standard model) is the topology of the cosmos, discussed in this blog by Peter Kramer before. But back to the "small structures" (< 100 megaparsec): how do mass concentrations arise, and how do small fluctuations get amplified?

One universal mechanism at work across many domains of physics is structure formation and concentration into branches by weak random scattering. The key point is that, despite the randomness, this type of scattering does not smear out the density as expected for diffusion processes. This important mechanism has been independently discovered in various domains of physics, but has rarely been discussed and further explored within a consistent framework. Interestingly, the fundamental branching behavior can be seen in both quantum mechanics and classical physics. Together with Rick Heller and Ragnar Fleischmann we have written a short overview article (available on the arxiv: "Branched flow") which provides some background information and serves as a starting point for further exploration. We also provide a short python script to generate intricate patterns out of random deformations (script available on github).
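In the same spirit as that script (the original is on github; the parameters here are my own, purely illustrative), a minimal ray-tracing sketch shows how an initially uniform sheet of trajectories folds into branches:

```python
# Classical rays crossing a weak random potential: instead of diffusing
# smoothly, the ray density folds into caustics and branches.
import numpy as np

rng = np.random.default_rng(0)

# weak random potential from 100 random plane waves, V_rms ~ 4% of the kinetic energy
n_modes = 100
kvec = rng.normal(0, 3.0, (n_modes, 2))
phase = rng.uniform(0, 2 * np.pi, n_modes)
amp = 0.03 / np.sqrt(n_modes)

def force_y(x, y):
    """Transverse force -dV/dy acting on the rays at positions (x, y)."""
    arg = x * kvec[:, 0] + y[:, None] * kvec[:, 1] + phase
    return (amp * np.sin(arg) * kvec[:, 1]).sum(axis=1)

# 1000 initially parallel rays moving in the +x direction (paraxial approximation)
y = np.linspace(0.0, 5.0, 1000)
vy = np.zeros_like(y)
dt = 0.01
for step in range(1000):  # propagate to x = 10
    vy += force_y(step * dt, y) * dt
    y += vy * dt

# caustics fold the initially ordered ray sheet: neighboring rays cross
folds = int(np.sum(np.diff(y) < 0))
print("ray-sheet folds after the random medium:", folds)
```

A histogram of the final y positions shows the density piling up into branches rather than spreading diffusively.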

Sunlight creating a web of caustics at the bottom of a swimming pool. Picture (C) Tobias Kramer.

One example discussed in this blog is the formation of "dust concentrations" in the near-nucleus coma of comets from a homogeneously emitting cometary surface. The structure formation of the universe (the cosmic web) is discussed by Y. Zel'dovich and reviewed by P.J.E. Peebles in his monograph The Large-Scale Structure of the Universe (Princeton University Press, 1980) in the chapter "caustics and pancakes". The ripples of sunlight at a pool bottom are a more accessible incarnation of such caustics.

Kepler’s Harmonices Mundi turns 400

Johannes Kepler's book Harmony of the World, published 400 years ago, contains what is now commonly known as "Kepler's third law":

Sed res est certissima exactissimaque, quòd proportio quae est inter binorum quorumcunque Planetarum tempora periodica, sit praecisè sesquialtera proportionis mediarum distantiarum, … (But it is absolutely certain and exact that the proportion between the periodic times of any two planets is precisely the sesquialterate proportion [the power 3/2] of their mean distances, …)

Johannes Kepler Harmonices Mundi 1619, see p. 302 of the Kepler edition by Max Caspar.

Expressed in equations: the ratio of the orbital periods T1 : T2 of any two planets 1 and 2 equals the ratio of their semimajor axes raised to the power 1.5, a1^1.5 : a2^1.5. What is more difficult to predict (and still unknown today!) is why the planets in our solar system are at their respective orbits. With the vast statistics of thousands of extrasolar planets obtained from the Kepler mission and other surveys, we might soon find out if there is an empirical relation hidden in the formation of planetary systems.
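The law is easy to verify with rounded modern values of the semimajor axes (in au) and periods (in years); the deviations stay far below a percent:

```python
# Check T1 : T2 = a1^1.5 : a2^1.5 for all planet pairs with rounded orbital data
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

worst = 0.0
for a1, T1 in planets.values():
    for a2, T2 in planets.values():
        worst = max(worst, abs(T1 / T2 - (a1 / a2) ** 1.5) / (T1 / T2))
print(f"largest relative deviation over all pairs: {worst:.3%}")
```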

Kepler made several attempts to find a law behind the distances of the six known planets Mercury, Venus, Earth, Mars, Jupiter, and Saturn. Best known is his book Mysterium Cosmographicum, published as early as 1597.

In the first rendering, I show the elliptical orbits of the six planets Mercury, Venus, Earth, Mars, Jupiter, and Saturn in black. The spherical shells have radii corresponding to the perihelion and aphelion distances of the planets. The red tetrahedron has as insphere a ball with radius equal to the aphelion distance of Mars and a circumsphere with radius equal to Jupiter's perihelion distance. The cube has as insphere a ball with radius equal to the aphelion distance of Jupiter and a circumsphere with radius equal to the perihelion distance of Saturn.

Kepler Mysterium Cosmographicum
Kepler’s 1597 study of the proportions of the planetary orbits in relation to the platonic bodies. (C) for this visualization: Tobias Kramer 2019
1597 inner planets with platonic bodies

The second figure shows the inner planets in more detail. The black ellipses are the actual orbits of Mercury, Venus, Earth, and Mars. The innermost yellow octahedron encloses the orbit of Mercury (inscribed in its square mid-plane); the construction then proceeds by connecting the circumsphere of the yellow octahedron to the Venus shell (perihelion distance), and then placing the green icosahedron around the aphelion shell of Venus. The sphere circumscribed around the icosahedron has the radius of the Earth's perihelion distance. The blue dodecahedron harbors an insphere with the radius of the Earth's aphelion distance, and its circumsphere yields the perihelion sphere of Mars. The larger eccentricities of the Mars and Mercury orbits yield thicker shells for these planets. Note that the construction takes the eccentricities as known and does not rely on circular trajectories.

Kepler was well aware of the eccentricity of the planets and did not assume circular orbits. He also noted that the theory agrees with observations only to within a few percent. In Harmonices Mundi he investigates whether the planetary distances are instead encoded in musical proportions.

Accelerating comets: the non-gravitational force

Orbit of comet 67P/Churyumov-Gerasimenko seen from the North pole of the rotation axis of the nucleus. Perihelion, aphelion, and the equinox positions are shown. (C) Tobias Kramer 2019

Celestial mechanics is firmly rooted in Newton's (and Einstein's) laws of gravity and enables us to compute the positions of planets, moons, and comets with high accuracy. Besides the solar attraction, the major gravitational perturbations of comets are caused by Jupiter. Non-gravitational forces are important to understand the dynamics and composition of our interstellar visitor ʻOumuamua, which not only followed a hyperbolic trajectory but additionally accelerated.

How does a “non-gravitational force” arise?

For asteroids and comets, the term "non-gravitational force" refers to all effects besides the standard law of gravitation. For comets the most obvious one is the force induced by the sublimation of ice: the gas molecules fly into space and carry along momentum transferred from the cometary nucleus. The momentum transfer affects the cometary motion considerably:

Fortunately, Rosetta was a faithful companion of 67P/Churyumov-Gerasimenko and provided the required positions in space (accuracy better than 10 km). The accurate orbit determination of spacecraft is an art by itself, as described in this report. Only this data allowed us to retrieve the non-gravitational acceleration and to deduce how much water ice sublimated during the 2015 apparition.

Magnitude of the non-gravitational acceleration of 67P/Churyumov-Gerasimenko retrieved from the orbital evolution. The red line is an independent determination of the gas production based on ROSINA data. Adapted from Kramer & Läuter 2019.

Flying close to the nucleus provided us with spectacular views of the surface from the OSIRIS camera, but made it more difficult to assess the overall gas production of the comet, since Rosetta actually moved through the escaping molecules and probed the density "in situ" using the ROSINA instruments. Together with the ROSINA principal scientists Kathrin Altwegg and Martin Rubin, we (Matthias Läuter and Tobias Kramer) reconstructed the surface emission rate from the in-situ data ("Surface localization of gas sources on comet 67P/Churyumov-Gerasimenko based on DFMS/COPS data", free arxiv version). This completely independent determination of the water gas production of 67P matches up nicely with the acceleration data, as shown in the figure.

You can run your own experiments with various parameters for the outgassing-induced acceleration using the NASA Horizons web interface. You need to input the 6 osculating elements (equivalent to specifying the position and velocity vectors of the comet), and you can enter values for the non-gravitational parameters A1, A2, A3. Usually these parameters are determined by a careful analysis of several apparitions of a comet, since one apparition as seen and measured from Earth does not allow us to retrieve the values with enough confidence.

For 67P a good orbit representation is given by these values from our article:

Osculating elements of 67P Churyumov Gerasimenko:
Epoch                    2456897.7196990740
Eccentricity             0.6410114978
Perihelion distance      1.24317813856152504
Perihelion Julian date   2454893.7138435622
Longitude Ascending Node 50.1459466115
Argument of perihelion   12.7813547059
Inclination              7.0405649967

Non-gravitational parameters:
A1 +1.066669896245E-9 au/d^2
A2 −3.689152188599E-11 au/d^2
A3 +2.483436092734E-10 au/d^2
∆T 35.07142 d
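Horizons interprets A1, A2, A3 as the radial, transverse, and normal acceleration components at 1 au, scaled along the orbit with the standard water-ice sublimation function g(r) of Marsden et al. (1973). A sketch of that scaling, combined with the 67P values above:

```python
# Standard Marsden et al. (1973) scaling g(r) of the non-gravitational
# acceleration with heliocentric distance r (in au); g(1 au) is close to 1.
ALPHA, R0, M, N, K = 0.111262, 2.808, 2.15, 5.093, 4.6142

def g(r):
    return ALPHA * (r / R0) ** (-M) * (1 + (r / R0) ** N) ** (-K)

# the 67P non-gravitational parameters from above (au/d^2)
A1, A2, A3 = 1.066669896245e-9, -3.689152188599e-11, 2.483436092734e-10

for r in (1.2432, 2.0, 3.0):  # perihelion of 67P and two larger distances
    print(f"r = {r} au: g = {g(r):.3f}, radial acceleration = {A1 * g(r):.2e} au/d^2")
```

The steep drop of g(r) beyond ~3 au reflects the switch-off of water sublimation far from the sun.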

Stellar spectroscopy: (i) let the star shine in (the fiber)

Spectra of Vega, recorded with the setup described in this series of posts. The lower panel shows the reference curve.
Motivated by the book "Using Commercial Amateur Astronomical Spectrographs" by J.L. Hopkins, I got hold of the Science Surplus spectrometer, which provides a digital spectrum (2048 points) from the light received through an optical fiber. The spectroscope (USD 200) is shipped from the US and had to go through German customs, which actually applies a specific customs tariff for spectroscopes. Then the work started: a rather long and colorful road to first spectral light:
Spectroscopy setup with the cold mirror.
  • The main issue is the small amount of available stellar light! This requires hooking the spectrometer up to a telescope, and even then only the brightest stars can be analyzed.
  • The spectrometer spreads the stellar light across 2048 pixels, which corresponds to reducing the light roughly 2000-fold compared to registering it with a normal camera on a single "dot". In addition there are losses from coupling the light out of the telescope into the fiber and the spectroscope.
  • Calibrate the spectrometer (well explained in the manual), for instance using a fluorescent lamp, and save the spectrum to a file to get the 1..2048 pixel range converted to nm.
  • I used this information later to write my own data acquisition and image processing program.
  • Get a serial-to-USB cable (long enough to be useful later for hooking up the telescope outside) and get Windows XP running in a virtual machine.
  • To test the spectrometer, it needs to be connected to the serial interface, and the accompanying Windows software has to run on my Macintosh computer (VirtualBox with an old Windows XP came to the rescue; some fiddling with the settings was required to make it recognize the USB serial adapter as a valid COM port).
  • Now the tricky part: “how to get starlight out of the telescope and into the spectroscope”?
  • Hopkins suggests drilling a hole in a diagonal mirror, but that option looked too tricky to me and I wanted to go for the beam-splitter option. But how to avoid wasting half of the precious starlight?
  • I found a nice presentation by fellow astronomer J. Hubbell, who describes his "cold mirror" solution (and gives very helpful advice about the best collimation into the fiber; more on that later)
Flip mirror repurposed as permanent “cold mirror”
  • Building a cold mirror (materials: about EUR 200)
  • This requires repurposing some other equipment. My parts list from any astronomy shop:
  • Skywatcher or Telescope Service 1.25″ flip mirror which can be easily taken apart (but beware, the screws are small and not very sturdy)
  • several adapters: one to put in other 1.25″ equipment (and eventually a camera, which has a C-mount thread requiring a second adapter)
  • be aware that the relatively long optical path added by the flip mirror requires shifting the focus back by 7-8 cm! This worked only because the tube of my Newton telescope can be shortened.
  • finally: the FM03 – 1″ Visible Cold Mirror, AOI: 45°, 1 mm Thick, which I ordered from Thorlabs to be put into the filter CP02/M mount
  • more adapters: S120-SMA Fiber Adapter Cap with Internal SM1 (1.035″-40) Thread, and SM1A10 Adapter with External SM1 Threads and Internal C-Mount Threads
  • the C-mount thread is then screwed into a short 1.25″ C-mount adapter (I ordered this short C-mount adapter from the UK) and, on the other end, a 0.5× focal reducer lens for collimation.
  • finally, some do-it-yourself work with a fret saw (and lots of attempts to get the measurements right) to replace the supplied flip mirror with the mounted cold mirror, attached with M6 plastic screws to a 1 mm thick wooden plate with a circular hole cut out.
  • Note that the final setup will NOT allow you to flip the mirror anymore and that in the through-direction the intensity is reduced by 90% (the transmission/reflectance characteristics are available on Thorlabs' site)
To be continued: image processing and computer-guiding the telescope with the star centered on the fiber for minutes. Just came in: a spectrum of Mirach
Mirach recorded with the cold-mirror setup (60 s integration), spectral class M0 III. The red line shows the reference (Pickles). The air-mass background has been subtracted, but the instrument response has not been corrected.
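The pixel-to-wavelength calibration mentioned above amounts to a low-order polynomial fit through identified lamp lines. A sketch with hypothetical pixel positions (the wavelengths are real mercury/europium lines of fluorescent lamps):

```python
# Fit a quadratic dispersion relation through three identified lamp lines
# (pixel positions are invented for illustration) and convert all 2048 pixels to nm.
import numpy as np

lines = [(310, 404.66), (1130, 546.07), (1500, 611.6)]  # (pixel, wavelength in nm)
px = np.array([p for p, _ in lines], dtype=float)
nm = np.array([w for _, w in lines], dtype=float)

coeffs = np.polyfit(px, nm, 2)                       # quadratic dispersion relation
wavelength = np.polyval(coeffs, np.arange(1, 2049))  # nm for pixels 1..2048

print(round(wavelength[309], 2))  # pixel 310 reproduces 404.66 nm
```

With more identified lines, the same fit quantifies the residual calibration error.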

Machine learning for encoding optical spectra

Architecture of the neural network used by Rodriguez & Kramer to encode the 2d-echo spectra of the photosynthetic FMO complex. The network is trained on GPUs using Mathematica, which in turn uses the Apache MXNet framework.

Machine learning techniques ("neural networks") are presently explored in a wide range of applications, with the standard showcase being image recognition. In most scenarios the input data is "user generated" (for example handwritten digits) or comes from automatic sensors. The neural network is trained with (key-value) pairs, which are often tagged before being used for "supervised learning".

But what if we want to apply machine learning to scientific data sets generated by demanding simulations for instance on supercomputers? In that case, the input data does not come “for free”, but is the outcome of state-of-the art simulations, for instance of the optical properties of photosynthetic complexes, discussed before.

The advantage of the machine learning technique in this case is that the input parameters are known and the training works with very reliable information. This allows one to find very small (in terms of storage) neural network representations of huge data sets (several gigabytes). We (Rodriguez & Kramer 2019, arxiv version) have explored this method for encoding the information of "two-dimensional optical spectra" and for relating the spectra to the molecular structure, such as the dipole orientations and the fluctuating energy states.

From a "physics perspective", machine learning provides a way of automatic parameter fitting and could be seen as minimizing over a variational parameter space. The problem: a variational principle always happily gives an answer, even if that answer is wrong. While we cannot solve this problem, we have studied how well different network layouts perform under the constraint of a fixed number of fitted parameters. This determines the size of the resulting parameter file of the network, which becomes surprisingly small. You can explore it by downloading the ancillary data we deposited on the arxiv.
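As a toy version of the idea (not the published network or data), one can compress a family of simulated spectra into a tiny multilayer perceptron and compare its parameter count with the raw data size:

```python
# Encode 500 simulated 64-point spectra (here: single Lorentzian lines, a
# stand-in for an expensive simulation) in a small MLP trained by plain
# gradient descent with hand-written backpropagation.
import numpy as np

rng = np.random.default_rng(1)

def spectrum(center, width, w=np.linspace(-1, 1, 64)):
    """Toy 1d spectrum: a single Lorentzian line."""
    return width**2 / ((w - center)**2 + width**2)

params = rng.uniform([-0.5, 0.1], [0.5, 0.3], size=(500, 2))
data = np.array([spectrum(c, s) for c, s in params])

# tiny MLP: 2 inputs -> 32 tanh units -> 64 outputs
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 64)); b2 = np.zeros(64)

losses = []
for step in range(3000):
    h = np.tanh(params @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - data
    losses.append(float(np.mean(err**2)))
    # backpropagation by hand
    gW2 = h.T @ err / len(data); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = params.T @ dh / len(data); gb1 = dh.mean(axis=0)
    for p, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * grad

n_net = W1.size + b1.size + W2.size + b2.size
print(f"dataset values: {data.size}, network parameters: {n_net}, "
      f"loss reduced from {losses[0]:.3f} to {losses[-1]:.4f}")
```

The network stores the whole family of spectra in roughly a tenth of the raw values, which is the compression effect exploited (at much larger scale) in the article.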

Comet 21P/Giacobini-Zinner from Berlin Mitte


Observing comet 21P/Giacobini-Zinner from the city center of Berlin proved to be challenging. I put up my 130 mm Newton telescope in the living room to catch a glimpse of Capella above the roof of the adjacent building around 3 am on Sept. 5. Not far from Capella, 21P/Giacobini-Zinner was barely visible, but a 10 s exposure with the smartphone camera revealed the fuzzy coma of the comet. Since the comet is close to perihelion, it moves swiftly against the stars within one hour, as seen in the three-frame animation from my observations at 02:39, 03:04, and 03:14 local time.

It would be nice to knock out the street light or to block the city lights, but it turns out the new LED street lights emit a broad spectrum across the entire visible range.
I analyzed the spectra with a hand-built spectrometer from Astromedia to evaluate if a UHC filter might help, but unfortunately this is not the case.

Hopes are high for the Christmas comet 46P/Wirtanen to be much brighter and better visible at the end of the year.
This time Wirtanen gets very close to Earth (0.071 au, no danger of collision though) and will be extensively studied. For an overview of our professional work on comet 67P/Churyumov-Gerasimenko, have a look at this ZIB feature.
Current comet light curves are compiled on the excellent COBS.SI webpage.


Lunar eclipse colors, signatures of ozone

The red/green/blue lines denote the relative intensity (normalized to the sum r+g+b) from left to right along the center line (averaged vertically across 50 pixels). While red is the prevailing color in the central umbra (left), near the border with the penumbra a blue color component shows up.

The lunar eclipse yesterday (July 27, 2018) provided an excellent opportunity to take pictures and to study the color of Earth's shadow. I snapped some images with my smartphone (Huawei Mate 10 Pro, Leica camera) through a 130 mm Newton telescope. The moon was pretty low and barely visible within the city. Just after leaving the central shadow cone, the bluish color of the umbra/penumbra border region became very noticeable. Today I analyzed the picture a bit and drew the relative red/green/blue components along the shadow. Indeed blue wins within a narrow stripe bordering the penumbra. According to the literature, this is a signature of the ozone layer (which absorbs orange and red light, while blue passes and arrives at the moon).
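The analysis behind the plot boils down to a few lines of array arithmetic; here is a sketch run on a synthetic red-to-blue gradient instead of the eclipse photo:

```python
# Relative r, g, b intensities (each normalized by r+g+b), averaged over a
# 50-pixel-high stripe along the image's center line.
import numpy as np

def relative_rgb(image):
    """image: (height, width, 3) array; returns (3, width) relative intensities."""
    mid = image.shape[0] // 2
    stripe = image[mid - 25 : mid + 25].astype(float)
    mean = stripe.mean(axis=0)              # (width, 3): average over the stripe
    total = mean.sum(axis=1, keepdims=True) # r + g + b per column
    return (mean / total).T                 # (3, width): r, g, b fractions

# synthetic test image: red dominates on the left, blue on the right
w = 200
img = np.zeros((100, w, 3))
img[..., 0] = np.linspace(1.0, 0.2, w)  # red channel
img[..., 2] = np.linspace(0.2, 1.0, w)  # blue channel
img[..., 1] = 0.3                       # constant green

r, g, b = relative_rgb(img)
print(r[0] > b[0], b[-1] > r[-1])  # red wins on the left, blue on the right
```

Applied to the eclipse photo (loaded as an array), the same function reproduces the curves shown above.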

More images:


Octahedral gravity – the case of asteroid (162173) Ryugu

Gravitational potential of an octahedron (rough estimate: 1.4 km diameter, density 1500 kg/m3), with added centrifugal term (7 h rotation). Credit: Tobias Kramer, 2018

Asteroid Ryugu (June 24, 2018). Credit: JAXA, University of Tokyo, Kochi University, Rikkyo University, Nagoya University, Chiba Institute of Technology, Meiji University, Aizu University, AIST

Asteroid Ryugu

The Hayabusa 2 target asteroid (162173) Ryugu displays a strange diamond-like shape, at least on the sunlit side explored so far.
Time for me to reuse the code developed for comet 67P/Churyumov-Gerasimenko and to compute the gravitational potential (with added centrifugal term, assuming a 7 h rotation period). The rotation weakens the effective gravitational potential in the equator regions. Otherwise, on the triangular faces the potential minima are close to the centers of the triangles.
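A rough numerical version of this estimate (my sketch here, not the 67P code): fill the octahedron with point masses on a grid, add the centrifugal term, and compare a vertex on the equator with the center of a face:

```python
# Effective potential (gravity + centrifugal) of a homogeneous octahedron,
# with the caption's rough numbers: 1.4 km tip-to-tip, 1500 kg/m^3, 7 h spin.
import numpy as np

G = 6.674e-11                   # gravitational constant, m^3 kg^-1 s^-2
rho, a = 1500.0, 700.0          # density (kg/m^3), center-to-vertex distance (m)
omega = 2 * np.pi / (7 * 3600)  # rotation about the z-axis, 7 h period

# cell centers of a 50 m grid inside the octahedron |x| + |y| + |z| <= a
h = 50.0
g1 = np.arange(-a + h / 2, a, h)
X, Y, Z = np.meshgrid(g1, g1, g1, indexing="ij")
inside = np.abs(X) + np.abs(Y) + np.abs(Z) <= a
pts = np.stack([X[inside], Y[inside], Z[inside]], axis=1)
dm = rho * h**3                 # mass of one grid cell

def effective_potential(p):
    """Sum the point-mass potentials of all cells and add the centrifugal term."""
    r = np.linalg.norm(pts - p, axis=1)
    return -G * dm * np.sum(1.0 / r) - 0.5 * omega**2 * (p[0]**2 + p[1]**2)

vertex = np.array([a, 0.0, 0.0])  # equatorial vertex
face = np.array([a, a, a]) / 3.0  # centroid of a triangular face
print(effective_potential(face) < effective_potential(vertex))  # face is deeper
```

The face centroid comes out energetically deeper than the vertex even with the centrifugal weakening at the equator, consistent with the minima sitting on the triangular faces.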

Update: new JAXA images, an animation of the rotation, and a shape model extracted by Doug Ellison. This allows me to recompute the effective gravitational potential (assuming a uniform density):


Cometary activity: the case of 67P/Churyumov-Gerasimenko

Visualization of dust trajectories emerging from comet 67P/C-G from sun-lit (reddish) and shadowed areas (bluish lines). Taken from Advances in Physics: X, 3(1), 1404436, 2018

The Rosetta spacecraft has come to rest on comet 67P/Churyumov-Gerasimenko. The comet is retreating from the sun, but the analysis of the scientific data is an ongoing endeavour with many discoveries yet to be made. As described before, the coma structure of 67P/C-G followed a surprisingly predictable pattern: dust is emitted from the entire sunlit surface and later in space forms intricate dust bundles and rays, directly reflecting the surface topography. The rotation of the nucleus leads to a bending of the dust trajectories, which allows us to read off the velocity of the particles: around 3 m/s at distances of 2-3 km from the surface. Our detailed prediction of the dust coma from May 2015 recently appeared in Advances in Physics: X, 3(1), 1404436, 2018 (open access), where we compare the model with Rosetta images such as these ones: (1, 2, 3).

Distribution of gas emitting regions on 67P/Churyumov-Gerasimenko (blue to red color scale: increasing gas emission). The black needles mark the reported locations of short-lived dust outbursts. Adapted from MNRAS 469, S20, 2017

The dust is propelled by the gas emitted from the surface by sublimation processes. It is a non-trivial task to back-out the surface ice distribution from the measured gas densities at Rosetta’s orbit via the COmetary Pressure Sensor (COPS), built by the ROSINA team (PI: Prof. Kathrin Altwegg, University of Bern, Switzerland). Using an analytical ansatz for the gas distribution, we managed to retrieve the “best-fit” distribution of the gas sources on the surface (T. Kramer, M. Läuter, M. Rubin, K. Altwegg: Seasonal changes of the volatile density in the coma and on the surface of comet 67P/Churyumov-Gerasimenko Monthly Notices of the Royal Astronomical Society, 469, S20, 2017).
Interestingly, the sources of higher gas emission are linked to reported short-lived outburst locations. A unified picture and modelling of gas and dust emission holds important clues about the composition of the cometary surface and we continue our investigation in that direction.

Corona shape during the total eclipse on August 21, 2017

I took a single picture during totality with my old smartphone, but at least I put it in high dynamic range mode (HDR). (C) 2017 Tobias Kramer

Eclipse path on August 21, 2017 near Salem, Oregon, USA. I was positioned at the red marker. I computed the path with my venerable eclipse prediction program from 1994 following the algorithms given in the Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac (1961 edition) p. 211ff.

I was lucky to witness the total eclipse under perfect weather conditions in Salem, Oregon. Salem was very accessible by public transport (Amtrak trains arriving before the eclipse, with a special stop to accommodate people getting off to viewing locations). While this picture does little justice to the real impression of the "inverted Sun" (dark disk with bright corona around it), it shows the irregular "triangular" shape of the corona with three rays poking out. From Earth it is normally impossible to see the corona due to scattered light in the atmosphere, but solar satellites monitor the Sun constantly. I followed NASA's example and overlaid a space satellite image with my image (after rotating it such that the ecliptic plane [conveniently indicated by Venus] aligns horizontally). If you want to try something similar yourself, I recommend the helioviewer website for easy selection and download of the solar satellite imagery. To match the scales on your pictures: Venus is 34.3 degrees away from the Sun, corresponding to about 130 solar radii. We are presently close to a minimum of solar activity, and thus the corona is less roundish.
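The scale matching is simple arithmetic: the Sun's angular radius is about 0.267 degrees, so the angular distance of Venus expressed in solar radii follows directly:

```python
# Venus at 34.3 degrees elongation, Sun's angular radius ~0.267 degrees:
# distance in the image plane expressed in solar radii
scale = 34.3 / 0.267
print(round(scale))  # ~128, matching the "about 130 solar radii" above
```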

Overlay of the solar eclipse image taken by Tobias Kramer at Salem with the SOHO LASCO C3 image from 2017-08-21 16:54:07 UTC. The three rays visible to SOHO cause the triangular shape of the corona seen during the eclipse.

Photosynthesis: from the antenna to the reaction center

From the antenna to the reaction center: downhill energy transfer in the photosynthetic apparatus of Chlorobium tepidum.

The photosynthetic apparatus of the green sulfur bacterium Chlorobium tepidum spurred a long-lasting discussion about quantum coherence in biology, mostly focused on its subunit, the Fenna-Matthews-Olson (FMO) complex. An account of the discovery of the FMO protein is given by Olson in Photosynth. Res. 80, 2004.

Recent experiments by Dostál, Pšenčík, and Zigmantas (Nat. Chem. 8, 2016) show time- and frequency-resolved 2d-spectra of the whole photosynthetic apparatus. These results allow one to trace the energy flow from the antenna down to the reaction center and to relate it to theoretical models.

Computed two-dimensional spectrum of the antenna and FMO complex of C. tepidum. “A” denotes the location of the antenna peak, 1-7 the FMO complex states. Within tens of picoseconds, the energy is shuffled from the antenna towards the FMO complex.

In our article (Kramer & Rodriguez, Sci. Reports 7, 2017, open access) we provide a model of the experimental results using the open quantum system dynamics code described previously. In addition we show how the different pathways in 2D spectroscopy (ground state bleaching, stimulated emission, and excited state absorption) affect the spectra and lead to shifts of the "blobs" below the diagonal. This allows us to infer the effective coupling of the antenna part to the FMO complex and to assess the relative orientations of the different units. The comparison of theory and experiment is a good test of our current understanding of the physical processes at work.

How a wave packet travels through a quantum electronic interferometer

Together with Christoph Kreisbeck and Rafael A. Molina, I have contributed a blog entry to the News and Views section of the Journal of Physics describing our most recent work on an Aharonov-Bohm interferometer with an embedded quantum dot (article, arxiv). Can you spot Schrödinger's cat in the result?

Transition between the resistivity of the nanoring with and without embedded quantum dot. The vertical axis denotes the Fermi energy (controlled by a gate), while the horizontal axis scans through the magnetic field to induce phase differences between the pathways.

Splitting the heat: the quantum limits of thermal energy flow

Device geometry. a) Scanning electron micrograph of the sample. The 1D waveguides with a lithographic width of 170 nm form a half-ring connected to reservoirs A-F. A global top-gate is present. Heating of reservoirs A, B is generated by applying a current Ih; thermal noise measurements are performed at contacts E, F. The reservoirs C and D are left floating. b) Device potential for the ballistic transport model with labels A∗ and E∗ denoting the joined reservoirs A+B and E+F. Harmonic waveguide network with Gaussian scatterer (indicated by arrow). Mode spacing is ħω = 5 meV. © 2016 Kramer et al. Citation: AIP Advances 6, 065306 (2016); http://dx.doi.org/10.1063/1.4953812

With ever shrinking sizes of electronic transistors, the quantum mechanical nature of electrons becomes more visible. For instance, two electrons with the same spin orientation and velocities cannot be at the same location (Pauli blocking). At low temperatures, electronic waves travel many micrometers completely coherently, reflected only by the geometry of the confinement. A tight confinement leads to a larger separation of quantized energy levels and restricts the lateral spread of the electrons to specific eigenmodes of a nanowire.

The distribution of the electronic current into the various modes is then given by the geometrical scattering properties of the device interior, which are conveniently computed using wave packets. The ballistic electrons entering a nanodevice carry along charge and thermal energy. The maximum amount of thermal energy Q per time which can be transported through a single channel between two reservoirs of different temperatures is limited to Q ≤ π²kB²(T2² − T1²)/(3h), where h denotes Planck’s constant and kB Boltzmann’s constant. This has implications for computing devices, since it restricts the cooling rate (Pendry 1982).
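The single-channel bound can be evaluated numerically; the following is a quick sketch (the temperatures are illustrative values, not the experimental ones):

```python
# Upper bound Q <= pi^2 kB^2 (T2^2 - T1^2) / (3 h) on the heat flow
# through one ballistic channel (Pendry 1982).
from math import pi

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J s

def max_heat_current(t_hot, t_cold):
    """Maximum thermal energy per time through a single channel, in watts."""
    return pi**2 * KB**2 * (t_hot**2 - t_cold**2) / (3 * H)

# Example: reservoirs at 1.5 K and 1.0 K give a bound of roughly a picowatt
q = max_heat_current(1.5, 1.0)
print(f"{q:.3e} W")
```

The bound is independent of material parameters: only fundamental constants and the reservoir temperatures enter.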

In a collaboration with the novel materials group at Humboldt University (Prof. S.F. Fischer, Dr. C. Riha, Dr. O. Chiatti, S. Buchholz), and using wafers produced in the labs of A. Wieck and D. Reuter (Bochum, Paderborn), C. Kreisbeck and I have compared theoretical expectations with experimental data for the thermal energy and charge currents in multi-terminal nanorings (AIP Advances 2016, open access). Our findings highlight the influence of the device geometry on both charge and thermal energy transfer and demonstrate the usefulness of the time-dependent wave-packet algorithm for obtaining eigenstates over a whole range of temperatures.

Metadata analysis of 80,000 arxiv:physics/astro-ph articles reveals biased moderation

Have you ever thought of arXiv moderation in astro-ph as a problem? Did you experience a >5 month delay from submission of your pre-print to the arXiv to its becoming publicly visible? Did this happen without any explanation or reaction from the arXiv moderators, despite the same article being published after peer review in the Astrophysical Journal Letters?

Chances are high that your answer is no; to be precise, the odds are 81404/81440 = 99.9558 percent that this did not happen to you. Lucky you! Now let me tell you about the other 36/81440 = 0.0442 percent. My computer-based analysis of the last 80,000 deposited arxiv:astro-ph articles shows interesting results about the moderation patterns in astrophysics. To repeat the analysis:

  • get the arXiv metadata, which is available (good!) from the arxiv itself. I used the excellent metha tools from Martin Czygan to download all metadata from the astro-ph and quant-ph sections since 5/2014.
  • parse the resulting 200 MB XML file, for instance with Mathematica. To get the delay from submission to arXiv publication, I took the time difference between the submission date stamp (the oldest XMLElement[{http://purl.org/dc/elements/1.1/, date}]) and the arXiv identifier, which encodes the year and month of public visibility.
  • Example: the article arxiv:1604.00876 went public in April 2016, 5 months after submission to the arXiv (November 5, 2015) and after publication in the Astrophysical Journal Letters (there, the total processing time from submission to online publication, including peer review, was 1.5 months).
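The delay computation in the last step can be sketched as follows (a hypothetical Python stand-in for the Mathematica analysis used in the post; it assumes the post-2007 arXiv identifier scheme YYMM.NNNNN):

```python
from datetime import date

def arxiv_public_date(arxiv_id):
    """Decode the year/month of public visibility from a new-style arXiv id,
    e.g. '1604.00876' -> April 2016."""
    yymm = arxiv_id.split(".")[0]
    return date(2000 + int(yymm[:2]), int(yymm[2:4]), 1)

def delay_months(submission, arxiv_id):
    """Whole months between the oldest submission date stamp and arXiv visibility."""
    public = arxiv_public_date(arxiv_id)
    return (public.year - submission.year) * 12 + (public.month - submission.month)

# Example from the post: submitted November 5, 2015, public April 2016
print(delay_months(date(2015, 11, 5), "1604.00876"))  # 5 (months)
```

Running this over all harvested records and sorting by the delay reproduces the list of outliers below.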

The analysis shows different patterns of moderation for the two sections I considered, quant-ph and astro-ph. It reveals problematic moderation effects in the arXiv astro-ph section:

  1. Completely suitable articles are blocked, most of them peer reviewed and published, for instance in the Astrophysical Journal, the Astrophysical Journal Letters, or the Monthly Notices of the Royal Astronomical Society.
  2. This might indicate a moderation biased toward specific persons and subjects. In contrast to scientific journals with their named editors, the arXiv moderation is opaque and anonymous. The metadata analysis shows that the moderation of the physics:astro-ph and physics:quant-ph sections uses very different guidelines, with astro-ph having a strong bias to block valid contributions.
  3. It makes the astro-ph arXiv less usable as a medium for rapid dissemination of cutting edge research via preprints.
  4. This hurts careers and citation histories, and it encourages plagiarism. New scientific findings are more easily plagiarized by other groups, since no arXiv time-stamped preprint establishes the precedence.
  5. If we, the scientists, want a publicly funded arXiv we must ensure that it is operated according to scientific standards which serve the public. This excludes biased blocking of valid and publicly funded research.
  6. Finally, the arXiv was not put in place to be a backup server for all journals, but rather to provide a space to share upcoming scientific publications without months of delay.

I will be happy to share comments I receive about similar cases. I am not talking about dubious articles or non-scientific theories, but about standard peer-reviewed contributions published in established physics journals, which should be on the astrophysical preprint arXiv.

Here follows the list of all articles from arxiv:physics/astro-ph (out of a total of 81,440 deposited articles) which were delayed by more than 3 months, together with the journal (if known) where the peer-reviewed article was published. I cannot exclude other factors besides moderation for the delay, but can definitely confirm incorrect moderation as the cause for the 2 cases I have experienced myself. Interestingly, the same analysis on arxiv:physics/quant-ph did not reveal such a moderation bias against peer-reviewed articles. This gives hope that the astrophysical section could recover and return to 100 percent flawless operation. Then the arXiv would fulfil its own pledge on accountability and on good scientific practices (principles of the arXiv’s operation).

http://arxiv.org/abs/1009.0420 The Astrophysical Journal
http://arxiv.org/abs/1101.3547 Publications of Astronomical Society of Japan
http://arxiv.org/abs/1506.01700 Journal of Astrophysics and Astronomy
http://arxiv.org/abs/1509.08311 EPJ Web of Conferences
http://arxiv.org/abs/1501.03478 Astrophysics and Space Sciences
http://arxiv.org/abs/1602.07334 The Astrophysical Journal
http://arxiv.org/abs/1602.07511 Physical Review C
http://arxiv.org/abs/1603.09573 Monthly Notices of the Royal Astronomical Society
http://arxiv.org/abs/1604.00876 The Astrophysical Journal Letters
http://arxiv.org/abs/1604.00291 Journal of Statistical Mechanics: Theory and Experiment
http://arxiv.org/abs/1604.07760 The Astrophysical Journal Letters

Predicting comets: a matter of perspective

Contrast stretched NAVCAM image of the nucleus of comet 67P/Churyumov-Gerasimenko to highlight the “jets” of dust emitted from all over the surface. CC BY-SA IGO 3.0

In general, cometary activity is difficult to predict, and many comets are known for sudden changes in brightness, break-ups, and simple disappearances. Fortunately, the Rosetta target comet 67P/Churyumov-Gerasimenko (67P/C-G) is much more amenable to theoretical predictions. The OSIRIS and NAVCAM images show light reflected from a highly structured dust coma within the space probe orbit (ca. 20-150 km).

Is it possible to predict the dust coma and tail of comets?

Starting in 2014 we have been working on a dust forecast for 67P/C-G, see the previous blog entries. We now had the chance to check how well our predictions hold by comparing the model outcome to an image sequence from the OSIRIS camera during one rotation period of 67P/C-G on April 12, 2015, published by Vincent et al in A&A 587, A14 (2016) (arxiv version, there Fig. 13).

Comparison of Rosetta observations by Vincent et al A&A 2016 (left panels) with the homogeneous model (right panels). Taken from Kramer&Noack (ApJL 2016) Credit for (a, c): ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

Our results appeared in Kramer & Noack, Astrophysical Journal Letters, 823, L11 (preprint, images). We obtain a surprisingly high correlation coefficient (average 80%, maximum 90%) between theory and observation if we stick to the following minimal-assumption model:

  1. dust is emitted from the entire sunlit nucleus, not only from localized active areas. We refer to this as the “homogeneous activity model”
  2. dust is entering space with a finite velocity (on average) along the surface normal. This implies that close to the surface a rapid acceleration takes place.
  3. photographed “jets” depend strongly on the observing geometry:
    if multiple concave areas align along the line of sight, a high imaged intensity results, but it is not necessarily caused by a single main emission source. As an exemplary case, we analysed the brightest points in the Rosetta image taken on April 12, 2015, 12:12, and looked at all contributing factors along the line of sight (yellow line) from the camera to the comet. The observed jet actually results from multiple sources and, in addition, from contributions from all sunlit surface areas.
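Point 3 can be illustrated with a toy line-of-sight integral (a sketch with made-up numbers, not the model of the paper): two well-separated dust sources that happen to align along one viewing ray add up to a single bright "jet" in the image.

```python
import numpy as np

# Toy dust density: two Gaussian blobs lying on the same ray from the camera
# (positions and widths are arbitrary illustrative values).
def density(x, y):
    blob1 = np.exp(-((x - 1.0)**2 + y**2) / 0.1)
    blob2 = np.exp(-((x - 3.0)**2 + y**2) / 0.1)
    return blob1 + blob2

# Camera at the origin looking along +x: sum the density along each ray y = const.
x = np.linspace(0.0, 4.0, 400)
dx = x[1] - x[0]

def column_brightness(y0):
    return density(x, y0).sum() * dx

on_axis = column_brightness(0.0)    # ray passing through BOTH blobs
off_axis = column_brightness(0.8)   # ray missing both blobs
print(on_axis, off_axis)            # the aligned sources appear as one bright "jet"
```

The bright pixel says nothing about whether one or several sources contributed; only the column integral is observed.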

What are the implications of the theoretical model?

If dust is emitted from all sunlit areas of 67P/C-G, this implies a more homogeneous surface erosion of the illuminated nucleus and leaves less room for compositional heterogeneities. And finally: it makes the dust coma much more predictable, while still allowing for additional (but unpredictable) spontaneous outbreak events lasting 20-40 min. Interestingly, a re-analysis of the comet Halley flyby by Crifo et al (Earth, Moon, and Planets 90, 227-238 (2002)) also points to a more homogeneous emission pattern as compared to localized sources.

Apply now for the 621. WE-Heraeus-Seminar: From Photosynthesis to Photovoltaics: Theoretical Approaches for Modelling Supramolecular Complexes and Molecular Crystals

The conference focuses on new theoretical and experimental developments in the field of photoactive molecules. If you are working in this field, we encourage you to attend the meeting. Find out more and see who has already agreed to participate at photosynthesis2016.wordpress.com.

Please submit your abstract within the next two weeks by filling out the linked template from the conference site.

We (the scientific organizers, Tobias Kramer and Jörg Megow) are grateful for the full support by the Wilhelm and Else Heraeus Foundation, which covers the accommodation for all successful applicants.

Weathering the dust around comet 67P/Churyumov–Gerasimenko

Bradford robotic telescope image of comet 67P/Churyumov–Gerasimenko (180 s exposure time, 5:43 UTC, 30-10-2015). © 2015 University of Bradford

Comet 67P/Churyumov–Gerasimenko is past its perihelion and is currently visible in telescopes in the morning hours. The picture was taken from Tenerife by the Bradford robotic telescope, where I submitted the request. The tail extends hundreds of thousands of kilometers into space and consists of dust particles emitted from the cometary nucleus, which measures just a few kilometers. In a recent work just published in the Astrophysical Journal Letters (arxiv version), we have explored how dust which does not make it into space whirls around the cometary nucleus. The model assumes that dust particles are emitted from the porous mantle, hover over the cometary surface for some time (<6 h), and then fall back on the surface, their descent delayed by the drag of gas molecules moving away from the nucleus. As in the predictions for the cometary coma discussed previously, we stick to a minimal-assumption model with a homogeneous surface activity of gas and dust emission.

Dust trajectories reaching the Philae descent area computed from a homogeneous dust emission model. From Kramer/Noack, “Prevailing dust-transport directions on comet 67P/Churyumov-Gerasimenko”, Astrophysical Journal Letters, 813, L33 (2015).

The movements of 40,000 dust particles are tracked and the average dust transport within a volumetric grid with 300 m sized boxes is computed. Besides the gas-dust interaction, we do also incorporate the rotation of the comet, which leads to a directional transport.
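The volumetric averaging can be sketched with numpy's `histogramdd` (a toy version with random stand-in positions, not the actual trajectory data; the cube extent is a made-up value):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the tracked dust particles: positions in meters
# within a cube of +-3 km around the nucleus.
positions = rng.uniform(-3000.0, 3000.0, size=(40000, 3))

# 300 m sized boxes -> 20 cells per axis over the 6 km extent.
edges = np.linspace(-3000.0, 3000.0, 21)
counts, _ = np.histogramdd(positions, bins=(edges, edges, edges))

print(counts.shape)   # (20, 20, 20) grid cells
print(counts.sum())   # every particle lands in exactly one box
```

In the actual model one would additionally accumulate the particle velocities per cell and divide by the counts to obtain the average transport direction in each box.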
The Rosetta mission dropped Philae over the small lobe of 67P/C-G, and Philae took a sequence of approach images which reveal structures resembling wind-tails behind boulders on the comet. This allowed Mottola et al (Science 349.6247 (2015): aab0232) to derive information about the direction of impinging particles, which hit the surface unless sheltered by the boulder. Our model predicts dust transport in line with the observed directions in the descent region; it will be interesting to see how wind-tails at other locations match the prediction. We have put an interactive 3d dust-stream model online to visualize the dust flux predicted from the homogeneous surface model.

Day and night at comet 67P/Churyumov–Gerasimenko

Comet 67P/Churyumov–Gerasimenko has passed its nearest distance to the sun and its tail has been observed from earth. The comet emits dust and displays spectacular but short-lived outbreaks of localized jet activity. Very detailed OSIRIS pictures of the near-surface dust emission, ready for stereo viewing, have been posted by Brian May. The pictures also allow one to have a look at the prediction from the homogeneous dust emission model discussed previously. When you direct your attention in Brian May’s pictures to the background activity, you find very similar patterns to those expected from the homogeneous emission model. This activity is dimmer but steadily blows off dust from the nucleus. Matthias Noack and I have generated and uploaded a visualization of the dust data obtained from the homogeneous activity model. In contrast to localized activity models, collimated jets arise from a bundle of co-propagating dust trajectories emanating from concave surface areas. The underlying topographical shape model is a uniform triangle remesh of Mattias Malmer’s excellent work based on the release of Rosetta’s NAVCAM images via the Rosetta blog. The following video takes you on a flight around 67P/C-G, with 16 hours condensed into 90 seconds.

The video is a side-by-side stereoscopic 3d rendering of 67P/Churyumov–Gerasimenko and the dust cloud, which can be viewed in 3d with a simple cardboard viewer. While the observer encircles the nucleus, day and night pass and different parts of the comet are illuminated.

Gas flow around 67P/C-G computed from a homogeneous activity model. arxiv:1505.08041

In the homogeneous activity model each sunlit triangle emits dust with an initial velocity component along the surface normal. The dust is then additionally dragged along within the outward-streaming gas, which is also incorporated in the model. In contrast to compact dust particles, the gas molecules also diffuse in lateral directions, and thus the gas by itself does not help to collimate jets. The Rosetta mission with its long-term observation program offers fascinating ways to perform a reality check on various models of cometary activity, which differ considerably in the underlying physics and in the assumptions about the original distribution and lift-off conditions of the dust eventually forming the beautiful tails of comets.

Homogeneous dust emission and jet structure near active cometary nuclei: the case of 67P/Churyumov-Gerasimenko by Tobias Kramer, Matthias Noack, Daniel Baum, Hans-Christian Hege, Eric J. Heller.

For red-cyan glasses try our 3d video on youtube (flash player required, watch out for the settings and 3d options, 1080p HD recommended).

Dusting off cometary surfaces: collimated jets despite a homogeneous emission pattern.

Effective gravitational potential of the comet (including the centrifugal contribution); the maximal value of the potential (red) is about 0.46 N/m, the minimal value (blue) 0.31 N/m, computed with the methods described in this post. The rotation period is taken to be 12.4043 h. Image computed with the OpenCL cosim code. Image © Tobias Kramer (CC BY-SA 3.0 IGO).

Knowledge of GPGPU techniques is helpful for rapid model building and testing of scientific ideas. For example, the beautiful pictures taken by the ESA/Rosetta spacecraft of comet 67P/Churyumov–Gerasimenko reveal jets of dust particles emitted from the comet. Wouldn’t it be nice to have a fast method to simulate thousands of dust particles around the comet and to find out whether the peculiar shape of this space-potato already influences the dust trajectories through its gravitational potential? At the Zuse-Institut in Berlin we joined forces between the distributed algorithms and visual data analysis groups to test this idea. First, however, an accurate shape model of comet 67P/C-G is required. As published on his blog, Mattias Malmer has done amazing work to extract a shape model from the published navigation camera images.

  1. Starting from the shape model by Mattias Malmer, we obtain a re-meshed model with fewer triangles on the surface (we use about 20,000 triangles). The key property of the new mesh is a homogeneous coverage of the cometary surface with almost equally sized triangles. We don’t want better resolution and adaptive mesh sizes at areas with more complex features; rather, we are considering a homogeneous emission pattern without isolated activity regions. This is best modeled by mesh cells of equal area. Will this prescription nevertheless yield collimated dust jets? We’ll see…
  2. To compute the gravitational potential of such a surface we follow this nice article by J.T. Conway. The calculation later on stays in the rotating frame anchored to the comet; thus, in addition, the centrifugal and Coriolis forces need to be included.
  3. To accelerate the method, OpenCL comes to the rescue and lets one compute many trajectories in parallel. What is required are physical conditions for the starting positions of the dust as it flies off the surface. We put one dust particle at the center of each surface triangle and set the initial velocity along the normal direction to typically 2 or 4 m/s. This ensures that most particles are able to escape and do not fall back onto the comet.
  4. To visualize the resulting point clouds of dust particles we have programmed an OpenGL visualization tool. We compute the rotation and sunlight direction on the comet to cast shadows and add activity profiles to the comet surface to mask out dust originating from the dark side of the comet.
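The trajectory integration of step 3 can be sketched in the rotating frame as follows (a Python toy rather than the OpenCL code, with a point-mass gravity term standing in for the triangle-mesh potential of Conway's method; GM ≈ 666 m³/s² is the approximate point-mass value for 67P, the launch parameters follow the post):

```python
import numpy as np

OMEGA = 2 * np.pi / (12.4043 * 3600.0)  # rotation rate from the 12.4043 h period, rad/s
GM = 666.0                              # approximate GM of 67P, m^3/s^2 (point-mass stand-in)

def acceleration(r, v):
    """Gravity + centrifugal + Coriolis in the frame rotating about the z axis."""
    grav = -GM * r / np.linalg.norm(r)**3
    centrifugal = OMEGA**2 * np.array([r[0], r[1], 0.0])
    coriolis = -2 * OMEGA * np.cross([0.0, 0.0, 1.0], v)
    return grav + centrifugal + coriolis

# Semi-implicit Euler integration of one dust particle launched along the normal
r = np.array([2000.0, 0.0, 0.0])   # start 2 km from the center
v = np.array([4.0, 0.0, 0.0])      # 4 m/s outward, as in the post
dt = 1.0
for _ in range(3600):              # one hour of flight
    v = v + dt * acceleration(r, v)
    r = r + dt * v
print(np.linalg.norm(r))           # distance from the nucleus after one hour
```

With 4 m/s the particle is well above the escape speed of the point mass and drifts away from the nucleus; launching at a few tenths of m/s instead makes it fall back.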

This is what we get for May 3, 2015. The ESA/NAVCAM image is taken verbatim from the Rosetta blog.

Comparison of the homogeneous dust model (left panel) with ESA/NAVCAM Rosetta images (right panel). Left panel: © Tobias Kramer and Matthias Noack 2015. Right panel: © ESA/NAVCAM team CC BY-SA 3.0 IGO, link see text.

Read more about the physics and results in our arxiv article T. Kramer et al.: Homogeneous Dust Emission and Jet Structure near Active Cometary Nuclei: The Case of 67P/Churyumov-Gerasimenko (submitted for publication), and grab the code to compute your own dust trajectories with OpenCL at github.org/noma/covis

The shape of the universe

The following post is contributed by Peter Kramer.

hyperbolic dodecahedron
Shown are two faces of a hyperbolic dodecahedron.
The red line from the family of shortest lines (geodesics) connects both faces. Adapted from CRM Proceedings and Lecture Notes (2004), vol 34, p. 113, by Peter Kramer.

The new Planck data on the cosmic microwave background (CMB) have come in. For cosmic topology, the data sets contain interesting information related to the size and shape of the universe. The curvature of three-dimensional space leads to a classification into hyperbolic, flat, or spherical cases. Sometimes in popular literature, the three cases are said to imply an infinite (hyperbolic, flat) or finite (spherical) size of the universe. This statement is not correct. Topology supports a much wider zoo of possible universes. For instance, there are finite hyperbolic spaces, as depicted in the figure (taken from Group actions on compact hyperbolic manifolds and closed geodesics, arxiv version). The figure also shows the resulting geodesic, which is the path of light through such a hyperbolic finite-sized universe. The start and end points must be identified and lead to a smooth connection.

Recent observational data seem to suggest a spherical space. Still, this does not resolve the issue of the size of the universe.
Instead of a fully filled three-sphere, already smaller parts of the sphere can be closed topologically and thus lead to a smaller-sized universe. A systematic exploration of such smaller but still spherical universes is given in my recent article
Topology of Platonic Spherical Manifolds: From Homotopy to Harmonic Analysis.
In physics, it is important to give specific predictions for observations of the topology, for instance by predicting the ratio of the different angular modes of the cosmic microwave background. It is shown that this is indeed possible; for instance, in a cubic (still spherical!) universe, the squared 4th and 6th multipole orders are tied together in the proportion 7:4, see Table 5. On p. 35 of the Planck collaboration article, the authors call for models yielding such predictions as possible explanations for the observed anisotropy and the ratio of high and low multipole moments.

When two electrons collide. Visualizing the Pauli blockade.

The upper panel shows two (non-interacting) electrons colliding with small relative momenta, the lower panel with larger relative momenta.

From time to time I get asked about the implications of the Pauli exclusion principle for quantum mechanical wave-packet simulations.
I start with the simplest antisymmetric case: a two-particle state given by the Slater determinant of two Gaussian wave packets with perpendicular directions of the momentum:
φa(x,y) = exp(−[(x−o)² + (y−o)²]/(2a²) − ikx + iky) and φb(x,y) = exp(−[(x+o)² + (y−o)²]/(2a²) + ikx + iky).
This yields the two-electron wave function Ψ(x1,y1,x2,y2) = [φa(x1,y1)φb(x2,y2) − φb(x1,y1)φa(x2,y2)]/√2.
The probability to find one of the two electrons at a specific point in space is given by integrating the absolute value squared of the wave function over one coordinate set.
The resulting single particle density (snapshots at specific values of the displacement o) is shown in the animation for two different values of the momentum k (we assume that both electrons are in the same spin state).
For small values of k the two electrons get close in phase space (that is in momentum and position). The animation shows how the density deviates from a simple addition of the probabilities of two independent electrons.
If the two electrons differ already by a large relative momentum, the distance in phase space is large even if they get close in position space. Then, the resulting single particle density looks similar to the sum of two independent probabilities.
The probability to find the two electrons simultaneously at the same place is zero in both cases, but this is not directly visible by looking at the single particle density (which reflects the probability to find any of the electrons at a specific position).
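The vanishing amplitude at coinciding positions can be checked directly in a few lines (a numerical sketch of the Slater-determinant construction above; the width, displacement, and momentum values are arbitrary):

```python
import numpy as np

a, o, k = 1.0, 2.0, 1.5   # width, displacement, momentum (arbitrary units)

def phi_a(x, y):
    return np.exp(-((x - o)**2 + (y - o)**2) / (2 * a**2) - 1j*k*x + 1j*k*y)

def phi_b(x, y):
    return np.exp(-((x + o)**2 + (y - o)**2) / (2 * a**2) + 1j*k*x + 1j*k*y)

def psi(x1, y1, x2, y2):
    """Antisymmetric two-electron wave function (unnormalized Slater determinant)."""
    return phi_a(x1, y1) * phi_b(x2, y2) - phi_b(x1, y1) * phi_a(x2, y2)

# Pauli exclusion: the amplitude vanishes whenever both electrons coincide
for x, y in [(0.0, 0.0), (1.0, -0.5), (-2.0, 3.0)]:
    assert abs(psi(x, y, x, y)) < 1e-12
print("psi vanishes at coinciding coordinates")
```

Evaluating |Ψ|² on a grid and summing over one coordinate pair yields the single-particle densities shown in the animation.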
For further reading, see this article [arxiv version].

The impact of scientific publications – some personal observations

I will resume posting about algorithm development for computational physics. To put these efforts into a more general context, I start with some observations about the current publication-ranking model and will explore alternatives and supplements in the next posts.

Solvay Congress 1970; many well-known nuclear physicists are present, including Werner Heisenberg.

Working in academic institutions involves being part of hiring committees as well as being assessed by colleagues to measure the impact of my own and others’ scientific contributions.
In the internet age it has become common practice to look at various performance indices, such as the h-index and the number of “first author” and “senior author” articles. Often it is the responsibility of the applicant to submit this data in an electronic spreadsheet format suitable for an easy ranking of all candidates. The indices are only one consideration for the final decision, albeit in my experience an important one due to their perceived unbiased and statistical nature. The funding of whole university departments and the careers of young scientists are tied to these performance indices.

I reflected on the usefulness of impact factors while collecting them for various reports; here are some personal observations:

  1. Looking at the (very likely rather incomplete) citation count of my father, I find it interesting that, for instance, a 49-year-old contribution by P. Kramer and M. Moshinsky on group-theoretical methods for few-body systems gains most citations per year after almost 5 decades. This time scale is well beyond any short-term hiring or funding decisions based on performance indices. From colleagues I hear about similar cases.
  2. A high h-index can be a sign of a narrow research field, since the h-index is best built up by sticking to the same specialized topic for a long time and this encourages serialised publications. I find it interesting that on the other hand important contributions have been made by people working outside the field to which they contributed. The discovery of three-dimensional quasicrystals discussed here provides a good example. The canonical condensed matter theory did not envision this paradigmatic change, rather the study of group theoretical methods in nuclear physics provided the seeds.
  3. The full-text search provided by the search engines offers fascinating options to scan through previously forgotten chapters and books, but it also bypasses the systematic classification schemes previously developed and curated by colleagues in mathematics and theoretical physics. It is interesting to note that, for instance, the AMS short reviews are not done anonymously and are most often of excellent quality. The non-curated search, on the other hand, leads to a down-ranking of books and review articles, which contain a broader and deeper exposition of a scientific topic. Libraries with real books grouped by topics are deserted these days, and online services and expert reviews have in general not gained a larger audience or an expert community to write reports. One exception might be the public discussion of possible scientific misconduct and retracted publications.
  4. Another side effect: searching the internet for specific topics diminishes the opportunity to accidentally stumble upon an interesting article lacking these keywords, for instance by scanning through a paper volume of a journal while searching for a specific article. I recall that many faculty members went every Monday to the library and looked at all the incoming journals to stay up-to-date about the general developments in physics and chemistry. Today we get email alerts about citation counts or specific subfields, but no alert contains a suggestion of what other article might pique our intellectual curiosity – and looking at the rather stupid shopping recommendations generated by online warehouses, I don’t expect this to happen anytime soon.
  5. On a positive note: since all text sources are treated equally, no “high-impact journals” are preferred. In my experience as a referee for journals of all sorts of impact numbers, the interesting contributions are not necessarily published or submitted to highly ranked journals.

To sum up: the assessment of manuscripts, of the contributions of colleagues, and of my own articles requires humans to read and process them carefully – all of this takes a lot of time and consideration. It can take decades before publications become alive and well cited. Citation counts of the last 10 years can be poor indicators of the long-term importance of a contribution. Counting statistics provides some gratification by showing immediate interest and is the (less personal) substitute for the old-fashioned postcards requesting reprints. People working in theoretical physics are often closely related by collaboration distance, which provides yet another (much more fun!) factor. You can check your Erdős number (mine is 4) or Einstein number (3, thanks to working with Marcos Moshinsky) at the AMS website.

How can we improve the current situation and maintain a well-curated and relevant library of scientific contributions – in particular involving numerical results and methods? One possibility is to make a larger portion of the materials surrounding a publication available. In computational physics it is of interest to test and recalculate published results shown in journals. The nanohub.org platform is in my view a best-practice case for providing supplemental information on demand and for ensuring long-term availability and usefulness of scientific results by keeping the computational tools running and updated. It is for me a pleasure and an excellent experience to work with the team around nanohub to maintain our open quantum dynamics tool. Another way is to provide and test background materials in research blogs. I will try out different approaches in the next posts.

Better than Slater-determinants: center-of-mass free basis sets for few-electron quantum dots

Error analysis of eigenenergies of the standard configuration interaction (CI) method (right, black lines). The left, colored lines are obtained by explicitly handling all spurious states. The arrows point out the increasing error of the CI approach with increasing center-of-mass admixture.

Solving the interacting many-body Schrödinger equation is a hard problem. Even restricting the spatial domain to a two-dimensional plane does not lead to analytic solutions; the trouble-makers are the mutual particle-particle interactions. In the following we consider electrons in a quasi two-dimensional electron gas (2DEG), which are further confined either by a magnetic field or by an external harmonic confinement potential. For two electrons, this problem is solvable for specific values of the Coulomb interaction due to a hidden symmetry in the Hamiltonian; see the review by A. Turbiner and our application to two interacting electrons in a magnetic field.

For three and more electrons (to my knowledge) no analytical solutions are known. One standard computational approach is the configuration interaction (CI) method: diagonalizing the Hamiltonian in a variational trial space of Slater-determinantal states. Each Slater determinant consists of products of single-particle orbitals. Due to computer resource constraints, only a certain number of Slater determinants can be included in the basis set. One possibility is to include only trial states up to a certain excitation level of the non-interacting problem.

The use of Slater determinants as the CI basis set introduces severe distortions in the eigenenergy spectrum due to the intrusion of spurious states, as we discuss next. Spurious states have been extensively analyzed in the few-body problems arising in nuclear physics, but have rarely been mentioned in solid-state physics, where they do arise in quantum-dot systems. The basic defect of the Slater-determinantal CI method is that it brings along center-of-mass excitations. During the diagonalization, the center-of-mass excitations mix with the Coulomb interaction and lead to an inflated basis size along with a loss of precision for the eigenenergies of the excited states. Increasing the basis set does not uniformly reduce the error across the spectrum, since the enlarged CI basis set brings along states of high center-of-mass excitation. The cut-off energy then restricts the remaining basis size for the relative part.

The cleaner and leaner way is to separate the center-of-mass excitations from the relative-coordinate excitations, since the Coulomb interaction only acts along the relative coordinates. In fact, the center-of-mass part can be split off and solved analytically in many cases. The construction of the relative-coordinate basis states requires group-theoretical methods and is carried out for four electrons in Interacting electrons in a magnetic field in a center-of-mass free basis (arXiv:1410.4768). For three electrons, the importance of a spurious-state free basis set was emphasized by R. Laughlin and is one of the design principles behind the Laughlin wave function.
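The separation itself can be checked numerically in a toy model. The sketch below (my own 1D illustration with a softened Coulomb interaction, not the group-theoretical construction of the paper) diagonalizes two particles in a harmonic trap on a grid, once in the full (x1, x2) coordinates and once in separated center-of-mass and relative coordinates; since the interaction depends only on x1 − x2, the ground-state energies agree up to discretization error:

```python
# Sketch: for two trapped particles with an interaction depending only on
# x1 - x2, the energy separates into center-of-mass + relative parts.
# Toy 1D model (hbar = m = omega = 1), not the method of the paper.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

N, L = 64, 12.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def kinetic(n, step, mass):
    """Second-order finite-difference -1/(2m) d^2/dx^2 on an n-point grid."""
    main = np.full(n, 1.0 / (mass * step**2))
    off = np.full(n - 1, -0.5 / (mass * step**2))
    return diags([off, main, off], [-1, 0, 1])

def soft_coulomb(r, lam=1.0):
    return lam / np.sqrt(r**2 + 1.0)   # softened 1D Coulomb repulsion

# full two-particle Hamiltonian on the (x1, x2) grid
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = 0.5 * X1**2 + 0.5 * X2**2 + soft_coulomb(X1 - X2)
H = kron(kinetic(N, dx, 1.0), identity(N)) \
  + kron(identity(N), kinetic(N, dx, 1.0)) \
  + diags(V.ravel())
E_full = eigsh(H, k=1, which="SA")[0][0]

# separated problem: CM coordinate X = (x1+x2)/2 with mass 2 and potential X^2;
# relative coordinate x = x1-x2 with reduced mass 1/2 and potential x^2/4 + W(x)
H_cm = kinetic(N, dx, 2.0) + diags(x**2)
xr = np.linspace(-L, L, 2 * N)
dr = xr[1] - xr[0]
H_rel = kinetic(2 * N, dr, 0.5) + diags(xr**2 / 4 + soft_coulomb(xr))
E_sep = eigsh(H_cm, k=1, which="SA")[0][0] + eigsh(H_rel, k=1, which="SA")[0][0]

print(E_full, E_sep)   # agree up to finite-difference error
```

The relative-coordinate problem carries all the interaction physics, while the center-of-mass part stays an exactly solvable oscillator; this is precisely the structure a center-of-mass free basis exploits.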

Slow or fast transfer: bottleneck states in light-harvesting complexes

Light-harvesting complex II, crystal structure 1RWT from Liu et al (Nature 2004, vol. 428, p. 287), rendered with VMD. The labels denote the designation of the chlorophyll sites (601-614). Chlorophylls 601 and 605-609 are of chlorophyll b type, the others of type a.

In the previous post I described some of the computational challenges for modeling energy transfer in the light harvesting complex II (LHCII) found in spinach. Here, I discuss the results we have obtained for the dynamics and choreography of excitonic energy transfer through the chlorophyll network. Compared to the Fenna-Matthews-Olson complex, LHCII has twice as many chlorophylls per monomeric unit (labeled 601-614 with chlorophyll a and b types).
Previous studies of exciton dynamics had to stick to simple exponential decay models based on either Redfield or Förster theory for describing the transfer from the Chl b to the Chl a sites. The results are neither satisfying nor conclusive, since the transfer time differs widely depending on the method chosen (tens of picoseconds vs. picoseconds!).

Exciton dynamics in LHCII computed with various methods. HEOM denotes the most accurate method, while Redfield and Förster approximations fail.

Resolving the discrepancies between the various approximate methods requires a more accurate approach. With the accelerated HEOM at hand, we revisited the problem and calculated the transfer rates. We find slower rates than given by the Redfield expressions. A combined Förster-Redfield description is possible in hindsight by using HEOM to identify a suitable cut-off parameter (Mcr = 30/cm in this specific case).
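The idea behind such a combined description can be sketched in a few lines: couplings above the cutoff define strongly coupled domains (treated with Redfield theory), while transfer between domains is described by Förster-type rates. The coupling matrix below is entirely hypothetical, not the LHCII Hamiltonian:

```python
# Partition sites into strongly coupled domains with a coupling cutoff M_cr,
# as in combined Forster-Redfield schemes. The matrix J (cm^-1) is made up
# for illustration; it is not the LHCII coupling matrix.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

J = np.array([[ 0, 50,  5,  2,  1],
              [50,  0, 40,  3,  2],
              [ 5, 40,  0,  4,  8],
              [ 2,  3,  4,  0, 60],
              [ 1,  2,  8, 60,  0]], dtype=float)
M_cr = 30.0   # cutoff in cm^-1

strong = (np.abs(J) > M_cr).astype(int)      # adjacency of strong couplings
np.fill_diagonal(strong, 0)
n_domains, labels = connected_components(csr_matrix(strong), directed=False)
print(n_domains, labels)   # Redfield within each domain, Forster between them
```

HEOM needs no such partitioning, which is exactly why it can be used to determine where the cutoff should sit.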

Since the energy transfer is driven by the coupling of electronic degrees of freedom to vibrational ones, it is important to assess how the vibrational mode distribution affects the transfer. In particular it has been proposed that specifically tuned vibrational modes might promote a fast relaxation. We find no strong impact of such modes on the transfer; rather we see (independent of the detailed vibrational structure) several bottleneck states, which act as a transient reservoir for the exciton flux. The details and distribution of the bottleneck states strongly depend on the parameters of the electronic couplings and differ for the two most commonly discussed LHCII models, proposed by Novoderezhkin/Marin/van Grondelle and Müh/Madjet/Renger – both are considered in the article Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik).
Again, the correct assignment of the bottleneck states requires using HEOM and looking beyond the approximate rate equations.
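The reservoir effect of a bottleneck state can be illustrated with a toy rate chain (the rates below are hypothetical and the model is far simpler than the HEOM calculation): population flows quickly into an intermediate state but leaves it slowly, so the intermediate transiently stores most of the excitation.

```python
# Toy kinetic chain b -> m -> a with a slow outgoing rate: the middle
# state m becomes a bottleneck that transiently stores population.
# Rates are hypothetical (1/ps), not fitted to LHCII.
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_out = 1.0, 0.1

def rhs(t, p):
    b, m, a = p
    return [-k_in * b,              # donor decays fast
            k_in * b - k_out * m,   # bottleneck fills fast, drains slowly
            k_out * m]              # acceptor fills slowly

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 40.0, 400)
b, m, a = sol.sol(t)
print(m.max())   # the bottleneck transiently holds most of the population
```

In the real complex the bottlenecks are exciton states, not a single site, and their identification requires the full HEOM dynamics; the toy chain only illustrates the transient-reservoir behavior.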

High-performance OpenCL code for modeling energy transfer in spinach

With increasing computational power of massively-parallel computers, a more accurate modeling of the energy-transfer dynamics in larger and more complex photosynthetic systems (=light-harvesting complexes) becomes feasible – provided we choose the right algorithms and tools.

OpenCL cross-platform performance for tracking energy transfer in the light-harvesting complex II found in spinach, see Fig. 1 in the article. Shorter bars indicate higher performance. The program code was originally written for massively-parallel GPUs, but also performs well on the AMD Opteron setup. The Intel MIC OpenCL variant does not reach the peak performance (a different data layout seems to be required to benefit from autovectorization).

The diverse character of hardware found in high-performance computers (HPC) seemingly requires rewriting program code from scratch depending on whether we are targeting multi-core CPU systems, integrated many-core platforms (Xeon Phi/MIC), or graphics processing units (GPUs).

To avoid fragmenting our open quantum-system dynamics workhorse (see the previous GPU-HEOM posts) across the various HPC platforms, we have ported the GPU-HEOM CUDA code to the Open Compute Language (OpenCL). The resulting QMaster tool is described in our just-published article Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes (collaboration of Christoph Kreisbeck, Tobias Kramer, Alan Aspuru-Guzik). This post details the computational challenges and lessons learnt; the application to the light-harvesting complex II found in spinach will be the topic of the next post.

In my experience, it is not uncommon to develop a nice GPU application, for instance with CUDA, which is later scaled up to handle bigger problem sizes. With increasing problem size the memory demands also increase, and even the 12 GB provided by the Kepler K40 are eventually exhausted. Upon reaching this point, two options are possible: (a) distribute the memory across different GPU devices, or (b) switch to architectures which provide more device memory. Option (a) requires substantial changes to existing program code to manage the distributed memory access, while option (b) in combination with OpenCL requires (in the best case) only adapting the kernel-launch configuration to the different platforms.
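To see why the memory runs out, here is a back-of-the-envelope estimate of my own (the sizes are hypothetical, not the article's settings): a HEOM hierarchy truncated at depth D with K exponential bath modes contains binom(K + D, D) auxiliary density operators, each a complex dim × dim matrix.

```python
# Back-of-the-envelope HEOM memory estimate (my own sketch, not from the
# article): binom(K + D, D) auxiliary density operators, complex doubles.
from math import comb

def heom_memory_gb(dim, n_modes, depth):
    ados = comb(n_modes + depth, depth)   # number of auxiliary operators
    return ados * dim * dim * 16 / 1e9    # 16 bytes per complex double

# hypothetical LHCII-like setting: 14 sites (15x15 matrices with ground
# state), one exponential bath mode per site
for depth in (6, 8, 10, 12):
    print(depth, round(heom_memory_gb(15, 14, depth), 2), "GB")
```

With these (assumed) numbers the hierarchy stays around a gigabyte at moderate depth but blows past 12 GB at depth 12 – the combinatorial growth, not the system size itself, is what exhausts the device memory.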

The OpenCL device fission extension allows us to investigate the scaling of the QMaster code with the number of CPU cores. We observe a linear scaling up to 48 cores.

QMaster implements an extension of the hierarchical equations of motion (HEOM) method originally proposed by Tanimura and Kubo, which involves many (small) matrix-matrix multiplications. For GPU applications, the usage of local memory and the optimal thread grids for fast matrix-matrix multiplications have been described before and are used in QMaster (and the publicly available GPU-HEOM tool on nanohub.org). While for GPUs the best performance is achieved by using shared/local memory and assigning one thread to each matrix element, the multi-core CPU OpenCL variant performs better with fewer threads that each get more work done. Therefore, on the CPU machines we use a thread grid which computes one complete matrix product per thread (this is somewhat similar to the “naive” approach given in NVIDIA’s OpenCL programming guide, chapter 2.5). This strategy did not work very well for the Xeon Phi/MIC OpenCL case, which requires additional data-structure changes, as we learnt from discussions with the distributed-algorithms and HPC experts in the group of Prof. Reinefeld at the Zuse Institute in Berlin.
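The structure of this workload can be sketched in a few lines of numpy (schematic only, not QMaster's actual kernels, and with hypothetical sizes): every member of the hierarchy is propagated by the same small commutator, so the work is a batch of identical small matrix products – which maps onto one thread per matrix element on a GPU, or one full product per thread on a CPU.

```python
# Schematic of the HEOM workload (not QMaster code): many small, identical
# matrix products, one commutator per auxiliary density operator.
import numpy as np

n_aux, dim = 1000, 7       # hypothetical hierarchy size, FMO-like system dim
rng = np.random.default_rng(0)
rho = rng.standard_normal((n_aux, dim, dim)) \
    + 1j * rng.standard_normal((n_aux, dim, dim))
rho = rho + rho.conj().transpose(0, 2, 1)   # keep each member hermitian
H = rng.standard_normal((dim, dim))
H = H + H.T                                 # hermitian system Hamiltonian

# batched commutator -i[H, rho_k] for all hierarchy members at once
# (numpy broadcasts the small H over the leading batch axis)
drho = -1j * (H @ rho - rho @ H)
print(drho.shape)
```

The batch axis is what gets distributed over threads; whether one thread owns one matrix element or one whole product is exactly the GPU-vs-CPU tuning choice described above.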
The good performance and scaling across the 64-CPU AMD Opteron workstation positively surprised us and lays the groundwork for investigating the validity of approximations to the energy-transfer equations in the spinach light-harvesting system, the topic of the next post.

Flashback to the 80ies: filling space with the first quasicrystals

This post provides a historical and conceptual perspective on the theoretical discovery of non-periodic 3D space-fillings by Peter Kramer, which were later found experimentally and are now called quasicrystals. See also these previous blog entries for more quasicrystal references, and more background material here.
The following post is written by Peter Kramer.

Star extension of the pentagon. Fig. 1 from Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells by Peter Kramer, Acta Cryst. (1982) A38, 257-264.

When sorting out old texts and figures of mine from 1981, published in Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells, Acta Cryst. (1982) A38, 257-264, I came across the figure of a regular pentagon of edge length L, which I denote as p(L). In the left figure its red-colored edges are star-extended up to their intersections. Straight connection of these intersection points creates a larger blue pentagon. Its edges are scaled up by τ², with τ the golden-section number, so the larger pentagon we call p(τ²L). This blue pentagon is composed of the old red one plus ten isosceles triangles with golden proportion of their edge lengths. Five of them have edges t1(L): (L, τL, τL), five have edges t2(L): (τL, τL, τ²L). We find from Fig. 1 that these golden triangles may be composed face-to-face into their τ-extended copies as t1(τL) = t1(L) + t2(L) and t2(τL) = t1(L) + 2 t2(L).

Moreover we realize from the figure that the pentagon p(τ²L) can also be composed from golden triangles as p(τ²L) = t1(τL) + 3 t2(τL) = 4 t1(L) + 7 t2(L).
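These composition rules can be sanity-checked numerically: the areas on both sides of each rule must match, and each inflation scales areas by τ². The quick check below (my own, not from the paper) uses Heron's formula for the golden-triangle areas and the standard regular-pentagon area formula:

```python
# Numerical sanity check of the golden-triangle inflation rules:
# areas on both sides of each composition rule must agree exactly.
import math

tau = (1 + math.sqrt(5)) / 2          # golden section number

def heron(a, b, c):
    """Triangle area from its three edge lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def pentagon(s):
    """Area of the regular pentagon of edge length s."""
    return 0.25 * math.sqrt(5 * (5 + 2 * math.sqrt(5))) * s**2

L = 1.0
a1 = heron(L, tau * L, tau * L)              # area of t1(L)
a2 = heron(tau * L, tau * L, tau**2 * L)     # area of t2(L)

# t1(tau L) = t1(L) + t2(L)  and  t2(tau L) = t1(L) + 2 t2(L)
assert math.isclose(tau**2 * a1, a1 + a2)
assert math.isclose(tau**2 * a2, a1 + 2 * a2)
# p(tau^2 L) = 4 t1(L) + 7 t2(L)
assert math.isclose(pentagon(tau**2 * L), 4 * a1 + 7 * a2)
print("all inflation rules area-consistent")
```

Matching areas are of course only a necessary condition; the face-to-face fit of the pieces is what the figure itself demonstrates.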

This suggests that the golden triangles t1, t2 can serve as elementary cells of a triangle tiling that covers any range of the plane and provides the building blocks of a quasicrystal. Indeed we did prove this long-range property of the triangle tiling (see Planar patterns with fivefold symmetry as sections of periodic structures in 4-space).

An icosahedral tiling from star extension of the dodecahedron.

Star extension of the dodecahedron d(L) to the icosahedron i(τ²L) and further to d(τ³L) and i(τ⁵L), shown in Fig. 3 of the 1982 paper. The vertices of these polyhedra are marked by filled circles; extensions of edges are shown except for d(L).

In the same paper, I generalized the star extension from the 2D pentagon to the 3D dodecahedron d(L) of edge length L (see next figure) by the following prescription:

  • star extend the edges of this dodecahedron to their intersections
  • connect these intersections to form an icosahedron

The next star extension produces a larger dodecahedron d(τ³L), with edges scaled by τ³. In the composition of the larger dodecahedron I found four elementary polyhedral shapes, shown below. Even more amusing, I also resurrected the paper models I constructed in 1981 to actually demonstrate the complete space filling!
These four polyhedra compose their own copies when scaled by τ³. As in the 2D case, arbitrary regions of 3D space can be covered by the four tiles.

The four elementary cells shown in the 1982 paper, Fig. 4. The four shapes are named dodecahedron (d), skene (s), aetos (a) and tristomos (t). The paper models from 1981 are still around in 2014 and complete enough to fill 3D space without gaps. You can spot all shapes (d, s, a, t) in various scalings, and they systematically and gaplessly fill the large dodecahedron shell at the back of the table.

The only feature missing for quasicrystals is aperiodic long-range order, which eventually leads to sharp diffraction patterns with five- or ten-fold point symmetries forbidden for old-style crystals. In the construction shown here I strictly preserved central icosahedral symmetry. Non-periodicity then followed because full icosahedral symmetry and periodicity in 3D are incompatible.

In 1983 we found a powerful alternative construction of icosahedral tilings, independent of the assumption of central symmetry: the projection method from 6D hyperspace (On periodic and non-periodic space fillings of Em obtained by projection). This projection establishes the quasiperiodicity of the tilings, analyzed in line with Harald Bohr’s 1925 work Zur Theorie der fast periodischen Funktionen (I-III), as a variant of aperiodicity (more background material here).

Tutorial #1: simulate 2d spectra of light-harvesting complexes with GPU-HEOM @ nanoHub

The computation and prediction of two-dimensional (2d) echo spectra of photosynthetic complexes is a daunting task and requires enormous computational resources – if done without drastic simplifications. However, such computations are absolutely required to test and validate our understanding of energy transfer in photosynthesis. You can find some background material in the recently published lecture notes on Modelling excitonic-energy transfer in light-harvesting complexes (arXiv version) of the Latin American School of Physics Marcos Moshinsky.
The ability to compute 2d spectra of photosynthetic complexes without resorting to strong approximations is, to my knowledge, an exclusive privilege of the hierarchical equations of motion (HEOM) method due to its superior performance on massively-parallel graphics processing units (GPUs). You can find some background material on the GPU performance in the two conference talks Christoph Kreisbeck and I presented at the GTC 2014 conference (recorded talk, slides) and the first nanoHub users meeting.


Computed 2d spectra for the FMO complex for 0 picosecond delay time (upper panel) and 1 ps (lower panel). The GPU-HEOM computation takes about 40 min on the nanohub.org platform and includes all six Liouville pathways and averages over 4 spatial orientations.

  1. log in on nanoHub.org (it’s free!)
  2. switch to the gpuheompop tool
  3. click the Launch Tool button (java required)
  4. for this tutorial we use the example input for “FMO coherence, 1 peak spectral density“.
    You can select this preset from the Example selector.
  5. we stick with the provided Exciton System parameters and only change the temperature to 77 K to compare the results with our published data.
  6. in the Spectral Density tab, leave all parameters at the suggested values
  7. to compute 2d spectra, switch to the Calculation mode tab
  8. for compute: choose “two-dimensional spectra“. This brings up input masks for setting the directions of all dipole vectors; we stick with the provided values. However, we select Rotational averaging: “four shot rotational average” and activate all six Liouville pathways by setting ground st[ate] bleach reph[asing], stim[ulated] emission reph[asing], and excited st[ate] abs[orption] to yes, as well as their non-rephasing counterparts (attention! this might require resizing the input mask by pulling at the lower right corner)
  9. That’s all! Hit the Simulate button and your job will be executed on the carter GPU cluster at Purdue University. The simulation takes about 40 minutes of GPU time, which is orders of magnitude faster than any other published method of the same accuracy. You can close and reopen your session in between.
  10. Voila: your first FMO spectra appears.
  11. Now it’s time to change parameters. What happens at higher temperature?
  12. If you like the results or use them in your work for comparison, we (and the folks at nanoHub who generously develop and provide the nanoHub platform and GPU computation time) appreciate a citation. To make this step easy, a DOI number and reference information are listed at the bottom of the About tab of the tool page.

With GPU-HEOM, we – and now you (!) – can not only calculate the 2d echo spectra of the Fenna-Matthews-Olson (FMO) complex, but also reveal the strong link between the continuum part of the vibrational spectral density and the prevalence of long-lasting electronic coherences, as described in my previous posts.

GPU and cloud computing conferences in 2014

Two conferences related to GPU and cloud computing are currently open for registration. I will be attending and presenting at both; please email me if you want to get in touch at the meetings.