Friday, March 11, 2016

Symmetry reductions in loop quantum gravity

Tuesday, Dec. 8th
Norbert Bodendorfer, Univ. Warsaw 
Title: Quantum symmetry reductions based on classical gauge fixings 
PDF of the talk (1.4MB)
Audio [.wav 35MB]

Tuesday, Nov. 10th
Jedrzej Swiezewski, Univ. Warsaw 
Title: Developments on the radial gauge 
PDF of the talk (4MB)
Audio [.mp3 40MB]

by Steffen Gielen, Imperial College

A few months ago, physicists around the world celebrated the centenary of the field equations of general relativity, presented by Einstein to the Prussian Academy of Sciences in November 1915. Arriving at the correct equations was the culmination of an incredible intellectual effort by Einstein, driven largely by mathematical requirements that the new theory of gravitation (superseding Newton's, which ultimately proved incomplete) should satisfy. In particular, Einstein realized that its field equations should be generally covariant – they should take the same general form in any coordinate system that one chooses to use for the calculation, say whether one uses Cartesian, cylindrical, or spherical coordinates. This property sets the equations of general relativity apart from Newton's laws of motion, where changing coordinate system can lead to the appearance of additional “forces” such as centrifugal or Coriolis forces.

Many conferences were held honoring the anniversary of Einstein's achievement. What was discussed at those conferences was partially the historical context, the beauty of the form of the equations, or the precise mathematical and conceptual significance of general covariance. However, the most important legacy of general relativity and the main inspiration for modern research have been the new physical phenomena that appear in general relativity but not in Newtonian gravity: black holes are regions of spacetime where gravity becomes so strong that not even light can escape; the strong gravitational field outside a black hole leads to a time dilation so strong that an hour near a black hole can correspond to years on Earth, as used recently in the film Interstellar; and we now believe that the universe as a whole is expanding, and has been since the Big Bang, which is thought of as the beginning of space and time.

In order to understand these dramatic consequences of the Einstein equations, physicists had to find solutions to these equations. This is rather challenging in general: the Einstein equations are complicated differential equations for ten functions, depending on one time and three space dimensions, that encode the gravitational field of spacetime. Furthermore, the conceptually appealing property of general covariance means that apparently different solutions of the equations can be simply the same physical configuration looked at in different coordinates. Indeed, both issues – finding solutions to the equations at all and understanding their meaning – were challenges in the early days of the theory, when physicists tried to make sense of Einstein's equations.

Despite this formidable challenge, the Prussian lieutenant of the artillery Karl Schwarzschild, while serving on the Eastern front in World War I, was able to derive an exact solution of Einstein's equations in vacuum within weeks of their publication, much to the surprise of Einstein himself. This solution, now called the Schwarzschild solution, describes a black hole, and is one of the most important solutions of general relativity. What Schwarzschild did in order to solve the equations was to assume a symmetry of the solution: he assumed that the configuration of the gravitational field should be spherically symmetric. In spherical coordinates, where each point in space is specified by one radial and two angular coordinates, it should be independent of any change in the angular directions. This means that one describes space as a collection of regular, concentric spheres. What Schwarzschild found was that the spheres did not have to be glued together to simply give normal flat space, but one could form a curved geometry out of them, with curvature increasing as one heads towards the centre (eventually forming a black hole), while still solving Einstein's equations. To be able to do the calculation, Schwarzschild had to choose a particularly suitable coordinate system, hence exploiting the property of general covariance in his favor.
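In modern notation (standard today, not part of Schwarzschild's original presentation), the geometry he found can be written as the line element

```latex
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\,dt^2
     + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
     + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)
```

where M is the central mass. The angular part r² dΩ² is exactly the "collection of concentric spheres" described above, and the radius r = 2GM/c² marks the horizon of the black hole.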

This strategy of finding solutions is typical for practitioners of general relativity: cosmological solutions could similarly be found by assuming that the universe looks exactly the same at each point and in each direction in space (in mathematical terms, it is homogeneous and isotropic), and only changes in time. This reduces the problem of solving the Einstein equations to a much simpler task, and explicit solutions could be written down, again in a suitable coordinate system. These simplest solutions already exhibit the main features of our universe (overall expansion and an initial Big Bang singularity) and are fairly realistic – indeed our Universe seems to display only small variations between different large-scale regions, and at the very largest scales is, within an approximation, well described by a geometry that simply looks the same everywhere in space.
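For the homogeneous and isotropic case, the metric and the resulting Einstein equation take a particularly simple form (quoted here for the spatially flat case, as a standard illustration rather than anything specific to the talks):

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right),
\qquad
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho
```

All the dynamics is carried by a single function, the scale factor a(t), whose growth describes the overall expansion; the second relation is the Friedmann equation, with ρ the energy density of the matter filling the universe.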

Loop quantum gravity is an approach to the quantization of general relativity, aiming to extend general relativity by making it compatible with quantum mechanics. What distinguishes it from other approaches is that the main property of general relativity, general covariance, is taken as a central guiding principle in the construction of a quantum theory. In some respects, the status of loop quantum gravity can be compared to the early days of general relativity: while it is now known that a quantum theory compatible with general covariance can be constructed, and its mathematical structure is well understood, one now needs to understand the new physical phenomena implied by the quantization, beyond general relativity. Just as in the time after November 1915, today's physicists should find explicit solutions to the equations of loop quantum gravity that can be used to study the physical implications of the (relatively) new framework.

One of the main successes of loop quantum gravity has been its application to cosmology. Homogeneous solutions of the Einstein equations that approximately describe our universe have been shown to receive modifications once loop quantum gravity techniques are used, leading to a resolution of the Big Bang singularity by a Big Bounce, and potentially observable quantum effects. However, the resulting models of the universe are not solutions of the full theory of loop quantum gravity: rather, they arise from quantization of a reduced set of solutions of classical general relativity with loop quantum gravity techniques. There is no reason, in general, to expect that these are exact solutions of loop quantum gravity. Quantum mechanics is funny: quantization can lead to many inequivalent theories, depending on how one decides to do it. By assuming that the universe is homogeneous from the outset, one obtains a quantum theory of only a finite, rather than an infinite number of “degrees of freedom”. It is well known that quantum theories can behave differently depending on whether they have a finite or infinite number of degrees of freedom.
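A standard way to summarize the loop quantum cosmology result mentioned above is through the effective Friedmann equation (quoted here in its widely used form, not taken from the talks):

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)
```

The correction factor (1 - ρ/ρ_c), with ρ_c a critical density of Planckian size, is negligible at low densities but forces the expansion rate to vanish when ρ reaches ρ_c: instead of a Big Bang singularity, the universe bounces.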

In their ILQGS seminars, Jedrzej and Norbert presented work towards resolving this tension. Namely, they presented an approach in which, similar to how Schwarzschild and his contemporaries proceeded 100 years ago, one identifies a suitable coordinate system in which to represent the spacetime metric, which encodes the gravitational field. In a quantum theory where general covariance is implemented fundamentally, this means one has to perform a “gauge-fixing”: the freedom of changing the coordinate system must be “fixed” consistently in the quantum theory. Gauge-fixings mean that one works with fewer variables, and has to worry less about different but physically equivalent solutions that are only related by changes of the coordinate system. Achieving them is often quite hard technically. Together with collaborators in Warsaw, Jedrzej and Norbert have made progress on this issue in recent years.

The second step, after a convenient coordinate system (think of spherical coordinates for treating the Schwarzschild black hole) has been chosen, is to do a “symmetry reduction” in the full quantum theory: rather than on the most general quantum universes, one now focusses on those that have a certain symmetry property. Norbert showed a detailed strategy for how to do this. One identifies an equation satisfied by all classical solutions with the desired symmetry, such as isotropy (i.e. looking the same in all directions). The quantum version of this equation is then imposed in loop quantum gravity, leading to a full quantum definition of symmetries like “isotropy” or “spherical symmetry” in loop quantum gravity. The obvious applications of the mechanism, which are being explored at the moment, are identifying cosmological and black hole solutions in loop quantum gravity, studying their dynamics, and verifying whether the resulting effects are in accord with what has been found in the simpler finite-dimensional quantum models described above. In particular, one would like to know whether singularities inside black holes and at the Big Bang, where Einstein's theory simply breaks down, can be resolved by quantum mechanics, as is hoped.

Jedrzej also showed how the methods developed in different “gauge-fixings” for classical general relativity could be used to resolve a disputed issue in the context of the AdS/CFT correspondence in string theory, where one faces a similar problem of fixing the huge freedom under changes in the coordinate system in order to identify the invariant physical properties of spacetime. In particular, a certain choice of gauge-fixing has been discussed in AdS/CFT, which leads to unfamiliar consequences such as non-locality in the gauge-fixed version of the theory. The tools developed by Jedrzej and collaborators could be used to clarify precisely how this non-locality occurs. They hence provide a somewhat unusual example of the application of methods developed for loop quantum gravity in a string theory-motivated context, clearly a positive example that can inspire more work on closer connections between methods used in these different communities. 

Monday, May 25, 2015

Separability and quantum mechanics

Tuesday, Apr 21st
Fernando Barbero, CSIC, Madrid 
Title: Separability and quantum mechanics 
PDF of the talk (758k)
Audio [.wav 20MB]

by Juan Margalef-Bentabol, UC3M-CSIC, Madrid

Classical vs Quantum: Two views of the world

In classical mechanics it is relatively straightforward to get information from a system. For instance, if we have a bunch of particles moving around, we can ask ourselves: where is its center of mass? What is the average speed of the particles? What is the distance between two of them? In order to ask and answer such questions in a precise mathematical way, we need to know all the positions and velocities of the system at every moment; in the usual jargon, we need to know the dynamics over the state space (also called configuration space for positions and velocities, or phase space when we consider positions and momenta). For example, the appropriate way to ask for the center of mass is given by the function that, for a specific state of the system, gives the weighted mean of the positions of all the particles. Also, the total momentum of the system is given by the function consisting of the sum of the momenta of the individual particles. Such functions are called observables of the theory; an observable is therefore defined as a function that takes all the positions and momenta and returns a real number. Among all the observables there are some that can be considered fundamental. A familiar example is provided by the generalized positions and momenta, denoted q and p.
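The idea that an observable is just a real-valued function on the state space can be made concrete in a few lines of code (a toy sketch with made-up values, not anything from the text):

```python
import numpy as np

# A toy phase-space state: three particles on a line (all numbers are
# invented for illustration).
masses = np.array([1.0, 2.0, 1.0])
positions = np.array([0.0, 1.0, 3.0])
momenta = np.array([0.5, -1.0, 0.5])

def center_of_mass(x, m):
    """Observable: mass-weighted mean of the positions."""
    return np.sum(m * x) / np.sum(m)

def total_momentum(p):
    """Observable: sum of the individual momenta."""
    return np.sum(p)

com = center_of_mass(positions, masses)   # (1*0 + 2*1 + 1*3) / 4 = 1.25
p_tot = total_momentum(momenta)           # 0.5 - 1.0 + 0.5 = 0.0
```

Each observable eats the full state (all positions and momenta) and returns a single real number, exactly as in the definition above.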

In a quantum setting answering, and even asking, such questions is however much trickier. It can be properly justified that the needed classical ingredients have to be significantly changed:
  1. The state space is now much more complicated: instead of positions and velocities/momenta we need a (usually infinite-dimensional) complex vector space with an inner product that is complete. Such a vector space is called a Hilbert space H, and the vectors of H are called states (up to multiplication by a complex number).
  2. The observables are functions from H to itself that "behave well" with respect to the inner product (these are called self-adjoint operators). Notice in particular that the outputs of the quantum observables are complex vectors, not numbers anymore!
  3. In a physical experiment we do obtain real numbers, so somehow we need to retrieve them from the observable A associated with the experiment. The way to do this is by looking at the spectrum of A, which consists of a set of real numbers called eigenvalues, associated with some vectors called eigenvectors (actually what the theory provides is a probability amplitude, whose absolute value squared is the probability of obtaining a specific eigenvector as an output).
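A finite-dimensional sketch (two-dimensional and purely illustrative) shows how the three ingredients fit together: a state vector, a self-adjoint operator, and the real eigenvalues with their Born-rule probabilities:

```python
import numpy as np

# A self-adjoint operator on a 2-dimensional Hilbert space (illustrative).
A = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, -1.0]])

# Self-adjointness: A equals its conjugate transpose.
assert np.allclose(A, A.conj().T)

# The spectrum of a self-adjoint operator is real, with orthonormal
# eigenvectors (eigh returns them sorted).
eigenvalues, eigenvectors = np.linalg.eigh(A)

# A normalized state, and the probability of each measurement outcome:
# the squared modulus of the amplitude <eigenvector|state>.
psi = np.array([1.0, 0.0], dtype=complex)
probs = np.abs(eigenvectors.conj().T @ psi) ** 2
```

The probabilities sum to one, and the only possible outcomes of a measurement of A are its (real) eigenvalues.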
The questions that arise naturally are: how do we choose the Hilbert space? How do we introduce fundamental observables analogous to those of classical mechanics? In order to answer these questions we need to take a small detour and talk a little bit about the algebra of observables.

Algebra of Observables

Given two classical observables, we can construct another one by means of different methods. Some important ones are:
  • By adding them (they are real functions)
  • By multiplying them
  • By a more sophisticated procedure called the Poisson bracket
The last one turns out to be fundamental in classical mechanics and plays an important role within the Hamiltonian form of the dynamics of the system. A basic fact is that the set of observables endowed with the Poisson bracket forms a Lie algebra (a vector space with a rule to obtain an element out of two other ones, satisfying some natural properties). The fundamental observables behave really well with respect to the Poisson bracket, namely they satisfy simple commutation relations: if we consider the i-th position observable and "Poisson-multiply" it by the j-th momentum observable, we obtain the constant function 1 if i = j, or the constant function 0 if i ≠ j.
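These canonical relations can be checked symbolically. The sketch below (an illustration, not from the original text) implements the Poisson bracket as a sum of partial derivatives and recovers the constant functions 1 and 0 for the two cases:

```python
import sympy as sp

n = 2  # two degrees of freedom are enough to see both cases
q = sp.symbols(f"q1:{n + 1}")
p = sp.symbols(f"p1:{n + 1}")

def poisson_bracket(f, g):
    """{f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)."""
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i])
               for i in range(n))

same_index = poisson_bracket(q[0], p[0])    # {q1, p1} = 1
mixed_index = poisson_bracket(q[0], p[1])   # {q1, p2} = 0
```

The same function also verifies the Lie-algebra properties (antisymmetry, Jacobi identity) on any concrete observables one cares to write down.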

One of the best approaches to constructing a quantum theory associated with a classical one is to reproduce at the quantum level some features of its classical formulation. One way to do this is to define a Lie algebra for the quantum observables such that some of these observables mimic the behavior of the Poisson brackets of the classical fundamental observables. This procedure (modulo some technicalities) is known as finding a representation of this algebra. In order to do this, one has to choose:
  1. A Hilbert space H.
  2. Some fundamental observables that reproduce the canonical commutation relations when we consider the commutator of operators.
In standard Quantum Mechanics the fundamental observables are positions and momenta. It may seem that there is great ambiguity in this procedure; however, there is a central theorem due to Stone and von Neumann that states that, under some reasonable hypotheses, all the representations are essentially the same.


One of the hypotheses of the Stone-von Neumann theorem is that the Hilbert space must be separable. This means that it is possible to find a countable set of orthonormal vectors in H (called a Hilbert basis) such that any state (vector) of H can be written as an appropriate countable sum of them. A separable Hilbert space, despite being infinite dimensional, is not "too big", in the sense that there are Hilbert spaces with uncountable bases that are genuinely larger. The separability assumption seems natural for standard quantum mechanics, but in the case of quantum field theory, with infinitely many degrees of freedom, one might expect to need much larger Hilbert spaces, i.e. non separable ones. Somewhat surprisingly, most quantum field theories can be handled with our beloved and "simple" separable Hilbert spaces, with the remarkable exception of LQG (and its derivative LQC), where non separability plays a significant role. Hence it seems interesting to understand what happens when one considers non separable Hilbert spaces [3] in the realm of the quantum world. A natural and obvious way to acquire the necessary intuition is by first considering quantum mechanics on a non-separable Hilbert space.

The Polymeric Harmonic Oscillator

The authors of [2,3] discuss two inequivalent (among the infinitely many) representations of the algebra of fundamental observables which share an unfamiliar feature, namely, in one of them (called the position representation) the position observable is well defined but the momentum observable does not even exist; in the momentum representation the roles of positions and momenta are exchanged. Notice that in this setting, some familiar features of quantum mechanics are lost for good. For instance, the position-momentum Heisenberg uncertainty formula makes no sense at all, since it requires both the position and momentum observables to be defined.

To improve the understanding of such systems and gain some insight for the application to LQG and LQC, the authors of [1] (re)study the one-dimensional Polymeric Harmonic Oscillator (PHO), i.e. the harmonic oscillator on a non separable Hilbert space (known in this context as a polymeric Hilbert space). As the space is non separable, any Hilbert basis must be uncountable. This leads to some unexpected behaviors that can be used to obtain exotic representations of the algebra of fundamental observables.

The motivation to study the PHO is kind of the same as always: the HO, in addition to being an excellent toy model, is a good approximation to any 1-dimensional mechanical system close to its equilibrium points. Furthermore, free quantum field theories can be thought of as ensembles of infinitely many independent HO's. There are however many ways to generalize the HO to a non separable Hilbert space, and also many equivalent ways to realize a concrete representation in terms of different choices of Hilbert space.
The eigenvalue equations in these different spaces take different forms: in some of them they are difference equations, whereas in others they have the form of the standard Schrödinger equation with a periodic potential. It is important to notice nonetheless that writing Hamiltonian observables in this framework turns out to be really difficult, as only one of the position or momentum observables can be strictly represented. This means that for the other one it is necessary to rely on some kind of approximation (that can be obtained by introducing an arbitrary scale) and choosing a periodic potential with minima matching those of the quadratic potential. The huge uncertainty in this procedure has been highlighted by Corichi, Zapata, Vukašinac and collaborators. The standard choice leads to an equation known as the Mathieu equation, but other simple choices have been explored, such as the one shown in the figure.
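The band structure of the Mathieu equation can be explored directly, since its characteristic values are available in scipy. The sketch below is hedged: identifying the allowed bands with the intervals [a_m, b_{m+1}] is the standard convention for the Mathieu stability chart with q > 0, and the parameter q (the depth of the periodic potential) plays the role of the arbitrary scale mentioned above.

```python
from scipy.special import mathieu_a, mathieu_b

def band_edges(q, n_bands=3):
    """Edges of the lowest allowed bands of the Mathieu equation
    y'' + (a - 2 q cos 2x) y = 0, taken as [a_m, b_{m+1}] for q > 0,
    where a_m, b_m are the even/odd Mathieu characteristic values."""
    return [(mathieu_a(m, q), mathieu_b(m + 1, q)) for m in range(n_bands)]

shallow = band_edges(1.0)  # shallow periodic potential: wide bands
deep = band_edges(5.0)     # deeper potential: the lowest band narrows

lowest_width_shallow = shallow[0][1] - shallow[0][0]
lowest_width_deep = deep[0][1] - deep[0][0]
```

As the potential deepens, the lowest band shrinks towards a single sharp level, mimicking the ground state of the ordinary HO, yet each band always retains a continuum of eigenvalues.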

Figure: Energy eigenvalues (bands) of a polymerized harmonic oscillator. The horizontal axis shows the position (or the momentum, depending on the chosen representation), the vertical axis is the energy, and the red line represents the particular periodic extension of the potential used to approximate the usual quadratic potential of the HO. The other lines plotted in this graph correspond to auxiliary functions that can be used to locate the edges of the bands that define the point spectrum in the present example.

As we have already mentioned, orthonormal bases in non separable Hilbert spaces are uncountable. A consequence of this is that the orthonormal basis provided by the eigenstates of the Hamiltonian must be uncountable, i.e. the Hamiltonian must have an uncountable infinity of eigenvalues (counted with multiplicity). A somewhat unexpected result, which can be proved by invoking classical theorems of functional analysis in non-separable Hilbert spaces, is that these eigenvalues are gathered in bands. It is important to point out here that only the lowest-lying part of the spectrum is expected to mimic reasonably well that of the standard HO; however, it is also important to keep in mind the huge difference that persists: even the narrowest bands contain a continuum of eigenvalues.

Some physical consequences

The fact that the spectrum of the polymerized harmonic oscillator displays this band structure is relevant for some applications of polymerized quantum mechanics. Two main issues were mentioned in the talk. On one hand the statistical mechanics of polymerized systems must be handled with due care. Owing to the features of the spectrum, the counting of energy eigenstates necessary to compute the entropy in the microcanonical ensemble is ill defined. A similar problem crops up when computing the partition function of the canonical ensemble. These problems can probably be circumvented by using an appropriate regularization and also by relying on some superselection rules that eliminate all but a countable subset of energy eigenstates of the system.

A setting where something similar can be done is in the polymer quantization of the scalar field (already considered by Husain, Pawłowski and collaborators). As this system can be thought of as an infinite ensemble of harmonic oscillators, the specific features of their (polymer) quantization will play a significant role. A way to avoid some difficulties here also relies on the elimination of unwanted energy eigenvalues by imposing superselection rules as long as they can be physically justified.


[1] J.F. Barbero G., J. Prieto and E.J.S. Villaseñor, Band structure in the polymer quantization of the harmonic oscillator, Class. Quantum Grav. 30 (2013) 165011.
[2] W. Chojnacki, Spectral analysis of Schrödinger operators in non-separable Hilbert spaces, Rend. Circ. Mat. Palermo (2), Suppl. 17 (1987) 135-151.
[3] H. Halvorson, Complementarity of representations in quantum mechanics, Stud. Hist. Phil. Mod. Phys. 35 (2004) 45-56.

Tuesday, May 5, 2015

Cosmology with group field theory condensates

Tuesday, Feb 24th
Steffen Gielen, Imperial College 
Title: Cosmology with group field theory condensates 
PDF of the talk (136K)
Audio [.wav 39MB]

by Mercedes Martín-Benito, Radboud University

One of the most important open questions in physics is how gravity (or in other words, the geometry of spacetime) behaves when energy densities are huge, of the order of the Planck density. Our most reliable theory of gravity, general relativity, fails to describe gravitational phenomena in high energy density regimes, as it generically leads to singularities. These regimes are reached, for example, at the origin of the universe or in the interior of black holes, and therefore we do not yet have a consistent explanation for these phenomena. We expect quantum gravity effects to be important in such situations, but general relativity, being a theory that treats the geometry of spacetime as classical, does not take those quantum gravity effects into account. Thus, in order to describe black holes or the very early universe in a physically meaningful way, it seems unavoidable to quantize gravity.

The quantization of gravity not only requires attaining a mathematically well-described theory with predictive power, but also the comparison of the predictions with observations to check that they agree. The regimes where quantum gravity plays a fundamental role, such as black holes or the early universe, might seem very far from our observational or experimental reach. Nevertheless, thanks to the big progress that precision cosmology has undergone in the last decades, in the near future we may be able to get observational data about the very initial instants of the universe that could be sensitive to quantum gravity effects. We need to get prepared for that, putting our quantum gravity theories at work in order to extract cosmological predictions from them.

This is the main goal of Steffen's analysis. He bases his research on the approach to quantum gravity known as Group Field Theory (GFT). GFT defines a path integral for gravity, namely, it replaces the classical notion of a unique solution for the geometry of spacetime with a sum over an infinity of possibilities to compute a quantum amplitude. The formalism it uses is pretty much like the usual quantum field theory formalism employed in particle physics. There, given a process involving particles, the different possible interactions contributing to that process are described by so-called Feynman diagrams, which are later summed up in a consistent way to finally yield the transition amplitude of the process that we are trying to describe. GFT follows that strategy. The corresponding Feynman diagrams are spinfoams, and represent the different dynamical processes that contribute to a particular spacetime configuration. GFT is thus linked to Loop Quantum Gravity (LQG), since spinfoams are one main proposal for defining the dynamics of LQG. The GFT Feynman expansion extends and completes this definition of the LQG dynamics by trying to determine how these diagrams must be summed up in a controlled way to obtain the corresponding quantum amplitude.

GFT is a fundamentally discrete theory, with a large number of microscopic degrees of freedom. These degrees of freedom might organize themselves, following some collective behavior, to lead to different phases of the theory. The hope is to find a phase that in the continuum limit agrees with having a smooth spacetime as described by the classical theory of general relativity. In this way, we would make the link between the underlying quantum theory and the classical one that explains very well the gravitational phenomena in regimes where quantum gravity effects are negligible. To understand this, let us make an analogy with a more familiar theory: hydrodynamics.

We know that the fundamental microscopic constituents of a fluid are molecules. The dynamics of these micro-constituents is intrinsically quantum; however, these degrees of freedom display a collective behavior that leads to macroscopic properties of the fluid, such as its density, its velocity, etc. In order to study these properties it is enough to apply the classical theory of hydrodynamics. However, we know that it is not the fundamental theory describing the fluid, but an effective description coming from an underlying quantum theory (condensed matter theory) that explains how the atoms form the molecules, and how these interact among themselves giving rise to the fluid.

The continuum spacetime that we are used to might emerge, in a similar way to the example of the fluid, from the collective behavior of many, many quantum building blocks, or atoms of spacetime. This is, in plain words, the point of view employed in the GFT approach to quantum gravity.

While GFT is still under construction, it is mature enough to try to extract physics from it. With this aim, Steffen and his collaborators are working on obtaining effective dynamics for cosmology starting from the general framework of GFT. The simplest solutions of the Einstein equations are those with spatial homogeneity. These turn out to describe cosmological solutions, which approximate rather well at large scales the dynamics of our universe. Then, in order to get effective cosmological equations from their GFT, they postulate very particular quantum states that, involving all the degrees of freedom of the GFT, are states with collective properties that can give rise to a homogeneous and continuum effective description. The similarities between GFT and condensed matter physics allow Steffen and collaborators to exploit the techniques developed in condensed matter. In particular, based on experience with Bose-Einstein condensates, the states that they postulate can be seen as condensates.

The collective behavior that the degrees of freedom display leads, in fact, to a homogeneous description in the macroscopic limit. The effective equations that they obtain agree in the classical limit with cosmological equations, but remarkably retaining the main effects coming from the underlying quantum theory. More specifically, these effective equations know about the fundamental discreteness, as they explicitly get corrections (non-present in the standard classical equations) that depend on the number of quanta (spacetime “atoms”) in the condensate. These results form the basis of a general programme for extracting effective cosmological dynamics directly from a microscopic non-perturbative theory of quantum gravity. 

Monday, November 24, 2014

Quantum theory from information inference principles

Tuesday, Nov 11th
Philipp Hoehn, Perimeter Institute 
Title: Quantum theory from information inference principles 
PDF of the talk (800k)
Audio [.wav 40MB]

by Matteo Smerlak, Perimeter Institute

When a new theory enters the scene of physics, a succession of events normally takes place: at first, nobody cares; then a minority starts playing with the maths while the majority insists that the theory is obviously wrong; farther down the road, we find the majority using the maths on a daily basis and all arguing that the theory is so beautiful, it can only be right; along the way, thanks to many years of practice, a new kind of intuition grows out of the formalism, and our entire picture of reality changes accordingly. This is the process of science.

For some reason, though, the eventual shift from formalism to intuition never happened for quantum mechanics (QM). Ninety years after its discovery, specialists still call QM “weird”, teachers still quote Feynman claiming that “nobody really understands QM”, and philosophers still discuss whether QM requires us to be “antirealist”, “neo-Kantian”, “Bayesian”… you name it. Niels Bohr wanted new theories to be “crazy enough”, but it seems this one is just too crazy. And yet it works!

In the face of this puzzle, a school of thought initiated by Birkhoff and von Neumann in the thirties has declared it its mission to reconstruct QM. The idea is simple: if you don’t get how the machine works, then roll up your sleeves, take the machine apart, and build it again—from scratch. Indeed this is how Einstein dealt with the symmetry group of Maxwell’s equations (and its mysterious action on lengths and durations): he found two intuitive physical principles—the relativity principles—and derived the Lorentz group (the set of symmetries of Maxwell's equations) from them. Thus special relativity was “really understood”.

Much recent work towards a reconstruction of QM has taken place within a framework called “generalized probability theories” (GPT). This approach elaborates on basic notions such as preparations, transformations and measurements. The main achievement of GPT has been to locate QM within a more general landscape of possible modifications of classical probability theory. It has shown, for instance, that QM is not the most non-local theory consistent with what is known as the no-signaling property: stronger correlations than quantum entanglement are in principle possible, though they are not realized in nature. To understand what is, we must know what else could have been—thus speak GPT proponents.
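The statement that QM is not maximally non-local can be made quantitative with the CHSH inequality: local classical theories obey |S| ≤ 2, no-signaling alone would allow |S| up to 4, but QM only reaches 2√2 (Tsirelson's bound). The sketch below is the standard textbook calculation for the two-qubit singlet state, not anything taken from the talk:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# The two-qubit singlet state (|01> - |10>) / sqrt(2).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(a, b):
    """E(a, b) = <psi| A(a) (x) B(b) |psi>."""
    op = np.kron(spin(a), spin(b))
    return float(np.real(psi.conj() @ op @ psi))

# The standard optimal angle choices for CHSH.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = (correlation(a0, b0) + correlation(a0, b1)
     + correlation(a1, b0) - correlation(a1, b1))
```

The result sits strictly between the classical bound of 2 and the no-signaling bound of 4, which is exactly the landscape GPT makes precise.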

Philipp uses a different language for his reconstruction of QM: instead of measurements and states, he talks about questions and answers. The semantic shift is not innocent: while a “measurement” uncovers the intrinsic state of a system, a “question” only brings information to whoever asks it—that is, a question relates to two entities (the system and the observer/interrogator) rather than just one (the system). Because there isn’t anybody out there to ask questions about everything, there is no such thing as the “state of the universe”, Philipp says!

This so-called “relational” questions/answers approach to QM was advocated twenty years ago by Rovelli, who emphasized its similarity with the structure of gravitation (time is relative, remember?). He also proposed two basic informational principles: one states that the total information that an observer O can gather about a system S is limited; the second specifies that, even when O has obtained the maximum amount of information about S, she can still learn something about S by asking other, “complementary” questions. Thence non-commuting operators! Similar ideas were discussed independently by Zeilinger and Brukner—and Philipp embraces them wholeheartedly.

But he also takes a big step further. Adding four more postulates to Rovelli’s (which he calls completeness, preservation, time evolution and locality), Philipp shows how to reconstruct the set Σ of all possible states of S relative to O (together with its isometry group, representing possible time evolutions). For a quantum system allowing only one independent question—a qubit—Σ is a three-dimensional ball, the Bloch sphere. (Note that a 3-ball is a much bigger space than a 1-ball, the state space of a classical bit—enter quantum computing…) For systems with more independent questions, i.e. N qubits, Σ is the mathematical structure known as the convex cone over some complex projective space—not quite what is known as a Calabi-Yau manifold, but still a challenge for the mind to picture.
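The statement that a qubit's state space is a 3-ball can be checked directly with the standard Bloch parametrization (this is textbook material, not Philipp's reconstruction itself): a density matrix ρ = (I + r·σ)/2 is a valid quantum state exactly when the Bloch vector r lies inside the unit ball.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # Pauli x
         np.array([[0, -1j], [1j, 0]], dtype=complex), # Pauli y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # Pauli z

def bloch_state(r):
    """Density matrix rho = (I + r . sigma) / 2 for a Bloch vector r."""
    return 0.5 * (I2 + sum(ri * si for ri, si in zip(r, sigma)))

def is_valid_state(rho, tol=1e-12):
    """A valid state is Hermitian, has unit trace, and is positive semidefinite."""
    herm = np.allclose(rho, rho.conj().T)
    unit_trace = np.isclose(np.trace(rho).real, 1.0)
    positive = np.min(np.linalg.eigvalsh(rho)) >= -tol
    return herm and unit_trace and positive

print(is_valid_state(bloch_state([0.3, 0.4, 0.5])))  # True: |r| < 1
print(is_valid_state(bloch_state([1.0, 1.0, 1.0])))  # False: |r| > 1
```

Points on the boundary sphere (|r| = 1) are the pure states; a classical bit, by contrast, only gets the 1-dimensional segment between its two pure states.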

N=2 turns out to be the most difficult case: once this one is solved—Philipp says this took him a full year, with input from his collaborator Chris Wever—higher N’s follow rather straightforwardly. This reflects a crucial aspect of QM: quantum systems are “monogamous”, meaning that they can establish strong correlations (aka “entanglement”) with just one partner at a time. Philipp’s questions/answers formulation provides a new and detailed understanding of this peculiar correlation structure, which he represents as a spherical tiling. “QM is beautiful!”, says Philipp.

One limitation of Philipp’s current approach—also pointed out by the audience—is the restriction to binary (or yes/no) questions. A spin-1 particle, for instance, falls outside this framework, for it can give three different answers to the question “what is your spin in the z direction?”, namely “up”, “down” or “zero”. Can Philipp deal with such a ternary question, and reconstruct the 8-dimensional state space of a quantum “trit”? We wish him to find the answer within… less than a year!

Monday, April 28, 2014

Holographic special relativity: observer dependent geometry

Derek Wise, FAU Erlangen
Title: Holographic special relativity: observer space from conformal geometry 
PDF of the talk (600k) Audio [.wav 38MB] Audio [.aif 4MB] 

by Sean Gryb, Radboud University


In Roman mythology, Janus was the god of gateways, transitions, and time, whose two distinct faces are depicted peering in opposite directions, as if bridging two different regions (or epochs) of the Universe. The term “Janus-faced” has come to mean a person or thing that simultaneously embodies two polarized features, and the Janus head has come to represent the embodiment of these two distinct features into one.

In this talk (based on the paper [1]), Derek Wise explores the possibility that spacetime itself might be Janus-faced. He explores an intriguing relationship between the structure of expanding spacetime and the scale-invariant description of a sphere. What he finds is a mathematical relationship providing a bridge between these two Janus faces that distinctly represent events in the Universe. This bridge is remarkably similar to the picture of reality proposed by the holographic principle and, in particular, the AdS/CFT correspondence where, on one side, there is the usual spacetime description of events and, on the other, there is a way to imprint these events onto the 3-dimensional boundary of this spacetime.

Aside from providing an alternative to spacetime, Derek's picture may even help illuminate the deeper structures behind a recent formulation of general relativity (GR) called Shape Dynamics, which I will come to at the end of this post. But to begin, I will try to explain Derek's result by first giving a description of the spacetime aspect of the Janus face and then describe how a link can be established to a completely distinct face, which, as we will see, is a description of events in the Universe that is completely free of any notion of scale. The key points of the discussion are summarized beautifully in the depiction of Janus given below, by Marc Ngui, who has provided all the images for this post. The diagram shows how, as I will describe later, events seen by observers in spacetime can be described by information on the boundary. I encourage the reader to revisit this image as its main elements are progressively explained throughout the text.

Relativity, Observers, and Spacetime

In 1908, Hermann Minkowski made a great discovery: Einstein's new theory of Special Relativity could be cast into a beautiful framework, one that Minkowski recognized as a kind of union of space and time. In his own words: “space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”[2] To understand what Minkowski meant, let's go back to 1904-1905 in order to retrace the discoveries that spawned Minkowski's revolution.

Relativity concerns the way in which different observers organize information about ‘when’ and ‘where’ events take place. Einstein realized that this system of organization should have two properties: i) it should work the same way for each observer, and ii) it should involve a set of rules that allows different observers to consistently compare information about the same events. This means that different observers don't necessarily need to agree on when and where a particular event took place, but they do need to agree on how to compare the information gathered by different observers. Einstein expressed this requirement in his principle of relativity, to which he gave primary importance within physical theories. The key point is that relativity is fundamentally a statement about observers and how they collect and compare information about events. Minkowski's conception of spacetime comes afterwards, and it comes about through the specific mathematical properties of the rules used to collect and compare the relevant information.

To try to understand how spacetime works, we will use a slightly more modern version of spacetime than the one used by Minkowski — one with all the same essential properties as the original, but which can accommodate the observed accelerated expansion of space. This kind of spacetime was first studied by Willem de Sitter, and is named de Sitter (dS) spacetime after him. It has the basic shape depicted by the blue grid in the Janus image above. Because this space is curved, it is most convenient to describe it by embedding it in a higher-dimensional flat space (just like the 2D surface of a sphere depicted in a 3D space). This means that we can label events in this spacetime by 5 numbers: 4 space components, labeled (x, y, z, w), and one time component, t, that obey the relation

x² + y² + z² + w² − t² = ℓ².    (1)

This restriction (which serves as the definition of this spacetime) means that the 4 space components are not all independent. Indeed, the single constraint above removes one independent component, leaving the 3 space directions we know and love. The parameter ℓ is related to the cosmological constant and dictates how fast space is expanding. Adjusting its value changes the shape of the spacetime as illustrated in the figure below.

The middle spacetime in blue depicts a typical dS spacetime. Increasing the parameter ℓ decreases the rate of expansion so that, if ℓ → ∞, the spacetime barely expands at all and looks more like the purple cylinder on the right. The opposite extreme, when ℓ → 0, is the yellow light cone, which is named that way because the space is expanding at its maximum rate: the speed of light. This extreme limit will be very important for our considerations later.
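As a quick sanity check (not part of the talk), one can parametrize points of dS spacetime explicitly: at embedding time t the spatial slice is a 3-sphere of radius √(ℓ² + t²), which satisfies the defining relation (1) automatically. A small numpy sketch, with an arbitrary value for ℓ:

```python
import numpy as np

ell = 2.0  # sets the expansion rate; related to the cosmological constant

def ds_point(t, direction):
    """A point on the de Sitter hyperboloid: at embedding time t the
    spatial 3-sphere has radius sqrt(ell^2 + t^2)."""
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)  # unit 4-vector in the (x, y, z, w) directions
    return np.sqrt(ell**2 + t**2) * n

# The constraint x^2 + y^2 + z^2 + w^2 - t^2 = ell^2 holds at any time,
# in any spatial direction.
for t in (-5.0, 0.0, 1.3):
    p = ds_point(t, [1.0, -2.0, 0.5, 3.0])
    print(np.isclose(np.dot(p, p) - t**2, ell**2))  # True
```

Note that for |t| much larger than ℓ the radius √(ℓ² + t²) ≈ |t|, so the hyperboloid hugs the light cone — the same limiting behavior described above, and the reason the far future and far past are where the two pictures meet.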

Although this model of spacetime is dramatically simplified, it remarkably describes, to a good approximation, two important phases of our real Universe: i) the period of exponential expansion (or inflation), which we believe took place in the early history of our Universe, and ii) the present and foreseeable future. Different observers compare the labels they attribute to each event by performing transformations that leave the form of (1), and thus the shape of dS spacetime, unchanged. Because of this property, these transformations constitute symmetries of dS spacetime. Since the transformations that real observers must use to compare information about real events just happen to correspond to spacetime symmetries, it is no wonder that the notion of spacetime has had such a profound influence on physicists’ view of reality. However, we will shortly see that these rules can be recast into a completely different form, which tells a different story of what is happening.

Spacetime's Janus Face

We will now see how the symmetries of observers in dS spacetime can be rewritten in terms of symmetries that preserve angles, but not necessarily distances, in space. In particular, all information about scale is removed. In mathematics, these are called conformal symmetries. This means that different observers have a choice when analyzing information that they collect about events: either they can imagine that these events have taken place in dS spacetime, and are consequently related by the dS symmetries; or they can imagine that these events are representing information that can be expressed in terms of angles (and not lengths), and are consequently related by conformal symmetries.

To understand how this can be so, consider the very distant future and the very distant past: where dS spacetime and the light cone nearly meet. This extreme region is called the conformal sphere because it is a sphere and also because it is where the dS symmetries correspond to conformal symmetries.

In fact, any cross-section of the light cone formed by cutting it with a spatial plane (as illustrated in the diagram below) is a different representative of the conformal sphere since these different cross-sections will disagree on distances but will agree on angles. Although the intersection looks like a circle (represented in dark green), it is actually a 3-dimensional sphere because we have cut out 2 of the spatial dimensions (which we can't draw on a 2 dimensional page).

To see how events on this 3d sphere can be represented in a scale-invariant way on a 3d plane, we can use a handy technique called a stereographic projection. The stereographic projection is often used for map drawing where the round earth has to be drawn onto a flat map. One of its key properties, namely that it preserves angles, means that maps drawn in this way are useful for navigating since an angle on the map corresponds to the same angle on the Earth. It is precisely this property that will make the stereographic projection useful for us here.

To perform a stereographic projection, imagine picking a point on a sphere, which we can interpret as the location of a particular observer on the sphere (represented by an eye in the diagram below), and call this the South Pole. Now imagine putting a light on the North Pole and letting it shine through the space that the sphere has been drawn in. Suppose our sphere is filled with points. Then, the shadow of these points will form an image on the plane tangent to the sphere at the South Pole. The picture below illustrates what is going on. Points on the sphere are represented by stars and the yellow rays indicate how their image is formed on the plane.

It is now a relatively straightforward mathematical exercise to show that the symmetries of the light cone represent transformations on the plane that may change the size of the image, but will preserve the angles between the points. Thus, the symmetries of the cone can be understood in terms of the conformal symmetries of this plane.
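The angle-preserving property is easy to verify numerically. The sketch below (an illustration, not from the talk) projects from the North Pole of a unit sphere onto the plane tangent at the South Pole, and checks that the angle subtended by two nearby points matches the angle between their images:

```python
import numpy as np

def project(p):
    """Stereographic projection from the north pole (0, 0, 1) of the unit
    sphere onto the plane z = -1, tangent at the south pole."""
    x, y, z = p
    s = 2.0 / (1.0 - z)  # scale factor along the ray from the north pole
    return np.array([s * x, s * y])

def angle(u, v):
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def on_sphere(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Three nearby points on the sphere; the angle they subtend at p0
# should match the angle between their projected images.
eps = 1e-5
p0 = on_sphere(1.0, 0.3)
p1 = on_sphere(1.0 + eps, 0.3)
p2 = on_sphere(1.0, 0.3 + eps)

a_sphere = angle(p1 - p0, p2 - p0)
a_plane = angle(project(p1) - project(p0), project(p2) - project(p0))
print(np.isclose(a_sphere, a_plane, atol=1e-4))  # True: angles agree
```

Distances between the images, by contrast, depend on where on the sphere the points sit — which is exactly why only the angular (conformal) information survives the projection.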

If we now move our cross-section ever further into the future or the past, then the dS spacetime begins to resemble more and more the light cone. Thus, if we can represent arbitrary events in dS spacetime by information imprinted on two cross-sections in the infinite future and infinite past, then these events can be represented in terms of the images they induce onto our projected planes, and we have obtained our objective.

There is a simple way that this can be done. Imagine taking, as shown in the figure below, an arbitrary event in dS spacetime and drawing all the events in the distant past that could affect things that happen at this point (this region is a finite portion of the spherical cross-sections because no disturbance can travel faster than the speed of light). The result is a 2 dimensional spherical region, called the particle horizon indicated by the red regions in the diagram below, which grows steadily over time. You can think of this region as the proportion of dS spacetime that is visible at any particular place. In fact, you can use the relative size of this region as an indication of the time at which that event occurs. Because this is a notion of time that exists solely in terms of quantities defined in the distant past, it will transform under conformal symmetries. To give an idea of what this looks like, the motion of an observer from some point in the distant past to a new point in the distant future is represented by a series of concentric spheres, starting at the initial point and then spreading out to eventually cover the whole sphere. The diagram below shows how this works. The different regions (a,b,c,d) represent progressively growing regions corresponding to progressively later times.

In this way, you can map information about events in dS spacetime to information on the conformal sphere. In other words, the picture of reality that one gets from Einstein's theory of Special Relativity is a story that can be told in two very different ways. In the first way, there are events which trace out histories in spacetime. ‘Where’ and ‘when’ a particular event takes place depends on who you are, and the information about these events can be transformed from one observer to another via the global symmetries of spacetime. In the new picture, it is the information about angles that is important. ‘Where’ and ‘how big’ things are depends on your point of view and the information about particular events can be transformed from one observer to another using conformal transformations.

From Special to General Relativity

We have just described how to relate two very different views of how observers can collect information about the world. Until now, we have only been considering homogeneous spaces: i.e., those that look the same everywhere. The class of observers we could consider was correspondingly restricted. It was Einstein's great insight to recognize that the same mathematical machinery needed to describe events seen by arbitrary observers could also be used to study the properties of gravity. The machinery in question is a generalization of Minkowski's geometry, named after Riemann.

In order to describe Riemannian geometry, it is easiest to first describe a generalization of it (which we will need later anyway), and then show how Riemannian geometry is just a special case. The generalization in question is called Cartan geometry, after the great mathematician Élie Cartan. Cartan had the idea of building general curved geometries by modelling them off homogeneous spaces. The more general spaces are constructed by moving these homogeneous spaces around in specific ways. The geometry itself is defined by the set of rules one needs to use to compare vectors after moving the homogeneous spaces. These rules split into two different kinds: those that change the point of contact between the homogeneous space and the general curved space and those that don't. These different moves are illustrated for the case where the homogeneous space is a 2D sphere in the diagram below.
The moves that don't change the point of contact (in the case above, this corresponds to spinning about the point of contact without rolling) constitute the local symmetries of the geometry and could, for example, correspond to what different local observers would see (in this case, spinning observers versus stationary ones) when looking at objects in the geometry. Einstein exploited this kind of structure to implement his general principle of relativity described earlier. The moves that change the point of contact (in the case above, this means rolling the ball around without slipping) give you information about the curved geometry of the general space. Einstein used a special case of Cartan geometry, which is just Riemannian geometry, where the homogeneous space is Minkowski space. He then exploited the analogue of the structure just described to explain an old phenomenon in a completely new way: gravity. In the process, he produced one of our most radical yet successful theories of physics: General Relativity. The figure below shows how the different kinds of geometry we've discussed are related.

Now, consider what happens when we substitute, as we did in the last section, de Sitter's curved, but still homogeneous, spacetime for Minkowski's flat spacetime. We can still describe gravity, but in a way that naturally includes a cosmological constant. However, the conformal sphere is also a homogeneous space. Moreover, as we described earlier, the symmetries of this homogeneous space can be related to the dS symmetries. This suggests that it might be possible to describe gravity in terms of a Cartan geometry modelled off the conformal sphere.

From the Conformal Sphere to Shape Dynamics?

Cartan geometries modelled on the conformal sphere are called conformal geometries because the local symmetries of these geometries preserve angles, and not scale. Although we have laid out a procedure relating the model space of conformal geometries to the model space of spacetimes with a cosmological constant, it is quite another thing to rewrite gravity in terms of conformal geometry. This is, in part, because the laws governing spacetime geometry are complicated and, in part, because our prescription for relating the model spaces is also not straightforward, since it relates local quantities in spacetime to non-local quantities in the infinite future and past. Nevertheless, this exciting possibility provides an interesting future line of research. Furthermore, there are other hints that such a description might be possible.

Using very different methods, it is possible to show that General Relativity is actually dual to a theory of evolving conformal geometry [3]. However, the kind of conformal geometry used in this derivation has not yet been written in terms of Cartan geometry (which makes use of slightly different structures). This new way of describing gravity, called Shape Dynamics, is perhaps making use of the interesting relationship between spacetime symmetries and conformal symmetries described here. Understanding exactly the nature of the conformal geometry in Shape Dynamics and its relation to spacetime could prove valuable in being able to understand this new way of describing gravity. Perhaps it could even be a window into understanding how the quantum theory of gravity should work?
  • [1] D. K. Wise, Holographic Special Relativity, arXiv:1305.3258 [hep-th].
  • [2] H. Minkowski, The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity, ch. Space and Time, pp. 75–91. New York: Dover, 1952.
  • [3] H. Gomes, S. Gryb, and T. Koslowski, Einstein gravity as a 3D conformally invariant theory, Class. Quant. Grav. 28 (2011) 045005, arXiv:1010.2481 [gr-qc].


Tuesday, April 1, 2014

Spectral dimension of quantum geometries

Johannes Thürigen, Albert Einstein Institute 
Title: Spectral dimension of quantum geometries 
PDF of the talk (1MB) Audio [.wav 39MB] Audio [.aif 4.1MB] 

By Francesco Caravelli, University College London

One of the fundamental goals of quantum gravity is understanding the structure of space-time at very short distances, together with predicting physical and observable effects of having a quantum geometry. This is not easy. Since the introduction of fractal dimension in Quantum Gravity, and the emphasis it received in work on Causal Dynamical Triangulations (Loll et al. 2005) and Asymptotic Safety (Lauscher et al. 2005), it has become more and more clear that space-time, at the quantum level, might undergo a radical transformation: the number of effective dimensions might change with the energy of the process involved. Various approaches to Quantum Gravity have collected evidence of a dimensional flow at high energies, which was popularized by Carlip as Spontaneous Dimensional Reduction (Carlip 2009, 2013). (The use of the term reduction is indeed a hint that a dimensional reduction is observed, but the evidence is far from conclusive. We find dimensional flow more appropriate.)

Before commenting on the results obtained by the authors of the paper discussed in the seminar
(Calcagni, Oriti, Thuerigen 2013), let us first step back for a second and spend some time introducing the concept of fractal dimension, which is relevant to this discussion.

The concept of non-integer dimension was introduced by the mathematician Benoit Mandelbrot half a century ago. What is all this fuss about fractals and complexity? And what is the relation to space-times, in particular quantum space-times?

Everything starts with an apparently simple question asked by Mandelbrot: what is the length of the coast of England (or, more precisely, Cornwall)? As it turns out, the length of the coast depends on the lens used to magnify the map: as the magnifying power changes, the length changes according to a well-defined rule, known as scaling, which we will explain shortly.

There are several definitions of fractal dimension, but let us try to keep things as easy as possible, and see why a granular space-time might indeed imply different dimensions at different scales (i.e., at different magnifying powers). The easiest case is that of a regular square lattice, which for the sake of clarity we consider infinite in every direction.

                                                          Source: Manny Lorenzo

The lattice might look two-dimensional, as it is planar: it can be embedded in a two-dimensional surface (this is what is called the embedding dimension). However, if we pick any point of this lattice and count how many points lie within a distance “d” of it, we will see that the number of points increases according to a scaling law, given by*:

N ~ d^γ.

If d is not too big, the value of γ is sensitive to whether the underlying structure is granular rather than a continuum, and γ can take non-integer values. This can be interpreted in various ways. For fractals, it implies that their real dimension is not an integer. Analogously to counting the number of points within a certain distance d, it is possible to define a diffusion process which will do the counting for us. However, the result depends on the operator which defines the diffusion process: how a swarm of particles moves on the underlying discrete space. This is a crucial point of the procedure.
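The counting version of this scaling is easy to check for the square lattice itself. A minimal sketch (an illustration, not from the paper), using the taxicab distance as the natural graph distance on the lattice:

```python
import numpy as np

def points_within(d):
    """Number of sites of the square lattice within taxicab (graph)
    distance d of the origin."""
    return sum(1 for i in range(-d, d + 1)
                 for j in range(-d, d + 1)
                 if abs(i) + abs(j) <= d)

# Estimate gamma from N ~ d^gamma via a log-log slope between two scales.
d1, d2 = 50, 100
gamma = (np.log(points_within(d2)) - np.log(points_within(d1))) \
        / (np.log(d2) - np.log(d1))
print(round(gamma, 2))  # close to 2, the dimension of the lattice
```

At small d the estimate deviates from 2 — the granularity of the lattice shows up exactly as a scale-dependent effective dimension, which is the phenomenon at stake here.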

In the continuum, the technology is developed to the point that such an operator can be defined precisely**. The problem is that the scaling is not exact: at long diffusion times the scaling relation breaks down (as curvature effects start to contribute). Thus, the time given to the particles to diffuse has to be appropriately tuned. This is what the authors define in Section 2 of the paper discussed in the talk, and it is a standard procedure in the context of the spectral dimension. Of course, what we have discussed so far is valid for classical diffusion, but the operator can be defined for quantum diffusion as well, which is, put in simple terms, described by a Schroedinger-type unitary evolution as in ordinary quantum mechanics.
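For the flat square lattice the diffusion-based definition can be worked out exactly: the return probability of a simple random walk scales as P(t) ~ t^(−d_s/2), where d_s is the spectral dimension. A sketch using the standard closed form for Z² (the classical lattice computation, of course, not the quantum-geometry calculation of the paper):

```python
import numpy as np
from math import comb

def return_prob(t):
    """Return probability of the simple random walk on the square lattice
    Z^2 after t steps (t even; an odd-length walk cannot return). By a
    standard combinatorial identity, P(t) = [C(t, t/2) / 2^t]^2."""
    return (comb(t, t // 2) / 2**t) ** 2

def spectral_dimension(t1, t2):
    """Estimate d_s from P(t) ~ t^(-d_s/2) between two walk lengths."""
    return -2.0 * (np.log(return_prob(t2)) - np.log(return_prob(t1))) \
                / (np.log(t2) - np.log(t1))

print(round(spectral_dimension(200, 400), 2))  # close to 2 for the 2d lattice
```

Note the need to tune the diffusion time mentioned above: too-short walks feel the lattice spacing, and (on a curved geometry) too-long walks feel the curvature, so the clean scaling window sits in between.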

It is important to understand that the combinatorial description of a manifold (how it is represented in the discrete setting), rather than the actual geometry, plays a very relevant role. If you calculate the fractal dimension of these lattices, you find the right value at large scales, but not at small scales. This shows that discreteness does have an effect on the spectral dimension, and that the results do depend on the number of dimensions. More importantly, the authors observe that the spectral dimension, even in the classical case, depends on the precise structure of the underlying pseudo-manifold, i.e. on how the manifold is discretized. If you combine this with the fact that the fractal dimension is, so far, the global observable telling you in how many dimensions you live (a concept very important for other high-energy approaches), the interest is quite well justified.

The case of a quantum geometry, considered using Loop Quantum Gravity (LQG), is then put forward at the end. The definition differs from the one given previously (Modesto 2009, which assumed that the scaling is given by the area operator of LQG), and it leads to different results.

Without going into the details (described quite clearly in the paper anyway), it is worth anticipating the results and explaining the difficulties involved in the calculation. The first complication comes from the calculation itself: it is in fact very hard to calculate the fractal dimension in the full quantum case. However, in the semiclassical approximation (when the geometry is partly quantum and partly classical), the main "quantum" part can be neglected. The next issue is that, in order to claim the emergence of a clear topological dimension, the fractal dimension has to be constant over a range of distances spanning several orders of magnitude. It is important to say that, if you use the fractal dimension as your definition of dimension, it is not possible to assign a definite dimensionality unless the number of discrete points under consideration is large enough. This feature of the fractal dimension is very important for Loop Quantum Gravity in many respects, as there has been a long-standing discussion on what the right description of classical and quantum spacetime is. Still, this approach gives the possibility of a bottom-up definition of dimension (in a top-down definition, there would not be any dimensional flow).

As a closing remark, it is fair to say that this paper goes one step further into defining a notion of fractal dimension in Loop Quantum Gravity. The previous attempt was made by Modesto and collaborators using a rough approximation to the Laplacian. That approximation exhibited a dimensional flow towards an ultraviolet 2-dimensional space, which seems not to be present when a more elaborate Laplacian is used.

*For a square lattice, if d is big enough, γ is equal to two: this is the Hausdorff dimension of the lattice, and indeed this dimension can be defined through the following equation: γ = ∂ log(N) / ∂ log(d).

** Using the technical terminology, this is the Seeley-De Witt expansion of the heat kernel on curved manifolds. This is usually called spectral dimension. The first term of the expansion depends explicitly on the spectral dimension, while in the terms at higher orders there are also contributions from the curvature.