Tuesday, April 1, 2014

Spectral dimension of quantum geometries

Johannes Thürigen, Albert Einstein Institute 
Title: Spectral dimension of quantum geometries 
PDF of the talk (1MB) Audio [.wav 39MB] Audio [.aif 4.1MB] 

By Francesco Caravelli, University College London


One of the fundamental goals of quantum gravity is to understand the structure of space-time at very short distances, and to predict physical, observable effects of a quantum geometry. This is not easy. Since the introduction of the fractal dimension in Quantum Gravity, and the emphasis it received in work on Causal Dynamical Triangulations (Loll et al. 2005) and Asymptotic Safety (Lauscher et al. 2005), it has become increasingly clear that space-time at the quantum level might undergo a radical transformation: the number of effective dimensions might change with the energy of the process involved. Various approaches to Quantum Gravity have collected evidence of a dimensional flow at high energies, popularized by Carlip as Spontaneous Dimensional Reduction (Carlip 2009, 2013). (The term "reduction" hints that a decrease in dimension is observed, but the evidence is far from conclusive; we find "dimensional flow" more appropriate.)

Before commenting on the results obtained by the authors of the paper discussed in the seminar
(Calcagni, Oriti, Thürigen 2013), let us first step back for a moment and introduce the concept of fractal dimension, which is central to this discussion.

The concept of non-integer dimension was introduced by the mathematician Benoit Mandelbrot half a century ago. What is all the fuss about fractals and complexity, and what is the relation to space-times, and to quantum space-times?

Everything starts from an apparently simple question asked by Mandelbrot: what is the length of the coast of England (or, more precisely, Cornwall)? As it turned out, the length of the coast depends on the lens used to magnify the map: as the magnifying power changes, the measured length changes according to a well-defined rule, known as scaling, which we will explain shortly.

There are several definitions of fractal dimension, but let us keep things as simple as possible and see why a granular space-time might indeed imply different dimensions at different scales (i.e., at different magnifying powers). The easiest case is a regular square lattice, which for the sake of clarity we take to be infinite in every direction.

[A regular square lattice. Source: Manny Lorenzo]

The lattice might look two-dimensional, as it is planar: it can be embedded into a two-dimensional surface (this is what is called the embedding dimension). However, if we pick any point of the lattice and count how many points lie within a distance "d" of it, we will see that this number increases according to a scaling law, given by*:

N ~ d^gamma.

If d is not too big, the value of gamma can change when the underlying structure is granular rather than a continuum, and gamma can take non-integer values. This can be interpreted in various ways; for fractals, it means that their true dimension is not an integer. Analogously to counting the number of points within a distance d, one can define a diffusion process which does the counting for us. However, the result depends on the operator which defines the diffusion process, i.e., on how a swarm of particles moves on the underlying discrete space. This is a crucial point of the procedure.
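To make the counting concrete, here is a minimal sketch (not from the paper) that counts the lattice points of a square lattice within a distance d of a chosen point and fits the exponent gamma; the fitted value comes out close to 2, as expected for this lattice.

```python
import numpy as np

# Minimal sketch: estimate the scaling exponent gamma in N ~ d^gamma
# for a 2D square lattice, by counting lattice points within Euclidean
# distance d of the origin and fitting the slope of log N vs log d.

def points_within(d, dim=2):
    """Count lattice points of Z^dim within Euclidean distance d of the origin."""
    r = int(np.ceil(d))
    axes = [np.arange(-r, r + 1)] * dim
    grid = np.meshgrid(*axes, indexing="ij")
    dist2 = sum(g.astype(float) ** 2 for g in grid)
    return int(np.sum(dist2 <= d * d))

ds = np.arange(2, 40)
Ns = np.array([points_within(d) for d in ds])
gamma = np.polyfit(np.log(ds), np.log(Ns), 1)[0]
print(f"fitted gamma ~ {gamma:.3f}  (expected 2 for a square lattice)")
```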

In the continuum, the technology is developed to the point that such an operator can be defined precisely**. The problem is that the scaling is not exact: for too-long diffusion times the scaling relation breaks down (curvature effects may contribute). Thus the time given to the particles to diffuse has to be tuned appropriately. This is what the authors define in Section 2 of the paper discussed in the talk, and it is a standard procedure in the context of the spectral dimension. Of course, what we have discussed so far is valid for classical diffusion, but the operator can be defined for quantum diffusion as well, which is, put in simple terms, described by a Schroedinger unitary evolution as in ordinary quantum mechanics.
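The diffusion-based counting can be sketched in the same spirit. Assuming a lazy random walk as a stand-in for the diffusion operator, the spectral dimension follows from the return probability P(t) via d_s = -2 dlnP(t)/dlnt; the tuning of the diffusion time mentioned above appears here as the choice of the fitting window.

```python
import numpy as np

# Minimal sketch: spectral dimension d_s from the return probability P(t)
# of a diffusion process, via d_s = -2 dlnP/dlnt.  The "diffusion operator"
# here is a lazy random walk on a 2D square lattice -- an illustrative
# stand-in for the Laplacians discussed in the paper.

L = 401                       # lattice size, large enough to avoid boundary effects
p = np.zeros((L, L))
p[L // 2, L // 2] = 1.0       # walker starts at the origin
ret = []                      # P(t): probability of being back at the origin
steps = 400
for t in range(steps):
    ret.append(p[L // 2, L // 2])
    # lazy walk: stay with probability 1/2, hop to one of 4 neighbors otherwise
    p = 0.5 * p + 0.125 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                           + np.roll(p, 1, 1) + np.roll(p, -1, 1))

ts = np.arange(50, steps)     # intermediate times: late enough to scale,
lnP = np.log([ret[t] for t in ts])   # early enough to ignore boundaries
slope = np.polyfit(np.log(ts), lnP, 1)[0]
print(f"spectral dimension d_s ~ {-2 * slope:.2f}  (expected 2)")
```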

It is important to understand that the combinatorial description of the manifold (how it is represented in the discrete setting), rather than the actual geometry, plays a very relevant role. If you calculate the fractal dimension of these lattices, although at large scales they give the right fractal dimension, at small scales they do not. This shows that discreteness does have an effect on the spectral dimension, and that the results do depend on the number of dimensions. More importantly, the authors observe that the spectral dimension, even in the classical case, depends on the precise structure of the underlying pseudo-manifold, i.e., on how the manifold is discretized. If you combine this with the fact that, so far, the fractal dimension is the global observable telling you how many dimensions you live in (a concept very important for other high-energy approaches), the interest is quite well justified.

The case of a quantum geometry, considered using Loop Quantum Gravity (LQG), is then put forward at the end. The definition is different from the one given previously (Modesto 2009, which assumed that the scaling is given by the area operator of LQG), and it leads to different results.

Without going into the details (described quite clearly in the paper anyway), it is worth anticipating the results and explaining the difficulties involved in the calculation. The first complication comes from the calculation itself: it is very hard to compute the fractal dimension in the full quantum case. However, in the semiclassical approximation (when the geometry is partly quantum and partly classical), the main "quantum" part can be neglected. The next issue is that, in order to claim the emergence of a clear topological dimension, the fractal dimension has to be constant over a wide range of distances, spanning several orders of magnitude. It is important to say that, if you use the fractal dimension as your definition of dimension, it is not possible to assign a definite dimensionality unless the number of discrete points under consideration is large enough. This feature of the fractal dimension is very important for Loop Quantum Gravity in many respects, as there has been a long-standing discussion on the right description of classical and quantum space-time. Still, this approach offers the possibility of a bottom-up definition of dimension (in a top-down definition, there would not be any dimensional flow).

As a closing remark, it is fair to say that this paper goes one step further in defining a notion of fractal dimension in Loop Quantum Gravity. The previous attempt was made by Modesto and collaborators using a rough approximation to the Laplacian. That approximation exhibited a dimensional flow towards an ultraviolet two-dimensional space, which seems not to be present with the more elaborate Laplacian used here.

*For a square lattice, if d is big enough, gamma is equal to two: this is the Hausdorff dimension of the lattice, and indeed this dimension can be defined through the following equation: gamma = ∂log(N)/∂log(d).

** Using the technical terminology, this is the Seeley-DeWitt expansion of the heat kernel on curved manifolds, and the dimension extracted this way is usually called the spectral dimension. The first term of the expansion depends explicitly on the spectral dimension, while the higher-order terms also carry contributions from the curvature.

Sunday, November 24, 2013

The Platonic solids of quantum gravity

Hal Haggard, CPT Marseille
Title: Dynamical chaos and the volume gap 
PDF of the talk (8Mb) Audio [.wav 37MB] Audio [.aif 4MB]

by Chris Coleman-Smith, Duke University

At the Planck scale, a quantum behavior of the geometry of space is expected. Loop quantum gravity provides a specific realization of this expectation: it predicts a granularity of space, with each grain having a quantum behavior. In particular, the volume of a grain is quantized, and its allowed values (what is technically known as "the spectrum") have a rich structure. Areas are also naturally quantized, and there is a robust gap in their spectrum. Just as Planck showed that there must be a smallest possible photon energy, there is a smallest possible spatial area. Is the same true for volumes?

These grains of space can be visualized as polyhedra with faces of fixed area. In the full quantum theory these polyhedra are fuzzed out, so just as we cannot think of a quantum particle as a little spinning ball, we cannot think of these polyhedra as the definite Platonic solids that come to mind.


[The Platonic Solids, by Wenzel Jamnitzer] 

It is interesting to examine these polyhedra at the classical level, where we can set aside this fuzziness, and see what features we can deduce about the quantum theory.

The tetrahedron is the simplest possible polyhedron. Bianchi and Haggard [1] explored the dynamics arising from fixing the volume of a tetrahedron and letting the edges evolve in time. This evolution is a very natural way of exploring the set of constant-volume polyhedra that can be reached by smooth deformations of the orientations of the polyhedral faces. The resulting trajectories in the space of polyhedra can be quantized by Bohr and Einstein's original geometrical methods for quantization. The basic idea is to carry some of the smooth, continuous properties of the classical dynamics over into the quantum theory by selecting only those orbits whose total phase-space area is an integer multiple of Planck's constant. The resulting discrete volume spectrum is in excellent agreement with the fully quantum calculation. Further work by Bianchi, Dona and Speziale [2] extended this treatment to more complex polyhedra.

Much as a bead threaded on a wire can only move forward or backward along the wire, a tetrahedron of fixed volume and face areas has only one freedom: to change its shape. Classical systems like this are typically integrable, which means that their dynamics is fairly regular and can be solved exactly. Systems with two degrees of freedom, like the pentahedron, are typically non-integrable: their dynamics can be simulated numerically, but there is no closed-form solution for their motion. This implies that the pentahedron has a much richer dynamics than the tetrahedron. Is this pentahedral dynamics so complex that it is actually chaotic? If so, what are the implications for the quantized volume spectrum? This system has recently been partially explored by Coleman-Smith [3] and Haggard [4] and was indeed found to be chaotic.

Chaotic systems are very sensitive to their initial conditions: tiny deviations from a reference trajectory rapidly diverge. This makes the dynamics of chaotic systems very complex and endows them with some interesting properties. The rapid spreading of any bundle of initial trajectories means that chaotic systems are unlikely to spend much time 'stuck' in a particular motion; rather, they quickly explore all possible motions. Such systems 'forget' their initial conditions very quickly and soon become thermal. This rapid thermalization of grains of space is an intriguing result. Black holes are known to be thermal objects, and their thermal properties are believed to be fundamentally quantum in origin. The complex classical dynamics we observe may provide clues into the microscopic origins of these thermal properties.
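This sensitivity is easy to exhibit numerically. The sketch below uses the Chirikov standard map as a generic stand-in for a chaotic system (the pentahedral dynamics of [3,4] requires dedicated code); the kick strength K and the initial separation are illustrative choices.

```python
import numpy as np

# Minimal sketch of sensitivity to initial conditions, using the Chirikov
# standard map as a generic chaotic system (NOT the pentahedron of [3,4]).
# K = 5 puts the map well inside the chaotic regime.
K = 5.0

def step(theta, p):
    p_new = p + K * np.sin(theta)
    return (theta + p_new) % (2.0 * np.pi), p_new

theta1, p1 = 1.0, 0.5
theta2, p2 = 1.0 + 1e-10, 0.5          # a near-identical twin trajectory
for n in range(41):
    if n % 10 == 0:
        sep = np.hypot(theta1 - theta2, p1 - p2)   # crude distance (angles wrap)
        print(f"step {n:2d}: separation ~ {sep:.3e}")
    theta1, p1 = step(theta1, p1)
    theta2, p2 = step(theta2, p2)
# the separation grows roughly exponentially until it saturates at the
# system size: the two trajectories have completely 'forgotten' each other
```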

The fuzzy world of quantum mechanics is unable to support the delicate fractal structures arising from classical chaos. However, its echoes can indeed be found in the quantum analogues of classically chaotic systems. A fundamental property of quantum systems is that they can only take on certain discrete energies. The set of these energy levels is usually referred to as the energy spectrum of the system. An important result from the study of how classical chaos passes into quantum systems is that one can generically expect certain statistical properties of the spectrum. In fact, the spacings between adjacent energy levels of such systems can be predicted on very general grounds. For a non-chaotic quantum system one expects these spacings to be entirely uncorrelated, and so to be Poisson distributed (like the number of cars passing through a toll gate in an hour), resulting in many energy levels bunching up. In chaotic systems the spacings become correlated and actually repel each other, so that on average one expects the spacings to be quite large.
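This level repulsion is easy to see in a toy computation. The sketch below, following the standard random-matrix lore rather than anything specific to the volume spectrum, compares nearest-neighbor spacings of uncorrelated (Poisson) levels with those of a random real symmetric (GOE) matrix.

```python
import numpy as np

# Minimal sketch of level-spacing statistics: integrable systems give
# Poisson-distributed spacings, chaotic ones give level repulsion
# (Wigner-Dyson, modeled here by a GOE random matrix).
rng = np.random.default_rng(0)
N = 1000

# "integrable" spectrum: uncorrelated levels -> exponential spacing distribution
poisson_levels = np.sort(rng.uniform(0, N, N))
s_poisson = np.diff(poisson_levels)
s_poisson /= s_poisson.mean()

# "chaotic" spectrum: eigenvalues of a random real symmetric (GOE) matrix
A = rng.normal(size=(N, N))
levels = np.linalg.eigvalsh((A + A.T) / 2)
mid = levels[N // 4: 3 * N // 4]          # central part, where the density is flat
s_goe = np.diff(mid)
s_goe /= s_goe.mean()

# level repulsion: small spacings are strongly suppressed in the GOE case
print("fraction of spacings below 0.1 of the mean:")
print(f"  Poisson (integrable): {np.mean(s_poisson < 0.1):.3f}   (~0.095 expected)")
print(f"  GOE (chaotic):        {np.mean(s_goe < 0.1):.3f}   (much smaller)")
```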

This suggests that there may indeed be a robust volume gap, since we generically expect the discrete quantized volume levels to repel each other. However, the density of the volume spectrum around the ground state needs to be better understood to make this argument more concrete. Is there really a smallest non-zero volume?

The classical dynamics of the fundamental grains of space provides a fascinating window into the behavior of the very complicated full quantum dynamics of space described by loop quantum gravity. Extending this work to more complex polyhedra and to coupled networks of polyhedra will be very exciting and will certainly provide many useful new insights into the microscopic structure of space itself.

[1]: "Discreteness of the volume of space from Bohr-Sommerfeld quantization", E.Bianchi & H.Haggard. PRL 107, 011301 (2011), "Bohr-Sommerfeld Quantization of Space", E.Bianchi & H.Haggard. PRD 86, 123010 (2012)

[2]: "Polyhedra in loop quantum gravity", E.Bianchi, P.Dona & S.Speziale. PRD 83, 044035 (2011)

[3]: "A “Helium Atom” of Space: Dynamical Instability of the Isochoric Pentahedron", C.Coleman-Smith & B.Muller, PRD 87 044047 (2013)

[4]: "Pentahedral volume, chaos, and quantum gravity", H.Haggard, PRD 87 044020 (2013)

Sunday, November 17, 2013

Coarse graining theories

Tuesday, Nov 27th. 2012
Bianca Dittrich, Perimeter Institute 
Title: Coarse graining: towards a cylindrically consistent dynamics
PDF of the talk (14Mb) Audio [.wav 41MB] Audio [.aif 4MB]

by Frank Hellmann



Coarse graining is a procedure from statistical physics. In most situations we do not know how all the constituents of a system behave. Instead we only get a very coarse picture. Rather than knowing how all the atoms in the air around us move, we are typically only aware of a few very rough properties, like pressure, temperature and the like. Indeed it is hard to imagine a situation where one would care about the location of this or that atom in a gas made of 10^23 atoms. Thus when we speak of trying to find a coarse grained description of a model, we mean that we want to discard irrelevant detail and find out how a particular model would appear to us.
 
The technical way in which this is done was developed by Kadanoff and Wilson. Given a system made up of simple constituents, Kadanoff's idea was to take a set of nearby constituents and combine them back into a single such constituent, only now larger. In a second step we could then scale down the entire system and find out how the behavior of this new, coarse grained constituent compares to the original ones. If certain behaviors grow stronger with such a step we call them relevant; if they grow weaker we call them irrelevant. Indeed, as we build ever coarser descriptions of our system, eventually only the relevant behaviors survive.
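A minimal sketch of the block-spin step, for a random spin configuration rather than anything spin-foam specific, looks like this (the 2x2 block size and majority rule are illustrative choices):

```python
import numpy as np

# Minimal sketch of Kadanoff's block-spin idea: coarse grain a random spin
# configuration by replacing each 2x2 block with its majority spin.
rng = np.random.default_rng(1)
L = 8
spins = rng.choice([-1, 1], size=(L, L))

def block_spin(config):
    """Combine 2x2 blocks into single spins by majority rule (sign of the block sum)."""
    n = config.shape[0] // 2
    blocks = config.reshape(n, 2, n, 2).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)   # ties broken in favor of +1

coarse = block_spin(spins)
print("original:\n", spins)
print("coarse grained:\n", coarse)   # half the linear size, one spin per block
```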

In spin foam gravity we are facing this very problem. We want to build a theory of quantum gravity, that is, a theory that describes how space and time behave at the most fundamental level. We know very precisely how gravity appears to us; every observation of it we have made is described by Einstein's theory of general relativity. Thus, in order to be a viable candidate for a theory of quantum gravity, it is crucial that the coarse grained theory looks, at least in the cases we have tested, like general relativity.

The problem we face is that usually we are looking at small and large blocks in space, but in spin foam models it is space-time itself that is built up of blocks, and these do not have a predefined size: they can be large or small in their own right. Further, we cannot handle the complexity of calculating with so many blocks of space-time. The usual tools, approximations and concepts of coarse graining do not apply directly to spin foams.

To me this constitutes the most important question facing the spin foam approach to quantum gravity. We have to make sure, or, as it often is in this game, at least give evidence, that we get the known physics right, before we can speak of having a plausible candidate for quantum gravity. So far most of our evidence comes from looking at individual blocks of space time, and we see that their behaviour really makes sense, geometrically. But as we have not yet seen any such blocks of space time floating around in the universe, we need to investigate the coarse graining to understand how a large number of them would look collectively. The hope is that the smooth space time we see arises like the smooth surface of water out of blocks composed of atoms, as an approximation to a large number of discrete blocks.

Dittrich's work tries to address this question. This requires bringing over, or reinventing in the new context, a lot of tools from statistical physics. The first question is, how does one actually combine different blocks of spin foam into one larger block? Given a way to do that, can we understand how it effectively behaves?

The particular tool of choice that Dittrich is using is called Tensor Network Renormalization. In this scheme, the coarse graining is done by looking directly at which aspects of the original set of blocks are the most relevant to the dynamics, and keeping only those. It thus combines the two steps of coarse graining and identifying relevant operators into a single one.

To get more technical, the idea is to consider maps from the boundary of a coarser lattice into that of a finer one. The mapping of the dynamics for the fine variables then provides the effective dynamics of the coarser ones. If the maps satisfy so-called cylindrical consistency conditions, that is, if we can iterate them, they can be used to define a continuum limit as well.

In the classical case, the behaviour of the theory as a function of the boundary values is encoded in what is known as Hamilton's principal function. The use of studying the flow of the theory under such maps is then mostly that of improving the discretizations of continuum systems that can be used for numerical simulations.

In the quantum case, the principal function is replaced by the usual amplitude map. The pull back of the amplitude under this embedding then gives a renormalization prescription for the dynamics. Now Dittrich proposes to adapt an idea from condensed matter theory called tensor network renormalization.

In order to select which degrees of freedom to map from the coarse boundary to the fine one, the idea is to evaluate the amplitude, diagonalize it, and keep only the eigenstates corresponding to the n largest eigenvalues.
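As a cartoon of this truncation step (illustrative only; the actual scheme works with spin foam amplitudes, not random matrices), one can take a matrix of boundary amplitudes and keep its n dominant modes via a singular value decomposition:

```python
import numpy as np

# Cartoon of the truncation step: treat the "amplitude" as a matrix between
# coarse and fine boundary configurations (here: a low-rank matrix plus
# noise, purely for illustration) and keep only its n dominant modes.
rng = np.random.default_rng(2)
A = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))   # rank-8 "amplitude"
amplitude = A + 0.01 * rng.normal(size=(64, 64))          # plus small corrections

n = 8                                        # number of modes kept
U, s, Vt = np.linalg.svd(amplitude)
truncated = (U[:, :n] * s[:n]) @ Vt[:n, :]   # best rank-n approximation

err = np.linalg.norm(amplitude - truncated) / np.linalg.norm(amplitude)
print(f"kept {n}/{len(s)} modes, relative truncation error {err:.4f}")
# iterating such maps without letting the rank grow keeps the effective
# dynamics computable, which is the point of the tensor network scheme
```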

At each step one then obtains a refined dynamics that does not grow in complexity, and one can iterate the procedure to obtain effective dynamics for very coarse variables that have been picked by the theory, rather than by an initial choice of scale, and a split into high and low energy modes.

It is too early to say whether these methods will allow us to understand whether spin foams reproduce what we know about gravity, but they have already produced a whole host of new approximations and insights into how these types of models work, and how they behave for large numbers of building blocks.

Monday, May 6, 2013

Bianchi space-times in loop quantum cosmology

Brajesh Gupt, LSU 
Title: Bianchi I LQC, Kasner transitions and inflation
PDF of the talk (800k) Audio [.wav 30MB] Audio [.aif 3MB]

by Edward Wilson-Ewing

The Bianchi space-times are a generalization of the simplest Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological models.  While the FLRW space-times are assumed to be homogeneous (there are no preferred points in the space-time) and isotropic (there is no preferred direction), in the Bianchi models the isotropy requirement is removed.  One of the main consequences of this generalization is that in a Bianchi cosmology, the space-time is allowed to expand along the three spatial axes at different rates.  In other words, while there is only one Hubble rate in FLRW space-times, there are three Hubble rates in Bianchi cosmologies, one for each of the three spatial dimensions.


For example, the simplest Bianchi model is the Bianchi I space-time whose metric is given by

ds^2 = - dt^2 + a_1(t)^2 (dx^1)^2 + a_2(t)^2 (dx^2)^2 + a_3(t)^2 (dx^3)^2,

where the a_i(t) are the three scale factors.  This is in contrast to the flat FLRW model, where there is only one scale factor.

It is possible to determine the exact form of the scale factors by solving the Einstein equations.  In a vacuum, or with a massless scalar field, it turns out that the i-th scale factor is simply given by a power of time, a_i(t) = t^{k_i}, where the constants k_i are called the Kasner exponents.  The Kasner exponents must satisfy certain relations (in vacuum, k_1 + k_2 + k_3 = 1 and k_1^2 + k_2^2 + k_3^2 = 1), so that once the matter content has been chosen, one Kasner exponent may be chosen freely between -1 and 1, and then the values of the other two Kasner exponents are determined by this initial choice.
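As a small illustration (assuming the vacuum relations just quoted), one can solve for the remaining two exponents given the first:

```python
import numpy as np

# Minimal sketch: given one vacuum Kasner exponent k1, solve the relations
#   k1 + k2 + k3 = 1   and   k1^2 + k2^2 + k3^2 = 1
# for the other two (they are roots of a quadratic).
def kasner_exponents(k1):
    s = 1.0 - k1                         # k2 + k3
    q = (s**2 - (1.0 - k1**2)) / 2.0     # k2 * k3
    disc = s**2 - 4.0 * q
    if disc < 0:
        raise ValueError("no real solution: vacuum requires -1/3 <= k1 <= 1")
    r = np.sqrt(disc)
    return (s - r) / 2.0, (s + r) / 2.0

k1 = 0.5
k2, k3 = kasner_exponents(k1)
print(f"k = ({k1}, {k2:.4f}, {k3:.4f})")
print("sum of k_i   :", k1 + k2 + k3)            # = 1
print("sum of k_i^2 :", k1**2 + k2**2 + k3**2)   # = 1
```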

In addition to allowing more degrees of freedom than the simpler FLRW models, the Bianchi space-times are important due to the central role they play in the Belinsky-Khalatnikov-Lifshitz (BKL) conjecture.  According to the BKL conjecture, as a generic space-like singularity is approached in general relativity time derivatives dominate over spatial derivatives (with the exception of some small number of “spikes” which we shall ignore here) and so spatial points decouple from each other.  In essence, as the spatial derivatives become negligible, the complicated partial differential equations of general relativity reduce to simpler ordinary differential equations close to a space-like singularity.  Although this conjecture has not been proven, there is a wealth of numerical studies that supports it.

If the BKL conjecture is correct, and the ordinary differential equations can be trusted, then the solution at each point is that of a homogeneous space-time.  Since the most general homogeneous space-times are given by the Bianchi space-times, it follows that as a space-like singularity is approached, the geometry at each point is well approximated by a Bianchi model.

This conjecture is extremely important from the point of view of quantum gravity, as quantum gravity effects are expected to become important precisely when the space-time curvature nears the Planck scale.  Therefore, we expect quantum gravity effects to become important near singularities.  What the BKL conjecture is telling us is that understanding quantum gravity effects in the Bianchi models, which are relatively simple space-times, can shed significant insight into the problem of singularities in gravitation.

What is more, studies of the BKL dynamics show that for long periods of time the geometry at any point is given by the Bianchi I space-time, and during this time the geometry is completely determined by the three Kasner exponents introduced above.  Now, the Bianchi I solution does not hold at each point eternally; rather, there are occasional transitions between different Bianchi I solutions, called Kasner or Taub transitions.  During a Kasner transition, the three Kasner exponents rapidly change values before becoming constant for another long period of time.  Since the Bianchi I model provides an excellent approximation at each point for long periods of time, understanding the dynamics of the Bianchi I space-time, especially at high curvatures when quantum gravity effects cannot be neglected, may help us understand the behaviour of generic singularities when quantum gravity effects are included.

In loop quantum cosmology (LQC), for all of the space-times studied so far including Bianchi I, the big-bang singularity in cosmological space-times is resolved by quantum geometry effects.  The fact that the initial singularity in the Bianchi I model is resolved in loop quantum cosmology, in conjunction with the BKL conjecture, gives some hope that all space-like singularities may be resolved in loop quantum gravity.  While this result is encouraging, there remain open questions regarding the specifics of the evolution of the Bianchi I space-time in LQC when quantum geometry effects are important.

One of the main goals of Brajesh Gupt's talk is to address this precise question.  Using the effective equations, which provide an excellent approximation to the full quantum dynamics for the FLRW space-times in LQC and are expected to do the same for the Bianchi models, it is possible to study how the quantum gravity effects that arise in loop quantum cosmology modify the classical dynamics when the space-time curvature becomes large, replacing the big-bang singularity by a bounce.  In particular, Brajesh Gupt describes the precise manner in which the Kasner exponents ---which are constant classically--- evolve deterministically as they go through the quantum bounce.  It turns out that a kind of Kasner transition occurs around the bounce, the details of which are given in the talk.
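For readers who want to see a bounce concretely, here is a minimal sketch of the effective dynamics for the isotropic flat model with a massless scalar field; the Bianchi I effective equations discussed in the talk are more involved, so this is only the isotropic analogue, in units where 8πG = 1 and with an illustrative critical density ρ_c.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of an effective-LQC bounce for the ISOTROPIC flat model
# with a massless scalar field (an analogue of, not identical to, the
# Bianchi I case in the talk).  Units: 8*pi*G = 1, rho_c = 1.
# Effective equations:
#   H^2     = (rho/3) * (1 - rho/rho_c)              (modified Friedmann)
#   Hdot    = -(1/2) * phidot^2 * (1 - 2*rho/rho_c)
#   phiddot = -3 * H * phidot,   with  rho = phidot^2 / 2.
rho_c = 1.0

def rhs(t, y):
    H, phidot = y
    rho = 0.5 * phidot**2
    return [-0.5 * phidot**2 * (1.0 - 2.0 * rho / rho_c), -3.0 * H * phidot]

rho0 = 0.01                                          # start in a contracting phase
H0 = -np.sqrt(rho0 / 3.0 * (1.0 - rho0 / rho_c))     # constraint fixes H(0)
sol = solve_ivp(rhs, [0.0, 40.0], [H0, np.sqrt(2.0 * rho0)],
                rtol=1e-10, atol=1e-12, max_step=0.1)

H = sol.y[0]
rho = 0.5 * sol.y[1] ** 2
print(f"max density reached: {rho.max():.4f}   (bounce at rho_c = {rho_c})")
print(f"H runs from {H[0]:.3f} to {H[-1]:.3f}: contraction turns into expansion")
```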

The second part of the talk considers inflation in Bianchi I loop cosmologies.  Inflation is a period of exponential expansion of the early universe which was initially introduced in order to resolve the so-called horizon and flatness problems.  One of the major results of inflation is that it generates small fluctuations that are of exactly the form that are observed in the small temperature variations in the cosmic microwave background today.  For more information about inflation in loop quantum cosmology, see the previous ILQGS talks by William Nelson, Ivan Agullo, Gianluca Calcagni and David Sloan, as well as the blog posts that accompany these presentations.

Although inflation is often considered in the context of isotropic space-times, it is important to remember that, in the presence of matter fields such as radiation and cold dark matter, anisotropic space-times become isotropic at late times.  The fact that our universe appears isotropic today therefore does not mean that it necessarily was isotropic some 13.8 billion years ago.  Because of this, it is necessary to understand how the dynamics of inflation change when anisotropies are present.  As mentioned at the beginning of this blog post, there is considerably more freedom in Bianchi models than in FLRW space-times, and so expectations coming from studying inflation in isotropic cosmologies may be misleading in the more general situation.

There are several interesting issues that are worth considering in this context, and in this talk the focus is on two questions in particular.  First, is it easier or harder to obtain the initial conditions necessary for inflation?  In other words, is more or less fine-tuning of the initial conditions required?  As it turns out, the presence of anisotropies actually makes it easier for a sufficient amount of inflation to occur.  The second problem is to determine how the quantum geometry effects of loop quantum cosmology change the results one would expect from classical general relativity.  The main modification found here concerns the relation between the amount of anisotropy present in the space-time (which can be quantified in a precise manner) and the amount of inflation that occurs.  While this relation is monotonic in classical general relativity, this is no longer the case when loop quantum cosmology effects are taken into account.  Instead, there is a specific amount of anisotropy which extremizes the amount of inflation that will occur, with a turnaround after that point.  The details of these two results are given in the talk.

Tuesday, March 26, 2013

Reduced loop quantum gravity

Tuesday, Mar 12th.
Emanuele Alesci, Francesco Cianfrani
Title: Quantum reduced loop gravity
PDF of the talk (4Mb) Audio [.wav 29MB] Audio [.aif 3MB]

By Emanuele Alesci, Warsaw University, and Francesco Cianfrani, Wrocław University

We propose a new framework for the loop quantization of symmetry-reduced sectors of General Relativity, called Quantum Reduced Loop Gravity, and we apply this scheme to the inhomogeneous extension of the Bianchi I cosmological model (a cosmology that is homogeneous but anisotropic). To explain the meaning of this sentence we need several ingredients that will be presented in the next sections. But let us first focus on the meaning of “symmetry reduction”: this process simply means that if a physical system has some kind of symmetry, we can use it to reduce the number of independent variables needed to describe it. Symmetry, in general, allows one to restrict the variables of the theory to its true independent degrees of freedom. For instance, consider a point-like spinless particle moving on a plane under a central potential. The system is invariant under 2-dimensional rotations on the plane around the center of the potential, and as a consequence the angular momentum is conserved. The angular momentum around the origin is a constant of motion, and the only “true” dynamical variable is the radial coordinate of the particle. Going to the phase space (the space of positions and momenta of the theory), it can be parameterized by the radial and angular coordinates together with the corresponding momenta, but the symmetry forces the momentum associated with the angular coordinate to be conserved. The reduced phase space associated with such a system is parameterized by the radial coordinate and momentum, from which, given the initial conditions, the whole trajectory of the particle in the plane can be reconstructed. Quantization in the reduced phase space is usually easier to handle than in the full phase space, and this is the main reason why it is a technique frequently used to test approaches towards Quantum Gravity, whose final theory is still elusive. In this respect, the canonical analysis of homogeneous models (Loop Quantum Cosmology) and of spherically-symmetric systems (Quantum Black Holes) in Loop Quantum Gravity (LQG) has mostly been performed by first restricting to the reduced phase space and then quantizing the resulting system (what is technically known as reduced quantization). The basic idea of our approach is to invert the order of “reduction” and “quantization”. The motivation will come directly from our analysis and, in particular, from the failure of reduced quantization to provide a sensible dynamics for the inhomogeneous extensions of the homogeneous anisotropic Bianchi I model. Hence, we will follow a different path by defining a “quantum” reduction of the Hilbert space of quantum states of the full theory down to a subspace which captures the relevant degrees of freedom. This procedure will allow us to treat the inhomogeneous Bianchi I system directly at the quantum level in a computable theory with all the ingredients of LQG (just simplified due to the quantum reduction).
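To make the example concrete, here is a minimal sketch (not from the paper) that integrates the planar central-potential system and checks that the angular momentum is conserved along the trajectory, so that only the radial motion is genuinely dynamical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the central-potential example above (not from the paper):
# a unit-mass particle in a plane with V(r) = -1/r.  Angular momentum L is
# conserved, so the reduced system is 1-dimensional: radial motion in the
# effective potential V_eff(r) = L^2/(2 r^2) - 1/r.

def rhs(t, state):
    x, vx, y, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, -x / r3, vy, -y / r3]   # acceleration = -grad V = -r_vec / r^3

state0 = [1.0, 0.0, 0.0, 1.2]           # position (1,0), velocity (0,1.2)
sol = solve_ivp(rhs, [0.0, 20.0], state0, rtol=1e-10, atol=1e-12)

x, vx, y, vy = sol.y
L = x * vy - y * vx                     # angular momentum along the trajectory
print(f"L stays constant: min {L.min():.10f}, max {L.max():.10f}")

E = 0.5 * (vx**2 + vy**2) - 1.0 / np.hypot(x, y)   # conserved energy, cross-check
print(f"energy drift: {E.max() - E.min():.2e}")
```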

To proceed, let us first review the main features of LQG.

 Loop Quantum Gravity
LQG is one of the most promising approaches to the quantization of the gravitational field. Its formulation is canonical and thus based on a 3+1 splitting of the space-time manifold. The phase space is parameterized by the Ashtekar-Barbero connections and the associated momenta, from which one can compute the metric of spatial sections. A key point of this reformulation is the existence of a gauge invariance (technically known as SU(2) gauge invariance), which, together with background independence, leads to the so-called kinematical constraints of the theory (every time there is a symmetry in a theory, an associated constraint emerges, implying that the variables are not independent and one has to isolate the true degrees of freedom). The quantization procedure is inspired by the approaches developed in the 70s to describe gauge theories on the lattice in the strong-coupling limit. In particular, the quantum states are given in terms of spin networks, which are graphs with "colors" on the links between intersections. An essential ingredient of LQG is background independence. The way this symmetry is implemented is a completely new achievement in Quantum Gravity, and it allows one to define a regularized expression (free from infinities) for the operator associated with the Hamiltonian constraint, which governs the dynamics of the theory. Thanks to a procedure introduced by Thiemann, the Hamiltonian constraint can be approximated over a certain triangulation of the spatial manifold. The limit in which the triangulation gets finer and finer gives back the classical expression, and the constraint is well defined at the quantum level over s-knots (classes of spin networks related by smooth deformations). The reason is that s-knots are diffeomorphism invariant and thus insensitive to the characteristic length of the triangulation. This means that the Hamiltonian constraint can be consistently regularized and, moreover, the associated algebra is anomaly-free. Unfortunately, the resulting expression cannot be computed analytically, because of the presence of the volume operator, which is complicated. This drawback appears to be a technical difficulty rather than a theoretical obstruction, and for this reason our aim is to try to overcome it in a simplified model, such as a cosmological one.

 Loop Quantum Cosmology 
Loop Quantum Cosmology (LQC) is the best theory at our disposal to treat homogeneous cosmologies. LQC is based on a quantization in the reduced phase space, which means that the reduction according to the symmetry is made entirely at the classical level. Once the classical reduction is made, one proceeds to quantize the remaining degrees of freedom with LQG techniques. We know that our Universe experiences a highly isotropic and homogeneous phase at scales bigger than 100 Mpc. The easiest cosmological description is that of Friedmann-Robertson-Walker (FRW), in which one deals with an isotropic and homogeneous line element described by only one variable, the scale factor. A generalization is given by the anisotropic extensions, the so-called Bianchi models, in which there are three scale factors defined along some fiducial directions. In LQC one fixes the metric to be of the FRW or Bianchi type and quantizes the dynamical variables. However, a direct derivation from LQG is still missing, and it is difficult to accommodate inhomogeneities in this setting, because the theory is defined on the homogeneous reduced phase space.

 Inhomogeneous extension of the Bianchi models:
We want to define a new model for cosmology able to retain all the nice features of LQG, in particular a sort of background independence by which the regularization of the Hamiltonian constraint can be carried out as in the full theory. In this respect, we consider the simplest Bianchi model, type I (a homogeneous but anisotropic space-time), and we define an inhomogeneous extension characterized by scale factors that depend on space. This inhomogeneous extension contains the homogeneous phase, in an arbitrary parameterization, as a limiting case. The virtue of these models is that they are invariant under what we call reduced-diffeomorphism invariance: invariance under the restricted class of diffeomorphisms preserving the fiducial directions of the anisotropies of the Bianchi I model. This is precisely the kind of symmetry we were looking for! In fact, once quantum states are based on reduced graphs, whose edges run along the fiducial directions, we can define reduced s-knots, which are insensitive to the length of any cubulation of the spatial manifold (we speak of a cubulation because reduced graphs admit only cubulations, not triangulations). Therefore, all we have to do is repeat Thiemann's construction for a cubulation rather than a triangulation. But does it give a good expression for the Hamiltonian constraint? The answer is no, and the reason is that there is an additional symmetry in the reduced phase space that prevents us from repeating the construction used by Thiemann for the Hamiltonian constraint. Hence, the dynamical issue cannot be addressed by standard LQG techniques in reduced quantization.

 Quantum-Reduced Loop Gravity

What are we missing in reduced quantization? The idea is that we have reduced the gauge symmetry too much, and that is what prevents us from constructing the Hamiltonian. We therefore go back, do not reduce the symmetry, and proceed to quantize first; we then impose the reduction of the symmetry at the quantum level. Hence, the classical expression of the Hamiltonian constraint for the Bianchi I model can be quantized according to the Thiemann procedure. Moreover, the associated matrix elements can be computed analytically, because the volume operator takes a simplified form in the new Hilbert space. Therefore, we have a quantum description of the inhomogeneous Bianchi I model in which all the techniques of LQG can be applied and all the computations can be carried out analytically. This means that for the first time we have a model in which we can explicitly test numerous aspects of loop quantization: Thiemann's original graph-changing Hamiltonian, the master constraint program, Algebraic Quantization, or the new deparameterized approach with matter fields can all be tested. The model is a cuboidal lattice whose edges are endowed with quantum numbers, with reduced relations between those numbers at the vertices. In short, we have a sort of hybrid “LQC” along the edges with LQG relationships at the nodes, but with a graph structure and a diagonal volume! This means that we have an analytically tractable model, closer to LQG than LQC, and potentially able to treat inhomogeneities and anisotropies at once. Is this model meaningful? What we have to do now is “only” physics: as a first test, try to work out the semiclassical limit. If this model yields General Relativity in the classical regime, then we can proceed to compare its predictions with Loop Quantum Cosmology in the quantum regime, inserting matter fields and analyzing their role, discussing the behavior of inhomogeneities, and so on. We will see!

Thursday, November 1, 2012

General relativity in observer space


Tuesday, Oct 2nd.
Derek Wise, FAU Erlangen
Title: Lifting General Relativity to Observer Space
PDF of the talk (700k) Audio [.wav 34MB], Audio [.aif 3MB].

by Jeffrey Morton, University of Hamburg.

You can read a more technical and precise version of this post at Jeff's own blog.

This talk was based on a project of Steffen Gielen and Derek Wise, which has taken written form in a few papers (two shorter ones, "Spontaneously broken Lorentz symmetry for Hamiltonian gravity", "Linking Covariant and Canonical General Relativity via Local Observers", and a new, longer one called "Lifting General Relativity to Observer Space").

The key idea behind this project is the notion of "observer space": a space of all observers in a given universe. This is easiest to picture when one starts with a space-time. Mathematically, this is a manifold M with a Lorentzian metric, g, which among other things determines which directions are "timelike" at a given point. Then an observer can be specified by choosing two things: first, a particular point (x0,x1,x2,x3) = x, an event in space-time; second, a future-directed timelike direction, which is the tangent to the space-time trajectory of a "physical" observer passing through the event x. The space of observers consists of all these choices: what is known as the "future unit tangent bundle of M". However, using the notion of a "Cartan geometry", one can give a general definition of observer space which makes sense even when there is no underlying space-time manifold.
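As a small concrete check of this picture (an illustration for flat Minkowski space, not part of the talk), one can verify numerically that boosts move one observer into another, while rotations stabilize the observer at rest:

```python
import numpy as np

# Minimal sketch (illustration only): observers at a fixed event of Minkowski
# space are future-directed unit timelike vectors.  A boost maps one observer
# to another; a spatial rotation stabilizes the "standard" observer -- the
# stabilizer-subgroup structure discussed below.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-+++)

def boost_x(rapidity):
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = ch
    B[0, 1] = B[1, 0] = sh
    return B

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(4)
    R[1, 1] = R[2, 2] = c
    R[1, 2], R[2, 1] = -s, s
    return R

u = np.array([1.0, 0.0, 0.0, 0.0])        # observer at rest
v = boost_x(0.7) @ u                      # a boosted observer
print("norm of u:", u @ eta @ u)          # -1: unit timelike
print("norm of v:", v @ eta @ v)          # still -1: boosts move within observer space
print("rotation fixes u:", np.allclose(rot_z(1.3) @ u, u))  # a stabilizer element
```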

The result is a surprising, relatively new physical intuition saying that "space-time" is a local and observer-dependent notion, which in some special cases can be extended so that all observers see the same space-time. This appears to be somewhat related to the idea of relativity of locality. More directly, it is geometrically similar to the fact that a slicing of space-time into space and time is not unique, and not respected by the full symmetries of the theory of relativity. Rather, the division between space and time depends on the observer.

So, how is this described mathematically? In particular, what did I mean up there by saying that space-time itself becomes observer-dependent? The answer uses Cartan geometry.

Cartan Geometry

Roughly, Cartan geometry is to Klein geometry as Riemannian geometry is to Euclidean geometry.

Klein's Erlangen Program, carried out in the late 19th century, systematically brought abstract algebra, and specifically the theory of Lie groups, into geometry, by placing the idea of symmetry in the leading role. It describes "homogeneous spaces" X, which are geometries in which every point is indistinguishable from every other point. This is expressed by the action of some Lie group G, consisting of all transformations of an underlying space which preserve its geometric structure. For n-dimensional Euclidean space En, the symmetry group is precisely the group of transformations that leave the data of Euclidean geometry, namely lengths and angles, invariant. This is the Euclidean group, generated by rotations and translations.
But any point x will be fixed by some symmetries and not others, so there is a subgroup H, the "stabilizer subgroup", consisting of all symmetries which leave x fixed. In Euclidean space, for example, a point is fixed precisely by the rotations centered at that point.


The pair (G,H) is all we need to specify a homogeneous space, or Klein geometry. Klein's insight is to reverse this: we may obtain Euclidean space from the group G itself, essentially by "ignoring" (or, more technically, by "modding out") the subgroup H of transformations that leave a particular point fixed. Klein's program lets us do this in general, given a pair (G,H).  The advantage of this program is that it gives a great many examples of geometries (including ones previously not known) treated in a unified way. But the most relevant ones for now are:
  • n-dimensional Euclidean space, as we just described.
  • n-dimensional Minkowski space. The Euclidean group gets replaced by the Poincaré group, which includes translations and rotations, but also the boosts of special relativity. This is the group of all transformations that fix the geometry determined by the Minkowski metric of flat space-time.
  • de Sitter space and anti-de Sitter spaces, which are relevant for studying general relativity with a cosmological constant.
Just as a Lorentzian or Riemannian manifold is "locally modeled" by Minkowski or Euclidean space respectively, a Cartan geometry is locally modeled by some Klein geometry. Measurements close enough to a given point in the Cartan geometry look similar to those in the Klein geometry.

Since curvature is measured by the development of curves, we can think of each homogeneous space as a flat Cartan geometry with itself as a local model, just as the Minkowski space of special relativity is a particular example of a solution to general relativity.

The idea that the curvature of a manifold depends on the model geometry being used to measure it shows up in the way we apply this geometry to physics.

Gravity and Cartan Geometry

The MacDowell-Mansouri formulation of gravity can be understood as a theory in which general relativity is modeled by a Cartan geometry. Of course, a standard way of presenting general relativity is in terms of the geometry of a Lorentzian manifold. The Palatini formalism describes general relativity not in terms of a metric, but in terms of a set of vector fields governed by the Palatini equations. This can be derived from a Cartan geometry through the theory of MacDowell-Mansouri, which "breaks the full symmetry" of the geometry at each point, generating the vector fields that arise in the Palatini formalism.  So General Relativity can be written as the theory of a Cartan geometry modeled on de Sitter space.


Observer Space

The idea in defining an observer space is to combine two symmetry reductions into one. One has a model Klein geometry, which reflects the "symmetry breaking" that happens when choosing one particular point in space-time, or event.  The time directions are tangent vectors to the world-line (space-time trajectory) of a "physical" observer at the chosen event. So the model Klein geometry is the space of such possible observers at a fixed event. The stabilizer subgroup for a point in this space consists of just the rotations of space-time around the corresponding observer; the boosts among the Lorentz transformations are what relate different observers. Locally, choosing an observer amounts to splitting the model space-time at that point into a product of space and time. If we combine both reductions at once, we get a 7-dimensional Klein geometry, related to de Sitter space, which we think of as a homogeneous model for the "space of observers".

This may be intuitively surprising: it gives a perfectly concrete geometric model in which "space-time" is relative and observer-dependent, and perhaps only locally meaningful, in just the same way as the distinction between "space" and "time" in general relativity. That is, it may be impossible to determine objectively whether two observers are located at the same base event or not. This is a kind of "relativity of locality", geometrically much like the by-now more familiar relativity of simultaneity. Each observer will reach certain conclusions as to which observers share the same base event, but different observers may not agree. The observers coincident with a given observer are those reached by a good class of geodesics in observer space moving only in directions that observer sees as boosts.

When one has a certain integrability condition, one can reconstruct a space-time from the observer space: two observers will agree whether or not they are at the same event. This is the familiar world of relativity, where simultaneity may be relative, but locality is absolute.

Lifting Gravity to Observer Space

Apart from describing this model of relative space-time, another motivation for describing observer space is that one can formulate canonical (Hamiltonian) general relativity locally near each point in such an observer space. The goal is to make a link between covariant and canonical quantization of gravity. Covariant quantization treats the geometry of space-time all at once, by means of what is known as a Lagrangian. This is mathematically appealing, since it respects the symmetry of general relativity, namely its diffeomorphism-invariance (or, speaking more physically, the fact that its laws take the same form for all observers). On the other hand, it is remote from the canonical (Hamiltonian) approach to quantization of physical systems, in which the concept of time is fundamental. In the canonical approach, one quantizes the space of states of a system at a given point in time, and the Hamiltonian for the theory describes its evolution. This is problematic for diffeomorphism-, or even Lorentz-invariance, since coordinate time depends on a choice of observer. The point of observer space is that we consider all these choices at once. Describing general relativity in observer space is both covariant and based on (local) choices of time direction. A "field of observers" is then a choice, at each base event in M, of an observer based at that event. A field of observers may or may not correspond to a particular decomposition of space-time into space evolving in time, but locally, at each point in observer space, it always looks like one. The resulting theory describes the dynamics of space-geometry over time, as seen locally by a given observer, in terms of a Cartan geometry.

This splitting, along the same lines as the one in MacDowell-Mansouri gravity described above, suggests that one could lift general relativity to a theory on observer space. This amounts to describing fields on observer space and a theory for them, such that the splitting of the fields gives back the usual fields of general relativity on space-time, and the equations give back the usual equations. This part of the project is still under development, but there is indeed a lifting of the equations of general relativity to observer space. This tells us that general relativity can be defined purely in terms of the space of all possible observers, and when there is an objective space-time, the resulting theory looks just like general relativity. In the case when there is no "objective" space-time, the result includes some surprising new fields: whether this is a good or a bad thing is not yet clear.

Thursday, October 18, 2012

More on Shape Dynamics

Tim Koslowski, Perimeter Institute
Title: Effective Field Theories for Quantum Gravity from Shape Dynamics
PDF of the talk (0.5Mb) Audio [.wav 31MB], Audio [.aif 3MB].


By Astrid Eichhorn, Perimeter Institute

Gravity and Quantum Physics have resisted unification into a common theory for several decades. We know a lot about the classical nature of gravity, in the form of Einstein's theory of General Relativity, which is a field theory. During the last century, we have learnt how to quantize other field theories, such as the gauge theories in the Standard Model of Particle Physics. The crucial difference between a classical theory and a quantum theory lies in the effect of quantum fluctuations. Due to Heisenberg's uncertainty principle, quantum fields can fluctuate, and this changes the effective dynamics of the field. In a classical field theory, the equations of motion can be derived by extremizing a functional called the classical action. In a quantum field theory, the equations of motion for the mean value of the quantum field cannot be derived from the classical action. Instead, they follow from the so-called effective action, which contains the effect of all quantum fluctuations. Mathematically, to incorporate the effect of quantum fluctuations, a procedure known as the path integral has to be performed, which, even within perturbation theory (where one assumes solutions differ little from a known one), is a very challenging task. A method to make this task doable is the so-called (functional) Renormalization Group: not all quantum fluctuations are taken into account at once, but only those with a specific momentum, usually starting with the high-momentum ones. In a pictorial way, this means that we "average" the quantum fields over small distances (corresponding to the inverse of the large momentum). The effect of the high-momentum fluctuations is to change the values of the coupling constants in the theory: the couplings are no longer constant, but depend on the momentum scale, and so we should more appropriately call them running couplings. As an example, consider Quantum Electrodynamics: we know that the classical equations of motion are linear, so there is no interaction between photons. As soon as we go over to the quantum theory, this is different: quantum fluctuations of the electron field at high momenta induce a photon-photon interaction (however, one with a very tiny coupling, so the effect is difficult to see experimentally).

 Fig 1: Electron fluctuations induce a non-vanishing photon-photon coupling in Quantum Electrodynamics.

The question of whether a theory can be quantized, i.e., whether the full path integral can be performed, then finds an answer in the behavior of the running couplings: if the effect of quantum fluctuations at high momenta is to make the couplings divergent at some finite momentum scale, the theory is only an effective theory at low energies, not a fundamental theory. On the technical side this implies that when we perform the integrals that take into account the effect of quantum fluctuations, we cannot extend them to arbitrarily high momenta; instead we have to "cut them off" at some scale.

The physical interpretation of such a divergence is that the theory is telling us that we are really using effective degrees of freedom, not fundamental ones.  As an example, if we construct a theory of the weak interaction between fermions without the W-bosons and the Z-boson, the coupling between the fermions will diverge at a scale related to the mass scale of the missing bosons. In this manner, the theory lets us know that new degrees of freedom - the W- and Z-bosons - have to be included at this momentum scale. One example that we know of a truly fundamental theory, i.e., one whose degrees of freedom are valid up to arbitrarily high momentum scales, is Quantum Chromodynamics (QCD). Its essential feature is an ultraviolet-attractive Gaussian fixed point (one that corresponds to a free, non-interacting theory), which is nothing but the statement that the running coupling of QCD weakens towards high momenta. This is called asymptotic freedom, since asymptotically, at high momenta, the theory becomes non-interacting, i.e., free.
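The two behaviors can be illustrated with the exact one-loop solutions for the running couplings; the coefficient b below is a generic positive number, not the one-loop value of any particular theory:

```python
import numpy as np

# Minimal sketch of one-loop running couplings (b > 0 is a generic positive
# one-loop coefficient, purely illustrative).  With t = ln(mu/mu0):
#   QED-like:  d(alpha)/dt = +b * alpha^2  ->  divergence (Landau pole)
#   QCD-like:  d(g)/dt     = -b * g^3     ->  asymptotic freedom
b = 0.1
t = np.linspace(0.0, 80.0, 9)

alpha0, g0 = 0.1, 1.0
alpha = alpha0 / (1.0 - b * alpha0 * t)          # exact one-loop solutions
g = g0 / np.sqrt(1.0 + 2.0 * b * g0**2 * t)

for ti, ai, gi in zip(t, alpha, g):
    note = "  <- growing towards a pole at t = 100" if ai > 0.3 else ""
    print(f"t = {ti:5.1f}:  alpha = {ai:7.4f}{note}   g = {gi:.4f}")
# the QED-like coupling blows up at t = 1/(b*alpha0) = 100, while the
# QCD-like coupling flows to zero: its free (Gaussian) fixed point is
# ultraviolet-attractive
```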

 There is nothing wrong with a theory that is not fundamental in this sense, it simply means that it is an effective theory, which we can only use over a finite range of momenta. This concept is well-known in physics, and used very successfully. For instance, in condensed-matter systems, the effective degrees of freedom are, e.g., phonons, which are collective excitations of an atom lattice, and obviously cease to be a valid description of the system on distance scales below the atomic scale.

Quantum gravity actually exists as an effective quantum field theory, and quantum gravity effects can be calculated, treating the space-time metric as a quantum field like any other. However, being an effective theory means that it will only describe physics over a finite range of scales, and will presumably break down somewhere close to a scale known as the Planck scale (about 10^-33 centimeters). This implies that we do not understand the microscopic dynamics of gravity. What are the fundamental degrees of freedom that describe quantum gravity beyond the Planck scale, what is their dynamics, and what symmetries govern it?

The question of whether we can arrive at a fundamental quantum theory of gravity within the standard quantum field theory framework boils down to understanding the behavior of the running couplings of the theory. In perturbation theory, the answer has been known for a long time: (in four space-time dimensions) instead of weakening towards high momenta, the Newton coupling increases. More formally, this means that the free fixed point (technically known as the Gaussian fixed point) is not ultraviolet-attractive. For this reason, most researchers in quantum gravity gave up on trying to quantize gravity along the same lines as the gauge theories in the Standard Model of particle physics. They concluded that the metric does not carry the fundamental microscopic degrees of freedom of a continuum theory of quantum gravity, but is only an effective description valid at low energies. However, the fact that the Gaussian fixed point is not ultraviolet-attractive really only means that perturbation theory breaks down. Beyond perturbation theory, there is the possibility to obtain a fundamental quantum field theory of gravity. The arena in which we can understand this possibility is called theory space. This is an (infinite-dimensional) space spanned by all running couplings which are compatible with the symmetries of the theory. So, in the case of gravity, theory space usually contains the Newton coupling, the cosmological constant, the couplings of curvature-squared operators, etc. At a certain momentum scale, all these couplings have some value, specifying a point in theory space. Changing the momentum scale, and including the effect of quantum fluctuations on these scales, implies a change in the values of these couplings. Thus, when we change the momentum scale continuously, we flow through theory space on a so-called Renormalization Group (RG) trajectory. For the couplings to stay finite at all momentum scales, this trajectory should approach a fixed point at high momenta (more exotic possibilities, such as limit cycles or infinitely extendible trajectories, could also exist). At a fixed point, the values of the couplings do not change anymore when further quantum fluctuations are taken into account. Then we can take the limit of arbitrarily high momentum scales trivially, since nothing changes as we go to higher scales, i.e., the theory is scale invariant. The physical interpretation of this is that the theory does not break down at any finite scale: the degrees of freedom that we have chosen to parameterize the physical system are valid up to arbitrarily high scales. An example is given by QCD, which, as we mentioned, is asymptotically free; the physical interpretation is that quarks and gluons are valid microscopic degrees of freedom. There is no momentum scale at which we need to expect further particles, or a possible substructure of quarks and gluons.

In the case of gravity, quantization along these lines therefore requires a non-Gaussian fixed point. At such a point, where the couplings are non-vanishing, the RG flow stops, and we can take the limit of arbitrarily high momenta. This idea goes back to Weinberg, and is called asymptotic safety. Asymptotically, at high momenta, we are "safe" from divergences in the couplings, since they approach a fixed point, at which they assume some finite value. Since finite couplings imply finiteness of physical observables (when the couplings are defined appropriately), an asymptotically safe theory gives finite answers to all physical questions. In this construction, the fixed point defines the microscopic theory, i.e., the interaction of the microscopic degrees of freedom.
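
In the schematic flow above, the asymptotic-safety scenario is visible directly: besides the Gaussian fixed point at g = 0, the beta function beta_g = 2 g - omega g^2 has a second zero at

g* = 2/omega ,

where the flow stops and the coupling freezes at a finite, non-vanishing value. This is of course only a one-coupling caricature; establishing the existence of such a non-Gaussian fixed point in gravity requires computations in much larger portions of theory space.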

As a side remark: if, confronted with an infinite-dimensional space of couplings, you worry about how such a theory can ever be predictive, note that fixed points come equipped with what is called a critical surface. Only if the RG flow lies within the critical surface of a fixed point will it actually approach the fixed point at high momenta. A finite-dimensional critical surface therefore means that the theory has only a finite number of free parameters, namely the couplings spanning the critical surface. The low-momentum values of these couplings, which are accessible to measurements, are not fixed by the theory: any values work, since they all correspond to trajectories within the critical surface. On the other hand, infinitely many couplings will be fixed by the requirement of lying in the critical surface. This automatically implies that we get infinitely many predictions from the theory (namely the values of all these so-called irrelevant couplings), which we can then (in principle) test in experiments.
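
The counting of free parameters can be made concrete in a small numerical sketch. The following Python snippet (with invented beta functions for two couplings; the numbers have no physical significance) linearizes the flow at a fixed point and reads off the critical exponents: directions with positive exponents are relevant, i.e., free parameters spanning the critical surface, while negative exponents correspond to irrelevant couplings, i.e., predictions.

import numpy as np

# Toy RG flow for two dimensionless couplings (g, lam): stand-ins for
# the Newton coupling and the cosmological constant. The coefficients
# are invented purely for illustration.
def beta(c):
    g, lam = c
    return np.array([2.0 * g - 1.5 * g**2,
                     -2.0 * lam + 0.5 * g])

# Non-Gaussian fixed point of this toy flow (solving beta = 0 by hand):
# g* = 2/1.5 = 4/3, lam* = 0.25 * g* = 1/3.
fp = np.array([4.0 / 3.0, 1.0 / 3.0])

# Stability matrix M_ij = d(beta_i)/d(coupling_j) at the fixed point,
# computed by finite differences. The critical exponents theta_i are
# minus the eigenvalues of M.
eps = 1e-6
M = np.zeros((2, 2))
for j in range(2):
    dc = np.zeros(2)
    dc[j] = eps
    M[:, j] = (beta(fp + dc) - beta(fp - dc)) / (2.0 * eps)

theta = -np.linalg.eigvals(M)
print("fixed point (g*, lam*):", fp)
print("critical exponents:", theta.real)
print("relevant directions (free parameters):", int(np.sum(theta.real > 0)))

In this toy case both exponents come out positive, so the critical surface is two-dimensional and the would-be theory has two free parameters; everything else would be a prediction.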


 Fig 2: A non-Gaussian fixed point has a critical surface, the dimensionality of which corresponds to the number of free parameters of the theory.

Two absolutely crucial ingredients in the search for an asymptotically safe theory of quantum gravity are the specification of the field content and of the symmetries of the theory. These determine which running couplings are part of theory space: the couplings of all possible operators that can be constructed from the fundamental fields while respecting the symmetries have to be included. Imposing an additional symmetry on theory space means that some of the couplings drop out of it. Most importantly, the (non)existence of a fixed point will depend on the choice of symmetries. A well-known example is the choice of a U(1) gauge symmetry (as in electromagnetism) versus an SU(3) one (as in QCD). The latter gives an asymptotically free theory; the former does not. Thus the (gauge) symmetries of a system crucially determine its microscopic behavior.

In gravity, there are several classically equivalent versions of the theory (i.e., they admit the same solutions to the equations of motion). A partial list contains standard Einstein gravity with the metric as the fundamental field, Einstein-Cartan gravity, where the metric is exchanged for the vielbein (a set of vector fields), and a unimodular version of metric gravity (we will discuss it in a second). The first step in the construction of a quantum field theory of gravity thus consists in the choice of theory space. Importantly, this choice exists in the path-integral framework as well as in the Hamiltonian framework: in both cases there are several classically equivalent formulations of the theory, which differ at the quantum level, and in particular, only some of them might exist as fundamental theories.

To illustrate that the choice of theory space is really a physical choice, consider the case of unimodular quantum gravity: here, the metric determinant is restricted to be constant. This implies that the spectrum of quantum fluctuations differs crucially from that of the non-unimodular version of metric gravity, and most importantly, it differs not just in form but in physical content. Accordingly, the evaluation of Feynman diagrams in perturbation theory will yield different results in the two cases. In other words, the running couplings in the two theory spaces will exhibit different behavior, reflected in the existence of fixed points as well as in the critical exponents, which determine the free parameters of the theory.
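
For concreteness (a textbook statement about unimodular gravity, not a result of the work discussed here): the unimodularity condition fixes the metric determinant,

det(g_munu) = const. (often normalized to -1) ,

so the local volume element is non-dynamical. The trace part of the metric fluctuation, the conformal mode, is thereby removed from the spectrum of fluctuations, and the cosmological constant arises as a constant of integration rather than as a coupling in the action, which is one way of seeing that the physical content of the quantum theory differs.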

This is where the new field of shape dynamics opens up important new possibilities. As explained, theories that classically describe the same dynamics can still have different symmetries. In particular, this works for gauge theories, where the symmetry is nothing but a redundancy of description. Therefore only a reduced configuration space (the space of all possible configurations of the field) is physical, and along certain directions the full configuration space contains physically redundant configurations. A simple example is given by (quantum) electrodynamics, where the longitudinal vibration mode of the photon is unphysical (in vacuum), since the gauge freedom restricts the photon to have two physical (transverse) polarizations.
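
In formulas, the redundancy of electrodynamics reads: the gauge potential A_mu and its transform

A_mu -> A_mu + d_mu chi ,

for an arbitrary function chi, give the same field strength F_munu and hence the same values for all physical observables. This freedom is exactly what allows one to eliminate the longitudinal mode, leaving the two transverse polarizations as the physical content of the photon.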

One can now imagine how two different theories with different gauge symmetries yield the same physics. The configuration spaces can in fact be different; it is only the values of physical observables on the reduced configuration space that have to agree. This makes a crucial difference for the quantum theory, as it implies different theory spaces, defined by different symmetries, and accordingly different behavior of the running couplings.

Shape dynamics trades part of the four-dimensional, i.e., spacetime, symmetries of General Relativity (namely refoliation invariance: invariance under different choices of which spatial slices of the four-dimensional spacetime become "space") for what is known as local spatial conformal symmetry, which implies local scale invariance of space. This also implies a key difference in the way spacetime is viewed in the two theories: whereas spacetime is one unified entity in General Relativity, shape dynamics builds up spacetime by "stacking" spatial slices (for more details, see the blog entry by Julian Barbour). Fixing a particular gauge in each of the two formulations then yields two equivalent theories.
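
Schematically, a local spatial conformal transformation acts on the spatial metric h_ij of a slice as

h_ij(x) -> e^{4 phi(x)} h_ij(x) ,

with an arbitrary function phi(x) (in shape dynamics restricted so that the total spatial volume is preserved). Shape dynamics demands invariance under these local rescalings of space, in place of the refoliation invariance of General Relativity.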

Although the two theories are classically equivalent for observational purposes, their quantized versions will differ. In particular, only one of them might admit a UV completion as a quantum field theory with the help of a non-Gaussian fixed point.

A second possibility is that both theory spaces admit a non-Gaussian fixed point, but that what is known as the universality class differs: loosely speaking, the universality class is determined by the rate of approach to the fixed point, which is captured by the so-called critical exponents. Most importantly, while the details of RG trajectories typically depend on the regularization scheme (which specifies how exactly quantum fluctuations in the path integral are integrated out), the critical exponents are universal. The full collection of critical exponents of a fixed point then determines the universality class. That universality classes are determined by symmetries is very well known from second-order phase transitions in thermodynamics: since the correlation length diverges in the vicinity of a second-order phase transition, the microscopic details of different physical systems do not matter, and the behavior of physical observables near the transition is determined purely by the field content, dimensionality, and symmetries of the system.

Different universality classes can differ in the number of relevant couplings, and thus correspond to theories with a different "amount of predictivity". Classically equivalent theories, when quantized, can therefore have a different number of free parameters. Accordingly, not all universality classes will be compatible with observations, and the choice of theory space for gravity is crucial for identifying which universality class might be "realized in nature".

Clearly, the canonical quantizations of standard General Relativity and of shape dynamics will also differ, since shape dynamics has a non-trivial, albeit non-local, Hamiltonian.

Finally, what is known as doubly General Relativity is the last step in the new construction. Starting from the symmetries of shape dynamics, one can discover a hidden BRST symmetry in General Relativity. BRST symmetries are symmetries of gauge-fixed path integrals for gauge theories: doing perturbation theory requires fixing the gauge, which yields a path-integral action that is not gauge invariant. The remnant of gauge invariance is encoded in BRST invariance, so it can be viewed as the quantum version of a gauge symmetry.
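
Schematically (this is the general structure of any BRST construction, not a result specific to gravity): the gauge-fixed action takes the form

S = S_classical + S_gauge-fixing + S_ghosts ,

and the BRST transformation s acts on the fields and the ghosts in such a way that it is nilpotent, s^2 = 0, and leaves S invariant. Physical, gauge-invariant observables are then characterized as BRST-invariant quantities, which is why BRST invariance can be regarded as the quantum remnant of the gauge symmetry.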

In the case of gravity, the BRST invariance connected to the diffeomorphism invariance of General Relativity is supplemented by a BRST invariance connected to local conformal invariance. This is what is referred to as symmetry doubling. Since gauge symmetries restrict the Renormalization Group flow in a theory space, the discovery of a new BRST symmetry in General Relativity is crucial to fully understand the possible existence of a fixed point and its universality class. Thus the newly discovered BRST invariance might turn out to be a crucial ingredient in constructing a quantum theory of gravity.