Three Degrees Above Zero


Researchers: Dr. L. Sriramkumar and V. Sreenath

[Image: The cosmic microwave background as mapped by the Planck satellite. Courtesy: ESA, Planck collaboration]

These temperature fluctuations of radiation in the early universe are the seeds of all structure in the universe today. But how did these fluctuations arise?

The Big Bang had a problem. Our theory of the early universe could not explain how widely-separated regions of it had seemingly influenced each other even though they clearly couldn’t have had the time to do so. Either the cosmic speed limit had been broken or it had all begun in a way more improbable than a pencil standing on its tip all by itself.

Then, in 1979, a Stanford postdoc named Alan Guth famously had his “spectacular realization” about just how big the bang must have been. A particle physicist, he found that the standard tools he used to study worlds far smaller than atoms could also make space in the infant universe expand by a colossal factor of 10^28 (one followed by twenty-eight zeroes) in an unimaginably small sliver of time. This meant regions of the universe that are now far apart had a common origin. Inflation, he called it.

But for tiny irregularities in that process, we would not be here. Among the small army of cosmologists worldwide who seek to know what exactly happened are Dr. L. Sriramkumar of the Department of Physics at IIT Madras and his doctoral student V. Sreenath.

Cosmology is not unlike paleontology. Studying the past for clues to understand the present is what paleontologists do. Cosmologists do the same, because the finite speed of light means that they only see celestial objects as they were when the light left them. Like paleontologists, cosmologists can only observe, not experiment. Unlike paleontologists, cosmologists have to be satisfied with observing from millions, sometimes billions, of light years away. (One light year is the distance travelled by light in a year.)

For the most part, this means extracting all possible information from the faint light that reaches us. Depending on the circumstances, light can be thought of as a wave or as a particle. In its wave avatar, light — visible and invisible — from a celestial object reveals lines at particular positions in its spectrum, each one a signature of the element (or the molecule) involved. If the positions of these lines appear shifted towards the blue, more energetic, part of the spectrum compared to a reference source in the laboratory, it means the object is moving towards us. A shift towards the red, less energetic part of the spectrum, means the motion is away from us. This shift is known as the Doppler effect, and its extent is a measure of the object’s velocity.
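For readers who want the arithmetic: the shift is usually expressed as a redshift parameter z, which for speeds much smaller than that of light translates directly into a velocity. These are the standard textbook relations, quoted here only for illustration:

$$ z \;=\; \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}, \qquad v \;\approx\; c\,z \quad (\text{for } v \ll c), $$

where c is the speed of light; a positive z is a redshift (recession) and a negative z a blueshift (approach).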

This came in handy when, barely a century ago, astronomers were debating whether the well-known nebulae — diffuse, extended objects — were part of our galaxy, the Milky Way, or if they were galaxies like our own, only very far away. Then, in 1912, the American astronomer Vesto Slipher observed that most nebulae were redshifted — they were receding from us. He had the prescience to note that there might be more to this than the Doppler effect. Following up was Edwin Hubble, whose star rose when he combined the redshifts with the distances to these objects, and came to a startling conclusion, now known as Hubble’s law: a nebula twice as far away seemed to be receding twice as fast. That the Milky Way contained everything now became an untenable idea.

In parallel, cosmology emerged as a theoretical science, owing its origin to one man: Albert Einstein. Having already unified space and time into a single entity with the special theory of relativity, Einstein, in 1915, published the general theory of relativity, which revealed the gravitational force to be fundamentally related to the geometry of spacetime.

Physicists like such syntheses. They also make the reasonable demand that a physical law, and its mathematical expression, must be the same for all observers regardless of their positions and velocities.

Broadly speaking, there are three kinds of mathematical objects that help them do this. The simplest, a scalar, is just a number with an associated unit. Temperature is a familiar example. Two people, if they use the same units, will always agree upon the temperature at a given point in space and time regardless of how their respective coordinate systems are related. A vector, meanwhile, is a set of three numbers (for three-dimensional space) which together define a direction in space. Modify the coordinate system, and these three numbers will change, but in such a way that the vector, both magnitude and direction, remains intact. Velocity is an example. The third, and the most general, quantity is a tensor, which can be thought of as a set of three vectors associated with two directions. Thus, in three-dimensional space, a tensor has nine components, which also change in a particular way if you transform the coordinate system.
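A short numerical sketch can make this concrete. The snippet below (Python with NumPy; a toy illustration written for this article, not anything from the research described here) rotates the coordinate system and checks that a vector’s components change while its magnitude does not, and that a tensor transforms with two copies of the rotation:

import numpy as np

# Rotate the coordinate axes by 30 degrees about the z-axis
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

v = np.array([3.0, 4.0, 0.0])          # a velocity-like vector
v_rotated = R @ v                      # its components in the rotated coordinates

print(v, v_rotated)                                  # the three numbers change...
print(np.linalg.norm(v), np.linalg.norm(v_rotated))  # ...but the magnitude (5.0) does not

T = np.outer(v, v)                     # a simple tensor with nine components
T_rotated = R @ T @ R.T                # a tensor transforms with two copies of R, one per direction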

These three types of quantities can be used to describe not just matter and its associated phenomena, but also entities that are less obviously material. Such as a field, a concept that was introduced as a convenient device to describe how a force propagated, and which later took on a life of its own. One of the concepts at the foundations of modern physics, a field is another way of saying that each point in space can be uniquely associated with a scalar, a vector, or a tensor. Accordingly, we get scalar, vector, or tensor fields.

Each of the four fundamental forces in nature — strong, electromagnetic, weak, and gravitation — has an associated field. A particle that feels any of these forces will have a potential energy by virtue of its interaction with the corresponding field. Moving the particle from one point to another costs the field some energy. If that cost depends only on the positions of the two points and not on the path taken, each point in space can be uniquely assigned a scalar, called potential, which is the amount of energy expended to move the particle to that point from a common reference point. Thus, we get a scalar potential field. The gravitational field in Newtonian gravity is an example.
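For instance, the Newtonian gravitational potential at a distance r from a mass M, and the potential energy of a particle of mass m placed there, take the familiar textbook form (quoted for illustration):

$$ \Phi(r) \;=\; -\frac{G M}{r}, \qquad U \;=\; m\,\Phi(r), $$

where G is Newton’s gravitational constant; the potential depends only on the point in question, not on the path taken to reach it.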

The general theory of relativity, in contrast, represents the gravitational field mathematically as a tensor field. The source of the gravitational field is the energy-momentum tensor, analogous to mass in Newtonian gravity. It includes, however, not only the energy in matter but also that in radiation, as well as momentum. Another tensor, called the metric tensor, describes the shape of spacetime, which had until then always been thought to be flat. Given a particular distribution of matter-energy, a set of equations called Einstein’s field equations relates these two tensors and determines the curvature of spacetime. The motion of matter and radiation in this resulting geometry is what manifests itself as the gravitational force. In the words of John Wheeler, who championed Einstein’s theory in post-WWII America:

 

“Spacetime tells matter how to move;
matter tells spacetime how to curve”
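In compact notation, and leaving out the cosmological-constant term that enters the story shortly, the field equations read (standard form, shown for illustration):

$$ G_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}, $$

where $G_{\mu\nu}$, built from the metric tensor and its derivatives, describes the curvature of spacetime, and $T_{\mu\nu}$ is the energy-momentum tensor.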

 

Now that spacetime was no longer merely an empty stage, it didn’t take long for the true nature and scope of the cosmic drama to be realized. Einstein’s own equations showed that a static universe would be unstable, contrary to the prevailing belief in an unchanging cosmos. Unwilling to accept this, he fudged the equations by introducing an extra term now known as the cosmological constant. This term, a kind of anti-gravity, kept Einstein’s model universe from collapsing.

Others were more willing to confront the full implications of general relativity. Among them was the Russian Alexander Friedmann who, in 1922, found his eponymous solutions that described the universe, provided certain simplifying assumptions were made — inevitable given that cosmology, unlike astronomy, is concerned with the universe as a whole.

Cosmologists make the reasonable assumption that the part of the universe we live in isn’t special. Any other part of the universe would have a similar distribution of matter-energy and structures, provided we consider sufficiently large scales.

“When we say the universe is homogeneous, we’re talking about scales of the order of a hundred megaparsecs,” says Sreenath, where a parsec is a unit of distance equal to about 3.26 light years and the prefix mega denotes a million.

A second assumption, called isotropy, is that the universe looks the same in whichever direction you look. These two assumptions, together known as the cosmological principle, are now well-supported by observational evidence.

Friedmann had found the solution that embodied just such a homogeneous and isotropic universe. Independently, a Belgian priest-scientist, Georges Lemaitre, also found the same solution, as did two others. They realized that if the general theory of relativity was correct, the universe had to be either expanding or contracting. Observational evidence supported the former. If you rewind, the galaxies would approach each other, which led Lemaitre to propose that they must all have come from a single initial point, which he dubbed the “primeval atom.”

Soon, Hubble published his study of galactic redshifts and their distances, confirming that the universe was indeed expanding, tying together theory and observation in pointing to a universe which had a definite origin.

There was, however, another explanation that seemed possible. What if the universe had no origin and always was just as it is now? The space opening up between galaxies rushing apart could be filled with new matter appearing in just enough quantity to satisfy the cosmological principle. This neatly avoided the questions of how the universe began or what happened before it came into being. This theory, whose main proponent was the British astronomer Fred Hoyle, was called steady state, in appropriate contrast to a big bang (a term also coined by Hoyle).

An expanding universe naturally means that its energy density — of matter and radiation — is decreasing. But the energy density of radiation decreases faster than that of matter because, apart from the increasing volume, the light redshifts as space expands. A larger wavelength means less-energetic photons, the particle avatar of light. This, however, is a prediction from general relativity and is distinct from the Doppler effect. Known as the cosmological redshift, this is what Slipher and Hubble had observed.
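In terms of the scale factor a, which tracks how much the universe has expanded, the two energy densities dilute at different rates (standard results, quoted for illustration):

$$ \rho_{\text{matter}} \;\propto\; \frac{1}{a^{3}}, \qquad \rho_{\text{radiation}} \;\propto\; \frac{1}{a^{4}}, $$

the extra factor of a for radiation coming from the stretching of each photon’s wavelength as space expands.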

Now, if we run this backwards, the energy density of radiation increases faster than that of matter. Somewhere along the way the two must have been equal — an equality epoch — which cosmologists estimate happened when the universe was a few tens of thousands of years old. At times earlier than this, the universe must have been radiation-dominated and atoms, much less structures, could not have existed. This is because the radiation was energetic enough to break apart any nascent alliance between protons, neutrons and electrons, the three constituents of all atoms.

The universe, however, seemed to be made up of three-quarters hydrogen, nearly one-quarter helium and trace amounts of lithium and beryllium. That the Big Bang hypothesis could account for this naturally was first shown by one of Friedmann’s students, George Gamow. He found that the temperature and density of matter in the very early universe would have been high enough for nuclear reactions to produce the required quantities of the nuclei of these elements. Fred Hoyle later showed that all the heavier elements, including the ones vital for life such as carbon and oxygen, could be progressively manufactured in the nuclear fusion reactions that power the stars, which appeared much later in the history of the universe.

As space expanded and the temperature dropped, the nuclei formed would pair up with the electrons to form stable atoms. This is the recombination epoch, now dated at about 380,000 years after the Big Bang. Until then, radiation and matter had the same temperature, a condition known as thermal equilibrium. No longer impeded by charged particles, the photons were now free to begin their journey across the vastness of space and the immensity of time, collectively bearing the imprint of their last encounters of that era — a photograph, as it were, of the infant universe.

Lemaitre’s primeval atom implied a singularity — infinite energy density and temperature — signalling the breakdown of the known laws of physics. Physicists abhor singularities, a main reason for steady state theory’s appeal.

They were far cooler about the other extreme. Anything at a temperature above absolute zero (on the Kelvin scale; about -273 degrees on the Celsius scale) emits electromagnetic radiation. Physicists like to consider an ideal body in thermal equilibrium with its surroundings, which absorbs light of all possible wavelengths. This made the so-called blackbody capable of emitting light of all possible wavelengths too, the intensity distribution of which depends only on the temperature, and is called the blackbody spectrum. Increase or decrease the temperature, and the peak of the distribution — the wavelength at which the intensity of the emitted light is highest — changes. “It’s a unique curve which depends only on the temperature and therefore you can determine the temperature of the body,” says Dr. Sriramkumar.
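This relation between temperature and peak wavelength is captured by Wien’s displacement law. The snippet below (Python; an illustrative back-of-the-envelope calculation, not code from the research described here, using the peak of the spectrum per unit wavelength) applies it to the Sun and to a 2.73 K blackbody:

# Wien's displacement law: the wavelength at which a blackbody emits most
# strongly (per unit wavelength) is inversely proportional to its temperature.
WIEN_CONSTANT = 2.898e-3   # metre kelvin

def peak_wavelength(temperature_kelvin):
    """Peak wavelength, in metres, of a blackbody at the given temperature."""
    return WIEN_CONSTANT / temperature_kelvin

print(peak_wavelength(5800))   # the Sun (~5800 K): about 5e-7 m, visible light
print(peak_wavelength(2.73))   # a 2.73 K blackbody: about 1e-3 m, a millimetre-scale microwave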

A good scientific theory gains currency when, like any good bank account, it gives you back more than what you put in. If there was a Big Bang, then the relic radiation of the recombination epoch should still be around, appearing uniformly from all directions. Feeble because of the cosmological redshift, this radiation was predicted to be at a temperature of about 5 K, with its spectrum peaking in the microwave region. Crucially, theorists predicted a blackbody spectrum.

When theories compete, only an experiment can be the referee. Occasionally, this happens by accident. In 1964, radio astronomers trying to detect radio waves bounced off an early satellite prototype found noise — undesired and unexpected signals — that they couldn’t get rid of. With wavelength in the microwave range, this background signal was the same across the sky, night and day, throughout the year. And it didn’t come from any specific, known source. It was soon realized that this was the all-pervading radiation Big Bang predicted. And at 3.5 K, it had just about the right temperature.

 

Cosmologists had found their fossil.

 

They called it the cosmic microwave background, or CMB. And steady state theory had no satisfactory explanation for it.

There were problems, though. The CMB appeared to have exactly the same temperature wherever they looked. Estimates of the age of the universe, and of its expansion rate now and at earlier times, showed that regions of the sky that are now widely-separated would, soon after the Big Bang, have been pulled apart before they had had time to interact and attain the same temperature. Known as the horizon problem, it seemed to imply that the initial conditions were “fine-tuned” such that the entire universe started out with a uniform temperature. Further, if the CMB was a faithful record of that epoch, it implied that matter distribution in the early universe was also uniform. Why, then, does the universe have stars, galaxies, and us, instead of a diffuse gas of hydrogen?

[Image: The history of structure formation in the Universe. Courtesy: European Space Agency]

The universe is still expanding; and in 1998, this expansion was unexpectedly found to be accelerating. Whatever drives this acceleration is estimated to account for around 68 percent of all the energy in the universe. Dark energy is its name, but little else is known about it.

Opposing this expansion is the gravitational attraction of all the matter-energy in the universe, the density of which determines who wins this cosmic tug of war. Were it any denser than it is, Einstein’s equations showed, the universe should have contracted and collapsed back on itself long ago; any less dense, and it should have expanded so fast that it would be a much emptier universe by now. This represented a second major problem for the Big Bang theory: the energy density of our universe is too close to the critical density required for flatness to be attributed to chance.
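The dividing line between those two fates is the critical density, which depends only on the expansion rate, and cosmologists express how close the universe is to it with a density parameter (standard expressions, quoted for illustration):

$$ \rho_{c} \;=\; \frac{3H^{2}}{8\pi G}, \qquad \Omega \;\equiv\; \frac{\rho}{\rho_{c}}, $$

where H is the Hubble parameter; a spatially flat universe corresponds to Ω = 1, and observations place our universe remarkably close to that value.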

 

Why does the CMB appear to have nearly the same temperature in all directions?
Why does the universe have stars, galaxies, and us?
And why does it appear to be flat?

 

Inflation appeared to solve these problems. If all of the universe we see now is a vastly blown-up version of a region which, before inflation, was small enough to have had time to interact, then its different parts were in thermal equilibrium to begin with, thereby explaining the uniformity of the CMB from different parts of the sky. And if you start with a very tiny region and blow it up, any initial curvature gets flattened out.

But what mechanism could cause such an expansion of space?

Guth found that a scalar field with certain properties could cause rapid, accelerated expansion of space in the earliest fractions of a second after the Big Bang. A few others had explored this idea before him, but it was with Guth’s paper that the field exploded. What’s more, inflation also had a natural mechanism for structure formation in the universe.

Any quantum field, unlike a classical field like that of Newtonian gravity, has uncertainties associated with it as a natural consequence of the laws of quantum mechanics. Thus a scalar field with exceedingly tiny fluctuations in its otherwise uniform value would, via inflation, lead to fluctuations in the density distribution of matter in the early universe. These could then act as “seeds” for structure formation, aided by gravitational attraction.

Such inhomogeneities, encoded in the CMB as temperature fluctuations, were found by the NASA satellite Cosmic Background Explorer (COBE) in 1992. Against the average temperature of about 2.73 K, the fluctuations, or anisotropies, were found to be minuscule — about one part in 100,000 — just as predicted by inflation. And it turned out to be the most perfect blackbody spectrum ever observed in nature.

 

[Image: CMB data from COBE (red points) matches the spectrum predicted by the Big Bang theory (solid curve) to an extraordinary degree. Courtesy: NASA]

 

Triumphant though they were, theorists had a lot more work to do.

“Inflation is a very simple theory and it works rather well. The challenge is that there is no unique model of inflation. There are many, many models of inflation,” says Dr. Sriramkumar.

In the standard inflationary picture, the scalar field, cleverly called the inflaton, can be thought of as having a bowl-shaped potential with a minimum, or vacuum, value at the bottom. If the field starts from elsewhere, it will “roll” down to the vacuum value. If the sides of the bowl are not very steep, this process takes time during which the potential energy of the inflaton is the dominant energy density in the universe. Once it reaches the bottom, it will oscillate about the minimum and lose energy by decaying into matter and radiation. When the resulting energy density overpowers the inflaton energy density, the standard Big Bang model takes over.

For a class of inflaton models, called slow-roll, the “friction” that arises naturally because of expansion is more pronounced, making the field roll down slowly. Different potentials of this class — different-shaped bowls — can be tried out to see which one best accounts for the observed CMB anisotropies that originate from quantum fluctuations, or perturbations, of the inflaton. “If the universe is smooth, that means the density is the same everywhere. If there are tiny perturbations, it means there’s something which is fluctuating with position,” says Dr. Sriramkumar.
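In equations, the inflaton behaves like a ball rolling in the bowl-shaped potential V(φ), with a friction term set by the expansion rate H; slow roll is the regime in which that friction dominates (standard textbook form, quoted for illustration):

$$ \ddot{\phi} + 3H\dot{\phi} + V'(\phi) \;=\; 0 \quad \xrightarrow{\;\text{slow roll}\;} \quad 3H\dot{\phi} \;\simeq\; -V'(\phi), $$

where the dots denote time derivatives and V′(φ) is the slope of the bowl.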

These quantum fluctuations which perturb the placid inflaton can be expressed as the sum of a series of progressively diminishing terms. Adding only the first term of this series to the unperturbed term is a good approximation, called first-order, or linear, perturbation.

A perturbation in the energy-carrying inflaton field passes on to the energy-momentum tensor, which, through Einstein’s field equations, perturbs the metric tensor. Now, the metric gives as good as it gets. “When there’s fluctuation in the metric, it will be automatically transferred to the matter and radiation in the universe,” says Sreenath. Thus, they are “coupled” and have to be studied together.

Each term in the perturbation expansion of the metric tensor is itself the sum of three terms, which, in linear perturbation, are independent and can be studied separately. One, a scalar, contributes most to the CMB anisotropies and is responsible for density fluctuations of the matter-energy distribution — and hence structure formation — in the early universe. While this term originates from quantum fluctuations of the inflaton, another term, a tensor, arises from quantum fluctuations of the space-time metric. The tensor perturbations need no matter-energy source at all and propagate as gravitational waves. Inflation predicts that a vector term is zero; observations agree.

Perturbation of the inflaton field simply refers to how far its value at any point differs from the background value. The simplest assumption is that it is entirely random — equally likely to be more or less than the background value. Mathematically, the perturbation at any point is drawn from a Gaussian distribution (an assumption that may not hold at higher orders of perturbation). The most likely value, the peak of the Gaussian distribution, is at zero, and other values occur with decreasing probability the farther they are from zero.
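Concretely, a Gaussian perturbation δφ with typical size σ is distributed as (standard form, shown for illustration):

$$ P(\delta\phi) \;=\; \frac{1}{\sqrt{2\pi\sigma^{2}}}\, \exp\!\left(-\frac{\delta\phi^{2}}{2\sigma^{2}}\right), $$

peaked at zero and falling off rapidly for larger excursions.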

Whether the perturbations at two different points are independent can be studied using what’s called the two-point correlation function. For this, cosmologists use the Fourier transform, a standard tool in any physicist’s arsenal. Any pattern of fluctuations, no matter how complicated, can be represented as a sum of sinusoidal waves of different wavelengths, called Fourier modes, even though an infinite number of them may be required. The contribution of each mode to the sum — its amplitude — will be, in general, different. A plot of the square of the amplitudes of the Fourier modes against their wave numbers (which can be found from their wavelengths) is called a power spectrum, which offers a window on the nature of the system considered. For example, the power spectrum of a pendulum would be a single spike because it oscillates at one frequency.
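The pendulum example is easy to check numerically. The snippet below (Python with NumPy; a toy illustration written for this article, not code from the research described here) builds a single-frequency signal, Fourier-transforms it, and confirms that its power spectrum is a single spike:

import numpy as np

# A "pendulum" signal: a pure sinusoid at 2 Hz, sampled for 10 seconds
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 2.0 * t)

# Decompose the signal into sinusoidal Fourier modes
amplitudes = np.fft.rfft(signal)
frequencies = np.fft.rfftfreq(len(t), d=t[1] - t[0])

# The power spectrum is the squared amplitude of each mode
power = np.abs(amplitudes) ** 2

# All the power sits in one mode: the spectrum is a single spike at 2 Hz
print(frequencies[np.argmax(power)])   # prints 2.0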

The power spectrum of the scalar metric perturbations in the slow-roll scenario turns out to be a nearly-straight, horizontal line. This means that the typical size of the perturbations is almost the same on all scales, making it nearly scale-invariant. Had it been perfectly so, a number called the scalar spectral index would have had the value of exactly one. The near-scale-invariance of slow-roll models makes it about 0.96. The latest observed value comes from data collected by the European space observatory Planck.

“It is found to be 0.96,” says Sreenath.
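For readers who want the symbols: the scalar power spectrum in slow-roll models is usually written as a power law in the wave number k (a standard parametrization, quoted for illustration):

$$ \mathcal{P}_{s}(k) \;=\; A_{s}\left(\frac{k}{k_{*}}\right)^{n_{s}-1}, $$

where k_* is a reference scale; exact scale invariance corresponds to n_s = 1, while slow-roll models and the Planck data both point to n_s ≈ 0.96.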

 

But how do you distinguish between the
numerous models which predict this value?

 

The tensor-to-scalar ratio — the ratio of the tensor power spectrum, found from the tensor perturbations of the metric, to the scalar one — is a second observable. But the signatures of tensor perturbations are difficult to detect in the CMB, and many models predict a ratio too small to be detected.
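In symbols, the ratio is simply (standard definition, quoted for illustration):

$$ r \;\equiv\; \frac{\mathcal{P}_{t}(k)}{\mathcal{P}_{s}(k)}, $$

and observations of the CMB have so far yielded only upper limits on r.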

The correlation functions can be of order higher than two, of which all the odd-ordered ones would be zero if the perturbations were Gaussian. In particular, a non-zero value of the Fourier transform of the three-point correlation function, called the bispectrum, would indicate non-Gaussianity of the perturbations. For the scalar perturbations, this is quantified by fNL, the ratio of the three-point and the two-point scalar correlation functions. Although models which lead to large levels of non-Gaussianity have been ruled out by the latest data from Planck, something more is required to separate the many models within the slow-roll paradigm which predict the observed fNL value of 10^-2, or one part in 100.
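Schematically (conventions and numerical factors differ between papers), if ζ denotes the scalar perturbation, the bispectrum B and the parameter fNL are related to the power spectrum P as:

$$ \langle \zeta_{\mathbf{k}_1}\,\zeta_{\mathbf{k}_2}\,\zeta_{\mathbf{k}_3} \rangle \;\propto\; B(k_1,k_2,k_3)\,\delta^{3}(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3), \qquad f_{\rm NL} \;\sim\; \frac{B}{P\,P}, $$

so a perfectly Gaussian field, whose bispectrum vanishes, has fNL = 0.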

Instead of being pure scalar or pure tensor, a three-point correlation function of cross-terms — two scalars and a tensor, or one scalar and two tensors — can be constructed. From their corresponding bispectra, one can define two more non-Gaussianity parameters.

“There have been some attempts in studying tensor three-point functions, but not much work in this direction,” says Sreenath. As part of his doctoral thesis, Sreenath developed a numerical method to compute these three-point scalar-tensor cross-correlation functions, as well as the tensor three-point correlation function, of the metric perturbations. Given a particular form of the inflaton potential, its non-Gaussianity parameters can be numerically computed using the code written by Sreenath. Two models — one slow-roll, one not — where analytical results were available, were used to verify the numerical results.

The three-point functions, or their bispectra, involve three different wave numbers. If one of them is much smaller than the other two, it’s called a “squeezed limit.” In such a case, the scalar three-point correlation function can be expressed completely in terms of the scalar two-point correlation function. Known as the consistency relation, it amounts to saying that, in the squeezed limit, fNL can be expressed in terms of the scalar spectral index alone.
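In its most commonly quoted form (due to Maldacena; the numerical factor depends on conventions), the squeezed-limit consistency relation reads:

$$ \lim_{k_3 \to 0} B(k_1,k_2,k_3) \;\simeq\; (1-n_s)\, P(k_1)\, P(k_3), \qquad f_{\rm NL} \;\simeq\; \frac{5}{12}\,(1-n_s), $$

so that, for single-field models, measuring the spectral index fixes the non-Gaussianity in this limit.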

Sreenath, working with Dr. Sriramkumar and one of his former students, showed that the consistency relations are valid not just for the scalar three-point correlation function, but also for the cross-correlation and tensor correlation functions, in the slow-roll scenario.

While the consistency relations are expected to hold for any single-field model of the inflaton, the question of their validity for non-slow-roll models had remained relatively unaddressed. Sreenath found that, in the squeezed limit, the consistency relation is valid even for non-slow-roll models. “If we can verify this relation observationally, we can use it to eliminate two-field models, because it’s not valid for two-field models,” says Sreenath.

As Sreenath prepares for his doctoral thesis defence, cosmologists worldwide await further data from Planck. But there are a few known anomalies in the CMB that may have to wait for future experiments.

The universe has been bathed in this sea of radiation for almost its entire history, to say nothing of human history. In this age of renewed hatred and increasingly assertive bigotry, it is good to remind ourselves that we’re more than that. H. G. Wells, last of the pre-space-age prophets, said as much:

“A day will come, one day in the unending succession of days, when beings, beings who are now latent in our thoughts and hidden in our loins, shall stand upon this earth as one stands upon a footstool, and shall laugh and reach out their hands amidst the stars.”

Barely a generation later, we have now reached out beyond the stars to almost the beginning of time, seeking answers to questions that were once thought to belong exclusively to the realm of metaphysics and theology. There are few things that bear greater testimony to human ingenuity and the power of the scientific method than the fact that all this began with poetry written in the language of Nature by the hand of its most accomplished master, combined with nothing more than careful observation and analysis of the most ancient light in the universe, just three degrees above zero.


 
