Paul M. Sutter is an astrophysicist at SUNY Stony Brook and the Flatiron Institute, host of "Ask a Spaceman" and "Space Radio," and author of "How to Die in Space." Sutter contributed this article to Space.com's Expert Voices: Op-Ed & Insights.
Beginning in 2014, measurements of the Hubble constant — the present-day expansion rate of the universe — began to disagree. Measurements taken from the distant universe were about 10% off from measurements taken from the nearby universe. While that doesn't sound like a lot (and it isn't, given the impressive feats of science needed to make these measurements in the first place), the uncertainties in those measurements were only about 2%.
A 10% difference with a 2% uncertainty is statistically significant and something worth investigating. Since 2014, there have been over 300 proposals for solutions to this "crisis in cosmology." None of these proposals is universally agreed upon by cosmologists, and as measurements continue, the crisis just keeps getting worse.
Related: Is there really a 'crisis' in cosmology?
The LCDM model
What's at stake here is our modern understanding of the history of the universe, as encapsulated by the so-called Lambda-CDM model, often abbreviated as LCDM. This model makes a few base assumptions, just like any other model in science. The model assumes that general relativity is correct at cosmological scales and that our universe is homogeneous and isotropic (the same everywhere and in every direction). It assumes that our universe is geometrically flat and that there is some entity, called dark matter, that rarely interacts with normal matter (that's the "CDM" part, for "cold dark matter"). And it assumes that there's another substance, called dark energy (that's the "Lambda"), that maintains a constant density as the universe expands.
Once those assumptions are set (and they are, based on numerous past observations), the LCDM model has only six free parameters. You need to make various cosmological measurements to get those numbers, but once you do, the model can predict everything else about the universe, up to and including the present-day expansion rate.
One of the best places to pin down the values of the six free parameters is the cosmic microwave background (CMB), which is the light left over from when the universe was only 380,000 years old. The CMB is useful because it's big, easy to measure and simple to understand.
Armed with measurements of the CMB, like those obtained from the European Space Agency's Planck satellite mission, you can fill in the unknowns in the LCDM model and have a really good grip on basically the entire history of the cosmos.
Ladders to the stars
So here comes the tension. Measurements of the early universe give us loads of information about the free parameters of the LCDM model. Those measurements come from not only the CMB but also from so-called baryon acoustic oscillations — subtle shifts in galaxy positions left over from when giant sound waves crashed around in the early universe — and the abundance of light elements.
No matter what combination of early-universe measurements you make to fill in the LCDM model, you end up predicting a value of the Hubble constant somewhere around 68 km/s/Mpc.
OK, great. Job over, right? Not so fast.
You can also try to directly measure the Hubble constant. To do this, you need to measure the distances and speeds of a bunch of objects. There are lots of options, including Type Ia supernovas, galaxy properties, Mira variables and certain kinds of red giant stars.
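The idea behind these direct measurements is Hubble's law: a galaxy's recession velocity v is proportional to its distance d, with the Hubble constant as the slope (v = H0 × d). As a rough sketch of how the fit works — using made-up, illustrative distances and velocities, not real survey data:

```python
import numpy as np

# Hypothetical distances (Mpc) and recession velocities (km/s) for a
# handful of nearby galaxies. These are illustrative numbers only.
distances = np.array([20.0, 35.0, 50.0, 80.0, 120.0])           # Mpc
velocities = np.array([1480.0, 2530.0, 3700.0, 5920.0, 8880.0])  # km/s

# Hubble's law, v = H0 * d, is a line through the origin, so the
# least-squares estimate of H0 is sum(v*d) / sum(d*d).
H0 = np.sum(velocities * distances) / np.sum(distances ** 2)
print(f"H0 ~ {H0:.1f} km/s/Mpc")
```

In practice the hard part isn't the fit — it's getting reliable distances, which is exactly what supernovas, Mira variables and the other rungs of the "distance ladder" are for.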
With the exception of the red-giant method, all of the local measurements of the Hubble constant reveal a much higher number — more like 74 km/s/Mpc.
Interestingly or frustratingly, the red-giant method gives a number right in the middle of the two extremes — hence, the crisis.
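To see why a 6 km/s/Mpc gap counts as a crisis, you can translate it into "sigmas" — the difference divided by the combined uncertainty. The sketch below uses round numbers from this article (68 vs. 74 km/s/Mpc, roughly 2% errors on each); the actual published uncertainties are somewhat smaller, which is why the real tension is quoted as even more significant.

```python
import math

# Round, illustrative values from the article -- not the exact
# published measurements or error bars.
h0_early, h0_late = 68.0, 74.0
sigma_early = 0.02 * h0_early   # ~1.4 km/s/Mpc
sigma_late = 0.02 * h0_late     # ~1.5 km/s/Mpc

# Standard tension metric: the difference in units of the
# quadrature-summed (combined) uncertainty.
tension = (h0_late - h0_early) / math.hypot(sigma_early, sigma_late)
print(f"tension ~ {tension:.1f} sigma")
```

Anything around 3 sigma or beyond is hard to write off as a statistical fluke, which is why cosmologists take the discrepancy seriously.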
No way out
We have two completely different ways of approaching the Hubble constant. Both are well tested, well studied and well understood. The LCDM model is wildly successful in describing and predicting a host of cosmological observations, so nobody is in that great of a rush to toss it out. The measurements made of the CMB are exquisite — by far the most accurate measurements taken in the entire history of astronomy.
On the other hand, the supernova measurements are also legit. And different probes of the Hubble constant are giving comparable answers.
Early universe versus late universe, global versus local, large scale versus small scale — no matter how you slice it, these two perspectives on the universe should agree, but they don't. We should have a common, agreed-upon value for the Hubble constant, but we don't.
Cosmologists are so interested in this "crisis" because it's the first interesting thing to happen in cosmology since the discovery of dark energy over 20 years ago. When measurements disagree, it's nature whispering to us. There's room here, and perhaps an opportunity, to learn more about the cosmos.
Of the more than 300 proposed solutions to date, some call for additional physics in the era of the CMB. Some call for dark energy doing something weird in the recent past. Some alter physics at a fundamental level, messing up our observations of supernovas.
However, no single proposal can explain the wealth of cosmological evidence, and we are nowhere near a consensus for a solution.
I personally believe that "if it's interesting, it's probably wrong." The most boring explanation for the crisis is that we're misunderstanding something about measuring the Hubble constant at local scales.
But only time will tell.
Learn more by listening to the "Ask A Spaceman" podcast, available on iTunes and askaspaceman.com. Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter.