Sunday, October 22, 2017

New gravitational wave detection with optical counterpart rules out some dark matter alternatives

The recently reported gravitational wave detection, GW170817, was accompanied by electromagnetic radiation. Both signals arrived on Earth almost simultaneously, within a time-window of a few seconds. This is a big problem for some alternatives to dark matter as this new paper lays out:


The observation is difficult to explain with some variants of modified gravity because in these models electromagnetic and gravitational radiation travel differently.

In modified gravity, dark matter is not made of particles. Instead, the gravitational pull felt by normal matter comes from a gravitational potential that is not the one predicted by general relativity. In general relativity, and likewise in its modifications, the gravitational potential is described by the curvature of space-time and encoded in what is called the “metric.” In the versions of modified gravity studied in the new paper, the metric has additional terms which effectively act on normal matter as if there was dark matter, even though there is no dark matter.

However, the metric in general relativity is also what gives rise to gravitational waves, which are small, periodic disturbances of that metric. If dark matter is made of particles, then the gravitational waves themselves travel through the gravitational potential of normal plus dark matter. If dark matter, however, is due to a modification of the gravitational potential, then gravitational waves themselves do not feel the dark matter potential.

This can be probed if you send both types of signals, electromagnetic and gravitational, through a gravitational potential, for example that of the Milky Way. The presence of the gravitational potential increases the run-time of the signal, and the deeper the potential, the longer the run-time. This is known as “Shapiro-delay” and is one of the ways, for example, to probe general relativity in the solar system.
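For reference, the standard leading-order expression for the Shapiro delay of a signal passing a mass $M$ at impact parameter $b$, with source and receiver at distances $x_S$ and $x_E$ from the mass, is roughly (conventions for the logarithm's argument vary):

```latex
% Leading-order Shapiro delay; note the key scaling \Delta t \propto GM/c^3.
\Delta t \;\simeq\; \frac{2GM}{c^{3}}\,\ln\!\left(\frac{4\,x_E\,x_S}{b^{2}}\right)
```

The prefactor $2GM/c^3$ is what drives the estimate below: the delay grows essentially linearly with the mass of the potential.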

The authors of the paper put in the numbers and find that the difference between the potential with dark matter for electromagnetic radiation and the potential without dark matter for gravitational radiation adds up to about a year for the Milky Way alone. On top of that come a few hundred more days of delay if you also take into account galaxies that the signals passed by on the way from the source to Earth. If correct, this means that the almost simultaneous arrival of both signals rules out those modifications of gravity in which the travel-times differ by many orders of magnitude.

The logic of the argument is this. We know that galaxies cause gravitational lensing as if they contain dark matter. This means even if dark matter can be ascribed to modified gravity, its effect on light must be like that of dark matter. The Shapiro-delay isn’t exactly the same as gravitational lensing, but the origin of both effects is mathematically similar. This makes it plausible that the Shapiro-delay for electromagnetic radiation scales with the dark matter mass, regardless of its origin. The authors assume that the delay for the gravitational waves in modified gravity is just due to normal matter. This means that gravitational waves should arrive much sooner than their electromagnetic counterparts because the potential the gravitational waves feel is much shallower.

The Shapiro-delay on the Sun is about 10⁻⁴ seconds. If you scale this up to the Milky Way, with a mass of about 10¹² times that of the Sun, this gives 10⁸ seconds, which is indeed about a year or so. You gain a little since the dark matter mass is somewhat higher and lose a little because the Milky Way isn’t spherically symmetric. But by order of magnitude this simple estimate explains the constraint.
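This back-of-the-envelope scaling is easy to reproduce. A minimal sketch, using the rough numbers from the text (not from the paper) and assuming the delay simply scales linearly with the enclosed mass:

```python
# Order-of-magnitude check of the Shapiro-delay scaling. Assumption:
# the delay grows roughly linearly with the mass of the potential, so
# the solar value can be scaled up by the Milky Way's mass in solar
# masses. Geometry is ignored entirely.

delay_sun = 1e-4          # seconds, Shapiro delay for a signal grazing the Sun
milky_way_mass = 1e12     # Milky Way mass in solar masses

delay_galaxy = delay_sun * milky_way_mass   # seconds
seconds_per_year = 3.15e7

print(delay_galaxy)                     # 1e8 seconds
print(delay_galaxy / seconds_per_year)  # roughly 3, i.e. "a year or so" by order of magnitude
```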

The paper hence rules out all modified gravity theories that predict gravitational waves which pass differently through the gravitational potential of galaxies than electromagnetic waves do. This does not affect all types of modified gravity, but it does affect, according to the paper, Bekenstein’s TeVeS and Moffat’s Scalar-Tensor-Vector theory.

A word of caution, however, is that the paper does not contain, and I have not seen, an actual calculation for the delay of gravitational waves in the respective modified gravity models. Though the estimate seems good, it’s sketchy on the math.

I think the paper is a big step forward. I am not sold on either modified gravity or particle dark matter and think both have their pros and cons. To me, particle dark matter seems plausible and it works well on all scales, while modified gravity doesn’t work so well on cosmological (super-galactic) scales. On the other hand, we haven’t directly measured any dark matter particles, and some of the observed regularities in galaxies are not well explained by the particle-hypothesis.

But as wonderful as it is to cross some models off the list, ruling out certain types of modified gravity doesn’t make particle dark matter any better. The reason you never hear anyone claim that particle dark matter has been ruled out is that it’s not possible to rule it out. The idea is so flexible and the galactic simulations have so many parameters you can explain everything.

This is why I have lately been intrigued by the idea that dark matter is a kind of superfluid which, in certain approximations, behaves like modified gravity. This can explain the observed regularities while maintaining the benefits of particle dark matter. As far as I can tell, the new constraint doesn’t apply to this type of superfluid (one of the authors of the new paper confirmed this to me).

In summary, let me emphasize that this new observation doesn’t rule out modified gravity any more than the no-detection of Weakly Interacting Massive Particles rules out particle dark matter. So please don’t jump to conclusions. It rules out certain types of modified gravity, no more and no less. But this paper gives me hope that a resolution of the dark matter mystery might happen in my lifetime.

Friday, October 20, 2017

Space may not be as immaterial as we thought

Galaxy slime. [Img Src]
Physicists have gathered evidence that space-time can behave like a fluid. Mathematical evidence, that is, but still evidence. If this relation isn’t a coincidence, then space-time – like a fluid – may have substructure.

We shouldn’t speak of space and time as if the two were distant cousins. We have known at least since Einstein that space and time are inseparable, two hemispheres of the same cosmic brain, joined to a single entity: space-time. Einstein also taught us that space-time isn’t flat, like a paper, but bent and wiggly, like a rubber sheet. Space-time curves around mass and energy, and this gives rise to the effect we call gravity.

That’s what Einstein said. But it turns out that if you write down the equations for small wiggles in a medium – such as soundwaves in a fluid – then the equations look exactly like those of waves in a curved background.

Yes, that’s right. Sometimes, waves in fluids behave like waves in a curved space-time; they behave like waves in a gravitational field. Fluids, therefore, can be used to simulate gravity. And that’s some awesome news because this correspondence between fluids and gravity allows physicists to study situations that are otherwise experimentally inaccessible, for example what happens near a black hole horizon, or during the rapid expansion in the early universe.
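For the mathematically inclined: the standard result behind this correspondence (due to Unruh) is that sound in an irrotational, barotropic fluid propagates like a massless scalar field in an effective curved metric. Up to an overall conformal factor (conventions vary), with fluid density $\rho$, sound speed $c_s$, and flow velocity $\vec v$, the acoustic metric reads:

```latex
% Unruh's acoustic metric: sound waves in the fluid obey the wave
% equation of a massless scalar field in this effective geometry.
ds^{2} \;=\; \frac{\rho}{c_s}\Big[ -\big(c_s^{2} - v^{2}\big)\,dt^{2}
  \;-\; 2\,\vec v\cdot d\vec x\,dt \;+\; d\vec x\cdot d\vec x \Big]
```

Where the flow speed exceeds the sound speed, $v > c_s$, the effective geometry develops a horizon for sound, which is the “acoustic horizon” discussed below.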

This mathematical relation between fluids and gravity is known as “analog gravity.” That’s “analog” as in “analogy” not as opposed to digital. But it’s not just math. The first gravitational analogies have meanwhile been created in a laboratory.

Most amazing is the work by Jeff Steinhauer at Technion, Israel. Steinhauer used a condensate of supercooled atoms that “flows” in a potential of laser beams which simulate the black hole horizon. In his experiment, Steinhauer wanted to test whether black holes emit radiation as Stephen Hawking predicted. The temperature of real, astrophysical, black holes is too small to be measurable. But if Hawking’s calculation is right, then the fluid-analogy of black holes should radiate too.
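To see just how unmeasurably cold a real black hole is, one can put numbers into Hawking's temperature formula, T = ħc³/(8πGMk_B). For a solar-mass black hole this comes out far below the 2.7 K of the cosmic microwave background:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B).
# For one solar mass this lands around 6e-8 K, far below the 2.7 K
# cosmic microwave background, which is why the radiation of
# astrophysical black holes cannot be measured.

hbar  = 1.0546e-34   # J s
c     = 2.9979e8     # m / s
G     = 6.674e-11    # m^3 / (kg s^2)
k_B   = 1.3807e-23   # J / K
M_sun = 1.989e30     # kg

T_hawking = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(T_hawking)   # ~6e-8 K
```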

Black holes trap light behind the “event horizon.” A fluid that simulates a black hole doesn’t trap light; instead it traps the fluid’s soundwaves behind what is called the “acoustic horizon.” Since the fluid analogies of black holes aren’t actually black, Bill Unruh suggested calling them “dumb holes.” The name stuck.

But whether the horizon catches light or sound, Hawking-radiation should be produced regardless, and it should appear in form of fluctuations (in the fluid or quantum matter fields, respectively) that are paired across the horizon.

Steinhauer claims he has measured Hawking-radiation produced by an acoustic black hole. His results are presently somewhat controversial – not everyone is convinced he has really measured what he claims he did – but I am sure sooner or later this will be settled. More interesting is that Steinhauer’s experiment showcases the potential of the method.

Of course fluid-analogies are still different from real gravity. Mathematically the most important difference is that the curved space-time which the fluid mimics has to be designed. It is not, as for real gravity, an automatic reaction to energy and matter; instead, it is part of the experimental setup. However, this is a problem which at least in principle can be overcome with a suitable feedback loop.

The conceptually more revealing difference is that the fluid’s correspondence to a curved space-time breaks down once the experiment starts to resolve the fluid’s atomic structure. Fluids, we know, are made of smaller things. Curved space-time, for all we presently know, isn’t. But how certain are we of this? What if the fluid analogy is more than an analogy? Maybe space-time really behaves like a fluid; maybe it is a fluid. And if so, the experiments with fluid-analogies may reveal how we can find evidence for a substructure of space-time.

Some have pushed the gravity-fluid analogy even further. Gia Dvali from LMU Munich, for example, has proposed that real black holes are condensates of gravitons, the hypothetical quanta of the gravitational field. This simple idea, he claims, explains several features of black holes which have so far puzzled physicists, notably the question how black holes manage to keep the information that falls into them.

We used to think black holes are almost featureless round spheres. But if they are instead, as Dvali says, condensates of many gravitons, then black holes can take on many slightly different configurations in which information can be stored. Even more interesting, Dvali proposes the analogy could be used to design fluids which are as efficient at storing and distributing information as black holes are. The link between condensed matter and astrophysics, hence, works both ways.

Physicists have looked for evidence of space-time being a medium for a while. For example, by studying light from distant sources, such as gamma-ray bursts, they tried to find out whether space has viscosity or whether it causes dispersion (a running apart of frequencies like in a prism). A new line of research is to search for impurities – “space-time defects” – as crystals have them. So far the results have been negative. But the experiments with fluid analogies might point the way forward.

If space-time is made of smaller things, this could solve a major problem: How to describe the quantum behavior of space-time. Unlike all the other interactions we know of, gravity is a non-quantum theory. This means it doesn’t fit together with the quantum theories that physicists use for elementary particles. All attempts to quantize gravity so far have either failed or remained unconfirmed speculations. That space itself isn’t fundamental but made of other things is one way to approach the problem.

Not everyone likes the idea. What irks physicists most about giving substance to space-time is that this breaks Einstein’s bond between space and time which has worked dramatically well – so far. Only further experiment will reveal whether Einstein’s theory holds up.

Time flows, they say. Maybe space does too.

This article previously appeared on iai.news.

Tuesday, October 17, 2017

I totally mean it: Inflation never solved the flatness problem.

I’ve had many interesting reactions to my recent post about inflation, this idea that the early universe expanded exponentially and thereby flattened and smoothed itself. The maybe most interesting response to my pointing out that inflation doesn’t solve the problems it was invented to solve is a flabbergasted: “But everyone else says it does.”

Not like I don’t know that. But, yes, most people who work on inflation don’t even get the basics right.

Inflation flattens the universe like
photoshop flattens wrinkles. Impressive!
[Img Src]


I’m not sure why that is so. Those who I personally speak with pretty quickly agree that what I say is correct. The math isn’t all that difficult and the situation pretty clear. The puzzle is, why then do so many of them tell a story that is nonsense? And why do they keep teaching it to students, print it in textbooks, and repeat it in popular science books?

I am fascinated by this for the same reason I’m fascinated by the widespread and yet utterly wrong idea that the Bullet-cluster rules out modified gravity. As I explained in an earlier blogpost, it doesn’t. Never did. The Bullet-cluster can be explained just fine with modified gravity. It’s difficult to explain with particle dark matter. But, eh, just the other day I met a postdoc who told me the Bullet-cluster rules out modified gravity. Did he ever look at the literature? No.

One reason these stories survive – despite my best efforts to the contrary – is certainly that they are simple and sound superficially plausible. But it doesn’t take much to tear them down. And that it’s so simple to pull away the carpet under what motivates research of thousands of people makes me very distrustful of my colleagues.

Let us return to the claim that inflation solves the flatness problem. Concretely, the problem is that in cosmology there’s a dynamical variable (ie, one that depends on time), called the curvature density parameter. It’s by construction dimensionless (doesn’t have units) and its value today is smaller than 0.1 or so. The exact digits don’t matter all that much.

What’s important is that this variable increases in value over time, meaning it must have been smaller in the past. Indeed, if you roll it back to the Planck epoch or so, it must have been something like 10⁻⁶⁰, give or take some orders of magnitude. That’s what they call the flatness problem.
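A quick sketch of the rollback arithmetic. The assumptions here are mine, for illustration: the curvature density parameter grows roughly like the square of the scale factor during radiation domination, and the scale factor at the Planck epoch was about 10⁻³² of today's (the ratio of today's temperature to the Planck temperature):

```python
import math

# Roll the curvature density parameter back to the Planck epoch.
# Assumption: omega_k scales roughly like a^2, with a = 1 today and
# a ~ 1e-32 at the Planck epoch. Both numbers are crude.

omega_k_today = 0.1    # rough observational bound quoted above
a_planck = 1e-32       # scale factor at the Planck epoch

omega_k_planck = omega_k_today * a_planck**2

print(math.log10(omega_k_planck))   # about -65, i.e. 1e-60 give or take some orders of magnitude
```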

Now you may wonder, what’s problematic about this. How is it surprising that the value of something which increases in time was smaller in the past? It’s an initial value that’s constrained by observation and that’s really all there is to say about it.

It’s here where things get interesting: The reason that cosmologists believe it’s a problem is that they think a likely value for the curvature density at early times should have been close to 1. Not exactly one, but not much smaller and not much larger. Why? I have no idea.

Each time I explain this obsession with numbers close to 1 to someone who is not a physicist, they stare at me like I just showed off my tin foil hat. But, yeah, that’s what they preach down here. Numbers close to 1 are good. Small or large numbers are bad. Therefore, cosmologists and high-energy physicists believe that numbers close to 1 are more likely initial conditions. It’s like a bizarre cult that you’re not allowed to question.

But if you take away one thing from this blogpost it’s that whenever someone talks about likelihood or probability you should ask “What’s the probability distribution and where does it come from?”

The probability distribution is what you need to define just how likely each possible outcome is. For a fair die, for example, it’s 1/6 for each outcome. For a not-so-fair die it could be any combination of numbers, so long as the probabilities all add to 1. There are infinitely many probability distributions and without defining one it is not clear what “likely” means.
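The point is easy to make concrete. Here are two perfectly valid probability distributions for a six-sided die; which outcomes count as “likely” depends entirely on which one you picked:

```python
# Two valid probability distributions over the faces of a six-sided die.
# Both are non-negative and sum to 1; what is "likely" differs.

fair   = {face: 1 / 6 for face in range(1, 7)}
loaded = {1: 0.05, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.05, 6: 0.75}

for dist in (fair, loaded):
    assert abs(sum(dist.values()) - 1) < 1e-9   # a valid distribution

print(fair[6])     # about 0.167: a six is as likely as any other face
print(loaded[6])   # 0.75: a six is now overwhelmingly "likely"
```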

If you ask physicists, you will quickly notice that neither for inflation nor for theories beyond the standard model does anyone have a probability distribution or ever even mentions a probability distribution for the supposedly likely values.

How does it matter?

The theories that we currently have work with differential equations and inflation is no exception. But the systems that we observe are not described by the differential equations themselves, they are described by solutions to the equations. To select the right solution, we need an initial condition (or several, depending on the type of equation). You know the drill from Newton’s law: You have an equation, but you can only tell where the arrow will fly if you also know the arrow’s starting position and velocity.
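The arrow example can be made explicit in a few lines of code: one differential equation, many solutions, one per initial condition. The numbers below are made up for illustration:

```python
# One equation, many solutions: Newton's law for the arrow, integrated
# numerically (semi-implicit Euler). The equation never changes; only
# the initial condition (the launch velocity) selects the trajectory.

g = -9.81    # m/s^2, constant gravitational acceleration
dt = 0.001   # s, integration step

def landing_distance(vx, vy):
    """Integrate x'' = 0, y'' = g from the origin until the arrow lands."""
    x, y = 0.0, 0.0
    while True:
        vy += g * dt
        x += vx * dt
        y += vy * dt
        if y < 0:
            return x

# Same differential equation, different initial conditions, different outcomes:
print(landing_distance(30.0, 30.0))   # lands near 180 m
print(landing_distance(30.0, 10.0))   # lands near 60 m
```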

The initial conditions are either designed by the experimenter or inferred from observation. Either way, they’re not predictions. They cannot be predicted. That would be a logical absurdity. You can’t use a differential equation to predict its own initial conditions. If you want to speak about the probability of initial conditions you need another theory.

What happens if you ignore this and go with the belief that the likely initial value for the curvature density should be about 1? Well, then you do have a problem indeed, because that’s incompatible with data to a high level of significance.

Inflation then “solves” this supposed problem by taking the initial value and shrinking it by, I dunno, 100 or so orders of magnitude. This has the consequence that if you start with something of order 1 and add inflation, the result today is compatible with observation. But of course if you start with some very large value, say 10⁶⁰, then the result will still be incompatible with data. That is, you really need the assumption that the initial values are likely to be of order 1. Or, to put it differently, you are not allowed to ask why the initial value was not larger than some other number.
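For concreteness, here is the e-fold arithmetic, assuming (as in standard slow-roll accounts) that inflation suppresses the curvature density parameter by a factor exp(−2N) after N e-folds. The key point: inflation multiplies whatever initial value you start with by a fixed factor; it does not erase it, so a 60-order-of-magnitude difference between starting values is still there afterwards:

```python
import math

# How many e-folds of inflation it takes to shrink the curvature
# density parameter by ~100 orders of magnitude, assuming the
# suppression factor is exp(-2N) after N e-folds.

suppression = 1e-100                 # shrink by 100 orders of magnitude
N = -math.log(suppression) / 2       # solve exp(-2N) = 1e-100
print(N)                             # about 115 e-folds

# The suppression is a fixed multiplicative factor, so differences
# between initial values survive inflation untouched:
for omega_initial in (1.0, 1e60):
    print(omega_initial * suppression)   # 1e-100 and 1e-40
```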

This fine print, that there are still initial values incompatible with data, often gets lost. A typical example is what Jim Baggott writes in his book “Origins” about inflation:
“when inflation was done, flat spacetime was the only result.”
Well, that’s wrong. I checked with Jim and he totally knows the math. It’s not like he doesn’t understand it. He just oversimplifies it maybe a little too much.

But it’s unfair to pick on Jim because this oversimplification is so common. Ethan Siegel, for example, is another offender. He writes:
“if the Universe had any intrinsic curvature to it, it was stretched by inflation to be indistinguishable from “flat” today.”
That’s wrong too. It is not the case for “any” intrinsic curvature that the outcome will be almost flat. It’s correct only for initial values smaller than something. He too, after some back and forth, agreed with me. Will he change his narrative? We will see.

You might say then, but doesn’t inflation at least greatly improve the situation? Isn’t it better because with inflation more initial values become compatible with observation? No. Because you have to pay a price for this “explanation:” You have to introduce a new field, a potential for that field, and then a way to get rid of this field once it’s done its duty.

I am pretty sure if you’d make a Bayesian estimate to quantify the complexity of these assumptions, then inflation would turn out to be more complicated than just picking some initial parameter. Is there really any simpler assumption than just some number?

Some people have accused me of not understanding that science is about explaining things. But I do not say we should not try to find better explanations. I say that inflation is not a better explanation for the present almost-flatness of the universe than just saying the initial value was small.

Shrinking the value of some number by pulling exponential factors out of thin air is not a particularly impressive gimmick. And if you invent exponential factors already, why not put them into the probability distribution instead?

Let me give you an example of why the distinction matters. Suppose you just hatched from an egg and don’t know anything about astrophysics. You brush off a loose feather and look at our solar system for the first time. You notice immediately that the planetary orbits almost lie in the same plane.

Now, if you assume a uniform probability distribution for the initial values of the orbits, that’s an incredibly unlikely thing to happen. You would think, well, that needs explaining. Wouldn’t you?
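How unlikely, exactly? A minimal sketch, under the assumption that each orbit normal points in an independent, isotropically distributed direction (the “uniform” prior): the probability that a single orbit's inclination relative to a fixed reference plane is at most θ is then 1 − cos θ, so for the seven planets besides Earth:

```python
import math

# Coplanar orbits under an isotropic prior. Assumption: each orbit
# normal is independent and uniformly distributed on the sphere, so
# P(inclination <= theta relative to a fixed plane) = 1 - cos(theta).

theta = math.radians(7)       # planets lie within roughly 7 degrees of the ecliptic
p_one = 1 - math.cos(theta)   # ~0.0075 per planet

p_all = p_one**7              # seven planets besides Earth, relative to Earth's plane
print(p_all)                  # ~1e-15: "incredibly unlikely" under this prior
```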

The inflationary approach to solving this problem would be to say the orbits started with random values but then some so-far unobserved field pulled them all into the same plane. Then the field decayed so we can’t measure it. “Problem solved!” you yell and wait for the Nobel Prize.

But the right explanation is that due to the way the solar system formed, the initial values are likely to lie in a plane to begin with! You got the initial probability distribution wrong. There’s no fancy new field.

In the case of the solar system you could learn to distinguish dynamics from initial conditions by observing more solar systems. You’d find that aligned orbits are the rule not the exception. You’d then conclude that you should look for a mechanism that explains the initial probability distribution and not a dynamical mechanism to change the uniform distribution later.

In the case of inflation, unfortunately, we can’t do such an observation since this would require measuring the initial value of the curvature density in other universes.

While I am at it, it’s interesting to note that the erroneous argument against the heliocentric solar system, that the stars would have to be “unnaturally” far away, was based on the same mistake that the just-hatched chick made. Astronomers back then implicitly assumed a probability distribution for distances between stellar objects that was just wrong. (And, yes, I know they also wrongly estimated the size of the stars.)

In the hope that you’re still with me, let me emphasize that nevertheless I think inflation is a good theory. Even though it does not solve the flatness problem (or monopole problem or horizon problem) it explains certain correlations in the cosmic-microwave-background. (TE anticorrelations for certain scales, shown in the figure below.)
Figure 3.9 from Daniel Baumann’s highly recommended lecture notes.


In the case of these correlations, adding inflation greatly simplifies the initial condition that gives rise to the observation. I am not aware that someone actually has quantified this simplification but I’m sure it could be done (and it should be done). Therefore, inflation actually is the better explanation. For the curvature, however, that isn’t so because replacing one number with another number times some exponential factor doesn’t explain anything.

I hope that suffices to convince you that it’s not me who is nuts.

I have a lot of sympathy for the need to sometimes oversimplify scientific explanations to make them accessible to non-experts. I really do. But the narrative that inflation solves the flatness problem can be found even in papers and textbooks. In fact, you can find it in the above-mentioned lecture notes! It’s about time this myth vanishes from the academic literature.

Friday, October 13, 2017

Is the inflationary universe a scientific theory? Not anymore.

Living in a Bubble?
[Image: YouTube]
We are made from stretched quantum fluctuations. At least that’s cosmologists’ currently most popular explanation. According to their theory, the history of our existence began billions of years ago with a – now absent – field that propelled the universe into a phase of rapid expansion called “inflation.” When inflation ended, the field decayed and its energy was converted into radiation and particles which are still around today.

Inflation was proposed more than 35 years ago, among others, by Paul Steinhardt. But Steinhardt has become one of the theory’s most fervent critics. In a recent article in Scientific American, Steinhardt, together with Anna Ijjas and Avi Loeb, doesn’t hold back. Most cosmologists, they claim, are uncritical believers:
“[T]he cosmology community has not taken a cold, honest look at the big bang inflationary theory or paid significant attention to critics who question whether inflation happened. Rather cosmologists appear to accept at face value the proponents’ assertion that we must believe the inflationary theory because it offers the only simple explanation of the observed features of the universe.”
And it's even worse, they argue, inflation is not even a scientific theory:
“[I]nflationary cosmology, as we currently understand it, cannot be evaluated using the scientific method.”
As an alternative to inflation, Steinhardt et al promote a “big bounce” in which the universe’s expansion was preceded by a phase of contraction, yielding similar benefits to inflation.

The group’s fight against inflation isn’t news. They laid out their arguments in a series of papers over the last few years (on which I previously commented here). But the recent SciAm piece called The Defenders Of Inflation onto the stage. Led by David Kaiser, they signed a letter to Scientific American in which they complained that the magazine gave space to the inflationary criticism.

The letter’s list of signatories is an odd selection of researchers who themselves work on inflation and of physics luminaries who have little if anything to do with inflation. Interestingly, Slava Mukhanov – one of the first to derive predictions from inflation – did not sign. And it’s not because he wasn’t asked. In an energetic talk delivered at Stephen Hawking’s birthday conference two months ago, Mukhanov made it pretty clear that he thinks most of the inflationary model building is but a waste of time.

I agree with Mukhanov’s assessment. The Steinhardt et al article isn’t exactly a masterwork of science writing. It’s also unfortunate they’re using SciAm to promote some other theory of how the universe began rather than sticking to their criticism of inflation. But some criticism is overdue.

The problem with inflation isn’t the idea per se, but the overproduction of useless inflationary models. There are literally hundreds of these models, and they are – as the philosophers say – severely underdetermined. This means if one extrapolates any model that fits current data into a regime which is still untested, the result is ambiguous. Different models lead to very different predictions for not-yet made observations. It is therefore presently utterly pointless to twiddle with the details of inflation because there are literally infinitely many models one can think up.

Rather than taking on this overproduction problem, however, Steinhardt et al in their SciAm piece focus on inflation’s failure to solve the problems it was meant to solve. But that’s an idiotic criticism because the problems that inflation was meant to solve aren’t problems to begin with. I’m serious. Let’s look at those one by one:

1. The Monopole Problem

Guth invented inflation to solve the “monopole problem.” If the early universe underwent a phase transition – for example because the symmetry of grand unification was broken – then topological defects, like monopoles, should have been produced abundantly. We do not, however, see any of them. Inflation dilutes the density of monopoles (and other worries) so that it’s unlikely we’ll ever encounter one.

But a plausible explanation for why we don’t see any monopoles is that there aren’t any. We don’t know there is any grand symmetry that was broken in the early universe, or if there is, we don’t know when it was broken, or if the breaking produced any defects. Indeed, all searches for evidence of grand symmetry – mostly via proton decay – have turned out negative. This motivation is interesting today merely for historical reasons.

2. The Flatness Problem

The flatness problem is a finetuning problem. The universe currently seems to be almost flat, or if it has curvature, then that curvature must be very small. The contribution of curvature to the dynamics of the universe, however, increases in relevance relative to that of matter. This means if the curvature density parameter is small today, it must have been even smaller in the past. Inflation serves to make any initial curvature contribution smaller by something like 100 orders of magnitude or so.

This is supposed to be an explanation, but it doesn’t explain anything, for now you can ask, well, why wasn’t the original curvature larger than some other number? The reason that some physicists believe something is being explained here is that numbers close to 1 are pretty according to current beauty-standards, while numbers much smaller than 1 aren’t. The flatness problem, therefore, is an aesthetic problem, and I don’t think it’s an argument any scientist should take seriously.

3. The Horizon Problem

The Cosmic Microwave Background (CMB) has almost the same temperature in all directions. Problem is, if you trace back the origin of the background radiation without inflation, then you find that the radiation that reached us from different directions was never in causal contact. Why then does it have the same temperature in all directions?

To see why this problem isn’t a problem, you have to know how the theories that we currently use in physics work. We have an equation – a “differential equation” – that tells us how a system (eg, the universe) changes from one place to another and one moment to another. To make any use of this equation, however, we also need starting values or “initial conditions.”*

The horizon problem asks “why this initial condition” for the universe. This question is justified if an initial condition is complicated in the sense of requiring a lot of information. But a homogeneous temperature isn’t complicated. It’s dramatically easy. And not only isn’t there much to explain, inflation moreover doesn’t even answer the question “why this initial condition” because it still needs an initial condition. It’s just a different initial condition. It’s not any simpler and it doesn’t explain anything.

Another way to see that this is a non-problem: If you’d go back in time far enough without inflation, you’d eventually get to a period when matter was so dense and curvature so high that quantum gravity was important. And what do we know about the likelihood of initial conditions in a theory of quantum gravity? Nothing. Absolutely nothing.

That we’d need quantum gravity to explain the initial condition for the universe, however, is an exceedingly unpopular point of view because nothing can be calculated and no predictions can be made.

Inflation, on the other hand, is a wonderfully productive model that allows cosmologists to churn out papers.

You will find the above three problems religiously repeated as a motivation for inflation, in lectures and textbooks and popular science pages all over the place. But these problems aren’t problems, never were problems, and never required a solution.

Even though inflation was ill-motivated when conceived, however, it later turned out to actually solve some real problems. Yes, sometimes physicists work on the wrong things for the right reasons, and sometimes they work on the right things for the wrong reasons. Inflation is an example of the latter.

The reasons why many physicists today think something like inflation must have happened are not that it supposedly solves the three above problems. It’s that some features of the CMB have correlations (the “TE power spectrum”) which depend on the size of the fluctuations and imply a dependence on the size of the universe. This correlation, therefore, cannot be easily explained by just choosing an initial condition, since it is data that goes back to different times. It really tells us something about how the universe changed with time, not just where it started from.**

Two more convincing features of inflation are that, under fairly general circumstances, the model also explains the absence of certain correlations in the CMB (the “non-Gaussianities”) and how many CMB fluctuations there are of any size, quantified by what is known as the “spectral index.”

But here is the rub. To make predictions with inflation one cannot just say “there once was exponential expansion and it ended somehow.” No, to be able to calculate something, one needs a mathematical model. The current models for inflation work by introducing a new field – the “inflaton” – and give this field a potential energy. The potential energy depends on various parameters. And these parameters can then be related to observations.

The scientific approach to the situation would be to choose a model, determine the parameters that best fit observations, and then revise the model as necessary – i.e., as new data comes in. But that’s not what cosmologists presently do. Instead, they have produced so many variants of models that they can now “predict” pretty much anything that might be measured in the foreseeable future.

It is this abundance of useless models that gives rise to the criticism that inflation is not a scientific theory. And on that account, the criticism is justified. It’s not good scientific practice. It is a practice that, to say it bluntly, has become commonplace because it results in papers, not because it advances science.

I was therefore dismayed to see that the criticism by Steinhardt, Ijas, and Loeb was dismissed so quickly by a community which has become too comfortable with itself. Inflation is useful because it relates existing observations to an underlying mathematical model, yes. But we don’t yet have enough data to make reliable predictions from it. We don’t even have enough data to convincingly rule out alternatives.

There hasn’t been a Nobel Prize for inflation, and I think the Nobel committee did well in that decision.

There’s no warning sign when you cross the border between science and blabla-land. But inflationary model building left behind reasonable scientific speculation long ago. I, for one, am glad that at least some people are speaking out about it. And that’s why I approve of the Steinhardt et al. criticism.


* Contrary to what the name suggests, the initial conditions could be at any moment, not necessarily the initial one. We would still call them initial conditions.

** This argument is somewhat circular because extracting the time-dependence for the modes already presumes something like inflation. But at least it’s a strong indicator.

This article was previously published on Starts With A Bang. 

Tuesday, October 03, 2017

Yet another year in which you haven’t won a Nobel Prize!

“Do you hope to win a Nobel Prize?” asked an elderly man who had come to shake my hand after the lecture. I laughed, but he was serious. Maybe I had been a little too successful explaining how important quantum gravity is.

No, I don’t hope to win a Nobel Prize. If that’s what I’d been after, I certainly would have chosen a different field. Condensed matter physics, say, or quantum things. At least cosmology. But certainly not quantum gravity.
Nobel Prize medal for physics and chemistry. It shows nature in the form of a goddess emerging from the clouds. The veil which covers her face is held up by the Genius of Science. Srsly, see Nobelprize.org.

But the Nobel Prize is important for science. It’s important not because it singles out a few winners but because in science it’s the one annual event that catches everybody’s attention. On which other day does physics make headlines?

In recent years I heard increasingly louder calls that the Prize-criteria should be amended so that more than three people can win. I am not in favor of that. It doesn’t make sense anyway to hand out exactly one Prize each year regardless of how much progress was made. There is always a long list of people who deserved a Nobel but never got one. Like Vera Rubin, who died last year and who by every reasonable measure should have gotten one. Shame on you, Nobel Committee.

I am particularly opposed to the idea that the Nobel Prize should be awarded to collaborations with members sometimes in the hundreds or even thousands. While the three-people cutoff is arguably arbitrary, I am not in favor of showering collaboration members with fractional prizes. Things don’t get going because a thousand scientists spontaneously decide to make an experiment. It’s always but a few people who are responsible for making things happen. Those are the ones whom the Nobel committee should identify.

So, I am all in favor of the Nobel Prize and like it the way it is. But (leaving aside that many institutions seem to believe Nobel Prize winners lay golden eggs) the Prize has little relevance in research. I definitely know a few people who hope to win it and some even deserve it. But I have yet to meet anyone who deliberately chose their research with that goal in mind.

The Nobel Prize is by construction meant to honor living scientists. This makes sense because otherwise we’d have a backlog of thousands of deceased scientific luminaries and nobody would be interested in watching the announcement. But in some research areas we don’t expect to see payoffs in our lifetime. Quantum gravity is one of them.

Personally, I feel less inspired by Nobel Prize winners than by long-dead geniuses like Da Vinci, Leibniz, or Goethe – masterminds whose intellectual curiosity spanned disciplines. They were ahead of their time and produced writings that were often vague, hard to follow, and sometimes outright wrong. None of them would have won a Nobel Prize had the Prize existed at the time. But their insights laid the basis for centuries of scientific progress.

And so, while we honor those who succeed in the present, let’s not forget that somewhere among us, unrecognized, are the seeds that will grow to next centuries’ discoveries.

Today, as the 2017 Nobel prize is awarded, I want to remind those of you who work in obscure research areas, produce unpopular artworks, or face ridicule for untimely writing, that history will be your final judge, not your contemporaries.

Then again maybe I should just work on those song-lyrics a little harder ;)

Wednesday, September 27, 2017

Dear Dr B: Why are neutrinos evidence for physics beyond the standard model?

Dear Chris,

The standard model of particle physics contains two different types of particles. There are the fermions, which make up matter, and the gauge-bosons which mediate interactions between the fermions and, in some cases, among themselves. There is one additional particle – the Higgs-boson – which is needed to give masses to both bosons and fermions.

Neutrino event at the IceCube Observatory in Antarctica.
Image: IceCube Collaboration

The fermions come in left-handed and right-handed versions which are mirror-images of each other. In what I think is the most perplexing feature of the standard model, the left-handed and right-handed versions of fermions behave differently. We say the fermions are “chiral.” The difference between the left- and right-handed particles is most apparent if you look at neutrinos: Nobody has ever seen a right-handed neutrino.

You could say, well, no problem, let’s just get rid of the right-handed neutrinos. Who needs those anyway?

But it’s not that easy because we have known for 20 years or so that neutrinos have masses. We know this because we see them mix or “oscillate” into each other, and such an oscillation requires a non-vanishing mass-difference. This means not all the neutrino-masses can be zero.
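To see why oscillation requires a non-vanishing mass difference, it helps to look at the standard two-flavor vacuum oscillation formula, P = sin²(2θ)·sin²(1.27 Δm² L/E): if the mass-squared difference Δm² is zero, the probability vanishes identically. Here is a minimal sketch (my own illustrative code; the function name is made up, not a standard library call):

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor vacuum oscillation probability.

    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km, E in GeV (conventional textbook units).
    If dm^2 is zero, P vanishes: no oscillation without a mass difference.
    """
    phase = 1.27 * delta_m2_ev2 * length_km / energy_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# Massless (or exactly degenerate) neutrinos never oscillate:
print(oscillation_probability(0.6, 0.0, 500, 1.0))   # 0.0
# With a non-zero mass-squared difference they do:
print(oscillation_probability(0.6, 2.5e-3, 500, 1.0))
```

The parameter values here are only for illustration; the observed oscillations are what tell us that at least one Δm², and hence at least one neutrino mass, is non-zero.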

Neutrino masses are a complication because the usual way to give masses to fermions is to couple the left-handed version with the right-handed version and with the Higgs. So what do you do if you have no right-handed neutrinos and yet neutrinos are massive?

The current status is therefore that either a) there are right-handed neutrinos but we haven’t yet seen them, or b) neutrinos are different from the other fermions and can get masses in a different way. In either case, the standard model is incomplete.

It is partly an issue of terminology though. Some physicists say right-handed neutrinos are part of the standard model. In this case they aren’t “beyond the standard model” but instead their discovery is pending.

I have a personal fascination with neutrinos because I believe they’ll be key to understanding the pattern of particle-masses. This is because the right-handed neutrino is the only particle in the standard model that doesn’t carry gauge-charges (or they are all zero, respectively). It seems to me that this should be the reason for it either being very heavy or not being there at all. But that’s speculation.

In any case, there are many neutrino experiments presently under way to study neutrino oscillations more closely and also to look for “neutrinoless double-beta decay.” The relevance of the latter is that such a decay is possible only if neutrinos are different from the other fermions of the standard model, so that no additional particles are needed to create neutrino masses.

So, no, particle physics isn’t dead and over, it’s still full of discoveries waiting to happen!

Thanks for an interesting question.


See also:
or click here for all posts in this series.

Thursday, September 21, 2017

The Quantum Quartet

I made some drawings recently. For no particular purpose, really, other than to distract myself.






And here is the joker:


Tuesday, September 19, 2017

Interna

I’m still writing on the book. After not much happened for almost a year, my publisher now rather suddenly asked for the final version of the manuscript. Until that’s done not much will be happening on this blog.

We do seem to have settled on a title though: “Lost in Math: How Beauty Leads Physics Astray.” The title is my doing, the subtitle isn’t. I just hope it won’t lead too many readers astray.

The book is supposed to be published in the USA/Canada by Basic Books next year in the Spring, and in Germany by Fischer half a year later. I’ll tell you more about the content at some point but right now I’m pretty sick of the whole book-thing.

In the meantime I have edited another book, this one on “Experimental Search for Quantum Gravity,” which you can now preorder on Amazon. It’s a (probably rather hard to digest) collection of essays about topics covered at a conference I organized last year. I merely wrote the preface.

Yesterday the twins had their first day in school. As is unfortunately still common in Germany, classes go only until noon. And so, we’re now trying a new arrangement to keep the kids occupied throughout the working day.



Wednesday, September 13, 2017

Away Note

I'm in Switzerland this week, for a conference on "Thinking about Space and Time: 100 Years of Applying and Interpreting General Relativity." I am also behind with several things and blogging will remain slow for the next weeks. If you miss my writing all too much, here is a new paper.

Wednesday, September 06, 2017

Wednesday, August 30, 2017

The annotated math of (almost) everything

Have you heard of the principle of least action? It’s the most important idea in physics, and it underlies everything. According to this principle, our reality is optimal in a mathematically exact way: it minimizes a function called the “action.” The universe that we find ourselves in is the one for which the action takes on the smallest value.

In quantum mechanics, reality isn’t quite that optimal. Quantum fields don’t have to decide on one specific configuration; they can do everything they want, and the action then quantifies the weight of each contribution. The sum of all these contributions – known as the path-integral – describes again what we observe.

This omniscient action has very little to do with “action” as in “action hero”. It’s simply an integral, usually denoted S, over another function, called the Lagrangian, usually denoted L. There’s a Lagrangian for the Standard Model and one for General Relativity. Taken together they encode the behavior of everything that we know of, except dark matter and quantum gravity.
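To make the least-action principle concrete, here is a toy numerical sketch for a free particle (my own illustration, with made-up function names): among discretized paths with fixed endpoints, the straight line yields a smaller action than any wiggled variant.

```python
import math

def action(path, dt, m=1.0):
    """Discretized free-particle action: sum of (1/2) m v^2 dt over segments."""
    return sum(0.5 * m * ((b - a) / dt) ** 2 * dt
               for a, b in zip(path, path[1:]))

n, dt = 100, 0.01
straight = [i / n for i in range(n + 1)]              # x(t) = t, from 0 to 1
wiggly = [x + 0.05 * math.sin(4 * math.pi * i / n)    # same endpoints,
          for i, x in enumerate(straight)]            # plus a wiggle

print(action(straight, dt))                       # 0.5 for this path
print(action(straight, dt) < action(wiggly, dt))  # True: straight path wins
```

Any deviation from the straight line only adds kinetic energy along the way, so the action can only grow, which is exactly the sense in which the realized path is “optimal.”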

With a little practice, there’s a lot you can read off directly from the Lagrangian, about the behavior of the theory at low or high energies, about the type of fields and mediator fields, and about the type of interaction.

The figure below gives you a rough idea of how that works.



I originally made this figure for the appendix of my book, but later removed it. Yes, my editor is still optimistic the book will be published Spring 2018. The decision about this will fall in the next month or so, so stay tuned.

Wednesday, August 23, 2017

I was wrong. You were wrong too. Admit it.

I thought that anti-vaxxers are a US-phenomenon, certainly not to be found among the dutiful Germans. Well, I was wrong. The WHO estimates only 93% of children in Germany receive both measles shots.

I thought that genes determine sex. I was wrong. For certain species of fish and reptiles that’s not the case.

I thought that ultrasound may be a promising way to wirelessly transfer energy. That was wrong too.

Don’t worry, I haven’t suddenly developed a masochist edge. I’ve had an argument. Not my every-day argument about dark matter versus modified gravity and similar academic problems. This one was about Donald Trump and how to be wrong the right way.
Percentage of infants receiving 2nd dose of measles vaccine in Germany.
[Source: WHO]

Trump changes his mind. A lot. May that be about NATO or about Afghanistan or, really, find me anything he has not changed his mind about.

Now, I suspect that’s because he doesn’t have an opinion, can’t recall what he said last time, and just hopes no one notices he wings that presidency thing. But whatever the reason, Trump’s mental flexibility is a virtue to strive for. You can see how that didn’t sit well with my liberal friends.

It’s usually hard to change someone’s mind, and a depressingly large number of studies have shown that evidence isn’t enough to do it. Presenting people with evidence contradicting their convictions can even have the very opposite effect of reinforcing their opinions.

We hold on to our opinions, strongly. Constructing consistent explanations for the world is hard work, and we don’t like others picking apart the stories we settled on. The quirks of the human mind can be tricky – tricky to understand and tricky to overcome. Psychology is part of it. But my recent argument over Trump’s wrongness made me think about the part sociology has in our willingness to change opinion. It’s bad enough to admit to yourself you were wrong. It’s far worse to admit to other people you were wrong.

You see this play out in almost every comment section on social media. People defend hopeless positions, go through rhetorical tricks and textbook fallacies, appeal to authority, build straw men, and slide red herrings down slippery slopes. At the end, there’s always good, old denial. Anything, really, to avoid saying “I was wrong.”

And the more public an opinion was stated, the harder it becomes to backpedal. The more you have chosen friends by their like-mindedness, and the more they count on your like-mindedness, the higher the stakes for being unlike. The more widely known you are, the harder it is to tell your followers you won’t deliver arguments for them any longer. Turn your back on them. Disappoint them. Lose them.

It adds to this that public conversations encourage us to make up opinions on the fly. The three examples I listed above had one thing in common. In none of these cases did I actually know much about what I was saying. It wasn’t that I had wrong information – I simply had no information, and it didn’t occur to me to check, or maybe I just wasn’t interested enough. I was just hoping nobody would notice. I was winging it. You wouldn’t want me as president either.

But enough of the public self-flagellation and back to my usual self. Science is about being wrong more than it is about being right. By the time you have a PhD you’ll have been wrong in countless ways, so many ways indeed that it’s not uncommon for students to despair over their seeming incapability until they’re reassured we’ve all been there.

Science taught me it’s possible to be wrong gracefully, and – as with everything in life – it becomes easier with practice. And it becomes easier if you see other people giving examples. So what have you recently changed your mind about?

Tuesday, August 15, 2017

You don’t expand just because the universe does. Here’s why.

Not how it works.
It’s tough to wrap your head around four dimensions.

We have known that the universe expands since the 1930s, but whether we expand with it is still one of the questions I am asked most frequently. The less self-conscious simply inform me that the universe doesn’t expand but everything in it shrinks – because how could we tell the difference?

The best answer to these questions is, as usual, a lot of math. But it’s hard to find a decent answer online that is not a pile of equations, so here’s a verbal take on it.

The first clue you need to understand the expansion of the universe is that general relativity is a theory for space-time, not for space. As Hermann Minkowski put it in 1908:
“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”
Speaking about the expansion of space, hence, requires undoing this union.

The second clue is that in science a question must be answerable by measurement, at least in principle. We cannot observe space and neither can we observe space-time. We merely observe how space-time affects matter and radiation, which we can measure in our detectors.

The third clue is that the word “relativity” in “general relativity” means that every observer can choose to describe space-time in whatever way he or she wishes. While each observer’s calculations will then differ, they will all come to the same conclusions.

Armed with these three knowledge bites, let us see what we can say about the universe’s expansion.

Cosmologists describe the universe with a model known as Friedmann-Robertson-Walker (named after its inventors). The underlying assumption is that space (yes, space) is filled with matter and radiation that has the same density everywhere and in every direction. It is, as the terminology has it, homogeneous and isotropic. This assumption is called the “Cosmological Principle.”

While the Cosmological Principle originally was merely a plausible ad-hoc assumption, it is by now supported by evidence. On large scales – much larger than the typical intergalactic distances – matter is indeed distributed almost the same everywhere.

But clearly, that’s not the case on shorter distances, like inside our galaxy. The Milky Way is disk-shaped with most of the (visible) mass in the center bulge, and this matter isn’t distributed homogeneously at all. The cosmological Friedmann-Robertson-Walker model, therefore, just does not describe galaxies.

This is a key point and missing it is the origin of much confusion about the expansion of the universe: The solution of general relativity that describes the expanding universe is a solution on average; it is good only on very large distances. But the solutions that describe galaxies are different – and just don’t expand. It’s not that galaxies expand unnoticeably, they just don’t. The full solution, then, is stitched together from both: expanding space between non-expanding galaxies. (Though these solutions are usually only dealt with by computer simulations due to their mathematical complexity.)

You might then ask, at what distance does the expansion start to take over? That happens when you average over a volume so large that the density of matter inside the volume has a gravitational self-attraction weaker than the expansion’s pull. From atomic nuclei up, the larger the volume you average over, the smaller the average density. But it is only somewhere beyond the scales of galaxy clusters that expansion takes over. On very short distances, when the nuclear and electromagnetic forces aren’t neutralized, these also act against the pull of gravity. This safely prevents atoms and molecules from being torn apart by the universe’s expansion.

But here’s the thing. All I just told you relies on a certain, “natural” way to divide up space-time into space and time. It’s the cosmic microwave background (CMB) that helps us do it. There is only one way to split space and time so that the CMB looks on average the same in all directions. After that, you can still pick your time-labels, but the split is done.

Breaking up Minkowski’s union between space and time in this way is called a space-time “slicing.” Indeed, it’s much like slicing bread, where each slice is space at some moment of time. There are many ways to slice bread and there are also many ways to slice space-time. Which, as number 3 clued you, are all perfectly allowed.

The reason that physicists choose one slicing over another is usually that calculations can be greatly simplified with a smart choice of slicing. But if you really insist, there are ways to slice the universe so that space does not expand. However, these slicings are awkward: they are hard to interpret and make calculations very difficult. In such a slicing, for example, going forward in time necessarily pushes you around in space – it’s anything but intuitive.

Indeed, you can do this also with space-time around planet Earth. You could slice space-time so that space around us remains flat. Again though, this slicing is awkward and physically meaningless.

This brings us to the relevance of clue #2. We really shouldn’t be talking about space to begin with. Just as you could insist on defining space so that the universe doesn’t expand, by willpower you could also define space so that Brooklyn does expand. Let’s say a block down is a mile. You could simply insist on using units of length in which tomorrow a block down is two miles, and next week it’s ten miles, and so on. That’s pretty idiotic – and yet nobody could stop you from doing this.

But now consider you make a measurement. Say, you bounce a laser-beam back between the ends of the block, at fixed altitude, and use atomic clocks to measure the time that passes between two bounces. You would find that the time-intervals are always the same.

Atomic clocks rely on the constancy of atomic transition frequencies. The gravitational force inside an atom is entirely negligible relative to the electromagnetic force – it’s about 40 orders of magnitude smaller – and fixing the altitude prevents gravitational redshift caused by the Earth’s gravitational pull. It doesn’t matter which coordinates you used, you’d always find the same, unambiguous measurement result: The time elapsed between bounces of the laser remains the same.
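The “40 orders of magnitude” is easy to check yourself: compare the Coulomb attraction between an electron and a proton to their gravitational attraction. The distance cancels because both forces fall off as 1/r². A quick back-of-the-envelope sketch (constants rounded):

```python
# Ratio of electrostatic to gravitational force between a proton and an
# electron. Both forces go as 1/r^2, so the separation cancels in the ratio.
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
G   = 6.674e-11      # Newton constant, N m^2 / kg^2
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg

ratio = (k_e * e**2) / (G * m_e * m_p)
print(f"{ratio:.1e}")  # roughly 2e39, i.e. ~39-40 orders of magnitude
```

So gravity inside the atom is utterly negligible, which is why atomic clocks make such clean rulers for this kind of measurement.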

It is similar in cosmology. We don’t measure the size of space between galaxies – how would we do that? We measure the light that comes from distant galaxies. And it turns out to be systematically red-shifted regardless of where we look. A simple way to describe this – a space-time slicing that makes calculations and interpretations easy – is that space between the galaxies expands.
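In this slicing, the observed redshift z relates directly to how much space has expanded since the light was emitted: 1 + z = a_obs/a_emit, where a is the scale factor of the expanding universe. A minimal sketch (the function name is my own, for illustration only):

```python
def redshift(a_emit, a_obs=1.0):
    """Cosmological redshift from the scale factor: 1 + z = a_obs / a_emit."""
    return a_obs / a_emit - 1.0

# Light emitted when the universe was half its present size
# arrives redshifted by z = 1:
print(redshift(0.5))  # 1.0
# Light emitted today is not shifted at all:
print(redshift(1.0))  # 0.0
```

The systematic redshift of distant galaxies is then simply the statement that a_emit was smaller in the past than a_obs is today.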

So, the brief answer is: No, Brooklyn doesn’t expand. But the more accurate answer is that you should ask only for the outcome of clearly stated measurement procedures. Light from distant galaxies is shifted to the red meaning they are retreating from us. Light collected from the edges of Brooklyn isn’t redshifted. If we use a space-time slicing in which matter is at rest on the average, then the matter density of the universe is decreasing and was much higher in the past. To the extent that the density of Brooklyn has changed in the past, this can be explained without invoking general relativity.

It may be tough to wrap your head around four dimensions, but it’s always worth the effort.



[This post previously appeared on Starts With A Bang.]

Wednesday, August 09, 2017

Outraged about the Google diversity memo? I want you to think about it.

Chairs. [Image: Verco]
That leaked internal memo from James Damore at Google? The one that says one shouldn’t expect employees in all professions to reflect the demographics of the whole population? Well, that was a pretty dumb thing to write. But not because it’s wrong. Dumb is that Damore thought he could have a reasoned discussion about this. In the USA, of all places.

The version of Damore’s memo that first appeared on Gizmodo missed references and images. But meanwhile, the diversity memo has its own website and it comes with links and graphics.

Damore’s memo strikes me as a pamphlet produced by a well-meaning, but also utterly clueless, young white man. He didn’t deserve to get fired for this. He deserved maybe a slap on the too-quickly typing fingers. But in his world, asking for discussion is apparently enough to get fired.

I don’t normally write about the underrepresentation of women in science. Reason is I don’t feel fit to represent the underrepresented. I just can’t seem to appropriately suffer in my male-dominated environment. To the extent that one can trust online personality tests, I’m an awkwardly untypical female. It’s probably unsurprising I ended up in theoretical physics.

There is also a more sinister reason I keep my mouth shut. It’s that I’m afraid of losing what little support I have among the women in science if I stab them in the back.

I’ve lived in the USA for three years and for three more years in Canada. On several occasions during these years, I’ve been told that my views about women in science are “hardcore,” “controversial,” or “provocative.” Why? Because I stated the obvious: Women are different from men. On that account, I’m totally with Damore. A male-female ratio close to one is not what we should expect in all professions – and not what we should aim at either.

But the longer I keep my mouth shut, the more I think my silence is a mistake. Because it means leaving the discussion – and with it, power – to those who shout the loudest. Like CNBC. Which wants you to be “shocked” by Damore’s memo in a rather transparent attempt to produce outrage and draw clicks. Are you outraged yet?

Increasingly, media-storms like this make me worry about the impression scientists give to the coming generation. Give to kids like Damore. I’m afraid they think we’re all idiots because the saner of us don’t speak up. And when the kids think they’re oh-so-smart, they’ll produce pamphlets to reinvent the wheel.

Fact is, though, much of the data in Damore’s memo is well backed-up by research. Women indeed are, on the average, more neurotic than men. It’s not an insult, it’s a common term in psychology. Women are also, on the average, more interested in people than in things. They do, on the average, value work-life balance more, react differently to stress, compete by other rules. And so on.

I’m neither a sociologist nor a psychologist, but my understanding of the literature is that these are uncontroversial findings. And not new either. Women are different from men, both by nature and by nurture, though it remains controversial just what is nurture and what is nature. But the cause is beside the point for the question of occupation: Women are different in ways that plausibly affect their choice of profession.

No, the problem with Damore’s argument isn’t the starting point, the problem is the conclusions that he jumps to.

To begin with, even I know most of Google’s work is people-centric. It’s either serving people directly, or analyzing people-data, or imagining the people-future. If you want to spend your life with things and ideas rather than people, then go into engineering or physics, but not into software-development.

That coding actually requires “female” skills was spelled out clearly by Yonatan Zunger, a former Google employee. But since I care more about physics than software-development, let me leave this aside.

The bigger mistake in Damore’s memo is one I see frequently: Assuming that job skills and performance can be deduced from differences among demographic groups. This just isn’t so. I believe for example if it wasn’t for biases and unequal opportunities, then the higher ranks in science and politics would be dominated by women. Hence, aiming at a 50-50 representation gives men an unfair advantage. I challenge you to provide any evidence to the contrary.

I’m not remotely surprised, however, that Damore naturally assumes the differences between typically female and male traits mean that men are more skilled. That’s the bias he thinks he doesn’t have. And, yeah, I’m likewise biased in favor of women. Guess that makes us even then.

The biggest problem with Damore’s memo however is that he doesn’t understand what makes a company successful. If a significant fraction of employees think that diversity is important, then it is important. No further justification is needed for this.

Yes, you can argue that increasing diversity may not improve productivity. The data situation on this is murky, to say the least. There’s some story about female CEOs in Sweden that supposedly shows something – but I want to see better statistics before I buy that. And in any case, the USA isn’t Sweden. More importantly, productivity hinges on employees’ well-being. If a diverse workplace is something they value, then that’s something to strive for, period.

What Damore seems to have aimed at, however, was merely to discuss the best way to deal with the current lack of diversity. Biases and unequal opportunities are real. (If you doubt that, you are a problem and should do some reading.) This means that the current representation of women, underprivileged and disabled people, and other minorities, is smaller than it would be in that ideal world which we don’t live in. So what to do about it?

One way to deal with the situation is to wait until the world catches up. Educate people about bias, work to remove obstacles to education, change societal gender images. This works – but it works very slowly.

Worse, one of the biggest obstacles that minorities face is a chicken-and-egg problem that time alone doesn’t cure. People avoid professions in which there are few people like them. This is a hurdle which affirmative action can remove, fast and efficiently.

But there’s a price to pay for preferably recruiting the presently underrepresented. Which is that people supported by diversity efforts face a new prejudice: They weren’t hired because they’re skilled. They were hired because of some diversity policy!

I used to think this backlash has to be avoided at all costs, hence was firmly against affirmative action. But during my years in Sweden, I saw that it does work – at least for women – and also why: It makes their presence unremarkable.

In most of the European North, a woman in a leading position in politics or industry is now commonplace. It’s nothing to stare at and nothing to talk about. And once it’s commonplace, people stop paying attention to a candidate’s gender, which in return reduces bias.

I don’t know, though, if this would also work in science which requires an entirely different skill-set. And social science is messy – it’s hard to tell how much of the success in Northern Europe is due to national culture. Hence, my attitude towards affirmative action remains conflicted.

And let us be clear that, yes, such policies mean every once in a while you will not hire the most skilled person for a job. Therefore, a value judgement must be made here, not a logical deduction from data. Is diversity important enough for you to temporarily tolerate an increased risk of not hiring the most qualified person? That’s the trade-off nobody seems willing to spell out.

I also have to spell out that I am writing this as a European who now works in Europe again. For me, the most relevant contribution to equal opportunity is affordable higher education and health insurance, as well as governmentally paid maternity and parental leave. Without that, socially disadvantaged groups remain underrepresented, and companies continue to fear for revenue when hiring women of childbearing age. That, in all fairness, is an American problem not even Google can solve.

But one also doesn’t solve a problem by yelling “harassment” each time someone asks to discuss whether a diversity effort is indeed effective. I know from my own experience, and a poll conducted at Google confirms, that Damore’s skepticism about current practices is widespread.

It’s something we should discuss. It’s something Google should discuss. Because, for better or worse, this case has attracted much attention. Google’s handling of the situation will set an example for others.

Damore was fired, basically, for making a well-meant, if amateurish, attempt at institutional design, based on woefully incomplete information he picked from published research studies. But however imperfect his attempt was, he was fired, in the end, for thinking on his own. And what example does that set?

Thursday, August 03, 2017

Self-tuning brings wireless power closer to reality

Cables under my desk.
One of the unlikelier fights I picked while blogging was with an MIT group that aimed to wirelessly power devices – by tunneling:
“If you bring another resonant object with the same frequency close enough to these tails then it turns out that the energy can tunnel from one object to another,” said Professor Soljacic.
They had proposed a new method for wireless power transfer using two electric circuits in magnetic resonance. But there’s no tunneling in such a resonance. Tunneling is a quantum effect. Single particles tunnel. Sometimes. But kilowatts definitely don’t.

I reached out to the professor’s coauthor, Aristeidis Karalis, who told me, even more bizarrely: “The energy stays in the system and does not leak out. It just jumps from one to the other back and forth.”

I had to go and calculate the Poynting vector to make clear the energy is – as always – transmitted from one point to another by going through all points in between. It doesn’t tunnel, and it doesn’t jump either. For the powering device the MIT group envisioned, the energy flow is focused between the centers of the resonant coils.

The difference between “jumping” and “flowing” energy is more than just words. Once you know that energy is flowing, you also know that if you’re in its way you might get some of it. And the more focused the energy, the higher the possible damage. This means large devices have to be close together, and the energy must be spread out over large surfaces to comply with safety standards.

Back then, I did some estimates. If you want to transfer, say, 1 Watt, and you distribute it over a coil with a radius of 30 cm, you end up with a density of roughly 1 mW/cm^2. That already exceeds the safety limit (in the frequency range 30-300 MHz). And that’s leaving aside that there usually must be much more energy in the resonance field than what’s actually transmitted. And 30 cm isn’t exactly handy. In summary, it’ll work – but it’s not practical, and it won’t charge the laptop without roasting whatever gets in the way.
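The estimate is easy to reproduce. A minimal sketch – the 0.2 mW/cm^2 comparison value is my addition, quoted from memory as the FCC general-population exposure limit in the 30-300 MHz band, so treat both numbers as order-of-magnitude only:

```python
import math

# Reproduce the estimate: 1 W distributed over a coil of 30 cm radius.
power_W = 1.0
radius_cm = 30.0
area_cm2 = math.pi * radius_cm**2            # ~2830 cm^2
density_mW_cm2 = power_W / area_cm2 * 1e3    # W/cm^2 -> mW/cm^2
print(f"power density ~ {density_mW_cm2:.2f} mW/cm^2")  # -> ~0.35 mW/cm^2

# Assumed exposure limit (FCC general population, 30-300 MHz):
fcc_limit_mW_cm2 = 0.2
print("exceeds limit:", density_mW_cm2 > fcc_limit_mW_cm2)  # -> True
```

The result comes out a factor of a few below 1 mW/cm^2, i.e. the same order of magnitude as quoted, and still above the exposure limit.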

The MIT guys meanwhile founded a company, Witricity, and dropped the tunneling tale.

Another problem with using resonance for wireless power is that the efficiency depends on the distance between the circuits. It doesn’t work well when they’re too far apart, nor when they’re too close together. That’s not great for real-world applications.

But in a recent paper published in Nature, a group from Stanford put forward a solution to this problem. And even though I’m not too enchanted by transferring power by magnetic resonance, it is a really neat idea:
Usually the resonance between two circuits is designed, meaning the receiver’s and sender’s frequencies are tuned to work together. But in the new paper, the authors instead let the frequency of the sender range freely – they merely feed it energy. They then show that the coupled system automatically tunes itself to a resonance frequency at which the efficiency is maximal.

The maximal efficiency they reach is the same as with the fixed-frequency circuits. But it works better at short distances. While the usual setting is inefficient both at too short and at too long distances, the self-tuned system has a stable efficiency up to some distance, and decays after that. This makes the new arrangement much more useful in practice.
Efficiency of energy transfer as a function of distance between the coils (schematic). Blue curve: the usual setting with pre-fixed frequency. Red curve: the self-tuned circuits.
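The shape of the two curves can be reproduced with a toy coupled-mode model – my own illustration, not the circuit model of the Nature paper. Take two identical resonators with linewidth g (loading only, no intrinsic loss, so the idealized peak efficiency is 100%) and a coupling k that falls off as 1/d^3 with coil distance d. The transferred power fraction at drive detuning delta is T(delta) = 4 g^2 k^2 / |(i*delta + g)^2 + k^2|^2; a fixed-frequency drive sits at delta = 0, while a self-tuned system effectively operates at whichever delta maximizes T:

```python
import numpy as np

# Toy coupled-mode sketch of fixed-frequency vs self-tuned transfer.
def transmission(delta, k, g=1.0):
    """Power fraction transferred between two identical resonators
    with port linewidth g, coupling k, at drive detuning delta."""
    denom = (1j * delta + g) ** 2 + k ** 2
    return 4 * g**2 * k**2 / np.abs(denom) ** 2

deltas = np.linspace(-10, 10, 4001)
for d in [0.5, 1.0, 1.5, 2.0, 3.0]:
    k = d ** -3                      # inductive coupling falls off ~ 1/d^3
    fixed = transmission(0.0, k)     # drive locked to the bare resonance
    tuned = transmission(deltas, k).max()  # drive at the best frequency
    print(f"d = {d:.1f}: fixed-frequency {fixed:.2f}, self-tuned {tuned:.2f}")
```

In this sketch the fixed-frequency efficiency peaks at the critical coupling (here d = 1) and drops on both sides, because at strong coupling the resonance splits and the drive is left off-resonance. The self-tuned maximum stays flat until the coupling falls below the linewidth and only then decays – the qualitative behavior of the blue and red curves.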

The group didn’t just calculate this, they also did an experiment to show that it works. One limitation of the present setup, though, is that it transfers power in one direction only, so it’s still not too practical. But it’s a big step forward.

Personally, I’m more optimistic about using ultrasound for wireless power transfer than about magnetic resonance, because ultrasound presently reaches larger distances. Both technologies, however, are still very much in their infancy, so it’s hard to tell which one will win out.

(Note added: Ultrasound not looking too convincing either, ht Tim, see comments for more.)

Let me not forget to mention that in an ingenious paper, which was completely lost on the world, I showed that you don’t need to transfer the total energy to the receiver. You only need to send the information necessary to decrease the entropy in the receiver’s surroundings; the receiver can then draw energy from its environment.

Unfortunately, I could think of how to do this only for a few atoms at a time. And, needless to say, I didn’t do any experiment – I’m a theoretician after all. While I’m sure that in a few thousand years everyone will use my groundbreaking insight, until then it’s coils or ultrasound or good old cables.

Friday, July 28, 2017

New paper claims string theory can be tested with Bose-Einstein-Condensates

Fluorescence image of a Bose-Einstein-Condensate. Image Credits: Stefan Kuhr and Immanuel Bloch, MPQ
String theory is infamously detached from experiment. But in a new paper, a group from Mexico put forward a proposal to change that:
    String theory phenomenology and quantum many–body systems
    Sergio Gutiérrez, Abel Camacho, Héctor Hernández
    arXiv:1707.07757 [gr-qc]
Right away, let me be clear that they don’t aim to test string theory itself, but rather the presence of additional dimensions of space, which is a prediction of string theory.

In the paper, the authors calculate how additional space-like dimensions affect a condensate of ultra-cold atoms, known as a Bose-Einstein-Condensate. At such low temperatures, the atoms transition to a state in which they share a single quantum wave-function, and the system begins to display quantum effects, such as interference, throughout.

In the presence of extra dimensions, every particle’s wave-function has higher harmonics because the extra dimensions have to close up, in the simplest case like circles. The particles’ wave-functions have to fit into the extra dimensions, meaning their wave-length must be an integer fraction of the circumference.

Each of the additional dimensions has a radius of about a Planck length, which is 10^-35 m, or some 15 orders of magnitude smaller than what even the LHC can probe. To excite these higher harmonics, you correspondingly need an energy of about 10^16 TeV, or 15 orders of magnitude higher than what the LHC can produce.
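These numbers are easy to check: the lowest excitation of a wave confined to a compact dimension of size R costs roughly hbar*c/R, which for R at the Planck length is the Planck energy. Compare that with the thermal energy per particle in a nano-Kelvin condensate:

```python
# Order-of-magnitude check of the Kaluza-Klein excitation energy.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
kB = 1.380649e-23        # J/K
eV = 1.602176634e-19     # J per eV
R = 1.616e-35            # Planck length, m

E_KK = hbar * c / R      # lowest higher-harmonic excitation, ~2e9 J
print(f"KK excitation ~ {E_KK / eV / 1e12:.1e} TeV")  # -> ~1.2e16 TeV

E_thermal = kB * 1e-9    # thermal energy scale at 1 nK
print(f"gap: ~{E_KK / E_thermal:.1e} times the thermal energy at 1 nK")
```

That gap of some 41 orders of magnitude is the reason the harmonics play no role in the condensate.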

How do the extra-dimensions of string theory affect the ultra-cold condensate? They don’t. That’s because at those low temperatures there is no way you can excite any of the higher harmonics. Heck, even the total energy of the condensates presently used isn’t high enough. There’s a reason string theory is famously detached from experiment – because it’s a damned high energy you must reach to see stringy effects!

So what’s the proposal in the paper then? There isn’t one. They simply ignore that the higher harmonics can’t be excited and make a calculation. Then they estimate that one needs a condensate of about a thousand particles to measure a discontinuity in the specific heat, which depends on the number of extra-dimensions.

It’s probably correct that this discontinuity depends on the number of extra dimensions. Unfortunately, the authors don’t go back and check what mass per particle in the condensate would be needed to make this work. I’ve put in the numbers and get something like a million tons. That gigantic mass becomes necessary because it has to combine with the minuscule temperature of about a nano-Kelvin to give a geometric mean that exceeds the Planck mass.

In summary: Sorry, but nobody’s going to test string theory with Bose-Einstein-Condensates.

Wednesday, July 19, 2017

Penrose claims LIGO noise is evidence for Cyclic Cosmology

Noise is the physicists’ biggest enemy. Unless you are a theorist whose pet idea masquerades as noise. Then you are best friends with noise. Like Roger Penrose.
    Correlated "noise" in LIGO gravitational wave signals: an implication of Conformal Cyclic Cosmology
    Roger Penrose
    arXiv:1707.04169 [gr-qc]

Roger Penrose made his name with the Penrose-Hawking theorems and twistor theory. He is also well-known for writing books with very many pages, most recently “Fashion, Faith, and Fantasy in the New Physics of the Universe.”

One man’s noise is another man’s signal.
Penrose doesn’t like most of what’s currently in fashion, but believes that human consciousness can’t be explained by known physics and that the universe is cyclically reborn. This cyclic cosmology, so goes his recent claim, gives rise to correlations in the LIGO noise – just like what’s been observed.

The LIGO experiment consists of two interferometers in the USA, separated by about 3,000 km. A gravitational wave signal should pass through both detectors with a delay determined by the time it takes the gravitational wave to sweep from one US-coast to the other. This delay is typically of the order of 10 ms, but its exact value depends on where the waves came from.
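The 10 ms is simply the travel time of the waves – which move at (or extremely close to) the speed of light – between the two sites:

```python
# Maximum inter-detector delay = separation / wave speed.
c = 299_792_458          # speed of light, m/s
d = 3.0e6                # ~3,000 km detector separation, m
delay_ms = d / c * 1e3
print(f"maximum delay ~ {delay_ms:.0f} ms")  # -> ~10 ms
```

The actual delay is shorter for sources that don’t lie along the line connecting the detectors.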

The correlation between the two LIGO detectors is one of the most important criteria used by the collaboration to tell noise from signal. The noise itself, however, isn’t entirely uncorrelated. Some sources of the correlations are known, but some are not. This is not unusual – understanding the detector is as much part of a new experiment as is the measurement itself. The LIGO collaboration, needless to say, thinks everything is under control and the correlations are adequately taken care of in their signal analysis.

A Danish group of researchers begs to differ. They recently published a criticism on the arXiv in which they complain that, after subtracting the signal of the first gravitational wave event, correlations remain at the same time-delay as the signal. That clearly shouldn’t happen. First and foremost, it would demonstrate sloppy signal extraction by the LIGO collaboration.

A reply to the Danes’ criticism by Ian Harry from the LIGO collaboration quickly appeared on Sean Carroll’s blog. Ian pointed out some supposed mistakes in the Danish group’s paper. Turns out, though, the mistake was on his side. Once corrected, Harry’s analysis reproduces the correlations which shouldn’t be there. Bummer.

Ian Harry did not respond to my requests for comment. Neither did Alessandra Buonanno from the LIGO collaboration, who was also acknowledged by the Danish group. David Shoemaker, the current LIGO spokesperson, let me know he has “full confidence” in the results, and also, the collaboration is working on a reply, which might however take several months to appear. In other words, go away, there’s nothing to see here.

But while we wait for the LIGO response, speculations abound as to what might cause the supposed correlation. Penrose beat everyone to an explanation – even Craig Hogan, who has run his own experiment looking for correlated noise in interferometers, and on whom I was counting.

Penrose’s cyclic cosmology works by gluing the big bang together with what we usually think of as the end of the universe – an infinite accelerated expansion into nothingness. Penrose conjectures that both phases – the beginning and the end – are conformally invariant, which means they possess a symmetry under a stretching of distance scales. Then he identifies the end of the universe with the beginning of a new one, creating a cycle that repeats indefinitely. In his theory, what we think of as inflation – the accelerated expansion in the early universe – becomes the final phase of acceleration in the cycle preceding our own.

Problem is, the universe as we presently see it is not conformally invariant. What screws up conformal invariance is that particles have masses, and these masses also set a scale. Hence, Penrose has to assume that eventually all particle masses fade away so that conformal invariance is restored.

There’s another problem. Since Penrose’s conformal cyclic cosmology has no inflation, it also lacks a mechanism to create the temperature fluctuations in the cosmic microwave background (CMB). Luckily, however, the theory also gives rise to a new scalar particle that couples only gravitationally, and this particle provides the new phenomenology. Penrose named it “erebon,” after Erebos, the ancient Greek god of darkness.

Erebos, the God of Darkness,
according to YouTube.
The erebons have a mass of about 10^-5 gram because “what else could it be,” and they have a lifetime determined by the cosmological constant, presumably also because what else could it be. (Aside: Note that these are naturalness arguments.) The erebons make up dark matter, and their decay causes gravitational waves that seed the CMB temperature fluctuations.
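The “what else could it be” here is the Planck mass, the square root of hbar*c/G, which indeed comes out at roughly 10^-5 gram:

```python
import math

# The Planck mass, the only mass scale you can build from hbar, c, and G.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 / (kg * s^2)

m_planck_kg = math.sqrt(hbar * c / G)
print(f"Planck mass ~ {m_planck_kg * 1e3:.1e} gram")  # -> ~2.2e-5 gram
```
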

Since erebons are created at the beginning of each cycle and decay away throughout it, they also create a gravitational wave background. Penrose then argues that a gravitational wave signal from a binary black hole merger – like the ones LIGO has observed – should be accompanied by noise-like signals from erebons that decayed at the same time in the same galaxy. The point is that this noise-like contribution would be correlated with the same time-difference as the merger signal.

In his paper, Penrose does not analyze the details of his proposal. He merely writes:
“Clearly the proposal that I am putting forward here makes many testable predictions, and it should not be hard to disprove it if it is wrong.”
In my impression, this is a sketchy idea and I doubt it will work. I don’t have a major problem with inventing some particle to make up dark matter, but I have a hard time seeing how the decay of a Planck-mass particle can give rise to a signal comparable in strength to a black hole merger (or why the signals from several of them would add up coherently to a larger one).

Even taking this at face value, the decay signals wouldn’t come from only one galaxy but from all galaxies, so the noise should be correlated all over, and at pretty much all time-scales – not just at the 12 ms delay the Danish group has claimed. Worst of all, the dominant part of the signal would come from our own galaxy – so why haven’t we seen it already?

In summary, one can’t blame Penrose for being fashionable. But I don’t think that erebons will be added to the list of LIGO’s discoveries.