Vasudevan Mukunth is the science editor at The Wire.
- The work of the three individuals who have won the Nobel Prize for physics this year won’t just make you go “wow”. It might also leave you unsettled.
- In 1935, Albert Einstein, Boris Podolsky and Nathan Rosen highlighted an implication of quantum physics that they interpreted as a flaw.
- In 1964, John Stewart Bell gave their arguments a mathematical form, allowing experimental physicists to test their intriguing claim.
- Three of the more important tests were conducted in 1972, 1982 and 1998, led by John Clauser, Alain Aspect and Anton Zeilinger respectively.
- Their efforts helped physicists conclude that quantum physics is a complete description of reality, and that this reality is indeterministic.
Bengaluru: The whiteness and the maleness of the winners of the Nobel Prize for physics continue to undermine its relevance in 2022, with the honour this year conferred on Anton Zeilinger, Alain Aspect and John Clauser. But we must admit that the work the prizes highlight every year is nearly impossible to surpass, in quality but more importantly in its ability to make one go “wow”. This year is no exception. But the current laureates’ work won’t just make you go “wow”: if you understand all of it, you should be baffled and perhaps a little nervous.
(Before we go any further, I should say that in deference to the potentially complicated nature of the ideas to follow, I will use the simplest narrative structure I know: chronological.)
In 1802, a British physicist named Thomas Young first performed an experiment that has become emblematic of humans’ inability to grasp the fullness of quantum physics, and of what our reality really looks like at its smallest, most fundamental scales. You must have encountered it in high school as the double-slit experiment. Young passed light from a candle through two narrow slits cut in paper and observed the pattern cast on the wall opposite. He found that instead of two separate pricks of light on the wall, there was an undulating pattern of light and dark patches. It was a demonstration that light is a wave: the shapes on the wall formed an interference pattern, which only waves can create.
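For reference, the positions of those light and dark patches follow a simple formula. For two idealised narrow slits a distance d apart, lit by light of wavelength λ, the brightness at an angle θ from the centre line varies as

$$I(\theta) = I_0 \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right)$$

(ignoring the gradual dimming towards the edges caused by each slit on its own): brightest wherever the two paths from the slits differ by a whole number of wavelengths, dark wherever they differ by an odd half-number.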
Fast-forward to the 20th century, past the birth of quantum physics, the theories of relativity and the application of these ideas to several other fields of study. In 1935, Erwin Schrödinger articulated his objection to the Copenhagen interpretation of quantum physics with a famous thought-experiment involving a cat.
The Copenhagen interpretation is usually attributed to Werner Heisenberg and Niels Bohr. It has two important features. First, it claims that quantum physics is indeterministic: every possible outcome has an associated probability with which it will occur. Second, some parameters of quantum systems occur in pairs, such that the values of the two paired parameters can’t both be known with arbitrary precision at the same time. (A.k.a. Heisenberg’s uncertainty principle, taught in schools with the example that you can’t know both the position and the velocity of an electron precisely at the same time.)
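In symbols, for the best-known pairing, position x and momentum p (momentum being mass times velocity), the uncertainty principle reads

$$\Delta x \, \Delta p \geq \frac{\hbar}{2}$$

where Δx and Δp are the respective uncertainties and ħ is the reduced Planck constant: squeeze one down and the other is forced up.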
Using his thought-experiment, Schrödinger explained that the Copenhagen interpretation allowed a quantum system to exist in two states at once, each with a fixed probability attached to it. When the system is observed, it resolves itself into one of the two states depending on those probabilities. He wasn’t happy with the idea – hence his choice of a metaphorical cat that was alive and dead at once.
Also in 1935, Albert Einstein, Boris Podolsky and Nathan Rosen proposed a thought experiment in a joint paper. Based on what they knew of quantum physics at the time, they realised it permitted something called an entangled state. Say two electrons are separated by a large distance. If they are entangled, measuring, say, the speed of one electron will immediately give you information about the result of measuring the speed of the other. Particles become entangled when, simply speaking, they are created from a common packet of energy.
The “immediately” is crucial: you, the measurer, have information about the second electron faster than you can get with, say, a radio signal. In other words, information about the result of the second measurement will reach you before a signal travelling at the speed of light can. Einstein famously called this possibility, which arises once you measure the properties of one of the two particles, “spooky action at a distance”. According to the trio, quantum physics thus violated a fundamental tenet of physics at the time: locality, the idea that an action couldn’t affect its surroundings at a speed faster than that of light. [footnote]Einstein’s theories of relativity hinged on the speed of light in vacuum being the highest speed possible.[/footnote] So they concluded that quantum physics was an incomplete description of reality and that it required some hitherto unknown “hidden variables” to describe reality completely, without any paradoxes.
Next, jump to 1964: a Northern Irish physicist named John Bell published a theorem to test whether quantum physics is compatible with theories that use “hidden variables” to describe reality – i.e. whether quantum physics per se was compatible with a classical theory that didn’t violate locality. Bell argued that if a “hidden variable” was coordinating the behaviour of two entangled particles, the rules of classical physics would impose a limit on how strongly the results of measurements on the two particles could be correlated. He calculated a threshold: if the extent of correlation between two measurements stayed below the threshold, a “hidden variable” could be at work. If it exceeded the threshold, only quantum physics could be at work. This is why physicists also refer to Bell’s theorem in terms of Bell’s inequalities.
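Bell’s threshold takes several equivalent mathematical forms. The version most Bell tests, including Clauser’s and Aspect’s, actually check is the Clauser-Horne-Shimony-Holt (CHSH) inequality. If E(a, b) denotes the correlation between the two sides’ results when they measure along settings a and b, then any local “hidden variable” theory must satisfy

$$S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \leq 2$$

whereas quantum physics predicts that entangled particles can push |S| as high as 2√2 ≈ 2.83. That gap, between 2 and 2.83, is what the Bell tests went looking for.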
Experiments that test Bell’s theorem are called Bell tests. Each Bell test closes a different loophole, or combination of loopholes, in a “spooky action at a distance” experiment, so that physicists can check whether “hidden variables” could explain the results. Put another way, each Bell test is a test of a Bell inequality. The first Bell test was conducted in 1972 by John Clauser, one of the physics Nobel laureates this year, and Stuart Freedman (who died in 2012). By 1976, three pairs of physicists, working separately, had conducted seven Bell tests. The consensus was that quantum physics was not incomplete and that “hidden variables” theories couldn’t be involved.
Clauser, Aspect and Zeilinger were part of three experiments that successively closed important loopholes in Bell tests and showed that a) quantum physics is incompatible with any theory that requires “hidden variables” and b) a theory that requires “hidden variables” can’t adequately describe reality. Ultimately, they helped physicists converge on an unsettling conclusion: if quantum physics is both complete and probabilistic (per the Copenhagen interpretation), then our reality is also nonlocal.
The three tests
The Clauser-Freedman test happened in 1972, the Aspect et al. test in 1982 and the Zeilinger et al. test in 1998. They are so separated in time because of the technologies each test required. For this reason, it might be easier to start with the Aspect et al. test of 1982. (What follows is a simplistic description meant to make a point, not to describe the experiment accurately.)
Imagine a large room with two physicists, Selvi and Geeta, standing at opposite ends, each next to a detector. In the middle of the room, a source emits entangled photons (particles of light), one in each direction towards the detectors. Both Selvi and Geeta have independently (separately, without telling each other) decided to check which way a photon is polarised.[footnote]The angle of polarisation is the angle at which the photon’s electric field oscillates, measured against a reference axis in the plane perpendicular to the photon’s direction of movement.[/footnote] That is, when Selvi’s detector receives a photon, she checks if its angle of polarisation is P. When Geeta’s detector receives a photon, she checks if its angle of polarisation is Q. In each case, they record the result of their checks: “yes” or “no”.
After recording results from hundreds of photons, they compare their notes. If the extent of correlation between their results exceeds the threshold set by the Bell inequality for the test, then “hidden variable” theories are ruled out. The real experiments implemented versions of this scheme. In their 1972 test, Clauser and Freedman used a rudimentary technique to produce entangled photons. The particles flew off to two detectors 5 m apart that measured their polarisation. If the photons arrived simultaneously and their polarisations matched, the detectors fired signals to a device called a coincidence counter. Finally, the duo compared the correlation implied by the coincidence counts to the Bell-inequality threshold for their test.
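A toy simulation makes the threshold concrete. The sketch below, a minimal illustration rather than a model of any actual experiment, draws measurement outcomes for polarisation-entangled photon pairs from the probabilities quantum physics predicts, estimates the four correlations and computes the CHSH quantity S defined above. The analyser angles are the textbook choices that maximise the violation; the result should land near 2√2 ≈ 2.83, comfortably above the classical limit of 2.

```python
import numpy as np

rng = np.random.default_rng(42)

def measure_pair(a, b, n):
    """Simulate n polarisation measurements on entangled photon pairs in the
    state (|HH> + |VV>)/sqrt(2), with analysers at angles a and b (radians).
    Quantum physics says the two outcomes (+1/-1) agree with probability
    cos^2(a - b). Returns the estimated correlation E(a, b)."""
    agree = rng.random(n) < np.cos(a - b) ** 2        # do the outcomes match?
    outcomes_a = rng.choice([-1, 1], size=n)          # Selvi's result, 50-50
    outcomes_b = np.where(agree, outcomes_a, -outcomes_a)  # Geeta's result
    return np.mean(outcomes_a * outcomes_b)

n = 200_000
# Analyser settings (radians) that maximise the quantum violation:
a, a2 = 0.0, np.pi / 4            # Selvi's two choices: 0 and 45 degrees
b, b2 = np.pi / 8, 3 * np.pi / 8  # Geeta's two choices: 22.5 and 67.5 degrees

S = (measure_pair(a, b, n) - measure_pair(a, b2, n)
     + measure_pair(a2, b, n) + measure_pair(a2, b2, n))

print(f"S = {S:.3f}")  # ~2.83: above 2, so no local hidden-variable theory fits
```

No assignment of pre-existing “hidden” answers carried by each photon can reproduce these statistics; that is Bell’s point.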
Between this time and 1980, Alain Aspect and his colleagues at the École Supérieure d’Optique in Orsay, France, got to work on technologies that would allow them to produce and record photons more efficiently, with tighter control. The Clauser-Freedman test used instruments that together had so many compounding inefficiencies that the duo required more than 200 hours of measurements to produce one significant result. Aspect & co. improved on this setup with enhancements to the photon source and the coincidence counter.
Aspect’s test was similar to the Selvi-Geeta experiment, with one change: instead of Selvi and Geeta deciding on P and Q in advance, the experiment itself decided which combination of polarisation values to measure on the fly, after each photon had departed from the source. Doing this closed a loophole in the Clauser-Freedman test (the so-called locality loophole: the possibility that one detector’s setting could have been communicated, at light speed or slower, to the other detector or to the source), but more importantly produced results of a quality that put Alain Aspect on the map. Where Clauser and Freedman had demonstrated that there was a practical way to test Bell’s inequalities, Aspect & co. produced the first result that violated an inequality with enough confidence to indicate that quantum physics would win on all counts over “hidden variable” theories.
Aspect’s own test had one loophole, even if it was relatively minor: the detection loophole. That is, the detectors registered only a fraction of the incoming photons, not all of them. This left open the possibility that while the detected photons led to a violation of Bell’s inequality, including the undetected photons would have produced a result that respected the inequality. But other physicists soon designed experiments based on Aspect’s setup, with more efficient detectors, to close this loophole.
The third big Bell test came in 1998, led by Anton Zeilinger – and this one was spectacular. A highly sophisticated version of Young’s double-slit experiment of two centuries prior, it was known as the delayed-choice quantum eraser experiment. Recall that in Young’s experiment, the interference pattern on the wall was evidence that light behaved like a wave as it moved through the two slits and interfered on the other side. The pattern takes shape because one part of the light wave interferes with another part of itself. How might the outcome change if experimenters instead emitted one photon at a time towards the slits? Let’s answer this in two parts.
First: the “quantum eraser”. When you emit a beam of light towards two slits, you see an interference pattern on the other side. You can rationalise what you’re seeing by imagining light as waves. If the light behaved particle-like when it went through the slits, you wouldn’t see any patterns.
Let’s say there is a photon emitter directed at a beam-splitter, a device that randomly deflects photons to the left or to the right, with 50-50 odds. If a photon is registered at a detector on the right, we can say the beam-splitter deflected it to the right; likewise for the left. Now, using an arrangement of mirrors, let’s redirect the two separate streams of photons so that they intersect. At the intersection, place a second beam-splitter. This arrangement is called a Mach-Zehnder interferometer.
When the two streams intersect at the second beam-splitter, some light will behave wave-like and produce an interference pattern, and some light will behave particle-like and produce none. The second beam-splitter splits up the interference pattern: one pattern shows on the right side of the splitter and another on the left. These patterns won’t retain any information about which photon came from which stream. That information will have been erased – thus the experiment’s name.
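How the two beam-splitters create or destroy the pattern can be captured in a few lines. The following is a minimal sketch of an idealised, lossless Mach-Zehnder interferometer, not of any specific apparatus: each beam-splitter acts as a small matrix on the amplitudes of the photon’s two possible paths, and a phase difference φ between the arms decides which detector fires.

```python
import numpy as np

# A lossless 50-50 beam-splitter, acting on the amplitudes of the two paths.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def detector_probs(phi):
    """Probability of the photon arriving at each output detector of an
    idealised Mach-Zehnder interferometer, for a phase difference phi
    (radians) between the two arms."""
    psi = np.array([1, 0], dtype=complex)        # photon enters along path 0
    psi = BS @ psi                               # first beam-splitter: superposition
    psi = np.diag([1, np.exp(1j * phi)]) @ psi   # extra phase in one arm
    psi = BS @ psi                               # second beam-splitter: recombine
    return np.abs(psi) ** 2                      # Born rule: |amplitude|^2

for phi in [0, np.pi / 2, np.pi]:
    print(phi, detector_probs(phi))
# With both beam-splitters in place, the probabilities swing between 0 and 1
# as phi varies: an interference pattern. Remove the second beam-splitter
# (skip the last BS @ psi) and each detector fires 50% of the time,
# with no dependence on phi: particle-like behaviour.
```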
In their test, Zeilinger & co. used entangled photons instead of single, unentangled ones. Specifically, a source would emit a pair of entangled photons. One photon would enter a Mach-Zehnder interferometer installed in a laboratory in La Palma. The other photon would be shot towards a telescope in Tenerife, some 144 km away. (The team waited until sundown to conduct these tests and had to finish before the Moon rose in the sky.)
The beam-splitters at La Palma were sensitive to the photon’s polarisation. If the team knew how a photon was polarised, they could tell which path it took through the setup to reach the second beam-splitter. The researchers could also obtain this information by measuring the polarisation of the photon received at Tenerife, since the two photons were entangled. For the same reason, if they changed the polarisation of the Tenerife photon without measuring it first, they would lose information about the La Palma photon’s polarisation, and thus erase any information about which path it took.
This decision – whether to measure or to erase – was left to a random-number generator. If its reading was ‘0’, the team measured the polarisation of the Tenerife photon. If the reading was ‘1’, the team altered its polarisation.
The “144 km” is important: by the time the second photon has reached Tenerife, the first photon will have passed through the Mach-Zehnder interferometer in La Palma. That is, it will have finished behaving either like a particle or like a wave. When the researchers knew which photon had followed which path, that ‘bit’ of light behaved particle-like and wasn’t involved in an interference pattern. But when they didn’t know the path information of a photon, that ‘bit’ of light behaved like a wave and produced an interference pattern at the second beam-splitter.
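Under the same idealisations as the sketch above, here is how the entangled partner switches the behaviour on and off. The function names are illustrative, not from the experiment, and the toy model makes explicit one caveat the simplified story glosses over: after erasure, the revived interference shows up when the La Palma detections are sorted by the Tenerife result (which is what coincidence counting does); the unsorted totals stay flat.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # lossless 50-50 beam-splitter

def la_palma_stats(phi, erase):
    """Joint statistics for the entangled pair (|path0, H> + |path1, V>)/sqrt(2),
    in which the La Palma photon's path is marked by its Tenerife partner's
    polarisation. Returns a 2x2 array of probabilities:
    rows = La Palma detector, columns = Tenerife outcome."""
    joint = np.eye(2, dtype=complex) / np.sqrt(2)   # rows: path, cols: polarisation
    if erase:
        # Rotating the Tenerife polarisation by 45 degrees before it is
        # measured scrambles H and V, erasing the which-path label.
        R = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
        joint = joint @ R.T
    # La Palma photon: phase phi between its arms, then the second beam-splitter.
    joint = BS @ np.diag([1, np.exp(1j * phi)]) @ joint
    return np.abs(joint) ** 2                       # joint detection probabilities

for phi in np.linspace(0, np.pi, 5):
    kept = la_palma_stats(phi, erase=False).sum(axis=1)
    erased = la_palma_stats(phi, erase=True)
    print(f"phi={phi:.2f}  path info kept: {kept}  "
          f"erased, sorted by Tenerife outcome: {erased[:, 0] / erased[:, 0].sum()}")
# With the path label intact, each La Palma detector fires 50% of the time at
# every phi: no interference. After erasure, sorting the La Palma counts by
# the Tenerife result reveals fringes that swing with phi: the wave is back.
```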
Here’s the fun part: Zeilinger & co. used the random-number generator in Tenerife 0.5 milliseconds[footnote]In which light travels almost 150 km[/footnote] after the first photon emerged from the Mach-Zehnder interferometer in La Palma. That is, they would ‘decide’ whether one ‘bit’ of light would behave wave-like or particle-like, and the corresponding ‘bit’ would or wouldn’t produce an interference pattern – except the effect preceded the cause by 0.5 ms!
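To spell out the footnote’s arithmetic: in 0.5 ms, light covers

$$c \, t \approx (3 \times 10^{5}\ \text{km/s}) \times (5 \times 10^{-4}\ \text{s}) \approx 150\ \text{km}$$

slightly more than the 144 km separating the two sites.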
This way, Zeilinger & co. conclusively violated a Bell inequality and showed that reality is nonlocal. As Anil Ananthaswamy wrote in his book on the topic, Through Two Doors at Once (p. 179): “Language fails us at this point. Here and there, past and future don’t quite work.”
Clauser and Freedman had performed the first Bell test. Aspect et al. closed a combination of loopholes that made the rest of the world sit up and take notice. Zeilinger et al. violated another Bell inequality, sure, but instead of stopping there, they stormed past the boundaries of our comprehension, boundaries that we take for granted even today, because otherwise the world stops making sense in ways that could render us like characters in a Lovecraftian horror.
Technically, Clauser, Aspect and Zeilinger have given us the facts as they have observed them, and left us the responsibility of interpreting them in a way that allows us to unearth more insights about quantum physics without compromising our grip on reality. Many physicists are also less concerned with this interpretation than they are with employing these facts to develop computers that solve problems thus far deemed unsolvable. You decide what you prefer, now that you know where to begin.
I’m curious about the interpretation. Aspect, for example, has expressed discomfort with the conclusions of his and others’ Bell tests. Zeilinger, on the other hand, has been comfortable with what his experiment uncovered: that our reality is fundamentally indeterministic and probabilistic. It doesn’t manifest that way at the macroscopic scale, of course, because interactions with the wider environment rapidly wash out quantum effects. But the philosophy itself is arguably essential. As Aspect said in 2000:
“We must be grateful to John Bell for having shown us that philosophical questions about the nature of reality could be translated into a problem for physicists, where naïve experimentalists can contribute.”