Author: Sean Carroll
ISBN: 978-1524743017
If we can see and feel the material stuff, then it must be real, right? If we can observe the passage of time and perceive space, then they must exist, right? This book on physics will do more for your personal growth than any book about personal growth. Caution: read only if ready to change your perception of reality; having it destroyed is not the easiest thing to go through!
EXCERPTS
Scientists are trained to value tangible results, whether they are exciting experimental findings or quantitative theoretical models. The idea of working to understand a theory we already have, even if that effort might not lead to any specific new technologies or predictions, can be a tough sell. The underlying tension was illustrated in the TV show The Wire, where a group of hardworking detectives labored for months to meticulously gather evidence that would build a case against a powerful drug ring. Their bosses, meanwhile, had no patience for such incremental frivolity. They just wanted to see drugs on the table for their next press conference, and encouraged the police to bang heads and make splashy arrests. Funding agencies and hiring committees are like those bosses. In a world where all the incentives push us toward concrete, quantifiable outcomes, less pressing big-picture concerns can be pushed aside as we race toward the next immediate goal. [Same in business.]
Quantum mechanics is unique among physical theories in drawing an apparent distinction between what we see and what really is.
In particular, when we turn to understanding the nature of spacetime itself, and the origin and ultimate fate of the entire universe, the foundations of quantum mechanics are absolutely crucial.
Together, the position and velocity make up the state of any object in classical mechanics [= Newtonian mechanics/ physics]. If we have a system with multiple moving parts, the classical state of that entire system is just a list of the states of each of the individual parts.
Newtonian mechanics describes a deterministic, clockwork universe.
As far as Newtonian mechanics is concerned, the momentum is simply the particle’s mass times its velocity.
The French mathematician Pierre-Simon Laplace pointed out a profound implication of the classical mechanics way of thinking. In principle, a vast intellect could know the state of literally every object in the universe, from which it could deduce everything that would happen in the future, as well as everything that had happened in the past. [= Laplace’s demon]
Alongside Newton’s formulation of classical mechanics, the invention of quantum mechanics represents the other great revolution in the history of physics. Unlike anything that had come before, quantum theory didn’t propose a particular physical model within the basic classical framework; it discarded that framework entirely, replacing it with something profoundly different.
The fundamental new element of quantum mechanics, the thing that makes it unequivocally distinct from its classical predecessor, centers on the question of what it means to measure something about a quantum system. What exactly a measurement is, and what happens when we measure something, and what this all tells us about what’s really happening behind the scenes: together, these questions constitute what’s called the measurement problem of quantum mechanics. There is absolutely no consensus within physics or philosophy on how to solve the measurement problem!
There is no such thing as a measurement problem in classical mechanics. The state of the system is given by its position and its velocity, and if we want to measure those quantities, we simply do so.
Unlike in classical mechanics, where the state of a system is described by its position and velocity, the nature of a quantum system is something a bit less concrete. Consider an electron in its natural habitat, orbiting the nucleus of an atom. You might think, from the word “orbit” as well as from the numerous cartoon depictions of atoms you have doubtless been exposed to over the years, that the orbit of an electron is more or less like the orbit of a planet in the solar system. The electron (so you might think) has a location, and a velocity, and as time passes it zips around the central nucleus in a circle or maybe an ellipse. Quantum mechanics suggests something different. We can measure values of the location or velocity (though not at the same time), and if we are sufficiently careful and talented experimenters we will obtain some answer. But what we’re seeing through such a measurement is not the actual, complete, unvarnished state of the electron. Indeed, the particular measurement outcome we will obtain cannot be predicted with perfect confidence, in a profound departure from the ideas of classical mechanics. The best we can do is to predict the probability of seeing the electron in any particular location or with any particular velocity. The classical notion of the state of a particle, “its location and its velocity,” is therefore replaced in quantum mechanics by something utterly alien to our everyday experience: a cloud of probability. For an electron in an atom, this cloud is more dense toward the center and thins out as we get farther away. Where the cloud is thickest, the probability of seeing the electron is highest; where it is diluted almost to imperceptibility, the probability of seeing the electron is vanishingly small. This cloud is often called a wave function, because it can oscillate like a wave, as the most probable measurement outcome changes over time. We usually denote a wave function by Ψ, the Greek letter Psi. 
For every possible measurement outcome, such as the position of the particle, the wave function assigns a specific number, called the amplitude associated with that outcome. The amplitude that a particle is at some position x0, for example, would be written Ψ(x0). The probability of getting that outcome when we perform a measurement is given by the amplitude squared.
Probability of a particular outcome = |Amplitude for that outcome|²
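The Born rule above can be sketched in a few lines of Python. The amplitudes below are invented purely for illustration (the labels "x0" and "x1" are hypothetical position outcomes); the point is that probabilities are the squared magnitudes of complex amplitudes, and for a properly normalized wave function they sum to one.

```python
import math

# Hypothetical, already-normalized amplitudes for two position outcomes.
amplitudes = {
    "x0": complex(0.6, 0.0),
    "x1": complex(0.0, 0.8),
}

# Born rule: probability = |amplitude| squared.
probabilities = {x: abs(a) ** 2 for x, a in amplitudes.items()}

print(probabilities)                                   # x0 -> 0.36, x1 -> 0.64
print(math.isclose(sum(probabilities.values()), 1.0))  # normalization check
```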
The wave function is the sum total of reality, and ideas such as the position or the velocity of the electron are merely things we can measure.
When you perform a measurement, such as the position or spin of a particle, quantum mechanics says there are only certain possible results you will ever get. You can’t predict which of the results it will be, but you can calculate the probability for each allowed outcome. And after your measurement is done, the wave function collapses to a completely different function, with all of the new probability concentrated on whatever result you just got. So if you measure a quantum system, in general the best you can do is predict probabilities for various outcomes, but if you were to immediately measure the same quantity again, you will always get the same answer—the wave function has collapsed onto that outcome.
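The textbook collapse rule described above can be mimicked in a toy simulation: sample one outcome with probability given by the squared amplitude, then replace the wave function with one concentrated entirely on that result, so that an immediate repeat of the measurement gives the same answer. Outcome labels and amplitudes here are invented.

```python
import random

def measure(wave_function):
    """Sample an outcome with probability |amplitude|^2, then return
    it along with the collapsed wave function concentrated on it."""
    outcomes = list(wave_function)
    weights = [abs(a) ** 2 for a in wave_function.values()]
    result = random.choices(outcomes, weights=weights)[0]
    collapsed = {x: (1.0 if x == result else 0.0) for x in outcomes}
    return result, collapsed

# An even superposition of two hypothetical positions.
psi = {"here": complex(1 / 2 ** 0.5), "there": complex(1 / 2 ** 0.5)}

result, psi = measure(psi)
# Immediately measuring again always reproduces the first outcome:
repeat, _ = measure(psi)
print(result, repeat)
```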
What precisely do you mean by a “measurement”? How quickly does it happen? What exactly constitutes a measuring apparatus? Does it need to be human, or have some amount of consciousness, or perhaps the ability to encode information? When exactly does the measurement occur? How in the world does the wave function collapse so dramatically? If the wave function were very spread out, does the collapse happen faster than the speed of light? And what happens to all the possibilities that were seemingly allowed by the wave function but which we didn’t observe? Were they never really there? Do they just vanish into nothingness? Why do quantum systems evolve smoothly and deterministically according to the Schrödinger equation as long as we aren’t looking at them, but then dramatically collapse when we do look? How do they know, and why do they care?
To tag this distinction with philosophical buzzwords, in statistical mechanics the probability distribution is an epistemic notion—describing the state of our knowledge—rather than an ontological one—describing some objective feature of reality. Epistemology is the study of knowledge; ontology is the study of what is real.
Bell’s theorem implies that any such theory requires “action at a distance”—a measurement at one location can instantly affect the state of the universe arbitrarily far away. This seems to be in violation of the spirit if not the letter of the theory of relativity, which says that objects and influences cannot propagate faster than the speed of light.
Every version of quantum mechanics (and there are plenty) employs a wave function or something equivalent, and posits that the wave function obeys Schrödinger’s equation, at least most of the time.
The world is a wave function, nothing more nor less.
Consider an idea you will often hear: “Atoms are mostly empty space.” Utterly wrong, according to the austere quantum mechanics (AQM) way of thinking. It comes from a stubborn insistence on thinking of an electron as a tiny classical dot zipping around inside of the wave function, rather than the electron actually being the wave function. In AQM, there’s nothing zipping around; there is only the quantum state. Atoms aren’t mostly empty space; they are described by wave functions that stretch throughout the extent of the atom.
The way to break out of our classical intuition is to truly abandon the idea that the electron has some particular location. An electron is in a superposition of every possible position we could see it in, and it doesn’t snap into any one specific location until we actually observe it to be there. “Superposition” is the word physicists use to emphasize that the electron exists in a combination of all positions, with a particular amplitude for each one. Quantum reality is a wave function; classical positions and velocities are merely what we are able to observe when we probe that wave function.
So the reality of a quantum system, according to austere quantum mechanics, is described by a wave function or quantum state, which can be thought of as a superposition of every possible outcome of some observation we might want to make. How do we get from there to the annoying reality that wave functions appear to collapse when we make such measurements? All we need to know is that there is some measuring apparatus (a camera or whatever) that somehow interacts with the electron, and then lets us read off where the electron was seen. If atoms obey the rules of quantum mechanics and cameras are made of atoms, presumably cameras obey the rules of quantum mechanics too. For that matter, you and I presumably obey the rules of quantum mechanics. The fact that we are big, lumbering, macroscopic objects might make classical physics a good approximation to what we are, but our first guess should be that it’s really quantum from top to bottom. If that’s true, it’s not just the electron that has a wave function. The camera should have a wave function of its own. So should the experimenter. Everything is quantum.
Fortunately we can appeal to another startling feature of quantum mechanics: given two different objects (like an electron and a camera), they are not described by separate, individual wave functions. There is only one wave function, which describes the entire system we care about, all the way up to the “wave function of the universe” if we’re talking about the whole shebang.
This is the quantum phenomenon known as entanglement. There is a single wave function for the combined electron+camera system, consisting of a superposition of various possibilities of the form “the electron was at this location, and the camera observed it at the same location.” Rather than the electron and the camera doing their own thing, there is a connection between the two systems.
You never feel like you have evolved into a superposition of different possible measurement outcomes; you simply think you’ve seen some specific outcome, which can be predicted with a definite probability.
Certainly in the story told above, where an observer measures the position of an electron, it definitely seems as if that observer evolves into an entangled superposition of the different possible measurement outcomes. But there’s an alternative possibility. Before the measurement happened, there was one electron and one observer (or camera, if you prefer—it doesn’t matter how we think about the thing that interacts with the electron as long as it’s a big, macroscopic object). After they interact, however, rather than thinking of that one observer having evolved into a superposition of possible states, we could think of them as having evolved into multiple possible observers. The right way to describe things after the measurement, in this view, is not as one person with multiple ideas about where the electron was seen, but as multiple worlds, each of which contains a single person with a very definite idea about where the electron was seen. Here’s the big reveal: what we’ve described as austere quantum mechanics is more commonly known as the Everett, or Many-Worlds, formulation of quantum mechanics, first put forward by Hugh Everett in 1957.
This brief introduction to Many-Worlds leaves many questions unanswered. When exactly does the wave function split into many worlds? What separates the worlds from one another? How many worlds are there? Are the other worlds really “real”? How would we ever know, if we can’t observe them? (Or can we?) How does this explain the probability that we’ll end up in one world rather than another one?
All of these questions have good answers—or at least plausible ones—and much of the book to come will be devoted to answering them. But we should also admit that the whole picture might be wrong, and something very different is required.
Every version of quantum mechanics features two things: (1) a wave function, and (2) the Schrödinger equation, which governs how wave functions evolve in time. The entirety of the Everett formulation is simply the insistence that there is nothing else, that these ingredients suffice to provide a complete, empirically adequate account of the world. Any other approach to quantum mechanics consists of adding something to that bare-bones formalism, or somehow modifying what is there.
The existence of multiple incompatible theories that all lead (at least thus far) to the observable predictions of quantum mechanics creates a conundrum for anyone who wants to talk about what quantum theory really means. While the quantum recipe is agreed upon by working scientists and philosophers, the underlying reality—what any particular phenomenon actually means—is not. I am defending one particular view of that reality, the Many-Worlds version of quantum mechanics, and for most of this book I will simply be explaining things in Many-Worlds terms.
By the end of the nineteenth century, scientists had managed to distill every single one of these things down to two fundamental kinds of substances: particles and fields. Particles are point-like objects at a definite location in space, while fields (like the gravitational field) are spread throughout space, taking on a particular value at every point. When a field is oscillating across space and time, we call that a “wave.”
Quantum mechanics ultimately unified particles and fields into a single entity, the wave function. The impetus to do so came from two directions: first, physicists discovered that things they thought were waves, like the electric and magnetic fields, had particle-like properties. Then they realized that things they thought were particles, like electrons, manifested field-like properties. The reconciliation of these puzzles is that the world is fundamentally field-like (it’s a quantum wave function), but when we look at it by performing a careful measurement, it looks particle-like.
It wasn’t until the 1960s and ’70s that physicists established that protons and neutrons are also made of smaller particles, called quarks, held together by new force-carrying particles called gluons.
Chemically speaking, electrons are where it’s at. Nuclei give atoms their heft, but outside of rare radioactive decays or fission/fusion reactions, they basically go along for the ride. The orbiting electrons, on the other hand, are light and jumpy, and their tendency to move around is what makes our lives interesting. Two or more atoms can share electrons, leading to chemical bonds. Under the right conditions, electrons can change their minds about which atoms they want to be associated with, which gives us chemical reactions. Electrons can even escape their atomic captivity altogether in order to move freely through a substance, a phenomenon we call “electricity.” And when you shake an electron, it sets up a vibration in the electric and magnetic fields around it, leading to light and other forms of electromagnetic radiation.
As far as anyone can tell, electrons are truly elementary particles. You can see why discussions of quantum mechanics are constantly referring to electrons when they reach for examples—they’re the easiest fundamental particle to make and manipulate, and play a central role in the behavior of the matter of which we and our surroundings are made.
Fields can be thought of as the opposite of particles, at least in the context of classical mechanics. The defining feature of a particle is that it’s located at one point in space, and nowhere else. The defining feature of a field is that it is located everywhere. A field is something that has a value at literally every point in space. Particles need to interact with each other somehow, and they do so through the influence of fields.
If you’ve read anything about quantum mechanics before, you’ve probably heard the question “Is an electron a particle, or a wave?” The answer is: “It’s a wave, but when we look at (that is, measure) that wave, it looks like a particle.” That’s the fundamental novelty of quantum mechanics. There is only one kind of thing, the quantum wave function, but when observed under the right circumstances it appears particle-like to us.
Any particle with an electrical charge, such as an electron, creates an electric field everywhere around it, fading in magnitude as you get farther away from the charge. If we shake an electron, oscillating it up and down, the field oscillates along with it, in ripples that gradually spread out from its location. This is electromagnetic radiation, or “light” for short.
In the process, Planck was forced to posit the existence of a new fundamental parameter of nature, now known as Planck’s constant and denoted by the letter h. The amount of energy contained in a quantum of light is proportional to its frequency, and Planck’s constant is the constant of proportionality: the energy is the frequency times h. Very often it’s more convenient to use a modified version ħ, pronounced “h-bar,” which is just Planck’s original constant h divided by 2π. The appearance of Planck’s constant in an expression is a signal that quantum mechanics is at work.
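Planck's relation E = h·f is simple enough to evaluate directly. The value of h below is the exact one fixed by the SI definition; the green-light frequency is only an approximate example.

```python
import math

h = 6.62607015e-34          # Planck's constant in joule-seconds (exact SI value)
hbar = h / (2 * math.pi)    # "h-bar", h divided by 2*pi

f_green = 5.6e14            # approximate frequency of green light, in hertz
E = h * f_green             # energy of a single photon of that light
print(E)                    # on the order of 3.7e-19 joules
```

The tiny size of h is why quantum effects are invisible in everyday life: a single photon of visible light carries less than a billionth of a billionth of a joule.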
Remember that if you shake an electron, it emits light. By “shake” we just mean accelerate in some way. An electron that does anything other than move in a straight line at a constant velocity should emit light.
Every single atom in your body, and in the environment around you, should be glowing if classical mechanics were right. That means the electrons should be losing energy as they emit radiation, which in turn implies that they should spiral downward into the central nucleus. Classically, electron orbits should not be stable.
Schrödinger’s equation involves unfamiliar symbols, but its basic message is not hard to understand. De Broglie had suggested that the momentum of a wave goes up as its wavelength goes down. Schrödinger proposed a similar thing, but for energy and time: the rate at which the wave function is changing is proportional to how much energy it has.
The famous Heisenberg uncertainty principle is often explained as saying that we cannot simultaneously know both the position and the velocity of any object. But the reality is deeper than that. It’s not that we can’t know position and momentum, it’s that they don’t even exist at the same time. Only under extremely special circumstances can an object be said to have a location—when its wave function is entirely concentrated on one point in space, and zero everywhere else—and similarly for velocity. And when one of the two is precisely defined, the other could be literally anything, were we to measure it. More often, the wave function includes a spread of possibilities for both quantities, so neither has a definite value.
The absence of definite quantities at the heart of reality that map more or less straightforwardly onto what we can eventually observe is one of the deep features of quantum mechanics that can be hard to accept upon first encounter. There are quantities that are not merely unknown but do not even exist, even though we can seemingly measure them.
It’s not an assertion that “everything is uncertain.” Either position or momentum could be certain in an appropriate quantum state; they just can’t be certain at the same time.
The uncertainty principle is a statement about the nature of quantum states and their relationship to observable quantities, not a statement about the physical act of measurement.
The idea that quantum mechanics violates logic lives in the same neighborhood as the idea that atoms are mostly empty space (a bad neighborhood).
The spin of a particle = a degree of freedom in addition to its position or momentum.
The notion of spin itself isn’t hard to grasp: it’s just rotation around an axis, as the Earth does every day or a pirouetting ballet dancer does on their tiptoes. But just like the energies of an electron orbiting an atomic nucleus, in quantum mechanics there are only certain discrete results we can obtain when we measure a particle’s spin.
For an electron, for example, there are two possible measurement outcomes for spin. First pick an axis with respect to which we measure the spin. We always find that the electron is spinning either clockwise or counterclockwise when we look along that axis, and always at the same rate. These are conventionally referred to as “spin-up” and “spin-down.”
What we’re bumping up against is another manifestation of the uncertainty principle. The lesson we learned was that “position” and “momentum” aren’t properties that an electron has; they are just things we can measure about it. In particular, no particle can have a definite value of both simultaneously. Once we specify the exact wave function for position, the probability of observing any particular momentum is entirely fixed, and vice versa. The same is true for “vertical spin” and “horizontal spin.” These are not separate properties an electron can have; they are just different quantities we can measure. If we express the quantum state in terms of the vertical spin, the probability of observing left or right horizontal spin is entirely fixed. The measurement outcomes we can get are determined by the underlying quantum state, which can be expressed in different but equivalent ways. The uncertainty principle expresses the fact that there are different incompatible measurements we can make on any particular quantum state.
Hilbert space is infinite-dimensional for the position of a single particle. That’s why qubits are so much easier to think about. Two dimensions are easier to visualize than infinite dimensions.
So from Pythagoras’s theorem, we have a simple relationship: the squares of the amplitudes add up to unity, |a|^2 + |b|^2 = 1.
For a single spin, the uncertainty principle says that the state can’t have a definite value for the spin along the original axes (up/down) and the rotated axes (right/left) at the same time. Just as there are no quantum states that are simultaneously localized in position and momentum, there are no states that are simultaneously localized in both vertical spin and horizontal spin. The uncertainty principle reflects the relationship between what really exists (quantum states) and what we can measure (one observable at a time).
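The incompatibility of vertical and horizontal spin can be made concrete with the standard basis change between the two: a state with definite vertical spin, re-expressed in the horizontal (right/left) basis, necessarily has 50/50 horizontal-spin probabilities. The sketch below uses the conventional convention right = (up + down)/√2, left = (up − down)/√2.

```python
import math

s = 1 / math.sqrt(2)

def horizontal_amplitudes(up, down):
    """Re-express vertical-basis (up, down) amplitudes in the
    horizontal right/left basis, using the standard convention."""
    right = s * (up + down)
    left = s * (up - down)
    return right, left

# A state with definite vertical spin: spin-up with certainty.
right, left = horizontal_amplitudes(1.0, 0.0)

# Horizontal-spin probabilities are completely fixed at 50/50:
print(abs(right) ** 2, abs(left) ** 2)
```

This is the uncertainty principle for spin in miniature: specifying the vertical-spin amplitudes leaves no freedom at all in the horizontal-spin probabilities.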
What happens at one point in space can seemingly have immediate consequences for experiments done very far away.
One of the most profound features of the quantum world: the phenomenon of entanglement. Entanglement arises because there is only one wave function for the entire universe, not separate wave functions for each piece of it.
Classically, if we were given the initial positions and velocities of the electrons, we could calculate precisely the directions into which each of them would scatter. Quantum-mechanically, all we can do is calculate the probability that they will each be observed on various paths after they interact with each other.
When we actually do this experiment, and observe the electrons after they have scattered, we notice something important. Since the electrons initially had equal and opposite velocities, the total momentum was zero. And momentum is conserved, so the post-interaction momentum should also be zero. This means that while the electrons might emerge moving in various different directions, whatever direction one of them moves in, the other moves in precisely the opposite. How could it know that it’s supposed to be moving in the opposite direction when we actually do measure it? The two electrons don’t have separate wave functions; their behavior is described by the single wave function of the universe. The predictions we make for observations of either one can be dramatically affected by the outcome of observations of the other. The electrons are entangled.
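The momentum-conserving entanglement described above can be modeled as a joint wave function whose amplitude is nonzero only on configurations where the two electrons move in opposite directions. The direction labels and equal amplitudes below are invented for illustration; the point is that sampling a joint outcome always yields perfectly anti-correlated directions.

```python
import math
import random

directions = ["north", "east", "south", "west"]
opposite = {"north": "south", "south": "north",
            "east": "west", "west": "east"}

# Joint amplitudes: equal weight on each (d, opposite of d) pair,
# zero amplitude everywhere else (momentum conservation).
amp = 1 / math.sqrt(len(directions))
joint = {(d, opposite[d]): amp for d in directions}

# Sample one joint measurement outcome with probability |amplitude|^2.
pairs = list(joint)
weights = [abs(a) ** 2 for a in joint.values()]
d1, d2 = random.choices(pairs, weights=weights)[0]

print(d1, d2)   # the second direction is always opposite the first
```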
A wave function is an assignment of a complex number, the amplitude, to each possible observational outcome, and the square of the amplitude equals the probability that we would observe that outcome were we to make that measurement. When we’re talking about more than one particle, that means we assign an amplitude to every possible outcome of observing all the particles at once. If what we’re observing is positions, for example, the wave function of the universe can be thought of as assigning an amplitude to every possible combination of positions for all the particles in the universe.
Because of the finite speed of light and a finite time since the Big Bang, we can see only a finite region of the cosmos, which we label “the observable universe.” There are approximately 10^88 particles in the observable universe, mostly photons and neutrinos. That is a number much greater than two. And each particle is located in three-dimensional space, not just a one-dimensional line. How in the world are we supposed to visualize a wave function that assigns an amplitude to every possible configuration of 10^88 particles distributed through three-dimensional space? We’re not. Sorry. The human imagination wasn’t designed to visualize the enormously big mathematical spaces that are routinely used in quantum mechanics.
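The size of the problem is easy to quantify even if it is impossible to visualize: a wave function for N particles in three dimensions lives on a 3N-dimensional configuration space. Using the book's estimate of 10^88 particles:

```python
# Each particle contributes three position coordinates, so the
# configuration space of N particles has 3N dimensions.
N = 10 ** 88        # particles in the observable universe (book's estimate)
dimensions = 3 * N

# Even a crude discretization -- say 10 grid points per dimension --
# would require 10 ** (3 * N) amplitudes, a number with 3e88 digits.
print(dimensions)   # 3 followed by 88 zeros
```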
The entanglement between two particles doesn’t fade away as they are moved apart; as long as neither Alice nor Bob measures the spins of their qubits, the overall quantum state will remain the same.
Thirty years earlier, Einstein had established the rules of the special theory of relativity, which says among other things that signals cannot travel faster than the speed of light. And yet here we’re saying that according to quantum mechanics, a measurement that Alice does here and now has an immediate effect on Bob’s qubit, even though it’s four light-years away. How does Bob’s qubit know that Alice’s has been measured, and what the outcome was? This is the “spooky action at a distance” that Einstein so memorably fretted about.
The first thing you might wonder about, upon being informed that quantum mechanics apparently sends influences faster than the speed of light, is whether or not we could take advantage of this phenomenon to communicate instantly across large distances. Can we build a quantum-entanglement phone, for which the speed of light is not a limitation at all? No, we can’t. This is pretty clear in our simple example: if Alice measures spin-up, she instantly knows that Bob will also measure spin-up when he gets around to it. But Bob doesn’t know that. In order for him to know what the spin of his particle is, Alice has to send him her measurement result by conventional means—which are limited by the speed of light.
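The no-signaling argument can be checked directly on the entangled state: Bob's own outcome probabilities (his "marginal" distribution) come from summing squared amplitudes over Alice's possible results, and they are 50/50 no matter what Alice does. The state below is the standard perfectly correlated pair, written with invented labels.

```python
import math

s = 1 / math.sqrt(2)
# Entangled pair: (up, up) and (down, down) with equal amplitude,
# i.e. Alice and Bob always agree when they measure.
joint = {("up", "up"): s, ("down", "down"): s}

# Bob's marginal distribution: sum |amplitude|^2 over Alice's outcomes.
bob = {}
for (alice_outcome, bob_outcome), a in joint.items():
    bob[bob_outcome] = bob.get(bob_outcome, 0.0) + abs(a) ** 2

print(bob)   # 50/50 -- nothing Alice does shows up in Bob's statistics
```

Until Alice's result arrives by light-speed-limited channels, Bob's measurements look like fair coin flips, which is why entanglement cannot carry a message.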
So quantum mechanics seems to be exploiting a subtle loophole, violating the spirit of relativity (nothing travels faster than the speed of light) while obeying the letter of the law (actual physical particles, and whatever useful information they might convey, cannot travel faster than the speed of light).
What is real, Bohr seems to suggest, depends not only on what we measure, but on how we choose to measure it.
There are two assumptions behind Bell’s theorem in particular that one might want to doubt. One is contained in the simple idea that Bob “decides” to measure the spin of his qubit along a certain axis. An element of human choice, or free will, seems to have crept into our theorem about quantum mechanics. That’s hardly unique, of course; scientists are always assuming that they can choose to measure whatever they want. But really we think that’s just a convenient way of talking, and even those scientists are composed of particles and forces that themselves obey the laws of physics. So we can imagine invoking superdeterminism—the idea that the true laws of physics are utterly deterministic (no randomness anywhere), and furthermore that the initial conditions of the universe were laid down at the Big Bang in just precisely such a way that certain “choices” are never going to be made.
That doesn’t mean that Bell’s theorem is wrong in Many-Worlds; mathematical theorems are unambiguously right, given their assumptions. It just means that the theorem doesn’t apply. Bell’s result does not imply that we have to include spooky action at a distance in Everettian quantum mechanics, as it does for boring old single-world theories. The correlations don’t come about because of any kind of influence being transmitted faster than light, but because of branching of the wave function into different worlds, in which correlated things happen.
The rest of physics—matter, electromagnetism, the nuclear forces—seems to fit comfortably within the framework of quantum mechanics. But gravity was (and remains) a stubborn exception.
But what happens when the quantum system under consideration is the entire universe? Crucial to the Copenhagen approach is the distinction between the quantum system being measured and the classical observer doing the measuring. If the system is the universe as a whole, we are all inside it; there’s no external observer to whom we can appeal.
Clearly, Everett reasoned, if we’re going to talk about the universe in quantum terms, we can’t carve out a separate classical realm. Every part of the universe will have to be treated according to the rules of quantum mechanics, including the observers within it. There will only be a single quantum state, described by what Everett called the “universal wave function” (and we’ve been calling “the wave function of the universe”). If everything is quantum, and the universe is described by a single wave function, how is measurement supposed to occur? It must be, Everett reasoned, when one part of the universe interacts with another part of the universe in some appropriate way. That is something that’s going to happen automatically, he noticed, simply due to the evolution of the universal wave function according to the Schrödinger equation. We don’t need to invoke any special rules for measurement at all; things bump into each other all the time. It’s for this reason that Everett titled his eventual paper on the subject “‘Relative State’ Formulation of Quantum Mechanics.” As a measurement apparatus interacts with a quantum system, the two become entangled with each other. There are no wave-function collapses or classical realms. The apparatus itself evolves into a superposition, entangled with the state of the thing being observed. The apparently definite measurement outcome (“the electron is spin-up”) is only relative to a particular state of the apparatus (“I measured the electron to be spin-up”). The other possible measurement outcomes still exist and are perfectly real, just as separate worlds.
This is the secret to Everettian quantum mechanics. The Schrödinger equation says that an accurate measuring apparatus will evolve into a macroscopic superposition, which we will ultimately interpret as branching into separate worlds. We didn’t put the worlds in; they were always there, and the Schrödinger equation inevitably brings them to life. The problem is that we never seem to come across superpositions involving big macroscopic objects in our experience of the world.
To the modern Everettian, decoherence is absolutely crucial to making sense of quantum mechanics. It explains once and for all why wave functions seem to collapse when you measure quantum systems—and indeed what a “measurement” really is. We know there is only one wave function, the wave function of the universe. But when we’re talking about individual microscopic particles, they can settle into quantum states where they are unentangled from the rest of the world. In that case, we can sensibly talk about “the wave function of this particular electron” and so forth, keeping in mind that it’s really just a useful shortcut we can employ when systems are unentangled with anything else.
We don’t (and generally can’t) keep track of exactly what’s going on in the environment—it’s too complicated. It’s not going to just be a single photon that interacts differently with different parts of the apparatus’s wave function, it will be a huge number of them. That simple process—macroscopic objects become entangled with the environment, which we cannot keep track of—is decoherence, and it comes with universe-altering consequences. Decoherence causes the wave function to split, or branch, into multiple worlds.
Any observer branches into multiple copies along with the rest of the universe. After branching, each copy of the original observer finds themselves in a world with some particular measurement outcome. To them, the wave function seems to have collapsed. We know better; the collapse is only apparent, due to decoherence splitting the wave function. We don’t know how often branching happens, or even whether that’s a sensible question to ask. It depends on whether there are a finite or infinite number of degrees of freedom in the universe, which is currently an unanswered question in fundamental physics. But we do know that there’s a lot of branching going on; it happens every time a quantum system in a superposition becomes entangled with the environment. In a typical human body, about 5,000 atoms undergo radioactive decay every second. If every decay branches the wave function in two, that’s 2^5,000 new branches every second. It’s a lot.
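The size of that number is easy to underestimate. If each of the 5,000 decays independently doubles the branch count, one second multiplies the number of branches by 2^5,000. A quick sketch (the 5,000-decays-per-second figure is the book's estimate):

```python
# Back-of-envelope check of the branching estimate in the text.
# Assumption (from the text): ~5,000 radioactive decays per second
# in a typical human body, each splitting the wave function in two.
decays_per_second = 5_000

# Each decay doubles the number of branches, so one second of decays
# multiplies the branch count by 2^5000.
branches = 2 ** decays_per_second

# 2^5000 is astronomically large; count its decimal digits.
digits = len(str(branches))
print(digits)  # a number with about 1,500 decimal digits
```

For comparison, the number of atoms in the observable universe has fewer than 100 digits.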
What makes a “world,” anyway? One thing you would like to have in a world is that different parts of it can, at least in principle, affect each other.
In order to get interference, we need to be adding up two equal and opposite quantities: 1 + (-1) = 0. When we say equal and opposite, we mean precisely equal and opposite, not “equal and opposite except for that thing we’re entangled with.” Being entangled with different states of the detector and environment—being decohered, in other words—means that the two parts of the electron’s wave function can no longer interfere with each other. And that means they can’t interact at all. And that means they are, for all intents and purposes, part of separate worlds. [Separate worlds cannot interact with each other.]
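The point about "precisely equal and opposite" can be made quantitative. In the sketch below (illustrative amplitudes, not from the text), two equal-and-opposite amplitudes cancel completely when nothing else is entangled with them, but once each is tagged by an orthogonal environment state the cross term picks up a factor of ⟨e1|e2⟩ = 0 and the interference vanishes:

```python
# Why decoherence kills interference: a minimal sketch with
# illustrative amplitudes (not taken from the text).
import math

a = 1 / math.sqrt(2)
amp_up, amp_down = a, -a   # equal and opposite, as in 1 + (-1) = 0

# Unentangled: add the amplitudes first, then square.
prob_unentangled = abs(amp_up + amp_down) ** 2   # total destructive interference

# Entangled with orthogonal environment states |e1>, |e2>:
# the interference (cross) term is multiplied by <e1|e2> = 0.
e1 = [1, 0]                # environment having recorded "up"
e2 = [0, 1]                # environment having recorded "down"
overlap = sum(x * y for x, y in zip(e1, e2))     # <e1|e2> = 0
cross = 2 * amp_up * amp_down * overlap
prob_decohered = amp_up**2 + amp_down**2 + cross

print(prob_unentangled, prob_decohered)  # ~0.0 and ~1.0
```

With the environment in the picture, the two parts of the wave function simply add their probabilities; nothing can cancel, which is what "separate worlds" amounts to.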
And there’s nothing special about what constitutes “a measurement” or “an observer”—a measurement is any interaction that causes a quantum system to become entangled with the environment, creating decoherence and a branching into separate worlds, and an observer is any system that brings such an interaction about. Consciousness, in particular, has nothing to do with it. The “observer” could be an earthworm, a microscope, or a rock. There’s not even anything special about macroscopic systems, other than the fact that they can’t help but interact and become entangled with the environment. The price we pay for such powerful and simple unification of quantum dynamics is a large number of separate worlds.
A major puzzle for Many-Worlds: the origin and nature of probability. The Schrödinger equation is perfectly deterministic. Why do probabilities enter at all, and why do they obey the Born rule, which says that the probability of an outcome equals the square of its amplitude (the complex number the wave function associates with that outcome)? Does it even make sense to speak of the probability of ending up on some particular branch if there will be a future version of myself on every branch?
Probability in Many-Worlds is necessarily a statement about what we should believe and how we should act, not about how often things happen.
Textbook quantum mechanics says that we have a 50 percent chance of the wave function collapsing to spin-up, and a 50 percent chance of it collapsing to spin-down. Many-Worlds, on the other hand, says there is a 100 percent chance of the wave function of the universe evolving from one world into two. True, in one of those worlds the experimenter will have seen spin-up and in the other they will have seen spin-down. But both worlds are indisputably there. If the question we’re asking is “What is the chance I will end up being the experimenter on the spin-up branch of the wave function?,” there doesn’t seem to be any answer. You will not be one or the other of those experimenters; your current single self will evolve, with certainty, into both of them. How are we supposed to talk about probabilities in such a situation? It’s a good question. To answer it, we have to get a bit philosophical, and think about what “probability” really means.
If Everett is right, there is a 100 percent probability that each possibility is realized in some particular world.
Quantum mechanics suggests that we’re going to have to modify this story somewhat. When a spin is measured, the wave function branches via decoherence, a single world splits into two, and there are now two people where I used to be just one. It makes no sense to ask which one is “really me.” Likewise, before the branching happens, it makes no sense to wonder which branch “I” will end up in. Both of them have every right to think of themselves as “me.” Every one of those people has a reasonable claim to being “you.” None of them is wrong. Each of them is a separate person, all of whom trace their beginnings back to the same person.
But there is something that the actual people on these branches don’t know: which branch they’re on. This state of affairs, first emphasized in the quantum context by physicist Lev Vaidman, is called self-locating uncertainty—you know everything there is to know about the universe, except where you are within it.
Weight of a branch = |Amplitude of that branch|^2. When there are two branches with unequal amplitudes, we say that there are only two worlds, but they don’t have equal weight; the one with higher amplitude counts for more. The weights of all the branches of any particular wave function always add up to one. And when one branch splits into two, we don’t simply “make more universe” by duplicating the existing one; the total weight of the two new worlds is equal to that of the single world we started with, and the overall weight stays the same. Worlds get thinner as branching proceeds.
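The bookkeeping here is simple enough to check directly. In this sketch (illustrative amplitudes, not from the text), an unequal two-way branching and a subsequent 50/50 split both leave the total weight equal to one:

```python
# Branch weights in Many-Worlds: weight = |amplitude|^2.
# The amplitudes below are illustrative numbers, not from the text.
import math

# One world of amplitude 1 branches into two unequal branches,
# say amplitudes sqrt(0.7) and sqrt(0.3).
amps = [math.sqrt(0.7), math.sqrt(0.3)]
weights = [a ** 2 for a in amps]
print(sum(weights))  # ~1.0 -- total weight is conserved

# Now the first branch splits again, 50/50: its amplitude is
# divided by sqrt(2) on each of the two new branches.
a0 = amps[0]
amps = [a0 / math.sqrt(2), a0 / math.sqrt(2), amps[1]]
weights = [a ** 2 for a in amps]
print(sum(weights))  # still ~1.0 -- each world just got "thinner"
```

No new weight is ever created; branching only subdivides what is already there, which is the precise sense in which worlds "get thinner."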
The difference between theories such as Einstein’s general relativity, which made definite empirical predictions for the bending of light by the sun, and those like Marxist history or Freudian psychoanalysis is that with the latter ideas, no matter what actually happened, you could cook up a story to explain why it was so.
If a system is completely unentangled with anything else, we can safely talk about its wave function in isolation from the rest of the world. But when it is entangled, the individual wave function is undefined, and we can only talk about the wave function for the combined system.
The more entangled something is with the rest of the world, the higher its entropy.
The low entropy of the early universe corresponds to the idea that there were many unentangled subsystems back then. As they interact with each other and become entangled, we see that as branching of the wave function.
“Does that mean that it’s possible that decoherence will someday reverse, and worlds actually will fuse together rather than branching apart?” “Absolutely,” said Alice with a nod. “But just like with entropy, the chance of that happening is so preposterously small that it’s irrelevant to our daily lives, or to any experiment in the history of physics. It’s extremely unlikely that two macroscopically distinct configurations have recohered even once in the lifetime of our universe.” [But it's possible!]
The separate branches of the wave function aren’t put in as part of the basic architecture of the theory. It’s just extraordinarily convenient for us human beings to think of a superposition of many such worlds, rather than treating the quantum state as an undifferentiated abstraction.
David Wallace: ‘Asking how many worlds there are is like asking how many experiences you had yesterday, or how many regrets a repentant criminal has had. It makes perfect sense to say that you had many experiences or that he had many regrets; it makes perfect sense to list the most important categories of either; but it is a non-question to ask how many.’
Infinity sounds like a big number, but we use infinite quantities in physics all the time. The number of real numbers between 0 and 1 is infinite, as you know.
“The real world isn’t a bunch of particles, nor is it even described by quantum field theory.” “It’s not?” said her father in mock dismay. “What have I been doing all my life?” “You’ve been ignoring gravity,” replied Alice, “which is a perfectly sensible thing to do while you’re thinking about particle physics. But there are indications from quantum gravity that the number of distinct possible quantum states is finite, not infinite. If that’s true, there is a maximum number of worlds we could sensibly talk about, given by the dimensionality of Hilbert space. The kinds of estimates that get thrown around for the number of dimensions of the Hilbert space of our observable universe are things like 2^10^122.
“Are we sure there’s enough room in Hilbert space for all the branches of the wave function that are being produced as the universe evolves?” “Hmm, I never thought about that, to be honest.” Alice grabbed a napkin and started scribbling some numbers on it. “Let’s see, there are about 10^88 particles within our observable universe, mostly photons and neutrinos. For the most part these particles travel peacefully through space, not interacting or becoming entangled with anything. So as a generous overestimate, let’s imagine that every particle in the universe interacts and splits the wave function in two a million times per second, and has been doing so since the Big Bang, which was about 10^18 seconds ago. That’s 10^88 × 10^6 × 10^18 = 10^112 splittings, producing a total number of branches of 2^10^112.” “Nice!” Alice seemed pleased with herself. “That’s still a really big number, but it’s much smaller than the number of dimensions in the Hilbert space of the universe. Pitifully smaller, really. And it should be a safe overestimate of the number of branches required. So even if the question of how many branches there are doesn’t have a definite answer, we don’t need to worry that Hilbert space is going to run out of room.”
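Alice's napkin arithmetic can be redone in code. The inputs (10^88 particles, 10^6 splittings per second, 10^18 seconds) all come from the text; the branch count works out to 2^(10^112), versus a Hilbert-space dimension of order 2^(10^122). The numbers themselves are far too large to construct, so we compare their base-2 exponents:

```python
# Alice's napkin estimate, redone in code. All inputs come from
# the text: 10^88 particles, 10^6 splittings per second per particle,
# and 10^18 seconds since the Big Bang.
splittings = 10**88 * 10**6 * 10**18   # = 10^112

# Branches produced: 2^(10^112). Hilbert-space dimension: 2^(10^122).
# Both are far too large to construct, so compare log2 exponents.
log2_branches = 10**112
log2_hilbert_dim = 10**122

# The Hilbert-space exponent is 10^10 times larger, so the number of
# branches falls "pitifully" short of the available dimensions.
print(log2_hilbert_dim // log2_branches)  # 10^10
```

Taking a ratio of exponents understates the gap, if anything: the dimension itself exceeds the branch count by a factor of 2 raised to nearly 10^122.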
Many-Worlds doesn’t say ‘everything possible happens’; it says ‘the wave function evolves according to the Schrödinger equation.’ Some things don’t happen, because the Schrödinger equation never leads to them happening. For example, we will never see an electron spontaneously convert into a proton. That would change the amount of electric charge, and charge is strictly conserved. So branching will never create, for example, universes with more or less charge than we started with. Just because many things happen in Everettian quantum mechanics doesn’t mean that everything does.”
If you accept how Everettians derive the Born rule, you should act as if there is a probability of you tunneling through the wall, and that probability is so preposterously small that there’s no reason whatsoever to take it into consideration as you go through your everyday life.
The very phenomenon of ‘branching’ is one that we humans invent to provide a convenient description of a complicated wave function, and whether we think of branching as happening all at once or as spreading out from a point depends on what’s more convenient for the situation.
Can we describe branching as a local process, proceeding only inside the future light cone of an event?’ The answer is ‘Yes, but we can equally well describe it as a nonlocal process, occurring instantly throughout the universe.
Where are these other worlds located, anyway? The branches aren’t ‘located’ anywhere. If you’re stuck thinking of things as having locations in space, it might seem natural to ask about where the other worlds are. But there is no ‘place’ where those branches are hiding; they simply exist simultaneously, along with our own, effectively out of contact with it. I suppose they exist in Hilbert space, but that’s not really a ‘place.’
What about conservation of energy? Where does all that stuff come from when you suddenly create a whole new universe?” “Well,” replied Alice, “just think about ordinary textbook quantum mechanics. Given a quantum state, we can calculate the total energy it describes. As long as the wave function evolves strictly according to the Schrödinger equation, that energy is exactly conserved, right?” “Sure.” “That’s it. In Many-Worlds, the wave function obeys the Schrödinger equation, which conserves energy.” “But what about the extra worlds?” her father insisted. “I could measure the energy contained in this world I see around me, and you say it’s being duplicated all the time.” “Not all worlds are created equal. Think about the wave function. When it describes multiple branched worlds, we can calculate the total amount of energy by adding up the amount of energy in each world, times the weight (the amplitude squared) for that world. When one world divides in two, the energy in each world is basically the same as it previously was in the single world (as far as anyone living inside is concerned), but their contributions to the total energy of the wave function of the universe have divided in half, since their amplitudes have decreased. Each world got a bit thinner, although its inhabitants can’t tell any difference.”
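Alice's accounting can be written out explicitly. In the sketch below (E is an illustrative number, not from the text), the total energy of the wave function is the sum over branches of weight times in-branch energy, and a 50/50 split leaves that total untouched:

```python
# Energy bookkeeping across a branching event, as described above:
# total = sum over branches of (weight x energy measured in that branch).
# E is an illustrative number, not from the text.
E = 100.0                  # energy inhabitants measure inside each world

# Before branching: one world, amplitude 1, weight |1|^2 = 1.
total_before = 1.0 * E

# After branching: two worlds with amplitude 1/sqrt(2) each,
# so each has weight |1/sqrt(2)|^2 = 1/2. Inhabitants of each
# world still measure energy E, but each world now contributes
# only half of the universal total.
w = 0.5
total_after = w * E + w * E

print(total_before, total_after)  # 100.0 100.0 -- conserved
```

Nothing is duplicated: each branch's contribution shrinks by exactly the factor its weight shrinks, so the Schrödinger-equation total never changes.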
Despite the unrivaled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension, and even anger.
Our mission is to understand things, isn’t it?
Whatever your feelings might be about Many-Worlds, its simplicity provides a good starting point for considering alternatives.
In Bohmian mechanics this ambiguity is not mysterious at all: there are both particles and waves. The particles are what we observe; the wave function affects their motion, but we have no way of measuring it directly.
In contemplating ways to eliminate the many worlds implied by a bare-bones version of the underlying quantum formalism, we have explored chopping off the worlds by a random event (GRW) or reaching some kind of threshold (Penrose) or picking out particular worlds as real by adding additional variables (de Broglie–Bohm). What’s left? The problem is that the appearance of multiple branches of the wave function is automatic once we believe in wave functions and the Schrödinger equation. So the alternatives we have considered thus far either eliminate those branches or posit something that picks out one of them as special. A third way suggests itself: deny the reality of the wave function entirely. By this we don’t mean to deny the central importance of wave functions in quantum mechanics. Rather, we can use wave functions, but we might not claim that they represent part of reality. They might simply characterize our knowledge; in particular, the incomplete knowledge we have about the outcome of future quantum measurements. This is known as the “epistemic” approach to quantum mechanics, as it thinks of wave functions as capturing something about what we know, as opposed to “ontological” approaches that treat the wave function as describing objective reality.
There have been many attempts to interpret the wave function epistemically, just as there are competing collapse models or hidden-variable theories. One of the most prominent is Quantum Bayesianism, typically shortened to QBism and pronounced “cubism.” Bayesian inference suggests that we all carry around with us a set of credences for various propositions to be true or false, and update those credences when new information comes in. All versions of quantum mechanics (and indeed all scientific theories) use Bayes’s theorem in some version or another, and in many approaches to understanding quantum probability it plays a crucial role. QBism is distinguished by making our quantum credences personal, rather than universal. According to QBism, the wave function of an electron isn’t a once-and-for-all thing that everyone could, in principle, agree on. Rather, everyone has their own idea of what the electron’s wave function is, and uses that idea to make predictions about observational outcomes. If we do many experiments and talk to one another about what we’ve observed, QBists claim, we will come to a degree of consensus about what the various wave functions are. But they are fundamentally measures of our personal belief, not objective features of the world. When we see an electron deflected upward in a Stern-Gerlach magnetic field, the world doesn’t change, but we’ve learned something new about it. There is one immediate and undeniable advantage of such a philosophy: if the wave function isn’t a physical thing, there’s no need to fret about it “collapsing,” even if that collapse is purportedly nonlocal. If Alice and Bob possess two particles that are entangled with each other and Alice makes a measurement, according to the ordinary rules of quantum mechanics the state of Bob’s particle changes instantaneously. 
QBism reassures us that we needn’t worry about that, as there is no such thing as “the state of Bob’s particle.” What changed was the wave function that Alice carries around with her to make predictions: it was updated using a suitably quantum version of Bayes’s theorem. Bob’s wave function didn’t change at all. QBism arranges the rules of the game so that when Bob does get around to measuring his particle, the outcome will agree with the prediction we would make on the basis of Alice’s measurement outcome. But there is no need along the way to imagine that any physical quantity changed over at Bob’s location. All that changes are different people’s states of knowledge, which after all are localized in their heads, not spread through all space. [The theory I want to believe in. It allows for consciousness ?]
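The updating QBism builds on is ordinary Bayesian inference. A classical sketch of the mechanics (the prior and detector reliability below are hypothetical numbers, not anything from the text):

```python
# Classical Bayes updating, the engine behind QBist credence revision.
# Hypothetical numbers: prior credence 0.5 that the spin is "up",
# and a detector that reports the true result 90% of the time.
p_up = 0.5
p_report_up_given_up = 0.9
p_report_up_given_down = 0.1

# The detector reports "up": update the credence via Bayes' theorem,
#   P(up | report) = P(report | up) P(up) / P(report).
p_report_up = (p_up * p_report_up_given_up
               + (1 - p_up) * p_report_up_given_down)
posterior = p_up * p_report_up_given_up / p_report_up

print(posterior)  # 0.9
```

On the QBist reading, nothing physical changed anywhere; only the agent's credence moved from 0.5 to 0.9. (The full quantum version replaces this with a quantum analogue of Bayes's theorem, but the spirit is the same.)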
What is reality supposed to be in this view? (Abraham Pais recalled that Einstein once asked him whether he “really believed that the moon exists only when I look at it.”) The answer is not clear. Imagine that we send an electron through a Stern-Gerlach magnet, but we choose not to look at whether it’s deflected up or down. For an Everettian, it is nevertheless the case that decoherence and branching has occurred, and there is a fact of the matter about which branch any particular copy of ourselves is on. The QBist says something very different: there is no such thing as whether the spin was deflected up or down. All we have is our degrees of belief about what we will see when we eventually decide to look. There is no spoon, as Neo learned in The Matrix. Fretting about the “reality” of what’s going on before we look, in this view, is a mistake that leads to all sorts of confusion.
QBists have chosen not to dwell too much on the questions concerning the nature of reality about which the rest of us care so much. The fundamental ingredients of the theory are a set of agents, who have beliefs, and accumulate experiences. Quantum mechanics, in this view, is a way for agents to organize their beliefs and update them in the light of new experiences. The idea of an agent is absolutely central; this is in stark contrast to the other formulations of quantum theory that we’ve been discussing, according to which observers are just physical systems like anything else. Sometimes QBists will talk about reality as something that comes into existence as we make observations. Mermin has written, “There is indeed a common external world in addition to the many distinct individual personal external worlds. But that common world must be understood at the foundational level to be a mutual construction that all of us have put together from our distinct private experiences, using our most powerful human invention: language.” The idea is not that there is no reality, but that reality is more than can be captured by any seemingly objective third-person perspective. Fuchs has dubbed this view Participatory Realism: reality is the emerging totality of what different observers experience.
So far we’ve treated branching of the wave function as something that happens independently of ourselves, so that we simply have to go along for the ride. It’s worth asking whether that’s the proper perspective. Whenever I make a decision, are different worlds created where I chose different things? Are there realities out there corresponding to every series of alternative choices I could have made, universes that actualize all the possibilities of my life? The idea of “making a decision” isn’t something inscribed in the fundamental laws of physics. It’s one of those useful, approximate, emergent notions that we find convenient to invoke when describing human-scale phenomena. What you and I label “making a decision” is a set of neurochemical processes happening in our brain. It’s perfectly okay to talk about making decisions, but it’s not something over and above ordinary material stuff obeying the laws of physics. So the question is, do the physical processes going on in your brain when you make a decision cause the wave function of the universe to branch, with different decisions being made in each branch? If I’m playing poker and lose all my chips after making an ill-timed bluff, can I take solace in the idea that there is another branch where I played more conservatively? No, you do not cause the wave function to branch by making a decision. In large part that’s just due to what we mean (or ought to mean) by something “causing” something else. Branching is the result of a microscopic process amplified to macroscopic scales: a system in a quantum superposition becomes entangled with a larger system, which then becomes entangled with the environment, leading to decoherence. A decision, on the other hand, is a purely macroscopic phenomenon. There are no decisions being made by the electrons and atoms inside your brain; they’re just obeying the laws of physics. 
Decisions and choices and their consequences are useful concepts when we are talking about things at the macroscopic, human-size level. It’s perfectly okay to think of choices as really existing and having influences, as long as we confine such talk to the regime in which they apply. We can choose, in other words, to talk about a person as a bunch of particles obeying Schrödinger’s equation, or we can equally well talk about them as an agent with volition who makes decisions that affect the world. But we can’t use both descriptions at once. Your decisions don’t cause the wave function to branch, because “the wave function branching” is a relevant concept at the level of fundamental physics, and “your decisions” is a relevant concept at the everyday macroscopic level of people. So there is no sense in which your decisions cause branching. But we can still ask whether there are other branches where you made different decisions. And indeed there might be, but the right way to think about the causality is “some microscopic process happened that caused branching, and on different branches you ended up making different decisions,” rather than “you made a decision, which caused the wave function of the universe to branch.” For the most part, however, when you do make a decision—even one that seems like a close call at the time—almost all of the weight will be concentrated on a single branch, not spread equally over many alternatives.
The neurons in our brains are cells consisting of a central body and a number of appendages. Most of those appendages are dendrites, which take in signals from surrounding neurons, but one of them is the axon, a longer fiber down which outgoing signals are sent. Charged molecules (ions) build up in the neuron until they reach a point where an electrochemical pulse is triggered, traveling down the axon and across synapses to the dendrites of other neurons. Combine many such events, and we have the makings of a “thought.” (We’re glossing over some complications here; hopefully neuroscientists will forgive me.) For the most part, these processes can be thought of as being purely classical, or at least deterministic. Quantum mechanics plays a role at some level in any chemical reaction, since it’s quantum mechanics that sets the rules for how electrons want to jump from one atom to another or bind two atoms together. But when you get enough atoms together in one place, their net behavior can be described without any reference to quantum concepts like entanglement or the Born rule—otherwise you wouldn’t have been able to take a chemistry class in high school without first learning the Schrödinger equation and worrying about the measurement problem. So “decisions” are best thought of as classical events, not quantum ones. While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain. We’re not absolutely sure about the extent to which this is true, since there’s still a lot we don’t know about the physical processes behind thinking. It’s possible that the rates of neurologically important chemical reactions can vary slightly depending on the entanglement between the different atoms involved. If that turns out to be true, there would be a sense in which your brain is a quantum computer, albeit a limited one.
At the same time, an honest Everettian admits that there will always be branches of the wave function on which quantum systems appear to have done very unlikely things. As Alice mentioned in Chapter Eight, there will be branches where I run into a wall and happen to tunnel through it, rather than bouncing off. Likewise, even if the classical approximation to my brain implies that I’m going to bet all my chips at the poker table, there is some tiny amplitude for a bunch of neurons to do unlikely things and cause me to make a snug fold. But it’s not my decision that’s causing the branching; it’s the branching that I interpret as leading to my decision.
Under the most straightforward understanding of the chemistry going on in our brains, most of our thinking has nothing to do with entanglement and branching of the wave function. We shouldn’t imagine that making a difficult decision splits the world into multiple copies, each containing a version of you that chose differently. [At least according to the Many World QM interpretation.]
Similarly, quantum mechanics has nothing to do with the question of free will. It’s natural to think that it might, as free will is often contrasted with determinism, the idea that the future is completely determined by the present state of the universe. After all, if the future is determined, what room is there for me to make choices? In the textbook presentation of quantum mechanics, measurement outcomes are truly random, so physics is not deterministic. Maybe that opens the door a crack for free will to sneak back in, after it was banished by the Newtonian clockwork paradigm of classical mechanics? There’s so much wrong with this that it’s hard to know where to start. First, “free will” versus “determinism” isn’t the right distinction to draw. Determinism should be opposed to “indeterminism,” and free will should be opposed to “no free will.” Determinism is straightforward to define: given the exact current state of the system, the laws of physics determine precisely the state at later times. Free will is trickier. One usually hears free will defined as something like “the ability to have chosen otherwise.” That means we’re comparing what really happened (we were in a situation, we made a decision, and we acted accordingly) to a different hypothetical scenario (we wind the clock backward to the original situation, and ask whether we “could have” decided differently). When playing this game, it’s crucial to specify exactly what is kept fixed between the real and hypothetical situations. Is it absolutely everything, down to the last microscopic detail? Or do we just imagine fixing our available macroscopic information, allowing for variation within invisible microscopic details? Let’s say we’re hard-core about this question, and compare what actually happened to a hypothetical re-running of the universe starting from exactly the same initial condition, down to the precise state of every last elementary particle. 
In a classical deterministic universe the outcome would be precisely the same, so there’s no possibility you could have “made a different decision.” By contrast, according to textbook quantum mechanics, an element of randomness is introduced, so we can’t confidently predict exactly the same future outcome from the same initial conditions. But that has nothing to do with free will. A different outcome doesn’t mean we manifested some kind of personal, supra-physical volitional influence over the laws of nature. It just means that some unpredictable quantum random numbers came up differently. What matters for the traditional “strong” notion of free will is not whether we are subject to deterministic laws of nature, but whether we are subject to impersonal laws of any sort. The fact that we can’t predict the future isn’t the same as the idea that we are free to bring it about. Even in textbook quantum mechanics, human beings are still collections of particles and fields obeying the laws of physics. For that matter, quantum mechanics is not necessarily indeterministic. Many-Worlds is a counterexample. You evolve, perfectly deterministically, from a single person now into multiple persons at a future time. No choices come into the matter anywhere. On the other hand, we can also contemplate a weaker notion of free will, one that refers to the macroscopically available knowledge we actually have about the world, rather than running thought experiments based on microscopically perfect knowledge. In that case, a different form of unpredictability arises. Given a person and what we (or they, or anyone) know about their current mental state, there will typically be many different specific arrangements of atoms and molecules in their bodies and brains that are compatible with that knowledge. Some of those arrangements may lead to sufficiently different neural processes that we would end up acting very differently, if those arrangements had been true. 
In that case, the best we can realistically do to describe the way human beings (or other conscious agents) act in the real world is to attribute volition to them—the ability to choose differently. Attributing volition to people is what every one of us actually does as we go through life talking about ourselves and others. For practical purposes it doesn’t matter whether we could predict the future from perfect knowledge of the present, because we don’t have such knowledge, nor will we ever. This has led philosophers, going back as far as Thomas Hobbes, to propose compatibilism between underlying deterministic laws and the reality of human choice-making. Most modern philosophers are compatibilists about free will (which doesn’t mean it’s right, of course). Free will is real, just like tables and temperature and branches of the wave function. As far as quantum mechanics is concerned, it doesn’t matter whether you are a compatibilist or an incompatibilist concerning free will. In neither case should quantum uncertainty affect your stance; even if you can’t predict the outcome of a quantum measurement, that outcome stems from the laws of physics, not any personal choices made by you. We don’t create the world by our actions, our actions are part of the world.
I would be remiss to talk about the human side of Many-Worlds without confronting the question of consciousness. There is a long history of claiming that human consciousness is necessary to understand quantum mechanics, or that quantum mechanics may be necessary to understand consciousness. Much of this can be attributed to the impression that quantum mechanics is mysterious, and consciousness is mysterious, so maybe they have something to do with each other. That’s not wrong, as far as it goes. Maybe quantum mechanics and consciousness are somehow interconnected; it’s a hypothesis we’re welcome to contemplate. But according to everything we currently know, there is no good evidence this is actually the case. Let’s first examine whether quantum mechanics might help us understand consciousness. It’s conceivable—though far from certain—that the rates of various neural processes in your brain depend on quantum entanglement in an interesting way, so that they cannot be understood by classical reasoning alone. But accounting for consciousness, as we traditionally think about it, isn’t a straightforward matter of the rates of neural processes. Philosophers distinguish between the “easy problem” of consciousness—figuring out how we sense things, react to them, think about them—and the “hard problem”—our subjective, first-person experience of the world; what it is like to be us, rather than someone else. Quantum mechanics doesn’t seem to have anything to do with the hard problem. [If interested in the topic, read The Big Picture by Sean Carroll]
Everettian quantum mechanics has nothing specific to say about the hard problem of consciousness that wouldn’t be shared by any other view in which the world is entirely physical. In such a view, the relevant facts about consciousness include these: Consciousness arises from brains. Brains are coherent physical systems. That’s all. (“Coherent” here means “made of mutually interacting parts”; two collections of neurons on two non-interacting branches of the wave function are two distinct brains.) You can extend “brains” to “nervous systems” or “organisms” or “information-processing systems” if you like. Many-Worlds quantum mechanics is a quintessentially mechanistic theory, with no special role for observers or experiences.
The human mind generally, and consciousness in particular, are extremely complex phenomena. The fact that we don’t fully understand them shouldn’t tempt us into proposing entirely new laws of fundamental physics to help ourselves out. The laws of physics are enormously better understood, and that understanding has been much better verified by experiment, than the functioning of our brains and their relationship to our minds. We might someday have to contemplate modifying the laws of physics to successfully account for consciousness, but that should be a move of last resort. [The problem is that our understanding of fundamental physics stems from our mind!]
We can also flip the question on its head: If quantum mechanics doesn’t help account for consciousness, is it nevertheless possible that consciousness plays a central role in accounting for quantum mechanics? Many things are possible. But there’s a bit more to it than that. Given the prominence afforded to the act of measurement in the rules of standard textbook quantum theory, it’s natural to wonder whether there isn’t something special about the interaction between a conscious mind and a quantum system. Could the collapse of the wave function be caused by the conscious perception of certain aspects of physical objects? According to the textbook view, wave functions collapse when they are measured, but what precisely constitutes “measurement” is left a little vague. The Copenhagen interpretation posits a distinction between quantum and classical realms, and treats measurement as an interaction between a classical observer and a quantum system. Where we should draw the line is hard to specify. If we have a Geiger counter observing emission from a radioactive source, for example, it would be natural to treat the counter as part of the classical world. But we don’t have to; even in Copenhagen, we could imagine treating Geiger counters as quantum systems that obey the Schrödinger equation. It’s only when the outcome of a measurement is perceived by a human being that (in this way of thinking) the wave function absolutely has to collapse, because no human being has ever reported being in a superposition of different measurement outcomes. So the last possible place we can draw the cut is between “observers who can testify as to whether they are in a superposition” and “everything else.” Since the perception of not being in a superposition is part of our consciousness, it’s not crazy to ask whether it’s actually consciousness that causes the collapse.
In Wigner’s words: All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called “apperceptions”) of the consciousness, and even though the dividing line between the observer, whose consciousness is being affected, and the observed physical object can be shifted towards the one or the other to a considerable degree, it cannot be eliminated. It may be premature to believe that the present philosophy of quantum mechanics will remain a permanent feature of future physical theories; it will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the conclusion that the content of the consciousness is an ultimate reality.
If consciousness did play a role in the quantum measurement process, what exactly would that mean? The most straightforward approach would be to posit a dualist theory of consciousness, according to which “mind” and “matter” are two distinct, interacting categories. The general idea would be that our physical bodies are made of particles with a wave function that obeys the Schrödinger equation, but that consciousness resides in a separate immaterial mind, whose influence causes wave functions to collapse upon being perceived. Dualism has waned in popularity since its heyday in the time of René Descartes. The basic conundrum is the “interaction problem”: How do mind and matter interact with each other? In the present context, how is an immaterial mind, lacking extent in space and time, supposed to cause wave functions to collapse? There is another strategy, however, that seems at once less clunky and considerably more dramatic. This is idealism, in the philosophical sense of the word: the view that the fundamental essence of reality is mental, rather than physical, in character. Idealism can be contrasted with physicalism or materialism, which suggest that reality is fundamentally made of physical stuff, and minds and consciousness arise out of that as collective phenomena. If physicalism claims that there is only the physical world, and dualism claims that there are both physical and mental realms, idealism claims that there is only the mental realm. (There is not a lot of support on the ground for the remaining logical possibility, that neither the physical nor the mental exists.) For an idealist, mind comes first, and what we think of as “matter” is a reflection of our thoughts about the world. In some versions of the story, reality emerges from the collective effort of all the individual minds, whereas in others, a single concept of “the mental” underlies both individual minds and the reality they bring to be.
Some of history’s greatest philosophical minds, including many in various Eastern traditions but also Westerners such as Immanuel Kant, have been sympathetic to some version of idealism. It’s not hard to see how quantum mechanics and idealism might seem like a good fit. Idealism says that mind is the ultimate foundation of reality, and quantum mechanics (in its textbook formulation) says that properties like position and momentum don’t exist until they are observed, presumably by someone with a mind. All varieties of idealism are challenged by the fact that, aside from the contentious exception of quantum measurement, the real world seems to move along quite well without any particular help from conscious minds. Our minds discover things about the world through the process of observation and experiment, and different minds end up discovering aspects of the world that always end up being wholly consistent with one another. We have assembled quite a detailed and successful account of the first few minutes of the history of the universe, a time when there were no known minds around to think about it. Meanwhile, progress in neuroscience has increasingly been able to identify particular thought processes with specific biochemical events taking place in the material that makes up our brains. If it weren’t for quantum mechanics and the measurement problem, all of our experience of reality would speak to the wisdom of putting matter first and mind emergent from it, rather than the other way around. So, is the weirdness of the quantum measurement process sufficiently intractable that we should discard physicalism itself, in favor of an idealistic philosophy that takes mind as the primary ground of reality? Does quantum mechanics necessarily imply the centrality of the mental? No. We don’t need to invoke any special role for consciousness in order to address the quantum measurement problem. We’ve seen several counterexamples. 
Many-Worlds is an explicit example, accounting for the apparent collapse of the wave function using the purely mechanistic process of decoherence and branching. We’re allowed to contemplate the possibility that consciousness is somehow involved, but it’s just as certainly not forced on us by anything we currently understand.
Idealism isn’t something that’s easy to disprove; if someone is convinced it’s right, it’s hard to point to anything that would obviously change their mind (or Mind). But what they can’t do is claim that quantum mechanics forces us into such a position. We have very straightforward and compelling models of the world in which reality exists independently of us; there’s no need to think we bring reality into existence by observing or thinking about it.
Back in the nineteenth century, physicists seemed to be homing in on a view of the world in which both particles and fields played a role: matter was made of particles, and the forces by which they interacted were described by fields. These days we know better; even the particles that we know and love are actually vibrations in fields that suffuse the space around us. When we see particle-like tracks in a physics experiment, that’s a reflection of the fact that what we see is not what there really is. Under the right circumstances we see particles, but our best current theories say that fields are more fundamental.
Gravity is the one part of physics that doesn’t fit comfortably into the quantum-field-theory paradigm. You will often hear that “we don’t have a quantum theory of gravity,” but that’s a bit too strong. We have an extremely good classical theory of gravity: Einstein’s general relativity, which describes the curvature of spacetime. General relativity is itself a field theory—it describes a field pervading all of space, in this case the gravitational field. And we have very well understood procedures for taking a classical field theory and quantizing it, yielding a quantum field theory. Apply those procedures to the known fields of fundamental physics, and we end up with something called the Core Theory. The Core Theory accurately describes not only particle physics but also gravity, as long as the strength of the gravitational field doesn’t grow too large.
We have a theory of quantum gravity that is adequate when gravity is fairly weak, one that is perfectly capable of describing why apples fall from trees or how the moon orbits the Earth. But it’s limited; once gravity becomes very strong, or we try to push our calculations too far, our theoretical apparatus fails us. As far as we can tell, this situation is unique to gravity. For all the other particles and forces, quantum field theories seem to be able to handle any situation we can imagine.
Every physicist understands that the world is fundamentally quantum, but as we actually do physics we can’t help but be influenced by our experience and intuitions, which have long been trained on classical principles. There are particles, there are fields, they do things, we can observe them. Even when we explicitly move to quantum mechanics, physicists generally start by taking a classical theory and quantizing it. But nature doesn’t do that. Nature simply is quantum from the start; classical physics, as Everett insisted, is an approximation that is useful in the right circumstances.
General relativity is a theory of the dynamics of spacetime, so in this chapter we’ll ask why the concept of “space” is so important in the first place. The answer resides in the concept of locality—things interact with one another when they are nearby in space.
Dynamical locality refers to the smooth evolution of the quantum state when no measurement or branching is happening. That’s the context in which physicists expect everything to be perfectly local, with disturbances at one location only immediately affecting things right nearby. This kind of locality is enforced by the rule in special relativity that nothing can travel faster than light. And it’s this dynamical locality that we’re concerned with at the moment as we study the nature and emergence of space itself.
By way of an analogy, think of all the matter in the room around you right now. You could describe it—helping ourselves to the classical approximation for the moment—by listing the position and velocity of every atom in the room. But that would be crazy. You neither have access to all that information, nor could you put it to use if you did, nor do you really need it. Instead, you chunk up the stuff around you into a set of useful concepts: chairs, tables, lights, floors, and so on. That’s an enormously more compact description than listing every atom would be, but still gives us a great deal of insight into what’s going on. Similarly, characterizing the quantum state in terms of multiple worlds isn’t necessary—it just gives us an enormously useful handle on an incredibly complex situation. The worlds aren’t fundamental. Rather, they’re emergent. Emergence in this sense does not refer to events unfolding over time, as when a baby bird emerges from its egg. It’s a way of describing the world that isn’t completely comprehensive, but divides up reality into more manageable chunks. Notions like rooms and floors are nowhere to be found in the fundamental laws of physics—they’re emergent. They are ways of effectively describing what’s going on even if we lack perfect knowledge of each and every atom and molecule around us. To say that something is emergent is to say that it’s part of an approximate description of reality that is valid at a certain (usually macroscopic) level, and is to be contrasted with “fundamental” things, which are part of an exact description at the microscopic level.
The same thing can be said for worlds in Everettian quantum mechanics. For a quantum version of Laplace’s demon, with exact knowledge of the quantum state of the universe, there would never be any need to divide the wave function into a set of branches describing a collection of worlds. But it is enormously convenient and helpful to do so, and we’re allowed to take advantage of this convenience because the individual worlds don’t interact with one another.
That doesn’t mean that the worlds aren’t “real.” Fundamental versus emergent is one distinction, and real versus not-real is a completely separate one. Chairs and tables and cups of coffee are indubitably real, as they describe true patterns in the universe, ones that organize the world in ways that reflect the underlying reality. The same goes for Everettian worlds. We choose to invoke them when carving up the wave function for our convenience, but we don’t do that carving randomly. There are right and wrong ways to divide the wave function into branches, and the right ways leave us with independent worlds that obey approximately classical laws of physics. Which ways actually work is ultimately determined by the fundamental laws of nature, not by human whimsy.
In order to accurately predict what a system made of many parts will do next, you need to keep track of the information of all the parts. Lose just a little bit, and you know nothing. Emergence happens when the opposite is possible: we can throw away almost all the information, keeping just a little bit (as long as we correctly identify which bit), and still say quite a lot about what will happen.
Everettian worlds are the same way. We don’t need to keep track of the entire wave function to make useful predictions, just what happens in an individual world. To a good approximation we can treat what happens in each world using classical mechanics, with just the occasional quantum intervention when we entangle with microscopic systems in superposition. That’s why Newton’s laws of gravitation and motion are sufficient to fly rockets to the moon without knowing the complete quantum state of the universe; our individual branch of the wave function describes an emergent almost-classical world.
Why do we end up seeing macroscopic objects with pretty well-defined locations in space, rather than being in superpositions of different locations? Why is “space” apparently such a central concept at all?
From a Many-Worlds perspective that treats quantum states as fundamental and everything else as emergent, this suggests that we should really turn things around: “positions in space” are the variables in which interactions look local. Space isn’t fundamental; it’s just a way to organize what’s going on in the underlying quantum wave function.
Once we have an Everettian perspective on quantum dynamics, we accept that the wave function smoothly evolves into an equal superposition of two possibilities, one in which the cat is asleep and the other in which it is awake. But decoherence tells us that the cat is also entangled with its environment, consisting of all the air molecules and photons within the box. The effective branching into separate worlds happens almost right away after the detector clicks. By the time the experimenter gets around to opening the box, there are two branches of the wave function, each of which has a single cat and a single experimenter, not a superposition.
Decoherence is the phenomenon that ultimately links the austere simplicity of Everettian quantum mechanics to the messy particularity of the world we see.
Newtonian gravity can be summed up in the famous inverse-square law: the gravitational force between two objects is proportional to the mass of each of them, and inversely proportional to the square of the distance between them. So if you moved the moon to be twice as far away from the Earth, the gravitational force between them would be only one-fourth as large.
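The inverse-square relation above is easy to check numerically. A minimal sketch, using rough Earth–Moon values for illustration:

```python
# Newton's inverse-square law: F = G * m1 * m2 / r^2
G = 6.674e-11  # gravitational constant, in N·m²/kg²

def gravitational_force(m1, m2, r):
    """Gravitational force between masses m1, m2 (kg) at separation r (m)."""
    return G * m1 * m2 / r**2

# Roughly Earth's mass, the moon's mass, and the Earth-moon distance:
f_near = gravitational_force(5.97e24, 7.35e22, 3.84e8)
f_far = gravitational_force(5.97e24, 7.35e22, 2 * 3.84e8)  # moon twice as far

# Doubling the distance quarters the force:
print(f_near / f_far)  # → 4.0
```

The masses cancel in the ratio, so the factor of four depends only on the doubled distance, exactly as the passage states.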
There is indeed an “agent” that causes gravity to act the way it does, and that agent is perfectly material—it’s the gravitational field.
It wasn’t until Einstein came along with general relativity that changes in the gravitational field, just like changes in the electromagnetic field, were shown to travel through space at the speed of light.
The idea of a field carrying a force is conceptually appealing because it instantiates the idea of locality. As the Earth moves, the direction of its gravitational pull doesn’t change instantly throughout the universe. Rather, it changes right where the Earth is located, and then the field at that point tugs on the field nearby, which tugs on the field a little farther away, and so on in a wave moving outward at the speed of light.
One kind of waviness arises when we make the transition from a classical theory of particles to a quantum version, obtaining the quantum wave function of a set of particles. The other kind is when we have a classical field theory to start with, even before quantum mechanics becomes involved at all. That’s the case with classical electromagnetism, or with Einstein’s theory of gravity. Classical electromagnetism and general relativity are both theories of fields (and therefore of waves), but are themselves perfectly classical.
In quantum field theory, we start with a classical theory of fields and construct a quantum version of that. Instead of a wave function that tells us the probability of seeing a particle at some location, we have a wave function that tells us the probability of seeing a particular configuration of a field throughout space. A wave function of a wave, if you like.
There are many ways to quantize a classical theory, but the most direct one is the route we have already taken. Thinking of a collection of particles, we can ask, “Where can the particles be?” The answer for each individual particle is simply “At any point in space.” If there were just one particle, the wave function would therefore assign an amplitude to every point in space. But when we have several particles, there isn’t a separate wave function for each particle. There is one big wave function, assigning a different amplitude to every possible set of locations that all the particles could be in at once. That’s how entanglement can happen; for every configuration of the particles, there is an amplitude we could square to get the probability of observing them there all at the same time. It’s the same thing for fields, with “possible configuration of the particles” replaced by “possible configuration of the field,” where by “configuration” we now mean the values of the field at each point throughout all of space.
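The idea of one big wave function over joint configurations, rather than a wave function per particle, can be sketched with a toy two-particle system. The amplitudes below are illustrative inventions, not data from the book; probabilities come from squaring amplitudes, per the Born rule:

```python
# Toy entangled state: one amplitude per *joint* configuration of two
# particles, each of which can be at "here" or "there". (Hypothetical values.)
from math import sqrt

amplitudes = {
    ("here", "here"): 1 / sqrt(2),    # both particles found at "here"
    ("there", "there"): 1 / sqrt(2),  # both found at "there"
    ("here", "there"): 0.0,           # mixed outcomes never occur...
    ("there", "here"): 0.0,           # ...which is what makes this entangled
}

# Born rule: probability of a joint configuration = |amplitude|^2
probabilities = {config: a**2 for config, a in amplitudes.items()}
print(probabilities[("here", "here")])  # ≈ 0.5, up to floating-point rounding
```

Note that neither particle has a probability distribution of its own here; only joint outcomes do. For fields, the dictionary keys would instead range over entire field configurations throughout space.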
This is the difference between a classical field and a quantum wave function. A classical field is a function of space, and a classical theory with many fields would describe multiple functions of space overlapping with one another. The wave function in quantum field theory is not a function of space, it’s a function of the set of all configurations of all the classical fields. (In the Core Theory, that would include the gravitational field, the electromagnetic field, the fields for the various subatomic particles, and so on.)
The overall minimum-energy wave function is one in which every single mode has the lowest possible energy. That’s a unique state, which we call the vacuum. When quantum field theorists talk about the vacuum, they don’t mean a machine that lifts dust off your floors, or even a region of interplanetary space devoid of matter. What they mean is “the lowest-energy state of your quantum field theory.” You might think that the quantum vacuum would be empty and boring, but it’s actually a wild place. An electron in an atom has a lowest-energy state it can be in, but if we think about it as a wave function of the position of the electron, that function can still have an interesting shape. Likewise, the vacuum state in field theory can still have interesting structure if we ask about individual parts of the field.
In ordinary particle-based quantum mechanics, the number of particles is fixed, but quantum field theory has no problem describing particles decaying or annihilating or being created in collisions. Which is good, because things like that happen all the time.
Fields are more fundamental; it’s fields that provide the best picture we currently have of what the universe is made of. Particles are simply what we see when we observe fields under the right circumstances.
To bring home the interestingness of the field-theory vacuum, let’s focus on one of its most obvious aspects, its energy. It’s tempting to think that the energy is zero by definition. But we’ve been careful not to say that: the vacuum is the “lowest-energy state,” not necessarily a “zero-energy state.” In fact, its energy can be anything at all; it’s a constant of nature, a parameter of the universe that is not determined by any other set of measurable parameters.
According to general relativity, energy is the source of the curvature of spacetime, and therefore of gravity. The energy of empty space takes a particular form: there is a precisely constant amount in every cubic centimeter of space, unchanging through the universe, even as spacetime expands or warps. Einstein referred to the vacuum energy as the cosmological constant, and cosmologists long debated whether its value was exactly zero or some other number. That debate seems to have been settled in 1998, when astronomers discovered that the universe is not only expanding but also accelerating. If you look at a distant galaxy and measure the velocity with which it is receding, that velocity is increasing with time. That would be extremely surprising if all the universe contained were ordinary matter and radiation, both of which have the gravitational effect of pulling things together and slowing down the expansion rate. A positive vacuum energy has the opposite effect: it pushes the universe apart, leading to accelerated expansion. (The debate “seems to” have been settled, because it’s still an open possibility that cosmic acceleration is caused by something other than vacuum energy. But that’s by far the leading explanation, on both theoretical and observational grounds.)
You might be tempted to ask: But are the particles really there? How can there be zero particles in the universe as a whole, and yet we might see particles when we look in any particular location? But we’re not dealing with a theory of particles; it’s a theory of fields. Particles are what we see when we observe the theory in particular ways. We shouldn’t be asking, “How many particles are there, really?” We should be asking, “What are the possible measurement outcomes when we observe a quantum state in this specific way?”
The number of particles we see isn’t an absolute reality, it depends on how we look at the state.
This leads us directly to an important property of quantum field theory: the entanglement between parts of the field in different regions of space.
If a box is entangled with its neighbors, and those neighboring boxes are entangled with their neighbors, it stands to reason that the fields in our original box should be entangled not only with its neighbors, but with the fields one box away. (That’s not logically necessary, but it seems reasonable in this case, and a careful calculation affirms that it is true.) There will be a lot less entanglement with the fields one box away than for direct neighbors, but there will still be some there. And indeed this pattern continues all throughout space: the fields in any one box are entangled with the fields in every other box in the universe, although the amount of entanglement becomes less and less as we consider boxes that are farther and farther apart.
Can the fields in one little region, say, a single cubic centimeter, really be entangled with fields in every other cubic centimeter of the universe? Yes, they can. In field theory, even a single cubic centimeter (or a box of any other size) contains an infinite number of degrees of freedom. Remember that we defined a degree of freedom as a number needed to specify the state of a system, such as “position” or “spin.” In field theory, there are an infinite number of degrees of freedom in any finite region: at every point in space, the value of the field at that point is a separate degree of freedom.
Quantum-mechanically, the space of all the possible wave functions for a system is that system’s Hilbert space. So the Hilbert space describing any region in quantum field theory is infinite-dimensional, because there are an infinite number of degrees of freedom. As we’ll see, that might not continue to hold true in the correct theory of reality; there are reasons to think that quantum gravity features only a finite number of degrees of freedom in a region. But quantum field theory, without gravity, allows for infinite possibilities in any tiny box.
By poking a quantum field in one tiny region of space, it’s possible to turn the quantum state of the whole universe into literally any state at all. Technically this result is known as the Reeh-Schlieder theorem, but it has also been called the Taj Mahal theorem. That’s because it implies that without leaving my room, I can do an experiment and get an outcome that implies there is now, suddenly, a copy of the Taj Mahal on the moon. (Or any other building, at any other location in the universe.) Some find this result astonishing, but others argue that once you understand entanglement, and appreciate that things can technically be possible yet so incredibly improbable that it really doesn’t matter, we shouldn’t be very surprised after all.
Quantum field theory is able to successfully account for every experiment ever performed by human beings. When it comes to describing reality, it’s the best approach we have. It’s therefore extremely tempting to imagine that future physical theories will be set within the broad paradigm of quantum field theory, or perhaps small variations thereof. But gravity, at least when it becomes strong, doesn’t seem to be well described by quantum field theory.
Gravity, which describes the state of spacetime itself rather than just particles or fields moving within spacetime, presents special challenges when we try to describe it in quantum terms. Rather than taking classical general relativity and quantizing it, we will try to find gravity within quantum mechanics. That is, we will take the basic ingredients of quantum theory—wave functions, Schrödinger’s equation, entanglement—and ask under what circumstances we can obtain emergent branches of the wave function that look like quantum fields propagating in a curved spacetime.
Up to this point in the book, basically everything we’ve talked about is either well understood and established doctrine (such as the essentials of quantum mechanics), or at least a plausible and respectable hypothesis (the Many-Worlds approach). Now we’ve reached the edge of what is safely understood, and will be venturing out into uncharted territory. We’ll be looking at speculative ideas that might be important to understanding quantum spacetime and cosmology.
Like “quantum mechanics,” “relativity” does not refer to a specific physical theory, but rather a framework within which theories can be constructed. Theories that are “relativistic” share a common picture of the nature of space and time, one in which the physical world is described by events happening in a single unified “spacetime.”
There are two big ideas that go under the name of “the theory of relativity,” the special theory and the general theory. Special relativity, which came together in 1905, is based on the idea that everyone measures light to travel at the same speed in empty space [and nothing can exceed the speed of light]. Combining that insight with an insistence that there is no absolute frame of motion leads us directly to the idea that time and space are “relative.” Spacetime is universal and agreed upon by everyone, but how we divvy it up into “space” and “time” will be different for different observers.
Quantum mechanics and special relativity are 100 percent compatible with each other. The quantum field theories used in modern particle physics are relativistic to their cores.
The other big idea in relativity came ten years later, when Einstein proposed general relativity, his theory of gravity and curved spacetime. The crucial insight was that four-dimensional spacetime isn’t just a static background on which the interesting parts of physics take place; it has a life of its own. Spacetime can bend and warp, and does so in response to the presence of matter and energy. We grow up learning about the flat geometry described by Euclid, in which initially parallel lines remain parallel forever and the angles inside a triangle always add up to 180 degrees. Spacetime, Einstein realized, has a non-Euclidean geometry.
The effects of this warping of geometry are what we recognize as “gravity.” General relativity came with numerous mind-stretching consequences, such as the expansion of the universe and the existence of black holes, though it has taken physicists a long time to appreciate what those consequences are.
Special relativity is a framework, but general relativity is a specific theory. Just like Newton’s laws govern the evolution of a classical system or the Schrödinger equation governs the evolution of a quantum wave function, Einstein derived an equation that governs the curvature of spacetime.
Matter tells spacetime how to curve, and spacetime tells matter how to move.
General relativity is classical. The geometry of spacetime is unique, evolves deterministically, and can in principle be measured to arbitrary precision without disturbing it. Once quantum mechanics came along, it was perfectly natural to try to “quantize” general relativity, obtaining a quantum theory of gravity.
Apparently, spacetime can’t play the same central role in quantum gravity that it does in the rest of physics. There isn’t a single spacetime, there’s a superposition of many different spacetime geometries. We can’t ask what the probability might be to find an electron at a certain point in space, since there’s no objective way to specify which point we’re talking about.
Quantum gravity, then, comes with a set of conceptual issues that distinguish it from other quantum-mechanical theories. These issues can have important ramifications for the nature of our universe, including the question of what happened at the beginning, or if there was a beginning at all. We can even ask whether space and time are themselves fundamental, or if they emerge out of something deeper.
The most popular contemporary approach to quantum gravity is string theory, which replaces particles by little loops or segments of one-dimensional “string.” (Don’t ask what the strings are made of—string stuff is what everything else is made of.) The strings themselves are incredibly small, so much so that they appear like particles when we observe them from a distance. String theory, loop quantum gravity, and other ideas share a common pattern: they start with a set of classical variables, then quantize. From the perspective we’ve been following in this book, that’s a little backward. Nature is quantum from the start, described by a wave function evolving according to an appropriate version of the Schrödinger equation. Things like “space” and “fields” and “particles” are useful ways of talking about that wave function in an appropriate classical limit. We don’t want to start with space and fields and quantize them; we want to extract them from an intrinsically quantum wave function.
What we need is a quantitative measure of how entangled a quantum subsystem actually is. Happily, such a measure exists: it’s the entropy. We can even calculate what that entropy is. The answer is: infinity.
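The quantitative measure alluded to here is the von Neumann entropy of a subsystem's reduced density matrix, a textbook formula not spelled out in the excerpt:

```latex
$$ S = -\,\mathrm{Tr}\,(\rho \log \rho), \qquad \rho = \mathrm{Tr}_{\text{rest}}\,|\Psi\rangle\langle\Psi| $$
```

For a subsystem unentangled with the rest of the world, $S = 0$; the more entangled the subsystem, the larger $S$. The "answer is infinity" because quantum field theory assigns infinitely many degrees of freedom to any region, however small.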
The behavior of spacetime in general relativity can be thought of as simply the natural tendency of systems to move toward configurations of higher entropy.
John Wheeler used to talk about the idea of “It from Bit,” suggesting that the physical world arose (somehow) out of information. These days, when entanglement of quantum degrees of freedom is the main focus, we like to talk about “It from Qubit.”
In other words, the combination of (1) knowing how our degrees of freedom are entangled, and (2) postulating that the entropy of any collection of degrees of freedom defines an area of the boundary around that collection, suffices to fully determine the geometry of our emergent space.
It is nevertheless overwhelmingly tempting to wonder whether time, like space, might be emergent rather than fundamental, and whether entanglement might have anything to do with it. The answer is yes on both counts, although the details remain a little sketchy.
If we take the Schrödinger equation at face value, time seems to be right there in a fundamental way. Indeed, it immediately follows that the universe lasts eternally toward both the past and future, for almost all quantum states. You might think that this conflicts with the oft-repeated fact that the Big Bang was the beginning of our universe, but we don’t actually know that oft-repeated fact to be true. That’s a prediction of classical general relativity, not of quantum gravity.
The Big Bang might be simply a transitional phase, with an infinitely old universe preceding it.

We have to say “almost all” in these statements because there is one loophole. The Schrödinger equation says that the rate of change of the wave function is driven by how much energy the quantum system has. What if we consider systems whose energy is precisely zero? Then all the equation says is that the system doesn’t evolve at all; time has disappeared from the story.

You might think it’s extremely implausible that the universe has exactly zero energy, but general relativity suggests you shouldn’t be so sure. Of course there seem to be energy-containing things all around us—stars, planets, interstellar radiation, dark matter, dark energy, and so on. But when you go through the math, there is also a contribution to the energy of the universe from the gravitational field itself, which is generally negative. In a closed universe—one that wraps around on itself to form a compact geometry, like a three-dimensional sphere or torus, rather than stretching to infinity—that gravitational energy precisely cancels the positive energy from everything else. A closed universe has exactly zero energy, regardless of what’s inside.

That’s a classical statement, but there’s a quantum-mechanical analogue that was developed by John Wheeler and Bryce DeWitt. The Wheeler-DeWitt equation simply says that the quantum state of the universe doesn’t evolve at all as a function of time.

This seems crazy, or at least in flagrant contradiction to our observational experience. The universe certainly seems to evolve. This puzzle has been cleverly labeled the problem of time in quantum gravity, and it is where the possibility of emergent time might come to the rescue. If the quantum state of the universe obeys the Wheeler-DeWitt equation (which is plausible, but far from certain), time has to be emergent rather than fundamental.
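The contrast can be written compactly. Ordinary quantum evolution obeys the Schrödinger equation; a zero-energy universe obeys the Wheeler-DeWitt equation, in which the time derivative has simply vanished (standard schematic forms, not quoted in the excerpt):

```latex
$$ i\hbar \frac{\partial}{\partial t}|\Psi\rangle = \hat{H}|\Psi\rangle \quad\longrightarrow\quad \hat{H}|\Psi\rangle = 0 $$
```

Setting the total energy to zero turns the dynamical equation on the left into the static constraint on the right: the "problem of time" in a single line.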
Imagine a quantum system consisting of two parts: a clock, and everything else in the universe. Imagine that both the clock and the rest of the system evolve in time as usual. Now take snapshots of the quantum state at regular intervals, perhaps once per second or once per Planck time. In any particular snapshot, the quantum state describes the clock reading some particular time, and the rest of the system in whatever configuration it was in at that time. That gives us a collection of instantaneous quantum states of the system.

The great thing about quantum states is that we can simply add them together (superposing them) to make a new state. So let’s make a new quantum state by adding together all of our snapshots. This new quantum state doesn’t evolve over time; it just exists, as we constructed it by hand. And there is no specific time reading on the clock; the clock subsystem is in a superposition of all the times at which we took snapshots. It doesn’t sound much like our world.

In other words, there’s not “really” time in the superposition state, which is completely static. But entanglement generates a relationship between what the clock reads and what the rest of the universe is doing. And the state of the rest of the universe is precisely what it would be if it were evolving as the original state did over time. We have replaced “time” as a fundamental notion with “what the clock reads in this part of the overall quantum superposition.” In that way, time has emerged from a static state, thanks to the magic of entanglement.

The jury remains out on whether the energy of the universe actually is zero, in which case time is emergent, or whether it is some nonzero value, in which case time is fundamental. At the current state of the art, it makes sense to keep our options open and investigate both possibilities.
Rather than being distributed throughout space, degrees of freedom squeeze together on a surface, and “space” is merely a holographic projection of the information contained therein.
In general relativity, a black hole is a region of spacetime that is curved so dramatically that nothing can escape from it, not even light itself. The edge of the black hole, demarcating the inside from the outside, is the event horizon. According to classical relativity, the area of the event horizon can only grow, not shrink; black holes increase in size when matter and energy fall in, but cannot lose mass to the outside world. Everyone thought that was true in nature until 1974, when Hawking announced that quantum mechanics changes everything. In the presence of quantum fields, black holes naturally radiate particles into their surroundings. Those particles have a blackbody spectrum, so every black hole has a temperature; more massive black holes are cooler, while very small black holes are incredibly hot. The formula for the temperature of a black hole’s radiation is engraved on Hawking’s gravestone in Westminster Abbey. Particles radiated by a black hole carry away energy, causing the hole to lose mass and eventually evaporate away completely.
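The temperature formula mentioned here is conventionally written as follows (the standard Hawking temperature; the exact engraved typography may differ):

```latex
$$ T = \frac{\hbar c^3}{8\pi G M k_B} $$
```

The black hole's mass $M$ appears in the denominator, which is why more massive black holes are cooler and small ones are hot: as a hole radiates and loses mass, its temperature rises, accelerating the evaporation.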
There is a standard story that is told to explain why black holes emit radiation. I’ve told it, Hawking has told it, everyone tells it. It goes like this: according to quantum field theory, the vacuum is a bubbling stew of particles popping in and out of existence, typically in pairs consisting of one particle and one anti-particle. Ordinarily we don’t notice, but in the vicinity of a black hole event horizon, one of the particles can fall inside the hole and then never get out, while the other escapes to the outside world. From the perspective of someone watching from afar, the escaping particle has positive energy, so to balance the books the infalling particle must have negative energy, and the black hole shrinks in mass as it absorbs these negative-energy particles. It’s not that different from an atom whose electrons have a bit of extra energy, and which therefore drop down to lower-energy states by emitting photons. The difference is that the atom eventually reaches a state of lowest possible energy and stays there, while the black hole (as far as we understand) just decays away entirely, exploding at the last second in a flash of high-energy particles.
That raises a problem, one that has become notorious within theoretical physics as the black hole information puzzle. Remember that quantum mechanics, in its Many-Worlds version, is a deterministic theory. Randomness is only apparent, arising from self-locating uncertainty when the wave function branches and we don’t know which branch we’re on. But in Hawking’s calculation, black-hole radiation seems not to be deterministic; it’s truly random, even without any branching. Starting from a precise quantum state describing matter that collapses to make a black hole, there is no way of computing the precise quantum state of the radiation into which it evaporates. The information specifying the original state seems to be lost.
Imagine taking a book—maybe the very one you are reading right now—and throwing it into a fire, letting it burn completely away. (Don’t worry, you can always buy more copies.) It might appear that the information contained in the book is lost in the flames. But if we turn on our physicist’s powers of thought-experiment ingenuity, we realize that this loss is only apparent. In principle, if we captured every bit of light and heat and dust and ash from the fire, and had perfect knowledge of the laws of physics, we could reconstruct exactly what went into the fire, including all the words on the pages of the book. It’ll never happen in the real world, but physics says it’s conceivable. Most physicists think that black holes should be just like that: throw a book in, and the information contained in its pages should be secretly encoded in the radiation that the black hole emits. But this is not what happens, according to Hawking’s derivation of black-hole radiation; rather, the information in the book appears to be truly destroyed. It’s possible, of course, that this implication is correct, that the information really is destroyed, and that black-hole evaporation is nothing like an ordinary fire. It’s not like we have any experimental input one way or the other. But most physicists believe that information is conserved, and that it really does get out somehow. And they suspect that the secret to getting it out lies in a better understanding of quantum gravity.
One reason why this is such a provocative result is that classically, black holes don’t seem like things that should have entropy at all. They’re just regions of empty space. You get entropy when your system is made of atoms or other tiny constituents, which can be arranged in many different ways while maintaining the same macroscopic appearance. What are these constituents supposed to be for a black hole? The answer has to come from quantum mechanics. It’s natural to presume that the Bekenstein-Hawking entropy of a black hole is a kind of entanglement entropy. There are some degrees of freedom inside the black hole, and they are entangled with the outside world. What are they?
If there’s entropy, and that entropy comes from entanglement, there must be degrees of freedom that can entangle with the rest of the world in many different ways, even if classical black holes are all featureless. If this story is right, the number of degrees of freedom in a black hole isn’t infinite, but it is very large indeed.
While we tend to pay attention to the stuff we see in the universe—matter, radiation, and so on—almost all of the universe’s quantum degrees of freedom are invisible, doing nothing more than stitching spacetime together. In a volume of space roughly the size of an adult human, there must be at least 10^70 degrees of freedom; we know that because that’s the entropy of a black hole that would fill such a volume. But there are only about 10^28 particles in a person. We can think of a particle as a degree of freedom that has been “turned on,” while all the other degrees of freedom are peacefully “turned off” in the vacuum state. As far as quantum field theory is concerned, a human being or the center of a star isn’t all that different from empty space.
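The 10^70 figure can be checked with a back-of-the-envelope calculation: the Bekenstein-Hawking entropy of a black hole is its horizon area divided by four times the Planck length squared. A minimal sketch, assuming a sphere of radius one meter as "roughly the size of an adult human":

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length squared: l_p^2 = hbar * G / c^3
lp2 = hbar * G / c**3

# Horizon area of a sphere of radius 1 m (roughly human-sized)
r = 1.0
area = 4 * math.pi * r**2

# Bekenstein-Hawking entropy in units of Boltzmann's constant:
# S = A / (4 l_p^2)
S = area / (4 * lp2)
print(f"S ~ {S:.2e}")  # on the order of 10^70, matching the text
```

The result lands right around 10^70, which is where the "at least 10^70 degrees of freedom" in the excerpt comes from, dwarfing the roughly 10^28 particles in a person.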
Black holes have a very special property: they represent the highest-entropy states we can have in any given size region of space.
That conclusion is profoundly different from what we would expect in an ordinary quantum field theory without gravity. In such a theory there is no limit on how much entropy we can fit in a region, because there’s also no limit on how much energy there can be. This reflects the fact that there are an infinite number of degrees of freedom in quantum field theory, even in a finite-sized region.

Gravity appears to be different. There is a maximum amount of energy and entropy that can fit into a given region, which seems to imply that there are only a finite number of degrees of freedom there. Somehow these degrees of freedom become entangled in the right way to stitch together into the geometry of spacetime. It’s not just black holes: every region of spacetime has a maximum entropy we could imagine fitting into it (the entropy that a black hole of that size would have), and therefore a finite number of degrees of freedom. It’s even true for the universe as a whole; because there is vacuum energy, the expansion of space is accelerating, and that means there is a horizon all around us that delineates the extent of the observable part of our cosmos. That observable patch of space has a finite maximum entropy, so there are only a finite number of degrees of freedom needed to describe everything we see or ever will see.

If this story is on the right track, it has an immediate, profound consequence for the Many-Worlds picture of quantum mechanics. A finite number of quantum degrees of freedom implies a finite-dimensional Hilbert space for the system as a whole (in this case, any chosen region of space). That in turn implies that there is some finite number of branches of the wave function, not an infinite number. That’s why Alice was cagey about whether there are an infinite number of “worlds” in the wave function.
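The step from "finite entropy" to "finite-dimensional Hilbert space" is simple counting: N qubit-like degrees of freedom give a Hilbert space of dimension 2^N. Plugging in the commonly quoted horizon entropy of our observable universe (a figure of roughly 10^122, which is not stated in the excerpt) gives an enormous but finite dimension:

```latex
$$ \dim \mathcal{H} = 2^{N}, \qquad N \sim S_{\max} \sim 10^{122} \;\;\Rightarrow\;\; \dim \mathcal{H} \sim 2^{10^{122}} $$
```

A number far too large to ever write out, but finite, which is all the argument about a finite number of branches requires.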
In many simple models of quantum mechanics, including that of a fixed set of particles moving smoothly through space or any ordinary quantum field theory, Hilbert space is infinite-dimensional and there could potentially be an infinite number of worlds. But gravity seems to change things around in an important way. It prevents most of those worlds from existing, because they would describe too much energy being packed into a local region. So maybe in the real universe, where gravity certainly exists, Everettian quantum mechanics only describes a finite number of worlds.
Our confidence in the basic principles of quantum gravity isn’t strong enough to be absolutely sure that there are only a finite number of Everettian worlds, but it seems reasonable, and it certainly would make things much simpler.
The maximum-entropy nature of black holes also has an important consequence for quantum gravity. In classical general relativity, there’s nothing special about the interior region of a black hole, in between the event horizon and the singularity. There’s a gravitational field there, but to an infalling observer it otherwise looks like empty space.

According to the story we told in the last chapter, the quantum version of “empty space” is something like “a collection of spacetime degrees of freedom entangled together in such a way as to form an emergent three-dimensional geometry.” Implicit in that description is that the degrees of freedom are scattered more or less uniformly throughout the volume of space we’re looking at. And if that were true, the maximum-entropy state of that form would have all of those degrees of freedom entangled with the outside world. The entropy would thus be proportional to the volume of the region, not the area of its boundary. What’s up?

There is a clue from the black hole information puzzle. The issue there was that there is no obvious way to transmit information from a book that has fallen into the black hole to the Hawking radiation emitted from the event horizon, at least not without signals moving faster than light. So what about this crazy idea: maybe all of the information about the state of the black hole—the “inside” as well as the horizon—can be thought of as living on the horizon itself, not buried in the interior. The black-hole state “lives,” in some sense, on a two-dimensional surface, rather than being stretched across a three-dimensional volume.
This idea is known as the holographic principle. In an ordinary hologram, shining light on a two-dimensional surface reveals an apparently three-dimensional image. According to the holographic principle, the apparently three-dimensional interior of a black hole reflects information encoded on the two-dimensional surface of its event horizon. If this is true, maybe it’s not so hard to get information from the black hole to its outgoing radiation, because the information was always on the horizon to start with.
According to Maldacena, these two theories are secretly equivalent to each other. That’s extremely provocative, for a couple of reasons. First, the AdS theory includes gravity, while the CFT is an ordinary field theory that has no gravity at all. Second, the boundary of a spacetime has one fewer dimensions than the spacetime itself. If we consider four-dimensional AdS, for example, that is equivalent to a three-dimensional conformal field theory. You couldn’t ask for a more explicit example of holography in action.
The world is a quantum state evolving in Hilbert space, and physical space emerges out of that. It shouldn’t come as a surprise that a single quantum state might exhibit different notions of position and locality depending on what kind of observations we perform on it.
Space itself is not fundamental; it’s just a useful way of talking from certain points of view.
Recommended reading:
- Albert, D. Z. (1994). Quantum Mechanics and Experience. Harvard University Press. A short introduction to quantum mechanics and the measurement problem from a philosophical perspective.
- Susskind, L., and A. Friedman. (2015). Quantum Mechanics: The Theoretical Minimum. Basic Books. A serious introduction to quantum mechanics, taught at the level of an introductory course for physics students at a good university.
- Carroll, S. (2016). The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. Dutton. Carroll’s earlier book, connecting fundamental physics to emergence, life, and meaning.