Author: Sean Carroll

ISBN: 978-1101984253

Remember Nietzsche's "God is dead"? The outlook on life presented in this book is exactly the one he had in mind when he said it. While I agree with Nietzsche that we are nowadays predominantly facing a crisis of meaning, I agree with Carroll as well: we shouldn't stop exploring just because it may cause a crisis of meaning. It is up to us to explore, face the truth, and live with it. Each of us still has the ability to make our own meaning. The problem this time is that we have to find meaning on our own; it is no longer given to us. While finding one's own meaning leads to self-actualisation, the lack thereof leads to depression, existential angst, and a myriad of other problems. Read this book with caution!

EXCERPTS

Fourteen billion years after the Big Bang, the region of space we can directly see is populated by a few hundred billion galaxies, averaging a hundred billion stars each. We human beings, by contrast, are quite tiny—a recent arrival on an insignificant planet orbiting a nondescript star.

Everybody dies. Life is not a substance, like water or rock; it’s a process, like fire or a wave crashing on the shore. It’s a process that begins, lasts for a while, and ultimately ends. Long or short, our moments are brief against the expanse of eternity.

We humans are blobs of organized mud, which through the impersonal workings of nature’s patterns have developed the capacity to contemplate and cherish and engage with the intimidating complexity of the world around us.

By the old way of thinking, human life couldn’t possibly be meaningful if we are “just” collections of atoms moving around in accordance with the laws of physics. That’s exactly what we are, but it’s not the only way of thinking about what we are. We are collections of atoms, operating independently of any immaterial spirits or influences, and we are thinking and feeling people who bring meaning into existence by the way we live our lives.

We have to be willing to accept uncertainty and incomplete knowledge, and always be ready to update our beliefs as new evidence comes in.

The emergence of complex structures isn’t a strange phenomenon in tension with the general tendency of the universe toward greater disorder; it is a natural consequence of that tendency. In the right circumstances, matter self-organizes into intricate configurations, capable of capturing and using information from their environments. The culmination of this process is life itself.

We are not the reason for the existence of the universe, but our ability for self-awareness and reflection makes us special within it.

The hardest problem of all, that of how to construct meaning and values in a cosmos without transcendent purpose.

Poetic naturalism strikes a middle ground, accepting that values are human constructs, but denying that they are therefore illusory or meaningless. All of us have cares and desires, whether given to us by evolution, our upbringing, or our environment. The task before us is to reconcile those cares and desires within ourselves, and amongst one another. The meaning we find in life is not transcendent, but it’s no less meaningful for that.

What is the fundamental nature of reality? Philosophers call this the question of ontology—the study of the basic structure of the world, the ingredients and relationships of which the universe is ultimately composed. It can be contrasted with epistemology, which is how we obtain knowledge about the world.

The number of approaches to ontology alive in the world today is somewhat overwhelming. There is the basic question of whether reality exists at all. A realist says, “Of course it does”; but there are also idealists, who think that capital-M Mind is all that truly exists, and the so-called real world is just a series of thoughts inside that Mind. Among realists, we have monists, who think that the world is a single thing, and dualists, who believe in two distinct realms (such as “matter” and “spirit”).

The broader ontology typically associated with atheism is naturalism— there is only one world, the natural world, exhibiting patterns we call the “laws of nature,” and which is discoverable by the methods of science and empirical investigation. There is no separate realm of the supernatural, spiritual, or divine; nor is there any cosmic teleology or transcendent purpose inherent in the nature of the universe or in human life. “Life” and “consciousness” do not denote essences distinct from matter; they are ways of talking about phenomena that emerge from the interplay of extraordinarily complex systems. Purpose and meaning in life arise through fundamentally human acts of creation, rather than being derived from anything outside ourselves.

It’s a bit of a leap, in the face of all of our commonsense experience, to think that life can simply start up out of non-life, or that our experience of consciousness needs no more ingredients than atoms obeying the laws of physics.

We don’t know how the universe began, or if it’s the only universe. We don’t know the ultimate, complete laws of physics. We don’t know how life began, or how consciousness arose. And we certainly haven’t agreed on the best way to live in the world as good human beings.

The pressing, human questions we have about our lives depend directly on our attitudes toward the universe at a deeper level. For many people, those attitudes are adopted rather informally from the surrounding culture, rather than arising out of rigorous personal reflection.

The absence of a supernatural guiding force doesn’t mean we can’t meaningfully talk about right and wrong, but it doesn’t mean we instantly know one from the other, either.

Imagine a transporter machine that could disassemble a single individual and reconstruct multiple exact copies of them out of different atoms. Which one, if any, would be the “real” one? If there were just a single copy, most of us would have no trouble accepting them as the original person. (Using different atoms doesn’t really matter; in actual human bodies, our atoms are lost and replaced all the time.) Or what if one copy were made of new atoms, while the original you remained intact—but the original suffered a tragic death a few seconds after the duplicate was made. Would the duplicate count as the same person?

Theseus, the legendary founder of Athens, had an impressive ship in which he had fought numerous battles. To honor him, the citizens of Athens preserved his ship in their port. Occasionally a plank or part of the mast would decay beyond repair, and at some point that piece would have to be replaced to keep the ship in good order. Once again we have a question of identity: is it the same ship after we’ve replaced one of the planks? If you think it is, what about after we’ve replaced all of the planks, one by one? And (as Thomas Hobbes went on to ask), what if we then took all the old planks and built a ship out of them? Would that one then suddenly become the Ship of Theseus? Narrowly speaking, these are all questions about identity. When is one thing “the same thing” as some other thing?

It just means that the notion of a ship is a derived category in our ontology, not a fundamental one. It is a useful way of talking about certain subsets of the basic stuff of the universe. We invent the concept of a ship because it is useful to us, not because it’s already there at the deepest level of reality. Is it the same ship after we’ve gradually replaced every plank? I don’t know. It’s up to us to decide. The very notion of “ship” is something we created for our own convenience.

Should we count only the underlying stuff of the world as real, and all the different ways we have of dividing it up and talking about it as merely illusions? That’s the most hard-core attitude we could take to reality, sometimes called eliminativism.

Naturalism comes down to three things: 1. There is only one world, the natural world. 2. The world evolves according to unbroken patterns, the laws of nature. 3. The only reliable way of learning about the world is by observing it.

According to classical physics, to know the future—in principle—requires only precise knowledge of the present moment, not any additional knowledge of the past. In modern parlance, Laplace was pointing out that the universe is something like a computer. You enter an input (the state of the universe right now), it does a calculation (the laws of physics) and gives you an output (the state of the universe one moment later). Laplace imagined a “vast intellect” that knew the positions and velocities of all the particles in the universe, and understood all the forces they were subject to, and had sufficient computational power to apply Newton’s laws of motion. In that case, as he put it, “for such an intellect nothing would be uncertain, and the future just like the past would be present before its eyes.” His contemporaries immediately judged “vast intellect” to be too boring, and renamed it Laplace’s Demon.

By the “state” of the universe, or any subsystem thereof, we mean the position and the velocity of every particle within it. Together, you give me the state of the universe at one time, and I can use the laws of physics to integrate forward (or backward) and get the state of the universe at any other time.
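
The "state = positions + velocities, integrate forward or backward" idea can be sketched in a few lines of Python. This is my own toy illustration (a single mass on a spring, not anything from the book): the velocity Verlet integrator is time-reversible, so running the same law with the time step negated recovers the initial state, just as the excerpt describes.

```python
def step(x, v, dt, k=1.0, m=1.0):
    """One velocity-Verlet step for a mass on a spring (force F = -k*x)."""
    a = -k * x / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = -k * x_new / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

# The "state" is exactly what the excerpt says: position plus velocity.
x, v = 1.0, 0.0          # initial state
dt, n = 0.01, 1000

for _ in range(n):       # integrate forward in time
    x, v = step(x, v, dt)

for _ in range(n):       # same law, dt -> -dt: integrate backward
    x, v = step(x, v, -dt)

# x, v are now back at the initial state, up to floating-point roundoff.
```

Nothing in the dynamical law distinguishes the two directions of time; the backward run uses the identical rule with a negated step.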

Unlike a particle, which has a position in space, a field has a value at every single point in space—that’s just what a field is. But we can treat that field value like a “position,” and its rate of change as a “velocity,” and the whole Laplacian thought experiment goes through undisturbed. The same is true for Einstein’s general theory of relativity, or Schrödinger’s equation in quantum mechanics, or modern speculations such as superstring theory. Since the days of Laplace, every serious attempt at understanding the behavior of the universe at a deep level has included the feature that the past and future are determined by the present state of the system. (One possible exception is the collapse of the wave function in quantum mechanics.) This principle goes by a simple, if potentially misleading, name: conservation of information. Conservation of information implies that each moment contains precisely the right amount of information to determine every other moment. (Information: the complete specification of the state of the system, everything you could possibly know about it.)

Quantum mechanics has supplanted classical mechanics as the best way we know to talk about the universe at a deep level. Unfortunately, and to the chagrin of physicists everywhere, we don’t fully understand what the theory actually is. We know that the quantum state of a system, left alone, evolves in a perfectly deterministic fashion, free even of the rare but annoying examples of non-determinism that we can find in classical mechanics. But when we observe a system, it seems to behave randomly, rather than deterministically. The wave function “collapses,” and we can state with very high precision the relative probability of observing different outcomes, but never know precisely which one it will be.

There are several competing approaches as to how to best understand the measurement problem in quantum mechanics. Some involve true randomness, while others (such as my favorite, the Everett or Many-Worlds formulation) retain complete determinism.

There is a bit of a mismatch between Laplace’s notion of determinism and what most people think of when they hear “the future is determined.” The latter phrase conjures up images of destiny or fate—the idea that what will eventually happen has “already been decided,” with the implication that it’s been decided by someone, or something. The physical notion of determinism is different from destiny or fate in a subtle but crucial way: because Laplace’s Demon doesn’t actually exist, the future may be determined by the present, but literally nobody knows what it will be.

There is also another way of talking about it, where we zoom out a bit and introduce categories like “people” and “choices.” Unlike our best theory of planets or pendulums, our best theories of human behavior are not deterministic. We don’t know any way to predict what a person will do based on what we can readily observe about their current state. Whether we think of human behavior as determined depends on what we know.

Looking for causes and reasons is a deeply ingrained human impulse. We are pattern-recognizing creatures, quick to see faces in craters on Mars or connections between the location of Venus in the sky and the state of our love life. Not only do we seek order and causation, but we favor fairness as well.

Searching for reasons why things happen is by no means an irrational pursuit. In many familiar contexts, things don’t “just happen.” If you are sitting in your living room and a baseball suddenly crashes through your window, it makes sense to look outside and expect to see some kids at play.

The mistake is to elevate this expectation to an unbreakable principle. We see things happen, and we attribute reasons to them. Not only with events at home and people’s personal fates but all the way down to the basics of ontology. If the world consists of certain things and behaves in certain ways, we think, there must be a reason why it is so. This mistake has a name: the Principle of Sufficient Reason.

Principle of Sufficient Reason: For any true fact, there is a reason why it is so, and why something else is not so instead. Leibniz once formulated it simply as “Nothing happens without a reason,” which is remarkably close to the maxim “Everything happens for a reason,” which you can buy on T-shirts and bumper stickers today.

Whenever we are confronted with questions about belief, we can employ the technique called abduction, or “inference to the best explanation.” Abduction is a type of reasoning that can be contrasted with deduction and induction. With deduction, we start with some axioms whose truth we do not question, and derive rigorously necessary conclusions from them. With induction, we start with some examples we know about, and generalize to a wider context—rigorously, if we have some reason for believing that such a generalization is always correct, but often we don’t quite have that guarantee. With abduction, by contrast, we take all of our background knowledge about how the world works, and perhaps some preference for simple explanations over complex ones (Occam’s razor), and decide what possible explanation provides the best account of all the facts we have.

It may seem strange to suggest, on the one hand, that we live in a Laplacian universe where one moment follows directly from the next in accordance with unbreakable laws of physics, and on the other hand that there are facts that don’t have any reasons to explain them. Can’t we always give a reason for what happens, namely “the laws of physics and the prior configuration of the universe”? That depends on what we mean by a “reason.” It’s important to first distinguish between two kinds of “facts” we might want to explain. There are things that happen—that is, states of the universe (or parts thereof) at specific moments in time. And then there are features of the universe, such as the laws of physics themselves. The kinds of reasons that would suffice to explain one have a different character from the other. When it comes to “things that happen,” what we mean by a “reason” is essentially the same as what we mean when we refer to the “cause” of an event. And yes, we are free to say that events are explained or caused by “the laws of physics and the prior configuration of the universe.” That’s true even in quantum mechanics, which is itself sometimes erroneously offered up as an example of things (like the decay of an atomic nucleus) happening without reasons. If that’s what one is looking for in a reason, the laws of physics do indeed provide it. Not as some metaphysical principle but as an observed pattern in our universe. However, that isn’t really what people have in mind when they’re searching for reasons. What we are really after is some identifiable aspect of the configuration of the universe without which the event in question would not have occurred. The laws themselves, as we’ve discussed, make no reference to “reasons” or “causes.” They are simply patterns that connect what happens at different places and times. 
What we might want to ask is: “What is the reason why it makes sense to talk about ‘reasons why’?” And there’s a good answer, namely: because of the arrow of time.

We don’t know whether the Big Bang was the actual beginning of time, but it was a moment in time beyond which we can’t see any further into the past, so it’s the beginning of our observable part of the cosmos. The particular kind of arrangement the universe was in at that time is one with a very low entropy—the scientific way of measuring disorderliness or randomness of a system. Entropy used to be very low, and has been growing ever since—which is to say our observable universe used to be in a specific, orderly arrangement, and has been becoming more disorderly for 14 billion years.

It’s that tendency for entropy to increase that is responsible for the existence of time’s arrow. It’s easy to break eggs, and hard to unbreak them; cream and coffee mix together, but don’t unmix; we were all born young, and gradually grow older; we remember what happened yesterday, but we don’t remember what will happen tomorrow. Most of all, what causes an event must precede the event, not come afterward.

Just as there is no reference to “causes” in the fundamental laws of physics, there isn’t an arrow of time, either. The laws treat the past and future on an equal footing. But the usefulness of our everyday language of explanation and causation is intimately tied to time’s arrow. Without it, those terms wouldn’t be a useful way of talking about the universe at all.

The “reasons” and “causes” why things happen, in other words, aren’t fundamental; they are emergent. We need to dig in to the actual history of the universe to see why these concepts have emerged.

An obvious place where it’s tempting to look for reasons why is the question of why various features of the universe take the form that they do. Why was the entropy low near the Big Bang? Why are there three dimensions of space? Why is the proton almost 2,000 times heavier than the electron? Why does the universe exist at all? These are very different questions from “Why is there an accordion in my bathtub?” We’re no longer asking about occurrences, so “Because of the laws of physics and the prior configuration of the universe” isn’t a good answer. Now we’re trying to figure out why the fundamental fabric of reality is one way rather than some other way. The secret here is to accept that such questions may or may not have answers. We have every right to ask them, but we have no right at all to demand an answer that will satisfy us. We have to be open to the possibility that they are brute facts, and that’s just how things are.

But the universe, and the laws of physics, aren’t embedded in any bigger context, as far as we know. They might be—we should be open-minded about the possibility of something outside our physical universe, whether it’s a nonphysical reality or something more mundane, like an ensemble of universes that make up a multiverse.

Alternatively, we could discover reasons why the laws of physics themselves necessitate that something we thought was arbitrary (like the masses of the proton and the electron) can actually be derived from a deeper principle.

It’s hard to count precisely how many, but there are over 100 billion stars in the Milky Way. It’s not alone; scattered throughout observable space we find at least 100 billion galaxies, typically with sizes roughly comparable to that of our own. (By coincidence, the number 100 billion is also a very rough count of the number of neurons in a human brain.) Recent studies of relatively nearby stars suggest that most of them have planets of some sort, and perhaps one in six stars has an “Earth-like” planet orbiting around it.

Perhaps the most notable feature of the distribution of galaxies through space is that, the farther out we look, the more uniform things become. On the very largest scales, the universe is extremely smooth and featureless. There is no center, no top or bottom, no edges, no preferred location at all.

According to general relativity, if we keep running the movie of the early universe backward, we come to a singularity at which the density and expansion rate approach infinity.

When we talk about the “Big Bang model,” we have to be careful to distinguish that from “the Big Bang” itself. The former is an extraordinarily successful theory of the evolution of the observable universe; the latter is a hypothetical moment that we know almost nothing about.

The Big Bang itself, as predicted by general relativity, is a moment in time, not a location in space. It would not be an explosion of matter into an empty, preexisting void; it would be the beginning of the entire universe, with matter smoothly distributed all throughout space, all at once. It would be the moment prior to which there were no moments: no space, no time. It’s also, most likely, not real. The Big Bang is a prediction of general relativity, but singularities where the density is infinitely big are exactly where we expect general relativity to break down—they are outside the theory’s domain of applicability. At the very least, quantum mechanics should become crucially important under such conditions, and general relativity is a purely classical theory. So the Big Bang doesn’t actually mark the beginning of our universe; it marks the end of our theoretical understanding.

In 1998 two teams of astronomers announced that the universe wasn’t only expanding; it was accelerating. Normally we’d expect the expansion of the universe to slow down as the gravitational forces between the galaxies worked to pull them together. The observed acceleration must be due to something other than matter as we know it. There is a very obvious, robust candidate for what the culprit might be: vacuum energy, which Einstein invented and called the cosmological constant. Vacuum energy is a kind of energy that is inherent in space itself, remaining at a constant density (amount of energy per cubic centimeter) even as space expands. Due to the interplay of energy and spacetime in general relativity, vacuum energy never runs out or fades away; it can keep pushing forever. We don’t know for sure whether it will keep pushing forever, of course; we can only extrapolate our theoretical understanding into the future. But it’s possible, and in some sense would be simplest, for the accelerated expansion to simply continue without end.

That leads to a somewhat lonely future for our universe. Right now the night sky is alive with brightly shining stars and galaxies. That can’t last forever; stars use up their fuel, and will eventually fade to black. Astronomers estimate that the last dim star will wink out around 1 quadrillion (10^15) years from now. By then other galaxies will have moved far away, and our local group of galaxies will be populated by planets, dead stars, and black holes. One by one, those planets and stars will fall into the black holes, which in turn will join into one supermassive black hole. Ultimately, as Stephen Hawking taught us, even those black holes will evaporate. After about 1 googol (10^100) years, all of the black holes in our observable universe will have evaporated into a thin mist of particles, which will grow more and more dilute as space continues to expand. The end result of this, our most likely scenario for the future of our universe, is nothing but cold, empty space, which will last literally forever.

We are small, and the universe is large. It’s hard, upon contemplating the scale of the cosmos, to think that our existence here on Earth plays an important role in the purpose or destiny of it all.

We look at the world around us and describe it in terms of causes and effects, reasons why, purposes and goals. None of those concepts exists as part of the fundamental furniture of reality at its deepest. They emerge as we zoom out from the microscopic level to the level of the everyday.

If you were an astronaut, floating in your spacesuit while you performed an extravehicular activity, you wouldn’t notice any difference between one direction in space and any other. The reason why there’s a noticeable distinction between up and down for us isn’t because of the nature of space; it’s because we live in the vicinity of an extremely influential object: the Earth.

Time works the same way. In our everyday world, time’s arrow is unmistakable, and you would be forgiven for thinking that there is an intrinsic difference between past and future. In reality, both directions of time are created equal. The reason why there’s a noticeable distinction between past and future isn’t because of the nature of time; it’s because we live in the aftermath of an extremely influential event: the Big Bang.

For every way that a system can evolve forward in time in accordance with the laws of physics, there is another allowed evolution that is just “running the system backward in time.” There is nothing in the underlying laws that says things can evolve in one direction in time but not the other. Physical motions, to the best of our understanding, are reversible. Both directions of time are on an equal footing. None of these processes violates the laws of physics—it’s just that they are extraordinarily unlikely. The real question is not why we never see eggs unbreaking toward the future; it’s why we see them unbroken in the past.

A low-entropy configuration is one where relatively few states would look that way, while a high-entropy one corresponds to many possible states. There are many ways to arrange molecules of cream and coffee so that they look all mixed together; there are far fewer arrangements where all of the cream is on the top and all of the coffee on the bottom.
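
The counting argument in this excerpt can be made concrete with a toy model of my own (not from the book): ten "molecules," each of which sits in either the top or the bottom half of the cup. A macrostate is "k molecules on top"; its Boltzmann entropy is the log of the number of arrangements that look that way.

```python
from math import comb, log

N = 10  # toy "molecules", each in the top or bottom half of the cup
for k in (0, 5):                 # macrostate: k molecules in the top half
    W = comb(N, k)               # number of microstates with that appearance
    S = log(W) if W > 1 else 0.0 # Boltzmann entropy S = ln W (with k_B = 1)
    print(f"k={k}: W={W} arrangements, S={S:.2f}")
# k=0 (the "all cream on top" analogue) has exactly 1 arrangement: low entropy.
# k=5 (the "all mixed" analogue) has 252 arrangements: high entropy.
```

With realistic numbers of molecules the imbalance is astronomically larger, which is why the mixed state overwhelmingly dominates.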

When the entropy of a system is as high as it can get, we say that the system is in equilibrium. In equilibrium, time has no arrow.

What Boltzmann successfully explained is why, given the entropy of the universe today, it’s very likely to be higher-entropy tomorrow. The problem is that, because the underlying rules of Newtonian mechanics don’t distinguish between past and future, precisely the same analysis should predict that the entropy was higher yesterday, as well. Nobody thinks the entropy actually was higher in the past, so we have to add something to our picture.

From the Laplacian point of view, where information is present in each moment and conserved through time, a memory isn’t some kind of direct access to events in the past. It must be a feature of the present state, since the present state is all we presently have. And yet there is an epistemic asymmetry, an imbalance of knowledge, between past and future. That asymmetry is a consequence of the low entropy of the early universe. Think of walking down the street and noticing a broken egg lying on the sidewalk. Ask yourself what the future of that egg might have in store, in comparison with its recent past. In the future, the egg might wash away in a storm, or a dog might come by and lap it up, or it might just fester for a few more days. Many possibilities are open. In the past, however, the basic picture is much more constrained: it seems exceedingly likely that the egg used to be unbroken, and was dropped or thrown to this location. The story of the egg is a paradigm for every kind of “memory” we might have. It’s not just literal memories in our brain; any records that we may have of past events, from photographs to history books, work on the same principle. All of these records, including the state of certain neuronal connections in our brain that we classify as a memory, are features of the current state of the universe. The current state, by itself, constrains the past and future equally. But the current state plus the hypothesis of a low-entropy past gives us enormous leverage over the actual history of the universe. It’s that leverage that lets us believe (often correctly) that our memories are reliable guides to what actually happened. Understanding context becomes important because our invocation of causality relies on comparing what actually happened to what could have happened, in a different hypothetical world. Philosophers refer to this as modal reasoning—thinking not only about what does happen but about what could happen in possible worlds.

Among the small but passionate community of probability-theory aficionados, fierce debates rage over What Probability Really Is. In one camp are the frequentists, who think that “probability” is just shorthand for “how frequently something would happen in an infinite number of trials.”

In another camp are the Bayesians, for whom probabilities are simply expressions of your states of belief in cases of ignorance or uncertainty. For a Bayesian, saying there is a 50 percent chance of the coin coming up heads is merely to state that you have zero reason to favor one outcome over another.
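
The frequentist reading of "probability 0.5" can be checked in simulation. This is a toy sketch of my own, not from the book: over a long run of simulated fair-coin flips, the observed fraction of heads settles toward 0.5.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Frequentist reading of "the coin has probability 0.5 of heads":
# in a long run of trials, the fraction of heads approaches 0.5.
flips = [random.random() < 0.5 for _ in range(100_000)]
freq = sum(flips) / len(flips)
print(f"observed frequency of heads: {freq:.3f}")  # close to 0.500
```

The Bayesian, by contrast, would say the 50 percent figure describes your state of belief before any flips at all, and no long run is needed to make it meaningful.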

We’re interested in beliefs: things that people think are true, or at least likely to be true. The word “belief” is sometimes used as a synonym for “thinking something is true without sufficient evidence,” a concept that drives nonreligious people crazy and causes them to reject the word entirely. We’re going to use the word to mean anything we think is true regardless of whether we have a good reason for it; it’s perfectly okay to say “I believe that two plus two equals four.” What we actually have are degrees of belief, which professional statisticians refer to as credences.

Bayes’s main idea, now known simply as Bayes’s Theorem, is a way to think about credences. It allows us to answer the following question. Imagine that we have certain credences assigned to different beliefs. Then we gather some information, and learn something new. How does that new information change the credences we have assigned? That’s the question we need to be asking ourselves over and over, as we learn new things about the world.

These starting chances are known as your prior credences. They are the credences you have in mind to start, prior to learning anything new. But then something happens: your friend discards a certain number of cards, and draws an equal number of replacements. That’s new information, and you can use it to update your credences. These likely behaviors, sensibly enough, are called the likelihoods of the problem. By combining the prior credences with the likelihoods, we arrive at updated credences for what their starting hand was. Those updated chances are naturally known as the posterior credences.
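
The prior → likelihood → posterior pipeline described above is a one-liner of arithmetic: multiply each prior by the likelihood of what you observed, then renormalize. Here is a minimal sketch using a made-up example of my own (a fair coin versus one biased 90 percent toward heads), not the card game from the excerpt.

```python
def bayes_update(priors, likelihoods):
    """Multiply each prior credence by the likelihood of the observed
    data under that hypothesis, then renormalize to sum to 1."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypotheses: the coin is fair, or it is biased to land heads 90% of the time.
priors = {"fair": 0.5, "biased": 0.5}

# New information: a single flip comes up heads.
likelihoods = {"fair": 0.5, "biased": 0.9}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # fair ≈ 0.357, biased ≈ 0.643
```

Each additional observation is handled the same way, with the current posteriors serving as the priors for the next update.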

A startling claim is more likely to be believed if there is a compelling theoretical explanation ready to hand. The existence of such an explanation increases the prior credence we would assign to the claim in the first place. Once we admit that we all start out with a rich set of prior credences, the crucial step is to update those credences when new information comes in. When priors are very large or very small, the data has to be very surprising in order to shift our credences.

In the Bayesian philosophy, to every proposition that may or may not be true about the world, we assign a prior credence. Each such proposition also comes with a collection of likelihoods: the chances that various other things would be true if that proposition were true. Every time we observe new information, we update our degrees of belief by multiplying our original credences by the relevant likelihood of making that observation under each of the propositions.

Bayes’s Theorem allows us to be quantitative about our degrees of belief, but it also helps us keep in mind how belief works at all.

Prior beliefs matter. When we’re trying to understand what is true about the world, everyone enters the game with some initial feeling about what propositions are plausible, and what ones seem relatively unlikely. This isn’t an annoying mistake that we should work to correct; it’s an absolutely necessary part of reasoning in conditions of incomplete information.

Simple theories should be given larger priors than complicated ones.

Some people don’t like the Bayesian emphasis on priors, because they seem subjective rather than objective. And that’s right—they are. It can’t be helped; we have to start somewhere. Everyone’s entitled to their own priors, but not to their own likelihoods. Evidence should move us toward consensus.

The credence we assign to a theory should go down every time we make observations that are more probable in competing theories. The shift might be small, but it is there.

All evidence matters. It’s not hard to pretend we’re being good Bayesians while we’re actually cooking the books by looking at some evidence but not all of it.

Bayes’s Theorem is one of those insights that can change the way we go through life. Each of us comes equipped with a rich variety of beliefs, for or against all sorts of propositions. Bayes teaches us (1) never to assign perfect certainty to any such belief; (2) always to be prepared to update our credences when new evidence comes along; and (3) how exactly such evidence alters the credences we assign. It’s a road map for coming closer and closer to the truth.

Radical skepticism is less useful to us; it gives us no way to go through life. All of our purported knowledge, and all of our goals and aspirations, might very well be tricks being played on us. But what then? We cannot actually act on such a belief, since any act we might think is reasonable would have been suggested to us by that annoying demon. Whereas, if we take the world roughly at face value, we have a way of moving forward. There are things we want to do, questions we want to answer, and strategies for making them happen.

We have every right to give high credence to views of the world that are productive and fruitful, in preference to those that would leave us paralyzed with ennui.

Is it possible that you, and everything you’ve ever experienced, are simply a simulation being conducted by a higher level of intelligent being? Sure, it’s possible. It’s not even, strictly speaking, a skeptical hypothesis: there is still a real world, presumably structured according to laws of nature. It’s just one to which we don’t have direct access. If our concern is to understand the rules of the world we do experience, the right attitude is: so what? Even if our world has been constructed by higher-level beings rather than constituting the entirety of reality, by hypothesis it’s all we have access to, and it’s an appropriate subject of study and attempted understanding.

These discoveries indicate that the world operates by itself, free of any external guidance. Together they have dramatically increased our credence in naturalism: there is only one world, the natural world, operating according to the laws of physics. But they also highlight a looming question: Why does the world of our everyday experience seem so different from the world of fundamental physics? Why aren’t the basic workings of reality perfectly obvious at first glance? While there is one world, there are many ways of talking about it. We refer to these ways as “models” or “theories” or “vocabularies” or “stories”; it doesn’t matter. It’s not good enough that the stories succeed individually; they have to fit together.

One pivotal word enables that reconciliation between all the different stories: emergence. A property of a system is “emergent” if it is not part of a detailed “fundamental” description of the system, but it becomes useful or even inevitable when we look at the system more broadly. A naturalist believes that human behavior emerges from the complex interplay of the atoms and forces that make up individual human beings.

This example illustrates a number of features that commonly appear in discussions of emergence:

  • The different stories or theories use utterly different vocabularies; they are different ontologies, despite describing the same underlying reality. In one we talk about the density, pressure, and viscosity of the fluid; in the other we talk about the position and velocity of all the individual molecules. Each story comes with an elaborate set of ingredients—objects, properties, processes, relations—and those ingredients can be wildly different from one story to another, even if they are all “true.”
  • Each theory has a particular domain of applicability. The fluid description wouldn’t be legitimate if the number of molecules in a region were so small that the effects of particular molecules were important individually, rather than only in aggregate. The molecular description is effective under wider circumstances, but still not always; we could imagine packing enough molecules into a small enough region of space that they collapsed to make a black hole, and the molecular vocabulary would no longer be appropriate.
  • Within their respective domains of applicability, each theory is autonomous—complete and self-contained, neither relying on the other. If we’re speaking the fluid language, we describe the air using density and pressure and so on. Specifying those quantities is enough to answer whatever questions we have about the air, according to that theory. In particular, we don’t need to ever refer to any ideas about molecules and their properties. Historically, we talked about air pressure and velocity long before we knew it was made of molecules. Likewise, when we are talking about molecules, we don’t ever have to use words like “pressure” or “viscosity”—those concepts simply don’t apply.

The important takeaway here is that stories can invoke utterly different ideas, and yet accurately describe the same underlying stuff. This will be crucially important down the line. Organisms can be alive even if their constituent atoms are not. Animals can be conscious even if their cells are not. People can make choices even if the very concept of “choice” doesn’t apply to the pieces of which they are made.

If we have two different theories that both accurately describe the same underlying reality, they must be related to each other and mutually consistent.

Coarse-graining goes one way—from microscopic to macroscopic—but not the other way. You can’t discover the properties of the microscopic theory just from knowing the macroscopic theory.
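The one-way nature of coarse-graining can be made concrete with a toy example (my illustration, not the book's): treat the "macrostate" as a simple average over a list of microscopic values.

```python
# Coarse-graining sketch: many distinct microstates map to the same macrostate,
# so knowing the macroscopic description cannot recover the microscopic one.
def coarse_grain(microstate):
    # Toy macrostate: just the mean value (think "average energy per molecule").
    return sum(microstate) / len(microstate)

micro_a = [1.0, 2.0, 3.0]
micro_b = [2.0, 2.0, 2.0]  # a completely different microstate...
assert coarse_grain(micro_a) == coarse_grain(micro_b) == 2.0  # ...same macrostate
```

Because the map is many-to-one, there is no inverse function from macrostates back to microstates; that is the asymmetry the text is pointing at.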

Quantum mechanics, in particular, features the phenomenon of entanglement. It’s not possible to specify the state of a system by listing the state of all of its subsystems individually; we have to look at the system as a whole, because different parts of it can be entangled with one another. To dig a bit deeper, when we combine quantum mechanics with gravity, it is widely believed (although not known for certain, since we know almost nothing for certain about quantum gravity) that space itself is emergent rather than fundamental. Then it doesn’t even make sense to talk about “a location in space” as a fundamental concept.
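The claim that an entangled system cannot be specified subsystem-by-subsystem can be shown with a toy two-particle state (my illustration; the amplitudes below describe a Bell-like state, which the text does not name).

```python
# Toy entangled pair: amplitudes assigned to joint outcomes of two subsystems.
# An unentangled (product) state would factor as amp[(a, b)] = f(a) * g(b),
# which forces amp(u,u)*amp(d,d) == amp(u,d)*amp(d,u). This state violates that,
# so neither subsystem has a state of its own -- only the whole does.
import math

s = 1 / math.sqrt(2)
amp = {("up", "up"): s, ("up", "down"): 0.0,
       ("down", "up"): 0.0, ("down", "down"): s}

product_test = amp[("up", "up")] * amp[("down", "down")]
cross_test = amp[("up", "down")] * amp[("down", "up")]
assert product_test != cross_test  # cannot be written as f(a) * g(b): entangled
```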

Seeing how relatively easy it is to derive fluid mechanics from molecules, one can get the idea that deriving one theory from another is what emergence is all about. It’s not—emergence is about different theories speaking different languages, but offering compatible descriptions of the same underlying phenomena in their respective domains of applicability.

As systems evolve through time, perhaps in response to changes in their external environment, they can pass from the domain of applicability of one kind of emergent description to a different one—what’s known as a phase transition. Water is the most familiar example. Depending on the temperature and pressure, water can find itself in the form of solid ice, liquid water, or gaseous water vapor. The underlying microscopic description remains the same—molecules of H2O—but the macroscopic properties shift from one “phase” to another. Because of the different conditions, the way that we talk about the water changes.

Consciousness: the awareness of self and the ability to form mental representations of the universe.

Philosopher of science Thomas Kuhn popularized the idea of a “paradigm shift” to describe how new theories could induce scientists to conceptualize the world in starkly different ways.

From the point of view of emergence, the question becomes: how new and different are emergent phenomena? Is an emergent theory just a way of repackaging the microscopic theory, or is it something truly novel? For that matter, is the behavior of the emergent theory derivable, even in principle, from the microscopic description, or does the underlying stuff literally act differently in the macroscopic context? A more provocative way of putting the same questions would be: are emergent phenomena real, or merely illusory? As you might imagine, these questions lie front and center when we start talking about knotty issues such as the emergence of consciousness or free will.

Is behavior at the macroscopic level incompatible—literally inconsistent with—how we would expect the system to behave if we knew only the microscopic rules? This is where all hell breaks loose. We’re now entering into the realm known as strong emergence. So far we’ve been discussing “weak emergence”: even if the emergent theory gives you new understanding and an enormous increase in practicality in terms of calculations, in principle you could put the microscopic theory on a computer and simulate it, thereby finding out exactly how the system would behave. In strong emergence—if such a thing actually exists—that wouldn’t be possible. When many parts come together to make a whole, in this view, not only should we be on the lookout for new knowledge in the form of better ways to describe the system, but we should contemplate new behavior. In strong emergence, the behavior of a system with many parts is not reducible to the aggregate behavior of all those parts, even in principle. [If strong emergence exists, we won't know how the world operates even if we figure out the fundamental physics.]

A strong emergentist will say: No, you can’t do that. That atom is part of you, a person, and you can’t predict the behavior of that atom without understanding something about the bigger person-system. Knowing about the atom and its surroundings is not enough. That is certainly a way the world could work. If it’s how the world actually does work, then our purported microscopic theory of the atom is simply wrong.

There’s no ambiguity in what that atom is supposed to do, according to our best theory of physics. If there are situations in which the atom behaves otherwise, such as when it’s part of the tip of my finger, then our theory is wrong and we have to do better.

A poetic naturalist has another way out: something is “real” if it plays an essential role in some particular story of reality that, as far as we can tell, provides an accurate description of the world within its domain of applicability. Atoms are real; tables are real; consciousness is undoubtedly real.

Illusions are just mistakes, concepts that play no useful role in descriptions at any level of coarse-graining.

Consciousness is not an illusion, even if we think it is “just” an emergent way of talking about our atoms each individually obeying the laws of physics. If hurricanes are real—and it makes sense to think that they are—even though they are just atoms in motion, there is no reason why we should treat consciousness any differently.

The most seductive mistake we can be drawn into when dealing with multiple stories of reality is to mix up vocabularies appropriate to different ways of talking. Someone might say, “You can’t truly want anything, you’re just a collection of atoms, and atoms don’t have wants.” It’s true that atoms don’t have wants; the idea of a “want” is not part of our best theory of atoms. There would be nothing wrong with saying “None of these atoms making up you want anything.” But it doesn’t follow that you can’t have wants. “You” are not part of our best theory of atoms either; you are an emergent phenomenon, meaning that you are an element in a higher-level ontology that describes the world at a macroscopic level. At the level of description where it is appropriate to talk about “you,” it’s also perfectly appropriate to talk about wants and feelings and desires. Those are all real phenomena in our best understanding of human beings. You can think of yourself as an individual human being, or you can think of yourself as a collection of atoms. Just not both at the same time, at least when it comes to asking how one kind of thing interacts with another one.

Even wildly different priors will eventually be swamped by the process of updating if we collect enough evidence. If we try to be as honest as possible with others and with ourselves, we can hope to bring our planets of belief into closer alignment.

Science never proves anything. A lot depends on our definition of “proof.” Scientists will often have in their minds the kind of proof we have access to in mathematics or logic: a rigorous demonstration of the truth of a proposition, starting with some explicitly stated axioms. This differs in important ways from how we might hear “proof” used in casual conversation, where it’s closer to “sufficient evidence that we believe something is true.” In a court of law, where precision is a goal but metaphysical certitude can never be attained, the flexible nature of proof is explicitly recognized by invoking different standards depending on the case. In US civil courts, proving your case requires that a “preponderance of evidence” be on your side. In some administrative courts, “clear and convincing evidence” is required. And a criminal defendant is not considered to be proven guilty unless the case has been demonstrated “beyond a reasonable doubt.”

The truths of math and logic would be true in any possible world; the things science teaches us are true about our world, but could have been false in some other one. Most of the interesting things it is possible to know are not things we could ever hope to “prove,” in the strong sense.

Even when we do believe a theory beyond reasonable doubt, we still understand that it’s an approximation, likely (or certain) to break down somewhere.

The resolution is to admit that some credences are so small that they’re not worth taking seriously. It makes sense to act as if we know those possibilities to be false. So we take “I believe x” not to mean “I can prove x is the case,” but rather “I feel it would be counterproductive to spend any substantial amount of time and effort doubting x.”

Math is all about proving things, but the things that math proves are not true facts about the actual world. They are the implications of various assumptions. A mathematical demonstration shows that given a particular set of assumptions (such as the axioms of Euclidean geometry or of number theory), certain statements inevitably follow (such as the angles inside a triangle adding up to 180 degrees, or there being no largest prime number). In logic, as in math, we start with axioms and derive results that inevitably follow from them. The statements we can prove based on explicitly stated axioms are known as theorems. But “theorem” doesn’t imply “something that is true”; it only means “something that definitely follows from the stated axioms.” For the conclusion of the theorem to be “true,” we would also require that the axioms themselves be true.

Math is concerned with truths that would hold in any possible world: given these axioms, these theorems will follow. Science is all about discovering the actual world in which we live.

One way that inner, personal spiritual experiences would count as genuine evidence against naturalism would be if it were possible to demonstrate that such mental states—feelings of being in touch with something greater, of being outside one’s own body, dissolving the boundaries of self, communicating with nonphysical spirits, participating in a kind of cosmic joy—did not, or could not, arise from ordinary material causes.

This can sound reminiscent of the old postmodern slogan that “reality is socially constructed.” There’s a sense in which that’s true. What’s socially constructed are the ways we talk about the world, and if a particular way of talking involves concepts that are useful and fit the world quite accurately, it’s fair to refer to those concepts as “real.” But we can’t forget that there is a single world underlying it all, and there’s no sense in which the underlying world is socially constructed. It simply is, and we take on the task of discovering it and inventing vocabularies with which to describe it.

The question, however, is whether a particular way of talking about the world is useful. And usefulness is always relative to some purpose. If we’re being scientists, our goal is to describe and understand what happens in the world, and “useful” means “providing an accurate model of some aspect of reality.” If we’re interested in a person’s health, “useful” might mean “helping us see how to make a person more healthy.”

Everyone knows Friedrich Nietzsche proclaimed that God is dead. What becomes of meaning and purpose when we can’t rely on gods to provide them?

I loved mind-bending ideas, and what’s more mind-bending than the possibility that the mind itself can actually bend things?

There are things we don’t understand about, for example, treating the common cold. But there is no reason to think that cold viruses are anything other than particular arrangements of atoms obeying the rules of particle physics. And that knowledge puts limits on what those viruses can possibly do.

We never know anything about the empirical world with absolute certainty. We must always be open to changing our theories in the face of new information. But we can, in the spirit of the later Wittgenstein, be sufficiently confident in some claims that we treat the matter as effectively settled. It’s possible that at noon tomorrow, the force of gravity will reverse itself, and we’ll all be flung away from the Earth and into space. It’s possible—we can’t actually prove it won’t happen. And if surprising new data or an unexpected theoretical insight forces us to take the possibility seriously, that’s exactly what we should do. But until then, we don’t worry about it.

When we say that a quantum state is a superposition, we don’t mean “it could be any one of various possibilities, we’re not sure which.” We mean “it is a weighted combination of all those possibilities at the same time.”
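A "weighted combination of all those possibilities at the same time" can be written down directly. This is a minimal sketch of a two-outcome (qubit-like) state; the Born rule it uses, that measurement probabilities are squared magnitudes of the complex amplitudes, is assumed here rather than stated in the excerpt.

```python
# Toy superposition: a state as a weighted combination of two outcomes,
# with complex numbers as the weights (amplitudes).
import math

amp_up = complex(1 / math.sqrt(2), 0)
amp_down = complex(0, 1 / math.sqrt(2))  # a relative phase is allowed

# Born rule (assumption for this sketch): probability = |amplitude| squared.
p_up = abs(amp_up) ** 2
p_down = abs(amp_down) ** 2
assert math.isclose(p_up + p_down, 1.0)  # probabilities sum to one
# p_up == p_down == 0.5: before observation the state is genuinely both
# possibilities at once, with equal weight -- not "one of them, we're not sure which."
```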

As that “most likely” position evolves over time, it obeys the rules of classical mechanics, just as Newton and Laplace thought. But there is a chance that when you look at it, you’ll see it somewhere else.

The symbol |Ψ⟩ represents the quantum state.

Evolution according to the Schrödinger equation is very much like the evolution of a state in classical mechanics. It is smooth, reversible, and completely deterministic; Laplace’s Demon would have no problem predicting what the state would be in the past and future.

But there is also an entirely different way the quantum state can evolve, according to the textbook treatment: namely, when it is observed. In that case, we teach our undergraduates, the wave function “collapses,” and we obtain some particular measurement outcome. The collapse is sudden, and the evolution is nondeterministic—knowing what the state was before, you can’t perfectly predict what the state will be afterward. All you have are probabilities.

What counts as an “observer” or an “observation” anyway? Does a microscope count, or does a conscious human being have to be using it? What about a squirrel, or a video camera? What if I just glance at the thing rather than observing it closely? When exactly does the “wave function collapse” take place? Together these issues are known as the measurement problem of quantum mechanics. After fretting about it for decades, physicists still don’t agree on how to address it. They have ideas.

One approach is to suggest that while the wave function plays an important role in predicting experimental outcomes, it doesn’t actually represent physical reality. It might be that there is a deeper way of describing the world, in addition to the wave function, in terms of which the evolution would be in principle completely predictable. This possibility is sometimes called the “hidden variables” approach, since it suggests that we just haven’t yet pinpointed the real way to best describe the state of a quantum system. If such a theory is true, it would have to be nonlocal—parts of the system would have to directly interact with parts at other locations in space.

An even more radical approach is to simply deny the existence of an underlying reality altogether. This would be an antirealist approach to quantum mechanics, since it treats the theory as merely a bookkeeping device for predicting the outcomes of future experiments. If you ask an antirealist what aspect of the current universe that knowledge is about, they will tell you that it’s not a sensible question to ask. There is, in this view, no underlying “stuff” that is being described by quantum mechanics; all we are ever allowed to talk about is the outcomes of experimental measurements.

Fortunately, we have not yet exhausted our possibilities. The simplest possibility is that the quantum wave function isn’t a bookkeeping device at all, nor is it one of many kinds of quantum variables; the wave function simply represents reality directly. Just as Newton or Laplace would have thought of the world as a set of positions and velocities of particles, the modern quantum theorist can think of the world as a wave function, full stop.

If everything is just wave function, what makes states “collapse,” and why is the act of observation so important? A resolution was suggested in the 1950s by a young physicist named Hugh Everett III. He proposed that there is only one piece of quantum ontology—the wave function—and only one way it ever evolves—via the Schrödinger equation. There are no collapses, no fundamental division between system and observer, no special role for observation at all. Everett proclaimed that quantum mechanics fits perfectly comfortably into a deterministic Laplacian view of the world. But if that’s right, why does it seem to us that wave functions collapse when we observe them? The trick, in modern language, can be traced to a feature of quantum mechanics called entanglement.

There is not a wave function for the Earth, another one for Mars, and so on through all of space. There is only one wave function for the entire universe at once—what we call, with no hint of modesty, the “wave function of the universe.” A wave function is simply a number we assign to every possible measurement outcome, like the position of a particle, such that the number tells us the probability of obtaining that outcome. So the wave function of the universe assigns a number to every possible way that objects in the universe could be distributed through space. There’s one number for “the Earth is here, and Mars is over there,” and another number for “the Earth is at this other place, and Mars is yet somewhere else,” and so on. The state of Earth can therefore be entangled with the state of Mars. Both parts of the superposition actually exist, and they continue to exist and evolve as the Schrödinger equation demands. At last, then, we have a candidate for a final answer to the critical ontological question “What is the world, really?” It is a quantum wave function. At least until a better theory comes along.

The two parts of the wave function of the universe, one in which you saw the particle spinning clockwise and the other in which you saw it spinning counterclockwise, subsequently evolve completely independently of each other. There is no future communication or interference between them. That’s because you and the particle become entangled with the rest of the universe, in a process known as decoherence. The different parts of the wave function are different “branches,” so it’s convenient to say that they describe different worlds. (There’s still one “world” in the sense of “the natural world,” described by the wave function of the universe, but there are many different branches of that wave function, and they evolve independently, so we call them “worlds.” Our language hasn’t yet caught up to our physics.)

It’s perfectly deterministic, even though individual observers can’t tell which world they are in before they actually look at it, so there is necessarily some probabilistic component when it comes to people making predictions. And there’s no difficulty in explaining things like the measurement process, or any need to invoke conscious observers to carry out such measurements. Everything is just a wave function, and all wave functions evolve in the same way.

There are two important things to take away from this discussion, as far as the big picture is concerned. One is that, while we don’t have a finished understanding of quantum mechanics at a fundamental level, there is nothing we know about it that necessarily invalidates determinism (the future follows uniquely from the present), realism (there is an objective real world), or physicalism (the world is purely physical). All of these features of the Newtonian/Laplacian clockwork universe can easily still hold true in quantum mechanics—but we don’t know for sure.

Quantum mechanics is, as far as we currently know, the way the universe works. But quantum mechanics isn’t a specific theory of the world; it’s a framework within which particular theories can be constructed. Just as classical mechanics includes the theory of planets moving around the sun, or the theory of electricity and magnetism, or even Einstein’s theory of general relativity, there are an enormous number of particular physical models that qualify as “quantum-mechanical.”

The protons and neutrons are bound to each other by a force (the nuclear force), and the electrons are bound to the nucleus by a different force (electromagnetism), and everything pulls toward everything else because of yet another force (gravitation).

Particles and forces arise out of fields. A field is kind of the opposite of a particle; while a particle has a specific location in space, a field is something that stretches all throughout space, taking on some particular value at every point. Modern physics says that the particles and the forces that make up atoms all arise out of fields. That viewpoint is called quantum field theory.

And what are the fields made of? There isn’t any such thing. The fields are the stuff that everything else is made of. There could always be a deeper level, but we haven’t found it yet.

But what about the particles? Particles and fields seem like they’re diametrically opposed to each other—particles live at one spot, while fields live everywhere. Surely we’re not going to be told that a particle like an electron comes out of some “electron field” filling space? That is exactly what you are going to be told. And the connection is provided by quantum mechanics. The fundamental feature of quantum mechanics is that what we see when we look at something is different from how we describe the thing when we’re not looking at it. When we measure the energy of an electron orbiting a nucleus, we get a definite answer, and that answer is one of a specific number of allowed outcomes; but when we’re not looking at it, the state of the electron is generally a superposition of all those possible outcomes. Fields are exactly the same way. According to quantum field theory, there are certain basic fields that make up the world, and the wave function of the universe is a superposition of all the possible values those fields can take on. If we observe quantum fields—very carefully, with sufficiently precise instruments—what we see are individual particles. For electromagnetism, we call those particles “photons”; for the gravitational field, they’re “gravitons.” We’ve never observed an individual graviton, because gravity interacts so very weakly with other fields, but the basic structure of quantum field theory assures us that they exist. If a field takes on a constant value through space and time, we don’t see anything at all; but when the field starts vibrating, we can observe those vibrations in the form of particles.

There are two basic kinds of fields and associated particles: bosons and fermions. Bosons, such as the photon and graviton, can pile on top of each other to create force fields, like electromagnetism and gravity. Fermions take up space: there can only be one of each kind of fermion in one place at one time. Fermions, like electrons, protons, and neutrons, make up the objects of matter like you and me and chairs and planets, and give them all the property of solidity. As fermions, two electrons can’t be in the same place at the same time; otherwise objects made of atoms would just collapse to a microscopic size.

The ordinary stuff out of which you and I are made, as well as the Earth and everything you see around you, only really involves three matter particles and three forces. Electrons in atoms are bound to the nucleus by electromagnetism, and the nucleus itself is made of protons and neutrons held together by the nuclear force, and of course everything feels the force of gravity. Protons and neutrons, in turn, are made out of two kinds of smaller particles: up quarks and down quarks. They are held together by the strong nuclear force, carried by particles called gluons. The “nuclear force” between protons and neutrons is a kind of spillover of the strong nuclear force. There’s also a weak nuclear force, carried by W and Z bosons, which lets other particles interact with a final kind of fermion, the neutrino. And the four fermions (electron, neutrino, up and down quarks) are just one generation out of a total of three. Finally, in the background lurks the Higgs field, responsible for giving masses to all the particles that have them.

Physicists divide our theoretical understanding of these particles and forces into two grand theories: the standard model of particle physics, which includes everything we’ve been talking about except for gravity, and general relativity, Einstein’s theory of gravity as the curvature of spacetime. We lack a full “quantum theory of gravity”—a model that is based on the principles of quantum mechanics, and matches onto general relativity when things become classical-looking. Superstring theory is one promising candidate for such a model, but right now we just don’t know how to talk about situations where gravity is very strong, like near the Big Bang or inside a black hole, in quantum-mechanical terms. Figuring out how to do so is one of the greatest challenges currently occupying the minds of theoretical physicists around the world.

The Core Theory. It’s the quantum field theory of the quarks, electrons, neutrinos, all the families of fermions, electromagnetism, gravity, the nuclear forces, and the Higgs.

In the previous chapter we concluded that “what the world is” is a quantum wave function. A wave function is a superposition of configurations of stuff. The next question is “What is the stuff that the wave function is a function of?” The answer, as far as the regime of our everyday life is concerned, is “the fermion and boson fields of the Core Theory.”

In field theory, every particle has an antiparticle with the opposite electric charge. The antiparticle of an electron is a particle called the positron, which is positively charged.

One as-yet-undiscovered particle we believe exists is dark matter. Astronomers, studying the motions of stars and galaxies as well as the large-scale structure of the universe, have become convinced that most matter is “dark”—some kind of new particle that is not part of the Core Theory. The dark-matter particle must be quite long-lived, or it would have decayed away long ago. But it cannot interact strongly with ordinary matter, or it would have already been found in one of the many dark-matter detection experiments that physicists are currently running. Whatever the dark matter is, it certainly plays no role in determining the weather here on Earth, or anything having to do with biology, consciousness, or human life.

We can imagine that the correct theory of quantum mechanics will ultimately tell us that wave functions don’t really collapse randomly, for example; perhaps there are subtle features of quantum measurement that have thus far eluded experimental detection, but will end up playing an important role in how we come to understand biology or consciousness. It’s possible.

Then in 1915 along comes Einstein and his theory of general relativity. Space and time are subsumed into a four-dimensional spacetime, and spacetime is not absolute—it is dynamic, stretching and twisting in response to matter and energy. Not long thereafter, we learned that the universe is expanding, which led to the prediction of a Big Bang singularity in the past. In classical general relativity, the Big Bang is the very first moment in the history of the universe. It is the beginning of time. Then in the 1920s we stumbled across quantum mechanics. The “state of the universe” in quantum mechanics isn’t simply a particular configuration of spacetime and matter. The quantum state is a superposition of many different classical possibilities. This completely changes the rules of the game. In classical general relativity, the Big Bang is the beginning of spacetime; in quantum general relativity—whatever that may be, since nobody has a complete formulation of such a theory as yet—we don’t know whether the universe has a beginning or not.

One possibility is that time is fundamental, and the universe changes as time passes. In that case, the Schrödinger equation is unequivocal: time is infinite. If the universe truly evolves, it always has been evolving and always will evolve. There is no starting and stopping. There may have been a moment that looks like our Big Bang, but it would have only been a temporary phase, and there would be more universe that was there even before the event. The other possibility is that time is not truly fundamental, but rather emergent. Then, the universe can have a beginning. The Schrödinger equation has solutions describing universes that don’t evolve at all: they just sit there, unchanging.

That’s a universe that is not evolving in time—the quantum state itself simply is, unchanging and forever. But in any one part of the state, it looks like one moment of time in a universe that is evolving. Every element in the quantum superposition looks like a classical universe that came from somewhere, and is going somewhere else. If there were people in that universe, at every part of the superposition they would all think that time was passing, exactly as we actually do think. That’s the sense in which time can be emergent in quantum mechanics. Quantum mechanics allows us to consider universes that are fundamentally timeless, but in which time emerges at a coarse-grained level of description. And if that’s true, then there’s no problem at all with there being a first moment in time. The whole idea of “time” is just an approximation anyway.

The idea of the universe having a beginning—whether time is fundamental or emergent—suggests to some people that there must be something that brought it into being, and typically that something is identified with God. Said another way: even if the universe has a first moment of time, it’s wrong to say that it “comes from nothing.” That formulation places into our mind the idea that there was a state of being, called “nothing,” which then transformed into the universe. That’s not right; there is no state of being called “nothing,” and before time began, there is no such thing as “transforming.” What there is, simply, is a moment of time before which there were no other moments.

The second mistake is to assert that things don’t simply pop into existence, rather than asking why that doesn’t happen in the world we experience. I can be fairly confident that a bowl of ice cream isn’t going to materialize in front of me because that would violate the conservation of energy. Along those lines, it seems reasonable to believe that the universe can’t simply begin to exist, because it’s full of stuff, and that stuff has to come from somewhere. Translating that into physics-speak, the universe has energy, and energy is conserved—it’s neither created nor destroyed. Which brings us to the important realization that makes it completely plausible that the universe could have had a beginning: as far as we can tell, every conserved quantity characterizing the universe (energy, momentum, charge) is exactly zero. It’s not surprising that the electric charge of the universe is zero. Protons have a positive charge, electrons have an equal but opposite negative charge, and there seem to be equal numbers of them in the universe, adding up to a total charge of zero. But claiming that the energy of the universe is zero is something else entirely. There are clearly many things in the universe that have positive energy. So to have zero energy overall, there would have to be something with negative energy—what is that? The answer is “gravity.” In general relativity, there is a formula for the energy of the whole universe at once. And it turns out that a uniform universe—one in which matter is spread evenly through space on very large scales—has precisely zero energy. The energy of “stuff” like matter and radiation is positive, but the energy associated with the gravitational field (the curvature of spacetime) is negative, and exactly enough to cancel the positive energy in the stuff. If the universe had a nonzero amount of some conserved quantity like energy or charge, it couldn’t have an earliest moment in time—not without violating the laws of physics. The first moment of such a universe would be one in which energy or charge existed without any previous existence, which is against the rules. But as far as we know, our universe isn’t like that. There seems to be no obstacle in principle to a universe like ours simply beginning to exist.

Our job, in other words, is to move from the first question, “Can the universe simply exist?” (yes, it can) to the second, harder one: “What is the best explanation for the existence of the universe?” The answer is certainly “We don’t know.”

Among Descartes’s most famous positions is mind-body dualism, the idea that the mind or soul is an immaterial substance distinct from the body. If that were true, she [Princess Elisabeth of Bohemia] insisted on knowing, how did the two substances communicate with each other?

If you want to say that the mind is a separate substance, not just a way of talking about the collective effect of all those particles, how does that substance interact with the particles? (QB; an immaterial substance might somehow interact with regular matter through observation/ wave function collapse. I have yet to read Mitja Peruš's book, which purportedly clarifies precisely this.)

Descartes’s argument was pretty simple. He’d already established that we can doubt the existence of many things, even the chair we are sitting on. So there’s no real problem doubting the existence of your own body. But you can’t doubt the existence of your mind—you think, therefore your mind must really exist. And if you can doubt the existence of your body but not your mind, they must be two different things. The body, Descartes went on to explain, works like a machine, having material properties and obeying the laws of motion. The mind is an entirely separate kind of entity. Not only is it not made of material stuff; it doesn’t even have a specific location on the material plane. Whatever the mind is, it’s something very different from tables and chairs, something that occupies an utterly distinct realm of existence. We label this view substance dualism, since it claims that mind and body are two distinct kinds of substance, not merely two different aspects of one underlying kind of stuff. But the mind and body interact with each other, of course. Certainly our minds communicate with our bodies, nudging them to perform this or that action. Descartes felt that the interaction also went the other way: our bodies can influence our minds.

It’s a question that cuts to the heart of the mind/body split. You say that mind and body act on each other, fine. But how, exactly?

But she was also scrupulously honest, and could not understand how an immaterial mind was supposed to push around the material body. When something pushes something else, the two things need to be located at the same place. But the mind isn’t “located” anywhere—it’s not part of the physical plane. Your mind has a thought, such as “I’ve got it—Cogito, ergo sum.” How is that thought supposed to lead to the body lifting a pen and committing those words to paper? How is it even conceivable that something with no extent or location could influence an ordinary physical object?

How an immaterial soul might interact with the physical body remains a challenging question for dualists even today.

To imagine that the soul pushes around the electrons and protons and neutrons in our bodies in a way that we haven’t yet detected is certainly conceivable, but it implies that modern physics is profoundly wrong in a way that has so far eluded every controlled experiment ever performed.

At the microscopic scale, quantum mechanics implies that individual measurement outcomes are expressed in probabilities rather than certainties, but those probabilities are unambiguously fixed by the theory, and when we aggregate many particles the overall behavior becomes fantastically predictable (at least in principle, to a Laplace’s Demon–level intellect). There are no vague or unspecified pieces waiting to be filled in; the equations predict how matter and energy behave in any given situation, whether it’s the Earth revolving around the sun, or electrochemical impulses cascading through your central nervous system.

We would, however, need to be specific and quantitative about how the Core Theory could possibly be changed. There needs to be a way that “soul stuff” interacts with the fields of which we are made—with electrons, or photons, or something. Do those interactions satisfy conservation of energy, momentum, and electric charge? Does matter interact back on the soul, or is the principle of action and reaction violated? Is there “virtual soul stuff” as well as “real soul stuff,” and do quantum fluctuations of soul stuff affect the measurable properties of ordinary particles? Or does the soul stuff not interact directly with particles, and merely affect the quantum probabilities associated with measurement outcomes? Is the soul a kind of “hidden variable” playing an important role in quantum ontology?

You can’t bend spoons with your mind. Actually you can, but only by the traditional method: sending signals from your brain, down your arms, to your hands, which then pick up the spoon and bend it. The argument is simple. Your body, including your brain, is made up of only a few particles (electrons, up quarks, and down quarks), interacting through a few forces (gravity, electromagnetism, and the strong and weak nuclear forces). If you’re not going to reach out and touch the spoon with your hands, any influence you have on it is going to have to come through one of the four forces. It won’t be through one of the nuclear forces, since those reach only over microscopically small distances. And it won’t be through gravity, since gravity is far too weak. We’re left with electromagnetism. Unlike gravity, the potential electromagnetic force from your body actually is strong enough to bend spoons—indeed, that’s what happens when you use your hands. All of chemistry is essentially due to electromagnetic forces acting on electrons and ions (atoms that are charged by having more or fewer electrons than protons). Having the brain function as a kind of electromagnetic tractor beam would not violate the laws of physics, but it doesn’t work for more mundane reasons. The brain itself is subtle and complicated, so we could imagine generating a large electromagnetic field. But once generated, that field would be a blunt instrument. Spoons are not subtle and complicated; they are just inert pieces of metal. Not only would any brain-produced electromagnetic field have no special reason to home in on a spoon in the desired way; it would be incredibly easy to notice for other reasons. Every metallic object in the vicinity would go flying around in response to this force field, and it would be straightforward to measure it using conventional methods. [Or it might be that our physical laws exist only as a current agreement/ sum of individual minds. If that's true, might it be that you can bend a spoon with your mind locally but not universally?]

It certainly seems as if, when something dies, there is some thing that is no longer present. Where, it seems natural to ask, does the energy associated with life go when we die? The trick is to think of life as a process rather than a substance. When a candle is burning, there is a flame that clearly carries energy. When we put the candle out, the energy doesn’t “go” anywhere. The candle still contains energy in its atoms and molecules. What happens, instead, is that the process of combustion has ceased. Life is like that: it’s not “stuff”; it’s a set of things happening. When that process stops, life ends. Life is a way of talking about a particular sequence of events taking place among atoms and molecules arranged in the right way.

élan vital (life force).

Ludwig Boltzmann explained entropy to us: it’s a way of counting how many possible microscopic arrangements of the stuff in a system would look indistinguishable from a macroscopic point of view. If there are many ways to rearrange the particles in a system without changing its basic appearance, it’s high-entropy; if there are a relatively small number, it’s low-entropy. The Past Hypothesis says that our observable universe started in a very low-entropy state. From there, the second law is easy to see: as time goes on, the universe goes from being low-entropy to high-entropy, simply because there are more ways that entropy can be high.
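Boltzmann’s counting can be made concrete with a toy model (my own illustration, not from the book): take N coins, let the macrostate be the number of heads, and count how many microscopic arrangements realize each macrostate.

```python
from math import comb, log

N = 100  # toy system: 100 coins; the macrostate is the number of heads

def entropy(k):
    # Boltzmann entropy S = log W (with k_B = 1), where W is the number
    # of microscopic arrangements with exactly k heads out of N coins.
    return log(comb(N, k))

print(entropy(0))   # all tails: exactly one arrangement, so S = 0
print(entropy(50))  # ~1e29 arrangements: far higher entropy
```

The half-heads macrostate is overwhelmingly more probable simply because there are more ways to realize it, which is all the second law needs.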

We haven’t yet given a precise definition of what we mean by “complexity,” as we were able to do for entropy. Partly that’s because there is no one definition that works for every circumstance—different systems can exhibit complexity in different ways. That’s a feature, not a bug; complexity comes in many forms.

The evolution of entropy and complexity in a closed system over time.

In both physics and biology, complexity often emerges in a hierarchical fashion: small pieces conglomerate into larger units, which then conglomerate into even larger ones, and so on. Smaller units maintain their integrity while interacting together within the whole. In this way, networks are built up that exhibit complex overall behavior emerging from simple underlying rules.

If we wait long enough, any isolated system reaches equilibrium, where nothing interesting happens.

There is no law of nature, therefore, that says complexity necessarily develops as systems evolve from low entropy to high entropy. But it can develop—whether it does or not depends on the details of the system you are thinking of.

The only reason complex structures form at all is because the universe is undergoing a gradual evolution from very low entropy to very high entropy. “Disorder” is growing, and that’s precisely what permits complexity to appear and endure for a long time.

Physics is the simplest of all the sciences, and fundamental physics—the study of the basic pieces of reality at the deepest level—is the simplest of all. Not “simple” in the sense that the homework problems are easy, but simple in the sense that Galileo’s trick of ignoring friction and air resistance makes our lives easier. We can study the behavior of an electron without worrying about, or even knowing much about, neutrinos or Higgs bosons, at least to a pretty good approximation. The rich and multifaceted aspects of the emergent layers of our world are not nearly so accommodating to the curious scientist. Once we start dealing with chemistry, biology, or human thought and behavior, all of the pieces matter, and they matter all at once. We have made correspondingly less progress in obtaining a complete understanding of them than we have, for example, on the Core Theory. The reason why physics classes seem so hard is not because physics is so hard—it’s because we understand so much of it that there’s a lot to learn, and that’s because it’s fundamentally pretty simple.

We can always be wrong in that belief; but then again, we can always be wrong about any belief.

The question is, will we know it when we see it? What is “life” anyway? Nobody knows. There is not a single agreed-upon definition that clearly separates things that are “alive” from those that are not. People have tried. NASA, which is heavily invested in looking for life outside the Earth, adopted a working definition of a living organism: a self-sustaining chemical system capable of Darwinian evolution.

This question prompted Schrödinger to put forward a definition of life that seems very different from NASA’s: When is a piece of matter said to be alive? When it goes on “doing something,” exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to “keep going” under similar circumstances.

A rock might maintain its shape for a long time, but it will never repair itself. A rock can be in motion, for example, if an avalanche starts it rolling downhill; but once it gets to the bottom, it will stop moving and just sit there. It won’t brush itself off and climb back up the hill, like an animal might.

Complex structures can form, not despite the growth of entropy but because entropy is growing. Living organisms can maintain their structural integrity, not despite the second law but because of it.

Everyone knows that the sun provides a useful service to life here on Earth: energy, in the form of photons of visible light. But the really important thing we get from the sun is energy with very low entropy—so-called free energy. That energy is then put to use by biological organisms, and returned to the universe in a highly degraded form. “Free energy” is a confusing term that actually means “useful energy”—think “free” as in “free to do something.” It has nothing to do with “energy for free”—the total amount of energy is still constant. The second law says that the entropy of an isolated system will increase until the system reaches maximum entropy, after which it will sit there in equilibrium.

Free energy can be used to do what physicists call work.

One way of formulating the second law is to say that, in an isolated system, free energy is converted into disordered energy as time passes.

Schrödinger’s idea was that biological systems manage to keep moving and maintaining their basic integrity by taking advantage of free energy in their environments. They take in free energy, use it to do whatever work they need it to do, then return the energy to the world in a more disordered form.

Whether a certain amount of energy is “free” or “disordered” depends on its environment. If we have a piston full of hot gas, we can use it to do work by letting it expand and push the piston. But that’s assuming that the piston isn’t surrounded by gas of equal temperature and density; if it is, there’s no net force on the piston, and we can’t do any work with it. The light we get from the sun is low-entropy relative to its environment, and therefore contains free energy, available to do work. The environment is just the rest of the sky, dotted with starlight and suffused with the cosmic microwave background radiation, at a few degrees above absolute zero.

Imagine there were no sun. The entire sky would look like the night sky does now. Here on Earth, we would quickly equilibrate, and come to the same cold temperature as the night sky.

We receive photons from the sun, primarily in the visible-light part of the electromagnetic spectrum. We process the energy, and then return it to the universe in the form of lower-energy infrared photons. The entropy of a collection of photons is roughly equal to the total number of photons you have. For every one visible photon it receives from the sun, the Earth radiates approximately twenty infrared photons back into space, with approximately one-twentieth of the energy each. The Earth gives back the same amount of energy as it gets, but we increase the entropy of the solar radiation twentyfold before returning it to the universe.
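The bookkeeping in this passage can be sanity-checked in a few lines (a quick sketch; the 1-to-20 ratio is the text’s round-number approximation):

```python
# Round numbers from the text: for each visible photon absorbed,
# Earth emits about 20 infrared photons with about 1/20 the energy each.
E_visible = 1.0              # energy of one incoming photon (arbitrary units)
n_out = 20                   # outgoing infrared photons per incoming photon
E_infrared = E_visible / n_out

energy_in = E_visible
energy_out = n_out * E_infrared
print(energy_out == energy_in)   # True: the same amount of energy is returned

# Entropy of radiation ~ number of photons (the text's approximation):
entropy_in, entropy_out = 1, n_out
print(entropy_out / entropy_in)  # 20.0: entropy increased twentyfold
```

Energy is conserved; what the Earth uses up is the low entropy of the radiation, not the energy itself.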

The cell is the basic unit of life: a collection of functional subunits, organelles, suspended in a viscous fluid, all surrounded by a cellular membrane. Immersed as we are in a technological society, we tend to think of cells as tiny “machines.” But the differences between real biological systems and the artificially constructed machines that we’re used to dealing with are as important as their similarities. These differences stem in large part from the fact that machines are generally created for some particular purpose. Because of this origin, machines tend to be just good enough for their designated purposes, and no better. Design tends to be specific, and brittle. When something goes wrong—you lose a tire on your car, or the battery dies on your phone—the machine doesn’t work at all. Biological organisms, which have developed over the years with no specific purpose in mind, tend to be more flexible, multipurpose, and self-repairing.

The solar energy we started with is gradually degraded along the way, turning into disordered energy in the form of heat. That energy is ultimately radiated back to the universe as relatively low-energy infrared photons.

Once we move beyond vitalism, and understand that “life” is a label we attach to certain kinds of processes rather than a substance that inhabits matter and starts pushing it around, we begin to appreciate what an enormously complex and interconnected process it is.

It’s one thing to see how living organisms can harness free energy to maintain themselves and move around. It’s quite another thing to understand how life ever got started. As of this writing we have more questions than answers.

Let’s focus on three features that seem to be ubiquitous in life as we know it:

  1. Compartmentalization. Cells, the building blocks of living organisms, are bounded by membranes that separate their inner structure from the outside world.
  2. Metabolism. Living creatures take in free energy, and use it to maintain their form as well as performing actions.
  3. Replication with variation. Living beings create more of themselves, passing along information about their structure. Small variations in that information enable Darwinian natural selection.

Entropy increases, which suggests to us a certain emergent vocabulary, in which the molecules “want” to find a state with low free energy. The arrow of time leads us to speak a language of purpose and desire, even though we’re only talking about molecules obeying the laws of physics.

Many intricate processes go on inside the cell, and many things are happening all the time in the environment outside. But communication between the two is mediated through the cell membrane.

This theory was originally developed not for individual cells but as a way of thinking about how brains interact with the outside world. Our brains construct models of their surroundings, with the goal of not being surprised very often by new information. That process is precisely Bayesian reasoning—subconsciously, the brain carries with it a set of possible things that could happen next, and updates the likelihood of each of them as new data comes in. It is interesting that the same mathematical framework might apply to systems on the level of individual cells. Keeping the cell membrane intact and robust turns out to be a kind of Bayesian reasoning.
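That subconscious updating is just Bayes’s rule; a minimal sketch (my toy example, not the actual cell-membrane mathematics) reweights hypotheses by how well each one predicted the incoming data:

```python
def bayes_update(priors, likelihoods):
    # priors: P(hypothesis); likelihoods: P(new data | hypothesis).
    # The posterior is proportional to prior times likelihood.
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Two hypotheses about the environment, initially equally likely.
priors = [0.5, 0.5]
# The new observation was far more probable under the first hypothesis.
posteriors = bayes_update(priors, likelihoods=[0.9, 0.1])
print(posteriors)  # belief shifts strongly toward the first hypothesis
```

A system that keeps doing this is rarely surprised, which is the sense in which maintaining a robust model (or membrane) amounts to Bayesian reasoning.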

One compelling aspect of the picture is that it’s not simply working backward from “We know there’s life; how did it start?” Instead, it’s suggesting that life is the solution to a problem: “We have some free energy; how do we liberate it?”

Metabolism is essentially “burning fuel,” something we see all around us, from lighting a candle to starting a car engine. Replication seems harder, more precious, difficult to obtain. If there is any part of “life” that might act as a bottleneck to getting it started, it’s the fact that living beings reproduce themselves. Fire is a well-known chemical reaction that readily reproduces itself, leaping from tree to tree in a forest, but by most definitions it doesn’t count as alive. We want something that carries information through the reproduction process: something whose “offspring” keep some knowledge of where they came from.

Hoyle was a master of vivid imagery, and he illustrated his point with a famous analogy: The chance that higher life forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junkyard might assemble a Boeing 747 from the materials therein. The problem is that Hoyle’s version of “this way” is nothing at all like how actual abiogenesis researchers believe that life came about. Nobody thinks that the first cell occurred when a fixed collection of atoms was rearranged over and over in all possible ways until it just happened to take on a cell-like configuration. What Hoyle is describing is essentially the Boltzmann Brain scenario—truly random fluctuations coming together to create something complex and ordered. The real world is different. The “unlikeliness” associated with low-entropy configurations is built into the universe from the start, by the incredibly low entropy near the Big Bang. The fact that the development of the cosmos proceeds from this very special initial condition, rather than wandering through a more typical equilibrium ensemble of states, imposes a strong nonrandom aspect on the evolution of the universe. The appearance of cells and metabolism is a reflection of the universe’s progression toward higher entropy, not an unlikely happenstance in an equilibrium background. Like the swirls of cream mixing into coffee, the marvelous complexity of biological organisms is a natural consequence of the arrow of time.

Organisms reproduce, and they hand down their genetic information to the next generation. That information is largely stable—children resemble their parents—but it’s not absolutely fixed. Small, random variations can be introduced at every step. The variations do not strive to reach any future goals, and neither can individual organisms influence them by their actions. (Your offspring don’t become more muscular just because you work out.)

Variations that fortuitously improve an organism’s chances of handing down its genetic heritage will be more likely to persist than those that are harmful or neutral.

These ingredients shouldn’t be taken for granted. This is why biologists highlight the difference between “evolution” and “natural selection.” The former is the change of the genome (complete set of genetic information) over time; the latter refers to the specific case where changes in the genome are driven by different amounts of reproductive success.

In Lenski’s long-term evolution experiment, the mutation that allowed some of the bacteria to metabolize citrate occurred around generation 31,000. When the researchers unfroze some of the earlier generations to see if they would evolve this ability again, they found that the answer was yes—but only when they started with cells from generation 20,000 or later. Around generation 20,000, one or more mutations must have occurred that did not themselves allow the bacteria to metabolize citrate, but set the stage for a later mutation that would do so. A single trait can be brought to life by multiple, separate mutations, which may not individually have much noticeable impact at all.

Natural selection can be thought of as a search algorithm. The problem being tackled by evolution is: “What organism would survive and reproduce most effectively in this particular environment?” Except it’s not really “organisms” that are being searched; it’s genomes, or particular strings of nucleotides in a strand of DNA.

Evolution provides a strategy for searching for high-fitness genomes in a ridiculously big space of possibilities. Computer scientists have recently shown that a simplified model of evolution (allowing for mixing via sexual reproduction, but not for mutations) is mathematically equivalent to an algorithm devised by game theorists years ago, known as multiplicative weight updates. Good ideas tend to show up in a variety of places.
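Multiplicative weight updates itself is short enough to sketch (a minimal version with made-up payoffs; the proved equivalence with reproduction dynamics is not reproduced here): each option keeps a weight, and after every round its weight is multiplied by a factor reflecting how well it did.

```python
def multiplicative_weights(payoff_rounds, eta=0.5):
    # One weight per option; after each round multiply each weight by
    # (1 + eta * payoff) and renormalize, so good options compound.
    n = len(payoff_rounds[0])
    w = [1.0 / n] * n
    for payoffs in payoff_rounds:
        w = [wi * (1 + eta * p) for wi, p in zip(w, payoffs)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# Option 0 pays off in three rounds out of four; option 1 in one.
rounds = [(1, 0), (0, 1), (1, 0), (1, 0)] * 25
weights = multiplicative_weights(rounds)
print(weights)  # option 0's weight has compounded to dominate
```

The compounding is the point of contact with selection: small, repeated fitness advantages multiply over generations rather than add.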

[On natural selection in a changing fitness landscape.] When that happens, it’s literally impossible to simply find the top of a hill and just sit there; one day’s maximum might be a valley the next day.

Just as for the traveling-salesman problem, finding a good-enough solution can be extremely useful for all practical purposes.

Many useful computer programs operate according to genetically evolved algorithms that no human programmer actually understands, which is a scary thought.

The point is that natural selection, or directed evolution in this case, is a really good search strategy. It doesn’t necessarily find the best solution, but it regularly finds impressively clever ones.

As wonderful as evolution is at searching for peaks in a complex, high-dimensional fitness landscape, there are places that it won’t find. Consider a landscape with a very high mountain, separated by a long, flat plain from a collection of undulating hills. And imagine a population whose genomes are located within those hills. The process of small variation and natural selection will let the species explore around the hills, looking for the highest point it can find. But as long as the variations in the genome within the population remain small, all of the individuals will remain in the grouping of hills. None will have any reason to make a long, unrewarding trek across the flat plain to get to the isolated peak. Evolution can’t see globally across the space of genomes and find a better one; it proceeds locally through random variation and then an evaluation (through reproduction) of how well that particular variation is doing at the moment.
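The short-sightedness described here is easy to demonstrate on a toy one-dimensional landscape (my illustration, not Carroll’s): hill climbing by small variations settles on the nearby low hill and never crosses the flat plain to the far higher peak.

```python
import random

def fitness(x):
    # A modest hill near x = 10 and a much higher peak near x = 100,
    # separated by a flat plain where fitness is zero.
    hill = max(0.0, 5 - abs(x - 10))
    peak = max(0.0, 50 - 10 * abs(x - 100))
    return hill + peak

random.seed(0)
x = 8.0  # the population starts on the low hill
for _ in range(10_000):
    candidate = x + random.uniform(-1, 1)   # small random variation
    if fitness(candidate) >= fitness(x):    # selection keeps improvements
        x = candidate

print(x, fitness(x))  # settles near 10; the plain is never crossed
```

Because every downhill step is rejected, the search can never wander across the zero-fitness plain, no matter how long it runs; only larger variations (or a changing landscape) could reach the distant peak.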

An irreducibly complex system, in Behe’s definition, is one whose functioning involves a number of interacting parts, with the property that every one of the parts is necessary for the system to function. The idea is that certain systems are made of parts that are so intimately interconnected that they can’t arise gradually; they must have come together all at once. To illustrate the concept, Behe mentions an ordinary mousetrap, with a spring mechanism and a release lever and so forth. Remove any one of the parts, he argues, and the mousetrap is useless; it must have been designed, rather than incrementally put together through small changes that were individually beneficial.

Irreducible complexity reflects a deep concern that many people have about evolution: the particular organisms we find in our biosphere are just too designed-looking to possibly have arisen through “random chance plus selection.”

The idea that something wants something else is a way of talking that is potentially useful in the right circumstances—a simple idea that summarizes a good amount of complex behavior in a convenient way. If we see a monkey climbing a tree, we could describe what’s happening by providing a list of what the monkey is doing at each moment in time, or for that matter we could specify the position and velocity of every atom in the monkey and the environment at each moment. But it’s immensely easier and more efficient to say, “The monkey wants those bananas that are up in the tree.” The fact that we can say that is a piece of useful knowledge over and above all of those positions and velocities.

Under naturalism, there isn’t that much difference between a human being and a robot. We are all just complicated collections of matter moving in patterns, obeying impersonal laws of physics in an environment with an arrow of time. Wants and purposes and desires are the kinds of things that naturally develop along the way.

There is a similar story to tell about “information.” If the universe is just a bunch of stuff obeying mechanistic physical rules, how can one thing ever “carry information” about anything else? Words like “information” are a useful way of talking about certain things that happen in the universe. We don’t ever need to talk about information. But the fact that information is an effective way of characterizing certain physical realities is a true and nontrivial insight into the world.

We tend to use the word “information” in multiple, often incompatible, ways. In chapter 4 we talked about conservation of information in the fundamental physical laws. There, what we might call the “microscopic information” refers to a complete specification of the exact state of a physical system, and is neither created nor destroyed. But often we think of a higher-level macroscopic concept of information, one that can indeed come and go; if a book is burned, the information contained in it is lost to us, even if not to the universe.

In equilibrium, where entropy is high, the microstate could be almost anything, and we have essentially no information about it.

As the universe evolves from this very specific configuration to increasingly generic ones, correlations between different parts of the universe develop very naturally. It becomes useful to say that one part carries information about another part. It’s just one of the many helpful ways we have of talking about the world at an emergent, macroscopic level.

But quantum mechanics only predicts probabilities. In this view, God can simply choose certain quantum-mechanical outcomes to become real, without actually violating physical law; he is merely bringing physical reality into line with one of the many possibilities inherent in quantum dynamics. Along similar lines, Plantinga has suggested that quantum mechanics can help explain a number of cases of divine action, from miraculous healing to turning water into wine and parting the Red Sea. True, all of these seemingly miraculous occurrences would be allowed under the rules of quantum mechanics; they would simply be very unlikely.

If someone flips a coin one hundred times and gets heads every time, you are observing an outcome that was possible if the coin was fair—but it’s much more likely that the game is rigged.
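The coin example can be made quantitative. A minimal sketch, where the “rigged” alternative is my illustrative choice of a coin that lands heads 99 percent of the time:

```python
# One hundred heads in a row: possible for a fair coin, but far more likely
# under a rigged one. The 99%-heads "rigged" hypothesis is an assumption here.
p_data_fair = 0.5 ** 100      # probability of the data if the coin is fair
p_data_rigged = 0.99 ** 100   # probability of the data if the coin is rigged

# Likelihood ratio: how strongly the data favor "rigged" over "fair".
ratio = p_data_rigged / p_data_fair
print(f"fair: {p_data_fair:.2e}  rigged: {p_data_rigged:.2f}  ratio: {ratio:.2e}")
```

The fair-coin probability is around 10^-30, while the rigged coin gives the same data with probability around 0.37, so the evidence favors the rigged hypothesis by a factor of roughly 10^29.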

As impressive as the appearance and evolution of life are, doesn’t it seem a bit—fragile? If conditions were just a bit different, doesn’t it seem plausible that life wouldn’t have come about at all? This concern is sometimes developed into the positive claim that the existence of life is evidence against naturalism. The idea is that conditions—anything from the mass of the electron to the rate of expansion of the early universe—are fine-tuned for life’s existence. If these numbers were just a little bit different, the argument goes, we wouldn’t be here to talk about it. That makes perfect sense under theism, since God would want us to be here, but might be hard to account for under naturalism. If this logic is right, we actually are the center of the universe, figuratively speaking. We are the reason the universe exists; numbers like the mass of the electron take the values they do because of us, not simply by accident or even because of some hidden physical mechanism. It can come across as more than a little arrogant to contemplate all of the interacting quantum fields of the Core Theory, or see an image of some of the hundreds of billions of galaxies that populate our universe, and say to yourself, “I know why it’s like that—so that I could be here.”

It’s still not a very good argument. It relies heavily on what statisticians call “old evidence”—we didn’t first formulate predictions of theism and naturalism and then go out and test them; we knew from the start that life exists. There is a selection effect: we can be having this conversation only in possible worlds where we exist, so our existence doesn’t really tell us anything new.

If naturalism is true, what is the probability that the universe would be able to support life? The usual fine-tuning argument is that the probability is very small, because small changes in the numbers that define our world would render life impossible.

Second, we don’t know that much about whether life would be possible if the numbers of our universe were very different. Think of it this way: if we didn’t know anything about the universe other than the basic numbers of the Core Theory and cosmology, would we predict that life would come about?

But when it comes to most of the numbers characterizing physics and astronomy, it’s very hard to say what would happen were they to take on other values. There’s little doubt that the universe would look quite different, but we don’t know whether it would be hospitable to biology.

Astronomer Fred Adams has shown that the mass of the neutron could be substantially different from its actual value, and stars would still be able to shine, using alternative mechanisms to the ones employed by our universe.

Life is a complex system of interlocking chemical reactions, driven by feedback and free energy. Here on Earth, it has taken a particular form, making use of the wonderful flexibility of carbon-based chemistry. Who is to say what other forms analogous complex systems might take?

There is another famous complication: we might not have just a universe, but a multiverse. The physical numbers that are purportedly fine-tuned—even supposedly fixed constants, such as the mass of the neutron—could take on very different values from place to place. If that’s the case, the fact that we find ourselves in a part of the multiverse that is compatible with life is exactly what we should expect. Where else would we find ourselves?

The only real question is whether it is reasonable to imagine that we do live in a multiverse in the first place. The terminology can be confusing; naturalism says there is only one world, but that “world” can include an entire multiverse. In this context, what we care about is a cosmological multiverse. That means there are literally different regions of space, very far away and therefore unobservable to us, where conditions are quite different. We call these regions “other universes,” even though they are still part of the natural world. Because there’s been a finite number of years since the Big Bang, and because light moves at a fixed speed (one light-year per year), there are parts of space that are simply too far away for us to see them. It’s completely possible that out beyond our visible horizon, there are regions where the local laws of physics—the equivalent of the Core Theory—are utterly different. Different particles, different forces, different parameters, even different numbers of dimensions of space. And there could be a huge number of such regions, each with its own version of the local laws of physics. That’s the cosmological multiverse. (It’s a separate idea from the “many worlds” of quantum mechanics, where different branches of the wave function are all subject to the same physical laws.)

Two theories, in particular, move us to contemplate the multiverse: string theory and inflation. String theory is currently our leading candidate for reconciling gravitation with the rules of quantum mechanics. It naturally predicts more dimensions of space than the three we observe. You might think that this rules out the idea, and we should move on with our lives. But these extra dimensions of space can be curled up into a tiny geometric figure, far too small to be seen in any experiment yet performed. There are many ways to do the curling up—many different shapes the extra dimensions can take. We don’t know the actual number, but physicists like to throw around estimates like 10^500 different ways. Every one of those ways to hide the extra dimensions—what string theorists call a compactification—leads to an effective theory with different observable laws of physics. In string theory, “constants of nature” like the vacuum energy or the masses of the elementary particles are fixed by the exact way in which extra dimensions are curled up in any given region of the universe. Elsewhere, if the extra dimensions are curled up in a different way, anyone who lived there would measure radically different numbers.

String theory, then, allows for the existence of a multiverse. To actually bring it into existence, we turn to inflation. This idea, pioneered by physicist Alan Guth in 1980, posits that the very early universe underwent a period of extremely rapid expansion, powered by a kind of temporary super-dense vacuum energy. This has numerous beneficial aspects, in terms of explaining the universe we see: it predicts a smooth, flat spacetime, but one with small fluctuations in density—exactly the kind that can grow into stars and galaxies through the force of gravity over time. We don’t currently have direct evidence that inflation actually occurred, but it is such a natural and useful idea that many cosmologists have adopted it as a default mechanism for shaping our universe into its present state. Taking the idea of inflation, and combining it with the uncertainty of quantum mechanics, can lead to a dramatic and unanticipated consequence: in some places the universe stops inflating and starts looking like what we actually observe, while in other places inflation keeps going. This “eternal inflation” creates larger and larger volumes of space. In any particular region, inflation will eventually end—and when it does, we can find ourselves with a completely different compactification of extra dimensions than we have elsewhere. Inflation can create a potentially infinite number of regions, each with its own version of the local laws of physics—each a separate “universe.” Together, inflation and string theory can plausibly bring the multiverse to life. We don’t need to postulate a multiverse as part of our ultimate physical theory; we postulate string theory and inflation, both of which are simple, robust ideas that were invented for independent reasons, and we get a multiverse for free. Both inflation and string theory are, at present, entirely speculative ideas; we have no direct empirical evidence that they are correct. But as far as we can tell, they are reasonable and promising ideas. Future observations and theoretical developments will, we hope, help us decide once and for all.

Consciousness is not a single brain organ or even a single activity; it’s a complex interplay of many processes acting on multiple levels. It involves wakefulness, receiving and responding to sensory inputs, imagination, inner experience, and volition. Neuroscience and psychology have learned a great deal about what consciousness is and how it functions, but we are still far away from any sort of complete understanding.

Consciousness is also a unique and heavy burden. Being able to reflect on ourselves, our past and possible futures, and the state of the world and the cosmos brings great benefits, but it also opens the door to alienation and anxiety.

Our minds are not run as top-down dictatorships; they are rambunctious parliaments, populated by squabbling factions and caucuses, with much more going on beneath the surface than our conscious awareness ever accesses.

Daniel Kahneman, a psychologist who won the Nobel Prize in Economics for his work on decision making, has popularized dividing how we think into two modes of thought, dubbed System 1 and System 2. (The terms were originally introduced by Keith Stanovich and Richard West.) System 1 includes all the various modules churning away below the surface of our conscious awareness. It is automatic, “fast,” intuitive thinking, driven by unconscious reactions and heuristics—rough-and-ready strategies shaped by prior experience. When you manage to make your coffee in the morning or drive from home to work without really paying attention to what you are doing, it’s System 1 that is in charge. System 2 is our conscious, “slow,” rational mode of thinking. It demands attention; when you’re concentrating on a hard math problem, that’s System 2’s job.

The hallmark of consciousness is an inner mental experience. A dictionary definition might be something like “an awareness of one’s self, thoughts, and environment.” The key is awareness: you exist, and the chair you’re sitting on exists, but you know you exist, while your chair presumably does not. It’s this reflexive property—the mind thinking about itself—that makes consciousness so special.

The “now” of your conscious perception is not the same as the current moment in which you are living. Though we sometimes think of consciousness as a unified essence guiding our thoughts and behavior, in fact it is stitched together out of inputs from different parts of the brain as well as our sensory perceptions. That stitching takes time. If you use one hand to touch your nose, and the other to touch one of your feet, you experience them as simultaneous, even though it takes longer for the nerve impulses to travel to your brain from your feet than from your nose. Your brain waits until all of the relevant inputs have been assembled, and only then presents them to you as your conscious perceptions. Typically, what you experience as “now” corresponds to what was actually happening some tens or hundreds of milliseconds in the past.

Mental time travel, Tulving suggested, is related to episodic memory: imagining the future is a similar conscious activity to recalling events in the past.

Memories of past experiences, it turns out, are not like a video or film recording of an event, with individual sounds and images stored for each moment. What’s stored is more like a script. When we remember a past event, the brain pulls out the script and puts on a little performance of the sights and sounds and smells. Part of the brain stores the script, while others are responsible for the stage settings and props. This helps explain why memories can be completely false, yet utterly vivid and real-seeming to us—the brain can put on a convincing show from an incorrect script just as well as an accurate one.

As the reducibly complex mousetrap reminds us, we shouldn’t let the intimidating sophistication of the final product trick us into thinking that it couldn’t have come about via numerous small steps.

What we call a “thought” corresponds directly and unmistakably to the motion of certain charged particles inside my head. That’s an amazing, humbling fact about how the universe works. What would Descartes and Princess Elisabeth have thought?

People change over time, and our connectomes change along with us. The strength of the connections evolves, as the repeated firing of certain signals increases the chances that specific synapses will fire again in the future. We believe that memories are formed in this way, by synapses growing and shrinking in strength in response to stimuli.

Short-term memories were associated with synapses being strengthened, while long-term memories came from entirely new synapses being created.

More recently, neuroscientists have been able to directly observe neurons in mice growing and connecting as they learned how to perform new tasks. Impressively (or disturbingly, depending on your perspective), they have also been able to remove memories from mice by weakening specific synapses, and even artificially implanting false memories by directly stimulating individual nerve cells with electrodes. Memories are physical things, located in your brain.

To say that the connectome is a hierarchical network is to say that it lies somewhere between being maximally connected (every neuron is talking to every other neuron) and minimally connected (every neuron talks only to its immediate neighbors). As far as we can tell, the connectome is what mathematicians call a small-world network. The name comes from the famous six-degrees-of-separation experiment by psychologist Stanley Milgram. He found that randomly chosen people in Omaha, Nebraska, were linked to a specific person living in Boston, Massachusetts, by an average of about six first-name relationships. In network theory, we say that a network has the small-world property when most nodes are not directly connected to one another, but each one can be reached from any other one by a small number of steps.
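The small-world property can be demonstrated with a toy network: a ring where each node knows only its near neighbors, plus a handful of random long-range links. The network sizes and shortcut count below are illustrative assumptions:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular network: each node links to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def add_shortcuts(adj, m, rng):
    """Add m random long-range edges, the hallmark of a small-world network."""
    nodes = list(adj)
    for _ in range(m):
        a, b = rng.sample(nodes, 2)
        adj[a].add(b)
        adj[b].add(a)

def avg_path_length(adj):
    """Mean number of steps between all pairs of nodes, via breadth-first search."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

rng = random.Random(0)
regular = ring_lattice(200, 2)               # purely local connections
small_world = ring_lattice(200, 2)
add_shortcuts(small_world, 20, rng)          # plus a few long-range links

L_regular = avg_path_length(regular)         # roughly 25 steps on average
L_small = avg_path_length(small_world)       # drops sharply with the shortcuts
print(L_regular, L_small)
```

Just twenty shortcuts among two hundred nodes are enough to cut the average path length dramatically, which is the structural trick behind the six degrees of separation.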

Perhaps the brain is like a radio receiver. Altering it or damaging it will change how it plays, but that doesn’t mean that the original signal is being created inside the radio itself. That idea doesn’t really hold up either. Damaging a radio might hurt our reception, making it hard to pick up our favorite station. But it doesn’t turn that station from heavy-metal music into a smooth-jazz format. Damaging the brain, on the other hand, can change who a person is at a fundamental level.

Presumably self-awareness is just the kind of thing that happens when thinking devices become sufficiently large and complex.

How would we know that a computer was actually “thinking,” as opposed to mindlessly pushing numbers around? (Is there a difference?)

This question was taken up by Alan Turing back in 1950. Turing proposed what he called the imitation game, which is now more commonly known as the Turing test. Can a machine converse with a person in such a way as to make the person believe that the machine is also a person? Turing put forward the ability to pass as human in such a test as a reasonable criterion for what it means to “think.” We will very likely get there at some point, but contemporary machines do not “think” in Turing’s sense.

When and if we do manage to construct a machine that can pass the Turing test to almost everyone’s satisfaction, we will still be debating whether that machine truly thinks in the same sense that a human being does. The issue is consciousness, and the closely related issue of “understanding.” No matter how clever a computer became at carrying on conversations, can it truly understand what it’s saying? If the discussions turn to aesthetics or emotions, could a piece of software running on a silicon chip experience beauty or feel grief as a human can?

The argument from consciousness seemed, to Turing, to ultimately be solipsistic: you could never know that anyone was conscious unless you actually were that person. How do you know that everyone else in the world is actually conscious at all, other than by how they behave?

Is the kind of thinking done in my brain really qualitatively distinct from what happens inside a computer? Heinlein’s protagonist didn’t think so: “Can’t see it matters whether paths are protein or platinum.”

Searle’s original target was research in artificial intelligence, which he felt would never be able to achieve a truly human level of thinking. In the terms of his analogy, a computer that tries to pass the Turing test is like the person in the Chinese room: it might be able to push symbols around to give the illusion of understanding, but no real comprehension is present.

But the Chinese Room experiment doesn’t provide a convincing argument for that conclusion. It does illustrate the view that “understanding” is a concept that transcends mere physical correlation between input and output, and requires something extra: a sense in which what goes on in the system is truly “about” the subject matter at hand. To a poetic naturalist, “aboutness” isn’t an extra metaphysical quality that information can have; it’s simply a convenient way of talking about correlations between different parts of the physical world.

If the world is purely physical, then what we mean by “understanding” is a way of talking about a particular kind of correlation between information located in one system (as instantiated in some particular arrangement of matter) and conditions in the external world.

There is a difficulty in clarifying what we mean by “understanding.” A textbook on quantum field theory contains information about quantum field theory, but it doesn’t itself “understand” the subject. A book can’t answer questions that we put to it, nor can it do calculations using the tools of field theory. Understanding is necessarily a more dynamic and process-oriented concept than the mere presence of information.

The extension is straightforward enough: if you think the system inside the room doesn’t really “understand,” you probably don’t think it’s aware and experiencing either.

The one system we generally agree is conscious is a human being—mostly the brain, but we can include the rest of the body if you like. A human can be thought of as a configuration of several trillion cells. If the physical world is all there is, we have to think that consciousness results from the particular motions and interactions of all those cells, with one another, and with the outside world. It is not supposed to be the fact that cells are “cells” that matters, only how they interact with one another, the dynamic patterns they carve out in space as they move through time. That’s the consciousness version of multiple realizability, sometimes called substrate independence—many different substances could embody the patterns of conscious thought. And if that’s true, then all kinds of things could be conscious.

Imagine that we take one neuron in your brain, and study what it does until we have it absolutely figured out. We know precisely what signals it will send out in response to any conceivable signals that might be coming in. Then, without making any other changes to you, we remove that neuron and replace it with an artificial machine that behaves in precisely the same way, as far as inputs and outputs are concerned. A “neuristor,” as in Heinlein’s self-aware computer, Mike. But unlike Mike, you are almost entirely made of your ordinary biological cells, except for this one replacement neuristor. Are you still conscious? Most people would answer yes, a person with one neuron replaced by an equivalently behaving neuristor is still conscious. So what if we replace two neurons? Or a few hundred million? By hypothesis, all of your external actions will be unaltered—at least, if the world is wholly physical and your brain isn’t affected by interactions with any immaterial soul substance that communicates with organic neurons but not with neuristors. A person with every single one of their neurons replaced by artificial machines that interact in the same way would indisputably pass the Turing test. Would it qualify as being conscious?

It’s logically possible that a phase transition occurs somewhere along the way as we gradually replace neurons one by one, even if we can’t predict exactly when it would happen. But we have neither evidence nor reason to believe that there is any such phase transition. Following Turing, if a cyborg hybrid of neurons and neuristors behaves in exactly the same way as an ordinary human brain would, we should attribute to it consciousness and all that goes along with it.

Like “entropy” and “heat,” the concepts of “consciousness” and “understanding” are ones that we invent in order to give ourselves more useful and efficient descriptions of the world. We should judge a conception of what consciousness really is on the basis of whether it provides a useful way of talking about the world—one that accurately fits the data and offers insight into what is going on.

A form of multiple realizability must be true at some level. Like the Ship of Theseus, most of the individual atoms and many of the cells in any human body are replaced by equivalent copies each year. Not every one—the atoms in your tooth enamel are thought to be essentially permanent, for example. But who “you” are is defined by the pattern that your atoms form and the actions that they collectively take, not their specific identities as individual particles. It seems reasonable that consciousness would have the same property.

If any element of consciousness is absolutely necessary, it should be the ability to have thoughts.

A complete video and audio recording of the life of a human being wouldn’t be “conscious,” even if it precisely captured everything that person had done to date, because the recording wouldn’t be able to extrapolate that behavior into the future. We couldn’t ask it questions or interact with it.

Consider the color red. It is a useful concept, one that can apparently be recognized universally and objectively, at least by sighted people who are not prevented from seeing red by color blindness. The operational instruction “stop when the light is red” can be understood without ambiguity. But there is the famous lurking question: do you and I see the same thing when we see something red? That’s the question of phenomenal consciousness— what is it like to experience redness? The word qualia (plural of “quale,” which is pronounced KWAH-lay) is sometimes used to denote the subjective experience of the way something seems to us. “Red” is a color, a physically objective wavelength of light or appropriate combination thereof; but “the experience of the redness of red” is one of the qualia we would like to account for in a complete understanding of consciousness.

The attributes of consciousness, including our qualia and inner subjective experiences, are useful ways of talking about the effective behavior of the collections of atoms we call human beings. Consciousness isn’t an illusion, but it doesn’t point to any departure from the laws of physics as we currently understand them.

Mary can know all of the physical facts about color, but there is still something she doesn’t know: “what it is like” to experience the color red. Mary’s situation is related to the old chestnut “Is my color red the same as your color red?” Not the wavelengths, but is the experience of redness the same for you as it is for me? In some strict sense, no: my experience of the color red is a way of talking about certain electrochemical signals traveling through my brain, while yours is a way of talking about certain electrochemical signals traveling through your brain.

But my experience of red is probably pretty similar to yours, simply because our brains are pretty similar.

The big question about zombies is a simple one: can they possibly exist? If they can, it’s a knockout argument against the idea that consciousness can be explained in completely physical terms. If you can have two identical collections of atoms, both of which take the form of a human being, but one has consciousness and the other does not, then consciousness cannot be purely physical. There must be something else going on, not necessarily a disembodied spirit, but at least a mental aspect in addition to the physical configuration. As long as zombies are conceivable or logically possible, Chalmers argues, then we know that consciousness is not purely physical, regardless of whether zombies could exist in our world. Because then we would know that consciousness can’t simply be attributed to what matter is doing: the same behavior of matter could happen with or without conscious experience.

The idea that our mental experiences or qualia are not actually separate things, but instead are useful parts of certain stories we tell about ordinary physical things, is one that many people find hard to swallow.

A poetic naturalist has no trouble saying that conscious experiences exist. They are not part of the fundamental architecture of reality, but they serve as essential pieces of an emergent effective theory.

Poetic naturalism is “poetic” because there are different stories we can tell about the world, many of them capturing some aspects of reality, and all useful in their appropriate context.

If consciousness were something over and above the physical properties of matter, there would be a puzzle: what was it doing for all those billions of years before life came along? Poetic naturalists have no problem with this question. The appearance of consciousness is a phase transition, like water boiling. The fact that sufficiently hot water is in the form of a gas doesn’t mean that there was always something gaslike about the water, even when it was in the form of liquid; the system simply acquired new properties as its situation changed.

Consciousness, or at least protoconsciousness, could be analogous to “spin” or “electric charge”—one of the basic properties characterizing each bit of matter in the universe.

Quantum mechanics says that superpositions evolve into definite outcomes during the process of measurement, at least for any one observer; it’s not hard to twist that into the claim that conscious observation literally brings reality into existence.

Advocates of this approach will sometimes throw in something about “entanglement”—which isn’t even a mystery, just an interesting feature of quantum mechanics—to make you feel like you are connected to everything else in the universe. As a final flourish, they might suggest that quantum mechanics has discarded the physical world entirely, leaving us with idealism, where everything is a projection of the mind.

Every particle in your head is constantly being jostled by other particles, leading to an ongoing process of “collapse” (or branching of the wave function, for fearless Everettians like me).

The upshot of Gödel’s Incompleteness Theorem is that within any consistent mathematical formal system—a set of axioms, and rules for deriving consequences from them—there will be statements that are true but cannot be proven within that system.

As Alan Turing put it: “If we want a machine to be intelligent, it can’t also be infallible.”

The idea that we are part of the natural world can lead to a sense of profound loss if the reasons and causes for our actions aren’t what we thought they were. We’re not human beings, equipped with intentions and goals, so the worry goes; we’re bags of particles mindlessly bumping into one another as time chugs forward. It’s not love that will keep us together, it’s just the laws of physics.

“Causation,” which after all is itself a derived notion rather than a fundamental one, is best thought of as acting within individual theories that rely on the concept. Thinking of behavior in one theory as causing behavior in a completely different theory is the first step toward a morass of confusion from which it is difficult to extract ourselves.

The usual argument against free will is straightforward: We are made of atoms, and those atoms follow the patterns we refer to as the laws of physics. These laws serve to completely describe the evolution of a system, without any influences from outside the atomic description. If information is conserved through time, the entire future of the universe is already written, even if we don’t know it yet. Quantum mechanics predicts our future in terms of probabilities rather than certainties, but those probabilities themselves are absolutely fixed by the state of the universe right now. A quantum version of Laplace’s Demon could say with confidence what the probability of every future history will be, and no amount of human volition would be able to change it. There is no room for human choice, so there is no such thing as free will. We are just material objects who obey the laws of nature.

It’s not hard to see where that argument violates our rules. Of course there is no such notion as free will when we are choosing to describe human beings as collections of atoms or as a quantum wave function. But that says nothing about whether the concept nevertheless plays a useful role when we choose to describe human beings as people. Indeed, it pretty clearly does play a useful role.

But none of us is Laplace’s Demon. None of us knows the exact state of the universe, or has the calculational power to predict the future even if we did.

One popular definition of free will is “the ability to have acted differently.” In a world governed by impersonal laws, one can argue that there is no such ability. Given the quantum state of the elementary particles that make up me and my environment, the future is governed by the laws of physics.

Much of our legal system, and much of the way we navigate the waters of our social environment, hinges on the idea that individuals are largely responsible for their actions. At extreme levels of free-will denial, the idea of “responsibility” is as problematic as that of human choice. How can we assign credit or blame if people don’t choose their own actions? And if we can’t do that, what is the role of punishment or reward?

What seems clear is that we should base our ideas about personal responsibility on the best possible understanding of how the brain works that we can possibly achieve, and be willing to update those ideas whenever the data call for it.

It takes courage to face up to the finitude of our lives, and even more courage to admit the limits of purpose in our existence.

The number of heartbeats per typical lifetime is roughly the same for all mammals—about 1.5 billion heartbeats. In the modern world, where we are the beneficiaries of advanced medicine and nutrition, humans live on average for about twice as long as Geoffrey West’s scaling laws would predict. Call it 3 billion heartbeats. Three billion isn’t such a big number. What are you going to do with your heartbeats?
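A quick sanity check on the three-billion figure. Assuming a resting heart rate of about 70 beats per minute (a round number of my own, not one given in the book), three billion heartbeats works out to roughly a modern human lifespan:

```python
# Convert 3 billion heartbeats into years, assuming ~70 beats per
# minute (an assumed average resting rate, not a figure from the book).

BEATS_PER_MINUTE = 70
MINUTES_PER_YEAR = 60 * 24 * 365

beats_per_year = BEATS_PER_MINUTE * MINUTES_PER_YEAR
lifetime_years = 3_000_000_000 / beats_per_year

print(beats_per_year)         # 36792000 beats per year
print(round(lifetime_years))  # 82 — roughly a modern human lifespan
```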

Ideas like “meaning” and “morality” and “purpose” are nowhere to be found in the Core Theory of quantum fields, the physics underlying our everyday lives. They aren’t built into the architecture of the universe; they emerge as ways of talking about our human-scale environment.

But there is a difference; the search for meaning is not another kind of science. In science we want to describe the world as efficiently and accurately as possible. The quest for a good life isn’t like that: it’s about evaluating the world, passing judgment on the way things are and could be. We want to be able to point to different possible events and say, “That’s a worthy goal to strive for,” or “That’s the way we ought to behave.” Science couldn’t care less about such judgments.

The source of these values isn’t the outside world; it’s inside us. We’re part of the world, but we’ve seen that the best way to talk about ourselves is as thinking, purposeful agents who can make choices. One of those choices, unavoidably, is what kind of life we want to live.

We’re not used to thinking that way. Our folk ontology treats meaning as something wholly different from the physical stuff of the world. It might be given by God, or inherent in life’s spiritual dimension, or part of a teleological inclination built into the universe itself, or part of an ineffable, transcendent aspect of reality. Poetic naturalism rejects all of those possibilities, and asks us to take the dramatic step of viewing meaning in the same way we view other concepts that human beings invent to talk about the universe.

Poetic naturalism offers no such escape from the demands of meeting life in a creative and individual way. It is about you: it’s up to you, me, and every other person to create meaning and purpose for ourselves. This can be a scary prospect, not to mention exhausting. We can decide that what we want is to devote ourselves to something larger—but that decision comes from us.

There are two legitimate worries about the idea that we construct meaning for our lives. The first worry is that it’s cheating. Maybe we are fooling ourselves if we think we can find fulfillment once we accept that we are part of the physical world, patterns of elementary particles beholden to the laws of physics. Sure, you can say you are leading a rich and rewarding life based on your love for your family and friends, your dedication to your craft, and your work to make the world a better place. But are you really? If the value we place in such things isn’t objectively determined, and if you won’t be around to witness any of it in a hundred years or so, how can you say your life truly matters?

This is just grumpiness talking. Say you love somebody, genuinely and fiercely. And let’s say you also believe in a higher spiritual power, and think of your love as a manifestation of that greater spiritual force. But you’re also an honest Bayesian, willing to update your credences in light of the evidence. Somehow, over the course of time, you accumulate a decisive amount of new information that shifts your planet of belief from spiritual to naturalist. You’ve lost what you thought was the source of your love—do you lose the love itself? Are you now obligated to think that the love you felt is now somehow illegitimate? No. Your love is still there, as pure and true as ever. How you would explain your feelings in terms of an underlying ontological vocabulary has changed, but you’re still in love. Water doesn’t stop being wet when you learn it’s a compound of hydrogen and oxygen. The same goes for purpose, meaning, and our sense of right and wrong. If you are moved to help those less fortunate than you, it doesn’t matter whether you are motivated by a belief that it’s God’s will, or by a personal conviction that it’s the right thing to do. Your values are no less real either way.

The second worry about creating meaning within ourselves is that there isn’t any place to start. If neither God nor the universe is going to help us attach significance to our actions, the whole project seems suspiciously arbitrary. But we do have a starting place: who we are. As living, thinking organisms, we are creatures of motion and motivation. At a basic, biological level, we are defined not by the atoms that make us up but by the dynamic patterns we trace out as we move through the world. The most important thing about life is that it occurs out of equilibrium, driven by the second law. To stay alive, we have to continually move, process information, and interact with our environment. In human terms, the dynamic nature of life manifests itself as desire. There is always something we want, even if what we want is to break free of the bonds of desire. That’s not a sustainable goal; to stay alive, we have to eat, drink, breathe, metabolize, and generally continue to ride the wave of increasing entropy. Desire has a bad reputation in certain circles, but that’s a bum rap. Curiosity is a form of desire; so are helpfulness and artistic drive. Desire is an aspect of caring: about ourselves, about other people, about what happens to the world.

When our lives are in good shape, and we are enjoying health and leisure, what do we do? We play. Once the basic requirements of food and shelter have been met, we immediately invent games and puzzles and competitions. That’s a lighthearted and fun manifestation of a deeper impulse: we enjoy challenging ourselves, accomplishing things, having something to show for our lives.

We are built from the start to care about the world, to make it matter.

The world, and what happens in the world, matters. Why? Because it matters to me. And to you.

The construction of meaning is a fundamentally individual, subjective, creative enterprise, and an intimidating responsibility. As Carl Sagan put it, “We are star stuff, which has taken its destiny into its own hands.”

The finitude of life lends poignancy to our situations. Each of us will have a last word we say, a last book we read, a last time we fall in love. At each moment, who we are and how we behave is a choice that we individually make. The challenges are real; the opportunities are incredible.

There isn’t anything outside the natural world to which we can turn for guidance about how to behave. The temptation to somehow extract such guidance from the natural world itself is incredibly strong. The natural world doesn’t pass judgment; it doesn’t provide guidance; it doesn’t know or care about what ought to happen.

ON LOGIC

  1. X is true.
  2. If X is true, then Y is true.
  3. Therefore, Y is true.

The first two statements in a syllogism are the premises of the argument, while the third statement is the conclusion. An argument is said to be valid if the conclusion follows logically from the premises. In contrast, an argument is said to be sound if the conclusion follows from the premises and the premises themselves are true—a much higher standard to achieve.

Consider: “Pineapples are reptiles. All reptiles eat cheese. Therefore, pineapples eat cheese.” Any logician will explain to you that this is a completely valid argument. But it’s not very sound.
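The validity/soundness distinction can be sketched in a few lines of code (my illustration, not from the book): validity is a property of the argument’s form alone, while soundness additionally requires every premise to be true.

```python
# An argument is SOUND only if it is VALID and all its premises are true.
# Validity is about form; soundness is about form plus truth.

def sound(valid: bool, premises: list) -> bool:
    """Return True only for a valid argument whose premises all hold."""
    return valid and all(premises)

# "Pineapples are reptiles. All reptiles eat cheese. Therefore,
# pineapples eat cheese." The form is a valid syllogism...
pineapple_valid = True
# ...but the first premise is false, so the argument is unsound.
pineapple_premises = [False, True]

print(sound(pineapple_valid, pineapple_premises))  # False

# Same form with true premises: valid AND sound.
print(sound(True, [True, True]))                   # True
```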

The lack of an ultimate objective scientific grounding for morality can be worrisome. It implies that people with whom we have moral disagreements—whether it’s Hitler, the Taliban, or schoolyard bullies who beat up smaller children—aren’t wrong in the same sense that it’s wrong to deny Darwinian evolution or the expansion of the universe.

As Abraham learned, having an absolute moral standard such as God can be extraordinarily challenging. But without God, there is no such standard, and that is challenging in its own way [Nietzsche!]. The dilemmas are still there, and we have to figure out a way to face them. Nature alone is no help, as we can’t extract ought from is; the universe doesn’t pass moral judgments. And yet we must live and act. We are collections of vibrating quantum fields, held together in persistent patterns by feeding off of ambient free energy according to impersonal and uncaring laws of nature, and we are also human beings who make choices and care about what happens to ourselves and to others. What’s the best way to think about how we should live?

Our ethical systems are things that are constructed by us human beings, not discovered out there in the world, and should be evaluated accordingly.

If deontology is about what you do, and consequentialism is about what happens, virtue ethics is about who you are. To a virtue ethicist, what matters isn’t so much how many people you save by diverting a trolley, or the intrinsic good of your actions; what matters is whether you made your decision on the basis of virtues such as courage, responsibility, and wisdom.

You’re telling me that judging right from wrong is just a matter of our personal feelings and preferences, grounded in nothing more substantial than our own views, with nothing external to back it up? That there are no objectively true moral facts out there in the world? Yes. But admitting that morality is constructed, rather than found lying on the street, doesn’t mean that there is no such thing as morality.

The idea that moral guidelines are things invented by human beings based on their subjective judgments and beliefs, rather than being grounded in anything external, is known as moral constructivism.

Hume: “Reason is, and ought only to be, the slave of the passions.” Reason, that is, can help us get what we want; but what we actually do want is defined by our passions.

A Kantian constructivist accepts that morality is constructed by human beings, but believes that every rational person would construct the same moral framework, if only they thought about it clearly enough. A Humean constructivist takes one more step: morality is constructed, and different people might very well construct different moral frameworks for themselves. Hume was right. We have no objective guidance on how to distinguish right from wrong: not from God, not from nature, not from the pure force of reason itself.

Judging what is good and what is not is a quintessentially human act, and we need to face up to that reality. Morality exists only insofar as we make it so, and other people might not pass judgments in the same way that we do.

So then, fellow humans. What kind of morality shall we construct? There is no unique answer to this question that applies equally well to all persons. But that shouldn’t stop each of us from doing the best we can to expand and articulate our own moral impulses into systematic positions.

We want to act in good ways; we want to make the world a better place; we want to be good people. But we also want to make sense and be internally consistent. That’s hard to do while accepting all of these competing impulses at once. In practice, moral philosophies tend to pick one approach and apply it universally. And as a result of that, we often end up with conclusions that don’t sit easily with the premises we started with.

Poetic naturalism refuses to offer us the consolation of objective moral certainty. There is no “right” answer to the trolley problem. How you should act depends on who you are.

The modern version of this worry is that, if we were to accept that morality is constructed, individuals will run around giving in to their worst instincts, and we would have no basis on which to condemn obviously bad things like the Holocaust. After all, somebody thought it was a good idea, and without objective guidance how can we say they were wrong? The constructivist answers that just because moral rules are invented by human beings, that doesn’t make them any less real. The rules of basketball are also invented by human beings, but once invented they really exist. 

Morality is like that: we invent the rules, but we invent them for sensible purposes. The problem arises when we imagine people whose purposes—whose foundational moral sentiments and commitments—are radically at odds with ours. What are we to do with someone who just wants to play hockey rather than basketball? In sports we might seek out different people to play with, but when it comes to morality we all have to live together here on this Earth.

Here in the early years of the twenty-first century, a majority of philosophers and scientists are naturalists. But in the public sphere, at least in the United States, on questions of morality and meaning, religion and spirituality are given a preeminent place. Our values have not yet caught up to our best ontology.

We don’t need an immovable place to stand; we need to make our peace with a universe that doesn’t care what we do, and take pride in the fact that we care anyway.

It makes sense, then, to put aside the concept of “commandments” and instead propose Ten Considerations: a list of things we think are true, that might be useful to keep in mind as we shape and experience our own ways of valuing and caring about our lives.

1. Life Isn’t Forever.

You don’t really want to live forever. Eternity is longer than you think. Life ends, and that’s part of what makes it special. What exists is here, in front of us, what we can see and touch and affect. Our lives are not dress rehearsals in which we plan and are tested in anticipation of the real show to come. This is it, the only performance we’re going to get to give, and it is what we make of it.

2. Desire Is Built into Life.

Life is characterized by motion and change, and these characteristics manifest themselves in human beings as forms of desire.

3. What Matters Is What Matters to People.

The universe doesn’t care about us, but we care about the universe. That’s what makes us special, not any immaterial souls or special purpose in the grand cosmic plan.

Whenever we ask ourselves whether something matters, the answer has to be found in whether it matters to some person or persons. We take the world and attach value to it, an achievement of which we can be justly proud.

4. We Can Always Do Better.

Understanding develops through the process of making mistakes. We make guesses about the world, test them against what we observe, learn more often than not that we were wrong, and try to improve our hypotheses. To err is human, and that’s about it.

Mathematical proofs can be perfect in their logic, but scientific discoveries are typically the conclusion of a long series of trials and errors. When it comes to valuing, caring, loving, and being good, perfection is even more of a chimera, since there isn’t even an objective standard against which to judge our successes. We nevertheless make progress, both at understanding the world and at living within it. It may seem strange to claim the existence of moral progress when there isn’t even an objective standard of morality, but that’s exactly what we find in human history. Progress comes, not from new discoveries in an imaginary science of morality, but from being more honest and rigorous with ourselves—from uncovering our rationalizations and justifications for behavior that, if we admit it, was pretty reprehensible from the start. Becoming better people is hard work, but by sifting through our biases and being open to new ideas, our ability to be good advances.

5. It Pays to Listen.

6. There Is No Natural Way to Be.

7. It Takes All Kinds.

If our lives are to have meaning and purpose, we are going to have to create them. And people are different, so they’re going to create different things. That’s a feature to be celebrated, not an annoyance to be eradicated.

We are faced with both an opportunity and a challenge. There is no single right way to live, an objectively best life out there to be discovered by reason or revelation. We have the opportunity to shape our lives in many ways, and count them as true and good.

8. The Universe Is in Our Hands.

9. We Can Do Better Than Happiness.

Think of Socrates, Jesus, Gandhi, Nelson Mandela. Or Michelangelo, Beethoven, Virginia Woolf. Is “happy” the first word that comes to mind when you set out to describe them? They may have been—and surely were, from time to time—but it’s not their defining characteristic. The mistake we make in putting emphasis on happiness is to forget that life is a process, defined by activity and motion, and to search instead for the one perfect state of being. There can be no such state, since change is the essence of life.

At the end of the day, or the end of your life, it doesn’t matter so much that you were happy much of the time. Wouldn’t you rather have a good story to tell?

10. Reality Guides Us.

Psychologists use the term “positive illusions” to describe beliefs people have that aren’t true but that make them happy.

Illusions can be pleasant, but the rewards of truth are enormously greater.

I tended to embrace the idea that science would eventually solve all of our problems, including answering questions about why we are here and how we should behave. The more I thought about it, the less sanguine I became about such a possibility; science describes the world, but what we’re going to do with that knowledge is a different matter.

All lives are different, and some face hardships that others will never know. But we all share the same universe, the same laws of nature, and the same fundamental task of creating meaning and of mattering for ourselves and those around us in the brief amount of time we have in the world. Three billion heartbeats. The clock is ticking.

Appendix: The Equation Underlying You and Me

[Ever wondered how you'd describe reality? Here you go:]

The essence of the Core Theory—the laws of physics underlying everyday life—expressed in a single equation. This equation is the quantum amplitude for undergoing a transition from one specified field configuration to another, expressed as a sum over all the paths that could possibly connect them.
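The book displays the equation itself as a figure; a schematic rendering of the general form in standard path-integral notation (my reconstruction, not a verbatim copy of Carroll’s equation) looks like this:

```latex
% Quantum amplitude as a sum over all field histories connecting
% an initial configuration to a final one (schematic):
\[
  \langle \phi_{\text{out}} \mid \phi_{\text{in}} \rangle
  \;=\; \int_{\phi_{\text{in}}}^{\phi_{\text{out}}} \mathcal{D}\phi \;
  e^{\,i S[\phi]/\hbar}
\]
% where the action S collects the Core Theory's ingredients:
\[
  S[\phi] \;=\; \int d^4x \,\big[\, \text{gravity}
  \;+\; \text{gauge fields} \;+\; \text{fermions} \;+\; \text{Higgs} \,\big]
\]
```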

The Core Theory: a quantum field theory describing the dynamics and interactions of a certain set of matter particles (fermions) and force particles (bosons), including both the standard model of particle physics and Einstein’s general theory of relativity (in the weak-gravity regime).

There are two kinds of quantum fields: fermions and bosons. Fermions are the particles of matter; they take up space, which helps explain the solidity of the ground beneath your feet or the chair you are sitting on. Bosons are the force-carrying particles; they can pile on top of one another, giving rise to macroscopic force fields like those of gravity and electromagnetism.

Fermions:

  1. Electron, muon, tau (electric charge −1).
  2. Electron neutrino, muon neutrino, tau neutrino (neutral).
  3. Up quark, charm quark, top quark (charge +2/3).
  4. Down quark, strange quark, bottom quark (charge −1/3).

Bosons:

  1. Graviton (gravity; spacetime curvature).
  2. Photon (electromagnetism).
  3. Eight gluons (strong nuclear force).
  4. W and Z bosons (weak nuclear force).
  5. Higgs boson.
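The particle content above fits in a small data structure (a sketch of my own summarizing the two lists; charges are in units of the proton charge):

```python
# The Core Theory's matter and force content, as listed above.
# Fermions are grouped into four families of three "generations" each.

FERMIONS = {
    # group: (particles, electric charge)
    "charged leptons":  (["electron", "muon", "tau"], -1),
    "neutrinos":        (["electron neutrino", "muon neutrino",
                          "tau neutrino"], 0),
    "up-type quarks":   (["up", "charm", "top"], 2 / 3),
    "down-type quarks": (["down", "strange", "bottom"], -1 / 3),
}

BOSONS = {
    "graviton":    "gravity (spacetime curvature)",
    "photon":      "electromagnetism",
    "gluons (x8)": "strong nuclear force",
    "W and Z":     "weak nuclear force",
    "Higgs":       "Higgs field",
}

# Twelve fermions in all, in four groups of three.
total = sum(len(names) for names, _ in FERMIONS.values())
print(total)  # 12
```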

Elementary particles (which are really vibrations of quantum fields)

Spin is an intrinsic property of elementary particles, not a literal revolution of a body around an axis.

The largest quantum probability gets associated with evolution that looks almost classical. That’s why our everyday world is well modeled by classical mechanics; it’s classical behavior that gives the largest contributions to the probability of quantum transitions.

We know that the Core Theory, and therefore this equation, can’t be the final story. There is dark matter in the universe, which doesn’t fit comfortably into any of the known fields.

Moreover, almost every physicist believes there are more particles and fields to be found, at higher masses and energies—but they must be ones that either interact with us very weakly (like dark matter) or decay away very quickly.

The Core Theory doesn’t even provide a complete theory of the fields that we know are there. That’s the problem, for example, with quantum gravity. The equation we wrote is okay if the gravitational field is very weak, but it doesn’t work when gravity becomes strong, such as near the Big Bang or inside a black hole.

As science continues to learn more about the universe, we will keep adding to it, and perhaps we will even find a more comprehensive theory underlying it that doesn’t refer to quantum field theory at all. But none of that will change the fact that the Core Theory is an accurate description of nature in its claimed domain. The fact that we have successfully put together such a theory is one of the greatest triumphs of human intellectual history.

