Author: Michael Lewis

ISBN: 978-0393354775

This book tells the story of how two psychologists, Daniel Kahneman and Amos Tversky, came to write a series of original papers that laid the foundations of behavioral economics. Kahneman went on to win the Nobel Prize, and their work is described in depth in Thinking, Fast and Slow (you can find it under Books).

EXCERPTS

In the early 1960s the psychologist Walter Mischel created these wonderfully simple tests on children that wound up revealing a lot about them. In what became known as the “marshmallow experiment,” Mischel put three-, four-, and five-year-old kids in a room alone with their favorite treat—a pretzel stick, a marshmallow—and told them that if they could last a few minutes without eating the treat they’d receive a second treat. A small child’s ability to wait turned out to be correlated with his IQ and his family circumstances and some other things as well. Tracking the kids through life, Mischel later found that the better a five-year-old resisted the temptation, the higher his future SAT scores and his sense of self-worth, and the lower his body fat and the likelihood he’d suffer from some addiction.

A lot of things that most human beings would never think to do, to Amos simply made sense. For instance, when he wanted to go for a run he . . . went for a run. No stretching, no jogging outfit or, for that matter, jogging: He’d simply strip off his slacks and sprint out his front door in his underpants and run as fast as he could until he couldn’t run anymore. “Amos thought people paid an enormous price to avoid mild embarrassment,”

“The nice thing about things that are urgent,” he liked to say, “is that if you wait long enough they aren’t urgent anymore.”

Amos would have decided, in the first five minutes, whether the movie was worth seeing—and if it wasn’t he’d just come home. He’d then go back and fetch his wife after her movie ended. “They’ve already taken my money,” he’d explain. “Should I give them my time, too?”

It never occurred to him [Amos Tversky] that anyone with whom he wanted to spend time wouldn’t want to spend time with him.

But Amos approached intellectual life strategically, as if it were an oil field to be drilled, and after two years of sitting through philosophy classes he announced that philosophy was a dry well. “I remember his words,” recalled Amnon. “He said, ‘There is nothing we can do in philosophy. Plato solved too many of the problems. We can’t have any impact in this area. There are too many smart guys and too few problems left, and the problems have no solutions.’”

Shore asked him how he had become a psychologist. “It’s hard to know how people select a course in life,” Amos said. “The big choices we make are practically random. The small choices probably tell us more about who we are. Which field we go into may depend on which high school teacher we happen to meet. Who we marry may depend on who happens to be around at the right time of life. On the other hand, the small decisions are very systematic. That I became a psychologist is probably not very revealing. What kind of psychologist I am may reflect deep traits.”

“Is this behavior irrational?” he wrote. “We tend to doubt it. . . . When faced with complex multidimensional alternatives, such as job offers, gambles or [political] candidates, it is extremely difficult to utilize properly all the available information.” It wasn’t that people actually preferred A to B and B to C and then turned around and preferred C to A. It was that it was sometimes very hard to understand the differences. Amos didn’t think that the real world was as likely to fool people into contradicting themselves as were the experiments he had designed.

“Similarity increases with the addition of common features and/or deletion of distinctive features.”

The idea was interesting: When people make decisions, they are also making judgments about similarity, between some object in the real world and what they ideally want. They make these judgments by, in effect, counting up the features they notice. And as the noticeability of features can be manipulated by the way they are highlighted, the sense of how similar two things are might also be manipulated. For instance, if you wanted two people to think of themselves as more similar to each other than they otherwise might, you might put them in a context that stressed the features they shared. Two American college students in the United States might look at each other and see a total stranger; the same two college students on their junior year abroad in Togo might find that they are surprisingly similar: They’re both Americans!

By changing the context in which two things are compared, you submerge certain features and force others to the surface.

“It is generally assumed that classifications are determined by similarities among the objects,” wrote Amos, before offering up an opposing view: that “the similarity of objects is modified by the manner in which they are classified. Thus, similarity has two faces: causal and derivative. It serves as a basis for the classification of objects, but is also influenced by the adopted classification.” A banana and an apple seem more similar than they otherwise would because we’ve agreed to call them both fruit. Things are grouped together for a reason, but, once they are grouped, their grouping causes them to seem more like each other than they otherwise would. That is, the mere act of classification reinforces stereotypes. If you want to weaken some stereotype, eliminate the classification.

Danny was then helping the Israeli Air Force to train fighter pilots. He’d noticed that the instructors believed that, in teaching men to fly jets, criticism was more useful than praise. They’d explained to Danny that he only needed to see what happened after they praised a pilot for having performed especially well, or criticized him for performing especially badly. The pilot who was praised always performed worse the next time out, and the pilot who was criticized always performed better. Danny watched for a bit and then explained to them what was actually going on: The pilot who was praised because he had flown exceptionally well, like the pilot who was chastised after he had flown exceptionally badly, was simply regressing to the mean. They’d have tended to perform better (or worse) even if the teacher had said nothing at all. An illusion of the mind tricked teachers—and probably many others—into thinking that their words were less effective when they gave pleasure than when they gave pain.
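A quick way to see the statistical point: the following is a minimal, purely illustrative simulation (the numbers, noise levels, and cutoffs are invented, not from the book). Performance is modeled as fixed skill plus random noise, and the pilots who do exceptionally well or badly on one flight drift back toward the average on the next, with no feedback at all.

```python
import random

random.seed(0)

# Each pilot has a fixed latent skill; each flight's score is skill plus independent noise.
pilots = [random.gauss(0, 1) for _ in range(10_000)]
flight1 = [s + random.gauss(0, 1) for s in pilots]   # observed score, flight 1
flight2 = [s + random.gauss(0, 1) for s in pilots]   # observed score, flight 2 (no feedback given)

best = [i for i, x in enumerate(flight1) if x > 1.5]     # the "praised" group
worst = [i for i, x in enumerate(flight1) if x < -1.5]   # the "criticized" group

avg = lambda idx, xs: sum(xs[i] for i in idx) / len(idx)
print("praised:    flight1 %.2f -> flight2 %.2f" % (avg(best, flight1), avg(best, flight2)))
print("criticized: flight1 %.2f -> flight2 %.2f" % (avg(worst, flight1), avg(worst, flight2)))
# The praised group gets worse and the criticized group gets better on average,
# purely because of regression to the mean.
```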

That was another thing colleagues and students noticed about Danny: how quickly he moved on from his enthusiasms, how easily he accepted failure. It was as if he expected it. But he wasn’t afraid of it. He’d try anything. He thought of himself as someone who enjoyed, more than most, changing his mind. “I get a sense of movement and discovery whenever I find a flaw in my thinking,” he said.

At which point one of Goldberg’s fellow Oregon researchers—Goldberg doesn’t recall which one—made a radical suggestion. “Someone said, ‘One of these models you built [to predict what the doctors were doing] might actually be better than the doctor,’” recalled Goldberg. “I thought, Oh, Christ, you idiot, how could that possibly be true?” How could their simple model be better at, say, diagnosing cancer than a doctor? The model had been created, in effect, by the doctors. The doctors had given the researchers all the information in it. The Oregon researchers went and tested the hypothesis anyway. It turned out to be true. If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor.
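The finding is easy to reproduce in miniature. Below is an illustrative sketch with synthetic data (the cues, weights, and noise levels are all invented assumptions, not the Oregon group's data): a "doctor" applies roughly the right weights to a handful of cues but does so inconsistently, and a linear model fitted to the doctor's own judgments ends up tracking the true outcome better than the doctor does.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 500, 5
cues = rng.normal(size=(n, k))                      # e.g., features read off an X-ray
true_weights = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
outcome = cues @ true_weights + rng.normal(scale=1.0, size=n)   # what actually happens

# The "doctor" uses roughly the right weights but applies them inconsistently.
doctor_judgment = cues @ true_weights + rng.normal(scale=2.0, size=n)

# Build a linear model OF THE DOCTOR: regress the doctor's own judgments on the cues.
w, *_ = np.linalg.lstsq(cues, doctor_judgment, rcond=None)
model_judgment = cues @ w

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("doctor vs outcome:", round(corr(doctor_judgment, outcome), 3))
print("model  vs outcome:", round(corr(model_judgment, outcome), 3))
# The model of the doctor tends to beat the doctor, because it applies the
# doctor's own policy consistently and strips out the inconsistency.
```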

The model captured their theory of how to best diagnose an ulcer. But in practice they did not abide by their own ideas of how to best diagnose an ulcer. As a result, they were beaten by their own model.

If they both committed the same mental errors, or were tempted to commit them, they assumed—rightly, as it turned out—that most other people would commit them, too.

In these and many other uncertain situations, the mind did not naturally calculate the correct odds. So what did it do? The answer they now offered: It replaced the laws of chance with rules of thumb. These rules of thumb Danny and Amos called “heuristics.”

The more easily people can call some scenario to mind—the more available it is to them—the more probable they find it to be. Any fact or incident that was especially vivid, or recent, or common—or anything that happened to preoccupy a person—was likely to be recalled with special ease, and so be disproportionately weighted in any judgment.

The point, once again, wasn’t that people were stupid. This particular rule they used to judge probabilities (the easier it is for me to retrieve from my memory, the more likely it is) often worked well. But if you presented people with situations in which the evidence they needed to judge them accurately was hard for them to retrieve from their memories, and misleading evidence came easily to mind, they made mistakes.

Here, clearly, was another source of error: not just that people don’t know what they don’t know, but that they don’t bother to factor their ignorance into their judgments.

“There is much evidence showing that, once an uncertain situation has been perceived or interpreted in a particular fashion, it is quite difficult to view it in any other way.”

A human being who finds himself stuck at some boring meeting or cocktail party often finds it difficult to invent an excuse to flee. Amos’s rule, whenever he wanted to leave any gathering, was to just get up and leave. Just start walking and you’ll be surprised how creative you will become and how fast you’ll find the words for your excuse, he said.

Unless you are kicking yourself once a month for throwing something away, you are not throwing enough away, he said.

What Amos and Danny suspected—because they had tested it first on themselves—is that people would essentially leap from the similarity judgment (“that guy sounds like a computer scientist!”) to some prediction (“that guy must be a computer scientist!”) and ignore both the base rate (only 7 percent of all graduate students were computer scientists) and the dubious reliability of the character sketch.

They told their subjects that they had picked a person from a pool of 100 people, 70 of whom were engineers and 30 of whom were lawyers. Then they asked them: What is the likelihood that the selected person is a lawyer? The subjects correctly judged it to be 30 percent. And if you told them that you were doing the same thing, but from a pool that had 70 lawyers in it and 30 engineers, they said, correctly, that there was a 70 percent chance the person you’d plucked from it was a lawyer. But if you told them you had picked not just some nameless person but a guy named Dick, and read them Danny’s description of Dick—which contained no information whatsoever to help you guess what Dick did for a living—they guessed there was an equal chance that Dick was a lawyer or an engineer, no matter which pool he had emerged from. “Evidently, people respond differently when given no specific evidence and when given worthless evidence,” wrote Danny and Amos. “When no specific evidence is given, the prior probabilities are properly utilized; when worthless specific evidence is given, prior probabilities are ignored.”
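The normative calculation the subjects should have made is just Bayes' rule. A minimal sketch (the 0.5 likelihoods encode "worthless evidence": a description that fits lawyers and engineers equally well):

```python
def posterior_lawyer(prior_lawyer, likelihood_given_lawyer, likelihood_given_engineer):
    """Bayes' rule for P(lawyer | description)."""
    p_l = prior_lawyer * likelihood_given_lawyer
    p_e = (1 - prior_lawyer) * likelihood_given_engineer
    return p_l / (p_l + p_e)

# Worthless evidence: the description of "Dick" is equally likely for either profession,
# so the likelihoods cancel and the answer should simply be the base rate.
print(posterior_lawyer(0.30, 0.5, 0.5))   # 0.30, not 0.50
print(posterior_lawyer(0.70, 0.5, 0.5))   # 0.70, not 0.50
```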

There was much more to “On the Psychology of Prediction”—for instance, they showed that the very factors that caused people to become more confident in their predictions also led those predictions to be less accurate.

People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way.

Their memories of the odds they had assigned to various outcomes were badly distorted. They all believed that they had assigned higher probabilities to what happened than they actually had. They greatly overestimated the odds that they had assigned to what had actually happened. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon “hindsight bias.”

All too often, we find ourselves unable to predict what will happen; yet after the fact we explain what did happen with a great deal of confidence. This “ability” to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning. It leads us to believe that there is a less uncertain world than there actually is, and that we are less bright than we actually might be. For if we can explain tomorrow what we cannot predict today, without any added information except the knowledge of the actual outcome, then this outcome must have been determined in advance and we should have been able to predict it. The fact that we couldn’t is taken as an indication of our limited intelligence rather than of the uncertainty that is in the world. All too often, we feel like kicking ourselves for failing to foresee that which later appears inevitable. For all we know, the handwriting might have been on the wall all along. The question is: was the ink visible?

“He who sees the past as surprise-free is bound to have a future full of surprises.”

“Hyperthyroidism is a classic cause of an irregular heart rhythm, but hyperthyroidism is an infrequent cause of an irregular heart rhythm.” Hearing that the young woman had a history of excess thyroid hormone production, the emergency room medical staff had leaped, with seeming reason, to the assumption that her overactive thyroid had caused the dangerous beating of her heart. They hadn’t bothered to consider statistically far more likely causes of an irregular heartbeat. In Redelmeier’s experience, doctors did not think statistically. “Eighty percent of doctors don’t think probabilities apply to their patients,” he said. “Just like 95 percent of married couples don’t believe the 50 percent divorce rate applies to them, and 95 percent of drunk drivers don’t think the statistics that show that you are more likely to be killed if you are driving drunk than if you are driving sober applies to them.” Redelmeier asked the emergency room staff to search for other, more statistically likely causes of the woman’s irregular heartbeat. That’s when they found her collapsed lung. Like her fractured ribs, her collapsed lung had failed to turn up on the X-ray. Unlike the fractured ribs, it could kill her. Redelmeier ignored the thyroid and treated the collapsed lung. The young woman’s heartbeat returned to normal. The next day, her formal thyroid tests came back: Her thyroid hormone production was perfectly normal. Her thyroid never had been the issue. “It was a classic case of the representativeness heuristic,”

“You need to be so careful when there is one simple diagnosis that instantly pops into your mind that beautifully explains everything all at once. That’s when you need to stop and check your thinking.”

It wasn’t that what first came to mind was always wrong; it was that its existence in your mind led you to feel more certain than you should be that it was correct.

Specifically, given a choice between a sure gain and a bet with the same expected value (say, $100 for sure or a 50-50 shot at winning $200), Amos had explained to Hal Sox, people tended to take the sure thing. A bird in the hand. But, given the choice between a sure loss of $100 and a 50-50 shot of losing $200, they took the risk.
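The arithmetic behind the choice, spelled out with nothing beyond the numbers in the excerpt: a risk-neutral decision maker would be indifferent within each pair, while people reliably are not.

```python
# Expected values of the four options; only the framing (gain vs. loss) differs.
sure_gain   = 100
gamble_gain = 0.5 * 200 + 0.5 * 0      # 100: same expected value as the sure gain
sure_loss   = -100
gamble_loss = 0.5 * -200 + 0.5 * 0     # -100: same expected value as the sure loss
print(sure_gain, gamble_gain, sure_loss, gamble_loss)
```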

Both doctors and patients made the same choices differently when those choices were framed in terms of losses rather than gains.

Lung cancer proved to be a handy example. Lung cancer doctors and patients in the early 1980s faced two unequally unpleasant options: surgery or radiation. Surgery was more likely to extend your life, but, unlike radiation, it came with the small risk of instant death. When you told people that they had a 90 percent chance of surviving surgery, 82 percent of patients opted for surgery. But when you told them that they had a 10 percent chance of dying from the surgery—which was of course just a different way of putting the same odds—only 54 percent chose the surgery.

People facing a life-and-death decision responded not to the odds but to the way the odds were described to them. And not just patients; doctors did it, too.

In treating individual patients, doctors often did things they would disapprove of if they were creating a public policy to treat groups of patients with the exact same illness.

People had incredible ability to see meaning in these patterns where none existed.

Funny things happened when you did this with people. Their memory of pain was different from their experience of it. They remembered moments of maximum pain, and they remembered, especially, how they felt the moment the pain ended. But they didn’t particularly remember the length of the painful experience. If you stuck people’s arms in ice buckets for three minutes but warmed the water just a bit for another minute or so before allowing them to flee the lab, they remembered the experience more fondly than if you stuck their arms in the bucket for three minutes and removed them at a moment of maximum misery. If you asked them to choose one experiment to repeat, they’d take the first session. That is, people preferred to endure more total pain so long as the experience ended on a more pleasant note.
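Kahneman later formalized this as the "peak-end rule": remembered pain is predicted well by the average of the worst moment and the final moment, and hardly at all by duration. A small illustrative sketch (the pain ratings are invented):

```python
# Two cold-pressor sessions (pain ratings per minute, 0-10); values are invented for illustration.
short_trial = [6, 7, 8]            # 3 minutes, ends at peak misery
long_trial  = [6, 7, 8, 5]         # same 3 minutes plus a slightly warmer 4th minute

total_pain = sum                                   # experienced ("objective") pain
peak_end   = lambda t: (max(t) + t[-1]) / 2        # peak-end rule for remembered pain

print(total_pain(short_trial), total_pain(long_trial))   # 21 vs 26: the long trial hurts more overall
print(peak_end(short_trial), peak_end(long_trial))       # 8.0 vs 6.5: the long trial is remembered as milder
```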

“Last impressions can be lasting impressions.”

Amos wasn’t a thrill seeker, exactly, but he had strong, almost childlike passions that, every so often, he allowed to grab hold of him and take him places most people would never wish to go.

“That was the moment I gave up on decision analysis,” said Danny. “No one ever made a decision because of a number. They need a story.”

Gamblers accepted bets with negative expected values; if they didn’t, casinos wouldn’t exist. And people bought insurance, paying premiums that exceeded their expected losses; if they didn’t, insurance companies would have no viable business.

The understanding of any decision had to account not just for the financial consequences but for the emotional ones, too.

“It is the anticipation of regret that affects decisions, along with the anticipation of other consequences.” Danny thought that people anticipated regret, and adjusted for it, in a way they did not anticipate or adjust for other emotions.

When they made decisions, people did not seek to maximize utility. They sought to minimize regret.

“The pain that is experienced when the loss is caused by an act that modified the status quo is significantly greater than the pain that is experienced when the decision led to the retention of the status quo,”

The nearer you came to achieving a thing, the greater the regret you experienced if you failed to achieve it.

Regret was closely linked to feelings of responsibility. The more control you felt you had over the outcome of a gamble, the greater the regret you experienced if the gamble turned out badly.

Danny and Amos agreed that there was a real-world equivalent of a “sure thing”: the status quo. The status quo was what people assumed they would get if they failed to take action. “Many instances of prolonged hesitation, and of continued reluctance to take positive action, should probably be explained in this fashion,”

They played around with the idea that the anticipation of regret might play an even greater role in human affairs than it did if people could somehow know what would have happened if they had chosen differently. “The absence of definite information concerning the outcomes of actions one has not taken is probably the single most important factor that keeps regret in life within tolerable bounds,” Danny wrote. “We can never be absolutely sure that we would have been happier had we chosen another profession or another spouse. . . . Thus, we are often protected from painful knowledge concerning the quality of our decisions.”

But what was this thing that everyone had been calling “risk aversion”? It amounted to a fee that people paid, willingly, to avoid regret: a regret premium.

Expected utility theory wasn’t exactly wrong. It simply did not understand itself, to the point where it could not defend itself against seeming contradictions. The theory’s failure to explain people’s decisions, Danny and Amos wrote, “merely demonstrates what should perhaps be obvious, that non-monetary consequences of decisions cannot be neglected, as they all too often are, in applications of utility theory.”

Today Jack and Jill each have a wealth of 5 million. Yesterday, Jack had 1 million and Jill had 9 million. Are they equally happy? (Do they have the same utility?) Of course they weren’t equally happy. Jill was distraught and Jack was elated. Even if you took a million away from Jack and left him with less than Jill, he’d still be happier than she was. In people’s perceptions of money, as surely as in their perception of light and sound and the weather and everything else under the sun, what mattered was not the absolute levels but changes. People making choices, especially choices between gambles for small sums of money, made them in terms of gains and losses; they weren’t thinking about absolute levels.

When you gave a person a choice between a gift of $500 and a 50-50 shot at winning $1,000, he picked the sure thing. Give that same person a choice between losing $500 for sure and a 50-50 risk of losing $1,000, and he took the bet. He became a risk seeker. The odds that people demanded to accept a certain loss over the chance of some greater loss crudely mirrored the odds they demanded to forgo a certain gain for the chance of a greater gain. For example, to get people to prefer a 50-50 chance of $1,000 over some certain gain, you had to lower the certain gain to around $370. To get them to prefer a certain loss to a 50-50 chance of losing $1,000, you had to lower the loss to around $370. [We experience pain (loss) about 2.7x more intensely than pleasure (gain)!]
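A sketch of how a prospect-theory-style value function reproduces those numbers. The functional form and parameters below are illustrative assumptions (the exponent is chosen to roughly hit the $370 figure; the loss-aversion coefficient follows Tversky and Kahneman's later estimate of about 2.25): the same curvature that makes a sure ~$370 feel as good as a 50-50 shot at $1,000 makes a sure ~$370 loss feel as bad as a 50-50 shot at losing $1,000.

```python
def value(x, alpha=0.70, lam=2.25):
    """Prospect-theory-style value function; parameters are illustrative assumptions."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def certainty_equivalent(outcome, p=0.5, alpha=0.70, lam=2.25):
    """Sure amount whose subjective value equals that of a p chance of `outcome`."""
    v = p * value(outcome, alpha, lam)
    if v >= 0:
        return v ** (1 / alpha)
    return -((-v / lam) ** (1 / alpha))

print(certainty_equivalent(1000))    # ~371: a sure ~$370 feels as good as a 50-50 shot at $1,000
print(certainty_equivalent(-1000))   # ~-371: a sure ~$370 loss feels as bad as a 50-50 shot at losing $1,000
```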

When choosing between sure things and gambles, people’s desire to avoid loss exceeded their desire to secure gain.

“For most people, the happiness involved in receiving a desirable object is smaller than the unhappiness involved in losing the same object.”

As they sorted through the implications of their new discovery, one thing was instantly clear: Regret had to go, at least as a theory. It might explain why people made seemingly irrational decisions to accept a sure thing over a gamble with a far greater expected value. It could not explain why people facing losses became risk seeking. Anyone who wanted to argue that regret explains why people prefer a certain $500 to an equal chance to get $0 and $1,000 would never be able to explain why, if you simply subtracted $1,000 from all the numbers and turned the sure thing into a $500 loss, people would prefer the gamble.

There were three raisins in the new theory. The first was the realization that people responded to changes rather than absolute levels. The second was the discovery that people approached risk very differently when it involved losses than when it involved gains. Exploring people’s responses to specific gambles, they found a third raisin: People did not respond to probability in a straightforward manner. Amos and Danny already knew, from their thinking about regret, that in gambles that offered a certain outcome, people would pay dearly for that certainty. Now they saw that people reacted differently to different degrees of uncertainty. When you gave them one bet with a 90 percent chance of working out and another with a 10 percent chance of working out, they did not behave as if the first was nine times as likely to work out as the second. They made some internal adjustment, and acted as if a 90 percent chance was actually slightly less than a 90 percent chance, and a 10 percent chance was slightly more than a 10 percent chance. They responded to probabilities not just with reason but with emotion.

Whatever that emotion was, it became stronger as the odds became more remote. If you told them that there was a one-in-a-billion chance that they’d win or lose a bunch of money, they behaved as if the odds were not one in a billion but one in ten thousand. They feared a one-in-a-billion chance of loss more than they should and attached more hope to a one-in-a-billion chance of gain than they should. People’s emotional response to extremely long odds led them to reverse their usual taste for risk, and to become risk seeking when pursuing a long-shot gain and risk avoiding when faced with the extremely remote possibility of loss. (Which is why they bought both lottery tickets and insurance.)
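This is the decision-weight idea. The function below is the probability weighting curve from Tversky and Kahneman's later (1992) cumulative prospect theory, with an illustrative parameter; it is not spelled out in this book, but it captures the pattern described here: small probabilities are overweighted, large ones underweighted, and the distortion is most dramatic at the extremes.

```python
def weight(p, gamma=0.61):
    """Probability weighting function of the form used in cumulative prospect theory
    (Tversky & Kahneman, 1992); gamma is an illustrative parameter."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

for p in (1e-9, 0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"stated {p:<8} felt as {weight(p):.6f}")
# A stated 10% chance is felt as roughly 19%, a stated 90% as roughly 71%,
# and a one-in-a-billion chance as something orders of magnitude larger.
```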

It was as easy to get people to take risks as it was to get them to avoid them. All you had to do was present them with a choice that involved a loss.

A loss, according to the theory, was when a person wound up worse off than his “reference point.” But what was this reference point? The easy answer was: wherever you started from. Your status quo. A loss was just when you ended up worse than your status quo. But how did you determine any person’s status quo? “In the experiments it’s pretty clear what a loss is,” Arrow said later. “In the real world it’s not so clear.” The reference point was a state of mind.

The reference point—the point that enabled you to distinguish between a gain and a loss— wasn’t some fixed number. It was a psychological state. “What constitutes a gain or a loss depends on the representation of the problem and on the context in which it arises,” the first draft of “Value Theory” rather loosely explained. “We propose that the present theory applies to the gains and losses as perceived by the subject.”

Danny and Amos were trying to show that people faced with a risky choice failed to put it in context. They evaluated it in isolation. In exploring what they now called the isolation effect, Amos and Danny had stumbled upon another idea—and its real-world implications were difficult to ignore. This one they called “framing.” Simply by changing the description of a situation, and making a gain seem like a loss, you could cause people to completely flip their attitude toward risk, and turn them from risk avoiding to risk seeking.

The Asian Disease Problem was actually two problems, which they gave, separately, to two different groups of subjects innocent of the power of framing. The first group got this problem:

Problem 1. Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs is as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. Which of the two programs would you favor?

An overwhelming majority chose Program A, and saved 200 lives with certainty. The second group got the same setup but with a choice between two other programs: If Program C is adopted, 400 people will die. If Program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die. When the choice was framed this way, an overwhelming majority chose Program D. The two problems were identical, but, in the first case, when the choice was framed as a gain, the subjects elected to save 200 people for sure (which meant that 400 people would die for sure, though the subjects weren’t thinking of it that way). In the second case, with the choice framed as a loss, they did the reverse, and ran the risk that they’d kill everyone.
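The arithmetic behind the claim that the two problems were identical, checked with only the numbers in the excerpt:

```python
# Expected number of people saved, out of 600, under each program.
program_a = 200                            # "200 people will be saved"
program_b = (1/3) * 600 + (2/3) * 0        # 200 saved in expectation
program_c = 600 - 400                      # "400 people will die" = 200 saved
program_d = (1/3) * 600 + (2/3) * 0        # 200 saved in expectation, identical to B
print(program_a, program_b, program_c, program_d)   # A matches C, B matches D; only the wording differs
```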

People did not choose between things. They chose between descriptions of things.

The endowment effect was a psychological idea with economic consequences. People attached some strange extra value to whatever they happened to own, simply because they owned it, and so proved surprisingly reluctant to part with their possessions, or endowments, even when trading them made economic sense.

Why were people so slow to sell vacation homes that, if they hadn’t bought them in the first place and were offered them now, they would never buy?

Why were investors so reluctant to sell stocks that had fallen in value, even when they admitted that they would never buy those stocks at their current market prices? There was no end of things people did that economic theory had trouble explaining. “When you start looking for the endowment effect,” Thaler said, “you see it everywhere.”

“Then I realized: They had one idea. Which was systematic bias.” If people could be systematically wrong, their mistakes couldn’t be ignored. The irrational behavior of the few would not be offset by the rational behavior of the many. People could be systematically wrong, and so markets could be systematically wrong, too.

In a series of famous papers, Hobson had landed body blows on the Freudian idea that dreams arose from unconscious desires, by showing that they actually came from a part of the brain that had nothing to do with desire. He’d proven that the timing and the length of dreams were regular and predictable, which suggested that dreams had less to say about a person’s psychological state than about his nervous system. Among other things, Hobson’s research suggested that people who paid psychoanalysts to find meaning in their unconscious states were wasting their money.

Danny now had an idea that there might be a fourth heuristic—to add to availability, representativeness, and anchoring. “The simulation heuristic,” he’d eventually call it, and it was all about the power of unrealized possibilities to contaminate people’s minds. As they moved through the world, people ran simulations of the future.

He asked people: Which is more likely to happen in the next year, that a thousand Americans will die in a flood, or that an earthquake in California will trigger a massive flood that will drown a thousand Americans? People went with the earthquake. The force that led human judgment astray in this case was what Danny and Amos had called “representativeness,” or the similarity between whatever people were judging and some model they had in their mind of that thing.
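The normative point here is the conjunction rule: a more detailed scenario (an earthquake that triggers a flood) can never be more probable than the broader event it is contained in (a flood). A tiny sketch with invented numbers:

```python
# Invented, illustrative probabilities; the inequality holds for any values.
p_flood_kills_1000 = 0.01                       # some flood kills 1,000 Americans
p_quake_given_that_flood = 0.2                  # ...and it was triggered by a California earthquake
p_quake_and_flood = p_flood_kills_1000 * p_quake_given_that_flood
assert p_quake_and_flood <= p_flood_kills_1000  # the conjunction can never be the more likely event
print(p_flood_kills_1000, p_quake_and_flood)
```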

Any prediction, for instance, could be made to seem more believable, even as it became less likely, if it was filled with internally consistent details. And any lawyer could at once make a case seem more persuasive, even as he made the truth of it less likely, by adding “representative” details to his description of people and events.

Danny and Amos had explained repeatedly that the rules of thumb that the mind used to cope with uncertainty often worked well. But sometimes they didn’t; and these specific failures were both interesting in and of themselves and revealing about the mind’s inner workings.

The mind, when it dealt with uncertain situations, was like a Swiss Army knife. It was a good enough tool for most jobs required of it, but not exactly suited to anything—and certainly not fully “evolved.”

what was now being called “choice architecture.” The decisions people made were driven by the way they were presented. People didn’t simply know what they wanted; they took cues from their environment. They constructed their preferences. And they followed paths of least resistance, even when they paid a heavy price for it. Millions of U.S. corporate and government employees had woken up one day during the 2000s and found they no longer needed to enroll themselves in retirement plans but instead were automatically enrolled. They probably never noticed the change. But that alone caused the participation in retirement plans to rise by roughly 30 percentage points. Such was the power of choice architecture. One tweak to the society’s choice architecture made by Sunstein, once he’d gone to work in the U.S. government, was to smooth the path between homeless children and free school meals. In the school year after he left the White House, about 40 percent more poor kids ate free school lunches than had done so before, back when they or some adult acting on their behalf had to take action and make choices to get them.

A part of good science is to see what everyone else can see but think what no one else has ever said.

“He said, ‘Once I did that, I felt obliged to keep this image of hero. I did that, now I have to live up to it.’”
