# “Thinking in Bets” - Annie Duke

Over time, those world-class poker players taught me to understand what a bet really is: a decision about an uncertain future. The implications of treating decisions as bets made it possible for me to find learning opportunities in uncertain environments. Treating decisions as bets, I discovered, helped me avoid common decision traps, learn from results in a more rational way, and keep emotions out of the process as much as possible.

Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.

Pete Carroll was a victim of our tendency to equate the quality of a decision with the quality of its outcome. Poker players have a word for this: “resulting.” When I started playing poker, more experienced players warned me about the dangers of resulting, cautioning me to resist the temptation to change my strategy just because a few hands didn’t turn out well in the short run.

When I consult with executives, I sometimes start with this exercise. I ask group members to come to our first meeting with a brief description of their best and worst decisions of the previous year. I have yet to come across someone who doesn’t identify their best and worst results rather than their best and worst decisions.

Note: Exercise - bias

He was not only resulting but also succumbing to its companion, hindsight bias.

Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable.

In the exercise I do of identifying your best and worst decisions, I never seem to come across anyone who identifies a bad decision where they got lucky with the result, or a well-reasoned decision that didn’t pan out.

No sober person thinks getting home safely after driving drunk reflects a good decision or good driving ability.

Seeking certainty helped keep us alive all this time, but it can wreak havoc on our decisions in an uncertain world. When we work backward from results to figure out why those things happened, we are susceptible to a variety of cognitive traps, like assuming causation when there is only a correlation, or cherry-picking data to confirm the narrative we prefer. We will pound a lot of square pegs into round holes to maintain the illusion of a tight relationship between our outcomes and our decisions.

The challenge is not to change the way our brains operate but to figure out how to work within the limitations of the brains we already have.

Our goal is to get our reflexive minds to execute on our deliberative minds’ best intentions.

Poker, in contrast, is a game of incomplete information. It is a game of decision-making under conditions of uncertainty over time. (Not coincidentally, that is close to the definition of game theory.) Valuable information remains hidden. There is also an element of luck in any outcome. You could make the best possible decision at every point and still lose the hand, because you don’t know what new cards will be dealt and revealed. Once the game is finished and you try to learn from the results, separating the quality of your decisions from the influence of luck is difficult.

Note: Definition of business decision

The quality of our lives is the sum of decision quality and luck.

Now if that person flipped the coin 10,000 times, giving us a sufficiently large sample size, we could figure out, with some certainty, whether the coin is fair. Four flips simply isn’t enough to determine much about the coin.
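
As a rough illustration (not from the book), here is a minimal Python sketch of the sample-size point: it flips a coin with a hidden 60% heads bias 4 times and 10,000 times, then compares the observed heads rate against a simple 95% margin of error for a fair coin. The bias, seed, and margin formula are my assumptions for the example.

```python
import random

def flip(n, p_heads=0.6):
    """Flip a coin with the given (hidden) bias n times; return the observed heads rate."""
    return sum(random.random() < p_heads for _ in range(n)) / n

random.seed(1)

for n in (4, 10_000):
    observed = flip(n)
    # Rough 95% margin of error for a fair coin's heads rate over n flips
    # (normal approximation: 1.96 * sqrt(0.5 * 0.5 / n)).
    margin = 1.96 * (0.25 / n) ** 0.5
    looks_fair = abs(observed - 0.5) <= margin
    print(f"{n:>6} flips: observed heads rate {observed:.3f}, "
          f"fair-coin margin ±{margin:.3f}, looks fair? {looks_fair}")
```

With only 4 flips the fair-coin margin is about ±0.49, so nearly any result is indistinguishable from fair; with 10,000 flips it shrinks to roughly ±0.01 and the 60% bias stands out clearly.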

“I’m not sure”: using uncertainty to our advantage

Note: The correct answer in many cases

But getting comfortable with “I’m not sure” is a vital step to being a better decision-maker. We have to make peace with not knowing.

What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of “I’m not sure.”

There are many reasons why wrapping our arms around uncertainty and giving it a big hug will help us become better decision-makers. Here are two of them. First, “I’m not sure” is simply a more accurate representation of the world. Second, and related, when we accept that we can’t be sure, we are less likely to fall into the trap of black-and-white thinking.

When we think in advance about the chances of alternative outcomes and make a decision based on those chances, it doesn’t automatically make us wrong when things don’t work out. It just means that one event in a set of possible futures occurred.

Any prediction that is not 0% or 100% can’t be wrong solely because the most likely future doesn’t unfold.

Decisions are bets on the future, and they aren’t “right” or “wrong” based on whether they turn out well on any particular iteration.

Poker teaches that lesson. A great poker player who has a good-size advantage over the other players at the table, making significantly better strategic decisions, will still be losing over 40% of the time at the end of eight hours of play. That’s a whole lot of wrong. And it’s not just confined to poker.

Whenever we choose an alternative (whether it is taking a new job or moving to Des Moines for a month), we are automatically rejecting every other possible choice. All those rejected alternatives are paths to possible futures where things could be better or worse than the path we chose. There is potential opportunity cost in any choice we forgo.

By treating decisions as bets, poker players explicitly recognize that they are deciding on alternative futures, each with benefits and risks. They also recognize there are no simple answers. Some things are unknown or unknowable.

In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing.

Our beliefs drive the bets we make: which brands of cars better retain their value, whether critics knew what they were talking about when they panned a movie we are thinking about seeing, how our employees will behave if we let them work from home.

This is ultimately very good news: part of the skill in life comes from learning to be a better belief calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world. The more accurate our beliefs, the better the foundation of the bets we make.

When I speak at professional conferences, I will occasionally bring up the subject of belief formation by asking the audience a question: “Who here knows how you can predict if a man will go bald?” People will raise their hands, I’ll call on someone, and they’ll say, “You look at the maternal grandfather.” Everyone nods in agreement. I’ll follow up by asking, “Does anyone know how you calculate a dog’s age in human years?” I can practically see audience members mouthing, “Multiply by seven.” Both of these widely held beliefs aren’t actually accurate. If you search online for “common misconceptions,” the baldness myth is at the top of most lists. As Medical Daily explained in 2015, “a key gene for baldness is on the X chromosome, which you get from your mother” but “it is not the only genetic factor in play since men with bald fathers have an increased chance of going bald when compared to men whose fathers have a full set of hair. . . . [S]cientists say baldness anywhere in your family may be a sign of your own impending fate.”

Note: Activity. Our beliefs about something are often wrong

This is how we think we form abstract beliefs: we hear something; we think about it and vet it, determining whether it is true or false; only after that do we form our belief. It turns out, though, that we actually form abstract beliefs this way: we hear something; we believe it to be true; only sometimes, later, if we have the time or the inclination, do we think about it and vet it, determining whether it is, in fact, true or false.

“Findings from a multitude of research literatures converge on a single point: People are credulous creatures who find it very easy to believe and very difficult to doubt.”

Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

Flaws in forming and updating beliefs have the potential to snowball. Once a belief is lodged, it becomes difficult to dislodge.

This irrational, circular information-processing pattern is called motivated reasoning. The way we process new information is driven by the beliefs we hold, strengthening them. Those strengthened beliefs then drive how we process further information, and so on.

Disinformation is different than fake news in that the story has some true elements, embellished to spin a particular narrative. Fake news works because people who already hold beliefs consistent with the story generally won’t question the evidence. Disinformation is even more powerful because the confirmable facts in the story make it feel like the information has been vetted, adding to the power of the narrative being pushed.

Fake news isn’t meant to change minds. As we know, beliefs are hard to change. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them.

How we form beliefs, and our inflexibility about changing our beliefs, has serious consequences because we bet on those beliefs. Every bet we make in our lives depends on our beliefs:

Surprisingly, being smart can actually make bias worse. Let me give you a different intuitive frame: the smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view. After all, people in the “spin room” in a political setting are generally pretty smart for a reason.

Note: Interesting. Need to watch my thinking here.

In 2012, psychologists Richard West, Russell Meserve, and Keith Stanovich tested the blind-spot bias—an irrationality where people are better at recognizing biased reasoning in others but are blind to bias in themselves.

The surprise is that blind-spot bias is greater the smarter you are.

When the researchers kept the data the same but substituted “concealed-weapons bans” for “skin treatment” and “crime” for “rashes,” now the subjects’ opinions on those topics drove how subjects analyzed the exact same data. Subjects who identified as “Democrat” or “liberal” interpreted the data in a way supporting their political belief (gun control reduces crime). The “Republican” or “conservative” subjects interpreted the same data to support their opposing belief (gun control increases crime).

Note: Neutral subject was skin cream. Same data presented but driven by belief system

It turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.

We can train ourselves to view the world through the lens of “Wanna bet?”

Samuel Arbesman’s The Half-Life of Facts is a great read about how practically every fact we’ve ever known has been subject to revision or reversal.

Note: Book: Read

What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten?

So instead of saying to ourselves, “Citizen Kane won the Oscar for best picture,” we would say, “I think Citizen Kane won the Oscar for best picture but I’m only a six on that.” Or “I’m 60% that Citizen Kane won the Oscar for best picture.”

Note: Perhaps use this In experiments

We are less likely to succumb to motivated reasoning since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from “right” to “wrong.” When confronted with new evidence, it is a very different narrative to say, “I was 58% but now I’m 46%.” That doesn’t feel nearly as bad as “I thought I was right but now I’m wrong.” Our narrative of being a knowledgeable, educated, intelligent person who holds quality opinions isn’t compromised when we use new information to calibrate our beliefs, compared with having to make a full-on reversal. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.

Expressing our level of confidence also invites people to be our collaborators.

By saying “I’m 80%” and thereby communicating we aren’t sure, we open the door for others to tell us what they know.

Acknowledging that decisions are bets based on our beliefs, getting comfortable with uncertainty, and redefining right and wrong are integral to a good overall approach to decision-making.

As all psychology students are taught, learning occurs when you get lots of feedback tied closely in time to decisions and actions.

Experience can be an effective teacher. But, clearly, only some students listen to their teachers. The people who learn from experience improve, advance, and (with a little bit of luck) become experts and leaders in their fields.

Aldous Huxley recognized, “Experience is not what happens to a man; it is what a man does with what happens to him.”

Note: Quote

Ideally, our beliefs and our bets improve with time as we learn from experience. Ideally, the more information we have, the better we get at making decisions about which possible future to bet on. Ideally, as we learn from experience we get better at assessing the likelihood of a particular outcome given any decision, making our predictions about the future more accurate. As you may have guessed, when it comes to how we process experience, “ideally” doesn’t always apply.

We are good at identifying the “-ER” goals we want to pursue (better, smarter, richer, healthier, whatever). But we fall short in achieving our “-ER” because of the difficulty in executing all the little decisions along the way to our goals. The bets we make on when and how to close the feedback loop are part of the execution, all those in-the-moment decisions about whether something is a learning opportunity. To reach our long-term goals, we have to improve at sorting out when the unfolding future has something to teach us, when to close the feedback loop.

Note: Basis of agile

Also need to recognize that outcomes sometimes happen because of another form of uncertainty: luck.

The way our lives turn out is the result of two things: the influence of skill and the influence of luck. For the purposes of this discussion, any outcome that is the result of our decision-making is in the skill category. If making the same decision again would predictably result in the same outcome, or if changing the decision would predictably result in a different outcome, then the outcome following that decision was due to skill. The quality of our decision-making was the main influence over how things turned out. If, however, an outcome occurs because of things that we can’t control (like the actions of others, the weather, or our genes), the result would be due to luck. If our decisions didn’t have much impact on the way things turned out, then luck would be the main influence.*

Note: Definition of luck vs skill

We make similar bets about where to “throw” an outcome: into the “skill bucket” (in our control) or the “luck bucket” (outside of our control). This initial fielding of outcomes, if done well, allows us to focus on experiences that have something to teach us (skill) and ignore those that don’t (luck). Get this right and, with experience, we get closer to whatever “-ER” we are striving for: better, smarter, healthier, happier, wealthier, etc.

Outcomes don’t tell us what’s our fault and what isn’t, what we should take credit for and what we shouldn’t.

Getting insight into the way uncertainty trips us up, whether the errors we make are patterned (hint: they are) and what motivates those errors, should give us clues for figuring out achievable strategies to calibrate the bets we make on our outcomes.

The way we field outcomes is predictably patterned: we take credit for the good stuff and blame the bad stuff on luck so it won’t be our fault.

Note: Self serving bias

Our capacity for self-deception has few boundaries. Look at the reasons people give for their accidents on actual auto insurance forms: “I collided with a stationary truck coming the other way.” “A pedestrian hit me and went under my car.” “The guy was all over the road. I had to swerve a number of times before I hit him.” “An invisible car came out of nowhere, struck my car, and vanished.” “The pedestrian had no idea which direction to run, so I ran over him.” “The telephone pole was approaching. I was attempting to swerve out of its way when it struck my car.”*

Note: Self serving bias in insurance claims

Self-serving bias has immediate and obvious consequences for our ability to learn from experience.* Blaming the bulk of our bad outcomes on luck means we miss opportunities to examine our decisions to see where we can do better. Taking credit for the good stuff means we will often reinforce decisions that shouldn’t be reinforced and miss opportunities to see where we could have done better.

Black-and-white thinking, uncolored by the reality of uncertainty, is a driver of both motivated reasoning and self-serving bias. If our only options are being 100% right or 100% wrong, with nothing in between, then information that potentially contradicts a belief requires a total downgrade, from right all the way to wrong. There is no “somewhat less sure” option in an all-or-nothing world, so we ignore or discredit the information to hold steadfast in our belief.

Just as with motivated reasoning, self-serving bias arises from our drive to create a positive self-narrative.

In poker, the bulk of what goes on is watching. An experienced player will choose to play only about 20% of the hands they are dealt, forfeiting the other 80% of the hands before even getting past the first round of betting. That means about 80% of the time is spent just watching other people play.

Unfortunately, learning from watching others is just as fraught with bias. Just as there is a pattern in the way we field our own outcomes, we field the outcomes of our peers predictably.

We see this pattern of blaming others for bad outcomes and failing to give them credit for good ones all over the place.

But this comparison of our results to others isn’t confined to zero-sum games where one player directly loses to the other (or where one lawyer loses to opposing counsel, or where one salesperson loses a sale to a competitor, etc.). We are really in competition for resources with everyone. Our genes are competitive. As Richard Dawkins points out, natural selection proceeds by competition among the phenotypes of genes so we literally evolved to compete, a drive that allowed our species to survive. Engaging the world through the lens of competition is deeply embedded in our animal brains. It’s not enough to boost our self-image solely by our own successes. If someone we view as a peer is winning, we feel like we’re losing by comparison. We benchmark ourselves to them.

Note: Win loss

What accounts for most of the variance in happiness is how we’re doing comparatively.

A consistent example of how we price our own happiness relative to others comes from a version of the party game “Would You Rather . . . ?” When you ask people if they would rather earn $70,000 in 1900 or $70,000 now, a significant number choose 1900. True, the average yearly income in 1900 was about $450. So we’d be doing phenomenally well compared to our peers from 1900. But no amount of money in 1900 could buy Novocain or antibiotics or a refrigerator or air-conditioning or a powerful computer we could hold in one hand. About the only thing $70,000 bought in 1900 that it couldn’t buy today was the opportunity to soar above most everyone else. We’d rather lap the field in 1900 with an average life expectancy of only forty-seven years than sit in the middle of the pack now with an average life expectancy of over seventy-six years (and a computer in our palm).

Note: Exercise: win / lose in comparison to each other

Habits operate in a neurological loop consisting of three parts: the cue, the routine, and the reward.

“To change a habit, you must keep the old cue, and deliver the old reward, but insert a new routine.”

There are people who, like Phil Ivey, have substituted the routine of truthseeking for the outcome-oriented instinct to focus on seeking credit and avoiding blame.

Mia Hamm said, “Many people say I’m the best women’s soccer player in the world. I don’t think so. And because of that, someday I just might be.”

Note: Quote

Keep the reward of feeling like we are doing well compared to our peers, but change the features by which we compare ourselves: be a better credit-giver than your peers, more willing than others to admit mistakes, more willing to explore possible reasons for an outcome with an open mind, even, and especially, if that might cast you in a bad light or shine a good light on someone else.

The prospect of a bet makes us examine and refine our beliefs, in this case the belief about whether luck or skill was the main influence in the way things turned out.

Once we start actively training ourselves in testing alternative hypotheses and perspective taking, it becomes clear that outcomes are rarely 100% luck or 100% skill. This means that when new information comes in, we have options beyond unquestioned confirmation or reversal. We can modify our beliefs along a spectrum because we know it is a spectrum, not a choice between opposites without middle ground.

This makes us more compassionate, both toward ourselves and others.

Thinking in bets won’t make self-serving bias disappear or motivated reasoning vanish into thin air. But it will make those things better. And a little bit better is all we need to transform our lives. If we field just a few extra outcomes more accurately, if we catch just a few extra learning opportunities, it will make a huge difference in what we learn, when we learn, and how much we learn.

The benefits of recognizing just a few extra learning opportunities compound over time.

The first step is identifying the habit of mind that we want to reshape and how to reshape it. That first step is hard and takes time and effort and a lot of missteps along the way. So the second step is recognizing that it is easier to make these changes if we aren’t alone in the process. Recruiting help is key to creating faster and more robust change, strengthening and training our new truthseeking routines.

In the movie The Matrix, the matrix was built to be a more comfortable version of the world. Our brains, likewise, have evolved to make our version of the world more comfortable: our beliefs are nearly always correct; favorable outcomes are the result of our skill; there are plausible reasons why unfavorable outcomes are beyond our control; and we compare favorably with our peers. We deny or at least dilute the most painful parts of the message.

Giving that up is not the easiest choice.

Note: Doing bets is not easiest choice. We tell ourselves stories and we might find that those stories are not real

A good decision group is a grown-up version of the buddy system.

Note: Role of leadership team in holding each other accountable?

Forming or joining a group where the focus is on thinking in bets means modifying the usual social contract. It means agreeing to be open-minded to those who disagree with us, giving credit where it’s due, and taking responsibility where it’s appropriate, even (and especially) when it makes us uncomfortable.

As long as there are three people in the group (two to disagree and one to referee*), the truthseeking group can be stable and productive.

Note: Or small subgroups

Philip Tetlock and Jennifer Lerner, leaders in the science of group interaction, described the two kinds of group reasoning styles in an influential 2002 paper: “Whereas confirmatory thought involves a one-sided attempt to rationalize a particular point of view, exploratory thought involves even-handed consideration of alternative points of view.”

Note: Get paper by Tetlock and Lerner

Confirmatory thought amplifies motivated reasoning bias.

Without an explicit charter for exploratory thought and accountability to that charter, our tendency when we interact with others follows our individual tendency, which is toward confirmation.

Note: Have a charter aimed at exploratory thought

A truthseeking charter calls for: a focus on accuracy (over confirmation), which includes rewarding truthseeking, objectivity, and open-mindedness within the group; accountability, for which members have advance notice; and openness to a diversity of ideas.

Note: Charter for truth-seeking

It is one thing to commit to rewarding ourselves for thinking in bets, but it is a lot easier if we get others to do the work of rewarding us.

Note: Approval as reward makes establishing the new habit easier

Accountability, like reinforcement of accuracy, also improves our decision-making and information processing when we are away from the group because we know in advance that we will have to answer to the group for our decisions.

A group with diverse viewpoints can help us by sharing the work suggested in the previous two chapters to combat motivated reasoning about beliefs and biased outcome fielding.

For example: Why might my belief not be true? What other evidence might be out there bearing on my belief? Are there similar areas I can look toward to gauge whether similar beliefs to mine are true? What sources of information could I have missed or minimized on the way to reaching my belief? What are the reasons someone else could have a different belief, what’s their support, and why might they be right instead of me? What other perspectives are there as to why things turned out the way they did?

Note: Questions to test your beliefs

Others aren’t wrapped up in preserving our narrative, anchored by our biases. It is a lot easier to have someone else offer their perspective than for you to imagine you’re another person and think about what their perspective might be.

Dissent channels and red teams are a beautiful implementation of Mill’s bedrock principle that we can’t know the truth of a matter without hearing the other side.

Note: Idea - how to get other viewpoint

Diversity is the foundation of productive group decision-making, but we can’t underestimate how hard it is to maintain. We all tend to gravitate toward people who are near clones of us. After all, it feels good to hear our ideas echoed back to us.

As the Supreme Court has become more divided, this practice has all but ceased. According to a New York Times article in 2010, only Justice Breyer regularly employed clerks who had worked for circuit judges appointed by presidents of both parties. Since 2005, Scalia had hired no clerks with experience working for Democrat-appointed judges. In light of the shift in hiring practices, it should not be so surprising that the court has become more polarized. The justices are in the process of creating their own echo chambers.

Thomas once said, “I won’t hire clerks who have profound disagreements with me. It’s like trying to train a pig. It wastes your time, and it aggravates the pig.”* That makes sense only if you believe the goal of a decision group is to train people to agree with you. But if your goal is to develop the best decision process, that is an odd sentiment indeed.

Note: Idea is to develop best decision process, not train people in how you make decisions

We should also recognize that it’s really hard: the norm is toward homogeneity; we’re all guilty of it; and we don’t even notice that we’re doing it.

Haidt, along with Philip Tetlock and four others (social psychologists José Duarte, Jarret Crawford, and Lee Jussim, and sociologist Charlotta Stern) founded an organization called Heterodox Academy, to fight this drift toward homogeneity of thought in science and academics as a whole. In 2015, they published their findings in the journal Behavioral and Brain Sciences (BBS), along with thirty-three pieces of open peer commentary.

Note: Find out more?

“Even research communities of highly intelligent and well-meaning individuals can fall prey to confirmation bias, as IQ is positively correlated with the number of reasons people find to support their own side in an argument.”

The BBS paper, and the continuing work of Heterodox Academy, includes specific recommendations geared toward encouraging diversity and dissenting opinions. I encourage you to read the specific recommendations, which include things like a stated antidiscrimination policy (against opposing viewpoints), developing ways to encourage people with contrary viewpoints to join the group and engage in the process, and surveying to gauge the actual heterogeneity or homogeneity of opinion in the group. These are exactly the kinds of things we would do well to adopt (and, where necessary, adapt) for groups in our personal lives and in the workplace.

Note: Perhaps use this approach in investment decisions

Also need to be concerned about groups of coaches.

Experts engaging in traditional peer review, providing their opinion on whether an experimental result would replicate, were right 58% of the time. A betting market in which the traders were the exact same experts and those experts had money on the line predicted correctly 71% of the time.

Note: Betting market improve accuracy

Per the BBS paper, CUDOS stands for Communism (data belong to the group), Universalism (apply uniform standards to claims and evidence, regardless of where they came from), Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and Organized Skepticism (discussion among the group to encourage engagement and dissent).

Note: Practical rules for truth seeking group

The mere fact of our hesitation and discomfort is a signal that such information may be critical to providing a complete and balanced account.

Be a data sharer. That’s what experts do. In fact, that’s one of the reasons experts become experts. They understand that sharing data is the best way to move toward accuracy because it extracts the highest-fidelity insight from your listeners.

Note: To be expert, share data

When we have a negative opinion about the person delivering the message, we close our minds to what they are saying and miss a lot of learning opportunities because of it. Likewise, when we have a positive opinion of the messenger, we tend to accept the message without much vetting. Both are bad.

Another way to disentangle the message from the messenger is to imagine the message coming from a source we value much more or much less.

John Stuart Mill made it clear that the only way to gain knowledge and approach truth is by examining every variety of opinion.

Note: Important process

Our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs, to avoid having to admit ignorance or error, to take credit for good results following our decisions, to find reasons bad results following our decisions were due to factors outside our control, to compare well with our peers, and to live in a world where the way things turn out makes sense. We are not naturally disinterested.

Knowing how something turned out creates a conflict of interest that expresses itself as resulting.

If the group is blind to the outcome, it produces higher fidelity evaluation of decision quality. The best way to do this is to deconstruct decisions before an outcome is known.

Beliefs are also contagious. If our listeners know what we believe to be true, they will likely work pretty hard to justify our beliefs, often without even knowing they are doing it. They will develop an ideological conflict of interest created by our informing our listeners of our beliefs.

Another way a group can de-bias members is to reward them for skill in debating opposing points of view and finding merit in opposing positions.

Note: Ritual dissent?

A referee can get them to each argue the other’s position with the goal of being the best debater.

Skepticism is about approaching the world by asking why things might not be true rather than why they are true.

If we don’t “lean over backwards” (as Richard Feynman famously said) to figure out where we could be wrong, we are going to make some pretty bad bets.

First, express uncertainty. Uncertainty not only improves truthseeking within groups but also invites everyone around us to share helpful information and dissenting opinions.

Second, lead with assent. For example, listen for the things you agree with, state those and be specific, and then follow with “and” instead of “but.”

Third, ask for a temporary agreement to engage in truthseeking. If someone is off-loading emotion to us, we can ask them if they are just looking to vent or if they are looking for advice. If they aren’t looking for advice, that’s fine. The rules of engagement have been made clear.

Finally, focus on the future. As I said at the beginning of this book, we are generally pretty good at identifying the positive goals we are striving for; our problem is in the execution of the decisions along the way to reaching those goals. People dislike engaging with their poor execution.

Improving decision quality is about increasing our chances of good outcomes, not guaranteeing them.

This tendency we all have to favor our present-self at the expense of our future-self is called temporal discounting.*

Note: Temporal discounting bias

The future we imagine is a novel reassembling of our past experiences. Given that, it shouldn’t be surprising that the same neural network is engaged when we imagine the future as when we remember the past. Thinking about the future is remembering the future, putting memories together in a creative way to imagine a possible way things might turn out.

If regret occurred before a decision instead of after, the experience of regret might get us to change a choice likely to result in a bad outcome.

Suzy Welch developed a popular tool known as 10-10-10 that has the effect of bringing future-us into more of our in-the-moment decisions. “Every 10-10-10 process starts with a question. . . . [W]hat are the consequences of each of my options in ten minutes? In ten months? In ten years?”

Note: What is effect of decision in 10 mins, 10 months, 10 years. Futurespective

We would be better off thinking about our happiness as a long-term stock holding. We would do well to view our happiness through a wide-angle lens, striving for a long, sustaining upward trend in our happiness stock, so it resembles the first Berkshire Hathaway chart.

The way we field outcomes is path dependent. It doesn’t so much matter where we end up as how we got there. What has happened in the recent past drives our emotional response much more than how we are doing overall. That’s how we can win $100 and be sad, and lose $100 and be happy.

Now imagine if you had gone for that night of blackjack a year ago. When you think about the outcomes as having happened in the distant past, it is likely your preference for the results reverses, landing in a more rational place. You are now happier about the $100 win than about the $100 loss. Once we pull ourselves out of the moment through time-traveling exercises, we can see these things in proportion to their size, free of the distortion caused by whether the ticker just moved up or down.

Tilt is the poker player’s worst enemy, and the word instantly communicates to other poker players that you were emotionally unhinged in your decision-making because of the way things turned out.*

We’ve all had this experience in our personal and professional lives: blowing out of proportion a momentary event because of an in-the-moment emotional reaction.

This action—past-us preventing present-us from doing something stupid—has become known as a Ulysses contract.

One of the simplest examples of this kind of contract is using a ride-sharing service when you go to a bar. A past version of you, who anticipated that you might decide irrationally about whether you are okay to drive, has bound your hands by taking the car keys out of them.

We have discussed several patterns of irrationality in the way we lodge beliefs and field outcomes. From these, we can commit to vigilance around words, phrases, and thoughts that signal that we might not be our most rational selves. Your list of those warning signs will be specific to you (or your family, friends, or enterprise), but here is a sample of the kinds of things that might trigger a decision-interrupt.

- Signs of the illusion of certainty: “I know,” “I’m sure,” “I knew it,” “It always happens this way,” “I’m certain of it,” “you’re 100% wrong,” “You have no idea what you’re talking about,” “There’s no way that’s true,” “0%” or “100%” or their equivalents, and other terms signaling that we’re presuming things are more certain than we know they are. This also includes stating things as absolutes, like “best” or “worst” and “always” or “never.”
- Overconfidence: similar terms to the illusion of certainty.
- Irrational outcome fielding: “I can’t believe how unlucky I got,” or the reverse, if we have some default phrase for credit taking, like “I’m at the absolute top of my game” or “I planned it perfectly.” This includes conclusions of luck, skill, blame, or credit. It includes equivalent terms for irrationally fielding the outcomes of others, like “They totally had that coming,” “They brought it on themselves,” and “Why do they always get so lucky?”
- Any kind of moaning or complaining about bad luck just to off-load it, with no real point to the story other than to get sympathy. (An exception would be when we’re in a truthseeking group and we make explicit that we’re taking a momentary break to vent.)
- Generalized characterizations of people meant to dismiss their ideas: insulting, pejorative characterizations of others, like “idiot” or, in poker, “donkey.” Or any phrase that starts by characterizing someone as “another typical .” (Like David Letterman said to Lauren Conrad, he dismissed everyone around him as an idiot, until he pulled himself into deliberative mind one day and asked, “What are the odds that everyone is an idiot?”)
- Other violations of the Mertonian norm of universalism, shooting the message because we don’t think much of the messenger. Any sweeping term about someone, particularly when we equate our assessment of an idea with a sweeping personality or intellectual assessment of the person delivering the idea, such as “gun nut,” “bleeding heart,” “East Coast,” “Bible belter,” “California values”—political or social issues. Also be on guard for the reverse: accepting a message because of the messenger or praising a source immediately after finding out it confirms your thinking.
- Signals that we have zoomed in on a moment, out of proportion with the scope of time: “worst day ever,” “the day from hell.”
- Expressions that explicitly signal motivated reasoning, accepting or rejecting information without much evidence, like “conventional wisdom” or “if you ask anybody” or “Can you prove that it’s not true?” Similarly, look…

Note: “Swear jar” signals that we are not behaving rationally.

Maybe also include logic errors when thinking about arguments such as slippery slope, etc

For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Any decision can result in a set of possible outcomes.

Note: Belief → bet → set of outcomes

We should prepare by doing reconnaissance of the future. Plans are useless; planning is indispensable.

Figure out the possibilities, then take a stab at the probabilities. To start, we imagine the range of potential futures. This is also known as scenario planning. Nate Silver, who compiles and interprets data from the perspective of getting the best strategic use of it, frequently takes a scenario-planning approach. Instead of using data to make a particular conclusion, he sometimes takes the approach of discussing all the scenarios the data could support.

The reason why we do reconnaissance is because we are uncertain. We don’t (and likely can’t) know how often things will turn out a certain way with exact precision. It’s not about approaching our future predictions from a point of perfection. It’s about acknowledging that we’re already making a prediction about the future every time we make a decision, so we’re better off if we make that explicit.

Note: If we’re worried about guessing, we’re already guessing.

In addition to increasing decision quality, scouting various futures has numerous additional benefits. First, scenario planning reminds us that the future is inherently uncertain.

Second, we are better prepared for how we are going to respond to different outcomes that might result from our initial decision.

Third, anticipating the range of outcomes also keeps us from unproductive regret (or undeserved euphoria) when a particular future happens.

Finally, by mapping out the potential futures and probabilities, we are less likely to fall prey to resulting or hindsight bias, in which we gloss over the futures that did not occur and behave as if the one that did occur must have been inevitable, because we have memorialized all the possible futures that could have happened.

The misunderstanding came from the disconnect between the expected value of each grant and the amount they would be awarded if they got the grant.*

Coming up with the expected value of each grant involves a simple form of scenario planning: imagining the two possible futures that could result from the application (awarded or declined) and the likelihood of each future. For example, if they applied for a $100,000 grant that they would win 25% of the time, that grant would have an expected value of $25,000 ($100,000 × .25). If they expected to get the grant a quarter of the time, then it wasn’t worth $100,000; it was worth a quarter of $100,000.

Note: Expected value in portfolio planning based on scenario planning. X outcome with Y probability
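
A minimal sketch of that expected-value arithmetic, extended to a hypothetical portfolio of applications (the $100,000/25% grant comes from the passage; the other figures are invented for illustration):

```python
# Each scenario-planned grant: (award amount in dollars, estimated probability of winning).
# The first entry matches the passage; the other figures are invented for illustration.
grants = [
    (100_000, 0.25),
    (250_000, 0.10),
    (50_000, 0.60),
]

portfolio_ev = 0.0
for amount, p_win in grants:
    ev = amount * p_win  # expected value = award x probability of being awarded
    portfolio_ev += ev
    print(f"${amount:>9,} grant at {p_win:.0%} -> expected value ${ev:>11,.0f}")

print(f"Expected value of the whole portfolio: ${portfolio_ev:,.0f}")
```

The planning number is the $80,000 expected value, not the $400,000 face value of the awards, which is exactly the disconnect the passage describes.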

Overall, their post-outcome reviews focused on understanding what worked, what didn’t work, what was luck, and how to do better, improving both their probability estimates and the quality of their grant applications.

When it comes to advance thinking, standing at the end and looking backward is much more effective than looking forward from the beginning.

Note: Futurespective

Imagining the future recruits the same brain pathways as remembering the past. And it turns out that remembering the future is a better way to plan for it.

A 1989 experiment by Deborah Mitchell, J. Edward Russo, and Nancy Pennington “found that prospective hindsight—imagining that an event has already occurred—increases the ability to correctly identify reasons for future outcomes by 30%.”

Note: Why futurespective works

Imagining a successful future and backcasting from there is a useful time-travel exercise for identifying necessary steps for reaching our goals. Working backward helps even more when we give ourselves the freedom to imagine an unfavorable future.

A premortem is an investigation into something awful, but before it happens. We all like to bask in an optimistic view of the future. We generally are biased to overestimate the probability of good things happening. Looking at the world through rose-colored glasses is natural and feels good, but a little naysaying goes a long way. A premortem is where we check our positive attitude at the door and imagine not achieving our goals.

Note: Assume future went wrong. Not successful

Backcasting and premortems complement each other. Backcasting imagines a positive future; a premortem imagines a negative future.

Despite the popular wisdom that we achieve success through positive visualization, it turns out that incorporating negative visualization makes us more likely to achieve our goals. Gabriele Oettingen, professor of psychology at NYU and author of Rethinking Positive Thinking: Inside the New Science of Motivation, has conducted over twenty years of research, consistently finding that people who imagine obstacles in the way of reaching their goals are more likely to achieve success, a process she has called “mental contrasting.”

Note: incorporating negative visualization makes us more likely to achieve our goals.

Oettingen recognized that we need to have positive goals, but we are more likely to execute on those goals if we think about the negative futures.

Conducting a premortem creates a path to act as our own red team. Once we frame the exercise as “Okay, we failed. Why did we fail?”, the reasons for failure become much easier to surface.

Remember, the likelihood of positive and negative futures must add up to 100%. The positive space of backcasting and the negative space of a premortem still have to fit in a finite amount of space. When we see how much negative space there really is, we shrink down the positive space to a size that more accurately reflects reality and less reflects our naturally optimistic nature.

Note: Total positive and negative outcome probabilities need to sum to 100%
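
One way to make that bookkeeping concrete is a small sketch like the one below; the branch names and probabilities are hypothetical, not from the book. Write the backcast and premortem branches down with explicit probabilities and check that together they cover 100%.

```python
# Hypothetical scenario plan for a project goal; names and numbers are illustrative only.
# Each branch is (description, probability, "backcast" for positive futures or
# "premortem" for negative ones). All branches together must cover 100%.
branches = [
    ("ship on time, adopted widely",        0.30, "backcast"),
    ("ship on time, adoption is slow",      0.25, "backcast"),
    ("slip a quarter, key customer churns", 0.25, "premortem"),
    ("cancelled after the reorg",           0.20, "premortem"),
]

total = sum(p for _, p, _ in branches)
assert abs(total - 1.0) < 1e-9, f"branches cover {total:.0%}, not 100%"

negative_space = sum(p for _, p, kind in branches if kind == "premortem")
print(f"Premortem (negative) branches hold {negative_space:.0%} of the probability mass,")
print(f"leaving {1 - negative_space:.0%} for the backcast futures.")
```

Seeing how much probability the premortem branches claim is what shrinks the positive space down to a size that reflects reality rather than optimism.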

One of the goals of mental time travel is keeping events in perspective. To understand an overriding risk to that perspective, think about time as a tree. The tree has a trunk, branches at the top, and the place where the trunk meets the branches. The trunk is the past. A tree has only one, growing trunk, just as we have only one, accumulating past. The branches are the potential futures. Thicker branches are the equivalent of more probable futures, thinner branches are less probable ones. The place where the top of the trunk meets the branches is the present. There are many futures, many branches of the tree, but only one past, one trunk.

Note: Metaphor for considering future options while understanding that we explain the past easily (because there is only one path that got us there, and it happened).

As the future becomes the past, what happens to all those branches? The ever-advancing present acts like a chainsaw. When one of those many branches happens to be the way things turn out, when that branch transitions into the past, present-us cuts off all those other branches that didn’t materialize and obliterates them.

Even the smallest of twigs, the most improbable of futures—like the 2%–3% chance Russell Wilson would throw that interception—expands when it becomes part of the mighty trunk. That 2%–3%, in hindsight, becomes 100%, and all the other branches, no matter how thick they were, disappear from view. That’s hindsight bias, an enemy of probabilistic thinking.

Note: This is hindsight bias. Cool.

This turned out to be a big part of the problem for the CEO in the wake of firing his president. Although he had initially characterized the decision as one of his worst, when we reconstructed the tree, essentially picking the branches up off the ground and reattaching them, it was clear that he and his company had made a series of careful, deliberative decisions. Because they led to a negative result, however, the CEO had been consumed with regret.

By keeping an accurate representation of what could have happened (and not a version edited by hindsight), memorializing the scenario plans and decision trees we create through good planning process, we can be better calibrators going forward.

Note: Remember the branches

To some degree, we’re all outcome junkies, but the more we wean ourselves from that addiction, the happier we’ll be.

For a good overview on our problems processing data, including assuming causation when there is only a correlation and cherry-picking data to confirm the narrative we prefer, see the New York Times op-ed by Gary Marcus and Ernest Davis, “Eight (No, Nine!) Problems with Big Data,” on April 6, 2014.

Note: Read this article

The quote about baldness is from Susan Scutti, “Going Bald Isn’t Your Mother’s Fault; Maternal Genetics Are Not to Blame,” Medical Daily, May 18, 2015, http://www.medicaldaily.com/going-bald-isnt-your-mothers-fault-maternal-genetics-are-not-blame-333668. There are numerous lists of such common misconceptions, such as Emma Glanfield’s “Coffee Isn’t Made from Beans, You Can’t See the Great Wall of China from Space and Everest ISN’T the World’s Tallest Mountain: The Top 50 Misconceptions That Have Become Modern Day ‘Facts,’” Daily Mail, April 22, 2015, http://www.dailymail.co.uk/news/article-3050941/Coffee-isn-t-beans-t-Great-Wall-China-space-Everest-ISN-T-worlds-tallest-mountain-Experts-unveil-life-s-50-misconceptions-modern-day-facts.html;

Note: Input into irrational knowledge you have today
