"Superforecasting: The Art and Science of Prediction" - Philip E. Tetlock, Dan Gardner
Notes and Review
I loved this book! To me the main message is that we all understand that prediction is hard, but we don't do a lot to improve our ability to predict or to calibrate our predictions. In other words, when we make a prediction we treat it as a static item: we don't update it based on new knowledge, and we hardly ever do a reasonable analysis to determine whether we have made a good prediction or not. To quote:
- “Old forecasts are like old news—soon forgotten—and pundits are almost never asked to reconcile what they said with what actually happened. The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough.”
- “More often forecasts are made and then … nothing. Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn … Which means no revision. And without revision, there can be no improvement.”
A lot of the book is related to the author's own research:
- “Expert Political Judgment Project” which studied whether some people really are better predictors than others and, if so, how they differ from the less successful experts, and
- “Good Judgment Project” that was part of an effort to improve intelligence estimating techniques
The Good Judgment Project was funded by IARPA (the intelligence community’s equivalent of DARPA). After studying and identifying a group of superforecasters and their characteristics, Tetlock asked the natural question: Are superforecasters born, or can we all become superforecasters?
The mechanism used to understand whether a prediction is good or not (an idea new to me at least) is the Brier score. “The math behind this system was developed by Glenn W. Brier in 1950, hence results are called Brier scores. In effect, Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn’t, every time—scores a disastrous 2.0, as far from The Truth as it is possible to get.”
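As a sketch of the arithmetic (my own illustration, not code from the book), the two-sided Brier score Tetlock describes for a single binary forecast can be computed like this. The function name is mine; the scoring convention, summing squared error over both categories, is what makes a hedged fifty-fifty call score 0.5 and the worst possible forecast score 2.0:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Two-sided Brier score for one binary event.

    `forecast` is the probability assigned to the event happening;
    `outcome` is 1 if it happened, 0 if it didn't. Squared error is
    summed over both categories (happened / didn't happen), so the
    range is 0.0 (perfect) to 2.0 (maximally wrong).
    """
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

print(brier_score(1.0, 1))  # perfect forecast -> 0.0
print(brier_score(0.5, 1))  # hedged fifty-fifty call -> 0.5
print(brier_score(1.0, 0))  # maximally wrong -> 2.0
```

In practice a forecaster's skill is judged on the average of these scores over many questions, which is where regular scorekeeping, and the revision it enables, comes in.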
If your aim is to improve your own ability to make predictions, Tetlock will both give you valuable advice and explain how following that rather simple-sounding advice may be harder than you think. In summary, to become a superforecaster, you need to:
“PULLING IT ALL TOGETHER
We have learned a lot about superforecasters, from their lives to their test scores to their work habits. Taking stock, we can now sketch a rough composite portrait of the modal superforecaster. In philosophic outlook, they tend to be:
- CAUTIOUS: Nothing is certain
- HUMBLE: Reality is infinitely complex
- NONDETERMINISTIC: What happens is not meant to be and does not have to happen
In their abilities and thinking styles, they tend to be:
- ACTIVELY OPEN-MINDED: Beliefs are hypotheses to be tested, not treasures to be protected
- INTELLIGENT AND KNOWLEDGEABLE, WITH A “NEED FOR COGNITION”: Intellectually curious, enjoy puzzles and mental challenges
- REFLECTIVE: Introspective and self-critical
- NUMERATE: Comfortable with numbers
In their methods of forecasting they tend to be:
- PRAGMATIC: Not wedded to any idea or agenda
- ANALYTICAL: Capable of stepping back from the tip-of-your-nose perspective and considering other views
- DRAGONFLY-EYED: Value diverse views and synthesize them into their own
- PROBABILISTIC: Judge using many grades of maybe
- THOUGHTFUL UPDATERS: When facts change, they change their minds
- GOOD INTUITIVE PSYCHOLOGISTS: Aware of the value of checking thinking for cognitive and emotional biases
In their work ethic, they tend to have:
- A GROWTH MINDSET: Believe it’s possible to get better
- GRIT: Determined to keep at it however long it takes.”
Not much :-) But I will say that when you apply this to the business forecasting that we do, is it any wonder that we don't often get it right? The good news is that we can get better.
As I said, I enjoyed this book a lot.
Interesting Metaphor
Thought this was a good discussion on why “book learning” may not help you and why you need to “do”:
“To demonstrate the limits of learning from lectures, the great philosopher and teacher Michael Polanyi wrote a detailed explanation of the physics of riding a bicycle: “The rule observed by the cyclist is this. When he starts falling to the right he turns the handlebars to the right, so that the course of the bicycle is deflected along a curve towards the right. This results in a centrifugal force pushing the cyclist to the left and offsets the gravitational force dragging him down to the right.” It continues in that vein and closes: “A simple analysis shows that for a given angle of unbalance the curvature of each winding is inversely proportional to the square of the speed at which the cyclist is proceeding.” It is hard to imagine a more precise description. “But does this tell us exactly how to ride a bicycle?” Polanyi asked. “No. You obviously cannot adjust the curvature of your bicycle’s path in proportion to the ratio of your unbalance over the square of your speed; and if you could you would fall off the machine, for there are a number of other factors to be taken into account in practice which are left out in the formulation of this rule.”8 The knowledge required to ride a bicycle can’t be fully captured in words and conveyed to others. We need “tacit knowledge,” the sort we only get from bruising experience. To learn to ride a bicycle, we must try to ride one. It goes badly at first. You fall to one side, you fall to the other. But keep at it and with practice it becomes effortless—although if you had to explain how to stay upright.”