Superforecasting - The Art and Science of Prediction - Philip E. Tetlock and Dan Gardner

These were funded by IARPA (the intelligence community’s equivalent of DARPA). After studying and identifying a group of superforecasters and their characteristics, Tetlock asked the natural question: Are superforecasters born, or can we all become superforecasters?
  
The mechanism used to understand whether a prediction is good or not (an idea new to me at least) is a Brier score. "The math behind this system was developed by Glenn W. Brier in 1950, hence results are called Brier scores. In effect, Brier scores measure the distance between what you forecast and what actually happened. So Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent—saying there is a 100% chance that something will happen and it doesn’t, every time—scores a disastrous 2.0, as far from The Truth as it is possible to get."
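
To make the scoring concrete, here is a minimal sketch in Python of the two-category scoring rule from Brier's 1950 paper, which produces exactly the 0, 0.5, and 2.0 values quoted above (the function name, inputs, and example numbers are my own illustration, not from the book):

<code python>
def brier_score(forecasts, outcomes):
    """Average two-category Brier score, as in Brier (1950).

    forecasts: probabilities assigned to the event happening (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it did not
    """
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        # Squared error on both categories: "happens" and "does not happen".
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

print(brier_score([1.0, 0.0], [1, 0]))  # perfect forecasts -> 0.0
print(brier_score([0.5], [1]))          # hedged fifty-fifty call -> 0.5
print(brier_score([1.0], [0]))          # maximally wrong -> 2.0
</code>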
  
If your aim is to improve your own ability to make predictions, Tetlock will both give you valuable advice and explain how following that rather simple-sounding advice may be harder than you think. In summary, to become a superforecaster, you need to: