The second post in our series on business prediction
Can humans do a good job predicting complex outcomes across different life and business domains? If you’re new here, please read our first post on prediction, which takes a deep dive into why businesses need first-person judgments about future market trends: why we need people to help us predict the future and, more specifically, to help us envision the environments in which our products will have to compete.
Human predictive judgment has its problems in many contexts, but the work of Phil Tetlock, Barbara Mellers, and their colleagues points to some important conclusions that are critical to understand for applied business prediction:
- Some people are better than others at making predictions about future events; a small percentage of humans seem to have outsized ability for this kind of thinking.
- You can do specific things to make any individual or group better at prediction than they would be otherwise.
- It therefore makes sense for businesses to develop deliberate, quality-centered processes whenever they engage in prediction exercises; over time, this pays significant dividends in being “right” about the future.
Business-relevant questions that stem from these conclusions include:
- How do we find the right humans to help us with prediction?
- How do we train ordinary people to do a better job with prediction?
- What are the quickest, easiest, and cheapest things we can do in our business to make better predictions, generally?
We are going to cover all these questions in this series. For this post, we want to address the first issue.
Who Should We Turn to for Predictive Judgment?
Let’s consider some of our options framed in terms of common intuitions about predictive judgment in business.
Goldilocks Participants
You can probably guess the punchline now. When we are setting up prediction-centered exercises, we don’t want super-experts or pundits because they are too biased toward specific future outcomes. We also don’t want any random person off the street because they don’t have the basic fluency to make sensible assertions about complex market dynamics.
In addition, most physicians feel disinclined to make predictions about anything falling to either side of the knife’s edge of their expertise. But in between these extremes sits the “Goldilocks” participant: someone with the right combination of expertise/fluency, willingness, and intellectual disposition to make the best possible predictions. We like the term “Goldilocks” because it helps us remember the characteristics we want: not too biased, but not too uninformed; not too experienced, but not too inexperienced; not too invested, but not too disengaged.
For our prediction work, we have developed several screening techniques to get at the psychological dispositions of Goldilocks participants. For business-based prediction exercises, we focus on four ideas.
Dragonfly Eye:
This term (borrowed directly from Tetlock & Gardner’s book, Superforecasting) connotes an inborn disposition to look at subjects from multiple angles rather than just from one point of view.
Intellectual Curiosity:
The orientation that all teachers want from their students and many parents want from their children: the tendency to seek deeper understanding in any intellectual pursuit.
Evidence-Centricity:
A commitment to evidence and data, rather than strict reliance on intuition or on the normative beliefs of one’s social group.
Willingness to be Wrong:
The ability to disentangle one’s ego from the question at hand and to be comfortable updating one’s beliefs based on new information.
In reviewing this list, it’s not hard to imagine that such people will tend to be better at predicting in almost any context. At the same time, personal experience suggests that there may not be many such people out there, which sets us up for a practical challenge: how to find Goldilocks participants. Specifically, we want participants whose cognitive style matches the kinds of market prediction exercises that contemporary forecasting requires.
So How Do We Do This In Our Studies?
After digging around in the cognitive science literature for validated psychometric instruments that could measure these constructs, we came away a bit frustrated. We wanted a way to screen respondents to find (or at least weight) participants who have some of the above characteristics. Many candidate instruments exist, but most are either too long and unwieldy or feature content that is not well aligned with the ideas above. There is probably no single best answer, so we hope to save readers some time by pointing them in good directions.
Oddly enough, we encountered a useful summary and some good suggestions on this issue in Range, by investigative journalist David Epstein. We encourage readers to review his treatment of this literature in Chapter 10 of that book.
Science curiosity is one such construct (see Landrum et al., 2016). More science-curious people are more open to reviewing their own beliefs and to taking in new data from multiple sources. The inventory seems like a solid way to identify good prediction characteristics; however, it is quite lengthy and thus would not work terribly well in screening exercises. The Mellers et al. (2015) paper also lists a range of measures, including a seven-item open-minded thinking scale (see p. 5), which, in theory, has good content overlap with the first two constructs above. This inventory seems like a good starting point for business practitioners because it is relatively brief. Mellers et al. also reference a divergent construct called Need for Closure, which connotes a psychological preference for getting to an answer or solving a problem quickly and definitively.
A strong disposition on this measure, they argue, is essentially evidence that a person would NOT be a good predictor. Insights professionals might create and test short-form versions of any of these scales, or find other scales with content overlap with some or all of these constructs. Higher-quality prediction participants could then either be pre-screened with such an instrument or identified at the analytic stage and given differential weighting in data analysis.
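To make the weighting idea concrete, here is a minimal sketch in Python. It assumes a hypothetical five-item screener scored on a 1–7 Likert scale; the item names, reverse-keyed items, and proportional weighting rule are illustrations we invented for this example, not a validated instrument.

```python
# Minimal sketch: score a short open-mindedness screener and use the scores
# to weight participants' probability estimates at the analytic stage.
# Item names, reverse-keyed items, and the weighting rule are hypothetical
# illustrations, not a validated instrument.

def score_screener(responses, reverse_items=("q3", "q5"), scale_max=7):
    """Average 1-7 Likert responses, reverse-coding negatively keyed items."""
    total = 0
    for item, value in responses.items():
        total += (scale_max + 1 - value) if item in reverse_items else value
    return total / len(responses)

def weighted_forecast(forecasts, screener_scores):
    """Average probability estimates, weighting each by its screener score."""
    weights = {pid: screener_scores[pid] for pid in forecasts}
    total_weight = sum(weights.values())
    return sum(forecasts[pid] * weights[pid] for pid in forecasts) / total_weight

# Example: three participants predict the probability of a market event.
forecasts = {"p1": 0.70, "p2": 0.40, "p3": 0.55}
scores = {
    "p1": score_screener({"q1": 6, "q2": 7, "q3": 2, "q4": 6, "q5": 1}),  # 6.4
    "p2": score_screener({"q1": 3, "q2": 4, "q3": 5, "q4": 3, "q5": 6}),  # 3.0
    "p3": score_screener({"q1": 5, "q2": 5, "q3": 3, "q4": 5, "q5": 3}),  # 5.0
}
print(round(weighted_forecast(forecasts, scores), 3))  # 0.585
```

The same scores could instead be used as a pre-screening cutoff; whichever route you take, the items and weighting scheme should be validated against forecast accuracy before you rely on them.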
One other thing to note: the Dragonfly Eye construct can also be simulated with normal study participants (i.e., without any special screening) by asking a series of “challenge questions” that push participants to think about a particular prediction task from multiple angles.
For example, suppose you are exploring pricing for a product and want to predict patients’ willingness to pay out-of-pocket. You could simulate a Dragonfly Eye disposition in a random sample of patients by prompting them to think about pricing from different points of view. Rather than just asking conventional price sensitivity questions, you might help participants get into a Dragonfly Eye mindset with warmup questions.
Questions could include:
- How much money are you spending today on similar kinds of products?
- Thinking about the people you know well, how much money do you expect the average person would be willing to spend on Product X?
- What financial circumstances would make it challenging for you to spend $X on this product?
At the end of the day, if you cannot find Goldilocks participants through screening (or don’t want to), you can still capture some of the benefit by using exercises like these to pull ordinary participants into a mindset that is conducive to good prediction.
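As a sketch of how this might look in practice, the Python snippet below encodes a warmup-then-elicitation flow so that every participant works through multiple perspectives before giving the focal estimate. The warmup wording comes from the list above; the focal question and the flow structure are hypothetical illustrations, not a survey-platform API.

```python
# Minimal sketch of a warmup-then-elicitation flow for a pricing prediction
# task. Warmup wording mirrors the examples above; "$X" stays a placeholder.
# The focal question and flow structure are hypothetical illustrations.

WARMUP_QUESTIONS = [
    "How much money are you spending today on similar kinds of products?",
    "Thinking about the people you know well, how much money do you expect "
    "the average person would be willing to spend on Product X?",
    "What financial circumstances would make it challenging for you to spend "
    "$X on this product?",
]

FOCAL_QUESTION = "What is the most you would be willing to pay out-of-pocket for Product X?"

def run_interview(ask):
    """Pose the warmup questions first, then elicit the focal estimate.

    `ask` is any callable that poses a question and returns the response;
    for a quick console test, pass the built-in `input`.
    """
    warmup_answers = [ask(q) for q in WARMUP_QUESTIONS]
    return {"warmup": warmup_answers, "focal": ask(FOCAL_QUESTION)}

# Quick console test:
# responses = run_interview(input)
```

The key design point is simply that the warmup questions always run before the focal question, so every respondent has considered the prediction from several angles before committing to an answer.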
For BioVid’s work, we use a proprietary integrated psychometric scale that blends sub-constructs from a range of published instruments. Psychometric validation is ongoing, but we are pleased with the quality of participants we see so far. If you have questions about adapting psychometric instruments for use in your research and prediction exercises, or if you want to learn more about setting up a formal prediction task to feed your forecast, please get in touch at demand@biovid.com.