The second post in our series on business prediction
Can humans do a good job predicting complex outcomes across life and business domains? If you’re new here, please look at our first post on prediction for a deep dive into why businesses need first-person judgments about future market trends. There, we explain why we need people to help us predict the future and, more specifically, to help us envision the environments in which our products will have to compete.
Though human predictive judgment has its problems in many contexts, the work of Phil Tetlock, Barbara Mellers, and their colleagues points to some important conclusions that are critical for applied business prediction:
- Some people are better than others at predicting future events. A small percentage of humans seem to have an outsized ability for this kind of thinking.
- You can do specific things to make any individual or group better at prediction than they would be otherwise.
- It therefore makes sense for businesses to develop deliberate, quality-centered processes whenever they engage in prediction exercises – over time, this pays significant dividends in being “right” about the future.
Business-relevant questions that stem from these conclusions include:
- How do we find the right humans to help us with prediction?
- How do we train ordinary people to do a better job with prediction?
- What are the quickest, easiest, and cheapest things we can do in our business to make better predictions, generally?
We are going to cover all these questions in this series. For this post, we want to address the first issue.
Who Should We Turn to for Predictive Judgment?
Let’s consider some of our options framed in terms of common intuitions about predictive judgment in business.
The science is clear: experts and pundits are generally bad at predicting, and we are no worse off asking non-experts.
The more expert a person is on a topic, the less likely they are to make good, unbiased predictions about specific future events (e.g., product performance). There is an uncomfortable cultural implication here for the life sciences that, in our opinion, deserves some light shone on it: we tend to be disproportionately reliant on KOL-centered processes, such as advisory boards, to tell us what will happen with a product or a market’s evolution.
If a person is intellectually, reputationally, or financially invested in a particular outcome, it is difficult for them to be objective. Having said this, the critical thing to realize is that the views of experts are still essential for business planning; we just need to put them to work in the right way. (We’ll talk more about this in a future post.)
Here, our instincts serve us well. Indeed, average person-off-the-street forecasters do not do very well with prediction in domains they are unfamiliar with. Daniel Kahneman and Gary Klein explored this in their paper, Conditions for Intuitive Expertise.
That said, the wisdom of crowds tends to be statistically superior to the judgment of an individual expert. Knowing this, a business could set up a crowd-based feedback mechanism for future-looking business questions and get a reasonably good assessment of the likelihood of a given outcome. We suspect that most life science organizations would resist this idea because many of us have a strong cultural bias against the predictive competence of laypeople. Nevertheless, the wisdom of crowds is a consideration.
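As a minimal sketch of what such a crowd-based mechanism might look like, consider aggregating probability estimates from a pool of respondents (the question and the estimates below are hypothetical, purely for illustration):

```python
from statistics import mean, median

# Hypothetical probability estimates (0-1) from a crowd of respondents
# answering a future-looking question such as: "Will Product X reach
# 20% market share within two years?"
estimates = [0.30, 0.45, 0.25, 0.60, 0.40, 0.35, 0.50, 0.20]

# Simple aggregation: the crowd's mean and median tend to beat
# most individual forecasters on questions like this.
crowd_mean = mean(estimates)
crowd_median = median(estimates)

print(f"Crowd mean:   {crowd_mean:.2f}")
print(f"Crowd median: {crowd_median:.2f}")
```

The median is often preferred in practice because it is less sensitive to a few extreme respondents, but either statistic illustrates the basic idea: aggregate first, interpret second.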
A middle-ground approach, involving neither invested experts nor laypeople, would, at first blush, seem like a strong step in the right direction. It helps that this is also how we typically assess product demand (i.e., utilization or market penetration). Kahneman and Klein explain that the type of environment powerfully mediated the quality of the intuitive judgments they studied.
People making predictions in relatively unfamiliar environments tend to go awry, while people making judgments in domains of significant training, skill, and repeated experience tend to do better with intuitive judgments. The extensive, rich literature on the ecological value of cognitive heuristics also underscores this idea (see, e.g., Marewski, Gaissmaier & Gigerenzer, 2010).
But there’s a real catch to this third intuition: physicians know the medicine, the patients, and the kinds of cues and symptoms that drive them toward one therapy or another. Outside this domain of expertise, however, the literature suggests they would be no better than a random person pulled from the adult population. Qualitative research experience underscores how hard it is to get physicians to comment on issues beyond the clinical merits of the products themselves. Ask a doctor about changes in regulation, guidelines, the economy, or even the direction of science, and they will often be hard pressed to answer. So, in this sense, a large sample of HCPs might be much like a crowd of laypeople when it comes to many kinds of business prediction. Have we painted ourselves into a corner? If so, who is left to help us with prediction?
You can probably guess the punchline now. When we are setting up prediction-centered exercises, we don’t want super-experts or pundits because they are too biased toward specific future outcomes. We also don’t want any random person off the street because they don’t have the basic fluency to make sensible assertions about complex market dynamics.
In addition, most physicians will feel disinclined to make predictions about anything falling to either side of the knife’s edge of their expertise. But, in between these extremes, there is the “Goldilocks” participant. This participant has the right combination of expertise/fluency, willingness, and intellectual disposition to make the best possible predictions. We like the term “Goldilocks” because it helps us remember the characteristics we want them to have: not too biased, but not too uninformed; not too experienced, but not too inexperienced; not too invested, but not too disengaged.
For our prediction work, we have developed several screening techniques to get at the psychological dispositions of Goldilocks participants. For business-based prediction exercises, we focus on four ideas.
Dragonfly Eye:
This term (borrowed directly from Tetlock & Gardner’s book, Superforecasting) connotes an inborn disposition to look at subjects from multiple angles rather than from just one point of view.
Curiosity:
The orientation that all teachers want from their students and many parents want from their children – the tendency to seek deeper understanding in any aspect of intellectual pursuit.
Reliance on Evidence:
The belief in evidence and data, rather than strict reliance on intuition or the normative beliefs of one’s social group.
Willingness to be Wrong:
The ability to disentangle one’s ego from the question at hand and to be comfortable updating one’s beliefs based on new information.
In reviewing this list, it’s not hard to imagine that such people will tend to be better at predicting in almost any context. At the same time, personal experience suggests that there may not be many such people out there, which sets us up for a practical challenge: how to find Goldilocks participants. Specifically, we want to find participants whose cognitive style matches the kinds of market prediction exercises we need to do for contemporary forecasting.
So How Do We Do This In Our Studies?
After digging around in the cognitive science literature for validated psychometric instruments that could measure these constructs, we came away a bit frustrated. We wanted a way to screen respondents to find (or at least weight) participants who have some of the above characteristics. Many candidate instruments exist, but most were either too long and unwieldy or featured content that did not align exactly with the ideas above. There is probably no single best answer, so we hope to save readers time by pointing them in good directions.
Oddly enough, we encountered a summary and some good suggestions relating to this issue in the book, Range, authored by investigative journalist David Epstein. We would encourage readers to review his summary of this literature in Chapter 10 of that book.
Science curiosity is one such construct (see Landrum et al., 2016). More science-curious people are more open to reviewing their own beliefs and to taking in new data from multiple sources. This inventory seems like a solid way to identify good prediction characteristics; however, it is very lengthy and thus would not work terribly well in a screening exercise. The Mellers et al. (2015) paper also lists a range of measures, including a seven-item open-minded thinking test (see p. 5), which, in theory, has good content overlap with the first two constructs above. This inventory seems like a good starting point for business practitioners because it is relatively brief. Mellers et al. also reference a divergent construct called Need for Closure, which connotes a psychological preference to get to an answer, or to solve a problem, quickly and definitively.
A strong disposition on this measure, they argue, would essentially be evidence that a person would NOT be a good predictor. Insights professionals might create and test short-form versions of any of these scales, or find other scales with content overlap with some or all of these constructs. Higher-quality prediction participants could be either pre-screened with such an instrument or identified at the analytic stage and given differential weighting in data analysis.
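To make the weighting idea concrete, here is a minimal sketch of differential weighting at the analytic stage. The screening scores, scale range, and forecasts are hypothetical stand-ins, and a real analysis would of course validate the weighting scheme first:

```python
# Hypothetical: weight each participant's forecast by a screening score
# (e.g., a short-form open-minded-thinking scale on a 1-7 range), so
# higher-quality predictors count more in the aggregate.
participants = [
    {"forecast": 0.55, "screen_score": 6.5},
    {"forecast": 0.30, "screen_score": 3.0},
    {"forecast": 0.60, "screen_score": 5.5},
    {"forecast": 0.20, "screen_score": 2.0},
]

# Weighted average: each forecast contributes in proportion to its
# participant's screening score.
total_weight = sum(p["screen_score"] for p in participants)
weighted_forecast = (
    sum(p["forecast"] * p["screen_score"] for p in participants) / total_weight
)

# Unweighted average, for comparison.
unweighted = sum(p["forecast"] for p in participants) / len(participants)

print(f"Unweighted: {unweighted:.2f}, weighted: {weighted_forecast:.2f}")
```

In this toy example the weighted estimate shifts toward the forecasts of the higher-scoring participants; the same logic applies whether the weight comes from a psychometric screen or from demonstrated track record.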
One other thing to note is that the Dragonfly Eye construct can also be simulated with normal study participants (i.e., without any special screening) by asking a series of “challenge questions” that push participants to think about a particular prediction task from multiple angles.
For example, suppose you are exploring pricing for a product and you want to predict patients’ willingness to pay out-of-pocket. You could simulate a Dragonfly Eye disposition with a random sample of patients by forcing the participants to think about pricing from different points of view. Rather than just asking conventional questions to test price sensitivity, you might help the participant get into a Dragonfly Eye mindset by using warmup questions.
Questions could include:
- How much money are you spending today on similar kinds of products?
- Thinking about the people you know well, how much money do you expect the average person would be willing to spend on Product X?
- What financial circumstances would make it challenging for you to spend $X on this product?
At the end of the day, if you cannot find Goldilocks participants through screening (or don’t want to), you can still get some benefit by using specific exercises to pull ordinary participants into a mindset that is conducive to good prediction.
For BioVid’s work, we use a proprietary integrated psychometric scale that blends sub-constructs from a range of published instruments. Psychometric validation is ongoing, but we are pleased with the quality of participants we see so far. If you have questions about adapting psychometric instruments for use in your research and prediction exercises, or if you want to learn more about setting up a formal prediction task to feed your forecast, please get in touch with Carter Smith (email@example.com or firstname.lastname@example.org).
Author: Carter Smith, Ph.D.
Principal, BioVid Corporation
Carter is an award-winning expert in forecasting and applications of prediction science. He is well known among his broad client base and the life sciences industry for the ways he applies decision science (including cognitive psychology, behavioral economics, and behavioral ecology) to new product demand and forecasting.