Why We Need Humans to Help Us Predict Market Evolution
April 19, 2022

Our first post in a series about the importance of Prediction Science
and its implications for product forecasting and business decision-making

In our forecasting and demand practice at BioVid, we see an increasing appetite to predict non-product aspects of the markets we model. Our stakeholders often look at changes to department- or health-network-level guidelines, access constraints on HCP decision-making, or promotional reach. Others are curious about more ephemeral but no less powerful forces, like shifts in scientific paradigms, economic trends, and cultural differences in disposition toward medicine. We contend (and many forecast consumers on the manufacturing side of the industry seem disposed to agree) that when we are modeling future use and sales, the future state of the commercial environment probably matters as much as the value of the product itself, if not more.

This is a natural extension of the realization that, increasingly, physician and patient preferences for specific therapies, at specific times, are being substantially constrained by outside forces. Estimated demand for a product is not enough to go on, and forecasters need data to inform aspects of forecasts that, in the past, would often be handled with base-rate (i.e., status quo) assumptions or informal best guesses about change.

However, we believe there is a growing view that the classic approaches used to predict critical facets of the business environment are not quite sufficient. There is an appetite for evidence-based assessment of the non-product side of forecasting. At the same time, as interest in this topic grows, we also observe uncertainty about how one should go about looking past the product and into other facets of market evolution.

The uncertainty we see has two parts. The simpler part has to do with methodology. In our conversations with insights professionals on this topic, many express uncertainty about how to tackle the problem. On the product side of the equation, we have many methods for predicting the depth of product penetration in a market (and some of them are good). But on the non-product side of things, we lack consensus on techniques for arriving at quality predictions.

So, one key consideration involves getting our arms around relevant prediction methods. But a potentially deeper issue may involve a sense of cynicism. Clearly, many people in diverse business settings believe that predicting the future is a futile exercise. These feelings are understandable because human prediction has a bad reputation.

The Case Against Human-Based Prediction

Professionals from a range of backgrounds have been trained to believe that asking humans to predict the future is not a good idea. At BioVid, we will be the first to admit that judgment-based prediction can often be unhelpful. Strong evidence has accumulated (see Tschoegl & Armstrong, 2007, and Tetlock, 2005) showing that expert forecasts are mostly no better than chance and worse than the wisdom of crowds, and that has the feel of a nail in the coffin for human prediction.

Additionally, artificial intelligence in the broad sense (which might encompass all manner of linear models, decision trees, or any other evidence-driven amalgamation of predictors) will, on average, tend to make better predictive judgments than subject-matter experts do, despite their extensive training. This idea is spelled out in painstaking detail in the book Expert Political Judgment, which should be required reading for anyone in the forecasting business.

In the popular literature, behavioral economics focuses on prevalent cognitive biases, which help explain deficiencies in many types of human judgment, including prediction. Illustratively, Noise offers a contemporary summary of the good reasons why we should be skeptical about expert judgment. You might say we now know that ordinary humans and experts alike are, in many ways, wired for bad prediction.

Lastly, all this evidence piles on top of our cultural bias against predictive judgment. To appreciate this, think of our associations with fortune tellers, sports bettors, stock pickers, and pundits. In conversations with clients, we find that many people do not think we can predict features of the future with any accuracy. Can anybody really wonder why?

So, we have a real tension point: On the one hand, we hold a culturally induced and evidence-driven skepticism about the human ability to engage in solid prediction; on the other, we recognize that we need to reduce the uncertainty around our products and the worlds in which they will compete. Can this tension be resolved?

Should Human Judgment Have Any Role in Business Prediction?

So now, we get to the crux of the issue. We are in business, so we don't have any choice but to continue making forecasts. Further, as we'll discuss, there will be circumstances where human predictive judgment is the only tool available to us. And finally, there is a cascade of considerations that routinely gets lost in the meta-conclusion that human predictive judgment is faulty.

  • First, algorithmic predictions work poorly when environmental conditions change markedly. Put differently, algorithms and observational data (i.e., product analogs) work very well when the future is going to look exactly like the present.  If we expect that to happen, we probably don’t need to do much prediction work, anyway. Change in commercial environments is expected to accelerate, if only because of the continued proliferation of technology. This means that model-driven prediction is going to lag, to greater or lesser extents, in many commercial contexts.
  • Second, there are still many market contexts where data quality is poor, or where the data we would want for predictive modeling is absent. While data is increasingly available in many settings, we are still left with the problem that many variables we would want to have access to simply don’t get measured or coded. Modeling is still constrained by data quality and the scope of what gets measured.
  • Third, there is extensive, rigorous experimental work demonstrating that human predictive judgment can be quite good (meaningfully better than chance) in the right conditions, and that almost any human can be trained to improve their predictive judgment with specific steps. Some of the foundational work is summarized in the book, Superforecasting (also required reading).

Our experience has been that not everyone working in forecasting has had exposure to the literature on predictive judgment. As a field, we may be displaying confirmation bias against predictive judgment, finding it easy to ignore or overlook evidence that goes against the grain of conventional wisdom. But the research is compelling, and it offers both real hope for improving judgment-based prediction and specific, practical recommendations for making that happen. And more to the point, we probably won't have much choice.

As markets get more complicated, with more potent external constraints (regulatory, access, and economic), more patient subtypes (think oncology genotypes and lines of therapy), and more treatment options (think diabetes or hemophilia), the average value of algorithms or analogs for predicting any particular market seems likely to deteriorate.

All this translates to a simple conclusion:  Despite our preference for purely model-based prediction, we will have an increased need for first-person judgments about future market trends.

Practically, we will need people to help us predict the future and envision the kinds of environments in which our products will have to compete.

This upcoming series of blog posts will attempt a clear-eyed look at prediction from the standpoint of behavioral and decision science. We will also explore the practical implications for making the highest quality use of human predictive judgment in business prediction and product forecasting.

We will make a strong case for the following four assertions:

1. There is robust scientific evidence to support the value of using humans to make predictions. We will look at this evidence.

2. Perhaps more importantly, the science tells us a great deal about how to help humans with prediction exercises. It also tells us what to avoid and what not to do.

3. Carefully executed judgment-based prediction should become a systematic part of product forecasting, and it should be used to predict both the value of a product and the nature of the environmental constraints the product will face in its commercial life.

4. Data (e.g., AI) can be used in combination with human judgment to optimize prediction quality by minimizing the worst effects of cognitive bias. We will look at how; a minimal sketch of the idea follows below.
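To make that fourth point concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers and variable names) of one common way to blend an algorithmic forecast with trained human judgments: average the human panel's estimates to wash out individual noise, then take a weighted combination of that consensus and the model's output. This is only an illustration of the general idea under our own assumptions, not a description of BioVid's methodology.

```python
# Hypothetical illustration: blending a model forecast with human judgments.
# All figures and weights below are invented for the example.

from statistics import mean

def blend_forecasts(model_estimate, human_estimates, model_weight=0.5):
    """Combine an algorithmic estimate with a panel of human estimates.

    Averaging the panel reduces individual noise and bias; the weighted blend
    lets the model anchor the result while humans adjust for factors the
    model cannot see (e.g., an expected guideline change).
    """
    human_consensus = mean(human_estimates)      # wisdom-of-the-crowd step
    human_weight = 1.0 - model_weight
    return model_weight * model_estimate + human_weight * human_consensus

# Example: peak-year market share (%) for a new therapy.
model_share = 14.0                               # from an analog/statistical model
panel_shares = [10.0, 12.0, 18.0, 11.0, 13.0]    # judgments from five trained forecasters

blended = blend_forecasts(model_share, panel_shares, model_weight=0.6)
print(f"Blended forecast: {blended:.1f}% market share")
```

In practice, the weight given to the model versus the panel would itself be an empirical question, tuned against realized outcomes; the point is simply that the two sources of judgment can be combined rather than treated as rivals.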

In our next post, we will begin to unpack the science of prediction by looking at whether there are human dispositions and traits that reliably correlate with superior skill in predictive judgment.

To learn how BioVid can apply the science of prediction to your business, email demand@biovid.com to reach one of our experts.
