## IDENTIFYING AND CORRECTING COGNITIVE BIASES IN SUBJECTIVE PROBABILITY ELICITATION SURVEYS: MODEL-BASED APPROACHES

##### Abstract

As probability elicitation becomes more widely used, methods other than one-on-one interviews are being used to elicit expert probabilities. This thesis considers biases that may arise when probabilities are elicited in an online or workbook setting. In Chapter 2, I develop models to identify and correct for partition dependence and carryover biases when probabilities are elicited for a pre-specified set of intervals for continuous variables. In Chapters 3 and 4, I develop models to correct for overconfidence bias when elicitation is conducted to assess interval ranges for a specified confidence level.
In Chapter 2, I develop a prescriptive model in which the elicited probability is a convex combination of the expert’s underlying probability, a partition-dependence anchor, and two anchors arising from responses to previous questions (“carryover” bias). I apply the proposed model to two data sets and estimate the amount of the various biases in a set of elicited probabilities from experts. I find that both the format of the questions—whether they appear on the same or separate pages/screens—and the ordering of the questions can affect the amount of bias. This research addresses biases in the presence of multiple anchors and provides guidance on manipulating the availability of anchors. The results demonstrate the persistence of anchoring even with careful questionnaire design; thus, the proposed model-based methods are useful for suggesting corrections for the resulting biases.
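The convex-combination model described above can be sketched as follows. This is an illustrative reconstruction, not the thesis's exact notation: the weight names (`w_part`, `w_c1`, `w_c2`) and the example values are assumptions, and the partition anchor is taken to be the uniform probability 1/n over the n pre-specified intervals.

```python
# Hypothetical sketch of the Chapter 2 convex-combination model.
# The elicited probability is modeled as a weighted mix of the expert's
# underlying probability, a uniform partition anchor (1/n), and two
# carryover anchors from responses to previous questions.

def elicited_probability(p_true, n_intervals, anchor1, anchor2,
                         w_part, w_c1, w_c2):
    """Return the biased (elicited) probability for one interval."""
    w_true = 1.0 - w_part - w_c1 - w_c2   # weights form a convex combination
    assert 0.0 <= w_true <= 1.0, "bias weights must sum to at most 1"
    return (w_true * p_true
            + w_part * (1.0 / n_intervals)  # pull toward partition anchor
            + w_c1 * anchor1                # carryover from one prior answer
            + w_c2 * anchor2)               # carryover from another

# Example: an underlying probability of 0.40 over 4 intervals, pulled
# toward 1/4 and toward two earlier responses of 0.30 and 0.20.
p = elicited_probability(0.40, 4, 0.30, 0.20, w_part=0.2, w_c1=0.1, w_c2=0.1)
```

In this sketch, estimating the weights from elicited data and inverting the combination would yield the debiased probability; the example weights are purely illustrative.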
In Chapter 3, I develop correction approaches for overconfidence bias in assessed confidence intervals by assuming that the underlying unbiased distributions are Normal or Bi-Normal distributions. For the Normal distributions, I adjust the assessed 90% confidence intervals, using a set of calibration questions for which true values are known, such that the adjusted tails contain 10% of the true values. For the Bi-Normal distribution, which has the median and two standard deviations as parameters, I estimate the parameters by adjusting the assessed 5th, 50th, and 95th percentiles such that the unbiased percentiles are well calibrated. Both correction approaches are applied to two studies from the Cooke dataset, and the improvement in calibration of the corrected intervals is found to be significant. I also validate the correction approaches using an out-of-sample validation test.
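A minimal sketch of the Bi-Normal idea, under stated assumptions: the distribution is split-normal, with a common median and separate standard deviations below and above it, each half implied by the assessed 5th/95th percentiles. The inflation factor `k` is assumed to have been estimated from the calibration questions; this sketch simply applies it, and does not reproduce the thesis's actual estimation procedure.

```python
# Illustrative split-normal ("Bi-Normal") widening of assessed percentiles.
# Overconfident (too-narrow) intervals are corrected by inflating each
# half-distribution's standard deviation by a factor k, assumed to be
# fitted beforehand on calibration questions with known true values.

from statistics import NormalDist

Z95 = NormalDist().inv_cdf(0.95)   # z-score of the 95th percentile, ~1.645

def corrected_percentiles(q05, q50, q95, k):
    """Return widened 5th/50th/95th percentiles for a split-normal
    distribution with median q50 and separate lower/upper spreads."""
    s_low = (q50 - q05) / Z95       # implied SD of the lower half
    s_high = (q95 - q50) / Z95      # implied SD of the upper half
    return (q50 - k * s_low * Z95,  # corrected 5th percentile
            q50,                    # median is left unchanged
            q50 + k * s_high * Z95) # corrected 95th percentile

# Example: an asymmetric, overconfident interval widened by k = 1.5.
lo, med, hi = corrected_percentiles(8.0, 10.0, 14.0, k=1.5)
```

Because the two halves have their own standard deviations, the correction preserves the assessor's asymmetry around the median while widening both tails.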
In Chapter 4, I modify the Bi-Normal approach from Chapter 3 to apply it to studies with few calibration questions and to estimate overconfidence bias parameters from the calibration of several percentiles instead of calibrating based on the tails alone. The modified approach (Multi-Parameter Bi-Normal) is hypothesized to improve overall calibration (as measured by the deviation of the cumulative probabilities of the true values from a perfect calibration line), while the Bi-Normal is hypothesized to improve the calibration of the 5th and 95th percentiles. A comparison of the two correction approaches applied to two elicitation studies, each with a large set of calibration questions, found both in-sample and out-of-sample evidence in support of the two hypotheses. The Multi-Parameter Bi-Normal approach was also applied to two more studies with a small set of calibration questions from the Cooke dataset, and the improvement in calibration of the corrected confidence intervals is found to be significant by both in-sample and out-of-sample tests.
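The overall calibration measure mentioned above — deviation of the cumulative probabilities of the true values from a perfect calibration line — can be sketched as below. This is an assumed, simplified form of that measure (mean absolute deviation against uniform quantiles); the function name and the summary statistic are illustrative, not the thesis's definition.

```python
# Hypothetical sketch of an overall calibration measure: for a perfectly
# calibrated assessor, the CDF values assigned to the true answers are
# uniformly distributed, so their sorted values should track the
# 45-degree "perfect calibration" line.

def calibration_deviation(cdf_at_truth):
    """Mean absolute deviation between the sorted CDF values of the
    true answers and uniform quantiles (smaller is better calibrated)."""
    n = len(cdf_at_truth)
    sorted_p = sorted(cdf_at_truth)
    ideal = [(i + 0.5) / n for i in range(n)]   # perfect-calibration line
    return sum(abs(p - q) for p, q in zip(sorted_p, ideal)) / n

# Well-calibrated assessments hug the uniform line...
good = calibration_deviation([0.1, 0.3, 0.5, 0.7, 0.9])
# ...while overconfident assessments pile the true values into the tails.
bad = calibration_deviation([0.01, 0.02, 0.97, 0.98, 0.99])
```

Fitting the Multi-Parameter Bi-Normal by minimizing a measure of this kind uses information from several percentiles at once, which is what distinguishes it from the tail-only calibration of Chapter 3.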