RESEARCH PRODUCT
Calibrating Expert Assessments Using Hierarchical Gaussian Process Models
Anna Chrysafi, Jarno Vanhatalo, Tommi Perälä
subject
expert elicitation; elicitation; bias correction; Bayesian inference; Bayesian probability; Gaussian process; supra-Bayesian approach; heuristics; reconciliation; judgments; opinions; inference; reliability (statistics); machine learning; artificial intelligence; univariate; probability; stock assessment; catch limits; fisheries; fisheries science; environmental management; management; decision making; fish stock management; statistical models; assessment methods; experts; Bayesian methods; Statistics and Probability; Applied Mathematics; 62P12; 60G15; 62F15
description
Expert assessments are routinely used to inform management and other decision making. However, these assessments often contain considerable biases and uncertainties, so they should be calibrated whenever possible. Moreover, coherently combining multiple expert assessments into one estimate is a long-standing problem in statistics, since modeling expert knowledge is often difficult. Here, we present a hierarchical Bayesian model for expert calibration in the task of estimating a continuous univariate parameter. The model allows experts' biases to vary as a function of the true value of the parameter and according to each expert's background. We follow the fully Bayesian approach (the so-called supra-Bayesian approach) and model the experts' bias functions explicitly using hierarchical Gaussian processes. We show how calibration data can be used to infer the experts' observation models via their bias functions and to calculate bias-corrected posterior distributions for an unknown system parameter of interest. We demonstrate and test our model and methods with simulated data and a real case study on data-limited fisheries stock assessment. The case study results show that the experts' biases vary with respect to the true system parameter value and that calibrating the expert assessments improves inference compared with using uncalibrated expert assessments or a vague uniform guess. Moreover, the bias functions in the real case study reveal important differences in the reliability of the individual experts. The model and methods presented here can also be applied straightforwardly to applications beyond our case study.
Peer reviewed
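As a rough illustration of the idea described in the abstract (not the authors' actual model or code), the sketch below fits a Gaussian process to a single expert's bias as a function of the true parameter value, then uses it to form a bias-corrected grid posterior for a new assessment. The paper's model is hierarchical across several experts with shared and expert-specific components; the single-expert structure, the uniform grid prior, the synthetic data, and all variable names here are illustrative assumptions.

```python
# Simplified, single-expert sketch of supra-Bayesian calibration with a
# Gaussian-process bias function. Calibration data (known true values plus
# the expert's assessments) are used to learn the expert's bias b(theta);
# the learned observation model then yields a bias-corrected posterior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# --- Calibration data: known true values and the expert's assessments ---
theta_cal = np.linspace(0.1, 1.0, 15)                    # true parameter values
bias_true = 0.15 * np.sin(3 * theta_cal)                 # value-dependent bias (synthetic)
x_cal = theta_cal + bias_true + rng.normal(0, 0.03, 15)  # expert's point assessments

# --- Fit a GP to the expert's bias b(theta) = x - theta ---
kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=0.03**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(theta_cal.reshape(-1, 1), x_cal - theta_cal)

# --- Bias-corrected posterior for a new, unknown theta ---
x_new = 0.62                                # the expert's assessment of the new case
theta_grid = np.linspace(0.0, 1.2, 400)     # grid over the unknown parameter
mu_b, sd_b = gp.predict(theta_grid.reshape(-1, 1), return_std=True)

# Observation model implied by the bias function:
#   x_new | theta ~ Normal(theta + b(theta), sd_b(theta)^2),  prior: uniform on the grid
log_lik = -0.5 * ((x_new - (theta_grid + mu_b)) / sd_b) ** 2 - np.log(sd_b)
post = np.exp(log_lik - log_lik.max())
post /= np.trapz(post, theta_grid)          # normalize to a density on the grid

print("bias-corrected posterior mean of theta:", np.trapz(theta_grid * post, theta_grid))
```

A hierarchical version would place a shared GP prior over the experts' bias functions and let each expert's function deviate from it, which is what allows calibration data from one expert to inform the others.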
year | journal | country | edition | language
---|---|---|---|---
2020-12-01 | Bayesian Analysis | | |