Research

Inverse Uncertainty Quantification (UQ)

Uncertainty Quantification (UQ) is an essential step in computational model validation because assessing model accuracy requires a concrete, quantifiable measure of uncertainty in the model predictions. In the nuclear community, UQ generally means forward UQ, in which information flows from the inputs to the outputs. Inverse UQ, in which information flows from the model outputs and experimental data back to the inputs, is an equally important component of UQ but has been significantly underrated until recently. Forward UQ requires knowledge of the computer model input uncertainties, such as statistical moments, probability density functions, and upper and lower bounds, which are not always available. Historically, expert opinion or user self-evaluation has predominantly been used to specify such information in Verification, Validation and Uncertainty Quantification (VVUQ) studies. Such ad-hoc specifications are subjective, lack mathematical rigor, and can lead to inconsistencies. Inverse UQ is defined as the process of inversely quantifying the input uncertainties based on experimental data: it seeks statistical descriptions of the uncertain input parameters that are consistent with the observed data.
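
Schematically, this contrast can be written as follows (a minimal formulation; the symbols f for the computer model, x for the uncertain inputs, and y^E for the experimental observations are introduced here only for illustration):

    \text{Forward UQ:}\quad p(\mathbf{x}) \;\longrightarrow\; y = f(\mathbf{x}) \;\longrightarrow\; p(y)
    \text{Inverse UQ:}\quad p(\mathbf{x} \mid \mathbf{y}^{E}) \;\propto\; p(\mathbf{y}^{E} \mid \mathbf{x})\, p(\mathbf{x})

Forward UQ propagates a known input distribution through the model to the outputs; inverse UQ uses observed data to infer that input distribution in the first place.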

Figure 1: Forward and Inverse UQ processes.

Inverse UQ methods can be broadly categorized into three main groups: frequentist (deterministic), Bayesian (probabilistic), and empirical (design-of-experiments). Our group has been developing Bayesian inverse UQ methods, which assume that the uncertain input parameters have true but unknown values and treat these parameters probabilistically with uncertain distributions, because the exact values cannot be determined from the limited available information. Bayesian inverse UQ methods are built on Bayes' rule as a procedure to update information after observing experimental data. Knowledge about the physical model uncertainties is first characterized as prior distributions, which are updated to posterior distributions based on a comparison of model predictions and data. When sampling-based approaches, such as Markov Chain Monte Carlo (MCMC), are used to explore the posterior distributions, the computational cost can be tremendous because tens of thousands of samples are typically needed. In this case, surrogate models based on machine learning algorithms (e.g., Gaussian processes or deep neural networks) are usually employed in place of the expensive computer model.
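
To illustrate the workflow (prior, surrogate, MCMC sampling, posterior), the sketch below uses a synthetic one-parameter toy model rather than TRACE or Bison; every name in it (expensive_model, surrogate, log_posterior, etc.) is hypothetical, and a scikit-learn Gaussian process stands in for the surrogate:

    # Minimal sketch of Bayesian inverse UQ: a Gaussian-process surrogate plus
    # random-walk Metropolis MCMC on a synthetic toy problem.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    def expensive_model(theta):
        """Stand-in for an expensive code run with one uncertain input parameter."""
        return np.sin(3.0 * theta) + 0.5 * theta

    # Synthetic experimental data generated at a "true but unknown" parameter value.
    true_theta, sigma_obs = 0.7, 0.05
    y_obs = expensive_model(true_theta) + rng.normal(0.0, sigma_obs, size=10)

    # Step 1: train a surrogate on a small design of expensive model runs.
    theta_train = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
    y_train = expensive_model(theta_train).ravel()
    surrogate = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    surrogate.fit(theta_train, y_train)

    # Step 2: log-posterior = uniform prior on [0, 2] + Gaussian likelihood,
    # with the surrogate replacing the expensive model inside the likelihood.
    def log_posterior(theta):
        if not 0.0 <= theta <= 2.0:
            return -np.inf
        y_pred = surrogate.predict(np.array([[theta]]))[0]
        return -0.5 * np.sum((y_obs - y_pred) ** 2) / sigma_obs**2

    # Step 3: random-walk Metropolis sampling of the posterior.
    n_samples, step = 20_000, 0.1
    chain = np.empty(n_samples)
    theta, logp = 1.0, log_posterior(1.0)
    for i in range(n_samples):
        prop = theta + step * rng.normal()
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta

    posterior = chain[5_000:]  # discard burn-in
    print(f"posterior mean = {posterior.mean():.3f}, std = {posterior.std():.3f}")

Because each MCMC step queries only the trained surrogate, the tens of thousands of posterior samples no longer require tens of thousands of expensive code runs.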

We have developed an innovative inverse UQ method called the modular Bayesian approach (MBA). It has been successfully demonstrated on two cases with different levels of complexity: the system thermal-hydraulics code TRACE, with a relatively large amount of void fraction data from the BFBT benchmark, and the fuel performance code Bison, with very limited time-series fission gas release measurement data. Compared to traditional Bayesian calibration, modularization was introduced to separate the various modules of Bayesian inverse UQ so that suspect information belonging to one part cannot overly influence another. The resulting modular Bayesian inverse UQ process has reduced complexity, owing to reasonable simplifications, and better convergence in MCMC sampling. The most significant characteristic of the MBA method is the simultaneous consideration of all major sources of quantifiable uncertainty in modeling & simulation (M&S), i.e., parameter, experimental, model, and code uncertainties. The resulting posterior distributions effectively represent the input uncertainties that are consistent with the experimental data. Our current work focuses on improving the MBA method by resolving several important open issues in inverse UQ, including a more rigorous machine-learning-based representation of the model uncertainty, extrapolation of the model uncertainty term to generalized domains, and test source allocation when experimental data are limited.
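
One common way to write the model-updating relation that connects these uncertainty sources, consistent with the description above though not reproduced verbatim from the MBA publications, is (x denotes the design or control variables, theta the uncertain calibration parameters, delta the model uncertainty or discrepancy term, and epsilon the measurement error; code uncertainty enters when a surrogate approximates y^M):

    \mathbf{y}^{E}(\mathbf{x}) \;=\; \mathbf{y}^{M}(\mathbf{x}, \boldsymbol{\theta}) \;+\; \delta(\mathbf{x}) \;+\; \boldsymbol{\epsilon},
    \qquad \boldsymbol{\epsilon} \sim \mathcal{N}\!\left(\mathbf{0}, \boldsymbol{\Sigma}_{\mathrm{exp}}\right)

Inverse UQ then seeks the posterior of theta (together with a representation of delta) given the observations y^E, which is where the parameter, experimental, model, and code uncertainties each enter the likelihood.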
