UQ18 Abstracts - SIAM




IP1

Scalable Algorithms for PDE-Constrained Optimization Under Uncertainty

We consider optimization problems governed by PDEs with infinite-dimensional random parameter fields. Such problems arise in numerous applications: optimal design/control of systems with stochastic forcing or uncertain material properties or geometry; inverse problems with stochastic forward problems; or Bayesian optimal experimental design problems with the goal of minimizing the uncertainty or maximizing the information gain in the inferred parameters. Monte Carlo evaluation of the objective as per the popular Sample Average Approximation (SAA) algorithm results in an optimization problem that is constrained by N PDE systems, where N is the number of samples. This results in an optimization problem that is prohibitive to solve, especially when the PDEs are “complex” (large-scale, nonlinear, coupled) and discretization of the infinite-dimensional parameter field results in a high-dimensional parameter space. We discuss high-order derivative-based approximations of the parameter-to-objective maps that, in combination with randomized algorithms, exploit the structure of these maps (smoothness, low effective dimensionality). Their use as a basis for variance reduction is demonstrated to significantly accelerate Monte Carlo sampling and permit solution of problems with O(10^6) uncertain parameters. This work is joint with Peng Chen and Umberto Villa (ICES, UT Austin).
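For readers unfamiliar with SAA, the following display is a minimal sketch of the sample-average objective referred to above, in generic notation that is not taken from the talk (z is a design/control variable, m_i are i.i.d. samples of the random parameter field, u_i the corresponding PDE solutions, J a cost functional and R the PDE residual):

```latex
\min_{z}\ \frac{1}{N}\sum_{i=1}^{N} J\bigl(u_i, z\bigr)
\quad\text{subject to}\quad
R\bigl(u_i, m_i, z\bigr) = 0, \qquad i = 1,\dots,N,
```

so every optimization iteration carries N PDE constraints; this is the cost that the derivative-based surrogates and variance reduction described in the abstract aim to tame.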

Omar Ghattas
The University of Texas at Austin
[email protected]

IP2

On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex

Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several recent results in this area, focusing on: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective, (2) a discussion of how to escape saddle points efficiently in nonconvex optimization, and (3) the acceleration of Langevin diffusion.

Michael I. Jordan
University of California, Berkeley
[email protected]

IP3

A Contemporary View of High-dimensional Quasi Monte Carlo

The numerical computation of expected values as high-dimensional integrals is a central task in uncertainty quantification. Quasi Monte Carlo (QMC) methods are deterministic numerical integration methods that aim for better efficiency (and hence lower cost) than traditional Monte Carlo methods. Originally they were designed with the sole aim of obtaining convergence rates close to 1/N (where N is the number of evaluations of the integrand) for smooth enough integrands, compared to the Monte Carlo rate of 1/√N. But little or no attention was paid to the dependence of the error on s, where s is the number of variables, or the dimension. Nowadays, however, integrals with very large numbers of variables are being tackled, with s in the thousands or tens of thousands or more, and as a result there is as much concern about the dependence on s as on N. The aim of this talk is to present highlights of recent progress on QMC for high-dimensional problems. The highlights include algorithms and software for QMC rules tailored to solutions of elliptic PDEs with random coefficients, with error bounds provably independent of the cutoff dimension in this infinite-dimensional problem. In a different direction, there are now high-order QMC rules, rules with potential convergence rates of order 1/N^2 or even faster.
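As a rough illustration of the rate comparison (not code from the talk), the sketch below contrasts plain Monte Carlo with a randomly shifted rank-1 lattice rule on a smooth toy integrand; the generating vector is an arbitrary illustrative choice, not an optimized one.

```python
import numpy as np

def f(x):
    # Smooth toy integrand on [0,1]^s with known mean 1.0
    return np.prod(1.0 + (x - 0.5) / (1 + np.arange(x.shape[-1])) ** 2, axis=-1)

rng = np.random.default_rng(0)
s, N = 4, 4096
z = np.array([1, 433, 1193, 1611])            # illustrative generating vector

x_mc = rng.random((N, s))                     # plain Monte Carlo points
shift = rng.random(s)                         # random shift for the lattice rule
i = np.arange(N)[:, None]
x_qmc = np.mod(i * z[None, :] / N + shift, 1.0)   # rank-1 lattice points

print("MC estimate :", f(x_mc).mean())
print("QMC estimate:", f(x_qmc).mean())       # typically much closer to 1.0
```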

Ian H. Sloan
University of New South Wales
School of Mathematics and Statistics
[email protected]

IP4

Model Uncertainty and Uncertainty Quantification

The Bayesian paradigm provides a coherent approach for quantifying uncertainty given available data and prior information. Aspects of uncertainty that arise in practice include uncertainty regarding parameters within a model, the choice of model, and propagation of uncertainty in parameters and models for predictions. In this talk I will present Bayesian approaches for addressing model uncertainty given a collection of competing models, including model averaging and ensemble methods that potentially use all available models, and will highlight computational challenges that arise in implementation of the paradigm.

Merlise Clyde
Duke University
[email protected]

IP5

Three Principles of Data Science: Predictability, Stability, and Computability

In this talk, I'd like to discuss the intertwining importance and connections of the three principles of data science in the title in data-driven decisions. Making prediction its central task and embracing computation as its core, machine learning has enabled wide-ranging data-driven successes. Good prediction implicitly assumes stability between past and future. Stability (relative to data and model perturbations) is also a minimum requirement for interpretability and reproducibility of data-driven results (cf. Yu, "Stability" in Bernoulli, 2013). It is closely related to uncertainty assessment. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive modeling used for reconstruction of movies from fMRI brain signals to gain interpretability of the predictive model. The second project uses predictive transfer learning that combines AlexNet, GoogLeNet and VGG with single V4 neuron data for state-of-the-art prediction performance. It provides stable function characterization of neurons via (manifold) deep dream images from the predictive models in the difficult primate visual cortex V4, and such images are good candidates for follow-up experiments to probe the neurons for confirmation. Our V4 results lend support, to a certain extent, to the resemblance of these CNNs to a primate brain.

Bin Yu


University of California, Berkeley
[email protected]

IP6

Multi-level and Multi-index Monte Carlo Methods in Practice

The multilevel Monte Carlo method has proven to be very powerful for computing expectations of output quantities of a stochastic model governed by differential equations. It exploits several discretization levels of the underlying equation to dramatically reduce the overall complexity with respect to a standard Monte Carlo method. However, its practical implementation in complex engineering problems affected by a large number of uncertain parameters still presents considerable challenges. In this talk we overview recent improvements and extensions of the MLMC idea, to include concurrent types of discretization (multi-index Monte Carlo method) and to compute derived quantities such as central moments, quantiles, or cdfs of output quantities. We then illustrate the power of the MLMC method on applications such as compressible aerodynamics, shape optimization under uncertainty, ensemble Kalman filtering and data assimilation.
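The following is a minimal sketch of the plain multilevel Monte Carlo estimator that such methods build on, applied to a scalar toy problem (illustrative only, not the speaker's implementation): level-l approximations P_l of a quantity of interest are combined through the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with many cheap samples on coarse levels and few expensive ones on fine levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def P(level, omega):
    # Level-l approximation of a quantity of interest for random input omega.
    # Toy stand-in: an estimate whose bias decays like 2**-level.
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3 * omega)

def mlmc(L, N_per_level):
    total = 0.0
    for l, N in enumerate(N_per_level[: L + 1]):
        omega = rng.standard_normal(N)        # same samples on both levels of a pair
        if l == 0:
            total += P(0, omega).mean()
        else:
            total += (P(l, omega) - P(l - 1, omega)).mean()   # correction term
    return total

print(mlmc(L=4, N_per_level=[100000, 20000, 5000, 1000, 200]))
```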

Fabio Nobile
EPFL, Switzerland
[email protected]

IP7

Data Assimilation and Uncertainty Quantification: A Lagrangian Interacting Particle Perspective

The assimilation of data into computational models and the quantification of forecast uncertainties is central to many application areas, including meteorology, hydrology, seismology, power networks, etc. Broadly speaking, currently used data assimilation techniques fall into one of the following three categories: (i) variational methods, (ii) Markov chain Monte Carlo methods, and (iii) sequential particle filters. Among sequential particle filters, the ensemble Kalman filter (EnKF) has become very popular, but its wider application has been limited by its inherent Gaussian distributional/linearity assumptions. In my talk, I will focus on recent particle filter extensions of the EnKF to high-dimensional problems with non-Gaussian uncertainties and to combined state-parameter estimation problems. Unifying mathematical principles in these developments are Lagrangian interacting particle representations and optimal coupling arguments.

Sebastian Reich
Universitat Potsdam
[email protected]

IP8

Good and Bad Uncertainty: Consequences in UQ and Design

Engineering decisions are invariably made under substantial uncertainty about current and future system cost and response. However, not all variability is equally detrimental. The possibility of exceptionally high performance can be viewed as good uncertainty, while the chance of failure is usually perceived as bad uncertainty. From this perspective, we examine uncertainty quantification and its use in engineering design. We introduce models for uncertainty quantification and decision making based on superquantile risk (s-risk) that distinguish between good and bad uncertainty, avoid paradoxes, and accrue substantial benefits in risk, reliability, and cost optimization. Leveraging multi-fidelity simulations, we describe methods for predicting s-risk at reduced computational cost for complex systems. Examples from naval architecture, earthquake engineering, and energy management illustrate the framework under both parametric and model uncertainty.

Johannes O. Royset
Operations Research Department
Naval Postgraduate School
[email protected]

SP1

SIAG/Uncertainty Quantification Early Career Prize Lecture - Multilevel Markov Chain Monte Carlo Methods for Uncertainty Quantification

Multilevel Monte Carlo methods have become increasingly popular over the last decade, due to their simplicity and their ability to significantly outperform standard Monte Carlo approaches in complex simulation tasks. In this talk, we will discuss how the multilevel methodology can be applied in the context of Markov chain Monte Carlo sampling. The general algorithm will be demonstrated on the particular example of sampling from the posterior distribution in a Bayesian inverse problem, where the goal is to infer the coefficient of an elliptic partial differential equation given observations of the solution. Numerical experiments confirm that the multilevel methodology reduces the computational effort to achieve a given tolerance by several orders of magnitude.

Aretha L. Teckentrup
University of Edinburgh
[email protected]

CP1

Probabilistic Graphical Model Based Approach for Nonlinear Stochastic Dynamic Analysis

Uncertainty quantification in dynamical systems is often difficult due to the high dimensionality of the system responses and inputs as well as the nonlinear nature of the underlying model. The proposed approach attempts to mitigate these issues. It utilizes a probabilistic graphical model (PGM) framework together with a Gaussian process (GP) surrogate model and expectation propagation (EP) for performing inference tasks. A factor graph is utilized to efficiently represent the dependencies of the inputs and the response. Latent variables are introduced into the graphical model to tackle dependencies in an efficient manner. Sparse GPs are used to represent the functional relation between the nodes in the factor graph. To learn the parameters associated with the PGM, sequential Monte Carlo coupled with a stochastic gradient descent algorithm is used. Finally, UQ of the nonlinear dynamical system is performed by employing EP, an efficient approximate Bayesian inference algorithm. Two high-dimensional nonlinear dynamic problems are presented to illustrate the accuracy and efficiency of the proposed framework.

Souvik Chakraborty
Department of Aerospace and Mechanical Engineering
University of Notre Dame
[email protected]

Nicholas Zabaras


University of Notre Dame
[email protected]

CP1

Uncertainty Quantification for Numerical Models with Two or More Solutions

Some numerical models have two or more distinct solutions. These can include tipping points and bifurcations, many being discontinuous across the separate solutions. Illustrations include climate systems (e.g. overturning circulation) and biological systems (e.g. networks to model the human brain). For example, a computer model may fail to complete in specific regions, and we would like to be able to predict where to avoid running the model. We consider how to identify these regions using a latent variable modelled as a Gaussian process (GP). This latent variable acts as an unobserved quantity that helps identify the boundary between regions, where initially just two output solutions are considered. Given only minimal information from a set of initial inputs and their associated output, each input value is given a corresponding class label: negative for solution 1 and positive for solution 2. The latent GP is estimated using this limited information, and a threshold (normally zero) defines the boundary. Methods explored for estimating the latent GP include the EM algorithm, ABC and MCMC sampling. Results are similar, although MCMC proves to be the most efficient. Extensions to higher dimensions and multiple solutions are discussed. We also consider experimental design aspects, such as the optimum place to add data points to improve estimation of the region boundary. We examine whether it is better to place points to improve the boundary estimate or to gain knowledge of the actual system.
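One off-the-shelf way to prototype the latent-GP idea described above (a sketch only; the authors estimate the latent GP with EM, ABC and MCMC rather than this library routine) is a Gaussian process classifier whose 0.5-probability contour plays the role of the boundary between the two solution regions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy data: inputs in [0,1]^2 with label +1/-1 depending on which solution
# branch a hypothetical simulator returned at that input.
rng = np.random.default_rng(0)
X = rng.random((40, 2))
y = np.where(X[:, 0] + 0.3 * np.sin(6 * X[:, 1]) > 0.6, 1, -1)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2)).fit(X, y)

# Probability of branch +1 on a grid; the 0.5 level set is the estimated
# boundary between the two solution regions.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), -1).reshape(-1, 2)
p = gpc.predict_proba(grid)[:, 1]
print("fraction of input space assigned to branch +1:", (p > 0.5).mean())
```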

Louise Kimpton
University of Exeter
[email protected]

CP1

Gibbs Reference Posterior for Robust Gaussian Process Emulation

We propose an objective posterior distribution on correlation kernel parameters for Simple Kriging models in the spirit of reference posteriors. Because it is proper and defined through its conditional densities, it lends itself well to Gibbs sampling, thus making the full-Bayesian procedure tractable. Numerical examples show it has near-optimal frequentist performance in terms of prediction interval coverage.

Joseph Mure
Universite Paris Diderot
EDF R&D
[email protected]

Josselin Garnier
Ecole Polytechnique
[email protected]

Loic Le Gratiet
EDF R&D
Universite Nice Sophia Antipolis
[email protected]

Anne Dutfoy
EDF R&D
[email protected]

CP1

Uncertainty Quantification of Atmospheric Chemical Transport Models using Gaussian Process Emulators

Atmospheric chemical transport models (CTMs) simulate the production, loss and transport of hundreds of gases in the atmosphere and are used to make local, regional and global forecasts of air quality. There is an urgent need to: (i) understand why CTMs differ significantly in their predictions of important gases such as tropospheric O3, OH and CH4, and (ii) quantify the uncertainty in these predictions. We address these problems by using global sensitivity analysis, model calibration and uncertainty analysis. We are the first to apply these statistical methods across multiple CTMs. These methods require thousands of model runs, and so we replace the CTM, which typically takes 12 hours to do a one-year run, with a Gaussian process emulator. In this presentation, I will show results of a sensitivity analysis, a model calibration and an uncertainty analysis involving five CTMs, 30 parameters, and model outputs consisting of global maps of O3, CO, and CH4 lifetime. A preliminary analysis found that the differences in global CH4 lifetime predicted by two CTMs are due to the predicted CH4 lifetime being sensitive to two different processes, a result which was unexpected. In a separate preliminary analysis, which used measurements of O3 and CO and an MCMC routine, we reduced the uncertainty of the parameters that had the largest influence on the O3 and CO outputs of a CTM. This had the knock-on effect of reducing the uncertainty in the modelled predictions of O3 and CO.

Edmund M. Ryan, Oliver Wild
Lancaster University
[email protected], [email protected]

Apostolos Voulgarakis
Imperial College London
[email protected]

Fiona O'Connor
UK Met Office
[email protected]

Paul Young
Lancaster University
[email protected]

David Stevenson
University of Edinburgh
[email protected]

CP1

Nonstationary Gaussian Process Emulation of Computer Models via Cluster-based Covariance Mixtures

A common assumption, adopted when constructing a Gaussian process emulator, is the stationarity of the covariance function, implying a global smoothness for the simulator response. However, stationary Gaussian process emulators perform poorly for simulators that exhibit heterogeneity and local features. Stationary Gaussian processes become overconfident in high-variability regions of parameter space and underconfident in regions where the simulator is well-behaved. We propose a new method of constructing nonstationary Gaussian process emulators based on a spatial covariance structure proposed by Banerjee et al. (2004). We identify L distinct regions across the input space found from clustering standardized errors from an initial stationary fit, and for each region we define its own covariance function with the corresponding parameters. The covariance function of the nonstationary GP emulator is a sum of the L covariance functions with the mixture parameters derived from the Bayesian clustering. We examine the performance of our method against a stationary Gaussian process emulator, the Treed Gaussian Process model (TGP) [Gramacy and Lee, 2008] and the Composite Gaussian Process model (CGP) [Ba and Joseph, 2012] on a number of numerical examples. An application of our nonstationary method to the modelling of boundary-layer potential temperature for convection in the CNRM climate model [Voldoire et al., 2011] is provided.

Victoria Volodina, Daniel Williamson
University of Exeter
[email protected], [email protected]

CP2

Reduced Order Model for Random Vibroacoustic Problems

The robust design of vibroacoustic systems requires taking into account possibly numerous sources of uncertainty in the model in order to predict their impact on the response. It is therefore necessary to develop reliable and efficient tools for the prediction of such responses, and in this framework functional approaches for uncertainty quantification suffer from the highly nonlinear behaviour of the response to the input uncertainties. In order to tackle such problems, we consider statistical learning methods based on least squares minimization for high d-dimensional problems where the output is approximated in suitable low-rank tensor formats. The storage complexity of these formats grows linearly with the dimension d, which makes possible the construction of an approximation using only few samples. However, the rank of the approximation of the stochastic dynamic response may be high, and the method thus loses its efficiency. The key is to propose a suitable functional representation and an adapted parametrization with M < d variables for the computation of the response, so as to obtain structured approximations with low complexity. The latter parametrization is determined automatically by coupling a projection pursuit method and low-rank approximation.

Mathilde Chevreuil
Universite de Nantes, France
[email protected]

Erwan Grelier
Ecole Centrale de Nantes, France
[email protected]

Anthony Nouy
Ecole Centrale Nantes
[email protected]

CP2

Progressively Refining Reduced Order Models for Estimating Failure Probabilities of Dynamical Systems

Reduced order models (ROMs) help to accelerate simulations of dynamical systems. Therefore they are being increasingly used for estimating failure probabilities. However, the use of ROMs yields erroneous estimates, primarily due to (i) approximation error from size reduction, and (ii) training error. The training error may arise from suboptimal selection of training points or inaccurate interpolation of ROMs. To address these issues, a new algorithm is proposed here that refines ROMs progressively, thereby attempting to minimize both errors. Accordingly, the ROM size is gradually increased to reduce the error from approximation, while retaining computational efficiency. Further, the training domain of the ROMs is progressively localized to reduce the error from training. The algorithm is robust with respect to the choice of model reduction method and training. It can also be easily parallelized as it is based on a statistical simulation. The proposed algorithm is used to estimate the failure probability of a bridge structure under the action of a moving inertial mass. The uncertainty is assumed to be in the material modulus of elasticity. ROMs are developed using proper orthogonal decomposition, and interpolated on the tangent space to a Grassmann manifold. The algorithm is shown to be able to estimate failure probabilities with sufficient accuracy while offering significant computational gain.

Agnimitra Dasgupta
University of Southern California
[email protected]

Debraj Ghosh
Department of Civil Engineering
Indian Institute of Science
[email protected]

CP2

Stochastic Analysis and Robust Optimization of a Reduced Order Model for Flow Control

This contribution targets a probabilistic robustness and sensitivity analysis for the validation and robust improvement of a Reduced Order Model (ROM) that was deduced from a High Fidelity Model (HFM) simulating the flow field around a high-lift configuration airfoil. The ROM is to be used for closed-loop flow control by means of Coanda blowing. For the robustness analysis a general Polynomial Chaos Expansion (gPCE) based surrogate model was used. In the probabilistic framework, the ROM becomes a stochastic ODE with random coefficients, where both the state of the flow and the coefficients are approximated with a general polynomial chaos expansion (gPCE). This gPCE is also used for an efficient global sensitivity analysis to investigate the influence of variation of the coefficients on the uncertainties of the state of the flow. Besides checking the robustness of the model, the objective here is to gain more information about the influence of the individual modes and to validate the confidence region of the derived model on the solution of the ROM by evaluating variance-based global sensitivities. This information can facilitate the decision making on the number of modes needed and the improvement of the model by further tuning the ROM coefficients by Bayesian inversion. This inverse method is carried out in such a way as to preserve the underlying physics of the High Fidelity Model and its model structure as much as possible.

Noemi Friedman
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]


Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

CP2

Reduced Order Modeling for Nonlinear Structural Analysis using Gaussian Process Regression

A non-intrusive reduced basis (RB) method is proposed for parameterized nonlinear structural analysis considering large deformations and elasto-plastic constitutive relations. In this method, a reduced basis is constituted from a set of full-order solutions employing the proper orthogonal decomposition (POD); then Gaussian process regression (GPR) is used to approximate the projection coefficients onto the reduced space. The GPR is carried out in the offline stage with the aid of active data selection by a greedy algorithm, and the outputs for different parameters can be rapidly obtained as probabilistic distributions in the online stage. Due to the complete decoupling of offline and online stages, the proposed RB method provides a powerful tool to efficiently solve parameterized nonlinear problems with various engineering applications such as structural optimization, risk analysis, real-time evaluation and parameter identification. With both geometric and material nonlinearities taken into account, numerical results are presented for some typical 1D and 2D examples, illustrating the accuracy and efficiency of the proposed method.
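A compact sketch of a generic non-intrusive POD-GPR pipeline of the kind described above, under simplifying assumptions (a toy snapshot matrix, and scikit-learn's GP regressor standing in for whatever implementation the authors use):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Offline stage: snapshots of full-order solutions at training parameter values.
rng = np.random.default_rng(0)
mu_train = rng.uniform(0.5, 2.0, size=(30, 1))                 # parameter samples
snapshots = np.array([np.sin(m * np.linspace(0, np.pi, 200)) for m in mu_train[:, 0]]).T

U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                                          # reduced-basis size
V = U[:, :r]                                                   # POD basis
coeffs = V.T @ snapshots                                       # projection coefficients

# One GP per reduced coefficient, mapping parameter -> coefficient.
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(mu_train, coeffs[i])
       for i in range(r)]

# Online stage: cheap probabilistic prediction at a new parameter value.
mu_new = np.array([[1.3]])
c_mean = np.array([gp.predict(mu_new)[0] for gp in gps])
u_rb = V @ c_mean                                              # reduced-order prediction
print("reduced solution norm:", np.linalg.norm(u_rb))
```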

Mengwu Guo
EPFL SB MATH MCSS
Ecole polytechnique federale de Lausanne
[email protected]

Jan S. [email protected]

CP2

Probabilistic Model Validation of Large-scale Systems using Reduced Order Models

Multiple mathematical models are often available to describe a physical system. Identifying which of those models are useful to represent the physical system is essential to making engineering decisions. In this study, a probabilistic framework is used, where models that cannot reasonably describe the measurements are first falsified by controlling an error criterion, namely the false discovery rate (FDR). Among the remaining model classes, appropriate models and/or model class(es) are then chosen using a model selection algorithm. Finally, the quality of the selected model class(es) is rechecked. To alleviate the computational burden of such validation exercises for large-scale systems, it is proposed herein to use reduced order models (ROMs). A ROM for each model class is first constructed and trained over the entire parameter domain for the falsification step and then adaptively trained for the model selection step. The proposed approach is illustrated using an 11-story 99-degree-of-freedom building where different model classes are formed by considering different flexural rigidities at different stories combined with different levels of support constraints. The ROMs are constructed using proper orthogonal decomposition. The use of the proposed framework is shown to give significant computational savings proportional to those offered by the ROMs.

Erik Johnson, Subhayan De, Agnimitra Dasgupta
University of Southern California
[email protected], [email protected], [email protected]

Steven Wojtkiewicz
Clarkson University
[email protected]

CP2

Quantifying Uncertainty in Reduced Models for Discrete Fracture Networks

Reduced modeling is becoming increasingly necessary for yielding fast results for complicated, expensive models. Although many models rely on a similar methodology for reduction, i.e. taking a matrix system and reducing it while maintaining the same properties as the larger system, there are other avenues. A large-scale computational model for transport was developed at Los Alamos National Laboratory that uses discretized meshes for the forward solution. A reduced model was created using graph networks to simplify the problem. We propose to model the uncertainty in quantities of interest, such as pressure, by using the graph model. We propose to test our uncertainties against well-established results of the high-fidelity model as proof of trust for use in larger and unknown transport problems.

Jaime A. Lopez-Merizalde
Tulane University
[email protected]

James [email protected]

Humberto C. Godinez
Los Alamos National Laboratory
Applied Mathematics and Plasma Physics
[email protected]

CP3

Random Partial Differential Equations on Moving Hypersurfaces

Although partial differential equations with random coefficients (random PDEs) form a very popular and well-developed field, the analysis and numerical analysis of parabolic random PDEs on surfaces still appears to be in its infancy. Surface PDEs appear in many applications such as image processing, cell biology, porous media, etc. Uncertainty of parameters naturally appears in all these models, which motivates us to combine these two frameworks: geometry of the domain and uncertainty. In this talk we will first consider the analysis of a random advection-diffusion equation on evolving hypersurfaces. After introducing an appropriate setting for this problem that combines geometry and probability, we will prove its well-posedness in an appropriate Sobolev-Bochner space. Both uniform and log-normal diffusion coefficients will be considered. Furthermore, we will introduce and analyse a surface finite element discretisation of the equation. We will show unique solvability of the resulting semi-discrete problem and prove optimal error bounds for the semi-discrete solution and Monte Carlo samplings of its expectation. Our theoretical findings are illustrated by numerical experiments in two and three space dimensions.

Ana Djurdjevac
Freie Universitat Berlin
[email protected]

Charlie Elliott
University of Warwick, UK
[email protected]

Ralf Kornhuber
Freie Universitat Berlin, Germany
[email protected]

Thomas Ranner
University of Leeds
[email protected]

CP3

Uncertainty Quantification of PDEs on Random Domains using Hierarchical Matrices

We are interested in the first and second moments of the solution of PDEs with random input data. Previous works have shown that these moments can be computed by hierarchical matrix arithmetic in almost linear time if the solution depends linearly on the data. However, in the case of random domains the dependence of the solution on the data is nonlinear. Extending previous perturbation approaches, we can linearize the problem and compute the first and second moments up to third-order accuracy in almost linear time.

Juergen Doelz
Technische Universitat Darmstadt
Graduate School of Excellence CE
[email protected]

Helmut Harbrecht
Universitaet Basel
Department of Mathematics and Computer Science
[email protected]

CP3

UQ for Nearly Incompressible Linear Elasticity

This talk presents some recent developments in error estimation associated with robust formulations of linear elasticity. Our focus is on uncertainty quantification. We discuss stochastic Galerkin finite element (SGFEM) approximation of two alternative mixed formulations of linear elasticity with random material parameters. Specific numerical results confirm the validity of our theoretical results.

Arbaz Khan
The University of Manchester, Manchester
[email protected]

Catherine Powell
School of Mathematics
University of Manchester
[email protected]

David Silvester
University of Manchester
[email protected]

CP3

Domain Decomposition Solvers for Spectral SFEM Versus Non-intrusive Sparse Grid Based Solvers for Large Stochastic Dimensions

A three-level scalable domain decomposition (DD) algorithm which employs an efficient preconditioner at each level is developed to solve the coarse system in the Galerkin-projection-based intrusive spectral finite element method. The numerical and parallel scalabilities of this solver are first studied for a stochastic PDE with non-Gaussian system parameters represented as a stochastic process using up to 25 random variables. The computational efficiency of the intrusive DD solver is then demonstrated against a sparse grid based non-intrusive method.

Abhijit Sarkar
Associate Professor
Carleton University, Ottawa, Canada
[email protected]

Ajit Desai
Carleton University, Canada
[email protected]

Mohammad Khalil
Sandia National Laboratories
[email protected]

Chris Pettit
United States Naval Academy
[email protected]

Dominique Poirel
Royal Military College of Canada
[email protected]

CP3

Optimal Iterative Solvers for Linear Systems with Random PDE Origins: 'Balanced Black-box Stopping Tests'

This talk will discuss the design and implementation of efficient solution algorithms for symmetric indefinite and nonsymmetric linear systems associated with FEM approximation of PDE problems (Stokes and Navier-Stokes equations in particular) with random coefficients. The novel feature of our preconditioned MINRES and GMRES solvers is the incorporation of error control in the 'natural' norm in combination with a reliable and efficient a posteriori estimator for the PDE approximation error. This leads to a robust and optimally efficient stopping criterion: the iteration is terminated as soon as the algebraic error is insignificant compared to the approximation error. A stopping criterion for the (nonlinear) Newton solver is also presented.

Pranjal Pranjal
School of Mathematics
University of Manchester
[email protected]

David Silvester
University of Manchester
[email protected]

CP3

Advection-Diffusion PDEs with Random Discontinuous Coefficients

Advection-diffusion equations arise in various applications, for instance in the modeling of subsurface flows. We consider this type of partial differential equation with random coefficients, which may then account for insufficient measurements or uncertain material procurement. To represent, for example, transitions in heterogeneous media, we include spatial discontinuities in the parameters of the equation. For a given temporal discretization, we solve a second-order elliptic problem in each time step where the random coefficient is given by the sum of a (continuous) Gaussian random field and a (discontinuous) jump part. By combining multilevel Monte Carlo sampling techniques and pathwise finite element methods we are able to estimate moments of the solution to the random partial differential equation. To stabilize the numerical approximation and accelerate convergence, we introduce an adaptive, pathwise triangulation for the finite element approximation which accounts for the varying discontinuities in each sample of the coefficients. This is joint work with Andrea Barth (SimTech, University of Stuttgart).

Andreas Stein
University of Stuttgart
[email protected]

Andrea Barth
SimTech, Uni Stuttgart
[email protected]

CP4

Parameter Identification for a Viscoplastic Model with Damage and Effect of Conditions on Results using Bayesian Approaches

The state of materials, and accordingly the properties of structures, changes over the period of use, which may influence the reliability and quality of the structure during its lifetime. Therefore identification of the model parameters and states of the system is a topic which has attracted attention in the context of structural health monitoring. In this work the focus is on identification of material parameters and states of a viscoplastic damaging material. It is proposed to use Bayesian inverse methods for this. To do so, two steps are considered: solving the forward and the inverse problem. First, the propagation of the a priori parametric uncertainty through the model, including damage, describing the behaviour of a steel structure is studied. A non-intrusive Stochastic Finite Element Method (SFEM) based on polynomial chaos is applied. From the forward model, material parameters and internal unobservable state variables can be identified using measurement data such as displacement via Bayesian approaches. In this study, two methods are applied: the first is a polynomial chaos based update method, which is a modification of the Kalman filter, and the second is a transitional Markov chain Monte Carlo method. At the end, the results of both methods are compared. We study the effect of load conditions and sensor placements, which provide us with the observations, on the identification procedure and on how much information they provide to the identification process.

Ehsan Adeli
Institute for Scientific Computing
Braunschweig, Germany
[email protected]

Bojana Rosic
Institute for Scientific Computing
Brunswick University of Technology
[email protected]

Hermann G. Matthies
Institute for Scientific Computing
TU Braunschweig
[email protected]

CP4

Parameter Calibration and Model Validation of JWL Equation of State Based on Multi-output

The adjustable parameters in the JWL equation of state need to be calibrated to match detonation simulation results and experimental data. However, no model is perfect; the model bias in physical models may have a significant impact on the results of parameter calibration. Cross-validation based on multiple outputs is one of the effective methods to identify model bias. The cylinder test is the benchmark experiment to evaluate the output energy of an explosive and calibrate the parameters of the JWL equation of state for explosive products. The traditional calibration method is usually based only on radius data in the radial direction. With improvements in test technology, both the radius and the velocity can be observed in the cylinder test. In this paper, we propose that Gaussian process and wavelet methods be used to build surrogate models for the radius and velocity, respectively. Three statistical models are then considered for calibration: both radius and velocity, radius only, and velocity only. Finally, the three calibration results are used for model validation.

Hua Chen, Guozhao Liu, Haibing Zhou, Shudao Zhang
Institute of Applied Physics and Computational Mathematics
[email protected], [email protected], [email protected], [email protected]

Zhanfeng Sun
Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, CAEP
[email protected]

CP4

Bayesian Inference for Estimating Model Discrepancy of an Electric Drive Model

The exponential increase of available compute power makes it possible to leverage the potential of uncertainty quantification (UQ) in new applications in an industrial setup. A main challenge is related to the fact that the considered models are rarely able to represent the true physics and demonstrate a discrepancy compared to measurement data. Further, accurate knowledge of the considered model parameters is usually not available. For example, fluctuations in manufacturing processes of hardware components introduce uncertainties which must be quantified in an appropriate way. This requires efficient methods for UQ based on Bayesian inference. An important task related to Bayesian inference is to accurately estimate parameter distributions from measurement data in the presence of simulation model inadequacy. As a first step, we address this challenge by investigating the influence of model discrepancies on the calibration of model parameters, and we further consider a Bayesian inference framework including an attempt to correct for model discrepancy by an additional term. Synthetic measurement data from an industrial application is then used to evaluate the framework: first, measurement data is created with an electric drive model; second, another electric drive model containing an artificial model discrepancy is used to infer the physical parameters and the model discrepancy term. The application shows a promising perspective for the framework, with good approximation of the discrepancy and the parameters.
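For context, a common way to write the calibration-with-discrepancy setup sketched above (generic Kennedy-O'Hagan-style notation; not necessarily the exact formulation used in this work) is

```latex
y(x) \;=\; \eta(x, \theta) \;+\; \delta(x) \;+\; \varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0, \sigma^2),
```

where y is the measurement at conditions x, eta the simulation model with physical parameters theta, delta an explicit model-discrepancy term (often given a Gaussian-process prior), and epsilon observation noise; Bayesian inference then targets the joint posterior of theta and delta.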

David John
Corporate Research, Robert Bosch GmbH
University Heidelberg
[email protected]

Michael Schick
Robert Bosch GmbH
Mechatronic Engineering CR/AEM
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

CP4

Stochastic Reconstruction of Porous Media from Voxel Data

We describe a novel method to generate computer models of an aluminum foam given its CT data, such that the generated samples are similar to the target foam in terms of the statistics of a selection of microstructural descriptors. The generated samples could later be used in flow simulations in order to investigate the relationships between the material's effective properties, such as permeability, conductivity, etc., and its microstructure. Currently used methods to generate such samples are either very time-intensive or inaccurate (in terms of being unable to match important microstructural descriptors). In order to address this issue, a novel stochastic model based on the generalised ellipsoid equation has been devised which, when incorporated into a Markov chain Monte Carlo framework, is capable of producing structures that are statistically close to the target structure within a reasonable time frame. In this presentation, the aforementioned model and the framework will be described, along with an analysis of the quality of the reconstructed samples.

Prem Ratan Mohan Ram
Institute of Scientific Computing, TU Braunschweig
[email protected]

Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

Noemi Friedman
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]

Ulrich Roemer
Institute of Dynamics and Vibrations
TU Braunschweig, Germany
[email protected]

CP4

Characterizing Errors and Uncertainties in Non-equilibrium Molecular Dynamics Simulations of Phonon Transport

Non-equilibrium molecular dynamics (MD) is commonly used for predicting phonon transport and estimating the thermal conductivity of material systems. A heat flux is imposed along a specific direction and the resulting steady-state temperature gradient is used to estimate the thermal conductivity based on Fourier's law. This methodology, however, is known to exhibit large discrepancies from experimental measurements of bulk thermal conductivity. In this work, we study the impact of size and input heat flux on such discrepancies for silicon using the Stillinger-Weber (SW) inter-atomic potential. Since thermal conductivity estimates are based on the nominal values of the SW potential parameters, we investigate variability in these estimates by perturbing the individual parameter values. Furthermore, we perform sensitivity analysis and calibrate important parameters in a Bayesian setting using the available experimental data. Non-intrusive polynomial chaos surrogates are employed to overcome computational hurdles associated with sensitivity analysis as well as Bayesian calibration. However, since the construction of PC surrogates can itself be expensive, we demonstrate methodologies that exploit derivative-based sensitivity measures [Kucherenko et al., 2009] and active subspaces [Constantine, 2015] to reduce the dimensionality of the problem.
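To illustrate the dimension-reduction step mentioned at the end (a generic sketch of the active-subspace construction in the sense of Constantine (2015), not the authors' code), one samples gradients of the quantity of interest with respect to the potential parameters and eigendecomposes their averaged outer product:

```python
import numpy as np

def grad_qoi(theta):
    # Placeholder gradient of a quantity of interest (e.g. thermal conductivity)
    # with respect to d scaled potential parameters; in practice this would come
    # from adjoints, finite differences, or a surrogate model.
    w = np.array([5.0, 2.0, 0.1, 0.05])
    return w * np.cos(theta @ w)

rng = np.random.default_rng(0)
d, M = 4, 500
thetas = rng.uniform(-1, 1, size=(M, d))        # samples in the scaled parameter box
G = np.array([grad_qoi(t) for t in thetas])     # M gradient samples

C = G.T @ G / M                                 # Monte Carlo estimate of E[grad f grad f^T]
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print("eigenvalues:", eigvals[order])           # a gap indicates a low-dimensional active subspace
W1 = eigvecs[:, order[:1]]                      # dominant direction(s) spanning the active subspace
```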

Manav Vohra
UT Austin
[email protected]

Sankaran Mahadevan
Vanderbilt University
[email protected]

CP4

Challenges of Detonation Modeling in Extreme Conditions and its Uncertainty Quantification Methods

The mathematical and physical model of a solid explosive is described by coupled nonlinear partial differential equations. First of all, the mathematical model of solid explosive detonation consists of nonlinear partial differential equations in extreme conditions, and much uncertainty exists in this empirical model. Second, solid explosive detonation is a complex dynamical problem with multi-layer, multi-scale features, from microscopic to macroscopic processes. Third, in the real numerical prediction process, we first search for the optimal values and the confidence intervals of the random (uncertain) variables based on standard experiment calibration and uncertainty optimization, and the result is then used for prediction. The computational cost of the classical approach is huge due to the wide range of the search space and the loss of structure information, so we must deal with the computational expense and large-scale computing problems in order to obtain the expected efficiency and stability. Fourth, it is difficult to establish an appropriate model to exactly describe this process when these uncertainties are considered; this is also a challenge. Based on the above discussion, this paper analyzes the weaknesses and strengths of current uncertainty quantification methods, and then proposes a Bayesian parameter calibration method for the JWL equation of state in explosive detonation based on a surrogate model.

Ruili Wang
Professor
[email protected]

Song Jiang
Institute of Applied Physics and Computational Mathematics
[email protected]

Liang Xiao
Shandong University of Science and Technology
[email protected]

Hu lxingzhi
China Aerodynamics Research and Development Center
[email protected]

CP5

Uncertainty Quantification of Locally Nonlinear Dynamical Systems using Polynomial Chaos Expansion

The problem of uncertainty propagation for a nonlinear dynamical system involves specifying the statistics or the probability distribution of the quantities of interest. However, propagating this uncertainty becomes computationally burdensome for large-scale complex nonlinear systems. An important class among these large-scale systems are those with spatially-localized nonlinearities. In this study, a computationally efficient algorithm is proposed using a generalized polynomial chaos expansion while reducing the order of the system using a nonlinear Volterra integral equation for locally-nonlinear dynamical systems. A fast Fourier transform significantly reduces the computational effort of solving the resulting low-order Volterra integral equations. An adaptive procedure for dividing the random parameter space for the expansions is also introduced here to further enhance the efficiency of the proposed algorithm. The proposed approach is used to estimate the mean and variance of the roof acceleration of a 100-degree-of-freedom building model with a localized nonlinearity in the base isolation layer, which is modeled using a Bouc-Wen hysteresis model. This building is subjected to a historic earthquake excitation. The proposed algorithm achieves great computational efficiency compared to standard uncertainty quantification methods.
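A bare-bones illustration of the non-intrusive side of a polynomial chaos expansion (regression of model outputs on Hermite polynomials of a standard normal input; a one-dimensional sketch under toy assumptions, not the adaptive, Volterra-based algorithm of the talk):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    # Stand-in for an expensive response quantity driven by a standard normal xi.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(0)
xi = rng.standard_normal(500)                  # samples of the germ
y = model(xi)

p = 6
Psi = hermevander(xi, p)                       # probabilists' Hermite basis He_0..He_p
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Moments follow directly from the coefficients: E[y] = c_0 and
# Var[y] = sum_k k! c_k^2 for probabilists' Hermite polynomials.
factorials = np.array([math.factorial(k) for k in range(p + 1)])
print("PCE mean :", coeffs[0])
print("PCE var  :", np.sum(factorials[1:] * coeffs[1:] ** 2))
```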

Subhayan De, Erik Johnson
University of Southern California
[email protected], [email protected]

Steven Wojtkiewicz
Clarkson University
[email protected]

CP5

Uncertainty Quantification for an Optical Grating Coupler using Adaptive Stochastic Collocation

Due to the progress in nanotechnological manufacturing, photonic components can nowadays be designed with structures that are even smaller than the optical wavelength. These optical devices are a promising technology; e.g., plasmonic structures show great potential for guiding and controlling light. Numerical simulations play an important role in the design process of such devices. A challenging problem is that the input data is not exactly known, due to manufacturing imperfections which have a significant impact on nanoscale devices. Therefore, statistical information about S-parameters of such stochastic systems, based on statistical information about the uncertain input data, is of great interest. In this work we quantify uncertainties in the finite element model of a plasmon waveguide [Preiner et al., Efficient optical coupling into metal-insulator-metal plasmon modes with subwavelength diffraction gratings, APL, 2008]. The considered structure is periodic in two dimensions and therefore the computational domain can be confined to a single unit cell on which Maxwell's source problem is solved. In order to cope with a large number of uncertain and very sensitive parameters, an adaptive sparse grid algorithm is applied [Schieche, Unsteady Adaptive Stochastic Collocation Methods on Sparse Grids, PhD thesis, 2012].

Niklas Georg
Graduate School CE, TU Darmstadt, Germany
[email protected]

Ulrich Roemer
Institute of Dynamics and Vibrations
TU Braunschweig, Germany
[email protected]

Sebastian Schoeps
Theorie Elektromagnetischer Felder (TEMF) and GSCE
Technische Universitaet Darmstadt
[email protected]

Rolf Schuhmann
Theoretische Elektrotechnik
TU Berlin, Germany
[email protected]

CP5

Estimation of Plume Dispersion in Heterogeneous Formations by a Transformed Adaptive Stochastic Collocation Method

The plume dispersion of contaminants in heterogeneous formations is dominated by the spatial variability of flow velocities associated with the hydraulic conductivities. In general, the solute statistics can be calculated by Monte Carlo simulations, in which multiple realizations of a plume are generated. The drawback of the Monte Carlo method is that it usually requires a large number of realizations and can be computationally prohibitive. Instead, the stochastic collocation method, based on sparse grids, serves as an alternative approach. However, estimation of plume dispersion using the stochastic collocation method faces two challenges. First, when the correlation length is small compared to the space domain, there will be a large number of random dimensions after the Karhunen-Loève decomposition, and the number of collocation points then increases dramatically, i.e., the curse of dimensionality. Second, the solute concentration has a non-smooth profile when the dispersivity is small, hence the convergence rate of the stochastic collocation method can be rather slow. In this study, a transformed adaptive stochastic collocation method is proposed, which adaptively selects the important random dimensions and introduces an additional stage to transform the non-smooth concentration to a smooth arrival time. The proposed method is tested in two- and three-dimensional examples with different spatial variances and correlation lengths to illustrate its accuracy and efficiency.

Qinzhuo Liao
King Fahd University of Petroleum and Minerals
[email protected]

Dongxiao Zhang
Peking University
[email protected]

CP5

Adaptive Sparse Interpolation Methods for Electromagnetic Field Computation with Random Input Data

In many applications of science and engineering, time- or resource-demanding simulation models are often substituted by inexpensive polynomial surrogates in order to enable computationally challenging tasks, e.g. optimization or uncertainty quantification studies of field models. For such approaches, the surrogate model's accuracy is of critical importance. Moreover, in the case of many input parameters, the curse of dimensionality substantially hampers the surrogate's construction. State-of-the-art methods employ interpolation on adaptively constructed sparse grids, typically based on Clenshaw-Curtis [B. Schieche, Unsteady Adaptive Stochastic Collocation Methods on Sparse Grids, TU Darmstadt, 2012] or, more recently, Leja [A. Narayan and J.D. Jakeman, Adaptive Leja Sparse Grid Constructions for Stochastic Collocation and High-Dimensional Approximation, SIAM J. Sci. Comput., 2014] nodes. These methods provide accurate surrogate models, mitigating or altogether avoiding the curse of dimensionality, at the cost of a relatively small number of unused original model evaluations. In this work, we use a benchmark example from the field of computational electromagnetics in order to compare the aforementioned methods with respect to computational cost and accuracy. Moreover, we suggest enhancement approaches aiming to reduce the costs caused by unused model evaluations during the surrogate model's construction.
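For readers unfamiliar with Leja nodes, the following sketch shows the greedy one-dimensional construction on [-1, 1] that sparse-grid interpolation schemes of the Narayan-Jakeman type build on (an illustrative discretized version over a candidate set, not the cited implementation):

```python
import numpy as np

def leja_nodes(n, candidates=np.linspace(-1.0, 1.0, 2001)):
    """Greedily pick n Leja nodes: each new node maximizes the product of
    distances to the nodes chosen so far. The sequence is nested, which is
    what makes it convenient for dimension-adaptive sparse grids."""
    nodes = [candidates[np.argmax(np.abs(candidates))]]   # start at an extreme point
    for _ in range(n - 1):
        dist = np.ones_like(candidates)
        for x in nodes:
            dist *= np.abs(candidates - x)
        nodes.append(candidates[np.argmax(dist)])
    return np.array(nodes)

print(leja_nodes(7))   # adding further nodes never discards the existing ones
```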

Dimitrios Loukrezis
Institut Theorie Elektromagnetischer Felder, TU Darmstadt
Graduate School Computational Engineering, TU Darmstadt
[email protected]

Ulrich Roemer
Institut fuer Theorie Elektromagnetischer Felder, TEMF
Technische Universitaet Darmstadt
[email protected]

Herbert De Gersem
Institut Theorie Elektromagnetischer Felder, TU Darmstadt
Graduate School Computational Engineering, TU Darmstadt
[email protected]

CP5

Adaptive Pseudo-spectral Projection for Time-dependent Problems

Polynomial Chaos Expansions (PCEs) are popular spectral surrogate models typically applied to solve uncertainty propagation problems. They are used to quantify the effect of statistical fluctuations in simulation model outputs due to uncertainty in parameter space. Various techniques exist to compute the coefficients of PCEs. A canonical way is to approximate the coefficients by numerical quadrature, evaluate the simulation model at the quadrature nodes and post-process the results. However, this approach suffers from aliasing errors, which can be circumvented by tailoring the PCE to the numerical quadrature (in our case a Smolyak sparse grid) in a special way. This gives rise to the so-called Pseudo-Spectral Projection method (PSP), which can be extended to a dimension-adaptive version (aPSP). For time-dependent problems, however, the associated error estimators must take into account variations of errors in time. Depending on the time interval of interest, uncertainties in parameters contribute differently to the overall uncertainty fluctuations of the simulation model outputs. In this talk, we give a short overview of aPSP and investigate the choice of different error estimators for a time-dependent problem with uncertain parameters. The simulation model represents a direct current model for an electrical motor. The focus is on convergence of the error estimators in comparison to root mean-square errors obtained from a set of test simulation data.

Michael Schick
Robert Bosch GmbH
Mechatronic Engineering CR
[email protected]

CP6

Advanced Sensitivity Analysis for Offshore Wind Cost Modelling

The solution of managerial problems often involves the creation of dedicated decision support models, which are complex and potentially involve inputs subject to uncertainty, meaning that the resulting relationships between inputs and outputs are poorly understood. This is the case for the Offshore Wind Cost Analysis Tool (OWCAT) developed at the EDF Energy R&D UK Centre; given the increased complexity, modellers can no longer grasp the response of the model outputs to variations in inputs based solely on intuition. As a result, three global sensitivity analysis methods have been considered for OWCAT: Morris, Sobol variance-based and PAWN density-based. Special attention has been paid to the work of Saltelli, where the most effective Morris or Sobol estimator is chosen based on the sample size. In addition, where variance is not a good proxy for uncertainty, the PAWN technique allows the modeller to identify cases where the uncertainty is misrepresented. A sensitivity analysis toolbox has been created and benchmarked against a set of well-studied test functions, before being applied to OWCAT. The contribution of this talk is to demonstrate the application of these advanced sensitivity analysis techniques not only to theoretical test functions but also to a real-world decision-making case in the offshore wind industry. Useful insight into the model is gained by factor prioritization and factor fixing.
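
For reference, a bare-bones Monte Carlo estimate of first-order Sobol indices using a Saltelli-type pick-and-freeze design is sketched below; the estimator variant, sample size and Ishigami-style test function are illustrative, not the toolbox described in the talk.

import numpy as np

def sobol_first_order(model, n_dim, n_samples, rng=np.random.default_rng(0)):
    """First-order Sobol indices for independent U(0,1) inputs (pick-and-freeze)."""
    a = rng.random((n_samples, n_dim))
    b = rng.random((n_samples, n_dim))
    ya, yb = model(a), model(b)
    var_y = np.var(np.concatenate([ya, yb]), ddof=1)
    indices = np.empty(n_dim)
    for i in range(n_dim):
        abi = a.copy()
        abi[:, i] = b[:, i]                       # freeze column i from the second sample
        indices[i] = np.mean(yb * (model(abi) - ya)) / var_y
    return indices

def ishigami(x):
    """Ishigami test function, inputs mapped from [0,1]^3 to [-pi,pi]^3."""
    z = (2.0 * x - 1.0) * np.pi
    return np.sin(z[:, 0]) + 7.0 * np.sin(z[:, 1])**2 + 0.1 * z[:, 2]**4 * np.sin(z[:, 0])

print(sobol_first_order(ishigami, n_dim=3, n_samples=20000))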

Esteve Borras Mora
EDF Energy R&D UK Centre
[email protected]

James Spelling
EDF Energy R&D UK Centre
[email protected]

Harry van der Weijde
University of Edinburgh
[email protected]

CP6

The Space of Shapes and Sensitivity Analysis: An Application of Differential Geometry

Engineering designers routinely manipulate shapes in engineering systems toward design goals formulated as a functional over the shape, e.g., shape optimization. Differential geometry provides a set of analytical tools for defining a shape calculus. In particular, we investigate applications using the set of equivalence classes resulting from an S^1 embedding in R^2 given the set of all re-parameterizations. Combining the presented space of shapes with a Sobolev-type metric and corresponding shape sensitivity analysis enables the study of a local subspace of shapes on a Riemannian manifold. We discuss a numerical interpretation to estimate important directions in this subspace which change a functional the most, on average.

Zach Grey
Colorado School of Mines
[email protected]

CP6

Bayesian Estimation of Probabilistic Sensitivity Measures for Computer Experiments

Simulation-based experiments have become increasingly important for risk evaluation and decision-making in a broad range of applications in engineering, science and public policy. In the presence of uncertainty regarding the phenomenon under study, in particular of the simulation model inputs, a probabilistic approach to sensitivity analysis becomes crucial. A number of global sensitivity measures have been proposed in the literature, together with estimation methods designed to work at relatively low computational costs. First in line is the one-sample or given-data approach, which relies on adequate partitions of the input space. We propose a Bayesian alternative for the estimation of several sensitivity measures which shows good performance on synthetic examples, especially for small sample sizes. Furthermore, we propose the use of a nonparametric approach for conditional density estimation which bypasses the need for pre-defined partitions, allowing the sharing of information across the entire input space through the underlying assumption of partial exchangeability. In both cases, the Bayesian paradigm ensures the quantification of the uncertainty in the estimation.
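
To fix ideas, the basic given-data (one-sample) estimator that a Bayesian alternative would be compared against can be sketched as follows; the equal-count partition, bin count and synthetic data are illustrative choices.

import numpy as np

def given_data_first_order(x, y, n_bins=25):
    """One-sample estimate of first-order sensitivity indices:
    partition each input's range into bins and compare conditional means of y."""
    indices = np.empty(x.shape[1])
    for i in range(x.shape[1]):
        bins = np.array_split(np.argsort(x[:, i]), n_bins)     # equal-count partition
        cond_means = np.array([y[b].mean() for b in bins])
        cond_sizes = np.array([len(b) for b in bins])
        indices[i] = np.average((cond_means - y.mean())**2, weights=cond_sizes) / y.var()
    return indices

rng = np.random.default_rng(0)
x = rng.uniform(size=(5000, 3))
y = x[:, 0] + 0.5 * x[:, 1] + 0.01 * rng.normal(size=5000)     # third input is inactive
print(given_data_first_order(x, y))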

Xuefei Lu
Bocconi University
[email protected]

Emanuele Borgonovo
Bocconi University (Milan)
ELEUSI Research Center
[email protected]

Isadora Antoniano-Villalobos
Bocconi University
Department of Decision Sciences
[email protected]

CP6

Efficient Evaluation of Reliability-oriented Sensitivity Indices

The role of simulation has kept increasing for the analysis of complex systems. In that prospect, global sensitivity analysis studies how the uncertainty in the code output can be apportioned to the uncertainty in the different stochastic code inputs. However, most of the classical sensitivity methods, such as the Sobol indices, focus on the central tendency of the output. When interested in the analysis of undesirable events associated with small probabilities, such sensitivity indices may thus not be meaningful. This motivated the proposal of reliability-oriented sensitivity indices. However, the evaluation of these indices generally requires a huge number of dedicated code evaluations, which can lead to extremely high total computational cost. To circumvent this problem, the present work proposes a method to evaluate several kinds of reliability-oriented sensitivity indices which is based only on the code evaluations that were needed to compute the probability of occurrence of the considered undesirable event. After introducing the theoretical basis of this method, several applications will be presented to illustrate its efficiency.

Guillaume Perrin, Gilles Defaux
[email protected], [email protected]

CP7

Using Computer Models and UQ to Diagnose Diastolic Heart Failure

Diastolic heart failure is a currently untreatable condition that is caused by stiffness of the heart. We investigate the possibility of diagnosing the condition using data from MRI scans, a numerical model of the heart simulating a single beat, and uncertainty quantification. To invert the model we use history matching. This involves building an emulator so that we can predict the model results (with associated uncertainty) at any point in input space and hence an implausibility score. Those parts of input space that are implausible are ruled out. Running further waves of model runs within the not-implausible region reduces it until this region is not reduced any further or becomes zero. The model is then calibrated. If there are no not-implausible points we have to adjust the model discrepancy, i.e. how well we expect the model to reproduce the data. To carry this out we need to make a number of choices. We need to reduce the dimension of the problem (the initial dimension is high) without losing vital information. There are regions of input space where the modelled heart is too stiff to beat, and so the model run does not complete. We do not want to emulate these regions, so we find a way of predicting where they are. We also discuss model discrepancy. This is a measure of how well we expect the model to perform for the chosen metric and is an integral part of history matching. We show that not-implausible regions for healthy and sick patients are disjoint.
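
A small sketch of the implausibility measure that drives history matching; the variance terms, numerical values and cutoff are illustrative placeholders for quantities that would come from the emulator and from judgements about observation error and model discrepancy.

import numpy as np

def implausibility(z_obs, em_mean, em_var, obs_var, disc_var):
    """Implausibility I(x) of an input x given the emulator prediction at x."""
    return np.abs(z_obs - em_mean) / np.sqrt(em_var + obs_var + disc_var)

# inputs with I(x) above a cutoff (3 is a common choice) are ruled out
keep = implausibility(z_obs=1.2, em_mean=0.7, em_var=0.04, obs_var=0.01, disc_var=0.02) < 3.0
print(keep)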

Peter Challenor, Lauric Ferrat
College of Engineering, Mathematics and Physical Sciences
University of Exeter
[email protected], [email protected]

Steven Niederer
King's College London
[email protected]

CP7

Fluid-structure Interaction with Uncertainty in Medical Engineering

In recent decades biomedical studies with living probands (in vivo) and artificial experiments (in vitro) have been complemented more and more by computation and simulation (in silico). In silico techniques for medical diagnosis can give enhanced information for the risk stratification of cardiovascular surgery. When making surgical decisions, high reliability is a requirement for diagnostic methods. Since uncertainties are inherent to the measurement-based simulation input data, a quantification of their impact is needed for reliability analysis. For cardiovascular applications, such as the simulation of blood flow through the aorta, the physiological understanding and modeling still has challenging aspects with respect to uncertainty. We present a numerical framework for the three-dimensional simulation of aortic blood flow with elastic vessel wall movement including uncertainty quantification. The underlying fluid-structure interaction (FSI) problem is calibrated patient-specifically by magnetic resonance imaging. The resolution of the vessel geometry as well as of the time profile of the measured flow is limited to a certain accuracy. Furthermore, the non-invasive measurement of the stiffness of the aortic vessel wall is highly challenging. This leads to uncertainties in the physical parameters of the underlying mathematical model. We present an approach to efficiently compute patient-specific aortic FSI simulations incorporating UQ based on high performance computing.

Jonas Kratzke
EMCL, IWR
Heidelberg University
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

CP7

Bayesian Uncertainty Quantification for Epidemic Spread on Networks

While there exist a number of mathematical approaches to modeling the spread of disease on a network, analyzing such systems in the presence of uncertainty introduces significant complexity. In scenarios where system parameters must be inferred from limited observations, general approaches to uncertainty quantification can generate approximate distributions of the unknown parameters, but these methods often become computationally expensive if the underlying disease model is complex. In this talk, I will apply Π4U, a recent massively parallelizable Bayesian uncertainty quantification framework, to a model of disease spreading on a network of communities, showing that the method accurately and tractably recovers system parameters and selects optimal models in this setting.

Karen Larson
Brown University
[email protected]

Zhizhong Chen, Clark Bowman
Brown University, USA
[email protected], [email protected]

Panagiotis Hadjidoukas
CSE-lab, Institute for Computational Science, D-MAVT
ETH Zurich, Switzerland
[email protected]

Costas Papadimitriou
University of Thessaly
Dept. of Mechanical Engineering
[email protected]

Petros Koumoutsakos
Professorship for Computing Science
ETH Zurich, Switzerland
[email protected]

Anastasios Matzavinos
Brown University
[email protected]

CP7

Data-extraction Uncertainty in Meta-analysis of Published Medical Data

The data required for meta-analysis are often not available for extraction in published medical studies (for example, the true number of deaths). Often these data are missing due to loss to follow-up. It may be possible to infer the missing data using other information available in the papers, such as Kaplan-Meier survival plots. If the probability of survival is extracted, an estimated number of deaths could be inferred and a meta-analysis computed. However, the survival probability is an estimate with associated uncertainty. Current meta-analysis models do not account for the uncertainty associated with the data estimations and may result in potentially inaccurate conclusions. We propose methods to model the uncertainty that occurs at the data-extraction step and properly propagate it into the final meta-analysis estimates.

Shemra Rizzo
UC Riverside
[email protected]

CP7

Uncertainty Quantification for the Reliable Simulation of a Blood Pump Device

Heart failure (HF) is a severe cardiovascular disease; it occurs when the heart muscle is so weakened that it cannot provide sufficient blood for the body's needs. More than 23 million people suffer from HF worldwide. Although the modern transplant operation is well established, the lack of heart donations is a major restriction on transplantation frequency. Therefore, ventricular assist devices (VADs) can play an important role in supporting patients during the waiting period and after the surgery. We consider an intrusive Galerkin projection based on the generalized Polynomial Chaos (gPC). The intrusive Galerkin approach can represent the stochastic process directly at once with Polynomial Chaos series expansions, and therefore optimizes the total computing effort compared with classical non-intrusive methods. We compared different preconditioning techniques for a steady-state simulation of a blood pump configuration in our previous work; the comparison shows that an inexact multilevel preconditioner has promising performance. In this work, we show an instationary blood flow through a Food and Drug Administration blood pump configuration with the Galerkin projection method, which is implemented in our open-source finite element library Hiflow3. Three uncertainty sources are considered: the inflow boundary condition, the rotor angular speed and the dynamic viscosity. The numerical results are demonstrated with more than 40 million degrees of freedom using a supercomputer.

Chen Song
HITS gGmbH, Heidelberg
[email protected]

Vincent Heuveline
HITS gGmbH, Heidelberg
Heidelberg University
[email protected]

CP8

High Performance Computing for Uncertainty Quantification: Challenges and Perspectives for Flow Problems

The available compute power on high performance computers (HPC) is still increasing exponentially. This tremendous increase of capacity in the last decade is mainly due to the growth of the number of cores used for a given numerical simulation. Besides scalability and data locality issues, taking full advantage of intrinsic features remains a challenging task, even for purely deterministic models. Typical setups for which these issues may become highly critical can be found, e.g., in computational fluid dynamics (CFD). Such problems are usually modeled by the Navier-Stokes equations, which include parameters that cannot always be assumed to be known accurately. Taking such uncertainties into account by stochastic models results in a parametrization by a set of independent random variables. This leads to a blown-up system size requiring even more scalable numerical schemes in order to take advantage of the capabilities of high performance computers. This talk discusses the impact of emerging hardware technology in HPC for uncertainty quantification (UQ) applied to CFD. The link between the contemporary deterministic approach and intrusive schemes based on stochastic Galerkin methods using Polynomial Chaos will be established. A simplified performance model based on the computational intensity is presented and allows us to evaluate the quality and the efficiency of UQ approaches in the context of CFD. Numerical experiments assuming complex flow problems are presented.

Vincent Heuveline, Saskia Haupt
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected], [email protected]

CP8

Eigenspace-based Uncertainty Characterization in Large-Eddy Simulation of Turbulent Flow

Large-Eddy Simulation (LES) has gained significant importance as a high-fidelity technique for the numerical resolution of complex turbulent flow. In comparison to Direct Numerical Simulation (DNS), LES approaches reduce the computational cost of solving turbulent flow by removing small-scale information from the conservation equations via low-pass filtering. However, the effects of the small scales on the resolved flow field are not negligible, and therefore their contribution in the form of Sub-Filter Stresses (SFS) needs to be modeled. As a consequence, the assumptions introduced in the closure formulations result in potential sources of structural uncertainty that can affect the Quantities of Interest (QoI). Moreover, unlike DNS, the resolved fluctuations at a size just larger than the "filter width" still have significant energy. This complicates the control of numerical errors, which become important sources of numerical uncertainty that can impact the QoIs both directly through the resolved scales and indirectly via the modeled SFS. The aim of this work is to characterize the importance of these two types of uncertainty and their impact on the QoIs by means of recently developed eigenspace-based strategies that decompose the SFS tensor into magnitude, shape and orientation to facilitate the analysis. In the presentation, the strategy will be described in detail and results from LES of turbulent flow will be discussed.
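
As a rough sketch of a magnitude/shape/orientation split of a symmetric stress tensor (the example tensor is arbitrary and the anisotropy normalization shown is one common convention, not necessarily the authors' exact definition):

import numpy as np

def decompose_sfs(tau):
    """Split a symmetric 3x3 stress tensor into magnitude (from the trace),
    shape (ordered eigenvalues of the normalized anisotropy) and
    orientation (the corresponding eigenvectors)."""
    k = 0.5 * np.trace(tau)                       # magnitude: sub-filter energy
    aniso = tau / np.trace(tau) - np.eye(3) / 3.0  # normalized anisotropy tensor
    eigval, eigvec = np.linalg.eigh(aniso)         # shape and orientation
    return k, eigval[::-1], eigvec[:, ::-1]

tau = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.1],
                [0.0, 0.1, 0.5]])
k, shape, orientation = decompose_sfs(tau)
print(k, shape)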

Lluis Jofre
Stanford University
Mechanical Engineering
[email protected]

Stefan P. Domino
Sandia National Laboratories
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

CP8

Predictive Simulations for Calculating Wind Loads on Buildings

Computational fluid dynamics can be used in wind engineering applications to quantify pressure loads on buildings and assess wind hazards, but simulations of atmospheric boundary layer flows are strongly influenced by uncertainty in the inflow boundary condition and the turbulence model. In the present work we quantify the uncertainty in a Reynolds-averaged Navier-Stokes simulation of a wind tunnel experiment of a high-rise building. The objective is to predict the mean pressure coefficient distribution on the lateral building facade with a quantified confidence interval, and to compare the results to the experimental data. The uncertainty in the inflow condition is characterized using 3 uncertain parameters (reference velocity, roughness length and model orientation), and then propagated to the quantities of interest using a polynomial chaos expansion approach. The results indicate that the uncertainty in the inflow conditions is non-negligible, but insufficient to explain the discrepancy with the wind tunnel test data. We subsequently investigate the uncertainty related to the turbulence model, using a framework that introduces perturbations in the Reynolds stress tensor. The performance of this method is evaluated by comparing the results to a large-eddy simulation of the same building configuration. The results demonstrate some shortcomings in the existing approach, and indicate that a multi-fidelity simulation framework might be required to achieve the objective.

Giacomo Lamberti
Columbia University
[email protected]

Catherine Gorle
Stanford University
[email protected]

CP8

Estimation of Uncertainty of Turbulence Model Predictions in SU2

Reliable predictions for turbulent flows are of fundamental importance for a variety of engineering analysis and design applications. Herein, Reynolds-Averaged Navier-Stokes (RANS) models represent the workhorse for industrial design and investigations. Owing to the closure assumptions utilized in their formulation, RANS models introduce a significant measure of model-form uncertainty in the predictions. The estimation and, eventually, reduction of these uncertainties is of great interest to ensure that turbulence models can be reliable predictive tools in industrial analysis and design. Many methodologies to estimate this uncertainty are currently being developed. However, these are typically implemented only in proprietary codes. A vast majority of industrial simulations involving turbulent flows utilize commercial or open-source CFD codes. At present, such turbulence model-form uncertainty estimation is not available in any such software suite. We implement and demonstrate the Eigenspace Perturbation framework for turbulence model-form uncertainty estimation in the SU2 CFD suite. SU2 is an open-source compressible analysis and design framework that utilizes unstructured mesh technology to represent complex geometric shapes typical of industrial practice. This implementation is tested over a series of turbulent flows, benchmark test cases and flows of engineering import, and the computed uncertainty bounds are contrasted against experimental data and high-fidelity simulations.

Jayant Mukhopadhaya
Stanford University
Aeronautics and Astronautics Department
[email protected]

Aashwin A. Mishra
Center for Turbulence Research, Stanford University & NASA Ames
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

Juan J. Alonso
Department of Aeronautics and Astronautics
Stanford University
[email protected]

CP8

Validation of a Framework for Data Assimilation and Uncertainty Quantification for Urban Flow Predictions

The numerical prediction of urban wind flows is fundamental to ensure the design of resilient and sustainable cities. For example, computational fluid dynamics results can help to optimize pedestrian wind comfort, air quality, natural ventilation strategies, and the deployment of wind turbines. However, the significant variability and uncertainty associated with the atmospheric boundary layer poses an important challenge to the accuracy of the results. To understand the effect of uncertainties in the models and develop better predictive tools, we started a pilot study that combines a numerical analysis with a field measurement campaign on Stanford University's campus. We consider the incoming wind direction and magnitude as uncertain parameters and perform a set of Reynolds-averaged Navier-Stokes simulations to build a polynomial chaos expansion response surface at each sensor location. We subsequently assimilate the measurement data from strategically deployed sensors, using an inverse ensemble Kalman filter to iteratively infer the probability distributions for the inflow parameters. Once these distributions are obtained, the forward analysis is repeated to obtain predictions for the flow field in the entire urban canopy and the results are compared with the experimental data from several other wind sensors. This procedure enables a better characterisation of the uncertainty in the results and improves the confidence in our wind flow predictions.
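
A minimal sketch of a single ensemble Kalman update of uncertain inflow parameters from sensor data (the stochastic, perturbed-observation variant; the linear forward map, noise level and ensemble size are placeholders, not the study's actual configuration):

import numpy as np

rng = np.random.default_rng(0)

def enkf_update(params, forward, y_obs, obs_cov):
    """One perturbed-observation EnKF update of an ensemble of parameters.
    params: (n_ens, n_param); forward maps params to predictions (n_ens, n_obs)."""
    preds = forward(params)
    p_anom = params - params.mean(axis=0)
    y_anom = preds - preds.mean(axis=0)
    n_ens = params.shape[0]
    c_py = p_anom.T @ y_anom / (n_ens - 1)            # parameter-output covariance
    c_yy = y_anom.T @ y_anom / (n_ens - 1) + obs_cov  # output covariance + obs noise
    gain = c_py @ np.linalg.inv(c_yy)
    perturbed = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), obs_cov, n_ens)
    return params + (perturbed - preds) @ gain.T

# toy example: two inflow parameters observed through three sensors
H = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
forward = lambda p: p @ H.T
ens = rng.normal(size=(100, 2))
updated = enkf_update(ens, forward, y_obs=np.array([1.0, 0.8, 0.6]), obs_cov=0.01 * np.eye(3))
print(updated.mean(axis=0))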

Jorge Sousa, Catherine Gorle
Stanford University
[email protected], [email protected]

CP9

Global Optimization of Expensive Functions using Adaptive Radial Basis Functions Based Surrogate Model via Uncertainty Quantification

Global optimization of expensive functions has important applications in physical and computer experiments. It is a challenging problem to develop an efficient optimization scheme, because each function evaluation can be costly and the derivative information of the function is often not available. We propose a novel global optimization framework using an adaptive Radial Basis Function (RBF) based surrogate model via uncertainty quantification. The framework consists of two iterative steps. It first employs an RBF-based Bayesian surrogate model to approximate the true function, where the parameters of the RBFs can be adaptively estimated and updated each time a new point is explored. Then it utilizes a model-guided selection criterion to identify a new point from a candidate set for function evaluation. The selection criterion incorporates the expected improvement (EI) of the function prediction and its uncertainty. We conduct simulation studies with standard test functions to show that the proposed method is more efficient and stable in searching for the global optimizer compared with a prominent existing method.
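
For reference, the expected improvement criterion for minimization under a Gaussian predictive distribution takes the familiar closed form below (using scipy for the normal cdf/pdf); the predictive means and standard deviations are placeholders, not outputs of the RBF-based Bayesian surrogate of the talk.

import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best_so_far):
    """EI of candidate points with Gaussian predictions (minimization)."""
    std = np.maximum(std, 1e-12)
    z = (best_so_far - mean) / std
    return (best_so_far - mean) * norm.cdf(z) + std * norm.pdf(z)

# pick the candidate with the largest EI
mean = np.array([0.2, -0.1, 0.05])
std = np.array([0.3, 0.05, 0.2])
print(np.argmax(expected_improvement(mean, std, best_so_far=0.0)))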

Ray-Bing Chen
Department of Statistics
National Cheng Kung University
[email protected]

Yuan Wang
Wells Fargo
[email protected]

C. F. Jeff Wu
Georgia Institute of Technology
[email protected]

CP9

Solving Stochastic Optimal Power Flow Problems via Polynomial Chaos Expansions

Uncertainty quantification has recently become of significant importance in the field of power systems. This development originates, among others, from the huge influx of renewable energy sources to the power grid. The natural fluctuations of wind/solar energy call for a structured consideration of uncertainties such that a reliable supply of electrical energy can be provided in the future. The so-called optimal power flow (OPF) problem is a cornerstone in the operational planning of power systems. The goal of this nonconvex optimization problem is to determine operating points for power systems such that a (monetary) cost is minimized while technical limitations (e.g. generation limits, line flow limits) and the nonconvex power flow restrictions are respected. The talk will discuss how the OPF problem formulation and solution change in the presence of stochastic uncertainties, leading to stochastic OPF. Starting from existing approaches, we will show that polynomial chaos expansion allows a unified framework to formulate and solve stochastic OPF problems in the presence of (non-)Gaussian uncertainties. Polynomial chaos allows us to answer affirmatively what generation policies are optimal, i.e. how the power generation is adjusted optimally in the presence of stochastic uncertainties. We will support our findings by IEEE test case simulations implemented in Julia. To this end, first steps towards a Julia package for generalized polynomial chaos are presented as a byproduct.

Tillmann Muehlpfordt
Karlsruhe Institute of Technology (KIT)
[email protected]

Timm Faulwasser, Veit Hagenmeyer
Karlsruhe Institute of Technology (KIT)
[email protected], [email protected]

CP9

Topology Optimization using Conditional Value at Risk

Traditional approaches to formulating stochastic topology optimization problems for additive manufacturing tend to use linear combinations of the mean and standard deviation of a design figure of merit (e.g., compliance). However, this choice of objective/constraint underweights tail events for non-normal random variables. We propose replacing these mean-plus-standard-deviation expressions with the conditional value-at-risk (CVaR), and argue that it better compensates for worst-case tail events.
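
As a reminder of the quantity involved, a sample estimate of CVaR at level alpha (roughly, the mean of the worst (1-alpha) fraction of outcomes) can be written in the standard Rockafellar-Uryasev form sketched below; the synthetic loss sample and the alpha value are illustrative, not tied to the compliance objective of the talk.

import numpy as np

def cvar(losses, alpha=0.95):
    """Sample conditional value-at-risk at level alpha (Rockafellar-Uryasev form)."""
    var = np.quantile(losses, alpha)            # value-at-risk (the alpha-quantile)
    excess = np.maximum(losses - var, 0.0)
    return var + excess.mean() / (1.0 - alpha)

losses = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.8, size=10000)
print(np.mean(losses), cvar(losses, 0.95))      # CVaR focuses on the heavy tail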

Geoffrey M. Oxberry
Lawrence Livermore National Laboratory
[email protected]

CP9

Uncertainty Quantification for Stochastic Approximation Limits using Chaos Expansion

We investigate the uncertainty quantification of the limit of a stochastic approximation (SA for short) algorithm. Typically, this limit φ* is the zero of a function h written as an expectation of H; this naturally arises in various problems in optimization, statistics, control, etc. In our study, the limit φ* is subject to uncertainty through a parameter θ (related to the probabilistic model used for the expectation or to the definition of H), and we aim at deriving the probability distribution of φ*(θ) as θ has a distribution π. Our approach consists in decomposing φ* on an L2(π)-orthogonal basis (chaos expansion), and in computing the basis coefficients using a suitable dimension-increasing SA algorithm. Under mild assumptions, we prove the almost-sure convergence of this SA algorithm in the underlying infinite-dimensional Hilbert space. Numerical results support our theoretical analysis. Notably, the possibility of letting the number of estimated expansion coefficients tend to infinity appears not only as a necessary ingredient for proving the theoretical convergence of the algorithm, but also as a key feature from a numerical performance point of view, including for the estimation of the lower order coefficients.

Uladzislau Stazhynski
Ecole Polytechnique
[email protected]

Stephane C. Crepey
Evry University, France
[email protected]

Gersende Fort
Universite de Toulouse
[email protected]

Emmanuel Gobet
Ecole Polytechnique
[email protected]

CP10

A Bayesian Approach for Quantifying the Uncertainty of Physical Models Integrated into Computer Codes

We adopt a Bayesian approach to address an inverse problem arising in uncertainty quantification. The issue is to assess the uncertainty of physical models integrated into computer codes by means of indirect physical measurements. To do so, a non-intrusive approach is followed in this work where the code is taken as a "black-box" function. After introducing a statistical model that links the available physical measurements to the corresponding code outputs, a Gibbs sampler based on the full conditional distributions is laid out in both linear and non-linear settings. However, as the computer simulations are moderately expensive, it tends to become impractical in reasonable time. To overcome this difficulty, we propose to replace the computer code with a Gaussian process emulator which can deliver both robust and quick predictions of the code outputs. An industrial application to a thermal-hydraulic code at the system scale is displayed where the uncertainty of two condensation models for safety injections is assessed.

Guillaume Damblin, Pierre
[email protected], [email protected]

CP10

When Models and Data Disagree: Sparse Resolutions to Inconsistent Datasets in B2BDC

Bound-to-Bound Data Collaboration (B2BDC) provides a deterministic optimization-based framework for Uncertainty Quantification. In this approach, parameters are constrained by combining both polynomial surrogate models and experimental observations with interval uncertainty. A collection of such models and observations is termed a dataset and carves out a feasible region in the parameter space. If the models and data agree within uncertainty, then this feasible set is non-empty and the dataset is consistent. In real-world applications, however, it is often the case that diverse collections of experiments and observations are inconsistent. Revealing the source of this inconsistency, i.e., identifying which models and/or observations are problematic, is essential. In this presentation, we discuss a constraint relaxation-based approach, entitled the vector consistency measure, for investigating datasets with numerous sources of inconsistency. This tool seeks a sparse set of relaxations by minimizing an L1 penalty. The benefits of this vector consistency approach over a previous method of consistency analysis are demonstrated in two realistic gas combustion examples.

Arun Hegde, Wenyu Li, James Oreluk
University of California Berkeley
[email protected], [email protected], [email protected]

Andrew Packard
U of California at Berkeley
[email protected]

Michael Frenklach
University of California Berkeley
[email protected]

CP10

Beyond Black-boxes in Model-based Bayesian Inverse Problems

Two of the most important roadblocks in the solution of model-based Bayesian inverse problems are (a) the curse of dimensionality and (b) the difficulty in quantifying structural model errors. We propose a novel framework which recasts the solution of PDEs encountered in continuum thermodynamics as probabilistic inference over function spaces. This formulation foregoes the expensive solution of a forward or adjoint problem by considering all physical state variables as latent random variables constrained by the governing equations of the forward model. It also provides an alternative approach for solving inverse problems, since it defines a unified, probabilistic view where both the differential operator as well as the observations represent sources of information that are incorporated into the final posterior. In contrast to traditional, black-box approaches, the probabilistic modeling and resolution of individual state variables allows one to quantify inadequacies of the model in a physically interpretable manner and to assess their effect on the solution.

Phaedon S. Koutsourelakis
Technical University of Munich
[email protected]

Maximilian Koschade
Technical University of Munich
[email protected]

CP10

4D-Var Data Assimilation using Exponential Integrators

We consider four-dimensional variational (4D-Var) data assimilation problems for evolutionary differential equations. For the time discretization, exponential propagation iterative methods of Runge-Kutta type (EPIRK) are used, which can achieve a high order of accuracy and are well suited for stiff problems. Based on (noisy) measurements, given at different points in time, we maximize the posterior using gradient-based optimization in combination with a discrete adjoint approach. We use a W-method, which allows model-independent approximations of the Jacobian while preserving the accuracy of the scheme. Additionally, the occurrence of Hessian matrices in the adjoint problem can be avoided. The discrete adjoint code itself is obtained using algorithmic differentiation. Two application examples will be considered for illustration: we will use 4D-Var to estimate the initial conditions of the Lorenz-96 model and discuss a computational magnetics problem, where the unknown parameters characterize the nonlinearity of the magnetic material.

Ulrich Roemer
Institut fuer Theorie Elektromagnetischer Felder, TEMF
Technische Universitaet Darmstadt
[email protected]

Mahesh Narayanamurthi
Virginia Tech, USA
[email protected]

Adrian Sandu
Computational Science Laboratory
Virginia Polytechnic Institute and State University
[email protected]

CP10

Bayesian Inversion for High Dimensional Systems using Data Assimilation

A poor estimation of model parameters is one of the main sources of uncertainty in any physical system. Over the years, different data assimilation methods have been implemented to acquire improved estimations of model parameters. They correct the uncertain parameters by assimilating the observed data based on the likelihood function of the Bayesian framework, in such a way that the mathematical model approximates the true state as closely and consistently as possible. However, most of these methods are either developed on the assumption of Gaussian pdfs or have the limitation of high computational expense. Thus there is a high demand for new, more efficient methods. In this research work, we propose a generic sequential Monte Carlo (SMC) approach as a combination of the ensemble transform particle filter and tempering. The ensemble transform particle filter is a data assimilation technique developed on the backbone of the Bayesian approach within the framework of a linear transport problem, while tempering ensures a smoother transition from the prior distribution to the posterior distribution. We implement a twin experiment based on a Darcy flow model to examine the performance of this new approach against other SMC methods. The numerical experiments demonstrate that in physical systems with non-Gaussian distributions the proposed method is more efficient in accurately capturing the multi-modal posterior distribution.

Sangeetika Ruchi
CWI
[email protected]

Svetlana Dubinkina
CWI, Amsterdam
[email protected]

Marco Iglesias
University of Nottingham
[email protected]

CP10

Inverse Problem for Random-parameter Dynamical Systems

The problem of estimating the parameters of a dynamical system from discrete data can be formulated as the problem of inverting the map from the parameters of the system to points along a corresponding trajectory. Presented will be an overview of recent results in this area, specifically: (i) conditions for identifiability of linear and linear-in-parameter systems from a single trajectory, (ii) analytical and numerical estimates of the maximal permissible uncertainty in the data for which the qualitative features of the inverse problem solution for linear systems (such as existence, uniqueness, or attractor properties) persist, and (iii) techniques for identification of deterministic processes with uncertain parameters from a distribution of data.

David Swigon
Department of Mathematics
University of Pittsburgh
[email protected]

Shelby Stanhope
Temple University
[email protected]

Jon Rubin
University of Pittsburgh
[email protected]

CP11

Self-Exciting Point Processes and Uncertainty Quantification in Recording and Forecasting Long Duration Episodic Phenomena Like Volcanic Events

Several sources of uncertainty affect our knowledge of the past eruptive record in every volcanic system. Unknown sub-sequences of past eruptions with ill-constrained dating are common, and sometimes the under-recording issues may be substantial. Clustering phenomena are often observed in the data, violating any simplifying assumption of memorylessness. Moreover, the necessity of gathering a sufficiently large dataset of past events often implies going far into the past, and hence dealing with the progressive evolution of the volcanic system. Frequency, size, chemistry and location of activity can greatly change in the long term. We present the modeling choices that we made in case studies of Campi Flegrei caldera (Italy) and Long Valley volcanic region (CA, USA) to deal with these problems. We use a Cox process to enable uncertainty quantification in the outputs. We test alternative self-exciting and non-stationary assumptions to compensate for system evolution, without discarding the older, but still useful, information. In particular: (i) forward simulation of the processes allows for their self-development, and can flexibly adapt to the observed history without needing an arbitrary prescription of trends or additional long-lasting information structures; (ii) the processes fit well to the most recently observed data, but they also use the entire past history to calibrate their evolving features.

Andrea Bevilacqua
University at Buffalo
Center for Geohazard Studies
[email protected]

Abani Patra
Department of Mechanical & Aerospace Engineering
University at Buffalo
[email protected]

Marcus Bursik
Dept. of Geology
SUNY at Buffalo
[email protected]

Augusto Neri
Istituto Nazionale di Geofisica e Vulcanologia
Sezione di Pisa
[email protected]

E. Bruce Pitman
Dept. of Mathematics
Univ at Buffalo
[email protected]

CP11

Bayesian Inference on Uncertain Kinetic Parameters for the Pyrolysis of Composite Ablators

The heat shield of high-speed reentry spacecraft is often made of porous ablative thermal protection materials (TPMs) that can accommodate high heating rates and heat loads through phase change and mass loss. When the temperature increases, those materials absorb heat and start to pyrolyze, releasing gases that interact with the surrounding flow. Modeling the species production and the material decomposition rate is important for their use in numerical simulations for the robust determination of heat shield thickness. To this end, pyrolysis experiments have been performed on TPMs in order to determine the kinetic parameters of the chemical laws that govern mass loss and species production rates [Wong et al., Polym. Degrad. Stabil., 112:122-131, 2015]. Samples are heated in a furnace and the mass loss is measured while the species produced are collected. In this talk, we introduce the context of physico-chemical modeling of ablation, the experiments, and the sources of uncertainty. We present the model used to link the kinetic parameters to the experimental observations by means of Arrhenius laws. We then state the formulation of the inverse problem in a Bayesian framework and we discuss the elaboration of an appropriate likelihood function. We finally present results of the inversion procedure, with particular attention to the influence of the number of species taken into account in the model.
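
As a loose illustration of the kind of forward model an Arrhenius law yields (a single-reaction mass-loss ODE under a linear heating ramp; the parameter values, heating rate and one-reaction assumption are illustrative, not those of the cited experiments):

import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                     # J/(mol K)
A, E = 1.0e8, 1.2e5           # pre-exponential factor (1/s) and activation energy (J/mol)
beta, T0 = 10.0 / 60.0, 300.0  # heating rate (K/s) and initial temperature (K)

def mass_loss(t, m):
    """dm/dt = -A * exp(-E / (R T(t))) * m, with T(t) = T0 + beta * t."""
    T = T0 + beta * t
    return -A * np.exp(-E / (R * T)) * m

sol = solve_ivp(mass_loss, t_span=(0.0, 6000.0), y0=[1.0], method="LSODA", max_step=10.0)
print(sol.y[0, -1])           # remaining mass fraction at the end of the heating ramp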

Joffrey Coheur
Universite de Liege
[email protected]

Thierry Magin
von Karman Institute, Belgium
[email protected]

Philippe Chatelain
Universite Catholique de Louvain
[email protected]

Maarten Arnst
Universite de Liege
[email protected]

CP11

Bayesian Updating for Uncertain Condition State using Monitoring and Sequential Inspections

Structural Health Monitoring using permanent sensors has been a fast-growing management tool during the last decade. This is mostly due to technological advances, which have resulted in decreasing cost and improving efficiency. However, to this date, the installation of sensors on every measurable feature of a structure remains prohibitively costly. As such, one must rely on the measurements of a few judiciously placed sensors to predict the condition states of the structural elements. The low number of sensors induces various types of uncertainties related to measurement techniques and structural behavior. In this paper, the uncertain condition states of the structural members are considered in the integrity assessment. An approach is proposed for a Bayesian updating of the condition states based on the output of the monitoring sensors. The posterior probability distribution describing the magnitude of the defect for each element is further updated by inspecting a few elements in an optimal sequence, such that the posterior probability distribution for the remaining non-inspected elements reaches an optimal threshold of likelihood about their condition states. A numerical application of the proposed methodology is presented. A sensitivity analysis is performed for the optimal cost with respect to various parameters such as sensor configuration, inspection result uncertainty, etc., in order to highlight the potential benefits of the proposed approach.

Christelle Geara
Saint Joseph University - Mar Roukoz
[email protected]

Rafic Faddoul
ESIB, Saint-Joseph University, Mar Roukos, PoB. 11-154
[email protected]

Alaa Chateauneuf
Universite Clermont Auvergne, CNRS, Institut Pascal
[email protected]

Wassim Raphael
ESIB, Saint-Joseph University, Mar Roukos, PoB. 11-154
[email protected]

CP11

A Bayesian Coarse-graining Approach to the Solution of Stochastic Partial Differential Equations

Well-established methods for the solution of stochastic partial differential equations (SPDEs), such as stochastic collocation methods or polynomial chaos expansions, typically struggle with high dimensionality of the stochastic space. Moreover, finite element approximations to the forward model require discretizations small enough to resolve the variability of the underlying random fields, leading to a large and expensive set of algebraic equations. We introduce a data-driven, physically-inspired probabilistic surrogate modeling approach which combines Bayesian coarse-graining of the involved random fields with multi-fidelity techniques that allow accurate reconstruction of the fine-scale response given the repeated, but much cheaper, evaluation of the PDE on a coarser discretization. By automatically detecting and retaining only features of the stochastic process which are most predictive to trace back the fine-scale solution, we are able to work in the regime of high-dimensional uncertainties (thousands) but only few forward model runs (tens). The model achieves sublinear complexity and is fully probabilistic, i.e. we obtain predictive distributions that accurately quantify both model and limited-data uncertainties. The proposed approach is validated with different problems in random media.

Constantin Grigo
Technical University of Munich
[email protected]

Phaedon S. Koutsourelakis
Technical University of Munich
[email protected]

CP11

Bayesian Model Averaging Kriging

Metamodels are widely used to approximate the response of computationally-intensive engineering simulations for a variety of applications. They are formulated based on observations of the original numerical model. Among the variety of metamodeling techniques, Kriging has gained significant popularity. Kriging models the underlying input/output function by combining two parts: a regression model that expresses the global mean behavior, and a zero-mean stationary Gaussian Process (GP) that captures localized deviations from the global one. Conventional Kriging approaches construct the regression model with a single set of basis functions. Selection between different candidate sets of basis functions is typically performed deterministically, choosing a single model based on some optimality criterion. In this contribution, a Bayesian model class averaging formulation is adopted to consider different sets of basis functions in establishing the Kriging predictions. Each of these models is weighted by its respective posterior model probability when aggregating the final predictions. To enable computationally efficient application, a data-driven and tractable prior distribution is proposed, and multiple strategies in the inference and prediction stage are presented. Numerical examples show that the proposed Kriging implementation is capable of constructing a more accurate metamodel than other alternatives with comparable computational cost.

Alexandros A. Taflanidis, Jize Zhang
University of Notre Dame
[email protected], [email protected]

CP11

Climate Model Discrepancy: Thinking Outside of the UQ Toolbox

The Uncertainty Quantification community has widely accepted the need to quantify and include model discrepancy when estimating the parameters of complex physical models. Due to well-known identifiability problems with traditional Bayesian approaches, structured prior knowledge is required for model discrepancy in order to reach appropriate parameter estimates. Climate models represent a special challenge for existing UQ methods. Climate model calibration (what the community calls tuning), if done the traditional Kennedy-O'Hagan way, would require structured prior specification for discrepancy over many of the massive spatio-temporal fields that make up the terabytes of output produced by climate models and satellite/station observations. We will argue that the way these models are constructed makes the specification of well-informed structured discrepancy judgements over these fields infeasible. We present a delayed-Bayesian approach to parameter inference that involves first verifying the model's capability to simulate emergent climate processes, then using the results of this analysis for structured prior discrepancy modeling and a formal calibration to complete tuning. These ideas will be presented in the form of a case study working with the Canadian climate model CanAGCM4.

Daniel Williamson
University of Exeter
[email protected]

CP12

Nonparametric Functional Calibration of Computer Models

Standard methods in computer model calibration treat the calibration parameters as constant throughout the domain of control inputs. In many applications, systematic variation may cause the best values for the calibration parameters to change between different settings. When not accounted for in the code, this variation can make the computer model inadequate. We propose a framework for modeling the calibration parameters as functions of the control inputs to account for a computer model's incomplete system representation in this regard, while simultaneously allowing for possible constraints imposed by prior expert opinion. We demonstrate how inappropriate modeling assumptions can mislead a researcher into thinking a calibrated model is in need of an empirical discrepancy term when it is only needed to allow for functional dependence of the calibration parameters on the inputs. We apply our approach to plastic deformation of a visco-plastic self-consistent material in which the critical resolved shear stress is known to vary with temperature.

Andrew Brown
Clemson University
[email protected]

Sez Atamturktur
Clemson University
[email protected]

CP12

The Interacting Particle System Method Adapted to Piecewise Deterministic Processes

The reliability assessment of complex power generation systems generally relies on simulation methods. When the system failure is a rare event, Monte Carlo methods are computationally intensive and we need to use a variance reduction method to accelerate the reliability assessment of such systems. Among variance reduction methods, one may think of particle filter methods such as the interacting particle system (IPS) method. This method is well suited to industrial applications, because it does not require much knowledge about the system. Such power generation systems often follow deterministic dynamics which can be altered by isolated component failures, component repairs, or automatic control mechanisms. In order to model these dynamic hybrid systems, we use piecewise deterministic Markov processes. When simulated on a short period of time, such processes tend to often generate the same deterministic trajectory, thus limiting the efficiency of the IPS method, for which it is preferable to generate many different trajectories on short intervals of time. To reduce this phenomenon, we propose an adaptation of the IPS method based on the memorization method: conditioning the generated trajectories to avoid the most probable ones while computing exactly the influence of the most probable trajectories. Our adaptation of the SMC method yields a strongly consistent estimator of the reliability, which has the advantage of having a smaller asymptotic variance.

Thomas A. Galtier
EDF R&D
[email protected]

CP12

Approximate Optimal Designs for Multivariate Polynomial Regression

We introduce a new approach aiming at computing approximate optimal designs for multivariate polynomial regressions on compact (semi-algebraic) design spaces. We use the moment-sum-of-squares hierarchy of semidefinite programming problems to solve numerically the approximate optimal design problem. The geometry of the design is recovered via semidefinite programming duality theory. This work shows that the hierarchy converges to the approximate optimal design as the order of the hierarchy increases. Furthermore, we provide a dual certificate ensuring finite convergence of the hierarchy and showing that the approximate optimal design can be computed with our method. As a byproduct, we revisit the equivalence theorem of experimental design theory: it is linked to the Christoffel polynomial and it characterizes finite convergence of the moment-sum-of-squares hierarchies.

Fabrice Gamboa
Institut de Mathematiques de Toulouse, AOC Project
University of Toulouse
[email protected]

Yohann De Castro
Laboratoire de Mathematiques d'Orsay
[email protected]

Didier Henrion
LAAS-CNRS, University of Toulouse, France
[email protected]

Roxana Hess
LAAS-CNRS, Toulouse
[email protected]

Jean-Bernard Lasserre
LAAS-CNRS
[email protected]

CP12

Probabilistic Models and Sampling on Manifolds

We present a new approach for sampling probability models supported on a manifold. The manifold is characterized through its manifold diffusion subspace, the probability model is constructed using kernel density estimation, and sampling is carried out using an Ito equation projected on the manifold. The procedure serves a number of objectives that are ubiquitous in uncertainty quantification. First, by acknowledging an intrinsic structure discovered from numerically generated simulations, their scatter associated with parametric variations is considerably smaller than viewed in an ambient space. Fewer simulations may thus be required to achieve similar statistical precision. This capability is even more significant for OUU problems where the stochastic simulator is integrated into an optimization loop. We demonstrate this capability on high-dimensional UQ problems with very expensive forward simulators. Second, the projected Ito equation provides an accurate probabilistic surrogate for large high-dimensional datasets.

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering
[email protected]

Christian Soize
University of Paris-Est
MSME UMR 8208 CNRS
[email protected]

CP12

Experiment Design in Non-linear Regression with Additional Random Parameters

It is supposed that a variable is measured sequentially in time with random errors. This variable depends on several explanatory variables as well as on the position of a sensor, and the relationship is non-linear. Moreover, the regression function contains some additional parameters that are random but whose distribution is known. The goal of statistical inference is to find an optimal position of a sensor for estimating the parameters of interest. As we may look at the same problem from different perspectives, the optimal designs may differ substantially.

Daniela Jaruskova
Czech Technical University in Prague
[email protected]

CP12

Quantifying Uncertainties with Distribution Element Trees

The estimation of probability densities based on data is important in many applications, for example in the development of Bayes classifiers in supervised learning. In this work, we present a new type of density estimator [D.W. Meyer, Statistics and Computing, 2017, doi:10.1007/s11222-017-9751-9] that is highly adaptive and based on distribution element trees (DETs). The DET estimator is particularly advantageous when dealing with large samples and/or samples of high dimensionality. In a range of applications we demonstrate the high accuracy and superior performance in comparison to existing state-of-the-art density estimators such as adaptive kernel density estimation or Polya and density tree methods. Finally, we point out the possibility of using DETs as an elegant means for conditional random number generation or smooth bootstrapping.

Daniel W. Meyer
Institute of Fluid Dynamics
[email protected]

CP13

Statistical Learning in Tree-based Tensor Format

Tensor methods are widely used tools for the approximation of high-dimensional functions. Such problems are encountered in uncertainty quantification and statistical learning. The high dimensionality imposes the use of specific techniques, such as rank-structured approximations. In this lecture, we present a statistical learning algorithm for approximation in tree-based tensor formats, which are tensor networks whose graphs are dimension partition trees. This tensor format includes the Tucker format, the Tensor-Train format, as well as the more general hierarchical tensor formats. It can be interpreted as a deep neural network with a particular architecture. The proposed algorithm uses random evaluations of the function to provide a tree-based tensor approximation, with adaptation of the underlying approximation spaces, of the tree-based rank and of the dimension partition tree, based on statistical estimates of the approximation error. The method is illustrated on different examples.

Erwan Grelier
Ecole Centrale de Nantes, France
[email protected]

Anthony Nouy
Ecole Centrale Nantes
[email protected]

Mathilde Chevreuil
Universite de Nantes, France
[email protected]

CP13

Low-rank Dynamic Mode Decomposition: Optimal Solution in Polynomial Time

This work studies linear low-rank approximations of high-dimensional dynamical systems using dynamic mode decomposition (DMD). Searching for a low-rank approximation can be formalised as attempting to solve a non-convex optimisation problem. However, state-of-the-art algorithms only provide sub-optimal solutions to this problem. This work shows that there exists a closed-form optimal solution, which can be computed in polynomial time, and gives a characterisation of the l2-norm of the approximation error. The theoretical results serve to design low-complexity algorithms building low-rank approximations by SVD of the optimal solution of low-rank DMD. The algorithms' performance is analysed by numerical simulations using synthetic and physical data benchmarks.
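
For context, the standard (exact) DMD baseline that such low-rank variants refine looks roughly as follows; the snapshot data and truncation rank are placeholders, and this is the classical SVD-based algorithm rather than the closed-form optimal solution discussed in the talk.

import numpy as np

def dmd(snapshots, rank):
    """Exact DMD: fit snapshots[:, k+1] ~ A snapshots[:, k] with a rank-r model.
    Returns DMD eigenvalues and modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # operator projected on the POD basis
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W                 # exact DMD modes
    return eigvals, modes

# toy data: two oscillating spatial patterns observed in 50 dimensions
rng = np.random.default_rng(0)
t = np.arange(100) * 0.1
spatial = rng.normal(size=(50, 2)) + 1j * rng.normal(size=(50, 2))
data = np.real(spatial @ np.exp(1j * np.outer([1.3, 2.7], t)))
eigvals, modes = dmd(data, rank=4)
print(np.round(np.abs(eigvals), 3))                 # moduli close to 1: purely oscillatory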

Patrick Heas
Inria Centre Rennes - Bretagne Atlantique
[email protected]

Cedric Herzet
Inria Centre Rennes - Bretagne Atlantique
[email protected]

CP13

Contrast Enhancement in Electrical Impedance Tomography using the Approximation Error Approach

We consider electrical impedance tomography (EIT) imaging of the brain. The brain is surrounded by the poorly conducting skull, which causes a partial shielding effect leading to weak sensitivity for the imaging of the brain tissue. We propose a statistical learning based approach to enhance the contrast in brain imaging. With this approach, the conductivity of the target is modelled as a combination of two unknowns, a spatially distributed background and the shielding (skull) layer. The model discrepancy between the forward model with and without the unknown shielding layer is treated as a modelling error noise in the observation model, and the uncertainty related to the modelling error noise is then handled by using the Bayesian approximation error approach. With this approach, the image reconstruction consists of recovering the spatially distributed background conductivity and a low-rank approximation of the modelling error, which is then used for estimating an approximation of the unknown shielding layer. The approach is evaluated with simulations and phantom data.

Ville P. Kolehmainen
Department of Applied Physics
University of Eastern Finland
[email protected]

Antti Nissinen
Rocsole Ltd
[email protected]

Jari Kaipio
Department of Mathematics
University of Auckland
[email protected]

Marko Vauhkonen
University of Eastern Finland
[email protected]

CP13

A Weighted Reduced Basis Method for Parabolic PDEs with Random Data

We consider the reduced basis method applied to a parametrized, linear parabolic PDE with random input. Statistics of outputs of interest are estimated with the Monte Carlo (MC) method. In order to speed up the computation, the reduced basis method approximates the solution manifold on a low-dimensional subspace, which is constructed by the POD-greedy procedure using rigorous error estimators. The focus of this study is to improve the reduced space construction regarding the expected solution and expected output error using the idea of a weighted POD-greedy method. As a reference, a weighted POD is used. It builds up an optimal reduced space that minimizes the energy norm of the error in a mean-square sense. A comparison between the non-weighted and weighted approach is drawn.

Christopher Spannring
Graduate School Computational Engineering, TU Darmstadt
[email protected]

Sebastian Ullmann
TU Darmstadt
[email protected]

Jens Lang
Graduate School of Computational Engineering, TU Darmstadt
lang@mathematik.tu-darmstadt.de

CP13

Active-subspace Analysis of Up-crossing Probability for Shallow-water Model

A major challenge in marine engineering simulations is to quantify the interdependence of the uncertainties in the input and output model parameters. We consider this interdependence for stochastic ocean waves in a shallow-water wave propagation model, with uncertainties in the kinematic boundary condition (BC). The BC involves 302 random variables that, together with a JONSWAP wave spectrum, define the nonlinear wave surface elevation. We discretize the wave propagation model using generalized Polynomial Chaos (gPC) and a sparse-grid non-intrusive Stochastic Collocation Method. Initially, the number of sparse grid points is infeasibly high, and it is therefore reduced using an active-subspace analysis (ASA) [Paul G. Constantine, Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies, Society for Industrial and Applied Mathematics, 2015], where important general directions in the parameter space are identified. We then compute the first four statistical moments of the wave surface elevation, and use them in a moment-based Gauss transformation to analyze the up-crossing probability for the shallow-water model. We demonstrate the advantages and the effectiveness of the ASA-gPC approach against classical Monte Carlo implementations.
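
A bare-bones version of the active-subspace construction referenced above (eigendecomposition of the averaged outer product of gradients); the test function, gradient samples and retained dimension are all illustrative, not the 302-variable wave problem itself.

import numpy as np

def active_subspace(gradients, n_keep):
    """Estimate an active subspace from gradient samples of shape (n_samples, n_dim)."""
    c = gradients.T @ gradients / gradients.shape[0]   # C = E[grad f grad f^T]
    eigval, eigvec = np.linalg.eigh(c)
    order = np.argsort(eigval)[::-1]                   # sort eigenvalues descending
    return eigval[order], eigvec[:, order[:n_keep]]

# toy ridge function f(x) = sin(w^T x): all gradients align with w
rng = np.random.default_rng(0)
w = rng.normal(size=10)
x = rng.normal(size=(500, 10))
grads = np.cos(x @ w)[:, None] * w[None, :]
eigval, basis = active_subspace(grads, n_keep=1)
print(eigval[:3])                                      # a single dominant eigenvalue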

Kenan Sehic, Mirza Karamehmedovic
Department of Applied Mathematics and Computer Science
Technical University of Denmark
[email protected], [email protected]

CP14

Universal Prediction Distribution

The use of surrogate models instead of computationally expensive simulation codes is very convenient in engineering. Roughly speaking, there are two kinds of surrogate models: deterministic and probabilistic ones. The latter are generally based on Gaussian assumptions. The main advantage of the probabilistic approach is that it provides a measure of uncertainty associated with the surrogate model in the whole space. This uncertainty is an efficient tool to construct strategies for various problems such as prediction enhancement, optimization or inversion. Away from the Gaussian case, many surrogate models are also available and useful. Nevertheless, they are not all naturally embeddable in some stochastic frame. Hence, they do not naturally provide any prediction error distribution. We propose a universal method to define a measure of uncertainty suitable for any surrogate model: deterministic, probabilistic and ensembles. It relies on cross-validation sub-model predictions. This empirical distribution may be computed in much more general frames than the Gaussian one. It allows the definition of many sampling criteria. We give and study adaptive sampling techniques for global refinement, inversion and an extension of the so-called Efficient Global Optimization algorithm applicable to all types of surrogate models. The performances of these new algorithms are studied both on toy models and on an engineering design problem.
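A rough illustration of the cross-validation principle behind such a prediction distribution, assuming a generic fit/predict surrogate interface (scikit-learn style); this sketches the idea only and is not the authors' exact weighting scheme:

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def cv_prediction_distribution(model, X, y, x_new, n_splits=10):
    """Empirical predictive distribution at x_new built from sub-models,
    each fitted with one fold of the design left out."""
    preds = []
    for train_idx, _ in KFold(n_splits=n_splits).split(X):
        sub = clone(model).fit(X[train_idx], y[train_idx])   # sub-model on reduced design
        preds.append(sub.predict(np.atleast_2d(x_new)))
    return np.concatenate(preds)   # samples representing prediction uncertainty at x_new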

Malek Ben Salem
Mines de Saint-Etienne
[email protected]

Olivier Roustant
Mines Saint-Etienne
[email protected]

Fabrice Gamboa
Institut de Mathematiques de Toulouse, AOC Project
University of Toulouse
[email protected]


Lionel [email protected]

CP14

Emulating Dynamic Non-linear Simulators using Gaussian Processes

In this paper, we examine the emulation of non-linear deterministic computer models where the output is a time series, possibly multivariate. Such computer models simulate the evolution of some real-world phenomena over time, for example models of the climate or the functioning of the human brain. The models we are interested in are highly non-linear and exhibit tipping points, bifurcations and chaotic behaviour. Each simulation run is too time-consuming to perform naive uncertainty quantification. We therefore build emulators using Gaussian processes to model the output of the code. We use the Gaussian process to predict one step ahead in an iterative way over the whole time series. We consider a number of ways to propagate uncertainty through the time series, including both the uncertainty of inputs to the emulators at time t and the correlation between them. The methodology is illustrated with a number of examples. These include the highly non-linear dynamical systems described by the Lorenz and Van der Pol equations. In both cases we show that we not only have very good predictive performance but also have measures of uncertainty that reflect what is known about predictability in each system.

Hossein Mohammadi
University of Exeter
[email protected]

Peter Challenor
College of Engineering, Mathematics and Physical Sciences
University of Exeter
[email protected]

Marc Goodfellow
University of Exeter
[email protected]

CP14

Finite-dimensional Gaussian Approximation with Linear Inequality Constraints

Gaussian processes (GPs) are one of the most famous non-parametric Bayesian frameworks for modelling stochastic processes. Introducing inequality constraints in GP models can lead to more realistic uncertainty quantification when learning a great variety of real-world problems. However, existing GP frameworks either cannot satisfy constraints everywhere in the output space, or are restricted to specific types of inequality conditions. Covariance parameter estimation is also a challenge when inequalities are considered. To the best of our knowledge, the finite-dimensional approach of Maatouk and Bay (2017) is the only Gaussian framework that can satisfy inequality conditions everywhere (either boundedness, monotonicity or convexity). However, their proposed framework still presents some limitations to be improved. Our contributions are threefold. First, we extend their approach to deal with general sets of linear inequalities. We write the posterior distribution given the interpolation and inequality constraints as a truncated multivariate Gaussian distribution. Second, we explore several Markov chain Monte Carlo techniques to approximate the posterior. Third, we investigate theoretical properties of the constrained likelihood for covariance parameter estimation. According to experiments on both artificial and real data, our full framework together with a Hamiltonian Monte Carlo-based sampler provides efficient results on both data fitting and uncertainty quantification.

Andres F. Lopez-Lopera
Mines Saint-Etienne
[email protected]

Francois Bachoc
Institut de Mathematiques de Toulouse
[email protected]

Nicolas Durrande
Ecole des Mines de Saint-Etienne
[email protected]

Olivier Roustant
Mines Saint-Etienne
[email protected]

CP14

Surrogate Modeling of Two Nested Codes with Functional Outputs

Gaussian process surrogate models are widely used to emulate computationally costly computer codes. In this work we are interested in the case of two nested computer codes whose outputs are functional. By nested computer codes, we mean that the functional output of the first code is an input of the second code. By working on the structure of the covariance functions, we propose a computationally efficient surrogate model of the functional output of the second code as a function of all the inputs of the nested computer code.

Sophie Marque-Pucheu
CEA
Universite Paris Diderot (Paris 7)
[email protected]

Guillaume [email protected]

Josselin Garnier
Ecole Polytechnique
[email protected]

CP14

Experimental Design for Non-parametric Correction of Misspecified Dynamical Models

We consider a class of misspecified dynamical models where the governing term is only approximately known. Under the assumption that observations of the system's evolution are accessible for various initial conditions, our goal is to infer a non-parametric correction to the misspecified driving term so as to faithfully represent the system dynamics and devise system evolution predictions for unobserved initial conditions. We model the unknown correction term as a Gaussian process and analyze the problem of efficient experimental design to find an optimal correction term under constraints such as a limited experimental budget. We suggest a novel formulation for experimental design for this Gaussian process and show that approximately optimal (up to a constant factor) designs may be efficiently derived by utilizing results from the literature on submodular optimization. Our numerical experiments exemplify the effectiveness of these techniques.

Gal Shulkind
Massachusetts Institute of Technology
[email protected]

CP15

Multi-Index Quasi-Monte Carlo and H-Matrices

We consider a new method to generate normal or log-normal random fields from [Feischl et al., 2017+], which builds on fast matrix-vector multiplication via H-matrices. The method proves to be robust with respect to the covariance length of the random field, and is particularly efficient for very smooth and very rough random fields. Moreover, the method applies to a fairly general class of covariance functions and is not limited to the stationary case. We use this new method in combination with quasi-Monte Carlo integration to solve a Poisson equation with random coefficient. Moreover, to exploit the inherent sparsity of the approximation, and to obtain an efficient algorithm, we use the Multi-Index quasi-Monte Carlo approach in three coordinate directions: the finite-element approximation error, the approximation error of the random field, and the integration error of the quasi-Monte Carlo rule. This allows us to significantly reduce the computational time.

Michael Feischl
Vienna University of Technology
[email protected]

CP15

Stochastic Least-Squares Petrov-Galerkin Method for Parameterized Linear Systems

We consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
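In symbols, and only as a sketch of the stated optimality property with assumed generic notation (not taken verbatim from the talk): for a parameterized system A(ξ)u(ξ) = b(ξ) and an approximation space V, the weighted LSPG solution solves

\[
u_{\mathrm{LSPG}} \;=\; \arg\min_{v \in V} \;
\mathbb{E}_{\xi}\!\left[\, w(\xi)\,\bigl\| A(\xi)\, v(\xi) - b(\xi) \bigr\|_2^2 \,\right],
\]

so that changing the weight w(ξ), or replacing the residual norm by a goal-oriented seminorm, changes which error measure is minimized.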

Kookjin Lee
University of Maryland, College Park
[email protected]

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Howard C. Elman
University of Maryland, College Park
[email protected]

CP15

A Provably Stable Coupling of Numerical Integration and Stochastic Galerkin Projection

In many real world applications, the uncertainties are not uniformly distributed throughout the spatial domain. In these situations a combination of an intrusive and non-intrusive method can be beneficial and more efficient. In this work, we consider a coupling between an intrusive and non-intrusive method. The intrusive approach uses a combination of polynomial chaos and stochastic Galerkin projection. The non-intrusive method uses numerical integration by combining quadrature rules and probability density functions of the prescribed uncertainties. A provably stable coupling procedure between the two methods will be presented. The coupling procedure is constructed for a hyperbolic system of equations. The theoretical results are verified by numerical experiments.

Jan Nordstrom
Department of Mathematics
Linkoping University
[email protected]

Markus K. Wahlsten
Department of Mathematics
Linkoping University, SE-581 83 Linkoping, Sweden
[email protected]

Oskar Alund
Department of Mathematics
Linkoping University
[email protected]

CP15

Utilizing Multisymmetry Properties in Uncertainty Quantification

Many interesting applications in physics that can be modeled by stochastic partial differential equations exhibit symmetries that can be exploited to reduce computational effort. The numerical solution of such problems requires the numerical integration of high-dimensional integrals, for example in stochastic collocation methods or Bayesian estimation. Since the curse of dimensionality is encountered frequently in uncertainty quantification and problems turn out to be computationally highly demanding, we are interested in reducing the complexity of such problems. This can be achieved by making use of permutation-invariance properties in solution approaches, the most important example of such a property being multisymmetry. Here we discuss extensions and applications of the ideas and techniques presented in our previous work on cubature formulas for multisymmetric functions, setting our focus on problems such as the solution of stochastic partial differential equations and introducing new ideas such as basis reduction techniques to adapt existing solution approaches to the idea of capitalizing on multisymmetry properties. We show with numerical examples that the presented approach is highly efficient compared to conventional methods that do not utilize multisymmetry properties.

Gudmund Pammer
Vienna University of Technology
[email protected]

Stefan Rigger
BOKU
[email protected]

Clemens Heitzinger
Vienna University of Technology
[email protected]

CP15

An Adaptive (Quasi-) Monte Carlo Method for Forward Uncertainty Quantification in Differential Equations with Random Coefficients

We present an adaptive (quasi-) Monte Carlo method for the solution of differential equations with random coefficients. The algorithm adaptively constructs an estimator for the expected value of some functional of the solution until a user-specified tolerance is satisfied. The performance of the algorithm is studied. The efficiency of the Monte Carlo and quasi-Monte Carlo methods is compared.

Kan Zhang
IIT
Illinois Institute of Technology
[email protected]

Fred J. Hickernell
Illinois Institute of Technology
Department of Applied Mathematics
[email protected]

CP16

Aeroacoustics of Cavity Flow Analyzed with Multilevel Monte Carlo and Non-intrusive Polynomial Chaos Methods

In this presentation, we apply two non-intrusive uncertainty propagation methods to aeroacoustic simulations with uncertain input, exemplified by noise generation in cavity flow. The noise generated via Rossiter modes and turbulent rumble is particularly relevant, for example in automobile exterior aerodynamics. The underlying deterministic large eddy simulations are carried out using a discontinuous Galerkin (DG) method solving the full compressible Navier-Stokes equations, where the aeroacoustic sources are directly resolved. We first apply the multilevel Monte Carlo (MLMC) method to this problem setting. The high spatial and temporal order of the employed DG discretization has several important implications for the MLMC framework: in turbulent flow, the theoretical convergence order of the DG method is not reached. An empirical update at runtime of the estimated stochastic level errors and optimal number of samples on each level becomes necessary. Level convergence is not always obtained, and depends on the chosen quantity of interest. In a high-order environment, different grid resolutions can be achieved through both h- and p-refinement. Here, we use a combined h-p-refinement. We validate the method by comparing convergence rates and efficiency to results obtained with a non-intrusive polynomial chaos method with a sparse grid approach. We also identify the most influential input parameters for cavity flow noise generation.
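For context, the standard multilevel Monte Carlo estimator underlying the framework discussed above, written in generic notation (the level hierarchy via combined h-p refinement is specific to the talk and not shown):

\[
\mathbb{E}[Q_L] \;\approx\; \sum_{\ell=0}^{L} \frac{1}{N_\ell}
\sum_{i=1}^{N_\ell}\Bigl( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \Bigr),
\qquad Q_{-1} \equiv 0,
\]

where Q_ℓ is the quantity of interest computed on discretization level ℓ and the per-level sample sizes N_ℓ are chosen from estimated level variances and costs.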

Jakob Duerrwaechter, Thomas Kuhn
Institute of Aerodynamics and Gas Dynamics
University of Stuttgart
[email protected], [email protected]

Fabian Meyer
Institute of Applied Analysis and Numerical Simulation
University of Stuttgart
[email protected]

Andrea Beck
Institute of Aerodynamics and Gas Dynamics
University of Stuttgart
[email protected]

Christian Rohde
Institute of Applied Analysis and Numerical Simulation
University of Stuttgart
[email protected]

Claus-Dieter Munz
Institut fur Aerodynamik und Gasdynamik (IAG)
[email protected]

CP16

Uncertainty Quantification of RANS Turbulence Models Using Bayesian Deep Learning with Stein Variational Gradient Descent

The Reynolds-averaged Navier-Stokes (RANS) approach is one of the most commonly used CFD methods due to its low computational cost and ability to handle extremely high Reynolds numbers. A parametrized turbulence model is introduced, resulting in significant uncertainties in the simulation results. In recent years, machine learning approaches have become essential for considering model error and uncertainty quantification in RANS simulations and to improve turbulence modeling. In this work, we present a Bayesian deep learning approach using Stein variational gradient descent to quantify model uncertainties in RANS. We train our model using a set of simple flows with several different geometries with high-fidelity simulations. We demonstrate the predictive capability of the model as well as its ability to accurately compute flow statistics using limited training data.
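For reference, the standard Stein variational gradient descent update the abstract refers to, in generic notation (the network parameterization and likelihood of the talk are not shown): each particle θ_i carrying a posterior sample is moved as

\[
\theta_i \leftarrow \theta_i + \epsilon\,\hat{\phi}(\theta_i),
\qquad
\hat{\phi}(\theta) = \frac{1}{n}\sum_{j=1}^{n}
\Bigl[ k(\theta_j,\theta)\,\nabla_{\theta_j}\log p(\theta_j \mid \mathcal{D})
+ \nabla_{\theta_j} k(\theta_j,\theta) \Bigr],
\]

with k a positive-definite kernel (e.g. an RBF kernel) and ε a step size.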

Nicholas Geneva, Nicholas Zabaras
University of Notre Dame
[email protected], [email protected]

CP16

Application of Machine Learning Algorithms for the Classification of Regions of RANS Discrepancy

Reynolds Averaged Navier Stokes (RANS) models represent the workhorse for industrial research and design, with a vast majority of turbulence simulations utilizing RANS models. Owing to the assumptions and simplifications utilized during their formulation, RANS models introduce a significant measure of model form uncertainty in the simulation results. To establish RANS models as reliable tools in engineering design, there is a need for accurate quantification of these discrepancies. Recent work using eigenspace perturbations has shown significant promise in providing such uncertainty bounds. However, at present, such methods have to cater to a worst-case scenario and apply the maximally permissible perturbation at all points in the flow. Bereft of a spatial marker function for RANS uncertainty, the eigenspace perturbation methods tend to overestimate the predictive discrepancy. We apply, appraise and analyze the efficacy of select data-driven algorithms to formulate universal marker functions. In this talk, we compare and contrast the results from Random Forests, Support Vector Machines and AdaBoost decision trees. The training sets included a database of canonical flow configurations. Using these, classifiers were developed for each different nature of projection, assigning each point in the computational domain into a specific class. We exhibit that these classifiers are robust and are able to generalize to flows significantly different from the training set.

Aashwin A. Mishra
Center for Turbulence Research, Stanford University & NASA Ames
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

CP16

Physics-Derived Approaches to Multi-physics Model Form Uncertainty Quantification: Application to Turbulent Combustion Modeling

The major challenge in multi-physics modeling is the sheer number of models, across multiple physical domains, that are invoked, each of which involves its own set of assumptions and subsequent model errors. Quantification of model errors, that is, model form uncertainty quantification, is critical in assessing which model dominates the total uncertainty and should be targeted for improvements. The fundamental challenge in quantifying model error is in translating model assumptions, inherently coupled to the physics, into mathematical statements of uncertainty. In this work, two methods are presented for deriving model error estimates directly from the physics, bypassing the need for training data. The first method, peer models, uses models with differing assumptions to derive an uncertainty estimate. This peer model approach relies on model variability as an estimate for model error, which is determined from what can be reasonably assumed about the underlying physics to be modeled. The second method, hierarchical models, uses a higher-fidelity model to estimate the uncertainty in a lower-fidelity model. This hierarchical approach can be leveraged as the basis for an uncertainty-adaptive modeling approach that minimizes computational effort given an acceptable model error threshold. These approaches to model form uncertainty quantification are applied within the context of Large Eddy Simulation (LES) modeling of turbulent combustion.

Michael E. Mueller
Princeton University
[email protected]

CP16

Uncertainty Quantification of RANS Initialization in Modeling Shock-driven Turbulent Mixing

As is well known, the initial conditions play a role in RANS modeling. In this talk we quantify initial-data uncertainties in simulations of the inverse chersov shock tube. A high-order finite volume multi-component flow model is coupled with polynomial chaos to propagate the initial-data uncertainties to the system quantities of interest. The results show that the initial conditions strongly influence the mixing process.

Yan-Jin Wang
Institute of Applied Physics and Computational Mathematics
wang [email protected]

CP17

On the Accuracy of Free Energy Defect Computations in Atomistic Systems

The macroscopic behavior of a crystalline material is strongly dependent on the type and distribution of defects present. This talk describes the analysis of the free energy of defect formation for the model problem of a constrained 1D system, and its convergence properties in the thermodynamic limit.

Matthew Dobson
University of Massachusetts Amherst
[email protected]

Hong Duong, Christoph Ortner
University of Warwick
[email protected], [email protected]

CP17

Addressing Global Sensitivity in Chemical Kinetic Models using Adaptive Sparse Grids

Chemical kinetic models often carry very large parameter uncertainties and show a strongly non-linear response with rapid changes over relatively small parameter ranges. In addition, the dimensionality of the parameter space can grow to a large number, without a priori known structure between the dimensions. Using a real-life model for water splitting on a cobalt oxide catalyst as a prototypical example, we demonstrate an adaptive sparse grid strategy (ASG) for the global sensitivity analysis of such models based on the analysis of variances. The ASG utilizes local adaptivity, to address the rapid changes, as well as dimension adaptivity, to exploit the typical intrinsic low-dimensional parameter dependence in reaction networks. We will discuss how ASG can be extended to a multilevel strategy when the model response is not deterministic but must be obtained from sampling a stochastic model.

Sandra Dopking, Sebastian Matera
Institute f. Mathematics
Freie Universitaet Berlin
[email protected], [email protected]

CP17

Providing Structure to Experimental Data: A Large-Scale Heterogeneous Database for Collaborative Model Validation

Experimental data is often stored in various file formats and, unfortunately, requires highly specialized codes to parse each individual set of data. The lack of a structured format makes it challenging to find relevant experimental data across a diverse collection for model validation and uncertainty quantification (V/UQ). The PrIMe Data Warehouse (http://primekinetics.org) has open and flexible data models which provide much-needed structure to experimental data by using standard XML files. Every data value is distinctly labeled, making it simple to search for and make use of data from several sources through a single search query. Recent additions of thousands of coal devolatilization and oxidation experiments highlighted the flexibility of the PrIMe data models, enabling researchers to quickly incorporate validation data from different sources in their V/UQ analysis. We will present an example utilizing the experimental data from PrIMe to validate a reduced char oxidation physics model. A Bound-to-Bound Data Collaboration consistency analysis will be used for validation of the char oxidation model with data in both air and oxy-coal conditions. Uncertainty in the kinetic rate parameters between char carbon and O2, H2O, and CO2 will also be quantified.

James Oreluk, Arun Hegde, Wenyu Li
University of California Berkeley
[email protected], [email protected], [email protected]

Andrew Packard
U of California at Berkeley
[email protected]

Michael Frenklach
University of California Berkeley
[email protected]

CP17

On the Fly Coarse-graining in Molecular Dynamics Simulations

We present a general framework for enhancing sampling of highly complex distributions, e.g. occurring in peptide simulations having several local free-energy minima, by biasing the dynamics and learning a probabilistic coarse-grained (CG) model simultaneously. Its main component represents a Bayesian CG model which is learned on the fly during the simulation of the target distribution. We in turn use insights gathered from that CG model to bias the fine-grained potential in order to enhance the exploration of the configurational space and overcome high energy barriers. Next to biasing dynamics, the CG model serves as a probabilistic predictor while we quantify the predictive uncertainty arising due to information loss from coarse-graining. An important component of the presented methodology builds the coarse-to-fine mapping which implicitly extracts, without any physical insight, lower-dimensional collective variables, the CG variables. We propose a mixture model serving as coarse-to-fine mapping while we sequentially add mixture components and increase the mapping's flexibility. Moreover, the complexity of the employed components is consecutively increased by adding features based on an information-theoretic metric. We demonstrate the capabilities of the proposed methodology in the context of peptide simulations.

Markus Schoeberl
Technical University of Munich
[email protected]

Nicholas Zabaras
University of Notre Dame
[email protected]

Phaedon S. Koutsourelakis
Technische Universitat Munchen, Germany
[email protected]

CP18

Bayesian Calibration for Models with Nonlinear Inequality Parameter Constraints

This work is focused on developing a Bayesian calibration technique to probabilistically characterize models whose parameters are governed by nonlinear inequality constraints. While Bayesian model calibration has become increasingly popular, the state-of-the-art methods experience limitations for models with constrained parameters. The proposed technique builds the joint posterior distribution of the parameters entirely within the feasible parameter space defined by the given constraints. This is accomplished through an enrichment of the Hamiltonian Monte Carlo (HMC) sampling technique, part of the Markov chain Monte Carlo family of methods, in which any arbitrary parameter constraints (e.g., linear or nonlinear inequalities) are evaluated during sampling. In HMC, the system of equations describing Hamiltonian dynamics simulates the evolution of a Markov chain; the constraint evaluation is directly embedded into the solution of that system of equations. Thus, the proposed method offers an intrinsic enforcement of constraints, in contrast to traditional means of extrinsic enforcement. The proposed technique is demonstrated through an application to a constitutive model for hyperelasticity that is calibrated using experimental data of brain tissue from two distinct studies. The constitutive model parameters are constrained by the Drucker stability criterion, which restricts the softening behavior of hyperelastic models to ensure the physical meaningfulness of their predictions.

Patrick Brewick, Kirubel Teferra
US Naval Research Laboratory
[email protected], [email protected]

CP18

Inverse Uncertainty Quantification Applied to an Industrial Model with Measurement Data

The demand for reliable information about complex physical and computational simulation models is continuously increasing. In an industrial context the use of uncertainty quantification (UQ) methods gains more and more relevance, and one integral part consists of evaluating distributions of parameters for those models. The contribution presents an electric drive with test bench hardware which allows generating systematic measurements with underlying uncertainties. The distributions of the parameter uncertainties are in general unknown in advance but can be defined by means of adequate tuning of the test bench. This knowledge can be used afterwards to validate the approach. In this context, Bayesian inference is applied to estimate the parameter distributions based on a detailed simulation model and the recorded measurement data. The results are obtained by a Markov chain Monte Carlo (MCMC) method in combination with a polynomial chaos (PC) surrogate model. Our focus is to extract relevant parts of the measurement data to compose a likelihood function which contains the required information. Labelling the measurement parts is done using a global sensitivity analysis based on Sobol' indices gained from the PC coefficients. The approximated parameter uncertainties are compared to the real values of the test bench hardware, and an outlook on scalability and applicability for industrial models with focus on mechatronic systems is provided.

Philipp Glaser, Kosmas Petridis
Robert Bosch GmbH
Stuttgart, Germany
[email protected], [email protected]

Michael Schick
Robert Bosch GmbH
Mechatronic Engineering CR/
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

CP18

Simulation-based Uncertainty Quantification for Atmospheric Remote Sensing Retrievals

Earth-observing satellites provide substantial volumes of data with comprehensive spatial and temporal coverage that inform many environmental processes. The operational processing of remote sensing data often must be computationally efficient to make data products available to the scientific community in a timely fashion. This is particularly true for an inverse problem known as a remote sensing retrieval, which is the process of producing estimates of geophysical quantities of interest from satellite observations of reflected sunlight. There are several classes of retrieval algorithms, and this work highlights a common probabilistic framework for quantifying uncertainty imparted by choices made in the retrieval process. We illustrate tools for investigating retrieval error distributions from Monte Carlo experiments applied to retrievals of atmospheric temperature, water vapor, and trace gas concentrations.

Jonathan Hobbs, Amy Braverman, Hai Nguyen
Jet Propulsion Laboratory
California Institute of Technology
[email protected], [email protected], [email protected]

CP18

Bayesian Calibration of Expensive Computer Models with Input Dependent Parameters

Computer models, aiming at simulating complex physical procedures, often involve complex parametrisations whose optimal parameter values may be different at different model input values. Traditional model calibration methods cannot address such cases because they assume that these optimal values are invariant to the inputs. We present a fully Bayesian methodology which is able to produce input-dependent optimal values for the calibration parameters, and characterizes the associated uncertainties via posterior distributions. A particular highlight of the method is that it can also address problems where the computer model requires the selection of a sub-model from a set of competing ones, but the choice of the best sub-model may change with respect to the input values. Central to the methodology is the idea of modelling the unknown model parameter as a binary treed process whose index is the model input, and specifying a hierarchical prior model. Suitable reversible jump operations are proposed to facilitate the challenging computations. The performance of the method is assessed against a benchmark example. We apply our method to address a real application, with a large-scale climate model, where interest lies in the selection of different sub-models at different spatial regions.

Georgios Karagiannis
Purdue University
[email protected]

Alex Konomi
University of Cincinnati
[email protected]

Guang Lin
Purdue University
[email protected]

CP18

Spatial Statistical Downscaling for Constructing High-resolution Nature Runs in Global Observing System Simulation Experiments

Observing system simulation experiments (OSSEs) have been widely used as a rigorous and cost-effective way to guide development of new observing systems, and to evaluate the performance of new data assimilation algorithms. Nature runs (NRs), which are outputs from deterministic models, play an essential role in building OSSE systems for global atmospheric processes because they are used both to create synthetic observations at high spatial resolution, and to represent the "true" atmosphere against which the forecasts are verified. However, most NRs are generated at resolutions coarser than actual observations. Here, we propose a principled statistical downscaling framework to construct high-resolution NRs via conditional simulation from coarse-resolution numerical model output. We use nonstationary spatial covariance function models that have basis function representations. This approach not only explicitly addresses the change-of-support problem, but also allows fast computation with large volumes of numerical model output. We also propose a data-driven algorithm to select the required basis functions adaptively, in order to increase the flexibility of our nonstationary covariance function models. We demonstrate these techniques by downscaling a coarse-resolution physical NR, at a native resolution of 1◦ latitude × 1.25◦ longitude, of global surface CO2 concentrations to 655,362 equal-area hexagons.

Pulong Ma
University of Cincinnati
[email protected]

Emily L. Kang
Department of Mathematical Sciences
University of Cincinnati
[email protected]

Amy Braverman, Hai Nguyen
Jet Propulsion Laboratory
California Institute of Technology
[email protected], [email protected]

CP18

Mean-based Preconditioning for the Helmholtz Equation in Random Media

The Helmholtz equation models single-frequency acoustic waves and is challenging to solve numerically even in the absence of randomness. As a result, the matrices arising from finite element discretisations of the Helmholtz equation, especially for high-frequency waves, require preconditioning. However, when solving a forward UQ problem using a Monte Carlo-type method, using a different preconditioner for each realisation of the random medium is infeasible. In our mean-based preconditioning strategy, we calculate a preconditioner based on the mean of the random medium, and then use this preconditioner for all the linear systems arising from the realisations of the random medium. We present a general class of random media for which we can prove an a priori bound on the solution of the Helmholtz equation. For these media we can prove that our mean-based preconditioning strategy is accurate, and we present computations confirming this. We also present computations indicating that our method is accurate for a wider class of random media than those supported by theoretical results. This preconditioning strategy also gives rise to a technique for increasing the speed of MCMC procedures for the statistical inverse problem, which is of interest in imaging and geophysics. We also discuss new results about the equivalence of the different possible formulations of the forward problem, with these underlying the theoretical results above.

Ivan G. Graham
University of Bath
[email protected]

Owen R. Pembery
Department of Mathematical Sciences
University of Bath
[email protected]

Euan Spence
University of Bath
[email protected]

CP19

Design of Experiments-based Geological Uncertainty Quantification of CO2-Assisted Gravity Drainage (GAGD) Process in Heterogeneous Multilayer Reservoirs

The Gas-Assisted Gravity Drainage (GAGD) process results in better sweep efficiency and higher microscopic displacement to enhance the bypassed oil recovery. Unlike conventional gas injection, the GAGD process takes advantage of natural reservoir fluid segregation to provide gravity-stable oil displacement. It consists of placing a horizontal producer near the bottom of the pay zone and injecting gas through vertical wells. The GAGD process was evaluated considering a high-resolution compositional reservoir model that includes 3D lithofacies and petrophysical property distributions. Three quantiles of P10, P50, and P90 were incorporated in the reservoir model along with Design of Experiments (DoE) to study the heterogeneity and anisotropy effects on the GAGD process. The parameters are horizontal permeability, anisotropy ratio, pore compressibility, effective reservoir porosity, aquifer size, in addition to CO2 injection rate, maximum oil production rate, minimum BHP in horizontal producers, and maximum BHP in injection wells. Experimental design has led to determining the most dominant factors affecting the GAGD performance. Among all the different property realizations, it was clearly seen that the anisotropy ratio has a direct impact on the process, as the 3D lithofacies modeling included Sand, Shaly Sand, and Shale that impede the vertical fluid movement; however, the immiscible CO2 injection indicated the insensitivity of the GAGD process to reservoir heterogeneities.

Watheq J. Al-Mudhafar, Dandina N. Rao
Louisiana State University
[email protected], [email protected]

CP19

Uncertainty Quantification of Textile Composites: A Multi-Scale Approach

Stochastic analysis has been performed in a multi-scale framework to study the resulting deformation characteristics and bending stresses of laminated woven composites. To this end, a novel adaptive sparse locally refined metamodel-based uncertainty quantification (UQ) framework has been developed for response approximation, considering stochastic micro-mechanical and geometric properties. Sensitivity analysis has been carried out to ascertain the influence of stochastic parameters on the output. In the context of the undertaken multi-scale structural application, a 3-D elasto-plastic constitutive model of graphite-epoxy plain woven composite has been developed. Analysis has been performed using both asymptotic and analytical mean field homogenization techniques. As a practical real-life application, a macro-scale finite element model of laminated woven composite has been developed and analyzed using properties obtained from the micro-scale to predict response characteristics of a wind turbine rotor blade subjected to various realistic in-plane and out-of-plane loading. The significance of the study lies in the fact that stochasticity in micro-structural attributes has a strong influence on the system performance. Therefore, to ensure robustness and reliability of large-scale structures, it is crucial to account for such forms of uncertainties and realistic multi-scale modelling techniques.

Tanmoy Chatterjee, Rohit Raju Madke
IIT Roorkee
[email protected], [email protected]

Rajib Chowdhury
Department of Civil Engineering
IIT Roorkee
[email protected]

CP19

Sensitivity Analysis and Data Assimilation for Fracture Simulations Model

Fracture propagation plays a key role in a number of applications of interest to the geophysical community. In this work we implement a global sensitivity analysis test as well as data assimilation in the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure. We implement the Fourier Amplitude Sensitivity Test (FAST) to explore how each parameter influences the modeled fracture and to determine the key model parameters that have the most impact on the model. Based on the results from the sensitivity analysis, we implement a Gaussian Mixture Model (GMM) assimilation method to determine the best parameter values to match different experimental data. GMMs are assimilation methods that are able to effectively represent the probability density function of non-linear models through a mixture of Gaussian functions. The results show good agreement between the assimilated model simulation and experimental data from Split Hopkinson Pressure Bar (SHPB) experiments.

Humberto C. Godinez
Los Alamos National Laboratory
Applied Mathematics and Plasma Physics
[email protected]

CP19

Derivative-based Expression of Sobol’s Total Index

A deep exploration of complex mathematical models is often made by using model-free methods. Variance-based sensitivity analysis, derivative-based sensitivity analysis, and multivariate sensitivity analysis (MSA) are among them. On the one hand, variance-based sensitivity analysis (VbSA) allows for assessing the effects of inputs on the model outputs, including the order and the strength of interactions among inputs. On the other hand, the derivative-based sensitivity measure (DGSM), which is computationally more attractive than VbSA, provides only the global effect of inputs. It is worthwhile to combine the advantages of both methods and to come up with a new approach. So far, the upper and lower bounds of Sobol' indices have been investigated. The upper bound of the Sobol total index is a (known) constant times the DGSM index. Thus, a small value of the upper bound of the Sobol total index means that the associated input does not act in the model. However, a big value does not bring much information regarding factor classification. In this abstract, we provide a new mathematical expression of Sobol's total index based on model derivatives for any probability distribution of inputs that is dominated by the Lebesgue measure. The new expression of the Sobol total index is a function of model partial derivatives, the density function, and the cumulative distribution function of a given input.
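As background for the bound mentioned above, the classical relation for independent inputs uniform on [0,1] reads (a standard result stated here for orientation, not the new expression of the talk):

\[
S_i^{\mathrm{tot}} \;\le\; \frac{\nu_i}{\pi^2\, D},
\qquad
\nu_i = \mathbb{E}\!\left[\left(\frac{\partial f}{\partial x_i}(X)\right)^{\!2}\right],
\qquad
D = \mathrm{Var}\, f(X),
\]

so a small DGSM value ν_i certifies a negligible total index, while a large ν_i alone is inconclusive for factor classification.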

Matieyendou Lamboni
University of Guyane
[email protected]

CP20

Uniform Sampling of a Feasible Set of Model Parameters

Physically based modeling of complex systems usually leads to a large dimensionality in the parameter space. Uncertainties are in general inevitable for those parameters. Quantification of uncertainty is therefore crucial for developing predictive models. Bound-to-Bound Data Collaboration (B2BDC) is a systematic methodology that combines models and data to evaluate and ultimately reduce the joint uncertainty of model parameters. The feasible set is defined as the region in the parameter space that is consistent with the experimental data. The ability to generate uniformly distributed samples within the feasible set allows a stochastic representation of the region of interest. Despite the successful development and elegant format of different sampling methods, their performance in large-scale realistic situations becomes computationally limited. Here, we present an acceptance-rejection sampling approach aided by an approximation strategy and dimension reduction. The method offers uniformly distributed samples in low-dimensional applications and good approximations in high-dimensional cases, with a tractable computational expense. Quantitative analysis demonstrates the efficiency-accuracy trade-off of the approach. The method is also compared to the Adaptive Metropolis (AM) sampling method and showed better computational performance in several test cases.
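A minimal sketch of the plain acceptance-rejection step that such an approach builds on, assuming the feasible set is described by constraint functions g_k(x) <= 0 inside a bounding box; the approximation and dimension-reduction stages of the talk are not reproduced here:

import numpy as np

def rejection_sample_feasible(constraints, lower, upper, n_samples, seed=None):
    """Draw (approximately uniform) samples from {x : g(x) <= 0 for all g in constraints}
    contained in the box [lower, upper]."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower), np.asarray(upper)
    accepted = []
    while len(accepted) < n_samples:
        x = rng.uniform(lower, upper)              # proposal uniform in the box
        if all(g(x) <= 0.0 for g in constraints):  # keep only feasible points
            accepted.append(x)
    return np.array(accepted)

# usage (hypothetical constraint): samples = rejection_sample_feasible(
#     [lambda x: x @ x - 1.0], lower=-np.ones(3), upper=np.ones(3), n_samples=100)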

Wenyu Li
University of California Berkeley
[email protected]

Arun Hegde
University of California, Berkeley
[email protected]

James Oreluk, Michael Frenklach
University of California Berkeley
[email protected], [email protected]

Andrew Packard
U of California at Berkeley
[email protected]

CP20

Distribution Surrogates for Efficient UQ in Multi-physics Problems

Multi-physics analysis under uncertainty can be computationally expensive since it involves a nested double-loop analysis, where the multidisciplinary analysis is carried out in the innermost loop, and uncertainty analysis in the outer loop. In some applications (e.g., automotive or aerospace vehicle design), there is also a third optimization loop. To alleviate the high computational expense, this talk presents a novel distribution surrogate-based approach, which helps collapse the traditional nested simulation process by enforcing the multi-physics compatibility condition simultaneously within the uncertainty analysis. A distribution surrogate directly provides the probability distribution of the output, as opposed to a deterministic surrogate, which provides a point value of the output at any given input and requires multiple runs. Multi-physics analysis under uncertainty involves both forward prediction and inference since system output computation at any input requires the satisfaction of multidisciplinary compatibility. Several distribution surrogates, with and without analytical inference, such as multivariate Gaussian, Gaussian copula, Gaussian mixture and Bayesian network, are discussed, and their performance, in terms of modeling effort and prediction accuracy, is compared against Monte Carlo results of fully coupled multi-physics simulations.

Saideep Nannapaneni, Sankaran Mahadevan
Vanderbilt University
[email protected], [email protected]

CP20

Looking the Wrong Way: Beyond Principal Components in Computer Model Calibration

When faced with large spatio-temporal computer model output, calibration is commonly performed by projecting the data onto a principal component (PC) basis derived from the ensemble, before emulators are constructed for the coefficients on this basis (Higdon et al 2008). This has become the standard approach in UQ for high-dimensional output (e.g. climate models). However, this is flawed when the ensemble, and hence the PC basis, does not contain the key patterns from the observed data we are attempting to calibrate the model to. We show that in this case, we guarantee the conclusion that the computer model cannot accurately represent the observations up to model discrepancy (what we term the 'terminal case'), regardless of whether this is true. This problem has been evident in every climate field we have encountered, highlighting the prevalence and importance of this problem. We present a methodology for optimal projection-based computer model calibration for high-dimensional model output that overcomes this flaw in existing approaches. Our method consists of a simple test allowing us to identify whether we are in the terminal case, and provides an efficient algorithm for overcoming this problem via an optimal rotation of the PC basis. This allows important directions or patterns in the output space (i.e. those more similar to the observations) to be incorporated into the basis, enabling input parameters leading to the observations to be identified, should they exist.

James M. Salter, Daniel Williamson
University of Exeter
[email protected], [email protected]

CP20

Fourier Decomposition Methods for Efficient Generation of Random Fields

Fourier methods for the representation of random fields and sample generation enjoy widespread use for their simplicity and efficient implementation when FFT methods are employed (e.g. in the form of circulant embeddings). However, their use is mostly restricted to rectangular domains and sampling-based methods (e.g. MC, QMC). For spectral methods on non-rectangular domains only little has been done so far. In this talk, we take a deeper look into the application of Fourier methods for spectral stochastic methods on arbitrary domains. Furthermore, it will be shown how to use the Fourier decomposition of the covariance kernel in order to accelerate the computation of the Karhunen-Loeve expansion (KLE).
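As a point of reference for the FFT-based sampling mentioned above, a minimal 1-D circulant-embedding sketch for a stationary Gaussian field on a regular grid (the standard Dietrich-Newsam construction; the non-rectangular-domain and KLE acceleration ideas of the talk are not shown):

import numpy as np

def circulant_embedding_sample(cov, n, seed=None):
    """Two independent samples of a zero-mean stationary Gaussian field on n
    equispaced points; cov(h) is the (vectorized) covariance at integer lag h."""
    rng = np.random.default_rng(seed)
    m = 2 * (n - 1)
    lags = np.concatenate([np.arange(n), np.arange(n - 2, 0, -1)])
    lam = np.fft.fft(cov(lags)).real          # eigenvalues of the circulant embedding
    if lam.min() < -1e-10:
        raise ValueError("embedding not positive semi-definite; enlarge the embedding")
    lam = np.clip(lam, 0.0, None)
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * xi)
    return z.real[:n], z.imag[:n]

# usage (hypothetical kernel): x1, x2 = circulant_embedding_sample(
#     lambda h: np.exp(-np.abs(h) / 10.0), n=256)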

Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

CP20

On the Quantification and Propagation of Imprecise Probabilities in High Dimensions with Dependencies

As a generalized probability theory, imprecise probability allows for partial probability specifications and is applicable when data is so limited that a unique probability distribution cannot be identified. The primary research challenges in imprecise probabilities relate to quantification of epistemic uncertainty and improving the efficiency of uncertainty propagation with imprecise probabilities, particularly for complex systems in high dimensions, at the same time considering dependence among random variables. The classical approach to address variable dependence from limited information is to assume that the random variables are coupled by a Gaussian dependence structure and rely on a transformation into independent variables. This assumption is often subjective, inaccurate, and not justified by the data. Recently, a new UQ methodology has been developed by the authors for quantifying and efficiently propagating imprecise probabilities with independent uncertainties created by a lack of statistical data. In this work, we generalize this new UQ methodology to overcome the limitations of the independence assumption by modeling the dependence structure of high-dimensional random variables using imprecise vine copulas. The generalized approach achieves particularly precise estimates for UQ and efficient uncertainty propagation of imprecise probabilities in high dimensions with dependencies.

Jiaxin Zhang, Michael D. Shields
Johns Hopkins University
[email protected], [email protected]

MS1

Episodic, Non-linear and Non-Gaussian: Uncertainty Quantification for Cloud, Precipitation, Fire and Ice

The uncertainty distributions of forecasts of episodic variables such as clouds, precipitation, fire and ice often feature a finite probability of non-existence and/or skewness. Furthermore, many of these variables feature strongly non-linear dynamics. For such variables, existing data assimilation techniques such as 4DVAR, the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) fail when a finite amount of the variable is observed but the prior forecast uncertainty distribution assigns a probability density of zero to this observation. Here we extend the previously developed Gamma, Inverse-Gamma and Gaussian (GIGG) variation on the EnKF to accommodate finite probabilities of non-existence. The resulting GIGG-Delta filter has the remarkable property that when rain (for example) is observed, the GIGG-Delta filter always produces a posterior ensemble of raining ensemble members even if not one of the prior forecast ensemble members contains rain. In addition, the GIGG-Delta filter accurately solves Bayes' theorem when the prior and observational uncertainties are given by gamma and inverse-gamma pdfs, respectively. To account for non-linearity, a new iterative balancing procedure is proposed that improves the ability of the EnKF to produce balanced posterior states. The approach is tested, illustrated and compared with existing techniques using a hierarchy of idealized models.

Craig Bishop
Naval Research Laboratory
[email protected]

Derek J. Posselt
Univ. of Michigan, Ann Arbor
[email protected]

MS1

Reducing Precision in Ensemble Data Assimilation to Improve Forecast Skill

We present a new approach for improving the efficiency of data assimilation by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the SPEEDY intermediate-complexity atmospheric model and the local ensemble transform Kalman filter. We will reduce the precision of the system beyond single precision (using an emulation tool), and compare the results with simulations at double precision. The lowest level of precision is related to the level of uncertainty present: with greater uncertainty, precision is less important. Additionally, by reinvesting computational resources from lowering precision into the ensemble size, higher-skill analyses and forecasts can be achieved.

Samuel Hatfield
Univ. of Oxford
[email protected]

Peter D. Dueben, Matthew Chantry
University of Oxford
[email protected], [email protected]

Tim Palmer
University of Oxford, UK
[email protected]

MS1

Balanced Data Assimilation for Highly-Oscillatory Mechanical Systems

We apply data assimilation algorithms to highly-oscillatory Hamiltonian systems with state variables that act on different time scales. There exist certain balance relations between those variables, and if they are nonlinear, the linear structure of LETFs (linear ensemble transform filters) results in violations of those relations and therefore spurious oscillations on the fast time scale are excited. We present two strategies to deal with that issue. One addresses the problem by blending between different models during the time-stepping in the forecast part of the sequential data assimilation algorithm. The other one minimises a cost functional including the residual of the balance relation as a penalty term after the performance of the filtering step. We test both methods on two different scenarios: a highly oscillatory Hamiltonian system and a thermally embedded highly oscillatory Hamiltonian system. The results of the experiments show that we are able to drastically reduce the spurious oscillations in all cases.

Maria Reinhardt
Institut fur Mathematik, Universitat Potsdam
reinhardt [email protected]

Gottfried Hastermann, Rupert Klein
Freie Universitat Berlin
[email protected], [email protected]

Sebastian Reich
Universitat Potsdam
[email protected]

MS1

Singular Likelihoods to Prevent Particle Filter Collapse

Particle filters are a method for nonlinear, non-Gaussian data assimilation and uncertainty quantification. They operate by assigning positive weights to members of an ensemble of forecasts based on how close the ensemble members are to observations. Unfortunately particle filters typically require an impossibly high number of ensemble members to avoid 'collapse': a single member having essentially all the weight, and the others having none. This talk presents a method to reduce the ensemble size required for particle filters. It measures the 'distance' of an ensemble member to the observations using a metric that de-weights errors at small scales. This is tantamount to describing the observation error as a generalized random field. Huge performance improvements compared to standard implementations are shown in a simple example.

Gregor Robinson
Univ. of Colorado, Boulder
[email protected]

MS2

Mutual Information-based Experimental Design for Problems in Nuclear Engineering

In this presentation, we will discuss the use of a mutual information-based experimental design framework to employ validated high-fidelity codes to calibrate low-fidelity codes and guide moving sensor strategies. For the first problem, one objective is to specify where in the design space high-fidelity codes should be evaluated to maximize information pertaining to low-fidelity model parameters. A theoretical issue, which we will discuss, entails the development of algorithms that accommodate the inherent correlation between parameters and responses. We will demonstrate the performance of these algorithms for thermal-hydraulics and fuels codes employed when simulating light water reactor designs. We will additionally discuss how the mutual information-based framework can be employed to guide moving sensors to determine the location and intensity of radiation sources in an urban environment.

Ralph C. Smith
North Carolina State Univ
Dept of Mathematics
[email protected]

Brian Williams
Los Alamos National Laboratory
[email protected]

Isaac Michaud, John Mattingly
North Carolina State University
[email protected], [email protected]

Jason Hite
Department of Nuclear Engineering, North Carolina State University, P.O. Box 7909, Raleigh, NC 27695, USA
[email protected]

MS2

Mutual Information Estimation in High Dimensions

Mutual information is the most common maximization criterion in optimal experimental design, as it is a natural measure of dependence between two random variables. However, the estimation of mutual information in high dimensions remains a challenge. Initial results are presented on replacing the estimation of mutual information in high dimensions with a distance measure in low dimensions.
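For orientation, the quantity whose estimation is at issue is the usual mutual information between two random variables (generic notation, not specific to this talk):

\[
I(X;Y) \;=\; \int\!\!\int p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\;dx\,dy
\;=\; \mathbb{E}_{p(x,y)}\!\left[\log\frac{p(x,y)}{p(x)\,p(y)}\right],
\]

whose direct estimation requires the joint and marginal densities, which is precisely what becomes difficult in high dimensions.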

Gabriel Terejanu, Xiao Lin
University of South Carolina
[email protected], [email protected]

MS2

Optimal Experimental Design for Prediction Using a Consistent Bayesian Approach

Experimental data is often used to infer valuable information about parameters for models of physical systems. However, the collection of experimental data can be costly and time consuming. In such situations we can only afford to gather some limited number of experimental data; however, not all experiments provide the same amount of information about the processes they are helping inform. Consequently, it is important to design experiments in an optimal way, i.e., to choose some limited number of experimental data to maximize the value of each experiment. Moreover, the optimality of the experiment must be chosen with respect to the penultimate objective. We present an efficient approach for optimal experimental design based on a recently developed consistent Bayesian formulation. We demonstrate that the optimal design depends on whether we are primarily interested in inference of parameters or the prediction of quantities of interest that cannot be observed directly. Numerical results will be presented to illustrate both types of optimal experimental design and to highlight the differences.

Tim Wildey
Optimization and Uncertainty Quantification Dept.
Sandia National Laboratories
[email protected]

Troy Butler
Dept. of Mathematics and Statistics
University of Colorado - Denver
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

MS2

Solving Integer Programming Problems in Design of Experiments

We present a numerical method for approximating the solution of convex integer programs stemming from optimal experimental design. The statistical setup consists of a Bayesian framework for linear inverse problems for which the direct relationship is described by a discretized integral equation. Specifically, we aim to find the optimal sensor placement from a set of candidate locations where data are collected with measurement error. The convex objective function is a measure of the uncertainty, described here by the posterior covariance matrix, of the discretized linear inverse problem solution. The resulting convex integer program is relaxed, producing a lower bound. An upper bound is obtained by extending the sum-up rounding approach to multiple dimensions. For this extension, we provide an analysis of its accuracy as a function of the discretization mesh size. We show asymptotic optimality of the integer solution defining the upper bound for different experimental design criteria (A-, D-, E-optimal), by proving the convergence to zero of the gap between upper and lower bounds as the mesh size goes to zero and the number of candidate locations goes to infinity in a fixed proportion to the total number of mesh points. The technique is demonstrated on a two-dimensional gravity surveying problem for both A-optimal and D-optimal sensor placement, where our designs yield better procedures compared to random placements.

Jing Yu
University of Chicago
[email protected]

Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

MS3

Randomized Newton and Quasi-Newton Methods for Large Linear Least Squares Problems

We discuss randomized Newton and randomized quasi-Newton approaches to efficiently solve large linear least-squares problems where the very large data sets present a significant computational burden (e.g., the size may exceed computer memory or data are collected in real-time). In our proposed framework, stochasticity is introduced in two different ways as a means to overcome these computational limitations, and probability distributions that can exploit structure and/or sparsity are considered. Our results show, in particular, that randomized Newton iterates, in contrast to randomized quasi-Newton iterates, may not converge to the desired least-squares solution. Numerical examples, including an example from extreme learning machines, demonstrate the potential applications of these methods.

Matthias Chung, Julianne Chung
Department of Mathematics
Virginia Tech
[email protected], [email protected]

David A. Kozak
Colorado School of Mines
[email protected]

Joseph T. Slagel
Department of Mathematics
Virginia Tech
[email protected]

Luis Tenorio
Colorado School of Mines
[email protected]
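
The abstract does not spell out the algorithms, but the following generic Python sketch illustrates one common way stochasticity enters large least-squares problems: a "sketch-and-solve" step that samples and rescales a random subset of rows and solves the reduced problem. All sizes and names are illustrative only, not the authors' method.

import numpy as np

rng = np.random.default_rng(0)
m, n, s = 100_000, 50, 2_000          # many rows, few columns, sketch size

A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Full (deterministic) least-squares solution for reference.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Randomized step: sample s rows uniformly and rescale so that the sketched
# normal equations are an unbiased estimate of the full ones.
idx = rng.choice(m, size=s, replace=False)
scale = np.sqrt(m / s)
As, bs = scale * A[idx], scale * b[idx]
x_sketch, *_ = np.linalg.lstsq(As, bs, rcond=None)

print("relative difference:", np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))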

MS3

A Probabilistic Subspace Bound, with Application to Active Subspaces

We analyze a dimension reduction approach for the identification of response surfaces. Given a function f that depends on m parameters, one wants to identify a low-dimensional “active subspace” along which f is most sensitive to change, and then approximate f by a response surface over this active subspace. More specifically, let X ∈ R^m be a random vector with probability density ρ(x), and let E be the expected value of the “squared” gradient of f(X). An “active subspace” is an eigenspace associated with the strongly dominant eigenvalues of E. To reduce the cost, we compute the eigenspace instead from a Monte Carlo approximation S. We present bounds on the number of Monte Carlo samples so that, with high probability, the angle between the eigenspaces of E and S is less than a user-specified tolerance. Our probabilistic approach resembles the low-rank approximation of kernel matrices from random features, but with more stringent accuracy measures. Our bounds represent a substantial improvement over existing work: they are non-asymptotic; fully explicit; allow tuning of the success probability; and do not depend on the number m of parameters, but only on the numerical rank (intrinsic dimension) of E. They also suggest that Monte Carlo sampling can be efficient in the presence of many parameters, as long as the underlying function f is sufficiently smooth.

Ilse Ipsen
North Carolina State University
Department of Mathematics
[email protected]

Ralph Smith, John Holodnak
North Carolina State University
[email protected], [email protected]
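
As a small illustration of the objects in this abstract (not the paper's bounds), the Python sketch below forms the Monte Carlo estimate S of the expected outer product of the gradient for a toy quadratic function, extracts the dominant eigenspace, and measures its angle to the exact eigenspace. The quadratic test function and all sizes are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
m, k, N = 20, 2, 500                      # parameters, subspace dim, samples

# Toy function f(x) = 0.5 x^T A x with known gradient; A has low numerical rank.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
evals = np.array([10.0, 5.0] + [1e-3] * (m - 2))
A = U @ np.diag(evals) @ U.T
grad = lambda x: A @ x                    # gradient of f

X = rng.standard_normal((N, m))           # x ~ rho = standard Gaussian
G = np.array([grad(x) for x in X])        # gradient samples
S = G.T @ G / N                           # Monte Carlo estimate of E[grad grad^T]
E = A @ A.T                               # exact value, since E[x x^T] = I here

def dominant_subspace(M, k):
    w, V = np.linalg.eigh(M)
    return V[:, np.argsort(w)[::-1][:k]]

Vs, Ve = dominant_subspace(S, k), dominant_subspace(E, k)
# Singular values of Vs^T Ve are cosines of the principal angles.
cosines = np.linalg.svd(Vs.T @ Ve, compute_uv=False)
print("sin(largest principal angle):", np.sqrt(max(0.0, 1 - min(cosines) ** 2)))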

MS3

Recovery from Random Observations of Non-linear Low-rank Structures

Abstract not available at time of publication.

Malik Magdon-Ismail
Rensselaer Polytechnic Institute
[email protected]

Alex Gittens
UC Berkeley
[email protected]

MS3

Maximize the Expected Information Gain in Bayesian Experimental Design Problems: A Fast Optimization Algorithm Based on Laplace Approximation and Randomized Eigensolvers

Optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs) is an extremely challenging problem. First, the parameter to be inferred is often a spatially correlated field, which after discretization leads to a high-dimensional parameter space. Second, the forward model is often extremely complex and computationally expensive to solve. A common objective function for OED is the expected information gain (EIG). Naive evaluation of the EIG is infeasible for large-scale problems due to the large number of samples required in the double-loop Monte Carlo. To overcome these difficulties, we invoke an approximation of the EIG based on the Laplace approximation of the posterior. Each evaluation of the objective function then requires computing a sample average approximation (over possible realizations of the data) of the information gain (IG) between the Laplace approximation and the Gaussian prior distribution. An analytical formula for the IG is available; it involves the log-determinant and trace of the posterior covariance operator. Randomized eigensolver algorithms allow us to efficiently estimate such invariants at a cost that is independent of the dimension of the parameter space, thus allowing for a scalable evaluation of the objective function. Variational adjoint methods are then used to efficiently compute the gradient of the OED objective function. An application to inverse scattering will be presented.

Umberto Villa
University of Texas at Austin
[email protected]

Omar Ghattas
The University of Texas at Austin
[email protected]
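
A hedged, linear-Gaussian sketch of the core identity used here: with a Gaussian prior and a linear forward map, the information gain reduces to 0.5 logdet(I + Htilde), where Htilde is the prior-preconditioned data-misfit Hessian, and a randomized eigensolver estimates its dominant eigenvalues. The operators and sizes below are synthetic stand-ins, not the authors' PDE setting.

import numpy as np

rng = np.random.default_rng(2)
n, d, k, p = 200, 15, 20, 10          # parameter dim, data dim, sketch size, oversampling

F = rng.standard_normal((d, n)) / np.sqrt(n)      # linear forward operator (stand-in)
Gpr_half = np.diag(1.0 / (1.0 + np.arange(n)))    # square root of a smoothing prior covariance
noise_prec = 1e4                                   # noise precision (Gnoise^{-1} = noise_prec * I)

A = np.sqrt(noise_prec) * F @ Gpr_half             # Htilde = A^T A
def Htilde_mv(X):                                  # matrix-free action of Htilde
    return A.T @ (A @ X)

# Randomized eigensolver: randomized range finding followed by Rayleigh-Ritz.
Omega = rng.standard_normal((n, k + p))
Q, _ = np.linalg.qr(Htilde_mv(Omega))
T = Q.T @ Htilde_mv(Q)
lam = np.sort(np.linalg.eigvalsh(T))[::-1][:k]
ig_randomized = 0.5 * np.sum(np.log1p(np.clip(lam, 0.0, None)))

# Exact value for comparison (feasible only for this small synthetic problem).
ig_exact = 0.5 * np.sum(np.log1p(np.linalg.eigvalsh(A.T @ A)))
print("randomized:", ig_randomized, " exact:", ig_exact)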

MS4

Approximate Integral Methods for Fast Model Diagnostics

Nuisance parameters increase in number with more observations, making these models impossible to scale to realistic problems. One way to make estimation possible for these models is to marginalize out nuisance parameters, thereby reducing the effective dimension of the model. Since accurate high-dimensional numerical integration is infeasible, the standard tool for marginalization is to apply the Laplace approximation, a second-order Taylor expansion enabling the required high-dimensional integral to be approximated as the kernel of a Gaussian density. While this makes the estimation of structural parameters feasible, it comes with a heavy assumption that is only asymptotically correct. The problem is that the required asymptotics are unattainable for the nuisance parameters to which the approximation is applied. In this talk we present a probabilistic integration tool as a fast diagnostic assessment of the validity of the Laplace approximation. Furthermore, we showcase how to fold the probabilistic integrator into the marginalization and estimation methodology.

Dave A. Campbell
Simon Fraser University
[email protected]

MS4

Bayesian Probabilistic Numerical Methods for Industrial Process Monitoring

The use of high-power industrial equipment, such as large-scale mixing equipment or a hydrocyclone for separation of particles in liquid suspension, demands careful monitoring to ensure correct operation. The task of monitoring the liquid suspension can be posed as a time-evolving inverse problem and solved with Bayesian statistical methods. In this paper, we extend Bayesian methods to incorporate statistical models for the error that is incurred in the numerical solution of the physical governing equations. This enables full uncertainty quantification within a principled computation-precision trade-off, in contrast to the over-confident inferences that are obtained when numerical error is ignored. The method is cast within a sequential Monte Carlo framework and an optimised implementation is provided in Python.

Jon Cockayne
The University of Warwick
[email protected]

MS4

Convergence Rates of Gaussian ODE Filters

A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) employs Gaussian (Kalman) filtering to solve initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss–Markov process X, which is then iteratively conditioned on information on x from evaluations of f. We prove local convergence rates of order h^(q+1) for all previously introduced versions of this Gaussian ODE filter, as well as global convergence rates of order h^1 in the case of q = 1, an integrated Brownian motion prior, and a maximum a posteriori measurement model, and provide precision/uncertainty requirements on evaluations of f for these rates. Thereby, this paper removes multiple restrictive assumptions for these local convergence rates, and provides a first global convergence result for Gaussian ODE filters as well as a first analysis of the explicit incorporation of evaluation uncertainty in such models.

Hans Kersting
Max Planck Institute for Intelligent Systems
[email protected]

Tim Sullivan
Free University of Berlin / Zuse Institute Berlin
[email protected]

Philipp Hennig
Max Planck Institute for Intelligent Systems
[email protected]
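
A minimal sketch of the filter class analysed above, assuming q = 1, a scalar ODE, a once-integrated Brownian motion prior, and noise-free derivative "measurements" obtained by evaluating f at the predicted mean; this is a generic illustration, not the authors' implementation.

import numpy as np

def ode_filter(f, x0, t0, t1, h, sigma2=1.0):
    """Gaussian ODE filter for x'(t) = f(x(t)); state = (x, x')."""
    m = np.array([x0, f(x0)])             # mean of (x, x')
    P = np.zeros((2, 2))                   # initial state known exactly
    A = np.array([[1.0, h], [0.0, 1.0]])   # integrated-Brownian-motion transition
    Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                           [h**2 / 2, h]])
    H = np.array([[0.0, 1.0]])             # we "observe" the derivative
    ts = np.arange(t0, t1 + 1e-12, h)
    means = [m.copy()]
    for _ in ts[1:]:
        # Predict.
        m = A @ m
        P = A @ P @ A.T + Q
        # Update: the derivative should equal f at the predicted state.
        z = f(m[0]) - (H @ m)[0]           # residual
        S = (H @ P @ H.T)[0, 0]            # innovation variance (noise-free data)
        K = (P @ H.T)[:, 0] / S
        m = m + K * z
        P = P - np.outer(K, H @ P)
        means.append(m.copy())
    return ts, np.array(means)

# Usage: x' = -x, x(0) = 1, exact solution exp(-t).
ts, means = ode_filter(lambda x: -x, x0=1.0, t0=0.0, t1=2.0, h=0.01)
print("max error vs exp(-t):", np.abs(means[:, 0] - np.exp(-ts)).max())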

MS4

Bayesian Probabilistic Numerical Methods

Numerical computation — such as the numerical solution of a PDE — can be modelled as a statistical inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in a classical numerical method, thereby restricting attention to a meaningful subclass of probabilistic numerical methods distinct from classical average-case analysis and information-based complexity. The main technical consideration here is that the data are non-random and thus the standard Bayes' theorem does not hold. General conditions will be presented under which numerical methods based upon such Bayesian probabilistic foundations are well-posed, and a sequential Monte Carlo method will be shown to provide consistent estimation of the posterior. The paradigm is extended to computational “pipelines”, through which a distributional quantification of numerical error can be propagated. A sufficient condition is presented for when such propagation can be endowed with a globally coherent Bayesian interpretation, based on a novel class of probabilistic graphical models designed to represent a computational work-flow. The concepts are illustrated through explicit numerical experiments involving both linear and non-linear PDE models.

Tim Sullivan
Free University of Berlin / Zuse Institute Berlin
[email protected]

Jon Cockayne
The University of Warwick
[email protected]

Chris Oates
University of Technology Sydney
[email protected]

Mark Girolami
Imperial College London
[email protected]

MS5

Speeding Up Sequential Tempered MCMC for Fast Bayesian Inference and Uncertainty Quantification

Bayesian methods are critical for quantifying the behaviors of complex physical systems. In this approach, we capture our uncertainty about a system's properties using a probability distribution and update this understanding as new information becomes available. We can then also make probabilistic predictions that incorporate uncertainties to evaluate system performance and make decisions. While Bayesian methods are critical, they are often computationally intensive, particularly for posterior uncertainty quantification, which necessitates the development of new approaches and algorithms. In this talk, I will discuss my contributions to a family of population MCMC algorithms, which I call Sequential Tempered MCMC (ST-MCMC) algorithms. These algorithms combine 1) a notion of tempering, to gradually transform a population of samples from the prior to the posterior through a series of intermediate distributions, 2) importance resampling, and 3) MCMC for the parallel sample chains. Using the sample population and prior information, we can adapt the proposal distribution to better utilize global information about the posterior as the ST-MCMC algorithm evolves. This enables faster sampling compared to standard implementations of ST-MCMC-like methods. I will then illustrate the application of these algorithms to problems in Bayesian inference, model selection, and posterior reliability assessment.

Thomas A. Catanach
California Institute of Technology
[email protected]
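
The following Python sketch illustrates the generic tempering / resampling / MCMC structure described above on an invented one-dimensional bimodal problem; the adaptive temperature rule (effective sample size target) and random-walk moves are standard choices, not the speaker's adaptive proposal scheme.

import numpy as np

rng = np.random.default_rng(3)

def log_prior(x):                      # N(0, 3^2) prior
    return -0.5 * (x / 3.0) ** 2

def log_like(x):                       # bimodal synthetic "likelihood"
    return np.logaddexp(-0.5 * ((x - 2) / 0.3) ** 2, -0.5 * ((x + 2) / 0.3) ** 2)

N = 2000
target_ess = 0.5 * N
x = rng.normal(0.0, 3.0, size=N)       # stage 0: samples from the prior
beta = 0.0

while beta < 1.0:
    ll = log_like(x)

    def ess(db):                       # effective sample size of incremental weights
        w = np.exp(db * (ll - ll.max()))
        return w.sum() ** 2 / (w ** 2).sum()

    # Largest admissible temperature increment keeping ESS above the target.
    if ess(1.0 - beta) >= target_ess:
        dbeta = 1.0 - beta
    else:
        lo, hi = 0.0, 1.0 - beta
        for _ in range(50):            # bisection on the increment
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if ess(mid) >= target_ess else (lo, mid)
        dbeta = max(lo, 1e-6)
    beta += dbeta

    # Importance resampling with the incremental weights.
    logw = dbeta * ll
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]

    # A few random-walk Metropolis steps per chain, targeting prior * likelihood^beta.
    logpost = lambda z: log_prior(z) + beta * log_like(z)
    step = 0.5 * x.std() + 1e-3
    for _ in range(5):
        prop = x + step * rng.standard_normal(N)
        accept = np.log(rng.random(N)) < logpost(prop) - logpost(x)
        x = np.where(accept, prop, x)

print("final beta:", beta, "posterior mean:", x.mean(), "posterior std:", x.std())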

MS5

Implicit Sampling for Stochastic Differential Equations

Monte Carlo algorithms that generate weighted samples from a given probability distribution, often known as importance samplers, are used in applications ranging from data assimilation to computational statistical mechanics. One challenge in designing weighted samplers is to ensure the variance of the weights, and that of the resulting estimator, are well-behaved. Recently, Chorin, Tu, Morzfeld, and coworkers have introduced a novel class of weighted samplers called implicit samplers, which possess a number of nice properties. This talk concerns an asymptotic analysis of implicit samplers in the small-noise limit. I will describe a simple higher-order implicit sampling scheme suggested by the analysis, and explain how it can be extended to stochastic differential equations.

Jonathan Goodman
Courant Institute
[email protected]

Andrew Leach
University of Arizona
[email protected]

Kevin K. Lin
Department of Mathematics
University of Arizona
[email protected]

Matthias Morzfeld
University of Arizona
Department of Mathematics
[email protected]

MS5

Local Ensemble Kalman Filter with a Small Sample Size

The ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of the EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. We rigorously analyze the local EnKF (LEnKF) for linear systems, and show that the filter error can be dominated by the ensemble covariance, as long as 1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; 2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not.

Xin T. Tong
National University of Singapore
Department of Mathematics
[email protected]
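
A toy Python illustration of the localization mechanism discussed above (not the paper's analysis): the ensemble sample covariance is tapered entry-wise by a distance-based mask, so each state component is updated only by nearby observations. The linear model, taper, and all constants are invented for the example.

import numpy as np

rng = np.random.default_rng(4)
d, Ne, radius, steps = 40, 10, 4, 200           # state dim, ensemble size, local radius, steps

M = 0.97 * np.roll(np.eye(d), 1, axis=1)        # damped cyclic-shift dynamics
H = np.eye(d)[::2]                              # observe every other component
R = 0.25 * np.eye(H.shape[0])                   # observation noise covariance
Q = 0.01 * np.eye(d)                            # system noise covariance

# Distance-based taper (localization) matrix on the cyclic domain.
i = np.arange(d)
dist = np.abs(i[:, None] - i[None, :])
dist = np.minimum(dist, d - dist)
L = np.clip(1.0 - dist / radius, 0.0, None)     # simple triangular taper

x_true = rng.standard_normal(d)
ens = rng.standard_normal((Ne, d))              # deliberately poor initialization
errs = []
for _ in range(steps):
    # Truth and observations.
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(d), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(H.shape[0]), R)
    # Forecast step for each ensemble member.
    ens = ens @ M.T + rng.multivariate_normal(np.zeros(d), Q, size=Ne)
    # Localized sample covariance and Kalman gain.
    A = ens - ens.mean(axis=0)
    C = L * (A.T @ A) / (Ne - 1)
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    # Stochastic EnKF update with perturbed observations.
    y_pert = y + rng.multivariate_normal(np.zeros(H.shape[0]), R, size=Ne)
    ens = ens + (y_pert - ens @ H.T) @ K.T
    errs.append(np.linalg.norm(ens.mean(axis=0) - x_true) / np.sqrt(d))

print("time-averaged RMSE (second half):", np.mean(errs[steps // 2:]))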

MS6

A Variational Approach to Probing Extreme Events in Turbulent Dynamical Systems

Extreme events are ubiquitous in a wide range of dynamical systems, including turbulent fluid flows, nonlinear waves, large-scale networks, and biological systems. We propose a variational framework for probing conditions that trigger intermittent extreme events in high-dimensional nonlinear dynamical systems. We seek the triggers as the probabilistically feasible solutions of an appropriately constrained optimization problem, where the function to be maximized is a system observable exhibiting intermittent extreme bursts. The constraints are imposed to ensure the physical admissibility of the optimal solutions, that is, significant probability for their occurrence under the natural flow of the dynamical system. We apply the method to a body-forced incompressible Navier-Stokes equation, known as the Kolmogorov flow. We find that the intermittent bursts of the energy dissipation are independent of the external forcing and are instead caused by the spontaneous transfer of energy from large scales to the mean flow via nonlinear triad interactions. The global maximizer of the corresponding variational problem identifies the responsible triad, hence providing a precursor for the occurrence of extreme dissipation events. Specifically, monitoring the energy transfers within this triad allows us to develop a data-driven short-term predictor for the intermittent bursts of energy dissipation. We assess the performance of this predictor through direct numerical simulations.

Mohammad Farazmand
MIT
[email protected]

Themistoklis Sapsis
Massachusetts Institute of Technology
[email protected]

MS6

Predictability of Extreme-causing Weather Patterns in the Midlatitude Turbulence

Blocking events are rare, large-scale, high-pressure (anticyclonic) weather systems that persist for more than five days and cause extreme events such as heat waves, cold spells, and extreme precipitation events. The underlying dynamics and the intrinsic predictability of blocking events are poorly understood, and weather models have low prediction skill (compared to zonal flows) for these events. Recent studies have shown that blocking events arise from the internal dynamics of the tropospheric midlatitude turbulence and that the ocean, topography, and other components of the Earth system play a secondary role. Building on these findings and recent progress in data-driven reduced-order modeling, we aim at studying the predictability and prediction of blocking events. We employ a hierarchy of climate models, starting with a two-layer quasi-geostrophic (QG) model (the minimal physical model for these events) and an idealized global climate model (GCM). We conduct a perfect-model analysis in the QG model to determine the intrinsic predictability of blocking events and compare it with the predictability of zonal flows. We then apply a number of data-driven methods, based on the Koopman operator and the Fluctuation-Dissipation Theorem, to the QG and GCM simulations and seek to identify local or global patterns that provide short-term prediction skill for blocking events.

Pedram Hassanzadeh
Mechanical Engineering and Earth Science
Rice University
[email protected]

MS6

New Statistically Accurate Algorithms for Fokker-Planck Equations in Large Dimensions and Predicting Extreme Events

Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. In this talk, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed PDFs. The algorithms involve a hybrid strategy that requires only a small number of ensembles L. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. Then two effective strategies, a judicious block decomposition of the conditional covariance matrix and statistical symmetry, are developed and incorporated into these algorithms to deal with much higher dimensions, even with orders in the millions, and thus beat the curse of dimension. The algorithms are applied to a 1000-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only 1 sample! In addition, the block decomposition facilitates the algorithms to efficiently capture the non-Gaussian features at different locations in a 240-dimensional two-layer inhomogeneous Lorenz 96 model using only 500 samples.

Andrew Majda
Courant Institute NYU
[email protected]

MS6

Closed-loop Reduced-order Control of Extreme Events in High-dimensional Systems

Rare events frequently occur in nonlinear and high-dimensional systems such as turbulent fluid flows and premixed flames. Due to their extreme nature, they can push engineering systems out of their operating envelope and reduce their lifetime. Here, we develop a control framework to suppress the occurrence of these extreme events. Our approach is based on the Optimally Time-Dependent (OTD) modes, which have recently been used to formulate reliable indicators for the short-term prediction of rare events [Phys. Rev. E 94, 032212 (2016)]. First, we infer effective actuator locations from the spatial shape of the OTD modes, which align exponentially fast with the transiently most unstable subspace of the system. We then design an appropriate control law based on a reduced dynamical system obtained from the projection of the original system onto the OTD modes. We demonstrate the efficiency of the resulting control setup on relevant fluid dynamical examples.

Saviz Mowlavi
Massachusetts Institute of Technology
[email protected]

Themistoklis Sapsis
Massachusetts Institute of Technology
[email protected]

MS7

Sparsity in Low-rank Tensor Decompositions

Abstract not available at time of publication.

Alex Gorodetsky
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

MS7

Induced Distribution Sampling for Sparse Approximations

We present a new sampling scheme for computing sparse approximations to compressible polynomial chaos expansions. We first approximate the continuous distribution with a discrete set; this discrete set is constructed by sampling from the induced polynomial distribution. This construction ensures that, with a minimal number of samples and with high probability, the discrete set is a good emulator for the continuous distribution. This large sample set is subsequently sparsely subsampled, and function evaluations on this subsampled set are used in computing a sparse coefficient vector using weighted L1 optimization. The procedure is general, easily applicable to any tensor-product measure and any polynomial index set. We show numerous examples illustrating the effectiveness of this algorithm in computing sparse approximations.

Mani Razi
University of Utah
[email protected]

Ben Adcock
Simon Fraser University
ben [email protected]

Simone Brugiapaglia
MOX
Politecnico di Milano
[email protected]

Akil Narayan
University of Utah
[email protected]

MS7

Title Not Available

Abstract not available at time of publication.

Dongbin Xiu
Ohio State University
[email protected]

MS7

Alternating Direction Method for Enhancing Sparsity of the Representation of Uncertainty

Compressive sensing has become a powerful addition to uncertainty quantification when only limited data is available. In this talk I will introduce a general framework to enhance the sparsity of the representation of uncertainty in the form of generalized polynomial chaos expansion. This framework uses an alternating direction method to identify new sets of random variables through iterative rotations such that the new representation of the uncertainty is sparser. Consequently, it increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Numerical examples will be shown to demonstrate the effectiveness of this method.

Xiu Yang
Pacific Northwest National Laboratory
[email protected]

MS8

Using the Problem Symmetries to Improve Surrogate Models

Sometimes phenomena present symmetries, which allow us to obtain free information. We investigate multiphase cylindrical detonation in two dimensions. We impose a trimodal perturbation in the particle volume fraction (PVF). We seek wave numbers that maximize the departure from axisymmetry at 500 μs. We divide the domain into sectors in the azimuthal coordinate and we compute the difference between the PVF in the sector with the most particles and the PVF in the sector with the least particles. The amplitude for the three modes is the same and there is no phase shift, so the order of the wave numbers does not matter. Hence for each combination of points we have 5 mirror points, i.e. the case with wave numbers (2,4,3) has mirror points (2,3,4), (3,2,4), (3,4,2), (4,2,3), (4,3,2). The symmetry helps to improve the surrogate model. Because our simulations are very expensive, we use multi-fidelity surrogates (MFS) to combine high-fidelity (HF) and low-fidelity (LF) models to achieve accuracy at reasonable cost. The LF is obtained with a coarser discretization, achieving 85% cost savings. Two MFS were fitted: without using mirror points (70 LF, 19 HF) and using them (420 LF, 114 HF). The accuracy was measured using cross-validation error (CV). The inclusion of mirror points allowed a 46% reduction in CV error.

María Giselle Fernandez-Godino, Raphael Haftka, S. Balachandar
University of Florida
[email protected], [email protected], [email protected]

MS8

Multi-fidelity Modeling for Optimizing Battery Design

We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. The design parameters that characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model for discharge capacity is first constructed as a function of these design parameters. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multifidelity data. Specifically, a small sample of data from high-fidelity simulations is combined with a large sample of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, while the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction of how air electrode microstructures affect the discharge capacity of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via a genetic algorithm offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.

Wenxiao Pan
University of Wisconsin-Madison
[email protected]

Xiu Yang, Jie Bao, Michelle Wang
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected]
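
A simplified recursive co-kriging sketch in Python: a Gaussian process fit to many cheap low-fidelity runs, a scaling factor estimated at the few high-fidelity points, and a second GP fit to the high-fidelity residuals. The 1D test functions, kernel, and hyperparameters are illustrative stand-ins, not the paper's battery model or its specific co-kriging implementation.

import numpy as np

def rbf(Xa, Xb, ell=0.2, var=1.0):
    d = Xa[:, None] - Xb[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

class GP:
    """Minimal zero-mean GP regressor with an RBF kernel."""
    def __init__(self, X, y, noise=1e-6):
        self.X = X
        K = rbf(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))
    def mean(self, Xs):
        return rbf(Xs, self.X) @ self.alpha

# Synthetic high- and low-fidelity models (a classic 1D multifidelity test pair).
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

X_lo = np.linspace(0, 1, 21)                    # many cheap runs
X_hi = np.array([0.0, 0.3, 0.55, 0.8, 1.0])     # few expensive runs

gp_lo = GP(X_lo, f_lo(X_lo))
mu_lo_at_hi = gp_lo.mean(X_hi)
rho = np.polyfit(mu_lo_at_hi, f_hi(X_hi), 1)[0]       # fidelity scaling factor
gp_delta = GP(X_hi, f_hi(X_hi) - rho * mu_lo_at_hi)   # GP on the discrepancy

Xs = np.linspace(0, 1, 200)
pred = rho * gp_lo.mean(Xs) + gp_delta.mean(Xs)
print("max |co-kriging error| on [0,1]:", np.abs(pred - f_hi(Xs)).max())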

MS8

Linking Gaussian Process Regression with Data-driven Manifold Embeddings for Robust Nonlinear Information Fusion

In statistical modeling with Gaussian process regression, it has been shown that combining (few) high-fidelity data with (many) low-fidelity data enhances prediction accuracy, compared to prediction based on the few high-fidelity data only. Such information fusion techniques for multifidelity data commonly require strong cross-correlation between the high- and low-fidelity models. In real applications, however, cross-correlation across data at different fidelities may be obscured due to nonlinearity or multivaluedness, resulting in degeneration of established techniques. Here, we present a robust mathematical algorithm for multiple-fidelity information fusion that holds promise for alleviating the problem of hidden cross-correlations. We introduce higher-dimensional embeddings of the data, in which the high- and low-fidelity cross-correlations become more clear and can be better exploited. The embedding theorems guarantee the one-to-oneness between the two data set embeddings. Then, the high-fidelity model turns into a function of the low-fidelity model on the appropriate manifold, which then enables established information fusion schemes. We describe the proposed framework and demonstrate its effectiveness through some benchmark problems, including models with discontinuities. The work establishes a useful synergy between theoretical concepts (observability, embedding theorems) and data-driven applications of machine learning.

Lee Seungjoon
Johns Hopkins University
seungjoon [email protected]

George Em Karniadakis
Brown University
george [email protected]

Ioannis Kevrekidis
Dept. of Chemical Engineering
Princeton University
[email protected]

MS8

Deep Neural Networks for Multifidelity Uncertainty Quantification

A common scenario in computational science is the availability of a suite of simulators solving the same physical problem, such that the simulators are at varying levels of cost/fidelity. High-fidelity simulators are more accurate but are computationally expensive, whereas low-fidelity models can be evaluated cheaply while suffering from reduced accuracy. Additionally, many engineering applications are characterized by model parameters / boundary conditions / initial conditions etc. which are not known exactly. The surrogate-based approach to uncertainty quantification requires one to evaluate the forward model enough times that one can construct a cheap (but accurate) approximation to the numerical solver. If the outputs from the simulators of varying fidelities are correlated, one can leverage information from low-fidelity simulators to augment information from a small number of evaluations of the high-fidelity simulator to construct an accurate surrogate model. In this work, we demonstrate a deep neural network (DNN) approach for constructing accurate surrogate models for uncertainty quantification by fusing information from multifidelity sources. DNNs are naturally suited for multifidelity information fusion because of their hierarchical representation of information. Our approach is validated on synthetic examples and demonstrated on real application examples from advanced manufacturing.

Rohit Tripathi, Ilias Bilionis
Purdue University
[email protected], [email protected]

MS9

Robust and Scalable Methods for the Dynamic Mode Decomposition

The dynamic mode decomposition (DMD) is a low-rank approximation of time-series data which can be interpreted as the best-fit linear dynamical system approximation of the data. Because of this interpretation, the DMD has applications in reduced-order modeling and control. We consider the DMD from an algorithmic perspective and show that modern methods from the variable projection literature allow the DMD to be computed in a general setting. In particular, we show that the DMD can be computed in a manner which is robust to outliers in the data.

Travis Askham
University of Washington
[email protected]
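
For reference, the standard SVD-based ("exact") DMD that this talk builds on can be sketched in a few lines of Python on synthetic snapshot data; the robust, variable-projection variants discussed in the talk are not implemented here, and the toy data and rank are invented.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic snapshots: two oscillatory spatial modes plus small noise.
t = np.linspace(0, 4 * np.pi, 129)
x = np.linspace(-5, 5, 200)
data = (np.outer(1.0 / np.cosh(x), np.cos(2.3 * t))
        + np.outer(np.tanh(x) / np.cosh(x), np.sin(1.1 * t)))
data += 1e-3 * rng.standard_normal(data.shape)

X, Y = data[:, :-1], data[:, 1:]          # snapshot pairs: Y is approx A X
r = 4                                      # truncation rank

U, s, Vh = np.linalg.svd(X, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T
Atilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)    # reduced linear operator
evals, W = np.linalg.eig(Atilde)
modes = Y @ Vr @ np.linalg.inv(Sr) @ W               # exact DMD modes

dt = t[1] - t[0]
print("continuous-time DMD frequencies:", np.sort(np.abs(np.log(evals).imag / dt)))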

MS9

Nonparametric Estimation for Stochastic Dynamical Systems

Given time series observations of a noisy system, how can we learn effective equations of motion? We frame this problem as nonparametric estimation of the drift f and diffusion Γ terms in systems of stochastic differential equations. We focus on models of the form dXt = f(Xt) dt + Γ dWt, where Γ is constant and Wt is Brownian motion. Given sufficiently high-frequency time series, we can directly attack the estimation problem using maximum likelihood techniques. This results in the same least-squares problem for the drift field f considered by Brunton, Proctor, and Kutz [PNAS, 2016]. When only low-frequency time series are available, we use diffusion bridge sampling to augment the data, leading to an expectation maximization (EM) algorithm. We study the performance of the EM approach as a function of signal-to-noise ratio, and compare EM against other techniques.

Harish S. Bhat
University of California, Merced
Applied Mathematics
[email protected]
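
The high-frequency regime mentioned above can be illustrated with a short Python sketch: simulate dX = f(X) dt + G dW for a known drift, then recover the drift by least squares on a polynomial dictionary (the SINDy-style regression the abstract refers to). The double-well drift and all tuning constants are invented for this toy example; the diffusion-bridge EM step is not shown.

import numpy as np

rng = np.random.default_rng(7)
dt, n_steps, G = 1e-3, 200_000, 0.5
f_true = lambda x: x - x ** 3               # double-well drift

# Euler-Maruyama simulation of the scalar SDE.
x = np.empty(n_steps)
x[0] = 0.1
for k in range(n_steps - 1):
    x[k + 1] = x[k] + f_true(x[k]) * dt + G * np.sqrt(dt) * rng.standard_normal()

# Maximum-likelihood / least-squares drift estimate on a polynomial dictionary:
# (X_{k+1} - X_k) / dt is regressed on 1, x, x^2, ..., x^5 evaluated at X_k.
degree = 5
Phi = np.vander(x[:-1], degree + 1, increasing=True)
dxdt = np.diff(x) / dt
coef, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)

print("estimated drift coefficients (const, x, x^2, ...):")
print(np.round(coef, 2))    # expect values roughly [0, 1, 0, -1, 0, 0]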

MS9

Identifying Nonlinear Dynamics and Intrinsic Coordinates under Uncertainty

Abstract not available at time of publication.

Steven Brunton
University of Washington
[email protected]

MS9

Sparse Identification of Nonlinear Dynamics for Model Predictive Control in the Low-data Limit

Abstract not available at time of publication.

Eurika Kaiser
University of Washington
[email protected]

MS10

UQ with Dependent Variables in Wind Farm Applications

One of the important aspects of UQ is the choice of input samples for complex (computationally expensive) simulations. For independent inputs, several well-known methods such as stochastic collocation exist. In the case of dependent inputs for which only a dataset is available (while the underlying distribution is unknown), it is not clear how to adapt the existing methods from the independent case. We propose the use of partitioning and clustering methods to select the input samples and use an output approximation which is piecewise constant on the clusters. This leads to substantial improvement over Monte Carlo sampling for strongly dependent datasets. Furthermore, we discuss a new method to find and quantify dependencies in large datasets. This method is based on estimating the Renyi entropy by minimum spanning trees (MSTs), including approximations to the MST for large datasets. The proposed method can be useful for sensitivity analysis. In this case, quantification of the dependencies between inputs and output leads to an ordering of the input variables, where the strongest dependency indicates that the corresponding input variable most strongly affects the output. With this goal in mind, a test case is presented which applies these methods to the estimation of the power output of a wind turbine, given a dataset of weather conditions.

Anne Eggels
CWI Amsterdam
The Netherlands
[email protected]

Daan Crommelin
CWI Amsterdam
[email protected]

MS10

Uncertainty Quantification in Large-scale Multiphysics Applications using Exascale Approaches

The study of complex physical systems is often based on computationally intensive high-fidelity simulations. To build confidence and improve the prediction accuracy of such simulations, the impact of uncertainties on the quantities of interest must be measured. This, however, requires a computational budget that is typically a large multiple of the cost of a single calculation, and thus may become infeasible for expensive simulation models featuring a large number of uncertain inputs and highly non-linear behavior. In this regard, multi-fidelity methods have become increasingly popular in recent years as acceleration strategies to reduce the computational cost. These methods are based on hierarchies of generalized numerical resolutions or model fidelities, and attempt to leverage the correlation between high- and low-fidelity models to obtain a more accurate statistical estimator without requiring a large number of additional high-fidelity calculations. Exascale computing resources promise to facilitate the use of these approaches on larger-scale problems by providing 1-10k times augmented floating-point capacity, but at the expense of requiring more complex data management, as memory is expected to become more heterogeneous and distributed. The objective of this work, therefore, is to explore the performance of multi-fidelity strategies in large-scale multiphysics applications using Exascale-ready computational tools based on the Legion parallel programming system.

Gianluca Iaccarino, Lluis Jofre
Stanford University
Mechanical Engineering
[email protected], [email protected]

Gianluca Geraci
Sandia National Laboratories, NM
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

MS10

An Efficient Reliability Analysis Tool for the Computation of Low Tail Probabilities and Extreme Quantiles in Multiple Failure Regions: Application to Organic Rankine Cycles

Calculation of tail probabilities is of fundamental importance in several domains, such as, for example, risk assessment. One major challenge consists in the computation of low failure probabilities and multiple failure regions, especially when an unbiased estimation of the error is required. Methods developed in the literature rely mostly on the construction of an adaptive surrogate, tackling some problems such as the metamodel building criterion and the global computational cost, at the price of a generally biased estimation of the failure probability. In this study, we propose a novel algorithm permitting both to build an accurate metamodel and to provide a statistically consistent error. In fact, the proposed method provides a very accurate unbiased estimation even for low failure probabilities or multiple failure regions. Moreover, another importance sampling technique is proposed in this work, permitting a drastic reduction of the computational cost when estimating some reference values, or when a very weak failure-probability event has to be computed directly from the metamodel. Several numerical examples are carried out, showing the very good performance of the proposed method with respect to the state of the art in terms of accuracy and computational cost. The proposed method is finally applied to the reliability assessment of turbine cascades in Organic Rankine Cycle flows.

Nassim Razaaly
INRIA Bordeaux Sud-Ouest
[email protected]

Pietro M. Congedo
INRIA Bordeaux Sud-Ouest (FRANCE)
[email protected]

MS10

Closure Models for Quantifying Uncertainty in Multiphase Flow Transport Problems

Multi-phase flow prediction is of crucial importance in chemical processes, power plants and pipelines. Simulating multi-phase flow for realistic conditions is difficult, and simplifications are necessary to make computations tractable. A prominent example is the cross-sectionally averaged one-dimensional two-fluid model (TFM), in which the effect of the smallest scales is captured via closure relations, like an interfacial friction model. However, the simplification introduced by averaging introduces a major issue, namely that the TFM becomes conditionally well-posed. This conditional well-posedness, and the uncertainties associated with closure relations, is a main open problem in the two-fluid model community. We take a new approach to solving the issue of well-posedness and closure relations in multi-phase flow. First, we perform high-fidelity (2D/3D) computations with two-phase flow CFD codes and use these to compute the unclosed quantities present in the 1D two-fluid model. Second, a neural network is trained that creates a functional relation between the averaged quantities and the closure terms. The neural network is built to include physical constraints, such as conservation law information, and trained under conditions for which the TFM is well-posed. Third, the trained model is evaluated and tested for conditions under which the conventional TFM is ill-posed, resulting in new physics-constrained closure models for multi-phase flow problems.

Benjamin Sanderse
Centrum Wiskunde & Informatica (CWI)
Amsterdam, the Netherlands
[email protected]

Sirshendu Misra
CWI Amsterdam
[email protected]

Yous van Halder
[email protected]

MS11

Optimal Energy Storage Scheduling in Electricity Markets with Multiscale Uncertainty

We present a mathematical programming framework to study revenue opportunities for different classes of energy systems using historical wholesale market prices and physics-based models. For California markets, we show that up to 60% of revenue opportunities come from real-time markets, which requires strategic bidding to exploit price volatility. With this motivation, we compare different price forecasting methods for multiscale energy prices. An especially challenging aspect of energy markets is exploiting the wealth of data. For example, the California markets generated over 1 trillion unique spatial and time-varying price data points in 2015. For these reasons, we consider Gaussian Process Latent Variable Models to generate low-dimensional probabilistic forecasts to use as scenarios in stochastic programming formulations for bidding into energy markets.

Alexander W. Dowling
University of Notre Dame
[email protected]

MS11

Uncertainty Quantification for Carbon Capture Systems

Uncertainty quantification (UQ) methods can help obtain reliable predictions for a wide variety of carbon capture systems. Carbon capture systems are often represented by complex computational models and have many sources of uncertainty. These models often may be constrained to experimental data. Some of these sources of uncertainty include unknown model parameter values and physical constants, model form error, and experimental error. To explore these sources of uncertainty, a Bayesian calibration framework is often used; this calibration framework may include a model discrepancy term between the model and reality, and, when necessary, a stochastic emulator trained on limited output from a computationally expensive model. This talk will discuss several advancements in UQ methodology for carbon capture systems as part of the Carbon Capture Simulation for Industry Impact (CCSI2): calibration for solvent submodels, challenges with calibration/emulation for multi-scale models with a large number of parameters, upscaling uncertainty, and issues with experimental design.

Peter W. Marcy, Troy Holland, K. Sham Bhat, Christine Anderson-Cook, James Gattiker
Los Alamos National Laboratory
[email protected], [email protected], [email protected], [email protected], [email protected]

MS11

Estimating Uncertainties using Neural Network Surrogates and Dropout

Using machine learning has been shown to be a powerful means to explore, understand, and optimize the performance of inertial confinement fusion (ICF) experiments. There is also a need to understand the uncertainty estimates in the predictions. Recent work has shown that large neural networks trained with the dropout regularization technique are approximations to a Gaussian process model. Using this technique we investigate the uncertainty in predictions for ICF simulations and determine how best to interpret the results of such predictions.

Ryan McClarren
Texas A&M University
[email protected]

MS11

Real-time Data Assimilation in Natural Gas Networks

We present a real-time data assimilation framework for large-scale natural gas networks. We demonstrate that the initial state is not observable given junction flow and pressure measurements in general, but that the use of a steady-state prior is sufficient to infer the spatial fields of the networks during dynamic transients. We also demonstrate that, under a steady-state prior, junction pressure measurements are sufficient to infer the spatial fields. We also provide results using models of different physical resolution and demonstrate that these can be implemented rather easily using Plasmo.jl (a Julia-based, graph-based modeling framework).

Victor M. Zavala
University of Wisconsin
[email protected]

MS12

Mathematical Modeling and Sampling of Stochastic Nonlinear Constitutive Laws on Smooth Manifolds

In this talk, we address the construction and sampling of high-dimensional stochastic models for spatially dependent anisotropic strain energy functions indexed by complex geometries. The models account for constraints related to convexity, coerciveness and consistency at small strains in the almost sure sense, hence making the representations mathematically consistent. The construction of an efficient sampling algorithm is subsequently proposed. The numerical strategy is based on solving a set of stochastic differential equations involving some underlying diffusion fields. The latter are specifically parametrized by local geometrical features inherited from the manifolds that define the boundaries of the domain under consideration. Various applications on vascular tissues are finally presented to demonstrate the relevance of the stochastic framework.

Brian Staber, Johann Guilleminot
Department of Civil and Environmental Engineering
Duke University
[email protected], [email protected]

MS12

Bayesian Uncertainty Quantification in the Prediction of Thermodynamical, Mechanical and Electronic Properties of Alloys using the Cluster Expansion Method

We present a framework for quantifying uncertainties when using the cluster expansion method as a surrogate model for quantum simulations in the prediction of material properties of alloys. Sparse Gaussian processes are used for simultaneous predictions in both the configuration space of alloys and the temperature domain. We will show for the first time rigorous results in the calculation of phase diagrams of alloys with error bars induced by model error and epistemic uncertainty from training our models with limited data. Our approach has been used for several other temperature-dependent thermodynamic, mechanical and electronic properties including band structure and density of states (DOS).

Sina Malakpour Estalaki, Nicholas Zabaras
University of Notre Dame
[email protected], [email protected]

MS12

Stochastic Modeling of Multiscale Materials

Predictive modeling and design under uncertainty for complex systems can be achieved through a successful coupling of modeling tools, uncertainty quantification tools, and assimilation tools of available experimental data at a multitude of scales. Two challenges are associated with coupling these tools: 1) to characterize all available evidence in a probabilistic setting so as to capture key dependencies, and 2) to efficiently and accurately transform this evidence into inference concerning the design objective function. These challenges highlight the importance of accurate characterization of probabilistic priors and evaluations of likelihoods. Each of these steps is typically simplified for analytical and computational convenience. We present and demonstrate an approach for addressing these challenges for non-crimp fabric composites with minimum assumptions, by relying on polynomial chaos representations of both input parameters and multiscale material properties. These surrogates provide an accurate map from parameters (and their uncertainties) at all scales to the system-level behavior relevant for design. A key important feature of our proposed procedure is its efficiency and accuracy. It is a distinct approach to composite design where the statistical dependence between the various material properties of the composites is not ignored, but rather introduced to impose mechanistic constraints on all the iterations within the design process.

Loujaine Mehrez
Department of Aerospace and Mechanical Engineering
University of Southern California
[email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Mathematics
[email protected]

MS12

Identifying Sample Properties of Random Fields that Yield Response Maxima

We consider two types of stochastic inverse problems that arise from the mapping A(x, Z) ↦ U(x, Z), where A(x, Z) is a random field, x is the spatial variable, Z is a random vector, and U(x, Z) is the stochastic response. The first type is concerned with identifying the distribution of the random vector Z given probabilistic information on functionals of the response. The randomness in the response is due to the random field A(x, Z) rather than observation noise, which may or may not be present. This type of inverse problem is ill-posed even if the inverse mapping is defined as point to set. We demonstrate through numerical examples and theoretical arguments the limitations of existing methods in obtaining meaningful solutions to this inverse problem. The second type of inverse problem assumes that the law of the random vector Z and all components of the mapping are fully specified. Our objective is to identify features of samples of the input random field A(x, Z) which yield specified properties of response functionals, e.g., extremes of U(x, Z) exceeding specified thresholds. We investigate this type of inverse problem using an example in mechanics and with the aid of surrogate models for the forward map.

Wayne Isaac T. Uy
Center for Applied Mathematics
Cornell University
[email protected]

Mircea Grigoriu
Cornell University
[email protected]

MS13

Active Subspaces in Parameterized Dynamical Systems

Defining, analyzing, and computing time-dependent active subspaces in dynamical systems. Implementing dynamic mode decomposition (DMD) [Schmid, 2010], sparse identification of nonlinear dynamics (SINDy) [Brunton et al., 2016], and other methods to discover and reconstruct dynamical systems that model global parameter sensitivity. Example applications include the Michaelis-Menten enzyme kinetics system, a 325-parameter combustion kinetics system, and the Lorenz system. This work provides unique insight into driving parameters in biological and engineering systems.

Izabel P. Aguiar, Paul Constantine
University of Colorado
[email protected], [email protected]

MS13

The Underlying Connections Between Identifiability, Sloppiness, and Active Subspaces

Parameter space reduction and parameter estimation—and more generally, understanding the shape of the information contained in models with observational structure—are essential for many questions in mathematical modeling and uncertainty quantification. As such, different disciplines have developed methods in parallel for approaching the questions in their field. Many of these approaches, including identifiability, sloppiness, and active subspaces, use related ideas to address questions of parameter dimension reduction, parameter estimation, and robustness of inferences and quantities of interest. In this talk, we will discuss how active subspace methods have intrinsic connections to methods from sensitivity analysis and identifiability, and present a common framework for these approaches. A particular form of the Fisher information matrix (FIM), which we denote the sensitivity FIM, is fundamental to many of these methods. Through a series of examples and case studies, we illustrate the properties of the sensitivity FIM in several contexts. We illustrate how the interplay between local and global and linear and non-linear can affect these methods, and also show how borrowing tools from the other approaches can give new insights.

Marisa Eisenberg, Andrew F. Brouwer
University of Michigan
[email protected], [email protected]

MS13

Gauss–Christoffel Quadrature for Inverse Regression

Sufficient dimension reduction (SDR) provides a framework for reducing the predictor space dimension in regression problems. We consider SDR in the context of deterministic functions of several variables such as those arising in computer experiments. Here, SDR serves as a methodology for uncovering ridge structure in functions, and two primary algorithms for SDR, sliced inverse regression (SIR) and sliced average variance estimation (SAVE), approximate matrices of integrals using a sliced mapping of the response. We interpret this sliced approach as a Riemann sum approximation of the particular integrals arising in each algorithm. We employ multivariate tensor-product Gauss-Christoffel quadrature and orthogonal polynomials to produce new algorithms that improve upon the Riemann sum-based numerical integration in SIR and SAVE. We call the new algorithms Lanczos-Stieltjes inverse regression (LSIR) and Lanczos-Stieltjes average variance estimation (LSAVE) due to their connection with Stieltjes' method (and Lanczos' related discretization) for generating a sequence of polynomials that are orthogonal with respect to a given measure. We show that the quadrature-based approach approximates the desired integrals, and we study the behavior of LSIR and LSAVE in numerical examples. As expected in high-order numerical integration, the quadrature-based LSIR and LSAVE exhibit exponential convergence in the integral approximations compared to the first-order convergence of the classical SIR and SAVE.

Andrew Glaws
Department of Computer Science
University of Colorado Boulder
[email protected]

Paul Constantine
University of Colorado Boulder
[email protected]

MS13

Identifiability of Linear Compartmental Models: The Singular Locus

Structural identifiability is the question of whether the parameters of a model can be recovered from perfect data. An important class of biological models is the class of linear compartmental models. Using standard differential algebra techniques, the question of whether a given model is generically locally identifiable is equivalent to asking whether the Jacobian matrix of a certain coefficient map, arising from the input-output equations, is generically full rank. A natural next step is to study the set of parameter values where the Jacobian matrix drops in rank, which we refer to as the locus of non-identifiable parameter values, or, for short, the singular locus. In this talk, we give a formula for these coefficient maps in terms of acyclic subgraphs of the model's underlying directed graph and study the case when the singular locus is defined by a single equation, the singular locus equation. In addition to giving a full understanding of the set of identifiable parameter values, we also show that the singular locus equation can be used to show certain submodels are generically locally identifiable. We determine the equation of the singular locus for two families of linear compartmental models, cycle and mammillary (star) models with input and output in a single compartment. We also state a conjecture for the corresponding equation for a third family: catenary (path) models.

Nicolette Meshkat
Santa Clara University
[email protected]

Elizabeth Gross
San Jose State University
[email protected]

Anne Shiu
Texas A&M University
[email protected]

MS14

On the Interaction of Observation and Prior Error Correlations

The importance of prior error correlations in data assimilation has long been known; however, observation error correlations have typically been neglected. Recent progress has been made in estimating and accounting for observation error correlations, allowing for the optimal use of denser observations. Given this progress, it is now timely to ask how prior and observation error correlations interact and how this affects the value of the observations in the analysis. Addressing this question is essential to understanding the optimal design of future observation networks for high-resolution numerical weather prediction. The interaction of the prior and observation error correlations is illustrated with a series of 2-variable experiments in which the mapping between the state and observed variables (the observation operator) is allowed to vary. In an optimal system, the reduction in the analysis error variance and the spread of information are shown to be greatest when the observation and prior errors have complementary statistics. This can be explained in terms of the relative uncertainty of the observations and prior on different spatial scales. It will be shown how the results from these simple 2-variable experiments can be used to inform the optimal observation density for higher-dimensional problems.

Alison M. Fowler
Univ. of Reading
[email protected]

Sarah Dance, Joanne Waller
University of Reading
[email protected], [email protected]

MS14

State Estimation for a Filtered Representation of a Chaotic Field

The numerical simulation of chaotic fields (e.g., turbulence, atmospheric convection, large-eddy simulation, wave-breaking, etc.) invariably leads to using a numerical grid (mesh) that is coarser than what is required to resolve all of the important physical processes being described by the set of governing partial differential equations. This coarse mesh will therefore miss important phenomena that the observational instruments used for state estimation will observe. We show here that the performance of state estimation can be improved by accounting for the fact that some physical processes are missing from the model simulations but observed by the observational instruments. We show how to properly construct a Bayesian framework for the situation where the numerical model is attempting to predict a truncated version of a much higher resolution state vector and the observations that are being assimilated are observing the elements of this high-resolution state vector. Then, we go on to describe a multi-resolution particle filtering framework that illustrates the main points of this theory as applied to a stochastic version of the Henon map.

Daniel Hodyss
Naval Research Laboratory
[email protected]

Peter O. Schwartz
Lawrence Berkeley Laboratory
[email protected]

MS14

Feature Data Assimilation: A Tool for Model Tuning

Data-driven methods for extracting structures or features from a system of interest have been useful in many fields, including oceanography and geophysics, and in the study of dynamical systems. Correspondingly, the use of these structures or features in data assimilation is of interest. A brief survey of feature data assimilation is presented, and the Bayesian formulation(s) of feature data assimilation are constructed. We then present several applications to model tuning, in which focusing on a particular feature in the data allows one to better tune model parameters to represent that feature.

John Maclean
University of North Carolina at Chapel Hill
[email protected]

MS14

Addressing Uncertainty in Cloud Microphysics Us-ing Radar Observations and Bayesian Methods

A leading-order source of error in numerical weather forecasts is the representation of clouds and precipitation, which is rife with uncertainties and approximations. This is, in part, an unavoidable consequence of the high degree of complexity inherent in modeling clouds and precipitation, as well as the lack of sufficiently detailed observations. However, much can still be done to quantify and constrain these uncertainties with the observations that do exist, such as widely available polarimetric weather radars. I will present research on three projects where Markov Chain Monte Carlo samplers are used to perform Bayesian estimation to gain microphysical information from radar observations. The first combines information on ice crystals from in situ cloud probes together with observations from two vertically-pointing radars, in order to improve understanding of the ice properties that influence large thunderstorm systems. The second is a Bayesian cloud property retrieval framework to leverage the information content of multi-frequency cloud radars. The third is a bulk microphysics parameterization scheme developed to eschew ad-hoc parameterization assumptions in favor of scheme structure and parameters that are entirely informed by radar observations themselves. An attendant advantage of our approach in all three projects is the robust (generally non-Gaussian and multivariate) estimate of uncertainty that is a reflection of our prior and observational uncertainties.

Marcus van Lier-Walqui
NASA Goddard Institute for Space Studies
[email protected]

MS15

Efficient Randomized Methods for D-Optimal Sensor Placement for Infinite-dimensional Bayesian Linear Inverse Problems Governed by PDEs

We address the problem of optimal experimental design (OED) for large-scale Bayesian linear inverse problems governed by PDEs. Specifically, the goal is to find optimal placement of sensors, where observational data are collected, so as to maximize the expected information gain. That is, we rely on the Bayesian D-optimal criterion given by the expected Kullback-Leibler divergence from posterior to prior. We introduce efficient randomized methods for fast computation of the OED objective and its gradient, and rely on sparsifying penalty functions to enforce sparsity of the sensor placements. Numerical results illustrating our framework will be provided in the context of D-optimal sensor placement for optimal reconstruction of the initial state in an advection-diffusion problem.

Alen Alexanderian
NC State University
[email protected]

Elizabeth Herman, Arvind Saibaba
North Carolina State University
[email protected], [email protected]

MS15

Fast Methods for Bayesian Optimal Experimental Design

We cast the data assimilation problem into a model inadequacy problem which is then solved by a Bayesian approach. The Bayesian posterior is then used for Bayesian Optimal Experimental Design (OED). Our focus is on the A- and D-optimal OED problems for which we construct scalable approximations that involve: 1) randomized trace estimators; 2) Gaussian quadratures; and 3) trace upper bounds. Unlike most contemporary approaches, our methods work directly with the inverse of the posterior covariance, i.e. the Hessian of the regularized misfit for linear data assimilation problems, and hence avoid inverting large matrices. We show that the efficiency of our methods can be further enhanced with randomized SVD. Various numerical results for linear inverse convection-diffusion data assimilation problems will be presented to validate our approaches.

Sriramkrishnan Muralikrishnan
Department of Aerospace Engineering
The University of Texas at Austin
[email protected]

Brad Marvin
University of Texas at Austin
[email protected]

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

MS15

Optimal Positioning of Mobile Radiation Sensors Using Mutual Information

Locating and identifying a radiation source with information from stationary detectors is a well-documented inverse problem, but the optimal placement of such detectors is a more challenging task. In particular, optimal sensor placement depends on the location of the source, which is generally not known a priori. Mobile radiation sensors, which can adapt to the given problem, are an attractive alternative. While most mobile sensor strategies designate a trajectory for sensor movement, we instead employ mutual information to choose the next measurement location from a discrete set of design conditions. We use mutual information, based on Shannon entropy, to determine the measurement location that will give the most information about the source location and intensity.

Kathleen Schmidt
Lawrence Livermore National Laboratory
[email protected]

Ralph C. Smith
North Carolina State Univ
Dept of Mathematics, CRSC
[email protected]

Deepak Rajan, Ryan Goldhahn
Lawrence Livermore National Laboratory
[email protected], [email protected]

Jason Hite
Department of Nuclear Engineering, North Carolina State University, P.O. Box 7909, Raleigh, NC 27695, USA
[email protected]

John Mattingly
North Carolina State University
[email protected]

MS15

Sparse Sensor Placement in Bayesian Inverse Problems

We present a sparse optimization framework for the placement of measurement sensors in the sense of optimal design of experiments for infinite-dimensional Bayesian inverse problems. Here, the measurement setup is modelled by a regular Borel measure on the spatial domain. After deriving an A-optimal design criterion, involving a sparse regularizer, we prove well-posedness of the problem and discuss first order optimality conditions. We present an approximation framework for the problem under consideration and discuss a priori estimates as well as the efficient numerical solution. The talk is concluded by numerical experiments.

Daniel Walter
Technische Universitat Munchen
[email protected]

MS16

Convergence Properties of a Randomized Quasi-Newton Method for Least Squares Solutions to Linear Systems

Linear systems are pervasive throughout mathematics and the sciences. With the size of the systems we are solving outpacing the increase in computing power, new tools must be developed to solve problems in the regime of big data. We propose a randomized iterative quasi-Newton method which converges almost surely to the least squares solution of a linear system. Utilizing curvature information allows for faster descent but comes at the cost of inverting a high-dimensional Hessian. We show that preconditioning the data with a random matrix at each iteration allows the tradeoff between the rate of convergence and the cost per iteration to be favorable in terms of the time to convergence. We provide an upper bound on the asymptotic rate of convergence and compare performance to that of more recognizable methods. We conclude with a discussion of future work: regularization, randomized dimension selection, and asynchronous parallelization.
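
The talk's specific method is not reproduced here; as a hedged illustration of the general idea, the sketch below uses a fresh random sketch of the data at every step to approximate curvature (an iterative-Hessian-sketch style iteration), converging to the least squares solution.

    import numpy as np

    def sketched_least_squares(A, b, sketch_size, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(n_iter):
            S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
            SA = S @ A                          # sketched data: cheap approximate curvature
            g = A.T @ (b - A @ x)               # exact (negative) gradient of 0.5*||Ax - b||^2
            u = np.linalg.solve(SA.T @ SA, g)   # sketched Newton-type step
            x = x + u
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2000, 50))
    b = rng.standard_normal(2000)
    x_hat = sketched_least_squares(A, b, sketch_size=200)
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(x_hat - x_ls))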

David A. Kozak
Colorado School of Mines
[email protected]

Julianne Chung, Matthias Chung, Joseph T. Slagel
Department of Mathematics
Virginia Tech
[email protected], [email protected], [email protected]

Luis Tenorio
Colorado School of Mines
[email protected]

MS16

A Unifying Framework for Randomization Methods for Inverse Problems

We propose a unified approach for constructing scalable randomized methods for large-scale inverse problems. From this broader general framework, four different existing randomized methods for solving inverse problems (eNKF, RML, PCGA, and RMA) as well as new methods can be derived. This new unified theoretical understanding will help further the development and analysis of randomized methods for solving large-scale inverse problems. In particular, from our numerical comparisons and past work, we will demonstrate that misfit randomization methods lead to improved solutions compared to methods that randomize the prior part, and offer an intuition to support this idea.

Ellen B. Le
ICES, UT Austin
[email protected]

Brad Marvin
University of Texas at Austin
[email protected]

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

MS16

Subsampled Second Order Machine Learning and Scalable Quantification of Uncertainties

Abstract not available at time of publication.

Michael Mahoney, Fred Roosta
UC Berkeley
[email protected], [email protected]

MS16

Low-Rank Independence Samplers for Bayesian Inverse Problems

In Bayesian inverse problems, the posterior distribution is used to quantify uncertainty about the reconstructed solution. MCMC algorithms often are used to draw samples from the posterior distribution; however, their implementations can be computationally expensive. We present a computationally efficient scheme for sampling high-dimensional Gaussian distributions in ill-posed Bayesian linear inverse problems. Our approach uses a proposal distribution based on a low-rank approximation of the prior-preconditioned Hessian. We show the dependence of the acceptance rate on the number of eigenvalues retained and discuss conditions under which the acceptance rate is high. Numerical experiments in image deblurring and computerized tomography show the benefits of our approach.
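
A minimal sketch of one reading of the low-rank idea (not the authors' code), assuming a zero-mean Gaussian prior, a linear forward map G, and independent Gaussian noise; the proposal here is centered at zero and mean handling is omitted.

    import numpy as np

    def low_rank_proposal_sampler(G, noise_var, prior_cov, k, rng):
        """Sample from N(0, C_k), C_k = L (I + V_k diag(lam_k) V_k^T)^{-1} L^T, where
        prior_cov = L L^T and (lam_k, V_k) are the k leading eigenpairs of the
        prior-preconditioned misfit Hessian Hpr = L^T G^T G L / noise_var."""
        L = np.linalg.cholesky(prior_cov)
        Hpr = L.T @ G.T @ G @ L / noise_var
        lam, V = np.linalg.eigh(Hpr)
        order = np.argsort(lam)[::-1][:k]
        lam_k, V_k = lam[order], V[:, order]
        d = 1.0 - 1.0 / np.sqrt(1.0 + lam_k)          # square-root correction factors
        def sample():
            z = rng.standard_normal(L.shape[0])
            w = z - V_k @ (d * (V_k.T @ z))           # apply (I + V_k diag(lam_k) V_k^T)^{-1/2}
            return L @ w
        return sample

    rng = np.random.default_rng(0)
    G = rng.standard_normal((40, 100))                # hypothetical linear forward map
    sampler = low_rank_proposal_sampler(G, noise_var=0.01,
                                        prior_cov=np.eye(100), k=20, rng=rng)
    theta = sampler()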

Arvind Saibaba
North Carolina State University
[email protected]

Alen Alexanderian
NC State University
[email protected]

Johnathan M. Bardsley
University of Montana
[email protected]

Andrew Brown
Clemson
[email protected]

Sarah Vallelian
Statistical and Applied Mathematical Sciences Institute
[email protected]

MS17

Why Uncertainty Matters in Deterministic Computations: A Decision Theoretic Perspective

Probabilistic numerical methods have recently become an active area of research, for the purpose of quantifying epistemic uncertainty in deterministic computations, such as numerical integration, optimisation, numerical linear algebra and solutions for differential equations. Such uncertainty is caused by limited computational budgets in solving numerical tasks in practice, and arises as discretisation errors in approximating continuous objects. In the literature, several motivations have been given for such approaches, including propagation of uncertainties in a pipeline of numerical computations, aiming at the assessment of the resulting scientific conclusions, or at the optimal allocation of computational resources. In this talk, I will argue that another important motivation is provided by decision problems, where one is required to choose an optimal action from given candidate actions associated with possible losses. I will discuss how probabilistic numerical methods can play fundamental roles in such decision problems, showcasing several motivating examples.

Motonobu Kanagawa
Max Planck Institute for Intelligent Systems, Tuebingen
[email protected]

MS17

Compression, Inversion and Approximate Principal Component Analysis of Dense Kernel Matrices at Near-linear Computational Complexity

Many popular methods in machine learning, statistics, and uncertainty quantification rely on priors given by smooth Gaussian processes, like those obtained from the Matern covariance functions. The resulting covariance matrices are typically dense, leading to (often prohibitive) O(N^2) or O(N^3) computational complexity.

In this work, we prove rigorously that the dense N x N kernel matrices obtained from elliptic boundary value problems and measurement points distributed approximately uniformly in a d-dimensional domain can be Cholesky factorised to accuracy ε in computational complexity O(N log^2(N) log^{2d}(N/ε)) in time and O(N log(N) log^d(N/ε)) in space. For the closely related Matern covariances we observe very good results in practice, even for parameters corresponding to non-integer order equations. As a byproduct, we obtain a sparse PCA with near-optimal low-rank approximation property and a fast solver for elliptic PDEs. We emphasise that our algorithm requires no analytic expression for the covariance function.

Our work connects the probabilistic interpretation of the Cholesky factorisation, the screening effect in spatial statistics, and numerical homogenisation. In particular, results from the game theoretic approach to numerical analysis ("Gamblets") allow us to obtain rigorous error estimates.

Florian Schaefer
California Institute of Technology
[email protected]

Tim Sullivan
Free University of Berlin / Zuse Institute Berlin
[email protected]

Houman Owhadi
Applied Mathematics
Caltech
[email protected]

MS17

Boundary Value Problems: A Case Study for Nested Probabilistic Numerical Methods

Boundary value problems (BVPs) typically require two tools from numerical analysis: an approximation to the differential operator, as well as a nonlinear equation solver. Thus, BVPs are ideal candidates to study the interaction of numerical methods. Motivated by recent advances in probabilistic numerics, we propose to model the BVP solution with a Gaussian process and solve the nonlinear equation by iterations of Bayesian quadrature. The result is a probabilistic BVP solver that encapsulates the different sources of approximation uncertainty in its predictive posterior distribution. We test our method on a set of benchmark problems and indicate how Bayesian decision theory could use the probabilistic description to refine the solution.

Michael Schober
Bosch Center for Artificial Intelligence
[email protected]

MS17

Probabilistic Implicit Methods for Initial Value Problems

Numerical integrators for IVPs that return probability measures instead of point estimates can be used to enrich the description of uncertainty in procedures requiring numerical computation of ODEs. An ever-expanding family of these methods is in development, within the recently emerging framework of Probabilistic Numerical Methods. One approach is to define a functional prior probability measure over the entire solution path, use time-stepping iterators to generate data, then output a functional posterior. Another approach, which allows for more interpretable analogues of classical algorithms, is based on randomising the integrator path at each step, aiming to match the error in the underlying method. This was introduced in Conrad et al. (2016), which provided an elegant convergence proof of the randomised method and discussed the complex issue of calibration. Teymur et al. (2016) considered an extension to Adams-type multistep methods and Lie et al. (2017) provided some rigorous generalisations to the theory. This talk will briefly survey this approach, and discuss a new construction derived from implicit (multistep) integrators which boosts numerical stability.

Onur Teymur
Imperial College London
[email protected]

MS18

MCMC for High Energy X-Ray Radiography

Image deblurring via deconvolution can be formulated as a hierarchical Bayesian inverse problem, and numerically solved by Markov Chain Monte Carlo (MCMC) methods. Numerical solution is difficult because

• inconsistent assumptions about the data outside of the field of view of the image lead to artifacts near the boundary; and

• the Bayesian inverse problem is high-dimensional for high-resolution images.

The numerical MCMC framework I present addresses these issues. Boundary artifacts are reduced by reconstructing the image outside the field of view. Numerical difficulties that arise from high dimensions are mitigated by exploiting sparse problem structure in the prior precision matrix.

Jesse Adams
The University of Arizona
[email protected]

MS18

Iterative Construction of Gaussian Process Surrogate Models for Bayesian Inference in Combustion

A new algorithm is developed to tackle the issue of sampling non-Gaussian model parameter posterior probability distributions, which arise from solutions to Bayesian inverse problems characterized by expensive forward models. Gaussian process (GP) regression is utilized in constructing a surrogate model of the posterior distribution. An iterative approach is adopted for the optimal selection of points in parameter space used to train the GP surface. The efficacy of this new algorithm is tested on inferring reaction rate parameters in a hydrogen-oxygen combustion model.
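
A hedged sketch of the general pattern (not the authors' selection criterion): iteratively fit a GP surrogate of an expensive log-posterior and add the candidate point with the largest predictive standard deviation; the target function and all settings below are hypothetical.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_log_posterior(theta):        # stand-in for a costly forward model + likelihood
        return -0.5 * np.sum((theta - 0.3) ** 2 / 0.05)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(5, 2))                       # initial design
    y = np.array([expensive_log_posterior(x) for x in X])

    for _ in range(20):                                       # iterative refinement
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                      normalize_y=True).fit(X, y)
        cand = rng.uniform(0, 1, size=(500, 2))               # random candidate pool
        _, std = gp.predict(cand, return_std=True)
        x_new = cand[np.argmax(std)]                          # most uncertain candidate
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_log_posterior(x_new))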

Leen Alawieh
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Marcus Day
Lawrence Berkeley National Laboratory
[email protected]

Jonathan Goodman
Courant Institute
[email protected]

John B. Bell
CCSE
Lawrence Berkeley Laboratory
[email protected]

MS18

Rigorous Integration of Reduced-order Models in Bayesian Inference via Statistical Error Models

Performing Bayesian inference with large-scale models can be computationally costly due to the large number of forward solves incurred by posterior-sampling methods. To mitigate this cost, large-scale forward models are often replaced with an inexpensive surrogate model. Unfortunately, this strategy generally leads to incorrect posterior distributions, as the likelihood is incorrect: it does not reflect the error incurred by the surrogate model. This talk presents a strategy for reducing the computational cost of Bayesian inference while simultaneously improving the accuracy of the posterior by accounting for the error of the reduced surrogate model. The method (1) replaces the large-scale forward model with a projection-based reduced-order model, and (2) employs a statistical error model (constructed via the ROMES method) to quantify the discrepancy between the reduced-order model and the large-scale model. The resulting likelihood rigorously accounts for the epistemic uncertainty introduced by the reduced-order model, thereby enabling correct posteriors to be inferred, even when the reduced-order model itself is inaccurate.

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Wayne Isaac T. Uy
Center for Applied Mathematics
Cornell University
[email protected]

Fei Lu
Department of Mathematics, UC Berkeley
Mathematics group, Lawrence Berkeley National Laboratory
[email protected]

Matthias Morzfeld
University of Arizona
Department of Mathematics
[email protected]

MS18

Data Assimilation with Stochastic Reduced Models

In weather and climate prediction, data assimilation combines data with dynamical models to make predictions, using ensembles of solutions to represent the uncertainty. Due to limited computational resources, reduced models are needed and coarse-grid models are often used, and the effects of the subgrid scales are left to be taken into account. A major challenge is to account for the memory effects due to coarse graining while capturing the key statistical-dynamical properties. We propose to use nonlinear autoregression moving average (NARMA) type models to account for the memory effects, and demonstrate by an example of the Lorenz 96 system that the resulting NARMA type stochastic reduced models can capture the key statistical and dynamical properties and therefore can improve the performance of ensemble prediction in data assimilation.

Fei Lu
Department of Mathematics, UC Berkeley
Mathematics group, Lawrence Berkeley National Laboratory
[email protected]

Alexandre Chorin
Department of Mathematics
University of California at Berkeley
[email protected]

Xuemin Tu
University of Kansas
[email protected]

MS19

A Sequential Sampling Strategy for Extreme Event Statistics in Nonlinear Dynamical Systems

We develop a method for the evaluation of extreme event statistics associated with nonlinear dynamical systems, using a very small number of samples. From an initial dataset of design points, we formulate a sequential strategy that provides the next-best data point (set of parameters) that when evaluated results in improved estimates of the probability density function (pdf) for a scalar quantity of interest. The approach utilizes Gaussian process regression to perform Bayesian inference on the parameter-to-observation map describing the quantity of interest. We then approximate the desired pdf along with uncertainty bounds utilizing the posterior distribution of the inferred map. The next-best design point is sequentially determined through an optimization procedure that selects the point in parameter space that maximally reduces uncertainty between the estimated bounds of the pdf prediction. Since the optimization process utilizes only information from the inferred map, it has minimal computational cost. Moreover, the special form of the criterion emphasizes the tails of the pdf. The method is applied to estimate the extreme event statistics for a very high-dimensional system with millions of degrees of freedom: an offshore platform subjected to three-dimensional irregular waves. It is demonstrated that the developed approach can accurately determine the extreme event statistics using an orders-of-magnitude smaller number of samples compared with traditional approaches.

Mustafa [email protected]

Themistoklis SapsisMassachusetts Institute of [email protected]

MS19

Predicting Statistical Response and Extreme Events in Uncertainty Quantification Through Reduced-order Models

The capability of using imperfect statistical and stochastic reduced-order models to capture crucial statistics in turbulent flow and passive tracers is investigated. Much simpler and more tractable block-diagonal models are proposed to approximate the complex and high-dimensional turbulent flow equations. The imperfect model prediction skill is improved through a judicious calibration of the model errors using leading order statistics. A systematic framework of correcting model errors with empirical information theory is introduced, and optimal model parameters under this unbiased information measure can be achieved in a training phase before the prediction. It is demonstrated that crucial principal statistical quantities in the most important large scales can be captured efficiently with accuracy using the reduced-order model in various dynamical regimes of the flow field with distinct statistical structures.

Di [email protected]

MS19

Complementing Imperfect Models with Data for the Prediction of Extreme Events in Complex Systems

A major challenge in projection-based order reduction methods for nonlinear dynamical systems lies in choosing a set of modes that can faithfully represent the overall dynamics. Modes lacking in number or dynamical importance may lead to significant compromise in accuracy, or worse, completely different dynamical behaviors in the model. In this work, we present a framework for using data-driven models to assist dynamical models, obtained through projection, when the reduced set of modes is not necessarily optimal. We make use of the long short-term memory (LSTM), a recurrent neural network architecture, to extract latent information from the reduced-order time series data and derive dynamics not explicitly accounted for by the projection. We apply the framework to projected dynamical models of differing fidelities for prediction of intermittent events such as the Kolmogorov flow.

Zhong [email protected]

Themistoklis SapsisMassachusetts Institute of [email protected]

MS20

Design of Optimal Experiments for Compressive Sampling of Polynomial Chaos Expansions

Polynomial chaos (PC) expansions are basis representations that are frequently used to construct surrogate models for complex problems, where model inputs are uncertain. Recently much work has been done regarding various input parameter sampling strategies for the purpose of constructing PC approximations. Further, concepts from the design of optimal experiments (DOE) have been applied to construct PC surrogates via compressed sensing or overdetermined least squares approximation. In general, optimal designs are constructed by generating a large finite number of randomly generated candidate inputs; then a subset of the candidate samples that optimizes a given objective function is identified. While some research has been done to incorporate ideas from DOE into PC approximation, the role of sampling strategies used to generate candidate inputs has received less attention. In this work, we compare the performance of various sampling strategies used to generate candidate designs and the resulting effects those strategies have on the optimal designs constructed from the DOE and the quality of PC approximations.
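
One plausible instance of the candidate-plus-subset-selection workflow mentioned above, given here as a hedged sketch only (a greedy D-optimality criterion for a 1-D Legendre basis; the paper's specific criteria and sampling strategies are not reproduced).

    import numpy as np

    rng = np.random.default_rng(0)
    degree, n_select = 8, 40
    candidates = rng.uniform(-1.0, 1.0, size=2000)               # candidate input pool
    Phi = np.polynomial.legendre.legvander(candidates, degree)    # PC basis evaluations

    selected = list(rng.choice(len(candidates), size=degree + 1, replace=False))
    while len(selected) < n_select:
        best_gain, best_i = -np.inf, None
        M = Phi[selected].T @ Phi[selected]                        # current information matrix
        for i in range(len(candidates)):
            if i in selected:
                continue
            sign, logdet = np.linalg.slogdet(M + np.outer(Phi[i], Phi[i]))
            if sign > 0 and logdet > best_gain:
                best_gain, best_i = logdet, i
        selected.append(best_i)                                    # greedy D-optimal addition

    design = candidates[selected]                                  # selected design points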

Paul Diaz, Jerrad Hampton
University of Colorado, Boulder
[email protected], [email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

MS20

Sparse Approximation for Data-driven Polynomial Chaos Expansions and their Applications in UQ

In this talk, we will discuss a collocation method via compressive sampling for recovering arbitrary Polynomial Chaos expansions (aPC). Our approach is motivated by the desire to use aPC to quantify uncertainty in models with random parameters. The aPC uses the statistical moments of the input random variables to establish the polynomial chaos expansion and can cope with arbitrary distributions with arbitrary probability measures. To identify the aPC expansion coefficients, we use the idea of Christoffel sparse approximation. We present theoretical analysis to motivate the algorithm. Numerical examples are also provided to show the efficiency of our method.

Ling Guo, Yongle Liu
Shanghai Normal University
[email protected], [email protected]

Akil Narayan
University of Utah
[email protected]

Tao Zhou
Institute of Computational Mathematics
Chinese Academy of Sciences
[email protected]

MS20

Title Not Available

Abstract not available at time of publication.

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

MS20

L1 Minimization Method for Link Flow Correction

A computational method, based on ℓ1-minimization, is proposed for the problem of link flow correction, when the available traffic flow data on many links in a road network are inconsistent with respect to the flow conservation law. Without extra information, the problem is generally ill-posed when a large portion of the link sensors are unhealthy. It is possible, however, to correct the corrupted link flows accurately with the proposed method under a recoverability condition if there are only a few bad sensors which are located at certain links. We analytically identify the links that are robust to miscounts and relate them to the geometric structure of the traffic network by introducing the recoverability concept and an algorithm for computing it. The recoverability condition for corrupted links is simply the associated recoverability being greater than 1. In a more realistic setting, besides the unhealthy link sensors, small measurement noises may be present at the other sensors. Under the same recoverability condition, our method guarantees to give an estimated traffic flow fairly close to the ground-truth data and leads to a bound for the correction error. Both synthetic and real-world examples are provided to demonstrate the effectiveness of the proposed method.
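
A hedged sketch of the basic formulation only (the paper's recoverability analysis is not reproduced): given measured link flows and a node-link incidence matrix encoding flow conservation, find the conservative flow closest to the measurements in the ℓ1 sense via linear programming. The toy network below is hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def l1_flow_correction(A, f_obs):
        n = A.shape[1]
        c = np.concatenate([np.zeros(n), np.ones(n)])        # minimize sum of slacks t
        A_ub = np.block([[np.eye(n), -np.eye(n)],            #  f - t <= f_obs
                         [-np.eye(n), -np.eye(n)]])          # -f - t <= -f_obs
        b_ub = np.concatenate([f_obs, -f_obs])
        A_eq = np.hstack([A, np.zeros_like(A)])              # conservation: A f = 0
        b_eq = np.zeros(A.shape[0])
        bounds = [(None, None)] * n + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        return res.x[:n]

    # Toy 3-node cycle: links 0->1, 1->2, 2->0; a conservative flow is constant on the cycle.
    A = np.array([[-1.0, 0.0, 1.0],
                  [1.0, -1.0, 0.0],
                  [0.0, 1.0, -1.0]])
    f_obs = np.array([5.0, 5.0, 9.0])                        # link 2 miscounted
    print(l1_flow_correction(A, f_obs))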

Penghang Yin
Department of Mathematics, UCLA
[email protected]

Zhe Sun, Wenlong Jin
Department of Civil and Environmental Engineering, UCI
[email protected], [email protected]

Jack Xin
Department of Mathematics
UCI
[email protected]

MS21

Uncertainty Quantification in High-dimensional Dynamical Systems Using a Data-driven Low-rank Approximation

We present a non-intrusive and data-driven method for constructing a low-rank approximation of high-dimensional stochastic dynamical systems. This method requires a snapshot sequence of samples of the stochastic field in the form of A ∈ R^{n×s×m}, where n is the number of observable data points, s is the number of samples and m is the number of time steps. These samples may be generated using deterministic solvers or measurements. In this methodology, the time-dependent data is approximated by an r-dimensional reduction of the form A_r(t) = U(t)Y(t)^T, where U(t) ∈ R^{n×r} is a set of deterministic time-dependent orthonormal basis vectors and Y(t) ∈ R^{s×r} are the stochastic coefficients. We derive explicit evolution equations for U(t) and Y(t) and use the data to solve these equations. We demonstrate that this reduction technique captures the strongly transient stochastic responses with high-dimensional random dimensions. We demonstrate the capability of this method for time-dependent fluid mechanics problems.
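
For reference only, the sketch below forms the best rank-r factorization A(t) ≈ U(t)Y(t)^T at each time step directly from the sample snapshots via SVD; this is a hedged illustration of the factorization, not the evolution equations derived in the talk, and the data are synthetic.

    import numpy as np

    def instantaneous_low_rank(A, r):
        """A has shape (n, s, m): n observation points, s samples, m time steps.
        Returns lists of U(t) (n x r, orthonormal columns) and Y(t) (s x r)."""
        n, s, m = A.shape
        U_t, Y_t = [], []
        for k in range(m):
            U, sigma, Vt = np.linalg.svd(A[:, :, k], full_matrices=False)
            U_t.append(U[:, :r])                          # deterministic orthonormal modes
            Y_t.append((sigma[:r, None] * Vt[:r, :]).T)   # stochastic coefficients
        return U_t, Y_t

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 64, 10))                # hypothetical snapshot data
    U_t, Y_t = instantaneous_low_rank(A, r=5)
    print(np.linalg.norm(A[:, :, 0] - U_t[0] @ Y_t[0].T))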

Hessam Babaee
University of Pittsburgh
[email protected]

MS21

Warpings, Embeddings, and Latent Variables: The Quest of Learning from Multi-fidelity Data

The success of machine learning approaches to multi-fidelity modeling relies on their ability to distill useful cross-correlations between low- and high-fidelity data. Models based on linear auto-regressive schemes have yielded significant computational expediency gains in cases where such cross-correlations are predominately linear; however, they fail to handle cases for which cross-correlations are complex. By identifying the capabilities and limitations of linear auto-regressive models, we propose a general framework for multi-fidelity modeling using deep neural networks. Motivated by the general embedding theorems of Nash, Takens, and Whitney, we demonstrate how nonlinear warpings, high-dimensional embeddings, and latent variables can help us disentangle factors of variation in the cross-correlation structure between low- and high-fidelity data. Finally, we put forth a collection of benchmark problems that can help us analyze the advantages and limitations of existing algorithms, as well as highlight the effectiveness of the proposed approach.

Paris Perdikaris
Massachusetts Institute of Technology
[email protected]

MS21

Hidden Physics Models: Machine Learning of Partial Differential Equations

While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrodinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.

Maziar Raissi, George Em Karniadakis
Brown University
[email protected], [email protected]

MS21

Physics-based Machine Learning via Adaptive Reduced Models and Multi-fidelity Modeling

New technologies are changing the way we think about designing and operating future engineering systems. In particular, the combination of sensing technologies and computational power brings new opportunities for data-driven modeling and data-driven decision-making. Yet data alone cannot deliver the levels of predictive confidence and modeling reliability demanded for safety-critical systems. For that, we must build on the decades of progress in rigorous physics-based modeling and associated uncertainty quantification. This talk discusses our work at the intersection of physics-based and data-driven modeling, with a focus on the design of next-generation aircraft. We show how adaptive reduced models combined with machine learning enable dynamic decision-making onboard a structural-condition-aware UAV. We show how multifidelity formulations exploit a rich set of information sources to achieve multidisciplinary design under uncertainty for future aircraft concepts.

Laura Mainini
Politecnico di Torino
[email protected]

Renee Swischuk
Texas A&M University
[email protected]

Karen Willcox
MIT
[email protected]

MS22

Title Not Available

Abstract not available at time of publication.

Pierre F. [email protected]

MS22

Control of Weakly Observed Nonlinear Dynamical Systems using Reinforcement Learning

Abstract not available at time of publication.

Lionel Mathelin
LIMSI - CNRS, France
[email protected]

Alex Gorodetsky
Sandia National Laboratories
[email protected]

Laurent Cordier
Institute PPRIME, France
[email protected]

MS22

An Information-theoretic Approach to Selecting Data-driven, Dynamical Systems via Sparse Regression

Abstract not available at time of publication.

Joshua L. Proctor
Institute for Disease Modeling
[email protected]

MS22

Title Not Available

Abstract not available at time of publication.

Zhizhen Zhao
University of Illinois
[email protected]

MS23

Bayesian Inference for Estimating Discrepancy Functions in RANS Turbulence Model Closures

Due to their computational tractability, closure models within the Reynolds-Averaged Navier-Stokes (RANS) framework remain a vital tool for modeling turbulent flows of engineering interest. However, it is well known that RANS predictions are locally corrupted by epistemic model-form uncertainty. A prime example of a modeling assumption with only a limited range of applicability is the Boussinesq hypothesis. We therefore aim to locally perturb the Boussinesq Reynolds stress tensor at locations in the flow domain where the modeling assumptions are likely to be invalid. However, inferring such perturbations at every location in the computational mesh leads to a high-dimensional inverse problem. To make the problem tractable, we propose two additional transport equations which perturb the shape of the modeled Reynolds stress tensor, such that the spatial structure of the perturbation is PDE-constrained. Exploring the posterior distribution of the Reynolds stresses (or other quantities of interest) can now be achieved through a low-dimensional Bayesian inference procedure on the coefficients of the perturbation model. We will demonstrate our framework on a range of flow problems where RANS models are prone to failure.

Wouter N. Edeling
Stanford University
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

MS23

Inference of Model Parameters in a Debris Flow Model Using Experimental Data

Calibration of complex fluid flow models from experimental data is not a straightforward problem, particularly when the data are difficult to interpret, incomplete or scarce, with the risk of reaching incorrect conclusions. This work tackles the problem of inferring in a Bayesian setting the parameters of a complex non-linear debris-flow model from experimental data (see [D. L. George et al., Proceedings of the Royal Society of London A, 2014]). The data consist in time series of the debris flow thickness in a dam-break experiment. However, limited information is available regarding the processing and the precision of the data. Our Bayesian methodology constructs a surrogate model of the time series of debris flow elevation, using preconditioned polynomial chaos expansions, in order to reduce the computational effort to compute the MAP value of the parameters and sampling from the distribution. We contrast the results of the inference using the raw time series with the inference based on a few selected features extracted from the original data. The comparison reveals that the idea of working on some important features is a relevant approach for this problem involving large modelling errors. Several reasons support this claim: the easier prescription of suitable and simpler likelihood definition; the computational complexity reduction; and the possibility of obtaining a better assessment of the posterior uncertainty by preventing overfitting of the parameters.

Maria I. Navarro Jimenez
KAUST
[email protected]

Olivier Le Maitre
Laboratoire d'Informatique pour la Mecanique et les Sciences de l'Ingenieur
[email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

David George
U.S. Geological Survey
Cascades Volcano Observatory
[email protected]

Kyle T. Mandli
Columbia University
Applied Physics and Applied Mathematics
[email protected]

Omar Knio
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS23

Uncertainty Quantification Strategies in Systems of Solvers

The simulation of complex multi-physics phenomena often requires the use of coupled solvers, modelling different physics (fluids, structures, chemistry, etc.) with largely differing computational complexities. We call Systems of Solvers (SoS) a set of interdependent solvers where the output of an upstream solver can be the input of a downstream solver. A system of solvers typically encapsulates a large number of uncertain input parameters, challenging classical Uncertainty Quantification (UQ) methods such as spectral expansions and Gaussian process models which are affected by the curse of dimensionality. In this work, we develop an original mathematical framework, based on Gaussian Processes (GP), to construct a global metamodel of the uncertain SoS that can be used to solve forward and backward UQ problems. The key idea of the proposed approach is to determine a local GP model for each solver of the SoS. Several training strategies are possible, from a priori training set generation to active learning methods that identify the most unreliable GP and propose additional training points to improve the overall system prediction. The targeted systems of solvers are typically used to simulate complex multi-physics flows. In particular, in this work, the methodology is applied to space object reentry predictions.

Francois J. Sanson
INRIA Bordeaux Sud-Ouest
[email protected]

Olivier Le Maitre
Laboratoire d'Informatique pour la Mecanique et les Sciences de l'Ingenieur
[email protected]

Pietro M. Congedo
INRIA Bordeaux Sud-Ouest (FRANCE)
[email protected]

MS23

Comparison of Different Approximation Techniques for Uncertain Time Series Arising in Ocean Simulations

The analysis of time series is a fundamental task in many flow simulations such as oceanic and atmospheric flows. A major challenge is the construction of a faithful and accurate time-dependent surrogate with a manageable number of samples. Several techniques have been tested to handle the time-dependent aspects of the surrogate, including a direct approach, a low-rank decomposition, an auto-regressive model and a global Bayesian emulator. These techniques rely on two popular methods for uncertainty quantification, namely polynomial chaos expansion and Gaussian process regression. The different techniques were tested and compared on the uncertain evolution of the sea surface height forecast at two locations exhibiting different levels of variance. Two ensemble sizes were considered as well as two versions of polynomial chaos (ordinary least squares or ridge regression) and Gaussian processes (exponential or Matern covariance function) to assess their impact on the results. Our conclusions focus on the advantages and the drawbacks, in terms of accuracy, flexibility and computational costs, of the different techniques.

Pierre [email protected]

Mohamed IskandaraniRosenstiel School of Marine and Atmospheric SciencesUniversity of [email protected]

MS24

Risk-averse Optimal Power Flow via Surrogate Models

We present various numerical approaches for risk-averse optimal power flow calculations based on stochastic surrogate modeling of power systems driven by renewable power sources. In our approaches, we minimize an averse risk measure of surrogate models of the generation cost. Such surrogates approximate the map between cost and random and decision spaces. We propose two approaches for constructing cost surrogates: In the first approach we construct a basis expansion for the cost, with the coefficients estimated via sparse sampling from a limited set of random realizations of the system. In the second approach, we derive a quasi-Fokker-Planck equation, or probability density function (PDF) equation, for the state of the system. The stationary solution of the PDF equation is employed to approximate the PDF of the system's state, which is in turn used to estimate risk cost measures.

David A. Barajas-Solano, Xiu Yang
Pacific Northwest National Laboratory
[email protected], [email protected]

Alexandre M. Tartakovsky
PNNL
[email protected]

MS24

Assimilating Data in Stochastic Dynamics

In this talk we introduce a statistical treatment of inverse problems with stochastic terms. Here the solution of the forward problem is given by a distribution represented numerically by an ensemble of simulations. We develop a strategy to solve such inverse problems by expressing the objective as finding the closest forward distribution that best explains the distribution of the observations in a certain metric. We propose to use metrics typically employed in forecast verification.

Emil M. Constantinescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

Noemi Petra
University of California, Merced
[email protected]

Cosmin G. Petra
Lawrence Livermore National
Center for Advanced Scientific Computing
[email protected]

Julie Bessac
Argonne National Laboratory
[email protected]

MS24

Optimization and Design of Complex Engineering Systems using High-performance Computing

The optimization of many engineering systems reaches extreme size and unprecedented computational complexity due to the need to consider high fidelity physics and account for uncertainties. We will present scalable optimization algorithms and solvers for solving two such optimization problems: i. material optimization for the design of stiff structures using additive manufacturing; and, ii. economic optimization and control of large scale power grid systems under stochastic loads and generation. Our computational approach is based on scalable nonlinear programming algorithms, namely interior-point methods, and is specifically aimed at matching and surpassing the parallelism of existing physics solvers (e.g., finite element solvers for the elasticity problem governing material optimization) on massively parallel computing platforms. For this we developed specialized decompositions within the optimization, namely Schur complement-based decomposition for stochastic optimization and domain decomposition equipped with low-rank representations of the Hessian for material optimization. Finally, we present large scale simulations of the two motivating applications on the U.S. Department of Energy's supercomputers and provide a detailed discussion of the effectiveness and possible further improvement of the proposed techniques.

Cosmin G. Petra
Lawrence Livermore National
Center for Advanced Scientific Computing
[email protected]

MS24

PDF Estimation for Power Grid Systems via Sparse Regression

We present a numerical approach for approximating the probability density function (PDF) of quantities of interest (QoIs) of dynamical systems subject to uncertainty, with application to power grid systems driven by uncertain load and power generation. We employ Hermite polynomial expansions to construct a surrogate model for the map from random parameters to QoIs, which is then used to estimate the PDF of the QoIs via sampling. The coefficients of this expansion are estimated via compressed sensing from a small set of random realizations. The intrinsic low-dimensional structure of the map is exploited to identify a linear transformation of random space via iterative rotations that enhances the sparsity of the Hermite representation. The proposed sparse sampling approach requires significantly fewer realizations of the full dynamics of the system than the standard least squares method to construct the surrogate model. The PDF estimates provided by our approach are more accurate in terms of the Kullback-Leibler divergence than Monte Carlo estimates while using fewer realizations.
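
A hedged sketch of the sparse-Hermite-surrogate-plus-sampling pattern (the iterative-rotation step of the talk is omitted); the QoI function and all settings below are hypothetical stand-ins.

    import numpy as np
    from itertools import product
    from sklearn.linear_model import Lasso

    def hermite_basis(X, degree):
        """Tensorized probabilists' Hermite basis up to total degree `degree`."""
        n, d = X.shape
        He = [np.polynomial.hermite_e.hermevander(X[:, j], degree) for j in range(d)]
        cols = []
        for alpha in product(range(degree + 1), repeat=d):
            if sum(alpha) <= degree:
                cols.append(np.prod([He[j][:, alpha[j]] for j in range(d)], axis=0))
        return np.column_stack(cols)

    def qoi(x):                                     # stand-in for an expensive simulation
        return np.sin(x[0]) + 0.1 * x[1] ** 2

    rng = np.random.default_rng(0)
    d, degree, n_train = 4, 3, 60
    X = rng.standard_normal((n_train, d))           # standard Gaussian inputs
    y = np.array([qoi(x) for x in X])

    model = Lasso(alpha=1e-3, max_iter=50000).fit(hermite_basis(X, degree), y)

    X_big = rng.standard_normal((100000, d))        # cheap surrogate sampling
    y_big = model.predict(hermite_basis(X_big, degree))
    pdf, edges = np.histogram(y_big, bins=100, density=True)   # PDF estimate of the QoI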

Xiu Yang, David A. Barajas-Solano
Pacific Northwest National Laboratory
[email protected], [email protected]

Alexandre M. Tartakovsky
PNNL
[email protected]

William Rosenthal
Pacific Northwest National Laboratory
[email protected]

MS25

Stochastic Modeling of Uncertainties in Fast Essential Antarctic Ice Sheet Model

Quantifying the uncertainty in projections of sea-level rise induced by ice-sheet dynamics is a pivotal problem in assessing climate-change consequences. Sources of uncertainty in ice-sheet models include uncertain climate forcing, basal friction, sub-shelf melting, initialization, and other model features. Accounting for uncertainty in ice-sheet models (ISMs) is challenging: (i) the stochastic modeling of sources of uncertainty can be challenged by data scarcity and imperfect representations of physical processes, and (ii) the computational cost of ISMs can hinder their integration in Earth system models and limit the number of simulations available for uncertainty propagation. In this talk, we address the uncertainty quantification of sea-level rise projections by using the new fast essential ISM f.ETISh that affords computational tractability by limiting its complexity to the essential interactions and feedback mechanisms of ice-sheet flow. After highlighting how simulations require an initialization procedure that precedes their use in predicting climate-change effects, we will discuss the sources of uncertainty in this ISM and the information available for their stochastic modeling. We will show the stochastic modeling of a subset of these sources of uncertainty, followed by a propagation of uncertainty through the ISM and a sensitivity analysis to help identify the dominant sources of uncertainty in sea-level rise projections.

Kevin Bulthuis
Universite de Liege
[email protected]

Lionel Favier, Frank Pattyn
Universite Libre de Bruxelles
[email protected], [email protected]

Maarten Arnst
Universite de Liege
[email protected]

MS25

On the Robustness of Variational Multiscale Error Estimators for the Forward Propagation of Uncertainty

In this talk, we focus on the definition of a family of deterministic error estimators to characterize the discretization error for uncertainty quantification purposes. We first address the construction of a posteriori error estimators based on the Variational Multiscale (VMS) method with orthogonal subscales for the convection-diffusion-reaction (CDR) problem. Here, the global error estimator (rather than the distribution of the local error over the domain) is selected as the quantity of interest. The behavior of the error estimator is subsequently characterized by introducing system-parameter uncertainties in the coefficients of the CDR problem. The propagation of these uncertainties is conducted using a polynomial chaos expansion and a projection method, and convergence with respect to mesh refinement is finally investigated.

Oriol Colomes, Guglielmo Scovazzi
Duke University
[email protected], [email protected]

Johann Guilleminot
Department of Civil and Environmental Engineering
Duke University
[email protected]

MS25

Coarse Approximation of Highly Oscillatory Random Elliptic Problems

We consider an elliptic problem with highly oscillatory, possibly random coefficients. We show how to approximate it using a problem of the same type, but with constant and deterministic coefficients, that are defined by an optimization procedure. The approach is robust to the fact that the information on the oscillatory coefficients in the equation can be incomplete. In the limit of infinitely small oscillations of the coefficients, we illustrate the links between this approach and the classical theory of homogenization, based on solving corrector problems. Comprehensive numerical tests and comparisons will be discussed, that show the potential practical interest of the approach. This is joint work with Claude Le Bris (Ecole des Ponts and Inria) and Simon Lemaire. Reference: https://arxiv.org/abs/1612.05807

Frederic Legoll
LAMI, ENPC
[email protected]

MS25

Surrogate-based Bayesian Inversion for the Model Calibration of Fire Insulation Panels

We outline an approach for the calibration of a finite element model (FEM), describing the heat transfer in insulation when exposed to fire, by using a full Bayesian inference approach. The considered FEM uses so-called temperature dependent effective material parameters. These parameters are required for applying the improved component additive method to determine the fire resistance of e.g. timber frame assemblies. We herein propose a full Bayesian procedure that allows the sought model parameters to be calibrated in an efficient way. We distinguish two approaches, namely (1) a standard Bayesian parameter estimation approach and (2) a more involved hierarchical Bayesian modeling approach. The standard Bayesian approach is applicable to the case of single measurements, where a best fit considering measurement and modeling errors is sought. The hierarchical approach can be applied to cases where multiple measurements under differing experimental conditions are available, so as to capture the variability of effective material parameters across experiments. Bayesian inference is typically carried out using computationally expensive Markov Chain Monte Carlo (MCMC) simulations. To circumvent these limitations, we use a surrogate modeling technique (polynomial chaos expansion). This surrogate model can be used in MCMC simulations instead of the actual forward model and helps reduce the computational burden to a feasible level.
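
A hedged sketch of the general surrogate-in-MCMC pattern only (not the authors' implementation): a random-walk Metropolis sampler whose likelihood evaluates a cheap stand-in surrogate instead of the expensive finite element model; prior, noise level and surrogate below are assumed.

    import numpy as np

    def surrogate_forward(theta):                 # placeholder for a trained PCE surrogate
        return theta[0] + 0.5 * theta[1] ** 2

    def log_posterior(theta, y_obs, noise_std):
        log_prior = -0.5 * np.sum(theta ** 2)     # standard Gaussian prior (assumed)
        resid = y_obs - surrogate_forward(theta)
        log_like = -0.5 * (resid / noise_std) ** 2
        return log_prior + log_like

    rng = np.random.default_rng(0)
    y_obs, noise_std, step = 1.2, 0.1, 0.2
    theta = np.zeros(2)
    lp = log_posterior(theta, y_obs, noise_std)
    chain = []
    for _ in range(20000):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(prop, y_obs, noise_std)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    posterior_samples = np.array(chain)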

Paul-Remo Wagner
ETH Zurich
[email protected]

Reto Fahrni, Michael Klippel, Bruno Sudret
ETH Zurich
[email protected], [email protected], [email protected]

MS26

Inherent Limitations to Parameter Estimation in Cancer Incidence Data

Multistage clonal expansion models are used to tie cancer incidence data to the modeled underlying biological processes of carcinogenesis. Characterizing the amount and form of the information available in cancer incidence data for this class of models is vitally important for understanding the implications of parameter estimates for the underlying biology and for predictions of future cancer incidence rates. We use profile likelihood methods from practical identifiability analysis to show that the usual cancer incidence data has inherently less information than theoretical structural identifiability analysis suggests, allowing us to systematically reduce the dimension of the estimated parameter space.

Andrew F. Brouwer
University of Michigan
[email protected]

Rafael Meza
University of Michigan
Department of Epidemiology
[email protected]

Marisa Eisenberg
University of Michigan
[email protected]

MS26

Structural Identifiability Analysis of Matrix Models for Structured Populations

Matrix models are commonly used in ecology and natural resource management, where they provide suitable modeling frameworks for age and stage structured populations. In this talk, we will examine how the model parameterization affects behavior and parameter estimation using identifiability and sensitivity analysis methods. We use both structural and practical identifiability analysis to examine which parameters can be estimated uniquely from data. We then use a combination of local and global sensitivity analysis to evaluate which parameters are strong potential intervention targets. The methods developed for examining the matrix models considered here can also be generalized to a range of matrix and difference equation models.

Ariel Cintron-Arias
Department of Mathematics and Statistics
East Tennessee State University
[email protected]

MS26

Parameter Identifiability and Effective Theories in Physics, Biology, and Beyond

The success of science is due in large part to the hierarchical nature of physical theories. These effective theories model natural phenomena as if the physics at macroscopic length scales were almost independent of the underlying, shorter-length-scale details. The efficacy of these simplified models can be understood in terms of parameter identifiability. Parameters associated with microscopic degrees of freedom are usually unidentifiable as quantified by the Fisher Information Matrix. I apply an information geometric approach in which a microscopic, mechanistic model is interpreted as a manifold of predictions in data space. Model manifolds are often characterized by a hierarchy of boundaries: faces, edges, corners, hyper-corners, etc. These boundaries correspond to reduced-order models, leading to a model reduction technique known as the Manifold Boundary Approximation Method. In this way, effective models can be systematically derived from microscopic first principles for a variety of complex systems in physics, biology, and other fields.

Mark K. Transtrum
Department of Physics and Astronomy
Brigham Young University
[email protected]

MS27

Adaptive Meshfree Backward SDE Filter

An adaptive meshfree approach is proposed to solve the nonlinear filtering problem based on forward backward stochastic differential equations. The algorithm relies on the fact that the solution of the forward backward stochastic differential equations is the unnormalized filtering density as required in the nonlinear filtering problem. Adaptive space points are constructed in a stochastic manner to improve the efficiency of the algorithm. We also introduce a Markov chain Monte Carlo resampling method to address the degeneracy problem of the adaptive space points. Numerical experiments are carried out against the auxiliary particle filter method in the rugged double well potential problem as well as the multi-target tracking problem to demonstrate the superior performance of our algorithm.

Feng Bao
University of Tennessee at Chattanooga
[email protected]

Vasileios Maroulas
University of Tennessee at Knoxville
[email protected]

MS27

Multilevel Picard Approximations for High-dimensional Nonlinear Parabolic Partial Differential Equations and High-dimensional Nonlinear Backward Stochastic Differential Equations

We introduce, for the first time, a family of algorithms for solving general high-dimensional nonlinear parabolic partial differential equations with a polynomial complexity in both the dimensionality and the reciprocal of the accuracy requirement. The algorithm is obtained through a delicate combination of the Feynman-Kac and the Bismut-Elworthy-Li formulas, and an approximate decomposition of the Picard fixed-point iteration with multi-level accuracy. The algorithm has been tested on a variety of nonlinear partial differential equations that arise in physics and finance, with very satisfactory results. Analytical tools needed for the analysis of such algorithms, including a nonlinear Feynman-Kac formula, a new class of semi-norms and their recursive inequalities, are also introduced. They allow us to prove that for semi-linear heat equations, the computational complexity of the proposed algorithm is bounded by O(d ε^{-4}) for any d > 0, where d is the dimensionality of the problem and ε ∈ (0, 1) is the prescribed accuracy.

Martin Hutzenthaler
Universitat Duisburg-Essen
[email protected]

MS27

Deep Optimal Stopping: Solving High-dimensional Optimal Stopping Problems with Deep Learning

Optimal stopping problems suffer from the curse of dimensionality. This talk concerns a new deep learning method for solving high-dimensional optimal stopping problems. In particular, the introduced algorithm can be employed for solving optimal equipment replacement problems and for the pricing of American options with a large number of underlyings. The presented algorithm is in fact applicable to a very broad class of financial derivatives (such as path-dependent American, Bermudan, and Asian options) as well as multifactor underlying models (such as Heston-type stochastic volatility models, local volatility models, and jump-diffusion models). Moreover, the algorithm allows one to approximately compute both the price and an optimal exercise strategy for the American option claim. Numerical results on benchmark problems are presented which suggest that the proposed algorithm is highly effective in the case of many underlyings, in terms of both accuracy and speed.

Sebastian Becker
Frankfurt University
Institute of Mathematics
[email protected]

Patrick Cheridito
ETH Zurich
Department of Mathematics
[email protected]

Arnulf Jentzen
ETH Zurich
[email protected]

Timo Welti
ETH Zurich
[email protected]

MS27

Bayesian Inference via Filtering Equations for Financial Ultra-high Frequency Data

We propose a general partially-observed framework of Markov processes with marked point process observations for ultra-high frequency (UHF) transaction price data, allowing other observable economic or market factors. We develop the corresponding Bayesian inference via filtering equations to quantify parameter and model uncertainty. Specifically, we derive filtering equations to characterize the evolution of the statistical foundation such as likelihoods, posteriors, Bayes factors and posterior model probabilities. Given the computational challenge, we provide a convergence theorem, enabling us to employ the Markov chain approximation method to construct consistent, easily-parallelizable, recursive algorithms. The algorithms calculate the fundamental statistical characteristics and are capable of implementing the Bayesian inference in real time for streaming UHF data, via parallel computing for sophisticated models. The general theory is illustrated by specific models built for U.S. Treasury Notes transactions data from GovPX and by the Heston stochastic volatility model for stock transactions data.

Yong Zeng
University of Missouri at Kansas City
Department of Mathematics and Statistics
[email protected]

Grace Xing Hu
School of Economics and Finance
University of Hong Kong
[email protected]

David Kuipers
Department of Finance
University of Missouri-Kansas City
[email protected]

Junqi Yin
Oak Ridge National Laboratory
[email protected]

MS28

Sobol’ Indices for Sensitivity Analysis with Depen-dent Inputs

When performing global sensitivity analysis (GSA), it is often assumed, for the sake of simplicity, for lack of information or for sheer expediency, that uncertain variables in the model are independent. It is intuitively clear, and easily confirmed through simple examples, that applying a GSA method designed for independent variables to a set of correlated variables generally leads to results that are hard to interpret, at best. We generalize the probabilistic framework for GSA pioneered by Sobol' to problems with correlated variables; this is done by reformulating his indices in terms of approximation errors rather than variance analysis. The implementation of the approach and its computational complexity are discussed and illustrated on synthetic examples.
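As a point of reference for the classical independent-input setting that the talk generalizes, a first-order Sobol' index can be estimated with a simple pick-freeze Monte Carlo scheme; the model function and the input sampler below are illustrative placeholders, not taken from the talk.

import numpy as np

def first_order_sobol(model, sampler, i, n=100_000):
    """Pick-freeze estimate of the first-order Sobol' index S_i for a model
    with independent inputs; model and sampler are user-supplied placeholders."""
    A = sampler(n)                 # (n, d) sample block
    B = sampler(n)                 # second, independent block
    ABi = B.copy()
    ABi[:, i] = A[:, i]            # "freeze" coordinate i from A, resample the rest
    yA, yABi = model(A), model(ABi)
    mu = 0.5 * (yA.mean() + yABi.mean())
    var = 0.5 * (yA.var(ddof=1) + yABi.var(ddof=1))
    # Cov(f(A), f(ABi)) estimates Var(E[Y | X_i]) because the two blocks share only X_i
    return (np.mean(yA * yABi) - mu ** 2) / var

# Illustrative use with the Ishigami test function (a standard GSA benchmark)
rng = np.random.default_rng(0)
sampler = lambda n: rng.uniform(-np.pi, np.pi, size=(n, 3))
ishigami = lambda X: (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
                      + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
print(first_order_sobol(ishigami, sampler, i=0))   # theoretical value is about 0.31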

Joseph L. Hart
North Carolina State University
[email protected]

Pierre Gremaud
Department of Mathematics
North Carolina State University
[email protected]

MS28

Global Sensitivity Analysis of Models with Dependent and Independent Inputs

Global sensitivity analysis (GSA) is used to identify key parameters whose uncertainty most affects the model output. This information can be used to rank variables, fix or eliminate unessential variables and thus decrease problem dimensionality. Variance-based Sobol sensitivity indices (SI) are most frequently used in practice owing to their efficiency and ease of interpretation. Most existing techniques for GSA were designed for models with independent inputs. In many cases, there are dependencies among inputs, which may have a significant impact on the results. Such dependences, in the form of correlations, have been considered in our previous works. There is a wide class of models involving inequality constraints which impose structural dependencies between model variables. In this case, the parameter space may assume any shape depending on the number and nature of constraints. This class of problems encompasses a wide range of situations encountered in the natural sciences, engineering, economics, etc., where model variables are subject to certain limitations imposed e.g. by conservation laws, geometry, costs, etc. In this talk, we consider two approaches for estimating Sobol SI for constrained GSA. The classical one is based on using direct integral formulas. In the second approach, a metamodel of the original full model is constructed first and then this metamodel is used for estimating Sobol SI. This approach significantly reduces the cost of evaluating Sobol SI.

Sergei S. Kucherenko
Imperial College London
[email protected]


Oleksiy Klymenko
University of Surrey, Guildford GU2 7XH, UK
[email protected]

Nilay Shah
Imperial College London, London SW7 2AZ, UK
[email protected]

MS28

Goal-oriented Sensitivity Analysis Using Perturbed-law Based Indices

One of the most critical hypotheses in uncertainty propagation studies is the choice of the distributions of uncertain input variables which are propagated through a computational model. In general, such probability distributions come from various sources: statistical inference, design or operation rules, expert judgment, etc. In all cases, the probabilistic models for these variables are established with a certain level of accuracy or confidence. Hence, bringing stringent justifications to the overall approach requires quantifying the impact of the assumptions made on the input laws on the output criterion. Perturbed Law based Indices (PLI) perform such a robustness analysis, different from the standard sensitivity analysis methods. The principle is to assess the influence of a perturbation of a parameter or a moment of the input distribution on some output quantity of interest. Any quantity of interest can be considered, such as the mean of the model output, its variance, a probability that the output exceeds a threshold, or a quantile of the output.

Thibault Delage, Bertrand Iooss, Anne-Laure Popelin, Roman Sueur
EDF R&D
[email protected], [email protected], [email protected], [email protected]

MS28

Shapley Effects for Sensitivity Analysis with Dependent Inputs

Global sensitivity analysis of a numerical model consists in quantifying, by way of sensitivity indices, the contributions of each of its input variables to the variability of its output. Based on the functional variance analysis, the popular Sobol indices are difficult to interpret in the presence of statistical dependence between inputs. Recently introduced, the Shapley effects, which consist of allocating a part of the variance of the output to each input, are promising to solve this problem. In this talk, from several analytical results, we study the effects of linear correlation between some Gaussian input variables on Shapley effects, comparing them to classical first-order and total Sobol indices. This illustrates the interest, in terms of interpretation, of the Shapley effects in the case of dependent inputs. We also empirically study the numerical convergence of estimating the Shapley effects. For the practical engineering issue of computationally expensive models, we show that the substitution of the model by a metamodel makes it possible to estimate these indices with precision. The results presented in this talk include joint work with Art Owen (Stanford University, USA) and Bertrand Iooss (EDF, Paris, France).
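For reference, the Shapley effect of input j in the normalized form commonly used in this literature allocates the output variance according to the Shapley value from cooperative game theory; this standard definition is reproduced for context and is not specific to the analytical results of the talk:

% Shapley effect of input j, with value function val(u) = Var(E[Y | X_u]) / Var(Y)
\mathrm{Sh}_j \;=\; \sum_{u \subseteq \{1,\dots,d\}\setminus\{j\}}
  \frac{(d-|u|-1)!\,|u|!}{d!}\,
  \bigl(\mathrm{val}(u \cup \{j\}) - \mathrm{val}(u)\bigr),
\qquad
\sum_{j=1}^{d} \mathrm{Sh}_j = 1 .

Unlike first-order Sobol indices, the Shapley effects sum to one even when the inputs are statistically dependent, which is what makes their interpretation convenient in that case.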

Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, France
[email protected]

MS29

High-dimensional Stochastic Inversion via Adjoint Models and Machine Learning

Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even with gradient information provided. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, characterized by a tight coupling between a gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse of dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated spatial random field. Moreover, non-Gaussian full posterior probability distribution functions are estimated via an efficient LMCMC method on both the projected low-dimensional feature space and the recovered high-dimensional parameter space. We demonstrate this computational framework by integrating and adapting recent developments such as data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to solve inverse problems in linear elasticity.
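A minimal sketch of this kind of coupling, assuming a user-supplied log-posterior gradient in the latent space and using scikit-learn's KernelPCA purely for illustration (the actual forward model, kernel choice and preconditioning of the talk are not reproduced):

import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical training realizations of the spatial random field, shape (n_samples, n_cells)
prior_fields = np.random.default_rng(0).standard_normal((500, 4096))

# 1) KPCA identifies a low-dimensional feature space for the nonlinearly correlated field.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3, fit_inverse_transform=True)
z0 = kpca.fit_transform(prior_fields)[0]          # starting point in latent coordinates

def grad_log_posterior(z):
    """Placeholder: gradient of the log-posterior in latent coordinates.
    In practice this would chain adjoint-based model gradients through the KPCA map."""
    return -z                                      # standard-normal latent prior only

# 2) Unadjusted Langevin steps (a Metropolis correction could be added) explore the latent posterior.
def langevin_chain(z, n_steps=1000, step=1e-2, rng=np.random.default_rng(1)):
    samples = []
    for _ in range(n_steps):
        z = z + 0.5 * step * grad_log_posterior(z) + np.sqrt(step) * rng.standard_normal(z.shape)
        samples.append(z.copy())
    return np.array(samples)

latent_samples = langevin_chain(z0)
# 3) Map latent samples back to full fields to report posterior statistics on the field.
field_samples = kpca.inverse_transform(latent_samples)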

Xiao Chen
Lawrence Livermore National Laboratory
Center for Applied Scientific Computing
[email protected]

MS29

An Iterative Local Updating Ensemble Smoother for High-dimensional Inverse Modeling with Multimodal Distributions

The ensemble smoother (ES) has been widely used in high-dimensional inverse modeling. However, its application is limited to problems where uncertain parameters approximately follow Gaussian distributions. For problems with multimodal distributions, using ES directly would be problematic. One solution is to use a clustering algorithm to identify each mode, which is not very efficient when the dimension is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions in high-dimensional problems. This algorithm is based on updating local ensembles of each sample in ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Five numerical case studies are tested to show the performance of the proposed method. To show its applicability in practical problems, we test the ILUES algorithm with three inverse problems in hydrological modeling that have a multimodal prior distribution, a multimodal posterior distribution and a large number of unknown parameters, respectively.
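For context, the standard global, single-pass ensemble smoother update that ILUES localizes and iterates looks roughly as follows; the forward model, observations and error covariance are placeholders, and the local-ensemble selection of ILUES itself is not reproduced.

import numpy as np

def ensemble_smoother_update(params, forward, d_obs, obs_cov, rng=np.random.default_rng(0)):
    """One global ensemble-smoother update.

    params : (Ne, Np) ensemble of parameter vectors
    forward: callable mapping (Ne, Np) parameters to (Ne, Nd) predicted data
    d_obs  : (Nd,) observed data
    obs_cov: (Nd, Nd) observation-error covariance
    """
    preds = forward(params)                                  # (Ne, Nd)
    P = params - params.mean(axis=0)                         # parameter anomalies
    D = preds - preds.mean(axis=0)                           # predicted-data anomalies
    Ne, Nd = params.shape[0], len(d_obs)
    C_md = P.T @ D / (Ne - 1)                                 # cross-covariance (Np, Nd)
    C_dd = D.T @ D / (Ne - 1)                                 # data covariance  (Nd, Nd)
    K = C_md @ np.linalg.solve(C_dd + obs_cov, np.eye(Nd))    # Kalman-type gain
    # Perturb observations for each member (stochastic ES) and update the ensemble.
    d_pert = d_obs + rng.multivariate_normal(np.zeros(Nd), obs_cov, size=Ne)
    return params + (d_pert - preds) @ K.T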

Guang Lin
Purdue University
[email protected]


Jiangjiang Zhang
Zhejiang University
[email protected]

Weixuan Li
Exxon Mobil
[email protected]

Lingzao Zeng
Zhejiang University
[email protected]

Laosheng Wu
University of California Riverside
[email protected]

MS29

Using Surrogate Models to Accelerate Bayesian Inverse UQ

The laser flash experiment is a well-known technique used to determine the unknown thermal diffusivity of a material. The temperature of a material specimen is measured using an infrared sensor at discrete times over a time interval containing a heating laser flash. The temperature of the specimen is modelled as the solution of a time-dependent partial differential equation (PDE). As their actual values are unknown, some parameters of the PDE are modelled as random variables. Such parameters include the coefficients (including the thermal diffusivity) and boundary conditions. We pose the problem of estimating the thermal diffusivity as a Bayesian inverse problem, aiming to approximate the posterior distribution, given experimental data obtained from a laser flash experiment. Markov chain Monte Carlo (MCMC) sampling algorithms require calculation of the posterior density at each iteration. Here, each evaluation of the posterior requires the time-dependent PDE to be solved numerically. This results in an extremely computationally expensive routine, taking days to produce enough samples to approximate the posterior sufficiently well. We use a surrogate model within an MCMC routine to reduce this cost. Specifically, we use a stochastic Galerkin FEM solution to the PDE as a surrogate for repeatedly computing deterministic FE solutions. Investigations into how both the speed and accuracy of the approximation of the posterior are affected by this replacement are presented.

James Rynn
University of Manchester
[email protected]

Simon Cotter
University of Manchester
School of Mathematics
[email protected]

Catherine Powell
School of Mathematics
University of Manchester
[email protected]

Louise Wright
National Physical Laboratory
[email protected]

MS29

Learning Physical Laws from Noisy Data

We present a new data-driven approach to learn ordinary and partial differential equations, particularly shallow water equations and Navier-Stokes equations. The key idea is to identify the terms in the underlying fluid equations and to approximate the weights of the terms, with error bars, using Bayesian machine learning algorithms. In particular, Bayesian sparse feature selection and parameter estimation are performed. Numerical experiments show the robustness of the learning algorithms with respect to noisy data and their ability to learn various candidate equations with error bars to represent the quantified uncertainty. Comparisons with sequential threshold least squares and the lasso on noisy measurements are also studied. Prediction with error bars using our algorithm is illustrated at the end.
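One of the baselines mentioned, sequential threshold least squares (as used in SINDy-type identification), is simple enough to sketch; the candidate library, threshold and toy data below are illustrative placeholders rather than the talk's Bayesian procedure.

import numpy as np

def sequential_threshold_least_squares(Theta, dXdt, threshold=0.1, n_iter=10):
    """Identify sparse coefficients Xi with Theta @ Xi ~ dXdt.

    Theta : (m, p) library of candidate terms evaluated on the data
    dXdt  : (m,) observed time derivative (possibly noisy)
    """
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold           # prune small coefficients...
        Xi[small] = 0.0
        big = ~small
        if not big.any():
            break
        coef, *_ = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)
        Xi[big] = coef                            # ...and refit the active terms
    return Xi

# Illustrative use: recover dx/dt = -2*x + 0.5*x**3 from noisy samples
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=400)
dxdt = -2.0 * x + 0.5 * x**3 + 0.05 * rng.standard_normal(x.size)
library = np.column_stack([np.ones_like(x), x, x**2, x**3])   # candidate terms 1, x, x^2, x^3
print(sequential_threshold_least_squares(library, dxdt))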

Sheng Zhang, Guang Lin, Jiahao Zhang
Purdue University
[email protected], [email protected], [email protected]

MS30

Bridging High Performance Computing for Experimental Neutron Sciences

This talk will focus on mathematics developed to help bridge the computational and experimental facilities at Oak Ridge National Laboratory (ORNL). This talk will demonstrate that sparse sampling methods for large-scale experimental data can utilize HPC to analyze data for experimental neutron sciences. Sparse sampling has the ability to provide accurate reconstructions of data and images when only partial information is available for measurement. Sparse sampling methods have been demonstrated to be robust to measurement error. These methods have the potential to scale to large computational machines and analyze large volumes of data.

Rich Archibald
Oak Ridge National Laboratory
[email protected]

MS30

Efficient Numerical Methods for Stochastic Schrodinger Equations

We study the SIMPLEC algorithm for stochastic Schrodinger equations.

Jialin Hong
Chinese Academy of Sciences
[email protected]

MS30

Accounting for Model Error from Unresolved Scales in Ensemble Kalman Filters by Stochastic Parameterization

In this talk, the use of discrete-time stochastic parameterization will be investigated by numerical experiments to account for model error due to unresolved scales in ensemble Kalman filters. The parameterization produces an improved non-Markovian forecast model, which generates high quality forecast ensembles and improves filter performance. Results are compared with the methods of dealing with model error through covariance inflation and localization (IL), using as an example the two-layer Lorenz-96 system.

Xuemin Tu
University of Kansas
[email protected]

Fei Lu
Department of Mathematics, UC Berkeley
Mathematics Group, Lawrence Berkeley National Laboratory
[email protected]

Alexandre Chorin
Department of Mathematics
University of California at Berkeley
[email protected]

MS30

A Probabilistic Analysis and Rare Event Study of a Dynamical Queue for Modeling Human Operators

We study a model for task management for human operators. A novel approach to modeling the system is to consider a dynamical queue, in which a dynamical quantity called the utilization ratio tracks the approximate stress level an operator experiences while working. The case when the service time of a task entering service is a deterministic function of the utilization ratio has been well studied. The function was found, via experimental studies, to follow a Yerkes-Dodson law, a principle from psychology where human operators work more efficiently with more stress up to a point, beyond which further increases in stress decrease the effective work rate. We introduce a probabilistic version of the queue, where the service time is conditionally distributed on the value of the Yerkes-Dodson law and the utilization ratio. We analyze the behavior of the probabilistic dynamical queue when the service time distribution is conditionally exponential via the method of supplementary variables. We find its Chapman-Kolmogorov equations and study its stationary distribution. We also study general service time distributions via simulation. Rare events, such as the probability of buffer overflow, are also considered. Methodology adapted to this problem includes perfect simulation via coupling from the past. Importance sampling and splitting methods based on large deviations are used to study rare events. Improvements to rare event simulation are considered with the use of transport maps.

Benjamin J. Zhang
Massachusetts Institute of Technology
[email protected]

Tuhin Sahai
United Technologies Research Center
[email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS31

A Scalable Design of Experiments Framework for Optimal Sensor Placement

We present a design of experiments framework for sensor placement in the context of state estimation of natural gas networks. We aim to compute optimal sensor locations where observational data are collected, by minimizing the uncertainty in parameters estimated from Bayesian inverse problems, which are governed by partial differential equations. The resulting problem is a mixed-integer nonlinear program. We approach it with two recent heuristics that have the potential to be scalable for such problems: a sparsity-inducing approach and a sum-up rounding approach. We investigate two objectives: the total flow variance and the A-optimal design criterion. We conclude that the sum-up rounding approach produces shrinking gaps with increased meshes. We also observe that convergence for white-noise measurement error is slower than for the colored-noise case. For the A-optimal design, the solution is close to the uniform distribution, but for the total flow variance the pattern is noticeably different.
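As a point of reference, sum-up rounding converts a relaxed (fractional) placement into a binary one while keeping the running sums of the two solutions close. The sketch below shows the basic one-dimensional rule on a hypothetical relaxed solution; it is not tied to the gas-network formulation or to the particular ordering used in the talk.

import numpy as np

def sum_up_rounding(relaxed):
    """Round a relaxed solution w in [0,1]^n to binary values so that the running
    sums of the rounded and relaxed solutions stay within 1/2 of each other."""
    w = np.asarray(relaxed, dtype=float)
    rounded = np.zeros_like(w)
    deficit = 0.0                      # accumulated (relaxed - rounded) mass so far
    for k, wk in enumerate(w):
        deficit += wk
        if deficit >= 0.5:             # place a sensor once enough mass has accumulated
            rounded[k] = 1.0
            deficit -= 1.0
    return rounded

relaxed_placement = np.array([0.3, 0.2, 0.6, 0.1, 0.8, 0.4])   # hypothetical relaxation
print(sum_up_rounding(relaxed_placement))                      # -> [0. 1. 0. 0. 1. 0.]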

Jing Yu
University of Chicago
[email protected]

Mihai Anitescu
University of Chicago
[email protected]

Victor M. Zavala
University of Wisconsin-Madison
[email protected]

MS31

Accelerated MCMC using Bayesian Optimization

Markov Chain Monte Carlo (MCMC) has become the main computational workhorse in scientific computing for solving statistical inverse problems. It is difficult, however, to use MCMC algorithms when the likelihood function is computationally expensive to evaluate. The status quo tackles this problem by emulating the computational model up front based on a number of forward simulations, and then the emulator is used in the MCMC simulation instead of the actual model. However, this strategy has an obvious drawback: how should one choose the initial number of forward simulations to capture the complexity of the model response? A novel Metropolis-Hastings algorithm is proposed to sample from posterior distributions corresponding to computationally expensive simulations. The main innovation is emulating the likelihood function using Gaussian processes. The proposed emulator is constructed on the fly as the MCMC simulation evolves and adapted based on the uncertainty in the acceptance rate. The algorithm is tested on a number of benchmark problems where it is shown that it significantly reduces the number of forward simulations.
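A stripped-down sketch of the general idea, using scikit-learn's Gaussian process regressor to emulate the log-likelihood inside Metropolis-Hastings and refining the emulator whenever its predictive uncertainty at a proposal is large; the refinement trigger, prior and model below are illustrative placeholders, not the acceptance-rate criterion of the talk.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_log_like(theta):
    """Placeholder for the expensive forward-model log-likelihood."""
    return -0.5 * np.sum((theta - 1.0) ** 2)

rng = np.random.default_rng(0)
d = 2
X = rng.normal(size=(5, d))                      # small initial design
y = np.array([expensive_log_like(x) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)

theta, ll = np.zeros(d), expensive_log_like(np.zeros(d))
samples = []
for _ in range(2000):
    prop = theta + 0.3 * rng.normal(size=d)
    mean, std = gp.predict(prop.reshape(1, -1), return_std=True)
    if std[0] > 0.5:                             # emulator too uncertain: run the true model
        ll_prop = expensive_log_like(prop)       # ...and add the new point to the emulator
        X = np.vstack([X, prop]); y = np.append(y, ll_prop)
        gp.fit(X, y)
    else:
        ll_prop = mean[0]
    if np.log(rng.uniform()) < ll_prop - ll:     # Metropolis accept/reject (flat prior assumed)
        theta, ll = prop, ll_prop
    samples.append(theta.copy())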

Asif Chowdhury, Gabriel Terejanu
University of South Carolina
[email protected], [email protected]

MS31

Leader Selection in Stochastically Forced Consensus Networks

We are interested in assigning a pre-specified number of nodes as leaders in order to minimize the mean-square deviation from consensus in stochastically forced networks. This problem arises in several applications including control of vehicular formations and localization in sensor networks. For networks with leaders subject to noise, we show that the Boolean constraints (a node is either a leader or it is not) are the only source of nonconvexity. By relaxing these constraints to their convex hull we obtain a lower bound on the global optimal value. We also use a simple but efficient greedy algorithm to identify leaders and to compute an upper bound. For networks with leaders that perfectly follow their desired trajectories, we identify an additional source of nonconvexity in the form of a rank constraint. Removal of the rank constraint and relaxation of the Boolean constraints yields a semidefinite program for which we develop a customized algorithm well-suited for large networks. Several examples ranging from regular lattices to random graphs are provided to illustrate the effectiveness of the developed algorithms.

Fu Lin
United Technologies Research Center
[email protected]

MS31

Optimal Experimental Design for Metallic Fatigue Data

The optimal planning of follow-up experiments requires the specification of a model of interest and the application of efficient computational methods for the exploration of the design space. In this talk, we introduce a design problem that arises from the information gathered from fatigue tests performed at the Battelle Memorial Institute upon unnotched sheet specimens of 75S-T6 aluminum alloys. For a comprehensive statistical analysis of the original fatigue dataset, the calibration of some proposed models to deal with right-censored data and the treatment of model comparison, see Babuska, I. et al., Bayesian inference and model comparison for metallic fatigue data, Comput. Methods Appl. Mech. Engrg. 304 (2016) 171-196. Under a Bayesian framework, given some selected models, we apply different computational techniques for the estimation of the expected information gain to determine optimal follow-up experiments.

Marco Scavino
Universidad de la Republica
Montevideo, Uruguay
[email protected]

MS32

Bayesian Quadrature for Multiple Related Integrals

Bayesian probabilistic numerical methods are a set of tools providing posterior distributions on the output of numerical methods. The use of these methods is usually motivated by the fact that they can represent our uncertainty due to incomplete/finite information about the continuous mathematical problem being approximated. In this talk, we demonstrate that this paradigm can provide additional advantages, such as the possibility of transferring information between several numerical methods. This allows users to represent uncertainty more faithfully and, as a by-product, provides increased numerical efficiency. We propose the first such numerical method by extending the well-known Bayesian quadrature algorithm to the case where we are interested in computing the integrals of several related functions. We then demonstrate its efficiency in the context of multi-fidelity models for complex engineering systems, as well as a problem of global illumination in computer graphics.
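For background, standard single-integral Bayesian quadrature with a Gaussian process prior having kernel k and integration measure π yields the posterior mean and variance below; the extension to several related integrals discussed in the talk builds on this but is not reproduced here.

% Standard Bayesian quadrature for \int f \, d\pi from nodes x_1,\dots,x_n
\widehat{\Pi}[f] \;=\; z^{\top} K^{-1} f(X),
\qquad
z_i = \int k(x, x_i)\,\mathrm{d}\pi(x),
\qquad
K_{ij} = k(x_i, x_j),

% with posterior variance
\mathrm{Var} \;=\; \iint k(x,x')\,\mathrm{d}\pi(x)\,\mathrm{d}\pi(x') \;-\; z^{\top} K^{-1} z .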

Francois-Xavier Briol
University of Warwick
[email protected]

MS32

Adaptive Bayesian Quadrature for Approximate Inference

Bayesian Quadrature (BQ) is a rigorous framework for constructing posterior measures over integrals F = ∫ f(x) dx and allows for the incorporation of explicit priors, active selection of evaluation nodes, as well as a sound treatment of uncertainty. Despite its algebraic beauty, BQ has found little practical use in machine learning. Computational cost cubic in the number of evaluation nodes countervails their spatial coverage and prevents GP-based BQ from scaling to high-dimensional tasks. To achieve competitiveness with state-of-the-art MCMC, the BQ prior has to be tailored to the specific integration task. We focus on approximate inference as the field of application, which involves the computation of a potentially high-dimensional integral, the evidence. In this setting, speed-up compared to vanilla BQ can be achieved by specifically modeling a positive integrand and by evaluating the integrand in an adaptive manner in order to discover high-mass regions. I will report on strategies as well as challenges in making BQ practicable even for high-dimensional integration problems.

Alexandra Gessner
Max-Planck Institute for Intelligent Systems
[email protected]

MS32

Adaptive Bayesian Cubature using Quasi-Monte Carlo Sequences

Multidimensional integrals are often evaluated by simple Monte Carlo methods. Quasi-Monte Carlo methods employ correlated sequences whose empirical distribution function better mimics the distribution defining the integral. The author and his collaborators have recently developed methods for adaptively determining the sample size, n, of (quasi-)Monte Carlo methods needed to satisfy the user's error requirements. This talk describes an approach where the integrand is assumed to arise from a Gaussian process. Because quasi-Monte Carlo sequences and their matching kernels are used, the computational cost is O(n log n), rather than the typical O(n^3) required to determine the cubature weights and estimate the parameters in the covariance kernel. The credible interval is used to determine the number of data sites required to meet the predetermined error tolerance. We provide numerical examples.

Fred J. Hickernell
Illinois Institute of Technology
Department of Applied Mathematics
[email protected]

MS32

Fully Symmetric Sets for Efficient Large-Scale Probabilistic Integration

Kernel-based approximation methods typically suffer from the characteristic cubic computational and quadratic space complexity in the number of data points, hindering their use in many applications where a lot of data is required. We show that kernel (or Bayesian) quadrature rules, which carry a probabilistic interpretation allowing for meaningful quantification of uncertainty inherent to integral approximations, can be constructed efficiently for up to millions of points and in high dimensions if the node set is a union of fully symmetric sets, each such set consisting of all coordinate permutations and sign changes of a given vector. The central observation permitting (under some rather lax assumptions on the symmetry of the kernel and the integration measure) fast and exact computation of kernel quadrature weights is that all nodes in one fully symmetric set share the same weight. Consequently, the number of distinct weights that need to be computed is greatly diminished. For selecting the fully symmetric sets we propose a few different schemes, the most promising of these being sparse grids. The main drawback of the method is that we are not able to extend it beyond computing the quadrature weights. In particular, principled fitting of kernel hyperparameters, especially in high dimensions, remains a challenge.

Toni Karvonen, Simo Sarkka
Aalto University
[email protected], [email protected]

MS33

A Spatially-correlated Bayesian Gaussian Process Latent Variable Model for Dimensionality Reduction

An increasingly critical ingredient for uncertainty quantification tasks with high-dimensional stochastic spaces is the construction of latent variable models that can discover and represent underlying low-dimensional structure present in the data. Furthermore, one often has knowledge about the stochastic inputs in the form of an implicit model that can be sampled to generate realizations, but does not provide an explicit probability measure over the stochastic space. Such models often provide a picture that is much more realistic than random field models such as Gaussian processes, whose appeal lies in one's ability to easily perform dimensionality reduction analytically (e.g. via the KLE). In this work, we extend the Bayesian Gaussian process latent variable model [M. Titsias and N. D. Lawrence, AISTATS 2010] by incorporating explicit prior spatial information into the model, reflecting the characteristics commonly found in heterogeneous physical systems, while exploiting the spatial structure on which observations are taken to make the method computationally efficient. Our methodology allows us to perform dimensionality reduction with limited data and provides uncertainty estimates about the discovered latent variables thanks to its Bayesian formulation. We demonstrate the utility of the developed model by performing inference on a variety of example data sets and compare its performance to other approaches.

Steven Atkinson, Nicholas Zabaras
University of Notre Dame
[email protected], [email protected]

MS33

Scalable Inference with Transport Maps

Bayesian inference models often result in high-dimensional non-standard distributions, against which there is seldom an efficient and accurate way of computing integrals. We first review the framework of transport maps, where we construct a homeomorphism able to approximate the intractable distribution by a tractable one. Its construction is based on the solution of a variational problem defined over the infinite-dimensional space of monotone transports. We then show how these transports can be used to solve high-dimensional Bayesian inference problems by exploiting their intrinsic low-dimensional structure. Applications in engineering, physics and finance will be used to showcase the methodology.

Daniele Bigoni, Alessio Spantini, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected], [email protected]

MS33

An Approximate Empirical Bayesian Method for Large-scale Linear-Gaussian Inverse Problems

Abstract not available at time of publication.

Jinglai Li
Shanghai JiaoTong University, China
[email protected]

MS33

A Discrete Sequential Optimal Transport Method for Bayesian Inverse Problems

We present the Sequential Ensemble Transform (SET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving Bayesian inverse problems in more computationally efficient ways can have a profound impact on the science community. This research derives the novel SET method for exploring a posterior by solving a sequence of discrete optimal transport problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other Sequential Monte Carlo (SMC) methods. The SET method is shown to be superior to SMC in cases where particle degeneracy is more likely to occur.

Aaron Myers
University of Texas at Austin - ICES
[email protected]

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

Alexandre H. Thiery
National University of Singapore
[email protected]

Kainan [email protected]

MS34

Provably Convergent Multi-fidelity Bayesian Inference using Adaptive Delayed Acceptance

Abstract not available at time of publication.

Tiangang Cui
Monash University
[email protected]

MS34

Incorporating Epistemic Uncertainty from Lower-fidelity Models in Bayesian Inverse Problems

This paper is concerned with the solution of model-based Bayesian inverse problems. As with many other UQ tasks, this is hindered by the significant cost associated with each forward model evaluation and the large number of unknowns. A popular strategy to alleviate these difficulties has been the use of surrogate, lower-fidelity models which are much cheaper to evaluate. Independently of the particulars, the use of such surrogates introduces an additional source of (epistemic) uncertainty. That is, regardless of the amount of training data, the lower-fidelity model is incapable of predicting exactly the output(s) of interest of the reference, high-fidelity model. We adopt an augmented Bayesian strategy that can account for this additional source of uncertainty and reflect it in the posterior of the unknown model parameters. The flexibility in terms of the number and type of lower-fidelity model(s) employed allows the use of advanced inference strategies making use of derivatives, which further alleviates the computational burden. Furthermore, the framework lends itself naturally to adaptive refinements by identifying regions in the unknown parameter space where additional high-fidelity runs are needed in order to most efficiently refine the estimates obtained. We demonstrate the performance of various Bayesian coupling strategies between low- and high-fidelity models in the context of continuum thermodynamics.

Maximilian Koschade
Technical University of Munich
[email protected]

Phaedon S. Koutsourelakis
Technical University Munich
[email protected]

MS34

A Bayesian Interpretation of Kernel-based Methods for Multifidelity Approximation

A recently developed technique for forward uncertainty quantification (UQ) uses the induced reproducing kernel Hilbert space from a low-fidelity parametric model as a tool to understand the parametric variability of a more expensive high-fidelity model. This has been the cornerstone of a multifidelity simulation technique that seeks to attain accuracy comparable to an expensive high-fidelity model with the cost of an inexpensive low-fidelity model. We show that this methodology, explicitly developed for forward UQ problems, has an interesting Bayesian interpretation by introducing suitable notions of prior and posterior distributions.

Akil Narayan
University of Utah
[email protected]

MS34

Multifidelity Transport Maps for Bayesian Inference

Computational expense often prevents the use of Bayesian techniques in large time-critical applications. Transport maps, which are nonlinear random variable transformations, can be constructed offline (before collecting data) and thus have the potential to significantly reduce the online (after collecting data) cost of Bayesian inference. The offline construction phase, however, can also be computationally intractable. In this work, we outline a new multifidelity approach to sample-based transport map construction. Computational costs and performance are discussed in the context of sea-ice modeling.

Matthew Parno
US Army Cold Regions Research and Engineering Lab (CRREL)
[email protected]

MS35

Mathematical and Statistical Model Misspecifications in Modeling Immune Response in Renal Transplant Recipients

We examine uncertainty in clinical data from a kidney transplant recipient infected with BK virus and investigate mathematical model and statistical model misspecifications in the context of least squares methodology. A difference-based method is directly applied to data to determine the correct statistical model that represents the uncertainty in the data. We then carry out an inverse problem with the corresponding iterative weighted least squares technique and use the resulting modified residual plots to detect mathematical model discrepancy. This process is implemented using both clinical and simulated data. Our results demonstrate mathematical model misspecification when both simpler and more complex models are assumed compared to the data dynamics.

Neha Murad
Center for Research in Scientific Computation
North Carolina State University, Raleigh, NC, USA
[email protected]

MS35

Estimating the Distribution of Random Parameters in, and Deconvolving the Input Signal to, a Diffusion Equation Forward Model for a Transdermal Alcohol Biosensor

The distribution of random parameters in a distributed parameter model with unbounded input and output for the transdermal transport of ethanol is estimated. The model takes the form of a diffusion equation with the input being the blood alcohol concentration (BAC) and the output being the transdermal alcohol concentration (TAC). Our approach is based on the reformulation of the underlying dynamical system in such a way that the random parameters are now treated as additional space variables. When the distribution to be estimated is assumed to be defined in terms of a joint density, estimating the distribution is equivalent to estimating the diffusivity in a multi-dimensional diffusion equation, and thus well-established finite dimensional approximation schemes, functional analytic based convergence arguments, optimization techniques, and computational methods may all be employed. Once the forward population model has been identified or trained based on a sample from the population, the resulting distribution can then be used to deconvolve the BAC input signal from the biosensor-observed TAC output signal. In addition, our approach allows for the direct computation of corresponding credible bands without simulation. We use our technique to estimate bivariate normal distributions and deconvolve BAC from TAC based on data from a population that consists of multiple drinking episodes from a single subject and a population consisting of a single episode from multiple subjects.

Melike Sirlanci, I. Gary Rosen
Department of Mathematics
University of Southern California
[email protected], [email protected]

MS35

Individual Level Modeling of Uncertainty in Infectious Diseases

Appropriately modeling the uncertainty that underlies individual responses is crucial to building a useful predictive model. Using sparse individual data to scale up and calibrate a population-level model has long been recognized as a challenging and important problem in clinical trials of preventive and therapeutic medicine. Recent efforts have suggested that representing individual data as such, and not as a response from an 'average' individual, provides an improved informative framework with which to more accurately anticipate successes or failures of proposed intervention strategies.

Karyn Sutton
Senior Research Scientist, Institute for Disease Modeling
[email protected]

MS35

A Review of Algorithmic Tools for Causal Effect Quantification

Researchers in applied statistics have developed a number of methods for the quantification of treatment effect size and variability from observational (i.e., non-experimental) data. However, these methods suffer two major limitations. First, the proposed models may suffer from misspecifications as a result of marginalizing (unobserved latent confounders) and conditioning (non-random selection effects) relative to the data-generating model. Second, the data collected are often censored or incomplete. As a result of these uncertainties, inference methods associated with such models are inefficient at best and totally incorrect at worst. In this talk, several recent theoretical developments are reviewed with a particular focus on algorithmic considerations for the design of estimators that are both efficient and robust. A few examples will be selected from the fields of public health, epidemiology, biology, etc.

Clay Thompson
Research Statistician Developer, SAS Institute
[email protected]

MS36

Gromov's Method for Stochastic Particle Flow Nonlinear Filters

We present a new exact stochastic particle flow filter, using a theorem of Gromov. Our filter is many orders of magnitude faster than standard particle filters for high-dimensional problems, and our filter beats the extended Kalman filter accuracy by orders of magnitude for difficult nonlinear problems. Our theory uses particle flow to compute Bayes' rule, rather than a pointwise multiply. We do not use resampling of particles or proposal densities or any MCMC method. Rather, we design the particle flow with the solution of a linear first order highly underdetermined PDE. We solve this PDE as an exact formula which is valid for arbitrary smooth nowhere-vanishing densities. Gromov proves that there exists a nice solution to a linear constant coefficient PDE for smooth functions if and only if the number of unknowns is sufficiently large (at least the number of linearly independent equations plus the dimension of the state vector). A nice solution of the PDE means that we do not need any integration, and hence it is very fast. To dispel the mystery of Gromov's theorem we show the simplest non-trivial example. We also show several generalizations of Gromov's theorem. Particle flow is similar to optimal transport, but it is much simpler and faster because we avoid solving a variational problem. Optimal transport is deterministic whereas our particle flow is stochastic, like all such algorithms that actually work robustly.

Fred Daum
Raytheon
frederick e [email protected]

MS36

A Critical Overview of Controlled Interacting Particle Systems for Nonlinear Filtering

As the first talk in the minisymposium, I will introduce the control-based feedback particle filter algorithm and describe its relationship to the ensemble Kalman filter (e.g., Reich) as well as other types of particle flow algorithms (e.g., Daum). I will review results from studies that have shown favorable numerical comparisons for the control-based algorithms when compared to the importance sampling based algorithms (e.g., Berntorp, Stano, Surace). I will describe the common aspects of the control-based algorithms, in particular the partial differential equation (PDE) that needs to be solved to obtain the control law. I will carefully describe the associated numerical problem in the interacting particle setting and review the state-of-the-art algorithms for obtaining the solution. I will close with a discussion of open theoretical and numerical problems in this area. Advances on some of these problems (e.g., solution of the PDE) will also be discussed as part of the other talks (Meyn, Daum) in the minisymposium.

Prashant G. Mehta
Department of Mechanical and Industrial Engineering
University of Illinois at Urbana-Champaign
[email protected]

MS36

Feedback Particle Filter and the Poisson Equation

The particle filter is now a standard approximation of the nonlinear filter for discrete time hidden Markov models. The feedback particle filter is a more recent innovation that is also based on particles. The major difference is that the resampling step in the standard algorithm is replaced by an innovation gain. This talk will begin with the basic feedback particle filter algorithm, and focus on methods to approximate the gain function K. A fascinating and valuable representation of the gain is as the gradient of a solution to Poisson's equation. Specifically, under general conditions, K = ∇h, where h solves

Dh = −c,

where D is the differential generator associated with a Langevin diffusion, and c is the observation function (centered to have zero mean). Poisson's equation arises in many other areas of control and statistics, and there are many approaches available for approximations. Among these is a variant of the celebrated TD-learning algorithm. It is shown that the Langevin generator admits special structure that leads to more efficient approximation techniques based on either a finite-dimensional basis or kernel methods.
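One widely used particle-based approximation of this gain, reproduced here only for orientation and not necessarily among the variants emphasized in the talk, is the so-called constant-gain approximation, obtained by averaging the exact gain over the empirical distribution of the particles:

% Constant-gain approximation of the gain K = \nabla h,
% computed from the particle ensemble X^1, ..., X^N (illustrative form).
K \;\approx\; \frac{1}{N} \sum_{i=1}^{N}
  \bigl(c(X^i) - \hat c^{(N)}\bigr)\, X^i,
\qquad
\hat c^{(N)} = \frac{1}{N}\sum_{i=1}^{N} c(X^i).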

Sean Meyn
University of Florida
[email protected]

MS36

The Neural Particle Filter: Scalability and Biological Implementation

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. We argue that the Neural Particle Filter, which is very similar to the Feedback Particle Filter, can serve as a model of filtering by circuits in the brain, and discuss some of the specific problems of interpreting such a particle system as a network of neurons. Special attention is given to the problems of scalability in the number of dimensions and biological plausibility.

Simone Carlo Surace
Institute of Neuroinformatics, University of Zurich and ETH Zurich
[email protected]

Jean-Pascal Pfister, Anna Kutschireiter
Institute of Neuroinformatics
University of Zurich and ETH Zurich
[email protected], [email protected]

MS37

Optimal Experimental Design using Laplace Based Importance Sampling

In this talk, the focus is on optimizing strategies for the efficient computation of the inner loop of the classical double-loop Monte Carlo for Bayesian optimal experimental design. We propose the use of the Laplace approximation as an effective means of importance sampling, leading to a substantial reduction in computational work. This approach also efficiently mitigates the risk of numerical underflow. Optimal values for the method parameters are derived, where the average computational cost is minimized subject to a desired error tolerance. We demonstrate the computational efficiency of our method, as well as of a more recent approach that approximates the return value of the inner loop using the Laplace method. Finally, we present a set of numerical examples showing the efficiency of our method. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The last example deals with sensor placements in electrical impedance tomography to recover the fiber orientation in laminate composites.
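For context, the classical double-loop Monte Carlo estimator of the expected information gain, whose inner loop the Laplace-based importance sampling is meant to accelerate, can be written in the standard form below with N outer and M inner samples drawn from the prior:

% Double-loop Monte Carlo estimator of the expected information gain for design \xi
\mathrm{EIG}(\xi) \;\approx\; \frac{1}{N}\sum_{n=1}^{N}
\left[
  \log p\bigl(y_n \mid \theta_n, \xi\bigr)
  \;-\;
  \log\!\left( \frac{1}{M}\sum_{m=1}^{M} p\bigl(y_n \mid \theta_{n,m}, \xi\bigr) \right)
\right],
\qquad
\theta_n, \theta_{n,m} \sim p(\theta), \quad y_n \sim p(y \mid \theta_n, \xi).

The inner average over m is the term prone to numerical underflow and high cost, which is why importance sampling guided by a Laplace approximation of the posterior is attractive there.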

Joakim [email protected]

MS37

Optimal Design of Experiments in the Chemical Industry

Validated, dependable mathematical models are of ever growing importance in science and engineering, and today are also driven by a serious demand from industry. In this context, the new paradigmatic concept of digital twins may be worth mentioning, which is becoming more and more attractive among researchers from different industrial companies. Its idea is that every new system, product or physical or economical process should be accompanied by a digital twin, which comprises an ensemble of mathematical models and algorithms. It would ideally accompany the process and be used, among other things, to analyze data, predict malfunctioning and perform optimal operation. However, the application of mathematical models for the simulation, and even more for the optimization and control, of complex engineering processes requires their thorough validation and calibration by parameter and state estimation based on process data, which should preferably be obtained by optimized experiments and optimized measurement designs. For the latter, very efficient numerical methods for the complex non-standard optimal control problem of optimal experimental design for dynamic processes were developed, which have proven to be a powerful mathematical instrument of high economic impact in industrial practice, often cutting experimental costs by a factor of 5 to 10. The talk will especially address numerous applications of model validation methods from chemical engineering.

Georg Bock
University of Heidelberg
[email protected]

Ekaterina Kostina
Heidelberg University
[email protected]

MS37

Optimal Experimental Design Problem as Mixed-integer Optimal Control Problem

The optimization of one or more dynamic experiments in order to maximize the accuracy of the results of a parameter estimation, subject to cost and other technical inequality constraints, leads to very complex non-standard optimal control problems. One of the difficulties is that the objective function is a function of a covariance matrix and therefore already depends on a generalized inverse of the Jacobian of the underlying parameter estimation problem. Another difficulty is that in the case of sampling design, we deal with optimal control problems with mixed-integer controls. We are interested in the application of Pontryagin's maximum principle in order to understand the structure of the sampling design.

Ekaterina Kostina
Heidelberg University
[email protected]

MS37

Generalized Laplace Method for Optimal Experimental Design for Non-Gaussian Posteriors

The Laplace method has been shown to be very efficient for computing the information criteria in the Bayesian design of experiments. The assumption is usually that the posterior distribution concentrates at a single point, the MAP estimate. We extend the Laplace method to two more general scenarios. In the first scenario, the posterior distribution has a compact support and thus can be approximated by a truncated Gaussian. In the second scenario, the unknown parameters cannot be fully determined by the data from the proposed experiments, hence the information gain is computed only in the subspace where the data is informative.

Quan Long
United Technologies Research Center
[email protected]

MS38

Bayesian Filtering for Periodic, Time-varying Parameter Estimation

Many systems arising in biological applications are subject to periodic forcing. In these systems, the forcing parameter is not only time-varying but also known to have a periodic structure. We show how nonlinear Bayesian filtering techniques can be employed to estimate periodic, time-varying parameters, while naturally providing a measure of uncertainty in the estimation. Results are demonstrated using data from several biological applications.

Andrea Arnold
Worcester Polytechnic Institute
[email protected]

MS38

Uncertainty in Estimation using the Prohorov Metric Framework

Glioblastoma Multiforme (GBM) is a malignant brain cancer with a tendency to both migrate and proliferate. We have modeled GBM using a random differential equation version of the reaction-diffusion equation, where the diffusion parameter D and the growth rates are random variables. We investigate the ability to solve the inverse problem and recover the probability distributions of D and the growth rates, along with measures of their uncertainty, using the Prohorov metric. We give a brief overview of the use of the Prohorov metric, which is equivalent to the weak* topology on the space of probability measures when imbedded in the topological dual of the space of bounded continuous functions. This represents joint efforts with Erica Rutter and Kevin Flores at NCSU.

H. T. Banks
NC State University
Center for Research in Scientific Computation
[email protected]

MS38

A Bayesian Framework for Strain Identification from Mixed Diagnostic Samples

We present a statistical framework and computational methods for disambiguation of pathogen strains in mixed DNA samples. Our method is relevant for applications in public health and can be used, for example, to identify pathogens and monitor disease outbreaks. Mathematically, strain identification aims at simultaneously estimating a binary matrix and a real-valued vector such that their product is approximately equal to the measured data vector. The problem has a similar structure to blind deconvolution, but binary constraints are present in the formulation and enforced in our approach. Following a Bayesian approach, we derive a posterior density and present two MAP estimation methods that exploit the structure of the problem. The first one decouples the problem into smaller independent subproblems, whereas the second one converts the problem to a task of convex mixed-integer quadratic programming. The decoupling approach also provides an efficient way to integrate over the posterior.

Lars Ruthotto
Department of Mathematics and Computer Science
Emory University
[email protected]

Lauri Mustonen
Aalto University
Department of Mathematics and Systems Analysis
[email protected]

MS38

Physical-model-based, Data-driven Approach Toward Noninvasive Prediction of Intracranial Pressure

Intracranial pressure (ICP) monitoring is clinically significant for patients suffering cerebral diseases. However, clinically accepted ICP measurement data can only be obtained by invasive techniques, which carry high risks for patients. Although recent developments in mathematical modeling of cerebral hemodynamics have enabled us to better understand the mechanisms that drive ICP dynamics, the accuracy of the model predictions is far from being clinically accepted. On the other hand, noninvasive measurement techniques are widely used in clinical practice, and thus more and more ICP-related measurement data become available. In this work, the objective is to leverage both the physical model and ICP-related data to better estimate intracranial dynamics noninvasively. To this end, a simulation-based, data-driven framework is developed based on a multiscale model of cerebral hemodynamics and an iterative ensemble Kalman method. Measurement data of cerebral blood flow velocities (CBFV) from noninvasive techniques, e.g., transcranial Doppler ultrasound, are assimilated. Numerical results demonstrate the merits of the proposed framework.

Jian-Xun Wang, Jeffrey Pyne
University of California, Berkeley
[email protected], [email protected]

Xiao Hu
University of California, San Francisco
[email protected]

Shawn Shadden
University of California, Berkeley
[email protected]

MS39

Data-driven and Reduced Order Modeling: An Attempt at a Taxonomy of Approaches

With the growing interest in data-driven and reduced order approaches, there is a pressing need to classify different problem formulations and approaches using a consistent language. Since this field is inherently multi-disciplinary, even the use of basic terms such as "prediction" or "estimation" tends to be confusing and often misused. In this talk, an attempt is made at establishing a taxonomy of approaches in data-driven and reduced order modeling. Different classes of problems that are addressed by these approaches will be categorized and examples from the literature will be provided. The intent of this talk is to set the stage for the use of a common terminology such that the community can, in the future, efficiently navigate the literature and more easily exchange ideas.

Karthik Duraisamy
University of Michigan Ann Arbor
[email protected]

MS39

Offline-enhanced Reduced Basis Method through Adaptive Construction of the Surrogate Training Set

The Reduced Basis Method (RBM) is a popular certified model reduction approach for solving parametrized partial differential equations. One critical stage of the offline portion of the algorithm is a greedy algorithm, requiring maximization of an error estimate over parameter space. In practice this maximization is usually performed by replacing the parameter domain continuum with a discrete "training" set. When the dimension of parameter space is large, it is necessary to significantly increase the size of this training set in order to effectively search parameter space. Large training sets diminish the attractiveness of RBM algorithms since this proportionally increases the cost of the offline phase. In this work we propose novel strategies for offline RBM algorithms that mitigate the computational difficulty of maximizing error estimates over a training set. The main idea is to identify a subset of the training set, a "Surrogate Training Set" (STS), on which to perform greedy algorithms. The STSs we construct are much smaller in size than the full training set, yet our examples suggest that they are accurate enough to induce the solution manifold of interest at the current offline RBM iteration. We demonstrate the algorithm through numerical experiments, showing that it is capable of accelerating offline RBM procedures without degrading accuracy, assuming that the solution manifold has rapidly decaying Kolmogorov width.

Jiahua Jiang
University of Massachusetts, Dartmouth
[email protected]

Akil Narayan
University of Utah
[email protected]

Yanlai Chen
University of Massachusetts Dartmouth
[email protected]

MS39

Inverse Regression-based Uncertainty Quantification for High Dimensional Models

Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving many uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the preferred method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use an inverse regression method, e.g., the sliced inverse regression (SIR) method, to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, e.g., via the polynomial chaos expansion. While the first algorithm relies on the existence of an exact dimension reduction subspace, the second algorithm applies to a broader set of problems where the dimension reduction is approximate.
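Sliced inverse regression itself is compact enough to sketch; the version below follows the standard Li (1991) recipe (standardize, slice on the output, eigendecompose the covariance of slice means) and uses a toy model, not the IRUQ algorithms of the talk.

import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_directions=2):
    """Estimate sufficient-dimension-reduction directions with SIR (Li, 1991)."""
    n, d = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten the inputs: Z has identity covariance
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mu) @ L
    # Slice the data by the response and average the whitened inputs in each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((d, d))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back, span the estimated SDR subspace
    vals, vecs = np.linalg.eigh(M)
    directions = L @ vecs[:, ::-1][:, :n_directions]
    return directions / np.linalg.norm(directions, axis=0)

# Toy example: y depends on x only through a single direction
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.01 * rng.standard_normal(2000)
print(sliced_inverse_regression(X, y, n_directions=1).ravel())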

Weixuan Li
Exxon Mobil
[email protected]

Guang Lin
Purdue University
[email protected]

Bing Li
Department of Statistics
Penn State
[email protected]

MS39

Dimension Reduction-Accelerated Parameter Estimation

A grand challenge in uncertainty quantification is the solution of inverse problems governed by partial differential equations (PDEs). The forward problem is usually characterized by a large dimensional parameter field. As a result, directly solving this hundred- to thousand-parameter problem is computationally infeasible. Instead, we rely on the Bayesian solution to seek the most probable combination of the parameters. The Markov chain Monte Carlo (MCMC) method provides a powerful tool to generate the posterior distributions of the parameters as well as the most probable point in the parameter space. In this work, we also utilize gPC surrogates and combine state-of-the-art dimension reduction techniques to accelerate the MCMC evaluation. The data-driven dimension reduction, based on the conditional random field, provides a tractable approach to estimate the spatially dependent coefficient with only a few measurements.

Jing Li
Pacific Northwest National Lab
[email protected]

Alexandre M. [email protected]

MS40

Efficient Bathymetry Estimation in the Presence of Model and Observation Uncertainties

The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. We present scalable and robust methods for the bathymetry estimation problem in the presence of model and observation uncertainty and show the robustness and accuracy of the proposed methods with nearshore bathymetry estimation and river depth imaging examples.

Hojat Ghorbanidehno, Jonghyun Lee
Stanford University
[email protected], [email protected]

Matthew Farthing
US Army Corps of Engineers ERDC
Vicksburg, MS
[email protected]

Tyler Hesser
Coastal and Hydraulics Laboratory
US Army Engineer Research and Development Center
[email protected]

Peter K. Kitanidis
Dept. of Civil and Environmental Engineering
Stanford University
[email protected]

Eric F. Darve
Stanford University
Mechanical Engineering Department
[email protected]

MS40

Approximate Bayesian Inference under Reduced Model in Inverse Problem and Uncertainty Quantification

Abstract not available at time of publication.

Nilabja Guha
Texas A&M University
[email protected]

MS40

Bayesian Inference and Multiscale Model Reduction for Inverse Problems

In this talk, we present a strategy for accelerating posterior inference for statistical inverse problems. In many inference problems, the posterior may be concentrated in a small portion of the entire prior support, so it is much more efficient to build and simulate a surrogate only over the significant region of the posterior. To this end, we construct a reduced-order model, and an intermediate distribution is built using the approximate sampling distribution based on the reduced-order model. Then we construct another reduced-order model based on the intermediate distribution to explore a surrogate posterior. For Bayesian inference, Markov chain Monte Carlo is used to explore the surrogate posterior density, which is based on the surrogate likelihood and the intermediate distribution. The idea is extended to Bayesian data assimilation, where the priors are dynamically updated as more observation data become available; the reduced-order model is also dynamically updated in the data assimilation. Compared with a surrogate model based directly on the original prior, the proposed method improves the efficiency of the posterior exploration and reduces uncertainty for parameter estimation and model prediction. The proposed approach is studied in the context of multiscale model reduction and applied to inverse problems for time-fractional diffusion models in porous media.

Lijian Jiang
Los Alamos National Laboratory
[email protected]

Yuming Ba, Na Ou
Hunan University
[email protected], [email protected]

MS40

An Adaptive Reduced Basis ANOVA Method for High-dimensional Bayesian Inverse Problems

Abstract not available at time of publication.

Qifeng Liao
ShanghaiTech University
[email protected]

Jinglai Li
Shanghai Jiao Tong University, China
[email protected]

MS41

Ordered Line Integral Methods for Computing the Quasi-potential

The quasi-potential is a key function in Large Deviation Theory. It characterizes the difficulty of the escape from the neighborhood of an attractor of a stochastic non-gradient dynamical system due to the influence of small white noise. It gives an estimate of the invariant probability distribution in the neighborhood of the attractor up to the exponential order and allows one to readily compute the maximum likelihood escape path. I will present a new family of methods for computing the quasi-potential on a regular mesh in 2D and 3D, named the Ordered Line Integral Methods (OLIMs). Similar to Vladimirsky's and Sethian's Ordered Upwind Methods, OLIMs employ the dynamic programming principle. Contrary to them, they (i) feature a new hierarchical approach for the use of computationally expensive triangle updates, and (ii) directly solve local minimization problems using quadrature rules instead of solving the corresponding Hamilton-Jacobi-type equation by a first-order finite difference upwind scheme. These differences enable a notable speed-up and a dramatic reduction of error constants (up to 1000 times).

Maria K. Cameron
University of Maryland
[email protected]

MS41

Computing the Quasi-Potential in Systems with Anisotropic Diffusion

It is natural to model a number of biological phenomena using SDEs with varying and anisotropic diffusion terms. An example is the model for gene regulation in Lambda Phage (Aurell and Sneppen, 2002). The question of interest there is the transition from the phase where the phage reproduces within the cell without killing it to the one where it kills it. I will introduce a modification of an Ordered Line Integral Method for the case of variable anisotropic diffusion for computing the quasi-potential, the key function determining transition rates and Maximum Likelihood Paths (MLPs) in the small noise case. A collection of accuracy tests will be presented. The effects of the anisotropy on the quasi-potential and MLPs will be demonstrated on the Maier-Stein model. Finally, an application of the proposed solver to the analysis of the Lambda Phage gene regulation model will be discussed.

Daisy Dahiya
University of Maryland
[email protected]

MS41

Optimal and Robust Control for Piecewise-deterministic Processes.

Piecewise-deterministic (PD) processes provide a convenient framework for modeling non-diffusive stochastic perturbations in a planning environment (e.g., changes in the wind direction when planning a path for a sailboat). These stochastic switches are typically modeled through a continuous-time Markov chain on a set of "deterministic modes", with transition probabilities estimated using historical data. This mathematical formalism leads to a system of weakly-coupled PDEs (one for each mode) and additional computational challenges. In this talk we will discuss several notions of optimality and robustness for controlling PD processes, focusing on numerical techniques needed to make them practical. [Joint work with Zhengdi Shen.]

Alexander Vladimirsky
Dept. of Mathematics
Cornell University
[email protected]

MS41

Rare Event Study on the Checkpoint Activation in the Budding Yeast Cell Cycle

Abstract not available at time of publication.

Peijie Zhou
Peking University
[email protected]

MS42

Compressive Sensing with Cross-validation and Stop-sampling for Sparse Polynomial Chaos Expansions

Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in UQ of expensive and high-dimensional physical models. We perform numerical studies employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance with polynomial chaos bases, accuracy and computational expense tradeoffs between bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. We find that ADMM demonstrates empirical advantages through consistently lower errors and faster computational times.
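A minimal sketch (not the authors' implementation) of cross-validated selection of the LASSO regularization constant for a sparse polynomial chaos expansion; scikit-learn's coordinate-descent LASSO stands in for the solvers named above, and the 1D Legendre basis and test model are assumptions.

import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, degree = 40, 15
x = rng.uniform(-1.0, 1.0, n)
y = 0.5 * x + 2.0 * legendre.Legendre.basis(3)(x)       # sparse "model" output

# measurement matrix: Legendre polynomials evaluated at the samples
Phi = np.column_stack([legendre.Legendre.basis(k)(x) for k in range(degree + 1)])

def cv_error(alpha, n_folds=5):
    # average held-out error for a given regularization constant
    errs = []
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(Phi):
        model = Lasso(alpha=alpha, max_iter=50000).fit(Phi[train], y[train])
        errs.append(np.mean((model.predict(Phi[test]) - y[test]) ** 2))
    return np.mean(errs)

alphas = np.logspace(-6, 0, 25)
best = min(alphas, key=cv_error)
coef = Lasso(alpha=best, max_iter=50000).fit(Phi, y).coef_
print("selected alpha:", best)
print("recovered nonzero PC coefficients:", np.flatnonzero(np.abs(coef) > 1e-3))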

Xun Huan, Cosmin Safta, Khachik Sargsyan, Zachary Vane, Guilhem Lacaze
Sandia National Laboratories
[email protected], [email protected], [email protected], [email protected], [email protected]

Joseph C. Oefelein
Sandia National Laboratories
Combustion Research Facility
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

MS42

Enhanced Sparse Recovery of Polynomial Chaos Expansions Using Dimension Adaptation and Near-optimal Sampling

Compressive sampling has become a widely used approach to construct polynomial chaos surrogates when the number of available simulation samples is limited. Originally, these expensive simulation samples would be obtained at random locations in the parameter space. In this presentation, we discuss algorithms that can be used to (1) reduce the dimensionality of the polynomial representation and (2) identify the sample locations in a proper way. The proposed dimension reduction approach incrementally explores sub-dimensional expansions for a sparser/more accurate recovery. The near-optimal algorithm builds upon coherence-optimal sampling and identifies near-optimal sample locations that enhance the cross-correlation properties of the measurement matrix. We provide several numerical examples to demonstrate that the proposed strategies produce substantially more accurate results compared to other strategies in the literature.

Negin Alemazkoor, Hadi Meidani
University of Illinois at Urbana-Champaign
[email protected], [email protected]

MS42

Time and Frequency Domain Methods for Sparse Basis Selections in Random Linear Dynamical Systems

We consider linear dynamical systems consisting of ordinary differential equations with random variables. A quantity of interest is defined as an output of the system. We represent the quantity of interest in a polynomial chaos expansion, which includes orthogonal basis polynomials. Now the aim is to determine a low-dimensional approximation (or sparse approximation), where just a few basis polynomials are required for a sufficiently accurate representation of the quantity of interest. We investigate a time domain method and a frequency domain method for the identification of a low-dimensional approximation. On the one hand, the stochastic Galerkin method is applied to the random linear dynamical system. A frequency domain analysis of the Galerkin system yields a measure for the importance of basis polynomials and thus a strategy for the selection of basis functions. On the other hand, a sparse minimization chooses the basis polynomials by information from the time domain only. Therein, an orthogonal matching pursuit produces an approximate solution of the minimization problem. The two approaches are compared using a test example from a mechanical application. Numerical results are presented. In particular, we illustrate the choice of the basis polynomials in the two methods.
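A minimal orthogonal matching pursuit (OMP) sketch for selecting a few basis polynomials from time-domain data, as a stand-in for the sparse-minimization step described above; the random dictionary and synthetic signal are toy assumptions, not the mechanical test example from the talk.

import numpy as np

def omp(A, b, n_terms):
    # Greedy OMP: pick the column most correlated with the residual, refit by least squares.
    residual, support = b.copy(), []
    for _ in range(n_terms):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, support

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 40))
A /= np.linalg.norm(A, axis=0)                 # normalized dictionary of candidate bases
x_true = np.zeros(40); x_true[[3, 17, 28]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(100)

x_hat, support = omp(A, b, n_terms=3)
print("selected basis indices:", sorted(support))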

Roland Pulch
University of Greifswald
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

MS42

High-dimensional Function Approximation via Weighted L1 Minimization with Gradient-augmented Samples

Many physical problems involve approximating high-dimensional functions with a limited number of sampling points. The high-dimensional function interpolation problem has various applications, such as uncertainty quantification; for example, in order to compute a quantity of interest for a parametric PDE, high-dimensional function approximation is often required. In this talk, we present work on interpolating a high-dimensional function using the weighted ℓ1 minimization technique when both points for the original function and its derivatives are sampled. With additional derivative information, we see a numerical improvement over the case with only sample points from the original function. Theoretically, we show that with exactly the same complexity as in the case of function samples only, the gradient-enhanced method gives a better error bound in the Sobolev-type norm as opposed to the L2 norm.

Yi Sui, Ben Adcock
Simon Fraser University

[email protected], ben [email protected]

MS43

Multi-reduction MCMC Methods for Bayesian Inverse Problems

We propose Monte Carlo methods for Bayesian inverse problems with nested reduced-order models. Detailed assumptions and rigorous proofs on the efficiency of the proposed multi-reduction MCMC will be provided. Numerical results will be presented to validate the theoretical approaches.

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

Viet Ha Hoang
Nanyang Technological University
[email protected]

MS43

Multilevel DILI

In this paper we combine two state-of-the-art MCMC algorithms: Multilevel MCMC (MLMCMC) and Dimension-Independent Likelihood-Informed (DILI) MCMC. Both of these algorithms have been shown to significantly improve on existing MCMC algorithms in the context of high-dimensional PDE-constrained Bayesian inverse problems. We will show that the gains of the two approaches are complementary and that the combination of the two outperforms each of the individual algorithms. While MLMCMC reduces the computational cost by shifting most of the work from high-fidelity approximations of the underlying PDE to coarser and cheaper ones, DILI uses low-rank approximations of the Hessian of the posterior to obtain good proposals within the core MCMC step. In particular, in DILI MCMC the high-dimensional parameter space is split into a low-dimensional likelihood-informed subspace and its complement, resulting in low rejection rates and fast mixing. We show that a hierarchy of likelihood-informed subspaces across the different levels of the MLMCMC algorithm can be defined consistently and updated cheaply, even when the number of parameters increases as the level increases. We introduce a proposal scheme that asymptotically converges to the correct posterior distribution of the parameters on each level, given the distribution of the parameters on the coarser levels, even when the input random field is not parametrised via a Karhunen-Loeve expansion. Numerical experiments confirm the gains.

Gianluca Detommaso
University of Bath
[email protected]

Tiangang Cui
Monash University
[email protected]

Robert Scheichl
University of Bath
[email protected]

MS43

Multilevel Ensemble Transform Methods for Bayesian Inference

Ensemble transform methods are useful frameworks for a Bayesian approach to data assimilation. For example, an Ensemble Transform Particle Filter (ETPF) uses a deterministic transform from forecast to analysis ensembles at each assimilation step, instead of a traditional random resampling step. With the use of localisation, one can also extend the scheme to high-dimensional cases. This provides a framework to implement data assimilation on PDEs containing random coefficients or that have trajectories modelled by a Brownian motion. However, the computational cost of propagating ensembles of realisations of these PDEs can be very large. This talk details algorithms to apply the efficient multilevel Monte Carlo (MLMC) approach to ensemble transform methods, in particular the ETPF, and discusses their effectiveness. The challenge here is being able to couple the analysis ensembles of the coarse and fine discretizations in the MLMC framework; this is an active area of research. Through proofs of concept, the algorithms considered are shown to speed up the propagation of ensembles within filtering estimators, with a fixed order of accuracy, relative to their standard counterparts.

Alastair Gregory
Imperial College London
[email protected]

MS43

Multilevel Sequential² Monte Carlo for Bayesian Inverse Problems

The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior measure of a parameter. Specifically, we are interested in the probability measure of a diffusion coefficient of an elliptic partial differential equation. In this setting, the sample space is high-dimensional, and each sample of the solution of the partial differential equation is expensive. To address these issues we propose a novel Sequential Monte Carlo sampler for the approximation of the posterior measure. Classical, single-level Sequential Monte Carlo constructs a sequence of probability measures, starting with the prior measure and finishing with the posterior measure. The intermediate probability measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the discretisation of the partial differential equation is fixed. In contrast, our estimator employs a hierarchy of discretisations to decrease the computational cost. Importantly, we construct the sequence of intermediate probability measures by decreasing the temperature and increasing the discretisation level at the same time. We present numerical experiments in 2D space and compare our estimator to single-level Sequential Monte Carlo and other alternatives.
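A single-level sketch (an assumption, not the authors' multilevel sampler) of likelihood-tempered Sequential Monte Carlo: reweight particles along a temperature ladder, resample, and rejuvenate with a random-walk Metropolis step targeting the current tempered posterior. A 1D Gaussian toy problem replaces the elliptic-PDE forward model.

import numpy as np

rng = np.random.default_rng(3)
y_obs, noise = 1.2, 0.1
phi = lambda u: 0.5 * ((np.sin(u) - y_obs) / noise) ** 2   # negative log-likelihood

n, betas = 500, np.linspace(0.0, 1.0, 11)                  # particles, temperature ladder
u = rng.standard_normal(n)                                 # prior N(0,1) samples

for b_prev, b in zip(betas[:-1], betas[1:]):
    # incremental importance weights for the temperature increment
    logw = -(b - b_prev) * phi(u)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # multinomial resampling
    u = u[rng.choice(n, size=n, p=w)]
    # one Metropolis rejuvenation sweep targeting prior x likelihood^b
    prop = u + 0.3 * rng.standard_normal(n)
    log_acc = (-0.5 * prop**2 - b * phi(prop)) - (-0.5 * u**2 - b * phi(u))
    accept = np.log(rng.uniform(size=n)) < log_acc
    u = np.where(accept, prop, u)

print("posterior mean / std estimate:", u.mean(), u.std())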

Jonas Latz
Technical University of Munich
[email protected]

Iason Papaioannou
Engineering Risk Analysis Group
TU Munchen
[email protected]

Elisabeth Ullmann
TU Munchen

[email protected]

MS44

Improving Sub-grid-scale Approximations in Global Atmospheric Models using Data-driven Techniques

Direct numerical simulation of the atmosphere and ocean on any spatial or temporal scale of interest is not feasible given the inherently turbulent nature of the dynamics. Because a typical climate model has a horizontal grid size of 160 km, which is far larger than the turbulent dissipation scale, it uses approximate closures to represent unresolved dynamics. In particular, moist atmospheric convection is amongst the most important unresolved processes, but its multiscale behavior severely challenges traditional approaches based on simplified physical modeling. To circumvent these issues, we have developed a regression-based approach to convective closure using a near-global atmospheric simulation with a cloud-permitting resolution (4 km) as a training dataset. The large size of the domain allows a two-way coupling between convection and the large-scale circulation. First, we coarse-grain the model variables into 160 km grid boxes. Then, we diagnose a simple linear model which regresses the coarse-grained source terms onto the average vertical profiles of humidity, temperature, and velocity in each horizontal grid cell. This model is able to explain about 40% of the observed variance of the source terms, and we further test its performance by using it as the closure in a prognostic coarse-resolution simulation.
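A rough sketch (with synthetic stand-in data) of the diagnostic step described above: fit a linear map from coarse-grained vertical profiles of humidity, temperature, and velocity to coarse-grained convective source terms. Array shapes and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
n_cells, n_levels = 5000, 30                # coarse grid columns, vertical levels

# stacked predictors: [humidity | temperature | velocity] profiles per column
X = rng.standard_normal((n_cells, 3 * n_levels))
W_true = 0.05 * rng.standard_normal((3 * n_levels, n_levels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_cells, n_levels))   # source terms

# least-squares linear closure: Y ≈ X @ W (an intercept column could be added)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# fraction of source-term variance explained, pooled over levels and columns
resid = Y - X @ W
r2 = 1.0 - resid.var() / Y.var()
print(f"fraction of source-term variance explained: {r2:.2f}")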

Noah D. Brenowitz
University of Washington
[email protected]

Pornampai Narenpitak, Christopher Bretherton
Dept. of Atmos. Sciences
University of Washington
[email protected], [email protected]

MS44

Data-driven Determination of Koopman Eigenfunctions using Delay Coordinates

The Koopman operator induced by a dynamical system provides an alternate method of studying the system as an infinite-dimensional linear system, and has applications to attractor reconstruction and forecasting. Koopman eigenfunctions represent the non-mixing component of the dynamics. They factor the dynamics, which can be chaotic, into quasiperiodic rotation on tori. Here, we describe a method in which these eigenfunctions can be obtained from a kernel integral operator, which also annihilates the continuous spectrum. We show that incorporating a large number of delay coordinates in constructing the kernel of that operator results, in the limit of infinitely many delays, in the creation of a map into the discrete spectrum subspace of the Koopman operator. This enables efficient approximation of Koopman eigenfunctions from high-dimensional data in systems with point or mixed spectra.
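A compact illustrative approximation (not the authors' algorithm) of the pipeline above: build delay-coordinate vectors from a time series, form a Gaussian kernel matrix on them, and take eigenvectors of the Markov-normalized kernel operator as data-driven approximations of Koopman eigenfunctions. The quasiperiodic test signal, number of delays, and bandwidth are assumptions.

import numpy as np

rng = np.random.default_rng(5)
t = np.arange(4000) * 0.05
x = np.cos(t) + 0.5 * np.cos(np.sqrt(2.0) * t)      # quasiperiodic observable

q = 50                                               # number of delays
X = np.column_stack([x[i:len(x) - q + i] for i in range(q)])   # delay embedding
X = X[::4]                                           # subsample to keep the kernel small

# pairwise squared distances and Gaussian kernel
sq = (X ** 2).sum(1)
d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Markov-normalize and take leading eigenvectors (trivial constant one discarded)
P = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(P)
order = np.argsort(-np.abs(vals))
phi = np.real(vecs[:, order[1]])                     # first nontrivial eigenfunction
print("leading eigenvalue magnitudes:", np.round(np.abs(vals[order[:4]]), 3))
print("eigenfunction sample values:", np.round(phi[:5], 3))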

Suddhasattwa Das, Dimitrios Giannakis
Courant Institute of Mathematical Sciences
New York University
[email protected], [email protected]

MS44

Parsimonious Model Selection using Genomic Data for Outbreak Intervention

We discuss novel computational methods for tracking disease in human populations using pathogen genomes. Population models of disease are often studied as complex dynamical systems with many nonlinear interactions. A full mechanistic modeling perspective would incorporate all scales of the system, from intra-host pathogen behavior to inter-host social network structure. We have instead concentrated on parsimonious models complemented by fast algorithms that can potentially enable meaningful intervention decisions. We built a likelihood-based nested model selection framework for virus mobility informed by linked pathogen genome sequences. Then we demonstrated our methodology on Ebola virus genomes collected from Sierra Leone during the 2013-2016 outbreak in West Africa. We discovered that the outbreak showed a temporal change in model preference coincident with a documented intervention, the Western Area Surge.

Kyle B. Gustafson, Joshua L. Proctor
Institute for Disease Modeling
[email protected], [email protected]

MS44

Improving Accuracy and Robustness of Artificial Neural Networks to Discover Dynamical Systems from Data

Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to discover/represent the dynamics of complex physical systems. In this work, we study the use of artificial neural networks to (a) discover dynamical systems from data, and (b) discover sub-scale dynamics in reduced order models of dynamical systems, and point out several critical issues with existing ML techniques. We propose novel approaches to improve the predictive capabilities and robustness of neural networks for applications in nonlinear dynamical systems. The use of cascading cost functions and global regularization will be discussed in detail.

Shaowu Pan
University of Michigan
[email protected]

Karthik Duraisamy
University of Michigan Ann Arbor
[email protected]

MS45

Sensitivity Analysis for Flocking and Synchronization Models

In this talk, we first introduce two random flocking and synchronization models, namely the random Cucker-Smale model and the random Kuramoto model, and then present local sensitivity analysis for those models based on uniform stability analysis and propagation of Sobolev regularity in random parameter space. This is a joint work with Shi Jin (UW-Madison) and Jinwook Jung (SNU).

Seung Yeal Ha

Seoul National University
South Korea
[email protected]

MS45

Hypocoercivity Based Sensitivity Analysis and Spectral Convergence of the Stochastic Galerkin Approximation to Collisional Kinetic Equations with Multiple Scales and Random Inputs

In this talk, the speaker will present a general framework to study a general class of linear and nonlinear kinetic equations with random uncertainties from the initial data or collision kernels, and their stochastic Galerkin approximations, in both incompressible Navier-Stokes and Euler (acoustic) regimes. First, we show that the general framework put forth in [C. Mouhot and L. Neumann, Nonlinearity, 19, 969-998, 2006; M. Briant, J. Diff. Eqn., 259, 6072-6141, 2005] based on hypocoercivity for the deterministic kinetic equations can be adopted for sensitivity analysis for random kinetic equations, which gives rise to exponential convergence of the random solution toward the (deterministic) global equilibrium. Then we use such theory to study the stochastic Galerkin (SG) methods for the equations, establish hypocoercivity of the SG system and regularity of its solution, and spectral accuracy and exponential decay of the numerical error of the method in a weighted Sobolev norm. This is a joint work with Shi Jin.

Liu Liu
ICES
University of Texas Austin
[email protected]

MS45

Bayesian Estimation for Transport Equations for Nanocapacitors

We use and evaluate different Bayesian estimation methods to quantify uncertainties in model parameters in nanocapacitors. Here, randomness arises due to process variations; also, parameters that cannot be measured directly are to be determined. The methods include the direct approach, the Markov-chain Monte-Carlo (MCMC) method, and an iterative version of the latter that we have developed, where we use the calculated posterior distribution as the prior distribution for a new MCMC analysis. We investigate the influence of the number of samples in each Markov chain and the number of iterations on the total computational work and the error achieved. In addition, we discuss methods for estimating the posterior distribution based on samples provided by the MCMC analysis. We apply our algorithms to the Poisson-Boltzmann and Poisson-Nernst-Planck equations which arise from modeling nanoelectrode biosensors, which have recently been used to detect minute concentrations of target particles. This technology has many applications in precision medicine. Numerical examples show the estimation of parameters such as ionic concentration, size of the Stern layer, and the sizes of multiple electrodes (multilevel Bayesian estimation) of sensors for which experimental data are available.

Benjamin Stadlbauer, Leila Taghizadeh, Jose A. Morales Escalante, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected], [email protected], [email protected]

Andrea Cossettini
University of Udine
[email protected]

Luca Selmi
Universita di Modena e Reggio Emilia
[email protected]

MS46

Weighted Reduced Order Methods for Parametrized PDEs with Random Inputs

In this talk we discuss a weighted approach for the reduction of parametrized partial differential equations with random input, focusing in particular on weighted approaches based on reduced basis (wRB) [Chen et al., SIAM Journal on Numerical Analysis (2013); D. Torlo et al., submitted (2017)] and proper orthogonal decomposition (wPOD) [L. Venturi et al., submitted (2017)]. We will first present the wPOD approach. A first topic of discussion is related to the choice of samples and respective weights according to a quadrature formula. Moreover, to reduce the computational effort in the offline stage of wPOD, we will employ Smolyak quadrature rules. We will then introduce the wRB method for advection-diffusion problems with dominant convection with random input parameters. The issue of the stabilization of the resulting reduced order model will be discussed in detail.

Francesco Ballarin
SISSA, International School for Advanced Studies
[email protected]

Davide Torlo
Universitat Zurich
[email protected]

Luca Venturi
Courant Institute of Mathematical Sciences
New York, United States
[email protected]

Gianluigi Rozza
SISSA, International School for Advanced Studies
Trieste, Italy
[email protected]

MS46

Optimal Approximation of Spectral Risk Measures with Application to PDE-constrained Optimization

Many science and engineering applications require the control or design of a physical system governed by partial differential equations (PDEs). More often than not, PDE inputs such as coefficients, boundary and initial conditions, and modeling assumptions are unknown, unverifiable or estimated from data. Such problems can be formulated as risk-averse optimization problems in Banach space. In this talk, I concentrate on spectral risk measures for PDE-constrained optimization. In particular, I develop an optimal quadrature approximation of spectral risk measures. This approximation is provably convergent and results in a consistent approximation of the original optimization problem. In addition, I show that a large class of spectral risk measures can be generated through the risk quadrangle. I conclude with numerical examples confirming these results.

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS46

Two Basic Hierarchical Structures Making Stochastic Programming What It Is

Two fundamentally different lines of recurring structures in stochastic programming are reported. Behind the first there is the idea of recourse, from its simplest occurrence, consequently called the same way, to settings considered advanced structures either due to the decision pattern (two-stage, multi-stage) or the complexity of the decision spaces. The second line comes from discrete mathematics and occurs when recourse involves combinatorial decisions. In different levels of abstraction, it is the fact that below any positive integer there are at most finitely many others. Under the names of Dickson and Gordan the principle/lemma forms a cornerstone of symbolic computation.

Ruediger Schultz
University of Duisburg-Essen
[email protected]

MS46

Estimation of Tail Distributions Using Quantile and Superquantile (CVaR) Values

We minimized the superquantile (CVaR) of a system described by PDEs with uncertain parameters. Quantile and superquantile (CVaR) were estimated with a small number of samples (e.g., 20 samples). The paper discusses approaches for estimating and minimizing low-probability characteristics, such as quantile and superquantile (CVaR), with high confidence levels (e.g., 0.999).

Stan Uryasev
University of Florida
[email protected]

MS47

Reduced-order Stochastic Modeling and Non-Gaussian Data Assimilation for Marine Ecosystems

Characterizing and predicting ocean phenomena that are high-dimensional and have significant uncertainties, such as marine ecosystems, requires computationally efficient models. We utilize an efficient method for simultaneous estimation of state variables, parameters, and model equations for a regional ocean model coupled with a biogeochemical model. The method uses a generalization of the Polynomial Chaos and Proper Orthogonal Decomposition equations, the dynamically orthogonal (DO) evolution equations, for probabilistic predictions, and a Gaussian mixture model DO (GMM-DO) filtering algorithm for Bayesian data assimilation and inference. We showcase results for Southern New England and the Mid-Atlantic Bight using a regional modeling system that predicts marine ecosystem dynamics using primitive equations coupled to a biogeochemical model. We then discuss the results of the joint estimation of the system's state variables and parameter values, with learning of the form of the biogeochemical equations. Advisors: Pierre Lermusiaux and Abhinav Gupta, Massachusetts Institute of Technology, USA

Christiane Adcock
Massachusetts Institute of Technology
[email protected]

MS47

Statistical Modelling of Breast Cancer Risk for Greater Predictive Accuracy

With 1 in 8 women in the United States at risk of being diagnosed with breast cancer at some point in her lifetime, the development and accuracy of breast cancer prediction models is pertinent to reducing the morbidity and mortality rates associated with the disease. Three such prediction models are the Gail, BCSC (Breast Cancer Surveillance Consortium), and Tyrer-Cuzick models, each of which determines a woman's risk of breast cancer from risk factors including family history of cancer and mammographic density. However, these models have been shown to vary in accuracy among women with different ethnic backgrounds, which is why the Athena Breast Health Network, an extensive program at UCI that integrates clinical care and research to drive innovation in the prevention of breast cancer, is building and testing new breast cancer risk assessment models using machine learning. Experimenting with algorithms such as k-nearest neighbors (KNN), support vector machines (SVM), and decision trees (DT), Athena hopes to evaluate new breast cancer risk models and their predictive accuracies. Advisors: Argyrios Ziogas and Hanna Lui Park, University of California, Irvine, USA

Alyssa Columbus
University of California, Irvine
[email protected]

MS47

Subsurface Impedance Characterization with Bayesian Inference

Methods for directly observing the evolution of a snowpack and the movement of water through snow are required for validating remote sensing and modeling estimates. The density and water content of snow are closely related to its electrical properties, thus Electrical Impedance Tomography (EIT) can be used to characterize the subsurface properties. In this work, a computational feasibility study was conducted to determine if a sensing instrument utilizing EIT can be used to learn about the electrical properties of its surroundings. We apply statistical methods, such as Bayesian inference, to this inverse problem by characterizing the uncertainty that remains in the electrical properties after taking into account voltage observations from the instrument. Advisor: Matthew Parno, US Army Cold Regions Research and Engineering Laboratory (CRREL)

Cassie Lumbrazo
Clarkson University

MS47

Dynamic Sequential Filtering in Association with Joint State-parameter Estimation

We propose a surrogate-based strategy for joint state-parameter estimation via sequential filtering. Our approach starts with an ensemble Kalman filter estimate of the prior density. Typically one then applies the Kalman update incorporating the next observation to obtain a posterior estimate. Instead we will incorporate a surrogate model at this stage which will enable a thorough sampling of the prior and allow for a particle filter type update of the posterior. We will apply this strategy to the Lorenz-96 model as a proof of concept. Mentor: Dr. Elaine Spiller, Marquette University, USA

Louis Nass
Marquette University

MS47

Eulerian vs Lagrangian Data Assimilation

Data assimilation is a process by which a physical model and observational data are combined to produce a more accurate estimate of the state of a system. It is often used when uncertainty (e.g., noise) is present in the system's evolution or in the observational data. For these types of systems, numerical methods must quantify this uncertainty in addition to estimating the mean behavior of the system. Examples of the successful application of data assimilation include predicting weather patterns and ocean currents. We apply a specific data assimilation technique, the discrete Kalman filter, to estimate a velocity field. We demonstrate how blending a model with data allows us to infer the state of the system with better accuracy than either the model or the data alone would provide. We simulate the estimation of flow fields with different types of observers (floating versus fixed in space) and observational data (position versus velocity measurements) to analyze the effects this has on estimation skill. Advisor: Richard Moore
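A bare-bones discrete Kalman filter sketch, with illustrative assumptions only: a 2D constant-velocity state with noisy position observations stands in for the flow-field estimation problem described above.

import numpy as np

rng = np.random.default_rng(6)
dt, n_steps = 0.1, 100
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])  # dynamics
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])                                # observe position
Q, R = 1e-3 * np.eye(4), 1e-2 * np.eye(2)                                   # noise covariances

x_true = np.array([0.0, 0.0, 1.0, 0.5])
x_est, P = np.zeros(4), np.eye(4)

for _ in range(n_steps):
    # truth and noisy observation
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(4), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    # forecast step
    x_est, P = F @ x_est, F @ P @ F.T + Q
    # analysis (Kalman) update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(4) - K @ H) @ P

print("true vs estimated velocity:", x_true[2:], x_est[2:])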

MS48

Goal Oriented Sensitivity Indices and Sensitivity Indices Based on Wasserstein Costs

Abstract not available at time of publication.

Thierry Klein
Institut de Mathematiques de Toulouse
UMR CNRS 5219, Universite de Toulouse
[email protected]

MS48

Sensitivity Analysis Based on Cramér-von Mises Distance

In this talk, following an idea of Borgonovo and co-authors in 2007, we define and study a new sensitivity index based on the Cramér-von Mises distance. This index appears to be more general than the Sobol one as it takes into account the whole distribution of the random variable and not only the variance. Furthermore, we study the statistical properties of its Pick-and-Freeze estimator.

Agnes Lagnoux
Institut de Mathematiques de Toulouse
Universite Toulouse 2
[email protected]

Fabrice Gamboa
Institut de Mathematiques de Toulouse, AOC Project
University of Toulouse
[email protected]

Thierry Klein
Institut de Mathematiques de Toulouse
UMR CNRS 5219, Universite de Toulouse
[email protected]

MS48

Statistical Methodology for Second Level Sensitivity Analysis with Dependence Measures for Numerical Simulators

Many physical phenomena are modeled by numerical simulators, which take several uncertain parameters as inputs and can be time expensive. Global sensitivity analyses are then performed to evaluate how the input uncertainties contribute to the variation of the code output. For this, we focus here on dependence measures based on reproducing kernel Hilbert spaces (HSIC). In some cases, the distributions of the inputs may themselves be uncertain, and it is important to quantify the impact of this uncertainty on global sensitivity analysis results. We call this the second-level sensitivity analysis. To achieve it, we propose a statistical methodology based on a single Monte Carlo loop: First, we draw a unique sample S built from a unique (suitably chosen as a reference) probability distribution of the inputs and compute the corresponding code outputs. Then, for various probability distributions of the input parameters, we perform a global sensitivity analysis: HSIC are computed from sample S using modified estimators. Finally, we realize the 2nd-level sensitivity analysis by estimating 2nd-level HSIC between the probability distributions of the inputs and the results of global sensitivity analysis. For this, specific kernels are chosen. We apply this methodology to an analytical example to evaluate the performance of our procedures and illustrate how it allows for an efficient 2nd-level sensitivity analysis with very few computer experiments.
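A small sketch of a (biased, V-statistic) HSIC dependence-measure estimator with Gaussian kernels, the kind of first-level index the methodology above builds on; the toy simulator and bandwidth heuristic are illustrative assumptions.

import numpy as np

def gaussian_gram(v, bandwidth=None):
    # Gram matrix of a 1D sample with a Gaussian kernel (median heuristic bandwidth)
    d2 = (v[:, None] - v[None, :]) ** 2
    if bandwidth is None:
        bandwidth = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic(x, y):
    # biased HSIC estimator: (1/n^2) trace(K H L H)
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gaussian_gram(x), gaussian_gram(y)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(7)
x1, x2 = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
y = np.sin(np.pi * x1) + 0.1 * rng.standard_normal(300)   # output depends on x1 only

print("HSIC(x1, y):", hsic(x1, y))   # markedly larger
print("HSIC(x2, y):", hsic(x2, y))   # close to zero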

Anouar Meynaoui
CEA, DEN, DER
F-13108 Saint-Paul-lez-Durance, France
[email protected]

Amandine Marrel
CEA Cadarache, France
[email protected]

Beatrice [email protected]

MS48

Sensitivity Indices for Outputs on a Riemannian Manifold

Abstract not available at time of publication.

Leonardo Moreno
Universidad de la Republica
[email protected]

MS49

Ensemble Filtering with One-step-ahead Smoothing

Ensemble Kalman filters (EnKFs) sequentially assimilate data into a dynamical model to determine estimates of its state and parameters. At each EnKF assimilation cycle, an ensemble of states is first integrated forward with the model for forecasting. The forecasted ensemble is then updated with incoming observations based on a Kalman correction step. The forecast-update cycle is, however, not the only pathway to compute the state analysis. Here we reverse the order of these steps following the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to propose a new class of EnKFs that constrain the state-parameter ensembles with future observations. Exploiting future observations should bring in more information that helps mitigate the suboptimal character of the EnKFs, which are formulated for linear and Gaussian systems and implemented with limited ensembles and crude approximate noise statistics. We further extend the OSA-EnKF framework to bias estimation and to efficient assimilation into one-way coupled models. Numerical results from various applications will be presented to demonstrate the efficiency of the proposed approach.

Naila [email protected]

Boujemaa Ait-El-Fquih
King Abdullah University of Science and Technology (KAUST)
Division of Applied Mathematics and Computational Science
[email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS49

Non-Gaussian Data Assimilation through Kernel Density Estimation

Non-Gaussian features are typical in many application areas of data assimilation, such as geophysical fluid systems, and thus it is essential to effectively capture the non-Gaussian features for accurate and stable state estimates. We compare particle filters and ensemble-based methods and propose a new class of data assimilation methods using kernel density estimation that can capture non-Gaussian features effectively using a relatively small number of samples and is robust and stable for high-dimensional problems.

Yoonsang Lee
Courant Institute of Mathematical Sciences
New York University
[email protected]

MS49

Localization for MCMC Sampling High-dimensional Posterior Distributions with Banded Structure

We investigate how ideas from covariance localization in numerical weather prediction can be used to efficiently sample high-dimensional distributions with banded covariance and precision matrices. The main idea is to exploit banded structure during problem formulation and Markov chain Monte Carlo (MCMC) sampling. In particular, we propose to solve high-dimensional Bayesian inverse problems with nearly banded structure (i.e., small off-diagonal elements) by first replacing the problem with a banded version, setting small off-diagonal elements to zero, and then solving the modified problem using a Metropolis-within-Gibbs sampler that exploits this banded structure. We discuss conditions under which posterior moments of the modified problem are close to those of the original problem. Under the same conditions, the convergence rate of an associated sampler is independent of dimension. We present our ideas in the context of Gaussian problems, where mathematical formulations are precise and for which convergence analysis can be made rigorous.

Matthias Morzfeld
University of Arizona
Department of Mathematics
[email protected]

MS49

Model Parameter Estimation using Nonlinear Ensemble Algorithms

It is now generally acknowledged that introducing variability in parameters in an ensemble data assimilation and prediction context can increase the realism of the prediction system. However, parameter perturbation is complicated. Many parameters are nonlinearly related to the model output, transfer functions that map from state to observation space may also be nonlinear, and prior parameter distributions may be poorly known and are often bounded at zero. Our approach to parameter estimation uses a Markov chain Monte Carlo (MCMC) algorithm to sample the Bayesian posterior distribution of model parameters. We then test approximate data assimilation methodologies that are less computationally expensive, using the MCMC posterior distribution as a reference. In this presentation, we show results from parameter estimation experiments using a MCMC algorithm. We demonstrate the various types of nonlinearity present in cloud microphysical parameterizations, in particular. We then show results from various ensemble filter algorithms, including the Ensemble Transform Kalman Filter (ETKF), and a newly developed filter based on Gamma and Inverse Gamma distributions (the Gamma-Inverse Gamma (GIG) filter). The ETKF exhibits well known problems associated with nonlinearity in the parameter-model state relationship. In contrast, the GIG produces a posterior estimate that is more accurate in both state and observation space, and closely approximates the Bayesian solution generated by MCMC.

Derek J. Posselt
Univ. of Michigan, Ann Arbor
[email protected]

Craig Bishop
Naval Research Laboratory
[email protected]

MS50

Hierarchical Bayesian Sparsity: ℓ2 Magic

Hierarchical Bayesian models with suitable choices of hyperpriors, combined with Krylov subspace iterative solvers, can be very effective at promoting sparsity in the solution of inverse problems while retaining the ℓ2 computational efficiency. In this talk we will present some results about the convergence of these methods for different families of hyperpriors and show related computed examples.

Daniela Calvetti

Case Western Reserve Univ
Department of Mathematics, Applied Mathematics and Statistics
[email protected]

MS50

Large Scale Spatial Statistics with SPDEs, GMRFs, and Multi-scale Component Models

Combining multiple and large data sources of historical temperatures into unified spatio-temporal analyses is challenging from both modelling and computational points of view. This is not only due to the size of the problem, but also due to the highly heterogeneous data coverage and the latent heterogeneous physical processes. The EUSTACE project will give publicly available daily estimates of surface air temperature since 1850 across the globe for the first time by combining surface and satellite data using novel statistical techniques. Of particular importance is to obtain realistic uncertainty estimates, due to both observation uncertainty and lack of spatio-temporal coverage. To this end, a spatio-temporal multiscale statistical Gaussian random field model is constructed, with a hierarchy of spatially non-stationary spatio-temporal dependence structures, ranging from weather on a daily timescale to climate on a multidecadal timescale. Connections between SPDEs and Markov random fields are used to obtain sparse matrices for the practical computations. The extreme size of the problem necessitates the use of iterative solvers, which requires using the multiscale structure of the model to design an effective preconditioner. We will also discuss some new computational developments towards full Bayesian estimation of the dependence parameters, and fast computation of posterior predictive variances.

Finn Lindgren
University of Edinburgh
[email protected]

MS50

Hierarchical Stochastic Partial Differential Equations for Bayesian Inverse Problems

We construct hierarchical models based on stochastic parameters and Gaussian Markov random fields (GMRF). By stacking two GMRFs via their stochastic partial differential equation presentation, we can make the hierarchical model suitable for large-scale problems. We choose the prior field to be Matérn, and we study hyperprior field models for the Matérn prior length-scale field. One choice, for a one-dimensional problem, is an exponentially distributed hyperprior process, and we express it as an autoregressive AR(1) process to keep the computational complexity minimal. We treat the measurement noise variance and hyperprior parameters as unknowns in the posterior distribution. Hyperparameters and hyperpriors tend to be strongly coupled a posteriori, which leaves vanilla MCMC algorithms inefficient. Hence, we propose two MCMC algorithms for the different hierarchical models, which are both free of parameter tuning. One corresponds to an adaptive Metropolis-within-Gibbs scheme, and the other employs Elliptical Slice Sampling combined with a reparametrisation to decouple hyperparameters. The developed methodology permits computationally efficient inference and uncertainty quantification about all the estimates in the model. We apply the methodology developed to interpolation and X-ray tomography.

Lassi Roininen


University of Oulu
[email protected]

Karla Monterrubio GomezUniversity of [email protected]

Sari Lasanen
Sodankyla Geophysical Observatory
University of Oulu
[email protected]

MS50

Hierarchical Gaussian Processes in Bayesian Inverse Problems

A major challenge in the application of sampling methods in Bayesian inverse problems is the typically large computational cost associated with solving the forward problem. To overcome this issue, we consider using a Gaussian process emulator to approximate the forward map. This results in an approximation to the solution of the Bayesian inverse problem, and more precisely in an approximate posterior distribution. In this talk, we analyse the error in the approximate posterior distribution, in the case where the hyper-parameters specifying the Gaussian process emulator are unknown and learnt on the fly from evaluations of the forward model.
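A toy sketch, with illustrative assumptions throughout, of the idea above: replace an expensive forward map with a Gaussian process emulator trained on a few model runs, and evaluate an approximate (unnormalized) posterior using the emulator's predictive mean; the hyperparameters are fitted by maximum likelihood as a pragmatic stand-in for the on-the-fly learning analysed in the talk.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

forward = lambda u: np.sin(3.0 * u) + 0.5 * u          # stand-in "expensive" forward map
y_obs, noise_sd = 0.9, 0.1

# a small design of forward-model evaluations
u_train = np.linspace(-2, 2, 12)[:, None]
g_train = forward(u_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(u_train, g_train)

def log_post_approx(u):
    # approximate log-posterior: N(0,1) prior + Gaussian likelihood via the emulator
    g = gp.predict(np.atleast_2d(u))[0]
    return -0.5 * u**2 - 0.5 * ((y_obs - g) / noise_sd) ** 2

grid = np.linspace(-2, 2, 401)
dens = np.exp([log_post_approx(u) for u in grid])
w = dens / dens.sum()
print("approximate posterior mean:", float((grid * w).sum()))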

Aretha L. Teckentrup
University of Edinburgh
[email protected]

Andrew Stuart
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

MS51

Goal-oriented Optimal Design of Experiments for Bayesian Inverse Problems

Computer models play an essential role in forecasting complicated phenomena such as the atmosphere, ocean dynamics, and volcanic eruptions, among others. These models, however, are usually imperfect due to various sources of uncertainty. Measurements are snapshots of reality that are collected as an additional source of information. Parameter inversion and data assimilation are means of fusing information obtained from measurements, models, prior knowledge, and other available sources to produce a reliable and accurate description (the analysis) of the underlying physical system. The accuracy of the analysis is greatly influenced by the quality of the observational grid design used to collect measurements. Sensor placement can be viewed as an optimal experimental design (OED) problem, where the locations of the sensors define an experimental design. There are many criteria for choosing an optimal experimental design, such as minimization of the uncertainty in the output (e.g., minimization of the trace of the posterior covariance). Including the end goal (predictions) in the experimental design leads to a goal-oriented OED approach that can be used in several applications. In this talk, we outline the idea of goal-oriented optimal design of experiments for PDE-based Bayesian linear inverse problems. We also present numerical results from a standard advection-diffusion problem.

Ahmed Attia

[email protected]

Alen Alexanderian
NC State University
[email protected]

Arvind Saibaba
North Carolina State University
[email protected]

MS51

Bayesian Experimental Design for Stochastic Biochemical Systems

Getting deep and meaningful insights into the function of biological systems requires careful experimental setups. While technological advances have frequently led to fishing expeditions, the complexity of, e.g., a cell or tissue means that such data often lack the power to elucidate underlying mechanisms. In this context, simulation-based experimental design is increasingly being recognized as a promising route for detailed analyses of cellular behaviour. In silico simulations are then used, for example, to design experiments that allow us to estimate the parameters of mathematical models describing biophysical and biochemical systems. So far, however, most of these approaches have focused on deterministic modelling approaches. Here we discuss a generalization that addresses the particular problems faced when dealing with stochastic dynamical systems. An information-theoretical framework is used to identify experimental conditions that maximise the mutual information between experimental data and the parameters describing the stochastic dynamical system. We illustrate our approach on a set of exemplar models of stochastic molecular systems.

Fei He
Imperial College London
[email protected]

Juliane Liepe
Max Planck Institute for Biophysical Chemistry
[email protected]

Sarah Filippi
Imperial College London
[email protected]

Michael Stumpf
Imperial College London
Theoretical Systems Biology
[email protected]

MS51

Optimal Experimental Design for RKHS-Based Correction of Mis-specified Dynamic Models

All models are wrong; yet, our goal in this study is to infer a functional correction for a mis-specified dynamic model, where the system's dynamics is only known approximately. Here two questions can be considered: what would be an effective functional form to represent the correction term? And given such, how can the corrected model be learnt efficiently, given a limited experimental budget? Assuming the availability of observations of the system for various initial conditions, we propose a formulation for estimating correction terms in a Reproducing Kernel Hilbert Space (RKHS) of candidate correction functions. Further, we analyze the problem of performing efficient experimental design to find an optimal correction term under a limited experimental budget and experimental constraints. The problem is computationally intractable when formulated in the D-Bayes optimality framework. However, by introducing a suitable approximate proxy, the problem can be cast as a sub-modular optimization problem, for which there are computationally efficient and approximately optimal solvers. Numerical experiments exemplify the efficiency of these techniques.

Lior Horesh
IBM Research
[email protected]

Gal Shulkind
Massachusetts Institute of Technology
[email protected]

Haim Avron
Tel Aviv University
[email protected]

MS51

Subspace-driven Observation Selection Strategies for Linear Bayesian Inverse Problems

Many inverse problems may involve a large number of observations. Yet these observations are seldom equally informative; moreover, practical constraints on storage, communication, and computational costs may limit the number of observations that one wishes to employ. We introduce strategies for selecting subsets of the data that yield accurate approximations of the inverse solution. This goal can also be understood in terms of optimal experimental design. Our strategies exploit the structure of inverse problems in the Bayesian statistical setting, and are based on optimal low-rank approximations to the posterior distribution, extended to the observation space.

Jayanth [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS52

Filtering Without a Model and Without an Observation Function: Data-driven Filtering

Modern methods of data assimilation rely on two key assumptions: that a mathematical model of the physical system and an observation function (which maps the model state to the measurements) are both known. In the absence of either, outright implementation of standard techniques such as ensemble Kalman filtering is impossible or subject to large error. In this talk, we will present recent advances in data-driven filtering which attempt to solve these extreme problems in data assimilation.

Franz Hamilton
Department of Mathematics
North Carolina State University

[email protected]

MS52

Sensitivity of Network Dynamics Reconstruction

Consider a dynamical network that is observed at a subset of network nodes. If the network is strongly connected, the dynamics at all unobserved nodes can generically be reconstructed, in theory. However, the conditioning of the reconstruction will vary depending on the relation of the observed to unobserved nodes, and reconstruction at some nodes may be difficult in practice. A variational data assimilation method is used to quantify this conditioning based on the observation subset, and the effect of network geometry is discussed.

Timothy Sauer
Department of Mathematical Sciences
George Mason University
[email protected]

MS52

Nonlinear Kalman Filtering for Parameter Estimation with Censored Observations

The use of Kalman filtering, as well as its nonlinear extensions, for the estimation of state variables and model parameters has played a pivotal role in many fields of scientific inquiry where observations of the system are restricted to a subset of variables. However, in the case of censored observations, where measurements of the system beyond a certain detection point are impossible, the estimation problem is complicated. Without appropriate consideration, censored observations can lead to inaccurate estimates. In this talk, we present a modified version of the extended Kalman filter to handle the case of censored observations in nonlinear systems. We validate this methodology in a simple oscillator system first, showing its ability to accurately reconstruct state variables and track system parameters when observations are censored. Finally, we utilize the nonlinear censored filter to analyze censored datasets from patients with hepatitis C and human immunodeficiency virus.

Hien Tran
Center for Research in Scientific Computation
North Carolina State University
[email protected]

MS52

Parameter Estimation using Linear Response Statistics - Theory and Numerical Scheme

We propose a new parameter estimation method for Ito diffusions using linear response statistics. In this talk, the theory will be briefly reviewed, and a corresponding practical numerical scheme based on the idea of a surrogate model will be introduced together with some sensitivity analysis. As an important example, the Langevin system will be discussed in depth.

He Zhang
Department of Mathematics
The Pennsylvania State University
[email protected]

John Harlim, Xiantao Li
Pennsylvania State University
[email protected], [email protected]

MS53

A Data Driven Approach for Uncertainty Quantification with High Dimensional Arbitrary Random Data

One of the grand challenges in uncertainty quantification is rooted in the merely implicit knowledge of the underlying distribution function of the complex system. Traditional approaches show limitations in quantifying the uncertainty propagation associated with such systems. We develop a data-driven approach to accurately construct the surrogate model for the quantity of interest, irrespective of the dependency and the analytical form of the underlying distribution. Our method is demonstrated on challenging problems such as high-dimensional PDE systems and a realistic biomolecule system, and can be generalized to other systems with a high-dimensional arbitrary random space.

Huan Lei
Pacific Northwest National Laboratory
[email protected]

Jing Li
Pacific Northwest National Laboratory
[email protected]

Nathan Baker
Pacific Northwest National Laboratory
[email protected]

MS53

Multi-fidelity Uncertainty Propagation of Physics-based Nondestructive Measurement Simulations using Co-kriging

Probability of detection (POD) is a widely established method for measuring the reliability of nondestructive evaluation (NDE) systems. Typically, POD is determined experimentally. However, POD methods can be enhanced by utilizing information from physics-based computational measurement simulations. These are called model-assisted POD (MAPOD) methods. The main elements of MAPOD methods are to (1) identify a set of uncertain input parameters and their probability distributions, (2) propagate the uncertain parameters through the computational models, and (3) build POD curves. One of the key challenges of performing MAPOD with accurate physics-based measurement simulations is the computational expense. In particular, the computational models are time-consuming to evaluate, and the process of propagating the uncertain input parameters, Step 2 in the MAPOD process, needs a large number of evaluations using conventional approaches. This talk presents the use of multi-fidelity modeling to accelerate the uncertainty propagation process of the MAPOD analysis. In particular, co-kriging is utilized to fuse information from models of varying degrees of fidelity to construct a fast surrogate model, which is utilized for the uncertainty propagation with Monte Carlo sampling (MCS). The approach is demonstrated on MAPOD analysis problems using ultrasonic testing simulations and eddy current simulations. The proposed approach is compared with direct MCS, and MCS with kriging surrogate models.

Leifsson LeifurDepartment of Aerospace EngineeringIowa State [email protected]

Xiaosong DuIowa State [email protected]

MS53

Bi-directional Coupling between a PDE-domain and an Adjacent Data-domain Equipped with Multi-fidelity Sensors

The increasing collection of data in essentially all facets of our lives has heralded concomitant growth in statistical and machine learning (ML) techniques to analyze the data. Researchers are looking for data-driven algorithms that will solve a partial differential equation (PDE) as an inverse problem. However, few focus on combining the information provided by the traditional PDE solver with the big data. In this work we are devoted to propagating information between those two paradigms; specifically, we address the coupling of the solution of a PDE-governed domain with the big-data domain. Our method builds upon the Schwarz alternating method and the recent work on numerical Gaussian process regression (GPR). We further extend our method to deal with data of varying fidelities, using an auto-regressive multi-fidelity model. The effectiveness of our domain decomposition algorithm is demonstrated using examples of Helmholtz equations in both 1D and 2D domains.

Dongkun ZhangDivision of Applied Mathematics, Brown Universitydongkun [email protected]

Yang LiuDivision of Applied MathematicsBrown Universityliu [email protected]

George Em KarniadakisBrown Universitygeorge [email protected]

MS54

Exploiting Ridge Approximations for Bayesian Inference

Inexpensive surrogates are key for reducing the cost of parameter studies involving large-scale, complex computational models with many input parameters. A ridge approximation is one class of surrogate that models a quantity of interest as a nonlinear function of a few linear combinations of the input parameters. In parameter studies, ridge approximations allow the low-dimensional structure to be exploited, reducing the effective dimension. We introduce a new, fast algorithm for constructing a ridge approximation when the nonlinear function is a polynomial by minimizing the least squares mismatch between the surrogate and the quantity of interest on a given set of inputs. Then, for functions that exhibit this ridge structure, we show how to build quadrature rules that exploit this structure and reduce the cost of high dimensional integration.
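The following sketch fits a one-dimensional polynomial ridge approximation by least squares; it uses a generic variable-projection strategy (inner polynomial fit, outer search over the ridge direction) as a stand-in for the talk's new algorithm, with hypothetical function and variable names.

```python
import numpy as np
from scipy.optimize import minimize

def fit_poly_ridge(X, f, degree=4, seed=0):
    """Fit a one-dimensional polynomial ridge f(x) ~ g(u^T x).

    Hypothetical variable-projection sketch: for a candidate direction u,
    the optimal polynomial g is a linear least-squares fit, so only the
    direction u is optimized nonlinearly.
    """
    rng = np.random.default_rng(seed)

    def residual(u):
        u = u / np.linalg.norm(u)
        t = X @ u                              # projected inputs
        c = np.polyfit(t, f, degree)           # inner least-squares fit of g
        return np.sum((np.polyval(c, t) - f) ** 2)

    u0 = rng.standard_normal(X.shape[1])
    u = minimize(residual, u0, method="Nelder-Mead").x
    u /= np.linalg.norm(u)
    c = np.polyfit(X @ u, f, degree)
    return u, c

# Example: a function that is exactly a ridge in one direction of R^3.
X = np.random.default_rng(1).uniform(-1, 1, (200, 3))
f = np.cos(X @ np.array([1.0, 2.0, 0.0]))
u, c = fit_poly_ridge(X, f)
print("recovered direction (up to sign):", np.round(u, 3))
```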

Jeffrey M. HokansonUniversity of Colorado [email protected]

Paul ConstantineUniversity of ColoradoBoulder


[email protected]

MS54

Dimension Reduction for Remote Sensing and Data Fusion

Remote sensing of vertical profiles of greenhouse gases from satellites and ground-based instruments typically results in ill-posed inverse problems where the number of degrees of freedom of the signal is low and prior information has a significant role. To be able to efficiently extract the information content of the observations, likelihood-informed dimension reduction is applied. Further, when observations from different sources are assimilated together, the procedure can be seen as high dimensional time series analysis. If the underlying fields to be modeled are smooth, we can utilize similar dimension reduction techniques. This results in an efficient Kalman smoother algorithm. The talk will explain the theory behind the algorithms and give application examples in the field of environmental monitoring.

Marko LaineFinnish Meteorological [email protected]

MS54

Adaptive Dimension Reduction to Accelerate Infinite-dimensional Geometric MCMC

Bayesian inverse problems highly rely on efficient and ef-fective inference methods for uncertainty quantification.Infinite-dimensional MCMC algorithms, directly definedon function spaces, are robust under refinement (throughdiscretization, spectral approximation) of physical models.Recently, a class of algorithms also starts to take advan-tage of geometric information provided by, for example,quadratic approximation of the parameter-to-observationmap so that they are capable of exploring complex prob-ability structures, as frequently arise in UQ for PDE con-strained inverse problems. However, the required geomet-ric information is very expensive to obtain in high dimen-sions. The issue can be mitigated by dimension reductiontechniques. By carefully splitting the unknown parameterspace into a low-dimensional geometry-concentrated sub-space, and an infinite-dimensional geometry-flat subspace,one may then apply geometry-informed MCMC algorithmsto the low-dimensional subspace and simpler methods tothe infinite-dimensional complement. In this work, we ex-plore randomized linear algebraic algorithms as efficientdimension reducing tools to adaptively obtain low dimen-sional subspaces. This is applied to speed up particular ge-ometric infinite-dimensional MCMC methods. The speed-up of dimension-reduced algorithms is tested on both a sim-ulation example and a real application in turbulent com-bustion.

Shiwei LanCalifornia Institute of [email protected]

MS54

Certified Dimension Reduction for Nonlinear Bayesian Inverse Problems

A large-scale Bayesian inverse problem has a low effective dimension when the data are informative only on a low-dimensional subspace of the input parameter space. Detecting this subspace is essential for reducing the complexity of the inverse problem. For example, this subspace can be used to improve the performance of MCMC algorithms and to facilitate the construction of surrogates for expensive likelihood functions. Several different methods have recently been proposed to construct such a subspace, in particular gradient-based approaches like the likelihood-informed subspace method and the active subspace method, or non-gradient-based methods like the truncated Karhunen-Loeve decomposition. In the context of nonlinear inverse problems, however, these methods are essentially heuristics whose approximation properties are poorly understood. We develop a new method which allows bounding the Kullback-Leibler divergence between the posterior distribution and the approximation induced by certain decompositions. We then identify the subspace that minimizes this bound, and compute it using gradients of the likelihood function. This approach allows the approximation error to be rigorously controlled. A numerical comparison with existing methods favorably illustrates the performance of our new method.
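For orientation, the sketch below builds the gradient-based diagnostic matrix shared by this family of methods and extracts the informed subspace from its leading eigenvectors; the rank is picked by a simple residual-eigenvalue cutoff, which merely stands in for the certified Kullback-Leibler bound of the talk (all names and the example likelihood are hypothetical).

```python
import numpy as np

def informed_subspace(grad_logL, prior_samples, tol=1e-2):
    """Detect a low-dimensional data-informed subspace from likelihood gradients.

    Minimal sketch: form H = E[ grad logL grad logL^T ] over prior samples
    and keep the leading eigenvectors; the rank is chosen by a residual
    eigenvalue criterion (a hypothetical simplification of a certified bound).
    """
    G = np.array([grad_logL(x) for x in prior_samples])   # (N, d) gradients
    H = G.T @ G / len(prior_samples)                       # diagnostic matrix
    eigval, eigvec = np.linalg.eigh(H)                     # ascending order
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    tail = np.cumsum(eigval[::-1])[::-1]                   # tail[r] = sum of eigval[r:]
    r = int(np.argmax(tail <= tol * eigval.sum()))         # smallest acceptable rank
    return eigvec[:, :max(r, 1)], eigval

# Example: a likelihood that only sees two linear combinations of 10 inputs.
d = 10
A = np.random.default_rng(0).standard_normal((2, d))
grad_logL = lambda x: -A.T @ (A @ x)                       # gradient of -0.5*||A x||^2
U, ev = informed_subspace(grad_logL, np.random.default_rng(1).standard_normal((500, d)))
print("detected dimension:", U.shape[1])                   # expected: 2
```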

Olivier Zahm, Alessio SpantiniMassachusetts Institute of [email protected], [email protected]

Tiangang CuiMonash [email protected]

Kody LawOak Ridge National [email protected]

Youssef M. MarzoukMassachusetts Institute of [email protected]

MS55

Point Cloud Discretization of Fokker-Planck Operators for Committor Functions

The committor functions provide useful information for understanding transitions of a stochastic system between disjoint regions in phase space. In this work, we develop a point cloud discretization for Fokker-Planck operators to numerically calculate the committor function, with the assumption that the transition occurs on an intrinsically low-dimensional manifold in the ambient, potentially high-dimensional configurational space of the stochastic system. Numerical examples on model systems validate the effectiveness of the proposed method.

Rongjie LaiRensselaer Polytechnic [email protected]

Jianfeng LuMathematics DepartmentDuke [email protected]

MS55

Rare Event Analysis on Random Elliptic PDEs with Small Noise

Partial differential equations with random inputs have become popular models to characterize physical systems with uncertainty. In this talk, I will present asymptotic rare event analysis for such elliptic PDEs with random inputs. In particular, we consider the asymptotic regime in which the noise level converges to zero, suggesting that the system uncertainty is low but does exist. We develop sharp approximations of the probability of a large class of rare events.

Xiaoou LiUniversity of [email protected]

Jingchen Liudepartment of statisticsColumbia [email protected]

Xiang ZhouDepartment of MathematicsCity Univeristy of Hong [email protected]

Jianfeng LuMathematics DepartmentDuke [email protected]

MS55

A Multilevel Approach Towards Unbiased Sampling of Random Elliptic Partial Differential Equations

Partial differential equations are a powerful tool to characterize various physical systems. In practice, measurement errors are often present and probability models are employed to account for such uncertainties. In this paper, we present a Monte Carlo scheme that yields unbiased estimators for expectations of random elliptic partial differential equations. This algorithm combines multilevel Monte Carlo and a randomization scheme proposed by Rhee and Glynn (2012, 2013). Furthermore, to obtain an estimator with both finite variance and finite expected computational cost, we employ higher order approximations.
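A minimal sketch of the Rhee-Glynn single-term randomization over a level hierarchy is shown below, with a toy "solver" standing in for the random elliptic PDE; the estimator is unbiased because each level difference is weighted by the inverse of its sampling probability (geometric level distribution and all names are hypothetical choices).

```python
import numpy as np

rng = np.random.default_rng(0)

def solver(level, omega):
    """Stand-in for a PDE quantity of interest computed on mesh 'level' for
    random input 'omega': a sequence converging to sin(omega) with O(2^-level) bias."""
    h = 2.0 ** (-level)
    return np.sin(omega) + h * np.cos(3 * omega)

def single_term_estimator(p=0.6):
    """One draw of the single-term randomized multilevel estimator.
    E[Z] = sum_l E[Delta_l] = E[lim_level QoI], since each level-l difference
    is divided by its sampling probability P(level = l) = p (1-p)^l."""
    omega = rng.normal()
    level = rng.geometric(p) - 1
    prob = p * (1 - p) ** level
    delta = solver(level, omega) - (solver(level - 1, omega) if level > 0 else 0.0)
    return delta / prob

samples = np.array([single_term_estimator() for _ in range(20_000)])
print("unbiased estimate:", samples.mean(), "+/-", samples.std() / np.sqrt(len(samples)))
```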

Xiaoou LiUniversity of [email protected]

Jingchen Liudepartment of statisticsColumbia [email protected]

Shun XuColumbia [email protected]

MS55

Model Reduction for Diffusion-like Processes near Low-dimensional Manifolds in High Dimensions

We assume we have access to a (large number of expensive) simulators that can return short simulations of a high-dimensional stochastic system, and introduce a novel statistical learning framework for learning automatically a family of local approximations to the system, that can be (automatically) pieced together to form a fast global reduced model for the system, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales, but also at large time scales, under suitable assumptions on the dynamics. We discuss applications to homogenization of rough diffusions in low and high dimensions, as well as relatively simple systems with separations of time scales, and deterministic chaotic systems in high dimensions, that are well-approximated by stochastic differential equations.

Mauro MaggioniDepartment of Mathematics, Duke [email protected]

MS56

Polynomial Approximation of High-dimensional Functions on Irregular Domains

Smooth, high-dimensional functions defined on tensor domains can be approximated using simple orthonormal bases formed as tensor products of one-dimensional orthogonal polynomials. On the other hand, constructing orthogonal polynomials on irregular domains is a difficult and computationally-intensive task. Yet irregular domains arise in many practical problems, including uncertainty quantification, model-order reduction, optimal control and numerical PDEs. In this talk I will discuss the approximation of smooth, high-dimensional functions on irregular domains using a method known as polynomial frame approximation. Importantly, this method corresponds to approximation in a frame, rather than a basis, a fact which leads to several key differences, both theoretical and numerical in nature. However, this method requires no orthogonalization or parametrization of the domain boundary, thus making it suitable for very general domains. I will discuss theoretical results for the approximation error, stability and sample complexity of this algorithm, and show the method's suitability for high-dimensional approximation through independence (or weak dependence) of the guarantees on the ambient dimension d. I will also present several numerical results, and highlight some open problems and challenges.

Ben AdcockSimon Fraser Universityben [email protected]

Daan HuybrechsDepartment of Computer ScienceUniversity of [email protected]

MS56

Convergence of Sparse Polynomial Collocation in Infinite Dimensions

Abstract not available at time of publication.

Oliver G. ErnstTU ChemnitzDepartment of [email protected]

MS56

Optimal Weighted Least-squares Methods for Approximation in High Dimension

In arbitrary dimension d, we consider the problem of reconstructing an unknown bounded function u defined on a domain X ⊆ Rd from noiseless or noisy samples of u at n points. We measure the reconstruction error in the L2(X, dρ) norm for some probability measure dρ. Given a linear space Vm with dim(Vm) ≤ n, we study the weighted least-squares approximation of u from Vm based on independent random samples. We establish results in expectation and in probability for the convergence of weighted least-squares estimators. These results show that for an optimal choice of the sampling measure dμ and weight function, which depends on Vm and dρ, stability and optimal accuracy of the estimators are achieved under the mild condition that n scales linearly with m, up to an additional logarithmic factor. Afterwards we discuss sampling methods for the efficient generation of independent random samples from the optimal measure dμ, and finally show some numerical results when dρ is a product measure and Vm are multivariate polynomial spaces.
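As an illustration of the optimal weighted least-squares recipe in one dimension with a Legendre basis, the sketch below samples (approximately, on a fine grid) from the optimal measure dμ proportional to the averaged squared basis functions and solves the corresponding weighted least-squares problem; the grid-based sampler and the target function are hypothetical simplifications of the exact sampling methods discussed in the talk.

```python
import numpy as np
from numpy.polynomial import legendre

m, n = 10, 40                              # dim(V_m) and number of samples
rng = np.random.default_rng(0)

def basis(x):
    """Orthonormal Legendre basis on [-1, 1] with respect to drho = dx/2."""
    V = legendre.legvander(x, m - 1)                   # columns L_0 .. L_{m-1}
    return V * np.sqrt(2 * np.arange(m) + 1)           # orthonormalization

# Optimal sampling density dmu = (1/m) sum_j phi_j^2 drho; weights w = drho/dmu.
grid = np.linspace(-1, 1, 20_001)
dens = np.sum(basis(grid) ** 2, axis=1) / m            # density w.r.t. drho
x = rng.choice(grid, size=n, p=dens / dens.sum())      # discrete-grid sampling (sketch)
w = m / np.sum(basis(x) ** 2, axis=1)

# Weighted least-squares fit of a target function u.
u = lambda t: np.exp(t) * np.sin(5 * t)
A = basis(x)
coef = np.linalg.lstsq(A * np.sqrt(w)[:, None], u(x) * np.sqrt(w), rcond=None)[0]

test = rng.uniform(-1, 1, 5_000)
print("L2(drho) error:", np.sqrt(np.mean((basis(test) @ coef - u(test)) ** 2)))
```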

Giovanni MiglioratiUniversit e Pierre et Marie [email protected]

Albert CohenUniversite Pierre et Marie Curie, Paris, [email protected]

MS56

A Domain-decomposition-based Approximation Technique for Convection-dominated PDEs with Random Velocity Fields

We developed a new model-order reduction technique for PDEs with random field input. In our approach, domain decomposition is used to reduce the parametric dimension in local physical domains; proper orthogonal decomposition is used to reduce the degrees of freedom along the interfaces and within subdomains; and sparse approximation is used to construct surrogates of the local stiffness matrices obtained from the Schur complement. The main advantages of our method are: (i) the complexity of the surrogate model is independent of the FEM mesh size (online-offline decomposition); (ii) it is able to handle both colored noise and discrete white noise (i.e., piecewise constant random fields); and (iii) it shows promising accuracy in solving convection-dominated transport with random velocity fields.

Guannan Zhang, Lin MuOak Ridge National [email protected], [email protected]

MS57

Inferring on the Parameters of a Microscopic Model from the Estimated Parameters of a Macroscopic One

Estimating the unknown parameters of a microscopic, stochastic model from large scale observations poses a number of challenges, and requires the development of new methodological approaches. In the Bayesian statistical framework, stochastic forward models often lead to intractable likelihood densities, making standard statistical computations a challenge. Such models are addressed by Approximate Bayesian Computing (ABC). We propose an alternative approach combining Bayesian statistics and particle techniques for inference beyond the limits of model scales. The viability of the approach is demonstrated on three computed examples describing diffusion, chemical kinetics, and cell culture assay, and a connection with existing ABC methods is discussed.

Daniela CalvettiCase Western Reserve UnivDepartment of Mathematics, Applied Mathematics [email protected]

Margaret Callahan, Erkki SomersaloCase Western Reserve [email protected], [email protected]

MS57

Numerical Posterior Distribution Error Control and Bayes Factors in the Bayesian Uncertainty Quantification of Inverse Problems

In the Bayesian analysis of inverse problems, in most relevant cases the forward maps are defined in terms of a system of (O,P)DEs that involve numerical solvers. These solvers then lead to a numerical (approximated) posterior distribution. Recently several results have been published on the regularity conditions required of such numerical methods to ensure convergence of the numerical to the theoretical posterior. However, more practical guidelines are needed. I present some recent results showing, by means of Bayes factors, that the numerical posterior tends to the theoretical posterior in the same order as the numerical solver used in the forward map. Moreover, when error estimates are available for the solver, we can use a bound on these errors that is proven to lead to basically error-free posteriors. That is, given that we are observing noisy data, we may tolerate an amount (relative to the data noise) of numerical error in the solver, and end up with a basically error-free posterior. In this talk I will show these results, present some examples in ODEs and PDEs, and comment on the generalizations to the infinite dimensional setting.

J. Andres ChristenCIMAT, [email protected]

Marcos A. CapistranCIMAT [email protected]

Miguel A. MorelesCentro de Investigacion [email protected]

MS57

Multilevel Sparse Leja Approximations in Bayesian Inversion

The Bayesian inversion of complex, high-dimensional applications is challenging, both mathematically and computationally. In addition, the prior distribution, i.e. the known or assumed knowledge about the solution before any measurement data is accounted for, has a large influence on the solution of the inversion, i.e. the posterior distribution. In this talk, we propose an approach to tackle the aforementioned challenges, as follows. We employ multilevel adaptive sparse grid collocation to construct a surrogate of the underlying forward model such that, as we perform the multilevel decomposition, we sequentially update the prior knowledge. To this end, to discretize the problem's domain, we use a hierarchy of finite elements with a resolution increasing by a constant factor. To discretize the stochastic domain, we use adaptive sparse approximations constructed on weighted Leja sequences. While on the first level we construct the surrogate via discretizing the prior space, starting with the second level we take advantage of the multilevel decomposition and construct weighted Leja points using the posterior from the previous level as weight function; since sparse grids are defined on tensor-product spaces, we first need to transform these posteriors into separable distributions via, e.g., variational Bayes approaches or copula transforms. To obtain a comprehensive overview of our approach, we test it on several problems.

Ionut-Gabriel FarcasTechnical University of MunichDepartment of Scientific Computing in Computer [email protected]

Jonas LatzTechnical University of [email protected]

Elisabeth UllmannTU [email protected]

Tobias NeckelTechnical University of [email protected]

Hans-Joachim BungartzTechnical University of Munich, Department ofInformaticsChair of Scientific Computing in Computer [email protected]

MS57

Iterative Update of Modeling Error in Computational Inverse Problems

Inverse problems are characterized by their sensitivity to noise in the data. In addition to the exogenous observation noise, errors due to discretization and model uncertainties, such as ill-determined model parameters, should be taken into account, in particular when the quality of the data is good and the modeling errors dominate. In the Bayesian framework, the modeling errors can be described as random variables with a distribution that can be approximated based on the current information about the unknown of interest, such as the prior distribution. In this talk, some recent results concerning iterative updating of the modeling error are discussed. The work is based on collaboration with Daniela Calvetti, Matt Dunlop and Andrew Stuart.

Erkki SomersaloCase Western Reserve [email protected]

MS58

Model Error in CO2 Retrievals for the OCO-2 Satellite

The Orbiting Carbon Observatory 2 (OCO-2) collects space-based measurements of atmospheric CO2. The CO2 measurements are indirect: the instrument observes radiances (reflected sunlight) over a range of wavelengths, and a physical model is inverted to estimate the atmospheric CO2. This inference is in fact an estimation of physical parameters, which can be both biased and overconfident when model error is present but not accounted for. The OCO-2 operational data processing addresses this problem in a few different ways, e.g. with a post-inference bias correction procedure based on ground measurements. This talk will discuss methods to account for informative model error directly in the inversion procedure to lessen bias and provide more reliable uncertainty estimates.

Jenny BrynjarsdottirCase Western Reserve UniversityDept. of Mathematics, Applied Mathematics [email protected]

MS58

Embedded Model Error and Bayesian Model Selection for Material Variability

The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models, with calibrated model errors, capable of capturing the range of experimental observations due to material variability using Bayesian inference. The model error is introduced by embedding the modeled stochasticity through calibrated distributions on the physical parameters of the model itself [Sargsyan et al., 2015]. The choice of physical parameters used in such an embedding is made using a Bayesian model selection method [Verdinelli et al., 1996]. We quantify the aleatory uncertainty present in the experimental observations consisting of high-throughput micro-stress-strain curves of additively manufactured stainless steel. We demonstrate that the predicted confidence intervals effectively capture the scatter in the experimentally-observed quantities of interest.

Mohammad Khalil, Francesco Rizzi, Ari Frankel,Coleman Alleman, Jeremy Templeton, Jakob Ostien,Brad BoyceSandia National [email protected], [email protected],[email protected], [email protected],[email protected], [email protected],[email protected]

Reese [email protected]

MS58

A Stochastic Operator Approach to Representing Model Inadequacy

Models are imperfect representations of complex physical processes. Representing the uncertainties caused by using inadequate models is crucial to making reliable predictions. We present a model inadequacy representation in the form of a stochastic operator acting on the state variable and discuss methods of incorporating prior knowledge of model shortcomings and relevant physics. This formulation is developed in the context of an inadequate model for contaminant transport through heterogeneous porous media.

Teresa PortoneICESUT [email protected]

Damon McDougallInstitute for Computational Engineering and SciencesThe University of Texas at [email protected]

Robert D. MoserUniversity of Texas at [email protected]

Todd A. OliverPECOS/ICES, The University of Texas at [email protected]

MS58

Physics-constrained Data-driven Modeling of Computational Physics

With the proliferation of high quality datasets, there is atendency to assume that physical modeling can be replacedby data-driven modeling. In this talk, we show some ex-amples of the limitations of relying solely on data to con-struct models of physical problems. A pragmatic solutionis to combine physics-based models with data-based meth-ods and pursue a hybrid approach. The rationale is asfollows: by blending data with existing physical knowl-edge and enforcing known physical constraints, one canimprove model robustness and consistency with physicallaws and address gaps in data. This would constitute adata-augmented physics-based modeling paradigm. An ap-proach that introduces physics constraints with statisticalinference, machine learning and uncertainty propagationwill be introduced. Examples will be presented in applica-tions involving the modeling of fluid flows and moleculardynamics.

Anand Pratap SinghUniversity of [email protected]

Karthik DuraisamyUniversity of Michigan Ann [email protected]

MS59

Calibration, Compensation, Parameter Estimation, and Uncertainty Quantification for Nanoelectrode Array Biosensors

This paper presents the use of a statistical approach to estimate physical/electrochemical parameters of impedance spectroscopy experiments performed with a realistic nanoelectrode array biosensor platform. The Bayesian estimation methodology is based on the combination of nanobiosensor simulations, performed with the ENBIOS tool, with Markov-Chain Monte Carlo (MCMC) analyses. A simple 1D electrode-electrolyte geometry is first considered as a validation test case, allowing the accurate estimation of Stern layer permittivity and salt concentration, as set by a reference analytical model. Then, full 3D analyses of the nanoelectrode array system are performed in order to estimate a number of relevant parameters for measurements in electrolyte. Furthermore, moving to more challenging test cases, the size/permittivity of microparticles suspended in electrolyte will also be discussed. This methodology allows for the determination of impedance spectroscopy data parameters, and quantification of parameter uncertainties in these multi-variable detection problems. It is thus a very promising approach in order to improve the precision of biosensor measurement predictions, which are intrinsically affected by many parameters.

Andrea Cossettini, Paolo ScarboloUniversity of [email protected],[email protected]

Jose Escalante, Benjamin StadlbauerVienna University of [email protected],[email protected]

Naseer MuhammadUniversity of [email protected]

Leila Taghizadeh, Clemens HeitzingerVienna University of [email protected],[email protected]

Luca SelmiUniversita di Modena e Reggio [email protected]

MS59

Optimal Multi-level Monte Carlo and Adaptive Grid Refinement for the Stochastic Drift-diffusion-Poisson System

In this work, we propose an adaptive multilevel Monte-Carlo algorithm to solve the stochastic drift-diffusion-Poisson system. The method is used to quantify noise and fluctuations in various nanoscale devices, e.g., double-gate MOSFETs and FinFETs. The fluctuation due to the random position and the random number of impurity atoms is a challenging problem and leads to uncertainty in the device. Here, we use the finite-element method to model the charge transport. In order to analyze the error, we first prove an a-priori error estimate and show that the error converges quadratically. Afterwards, we use an a-posteriori error estimate to define a local error indicator for each element and determine in which part of the domain the error is particularly high and, therefore, more mesh elements are necessary. In the developed technique, in the multilevel Monte-Carlo setting, adaptive mesh refinement according to the derived error indicator is used instead of uniform mesh refinement. For a given error tolerance ε, we refine the mesh as long as the discretization error is larger than ε²/2. Then, we define an optimization problem to calculate the optimal number of samples. In other words, the total work is minimized subject to the condition that the statistical error is equal to ε²/2. Compared with uniform mesh refinement, better statistical and discretization error convergence is achieved and lower computational work is obtained.
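The sample-allocation step described above reduces to a standard constrained-minimization formula; the sketch below computes per-level sample sizes that minimize total work subject to a statistical-error budget of ε²/2, with hypothetical variance and cost inputs standing in for the quantities estimated by the adaptive algorithm.

```python
import numpy as np

def mlmc_sample_sizes(V, C, eps):
    """Optimal number of samples per level for multilevel Monte Carlo.

    Minimizes total work sum_l N_l * C_l subject to the statistical-error
    (variance) budget sum_l V_l / N_l <= eps^2 / 2, via a Lagrange-multiplier
    argument.  V, C: per-level variances and costs of the level differences.
    """
    V, C = np.asarray(V, float), np.asarray(C, float)
    mu = np.sum(np.sqrt(V * C))                  # Lagrange-multiplier constant
    return np.ceil(2.0 / eps**2 * np.sqrt(V / C) * mu).astype(int)

# Example with typical decay: variances ~ 4^{-l}, costs ~ 8^{l}.
levels = np.arange(5)
V = 1e-2 * 4.0 ** (-levels.astype(float))
C = 8.0 ** levels.astype(float)
print(mlmc_sample_sizes(V, C, eps=1e-2))         # many cheap samples, few expensive ones
```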

Amirreza KhodadadianTechnical University Vienna (TU Vienna)[email protected]


Maryam ParvziTechnical University Vienna (TU Vienna)[email protected]

Clemens HeitzingerVienna University of [email protected]

MS59

Maximum-principle-satisfying Second-order Intrusive Polynomial Moment Scheme

Using standard intrusive techniques when solving hyper-bolic conservation laws with uncertainties can lead to os-cillatory solutions as well as non-hyperbolic moment sys-tems. The Intrusive Polynomial Moment (IPM) methodensures hyperbolicity of the moment system while restrict-ing oscillatory over- and undershoots to specified bounds.In this talk, we derive a second-order discretization of theIPM moment system which fulfills the maximum principle.This task is carried out by investigating violations of thespecified bounds due to the optimization method as wellas the limiter used in the scheme. This investigation givesweaker conditions on the entropy that is used, allowingthe choice of an entropy which enables choosing the ex-act minimal and maximal value of the initial condition asbounds. Solutions calculated with the derived scheme arenon-oscillatory while fulfilling the maximum principle. Asthe scheme is second-order accurate, we are able to heavilyreduce the numerical costs.

Jonas KuschRWTH Aachen [email protected]

Martin FrankKarlsruhe Institute of [email protected]

Graham AlldredgeFU Berlingraham.alldredge@fu-berli

MS59

Sensitivity Analysis and High Dimensional Kinetic Equation with Uncertainty

We study the Vlasov-Poisson-Fokker-Planck system with uncertainty and multiple scales. Here the uncertainty, modeled by random variables, enters the solution through different sources, while the multiple scales lead the system to its high-field or parabolic regimes. With the help of proper Lyapunov-type inequalities, under some mild conditions on the uncertainty, the regularity of the solution in the random space, as well as exponential decay of the solution to the global Maxwellian, are established under Sobolev norms, which are uniform in terms of the scaling parameters. These are the first hypocoercivity results for a nonlinear kinetic system with random inputs. Based on the sensitivity analysis in random space, we establish the spectral convergence of the generalized polynomial chaos stochastic Galerkin method. We also prove that a reduced basis method breaks the curse of dimensionality for several kinetic equations with high dimensional random inputs. Numerical examples are given to verify these properties at the end.

Yuhua ZhuUniversity of Wisconsin Madison

[email protected]

Shi JinShanghai Jiao Tong University, China and theUniversity of [email protected]

MS60

An Adaptive Local Reduced Basis Trust-region Method for Risk-averse PDE-constrained Optimization

The numerical solution of risk-averse PDE-constrained op-timization problems requires substantial computational ef-fort resulting from the discretization of the underlying PDEin both the physical and stochastic dimensions. To prac-tically solve problems with high-dimensional uncertainties,one must intelligently manage the individual discretizationfidelities throughout the optimization iteration. In thiswork, we combine an inexact trust-region algorithm withthe recently developed local reduced basis approximationto efficiently solve risk-averse optimization problems withPDE constraints. The main contribution of this work is anumerical framework for systematically constructing sur-rogate models for the trust-region subproblem and the ob-jective function using local reduced basis approximations.We demonstrate the effectiveness of our approach througha numerical example.

Wilkins AquinoProfessorDept. of Civil and Environmental Engineering, [email protected]

MS60

Risk-averse Topology Optimization

Topology optimization is an iterative design process whichoptimally distributes a volume of material by minimizingan objective and fulfilling a set of constraints. Each it-eration step requires a solution of a PDE describing thephysics of the problem. The material distribution is repre-sented by using a density field which takes values one if apoint is occupied with material and zero otherwise. Reg-ularization techniques ensure the existence of a solution.The density field is obtained by a set of transformations ap-plied on intermediate density like fields. Recently these in-termediate fields have been identified to match transforma-tions in nano- and micro-scale manufacturing. These sim-ilarities lead to optimized solutions described directly bycontrol parameters supplied to the manufacturing equip-ment. Imperfections in the production are modeled as ad-ditional random fields and random variables affecting thesystem response directly. The introduction of uncertain-ties leads to full digitalization of the design-manufacturingchain without any human intervention. However, such adesired scenario increases further the already high compu-tational requirements. Thus, the goal of the presentation isto discuss and to compare different techniques for loweringof the computational burden. The methods transform theinitial topology optimization problem in computationallyfeasible one.

Boyan S. Lazarov
Department of Mechanical Engineering
Technical University of Denmark
[email protected]

MS60

Scalable Algorithms and Software for PDE-constrained Optimization under Uncertainty

Optimal control and design of physical systems modeled bypartial differential equations (PDEs) is an important goalof many science and engineering applications. Typically,PDE inputs such as model coefficients, boundary condi-tions and initial conditions are unknown and estimatedfrom noisy, incomplete data. The numerical solution ofthese problems poses a number of theoretical, algorithmicand software challenges. We begin this talk by introducingrisk measures as a means of controlling uncertainty in themodel parameters. Second, we review several optimiza-tion formulations that enable scalable solution techniques,and outline a few numerical algorithms. Third, we discusstheir implementations in the Rapid Optimization Library(ROL). To conclude, we solve risk-neutral and risk-aversePDE-constrained optimization problems using ROL in twoapplication areas: optimal control of thermal fluids andtopology optimization of elastic structures.

Denis RidzalSandia National [email protected]

Drew P. KouriOptimization and Uncertainty QuantificationSandia National [email protected]

MS60

Sparse Solutions in Optimal Control of PDEs with Uncertain Coefficients

Sparse solutions of optimal control problems governed byelliptic PDEs with uncertain coefficients are studied. In ap-plications, regions with nonzero controls can be interpretedas optimal locations for control devices. Two formulationsare presented, one where the target is a deterministic opti-mal control that optimizes the mean control objective, anda formulation aiming at stochastic controls that all sharethe same sparsity structure. I will focus on the stochas-tic control problem governed by a linear elliptic PDE withlinearly entering infinite-dimensional uncertain parameterfield. I propose and analyze a norm reweighting algorithmfor the solution of this problem. Combined with low-rankoperator approximations, this results in an efficient solu-tion method that avoids iteration over the uncertain pa-rameter space. Numerical examples with the Laplace andHelmholtz PDE operators are presented.

Georg StadlerCourant Institute for Mathematical SciencesNew York [email protected]

MS61

An Adaptive Multi-fidelity Metamodel for UQ and Optimization Based on Polyharmonic Spline

Ship hydrodynamic performance is significantly affected by operational and environmental uncertainty, stemming from speed, sea state and wave spectrum parameters, and heading. The quantification of the effects of these uncertain parameters on the ship performance (resistance, motions, etc.) is evaluated by coupling hydrodynamic solvers (possibly high-fidelity) with UQ methods. Furthermore, in order to define robust and reliable hull forms, the estimators provided by UQ are included as merit factors in a simulation-based design optimization process. The latter can be computationally very expensive and therefore efficient methodologies for both UQ and optimization are sought after. Here, an adaptive multi-fidelity metamodel (AMFM) is presented to reduce the computational effort while maintaining highly accurate predictions. The AMFM combines the interpolation of high- and low-fidelity solutions based on an ensemble of polyharmonic splines. The metamodel training sets (low- and high-fidelity) are refined based on the scatter in the predictions. A method is presented for selecting adaptively which fidelity needs to increase its training set size. The use of AMFM with UQ and optimization methods is discussed. The proposed AMFM is applied to the UQ and hull form optimization of a destroyer-type ship, subject to uncertain cruise speed, where high- and low-fidelity solutions are provided by RANS and potential-flow solvers, respectively.
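The sketch below illustrates the polyharmonic-spline multi-fidelity ingredient with SciPy's thin-plate spline interpolants: a spline fit to many low-fidelity samples is corrected additively by a spline fit to the high-minus-low discrepancy at a few high-fidelity samples, and the fused metamodel is used for Monte Carlo UQ. The additive correction and the toy responses are hypothetical simplifications of the AMFM.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-ins for low-fidelity (potential flow) and
# high-fidelity (RANS) responses over two uncertain operating parameters.
f_lo = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
f_hi = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * np.cos(9 * X[:, 0])

X_lo = rng.uniform(0, 1, (60, 2))      # many cheap evaluations
X_hi = rng.uniform(0, 1, (10, 2))      # few expensive evaluations

# Polyharmonic (thin-plate) spline surrogates: low-fidelity trend plus correction.
s_lo = RBFInterpolator(X_lo, f_lo(X_lo), kernel="thin_plate_spline")
s_corr = RBFInterpolator(X_hi, f_hi(X_hi) - s_lo(X_hi), kernel="thin_plate_spline")
surrogate = lambda X: s_lo(X) + s_corr(X)

# UQ with the metamodel: propagate uncertain inputs by plain Monte Carlo.
X_mc = rng.uniform(0, 1, (50_000, 2))
print("mean, std of QoI:", surrogate(X_mc).mean(), surrogate(X_mc).std())
```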

Matteo DiezCNR-INSEAN, The Italian Ship Model [email protected]

Riccardo Pellegrini, Andrea SeraniCNR-INSEANNational Research Council-Marine Technology [email protected], [email protected]

MS61

B-splines on Sparse Grids for Stochastic Collocation

The sparse grid combination technique is an established method to tackle higher-dimensional problems in the context of uncertainty quantification. Classical approaches use combinations of anisotropic full grids with global polynomials as basis functions to approximate quantities of interest or to compute coefficients of the polynomial chaos expansion via pseudo-spectral projection. We replace the global polynomial basis in the sparse grid combination technique by a local B-spline basis to reduce negative effects of numerical instabilities in the solution on the surrogate. B-splines do not suffer from the Gibbs phenomenon, and their polynomial degree can be chosen arbitrarily depending on the problem at hand to increase the order of convergence. We introduce a not-a-knot B-spline basis for anisotropic full grids to cope with inaccuracies at the boundary of the stochastic domain. Furthermore, we use established adaptive refinement criteria that are derived from the sparse grid approach to reduce the costs, i.e. the number of forward model runs, for the approximation. Numerical studies demonstrate the effectiveness of this approach compared to common sparse grid methods with various grids based on Clenshaw-Curtis points and Leja sequences, as well as to polynomial chaos expansion.

Michael F. Rehme, Fabian Franzelin, Dirk Pfluger
University of Stuttgart
[email protected], [email protected], [email protected]

MS61

Soft Information in Uncertainty Quantifications

We propose a framework for uncertainty quantificationbased on nonparametric statistical estimation and soft in-formation about shape, support, continuity, slope, loca-tion of modes, values, and other properties of the proba-bility density function (pdf) of a response quantity of in-terest. Using maximum likelihood, maximum entropy, andrelated criteria, we obtain estimators of such pdfs and es-tablish their strong consistency under mild assumptionsand robustness in the presence of model misspecification.We also achieve strong consistency of a rich class of plug-in estimators for modes of pdfs for response quantities.Computations are facilitated by approximation of the non-parametric estimation problems using epi-splines. Specificexamples illustrate the framework including an estimatorsimultaneously subject to bounds on pdf values and its(sub)gradients, restriction to concavity, penalization thatencourages lower modes, and imprecise information aboutthe expected value of the response quantity of interest.

Johannes O. RoysetOperations Research DepartmentNaval Postgraduate [email protected]

MS61

IGA-based Multi-index Stochastic Collocation

Multi-Index Stochastic Collocation (MISC) is a method of the multi-level family, aimed at reducing costs when repeatedly solving a parametric PDE for UQ purposes by exploiting a hierarchy of discretizations. In this talk, we show how to combine MISC with Isogeometric Analysis (IGA) solvers, leveraging their tensor structure. IGA solvers employ splines instead of finite elements to solve PDEs, which enables a simpler meshing process, exact geometry representation and high-continuity basis functions.

Joakim [email protected]

Lorenzo TamelliniUniversita Di [email protected]

MS62

Convergence of Consistent Bayesian Inversion using Surrogates

We summarize a newly developed Bayesian approach to constructing pullback probability measures regularized by prior densities solving stochastic inverse problems. To improve computational efficiency, we investigate the use of surrogate approximations to the parameter-to-observable map at the heart of the inverse problem. We consider the practical conditions required of the surrogate approximations and their refinements such that we can prove the approximate posterior densities converge. The main theoretical result represents a converse to Scheffe's lemma in the context of stochastic inverse problems and requires a generalization of the famous Arzela-Ascoli theorem to certain spaces of discontinuous functions. Numerical results are used to motivate the conceptual issues and highlight the strategy considered in this theoretical approach to proving convergence.

Troy ButlerUniversity of Colorado [email protected]

Tim WildeyOptimization and Uncertainty Quantification Dept.Sandia National [email protected]

John D. JakemanSandia National [email protected]

MS62

Adaptive Sampling for Efficient UQ using Voronoi Piecewise Surrogates

Voronoi Piecewise Surrogate (VPS) models have been recently introduced as a local alternative to standard global surrogate models for Uncertainty Quantification (UQ) in computational simulations. VPS models were demonstrated to be successful in modeling high-dimensional functions that exhibit discontinuities, without constraining sample locations. In its basic (non-adaptive) version, a VPS model is constructed on a user-defined sample set. From an implementation perspective, a highly desired feature in a surrogate model is its ability to propose future samples for improving the model accuracy. In this work, we introduce a new adaptive sampling capability of VPS models that caters to both requirements. Our adaptation mechanism relies on information from iteratively-constructed VPS models to generate new samples. We demonstrate the efficiency of adaptive VPS models using a collection of test functions that exhibit different behaviors (e.g. noise, high gradients, and discontinuities). We compare our adaptive VPS model to standard surrogates such as sparse grids and Gaussian processes.
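A minimal sketch of the piecewise-local idea follows (hypothetical; not the VPS implementation itself): every sample owns its implicit Voronoi cell via a nearest-neighbor lookup and carries a local linear fit, so discontinuous responses are never smoothed across the whole domain. An adaptive variant would then add points in cells whose neighboring fits disagree most.

```python
import numpy as np
from scipy.spatial import cKDTree

class VoronoiPiecewiseSurrogate:
    """Piecewise-local surrogate in the spirit of VPS: each sample point owns
    its implicit Voronoi cell and a local linear model built from its nearest
    neighbors; a query is answered by the cell it falls in (sketch only)."""

    def __init__(self, X, f, n_neighbors=8):
        self.X, self.f = np.asarray(X, float), np.asarray(f, float)
        self.tree = cKDTree(self.X)
        k = min(n_neighbors, len(self.X))
        self.coef = []
        for i in range(len(self.X)):
            _, idx = self.tree.query(self.X[i], k=k)               # local neighborhood
            A = np.hstack([np.ones((k, 1)), self.X[idx] - self.X[i]])
            self.coef.append(np.linalg.lstsq(A, self.f[idx], rcond=None)[0])

    def __call__(self, Xq):
        Xq = np.atleast_2d(Xq)
        _, cells = self.tree.query(Xq)                             # Voronoi cell lookup
        out = np.empty(len(Xq))
        for j, (x, i) in enumerate(zip(Xq, cells)):
            c = self.coef[i]
            out[j] = c[0] + (x - self.X[i]) @ c[1:]
        return out

# Example: a discontinuous response surface in two dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 2))
f = np.where(X[:, 0] > 0, np.sin(4 * X[:, 1]), -1.0)
vps = VoronoiPiecewiseSurrogate(X, f)
print(vps([[0.5, 0.2], [-0.5, 0.2]]))
```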

Ahmad A. RushdiUniversity of California, [email protected]

Marta D’EliaSandia National [email protected]

Laura SwilerSandia National LaboratoriesAlbuquerque, New Mexico [email protected]

Eric PhippsSandia National LaboratoriesOptimization and Uncertainty Quantification [email protected]

Mohamed S. EbeidaSandia National [email protected]

MS62

On the Ensemble Propagation for Efficient Uncertainty Quantification of Mechanical Contact Problems

A new approach called embedded ensemble propagationhas recently been proposed to improve the efficiencyof sampling-based uncertainty quantification methods onemerging architectures. It consists of simultaneously eval-uating a subset of samples of the model, instead of eval-uating them individually. This method improves memoryaccess patterns, enables sharing of information from sampleto sample, reduces message passing latency, and improvesopportunities for vectorization. However, the impact ofthese improvements on the efficiency of the code dependson its code divergence, whereby individual samples withinan ensemble must follow different code execution paths. Inthis presentation we will show the feasibility of propagatingan ensemble through mechanical contact problems, discusssome of the code divergence issues arising in mechanicalcontact problems where each sample within an ensemblecan give rise to a different contact configuration, discussstrategies to manage them, and illustrate them with nu-merical examples. At the end we will extend these notionsto more general non-linear problems.

Kim Liegeois, Romain BomanUniversity of [email protected], [email protected]

Eric PhippsSandia National LaboratoriesOptimization and Uncertainty Quantification [email protected]

Tobias A. WiesnerSandia National [email protected]

Maarten ArnstUniversite de [email protected]

MS62

An Ensemble Generation Method for Efficient UQ Based on Local Surrogate Models

Several Uncertainty Quantification (UQ) methods requireensemble propagation of samples for efficient processingon parallel architectures. In this work, we introduce anew ensemble generation capability based on adaptive ran-dom sampling. While many methods focus on finding thebest way for grouping selected samples into efficient ensem-ble(s), our approach generates an ensemble of samples fromscratch, adding degrees of design freedom. We use adaptiverandom sampling to iteratively choose samples with bestefficiency, while respecting a user-desired accuracy loss tol-erance. In each iteration, we construct two Voronoi Piece-wise Surrogate (VPS) models to quantify accuracy andcost. VPS models were recently demonstrated successfulin modeling high-dimensional functions that exhibit dis-continuities, without constraining sample locations. Wecompare our algorithm to standard grouping techniquesbuilt on sparse grids.

Ahmad A. RushdiUniversity of California, [email protected]

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico 87185
[email protected]

Eric PhippsSandia National LaboratoriesOptimization and Uncertainty Quantification [email protected]

Marta D’Elia, Mohamed S. EbeidaSandia National [email protected], [email protected]

MS63

A Conditional Gaussian Framework for Filtering and Predicting Complex Nonlinear Dynamical Systems

We introduce a conditional Gaussian framework for dataassimilation and prediction. Despite the conditional Gaus-sianity, the dynamics remain highly nonlinear and capturestrongly non-Gaussian features such as intermittency andextreme events. The conditional Gaussian structure al-lows efficient and analytically solvable conditional statisticsthat facilitates the real-time data assimilation and predic-tion. The talk will include three applications of such con-ditional Gaussian framework. In the first part, a physics-constrained nonlinear stochastic model is developed, andis applied to data assimilation and the prediction of theMadden-Julian oscillation with strongly intermittent fea-tures. The second part regards the state estimation of mul-tiscale and turbulent ocean flows using noisy Lagrangiantracers. A suite of reduced filters are designed and com-pared in filtering different features. In the last part of thetalk, a brief discussion of applying conditional Gaussian fil-ters to solve high-dimensional Fokker-Planck equation willbe included. This method is able to beat the curse of di-mensions in traditional particle methods.

Nan ChenNew York [email protected]

Andrew MajdaCourant Institute [email protected]

Xin T. TongNational University of SingaporeDepartment of [email protected]

MS63

More Data is not Always Better: Why and How Feature-based Data Assimilation can be Useful

Suppose you are interested in a system that exhibits transitions from one steady state to another. You have a model for this situation and you are also given data. Your goal is to use the data to estimate parameters of your model. You may ask: are data collected during steady state redundant? If so, one may neglect some, or most, of the steady state data and focus on those data collected during the transition periods. We show by numerical examples that this is indeed the case. The solution of a parameter estimation problem that uses only data collected during transition periods is almost identical to the solution of the problem that uses all of the data, but the problem with fewer data is numerically easier to solve. We present these findings using a mass-spring-damper system as a prototypical example, and also present progress towards a more rigorous framework for such ideas. More generally, we discuss a feature-based approach to parameter estimation. The idea is to identify low-dimensional features within the data and to estimate model parameters based on these features rather than the raw data. This approach can reduce the intrinsic dimension of a parameter estimation problem, which makes obtaining its solution numerically feasible.

Spencer C. Lunderman, Matthias MorzfeldUniversity of ArizonaDepartment of [email protected], [email protected]

MS63

A Class of Nonlinear Filters Induced by Local Couplings

We introduce a class of structure-exploiting nonlinearfilters for high-dimensional state-space models with in-tractable transition kernels. The idea is to transform theforecast ensemble into samples from the current filteringdistribution by means of a sequence of local (in state-space)nonlinear couplings computed mostly via low-dimensionalconvex optimization. This sequence of low-dimensionaltransformations implicitly approximates the projection ofthe filtering distribution onto a manifold of sparse Markovrandom fields (not necessarily Gaussian) and can be car-ried out with limited ensemble sizes. Many square-rootensemble Kalman filters can be interpreted as special in-stances of the proposed framework when we restrict ourattention exclusively to linear transformations, and whenwe neglect approximately sparse Markov structure in thefiltering distribution. We consider applications to chaoticdynamical systems.

Alessio Spantini, Youssef M. MarzoukMassachusetts Institute of [email protected], [email protected]

MS63

Particle Filters for Spatially Extended Systems

Data assimilation is the process of combining mathematical models with collected observations in order to make forecasts and quantify the associated uncertainty. In this talk, we will focus on the sequential nature of many DA problems; this context naturally leads to the repeated application of Bayes' formula. The particle filter algorithm is a Monte-Carlo based approach that allows a straightforward numerical implementation of these recursive updates. Particle filters rely on importance sampling combined with a re-sampling step in order to propagate a set of particles forward in time; contrary to other methods such as the Ensemble Kalman Filter (EnKF), particle filters do not rely on Gaussian assumptions and are asymptotically exact. Although consistent, in order to give reliable results (i.e. avoid collapse), particle filters typically require a number of particles that scales exponentially with the (effective) dimension of the state space; traditional particle filters are consequently unusable for many large scale applications. To make progress, the spatial decorrelation that is inherent to many applied scenarios has to be exploited through localization procedures. In this talk, we review some of the techniques that have recently been developed for this purpose and propose some new extensions.
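For reference, the following is a minimal bootstrap particle filter on a hypothetical scalar state-space model, showing the importance-sampling and resampling steps whose interplay with dimension motivates the localization techniques discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y_obs, n_particles=500, sigma_x=0.5, sigma_y=1.0):
    """Minimal bootstrap particle filter for the toy model
    x_t = 0.9 x_{t-1} + noise,  y_t = x_t + noise (hypothetical example):
    propagate with the transition as proposal, weight by the likelihood,
    then perform multinomial resampling."""
    x = rng.normal(size=n_particles)                            # initial ensemble
    means = []
    for y in y_obs:
        x = 0.9 * x + sigma_x * rng.normal(size=n_particles)    # propagate particles
        logw = -0.5 * ((y - x) / sigma_y) ** 2                  # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                             # filtering mean
        idx = rng.choice(n_particles, size=n_particles, p=w)    # resampling step
        x = x[idx]
    return np.array(means)

# Simulate data from the same model and filter it.
T, x_true, y_obs = 50, 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + 0.5 * rng.normal()
    y_obs.append(x_true + 1.0 * rng.normal())
print(bootstrap_particle_filter(np.array(y_obs))[:5])
```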

Alexandre H. Thiery

National University of [email protected]

MS64

Bayesian Optimization using Stacked Gaussian Processes

A Bayesian optimization strategy is presented in the con-text of Stacked Gaussian Processes. StackedGP is a net-work of independently trained Gaussian processes used tointegrate different datasets through model composition.By using analytical first and second-order moments of aGaussian process with uncertain inputs, approximated ex-pectations of quantities of interests that require an ar-bitrary composition of functions can be obtained. AStackedGP-based Bayesian optimization is presented in thecontext of heterogeneous catalysis to propose new materialsby accounting for the uncertainty in adsorbed and activa-tion energies.

Kareem AbdelfatahPhD StudentUniversity of South [email protected]

Gabriel TerejanuUniversity of South [email protected]

MS64

Scalable Methods for Bayesian Optimal Experimental Design, with Applications to Inverse Scattering

Abstract not available at time of publication.

Omar GhattasThe University of Texas at [email protected]

Umberto VillaUniversity of Texas at [email protected]

MS64

Extending the use of Statistical Emulators in Bayesian Experimental Design

The location of high-dimensional Bayesian designs is acomputationally challenging task due to the requirementof optimising an expensive and noisy utility function. Wetackle this problem through proposing the k-dimensionalapproximate coordinate exchange algorithm which extendsthe use of emulators in the search for the optimal design.The chosen emulator is the Gaussian Process where theutility of a given design is interpolated as a function of dis-tance from a training set of designs. We propose an algo-rithm for training and adaptively augmenting the emulatorfor the location of efficient Bayesian designs. We compareour algorithm against current best practice, and show thatour approach locates highly efficient designs particularly inhigh dimensional design problems.

James McGreeQueensland University of [email protected]

Antony Overstall
University of Southampton
[email protected]

MS64

Towards Exascale Computing: Optimal Parallelization of Experimental Design

Plasma-Wall-Interaction (PWI) related simulations may take days to weeks of simulation time. It is thus mandatory to utilize the available computational budget efficiently. However, many of the input parameters are uncertain, and a proper comparison of the simulation results with experimental data requires the quantification of the uncertainty of the code results as well. Unfortunately the curse of dimension often prohibits a straightforward Monte Carlo sampling of the uncertain parameters. Bayesian experimental design together with the use of surrogate models (Gaussian processes) suggests a more efficient approach. Employing the Kullback-Leibler divergence as utility function, we propose an optimized and automated parameter selection procedure for the simulations. The proposed method is applied to PWI simulations with several uncertain and Gaussian distributed input parameters. New approaches to reduce the wall-clock time for the optimization as well as the performance of other utility functions are also discussed.
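A small sketch of the Kullback-Leibler (expected information gain) utility estimated by nested Monte Carlo for a toy one-parameter forward model (a hypothetical stand-in for the PWI simulation); candidate designs are ranked by this utility, which is the quantity whose optimization the talk proposes to parallelize.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def expected_information_gain(design, n_outer=200, n_inner=200, noise=0.1):
    """Nested Monte Carlo estimate of the expected information gain
    (Kullback-Leibler utility) for the toy model y = sin(design * theta) + noise,
    with a standard-normal prior on theta.  Larger values indicate more
    informative designs (hypothetical example)."""
    forward = lambda theta: np.sin(design * theta)

    theta_out = rng.normal(size=n_outer)                        # prior draws
    y = forward(theta_out) + noise * rng.normal(size=n_outer)   # simulated data
    log_lik = -0.5 * ((y - forward(theta_out)) / noise) ** 2

    theta_in = rng.normal(size=(n_inner, 1))                    # evidence estimate
    log_lik_in = -0.5 * ((y[None, :] - forward(theta_in)) / noise) ** 2
    log_evidence = logsumexp(log_lik_in, axis=0) - np.log(n_inner)
    return np.mean(log_lik - log_evidence)        # shared constants cancel

# Rank a few candidate designs by their estimated utility.
for d in [0.5, 1.0, 2.0, 4.0]:
    print(d, expected_information_gain(d))
```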

Udo von Toussaint, Roland Preuss, Dirk NilleMax Planck Institute for Plasma [email protected], [email protected],[email protected]

MS65

Spatiotemporal Pattern Extraction with Operator-valued Kernels

We present a data-driven framework for extracting spatiotemporal patterns generated by ergodic dynamical systems. Our approach is based on an eigendecomposition of a kernel integral operator acting on a Hilbert space of vector-valued observables of the system, taking values in a space of functions (scalar fields) on a spatial domain. This operator is constructed by combining aspects of the theory of operator-valued kernels for machine learning with delay-coordinate maps of dynamical systems. Specifically, delay-coordinate maps performed pointwise in the spatial domain induce an operator acting on functions on that domain for each pair of dynamical states. We show that in the limit of infinitely many delays the resulting kernel integral operator commutes with a Koopman operator governing the evolution of vector-valued observables under the dynamics; as a result, in that limit the recovered patterns lie in simultaneous eigenspaces of these operators associated with the point spectrum of the dynamical system. We present applications of our framework to the Kuramoto-Sivashinsky model, which demonstrate considerable performance gains in efficient and meaningful decomposition over eigendecomposition techniques utilizing scalar-valued kernels.

Dimitrios GiannakisCourant Institute of Mathematical SciencesNew York [email protected]

Abbas Ourmazd, Joanna SlawinskaUniversity of [email protected], [email protected]

Zhizhen Zhao

University of [email protected]

MS65

Data-driven Discovery of Dynamical Systems and Uncertainty in Model Selection

Data-driven sparse identification algorithms allow one to select candidate models for governing nonlinear dynamical systems by a Pareto front analysis. Cross-validating the model selection process can be performed using AIC or BIC scores for the remaining candidate models. Uncertainty in model parameters can also be cross-validated for parametrized systems using group sparsity penalization techniques, thus allowing one to compute UQ metrics for data-driven discovery of nonlinear systems. Our method is demonstrated on a number of physically motivated examples.
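A compact sketch of the underlying machinery (hypothetical example, not the talk's code): a sequentially thresholded least-squares regression selects sparse candidate models over a sweep of thresholds, and an AIC score is used to compare the surviving candidates.

```python
import numpy as np

def stls(Theta, dXdt, threshold, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression step
    used in data-driven discovery of dynamics (SINDy-style sketch)."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

def aic(Theta, dXdt, Xi):
    """Akaike information criterion used to cross-validate candidate models."""
    n, rss = dXdt.size, np.sum((Theta @ Xi - dXdt) ** 2)
    return n * np.log(rss / n) + 2 * np.count_nonzero(Xi)

# Toy data from dx/dt = -2x + y, dy/dt = -x (hypothetical system).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))
dXdt = X @ np.array([[-2.0, -1.0], [1.0, 0.0]]) + 1e-3 * rng.normal(size=(500, 2))
Theta = np.column_stack([X[:, 0], X[:, 1], X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])

# Pareto-style sweep over sparsity thresholds, scored by AIC.
for lam in [0.01, 0.1, 0.5]:
    Xi = stls(Theta, dXdt, lam)
    print("threshold", lam, "AIC", round(aic(Theta, dXdt, Xi), 1), "\n", Xi.T)
```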

J. Nathan KutzUniversity of Washington, SeattleDept of Applied [email protected]

MS65

A Bayesian Topological Framework for the Identification and Reconstruction of Subcellular Motion

Microscopy imaging allows detailed observations of intra-cellular movements and the acquisition of large data setsthat can be fully analyzed only by automated algorithms.This talk will expose a new computational method for theautomatic identification and reconstruction of trajectoriesfollowed by subcellular particles captured in microscopyimage data. The method operates on stacks of raw imagedata and computes the complete set of contained trajec-tories. The method engages Bayesian considerations withtopological data analysis and makes no assumptions aboutthe underlying dynamics. We test the developed methodsuccessfully against artificial and experimental datasets.Application of the method on the experimental data revealsgood agreement with manual tracking and benchmarkingyields performance scores competitive to the existing stateof the art tracking methods.

Vasileios Maroulas
University of Tennessee, Knoxville
[email protected]

MS65

On the Construction of Probabilistic Newton-type Algorithms

It has recently been shown that many of the existing quasi-Newton algorithms can be formulated as learning algorithms, capable of learning local models of the cost functions. Importantly, this understanding allows us to safely start assembling probabilistic Newton-type algorithms, applicable in situations where we only have access to noisy observations of the cost function and its derivatives. This is where our interest lies. We make contributions to the use of the nonparametric and probabilistic Gaussian process models in solving these stochastic optimisation problems. Specifically, we present a new algorithm that unites these approximations together with recent probabilistic line search routines to deliver a probabilistic quasi-Newton approach. We also show that the probabilistic optimisation algorithms deliver promising results on challenging nonlinear system identification problems where the very nature of the problem is such that we can only access the cost function and its derivative via noisy observations, since there are no closed-form expressions available.

Thomas Schon
Department of Information Technology
Uppsala University
[email protected]

Adrian G. Wills
University of Newcastle NSW Australia
Dept. of Electrical and Computer Engineering
[email protected]

MS66

A Deep Learning Approach in Traffic Prediction for Autonomous Driving

With the development of AI technology, more and more attention is being paid to applications that bring real changes to people's lives, especially in combination with traditional industries. Autonomous driving has been one of the hottest research areas of the last two years, and it will change people's lives substantially. One of the core issues in autonomous driving is the prediction of the behavior of surrounding vehicles and pedestrians, which is a very hard problem. We have studied CNN, RNN and other deep learning algorithms in depth, applied them to autonomous driving, and achieved good results. Follow-up work will extend these methods toward real-life deployment.

Qi Kong
Baidu, USA
[email protected]

MS66

Hybrid Data Assimilation for Aerosol Parameter Estimation

Estimation of the parameters which define aerosol distributions, such as the space-time location of the source and the strength and duration of release, is an important problem for applications such as the enforcement of environmental regulations and treaties. The mathematical and computational challenges stem from the sparsity of observations in space and time; the complexity and uncertainty in the nonlinear advection-diffusion-reaction models; and confounding levels of background noise. These factors contribute to an ill-posed and poorly regularized inverse problem. In this talk, we will consider several techniques for parameter estimation in this domain, and focus on a hybrid technique, motivated by the multi-scale structure of the dynamics and the sparsity in the data, which aims to reduce the dimensionality of the parameter space and accelerate convergence toward more accurate parameter estimates.

William Rosenthal
Pacific Northwest National Laboratory
[email protected]

MS66

Sequential Data Assimilation with Multiple Nonlinear Models and Applications to Subsurface Flow

Complex systems are often described with competing models. Such divergence of interpretation on the system may stem from model fidelity, mathematical simplicity, and, more generally, our limited knowledge of the underlying processes. Meanwhile, available but limited observations of the system state could further complicate one's prediction choices. Over the years, data assimilation techniques, such as the Kalman filter, have become essential tools for improved system estimation by incorporating both model forecasts and measurements; but their potential to mitigate the impacts of the aforementioned model-form uncertainty has yet to be developed. Based on an earlier study of the multi-model Kalman filter, we propose a novel framework to assimilate multiple models with observation data for nonlinear systems, using the extended Kalman filter, ensemble Kalman filter and particle filter, respectively. Through numerical examples of subsurface flow, we demonstrate that the new assimilation framework provides an effective and improved forecast of system behavior.

Peng Wang
Beihang University, China
[email protected]

Akil Narayan
University of Utah
[email protected]

Lun Yang
Beihang University
[email protected]

MS66

Probabilistic Machine Learning for Fluid Flows

Previous works have used machine learning to construct data-driven models for predicting various quantities such as turbulent stresses, sub-grid-scale scalar fluxes, and wall-shear stresses in turbulent flows. However, it is desirable to have quantified uncertainties built into such predictions, which are unfortunately lacking in most existing machine learning techniques. We used Bayesian neural networks (BNN) and direct numerical simulation datasets to construct data-driven turbulent constitutive models that predict turbulent stresses from mean flow field variables with quantified uncertainties. The promising results demonstrate that BNN have great potential in science and engineering applications.

Yang Zeng
Virginia Tech
[email protected]

Jinlong Wu
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

Feng Bao
University of Tennessee at Chattanooga
[email protected]

Hu Wang
Hunan University
[email protected]

Heng Xiao
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

MS67

Graph-based Bayesian Learning: Continuum Limits and Algorithms

The principled learning of functions from data is at the core of statistics, machine learning and artificial intelligence. The aim of this talk is to present some new theoretical and methodological developments concerning the graph-based, Bayesian approach to semi-supervised learning. I will show suitable scalings of graph parameters that provably lead to robust Bayesian solutions in the limit of a large number of unlabeled data. The analysis relies on a careful choice of topology and on the study of the spectrum of graph Laplacians. Besides guaranteeing the consistency of graph-based methods, our theory explains the robustness of discretized function space MCMC methods in semi-supervised learning settings.
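[Editorial illustration.] For readers unfamiliar with the object at the center of this analysis, a minimal construction of an unnormalized graph Laplacian and its low-lying spectrum from a point cloud (a generic Gaussian-weight sketch with an assumed bandwidth, not the specific scalings studied in the talk) is:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def graph_laplacian_spectrum(X, epsilon=0.5, n_eig=10):
    """Build a weighted similarity graph on the point cloud X and return the
    smallest eigenpairs of its graph Laplacian L = D - W; these eigenvectors
    carry the cluster/geometry information used to define graph-based priors."""
    W = np.exp(-cdist(X, X) ** 2 / (2 * epsilon ** 2))   # Gaussian edge weights
    np.fill_diagonal(W, 0.0)                             # no self-loops
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = eigh(L)
    return vals[:n_eig], vecs[:, :n_eig]

# usage on a synthetic unlabeled point cloud
X = np.random.default_rng(0).normal(size=(200, 2))
vals, vecs = graph_laplacian_spectrum(X)
```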

Daniel Sanz-Alonso, Nicolas Garcia Trillos, Zachary Kaplan, Thabo Samakhoana
Brown University
daniel [email protected], nicolas garcia [email protected], zachary [email protected], thabo [email protected]

MS67

Conditional Density Estimation, Filtering and Clustering using Optimal Transport

A new non-parametric procedure for the estimation and simulation of a conditional probability density p(x|z) is presented. The procedure originates from a data based formulation of the Wasserstein barycenter problem and has the byproduct of removing the effect of undesired covariates from the dataset under consideration. This approach is computationally efficient by means of a low rank tensor factorization and it naturally embeds a Bayesian clustering procedure to be used when only few to no points of x are available for a given z. The methodology is illustrated through examples that include the explanation of variability of ground temperature across the continental United States and the prediction of book preference among potential readers.

Giulio Trigila
Courant Institute
[email protected]

MS67

Dimension Reduction in Optimization-based Sampling

Markov chain Monte Carlo (MCMC) relies on efficient proposals to sample from a target distribution of interest. Recent optimization-based MCMC algorithms for Bayesian inference, e.g. randomize-then-optimize (RTO), repeatedly solve optimization problems to obtain proposal samples. We interpret RTO as an invertible map between two random variables and find that this mapping preserves the value of the random variables along many directions. This is an alternative usage of dimension reduction that corresponds to weaker requirements on the selected subspaces.

Zheng Wang
Massachusetts Institute of Technology
USA
zheng [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

Tiangang Cui
Monash University
[email protected]

MS68

Modeling Rare Events in Complex Systems

I will discuss numerical methods for the study of barrier-crossing events, particularly the string method, and present some applications, including transitions in the Kuroshio, the wetting transition on patterned solid surfaces, and the isotropic-nematic transition of liquid crystals.

Weiqing Ren
National University of Singapore and IHPC, Singapore
[email protected]

MS68

A Laguerre Spectral Minimum Action Method for Finding the Most Probable Path

The minimum action method (MAM) is a powerful mathematical tool for computing phase transition phenomena in many noise-driven dynamical systems. Theoretically, the transition time tends to infinity as the noise amplitude goes to zero. There exist several approaches to handle the infinitely long transition time, e.g. taking a finite but large transition time, using the geometric MAM, or determining an optimal finite transition time based on the numerical resolution. In this talk, we introduce another approach, a Laguerre spectral MAM, to handle this problem. This method has several special properties, which will be demonstrated by applying it to several nonlinear ODE systems and PDE systems.

Haijun Yu
LSEC, Institute of Computational Mathematics
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
[email protected]

MS68

Minimum Action Method for Systems with Delays

In this work, we develop an hp-adaptivity strategy for the minimum action method (MAM) using a posteriori error estimates, and generalize it to deal with systems with delays. MAM plays an important role in minimizing the Freidlin-Wentzell action functional, which is the central object of the Freidlin-Wentzell theory of large deviations for noise-induced transitions in stochastic dynamical systems. We will demonstrate the effectiveness of our method for systems with or without delays.
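[Editorial note: standard background rather than the speakers' delayed formulation.] For a small-noise diffusion $dX_t = b(X_t)\,dt + \sqrt{\varepsilon}\,dW_t$, the Freidlin-Wentzell action minimized by MAM over transition paths $\varphi$ on $[0,T]$ is

```latex
S_T(\varphi) \;=\; \frac{1}{2}\int_0^T \bigl\|\dot{\varphi}(t) - b(\varphi(t))\bigr\|^2 \, dt .
```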

Jiayu Zhai
Louisiana State University
[email protected]

Xiaoliang Wan
Louisiana State University
Department of Mathematics
[email protected]

MS68

An Improved Adaptive Minimum Action Method for Non-gradient Systems

In this talk, I introduce two improvements on the minimum action method (MAM). The first one handles the numerical issue of grid points tangling around the saddle point for a large time interval. The original monitor function in the adaptive MAM is updated and linked to the drift vector only, so that the numerical derivative along the path is no longer necessary. The improved aMAM has better robustness. The second one concerns the MAM departing from a stable limit cycle, in which case the optimal path has infinite length. By constructing a quadratic solution of the quasi-potential on the limit cycle from the linearized HJB equation, we run the MAM only from a thin tube around the cycle, so that the optimal path is truncated to be finitely long. The quadratic solution in need is obtained by an efficient numerical method for the underlying Riccati equation. This is joint work with Yiqun Sun and Ling Lin. Support from HK GRF grants 11304715 and 11337216 is acknowledged.

Xiang Zhou
Department of Mathematics
City University of Hong Kong
[email protected]

MS69

Computational Optimal Design of Random Rough Surfaces in Thin-film Solar Cells

Random rough textures can increase the absorbing efficiency of solar cells by trapping the optical light and increasing the optical path of photons, which in turn increases the effectiveness of solar panels. In this talk, I will briefly describe the mechanism and the mathematical modeling of thin-film solar cells to start with. Then I will describe how to use mathematical shape control methods to optimally design random rough surfaces in thin-film solar cells. Finally, I will use numerical experiments to show that the optimally obtained random textures yield an enormous absorption enhancement, and thus increase the solar panel's energy efficiency.

Junshan Lin
Auburn University
[email protected]

Yanzhao Cao
Department of Mathematics & Statistics
Auburn University
[email protected]

MS69

Sparse Grid Quadratures from Conformal Mappings

In this talk we describe the extension of quadrature approximations, built from conformal mappings of interpolatory rules, to sparse grid quadrature in the multidimensional setting. In one dimension, computation of an integral involving an analytic function using these transformed quadrature rules can improve the convergence rate by a factor approaching π/2 versus classical interpolatory quadrature. For the computation of high-dimensional integrals with analytic integrands, we implement the transformed quadrature rules in the sparse grid setting, and we show that in certain settings the convergence improvement can be exponential with growing dimension. Numerical examples demonstrate the benefits and drawbacks of the approach, as predicted by the theory.

Peter Jantsch
Department of Mathematics
Texas A&M University
[email protected]

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

MS69

Unified Null Space Conditions for Sparse Approximations via Nonconvex Minimizations

This talk is concerned with sparse polynomial approximation via a compressed sensing approach. Nonconvex minimizations are generally closer to the l0 penalty than the l1 norm, thus it is widely accepted that they are able to enhance the sparsity and accuracy of the approximations. However, theory verifying that nonconvex penalties are as good as or better than l1 minimization in uniform, sparse reconstruction has not been available beyond a few specific cases. We aim to fill this gap by establishing new recovery guarantees through unified null space properties that encompass most of the currently proposed nonconvex functionals in the literature. These conditions are less demanding than or identical to the standard null space property, the necessary and sufficient condition for l1 minimization.

Hoang A. Tran
Oak Ridge National Laboratory
Computer Science and Mathematics Division
[email protected]

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

MS69

A Generalized Sampling and Weighted Approach for Sparse Approximation of Polynomial Chaos Expansions

We consider the sampling strategy for sparse recovery of polynomial chaos expansions via compressed sensing. In particular, we propose a general framework that samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem. The framework covers both the bounded and unbounded cases. We also discuss a potential application of this approach: handling arbitrary polynomial chaos expansions.

Tao Zhou
Institute of Computational Mathematics
Chinese Academy of Sciences
[email protected]

MS70

Multifidelity Robust Optimization

This work presents a multifidelity framework for optimization under uncertainty. Designing robust systems can be computationally prohibitive due to the numerous evaluations of numerical models required to estimate system level statistics at each optimization iteration. We propose a multifidelity Monte Carlo approach combined with information reuse to estimate the first and second moments of the system outputs. The method uses nested control variates to exploit multiple fidelities and information from previous optimization iterations in order to reduce the variance of the moment estimates and the computational cost as compared to standard Monte Carlo simulation.
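[Editorial note.] A single building block of such an estimator, written for one low-fidelity model with known (or previously estimated) mean $\mu_{\mathrm{lo}}$, is the classical control variate correction; the method described above nests several of these corrections across fidelities and optimization iterations:

```latex
\hat{\mu}_{\mathrm{CV}}
 = \frac{1}{N}\sum_{i=1}^{N} f_{\mathrm{hi}}(x_i)
 + \alpha\left(\mu_{\mathrm{lo}} - \frac{1}{N}\sum_{i=1}^{N} f_{\mathrm{lo}}(x_i)\right),
\qquad
\alpha^{\star} = \frac{\operatorname{Cov}(f_{\mathrm{hi}}, f_{\mathrm{lo}})}{\operatorname{Var}(f_{\mathrm{lo}})} .
```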

Anirban Chaudhuri, Karen E. Willcox
Massachusetts Institute of Technology
[email protected], [email protected]

MS70

Uncertainty Quantification via a Bi-fidelity Low-rank Approximation Technique

The use of model reduction has become widespread as a means to reduce computational cost for uncertainty quantification of PDE systems. In this work, we present a model reduction technique that exploits the low-rank structure of the solution of interest, when it exists, for fast propagation of high-dimensional uncertainties. To construct this low-rank approximation, the proposed method utilizes models with lower fidelities (hence cheaper to simulate) than the intended high-fidelity model. After obtaining realizations of the lower fidelity models, a set of reduced basis vectors and an interpolation rule are identified and applied to a small set of high-fidelity realizations to obtain this low-rank, bi-fidelity approximation. In addition to the construction of this bi-fidelity approximation, we present convergence analysis and numerical results.

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

Jerrad Hampton
University of Colorado, Boulder
[email protected]

Hillary Fairbanks
Department of Applied Mathematics
University of Colorado, Boulder
[email protected]

Akil Narayan
University of Utah
[email protected]

MS70

Adaptive Refinement Strategies for Multilevel Polynomial Expansions

In the simulation of complex physics, multiple model forms of varying fidelity and resolution are commonly available. In computational fluid dynamics, for example, common model fidelities include potential flow, inviscid Euler, Reynolds-averaged Navier-Stokes, and large eddy simulation, each potentially supporting a variety of spatio-temporal resolution/discretization settings. While we seek results that are consistent with the highest fidelity, the computational cost of directly applying UQ in high random dimensions quickly becomes prohibitive. In this presentation, we focus on the development and deployment of multilevel-multifidelity algorithms that adaptively fuse information from multiple model fidelities and resolutions in order to reduce the overall computational burden. In particular, we focus on forward uncertainty quantification using multilevel emulator approaches that employ compressed sensing and tensor trains to exploit sparse and low rank structure. Several approaches for adaptively allocating samples across levels will be explored and compared, based on deployments to both standard model problems and engineered aerodynamic systems.

Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Gianluca Geraci
Sandia National Laboratories, NM
[email protected]

Alex Gorodetsky
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

MS70

A Multi-fidelity Stochastic Collocation Method for Time-dependent Problems

In this talk, we shall discuss a collocation method with multi-fidelity simulation models to efficiently reconstruct the time trajectory of time-dependent parameterized problems. By utilizing the time trajectories of low/high-fidelity solutions to construct the approximation space, this method is demonstrated to offer two substantial advantages: (1) it is able to produce more accurate results with a limited number of high-fidelity simulations; (2) it avoids instability issues of time-dependent problems due to its nonintrusive nature. We also provide several numerical examples to illustrate the effectiveness and applicability of the method.

Xueyu Zhu
University of Iowa
[email protected]

Dongbin Xiu
Ohio State University
[email protected]

MS71

Selection, Calibration, and Validation of Models in the Presence of Uncertainty: Applications to Modeling Tumor Growth

The development of predictive computational models of tumor growth is faced with many formidable challenges. Some of the most challenging aspects are to rigorously account for uncertainties in observational data, model selection, and model parameters. In this work, we present a set of mathematical models for tumor growth, including reaction-diffusion and phase-field models, with and without mechanical deformation effects, and models for radiation therapy. The observational data is obtained from quantitative magnetic resonance imaging data of a murine model of glioma, with X-ray radiation administered in the middle of the experimental program. The mathematical models are based on the balance laws of continuum mixture theory, particularly mass balance, and on accepted biological hypotheses on tumor growth. The Occam Plausibility Algorithm (OPAL) is implemented to provide a Bayesian statistical calibration of the model classes, as well as the determination of the most plausible models in these classes relative to the observational data, and to assess model inadequacy through statistical validation processes. The results of the analyses suggest that the general framework developed can provide a useful approach for predicting tumor growth and radiation effects. Acknowledgments: We thank the Cancer Prevention Research Institute of Texas (CPRIT) for funding through RR160005, and the National Cancer Institute for funding through U01CA174706 and R01CA186193.

Ernesto A. B. F. Lima
ICES - UT Austin
[email protected]

J. T. Oden, D. A. Hormuth II
ICES - The University of Texas at Austin
[email protected], [email protected]

T. E. Yankeelov
ICES - The University of Texas at Austin
Department of Biomedical Engineering, UT Austin
[email protected]

A. Shahmoradi
ICES - The University of Texas at Austin
[email protected]

B. Wohlmuth, L. Scarabosio
Technical University of Munich, Germany
[email protected], [email protected]

MS71

Dynamic Bayesian Influenza Forecasting in the United States with Hierarchical Discrepancy

Timely and accurate forecasts of seasonal influenza would assist public health decision-makers in planning intervention strategies, efficiently allocating resources, and possibly saving lives. For these reasons, influenza forecasts are consequential. Producing timely and accurate influenza forecasts, however, has proven challenging due to noisy and limited data, an incomplete understanding of the underlying disease transmission process, and the mismatch between the disease transmission process and the data-generating process. In this talk I will introduce a dynamic Bayesian (DB) flu forecasting model that exploits model discrepancy (i.e., the mismatch between the underlying disease transmission model and the data) through a Bayesian hierarchical model. The DB model allows forecasts of partially observed flu seasons to borrow discrepancy information from previously observed flu seasons. We demonstrate the DB model's flu forecasting capabilities by comparing it to leading real-world competitors.

Dave Osthus, James Gattiker, Reid Priedhorsky, Sara Del Valle
Los Alamos National Laboratory
[email protected], [email protected], [email protected], [email protected]

MS71

Conditioning Multi-model Ensembles for Disease Forecasting

The impact of model errors can be reduced by using an ensemble of models and weighing them differentially, conditional on historical data. This is called model averaging. Starting from an ensemble of equally weighted models, we describe a sequential algorithm that updates the weights conditional on time-series data. The method can accommodate black-box models which provide temporal forecasts as distributions. The method assumes that the forecasts are Gaussian, which is a shortcoming of the algorithm. We illustrate this method with an ensemble of disease models, using data on influenza outbreaks in the US.
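[Editorial illustration, variable names hypothetical.] Under the Gaussian-forecast assumption, one sequential weight update of this kind can be sketched as follows: each model's weight is multiplied by the likelihood of the new observation under its forecast and the weights are renormalized.

```python
import numpy as np
from scipy.stats import norm

def update_weights(weights, forecasts, y_obs):
    """One Bayesian model-averaging step.
    forecasts: list of (mean, std) tuples, one Gaussian forecast per model."""
    lik = np.array([norm.pdf(y_obs, loc=m, scale=s) for m, s in forecasts])
    new_w = np.asarray(weights) * lik
    return new_w / new_w.sum()

# toy usage: three equally weighted models, one new data point
w = update_weights([1/3, 1/3, 1/3], [(10.0, 2.0), (12.0, 1.0), (8.0, 3.0)], y_obs=11.2)
```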

Jaideep Ray, Lynne Burks
Sandia National Laboratories, Livermore, CA
[email protected], [email protected]

Katherine Cauthen
Sandia National Laboratories, Albuquerque, NM
[email protected]

MS71

Multi-physics Model Error Calibration

Discrepancy between model prediction and physical observation can be attributed to errors in modelling the system, as well as measurement errors in the experiments. The Kennedy-O'Hagan framework has usually been adopted to estimate model discrepancy in static systems. However, this approach cannot be directly implemented in the case of transient multi-physics models, where an error in modelling one of the constituent disciplines would lead to errors in the outputs of other disciplinary models, and these errors would accumulate in time. In this study, we treat the errors in each constituent discipline as external inputs to the corresponding governing equation. These errors are treated as system states, and are estimated along with the other states corresponding to the original model, using Bayesian state estimation, leading to the estimation of modelling errors as well as discrepancies in all the system states. We illustrate this methodology using a flexible panel subjected to hypersonic air flow and experiencing flow-induced pressure loads and heating. We solve the nonlinear state estimation problem using a sampling-based state estimation method, and address the issues of identifiability, variance reduction, and discrepancy source identification.

Abhinav Subramanian
Vanderbilt University
[email protected]

MS72

A Bayesian Approach to Quantifying Uncertainty in Divergence Free Flows

We treat the statistical regularization of the ill-posed inverse problem of estimating a divergence free flow field u from the partial and noisy observation of a passive scalar θ. Our solution is a Bayesian posterior distribution, a probability measure which precisely quantifies uncertainties in u once one specifies models for measurement error and prior knowledge for u. We present some of our recent work which analyzes this posterior distribution both analytically and numerically. In particular, we discuss some Markov Chain Monte Carlo (MCMC) algorithms which we have developed and refined to effectively sample from it. This is joint work with Jeff Borggaard and Justin Krometis.

Nathan Glatt-Holtz
Tulane University
[email protected]

MS72

Uncertainty Quantification for the Boltzmann-Poisson System

We study uncertainty quantification for a Boltzmann-Poisson system that models electron transport in semiconductors and the physical collision mechanisms over the charges. We use the stochastic Galerkin method in order to handle the randomness associated to the problem. The main uncertainty in the Boltzmann equation is knowing the initial condition for a large number of particles, which is why the problem is formulated in terms of a probability density in phase space. The second source of uncertainty in the Boltzmann-Poisson problem, directly related to its quantum nature, is the collision operator, as its structure in this semi-classical model comes from the quantum scattering matrices operating on the wave function associated to the electron probability density. Additional sources of uncertainty are transport, boundary data, etc. In this study we choose the energy as a random variable, as it is a crucial variable in the problem that determines both the collision (given by energy and momentum conservation) and transport (via the wave group velocity), whose balance defines the Boltzmann equation. The dimensional cost is intended to be kept minimal, as the energy band is a scalar function of the momentum.

Jose A. Morales Escalante, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected]

MS73

Reduced Basis Approach using Sparse Polynomial Chaos Expansions in Computational Fluid Dynamics Applications

One approach to propagating uncertainties in engineering systems is using polynomial chaos expansions. A fundamental, well-known limitation of polynomial chaos is the curse of dimensionality; namely, the computational cost grows dramatically with the number of input uncertainties. In this talk, a two-stage non-intrusive uncertainty quantification method is presented in order to address this issue. First, an optimal stochastic expansion, i.e. made of very few basis functions, is derived using proper orthogonal decomposition on a coarse discretization. It is further combined with an adaptive sparse polynomial chaos algorithm in order to construct the covariance matrix at low cost and thus significantly speed up the convergence of the method. Second, the optimal stochastic basis is utilized to expand the stochastic solution on a fine discretization and derive the statistical moments. A practical engineering problem is studied in order to illustrate and quantify the resulting computational savings over more traditional methods for uncertainty quantification in computational fluid dynamics. It consists of an airfoil operating at transonic flow conditions and subject to geometrical and operational uncertainties. It is shown that the proposed two-stage uncertainty quantification method is able to predict statistical quantities with little loss of information and at a cheaper cost than other state-of-the-art techniques.

Simon Abraham
Department of Mechanical Engineering
Free University of Brussels
[email protected]

Panagiotis Tsirikoglou
Vrije Universiteit Brussel
[email protected]

Francesco Contino
Vrije Universiteit Brussel
Brussels University (VUB)
[email protected]

Ghader Ghorbaniasl
Vrije Universiteit Brussel
[email protected]

MS73

Quantifying Structural Uncertainty in Reynolds-averaged Navier-Stokes Turbulence Models for Simulations of Heat Exchangers

Reynolds-averaged Navier-Stokes (RANS) simulations are commonly used for engineering design and analysis of turbulent flow and heat or scalar transport. An important limitation is that the turbulence and turbulent flux models can introduce considerable uncertainty in predictions for complex applications. To enable robust design and optimization based on imperfect RANS results, the objective of this study is to establish a model-form uncertainty quantification method for turbulent scalar flux models. The proposed framework addresses uncertainty in the scalar flux magnitude and direction. First, the method solves a transport equation for the flux magnitude derived from classic second-moment closures, which introduces a dependency of the magnitude on the direction. Second, two different approaches for varying the scalar flux direction are explored: (1) the monotonicity of the scalar flux production term with respect to the flux direction is analyzed, and an algorithm to explore directions that increase or decrease the production is designed; (2) it is assumed that scalar transport is most effective when the flux vector is normal to the local shear plane, and a function to rotate the vector toward that orientation is constructed. Both strategies are applied to a pin-fin heat exchanger for which large-eddy simulation data is available for validation. The results show promising capabilities to bound predictions for the overall heat transfer rate and some local key flow features.

Zengrong Hao
Civil and Environmental Engineering Department
Stanford University
[email protected]

Catherine Gorle
Stanford University
[email protected]

MS73

Multilevel and Multi-index Sampling for the Forward Propagation of Many Uncertainties in Industrial Applications

Computational models in science and engineering are subject to uncertainty, present in the form of uncertain parameters or model uncertainty. Any quantity of interest derived from simulations with these models will also be uncertain. The efficient propagation of the uncertainties is often the computational bottleneck in other UQ problems, such as robust optimisation and risk analysis. The Monte Carlo method is the method of choice to deal with these many uncertainties. However, the plain Monte Carlo method is impractical due to its expense. We investigate how extensions of the Monte Carlo method, such as Multilevel Monte Carlo and Multi-Index Monte Carlo, and their respective Quasi-Monte Carlo variants, can be used efficiently to solve uncertainty quantification problems in real engineering applications.

Pieterjan Robbe
Department of Computer Science, KU Leuven
[email protected]

Dirk Nuyens
Department of Computer Science
KU Leuven
[email protected]

Stefan Vandewalle
Department of Computer Science
KU Leuven - University of Leuven
[email protected]

MS73

Robust PDE Constrained Optimization with Multilevel Monte Carlo Methods

We consider PDE constrained optimization problems where the partial differential equation has uncertain coefficients modeled by means of random variables or random fields. We wish to determine a robust optimum, i.e., an optimum that is satisfactory in a broad parameter range, and as insensitive as possible to parameter uncertainties. To that end we optimize the expected value of a tracking type objective with an additional penalty on the variance of the state. The gradient and Hessian corresponding to such a cost functional also contain expected value operators. Since the stochastic space is often high dimensional, a multilevel (quasi-) Monte Carlo method is presented to efficiently calculate the gradient and the Hessian. The convergence behavior is illustrated using a gradient and a Hessian based optimization method for a model elliptic diffusion problem with lognormal diffusion coefficient and optionally an additional nonlinear reaction term. The evolution of the variances on each of the levels during the optimization procedure leads to a practical strategy for determining how many and which samples to use. We also investigate the necessary tolerances on the mean squared error of the estimated quantities. Finally, a practical algorithm is presented and tested on a problem with a large number of optimization variables and a large number of uncertainties.
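[Editorial note.] One common way to write such a variance-penalized tracking objective (the exact norms and weights used in the talk may differ) is

```latex
\min_{z}\; J(z) \;=\; \mathbb{E}_{\omega}\!\left[\tfrac12\,\|u(z,\omega)-u_d\|_{L^2(D)}^2\right]
 \;+\; \beta \int_D \operatorname{Var}_{\omega}\!\left[u(z,\omega)(x)\right]\,dx ,
```

subject to the PDE constraint holding for (almost) every realization of the random coefficients; the expectations appearing in the objective, its gradient and its Hessian are then estimated with multilevel (quasi-) Monte Carlo.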

Andreas Van Barel
Department of Computer Science
KU Leuven - University of Leuven
[email protected]

MS74

Adaptive Low-rank Separated Representations Based on Mapped Tensor-product B-splines

There is a great need for robust, efficient, and accurate surrogate modeling techniques throughout a plethora of topics in the field of Uncertainty Quantification. Separated Representations (SR) provide an excellent tool for constructing such surrogate models exhibiting rank sparsity with respect to a tensor-product basis. Moreover, there are a variety of available techniques for determining the corresponding expansion coefficients, including alternating least-squares routines or compressive sensing. Despite the assortment of algorithms available for the SR construction, the underlying expansion generally employs orthogonal polynomials which are globally differentiable. However, many physical phenomena exhibit localized features or potentially discontinuous regions, due to e.g. bifurcations, for which global polynomials are poor approximants. As a remedy, we utilize non-uniform, tensor-product B-splines where continuity can be iteratively adapted and optimized to accurately represent these non-smooth or localized features. Additionally, these discontinuous regions are not necessarily axis-aligned. Therefore, we introduce a geometric mapping from a Euclidean parametric domain where the tensor-product B-splines are defined. These splines are then mapped to the physical domain where the data exists. This mapping in turn can be optimized, providing the necessary ingredient for a method capable of accurately representing these non axis-aligned discontinuities and bifurcations.

Joseph Benzaken
University of Colorado Boulder
[email protected]

John A. Evans
Department of Aerospace Engineering Sciences
University of Colorado at Boulder
[email protected]

MS74

Propagating Fuzzy Uncertainties with Hierarchical B-splines on Sparse Grids

Fuzzy logic is a way of describing uncertainties in parameters. In contrast to probabilistic approaches, it is often used to reflect epistemic uncertainties. Zadeh's well-known extension principle can be applied to propagate uncertainty in the input parameters to the output of an objective function. Unfortunately, it requires the solution of a number of global optimization problems. In settings with expensive objective functions, the large amount of function evaluations required by optimization algorithms quickly becomes prohibitively expensive. We follow the approach proposed by Klimke and replace the objective function by a sparse grid surrogate that can be evaluated very cheaply. Sparse grids cope with the curse of dimensionality to some extent, allowing moderately high numbers of input parameters. However, only piecewise linear and global polynomial basis functions have been studied so far, both of which have drawbacks. Instead, we propose to use B-splines for the sparse grid basis. In this talk, we will show that the advantages of the new method are three-fold: First, B-splines display higher numerical orders of convergence than piecewise linear functions. Second, B-splines enable gradient-based optimization, as B-splines are in general continuously differentiable and gradients of the B-spline interpolants are explicitly known. Third, we show that it is possible to employ spatial adaptivity to resolve local features of the objective function.
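[Editorial note: standard background.] Zadeh's extension principle propagates fuzzy inputs $X_1,\dots,X_n$ with membership functions $\mu_{X_i}$ through a function $f$ via

```latex
\mu_{f(X_1,\dots,X_n)}(y) \;=\;
\sup_{\{(x_1,\dots,x_n)\,:\,f(x_1,\dots,x_n)=y\}}
\min\bigl(\mu_{X_1}(x_1),\dots,\mu_{X_n}(x_n)\bigr) .
```

In the usual alpha-cut implementation this amounts to a global minimization and maximization of $f$ over a box for every alpha level; these are exactly the optimization problems that the sparse grid surrogate is meant to make affordable.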

Julian Valentin, Dirk Pfluger
University of Stuttgart
[email protected], [email protected]

MS74

IsoGeometric Splines for Smoothing on Surfaces

We propose an Isogeometric approach for smoothing on surfaces, namely estimating a function starting from noisy and discrete measurements. More precisely, we aim at estimating functions lying on a surface represented by NURBS, which are geometrical representations commonly used in industrial applications. The estimation is based on the minimization of a penalized least-square functional. The latter is equivalent to solving a 4th-order Partial Differential Equation (PDE). In this context, we use Isogeometric Analysis (IGA) for the numerical approximation of such surface PDE, leading to an IsoGeometric Smoothing (IGS) method for fitting data spatially distributed on a surface. Indeed, IGA facilitates encapsulating the exact geometrical representation of the surface in the analysis and also allows the use of at least globally C1-continuous NURBS basis functions, for which the 4th-order PDE can be solved using the standard Galerkin method. We show the performance of the proposed IGS method by means of numerical simulations and we apply it to the estimation of the pressure coefficient, and associated aerodynamic force, on a winglet of the SOAR space shuttle.

Matthieu Wilhelm
University of Neuchâtel
[email protected]

Luca Dede'
MOX - Department of Mathematics
Politecnico di Milano, Italy
[email protected]

Laura M. Sangalli
MOX Department of Mathematics
Politecnico di Milano, Italy
[email protected]

MS74

Minimum Spanning Trees and Support Vector Machines for High-dimensional and Discontinuous Spline-based Surrogate Models

A novel approach for forward propagation of uncertainties through models is proposed for the case of discontinuous model responses in high-dimensional spaces. Traditional methods, such as generalised polynomial chaos (gPC), fail when the quantity of interest depends discontinuously on the input parameters. Adaptive sampling methods exist, but the number of sampling points needed for producing accurate solutions is still intractable for complex engineering problems. As a remedy we developed a Minimum Spanning Tree based sampling algorithm, which searches for important features in the response adaptively. A key ingredient of the proposed method is the decomposition of the random space by means of a machine learning algorithm (SVM). The samples are classified based on their response values, and the SVM searches for a hypersurface which separates the classes. This classification boundary serves as an approximation of the discontinuity locations in the random space. The domain is cut along the classification boundary and local approximations are constructed on each of the sub-domains. Splines are used for constructing the local approximations. The combination of adaptive sampling, random space decomposition and splines ensures an approximation without the appearance of Gibbs phenomena, while producing accurate solutions. The proposed method uses many fewer samples than any other adaptive UQ method in the literature.

Yous van Halder
CWI
[email protected]

MS75

Matrix Decomposition Algorithms for Large-scale Data Compression

When working with large ensembles of large-scale PDE solutions, the cost of both storing and post-processing the relevant data can quickly become prohibitive. To mitigate this problem, we present a variety of low-rank matrix decomposition algorithms, demonstrating their utility in generating multi-fidelity approximations using only a fraction (about 5-10 percent) of the original data. In particular, we examine methods which minimize the number of passes one must make over the solution ensembles. These methods include the interpolative decomposition (ID) and two flavors of randomized singular value decomposition algorithms. In all of these algorithms applied to large-scale computational fluid dynamics data, we manage to limit the number of passes required over the data to two and achieve accuracy up to 2-3 digits.
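[Editorial illustration.] A generic sketch of the two-pass idea (a standard Halko-Martinsson-Tropp style randomized SVD, not necessarily the exact variants studied in the talk) is:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=np.random.default_rng(0)):
    """Two-pass randomized SVD: one pass over A to sketch its range,
    a second pass to project A onto that range."""
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = A @ Omega                        # pass 1: sample the range of A
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the sketch
    B = Q.T @ A                          # pass 2: small projected matrix
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], S[:rank], Vt[:rank, :]

# low-rank reconstruction A ~ U @ np.diag(S) @ Vt stores only O(rank*(m+n)) numbers
```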

Alec M. Dunton
University of Colorado
Department of Aerospace Engineering Sciences
[email protected]

Lluis Jofre-Cruanyes
Center for Turbulence Research
Stanford University
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

MS75

Towards Reduced Order Modeling of Liquid-fueled Rocket Combustion Dynamics

Combustion instability in liquid rocket engines (LRE) is an extremely complex phenomenon, with strongly stochastic and nonlinear physics, and time and length scales that range over many orders of magnitude. Though modern computational capability has demonstrated the potential to move beyond the empirically-based design analyses of the past, high-fidelity simulations of full-scale combustors remain out of reach, and will continue to be out of reach for engineering work flows, even on Exascale computers. While individual LES simulations of combustion instability in laboratory-scale combustors are relatively routine [Harvazinski et al., Physics of Fluids, 2015], full-scale rocket engine simulations remain too computationally intensive for design applications. Our goal is to address the gap that exists between the high-fidelity simulations of small and simplified geometries and the need for computations of large and complicated domains through reduced order modeling (ROM) for efficient and accurate predictions of combustion instability in LRE. This work will present progress towards the overall goal, and will detail the development of reduced-order models (ROMs) using adaptive bases and stabilized, data-driven techniques for closure. Results will highlight applications of these ROMs on laboratory-scale LREs. The developed ROMs will be integrated and coupled together in a multi-fidelity framework to model combustion instabilities in the full LRE system.

Cheng Huang
University of Michigan
Department of Aerospace Engineering
[email protected]

Karthik Duraisamy, Jiayang Xu
University of Michigan Ann Arbor
[email protected], [email protected]

MS75

Data-driven Reduced Order Modeling for High Fidelity Simulations of Wind Plants

Exascale computing resources will enable unprecedented accuracy in high-fidelity simulations of turbulent flow within wind plants. These simulations will reveal new insights into complex flow physics, but are too computationally expensive for use in design optimization, controls, or uncertainty quantification (UQ). Data-driven reduced order modeling techniques provide a mathematically rigorous pathway for encapsulating high-fidelity physical insights into low-order surrogate models suitable for controls and UQ. We demonstrate a linear parameter varying (LPV) approach in conjunction with dynamic mode decomposition (DMD) to derive a data-driven low-order linear state space model that preserves input-output relationships. The LPV-DMD model can be used to assess uncertainty in the input-output relationship between turbine controls and power outputs given uncertainty in a model parameter like the LES Smagorinsky coefficient or surface roughness. The high-fidelity snapshots used in the decomposition are obtained from Nalu, a large eddy simulation (LES) framework developed for exascale computing. Additionally, we discuss two extensions to the LPV-DMD tool: streaming DMD, where the decomposition is performed in situ in the LES code, and controller design for real-time control of wind plants using data-driven insights from the high-fidelity simulations.
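[Editorial illustration.] A bare-bones (non-streaming, single-parameter) DMD computation on a snapshot matrix, shown only to indicate the decomposition underlying LPV-DMD models; it is not the authors' tool:

```python
import numpy as np

def dmd(snapshots, rank):
    """Fit a best-fit linear operator advancing one snapshot column to the
    next and return its eigenvalues and (scaled) modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], S[:rank], Vt[:rank, :].T
    A_tilde = Ur.conj().T @ Y @ Vr / Sr        # low-rank operator in POD coordinates
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vr / Sr) @ W                  # DMD modes, up to a per-mode scaling
    return eigvals, modes
```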

Ryan King, Michael Sprague, Jennifer Annoni
National Renewable Energy Laboratory
[email protected], [email protected], [email protected]

MS75

Sampling Techniques for Stochastic Economic Dispatch of Large Electrical Grids

As the penetration of renewable energy sources, e.g. wind and solar, into the power grid increases, tools used for capacity and transmission expansion planning must adapt to ensure reliable service at a low cost. Reliability is a product of not only planning, but also the inclusion of reserve generation. However, as the percentage of power supplied to the grid by renewables grows, common practices for including reserves will become increasingly impractical. Exascale computing capabilities will provide a powerful tool to foresee and address reliability problems during the capacity and transmission expansion planning process by enabling the use of higher fidelity mathematical models (e.g. AC optimal power flow) and including explicit representations of uncertainty. One such representation is the sample average approximation (SAA) of the two-stage stochastic optimization problem, which relies on approximating mathematical expectations via sampling. Input samples, also called scenarios, for the SAA must be chosen carefully: e.g. scenarios based on random sampling are not necessarily adequate tests of grid robustness, often lacking events such as large ramps in generation. We develop a method for building scenarios based on importance sampling and the solution of many surrogate models with simplified physics and statistics. The results show improved convergence to the expectation in the SAA and cheaper, yet robust, economic dispatch decisions.

Matthew Reynolds
University of Colorado, Boulder
[email protected]

Ryan King
National Renewable Energy Laboratory
[email protected]

Wesley Jones, Devon Sigler
National Renewable Energy Laboratory
[email protected], [email protected]

MS76

On the Stability and the Uniform Propagation of Chaos Properties of Ensemble Kalman-Bucy Filters

The Ensemble Kalman filter is a sophisticated and powerful data assimilation method for filtering high dimensional problems arising in fluid mechanics and geophysical sciences. This Monte Carlo method can be interpreted as a mean-field McKean-Vlasov type particle interpretation of the Kalman-Bucy diffusions. Besides some recent advances on the stability of nonlinear Langevin type diffusions with drift interactions, the long-time behaviour of models with interacting diffusion matrices and conditional distribution interaction functions has never been discussed in the literature. One of the main contributions of the talk is to initiate the study of this new class of models. The talk presents a series of new functional inequalities to quantify the stability of these nonlinear diffusion processes. The second contribution of this talk is to provide uniform propagation of chaos properties as well as Lp-mean error estimates w.r.t. the time horizon.

Pierre Del Moral
INRIA and Department of Mathematics
University of New South Wales
[email protected]

MS76

Multilevel Monte Carlo for Data Assimilation

Bayesian inference provides a principled and well-defined approach to the integration of data into an a priori known distribution. The posterior distribution, however, is known only point-wise (possibly with an intractable likelihood) and up to a normalizing constant. Monte Carlo methods have been designed to sample such distributions, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) samplers. Recently, the multilevel Monte Carlo (MLMC) framework has been extended to some of these cases, so that approximation error can be optimally balanced with statistical sampling error, and ultimately the Bayesian inverse problem can be solved for the same asymptotic cost as solving the deterministic forward problem. This talk will concern the recent development of MLMC data assimilation methods, which combine dynamical systems with data in an online fashion, including ML particle filters and ensemble Kalman filters.
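[Editorial note: standard background.] The core identity behind multilevel estimators of this kind is the telescoping sum over a hierarchy of discretizations $Q_0,\dots,Q_L$ of the quantity of interest,

```latex
\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] + \sum_{\ell=1}^{L}\mathbb{E}[Q_\ell - Q_{\ell-1}],
\qquad
\widehat{Q}^{\mathrm{ML}} \;=\; \sum_{\ell=0}^{L}\frac{1}{N_\ell}\sum_{i=1}^{N_\ell}
\bigl(Q_\ell^{(i)} - Q_{\ell-1}^{(i)}\bigr), \quad Q_{-1}\equiv 0,
```

with each correction term sampled independently and the $N_\ell$ chosen to balance bias, variance and cost.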

Kody Law
Oak Ridge National Laboratory
[email protected]

MS76

Convergence Analysis of Ensemble Kalman Inversion

The ideas from the Ensemble Kalman Filter introduced by Evensen in 1994 can be adapted to inverse problems by introducing artificial dynamics. In this talk, we will discuss an analysis of the EnKF based on continuous time scaling limits, which allows us to derive estimates on the long-time behavior of the EnKF and, hence, provides insights into the convergence properties of the algorithm. In particular, we are interested in the properties of the EnKF for a fixed ensemble size. Results from various numerical experiments supporting the theoretical findings will be presented.

Claudia Schillings
University of Mannheim
Institute of Mathematics
[email protected]

Andrew Stuart
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

MS76

Long-time Stability and Accuracy of Interacting Particle Filters

In a nonlinear setting the filtering distribution can be approximated via the empirical measure provided that an ensemble of samples is available. A computationally feasible option to generate these particles is to define an appropriate modified evolution equation that describes the dynamics of the particles with respect to the incoming data. The most famous example of such interacting particle filter formulations is the ensemble Kalman-Bucy filter (EnKBF). Although it works remarkably well in practice, its success is not well understood from a mathematical point of view. In particular, the long time behavior of the EnKBF far from the asymptotic limit is of interest. In a recent study [de Wiljes, Reich, Stannat, Long-time stability and accuracy of the ensemble Kalman-Bucy filter for fully observed processes and small measurement noise, https://arxiv.org/abs/1612.06065] we were able to derive stability and accuracy results for a specific variant of the EnKBF. More specifically, we were able to derive lower and upper bounds for the empirical covariance matrix for a fully observed system and a finite number of particles. Further uniform-in-time stability and accuracy results, such as control over the path-wise estimation error, were shown for the considered EnKBF. Inspired by the EnKBF we also explore more general interacting particle filter formulations that allow us to overcome weaknesses of the EnKBF as well as drawbacks of classical sequential resampling schemes.

Jana de Wiljes
University of Potsdam
[email protected]

Sebastian Reich
Universitat Potsdam
[email protected]

Wilhelm Stannat
TU Darmstadt
[email protected]

MS77

Experimental Design in Diffuse Tomography

This talk considers two diffuse imaging modalities: electrical impedance tomography and thermal tomography. The former aims at reconstructing the conductivity distribution inside a physical body from boundary measurements of current and voltage, whereas the goal of the latter is to deduce information about the internal structure of a physical body from time-dependent temperature measurements on its boundary. In practice, such measurements are performed with a finite number of electrodes or with a finite number of heater elements and temperature sensors. This work considers finding optimal positions for the electrodes in electrical impedance tomography and optimal time-dependent heating patterns for the heater elements in thermal tomography. The goal is to place the electrodes and choose the heating patterns so that the posterior distribution for the (discretized) parameters of interest is as localized as possible under physically reasonable restrictions on the measurement setup.

Nuutti Hyvonen
Aalto University
Department of Mathematics and Systems Analysis
[email protected]

Juha-Pekka Puska
Aalto University
[email protected]

Aku Seppanen
Department of Applied Physics
University of Eastern Finland
[email protected]

Stratos Staboulis
Technical University of Denmark
[email protected]

MS77

Planning Sensitivity Tests using Mutual Information

Sensitivity testing involves exposing experimental units to various levels of a stimulus (e.g., heat, impact, voltage) to ascertain population sensitivity to the stimulus. Outcomes are binary and usually modeled with a logistic or probit GLM. In this talk, we describe sequential Bayesian sensitivity tests using mutual information as a design criterion. We also numerically illustrate how sequential D-optimal sensitivity tests, like the Neyer and 3pod methods, have Bayesian interpretations.

Brian Weaver
Los Alamos National Laboratory
[email protected]

Isaac Michaud
North Carolina State University
[email protected]

MS77

Bayesian Design for Stochastic Models with Application to Models of Infectious Disease Dynamics

We will present recently-developed methods for efficient Bayesian experimental design using Approximate Bayesian Computation. This includes methods for evaluation of the utility, and search across the design space. The application of this framework to stochastic models, and in particular to the design of group dose-response studies in the presence of transmission, will be explored.

Joshua Ross
University of Adelaide
[email protected]

David Price
University of Melbourne
[email protected]

Jono Tuke, Nigel Bean
University of Adelaide
[email protected], [email protected]

MS77

Optimal Design of High-speed Wind Tunnel Instrumentation for Aero-thermal-structural Model Calibration

Modeling the coupled aero-thermal-structural response of a panel subjected to a high-speed flight environment is a challenging task. Improving the model prediction confidence with data is complicated by the substantial cost of wind tunnel tests and experimental limitations. Therefore, it is critical to optimally design tests that provide the most informative data possible. Maximum expected information gain is used to determine the instrumentation locations for calibration data on a rigid wind tunnel specimen representative of a deformed panel under loading conditions that induce aerothermoelastic coupling. Previous results from Bayesian calibration of model discrepancy in lower-fidelity models (i.e., piston theory and Eckert's reference temperature method) are used to guide the experimental design. Multiple concurrent measurements are possible, such as aerodynamic pressure, surface temperature, and heat flux. However, the problem is constrained by several practical considerations, including physical limitations on the instrumentation (e.g., clearance of pressure transducers between the panel and wedge, wire bundle size, and maximum operating temperature of sensors), as well as the budget for conducting the test (e.g., specimen fabrication, instrumentation, and wind tunnel run time). The overall objective is to determine the optimal experimental design that will reduce model uncertainty in aerothermoelastic predictions.

Benjamin P. Smarslok
Air Force Research Laboratory
[email protected]

Gregory Bartram, Zachary Riley, Ricardo Perez
Universal Technology Corporation
[email protected], [email protected], [email protected]

MS78

Semi-supervised Learning using Bayesian Hierarchical Methods

In semi-supervised machine learning, the task of clustering is to divide the data into groups using a labeled subset. Our Bayesian approach to graph-based clustering views the classifying function as a random variable with a distribution that combines the label model with prior beliefs about the classification. In Bayesian hierarchical methods, hyperparameters governing the prior distribution are introduced and can be sampled as well, with the goals of both deriving a classification and learning the distribution of the hyperparameters. We apply dimension-robust hierarchical Markov Chain Monte Carlo methods for indirectly sampling the posterior distribution of these random variables, as direct sampling is generally challenging. We focus on priors derived from the graph Laplacian, a matrix whose eigenvectors are known to contain cluster information. We implemented Bayesian hierarchical models that learn different sets of hyperparameters, including ones that govern the scale of the eigenvectors or the number of eigenvectors used. We tested these models on real and synthetic data sets. Our results indicate that there is information to be learned about the distribution of these hyperparameters, which could be used to improve classification accuracy.

Victor L. Chen
California Institute of Technology
[email protected]

Matthew M. Dunlop
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

Omiros Papaspiliopoulos
Department of Economics
Universitat Pompeu Fabra
[email protected]

Andrew Stuart
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

MS78

Large-data and Zero-noise Limits of Graph-based Semi-supervised Learning Algorithms

Graph-based approaches to semi-supervised classification typically employ a graph Laplacian L constructed from the unlabeled data to provide clustering information. We consider Bayesian approaches wherein the prior is defined in terms of L; the choice of data model then leads to probit and Bayesian level set algorithms. We investigate large data limits of these algorithms from both optimization and probabilistic points of view, both when the number of labels is fixed and when it grows with the number of data points. We observe a common necessary condition for the limiting algorithms to be meaningful. Finally, we consider small noise limits of these algorithms.

Matthew M. Dunlop
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

Dejan Slepcev
Carnegie Mellon University
[email protected]

Andrew Stuart
Computing + Mathematical Sciences
California Institute of Technology
[email protected]

Matthew Thorpe
University of Cambridge
[email protected]

MS78

Robust UQ in Graph-based Bayesian Semi-supervised Learning and Inverse Problems

A popular approach to semi-supervised learning proceeds by endowing the input data with a graph structure in order to extract geometric information and incorporate it into a Bayesian framework. We introduce new theory that gives appropriate scalings of graph parameters that provably lead to a well-defined limiting posterior as the size of the unlabeled data set grows. Furthermore, we show that these consistency results have profound algorithmic implications. When consistency holds, carefully designed graph-based Markov chain Monte Carlo algorithms are proved to have a uniform spectral gap, independent of the number of unlabeled inputs. Several numerical experiments corroborate both the statistical consistency and the algorithmic scalability established by the theory. This is joint work with Zach Kaplan, Thabo Samakhoana and Daniel Sanz-Alonso.

Nicolas Garcia Trillos
Brown University
nicolas_garcia_trillos@brown.edu

MS78

Uncertainty Quantification in Graph-based Learning

Abstract not available at time of production.

Xiyang Luo
University of California, Los Angeles
[email protected]

MS79

Certified Reduced Basis Method for Nonlocal Diffusion Equations with Application to Uncertainty Quantification

Models of reduced computational complexity are indispensable in scenarios where a large number of numerical solutions to a parametrized problem are desired in a fast/real-time fashion. This is frequently the case for uncertainty quantification. Utilizing an offline-online procedure and recognizing that the parameter-induced solution manifolds can be well approximated by finite-dimensional spaces, the reduced basis method (RBM) and reduced collocation method (RCM) can improve efficiency by several orders of magnitude. In this talk, we present our recent development of mathematically rigorous and computationally efficient RBM for nonlocal partial differential equations, with a particular emphasis on nonlocal diffusion equations.

Yanlai Chen
Department of Mathematics
University of Massachusetts Dartmouth
[email protected]

Harbir Antil
George Mason University
Fairfax, VA
[email protected]

Akil Narayan
University of Utah
[email protected]

MS79

Dynamical Low Rank Approximation of Time Dependent Random PDEs

In this talk we consider time dependent PDEs with random parameters and seek an approximate solution in separable form that can be written at each time instant as a linear combination of linearly independent spatial modes multiplied by linearly independent random variables (low rank approximation), in the spirit of a truncated Karhunen-Loève expansion. Since the optimal modes can significantly change over time, we consider here a dynamical approach where those modes are computed on the fly as solutions of suitable evolution equations. We discuss the construction of the method as well as practical numerical aspects for several time dependent PDEs with random parameters, including the heat equation with a random diffusion coefficient; the incompressible Navier-Stokes equations with random Dirichlet boundary conditions; and the wave equation with random wave speed. In the latter case, we propose a dynamical low rank approximation that preserves the symplectic structure of the equations.

Fabio Nobile
EPFL
[email protected]

Eleonora Musharbash
Ecole Polytechnique Federale de Lausanne
[email protected]

MS79

Statistical Modeling of ROM State-space Errors by the ROMES Method

Characterizing the epistemic uncertainty introduced by substituting a full-order model with a reduced-order model (ROM) is crucial for developing rapid yet reliable uncertainty quantification procedures. We present a technique for statistically modeling reduction errors on the solution field (i.e., state) itself arising when a reduced-order model is adopted for the solution of a parametrized partial differential equation (PDE). The proposed technique extends the ROM error surrogate (ROMES) method, which was originally conceived for modeling the error in scalar-valued outputs by using dual-weighted residuals and Gaussian process regression. First, we decompose the state-space error into (i) an in-plane error, and (ii) an out-of-plane error, and represent each of these errors in a unique low-dimensional subspace. Second, we construct an error surrogate for each generalized coordinate characterizing the low-dimensional representation of the in-plane and out-of-plane state-space errors. The resulting error surrogate can then be propagated through any output functional to produce an error surrogate for any scalar or field quantity of interest. Numerical experiments performed on both linear and nonlinear steady PDEs illustrate that the proposed approach improves prediction accuracy of the reduced-order model without incurring significant additional computational costs.
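The regression step that ROMES-style error surrogates rely on can be sketched in a few lines: a Gaussian process maps a cheap error indicator (here a synthetic stand-in for a dual-weighted residual) to a statistical model of the true reduction error. The synthetic data and kernel choice below are illustrative assumptions only, not the in-plane/out-of-plane decomposition proposed in the talk.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic training pairs: rho is a cheap error indicator computed from the
# ROM alone; err is the true ROM error, available only at the few parameter
# values where the full-order model was also run.
rho = 10.0 ** rng.uniform(-4, -1, size=30)
err = 2.0 * rho * (1.0 + 0.1 * rng.standard_normal(30))

# ROMES-style surrogate: regress the error on the log of the indicator.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(np.log(rho).reshape(-1, 1), err)

# Online stage: a new ROM solve yields only its indicator; the GP returns a
# mean error estimate and a predictive standard deviation.
x_new = np.log(np.array([[3e-3]]))
mean_err, std_err = gp.predict(x_new, return_std=True)
print(mean_err, std_err)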

Stefano Pagani
Politecnico di Milano
[email protected]

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Andrea Manzoni
EPFL
[email protected]

MS79

Stochastic Sub-modeling under Heterogeneous Input Uncertainty with Application to Coronary Artery Disease

Hemodynamic simulations provide non-invasive assessments of blood flow not obtainable from standard clinical imaging that can aid in disease research. Here we discuss a theoretical/computational approach to simulate hemodynamics under uncertainty from a possibly large number of interacting sources for cases where quantities of interest are referred only to a small portion of the model. Sources from assimilated flow-related inputs are combined with thickness/elastic property parameterizations under the assumption that material property uncertainty does not affect the inference of flow boundary conditions from a lumped peripheral model. We then combine the two uncertainty sources for the analysis of coronary bypass grafts. A reduced order parameterization for coronary bypass sub-models is constructed using a sparse orthogonal polynomial representation to map changes in peripheral circulation onto the inlet flow and outlet pressure of the sub-model. We also leverage a lower dimensional representation of a random field on a sub-model whose size is only a small multiple of an experimentally observed correlation length. We finally perform forward propagation on the reduced model/parameterization with significant computational savings. Results show wall shear stress is associated with limited variance, while material property uncertainty induces significant variability in the wall strain. Results from sensitivity analysis confirm our assumptions on the random inputs.

Justin Tran, Daniele E. Schiavazzi
Stanford University
[email protected], [email protected]

Alison Marsden
School of Medicine
Stanford University
[email protected]

MS80

Low-rank Approximations for Efficient MCMC Sampling in Hierarchical Bayesian Inverse Problems

In hierarchical Bayesian inverse problems, the posterior density function depends upon the state vector (often an unknown image) and hyper-parameters. Integrating the posterior density function with respect to the state vector yields the marginal density, defined only over the hyper-parameters. Evaluating the marginal density can be computationally intractable, especially for large-scale and/or nonlinear inverse problems. In this talk, we show how to use low-rank matrices to construct an approximate marginal density that is more computationally efficient to evaluate. We then use this approximate marginal density to develop computationally efficient MCMC methods for sampling from the posterior density functions arising in both linear and nonlinear inverse problems.

Johnathan M. Bardsley
University of Montana
[email protected]

MS80

Methodologies for Enabling Bayesian Calibration in Land-ice Modeling Towards Probabilistic Projections of Sea-level Change

Mass loss from the polar ice sheets is expected to have a significant contribution to future sea-level changes. Predictions of ice sheet evolution are fraught with uncertainty due to unknown model parameters, incomplete knowledge of initial/boundary conditions, uncertain future climate forcing, and sparsity/imprecision of observational data. This talk will describe our ongoing efforts to quantify uncertainties in ice-sheet initial conditions, towards providing uncertainty bounds on expected 21st century sea-level change. First, we will overview our DOE-funded project for land-ice modeling and our land-ice simulation tool, known as ProSPect and FELIX, respectively. Next, we will describe our workflow for estimating in a Bayesian setting the basal friction coefficient, an important high-dimensional parameter that defines the initial/boundary conditions in ice sheet models. This approach is based on a previously-developed computational framework for solving large-scale Bayesian inverse problems (Bui-Thanh et al. 2013; Isaac et al. 2015). The parameter-to-observable map is linearized around the MAP point, computed by solving a large-scale deterministic inversion problem. This leads to a Gaussian posterior distribution whose covariance can be estimated efficiently using a low-rank approximation of the Hessian of the data misfit. Following a discussion of the Bayesian calibration methodology and its implementation, we present results on some realistic Greenland simulations.
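A dense-matrix sketch of the linearize-then-low-rank step described above follows: the Gaussian (Laplace) posterior covariance around the MAP point is assembled from the dominant eigenpairs of the prior-preconditioned data-misfit Hessian. In the ice-sheet setting those eigenpairs are computed matrix-free (e.g., with randomized or Lanczos methods); the explicit Jacobian, noise model, and Cholesky factor here are simplifying assumptions for illustration.

import numpy as np

def laplace_posterior_covariance(J, Gamma_pr, sigma_noise, rank):
    """Gaussian (Laplace) posterior covariance around the MAP point from a
    low-rank approximation of the prior-preconditioned data-misfit Hessian.

    J           : Jacobian of the parameter-to-observable map at the MAP point
    Gamma_pr    : prior covariance matrix
    sigma_noise : observation noise standard deviation (i.i.d. noise assumed)
    rank        : number of dominant eigenpairs retained
    """
    L = np.linalg.cholesky(Gamma_pr)                    # Gamma_pr = L L^T
    H_misfit = (J.T @ J) / sigma_noise**2               # Gauss-Newton data-misfit Hessian
    H_tilde = L.T @ H_misfit @ L                        # prior-preconditioned Hessian
    lam, V = np.linalg.eigh(H_tilde)
    lam, V = lam[::-1][:rank], V[:, ::-1][:, :rank]     # dominant eigenpairs
    D = np.diag(lam / (lam + 1.0))                      # filter factors
    # Gamma_post = (H_misfit + Gamma_pr^-1)^-1  ≈  L (I - V D V^T) L^T
    return L @ (np.eye(L.shape[0]) - V @ D @ V.T) @ L.T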

Irina K. Tezaur
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

Mauro Perego
CSRI, Sandia National Laboratories
[email protected]

Stephen Price
Los Alamos National Laboratory
[email protected]

MS80

Randomized Iterative Methods for Bayesian Inverse Problems

In this work, we develop randomized iterative methods to efficiently solve optimization problems that arise in large Bayesian inverse problems. We consider two approaches that exploit randomness to overcome fundamental computational limitations (e.g., data storage and processing, memory allocation, and efficient forward and adjoint model evaluations). One approach follows a randomized stochastic quasi-Newton approach and the other utilizes inexact Krylov methods, where randomization is used to avoid full matrix multiplications. Numerical examples from image processing are provided.

Julianne Chung, Joseph T. Slagel, Matthias Chung
Department of Mathematics
Virginia Tech
[email protected], [email protected], [email protected]

MS80

Large-p Small-n Nonparametric Regression and Additive-interactive Response Functions

Smoothing based prediction faces many difficulties when the predictor dimension p is large but sample size n is limited, a situation that is often referred to as the large-p small-n prediction problem. A fundamental challenge is that not all smooth functions can be consistently estimated from noisy data if p grows much faster than n. We demonstrate that additive-interactive function spaces offer the ideal modeling framework for consistent estimation. In the additive-interactive model, the mean response function is taken to be the sum total of a modest number of smooth functions, each involving a small number of interacting predictors. We show that the minimax L2 estimation rate over such a function space decays to zero under reasonable assumptions on the relative growth rates of n, p and the number of truly active predictors. The additive-interactive assumption naturally leads to a hierarchical Bayesian model for the response function. We introduce a Bayesian estimation method for this model by utilizing an "additive Gaussian process prior" on the model space. We investigate adaptive asymptotic efficiency of this method in prediction under L2 loss and recovery of true predictors.

Surya Tokdar
Department of Statistical Science
Duke University
[email protected]

MS81

Hamiltonian Monte Carlo-Subset Simulation (HMC-SS) Method for Failure Probabilities and Rare Events Estimation in Non-Gaussian Spaces

In this talk, we present and promote the use of the Hamiltonian Monte Carlo (HMC) method in the context of Subset Simulation (SS) for failure probabilities and rare events estimation. The HMC method uses Hamiltonian mechanics to propose samples following a target probability distribution. Compared to classical Markov chain Monte Carlo approaches such as Gibbs or Metropolis-Hastings, the HMC method alleviates the random walk behavior to achieve a more efficient exploration of the probability space. In this talk, after a brief review of the principles of the HMC method and subset simulation, we present a series of algorithms to merge the two methods for failure probabilities and rare events estimation. We focus in particular on non-Gaussian spaces, for which the method can be used without transforming the joint probability distribution onto the standard normal space. Finally, we show the accuracy and efficiency of Subset Simulation with the proposed HMC-SS algorithms for a series of benchmark examples.
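For reference, a generic HMC transition kernel is sketched below; it is not the HMC-SS algorithms of the talk. Within subset simulation, log_prob would be the (possibly non-Gaussian) target log-density restricted to the current intermediate failure domain, and the step size and number of leapfrog steps are placeholder values.

import numpy as np

def hmc_step(x, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo transition targeting exp(log_prob)."""
    rng = np.random.default_rng(rng)
    p = rng.standard_normal(x.shape)                      # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new = p_new + 0.5 * eps * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * grad_log_prob(x_new)
    x_new = x_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad_log_prob(x_new)
    # Metropolis correction on the total energy removes the integration error.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x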

Marco Broccardo
ETH Zurich
[email protected]

Ziqi Wang
Guangzhou University
[email protected]

Junho Song
Seoul National University
[email protected]

MS81

MCMC and Nested Extreme Risks

We design and analyze an algorithm for estimating the mean of a function of a conditional expectation, when the outer expectation is related to a rare event. The outer expectation is evaluated through the average along the path of an ergodic Markov chain generated by a Markov chain Monte Carlo sampler. The inner conditional expectation is computed as a non-parametric regression, using a least-squares method with a general function basis and a design given by the sampled Markov chain. We establish non-asymptotic bounds for the L2-empirical risks associated to this least-squares regression; this generalizes the error bounds usually obtained in the case of i.i.d. observations. Global error bounds are also derived for the nested expectation problem. Numerical results in the context of financial risk computations illustrate the performance of the algorithms.

Emmanuel Gobet
Ecole Polytechnique
[email protected]

MS81

Ensemble MCMC Samplers for Failure Probability Estimation with Subset Simulation

Subset simulation has, in many respects, become the standard-bearer for estimating failure probabilities, particularly for complex systems with very high dimension. The primary research challenges in subset simulation relate to improving the efficiency and robustness of sampling approaches, specifically Markov chain Monte Carlo (MCMC) methods. The classical subset simulation algorithm uses a modified Metropolis-Hastings algorithm that decreases the dependence on problem dimension and reduces the number of rejected samples. In the past decade, much work has been performed to improve MCMC algorithms for subset simulation. Recently, a new family of MCMC algorithms referred to as ensemble MCMC samplers has been developed that leverages simultaneously propagating Markov chains. Ensemble samplers have been shown to reduce sample correlations, dramatically reduce rejection rates especially for degenerate distributions, and reduce the number of parameters necessary for MCMC from a number that scales with the dimension (i.e., typically a length-scale for the proposal density must be specified in every dimension) to only one or two parameters. In this work, we propose a modified form of the affine invariant ensemble MCMC algorithm for applications in subset simulation. This algorithm is particularly effective for estimating failure probabilities in high dimensional systems with lower effective dimension.
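A minimal serial version of the affine invariant stretch move underlying such ensemble samplers (following Goodman and Weare) is sketched below; the modified form proposed in the talk is not reproduced here. In a subset simulation setting, log_prob would be the log-density conditioned on the current intermediate failure region, and a = 2 is the customary default stretch parameter.

import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One sweep of the affine-invariant ensemble sampler (stretch move)."""
    rng = np.random.default_rng(rng)
    n_walkers, dim = walkers.shape
    new = walkers.copy()
    for k in range(n_walkers):
        j = rng.choice([i for i in range(n_walkers) if i != k])   # random partner walker
        z = ((a - 1.0) * rng.uniform() + 1.0) ** 2 / a            # stretch factor, g(z) ∝ 1/sqrt(z)
        proposal = new[j] + z * (new[k] - new[j])
        log_accept = (dim - 1) * np.log(z) + log_prob(proposal) - log_prob(new[k])
        if np.log(rng.uniform()) < log_accept:
            new[k] = proposal
    return new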

Michael D. Shields
Johns Hopkins University
[email protected]

V.S. Sundar
University of California San Diego
[email protected]

Jiaxin Zhang, Dimitris Giovanis
Johns Hopkins University
[email protected], [email protected]

MS81

Bayesian Subset Simulation Tutorial

We consider the problem of estimating a probability of failure α, defined as the volume of the excursion set of a function f : X ⊆ R^d → R above a given threshold, under a given probability measure on X. In this talk, we present a tutorial about the BSS (Bayesian Subset Simulation) algorithm (Bect, Li and Vazquez, SIAM JUQ 2017), which combines the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate α when the number of evaluations of f is very limited and α is very small. A key idea is to estimate the probabilities of a sequence of excursion sets of f above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on f is used to define the sequence of densities targeted by the SMC algorithm, and drive the selection of evaluation points of f to estimate the intermediate probabilities. BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches. The BSS algorithm is implemented in STK (Small Matlab/Octave Toolbox for Kriging).

Emmanuel Vazquez, Julien Bect
CentraleSupelec
[email protected], [email protected]

MS82

Regression Based Methods for Computing Low-rank Tensor Decompositions

Exploiting structure is essential for constructing accurate high-dimensional approximations from limited data. Sparsity and low-rank are two types of structures often present in model response surfaces. In this talk we present regression based methods for constructing low-rank tensor decompositions from unstructured samples. We demonstrate the utility of both linear and non-linear approximation and robust sampling schemes for orthogonal polynomials, and discuss connections between low-rank decomposition and sparse approximation.

John D. Jakeman
Sandia National Laboratories
[email protected]

MS82

Multilevel Higher-order Quasi-Monte Carlo Bayesian Estimation for PDEs with Random Coefficients

We propose and analyze deterministic multilevel approximations for Bayesian inversion of operator equations with uncertain distributed parameters, subject to additive Gaussian measurement data. The algorithms use a multilevel approach based on deterministic, higher-order quasi-Monte Carlo (HoQMC) quadrature for approximating the high-dimensional expectations which arise in the Bayesian estimators, and a Petrov-Galerkin (PG) method for approximating the solution to the underlying partial differential equation (PDE). This extends the previous single-level approach from [J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Higher order quasi-Monte Carlo integration for Bayesian estimation, Report 2016-13, Seminar for Applied Mathematics, ETH Zurich (in review)].

Quoc T. Le Gia
School of Mathematics and Statistics
University of New South Wales, Australia
[email protected]

Josef Dick
School of Mathematics and Statistics
The University of New South Wales
[email protected]

Robert N. Gantner
ETH Zurich
[email protected]

Christoph Schwab
ETH Zurich
[email protected]

MS82

Multi-scale Sampling Methods for Partial Differential Equations with Gaussian Markov Random Field Inputs

The high computational cost of stochastic simulations involving partial differential equations (PDEs) with uncertain input parameters is often attributable to a combination of two bottlenecks: i) the steep cost of evaluating sample paths and ii) the complexity of the underlying parameter space. In this talk we relate both of these problems to the computational mesh, by using Gaussian Markov random fields to model the PDE's spatially varying input parameters. This allows us to exploit readily available local dependency information of the parameter field in conjunction with sensitivity estimates to identify spatial regions that contribute statistically to the variation in the computed quantity of interest.

Hans-Werner Van Wyk
Department of Mathematics and Statistics
Auburn University
[email protected]

MS82

Estimation of Exciton Diffusion Lengths of Organic Semiconductors in Random Domains

Exciton diffusion length plays a vital role in the function of optoelectronic devices. Oftentimes, the domain occupied by an organic semiconductor is subject to surface measurement error. In this paper, we employ a random function representation for the uncertain surface of the domain. After nondimensionalization, the forward model becomes a diffusion-type equation over the domain whose geometric boundary is subject to small random perturbations. We propose an asymptotic-based method as an approximate forward solver whose accuracy is justified both theoretically and numerically. It only requires solving several deterministic problems over a fixed domain. Therefore, for the same accuracy requirement we tested here, the running time of our approach is more than one order of magnitude smaller than that of directly solving the original stochastic boundary-value problem by the stochastic collocation method. In addition, from numerical results, we find that the correlation length of randomness is important to determine whether a 1D reduced model is a good surrogate.

Zhongjian Wang
Department of Mathematics
University of Hong Kong
[email protected]

Zhiwen Zhang
Department of Mathematics
The University of Hong Kong
[email protected]

Jingrun Chen
Soochow University
[email protected]

Xiang Zhou
Department of Mathematics
City University of Hong Kong
[email protected]

Ling Lin
Sun Yat-Sen University
[email protected]

MS83

Time Discretization Bi-fidelity Modeling

For many practical simulation scenarios, modification of the time-step (large versus small) provides a simple low versus high fidelity scenario in terms of accuracy versus computational cost. Multifidelity approaches attempt to construct a bi-fidelity model having an accuracy comparable with the high-fidelity model and computational cost comparable with the low-fidelity model. In this talk, we present a bi-fidelity framework defined by the time discretization parameter that relies on the low-rank structure of the map between model parameters/uncertain inputs and the solution of interest. We show how this framework behaves on canonical examples, and we show real-world (practical) extensions of our methodology to molecular dynamics simulations.

Robert M. Kirby
University of Utah
School of Computing
[email protected]

Akil Narayan
University of Utah
[email protected]

MS83

Polynomial Chaos Basis Reduction for Uncertainty Quantification – A Bi-fidelity Approach

Minimization of computational cost is a ubiquitous challenge in uncertainty quantification or design space exploration of fluid mechanics simulations. A useful tool to ease the burden of solving complex systems of PDEs, which arise in such simulations, is model reduction. We present a stochastic basis reduction method in which low-fidelity samples are employed to inform the construction of a reduced basis. Approximating the high-fidelity quantities of interest in this reduced basis requires a small number of high-fidelity samples to achieve a bi-fidelity estimate. The premise of this approach is that while a low-fidelity model may be inaccurate in terms of predicting the quantities of interest, it will represent the stochastic space of the problem for an accurate bi-fidelity approximation. We then present the successful application of this algorithm in two scenarios: a lid-driven cavity and an airfoil. In both cases, we achieve acceptable errors for a minimal number of high-fidelity model evaluations.

Felix Newberry
University of Colorado Boulder
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

Michaela Farr
University of Colorado Boulder
tba

MS83

A Multifidelity Cross-entropy Method for Rare Event Simulation

We introduce a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for rare event simulation with importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses a single model alone.
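The single-fidelity building block is sketched below: a cross-entropy iteration that adapts a Gaussian biasing density toward the failure region and then forms the importance-sampling estimate. The multifidelity preconditioning of the talk, running this loop on cheap surrogates first and warm-starting the next level with the resulting density, is only indicated in the docstring; the Gaussian family, limit-state interface, sample sizes, and quantile level are illustrative assumptions.

import numpy as np

def cross_entropy_is(limit_state, dim, threshold=0.0, n=2000, rho=0.1,
                     n_iter=20, rng=None):
    """Single-fidelity cross-entropy importance sampling for P[g(x) <= threshold],
    x ~ N(0, I). A multifidelity variant would run the same loop on surrogates
    first and reuse the resulting (mu, sigma) to warm-start the next level."""
    rng = np.random.default_rng(rng)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(n_iter):
        x = mu + sigma * rng.standard_normal((n, dim))
        g = limit_state(x)
        gamma = max(np.quantile(g, rho), threshold)    # relaxed intermediate threshold
        elite = x[g <= gamma]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
        if gamma <= threshold:
            break
    # Importance-sampling estimate with the final Gaussian biasing density.
    x = mu + sigma * rng.standard_normal((n, dim))
    log_w = (-0.5 * np.sum(x**2, axis=1)                      # nominal N(0, I) log-density
             + 0.5 * np.sum(((x - mu) / sigma)**2, axis=1)    # minus biasing log-density
             + np.sum(np.log(sigma)))
    return np.mean((limit_state(x) <= threshold) * np.exp(log_w))

# Example limit state (illustrative): failure when the standardized sum exceeds 4.
# p_hat = cross_entropy_is(lambda x: 4.0 - x.sum(axis=1) / np.sqrt(x.shape[1]), dim=10)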

Benjamin Peherstorfer
Department of Mechanical Engineering
University of Wisconsin-Madison
[email protected]

Boris Kramer, Karen E. Willcox
Massachusetts Institute of Technology
[email protected], [email protected]

MS84

Multi-level Uncertainty Aggregation with Bayesian Model Error Calibration and Validation

This talk considers the use of data from tests of lower complexity to inform the prediction regarding a complicated system where no test data is available. Typically, simpler test configurations are used to infer the unknown physics model parameters of a system, and the calibration results are propagated through the system model to quantify the uncertainty in the system response. However, the parameter estimation results are affected by the quality of the test configuration model, thus necessitating validation tests of the test configuration model. This talk will present a systematic uncertainty roll-up methodology that integrates calibration and validation results at multiple levels of test configurations towards uncertainty quantification in the system-level prediction. The roll-up methodology also includes a quantitative metric for the relevance of different test configurations to the prediction configuration. A further contribution is the estimation of confidence in using the roll-up distributions of parameters for system-level prediction. An optimization procedure is formulated to maximize the roll-up confidence by selecting the most valuable calibration and validation tests at the lower levels and by designing the selected lower level tests to maximize the information gain. The proposed methods for both the forward and inverse UQ problems are applied to a multi-level dynamics challenge problem developed at Sandia National Laboratories.

Sankaran Mahadevan
Vanderbilt University
[email protected]

MS84

Calibration and Propagation of Model Discrepancy Across Experiments

Despite continuing advances in statistical inversion and modeling, model inadequacy due to model form error remains a concern in all areas of mathematical modeling. Much like physical models, calibrating a discrepancy model requires careful consideration regarding formulation, parameter estimation, and uncertainty quantification, each of which is often problem-specific. The validity of the original physical model, the inadequacy model, and the combined model for the prediction of quantities of interest remains in question. A generalized approach and implementation of model discrepancy detection, construction, and propagation into a predictive setting is presented in the context of Bayesian model calibration and is demonstrated with an example.

Kathryn Maupin
Sandia National Laboratories
[email protected]

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico
[email protected]

MS84

Bayesian Model Reduction using Automatic Relevance Determination (ARD): Observations and Improvements

The authors have previously used ARD to perform automatic model reduction for a nonlinear aeroelastic oscillator using Gaussian ARD priors [1]. In this work, we seek to investigate the following facets of ARD in the context of Bayesian model reduction: 1) difference in sparsity levels between LASSO (least absolute shrinkage and selection operator) and ARD, 2) effect of using different distributions (uniform, Gaussian, Laplace) for ARD priors on evidence optimization and sparsity levels, and 3) using sequential particle filtering techniques to expedite evidence optimization whereby evidence is computed using an MCMC algorithm.

Abhijit Sarkar
Associate Professor
Carleton University, Ottawa, Canada
abhijit_sarkar@carleton.ca

Rimple Sandhu
Carleton University, Canada
rimple_sandhu@carleton.ca

Chris Pettit
United States Naval Academy
[email protected]

Mohammad Khalil
Sandia National Laboratories
[email protected]

Dominique Poirel
Royal Military College of Canada
[email protected]

MS84

Bayesian Inference of Subsurface Stratification

Abstract not available at time of publication.

Honglei Sun
Zhejiang University
[email protected]

MS85

Multilevel/Multifidelity Monte Carlo for Wave Propagation in Heterogeneous Media

In the last few years, efficient algorithms have been proposed for the propagation of uncertainties through numerical codes; however, lack of regularity of the solution and high dimensional input spaces still pose a challenge for UQ in realistic configurations. In this scenario Monte Carlo is often the only viable alternative, but its slow convergence rate makes it difficult to apply in the context of expensive high-fidelity simulations. More recently, multilevel and multilevel/multifidelity estimators have been introduced in order to increase the reliability of a standard MC estimator by reducing its variance through the use of a larger number of coarser/low-fidelity simulations. In this talk we will discuss and apply different multilevel/multifidelity estimators to wave propagation problems in heterogeneous media and we will compare their performance with respect to the standard MC estimator.
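For orientation, a plain multilevel Monte Carlo estimator has the generic form below; the multilevel/multifidelity estimators compared in the talk generalize this telescoping construction. The sample_level interface and the sample allocation are assumptions for illustration, not the estimators used in the talk.

import numpy as np

def mlmc_estimate(sample_level, n_samples):
    """Plain multilevel Monte Carlo estimate of E[Q_L].

    sample_level(l, n) must return n i.i.d. samples of the level-l correction
    Y_l = Q_l - Q_{l-1} (with Y_0 = Q_0), evaluated on the same random inputs
    at both discretization levels so that Var[Y_l] decays with l.
    """
    estimate, level_vars = 0.0, []
    for l, n in enumerate(n_samples):
        y = np.asarray(sample_level(l, n))
        estimate += y.mean()               # telescoping sum of correction means
        level_vars.append(y.var(ddof=1))   # level variances, useful for reallocating samples
    return estimate, level_vars

In the standard analysis the per-level sample sizes are then chosen proportional to sqrt(V_l / C_l), where V_l and C_l are the variance and cost of the level-l correction.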

Gianluca Geraci
Sandia National Laboratories
[email protected]

Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

MS85

Efficient Stochastic Galerkin Methods for Uncertainty Quantification of CO2 Storage in Saline Aquifers

Sequestration of CO2 in subsurface porous media is a key tool for mitigating anthropogenic greenhouse gas emissions causing climate change. In order to ensure safe and secure long time storage, it is essential to quantify the influence of parameter uncertainties and reservoir heterogeneity on leakage risk. In this work we apply intrusive uncertainty quantification (UQ) using a Stochastic Galerkin (SG) framework with generalized Polynomial Chaos on a vertically integrated CO2 transport model. To improve efficiency of the UQ, a locally-reduced-order-basis approach and a means for reducing the cost of calculating the SG matrices is employed. The accuracy and efficiency of the improved SG method is then compared to traditional sampling based UQ.

Daniel Olderkjær, Per Pettersson
Uni Research
[email protected], [email protected]

MS85

Data-driven Uncertainty Quantification for Transport Problems in Heterogeneous Porous Media

A challenging problem in flow and transport simulations arises in reservoirs that contain channels or other features where the permeability varies discontinuously between channels and the surrounding matrix. Traditional two-point statistics methods like Karhunen-Loève expansions perform poorly in representing these features, but multiple-point statistics methods have been used successfully to represent channels within a stochastic framework. In this work we use a kernel transformation to create an efficient stochastic parameterization of heterogeneous permeability fields, honoring multiple-point statistics. We then perform forward uncertainty quantification for transport problems using wavelets based on data, combined with stochastic model reduction using analysis of variance (ANOVA) decomposition. The proposed methodology is tested on a problem setup that is representative for reservoirs employed in off-shore CO2 storage in the North Sea.

Per Pettersson
Uni Research
[email protected]

Anna Nissen
KTH Royal Institute of Technology
[email protected]

MS86

Data-space Inversion for Uncertainty Quantification in Reservoir Simulation and Carbon Storage Applications

Data-space inversion (DSI) enables posterior flow predictions and uncertainty quantification based on prior-model simulation results and observed data. Posterior geological models are not constructed. In previous work, we developed and applied DSI procedures for predicting well-based quantities of interest, such as oil and water production rates. In this talk, new DSI treatments and applications will be presented. First, DSI is extended to enable posterior forecasts, under variable (user-specified) well settings, to be generated without re-simulating any models. This is accomplished by treating new well settings as additional observed data to be matched when posterior DSI predictions are generated. We will demonstrate that this capability facilitates the use of DSI for production optimization under uncertainty. Next, DSI is extended to enable the prediction of CO2 plume location in carbon storage operations. In this application the data derive from monitoring well observations, and a procedure that efficiently optimizes monitoring well locations is also incorporated. The resulting DSI framework will be shown to provide reasonable predictions for CO2 plume location, with uncertainty reduced significantly relative to that in prior simulations.

Louis J. Durlofsky
Stanford University
Dept of Petroleum Engineering
[email protected]

Wenyue Sun, Su Jiang
Stanford University
Dept of Energy Resources Engineering
[email protected], [email protected]

MS86

A Data-driven Multiscale Finite Volume Method for Uncertainty Quantification

We propose a framework to accelerate multiscale methods within uncertainty quantification by leveraging the repeated computations of basis functions which are obtained by solving localized elliptic problems. Concretely, samples of basis functions are collected from a small number of runs during the uncertainty propagation pipeline. These samples are then used to train a fast regression model of a basis predictor. This predictor then replaces the relatively more expensive computations of localized problems in subsequent forward runs. The effectiveness of the proposed framework is demonstrated on two test cases of multiphase porous media flow.

Ahmed H. ElSheikh
Institute of Petroleum Engineering
Heriot-Watt University, Edinburgh, UK
[email protected]

Shing Chan
Heriot-Watt University, UK
[email protected]

MS86

Learning Complex Geologic Patterns for Subsurface Flow Model Calibration

We develop a machine-learning-based framework to aid the reconstruction of complex geological facies maps from limited non-linear measurements in subsurface flow applications. The proposed formulation involves a regularized least-squares problem with a geological feasibility constraint. The least-squares minimization is decomposed into two subproblems that are iteratively solved using the alternating directions algorithm. The first subproblem integrates the nonlinear flow data to obtain approximate continuous solutions, while the second subproblem maps the resulting continuous solution onto a geologically feasible set. The feasible set is defined by prior training models that represent the expected connectivity patterns in the discrete facies distributions. The mapping of the continuous solution to the feasible set with discrete geologic patterns is implemented using supervised learning via the k-Nearest-Neighbors (kNN) algorithm. Numerical examples are used to evaluate the performance of the method and discuss its important properties.

Azarang Golmohammadi
Electrical Engineering, USC
[email protected]

Behnam Jafarpour
Department of Chemical Engineering
[email protected]

MS86

Ultra-fast Reactive Transport Simulations using Machine Learning

Modeling chemical reactions in reactive transport simulations is both challenging and computationally expensive. The involved computations, chemical equilibrium and kinetics calculations, can account for over 99% of all computational costs in the simulation. For example, to calculate the chemical equilibrium state of the fluid and solid phases in every mesh cell (or degree of freedom), in every time step, several iterations are needed to solve the underlying non-linear equations governing chemical equilibrium, with each iteration requiring the computation of thermodynamic properties for all species (e.g., activity and fugacity coefficients). In this talk, we present a novel machine learning algorithm that is able to learn on-demand how to perform chemical equilibrium calculations. Whenever the algorithm is applied, it identifies among all previously solved chemical equilibrium problems the one that is closest to the new problem. Once this happens, the machine learning algorithm is able to quickly predict the result, bypassing all those costly operations (evaluation of thermodynamic properties, iterative solution of non-linear equations). If the prediction is not accurate enough, then a training event is triggered so that the smart algorithm can learn how to solve similar problems in the future. We show that reactive transport simulations can be performed orders of magnitude faster using the proposed on-demand machine learning algorithm.
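The on-demand idea can be illustrated with a generic caching wrapper: answer a query from the closest previously solved problem when it is close enough, otherwise fall back to the exact solver and store the result (a "training event"). The distance-based acceptance test below is a placeholder assumption; the actual algorithm uses chemistry-specific prediction and error control rather than a plain input tolerance.

import numpy as np

class OnDemandSurrogate:
    """Generic on-demand learner: nearest-neighbor prediction with an exact
    solver as fallback, accumulating training data as the simulation runs."""

    def __init__(self, exact_solver, tol):
        self.exact_solver = exact_solver
        self.tol = tol
        self.inputs, self.outputs = [], []

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        if self.inputs:
            X = np.asarray(self.inputs)
            i = int(np.argmin(np.linalg.norm(X - x, axis=1)))
            if np.linalg.norm(X[i] - x) < self.tol:
                return self.outputs[i]            # cheap prediction from a past problem
        y = self.exact_solver(x)                  # expensive fallback ("training event")
        self.inputs.append(x)
        self.outputs.append(y)
        return y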

Allan Leal
Institute of Geophysics, ETH Zurich
[email protected]

Dmitrii Kulik
Laboratory for Waste Management, Paul Scherrer Institute
[email protected]

Martin Saar
Institute of Geophysics, ETH Zurich
[email protected]

MS87

Variational Reformulation of the Uncertainty Propagation Problem using Probabilistic Numerics

Stochastic partial differential equations (SPDE) have a ubiquitous presence in computational science and engineering. Stochasticity in SPDEs arises from unknown/random boundary/initial conditions or field parameters (e.g., the permeability of the ground in flow through porous media, the thermal conductivity in heat transfer) and, thus, it is inherently high-dimensional. In this regime, traditional uncertainty propagation techniques fail because they attempt to learn the high-dimensional response surfaces (the curse of dimensionality). The only viable alternative is Monte Carlo (and advanced variants such as multi-level MC). However, as A. O'Hagan put it in his 1987 paper, Monte Carlo is fundamentally unsound because it fails to identify and exploit correlations between the samples. In this work, we develop a promising alternative to MC inspired by recent advances in probabilistic numerics (PN) and variational inference (VI). Our method does not rely on a traditional PDE solver, and it does not attempt to learn a response surface. Instead, we use PN, which results in two advantages. First, we gain control over the computational cost, albeit at the expense of additional (but quantified) epistemic uncertainty. Second, PN allows us to quantify the information loss between the true solution of the uncertainty propagation problem and a candidate parameterization. The latter results in a reformulation of the uncertainty propagation problem as a variational inference problem.

Ilias Bilionis
Purdue University
[email protected]

Panagiotis Tsilifis
University of Southern California
[email protected]

MS87

Uncertainty Quantification for Kinetic Equations

Abstract not available at time of publication.

Shi Jin
Shanghai Jiao Tong University, China and the University of Wisconsin-Madison
[email protected]

MS87

Computational Geometry Aspects of Monte Carlo Approaches to PDE Problems in Biology, Chemistry, and Materials

We will introduce some Monte Carlo methods for solving problems in electrostatics. These rely on evaluating functionals of the first-passage time of Brownian motion on geometries defined by the system of interest. We will use the Walk on Spheres (WOS) algorithm to quickly evaluate these functionals, and we will show how a computational geometric computation dominates this computation in complexity. We then consider computing the capacitance of a complicated shape, and use this as our model problem to find an efficient serial and parallel implementation. The capacitance computation is prototypical of many Monte Carlo approaches to problems in biology, biochemistry, and materials science. This is joint work with Drs. Walid Keyrouz and Derek Juba from the Information Technology Laboratory at NIST.
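A bare-bones Walk on Spheres estimator for the Dirichlet problem for Laplace's equation is sketched below; it shows where the distance-to-boundary query, the computational-geometry kernel emphasized in the talk, enters every step of every walk. The unit-disk example and tolerance are illustrative assumptions; the capacitance computation of the talk accumulates first-passage functionals instead of boundary values.

import numpy as np

def walk_on_spheres(x0, dist_to_boundary, boundary_value, eps=1e-4,
                    n_walks=10000, rng=None):
    """Walk on Spheres estimate of u(x0) for Laplace's equation with Dirichlet
    data, using only a distance-to-boundary oracle. Each walk jumps to a
    uniform point on the largest sphere centered at the current position that
    fits inside the domain, and stops within eps of the boundary."""
    rng = np.random.default_rng(rng)
    dim, total = len(x0), 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = dist_to_boundary(x)           # the computational-geometry query
            if r < eps:
                total += boundary_value(x)
                break
            step = rng.standard_normal(dim)
            x = x + r * step / np.linalg.norm(step)   # uniform point on the sphere
    return total / n_walks

# Example: unit disk with boundary data g(x, y) = x, whose harmonic extension is u(x, y) = x.
est = walk_on_spheres([0.3, 0.2],
                      dist_to_boundary=lambda x: 1.0 - np.linalg.norm(x),
                      boundary_value=lambda x: x[0])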

Michael Mascagni
Computer Science/Computational Science
Florida State University, Tallahassee
[email protected]

MS87

Distributed Learning

Analyzing and processing big data has been an important and challenging task in various fields of science and technology. Distributed learning provides powerful methods for handling big data and forms an important topic in learning theory. It is based on a divide-and-conquer approach and consists of three steps: first, we take a data set stored distributively, or divide oversized data into subsets, and each data subset is distributed to one individual machine; then each machine processes the distributed data subset to produce one output; finally, the outputs from individual machines are combined to generate an output of the distributed learning algorithm. It is expected that a distributed learning algorithm can perform as efficiently as one big machine which could process the whole oversized data, in addition to the advantages of reducing storage and computing costs. This talk describes mathematical analysis of distributed learning.
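The divide-and-conquer pattern can be made concrete with a toy example: fit one regularized least-squares estimator per data subset and combine by averaging. The ridge model, the subset split, and the simple averaging rule below are illustrative assumptions, not the specific algorithms analyzed in the talk.

import numpy as np

def distributed_ridge(X, y, n_machines, lam=1e-2):
    """Divide-and-conquer ridge regression: one estimator per data subset,
    combined by averaging the coefficient vectors."""
    coefs = []
    for Xs, ys in zip(np.array_split(X, n_machines), np.array_split(y, n_machines)):
        d = Xs.shape[1]
        coefs.append(np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ ys))
    return np.mean(coefs, axis=0)

# Toy check: recover a linear model from 10 "machines".
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(5000)
print(distributed_ridge(X, y, n_machines=10))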

Ding Xuan Zhou
City University of Hong Kong
[email protected]

MS88

Dakota: Explore and Predict with Confidence

Driven by Sandia National Laboratories' applications, the Dakota project (http://dakota.sandia.gov) invests in both state-of-the-art research and robust, usable software for optimization and UQ. Broadly, Dakota's advanced parametric analysis enables design exploration, model calibration, risk analysis, and quantification of margins and uncertainty with computational models. This presentation will survey challenges with Sandia applications and Dakota capabilities that tackle them. Notably, Dakota has a broad suite of forward and inverse UQ algorithms, including sampling, reliability, stochastic expansions, and non-probabilistic approaches, any of which can be paired with model calibration or design optimization. We will highlight active research and development directions, which include dimension reduction, surrogate modeling, multi-fidelity methods, and inference, together with architecture and usability advances that support their ready use by analysts.

Brian M. Adams
Sandia National Laboratories
Optimization/Uncertainty Quantification
[email protected]

Patricia D. Hough
Sandia National Laboratories
[email protected]

J. Adam Stephens
Optimization/Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS88

The OpenTURNS Uncertainty Quantification Software

OpenTURNS is an open source C++ library for uncertainty propagation by probabilistic methods. Developed in a partnership of four industrial companies (EDF, Airbus, Phimeca and IMACS), it benefits from a strong practical feedback. Classical algorithms of UQ (central dispersion, probability of exceedance, sensitivity analysis, metamodels) are available and efficiently implemented. Connecting a new simulator to OpenTURNS is easy, thanks to various wrapper services. The OpenTURNS Python module, built on top of the C++ library, is the interface that most users commonly know in practice. However, there are situations where we want to perform a UQ study without using a programming language. This is why we developed the graphical user interface (GUI) of OpenTURNS, with the goal of increasing the use of OpenTURNS and, more generally, of the UQ methodology. In this talk, we present the OpenTURNS GUI within SALOME, an open source platform for pre- and post-processing of numerical simulations. Through examples, we discuss the main features of the tool: central dispersion analysis, global sensitivity analysis and threshold probability estimation. We also present advanced graphical features, including the interactive plot matrix view and the parallel coordinate plot available in the GUI and based on the Paraview framework. Finally, we show how the interface can be used within an HPC context, with only a limited input from the user.

Michael Baudin, Anne-Laure Popelin, Anthony Geay, Ovidiu Mirescu, Anne Dutfoy
EDF R&D
[email protected], [email protected], [email protected], [email protected], [email protected]

MS88

Data-driven, Adaptive Sparse Grids for UQ in SG++

Adaptive sparse grids provide a flexible and versatile way to represent higher-dimensional dependencies. For UQ, they can be employed for surrogate-based forward propagation as well as for model calibration via density estimation. Therefore, sparse grids have gained increasing interest for UQ. SG++ is the most extensive toolkit for spatially adaptive sparse grids. It is a multi-platform toolkit that excels by fast and efficient algorithms for spatially adaptive sparse grids for UQ and beyond. It provides advanced higher-order basis functions such as B-splines, which can accelerate higher-dimensional stochastic collocation significantly. SG++ supplies operations for sparse grids to compute Jacobian and Hessian matrices, Sobol indices, or the Rosenblatt transformation for estimated sparse grid densities. SG++ has recently been extended by the sparse grid combination technique (CT / Smolyak rule), which supports a large variety of one-dimensional quadrature rules, including new L2 Leja sequences and global and local polynomial basis functions, and dimensionally adaptive refinement. Transformations are included from CT to spatially adaptive sparse grids and vice versa, with proper grid truncation or extension if required. Furthermore, we provide an interface to DAKOTA which makes it possible to obtain a polynomial chaos expansion that is equivalent to the CT solution. We show use cases and its convenient use via its rich interfaces to Python and Matlab.

Fabian Franzelin, Dirk Pfluger
University of Stuttgart
[email protected], [email protected]

MS88

MIT Uncertainty Quantification (MUQ): Bayesian Computation for Statistical and Physical Problems

MIT Uncertainty Quantification (MUQ) is an open-source software package that provides tools for easily combining models with structure-exploiting uncertainty quantification (UQ) algorithms. MUQ provides a framework for modelers, i.e., experts in their field who are not necessarily versed in the complexities of developing UQ algorithms, to integrate state-of-the-art UQ tools into their workflow. Users decompose both algorithms and physical/statistical models into simple subcomponents, which are connected in a graph that exposes relevant structure. Algorithm subcomponents in MUQ have been designed with a common interface and with attributes that mimic their mathematical formulations, thus simplifying design, facilitating subcomponent exchange, and minimizing the distance between high-level abstractions and code. Users create model subcomponents by writing simple wrappers around existing models; these wrappers define the interface with UQ components and allow MUQ to evaluate the code. Algorithm and model construction is then as simple as defining connections between components. MUQ uses software engineering practices to provide a straightforward graphical structure in a mathematically-aware framework to reduce the effort for domain experts to use UQ algorithms. We have developed state-of-the-art tools for MCMC sampling, constructing sparse polynomial chaos expansions and other surrogate models, modeling random fields, performing inference with transport maps, and more.

Andrew D. [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

Matthew Parno
US Army Cold Regions Research and Engineering Lab (CRREL)
[email protected]

Arnold Song
US Army Cold Regions Research and Engineering Lab
[email protected]

MS89

Targeting a Constrained Traveling Observer by Ensemble Kalman Filter Techniques

We examine the effect of the spatial location of an observation within a Kalman filter data assimilation technique for a nonlinear model of potential meteorological interest. In particular, we model a traveling atmospheric observer subject to a maximum speed constraint, and determine a targeting strategy for this observer based on minimizing the ensemble variance. We then evaluate the performance of this targeting strategy in reducing the forecast error and analysis uncertainty.
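For context, a stochastic ensemble Kalman filter analysis step is sketched below; the targeting strategy of the talk would use such an update to score candidate observation locations by the resulting ensemble variance. The linear observation operator and perturbed-observation form are simplifying assumptions, not the specific filter configuration of the talk.

import numpy as np

def enkf_analysis(ensemble, H, y_obs, obs_cov, rng=None):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_members, n_state) forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y_obs    : (n_obs,) observation vector
    obs_cov  : (n_obs, n_obs) observation error covariance
    """
    rng = np.random.default_rng(rng)
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                  # ensemble anomalies
    P = X.T @ X / (n_members - 1)                         # sample forecast covariance
    S = H @ P @ H.T + obs_cov
    K = P @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))  # Kalman gain
    # Perturbed observations give the correct analysis spread.
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), obs_cov, n_members)
    innovations = y_pert - ensemble @ H.T
    return ensemble + innovations @ K.T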

Thomas Bellsky
University of Maine
[email protected]

MS89

Targeted Observation Strategy for Space-weather Forecasting During a Geomagnetic Storm

We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve global ionospheric state estimates. This strategy is applied in data assimilation observing system experiments, where a network of synthetic electron density vertical profiles, which represents that of realistic operational satellites, is assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIEGCM), using the Local Ensemble Transform Kalman Filter (LETKF), during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root mean square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields a significant reduction in the RMSE of analyzed electron density and other state variables, such as neutral winds. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the state estimate.

Juan Durazo
Arizona State University
[email protected]

MS89

Data Assimilation for Irradiance Forecasting

Accurate global horizontal irradiance (GHI) estimates and forecasts are critical when determining the optimal location for a solar power plant, forecasting utility scale solar power production, or forecasting distributed, behind the meter rooftop solar power production. Satellite images provide a basis for producing the GHI estimates needed to undertake these objectives. The focus of this work is to use accurate but sparsely distributed ground sensors to improve satellite derived GHI estimates which cover large areas. We utilize satellite images taken from the GOES-15 geostationary satellite (available every 15-30 minutes) as well as ground data taken from irradiance sensors and rooftop solar arrays (available every 5 minutes). The advection model, driven by wind forecasts from a numerical weather model, simulates cloud motion between measurements. We use the Local Ensemble Transform Kalman Filter (LETKF) to perform the data assimilation. We present preliminary results towards making such a system useful in an operational context. We explain how localization and inflation in the LETKF, perturbations of wind-fields, and random perturbations of the advection model affect the accuracy of our estimates and forecasts. We present experiments showing the accuracy of our forecasted GHI over forecast horizons of 15 mins to 1 hr. The limitations of our approach and future improvements are also discussed.

Travis M. Harty
University of Arizona
[email protected]


Matthias Morzfeld
University of Arizona
Department of Mathematics
[email protected]

William Holmgren
University of Arizona
[email protected]

Antonio Lorenzo
University of Arizona
Department of Hydrology & Atmospheric Sciences
[email protected]

MS89

Forecast Sensitivity to Observation Impact and Effect of Uncertainty Estimation

In data assimilation, which attempts to predict the nonlinear evolution of a system by combining a complex computational model, observations, and the uncertainties associated with them, it is useful to be able to quantify the amount of information provided by an observation or by an observing system. Measures of the observational influence are useful for understanding the performance of the data assimilation system. The forecast sensitivity to observations provides a practical and useful metric for the assessment of observations. Quite often, complex data assimilation systems use a simplified version of the forecast sensitivity formulation based on ensembles. In this talk, we first present a comparison of forecast sensitivity for 4DVar, Hybrid-4DEnVar, and 4DEnKF with and without such simplifications using a highly nonlinear model. We then present results of ensemble forecast sensitivity to satellite radiance observations for Hybrid-4DEnVar using NOAA's Global Forecast System.

Kayo Ide
University of Maryland, College Park
[email protected]

MS90

Bayesian Computation in Hierarchical Models Using Marginal Local Approximation MCMC

Hierarchical Bayesian models can give rise to inference problems where the actual parameters of interest may be low dimensional relative to the entire state. For instance, one may wish to learn hyperparameters of the prior model in a Bayesian inverse problem, rather than the parameters themselves. Jointly inferring all of the uncertain parameters requires sampling a high dimensional distribution that has complex structure. Moreover, in realistic applications, associated joint density evaluations can be computationally expensive. We address the challenges of sampling and computational cost through a combination of marginalization and surrogate modeling. We construct a Markov chain that directly targets the marginal posterior of the actual parameters of interest. The desired marginal density is often not available analytically; instead, we compute unbiased estimates of this density. We then introduce an MCMC algorithm that progressively synthesizes a local regression approximation of the low-dimensional target using these Monte Carlo estimates. Our approach exploits regularity in the marginal density to significantly reduce computational expense relative to both regular and pseudo-marginal MCMC. Continual refinement of the approximation leads to an asymptotically exact characterization of the desired marginal. Analysis of the bias-variance trade-off guides an ideal refinement strategy that balances the decay rates of components of the error.

Andrew D. [email protected]

MS90

Sampling Hyperparameters in Hierarchical Models

A common Bayesian model is one where high-dimensional observed data depend on high-dimensional latent variables that, in turn, depend on relatively few hyperparameters. When the full conditional distribution over latent variables has a known form, general MCMC sampling need only be performed on the low-dimensional marginal posterior distribution over hyperparameters. This gives orders-of-magnitude improvement over popular posterior sampling methods such as random-walk Metropolis-Hastings or Gibbs sampling. For the common linear-Gaussian inverse problem, this marginal-then-conditional decomposition allows the posterior mean to be calculated for less compute cost than regularized inversion, as well as providing full uncertainty quantification.
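A toy version of the marginal-then-conditional decomposition for a linear-Gaussian model with unknown prior and noise precisions is sketched below: Metropolis sampling acts only on the two hyperparameters via the analytically available marginal likelihood, and the high-dimensional latent vector is drawn exactly from its Gaussian conditional. The flat prior on the log-hyperparameters and the random-walk proposal are illustrative assumptions, not the specific scheme discussed in the talk.

import numpy as np
from scipy.stats import multivariate_normal

def log_marginal(y, A, delta, lam):
    """log p(y | delta, lam) for y = A x + e, x ~ N(0, I/delta), e ~ N(0, I/lam)."""
    cov = A @ A.T / delta + np.eye(len(y)) / lam
    return multivariate_normal.logpdf(y, mean=np.zeros(len(y)), cov=cov)

def marginal_then_conditional(y, A, n_samples=5000, step=0.3, rng=None):
    """Random-walk MH on (log delta, log lam) over the marginal posterior,
    followed by exact conditional Gaussian draws of the latent vector x."""
    rng = np.random.default_rng(rng)
    theta = np.zeros(2)                                   # (log delta, log lam)
    lp = log_marginal(y, A, *np.exp(theta))
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_marginal(y, A, *np.exp(prop))       # flat prior assumed on log-hyperparameters
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        delta, lam = np.exp(theta)
        # Conditional draw x | y, delta, lam from its known Gaussian form.
        Sigma = np.linalg.inv(lam * A.T @ A + delta * np.eye(A.shape[1]))
        mu = lam * Sigma @ A.T @ y
        samples.append((delta, lam, rng.multivariate_normal(mu, Sigma)))
    return samples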

Colin Fox
University of Otago, New Zealand
[email protected]

MS90

Hierarchical Priors in Atmospheric Tomography

Adaptive optics (AO) is a technology in modern ground-based optical telescopes to compensate the wavefront distortions caused by atmospheric turbulence. One method that allows one to retrieve information about the atmosphere from telescope data is the so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of the wavefront measurements. In this talk we propose a novel extension of the SLODAR concept which corresponds to empirical estimation of a hierarchical prior in a specific inverse problem.

Tapio Helin
University of Helsinki
[email protected]

Stefan Kindermann
Johannes Kepler University Linz
[email protected]

Jonatan Lehtonen
University of Helsinki
[email protected]

Ronny Ramlau
Johannes Kepler Universitat Linz, Austria
[email protected]

MS90

Bilevel Parameter Learning in Inverse Imaging Problems

One of the most successful approaches to solve inverse problems in imaging is to cast the problem as a variational model. The key to the success of the variational approach is to define the variational energy such that its minimiser reflects the structural properties of the imaging problem in terms of regularisation and data consistency. The expressibility of variational models depends on the right choice of parametrisation of regularisation and data consistency terms. Various strategies exist for such a parameter choice, e.g. physical modelling, harmonic analysis, and the classical discrepancy principle, just to name a few. In this presentation I will discuss bilevel optimisation to learn more expressible variational models. The basic principle is to consider a bilevel optimization problem, where the parametrised variational model appears as the lower-level problem and the higher-level problem is the minimization over the parameters of a loss function that measures the reconstruction error for the solution of the variational model. In this talk we discuss bilevel optimisation, its analysis and numerical treatment, and show applications to regularisation learning, learning of noise models and of sampling patterns in MRI. This talk includes joint work with M. Benning, L. Calatroni, C. Chung, J. C. De Los Reyes, M. Ehrhardt, G. Maierhofer, F. Sherry, and T. Valkonen.

Carola-Bibiane Schonlieb
DAMTP, University of Cambridge
[email protected]

MS91

Decomposing Functional Model Inputs for Variance-based Sensitivity Analysis

Variance-based sensitivity analysis, developed by Sobol' (1993) and further elaborated by others including Saltelli, Chan and Scott (2000), is a popular technique for assessing the importance of model inputs when there are natural or meaningful probability distributions associated with each input. This approach can be used when some of the model inputs are functions rather than scalar-valued (e.g. Iooss and Ribatet, 2009, and Jacques et al., 2006), but may be somewhat less useful in this case because it does not address the nature of the relationships between functional inputs and model outputs. We consider the option of separating a random function-valued input, represented by a vector of relatively high dimension, into one or a few scalar-valued summaries that are suggested by the context of the modeling exercise, and an independent, high dimensional "residual". We also describe a graphical technique that may help to identify useful low-dimensional function summaries. When the model output is more sensitive to the low dimensional summaries than to the residuals, this is useful information concerning the nature of model sensitivity, and also provides a route to constructing model surrogates with scalar-valued indices that accurately represent most of the variation in the output.

Max D. Morris
Iowa State University
[email protected]

MS91

Calibration with Frequentist Coverage and Consistency

The resurgence of calibration as a research topic has produced significant work on methods with favorable convergence properties. But there has been a noted lack of conversations about statistically-justified confidence intervals. This talk will give intervals for parameters that, with very few assumptions, contain the underlying best parameter value with the desired confidence, even in small samples. These intervals are also consistent: suboptimal parameters are discarded with probability tending to one.

Matthew Plumlee
University of Michigan
[email protected]

MS91

Variable Selection Based on a Bayesian Composite Gaussian Process Model

This talk will describe a variable selection method to identify active inputs to a deterministic simulator code. The methodology is based on a Bayesian composite Gaussian Process (GP) model. This composite model assumes that the output can be described as the sum of a mean plus smooth deviations from the mean. The mean and the deviations are modeled as draws from independent GPs with separable correlation functions subject to appropriate constraints to enforce identifiability of the mean compared with the deviation process. Inputs having smaller estimated correlation parameters are judged to be more active. A reference inactive input is added to the data to judge the size of the correlation parameter for inactive inputs.

Thomas Santner
Ohio State University
[email protected]

Casey Davis
[email protected]

Christopher Hans
Ohio State University
[email protected]

MS91

Calibration for Computer Experiments with Binary Responses

Understanding calibration parameters is often critical when physical and computer experiments are both conducted and there exist unshared input variables. Many methods for experiments with continuous responses have been widely discussed in the literature. On the other hand, with molecular experiments at hand, calibration methods for experiments with binary responses are in great demand but have received scant attention. In this paper, we propose an L2 calibration method for computer experiments with binary responses to estimate calibration parameters, and show that the estimators are asymptotically consistent and semiparametric efficient. Numerical examples are examined and show that the method results in accurate estimates. Lastly, we apply the method to molecular experiments; the estimated calibration parameters provide scientific insights in biology which cannot be obtained directly by experiment.

Chih-Li Sung
Georgia Institute of Technology
[email protected]

Ying Hung
Department of Statistics and Biostatistics
Rutgers University
[email protected]

William Rittase, Cheng Zhu
Department of Biomedical Engineering
Georgia Institute of Technology
[email protected], [email protected]

C. F. Jeff Wu
Georgia Institute of Technology
[email protected]

MS92

Data-driven Correction of Model and Representation Error in Data Assimilation

Conventional data assimilation assumes that a dynamical model and observation function are both known and correctly specified up to parameterization and noise models. When this assumption fails, the assimilation procedure can produce spurious results and assign them high degrees of confidence. Methods such as adaptive filtering can quantify this model mismatch by comparing filter-based forecasts to actual observations and applying statistical tests. In this talk we discuss methods which go beyond quantification of model error and actually correct the model error. By combining reconstructed dynamics from historical data with manifold learning based forecasting and regression methods, we build data-driven auxiliary models which correct the model error in the dynamical and/or observation models. These methods can be used to improve uncertainty quantification for data assimilation and forecasting.

Tyrus Berry
George Mason University
[email protected]

John Harlim
Pennsylvania State University
[email protected]

Franz Hamilton
Department of Mathematics
North Carolina State University
[email protected]

Timothy Sauer
Department of Mathematical Sciences
George Mason University
[email protected]

MS92

Learning Sparse Non-Gaussian Graphical Models from Data

In a Markov network, represented as an undirected probabilistic graphical model, the lack of an edge denotes conditional independence between the corresponding nodes. Sparsity in the graph is of interest as it can, for example, accelerate inference or reveal important couplings within multi-physics systems. Given a dataset from an unknown, continuous, non-Gaussian distribution, our goal is to identify the sparsity of the associated graph. The proposed algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another. We present results for datasets drawn from multi-disciplinary systems.

Rebecca Morrison
Massachusetts Institute of Technology
[email protected]

MS92

Bayesian Generative Models for Quantifying Input Uncertainty using Limited Realizations

Quantifying input uncertainty is an essential yet rather unexplored step before one can proceed with uncertainty propagation. This density estimation problem becomes very challenging when the uncertain input is a random field given in terms of limited realizations. Recently there has been rapid progress in generative models which represent the discretized random field with expressive transforms of simple random variables. These transforms are parameterized using deep neural networks. Their expressiveness and advances in training deep learning algorithms enable generative models to approximate the input random field significantly better than other methods. In this work, we extend Generative Adversarial Networks (GANs) to a Bayesian setting to address the challenges of limited realizations and known issues of GANs, such as mode collapse. The improvement and performance of the methodology are illustrated with stochastic reconstruction of heterogeneous random media with applications to geological flows.

Nicholas Zabaras, Yinhao Zhu
University of Notre Dame
[email protected], [email protected]

MS92

Bayesian Deep Neural Networks for Surrogate Modeling

We investigate scaling Bayesian inference for deep neural networks with applications to surrogate modeling for computer simulations of stochastic differential equations with high-dimensional input. Approaches to surrogate modeling based on Gaussian processes are attractive because of their Bayesian nature, but are difficult to scale to large data sets. On the other hand, deep neural networks have seen major success in the realm of big data, but Bayesian versions are less developed. In this work, we scale Stein's method for Bayesian neural networks to surrogate modeling appropriate for uncertainty propagation. We illustrate the methodology with standard benchmark problems in the solution of SPDEs as well as other UQ examples where most available techniques fail.

Yinhao Zhu, Nicholas Zabaras
University of Notre Dame
[email protected], [email protected]

MS93

Statistical Error Modeling for Approximate Solutions to Parameterized Systems of Nonlinear Equations using Machine Learning

Uncertainty-quantification tasks are often many-query in nature, as they require repeated evaluations of a model that often corresponds to a parameterized system of nonlinear equations (e.g., arising from the spatial discretization of a PDE). To make this task tractable for large-scale models, solution approximations (e.g., reduced-order models, coarse-mesh solutions, and unconverged iterations) must be employed. However, such approximations introduce additional error, which may be treated as a source of epistemic uncertainty, that must be quantified to ensure rigor in the ultimate UQ result. We present an approach to quantify the error (i.e., epistemic uncertainty) introduced by these solution approximations. The approach (1) engineers features that are informative of the error using concepts related to dual-weighted residuals and rigorous error bounds, and (2) applies machine learning regression techniques (e.g., artificial neural networks, random forests, support vector machines) to construct a statistical model of the error from these features. We consider both (signed) errors in quantities of interest, as well as global state-space error norms. We present several examples to demonstrate the effectiveness of the proposed approach compared to more conventional feature and regression choices. In each of the examples, the predicted errors have a coefficient of determination (R2) value of at least 0.998.
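
A hedged sketch of step (2) only, assuming feature and error data have already been generated as described; the feature matrix X and error vector y are hypothetical placeholders and the random forest is just one of the regression choices the abstract lists.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def fit_error_model(X, y):
    """X: rows of residual/dual-weighted-residual features from approximate solves
    (hypothetical); y: corresponding signed quantity-of-interest errors computed
    against converged reference solutions on a training set."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
    return model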

Brian A. Freno, Kevin T. Carlberg
Sandia National Laboratories
[email protected], [email protected]

MS93

Quantifying Unresolved Effects in Reduced Order Models using the Mori-Zwanzig Formalism and Variational Multiscale Method

Quantifying the impact of unresolved scales in unsteady reduced order models is a critical aspect of model prediction and design under uncertainty. This work considers a framework combining the Mori-Zwanzig formalism and the Variational Multiscale method (MZ-VMS) as a tool to quantify and model unresolved numerical effects in projection-based reduced order models. The MZ-VMS approach provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian system. In this lower-dimensional system, the impact of unresolved scales on the retained dynamics is embedded in a memory term. This memory term, which is driven by an orthogonal projection of the coarse-scale residual, is used as a starting point to identify and model unresolved dynamics. Examples are presented for unsteady fluid-flow problems.

Eric Parish
Univ of Michigan
[email protected]

Chris Wentland
University of Michigan
[email protected]

Karthik Duraisamy
University of Michigan Ann Arbor
[email protected]

MS93

Dimension Reduction of the Input Parameter Space of Vector-valued Functions

Approximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions in which the function does not significantly vary is a key step for complexity reduction. Among other dimension reduction techniques, the Active Subspace method uses gradients of a scalar-valued function to reduce the parameter space. In this talk, we extend this methodology to vector-valued functions, including multiple scalar-valued functions and functions taking values in functional spaces. Numerical examples reveal the importance of the choice of the metric used to measure errors, and compare it with the commonly used truncated Karhunen-Loeve decomposition.
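
For orientation, the classical scalar-valued construction is an eigendecomposition of the averaged outer product of gradients; one naive extension to vector-valued maps, assuming the identity output metric (precisely the choice the abstract cautions about), is sketched below.

import numpy as np

def active_subspace(jacobian_samples, k):
    """Estimate a k-dimensional active subspace of the inputs of a vector-valued map
    from Monte Carlo samples of its Jacobian (array of shape (N, m, d)); the output
    metric is taken to be the identity here, although that choice matters in general."""
    H = np.mean([J.T @ J for J in jacobian_samples], axis=0)   # d x d matrix
    eigvals, eigvecs = np.linalg.eigh(H)
    order = np.argsort(eigvals)[::-1]                          # descending spectrum
    return eigvals[order], eigvecs[:, order[:k]]               # eigenvalues, basis W1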

Olivier Zahm
Massachusetts Institute of Technology
[email protected]

MS93

Efficient PDE-constrained Optimization under Uncertainty using Adaptive Model Reduction and Sparse Grids

This work introduces a framework for accelerating optimization problems governed by partial differential equations with random coefficients by leveraging adaptive sparse grids and model reduction. Adaptive sparse grids perform efficient integration approximation in a high-dimensional stochastic space, and reduced-order models reduce the cost of objective-function and gradient queries by decreasing the complexity of primal and adjoint PDE solves. A globally convergent trust-region framework accounts for inexactness in the objective and gradient. The proposed method is demonstrated on 1d and 2d flow control problems and shown to result in over an order of magnitude reduction in computational cost when compared to existing methods.

Matthew J. Zahr
Lawrence Berkeley National Laboratory
University of California, Berkeley
[email protected]

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS94

Tuning Asymptotically Biased Samplers with Diffusion Based Stein Operators

As increasingly sophisticated enhanced sampling methods are developed, a challenge that arises is how to tune the simulation parameters appropriately to achieve good sampler performance. This is particularly pertinent for biased sampling methods, which trade off asymptotic exactness for computational speed. While a reduction in variance due to more rapid sampling can outweigh any bias introduced, the inexactness creates new challenges for hyperparameter selection. The natural metric in which to quantify this discrepancy is the Wasserstein or Kantorovich metric. However, the computational difficulties in computing this quantity have typically dissuaded practitioners. To address this, we introduce a new computable quality measure using Stein operators constructed from diffusions which quantifies the maximum discrepancy between sample and target expectations over a large class of test functions. We will also illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance trade-offs in sampling. This is joint work with Sebastian Vollmer (University of Warwick), Jackson Gorham (Stanford University) and Lester Mackey (Microsoft Research).

Andrew Duncan
University of Sussex
[email protected]

MS94

Irreversible Langevin Samplers, Variance Reduction and MCMC

It is well known in many settings that reversible Langevin diffusions in confining potentials converge to equilibrium exponentially fast. Adding irreversible perturbations to the drift of a Langevin diffusion that maintain the same invariant measure accelerates its convergence to stationarity. When implementing MCMC algorithms using time discretisations of such SDEs, one can append the discretization with the usual Metropolis-Hastings accept-reject step, and this is often done in practice because the accept-reject step eliminates bias. However, such a step makes the resulting chain reversible. It is not known whether adding the accept-reject step preserves the faster mixing properties of the non-reversible dynamics. We address this gap between theory and practice by analyzing the optimal scaling of MCMC algorithms constructed from proposal moves that are time-step Euler discretisations of an irreversible SDE, for high dimensional Gaussian target measures. We call the resulting algorithm the ipMALA. To quantify how the cost of the algorithm scales with the dimension N, we prove invariance principles for the appropriately rescaled chain. In contrast to the usual MALA algorithm, we show that there could be two regimes asymptotically: (i) a diffusive regime, as in the MALA algorithm, and (ii) a 'fluid' regime where the limit is an ODE. We provide examples where the limit is a diffusion, as in the standard MALA, but with provably higher limiting acceptance probabilities.
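
A hedged sketch of a Metropolis-adjusted proposal built from one Euler step of an irreversible Langevin SDE; the notation (S skew-symmetric, gamma the perturbation strength) is illustrative, and the scaling analysis of the talk is not reproduced.

import numpy as np

def ipmala(U, grad_U, S, x0, n_steps, h=0.01, gamma=1.0, rng=None):
    """Metropolized Euler step of dX = -(I + gamma*S) grad U(X) dt + sqrt(2) dW,
    with S skew-symmetric; the accept-reject step removes discretization bias
    (and, as the abstract notes, restores reversibility of the chain)."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(x0)
    drift = lambda x: -(np.eye(d) + gamma * S) @ grad_U(x)

    def log_q(x_to, x_from):      # Gaussian transition density of the Euler proposal
        diff = x_to - (x_from + h * drift(x_from))
        return -np.dot(diff, diff) / (4.0 * h)

    x, chain = np.array(x0, dtype=float), []
    for _ in range(n_steps):
        prop = x + h * drift(x) + np.sqrt(2.0 * h) * rng.standard_normal(d)
        log_alpha = (U(x) - U(prop)) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        chain.append(x.copy())
    return np.array(chain)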

Michele Ottobre
Imperial College London
[email protected]

Konstantinos Spiliopoulos
Boston University
[email protected]

MS94

Noise-robust Metropolis-Hastings Algorithms for Bayesian Inverse Problems

Metropolis-Hastings (MH) algorithms for approximate sampling of target distributions typically suffer either from a high dimensional state space or from a high concentration of the target measure. Concerning the first issue, dimension-independent MH algorithms have been developed and analyzed in recent years, suitable for Bayesian inference in infinite dimensions. However, the second issue has drawn less attention yet, despite its importance for applications with large or highly informative data. In this talk we consider Bayesian inverse problems with a decreasing observational noise and study the performance of MH algorithms for sampling from the increasingly concentrated posterior measure. In a first analysis we show that employing approximations of the posterior covariance can yield a noise-independent performance of the resulting MH algorithm in terms of the expected acceptance rate and the expected squared jump distance. Numerical experiments confirm the theoretical results and indicate further extensions.

Bjorn Sprungk
TU Chemnitz
Department of Mathematics
[email protected]

MS94

Constructing Dimension-independent Particle Filters for High-dimensional Geophysical Problems

Particle filters are approximate solutions to Bayesian inference problems. Common wisdom has been that they suffer from the curse of dimensionality, such that they are inefficient in high-dimensional problems. Even the so-called optimal proposal particle filter is proven to need an exponentially growing number of particles with increasing dimension. We will show that this wisdom is based on a misunderstanding of the actual problem, and construct particle filters in which most or all of the particles have equal weight, curing the curse of dimensionality. The basic idea is to generate proposals that move particles such that they attain a fixed target weight by construction. New filters in this category are derived which minimize the bias present in older versions, providing an important next step in nonlinear high-dimensional Bayesian inference.

Peter Jan van Leeuwen
University of Reading
[email protected]

MS95

Importance Sampling with Stochastic Computer Models

Importance sampling has been used to improve the efficiency of simulations where the simulation output is uniquely determined given a fixed input. We extend the theory of importance sampling to estimate a system's reliability with stochastic simulations. Thanks to advances in computing power, stochastic computer models are employed in many applications to represent complex system behavior. In a stochastic computer model, a simulator generates stochastic outputs at the same input. We develop a new approach that efficiently uses stochastic simulations with unknown output distribution. Specifically, we derive the optimal importance sampling density and allocation procedure that minimize the variance of an estimator. Application to a computationally intensive aeroelastic wind turbine simulation demonstrates the benefits of the proposed approach.
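
For orientation, a plain importance-sampling estimate of a failure probability with a (possibly stochastic) simulator is sketched below; the optimal density and replication allocation derived in the talk are not reproduced, and all function names are placeholders.

import numpy as np

def is_failure_probability(simulator, input_pdf, is_pdf, is_sampler, n, threshold, rng=None):
    """Estimate P(simulator output > threshold) by importance sampling. Each input
    sample is evaluated once here; the talk derives how to allocate repeated
    stochastic evaluations and how to choose the IS density optimally."""
    rng = np.random.default_rng() if rng is None else rng
    x = is_sampler(n, rng)                         # samples from the IS density
    w = input_pdf(x) / is_pdf(x)                   # likelihood ratios
    fail = np.array([simulator(xi, rng) > threshold for xi in x], dtype=float)
    return np.mean(w * fail)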

Eunshin Byon
University of Michigan
[email protected]

MS95

Non-Gaussian Models for Extremes

Most physical quantities exhibit random fluctuations which cannot be captured by Gaussian models, and non-Gaussian models should be used. Conceptually simple and computationally efficient non-Gaussian models can be divided into two types. The first includes models which match target moments beyond the mean and correlation functions. They are not designed to describe extremes. The second class (translation models) contains memoryless transformations of Gaussian processes. They match target marginal distributions exactly and correlation functions approximately. These models may provide unsatisfactory estimates of extremes since they have independent tails, i.e., the joint distribution of (X0(s), X0(t)) can be approximated by the product of the marginal distributions of the translation process X0(t) at times s and t, s ≠ t, for sufficiently large thresholds. We extend the class of current translation models to processes X(t) defined as memoryless transformations of non-Gaussian processes. These models match exactly target marginal distributions and have dependent tails. Moreover, they can be optimized to characterize accurately quantities of interest, e.g., extremes max_{0 ≤ t ≤ τ} ‖X(t)‖ of X(t) over time intervals [0, τ]. We give theoretical arguments and numerical examples to show that one can construct simple and flexible non-Gaussian images of X(t) and that the proposed translation models provide accurate estimates of extremes.

Mircea Grigoriu
Cornell University
[email protected]

MS95

Modified Cross Entropy Based Importance Sampling with a Flexible Mixture Model for Rare Event Estimation

Predicting the probability of a rare event or failure event is of interest in many areas of science and engineering. This probability is defined through a potentially high-dimensional probability integral. Often, the integration domain is only known point-wise in terms of the outcome of a numerical model. In such cases, the probability of failure is usually estimated with Monte Carlo-based sampling approaches, which treat the complex numerical models as a black box input-output relationship. The efficiency of crude Monte Carlo can be improved significantly by importance sampling (IS), provided that an effective IS density is chosen. The cross entropy (CE) method is an adaptive sampling approach that determines the IS density through minimising the Kullback-Leibler divergence between the theoretically optimal IS density and a chosen parametric family of distributions. We present a modified version of the classical CE method that employs a flexible parametric distribution model. We demonstrate the ability of the proposed model to handle low- and high-dimensional problems as well as problems with multimodal failure domains.
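
A sketch of the classical CE iteration with a single Gaussian parametric family (the talk's flexible mixture model would replace the Gaussian updates); the convention failure = {g(x) <= 0} under standard normal inputs is an assumption.

import numpy as np
from scipy import stats

def cross_entropy_is(limit_state, dim, n_per_level=1000, rho=0.1, max_iter=20, rng=None):
    """Adaptive CE importance sampling for P(g(X) <= 0), X ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    nominal = stats.multivariate_normal(np.zeros(dim), np.eye(dim))
    mu, cov = np.zeros(dim), np.eye(dim)
    for _ in range(max_iter):
        x = rng.multivariate_normal(mu, cov, size=n_per_level)
        g = np.array([limit_state(xi) for xi in x])
        gamma = max(np.quantile(g, rho), 0.0)          # adaptive intermediate level
        elite = x[g <= gamma]
        w = nominal.pdf(elite) / stats.multivariate_normal(mu, cov).pdf(elite)
        mu = np.average(elite, axis=0, weights=w)
        cov = np.cov(elite.T, aweights=w) + 1e-10 * np.eye(dim)
        if gamma <= 0.0:                               # reached the failure domain
            break
    x = rng.multivariate_normal(mu, cov, size=n_per_level)
    g = np.array([limit_state(xi) for xi in x])
    w = nominal.pdf(x) / stats.multivariate_normal(mu, cov).pdf(x)
    return np.mean(w * (g <= 0))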

Iason Papaioannou, Sebastian Geyer, Daniel Straub
Engineering Risk Analysis Group
TU Munchen
[email protected], [email protected], [email protected]

MS95

Adaptive Point Selection for Global vs. Local Surrogate Models

Surrogate models are commonly used to provide a quick approximation of expensive functional evaluations, often for purposes of failure estimation or optimization. The accuracy of surrogates remains an issue for these applications. There are many approaches for adaptively selecting the next best point(s) to add for updating the surrogate after a surrogate is constructed over an initial design. Such adaptive schemes involve metrics such as minimax (space-filling) or choosing the point of maximum prediction variance, with the goal of lowering the overall prediction variance. In this talk, we compare some of the common adaptive sampling schemes for global Gaussian process surrogates with adaptive schemes for local piecewise surrogates based on Voronoi piecewise sampling (VPS). VPS decomposes the parameter space into cells using an implicit Voronoi tessellation around sample points as seeds. It has several advantages over global surrogates in terms of scalability and handling of discontinuous responses. We will present comparisons of accuracy, scalability, and cost of global vs. local adaptive sampling approaches and address batch sampling needs.
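
As a point of reference, one step of the simplest global scheme mentioned above (add the candidate of maximum GP prediction variance) might look as follows; the kernel choice and the candidate set are placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def add_max_variance_point(f, X, y, candidates):
    """Refit a GP surrogate on (X, y) and evaluate the expensive function f at the
    candidate point of maximum prediction standard deviation."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]
    return np.vstack([X, x_new]), np.append(y, f(x_new)), gp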

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico USA
[email protected]

Mohamed S. Ebeida, Kathryn Maupin
Sandia National Laboratories
[email protected], [email protected]

Brian M. Adams
Sandia National Laboratories
Optimization/Uncertainty Quantification
[email protected]

MS96

Analysis of Sparse Approximations in Bayesian Filtering

Bayesian filtering algorithms for systems with nonlinear dynamics and high-dimensional states include ensemble Kalman filters (EnKF), particle filters, and variational Bayesian methods. Given constraints on computational resources, it is common to deploy these algorithms with structured approximations, such as sparse or "localized" covariance or precision matrices in EnKF, or localized weights in certain particle filters. These approximations trade variance for bias, with the goal of preserving stability and improving scalability. In this presentation, we will analyze the tradeoffs resulting from these approximations, focusing on the bias resulting from the imposition of sparse Markov structure (i.e., conditional independence assumptions) or marginal independence assumptions on components of the filtering distribution. We will characterize the associated errors and their evolution in time, and demonstrate our theoretical results in the context of model problems for numerical weather prediction, including stochastic turbulence models and chaotic dynamical systems.
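
A minimal sketch of the kind of structured approximation analyzed here: a stochastic EnKF analysis step whose forecast covariance is tapered by a Schur (element-wise) product with a localization matrix. The linear observation operator and the taper are assumptions for illustration only.

import numpy as np

def enkf_update_localized(ensemble, H, y, R, localization, rng=None):
    """ensemble: (n_members, n_state); H: linear observation operator; R: observation
    covariance; localization: (n_state, n_state) taper with entries in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n_members = ensemble.shape[0]
    Xf = ensemble - ensemble.mean(axis=0)
    Pf = localization * (Xf.T @ Xf) / (n_members - 1)       # tapered sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)          # localized Kalman gain
    perturbed_obs = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_members)
    return ensemble + (perturbed_obs - ensemble @ H.T) @ K.T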

Ricardo [email protected]

Youssef M. MarzoukMassachusetts Institute of [email protected]

MS96

Linear Bayesian Inference via Multi-Fidelity Modeling

A significant challenge for Bayesian inference of systems with high-dimensional uncertainty is the calculation of the posterior covariance, as it requires a possibly dense matrix inversion. Recently there have been several works that apply low-rank techniques for posterior covariance approximation to improve upon the associated computational cost. However, in these cases, many high-fidelity model solves are still required, incurring a possibly large computational cost. In this work we consider further cost reduction by introducing the use of a low-fidelity model to form a low-rank, bi-fidelity approximation to the posterior covariance. We will present the formulation of this method as well as numerical examples demonstrating the achieved cost reduction.

Hillary Fairbanks
Department of Applied Mathematics
University of Colorado, Boulder
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

MS96

Parameter Identification with the Parallel Hierarchical Matrix Technique

We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome the cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C and C++ codes and data are freely available.

Alexander Litvinenko
SRI-UQ and ECRC Centers, KAUST
[email protected]

David E. Keyes
KAUST
[email protected]

Marc Genton, Ying Sun
KAUST
Saudi Arabia
[email protected], [email protected]

MS96

Compressed Sparse Tensor Based Approximation for Vibrational Quantum Mechanics Integrals

We propose a method that takes advantage of the low rank structure of high dimensional potential energy surfaces in quantum chemistry to find their approximation with very few point evaluations. The low rank approximation is achieved by a synthesis of methods from compressive sensing and low rank canonical tensor decomposition using stochastic gradient descent. This approach is applied to high dimensional integration problems in a quantum chemistry formulation and shows a reduction in computation cost by orders of magnitude as compared to the state of the art.

Prashant Rai
Sandia National Laboratories
Livermore, CA, US
[email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, US
[email protected]

MS97

Scalable Approximation of PDE-Constrained Optimization under Uncertainty: Application to Turbulent Jet Flow

In this talk, we present a scalable method based on Taylor approximation for PDE-constrained optimal control problems under high-dimensional uncertainty. The computational complexity of the method does not depend on the nominal but only on the intrinsic dimension of the uncertain parameter; thus the curse of dimensionality is broken for intrinsically low-dimensional problems. Further, a Monte Carlo correction for the Taylor approximation is proposed, which leads to an unbiased evaluation of the statistical moments in the objective function and achieves a reduction of variance by several orders of magnitude compared to a Monte Carlo approximation. We apply our method to a turbulence model with infinite-dimensional random viscosity and demonstrate the scalability up to 1 million parameter dimensions.
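
The unbiased Monte Carlo correction can be read as a control-variate estimator, sketched below under the assumption that the mean of the Taylor approximation is available cheaply; function names are placeholders rather than the authors' implementation.

import numpy as np

def taylor_corrected_mean(Q, Q_taylor, taylor_mean, samples):
    """E[Q] = E[Q_taylor] + E[Q - Q_taylor]: taylor_mean is computed analytically or
    cheaply, and the correction term by Monte Carlo over relatively few samples,
    whose variance is small when the Taylor surrogate is accurate."""
    corrections = np.array([Q(m) - Q_taylor(m) for m in samples])
    std_err = corrections.std(ddof=1) / np.sqrt(len(samples))
    return taylor_mean + corrections.mean(), std_err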

Peng Chen
UT Austin
[email protected]

Umberto Villa
University of Texas at Austin
[email protected]

Omar Ghattas
The University of Texas at Austin
[email protected]

MS97

The Role of Variational Multiscale Method in Uncertainty Quantification

The variational multiscale method (VMS) has been extensively used for developing subgrid-scale models that account for the effect of the unresolved scales on the resolved scales in deterministic systems. These models have been used to perform the accurate simulation of such problems on relatively coarse discretizations. In contrast to this, the role of VMS in systems with stochastic parameters has not been fully explored, and is the topic of this talk. In particular, we describe how the VMS method can be used to develop subgrid models for PDEs with stochastic parameters in both intrusive and non-intrusive settings. We also describe how it may be used as an error indicator to drive adaptive discretization in the joint physical-stochastic domains.

Jason Li
Department of Mechanical, Aerospace and Nuclear Engineering
Rensselaer Polytechnic Institute
[email protected]

Onkar Sahni
Rensselaer Polytechnic Institute
[email protected]

Assad Oberai
Department of Mechanical, Aerospace and Nuclear Engineering
Rensselaer Polytechnic Institute
[email protected]

MS97

Smoothing Techniques for PDE-Constrained Optimization under Uncertainty

In many areas of the natural sciences and engineering, the underlying phenomena are readily modeled by partial differential equations (PDEs) with random inputs. Once the uncertainty has been propagated through the PDE, the resulting random field PDE solution transforms the usual objective function into a random variable. One possibility to handle this uncertainty in the objective or quantity of interest is to employ risk measures. However, many important risk measures, such as the conditional value-at-risk (CVaR) or mean plus semideviation, are non-smooth. The resulting optimization problem is then an often non-convex, non-smooth infinite dimensional problem. Due to the poor performance of off-the-shelf non-smooth solvers for these problems, we propose a variational smoothing technique, which we call epi-regularization. Using epi-regularization, we exploit existing methods for smooth PDE-constrained optimization under uncertainty. The technique also allows us to prove convergence of minimizers and stationary points. Under local curvature conditions on the reduced objective, we can provide error estimates. We explore the theoretical results by several numerical experiments.
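
As a small illustration of smoothing a non-smooth risk measure (a simple stand-in for, not the definition of, epi-regularization), the sample CVaR in the Rockafellar-Uryasev form can be smoothed by replacing the plus-function with a softplus:

import numpy as np
from scipy.optimize import minimize

def smoothed_cvar(losses, alpha=0.95, eps=1e-2):
    """Sample-average CVaR_alpha = min_t { t + E[(L - t)_+] / (1 - alpha) }, with the
    plus-function replaced by the smooth softplus eps*log(1 + exp((L - t)/eps))."""
    losses = np.asarray(losses, dtype=float)

    def objective(t):
        softplus = eps * np.logaddexp(0.0, (losses - t) / eps)   # smooth (x)_+
        return t + softplus.mean() / (1.0 - alpha)

    res = minimize(lambda t: objective(t[0]), x0=[np.quantile(losses, alpha)])
    return objective(res.x[0])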

Thomas M. Surowiec
Department of Mathematics and Computer Science
Philipps University of Marburg
[email protected]

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS97

Multiscale Optimization and UQ for Additive Manufacturing

Additive manufacturing (AM) is capable of creating compelling designs at a much faster rate than standard methods. However, achieving consistent material properties at various length scales remains a challenge. In this work, we present risk-averse optimization to produce robust designs by tightly controlling certain features of the dynamics while addressing underlying uncertainties. Aspects of the AM process are emulated with PDEs to mimic the behavior of the material during processing. The goal is to control different aspects of the laser to achieve design targets and simultaneously accommodate uncertainties in different parts of the underlying dynamics. PDE-constrained optimization methods serve as the foundation for this work, with finite element discretizations, adjoint-based sensitivities, trust-region methods, and Newton-Krylov solvers. Our final AM-produced parts must achieve tight tolerances for a range of different material properties. Accordingly, a range of risk measures is considered, with a specific emphasis on reliability. A numerical example coupled to experimental data demonstrates that the use of risk measures as part of the objective function results in optimal solutions and ensures that worst-case scenarios are avoided.

Bart G. Van Bloemen Waanders, Timothy Wildey, Daniel T. Seidl
Sandia National Laboratories
[email protected], [email protected], [email protected]

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico USA
[email protected]

MS98

A Bayesian Framework for Robust Decisions in the Presence of Unobserved Heterogeneity

Measures of decision risk based on the distribution of losses over a posterior distribution obtained via standard Bayesian techniques may not be robust to model misspecification. Standard statistical models assume homogeneity: all individual observables are subject to the same latent variables. Models with covariates relax this assumption by introducing observed sources of heterogeneity. Here we consider model misspecification due to unobserved sources of heterogeneity that are not readily captured by known covariates. For example, many predictive physical models contain conservation laws that are globally consistent, but those physical models may be misspecified due to embedded phenomenological models that are only locally consistent within a narrow range of parameters. Instead of fitting a misspecified model to all observations in a data set, we propose fitting the model on multiple subsets of the data where the model has little realized discrepancy (that is, where the homogeneity assumption holds) in order to obtain multiple posterior distributions. By constructing a probability measure over subsets of the data, we can define a distribution over posterior distributions, from which we can obtain a distribution of risk measures. This distribution of risk measures allows us to make robust decisions, e.g., by finding an action that minimizes an average expected loss. We will demonstrate these methods on engineering applications of interest.

Chi Feng, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS98

Bayesian Analysis of Boundary Data in EIT: Discrete vs Continuous

Electrical Impedance Tomography (EIT) is an imaging modality in which the goal is to estimate the conductivity distribution inside a body from current/voltage measurements on the boundary of the body. A wealth of mathematical theory on EIT, as well as some direct computational methods, are based on the presumed knowledge of the Dirichlet-to-Neumann map at the boundary, while in reality the data is collected using a fixed set of contact electrodes. In this talk, the mathematical correspondence between the discrete and continuous data is explored, and a computational method of passing from the discrete to continuous data, in the presence of sample-based prior information, is described.

Sumanth Reddy NakkiReddy
Department of Mathematics
Case Western Reserve University
[email protected]

Daniela Calvetti
Case Western Reserve Univ
Department of Mathematics, Applied Mathematics and Statistics
[email protected]

MS98

Use of the Bayesian Approximation Error Approach to Account for Model Discrepancy: The Robin Problem Revisited

We address the problem of accounting for model discrepancy by the use of the Bayesian approximation error (BAE) approach in the context of inverse problems. In many inverse problems, when one wishes to infer some primary parameter of interest there are other secondary parameters which are also uncertain. In the standard Bayesian (or deterministic) approach, such nuisance parameters are either inverted for or are ignored (perhaps by assignment of some nominal value). However, it is well understood that the ill-posedness of general inverse problems means they do not handle modelling errors well. The BAE approach has been developed as an efficient means to approximately pre-marginalize over nuisance parameters so that one can systematically incorporate the effects of neglecting these secondary parameters at the modelling stage. We motivate the method through an application to the Robin problem governed by the Poisson equation.
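
A minimal sketch of the pre-marginalization step, assuming the approximation error is modeled as Gaussian and independent of the primary parameter; model names are placeholders and this is not the talk's specific formulation.

import numpy as np

def bae_statistics(accurate_model, reduced_model, prior_samples, noise_cov):
    """Estimate eps_i = F_accurate(x_i) - F_reduced(x_i) over prior samples and fold
    its mean and covariance into the effective noise model: noise_cov + cov_eps."""
    eps = np.array([accurate_model(x) - reduced_model(x) for x in prior_samples])
    return eps.mean(axis=0), noise_cov + np.cov(eps.T)

def bae_log_likelihood(y, reduced_model, x, mean_eps, total_cov):
    """Gaussian likelihood of data y using the reduced model plus BAE statistics."""
    resid = y - reduced_model(x) - mean_eps
    return -0.5 * resid @ np.linalg.solve(total_cov, resid)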

Ruanui Nicholson
Applied Mathematics
University of California, Merced
[email protected]

MS98

Analysis of Inadequacy in Simplified Models of Supercapacitor Charge/discharge Cycles

It is common in modeling engineering systems to develop reduced models by starting from a higher fidelity model and introducing simplifying assumptions. In making predictions using the resulting lower fidelity model, it is generally important to represent the effects of these simplifying assumptions (i.e., model inadequacy). Further, the relationship between the high and low fidelity models should be exploited to do so in a manner that is both computationally efficient and physically realistic. In this work, we explore these issues in a simple context: models of supercapacitor charge/discharge cycles. The high fidelity model is PDE based, while the low fidelity model is an ODE based on the PDE behavior in the limit of slow variation of the input current. A stochastic ODE representing the low fidelity model inadequacy is constructed. This model is based upon two realizations about the original low fidelity model. First, the model does not depend correctly on the time history of the system. Second, the model cannot represent the short time behavior induced by rapid changes in current. The construction of an inadequacy model based on these known deficiencies is discussed, and the performance of the model relative to the high fidelity model is shown.

Todd A. Oliver
PECOS/ICES, The University of Texas at Austin
[email protected]

Danial Faghihi
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Robert D. Moser
University of Texas at Austin
[email protected]

MS99

UQ and Parameter Estimation for Coastal Ocean Hazard Modeling

Coastal ocean hazards, including storm surges and tsunamis, affect millions of people worldwide. These hazards may be increasing due to a number of factors, including sea level rise, warming oceans, population growth, and climate variability. In this talk, we will give an overview of coastal hazard modeling, the current state of the art, and needs for the future. We will also discuss many of the uncertainties that are encountered in predictive simulation. We will describe a measure theoretic approach used to calculate uncertain parameters within a stochastic inverse framework. The method will be applied to estimating coastal inundation due to storm surge.

Clint Dawson
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Troy Butler
University of Colorado Denver
[email protected]

Don Estep
Colorado State University
[email protected]

Joannes Westerink
Dept. of Civil & Environmental Engineering & Earth Sciences
University of Notre Dame
[email protected]

Lindley C. Graham
Florida State University
Department of Scientific Computing
[email protected]

MS99

Bayesian Inference of Earthquake Parameters for the Chile 2010 Event using Polynomial Chaos-based Surrogate and Buoy Data

We are interested in the Bayesian inference of earthquake parameters associated with the Chile 2010 event. The goal is to infer the location, orientation, as well as the slip parameter of an Okada model of the fault from real data collected at a single DART station. Several challenges are addressed in this work concerning the development of a statistical model, a surrogate model, and its goal-oriented reduction that are robust with respect to the limitations of the numerical model and the data. First, a non-stationary correlated Gaussian noise is introduced to model the discrepancy between the data and the numerical model. The posterior distribution is then sampled using an adaptive Metropolis-Hastings algorithm. A surrogate model is constructed to accelerate the inference using two polynomial chaos expansions computed by non-intrusive spectral projection. We first approximate the arrival time of the waves at the station and define a fictitious time such that, for every parameter value, the wave reaches the gauge at a prescribed instant. In this fictitious time, the water elevation with respect to the parameter is regularized, and a polynomial chaos expansion can be used. Finally, a singular value decomposition based on the Mahalanobis distance is introduced to reduce the model while preserving the posterior distribution of the inferred parameter. Numerical results are presented and discussed.

Loïc Giraldi
KAUST, Kingdom of Saudi Arabia
[email protected]

MS99

Earthquake Source Dimension Reduction with Gaussian Process Emulation: Quantification of Tsunami Hazard

Tsunami hazard assessment requires the propagation of tsunami source uncertainties to the distributions of relevant quantities in the affected areas. The physics is captured by a coupled two-stage deterministic model: the tsunamigenic seabed deformation caused by the earthquake source, and subsequent propagation of the tsunami to affected coasts. The corresponding numerical codes are computationally expensive. We overcome this by building statistical surrogates (or emulators) of the models that provide faster and cheaper approximations. These Gaussian process emulators are then used to propagate the uncertainties in the source to the wave heights at the coasts. The uncertainties in the source (slips) are modeled using a geo-statistical approach whilst encapsulating regional geophysical knowledge. The high dimensionality of the source presents a significant hurdle in the emulator construction. Thus, we merge the emulator construction with a gradient-based kernel dimension reduction to decrease dimensionality while also reducing the loss in information. We illustrate the framework using the case of the potential tsunamigenic zone in the Makran region.

Devaraj Gopinathan
Department of Statistical Science
University College London
[email protected]

Serge Guillas
University College London
[email protected]

MS99

Probabilistic Tsunami Hazard Assessments with Consideration of Uncertain Earthquake Characteristics

The uncertainty quantification of tsunami assessments due to uncertain earthquakes faces important challenges. First, the generated earthquake samples must be consistent with past events. Second, it must adopt an uncertainty propagation method to determine tsunami uncertainties with a low computational cost. In this study we propose a new methodology which improves the existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics, the slip and location. First, the methodology considers the generation of consistent earthquake slip samples by means of a Karhunen-Loeve expansion and a translation model, applicable to any non-rectangular rupture and marginal distribution. Furthermore, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments, such as earthquake slip scaling. Second, the methodology uses the stochastic reduced order model SROM (Grigoriu, 2009) instead of a Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to real study cases. We demonstrate that our stochastic approach generates consistent earthquake samples with respect to the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations.
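
A hedged sketch of the slip-sample generation step only (truncated Karhunen-Loeve expansion plus an optional marginal translation); the covariance matrix, mean slip, and transform are placeholders, and the SROM-based propagation is not shown.

import numpy as np

def kl_slip_samples(cov_matrix, mean_slip, n_modes, n_samples, transform=None, rng=None):
    """Draw correlated slip fields from a truncated KL expansion of the discretized
    slip covariance; `transform` (if given) maps the Gaussian marginals to a target
    marginal distribution, as in a translation model."""
    rng = np.random.default_rng() if rng is None else rng
    eigvals, eigvecs = np.linalg.eigh(cov_matrix)
    idx = np.argsort(eigvals)[::-1][:n_modes]                 # leading KL modes
    L = eigvecs[:, idx] * np.sqrt(np.clip(eigvals[idx], 0.0, None))
    xi = rng.standard_normal((n_samples, n_modes))
    gaussian_fields = mean_slip + xi @ L.T
    return transform(gaussian_fields) if transform else gaussian_fields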

Ignacio Sepulveda
School of Civil & Environmental Engineering
Cornell University
[email protected]

Philip L.-F. Liu
National University of Singapore
[email protected]

Mircea Grigoriu, Matthew Pritchard
Cornell University
[email protected], [email protected]

MS100

Identification of Primary Flow Regions Through Three-dimensional Discrete Fracture Networks using Supervised Classification and Graph-based Representations

Discrete fracture network (DFN) models are designed to simulate flow and transport in fractured porous media. Flow and transport calculations reveal that a small backbone of fractures exists where most flow and transport occurs. Restricting the flowing fracture network to this backbone provides a significant reduction in the network's effective size. However, the particle tracking simulations needed to determine the reduction are computationally intensive. Such methods are impractical for large systems or for robust uncertainty quantification of fracture networks, where thousands of forward simulations are needed to bound system behavior. We present a graph-based method to identify primary flow and transport subnetworks in three-dimensional fracture networks. Coupled with supervised machine learning techniques, the method efficiently identifies these backbones while also revealing key characteristics of flow and transport through fractured media.

Jeffrey Hyman
Los Alamos National Laboratory
Earth and Environmental Sciences Division
[email protected]

Aric Hagberg
Los Alamos National Laboratory
[email protected]

Manuel Valera
San Diego State University
[email protected]

Allon Percus
Claremont Graduate University
[email protected]

Hari Viswanathan
Los Alamos National Laboratory
Earth and Environmental Sciences Division
[email protected]

Gowri Srinivasan
T-5, Theoretical Division, Los Alamos National Laboratory
[email protected]

MS100

Deep Residual Recurrent Neural Network for Model Reduction

Recently, proper orthogonal decomposition (POD) combined with Galerkin projection has been applied more frequently to overcome the computational burden of multi-query tasks (e.g. uncertainty quantification, model based optimization) governed by large scale nonlinear partial differential equations (PDEs). However, the effectiveness of POD-Galerkin based reduced order models (ROMs) in handling nonlinear PDEs is limited mainly by two factors. The first factor is related to the treatment of the nonlinear terms in the POD-Galerkin ROM, and the second factor is related to maintaining the overall stability of the resulting ROM. In this talk, we combine a deep residual recurrent neural network (DR-RNN) with POD-Galerkin and the discrete empirical interpolation method (DEIM). DR-RNN is a physics aware recurrent neural network for modeling the evolution of dynamical systems. The DR-RNN architecture is inspired by iterative update techniques of line search methods where a fixed number of layers are stacked together to minimize the residual of the physical model under consideration. We demonstrate the accuracy and stability property of DR-RNN on two forward uncertainty quantification problems involving two-phase flow in subsurface porous media. We show that DR-RNN provides accurate and stable approximations of the full-order model in comparison to standard POD-Galerkin ROMs.

J. Nagoor Kani
Heriot-Watt University, UK
[email protected]

Ahmed H. ElSheikh
Institute of Petroleum Engineering
Heriot-Watt University, Edinburgh, UK
[email protected]

MS100

Deep Learning and Dynamic Mode Decomposition for Modeling the Dynamics of Oil & Gas Problems

In this work, we introduce and investigate the performance of Convolutional and Recurrent Neural Networks (CNNs and RNNs) and the recently proposed Dynamic Mode Decomposition (DMD) for modeling complex dynamical systems. The application of any of these methods as a viable alternative for producing data-driven models that capture the dynamics of complex and diverse Oil & Gas problems is essentially new. CNNs and RNNs represent two of the most successful Deep Learning (DL) architectures for generating accurate predictions from image and time series data representations, respectively. However, DL methodologies are still unclear alternatives for producing business value propositions in the Oil & Gas industry. On the other hand, DMD appears as a data-driven model reduction approach that assumes that complex patterns can be decomposed into a simple representation based on spatiotemporal coherent structures amenable to producing future states and observable responses. Analysis of DL and DMD and their comparison provides elements for enhancing capabilities not only for prediction but also for optimization and control of complex dynamical systems. By bringing up a few use cases, we show that both DL and DMD approaches have the potential to provide faster and reliable alternatives to overcome the high computational cost entailed by multiple processes in the industry.
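
For concreteness, the basic (exact) DMD computation on a snapshot matrix is sketched below; this is the textbook algorithm, not the authors' specific workflow or data.

import numpy as np

def dmd(snapshots, rank):
    """Exact DMD of a snapshot matrix whose columns are successive states.
    Returns DMD eigenvalues and modes; future states are reconstructed by expanding
    the initial condition in the modes and evolving the eigenvalues in time."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
    return eigvals, modes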

Hector Klie
ConocoPhillips Company
[email protected]

MS101

A Robust Stochastic Galerkin Method for the Compressible Euler Equations with Uncertainty

It is known that the stochastic Galerkin method applied to hyperbolic systems such as the compressible Euler equations subject to random inputs may lead to an enlarged system which is not necessarily hyperbolic. In addition, such a method usually relies on the positivity of some macroscopic quantities (e.g. sound speed), which may break down when the solution presents severe discontinuities. We introduce a stochastic Galerkin method for the compressible Euler equations based on a kinetic formulation. The method solves the Boltzmann equation efficiently for a large range of Knudsen numbers and reduces to an approximated (regularized) solver for the Euler equations when the Knudsen number is small. Furthermore, the method does not need to evaluate any macroscopic quantities nor require their values to be positive, hence it is especially suited for problems involving discontinuities. Joint work with Shi Jin and Ruiwen Shu.

Jingwei Hu
Department of Mathematics, Purdue University
[email protected]

MS101

Analysis and Application of Stochastic Collocation Method for Maxwell's Equations with Random Coefficients

In this talk, we will present the development and analysis of the stochastic collocation method for solving the time-dependent Maxwell's equations with random coefficients and subject to random initial conditions. We provide a rigorous regularity analysis of the solution with respect to the random variable. To the best of our knowledge, these are the first theoretical results derived for the stochastic Maxwell's equations. The rate of convergence is proved, depending on the regularity of the solution. Numerical results are presented to confirm the theoretical analysis. Extensions of this analysis and applications to Maxwell's equations in random Drude metamaterials will be discussed too.

Jichun Li
Department of Mathematical Sciences
University of Nevada, Las Vegas, NV
[email protected]

Zhiwei Fang
University of Nevada Las Vegas
[email protected]

MS101

Inferring Biological Networks via Information Theoretic Approaches

Partial correlation (PC) or conditional mutual information (CMI) is widely used in detecting direct dependencies between the observed variables in a network by eliminating indirect correlations/associations, but it fails whenever there are some strong correlations in the network, due to the inherent underestimation problem of PC/CMI. We theoretically develop a multiscale association analysis to overcome this difficulty. Specifically, we first show why conditional mutual information (CMI)/partial correlation (PC) suffers from an underestimation problem for biological networks with strong correlations. Then, we propose a new measure, partial association (PA), based on the multiscale conditional mutual information (MCMI), to solve this problem. We show that linear PA and nonlinear PA can quantify the direct associations in biological networks in an accurate manner from both theoretical and computational aspects. Both simulated models and real datasets demonstrate that PA is superior to PC and CMI in terms of efficiency and effectiveness, and is a powerful tool to identify the direct associations or reconstruct the biological network based on the observed omics data.

Tiejun Li
School of Mathematical Sciences
Peking University
[email protected]

MS101

Stochastic Methods for the Design of Random Meta-materials under Geometric Constraints

Nanostructured materials such as a meta-surface for solar cell applications provide new opportunities for efficient light trapping. We present a computational stochastic methodology to generate random configurations of meta-materials with optimized properties. The main components of the methodology are: (1) a deterministic solver for electromagnetic scattering from multiple objects; (2) the Karhunen-Loeve expansion to represent the correlated configurations of the scattering objects; (3) the multi-element probabilistic collocation method (MEPCM) to have flexibility in adopting different probability distributions for the configurations. MEPCM is used in combination with the adaptive ANOVA decomposition and sparse grids to investigate the sensitivity of the transmission and reflection coefficients to different parameters and deal with dimensionality challenges. To characterize the randomness of the configurations we employ random fields (RFs) from a Spartan family that includes covariance functions with a damped oscillatory behavior. We study light propagation through randomly spaced heterojunctions and multi-dimensional structures. We found that greater transmission and reflection, compared to the uniformly spaced structures, can be achieved for a structure with an oscillatory spacing profile. Optimized configurations for different wave numbers and angles of the incoming wave, as well as different parameters of the RFs, are found.

Ivi C. Tsantili
National Technical University of Athens, Greece and Beijing Computational Science Research Center (current)
[email protected], [email protected]

Min Hyung Cho
Department of Mathematical Sciences
University of Massachusetts Lowell
[email protected]

Wei Cai
Southern Methodist University
[email protected]

George Em Karniadakis
Brown University
[email protected]

MS102

URANIE: The Uncertainty and Optimization Platform

URANIE, developed by CEA, is a framework to deal with uncertainty propagation, sensitivity analysis, surrogate models, code calibration and design optimization. URANIE is a free and open-source multi-platform tool (Linux and Windows) based on the data analysis framework ROOT, an object-oriented and petaflopic computing system developed by CERN. This framework provides all the functionalities needed to deal with big data processing, statistical analysis, visualisation and storage. It is mainly written in C++ but is integrated with other languages such as Python and R.

Fabrice [email protected]

Gilles ArnaudCEA Saclay, [email protected]

Jean-Baptiste Blanchard, Jean-Marc [email protected], [email protected]

MS102

Mystic: Rigorous Model Certification and Engineering Design under Uncertainty

Highly-constrained, large-dimensional, and nonlinear optimizations are at the root of many difficult problems in uncertainty quantification (UQ), risk, operations research, and other predictive sciences. The mystic software was built to solve large non-convex optimization problems with nonlinear constraints. Mystic leverages parallel computing to provide fast global optimization, interpolation, and surrogate construction over difficult nonlinear surfaces, and additionally provides a general mechanism for the direct application of constraining information as reductions to the optimizer search space. With these tools, mystic enables the solving of difficult UQ and engineering design problems as global optimizations.

Michael McKerns
California Institute of Technology
[email protected]

MS102

Cossan Software: Recent Advancements and Case Studies

Cossan is a general purpose software package to quantify, mitigate and manage risk and uncertainty. The package offers the most advanced and recent algorithms for Simulation-based Reliability Analysis, Sensitivity Analysis, Meta-Modelling, Stochastic Finite Elements Analysis (SFEM), and Reliability-Based Optimization (RBO). Including full interaction with any third-party solvers, and combination of available algorithms with specific solution sequences, Cossan provides the framework to tackle the challenges of both academia and industry. We will provide updates on the most recent additions to the Cossan libraries with some examples: the Robust Artificial Neural Network library, to improve the prediction robustness of an ANN based on coupling a Bayesian framework with model averaging techniques; the Interval Predictor Models (IPMs) toolbox, where IPMs are a class of meta-model which describe the expected spread of the output of a model whilst making very few assumptions about the model; the Credal Network Toolbox, for Bayesian Belief Network construction, a probabilistic graphical model based on the use of directed acyclic graphs, integrating graph theory with the robustness of Bayesian statistics; and finally the System Reliability Toolbox, allowing users to construct multi-component systems within the Cossan environment and investigate the behaviour of such systems. http://www.cossan.co.uk.

Edoardo Patelli
University of Liverpool
Institute for Risk and Uncertainty
[email protected]

Dominic CallejaInstitute for Risk and UncertaintyUniversity of [email protected]

MS102

Foqus-PSUADE: A Framework for Uncertainty Quantification and Optimization

PSUADE (Problem Solving environment for Uncertainty Analysis and Design Exploration) is a general-purpose software tool for non-intrusive ('black box' simulation models) uncertainty quantification. This talk will begin with a brief overview of PSUADE's core capabilities, followed by a discussion of recent enhancements as well as future development efforts. Practical development issues such as user interface development will also be addressed.

Charles TongLawrence Livermore National [email protected]

MS103

Long Term Integration of Burgers Equation with Rough Noise

We propose a restarted stochastic collocation method for the Burgers equation driven by white noise. Standard stochastic collocation methods suffer from the curse of dimensionality. To reduce the dimensionality in random space, we apply independent component analysis to reconstruct the computed solutions every few time steps. After such a reconstruction, we then march in time, compute solutions with appropriate stochastic collocation methods, and repeat the aforementioned procedure until the desired integration time is reached. Numerical results and some basic analysis of the proposed methodology will be presented.
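
A minimal sketch (Python with NumPy and scikit-learn's FastICA; the ensemble array and component count are illustrative, and this is not necessarily the authors' exact restart rule) of how an ensemble of collocation solutions can be compressed onto a few independent components before the time integration is restarted:

    import numpy as np
    from sklearn.decomposition import FastICA

    def restart_with_ica(solution_ensemble, n_components=4):
        """Project an ensemble of solution snapshots (one row per stochastic sample)
        onto a few independent components, then reconstruct the ensemble."""
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(solution_ensemble)   # reduced random coordinates
        reconstructed = ica.inverse_transform(sources)   # low-dimensional reconstruction
        return sources, reconstructed

    # Every few time steps: compress, rebuild the collocation grid over the reduced
    # coordinates, and continue marching in time from the reconstructed ensemble.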

Yuchen DongWorcester Polytechnic Institute

[email protected]

MS103

Mixed Finite Element Methods for the Stochastic Cahn-Hilliard Equation with Gradient-type Multiplicative Noises

In this talk, we will study finite element approximations for the stochastic Cahn-Hilliard equation with gradient-type multiplicative noise that is white in time and correlated in space. As the diffuse-interface thickness vanishes, the sharp-interface limit of the stochastic equation formally approximates a stochastic Hele-Shaw flow, which is described by a stochastically perturbed geometric law of the deterministic Hele-Shaw flow. Both the stochastic Cahn-Hilliard equation and the stochastic Hele-Shaw flow arise in materials science, fluid mechanics and cell biology applications. Two fully discrete finite element methods, based on different time-stepping strategies for the nonlinear term, are proposed. Strong convergence with sharp rates is proved for both fully discrete finite element methods. This is done with the crucial help of the Hölder continuity in time, with respect to the spatial L2-norm and H1-seminorm, of the strong solution of the stochastic Cahn-Hilliard equation, established in key technical lemmas. It is proven that the error estimates hold on a set which is close enough to the whole domain. Numerical experiments are provided to gauge the performance of the proposed fully discrete finite element methods and to study the interplay of the geometric evolution and the gradient-type noise.

Xiaobing H. FengThe University of [email protected]

Yukun LiThe Ohio State UniversityDepartment of [email protected]

Yi ZhangUniversity of Notre [email protected]

MS103

Reduced Order Models for Uncertainty Quantification of Time-dependent Problems

In many time-dependent problems of practical interest, the required initial conditions can exhibit uncertainty. One way to address how this uncertainty impacts the solution is to expand the solution using polynomial chaos expansions and obtain a system of differential equations for the evolution of the expansion coefficients. We present an application of the Mori-Zwanzig formalism to the problem of constructing reduced models of such systems of differential equations. In particular, we construct reduced models for the subset of the polynomial chaos expansion coefficients that are needed for a full description of the uncertainty. We show with several examples which computational issues arise from the application of the formalism and how they can be addressed.

Panos StinisPacific Northwest National [email protected]

Jing Li
Pacific Northwest National Laboratory
[email protected]

MS104

Replication or Exploration? Sequential Design for Stochastic Simulation Experiments

We investigate the merits of replication, and provide methods that search for optimal designs (including replicates), in the context of noisy computer simulation experiments. We first show that replication offers the potential to be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.

Robert GramacyVirginia [email protected]

MS104

Interleaved Lattice-based Minimax Distance Designs

We propose a new method for constructing minimax distance designs, which are useful for computer experiments. To circumvent computational difficulties, we consider designs with an interleaved lattice structure, a newly defined class of lattice that has repeated or alternated layers based on any single dimension. Such designs have boundary-adaptation and low-thickness properties. From our numerical results, the proposed designs are by far the best minimax distance designs for moderate or large samples.

Xu HeChinese Academy of [email protected]

MS104

Leverage Values of Gaussian Process Regression and Sequential Sampling

In this work, we define the hat matrix and the leverage values for Gaussian process regression models. We discover some interesting properties of these quantities that resemble the hat matrix and leverage values of the linear regression model. We also generalize the original definition of Cook's distance to GP regression, which can be used to search for outliers. The second part of the paper develops a sequential sampling procedure based on the leverage values, and we compare it with other alternatives. Examples on simulated and real data illustrate the performance of the proposed methods.
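
A minimal sketch (Python/NumPy; a zero-mean GP with a squared-exponential kernel and fixed, illustrative hyperparameters) of one natural definition of the GP hat matrix, whose diagonal gives the leverage values:

    import numpy as np

    def gp_hat_matrix(X, lengthscale=1.0, signal_var=1.0, noise_var=1e-2):
        """Smoother matrix H with posterior mean at the training inputs y_hat = H @ y;
        diag(H) are the leverage values of the GP regression."""
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = signal_var * np.exp(-0.5 * d2 / lengthscale**2)
        return K @ np.linalg.inv(K + noise_var * np.eye(len(X)))

    X = np.random.default_rng(0).uniform(size=(20, 2))
    leverage = np.diag(gp_hat_matrix(X))   # large values flag influential design points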

Lulu KangIllinois Institute of [email protected]

MS104

Robust Designs for Gaussian Process Modeling of Computer Experiments

We introduce in this presentation two new classes of experimental designs, called support points and projected support points, which can provide robust and effective emulation of computer experiments under Gaussian process modeling. These designs have three important properties which are appealing for modeling computer experiments. First, the proposed designs are robust, in that they enjoy excellent emulation performance over a wide class of smooth and rugged response surfaces. Second, these designs can be shown to address several shortcomings of existing computer experiment designs, both in theory and in practice. Lastly, both support points and projected support points can be efficiently generated for large designs in high dimensions. In this presentation, we first present a theoretical framework illustrating the above properties, then demonstrate the effectiveness of the proposed designs over state-of-the-art methods in simulations and real-world computer experiments.

Simon MakGeorgia Institute of [email protected]

MS105

Optimal Information Acquisition Algorithms for Inferring the Order of Sensitivity Indices

Numerous engineering problems are characterized by expensive simulations/experiments as objective functions, where the goal is to identify regions of the design space which satisfy a set of criteria defined at the outset (which includes optimization cases). In many engineering applications, it is of key interest to understand which design variables drive changes in the objectives and to compute their relative order of importance. This question is answered via a combination of data-driven modeling and global sensitivity analysis, where so-called sensitivity indices are computed. Towards this, Bayesian global sensitivity analysis (BGSA) constructs a computationally cheap probabilistic surrogate of the expensive objective function(s) using Gaussian process (GP) regression. The surrogate provides approximate samples of the underlying function. We propose an algorithm that evaluates the merit of a hypothetical measurement towards the segregation of the individual sensitivity indices. This framework guides the designer towards evaluating the objective function so as to acquire information about the sensitivities sequentially. We verify and validate the proposed methodology by applying it to synthetic test problems. We then demonstrate our approach on a real-world industrial engineering problem of optimizing a compressor for oil applications. The problem is characterized by an expensive objective (lab tests, each taking 1-2 days) and a high-dimensional input space with 30 input variables.
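
A minimal sketch (Python/NumPy; `surrogate_draw` is a hypothetical callable returning one posterior realization of the GP surrogate evaluated at an array of inputs) of the Saltelli-style pick-freeze estimator whose repeated evaluation over surrogate draws yields a distribution over the sensitivity indices, and hence over their relative order:

    import numpy as np

    def first_order_sobol(f, dim, n=10_000, rng=None):
        """Pick-freeze estimates of first-order Sobol indices of f on [0, 1]^dim."""
        rng = np.random.default_rng() if rng is None else rng
        A, B = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                        # replace only the i-th column
            S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli (2010) estimator
        return S

    # indices = [first_order_sobol(surrogate_draw, dim=30) for _ in range(100)]
    # gives 100 sampled index vectors, from which the probability of each ordering follows.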

Piyush PanditaPurdue [email protected]

Jesper KristensenCornell [email protected]

Ilias Bilionis
Purdue University
[email protected]

MS105

A Spectral Approach for the Design of Experiments: Design, Analysis and Algorithms

This talk discusses a new approach to construct high-quality space-filling sample designs. First, we propose a novel technique to quantify the space-filling property and optimally trade off uniformity and randomness in sample designs in arbitrary dimensions. Second, we connect the proposed metric (defined in the spatial domain) to the objective measure of the design performance (defined in the spectral domain). This connection serves as an analytic framework for evaluating the qualitative properties of space-filling designs in general. Using the theoretical insights provided by this spatial-spectral analysis, we derive the notion of optimal space-filling designs, which we refer to as space-filling spectral designs. Third, we propose an efficient estimator to evaluate the space-filling properties of sample designs in arbitrary dimensions and use it to develop an optimization framework to generate high-quality space-filling designs. Finally, we carry out a detailed performance comparison on two different applications in 2 to 6 dimensions: (a) image reconstruction and (b) surrogate modeling on several benchmark optimization functions and an inertial confinement fusion (ICF) simulation code. We demonstrate that the proposed spectral designs significantly outperform existing approaches, especially in high dimensions.

Bhavya KailkhurLawrence Livermore National [email protected]

Jayaraman ThiagarajanLawrence Livermore National [email protected]

Peer-Timo Bremerlawrence livermore national [email protected]

MS105

Stochastic Dimension Reduction using Basis Adaptation and Spatial Domain Decomposition for PDEs with Random Coefficients

In this work, we present a stochastic dimension reduction method based on basis adaptation in combination with a spatial domain decomposition method for partial differential equations (PDEs) with random coefficients. We use polynomial chaos based uncertainty quantification (UQ) methods to solve stochastic PDEs and model the random coefficient using Hermite polynomials in Gaussian random variables. In this approach, we decompose the spatial domain into a set of non-overlapping subdomains and find in each subdomain a low-dimensional stochastic basis to represent the local solution in that subdomain accurately. The local basis in each subdomain is obtained by an appropriate linear transformation of the original set of Gaussian random variables spanning the Gaussian Hilbert space. The local solutions in the subdomains are solved independently of each other, while the continuity conditions for the solution and the flux across the subdomain interfaces are maintained. We employ a Neumann-Neumann algorithm to systematically compute the solution in the interior and at the interfaces of the subdomains. To impose continuity, we project the local solution in each subdomain onto a common basis. We show through numerical experiments that the proposed approach significantly reduces the computational cost of the stochastic solution.
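
A minimal sketch (Python/NumPy; `c1` is a hypothetical vector of first-order Hermite PC coefficients of a local quantity of interest) of the linear rotation at the heart of Gaussian basis adaptation: the first adapted variable is aligned with the dominant linear direction and the map is completed to an orthogonal change of Gaussian variables:

    import numpy as np

    def adaptation_rotation(c1):
        """Orthogonal matrix A with eta = A @ xi, whose first row is the normalized
        vector of first-order PC coefficients (remaining rows complete the basis)."""
        a1 = np.asarray(c1, dtype=float)
        a1 = a1 / np.linalg.norm(a1)
        M = np.column_stack([a1, np.eye(len(a1))[:, 1:]])   # assumes a1 is not e_k, k >= 2
        Q, _ = np.linalg.qr(M)                              # orthonormalize the columns
        if Q[:, 0] @ a1 < 0:
            Q[:, 0] = -Q[:, 0]                              # fix the sign of the first direction
        return Q.T

    # eta = adaptation_rotation(c1) @ xi rotates the Gaussian variables so that a
    # low-dimensional expansion in the first few eta components captures most variance.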

Ramakrishna Tipireddy, Panos Stinis, AlexanderTartakovskyPacific Northwest National [email protected], [email protected], [email protected]

MS106

Padé Approximation for Helmholtz Frequency Response Problems with Stochastic Wavenumber

Due to the oscillatory behavior of the analytic solutions, the finite element approximation of the Helmholtz boundary value problem in mid- and high-frequency regimes is computationally expensive and time-consuming: accurate approximations are possible only on very fine meshes or with high polynomial approximation degrees. For this reason, when solutions at many frequencies are of interest, repeated finite element computations become prohibitive. To reduce the computational cost we propose a model order reduction method based on Least-Squares Padé-type techniques. In particular, the Helmholtz frequency response map, which associates each frequency with the corresponding solution of the Helmholtz boundary value problem, is approximated starting only from precomputed evaluations at a few frequency values. Uniform convergence results for the Least-Squares Padé approximation error are proved. Two algorithms to compute the Least-Squares Padé approximant are discussed. 2D numerical tests are provided that confirm the theoretical upper bound on the convergence error. This is joint work with Fabio Nobile and Davide Pradovera (EPFL), and Ilaria Perugia (University of Vienna).

Francesca BonizzoniFaculty of MathematicsUniversity of [email protected]

Fabio NobileEPFL, [email protected]

Davide PradoveraEPFL [email protected]

Ilaria PerugiaUniversity of ViennaFaculty of [email protected]

MS106

Reduced Order Models for CVaR Estimation and Risk-Averse Optimization

Two reduced order model (ROM) approaches for CVaR estimation are introduced and analyzed. One replaces the original quantity of interest by the ROM approximation. The other uses ROMs to generate importance sampling. Improvements gained by using ROMs are quantified and illustrated numerically. These CVaR estimation approaches are then used to optimize CVaR subject to PDE constraints with uncertain parameters. In this context it is important that the key quantities that need to be approximated for CVaR objective function evaluations are also the key quantities arising in gradient computations. Numerical results are given to illustrate the performance gains due to ROM. This talk is based on joint work with Boris Kramer (MIT), Timur Takhtaganov (Rice), and Karen Willcox (MIT).
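
A minimal sketch (Python/NumPy; plain Monte Carlo on a stand-in quantity of interest, not the ROM-accelerated or importance-sampled estimators of the talk) of the empirical CVaR such approaches approximate:

    import numpy as np

    def cvar(samples, beta=0.95):
        """Empirical CVaR_beta via the Rockafellar-Uryasev form
        CVaR_beta = min_t { t + E[(Q - t)_+] / (1 - beta) }, with t = VaR_beta."""
        t = np.quantile(samples, beta)                       # empirical VaR_beta
        return t + np.mean(np.maximum(samples - t, 0.0)) / (1.0 - beta)

    rng = np.random.default_rng(1)
    q = rng.lognormal(size=100_000)                          # stand-in for QoI evaluations
    print(cvar(q, beta=0.99))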

Matthias HeinkenschlossDepartment of Computational and Applied MathematicsRice [email protected]

MS106

Multifidelity Dimension Reduction via Active Subspaces

Engineering problems such as optimization and inference become increasingly challenging as the dimension of the problem grows. In some cases, most of the variation of the function of interest can be captured by a low-dimensional subspace. Identifying and exploiting this subspace reduces the complexity of solving the problem at hand. The active subspace method leverages gradient information to compute such a subspace. In this work, we propose to use multifidelity techniques to reduce the cost of identifying the active subspace. We provide an analysis of the number of gradient evaluations necessary to achieve a prescribed error in expectation and with high probability. Numerical experiments illustrate the benefits of such an approach.
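
A minimal sketch (Python/NumPy; single-fidelity plain Monte Carlo, whereas the talk replaces this average with a multifidelity estimator) of the standard active subspace construction from sampled gradients:

    import numpy as np

    def active_subspace(gradients, k=2):
        """Estimate a k-dimensional active subspace from gradient samples.
        gradients: array of shape (n_samples, dim) with evaluations of grad f."""
        C = gradients.T @ gradients / gradients.shape[0]   # Monte Carlo estimate of E[g g^T]
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]
        return eigvals[order], eigvecs[:, order[:k]]       # spectrum and active directions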

Remi LamMassachusetts Institute of [email protected]

K. [email protected]

MS106

Low-rank Methods for Approximations of the Posterior Covariance Matrix of Linear Bayesian Inverse Problems

We consider the problem of efficiently computing the covariance matrix of the posterior probability density for linear Gaussian Bayesian inverse problems. However, the covariance matrix of the posterior probability density is dense and large; hence, the computation of such a matrix is impossible for large-dimensional parameter spaces, as is the case for discretized PDEs. Low-rank approximations to the posterior covariance matrix were recently introduced as promising tools. Nevertheless, the resulting approximations suffer from the curse of dimensionality for transient problems. We here exploit the structure of the discretized PDEs in such a way that spatial and temporal components can be separated and the curse is overcome. We reduce both the computational complexity and the storage requirement from O(nx nt) to O(nx + nt) [1]. Here nx is the dimension of the spatial domain and nt is the dimension of the time domain. We use numerical experiments to illustrate the advantages of our approach. [1] P. Benner, Y. Qiu, and M. Stoll. Low-rank computation of posterior covariance matrices in Bayesian inverse problems, arXiv:1703.05638, 2017.
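
A minimal sketch (Python/NumPy; small dense matrices as stand-ins, not the space-time tensor structure exploited in [1]) of the standard low-rank form of the posterior covariance for a linear Gaussian inverse problem, built from the leading eigenpairs of the prior-preconditioned data-misfit Hessian:

    import numpy as np

    def lowrank_posterior_cov(H, R, P, r):
        """Approximate (H^T R^-1 H + P^-1)^-1 as P - W diag(lam/(1+lam)) W^T,
        where (lam, V) are the top-r eigenpairs of L^T H^T R^-1 H L, P = L L^T, W = L V."""
        L = np.linalg.cholesky(P)
        Hs = np.linalg.solve(np.linalg.cholesky(R), H @ L)   # R^{-1/2} H L
        lam, V = np.linalg.eigh(Hs.T @ Hs)                   # prior-preconditioned Hessian
        idx = np.argsort(lam)[::-1][:r]
        lam, W = lam[idx], L @ V[:, idx]
        return P - W @ np.diag(lam / (1.0 + lam)) @ W.T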

Peter BennerMax Planck Institute for Dynamics of Complex TechnicalSystems, Magdeburg, [email protected]

Yue QiuMax Planck [email protected]

Martin StollMax Planck Institute, [email protected]

MS107

Bayesian Algorithms for Data-driven Turbulence Modelling

Recently several researchers have started to use Bayesian methods in the context of turbulence modelling, to obtain an empirical map from a feature space consisting of the local mean flow, to the Reynolds-stress tensor (RST), using reference data (LES, DNS) to train the model. However, no such unique map exists, and current techniques do not take into account the scatter in possible RSTs for a given feature vector. We aim to construct a stochastic map based on a generalised Gaussian process, fitted to the mean, variance, and (local) correlation length of the reference data in feature space. Realizability and Galilean invariance of the RST are naturally enforced. Random samples generated from our map are propagated through a solver with MC and MLMC to predict flows not in the training set. The quantified uncertainty of these predictions is compared with held-back reference data.

Richard P. DwightDelft University of [email protected]

MS107

Uncertainty Propagation in RANS Simulations via Multi-level Monte Carlo Method

A multilevel Monte Carlo (MLMC) method for quantifying model-form uncertainties associated with Reynolds-Averaged Navier-Stokes (RANS) simulations is presented. Two high-dimensional local stochastic extensions of the RANS equations are considered to demonstrate the applicability of the MLMC method. The first approach is based on a global perturbation of the baseline eddy-viscosity field using a log-Gaussian random field. A more general second extension is also considered, where the entire Reynolds stress tensor (RST) is perturbed while maintaining realizability. Two fundamental flow problems are studied, showing the practical advantage of the multilevel Monte Carlo method over standard Monte Carlo methods.
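
A minimal sketch (Python/NumPy; `sample_qoi(level, n)` is a hypothetical routine returning paired QoI samples on mesh level `level` and on level `level - 1`, computed from the same random inputs) of the telescoping MLMC estimator:

    import numpy as np

    def mlmc_estimate(sample_qoi, n_per_level):
        """E[Q_L] ~ E[Q_0] + sum_l E[Q_l - Q_{l-1}], each term estimated by Monte Carlo."""
        total = 0.0
        for level, n in enumerate(n_per_level):
            q_fine, q_coarse = sample_qoi(level, n)          # q_coarse is ignored on level 0
            diff = q_fine if level == 0 else q_fine - q_coarse
            total += np.mean(diff)
        return total

    # e.g. estimate = mlmc_estimate(sample_qoi, n_per_level=[1000, 200, 40])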

Prashant [email protected]

Martin SchmelzerTU [email protected]

Richard P. DwightDelft University of [email protected]

MS107

Constitutive Modeling of Turbulence with Physics-informed Machine Learning

Reynolds-averaged Navier-Stokes (RANS) models are widely used in simulating turbulent flows in engineering applications. The modeling of the Reynolds stress introduces large model-form uncertainties, diminishing the reliability of RANS simulation results. In this talk, we present a data-driven constitutive modeling approach using physics-informed machine learning. This framework consists of three components: (1) build a neural-network-based constitutive model for turbulence, (2) impose physical constraints on that neural-network-based constitutive model, and (3) use the machine-learning-trained constitutive model to simulate the mean velocity field and other quantities of interest. We evaluate the performance of the proposed data-driven constitutive modeling approach on flows with stress-induced secondary flows and flows with massive separation, both of which are challenging for traditional RANS modeling. Significant improvements over the baseline RANS simulations are observed.

Jinlong WuDept. of Aerospace and Ocean Engineering, Virginia [email protected]

Carlos MichelenDept. of Aerospace and Ocean EngineeringVirginia Techunknown

Heng XiaoDept. of Aerospace and Ocean Engineering, Virginia [email protected]

MS107

Bayesian Modeling of Mixed Aleatory and Epistemic Uncertainty in CFD

The problem of mixed aleatory/epistemic uncertainty quantification applied to computationally expensive models is considered. Aleatory uncertainty is irreducible uncertainty, which is often characterized by probability density functions. Epistemic uncertainty is introduced by lack of knowledge and errors in the models under consideration, and can be reduced by investing more effort in modeling (e.g. by investing more computational time or using more accurate models). Our approach to modeling these uncertainties is fully Bayesian. The unknown epistemic uncertainty is characterized using the concept known as Bayesian model calibration. The aleatory uncertainties are combined with the resulting posterior such that predictions under uncertainty can be made efficiently. The novelty of our approach is a new sampling procedure, requiring significantly fewer samples than conventional methods to produce accurate results. We propose an adaptive methodology that replaces the (computationally expensive) model with an interpolating surrogate that is based on weighted Leja nodes.

Laurent van den Bos, Benjamin SanderseCentrum Wiskunde & Informatica (CWI)Amsterdam, the [email protected], [email protected]

MS108

Rare Event Probability Estimation using Adaptive Support Vector Regression - Importance of Kernels and their Proper Tuning

The presentation is about surrogate models used for the estimation of rare event probabilities. In this context, support vector regressors (SVR) are adaptively constructed as surrogates of expensive-to-evaluate functions. The following important aspects are addressed in the talk: choice of kernel type (e.g. isotropic vs. anisotropic), tuning of SVR hyperparameters, and selection criteria for enriching the training set of data. The efficiency and robustness of the proposed adaptive method are assessed on a set of challenging examples, with a comparison to published results obtained by other methods where available.

Jean-Marc BourinetSIGMA Clermont, Institut [email protected]

MS108

Sequential Designs of Surrogate Models for Reliability Analysis

Reliability analysis estimates the probability of a rare event, the failure event. Efficient estimation of the probability of failure can be performed with sequential importance sampling methods. These methods sample a sequence of measures that gradually approach the theoretically optimal importance sampling density. To facilitate efficient estimation in cases where the failure event is defined in terms of the outcome of a computationally intensive numerical model, sequential importance sampling approaches can be combined with surrogate models. We consider the construction of multiple low-fidelity surrogate models by building polynomial chaos expansions (PCEs) at each intermediate measure/sample set provided by the sequential sampling procedure. By design, these surrogates will be most accurate on the support of the current intermediate measure/sample set. To address the curse of dimensionality, we consider the construction of order-reduced PCEs in rotated spaces, as suggested in [R. Tipireddy, R.G. Ghanem, Basis adaptation in homogeneous chaos spaces, J. Comput. Phys. 259 (2014) 304-317], after eliciting important directions pointing towards the failure domain.

Max EhreTU [email protected]

Iason Papaioannou, Daniel StraubEngineering Risk Analysis GroupTU [email protected], [email protected]

MS108

A Unified Approach on Active Learning Methods for Reliability Analysis

Due to their efficiency in identifying complex failure domains, meta-model-based active learning techniques have been increasingly adopted in the rare event estimation literature. A large number of publications have been devoted in the last decade to capitalizing on the properties of specific surrogate model techniques (e.g. Kriging, support vector regression and polynomial chaos expansions) to create adaptive-design strategies that efficiently explore the design space so as to provide stable and accurate estimates of small probabilities of failure of complex systems. In this contribution, instead of focusing on a specific class of surrogates and rare event estimation techniques (e.g. Monte Carlo simulation or subset sampling), we provide a unified framework to perform adaptive sampling for rare event estimation that does not depend on the specific techniques chosen. This approach can therefore be included in pre-existing workflows with the techniques of choice, which can be beneficial in increasing the appeal of active-learning-based reliability analysis to a wider audience. The performance of the proposed framework is then tested on several benchmark problems by mixing different surrogate-modelling and rare-event-estimation techniques.

Stefano MarelliChair of Risk, Safety and Uncertainty QuantificationInstitute of Structural Engineering, ETH [email protected]

Moustapha Maliki, Roland Schobi, Bruno SudretETH [email protected], [email protected],[email protected]

MS108

Rare Event Simulation Through Metamodel-driven Sequential Stochastic Sampling

Rare-event simulation is frequently performed in a sequential setting, introducing a series of intermediate densities that gradually converge to the target one (the rare-event conditional failure density). Subset simulation facilitates, for example, such an approach. For applications with complex numerical models, the associated computational burden for this task is high, and the adaptive Kriging stochastic sampling and density approximation algorithm (AK-SSD) is discussed here to alleviate this burden. AK-SSD replaces the exact system model with a fast-to-evaluate metamodel, while it also reduces the required number of experiments to construct this metamodel by adaptively adjusting its accuracy through an iterative approach. The metamodel is refined by sample-based design of experiments during this iterative process, such that its prediction ability is improved both globally and locally. Local improvement is sought in the regions of importance for both the target and the intermediate conditional failure densities. The refinement stops when the target densities approximated in consecutive iterations become indifferent, with the Hellinger distance employed as a comparison metric. Once approximated (first stage), the conditional failure density is used (second stage) as the importance sampling density for estimating the rare-event likelihood. Use of either the metamodel or the exact numerical model is examined for the second stage.

Jize Zhang, Alexandros A. TaflanidisUniversity of Notre [email protected], [email protected]

MS109

Bayesian Inverse Problems and Low-rank Approximations

We look at Bayesian updating in the framework of updating a random variable (RV) whose distribution is the prior distribution. It is assumed that this RV is a function of many more elementary random variables. Furthermore, it is assumed that computationally the RV is represented in either a sampling format or in a functional or spectral representation like the polynomial chaos expansion. The samples, resp. the coefficients, may be viewed as a high-degree tensor, and hence it is natural to assume a low-rank approximation for this. Conditioning, the central element of Bayesian updating, is theoretically based on the notion of conditional expectation. Here we use this construct also as the computational basis, namely the ability to compute conditional expectations. With this in hand, we formulate an updating process which produces a new RV through successively finer approximations, whose law, resp. distribution, is the sought posterior distribution. This is in effect a kind of filter. It will be shown that the filter can be computed using low-rank tensor approximations and that it operates directly on the low-rank representation of the RV representing the prior, to produce a low-rank representation of the posterior RV.

Hermann MatthiesTechnische Universitat [email protected]

MS109

Principal Component Analysis and Active Learning in Tree Tensor Networks

We present an active learning algorithm for the approximation of high-dimensional functions of random variables using tree-based tensor formats (tree tensor networks). The algorithm is based on an extension of principal component analysis to multivariate functions, identified as elements of Hilbert tensor spaces. It constructs a hierarchy of subspaces associated with the different nodes of a dimension partition tree and a corresponding hierarchy of sample-based projection operators. Optimal subspaces are estimated using empirical principal component analysis of partial random evaluations of the function. The algorithm is able to provide an approximation of a function in any tree-based tensor format, with either a prescribed rank or a prescribed relative error, with a number of evaluations on the order of the storage complexity of the resulting tensor network.

Anthony NouyEcole Centrale [email protected]

MS109

Multilevel Monte Carlo Computation of Seismic Wave Propagation with Random Lamé Parameters

Abstract not available at time of publication.

Anamika PandeySRI-UQ [email protected]

MS109

Sparse Multifidelity Approximations for Forward UQ with Application to Scramjet Combustor Computations

Engineering models of practical interest frequently depend on high-dimensional parameter spaces, and the corresponding numerical models are usually computationally expensive. This often limits the number of simulations one can perform, and direct explorations of the parameter space are prohibitive. In this talk we discuss progress on constructing lower-dimensional surrogate models in a multifidelity setting via sparse regression and adaptive quadrature. We illustrate the algorithms for the analysis of Large Eddy Simulation of a SCRAMJET combustor.

Cosmin SaftaSandia National [email protected]

Gianluca Geraci
University of [email protected]

Michael S. EldredSandia National LaboratoriesOptimization and Uncertainty Quantification [email protected]

Habib N. NajmSandia National LaboratoriesLivermore, CA, [email protected]

MS110

Optimization with Fractional PDEs

Fractional differential operators provide an attractive mathematical tool to model effects with limited regularity properties. Particular examples are image processing and phase field models, in which jumps across lower-dimensional subsets and sharp transitions across interfaces are of interest. In this talk we discuss these applications in addition to optimization problems with PDE constraints. The main emphasis will be on how to account for uncertainty in these models.

Harbir AntilGeorge Mason UniversityFairfax, [email protected]

MS110

A Data-oriented Approach to Statistical Inverse Problems

We study a data-oriented (aka consistent) Bayesian approach for inverse problems. Our focus is to investigate the relationship between the data-oriented Bayesian formulation and the standard one for linear inverse problems. We show that (1) from a deterministic point of view, the MAP of the data-oriented Bayesian method is similar to the truncated SVD approach (regularization by truncation), and (2) from the statistical point of view, no matter what prior is used, the method only allows the prior to act on the data-uninformed parameter subspace, while leaving the data-informed subspace untouched. Both theoretical and numerical results will be presented to support our approach.
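
A minimal sketch (Python/NumPy) of the truncated SVD solution referred to in (1): the data are allowed to inform only the r leading right singular directions of the forward operator, and the remaining directions are left untouched:

    import numpy as np

    def truncated_svd_solution(A, d, r):
        """Least-squares solution of A m = d restricted to the r leading right
        singular vectors of A (regularization by truncation)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        coeffs = (U[:, :r].T @ d) / s[:r]
        return Vt[:r].T @ coeffs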

Brad MarvinUniversity of Texas at [email protected]

Tim WildeyOptimization and Uncertainty Quantification Dept.Sandia National [email protected]

Tan Bui-ThanhThe University of Texas at [email protected]

MS110

Safe Designs via Robust Geometric Programming

Geometric programming is a powerful optimization strategy for an increasingly large class of problems across engineering applications. This work extends geometric programming to the case of optimization under uncertainty. Robust geometric programming helps model and process geometric programs with uncertain data that is known to belong to some uncertainty set. The work presents a methodology to approximate the robust counterpart of an uncertain geometric program as a tractable optimization problem (convex or log-convex) using robust linear programming techniques. This methodology, as well as other existing methodologies, will be used to robustify a complex aircraft design problem, and the results of the different methodologies will be compared and discussed.

Ali SaabMassachusetts Institute of [email protected]

Edward [email protected]

Warren [email protected]

MS110

An Uncertainty-weighted ADMM Method for Multiphysics Parameter Estimation

In this work, we accelerate the convergence of the global variable consensus ADMM algorithm for solving large-scale parameter estimation problems coupling different physics. The idea of global variable consensus is to partition the joint estimation problem into subproblems associated with each PDE, which can be solved individually and in parallel. This allows one to use tailored parameter estimation methods for each problem and is thus attractive for large-scale problems involving complex interactions of multiple physics. A well-known drawback of global variable consensus ADMM is its slow convergence, particularly when the number of workers grows (i.e., a large number of subproblems). To overcome this problem, we propose a new weighting scheme in the algorithm that accounts for the uncertainty associated with the solutions of each subproblem. The uncertainty information can be obtained efficiently using iterative methods, and we demonstrate that the weighting scheme improves convergence for a large-scale multiphysics inverse problem involving a travel-time tomography and DC-resistivity survey.
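
A minimal sketch (Python/NumPy; quadratic-proximal subproblem solvers passed in as callables, and the plain unweighted consensus average rather than the uncertainty-weighted update proposed in the talk) of global variable consensus ADMM in scaled form:

    import numpy as np

    def consensus_admm(subproblem_solvers, dim, n_iter=100):
        """Each solver(v) must return argmin_x f_i(x) + (rho/2)*||x - v||^2 for its physics."""
        K = len(subproblem_solvers)
        x = np.zeros((K, dim))           # local parameter copies
        u = np.zeros((K, dim))           # scaled dual variables
        z = np.zeros(dim)                # global (consensus) variable
        for _ in range(n_iter):
            for i, solve in enumerate(subproblem_solvers):
                x[i] = solve(z - u[i])               # local updates, parallel across physics
            z = np.mean(x + u, axis=0)               # consensus update (plain average)
            u += x - z                               # dual ascent
        return z

    # The uncertainty-weighted variant replaces the plain average by a weighted one,
    # with weights derived from the uncertainty of each subproblem's solution.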

Samy Wu FungEmory University400 Dowman Dr, 30322 Atlanta, GA, [email protected]

Lars RuthottoDepartment of Mathematics and Computer ScienceEmory [email protected]

MS111

Inadequacy Representation of Flamelet-based RANS Model with a Physics-based Stochastic PDE

Stochastic representations of the model inadequacy of flamelet-based RANS models for turbulent non-premixed flames are studied. While high-fidelity models based on the reacting Navier-Stokes equations are available, turbulent flame simulations based on such models require substantial computational resources and are not practical for use in engineering decision-making. Alternatively, flamelet-based RANS models introduce a number of simplifying assumptions that lead to dramatically reduced computational cost, but also introduce non-negligible errors. Here, we develop a stochastic modeling approach for representing errors introduced by using an assumed PDF for the mixture fraction. Typical deterministic models are based on the mixture fraction mean and variance. We enrich the model by posing a stochastic PDE for the mixture fraction triple correlation. This new state variable is then used to generate perturbations to the assumed PDF, resulting in a stochastic model for all flow variables. A non-premixed hydrogen-air jet flame is used to demonstrate the approach.

Myoungkyu LeeInstitute for Computational Engineering and SciencesThe University of Texas at [email protected]

Todd A. OliverPECOS/ICES, The University of Texas at [email protected]

Robert D. MoserUniversity of Texas at [email protected]

MS111

Bayesian Calibration of Rheological Closure Relations for Computational Models of Turbidity Currents

Numerical models can help to push forward the knowledge about complex dynamic physical systems. The modern approach to doing that involves detailed mathematical models. Turbidity currents are a kind of particle-laden flow that represents a very complex natural phenomenon. Simply put, they are turbulence-driven flows generated between fluids with small density differences, carrying particles. They are also one mechanism responsible for the deposition of sediments on the seabed. A detailed understanding of this phenomenon, including uncertainties, may offer new insight to help geologists understand reservoir formation, a strategic knowledge in oil exploration. We present a finite element formulation applied to the numerical simulation of particle-laden flows in an Eulerian-Eulerian framework. When sediment concentrations are high enough, rheological empirical laws close the model, describing how sediment concentrations influence the mixture viscosity. The aim of this work is to investigate the effects of some of these empirical laws on the flow dynamics. We use two configurations for numerical experiments, both inspired by laboratory tests. We show how turbulent structures and quantities of interest, such as sediment deposition, are affected by the different empirical rheological laws. Moreover, we employ a Bayesian framework to validate the employed hypotheses and introduce a stochastic error term to take the modeling error into account.

Fernando A. RochinhaCOPPE - Federal University of Rio de JaneiroRua Nascimento Silva 100 [email protected]

Zio Souleymane, Henrique Costa
COPPE, Federal University of Rio de Janeiro
unknown, unknown

Gabriel GuerraThe Federal University of Rio de [email protected]

MS111

Embedded Model Error Quantification and Propagation

Conventional applications of Bayesian calibration typically assume the model replicates the true mechanism behind data generation. However, this idealization is often not achieved in practice, and computational models frequently carry different physical parameterizations and assumptions than the underlying 'truth'. Ignoring model errors can then lead to overconfident calibrations and predictions around values that are, in fact, biased. Most statistical methods for bias correction are specific to observable quantities, do not retain physical constraints in subsequent predictions, and experience identifiability challenges in distinguishing between data noise and model error. We develop a general Bayesian framework for non-intrusive embedded model correction that addresses some of these difficulties by inserting a stochastic correction into the model input parameters. The physical inputs and correction parameters are then simultaneously inferred. With a polynomial chaos characterization of the correction term, the approach allows efficient quantification, propagation, and decomposition of uncertainty that includes contributions from data noise, parameter posterior uncertainty, and model error. We demonstrate the key strengths of this method on both synthetic examples and realistic engineering applications.

Khachik Sargsyan, Xun HuanSandia National [email protected], [email protected]

Habib N. NajmSandia National LaboratoriesLivermore, CA, [email protected]

MS111

Stochastic Inadequacy Models for Chemical Kinetics

Two of the most prominent modeling assumptions used in simulations of turbulent combustion are the introduction of turbulence models and reduced chemistry models. In the present work, we develop a stochastic model inadequacy representation to quantify uncertainties due to reduced models in chemical kinetics. The inadequacy formulation is a generalization of previous work in which a stochastic operator is appended to a reduced chemical kinetics model and calibrated with data from a detailed model using a hierarchical Bayesian approach. The inadequacy formulation accounts for situations in which both reactions and species are neglected from the detailed model. The new inadequacy model was developed to be adaptive, such that it is only active when chemical reactions are taking place; otherwise the inadequacy model does not participate. We present results from a perfectly stirred reactor with H2/O2 combustion. The detailed model consists of 21 reactions and eight species, whereas the reduced model uses five reactions and seven species. We then apply the inadequacy formulation to a one-dimensional counterflow diffusion flame with H2/O2 combustion. Finally, we present preliminary results on using the counterflow diffusion flame and its associated uncertainties from the inadequacy formulation to build an uncertain flamelet library for non-premixed combustion.

David SondakInstitute for Applied Computational ScienceHarvard [email protected]

Todd A. OliverPECOS/ICES, The University of Texas at [email protected]

Chris SimmonsICES, University of [email protected]

Robert D. MoserUniversity of Texas at [email protected]

MS112

Emulation of Computer Models with Multivariate Output

Computer models often produce multivariate output. A Gaussian process emulator of a model is typically constructed for each output individually; however, approaches for dependent multivariate emulation of the output have been proposed. We will demonstrate the irrelevance of multivariate modeling of the output of a computer model for the purpose of emulation of the model, theoretically and with simulation studies. We will discuss the application of the result to volcano eruption computer models.

Ksenia N. KyzyurovaCEMSE DivisionKing Abdullah University of Science and [email protected]

James Berger, Robert L. WolpertDuke [email protected], [email protected]

MS112

Sequential Surrogate-based Optimization: Application to Storm Surge Modelling

To mitigate geophysical hazards, it is important to understand the different sources of uncertainty. The computational burden of running a complex computer model, such as a storm surge model, can make optimization and uncertainty quantification tasks impractical. Gaussian process (GP) regressions are often used as statistical surrogate models to alleviate this issue; they have been shown to be good approximations of the computer model. This study is focused on maximizing an unknown function with the least number of evaluations using the exploration vs. exploitation trade-off strategy: a GP surrogate model is built and employed to achieve this. We explore the uncertain regions and find the input values that maximize the information gain about the function using MICE, a sequential design algorithm which maximizes the mutual information over the input space. The computational efficiency of interweaving MICE and the optimization scheme is examined, and demonstrated on test functions. The benefit of the newly developed sequential surrogate-based optimization scheme is also examined on a storm surge simulator, where the aim is to identify conditions that will create possibly large storm surge run-ups at the local level and at minimum computational cost.

Theodoros MathikolonisDepartment of Statistical ScienceUniversity College [email protected]

Serge GuillasUniversity College [email protected]

MS112

Modeling of Geophysical Flows - Analysis of Models and Modeling Assumptions using UQ

Dense large-scale granular avalanches are a complex class of flows with physics that has often been poorly captured by models. Sparsity of actual flow data (usually only deposit information is available) and large uncertainty in the mechanisms of initiation and flow propagation make the modeling task challenging and a subject of much continuing interest. Models that appear to represent the physics well in certain flows turn out to be poorly behaved in others due to intrinsic mathematical or numerical issues. While inverse problems can shed some light on parameter choices, it is difficult to make firm judgements on the validity or appropriateness of any single assumption or set of modeling assumptions for a particular target flow or potential flows that need to be modeled for predictive use in hazard analysis. We will present here an uncertainty quantification based approach to carefully analyze the effect of modeling assumptions on quantities of interest in simulations based on three established models (Mohr-Coulomb, Pouliquen-Forterre and Voellmy-Salm) and thereby derive a model (from a set of modeling assumptions) suitable for use in a particular context. We also illustrate that a simpler though more restrictive approach is to use a Bayesian model averaging approach based on the limited data to combine the outcomes of different models.

Abani PatraDepartment of Mechanical & Aerospace EngineeringUniversity at [email protected]

Andrea BevilacquaUniversity at BuffaloCenter for Geohazard [email protected]

Ali SafeiSUNY at [email protected]

MS112

Multi-fidelity Sparse-grid-based Uncertainty Quantification Applied to Tsunami Runup

Given a numerical simulation, the objective of parameter estimation is to provide a joint posterior probability distribution for an uncertain input parameter vector, conditional on available experimental data. However, exploring the posterior requires a high number of numerical simulations, which can make the problem impracticable within a given computational budget. A well-known approach to reduce the number of required simulations is to construct a surrogate, which, based on a set of training simulations, can provide an inexpensive approximation of the simulation output for any parameter configuration. To further reduce the total cost of the simulations, we can introduce low-fidelity as well as high-fidelity training simulations. In this case, a small number of expensive high-fidelity simulations is augmented with a larger number of inexpensive low-fidelity simulations. In this talk I will present a multi-fidelity surrogate-based estimator based on sparse grid interpolation. The method will be applied to the quantification of uncertainty for the inundation due to a tsunami runup. We demonstrate a speedup of 20 for a four-dimensional parameter estimation problem.

Stephen G. RobertsAustralian National [email protected]

MS113

Prediction of Permeability from Digital Images of Reservoir Rocks

Permeability is one of the key petrophysical properties that impact the recovery performance of hydrocarbon reservoirs. We introduce a novel Deep Learning system that estimates permeability directly from SEM, micro-CT, and other Digital Rock images. The core of the system is a convolutional neural network that learns a mapping from the image space to the property space. The trained model predicts properties in seconds, therefore speeding up workflows and complementing/validating laboratory measurements and simulation results, and it can also be used as a novel tool for uncertainty quantification. Preliminary results will be presented and discussed.

Mauricio Araya-Polo, Faruk O. Alpak, Nishank Saxena,Sander HunterShell International E&P, [email protected], [email protected], [email protected], [email protected]

MS113

Parametrization and Generation of Geological Models with Generative Adversarial Networks

One of the main challenges in the parametrization of geological models is the ability to capture complex geological structures often observed in subsurface fields. In recent years, Generative Adversarial Networks (GANs) were proposed as an efficient method for the generation and parametrization of complex data, showing state-of-the-art performance in challenging computer vision tasks such as reproducing natural images (handwritten digits, human faces, pets, cars, etc.). In this work, we study the application of Wasserstein GANs for the parametrization of geological models. The effectiveness of the method is assessed for uncertainty propagation tasks using several test cases involving different permeability patterns and subsurface flow problems. Results show that GANs are able to generate samples that preserve the multipoint statistical features of the geological models both visually and quantitatively. The generated samples reproduce both the geological structures and the flow properties of the reference data.

Shing ChanHeriotWatt University, [email protected]

Ahmed H. ElSheikhInstitute of Petroleum Engineering

Heriot-Watt University, Edinburgh, [email protected]

MS113

Novel Robust Machine Learning Methods for Identification and Extraction of Unknown Features in Complex Real-world Data Sets

Abstract not available at time of publication.

Velimir V. VesselinovLos Alamos National [email protected]

MS114

Numerical Methods for Hyperbolic Systems of PDEs with Uncertainties

Abstract not available at time of publication.

Alexander KurganovSouthern University of Science and Technology of ChinaTulane University, [email protected]

MS114

Analysis of UQ in Computational Method for some Kinetic and Hyperbolic Equations

In this talk, I will present some mathematical analysis of UQ in computational methods for some kinetic and hyperbolic equations. For neutron transport and radiative transfer equations with small mean free path and random cross-section, we show regularity uniform in the random variable and spectral accuracy of the gPC Galerkin method. For the Burgers equation with shock solutions and random initial data, we show that a method of polynomial interpolation with proper shifting can capture physical quantities such as the shock emergence time and shock location. This is joint work with Shi Jin, Qin Li, Zheng Ma, and Ruiwen Shu.

Jian-guo LiuDuke [email protected]

MS114

Discovering Variable Fractional Orders of Advection-dispersion Equations from Field Data using Multi-fidelity Bayesian Optimization

The fractional advection-dispersion equation (FADE) can describe accurately the solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient, as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.

Guofei PangBrown Universityguofei [email protected]

George E. KarniadakisBrown UniversityDivision of Applied Mathematicsgeorge [email protected]

Paris PerdikarisMassachusetts Institute of [email protected]

Wei CaiSouthern Methodist [email protected]

MS114

Asymptotically Efficient Simulations of Elliptic Problems with Small Random Forcing

Recent rare-event simulations show that the large deviation principle (LDP) for stochastic problems plays an important role in both theory and simulation for studying rare events induced by small noise. Practical challenges of applying this useful technique include minimizing the rate function numerically and incorporating the minimizer into the importance sampling scheme for the construction of efficient probability estimators. For a spatially extended system where the noise is modeled as a random field, even for simple steady-state problems, many new issues are encountered in comparison to finite-dimensional models. We consider the Poisson equation subject to a Gaussian random forcing with vanishing amplitude. In contrast to the simplified rate functional given by space white noise, we consider a covariance operator of trace class, such that the effects of small noise of moderate or large correlation length on rare events can be studied. We have constructed an LDP-based importance sampling estimator with a sufficient and necessary condition to guarantee weak efficiency, where numerical approximation of the large deviation principle is also addressed. Numerical studies are presented.

Xiaoliang WanLouisiana State UniversityDepartment of [email protected]

Xiang ZhouDepartment of MathematicsCity Univeristy of Hong [email protected]

MS115

UQLab: What's Next?

With the public release of version 1.0, the UQLab project reached a critical maturity stage. This important milestone included the open-source release of its scientific modules, in addition to a number of new features, thus marking the beginning of a new development phase in the project. In this contribution we give an overview of the newly introduced scientific modules, which include:

• UQ-Link: a toolbox to easily connect external solvers to the UQLab framework;

• support vector machines (regression + classification) surrogate modelling facilities;

• reliability-based design optimization;

• random fields discretization and sampling.

An outlook on current and upcoming collaborative development efforts will also be presented.

Stefano MarelliChair of Risk, Safety and Uncertainty QuantificationInstitute of Structural Engineering, ETH [email protected]

Bruno SudretETH [email protected]

MS115

QUESO: A Parallel C++ Library for Quantifying Uncertainty in Estimation, Simulation, and Optimisation

QUESO is a tool for quantifying uncertainty for a specific forward problem. QUESO solves Bayesian statistical inverse problems, which pose a posterior distribution in terms of prior and likelihood distributions. QUESO executes MCMC, an algorithm well suited to evaluating moments of high-dimensional distributions. While many libraries exist that solve Bayesian inference problems, QUESO is specialized software designed to solve such problems by utilizing the parallel environments demanded by large-scale forward problems. QUESO is written in C++.

Damon McDougallInstitute for Computational Engineering and SciencesThe University of Texas at [email protected]

MS115

Integrating SNOWPAC in Dakota with Application to a Scramjet

We present recent developments regarding the integration of SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) in Dakota. SNOWPAC is a method for stochastic nonlinear constrained derivative-free optimization using a trust-region approach and Gaussian processes. It solves optimization problems where sampling is used to estimate robustness or risk measures comprising the objective function and/or constraints. We assume that the sample estimates involve expensive black-box model evaluations, and thus we focus on a small sample size regime. This introduces significant noise, which slows the optimization process. To mitigate the impact of noise, SNOWPAC employs Gaussian process regression to smooth models of the objective and constraints in the trust region. Additionally, we propose approximate Gaussian process methods in SNOWPAC to handle large numbers of optimization iterations. SNOWPAC is now available in the Dakota framework, which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this presentation we give details about SNOWPAC and show the coupling of the libraries. Finally, we showcase the approach by presenting design optimization results on a challenging scramjet application. Here, Dakota serves as the driver of the optimization process, employing SNOWPAC as the optimization method and multilevel Monte Carlo sampling to evaluate the robustness measures.

Friedrich MenhornDepartment of Informatics, Technische [email protected]

Florian Augustin, Youssef M. MarzoukMassachusetts Institute of [email protected], [email protected]

Michael S. EldredSandia National LaboratoriesOptimization and Uncertainty Quantification [email protected]

MS115

Markov Chain Monte Carlo Sampling using GPU Accelerated Sparse Grids Surrogate Models

We combine the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm with surrogate models constructed using sparse grid (SG) approximation. DREAM allows us to collect random samples from a general probability distribution; the algorithm can be used to solve problems of Bayesian inference and multidimensional optimization. The challenge of the approach comes from the very large number of DREAM samples needed to compute reliable statistics, which in turn requires a very large number of SG evaluations. The DREAM algorithm processes samples in batches, and the corresponding SG evaluation consists of two stages: first, construct a matrix of function values that is either sparse or dense (depending on the type of SG used); second, multiply the matrix by a precomputed data matrix. We present a comparison of different ways to accelerate both stages of the computation using Nvidia GPUs, leveraging the accelerated libraries cuBLAS and cuSPARSE, the newly developed capabilities in Matrix Algebra on GPU and Multicore Architectures, as well as custom CUDA kernels. We show benchmark results from several hardware architectures and GPU generations.

Miroslav Stoyanov
Oak Ridge National Laboratory
[email protected]

MS116

Numerical Methods for Stochastic Delay Differential Equations under Non-global Lipschitz Condition

A class of explicit balanced schemes is presented for stochastic delay differential equations under some non-global Lipschitz conditions. One scheme is a balanced Euler scheme, which is proved to be of order one half in the mean-square sense; the other is a balanced Milstein scheme with first-order convergence. Numerical stability of the two presented schemes is further considered. Some numerical examples are given to verify our theoretical predictions.
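As rough orientation only, the sketch below integrates a scalar stochastic delay differential equation with a balanced (taming-type) explicit Euler step; the drift, diffusion, delay, history and balancing weights are illustrative assumptions and do not reproduce the authors' schemes.

# Illustrative explicit balanced/tamed Euler step for a scalar SDDE
#   dX(t) = f(X(t), X(t - tau)) dt + g(X(t), X(t - tau)) dW(t)
# with a superlinear (non-globally Lipschitz) drift.  The balancing weights
# are a simple choice for illustration, not the scheme in the talk.
import numpy as np

rng = np.random.default_rng(2)
tau, T, h = 1.0, 5.0, 1e-3
n_delay, n_steps = int(tau / h), int(T / h)

f = lambda x, xd: -x**3 + 0.5 * xd          # drift with delay term (assumed)
g = lambda x, xd: 0.2 * (1.0 + np.abs(xd))  # delayed diffusion (assumed)

X = np.empty(n_steps + n_delay + 1)
X[: n_delay + 1] = 1.0                      # constant history on [-tau, 0] (assumed)

for n in range(n_delay, n_delay + n_steps):
    x, xd = X[n], X[n - n_delay]
    dW = np.sqrt(h) * rng.standard_normal()
    drift, diff = f(x, xd), g(x, xd)
    # Balancing/taming factor keeps the explicit step bounded for large |X|.
    c = 1.0 + h * np.abs(drift) + np.abs(diff * dW)
    X[n + 1] = x + (drift * h + diff * dW) / c

print("X(T) =", X[-1])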

Wanrong Cao
Southeast University of China
[email protected]

MS116

Stochastic Computational Singular Perturbation for Complex Chemical Reaction Systems

The research on reduction methods for chemical reaction systems is driven by their complexity, seeking simplified systems involving a smaller set of species and reactions that can approximate the original detailed systems in the prediction of specific quantities of interest. The existence of such reduced systems frequently hinges on the existence of a lower-dimensional, attracting, and invariant manifold characterizing the long-term dynamics. The Computational Singular Perturbation (CSP) method provides a general framework for analysis and reduction of chemical reaction systems, but it is not directly applicable to chemical reaction systems at the micro- or mesoscale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this presentation a stochastic computational singular perturbation method and the associated algorithm will be presented. They can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems.

Xiaoying Han
Auburn University
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, US
[email protected]

Yanzhao Cao
Department of Mathematics & Statistics
Auburn University
[email protected]

Lijin Wang
University of Chinese Academy of Sciences, China
[email protected]

MS116

Chemical Reaction Noise Induced Phenomena: Change in Dynamics and Pattern Formation

It has been realized that noise can have an effect on reaction-diffusion systems. In particular, including noise in reaction-diffusion models can induce pattern formation or change the patterns in the deterministic models. In this talk, I will provide a method for analyzing such effects from intrinsic chemical reaction noise. I will first provide a framework for understanding how chemical reaction noise can change the systems' effective dynamics. Then the approach will be applied to the classical Gray-Scott model and a quantitative analysis of noise-induced phenomena will be elaborated and compared to the numerical simulations.

Yian Ma
University of Washington
[email protected]

MS116

Efficient Integration of Fractional Beam Equation with Space-time Noise

We consider numerical solutions of a nonlinear beam equation driven by space-time white noise with a time-fractional derivative. The solution of the considered equations requires large storage and prohibitive computational cost. To alleviate the heavy computational cost and storage requirements, we employ low-cost and low-storage fast algorithms to solve the problem. We apply an implicit-explicit scheme and reduce the problem to a system of linear time-fractional differential equations which can be solved efficiently. The proposed method significantly improves a method developed in the literature.

Zhongqiang Zhang
Worcester Polytechnic Institute
[email protected]

Zhaopeng Hao
Mathematical Sciences of Worcester Polytechnic Institute
[email protected]

MS117

Importance Sampling the Union of Rare Events with Bounded Relative Error and an Application to Power Systems Analysis

This work presents a method for estimating the probability $\mu$ of a union of $J$ rare events. The method uses $n$ samples, each of which picks one of the rare events at random, samples conditionally on that rare event happening, and counts the total number of rare events that happen. We call it ALORE, for 'at least one rare event'. The ALORE estimate is unbiased and has a coefficient of variation no larger than $\sqrt{(J + J^{-1} - 2)/(4n)}$. The coefficient of variation is also no larger than $\sqrt{(\bar{\mu}/\mu - 1)/n}$, where $\bar{\mu}$ is the union bound. Our motivating problem comes from power system reliability, where the phase differences between connected nodes have a joint Gaussian distribution and the $J$ rare events arise from unacceptably large phase differences. In the grid reliability problems, even some events defined by 5772 constraints in 326 dimensions, with probability below $10^{-22}$, are estimated with a coefficient of variation of about 0.0024 with only $n = 10{,}000$ sample values.
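The sampling scheme is simple enough to sketch. Below is a minimal, hypothetical Python illustration (not the authors' code) for a toy problem where the rare events are Gaussian half-spaces; all dimensions, thresholds and names are assumptions made for the example.

# Sketch of the "at least one rare event" (ALORE) estimator for a toy problem
# with events H_j = { w_j^T x > t_j } and x ~ N(0, I).  Illustration only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, J, n = 20, 5, 10_000                      # dimension, number of events, samples

W = rng.standard_normal((J, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm event normals (assumed)
t = np.full(J, 4.0)                              # thresholds (assumed)

p = norm.sf(t)                               # P(H_j) for each event
mu_bar = p.sum()                             # union bound

est = np.empty(n)
for i in range(n):
    j = rng.choice(J, p=p / mu_bar)          # pick an event proportionally to P(H_j)
    # Sample x | H_j: component along w_j is truncated normal above t_j,
    # the orthogonal complement stays standard normal.
    a = norm.ppf(rng.uniform(norm.cdf(t[j]), 1.0))
    z = rng.standard_normal(d)
    x = a * W[j] + (z - (W[j] @ z) * W[j])
    S = np.sum(W @ x > t)                    # how many of the J events occur
    est[i] = mu_bar / S                      # unbiased single-sample estimate of mu

mu_hat = est.mean()
cv_hat = est.std(ddof=1) / (mu_hat * np.sqrt(n))
print(f"union bound {mu_bar:.3e}, estimate {mu_hat:.3e}, CV ~ {cv_hat:.2e}")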

Art Owen
Stanford University
[email protected]

Yury Maximov, Michael Chertkov
Los Alamos National Laboratory
[email protected], [email protected]

MS117

Input-output Uncertainty Comparisons for Optimization via Simulation

When input distributions to a simulation model are estimated from real-world data, they have estimation error causing input uncertainty in the simulation output. If an optimization via simulation (OvS) method that treats such input distributions as 'correct' is applied, then there is a risk of making a suboptimal decision for the real world, which we call input model risk. This talk addresses the discrete OvS (DOvS) problem of selecting the real-world optimum from among a finite number of systems when all of them share the same input distributions estimated from common input data. A DOvS procedure should reflect the limited resolution provided by the simulation model in distinguishing the real-world optimal solution from the others when the input data are finite. Our input-output uncertainty comparisons (IOU-C) procedure focuses on comparisons rather than selection: it provides simultaneous confidence intervals for each system's real-world mean and the best mean of the rest with any desired probability, while accounting for both stochastic and input uncertainty. To make the resolution as high as possible (intervals as short as possible) we exploit the common input data effect to reduce uncertainty in the estimated differences. Under mild conditions we prove that the IOU-C procedure provides the desired statistical guarantee asymptotically as the real-world sample size and simulation effort increase, but it is designed to be effective in finite samples.

Eunhye Song, Barry Nelson
Northwestern University
[email protected], [email protected]

MS117

Universal Convergence of Kriging

Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are computationally simple. However, in many applications one would like to predict for a range of untried points simultaneously. In this work we obtain some error bounds for the (simple) kriging predictor under the uniform metric. It works for a scattered set of input points in an arbitrary dimension, and also covers cases where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or the Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions.

Wenjia Wang
Georgia Institute of Technology
[email protected]

Rui Tuo
Chinese Academy of Sciences
[email protected]

C. F. Jeff Wu
Georgia Institute of Technology
[email protected]

MS117

Experimental Designs for Uncertainty Propagation and Robustness Analysis

For an input-output system, uncertainty propagation studies how variability in input variables (e.g., resulting from manufacturing tolerances) affects the variability in output variables. For real-world engineering problems, this input-output system is typically black-box and requires simulations to evaluate; in such situations, uncertainty propagation is performed by pushing forward a set of input settings through a simulator. Since each simulation run can be computationally or monetarily expensive, the goal is to reduce the number of runs needed to conduct this propagation. In this talk, we present a novel experimental design approach, which makes use of a designed set of inputs to perform this propagation using a limited number of samples. The proposed design is shown to perform better than Monte Carlo and quasi-Monte Carlo samples. We illustrate the effectiveness of the proposed designs in numerical simulations, and demonstrate their usefulness in several real-world applications such as robust parameter design of a product or process.

Roshan Vengazhiyil, Simon Mak
Georgia Institute of Technology
[email protected], [email protected]

MS118

Uncertainty Quantification of Transport in Heterogeneous Porous Media with the Iruq-Cv Method

We present an inverse regression-based approach for uncertainty quantification with application to solute transport in heterogeneous porous media. In the proposed approach we use sliced inverse regression (SIR) to identify a dimension reduction subspace of the random space that sufficiently captures the variability of the quantities of interest (QoIs) under consideration. This dimension reduction subspace is then employed to construct a surrogate model for the QoIs by means of Hermite polynomial expansions. Estimates of statistics of QoIs are computed by using the surrogate model as a control variate, reducing the error of the estimates. We consider solute transport problems in saturated media, where the hydraulic conductivity field is modeled as a second-order random field. We apply the proposed approach to the estimation of statistics of the maximum concentration and mean arrival time at various observation locations throughout the simulation domain.
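A minimal sketch of the control-variate step described above, assuming a cheap surrogate g(x) with a known mean; the functions and input distribution are placeholders, not the solute-transport model or the SIR/Hermite surrogate from the abstract.

# Control-variate estimator: reduce the variance of a Monte Carlo estimate of
# E[f(X)] using a correlated surrogate g(X) whose mean is known.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                      # "expensive" quantity of interest (toy stand-in)
    return np.exp(0.3 * x[:, 0]) + 0.1 * np.sin(x[:, 1])

def g(x):                      # surrogate, e.g. a low-order polynomial fit of f
    return 1.0 + 0.3 * x[:, 0]

mu_g = 1.0                     # E[g(X)] known analytically for this surrogate

n = 2_000
x = rng.standard_normal((n, 2))
fx, gx = f(x), g(x)

beta = np.cov(fx, gx)[0, 1] / np.var(gx)        # estimated optimal weight
plain = fx.mean()
cv = (fx - beta * (gx - mu_g)).mean()
print(f"plain MC: {plain:.4f}  control variate: {cv:.4f}")
print(f"variance reduction factor ~ {np.var(fx) / np.var(fx - beta*(gx - mu_g)):.1f}")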

Weixuan Li
Exxon Mobil
[email protected]

David A. Barajas-Solano, Guzel Tartakovsky, Alexander Tartakovsky
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected]

MS118

Efficient Stochastic Inversion Using Adjoint Models and Machine Learning

Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse of dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
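A hedged sketch of the KPCA dimension-reduction step discussed above, using scikit-learn's KernelPCA as a generic stand-in; the toy ensemble, kernel choices and the use of the approximate pre-image map are assumptions, and the LMCMC sampler itself is not shown.

# Kernel PCA maps a high-dimensional parameter field to a low-dimensional
# feature space; an MCMC sampler explores the feature space and candidates are
# lifted back to parameter space via the approximate inverse (pre-image) map.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)

# Toy ensemble of "parameter fields": 500 realizations of a 1000-dimensional
# nonlinearly correlated random vector (placeholder for a spatial field).
z = rng.standard_normal((500, 3))
fields = np.column_stack([np.sin(z @ rng.standard_normal(3)) for _ in range(1000)])

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
features = kpca.fit_transform(fields)        # low-dimensional feature coordinates

candidate = features[0] + 0.1 * rng.standard_normal(5)          # e.g. an MCMC proposal
field_candidate = kpca.inverse_transform(candidate.reshape(1, -1))
print(features.shape, field_candidate.shape)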

Charanraj Thimmisetty
Lawrence Livermore National Laboratory
[email protected]

Wenju Zhao
Department of Scientific Computing
Florida State University
[email protected]

Charles Tong, Joshua A. White, Chen Xiao
Lawrence Livermore National Laboratory
[email protected], [email protected], [email protected]

MS118

Compressive Sensing with Built-in Basis Adaptation for Reduced Homogeneous Chaos Expansions

Compressive sensing methods have recently gained increasing momentum in the context of polynomial chaos expansions, with an emphasis on their applicability to computationally expensive or high dimensional models. We present a novel heuristic algorithm that utilizes such methods in order to compute optimal adapted bases in Homogeneous Chaos spaces. This consists of a two-step optimization procedure that computes the coefficients and the input projection matrix of a low dimensional chaos expansion with respect to a rotated basis. We demonstrate the attractive features of our algorithm through several numerical examples including the application to Large-Eddy simulations of turbulent combustion in a HiFiRE scramjet engine.

Panagiotis Tsilifis
University of Southern California
[email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Mathematics
[email protected]

MS119

Random Sketching for Model Order Reduction of High Dimensional Systems

We propose a methodology for reducing the cost of the classical projection based Model Order Reduction algorithms while preserving the quality of the output. We show that to build the surrogate model it is enough to consider a sketch of the full problem, i.e. a set of low-dimensional projections, instead of operating on large matrices and vectors. A sketch is computed randomly and guaranteed to contain the key information with high probability. Our approach can be used for reducing both complexity and memory. The algorithms provided here are well suited for any modern computational environment. All the operations, except solving linear systems of equations, are embarrassingly parallel. Our version of Proper Orthogonal Decomposition can be computed on separate machines with a communication cost independent of the dimension of the full problem. The surrogate model can even be constructed in the so-called streaming environment, i.e., under extreme memory constraints. In addition, we provide an efficient way for estimating the residual norm. This approach is not only more efficient than the classical one but is also less sensitive to round-off errors. Finally, we validate the methodology numerically on a benchmark problem.
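A generic illustration of the kind of random sketching used above: a large snapshot matrix is compressed by a random projection before a POD basis is extracted. This is a textbook randomized range-finder sketch under assumed sizes, not the authors' algorithm.

# Sketched POD: work with Y = A * Omega (a low-dimensional random sketch of the
# snapshot matrix A) instead of A itself.
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snap, k, p = 20_000, 300, 10, 5     # full dimension, snapshots, basis size, oversampling

# Synthetic snapshot matrix of low effective rank (placeholder data).
snapshots = rng.standard_normal((n_dof, 8)) @ rng.standard_normal((8, n_snap))

omega = rng.standard_normal((n_snap, k + p))
Y = snapshots @ omega                         # random sketch of the range of A
Q, _ = np.linalg.qr(Y)                        # orthonormal basis of the sketch

B = Q.T @ snapshots                           # small projected problem
U_B, s, Vt = np.linalg.svd(B, full_matrices=False)
pod_basis = Q @ U_B[:, :k]                    # approximate POD modes

err = np.linalg.norm(snapshots - pod_basis @ (pod_basis.T @ snapshots)) \
      / np.linalg.norm(snapshots)
print(f"relative projection error of sketched POD basis: {err:.2e}")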

Oleg Balabanov
Ecole Centrale de Nantes
[email protected]

Anthony Nouy
Ecole Centrale Nantes
[email protected]

MS119

Gradient-free Active Subspace Techniques to Construct Surrogate Models Employed for Bayesian Inference

Physical models employed in neutronics, thermal hydraulics, fuels and chemistry codes typically have a moderate (10-100) or high (> 100) number of inputs – comprised of parameters, initial or boundary conditions, or exogenous forces – many of which are nonidentifiable or noninfluential in the sense that they are not uniquely determined by the data. We demonstrate the use of a novel initialization algorithm to approximate the gradients used to construct active subspaces for neutronics, thermal hydraulics, fuels and chemistry models with moderate input dimensions. We subsequently construct surrogate models on these active subspaces, which can be used for subsequent analysis, model calibration via Bayesian inference, or uncertainty quantification.
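A hedged sketch of the standard active-subspace construction that such surrogates are built on: an eigendecomposition of the gradient outer-product matrix C = E[∇f ∇fᵀ]. The toy model and finite-difference gradients below are assumptions; the authors' gradient-free initialization is not reproduced.

# Active subspace from sampled gradients (finite differences on a toy model).
import numpy as np

rng = np.random.default_rng(4)
d, n_samp, h = 12, 200, 1e-6

def f(x):                                  # placeholder model with a low-dimensional active subspace
    return np.sin(x[0] + 0.5 * x[3]) + (x[7] - 0.2 * x[0]) ** 2

def grad_fd(x):                            # forward finite-difference gradient
    g, fx = np.empty(d), f(x)
    for i in range(d):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

X = rng.uniform(-1, 1, size=(n_samp, d))
C = np.mean([np.outer(g, g) for g in map(grad_fd, X)], axis=0)

eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]            # descending order
print("largest spectral gap after eigenvalue:",
      int(np.argmax(eigval[:-1] / np.maximum(eigval[1:], 1e-16))) + 1)
W1 = eigvec[:, :2]   # active subspace basis; the surrogate is fit in y = W1.T @ x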

Kayla Coleman, Ralph Smith
North Carolina State University
[email protected], [email protected]

Brian Williams
Los Alamos National Laboratory
[email protected]

Max D. Morris
Iowa State University
[email protected]

MS119

Certified Reduced Basis Methods for Variational Data Assimilation

The reduced basis (RB) method is a certified MOR technique for the rapid and reliable solution of parametrised PDEs. Here, we focus on problems in optimal control and data assimilation. In particular, we present a certified RB approach to four dimensional variational data assimilation (4D-Var). Several works have explored the use of reduced order models as surrogates in a 4D-Var setting. We consider the case in which the behavior of the system is modelled by a parametrised parabolic PDE where the initial condition and model parameters (e.g., material or geometric properties) are unknown, and where the model itself may be imperfect. We consider (i) the standard strong-constraint 4D-Var approach, which uses the given observational data to estimate the unknown initial condition of the model, and (ii) the weak-constraint 4D-Var formulation, which additionally provides an estimate for the model error, and thus can deal with imperfect models. Since the model error is a distributed function in both space and time, the 4D-Var formulation generally leads to a large-scale optimization problem that must be solved for every given parameter instance. We introduce RB spaces for the state, adjoint, initial condition, and model error. We then build upon recent results on RB methods for optimal control problems to derive a posteriori error estimates for RB approximations to solutions of the 4D-Var problem. Numerical tests are conducted to verify the validity of the proposed approach.

Nicole Nellesen
RWTH Aachen University
[email protected]

Sebastien J. Boyaval
Laboratoire Saint-Venant
Ecole des Ponts ParisTech
[email protected]

Martin Grepl
RWTH Aachen University
[email protected]

Karen Veroy
AICES
RWTH Aachen University
[email protected]

MS119

Dictionary Measurement Selection for State Estimation with Reduced Models

Parametric PDEs of the general form

\[ \mathcal{P}(u, a) = 0 \]

are commonly used to describe many physical processes, where $\mathcal{P}$ is a differential operator, $a$ is a high-dimensional vector of parameters and $u$ is the unknown solution belonging to some Hilbert space $V$. Typically one observes $m$ linear measurements of $u(a)$ of the form $\ell_i(u) = \langle w_i, u \rangle$, $i = 1, \dots, m$, where $\ell_i \in V'$ and the $w_i$ are the Riesz representers, and we write $W_m = \mathrm{span}\{w_1, \dots, w_m\}$. The goal is to recover an approximation $u^*$ of $u$ from the measurements. The solutions $u(a)$ lie in a manifold within $V$ which we can approximate by a linear space $V_n$, where $n$ is of moderate dimension. The structure of the PDE ensures that for any $a$ the solution is never too far away from $V_n$, that is, $\mathrm{dist}(u(a), V_n) \le \varepsilon$. In this setting, the observed measurements and $V_n$ can be combined to produce an approximation $u^*$ of $u$ up to accuracy

\[ \|u - u^*\| \le \beta^{-1}(V_n, W_m)\, \varepsilon, \]

where

\[ \beta(V_n, W_m) := \inf_{v \in V_n} \frac{\|P_{W_m} v\|}{\|v\|} \]

plays the role of a stability constant. For a given $V_n$, one relevant objective is to guarantee that $\beta(V_n, W_m) \ge \gamma > 0$ with a number of measurements $m \ge n$ as small as possible. We present results in this direction when the measurement functionals $\ell_i$ belong to a complete dictionary.
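In a discretized setting the stability constant above has a simple linear-algebra form, sketched below under the assumption that orthonormal bases of V_n and W_m are available as matrices; the spaces used here are random placeholders, not the dictionaries studied in the talk.

# beta(V_n, W_m) = smallest singular value of W^T V when V and W hold
# orthonormal bases of V_n and W_m in a discretized inner-product space.
import numpy as np

rng = np.random.default_rng(5)
N, n, m = 500, 8, 20                       # discretized dimension, dim V_n, number of measurements

V, _ = np.linalg.qr(rng.standard_normal((N, n)))   # orthonormal basis of V_n
W, _ = np.linalg.qr(rng.standard_normal((N, m)))   # orthonormal basis of W_m (Riesz representers)

# inf over v in V_n of ||P_{W_m} v|| / ||v||  ->  smallest singular value of W^T V
beta = np.linalg.svd(W.T @ V, compute_uv=False).min()
print(f"beta(V_n, W_m) = {beta:.3f}")      # recovery error bound is eps / beta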

James A. Nichols
University of New South Wales, Australia
[email protected]

Olga Mula
CEREMADE (Paris Dauphine University)
[email protected]

Albert Cohen
Universite Pierre et Marie Curie, Paris, France
[email protected]


Peter Binev
University of South Carolina
[email protected]

MS120

Opportunities and Unsolved Problems in Quantifying Seemingly Random Behavior in Images of Shock Waves

Images of shock waves show irreproducible, stochastic and inhomogeneous behavior that is caused by material dynamics that are just beyond what state-of-the-art experiments can resolve. While statistical models describing the variation and uncertainty in these images could inform the underlying physics, much of that analysis is missing. Shock images typically have an assortment of noise, intensity variation, low contrast, blur, and other quality issues from experimental constraints, which cause most conventional analysis methods to fail. These data-rich images usually remain un-analyzed because few experimentalists have the expertise to develop new methods to accommodate these constraints. As an experimentalist, I will present a variety of different types of image data that will show the scope of statistical uncertainty shown in images of shock phenomena. This talk will give an overview of the unsolved problems in UQ that are currently in high demand for the shock physics community. I will touch on possibilities in digital image correlation, image segmentation, texture analysis, gradient methods, as well as model-based analysis, and other fields beyond.

Leora Dresselhaus-Cooper
Department of Physical Chemistry
[email protected]

MS120

A Locally Adapting Technique for Quantifying Error in Boundary Locations using Image Segmentation

To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image, but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and supported by the Site-Directed Research and Development Program.

Margaret C. Hock
The University of Alabama in Huntsville
[email protected]

MS120

Radially Symmetric Modeling for Large Scale Linear Inverse Problems in X-ray Imaging

In several applications of X-ray imaging, quantitative estimation incorporates an assumption of radial symmetry. The dimensionality of the quantity of interest is often reduced, e.g. a two-dimensional image of a radially symmetric object can be represented with its radial profile, but assumptions about the object must now be translated appropriately to reflect the geometry of the radial profile. With the Bayesian viewpoint, this can be implemented by modeling the prior with radial symmetry. In particular, this work shows how radial symmetry can be incorporated into Markov field prior distributions whose precision is a high dimensional approximation to a differential operator. We show how these types of priors can be used for estimation problems on actual X-ray radiographic data, such as estimating the point spread function of the system or recovering areal density through Abel inversion.

Kevin Joyce
Nevada National Security Site
Signals Processing and Applied Mathematics
[email protected]

MS120

Fast Experimental Designs for LARGE Linear Processes

Microbial biofilms are virtually everywhere, exploiting sources of chemical and photo energy wherever they can be found. A deeper understanding of their function can profoundly transform the way we understand and interact with our world. Microbiologists have unprecedented access to inexpensive molecular metagenomic, metatranscriptomic and imaging technologies that have revolutionized the potential to identify and characterize microbial community capability and activity. At the same time, increasingly sophisticated microprobe and imaging technology has enabled resolution, down to the microscale, of the environment in which microbial communities function. Community scale mathematical models of microbial communities are also available and capable of describing the interaction between these disparate biological, chemical and physical data. Our goal is to understand both the forward (omics to environmental) and backward (environment to omics) processes through efficient experimental designs that generate laboratory data that optimally inform the underlying mathematical model.

Al Parker
Department of Mathematical Sciences
Montana State University
[email protected]

MS121

Continuous Level Monte Carlo and Sample-adaptive Model Hierarchies

In this talk, we present Continuous Level Monte Carlo (CLMC), a generalisation of Multilevel Monte Carlo (MLMC) to a continuous framework where the level parameter is a continuous variable. This provides a natural framework to use sample-wise adaptive refinement strategies, with a goal-oriented error estimator as our new level parameter. We introduce a practical CLMC estimator (and algorithm) and prove a complexity theorem showing the same rate of complexity as MLMC. Also, we show that it is possible to make the CLMC estimator unbiased with respect to the true quantity of interest. Finally, we provide two numerical experiments which test the CLMC framework alongside a sample-wise adaptive refinement strategy, showing clear gains over a standard MLMC approach.

Gianluca Detommaso
University of Bath
[email protected]

Tim J. Dodwell
University of Exeter
College of Engineering, Mathematics & Physical Sciences
[email protected]

Robert Scheichl
University of Bath
[email protected]

MS121

An Efficient Algorithm for a Class of Stochastic Wave Propagation Models

We consider a class of frequency domain stochastic wave propagation models in heterogeneous media, with uncertainty in the output propagation quantities of interest (QoI) induced by random parameters describing the heterogeneity. We compute high-order approximations to stochastic moments of the QoI using an efficient high-dimensional parameter sampling technique.

Mahadevan Ganesh
Colorado School of Mines
[email protected]

MS121

Efficient Sampling from High-dimensional Distributions using Low-rank Tensor Surrogates

High-dimensional distributions are notoriously difficult to sample from, particularly in the context of PDE-constrained Bayesian inverse problems. In this talk, we will present general purpose samplers based on low-rank tensor surrogates in the tensor-train (TT) format, a methodology that has been exploited already for many years for scalable, high-dimensional function approximations in quantum chemistry. In the Bayesian context, the TT surrogate is built in a two stage process. First we build a surrogate of the entire PDE solution in the TT format, using a novel combination of alternating least squares and the TT cross algorithm. It exploits and preserves the block diagonal structure of the discretised operator in stochastic collocation schemes, requiring only independent PDE solutions at a few parameter values, thus allowing the use of existing high performance PDE solvers. In a second stage, we approximate the high-dimensional likelihood function also in TT format. Due to the particular structure of the TT surrogate, we can build an efficient inverse Rosenblatt (or cumulative) transform that only requires a sampling algorithm for one-dimensional conditionals. The overall computational cost of the sampler grows only linearly with the dimension. For sufficiently smooth prior distributions of the input random fields, the ranks required for accurate TT approximations are moderate, leading to significant computational gains.

Sergey Dolgov
University of Bath
Department of Mathematical Sciences
[email protected]

Colin Fox
University of Otago, New Zealand
[email protected]

Robert Scheichl
University of Bath
[email protected]

Karim Anaya-Izquierdo
University of Bath
Department of Mathematical Sciences
[email protected]

MS121

A High-performance Software Framework for Multilevel Uncertainty Quantification

A common technique for solving Bayesian inference problems is the Markov Chain Monte Carlo (MCMC) algorithm. However, in engineering applications the inference problem is typically constrained by partial differential equations (PDEs), requiring the costly numerical solution of a PDE in every single step of the chain. Many of the samples are subsequently discarded. Clearly, for large problems this leads to a massive computational effort. In analogy to multilevel PDE solvers, the Multilevel Markov Chain Monte Carlo (MLMCMC) method exploits the structure within the underlying PDE in order to carry out the inference at a fraction of the computational cost of classical methods. It restricts most of the sampling effort to easy-to-compute coarse approximations of the PDE, while requiring only a small number of realisations at full accuracy. In this talk we give an algorithmic view of the MLMCMC approach, as well as describing a new implementation that leverages both the high-performance PDE library DUNE (https://www.dune-project.org/) and a new extension to the MIT Uncertainty Quantification library MUQ (http://muq.mit.edu/). The former is an efficient modular library for solving large-scale PDEs in high performance computing environments, while the latter provides an easy-to-use framework to handle the statistical part of the method, including cutting-edge proposal distributions. We demonstrate the effectiveness of the new implementation on a number of applications.

Tim J. Dodwell
University of Exeter
College of Engineering, Mathematics & Physical Sciences
[email protected]

Ole Klein
Interdisciplinary Center for Scientific Computing
Heidelberg University
[email protected]

Robert Scheichl
University of Bath
[email protected]

Linus Seelinger
Universität Heidelberg
Institut für Wissenschaftliches Rechnen
[email protected]

MS122

Adaptive Tensor Methods for Forward and Inverse Problems

Abstract not available at time of publication.

Martin Eigel
WIAS Berlin
[email protected]

MS122

Low-rank Tensors for Stochastic Forward Problems

Abstract not available at time of publication.

Mike Espig
RWTH Aachen University
[email protected]

MS122

Sparse Spectral Bayesian Estimation of Nonlinear Mechanical Models

In this paper we study the uncertainty quantification in a functional approximation form of nonlinear mechanical models parametrised by material uncertainties. Departing from the classical optimisation point of view, we take a slightly different path by solving the problem in a Bayesian manner with the help of new spectral based sparse Kalman filter algorithms. The identification employs the low-rank structure of the solution and provides the confidence intervals on the estimated solution.

Bojana Rosic
TU Braunschweig
[email protected]

Hermann Matthies
Technische Universitaet Braunschweig
[email protected]

MS122

Bayesian Estimation for a Tomography Problem

We use Bayesian estimation for a tomography problem modeled by a partial differential equation, where randomness is due to the uncertainty in the quantity of interest. The Poisson-Boltzmann equation is the main model equation, solved on a bounded and convex domain, and applications include electric-impedance tomography as well as nanoscale field-effect sensors. The unknown quantity of interest is the permittivity describing the size, number, and location of target molecules or inclusions; the goal is to identify these parameters. We use the Metropolis-Hastings algorithm in the context of Markov-chain Monte-Carlo (MCMC) methods to estimate the posterior distribution of the size, number, and location of the inclusions as the quantity of interest in our model problem. The MCMC method is computationally expensive and requires the calculation of the model response for a large number of samples (at least tens or hundreds of thousands), for each of which the PDE model must be solved. Thus, an iterative MCMC solver helps to improve efficiency. This means we estimate the posterior distribution for the parameters of interest and then, using this distribution as the prior, we estimate another posterior in the next iteration. The trade-off between the number of iterations and the number of samples used in each iteration is discussed and results for a realistic problem are shown.

Leila Taghizadeh, Jose A. Morales Escalante, Benjamin Stadlbauer, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected], [email protected], [email protected]

MS123

Calibration and Multi-stage Emulation for Disaggregation and Complex Models

Uncertainty quantification methods are often useful for multi-scale problems. This talk will give an overview of some of the broad challenges in uncertainty quantification for multi-scale systems. Some of the broad challenges may include calibration of a large parameter space, development of a multi-stage emulator, handling model discrepancy, experimental design, and computational challenges. These challenges will be discussed using a couple of different applications. One is the disaggregation of a complex target, in this example to determine the chemical composition using laser induced breakdown spectroscopy (LIBS) measurements and output from a complex forward model. The second application is the calibration of a multi-scale solvent-based carbon capture model to obtain accurate predictions of the proportion of carbon captured.

K. Sham Bhat, Kary Myers, James Gattiker
Los Alamos National Laboratory
[email protected], [email protected], [email protected]

MS123

Parameter Estimation for System Submodels with Limited or Missing Data using a Data-free Inference Procedure

Predictive simulations of real-world systems often employ physics-based submodels whose parameters are constrained by comparison to experimental data. Since raw experimental data are typically unreported, the form in which summary statistics of the data, given as parameters and their uncertainties, are presented is crucial for enabling statistical analyses such as uncertainty quantification (UQ) and uncertainty propagation. In order to explore the joint parametric uncertainty structure for submodels in cases where supporting data is limited or missing, we construct a data inference procedure which employs the available information as constraints to explore a space of hypothetical noisy data consistent with these constraints. The algorithm involves nested Bayesian inferences on data and parameters respectively, employing a maximum entropy approach with Approximate Bayesian Computation (ABC) to enforce consistency through a data likelihood function. The algorithm delivers consistent noisy data sets and associated joint parameter densities, which can be pooled to arrive at a consensus density. This approach both facilitates UQ in the absence of data evidence, and also proposes a framework for data assimilation across separate experiments and their potentially varying reporting paradigms using a universal data-centric representation of experimental evidence and uncertainty.

Tiernan Casey
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, US
[email protected]

MS123

Dynamic Discrepancy: Intrusive Methods for Getting More Science into Industrial Models

Complex physical phenomena often require equally complex models to describe them. Incorporating these high-order models into large-scale models of industrial systems is a persistent challenge. This presentation will provide an introduction to dynamic discrepancy reduced modeling, in which relatively simple dynamic models arising from physical systems are outfitted with Gaussian process (GP) stochastic functions of the BSS-ANOVA type. Advantageous properties of the BSS-ANOVA GP render such models particularly suited for Bayesian calibration to data generated by experiment, by high-fidelity models or both. Selected applications will focus on the chemical engineering sector, to include carbon capture and catalytic steam reformation. The potential for machine learning through automated model building will be discussed.

David S. Mebane
Department of Mechanical and Aerospace Engineering
West Virginia University
[email protected]

MS124

Model Error Treatment in Data Assimilation for High-dimensional System - The Environmental Prediction Case

In this talk, I will first briefly describe the general issue of model error treatment in data assimilation (DA) for high-dimensional chaotic dynamics, such as those encountered in environmental prediction. Emphasis will be given to ensemble-based DA methods. A novel approach, in which model error is treated as a deterministic process correlated in time, will be discussed in the second part. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in DA, along with an approximation suitable for application to large numerical models. It will be shown how this deterministic description of the model error can be incorporated in sequential and variational DA procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. In the final part of the talk I will summarize methods for estimating the statistical inputs of the model error, in relation but not limited to the deterministic approach discussed before. A novel procedure based on combining the ensemble Kalman filter with expectation-maximization and Newton-Raphson maximum likelihood methods will be presented.

Alberto [email protected]

MS124

Impact of Model Fidelity on Bayesian Experimental Design

Accurate parameter estimation of a physical system is crucial to many engineering applications; this can be achieved in a stochastic manner through Bayesian inference. However, the available data might not be sufficient to accurately inform all model parameters. It is therefore desirable to design experiments, using a simulation code, that maximize the amount of information that can be extracted from the data. Bayesian experimental design is a novel strategy for accomplishing this goal. Reducing the computational cost, especially to evaluate the associated utility functions, is of critical importance and has attracted considerable interest in recent years. One area that has not been explored in depth is the impact of model inadequacy on the results of the experimental design. To this end, we test Bayesian experimental design on a hierarchy of nonlinear models for some test problems in fluid dynamics. We investigate the sensitivity of the optimal experimental design points as a function of model choice, and compare the resulting posterior parameter distributions.

Ohiremen Dibua
Mechanical Engineering
Stanford University
[email protected]

Wouter N. Edeling
Stanford University
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

MS124

Scalable Parallel Solution and Uncertainty Quantification Techniques for Variational Inference

Data assimilation (DA) uses physical measurements along with a physical model to estimate the parameters or state of a physical system. Solutions to DA problems using the variational approach require multiple evaluations of the associated cost function and gradient. In this work we present a scalable algorithm based on the augmented Lagrangian approach to solve the 4D-Var problem. The augmented Lagrangian framework facilitates parallel evaluation of the cost function and gradients. We show that this methodology is scalable with increasing problem size by applying it to the Lorenz-96 model and the shallow water model. We also develop a systematic framework to quantify the impact of observation and model errors on the solution to the 4D-Var problem. We demonstrate this a posteriori error estimation framework on the Weather Research and Forecasting (WRF) model.

Vishwas Rao
Argonne National Laboratory
[email protected]

Emil M. Constantinescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]

Vishwas [email protected]

MS124

Reducing Model Discrepancies in Turbulent Flow Simulations with Physics-informed Machine Learning

Turbulence modeling introduces large model-form uncertainties in the predictions. Recently, data-driven methods have been proposed as a promising alternative by using existing databases of experiments or high-fidelity simulations. In this talk, we present a physics-informed machine-learning-assisted turbulence modeling framework. This framework consists of three components: (1) reconstructing Reynolds stress modeling discrepancies based on DNS data via machine learning techniques, (2) assessing the prediction confidence a priori based on distance metrics in the mean flow feature space, and (3) propagating the predicted Reynolds stress field to the mean velocity field by using physics-informed stabilization. Several flows with massive separations are investigated to evaluate the performance of the proposed framework. Significant improvements over the baseline RANS simulation are observed for the Reynolds stress and the mean velocity fields.

Jinlong Wu
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

Carlos Michelen
Dept. of Aerospace and Ocean Engineering
Virginia Tech
unknown

Jian-Xun Wang
University of California, Berkeley
[email protected]

Heng Xiao
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

MS125

Bayesian Inversion of Volcano Monitoring Data using Physics-based Eruption Models

Monitoring data such as gas emission rates, ground surface deformation, and rock geochemistry are routinely collected at many volcanoes. Utilizing these evolving observations to characterize volcano source parameters (magma reservoir volume, pressure, composition, etc.) and associated uncertainties is critical for basic science and for hazard monitoring, forecasting, and mitigation, but remains a significant challenge. Here I discuss the use of physics-based eruption models in a Bayesian framework to jointly invert diverse monitoring data in order to resolve probabilistic volcanic source properties and, in limited cases, forecast future behavior. This approach has yielded otherwise difficult-to-obtain insights into volcanic systems, including magmatic volatile content and reservoir volume at Mount St. Helens (Washington), magma supply rate and composition at Kilauea Volcano (Hawaii), and even insight into the mantle regions which feed eruptions. Ongoing challenges include physical processes that are often poorly understood and occur over a wide range of spatial and temporal scales, volcano models that may be computationally expensive and exhibit multiple solutions for a given set of model parameters, and inadequately resolved data uncertainties.

Kyle Anderson
United States Geological Survey
[email protected]

MS125

Practical Heteroskedastic Gaussian Process Regression

We present a unified view of likelihood-based Gaussian process regression for simulation experiments exhibiting input-dependent noise. Replication plays an important role in that context; however, previous methods leveraging replicates have either ignored the computational savings that come from such a design, or have short-cut full likelihood-based inference to remain tractable. Starting with homoskedastic processes, we show how multiple applications of a well-known Woodbury identity facilitate inference for all parameters under the likelihood (without approximation), bypassing the typical full-data sized calculations. We then borrow a latent-variable idea from machine learning to address heteroskedasticity, adapting it to work within the same thrifty inferential framework, thereby simultaneously leveraging the computational and statistical efficiency of designs with replication. The result is an inferential scheme that can be characterized as a single objective function, complete with closed form derivatives, for rapid library-based optimization. Illustrations are provided, including real-world simulation experiments from manufacturing and the management of epidemics.
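A small numerical check of the Woodbury identity exploited above, under the assumption of a homoskedastic GP with a replicated design: the n x n covariance matrix of all runs can be inverted through k x k computations at the unique inputs. This is a generic sketch, not the authors' implementation.

# With n = k*reps replicated runs, K_full = U C U^T + g I, where U maps runs to
# unique inputs and C is the k x k kernel at unique inputs.  Woodbury gives
#   K_full^{-1} = I/g - U (C^{-1} + (reps/g) I)^{-1} U^T / g^2.
import numpy as np

rng = np.random.default_rng(6)
k, reps = 30, 20
n = k * reps
x_unique = np.sort(rng.uniform(0, 1, k))
x_full = np.repeat(x_unique, reps)

def kern(a, b, ell=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

g = 0.5                                          # noise (nugget) variance
K_full = kern(x_full, x_full) + g * np.eye(n)

U = np.repeat(np.eye(k), reps, axis=0)
C = kern(x_unique, x_unique)
inner = np.linalg.solve(np.eye(k) + (reps / g) * C, C)   # = (C^{-1} + (reps/g) I)^{-1}
K_inv_woodbury = np.eye(n) / g - (U @ inner @ U.T) / g**2

print(np.allclose(K_inv_woodbury, np.linalg.inv(K_full)))   # True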

Robert Gramacy
Virginia Tech
[email protected]

Mickael Binois
University of Chicago
[email protected]

MS125

An Improved Approach to Imperfect Computer Model Calibration and Prediction

We consider the problem of calibrating inexact computer models using experimental data. To compensate for the misspecification of the computer model, a discrepancy function is usually included and modeled via a Gaussian stochastic process (GaSP), leading to better results of prediction. The calibration parameters in the computer model, however, sometimes become unidentifiable in the GaSP model, and the calibrated computer model fits the experimental data poorly as a consequence. In this work, we propose the scaled Gaussian stochastic process (S-GaSP), a novel stochastic process for calibration and prediction. This new approach bridges the gap between two predominant methods, namely the L2 calibration and GaSP calibration. A computationally feasible approach is introduced for this new model under the Bayesian paradigm. The S-GaSP model not only provides a general framework for calibration, but also enables the computer model to predict well regardless of the discrepancy function. Numerical examples are also provided to illustrate the connections and differences between this new model and other previous approaches.

Mengyang Gu
Department of Applied Mathematics & Statistics
Johns Hopkins University
[email protected]

MS125

Multi-objective Sequential Design for Hazard Mapping

Output from computer simulations of geophysical mass flows often has a complicated, topography dependent spatial footprint. If we consider the hazard threat at one map point, emulators can be used to aid in identifying key regions of input space in which to focus sampling for calculating the probability of the hazard threat and to quantify uncertainties of those calculations. You could imagine repeating this process for another map point and identifying a very different important region of input space for that new location. This leads to a challenge if we need to produce maps quickly and/or have a limited compute budget. Effectively, if we seek to add additional design points or batches of design points, there will be multiple objectives to satisfy corresponding to different regions of the map. We will present a multi-objective design strategy and demonstrate it on probabilistic hazard mapping for the Long Valley volcanic region.

Elaine Spiller
Marquette University
[email protected]

MS126

Fitting Dynamic Models to Epidemic Outbreaks with Quantified Uncertainty: Parameter Uncertainty, Identifiability, and Forecasts

Mathematical models provide a quantitative framework with which scientists can assess hypotheses on the potential underlying mechanisms that explain patterns in observed data at different spatial and temporal scales, generate estimates of key kinetic parameters, assess the impact of interventions, optimize the impact of control strategies, and generate forecasts. We review and illustrate a simple data assimilation framework for calibrating mathematical models based on ordinary differential equation models using time series data describing the temporal progression of case counts relating, for instance, to population growth or infectious disease transmission dynamics. In contrast to Bayesian estimation approaches that always raise the question of how to set priors for the parameters, this frequentist approach relies on modeling the error structure in the data. We discuss issues related to parameter identifiability, uncertainty quantification and propagation as well as model performance and forecasts along examples based on phenomenological and mechanistic models parameterized using simulated and real datasets.

Gerardo Chowell
School of Public Health
Georgia State University
[email protected]

MS126

Implications of Uncertainty in Parameter Estimation for a Biomathematical Based Response Metric for Glioblastoma

Glioblastomas, a type of lethal primary tumor, are known for their heterogeneity and invasiveness. Over the last decade, a growing literature has been developed demonstrating the clinical relevance of a biomathematical model, the Proliferation-Invasion (PI) model, of glioblastoma growth in giving patient-specific insight into individual tumor kinetics. Of specific interest to this work is the development of a treatment response metric, the Days Gained (DG) metric, based on individual tumor kinetics estimated through the PI model using segmented volumes of hyperintense regions on T1-weighted gadolinium-enhanced and T2-weighted magnetic resonance images from two time points. This metric was shown to be more prognostic of outcome than standard imaging response metrics for a cutoff of DG=91. The original papers considered a cohort of 63 patients, but did not account for the various types of uncertainty in the calculation of the DG metric, leaving the robustness of this cutoff and metric in question. In this work, we harness the Bayesian framework to consider two sources of uncertainty coming from image acquisition and the interobserver error in image segmentation. Our results show that while the uncertainty can result in relatively large confidence intervals for the DG metric in individual patients, the previously suggested cutoff of 91 days remains robust. This supports the use of the single value of DG as a clinical tool.

Andrea Hawkins-Daarud
Mayo Clinic Arizona
[email protected]

Susan Massey
Mayo Clinic Arizona
Precision Neurotherapeutics Innovation Program
[email protected]

MS126

Data Assimilation and Parameter Identification in a Dynamical Model of Cancer Treatment

This talk provides an overview of some of the issues that arise when trying to validate a biomathematical model from data. I will describe some ordinary differential equation models of prostate cancer growth and its treatment by hormone (androgen deprivation) therapy. Although such therapy usually is quite effective initially, the tumor inevitably evolves resistance. The underlying biological mechanisms of treatment resistance are complex, and researchers have proposed various simplifications in attempts to derive tractable mathematical models of treatment response. One objective of such modeling efforts is to determine, based on time series data of tumor markers such as serum concentrations of prostate-specific antigen, whether the tumor has evolved resistance (or soon will do so). Attempts to answer this question involve matters of model and measurement error and the identifiability of key model parameters.

Eric J. Kostelich
Arizona State University
School of Mathematical and Statistical Sciences
[email protected]

Javier Baez
School of Mathematical and Statistical Sciences
Arizona State University
[email protected]


Yang Kuang
Arizona State University
School of Mathematical and Statistical Sciences
[email protected]

MS126

Sub-exponential Growth for Modeling Plague: A Case Study of the 1904 Bombay Plague

Abstract not available at time of publication.

Tin Phan
Arizona State University
School of Mathematical Sciences
[email protected]

MS127

Applying Quasi-Monte Carlo to an Elliptic Eigenvalue Problem with Stochastic Coefficients

In this talk we study an elliptic eigenvalue problem with a random coefficient that can be parametrised by infinitely-many stochastic parameters. The physical motivation is the criticality problem for a nuclear reactor: in steady state the fission reaction can be modelled by an elliptic eigenvalue problem, and the smallest eigenvalue provides a measure of how close the reaction is to equilibrium – in terms of production/absorption of neutrons. The coefficients are allowed to be random to model the uncertainty of the composition of materials inside the reactor, e.g., the control rods, reactor structure, fuel rods etc. The randomness in the coefficient also results in randomness in the eigenvalues and corresponding eigenfunctions. As such, our quantity of interest is the expected value, with respect to the stochastic parameters, of the smallest eigenvalue, which we formulate as an integral over the infinite-dimensional parameter domain. Our approximation involves three steps: truncating the stochastic dimension, discretising the spatial domain using finite elements, and approximating the now finite but still high-dimensional integral. To approximate the high-dimensional integral we use quasi-Monte Carlo methods. We show that the minimal eigenvalue belongs to the spaces required for QMC theory, outline the approximation algorithm and provide numerical results.
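A hedged sketch of the QMC step in the three-step approximation above: a randomly shifted rank-1 lattice rule for an s-dimensional integral over the unit cube. The generating vector and integrand below are placeholders (in practice a good generating vector would come from a component-by-component construction, and the integrand would be the finite-element eigenvalue).

# Randomly shifted rank-1 lattice rule: x_i = frac(i*z/N + shift).
import numpy as np

rng = np.random.default_rng(7)
s, N, n_shifts = 16, 2**10, 8                  # truncated dimension, points, random shifts

z = rng.integers(1, N, size=s) | 1             # assumed generating vector (odd entries)

def integrand(y):                              # stand-in for the FE-computed eigenvalue
    return np.prod(1.0 + (y - 0.5) / (1 + np.arange(s))**2, axis=1)

estimates = []
for _ in range(n_shifts):
    shift = rng.uniform(size=s)
    i = np.arange(N)[:, None]
    points = np.mod(i * z[None, :] / N + shift, 1.0)   # shifted lattice points in [0,1)^s
    estimates.append(integrand(points).mean())

estimates = np.array(estimates)
print(f"QMC estimate {estimates.mean():.6f} "
      f"+/- {estimates.std(ddof=1)/np.sqrt(n_shifts):.2e}")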

Alexander D. Gilbert
School of Mathematics and Statistics, UNSW Australia
[email protected]

Ivan G. Graham
University of Bath
[email protected]

Frances Y. Kuo
School of Mathematics and Statistics
University of New South Wales
[email protected]

Robert Scheichl
University of Bath
[email protected]

Ian H. Sloan
University of New South Wales
School of Mathematics and Stat
[email protected]

MS127

Recursive Numerical Integration for Lattice Systems with Low-order Couplings

We will discuss recursive numerical integration as an efficient method to compute discretized path integrals in systems with low-order couplings. Low-order couplings arise in many physical systems as a consequence of the locality exhibited by the underlying physical model. As such, discretizing space-time in a physical theory with locality implies that any given point of that discretized space-time can only be directly influenced by its nearest neighbors, and we will show that recursive numerical integration excels in these situations, leading to exponential error scaling.

Tobias Hartung
King's College London, UK
[email protected]

MS127

New Efficient High Dimensional Integration Rules for Quantum Field Theoretical Models

We present an investigation of a fully symmetric integration over compact groups in the case of a 0+1 dimensional system and the discretised quantum mechanical model, the topological rotor. While in the case of the 0+1 dimensional model the method works perfectly up to arbitrary machine precision, for the quantum mechanical system we were not able to find an improvement over conventional Markov chain Monte Carlo methods. We will give an explanation for this finding and suggest avenues to overcome this failure.

Karl Jansen
Deutsches [email protected]

MS127

Quasi-Monte Carlo Sampling for the Schrödinger Equation

We approximate the solution of the time-dependent Schrödinger equation with a two step procedure. We sample the physical domain using a rank-1 lattice rule and apply Strang splitting for time-stepping. We derive theoretical results which show advantages and advances over previous methods, which are also demonstrated by numerical results. This is joint work with Yuya Suzuki and Gowri Suryanarayana.

Dirk Nuyens, Yuya Suzuki
Department of Computer Science
KU Leuven
[email protected], [email protected]

Gowri Suryanarayana
Energy Ville Genk & Vito Mol
[email protected]

MS128

UQTk - A Flexible Python/C++ Toolkit for Uncertainty Quantification

The UQ Toolkit (UQTk) is a collection of libraries, tools and apps for the quantification of uncertainty in numerical model predictions. UQTk offers intrusive and non-intrusive methods for forward uncertainty propagation, tools for sensitivity analysis, sparse surrogate construction, and Bayesian inference. The core libraries are implemented in C++ but a Python interface is available for easy prototyping and incorporation in UQ workflows. We will present the core UQTk design philosophy and illustrate key capabilities.

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]

Khachik Sargsyan, Cosmin Safta, Kenny Chowdhary
Sandia National Laboratories
[email protected], [email protected], [email protected]

MS128

Chaospy: A Pythonic Approach to Polynomial Chaos Expansion

Polynomial chaos expansion is a popular method for extracting statistical metrics from solutions of forward problems with well characterized variability in the model parameters. Chaospy is a Python toolbox specifically developed to implement polynomial chaos expansions using the high modularity and expressiveness of Python. The toolbox allows for implementation of an expansion using only a handful of lines of code. Moreover, Chaospy is a development foundry that allows for easy experimentation with custom features beyond the scope of a standard implementation. To demonstrate the capabilities of the Chaospy toolbox, this talk will demonstrate how polynomial chaos expansions can be applied to an intrusive Galerkin problem. Intrusive problem formulation often requires the reformulation and solving of a set of governing equations, and does not conform well to standardized tools. As such, the number of available tools to solve these types of problems is limited.
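As a hedged illustration of the "handful of lines" claim above, the following shows Chaospy's non-intrusive pseudo-spectral workflow on a placeholder model (the intrusive Galerkin use discussed in the talk requires reformulating the governing equations and is not shown). Function names follow older Chaospy releases and may differ between versions.

# Non-intrusive polynomial chaos expansion with Chaospy (API as of older releases).
import numpy as np
import chaospy as cp

def model(q):                      # placeholder forward model, e.g. u(t=1; q)
    return q[0] * np.exp(-q[1])

dist = cp.J(cp.Uniform(0.5, 1.5), cp.Normal(1.0, 0.1))   # joint input distribution

expansion = cp.orth_ttr(3, dist)                          # orthogonal polynomial basis
nodes, weights = cp.generate_quadrature(4, dist)          # quadrature rule for dist
evals = [model(node) for node in nodes.T]                 # forward model runs
surrogate = cp.fit_quadrature(expansion, nodes, weights, evals)

print("mean ", cp.E(surrogate, dist))
print("stdev", cp.Std(surrogate, dist))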

Jonathan Feinberg
Simula Research Laboratory
University of Oslo
[email protected]

MS128

Prediction and Reduction of Runtime in UQ Simulations on HPC Systems using Chaospy

For non-intrusive UQ simulations, many black box model evaluations have to be performed. If the runtime of a model is sensitive to the uncertain input parameters, the runtime may vary significantly for each model run. In this talk, we present a novel idea of building a surrogate of the runtime of a model. With this surrogate we can predict and reduce the total runtime of UQ simulations on HPC systems. We examined this with Chaospy.

Florian Kunzner
Technical University Munich (TUM)
[email protected]

Tobias Neckel
Technical University of Munich
[email protected]

Hans-Joachim Bungartz
Technical University of Munich, Department of Informatics
Chair of Scientific Computing in Computer Science
[email protected]

MS128

A Standard for Algorithms of Numerical Experiments: Proposal, Implementation and Feedback

The GdR MASCOT-NUM holds exchanges between practitioners and designers in many fields of numerical experiments: sensitivity analysis, optimisation, inversion, etc. As many software platforms are now available for industrial usage, it will become efficient to side this growing usage with some standardisation. Indeed, our critical point stands in the transfer of scientific developments to industrial applications, so the greatest improvement should be awaited from a leaner transfer process. We propose such a preliminary standard for the development of numerical experiments (for many common languages), carrying minimal constraints for implementations, considering that this task is still often considered as optional and costly by designers of such algorithms. We then demonstrate immediate benefits on some basic examples: automated testing, benchmarking and smooth platform integration. We will focus on some supporting online tools that also widely improve end user advertising and feedback.

Yann Richet
Institut de Radioprotection et de Sûreté Nucléaire
[email protected]

PP1

Modeling Nonstationary Response Surfaces with Bayesian Warped Gaussian Processes

A common challenge in constructing a surrogate model for uncertainty propagation is that the response exhibits nonstationary behavior. A popular method for overcoming this is to decompose the surrogate into many local models via adaptive refinement. However, such models quickly run into difficulties when either the dimensionality of the input space is high or the nature of the nonstationarity is complicated. In such cases, a nonlocal, nonparametric method seems appropriate. We show how the framework of the Bayesian warped Gaussian process [Lazaro-Gredilla, NIPS 12] can be specially applied to problems in UQ to overcome this issue. In the first GP layer, the stochastic variables parameterizing the uncertain inputs to the simulator are propagated into an intermediate, warped latent space. From here, one appends the spatiotemporal input data from the simulator runs to learn a mapping to the corresponding simulator outputs. Joint training of the two layers ensures that the transformed latent space is amenable to the stationarity assumption of the second GP layer, and the nonparametric nature of the layers makes the warping layer effective with complicated nonstationarities. Furthermore, we exploit separability properties in the latent layer, yielding a sparse GP model with efficient Kronecker product decompositions, allowing for GP modeling of millions of data points. The approach is demonstrated on dynamical ODEs and nonlinear elliptic PDEs with nonstationary features.

Steven Atkinson, Nicholas Zabaras
University of Notre Dame


[email protected], [email protected]

PP1

Bayesian Optimization with Variable Selection

Bayesian optimization techniques have been successfully applied in various fields of engineering. Generally, they are used in the case of a moderate number of variables. Therefore, there is great interest in Bayesian optimization algorithms suitable for high-dimensional problems. Here, we consider the high-dimensional case with a moderate number of influential variables. The basic idea consists in filtering out the minor variables in order to enhance the optimization. To do so, new criteria are introduced within the Gaussian process regression model together with a specific set of stationary kernels. The proposed algorithm combines optimization and variable selection. Points are generated so as to accelerate the optimization and refine the variable selection. The parameter split is challenged sequentially by means of a new criterion called doubt. The algorithm is tested and compared with other methods such as the classical EGO. The results show the efficiency of the algorithm for high-dimensional problems when the intrinsic dimensionality is moderate.

Malek Ben Salem
Mines de Saint-Etienne
[email protected]

Francois Bachoc
Institut de Mathematiques de Toulouse
[email protected]

Fabrice Gamboa
Institut de Mathematiques de Toulouse, AOC Project
University of Toulouse
[email protected]

Lionel [email protected]

Olivier Roustant
Mines Saint-Etienne
[email protected]

PP1

Bayesian Inference and Statistical Modeling with TransportMaps

TransportMaps is a Python module for the solution of Bayesian inference problems through measure transport and variational methods. The core methodology consists in the creation of a mapping, of adjustable complexity, between a tractable distribution and the intractable distribution of interest. This map can be used directly for the generation of quadrature rules and/or independent samples with respect to the intractable distribution. Alternatively, the map can be used to precondition other Bayesian inference methods, such as Markov chain Monte Carlo. The software provides a number of algorithms built on top of this framework: adaptive construction of transport maps [Bigoni D., et al., On the computation of monotone transports], sequential inference using low-dimensional couplings [Spantini, A., et al. (2017), Inference via low-dimensional couplings, http://arxiv.org/abs/1703.06131], sparsity identification of non-Gaussian graphical models [Morrison, R., et al. (2017), Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting, NIPS 2017], and approximate nonlinear filtering [Spantini, A. (2017), PhD thesis: On the low dimensional structure of Bayesian inference]. Additionally, the software provides a flexible modeling language for statistical problems, allowing for its integration with existing software. Software and documentation: http://transportmaps.mit.edu
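Independently of the TransportMaps API, the core idea of measure transport can be sketched in a few lines: parameterize a monotone map from a tractable reference to the intractable target and minimize a sample estimate of the KL divergence. The cubic map, the unnormalized target, and all variable names below are illustrative assumptions, not the package's interface.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative unnormalized 1D target density (log scale).
    def log_target(x):
        return -0.5 * (x - 1.0) ** 2 - 0.1 * x ** 4

    # Monotone map from a standard-normal reference: T(z) = a + e^b z + e^c z^3,
    # so that T'(z) > 0 for all z.
    def T(p, z):
        return p[0] + np.exp(p[1]) * z + np.exp(p[2]) * z ** 3

    def dTdz(p, z):
        return np.exp(p[1]) + 3.0 * np.exp(p[2]) * z ** 2

    rng = np.random.default_rng(0)
    z = rng.standard_normal(2000)        # samples from the tractable reference

    def kl_objective(p):
        # Sample estimate of KL(T#rho || pi), up to an additive constant.
        return np.mean(-log_target(T(p, z)) - np.log(dTdz(p, z)))

    result = minimize(kl_objective, x0=np.zeros(3), method="Nelder-Mead")
    samples = T(result.x, z)             # approximate samples from the target
    print("approximate target mean:", samples.mean())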

Daniele Bigoni, Alessio Spantini, Rebecca Morrison
Massachusetts Institute of Technology
[email protected], [email protected], [email protected]

Ricardo [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

PP1

Simulation-based Machine Learning: An Application to Structural Health Monitoring

The continuous development of computational models and experimental engineering has led to the development of data assimilation methodologies. Among the applications of data assimilation, we focus on structural health monitoring, which refers to damage detection strategies applied to aerospace, civil or mechanical engineering. By using a data-based approach we make risk-based decisions in real time. This approach builds a mathematical model for a digital twin, which is an approximation of the real structure of interest, to create a dataset of all possible healthy states. This synthetic dataset, composed of selected features, is then used to train a classifier, which can assess the state of damage of the structure based on experimental measurements. We explore different machine learning algorithms with a particular focus on a one-class support vector machine to detect anomalies and artificial neural networks to predict real-valued quantities of interest. The parametric mathematical model is perturbed to include natural variations of some physical and geometrical parameters. The latter are obtained via a pre-computed parameter estimation using a Bayesian approach. Because of the large parameter space, a model order reduction strategy is exploited to reduce the computational burden. We apply this strategy to the two-dimensional acoustic-elastic wave equation and show the effectiveness of the method for detecting cracks and monitoring the damage state of the system.

Caterina Bigoni, Jan S. [email protected], [email protected]

PP1

Efficient Uncertainty Propagation of Physics-based Nondestructive Measurement Simulations using Sparse Sampling and Stochastic Expansions

The practice of determining the probability of defect detection for a nondestructive testing (NDT) inspection through use of computational measurement simulation is recognized for its potential cost savings and improved accuracy. However, measurement simulation covering the full range of possible random input parameters most often represents a prohibitively large computational task, particularly when using time-consuming physics-based numerical simulations. This work presents the application of efficient uncertainty propagation of physics-based NDT simulations using sparse sampling and stochastic expansions. In particular, polynomial chaos expansions (PCE) are used to construct fast surrogate models of the computational simulations and perform the uncertainty propagation. More specifically, three types of PCE models are used, namely quadrature, ordinary least squares, and least-angle regression (sparse). The approach is demonstrated on model-assisted probability of detection of a spherical void defect in a quartz block using ultrasonic testing simulation with a focused transducer and three uncertain input parameters. The results are compared with direct Monte Carlo sampling (MCS), as well as with MCS combined with deterministic Kriging surrogate models. For this case, the PCE methods show around three orders of magnitude faster convergence of the statistical moments than direct MCS, and need hundreds fewer samples than Kriging to reach the same level of modeling accuracy.

Xiasong Du, Leifur Leifsson
Iowa State University
Department of Aerospace Engineering
[email protected], [email protected]

Jiming Song, William Meeker, Ronald Roberts
Iowa State University
[email protected], [email protected], [email protected]

PP1

Sparse Pseudo-spectral Projections in Linear Gyrokinetics

The simulation of micro-turbulence in plasma fusion is paramount for understanding the confinement properties of fusion plasmas with magnetic fields. It is driven by the free energy provided by the unavoidably steep plasma temperature and density gradients. Unfortunately, the measurement of the latter - as well as of further decisive physics parameters affecting the underlying micro-instabilities - is subject to uncertainties, which requires an uncertainty quantification framework. Here, we employ the established plasma micro-turbulence simulation code GENE (genecode.org) and restrict ourselves to linear gyrokinetic eigenvalue problems in 5D phase space, taking into account electrons and deuterium ions. Even without the nonlinear terms, the computational requirements are enormous, and given the large number of uncertain parameters, quantifying uncertainty remains challenging. To overcome these issues, we employ adaptive pseudo-spectral projections constructed on Leja sequences. We test the approach in two test cases: a modified benchmark case, in which we consider two or seven stochastic inputs, and a real-world scenario having 11 stochastic inputs. Our results show that sparse pseudo-spectral projections are suitable for overcoming the challenges of quantifying uncertainty in linear gyrokinetics. Furthermore, the total Sobol indices for global sensitivity analysis show that, out of all stochastic inputs, the two temperature gradients are the most relevant.

Ionut-Gabriel Farcas
Technical University of Munich
Department of Scientific Computing in Computer Science
[email protected]

Tobias Goerler
Max Planck Institute for Plasma Physics

Tokamak Theory Division
[email protected]

Tobias Neckel
Technical University of Munich
[email protected]

Hans-Joachim Bungartz
Technical University of Munich, Department of Informatics
Chair of Scientific Computing in Computer Science
[email protected]

PP1

Comparing Two Dimension Reduction Techniques

When working with high-dimensional models, it is often useful to create a surrogate model of lower dimension. With a surrogate model in hand, we may be able to compute quantities of interest more efficiently. Two dimension reduction techniques, Principal Component Analysis (PCA) and Active Subspace analysis, will each be used to create a surrogate model which approximates the action of a high-dimensional algebraic model. At their core, PCA and Active Subspace analysis can be roughly described as eigenvalue analysis of a covariance matrix. The example presented is intended to highlight the subtle differences, along with the pros and cons, of each approach.
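A minimal numpy sketch of the contrast described above, using an illustrative quadratic model (all names and numbers are assumptions): PCA looks only at the input covariance, while the active subspace is built from gradients of the quantity of interest.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 10                                  # input dimension

    # Illustrative model f(x) = 0.5 x^T A x, so grad f(x) = A x; A is built so
    # that f varies strongly along only a few directions.
    eigvals = np.array([10.0, 5.0, 1e-2] + [1e-4] * (m - 3))
    Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    A = Q @ np.diag(eigvals) @ Q.T

    X = rng.standard_normal((2000, m))      # input samples
    G = X @ A                               # gradients at the samples

    # PCA: eigenvectors of the input covariance (the model is never consulted).
    pca_vals, pca_vecs = np.linalg.eigh(np.cov(X, rowvar=False))

    # Active subspace: eigenvectors of C = E[grad f grad f^T] (model-aware).
    as_vals, as_vecs = np.linalg.eigh(G.T @ G / len(X))

    # With standard-normal inputs, PCA sees no preferred direction, while the
    # active-subspace spectrum exposes the few directions along which f varies.
    print("PCA eigenvalues (descending):", np.round(pca_vals[::-1][:4], 2))
    print("AS  eigenvalues (descending):", np.round(as_vals[::-1][:4], 2))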

Jordan R. Hall
University of Colorado Denver
Department of Mathematical and Statistical Sciences
[email protected]

PP1

Kunzel Model and Non-Intrusive Inverse Problem

In specific fields of research such as preservation of historical structures, medical imaging, material science, geophysics and others, it is of particular interest to perform only non-intrusive boundary measurements. The main idea is to obtain comprehensive information about material properties inside the domain under consideration while keeping the test sample intact. The forward model is represented by the Kunzel model with a finite element discretization, and parameters are subsequently recovered using a modified Calderon problem principle, numerically solved by a regularized Gauss-Newton method. We provide a basic framework, implementation details, and modifications of the general constraints originally derived for a standard setup of the Calderon problem. The proposed model setup is numerically verified for various domains, load conditions and material field distributions.

Jan Havelka, Jan Sykora, Anna Kucerova
Czech Technical University in Prague
[email protected], [email protected], [email protected]

PP1

Optimal Kernel-based Dynamic Mode Decomposition

This work focuses on the reduced modeling of a dynamical system based on a finite approximation of the Koopman operator. The reduced model is obtained by determining the so-called extended dynamic mode decomposition (EDMD). This decomposition characterises the eigenstructure of the solution of a least-squares problem subject to low-rank constraints. Because of the too large dimension of the state space, algorithms solving this problem need to rely on kernel-based methods. However, the state-of-the-art kernel-based algorithm known as K-DMD provides a sub-optimal solution to this problem, in the sense that it relies on crude approximations and on many restrictive assumptions. The purpose of this work is to propose an optimal kernel-based algorithm able to solve exactly this low-rank approximation problem in a general setting.

Patrick Heas
Inria Centre Rennes - Bretagne Atlantique
[email protected]

Cedric Herzet
Inria Centre Rennes - Bretagne Atlantique
[email protected]

PP1

Heterogeneous Material Model Calibration using Stochastic Inversion

In order to calibrate a heterogeneous material model from noisy experimental data, the corresponding uncertainties have to be taken into account in a proper way. The underlying step is to distinguish aleatory from epistemic uncertainties and treat these two types of uncertainties separately. The goal of the identification process is to quantify the aleatory uncertainties expressing the inherent randomness of heterogeneous material properties, while the epistemic uncertainties are reduced with any new measurement. From this perspective, the calibration of a heterogeneous material model is a stochastic inverse problem which can be solved with the help of Bayesian inference.

Eliska Janouchova, Anna Kucerova
Czech Technical University in Prague
[email protected], [email protected]

PP1

Bootstrap Stochastic Approximation Monte Carlo Algorithms

Markov chain Monte Carlo (MCMC) simulation methods have been widely used as a standard tool in Bayesian statistics in order to facilitate inference. Nowadays, many real-world applications involve large data sets with complex structures and require complex statistical models to account for these structures. We develop the bootstrap stochastic approximation Monte Carlo algorithm, which makes the analysis of big data sets feasible by avoiding repeated scans of the whole data set and mitigates the local trapping problems that MCMC algorithms often encounter. Hence our algorithm is suitable for addressing problems with both large data sets and complex statistical models. The performance of the algorithm will be assessed in a simulation study. The algorithm will be implemented to train a surrogate model in a real-world problem with big data.

Georgios Karagiannis
Purdue University
[email protected]

PP1

Slow Scale Split Step Tau Leap Method for Stiff Stochastic Chemical Systems

Tau-leaping is a family of algorithms for the approximate simulation of discrete-state continuous-time Markov chains. The motivation for the development of such methods can be found, for instance, in the fields of chemical kinetics and systems biology. It is well known that the dynamical behavior of biochemical systems is often intrinsically stiff, representing a serious challenge for their numerical approximation. The naive extension of stiff deterministic solvers to stochastic integration usually yields numerical solutions with either impractically large relaxation times or incorrectly resolved covariance. We propose a novel splitting heuristic which allows one to resolve these issues. The proposed numerical integrator takes advantage of the special structure of linear systems, with explicitly available equations for the mean and the covariance, which we use to calibrate the parameters of the scheme. It is shown that the method is able to reproduce the exact mean and variance of the linear scalar test equation and has very good accuracy for arbitrarily stiff systems, at least in the linear case. Numerical examples for both linear and nonlinear systems are also provided, and the obtained results confirm the efficiency of the considered splitting approach.
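For reference, the explicit tau-leap update that methods of this family build on can be sketched as follows; the reversible isomerization network, rates, and step size are illustrative, and this is the plain explicit scheme rather than the slow-scale split-step integrator proposed in the poster.

    import numpy as np

    def tau_leap_step(x, tau, stoich, propensities, rng):
        # One explicit tau-leap step: reaction j fires Poisson(a_j(x)*tau) times.
        a = propensities(x)                   # propensities at the current state
        k = rng.poisson(a * tau)              # number of firings of each channel
        return np.maximum(x + stoich @ k, 0)  # update copy numbers, clipped at zero

    # Illustrative reversible isomerization A <-> B with rates c1, c2.
    stoich = np.array([[-1,  1],
                       [ 1, -1]])             # rows: species, columns: reactions
    c1, c2 = 10.0, 1.0
    propensities = lambda x: np.array([c1 * x[0], c2 * x[1]])

    rng = np.random.default_rng(0)
    x = np.array([1000, 0])
    for _ in range(100):
        x = tau_leap_step(x, tau=0.01, stoich=stoich, propensities=propensities, rng=rng)
    print("state after t = 1.0:", x)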

Abdul Khaliq
Middle Tennessee State University
[email protected]

Viktor Reshniak
Oak Ridge National Laboratory
[email protected]

David A. Voss
Western Illinois University
[email protected]

PP1

Robust Experiment Design for Nonlinear Model Calibration using Polynomial Chaos

The traditional approach to design of experiments for the calibration of a nonlinear material model uses a linearisation and requires some expert prior guess about the values of the identified material properties. A robust variant can then be formulated as a worst-case approach, where the linearised criterion is evaluated on a set of feasible values of the identified parameters and the maximal value is minimised. In the proposed contribution, the linearisation is replaced by a nonlinear approximation of the material model using a higher-order polynomial chaos expansion. The optimality criterion is then integrated over the feasible domain of material parameters instead of considering a set of their discrete values.

Anna Kucerova, Jan Sykora, Daniela Jaruskova, Eliska Janouchova
Czech Technical University in Prague
[email protected], [email protected], [email protected], [email protected]

PP1

Locally Stationary Spatio-Temporal Interpolation of Argo Profiling Float Data

Argo floats measure sea water temperature and salinity in the upper 2,000 m of the global ocean. The statistical analysis of the resulting spatio-temporal dataset is challenging due to its non-stationary structure and large size. We propose mapping these data using locally stationary Gaussian process regression, where covariance parameter estimation and spatio-temporal prediction are carried out in a moving-window fashion. This yields computationally tractable non-stationary anomaly fields without the need to explicitly model the non-stationary covariance structure. We also investigate Student-t distributed microscale variation as a means to account for non-Gaussian heavy tails in Argo data. We use cross-validation to study the point prediction and uncertainty quantification performance of the proposed approach. We demonstrate clear improvements in the point predictions and show that accounting for the non-stationarity and non-Gaussianity is crucial for obtaining well-calibrated uncertainties. The approach also provides data-driven local estimates of the spatial and temporal dependence scales, which are of scientific interest in their own right.

Mikael Kuusela
SAMSI and University of North Carolina at Chapel Hill
[email protected]

Michael Stein
University of Chicago
[email protected]

PP1

Solving Stochastic Inverse Problems with Consistent Bayesian Inference

We study a data-oriented (aka consistent) Bayesian approach for inverse problems. Our focus is to investigate the relationship between the data-oriented Bayesian formulation and the standard one for linear inverse problems. We show that 1) from a deterministic point of view, the MAP of the data-oriented Bayesian method is similar to the truncated SVD approach (regularization by truncation), and 2) from the statistical point of view, no matter what prior is used, the method only allows the prior to act on the data-uninformed parameter subspace, while leaving the data-informed subspace untouched. Both theoretical and numerical results will be presented to support our approach.

Brad Marvin
University of Texas at Austin
[email protected]

PP1

Multilevel Adaptive² Sparse Grid Stochastic Collocation

Randomness in uncertainty quantification problems can often be modeled by using a truncated Karhunen-Loeve (KL) expansion. This applies for partial differential equations with random coefficients as well as for random ordinary differential equations, which involve random processes. Since the number of terms in the KL expansion directly translates into the number of stochastic dimensions, however, we quickly run into the curse of dimensionality. The most commonly used approach is Monte Carlo, which, in general, converges slowly. Since the KL modes decrease exponentially, we instead apply adaptive sparse grid surrogates to leverage anisotropy in the solution. To add an additional adaptive layer, we drew inspiration from the multilevel collocation approach, which directly relates to the combination technique. Thus, we define two grid layers: on the lower layer, we employ a dimension-adaptive sparse grid to interpolate the output for a given tolerance. On the upper layer, we construct an abstract three-dimensional sparse grid having as dimensions the aforementioned lower-layer tolerance, the number of KL terms, and the time step size; finally, the solutions are combined using the combination technique on the upper level. This deterministic approach offers solutions with the same accuracy as a single high-resolution adaptive sparse grid, however with much lower computational cost, by leveraging the nestedness and anisotropy of the underlying problem.

Friedrich Menhorn
Department of Informatics, Technische Universität München
[email protected]

Ionut-Gabriel Farcas
Technical University of Munich
Department of Scientific Computing in Computer Science
[email protected]

Tobias Neckel
Technical University of Munich
[email protected]

Hans-Joachim Bungartz
Technical University of Munich, Department of Informatics
Chair of Scientific Computing in Computer Science
[email protected]

PP1

Efficient Iterative Methods for Discrete Stokes Equations with Random Viscosity

We study the stochastic Galerkin finite element (SGFE) discretization of Stokes flow with random viscosity and the resulting discrete saddle point problem. In order to deal with the large coupled system of equations efficiently, fast solution methods are essential. In our work, we investigate the performance of different iterative solvers. As the discrete SGFE problem can be arranged in a way that the system of equations is structurally equivalent to the purely spatial one, we rely on iterative solvers designed for discrete Stokes equations with deterministic data. For the preconditioners, we prescribe a Kronecker product structure and use existing approaches from the finite element and stochastic Galerkin literature as building blocks. We derive analytical bounds for the eigenvalues of the preconditioned system matrices and use them to predict the convergence behavior of the iterative solvers. With the help of a numerical test case, we verify the theoretical predictions for a block-diagonal preconditioned MINRES method and a block-triangular preconditioned Bramble-Pasciak conjugate gradient method. We finally compare the solvers based on two criteria: the computational complexity of the underlying algorithms and the convergence behavior of the methods with respect to different problem parameters.

Christopher Mueller
Graduate School of Computational Engineering, TU Darmstadt
[email protected]

Sebastian Ullmann
TU Darmstadt
[email protected]

Jens Lang
Technische Universitat Darmstadt
Department of Mathematics
[email protected]

PP1

Optimal Experimental Design of Time Series Data in a Consistent Bayesian Framework

To quantify uncertainties of inputs to models of dynamical systems, a fixed spatial configuration of sensors is often designed and deployed to collect time-series data for solving stochastic inverse problems. A general goal is to configure the sensors so that they provide useful information for the stochastic inverse problem over the full duration of the experiment and provide us with minimally redundant data. We use a recently developed consistent Bayesian framework for formulating and solving stochastic inverse problems to investigate several design criteria and objectives for deploying sensors. We draw comparisons to other approaches and results based on more classical statistical Bayesian formulations.

Michael Pilosov
University of Colorado Denver
[email protected]

PP1

Quantifying Spatio-temporal Boundary Condition Uncertainty for the Deglaciation

Ice sheet models are currently unable to reproduce the retreat of the North American ice sheet through the last deglaciation, due to the large uncertainty in the boundary conditions. To successfully calibrate such a model, it is important to vary both the input parameters and the boundary conditions. These boundary conditions are derived from global climate model simulations, and hence the biases from the output of these models are carried through to the ice sheet output, restricting the range of ice sheet output that is possible. Due to the expense of running global climate models for the required 21,000 years, there are only a small number of such runs available; hence it is difficult to quantify the boundary condition uncertainty. We develop a methodology for generating a range of plausible boundary conditions, using a low-dimensional basis representation for the spatio-temporal input required. We derive this basis by combining key patterns, extracted from a small climate model ensemble of runs through the deglaciation, with sparse spatio-temporal observations. Varying the coefficients for the chosen basis vectors and ice sheet parameters simultaneously, we run ensembles of the ice sheet model. By emulating the ice sheet output, we history match iteratively and rule out combinations of the ice sheet parameters and boundary condition coefficients that lead to implausible deglaciations, reducing the uncertainty due to the boundary conditions.

James M. Salter, Daniel Williamson
University of Exeter
[email protected], [email protected]

Lauren Gregoire
University of Leeds
[email protected]

PP1

Multiscale Interfaces for Large-scale Optimization

In this poster we describe MILO (Multiscale Interfaces for Large-scale Optimization), a software package built from components of the Trilinos project that aims to provide a lightweight PDE-constrained optimization solver with a multiscale modeling capability. The use of template-based metaprogramming automatic differentiation allows a user to specify a variety of forward models through a syntax that closely resembles pencil-and-paper variational formulations and automates adjoint computations. State-of-the-art optimization-under-uncertainty algorithms are available through an interface to the Rapid Optimization Library. Finally, the software contains an implementation of a generalized mortar method for flexible modeling, optimization, and uncertainty quantification of multiscale physical phenomena.

Daniel T. Seidl, Bart G. Van Bloemen Waanders
Sandia National Laboratories
[email protected], [email protected]

Tim Wildey
Optimization and Uncertainty Quantification Dept.
Sandia National Laboratories
[email protected]

PP1

A Study of Elliptic PDEs with Jump Diffusion Coefficients

As a simplified model for subsurface flows, elliptic equations may be utilized. Insufficient measurements or uncertainty in those are commonly modeled by a random coefficient, which then accounts for the uncertain permeability of a given medium. As an extension of this methodology to flows in heterogeneous, fractured, or porous media, we incorporate jumps in the diffusion coefficient. These discontinuities then represent transitions in the media. More precisely, we consider a second-order elliptic problem where the random coefficient is given by the sum of a (continuous) Gaussian random field and a (discontinuous) jump part. To estimate moments of the solution to the resulting random partial differential equation, we use a pathwise numerical approximation combined with multilevel Monte Carlo sampling. In order to account for the discontinuities and improve the convergence of the pathwise approximation, the spatial domain is decomposed with respect to the jump positions in each sample, leading to path-dependent grids. Hence, it is not possible to create a nested sequence of grids which is suitable for each sample path a priori. We address this issue with an adaptive multilevel algorithm, where the discretization on each level is sample-dependent and fulfills given refinement conditions. This is joint work with Andrea Barth (SimTech, University of Stuttgart).
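For readers unfamiliar with the sampling side, a generic multilevel Monte Carlo estimator looks as follows; the toy level-dependent "solver" and the per-level sample counts are illustrative, and the poster's adaptive, sample-dependent grids are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def solve(sample, level):
        # Stand-in for a PDE solve at mesh level `level` for one coefficient sample.
        # Here: a toy quantity of interest whose discretization error ~ 2**(-level).
        return np.sin(sample) + 2.0 ** (-level) * np.cos(3.0 * sample)

    def mlmc(n_samples_per_level):
        # Telescoping estimator: E[Q_L] ~= sum over levels of mean(Q_l - Q_{l-1}).
        estimate = 0.0
        for level, n in enumerate(n_samples_per_level):
            xi = rng.standard_normal(n)               # same inputs on both levels
            fine = solve(xi, level)
            coarse = solve(xi, level - 1) if level > 0 else 0.0
            estimate += np.mean(fine - coarse)        # correction term for this level
        return estimate

    # Many cheap coarse samples, few expensive fine ones.
    print("MLMC estimate:", mlmc([4000, 1000, 250, 60]))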

Andreas Stein
University of Stuttgart
[email protected]

Andrea Barth
SimTech, University of Stuttgart
[email protected]

PP1

Image-based Covariance Functions for Characterisation of Material Heterogeneity

The principal challenge in the implementation of random fields arises from the need to determine their correlation/characteristic lengths in the simplest case or, more generally, their covariance functions. The present contribution is devoted to the construction of random fields based on image analysis utilising statistical descriptors, which were developed to describe the different morphologies of random materials.

Jan Sykora, Anna Kucerova, Jan Zeman
Czech Technical University in Prague
[email protected], [email protected], [email protected]

PP1

Numerical Algorithms for Solving the Weighted Poisson Equation with Application to Particle Flow Algorithms

The poster is concerned with numerical algorithms for solving the Poisson equation. Here, the Poisson equation is a partial differential equation that involves a probability-weighted Laplacian. The problem is to approximate the solution to the Poisson equation using only particles sampled from the probability distribution. The solution of the Poisson equation plays a central role in particle flow algorithms for solving the nonlinear filtering problem. The particle flow algorithms are exact, i.e., they exactly represent the posterior distribution, if the Poisson equation is solved exactly. Two algorithms for solving the Poisson equation are presented: a Galerkin algorithm and a kernel-based algorithm. The Galerkin algorithm is based on the projection of the solution onto the span of preselected basis functions. The kernel-based algorithm is based on expressing the Poisson equation as a fixed-point problem involving the semigroup associated with the weighted Laplacian. The poster contains error analysis of both algorithms. The significant result is that the error of the kernel-based algorithm converges to zero in the asymptotic limit as the number of particles goes to infinity. The result also includes a convergence rate. This result is important since it is the first step toward error analysis of the particle flow algorithms in the nonlinear setting.
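A minimal sketch of the Galerkin route in one dimension: the weak form of the weighted Poisson equation gives E[psi_k' phi'] = E[(h - h_bar) psi_k], with expectations replaced by particle averages. The monomial basis, the observation function, and the sample size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal(5000)              # particles approximating the density rho

    h = lambda x: x ** 2                       # observation function (illustrative)
    h_bar = np.mean(h(X))

    # Basis functions psi_l(x) = x^l, l = 1..L, and their derivatives.
    L = 3
    Psi  = np.stack([X ** l for l in range(1, L + 1)], axis=1)            # (N, L)
    dPsi = np.stack([l * X ** (l - 1) for l in range(1, L + 1)], axis=1)  # (N, L)

    # Galerkin system: A_kl = E[psi_k' psi_l'],  b_k = E[(h - h_bar) psi_k],
    # with expectations approximated by particle averages.
    A = dPsi.T @ dPsi / len(X)
    b = Psi.T @ (h(X) - h_bar) / len(X)
    kappa = np.linalg.solve(A, b)

    # Gain function K(x) = d/dx phi(x), evaluated at the particles.
    gain = dPsi @ kappa
    print("Galerkin coefficients:", kappa)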

Amirhossein Taghvaei
PhD student at University of Illinois
[email protected]

PP1

Stochastic Galerkin Reduced Basis Methods for Parametrized Elliptic PDEs

We propose stochastic Galerkin reduced basis (RB) methods to estimate statistics of linear outputs in the context of parametrized elliptic PDEs with random data. For a given value of a deterministic parameter vector, a stochastic Galerkin finite element method can estimate output statistics at essentially the cost of a single solution of a very large linear algebraic system of equations. To lower the time and memory requirements, we derive a Galerkin RB method by projecting the parametrized spatial-stochastic weak form of the problem onto a POD subspace. As a consequence, the complexity of the reduced problem becomes essentially independent of the number of stochastic Galerkin finite element degrees of freedom. Statistical moments of the reduced solution can be computed by exact integration. Consequently, they are only subject to an RB approximation error but not subject to quadrature errors as in Monte Carlo sampling or stochastic collocation. Moreover, standard RB theory provides computable a posteriori error bounds for the RB approximation error of parameter-dependent output statistics. We present numerical tests for a diffusion-reaction problem, where the reaction coefficient acts as a deterministic parameter and a Karhunen-Loeve decomposed random diffusion field acts as a random input. Compared to a Monte Carlo RB method, statistical estimation with our method is significantly faster, because it requires only a single solution of a reduced system of comparable size.

Sebastian Ullmann, Jens Lang
TU Darmstadt
[email protected], [email protected]

PP1

Adaptive Gaussian Process Approximation for Bayesian Inference with Expensive Likelihood Functions

We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP) based method to approximate the joint distribution of the unknown parameters and the data, built upon a recent work. In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points. With numerical examples, we illustrate that the proposed method has competitive performance against existing approaches for Bayesian computation.
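The adaptive flavor of such constructions can be illustrated with a generic active-learning loop: fit a GP to the expensive log-likelihood and add the candidate point where the surrogate is most uncertain. This is a simplified stand-in (scikit-learn GP, variance-based acquisition, toy likelihood), not the authors' joint-density construction.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Stand-in for an expensive log-likelihood with one scalar parameter.
    def log_like(theta):
        return -0.5 * ((theta - 0.3) / 0.1) ** 2

    rng = np.random.default_rng(0)
    design = list(rng.uniform(-1, 1, size=4))          # initial design points
    candidates = np.linspace(-1, 1, 201)

    for _ in range(10):                                # adaptive refinement loop
        X = np.array(design).reshape(-1, 1)
        y = np.array([log_like(t) for t in design])
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
        _, std = gp.predict(candidates.reshape(-1, 1), return_std=True)
        design.append(candidates[np.argmax(std)])      # query where the GP is least certain

    # The final GP surrogate then stands in for the expensive log-likelihood,
    # e.g., inside an MCMC sampler.
    print("design points:", np.round(sorted(design), 3))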

Hongqiao Wang
Shanghai Jiao Tong University
[email protected]

Jinglai Li
Shanghai Jiaotong University
[email protected]

PP1

A Comparative Study of the Intrusive and Non-intrusive Polynomial Chaos Methods for Uncertainty Quantification of the Rossler Chaotic Dynamical System

The simulations of dynamical system models contain significant uncertainty, which usually arises from errors in initial conditions and model parameters. Uncertainty quantification, thus, is of critical importance in improving system simulations. This work aims to compare the effectiveness and efficiency of intrusive and non-intrusive polynomial chaos methods for uncertainty quantification of the non-deterministic simulations of the Rossler chaotic dynamical system. The simulation results are verified and confirmed by comparing with the results obtained using the Monte Carlo method. The results show that both polynomial chaos methods can effectively simulate the propagation of uncertainties in the chaotic dynamical system and are more effective and efficient than the Monte Carlo method. Through the comparative study, the strengths and weaknesses of intrusive and non-intrusive polynomial chaos methods for uncertainty quantification are discussed.

Heng Wang
Beijing Normal University
[email protected]

Qingyun Duan


Beijing Normal University
College of Global Change and Earth System Science
[email protected]

Wei Gong, Zhenhua Di, Chiyuan Miao, Aizhong Ye
Beijing Normal University
[email protected], [email protected], [email protected], [email protected]

PP1

A Model-independent Iterative Ensemble Smoother for High-dimensional Inversion and Uncertainty Estimation

We have implemented an iterative Gauss-Levenberg-Marquardt (GLM) form of the ensemble smoother that functions in a model-independent way by programmatically interacting with any forward model. Using the ensemble approximation to the tangent linear operator (Jacobian) allows the GLM algorithm to be applied in extremely high dimensions with manageable computational burden, provides some enhanced nonlinear solution capabilities, and also yields uncertainty estimates. The implementation includes the capability for testing multiple lambda values and TCP/IP parallelization. Furthermore, a subset of the entire ensemble can be used to test the lambda values, reducing the number of model evaluations required. An example application from groundwater hydrology is presented. The results of the iterative ensemble smoother are compared to deterministic inversion and to Monte Carlo results.
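A much-simplified, single-pass ensemble smoother update conveys the role of the ensemble-approximated Jacobian; the toy forward model, ensemble size, and noise level are illustrative, and the iterative GLM formulation with lambda testing and parallel run management is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(m):
        # Stand-in for an expensive forward model mapping parameters to observations.
        return np.array([m[0] + m[1] ** 2, m[0] * m[1]])

    # Prior ensemble of parameter vectors (columns are realizations).
    n_par, n_ens = 2, 50
    M = rng.normal(loc=1.0, scale=0.5, size=(n_par, n_ens))

    d_obs = np.array([2.0, 1.0])                # observed data
    sigma = 0.1                                 # observation noise std
    R = sigma ** 2 * np.eye(len(d_obs))

    D = np.stack([forward(M[:, j]) for j in range(n_ens)], axis=1)   # simulated data

    # Ensemble (cross-)covariances replace an explicit Jacobian.
    Mc = M - M.mean(axis=1, keepdims=True)
    Dc = D - D.mean(axis=1, keepdims=True)
    C_md = Mc @ Dc.T / (n_ens - 1)
    C_dd = Dc @ Dc.T / (n_ens - 1)

    # Kalman-type update of every ensemble member against perturbed observations.
    K = C_md @ np.linalg.inv(C_dd + R)
    D_pert = d_obs[:, None] + sigma * rng.standard_normal(D.shape)
    M_post = M + K @ (D_pert - D)
    print("posterior parameter mean:", M_post.mean(axis=1))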

Jeremy White
U.S. Geological Survey
Texas Water Science Center
[email protected]

PP1

Physics-informed Machine Learning for Data-driven Turbulence Modeling

Reynolds-averaged Navier-Stokes (RANS) models are still the workhorse tool in engineering applications that involve turbulent flows. It is well known that RANS models introduce large model-form uncertainties into the prediction. Recently, data-driven methods have been proposed as a promising alternative by using existing databases of experiments or high-fidelity simulations. In this poster, we present a physics-informed machine-learning-assisted turbulence modeling framework. This framework consists of three components: (1) constructing a functional mapping between physics-informed inputs and the Reynolds stress based on DNS data via machine learning techniques, (2) assessing the prediction confidence a priori by evaluating the distance between different flows in the mean-flow feature space, and (3) propagating the predicted Reynolds stress field to the mean velocity field by using physics-informed stabilization. We evaluate the performance of the proposed framework using several flows with massive separation. Significant improvements over the baseline RANS simulation are observed for both the Reynolds stress and the mean velocity fields. In addition, the prediction performance based on different training flows can be assessed a priori by using the proposed distance metrics.

Jinlong Wu
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

Carlos Michelen

Virginia Tech
[email protected]

Jian-Xun Wang
University of California, Berkeley
[email protected]

Heng Xiao
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

PP1

Calibration Optimal Designs for Computer Experiments

History matching with Gaussian process emulators is a well-known methodology for the calibration of computer models based on partial observations (Craig et al., 1996). History matching attempts to identify the parts of input parameter space that are likely to result in mismatches between simulator outputs and observations. A measure called implausibility is used to rule out parameter settings that are far from the observations. The space that has not yet been ruled out is called Not Ruled Out Yet (NROY) space. An easily neglected limitation of this method is that the emulator must simulate the target NROY space well, or else good parameter choices can be ruled out. We show that even when an emulator validates well on the whole parameter space, good parameter choices can be ruled out in a simple test case. If a Bayesian calibration were attempted here, the biases in the NROY region would lead to strong biases in the calibration. We present novel methods for detecting these cases and calibration-optimal designs for subsequent experiments that ensure the true NROY space is retained and parameter inference is not biased.
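The implausibility screen at the heart of history matching can be sketched as follows; the emulator mean and variance, the observation, and the conventional cutoff of 3 are illustrative stand-ins for a trained GP emulator.

    import numpy as np

    def implausibility(z, emu_mean, emu_var, obs_var, disc_var):
        # Standard history-matching implausibility I(x) for a scalar output.
        return np.abs(z - emu_mean) / np.sqrt(emu_var + obs_var + disc_var)

    # Hypothetical emulator predictions over a grid of candidate parameter values.
    theta = np.linspace(0, 1, 1000)
    emu_mean = 3.0 * theta                      # emulator posterior mean (illustrative)
    emu_var = 0.05 ** 2 * np.ones_like(theta)   # emulator posterior variance

    z = 1.8                                     # observation
    I = implausibility(z, emu_mean, emu_var, obs_var=0.1 ** 2, disc_var=0.05 ** 2)

    # NROY space: parameter values not ruled out by the common threshold I < 3.
    nroy = theta[I < 3.0]
    print(f"NROY fraction: {len(nroy) / len(theta):.2f}, "
          f"range: [{nroy.min():.3f}, {nroy.max():.3f}]")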

Wenzhe Xu
University of Exeter
[email protected]

PP101

Visualizing Dynamic Global Sensitivities in Time-dependent Systems

Several problems in uncertainty quantification suffer from the curse of dimensionality. The best way to alleviate the curse is to reduce the dimension. This minisymposterium will explore several ideas for parameter space dimension reduction that enable otherwise infeasible uncertainty quantification studies for computational science models with several input parameters. This poster will explore visualizing how global sensitivities of dynamical systems change over time.

Izabel P. Aguiar, Paul Constantine
University of Colorado Boulder
[email protected], [email protected]

PP101

Parameter Space Dimension Reduction

Several problems in uncertainty quantification suffer from the curse of dimensionality. The best way to alleviate the curse is to reduce the dimension. This minisymposterium will explore several ideas for parameter space dimension reduction that enable otherwise infeasible uncertainty quantification studies for computational science models with several input parameters.

Paul Constantine
University of Colorado Boulder
[email protected]

PP101

A Lanczos-Stieltjes Method for One-dimensional Ridge Function Integration and Approximation

Several problems in uncertainty quantification suffer from the curse of dimensionality. The best way to alleviate the curse is to reduce the dimension. This minisymposterium will explore several ideas for parameter space dimension reduction that enable otherwise infeasible uncertainty quantification studies for computational science models with several input parameters. This poster focuses on an approach to approximating one-dimensional ridge functions based on the Lanczos iteration.

Andrew Glaws
Department of Computer Science
University of Colorado Boulder
[email protected]

Paul Constantine
University of Colorado Boulder
[email protected]

PP101

Characterizing a Subspace of Shapes using Differential Geometry

Engineering designers routinely manipulate the shapes in engineering systems toward design goals, e.g., shape optimization of an airfoil. The computational tools for such manipulation include parameterized geometries such as B-splines, where the parameters provide a set of independent variables that control the geometry. Recent work has developed and exploited active subspaces in the map from geometry parameters to design quantities of interest (e.g., lift or drag of an airfoil); the active subspace is a set of directions in the geometry parameter space that changes the associated quantity of interest more, on average over the design space, than directions orthogonal to the active subspace. The active directions produce insight-rich geometry perturbations; however, these perturbations depend on the chosen geometry parameterization. In this work, we use tools and concepts from differential geometry to develop parameterization-independent active subspaces with respect to a given scalar field (e.g., the pressure field surrounding an airfoil). The differential geometry setup leads to a consistent numerical discretization. We show how the framework can yield insight into the design of an airfoil independent of the choice of parameterization.

Zach Grey
Colorado School of Mines
[email protected]

Paul Constantine
University of Colorado Boulder
[email protected]

PP101

Exploiting Ridge Structure in Chance Constrained Design under Uncertainty

A chance constrained design problem consists of several quantities of interest defined over a set of design and random variables, with the goal of minimizing (for example) the expected value of one quantity of interest subject to the constraint that the probability of the other quantities of interest exceeding a threshold is less than a specified probability. In the absence of exploitable structure, where we only have access to function samples, this is a challenging problem. Evaluating the probability constraints via Monte Carlo requires many evaluations of potentially expensive quantities of interest. However, if the chance-constraint quantities of interest exhibit ridge structure, that is, these functions only vary with respect to a few linear combinations of the input variables, we can replace these chance constraints with a bound constraint over the design variables. This replacement removes the need to evaluate the quantities of interest to ensure the chance constraints are satisfied, substantially decreasing the cost of chance constrained design problems where the quantities of interest involve large-scale, complex computational models.

Jeffrey M. Hokanson, Paul Constantine
University of Colorado Boulder
[email protected], [email protected]