Economic modeling and sensitivity analysis.
Hay, J W
1998-09-01
The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.
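The multivariate sensitivity analysis recommended above can be sketched as a Monte Carlo exercise over a toy cost-effectiveness model, varying all inputs simultaneously and reporting the resulting distribution rather than a single point estimate. All distributions, names, and values below are hypothetical illustrations, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical inputs, each drawn from its own uncertainty distribution
drug_cost = rng.normal(1200.0, 150.0, n)          # annual drug cost
events_avoided = rng.beta(4, 96, n)               # absolute risk reduction
cost_per_event = rng.normal(20_000.0, 3000.0, n)  # cost of one averted event

# Vary all inputs at once and summarize the distribution of the ratio
net_cost = drug_cost - events_avoided * cost_per_event
cost_per_event_avoided = net_cost / events_avoided

print("median:", np.median(cost_per_event_avoided))
print("90% interval:", np.percentile(cost_per_event_avoided, [5, 95]))
```

Reporting the interval alongside the median is what makes results comparable across studies, since readers can see how much of a conclusion survives the joint parameter uncertainty.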
Sensitivity analysis of periodic matrix population models.
Caswell, Hal; Shyu, Esther
2012-12-01
Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments.
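The perturbation analysis described above can be approximated numerically. The sketch below uses two hypothetical seasonal matrices and a finite difference rather than the paper's matrix calculus; it computes the sensitivity of the cyclic growth rate (the dominant eigenvalue of the periodic product) to one vital rate at one phase of the cycle:

```python
import numpy as np

def dominant_eigenvalue(A):
    # Perron root of a non-negative projection matrix (largest real eigenvalue)
    return max(np.linalg.eigvals(A).real)

def periodic_sensitivity(mats, phase, i, j, h=1e-6):
    """Finite-difference sensitivity of the cyclic growth rate (dominant
    eigenvalue of the periodic product) to entry (i, j) of the matrix
    acting at the given phase of the cycle."""
    base = dominant_eigenvalue(np.linalg.multi_dot(mats[::-1]))
    pert = [m.copy() for m in mats]
    pert[phase][i, j] += h
    return (dominant_eigenvalue(np.linalg.multi_dot(pert[::-1])) - base) / h

# Hypothetical two-season model for a 2-stage population
summer = np.array([[0.5, 2.0], [0.3, 0.8]])
winter = np.array([[0.9, 0.1], [0.2, 0.7]])

# Sensitivity of the annual growth rate to the summer stage-1 -> stage-2 rate
s = periodic_sensitivity([summer, winter], phase=0, i=1, j=0)
print(s)
```

Note the key structural point from the abstract: the parameter change is applied at one phase, but the output is computed over the entire cycle (the full matrix product).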
A discourse on sensitivity analysis for discretely-modeled structures
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
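The step-size selection issue reviewed above can be illustrated on a scalar stand-in for a structural response: a forward difference loses accuracy both when the step is too large (truncation error) and too small (round-off error), while a central difference does better at moderate steps. A minimal sketch, with a hypothetical response function whose derivative is known exactly:

```python
import numpy as np

def response(x):
    # Hypothetical smooth stand-in for a structural response function
    return np.exp(x) * np.sin(x)

def d_response(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))  # exact derivative

x0 = 1.0
exact = d_response(x0)
errs = {}
for h in [1e-2, 1e-5, 1e-8, 1e-12]:
    forward = (response(x0 + h) - response(x0)) / h
    central = (response(x0 + h) - response(x0 - h)) / (2 * h)
    errs[h] = (abs(forward - exact), abs(central - exact))
    print(f"h={h:.0e}  forward err={errs[h][0]:.2e}  central err={errs[h][1]:.2e}")
```

The same trade-off drives the step-size selection rules surveyed in the review, where each evaluation of the "response" is a full finite element solve.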
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model for the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the calculated responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
A tool model for predicting atmospheric kinetics with sensitivity analysis
[No author listed]
2001-01-01
A software package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The tool incorporates a new direct method that applies sparse matrix technology to compute the first-order sensitivity coefficients of chemical kinetics; only the matrix related to the Jacobian matrix of the model equation needs to be triangularized. A Gear-type procedure is used to integrate the model equation together with its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines and packages with which the program runs in conjunction, such as SLODE, a modified MA28, and the Gear package, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
Detecting tipping points in ecological models with sensitivity analysis
Broeke, G.A. ten; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, J.
2016-01-01
Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions.
Sensitivity analysis of a sound absorption model with correlated inputs
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distributions of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
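Generating correlated inputs from prescribed marginals and a correlation matrix, as FASTC requires, can be sketched with a Gaussian-copula sampler. This is a generic illustration, not the Iman transform used in the paper, and the two "JCA-type" inputs and their marginals are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Target correlation matrix for two hypothetical JCA-type inputs
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
L = np.linalg.cholesky(corr)

# Step 1: correlated standard normal scores
z = rng.standard_normal((n, 2)) @ L.T

# Step 2: push each column through its own (hypothetical) marginal
porosity = 0.90 + 0.05 * z[:, 0]          # normal marginal
tortuosity = np.exp(0.2 + 0.1 * z[:, 1])  # lognormal marginal

# The induced correlation of the scores is close to the target
print(np.corrcoef(z[:, 0], z[:, 1])[0, 1])
```

Running a sensitivity method on such a correlated sample, versus an independent one, is exactly the comparison that reveals how strongly correlation reshapes the sensitivity indices.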
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparing the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
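The normalized forward sensitivity index typically used in such analyses, Υ_p = (∂R0/∂p)(p/R0), can be sketched for a generic SIR-type reproduction number. The expression and parameter values are illustrative stand-ins, not the paper's five-compartment model:

```python
def R0(beta, gamma, mu):
    # Generic SIR-type reproduction number (illustrative, not the paper's)
    return beta / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index: (df/dp) * (p / f)."""
    p = dict(params)
    base = f(**p)
    p[name] += h
    return (f(**p) - base) / h * params[name] / base

params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
indices = {name: sensitivity_index(R0, params, name) for name in params}
for name, value in indices.items():
    print(f"{name}: {value:+.3f}")
```

The index is dimensionless, so parameters on very different scales can be ranked directly; a value of +1 means a 1% increase in the parameter produces a 1% increase in R0.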
Air Gun Launch Simulation Modeling and Finite Element Model Sensitivity Analysis
2006-01-01
Chowdhury, Mostafiz R.; Tabiei, Ala. Report ARL-TR-3703, Adelphi, MD 20783-1145, January 2006.
Shape sensitivity analysis in numerical modelling of solidification
E. Majchrzak
2007-12-01
The methods of sensitivity analysis constitute a very effective tool at the stage of numerical modelling of casting solidification. Among other things, they make it possible to rebuild the basic numerical solution into solutions corresponding to disturbed values of the physical and geometrical parameters of the process. In this paper the problem of shape sensitivity analysis is discussed. A non-homogeneous casting-mould domain is considered, and the perturbation of the solidification process due to changes of the geometrical dimensions is analyzed. From the mathematical point of view the sensitivity model is rather complex, but its solution gives interesting information concerning the mutual connections between the kinetics of casting solidification and its basic dimensions. In the final part of the paper an example of computations is shown. At the stage of numerical realization the finite difference method has been applied.
Sensitivity analysis of fine sediment models using heterogeneous data
Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.
2012-04-01
Sediments play an important role in many aquatic systems. Their transport and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transport in time and space is therefore important for designing interventions and making management decisions. This research is related to fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through construction, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology of this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. The numerical model was then run with different values of these parameters.
Sensitivity analysis techniques for models of human behavior.
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
Sensitivity analysis (MedlinePlus Medical Encyclopedia, medlineplus.gov/ency/article/003741.htm): Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs).
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not sufficiently accurate and have limited reference value, because their mathematical models are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and no experimental verification is conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, based on a comparison of the characteristic curves of the experimental and simulated step responses under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum value of the displacement variation percentage and the sum of the absolute values of the displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their patterns of change are analyzed.
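The coupled integration of a model equation with its sensitivity equations can be illustrated on a toy linear ODE integrated with forward Euler. The paper's hydraulic model is far more elaborate, so this only shows the structure: for dx/dt = -kx, the sensitivity s = ∂x/∂k obeys ds/dt = -x - ks, obtained by differentiating the model equation with respect to the parameter:

```python
from math import exp

k, dt, T = 0.5, 1e-4, 2.0
x, s = 1.0, 0.0  # state x(0) = 1 and its sensitivity s = dx/dk

# Integrate the model equation and its sensitivity equation together:
#   dx/dt = -k*x        (model equation)
#   ds/dt = -x - k*s    (model equation differentiated w.r.t. k)
for _ in range(int(T / dt)):
    x, s = x + dt * (-k * x), s + dt * (-x - k * s)

print(x, exp(-k * T))       # numeric vs analytic state
print(s, -T * exp(-k * T))  # numeric vs analytic sensitivity
```

The sensitivity equation is linear in s with time-varying forcing from the state trajectory, which is why it can be integrated alongside the state at little extra cost.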
Sensitivity Analysis of a Simplified Fire Dynamic Model
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of the parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when assessing which of the individual parameters affecting fire safety are the most significant in each case. We apply the Sobol method, a quantitative method that gives the percentage of the total output variance that each parameter accounts for. The most important parameter is found to be the energy release rate, which explains 92% of the uncertainty in the calculated results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening.
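A first-order Sobol index of the kind applied above can be estimated with a standard pick-freeze scheme. The sketch below uses the Ishigami benchmark function (a common test case for variance-based methods) rather than the fire model:

```python
import numpy as np

def model(x):
    # Ishigami benchmark, standard for variance-based sensitivity analysis
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order index S_i: share of output variance explained by input i alone
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # ABi shares only column i with A
    S.append(np.mean(fA * (model(ABi) - fB)) / var)
print([round(v, 3) for v in S])
```

Because the indices are variance shares, a statement like "the energy release rate explains 92% of the uncertainty" corresponds directly to an index of 0.92 for that parameter.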
Improved environmental multimedia modeling and its sensitivity analysis.
Yuan, Jing; Elektorowicz, Maria; Chen, Zhi
2011-01-01
Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems and the many factors to be considered. In this study, an improved environmental multimedia model is developed, and a number of related test problems are examined and compared with standard numerical and analytical methodologies. The results indicate that the flux output of the new model is lower in the unsaturated zone and groundwater zone compared with the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes a model sensitivity analysis to optimize model parameters such as the Peclet number (Pe). The analysis results show that Pe can be considered a deterministic input variable for transport output. The oscillatory behavior is eliminated as Pe decreases. In addition, the numerical methods are more accurate than the analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thereby help to manage environmental impacts.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology and location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
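The residual-resampling propagation described above can be sketched generically. The residual distributions, the three-step model chain, and the base energy below are hypothetical stand-ins for the study's empirical residuals:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical empirical residuals (relative errors) for each model in the
# chain: POA irradiance, effective irradiance, and the DC performance model
residuals = {
    "poa": rng.normal(0.00, 0.020, 500),
    "effective": rng.normal(0.01, 0.010, 500),
    "dc": rng.normal(0.00, 0.015, 500),
}

def propagate(base_energy, n=20_000):
    """Resample each model's residuals and push them through the chain."""
    energy = np.full(n, base_energy)
    for r in residuals.values():
        energy *= 1.0 + rng.choice(r, size=n)
    return energy

out = propagate(5.0)  # hypothetical 5 kWh/day point prediction
print(f"mean={out.mean():.2f}  relative std={out.std() / out.mean():.3f}")
```

A nonzero mean in any one residual distribution (here the "effective" step) propagates as a bias in the output, which mirrors the systematic POA-model effect reported above.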
Sensitivity Analysis in a Complex Marine Ecological Model
Marcos D. Mateus
2015-05-01
Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether any particular model can be suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise, namely, how sensitive the model is to changes in individual parameter values, and which parameters or associated processes have more influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a “one at a time” parameter perturbation method, and a color-coded matrix was developed for result visualization. The outcome of this study was the identification of key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. Also, the color-coded matrix methodology proved to be effective for a clear identification of the parameters with the most impact on selected variables of the model.
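A local "one at a time" perturbation of the kind used above can be sketched on a toy saturation-type response; all parameter names and values are hypothetical, not the biogeochemical model's:

```python
def model(p):
    # Hypothetical stand-in for one biogeochemical output (Monod-type uptake)
    return p["growth"] * p["nutrient"] / (p["half_sat"] + p["nutrient"])

base = {"growth": 1.2, "nutrient": 0.5, "half_sat": 0.3}
y0 = model(base)

# Perturb one parameter at a time by +10% and record the output change
changes = {}
for name, value in base.items():
    p = dict(base)
    p[name] = 1.10 * value
    changes[name] = 100.0 * (model(p) - y0) / y0
    print(f"{name:>10s}: {changes[name]:+.1f}% output change")
```

Repeating this for every parameter-variable pair and coloring the resulting percentage matrix is precisely the visualization strategy the paper describes.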
A Workflow for Global Sensitivity Analysis of PBPK Models
Kevin eMcNally
2011-06-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of a PBPK model is an ideal framework into which disparate in vitro and in vivo data can be integrated and utilised to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (the Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
Sensitivity analysis of numerical model of prestressed concrete containment
Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz
2015-12-15
Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In this paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments did not produce any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.
Sensitivity Analysis of a Riparian Vegetation Growth Model
Michael Nones
2016-11-01
The paper presents a sensitivity analysis of the two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences for large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model to represent the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
Xiao-meng SONG
2013-01-01
Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
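The Morris screening step can be sketched with a simplified random-OAT variant: elementary effects of each input are collected over random trajectories, and the mean absolute effect (mu*) ranks the inputs. The test function below is hypothetical, chosen so one parameter is strong, one weak, and one inactive:

```python
import numpy as np

def model(x):
    # Hypothetical response: strong, weak, and inactive parameters
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

rng = np.random.default_rng(3)
d, r, delta = 3, 50, 0.1
effects = [[] for _ in range(d)]

# r random one-at-a-time trajectories through the unit cube
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, d)
    y = model(x)
    for i in rng.permutation(d):
        x[i] += delta
        y_new = model(x)
        effects[i].append((y_new - y) / delta)  # elementary effect of input i
        y = y_new

mu_star = [float(np.mean(np.abs(e))) for e in effects]  # Morris mu* measure
print([round(m, 2) for m in mu_star])
```

Screening out low-mu* inputs before running the expensive variance-based step is exactly the cost-saving structure of the two-step framework described above.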
Understanding earth system models: how Global Sensitivity Analysis can help
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit a complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-01-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
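The sampling-fitting-elasticity pipeline the abstract describes can be sketched in a few lines; note that `simulate` below is an invented stand-in for the paper's thermodynamic model, and the parameter ranges are illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, bounds):
    # one stratified sample per interval in each dimension, shuffled independently
    d = len(bounds)
    strata = np.tile(np.arange(n), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

# invented stand-in for the thermodynamic model: power vs. solar flux G, load R
def simulate(G, R):
    return 0.05 * G * R / (R + 2.0) ** 2

X = latin_hypercube(500, [(600.0, 1000.0), (1.0, 8.0)])
y = simulate(X[:, 0], X[:, 1])

# least-squares response surface (linear here; the paper fits a richer curve)
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# elasticity: percent change in output per percent change in input, at the mean
elasticity = coef[1:] * X.mean(axis=0) / y.mean()
```

Because the stand-in model is linear in G, the elasticity of the flux term comes out close to 1, which is one quick sanity check on such a fit.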
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia
2015-04-22
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate accurately the model parameters given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
Application of simplified model to sensitivity analysis of solidification process
R. Szopa
2007-12-01
The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the numerical solution of inverse problems. The sensitivity models can be constructed using the direct approach, that is, by differentiation of the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex from both the mathematical and numerical points of view. In that case another approach, consisting in the application of a differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are also shown.
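The exact (direct-differentiation) versus approximate (differential-quotient) approaches can be illustrated on a toy lumped cooling model standing in for the casting-mould system; the parameter values below are invented:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)

# toy lumped cooling model: dT/dt = -h (T - T_env), solved in closed form
def temperature(h, T0=1500.0, T_env=300.0):
    return T_env + (T0 - T_env) * np.exp(-h * t)

h = 0.4

# direct approach: differentiate the solution with respect to h analytically
sens_exact = -(1500.0 - 300.0) * t * np.exp(-h * t)

# approximate approach: central differential quotient in the parameter
eps = 1e-5 * h
sens_fd = (temperature(h + eps) - temperature(h - eps)) / (2.0 * eps)

max_err = np.abs(sens_fd - sens_exact).max()
```

For a smooth model like this, the two sensitivity fields agree to many digits; the quotient approach trades that small error for not having to derive the sensitivity equations by hand.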
A qualitative model structure sensitivity analysis method to support model selection
Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.
2014-11-01
The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) excess infiltration process is most influencing when focussing on the lower flows; (4) a more simple routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.
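The component-level OAT idea can be sketched with a minimal lumped bucket model; the two storage variants, the forcing series, the synthetic observations and the criteria are all invented for illustration:

```python
import numpy as np

rain = np.array([5.0, 0.0, 12.0, 3.0, 0.0, 0.0, 8.0, 1.0, 0.0, 0.0])

def run(storage_outflow):
    # minimal lumped model: add rain to storage, release via the chosen component
    S, q = 10.0, []
    for p in rain:
        S += p
        out = min(storage_outflow(S), S)
        S -= out
        q.append(out)
    return np.array(q)

# structural variants, exchanged one at a time (OAT at the component level)
variants = {
    "linear reservoir": lambda S: 0.3 * S,
    "nonlinear reservoir": lambda S: 0.05 * S ** 1.5,
}

obs = run(variants["linear reservoir"]) + 0.1   # synthetic "observations"
low = obs < np.median(obs)                      # timesteps for a low-flow criterion

scores = {}
for name, component in variants.items():
    sim = run(component)
    scores[name] = {
        "rmse_all": float(np.sqrt(np.mean((sim - obs) ** 2))),
        "rmse_low": float(np.sqrt(np.mean((sim[low] - obs[low]) ** 2))),
    }
```

Comparing `scores` across variants and criteria mirrors the paper's diagnostics: a component swap may barely change the overall criterion while strongly changing the low-flow one.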
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA
2017-05-01
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
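The variance-decomposition measure at the heart of such methods is the first-order index Var(E[Y | source]) / Var(Y), which can be estimated by nested Monte Carlo. The model and both uncertainty sources below are invented stand-ins (a boundary head and a lumped permeability factor):

```python
import numpy as np

rng = np.random.default_rng(1)

# invented stand-in model: simulated head as a function of a boundary head B
# and a lumped permeability factor K
def model(B, K):
    return B + np.log(K)

n_outer, n_inner = 1000, 200

# first-order index of B: Var_B( E_K[Y | B] ) / Var(Y)
B = rng.normal(10.0, 1.0, n_outer)
cond_mean = np.array(
    [model(b, rng.lognormal(0.0, 0.5, n_inner)).mean() for b in B]
)

Y = model(rng.normal(10.0, 1.0, 20000), rng.lognormal(0.0, 0.5, 20000))
S_B = cond_mean.var() / Y.var()
# analytically Var(Y) = 1 + 0.25 here, so S_B should come out near 0.8
```

The hierarchical method in the abstract organizes exactly this kind of calculation across levels (boundary conditions, fields, local parameters) instead of across flat scalar inputs.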
Analysis of Sea Ice Cover Sensitivity in Global Climate Model
V. P. Parhomenko
2014-01-01
The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the modelled ice cover, and its sensitivity to some parameters of the model, as well as to define the atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05-0.1 m, although in some spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, with the clear sea ice albedo and the snow albedo decreased by 0.05 as compared to the basic variant, shows an ice thickness reduction in the range from 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there was a reduction of ice thickness down to 1 m; however, there is also an area of some increase of the ice layer, basically in a range up to 0.2 m (Beaufort Sea). The 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, and this value depends only slightly on the season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of sea ice by 2 W/m² in comparison with the base variant. The analysis demonstrates that the average ice thickness reduces in a range from 0.2 m to 0.35 m, with small seasonal changes of this value. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters.
Sensitivity analysis of the age-structured malaria transmission model
Addawe, Joel M.; Lope, Jose Ernie C.
2012-09-01
We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two: preschool humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito. For the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportion of infectious pre-school humans and the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquire immunity can be successful in preventing the spread of malaria.
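The normalized sensitivity indices used in such analyses, S_p = (p / R0) (∂R0 / ∂p), can be sketched on a classical Ross-Macdonald-style R0, a simpler stand-in for the paper's age-structured model; all baseline values below are invented:

```python
import numpy as np

# Ross-Macdonald-style reproductive number: m mosquito density, a biting rate,
# b and c transmission probabilities, r human recovery rate, g mosquito death rate
def R0(p):
    m, a, b, c, r, g = p
    return m * a ** 2 * b * c / (r * g)

base = np.array([10.0, 0.3, 0.5, 0.5, 0.01, 0.1])

def sensitivity_index(i, eps=1e-6):
    hi, lo = base.copy(), base.copy()
    hi[i] *= 1.0 + eps
    lo[i] *= 1.0 - eps
    dR0_dp = (R0(hi) - R0(lo)) / (2.0 * eps * base[i])
    return base[i] / R0(base) * dR0_dp

indices = [sensitivity_index(i) for i in range(6)]
# for a power law the index equals the exponent: +2 for the biting rate a,
# +1 for m, b, c and -1 for r, g
```

The squared biting rate having the largest index in magnitude is the formal counterpart of the abstract's finding that R0 is most sensitive to the mosquito biting rate.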
Global in Time Analysis and Sensitivity Analysis for the Reduced NS- α Model of Incompressible Flow
Rebholz, Leo; Zerfas, Camille; Zhao, Kun
2017-09-01
We provide a detailed global in time analysis, and sensitivity analysis and testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global in time case by proving it is globally well-posed, and also prove some new results for its long time treatment of energy. We also derive a PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.
Integrative "omic" analysis for tamoxifen sensitivity through cell based models.
Liming Weng
It has long been observed that tamoxifen sensitivity varies among breast cancer patients. Further, ethnic differences in tamoxifen therapy between Caucasian and African American patients have also been reported. Since most studies have focused on Caucasian people, we sought to comprehensively evaluate genetic variants related to tamoxifen therapy in African-derived samples. An integrative "omic" approach developed by our group was used to investigate relationships among endoxifen (an active metabolite of tamoxifen) sensitivity, SNP genotype, and mRNA and microRNA expression in 58 HapMap YRI lymphoblastoid cell lines. We identified 50 SNPs that associate with cellular sensitivity to endoxifen through their effects on the expression of 34 genes and 30 microRNAs. Some of these findings are shared in both Caucasian and African samples, while others are unique to the African samples. Among the genes/microRNAs identified in both ethnic groups, the expression of TRAF1 is also correlated with tamoxifen sensitivity in a collection of 44 breast cancer cell lines. Further, knock-down of TRAF1 and over-expression of hsa-let-7i confirmed the roles of hsa-let-7i and TRAF1 in increasing tamoxifen sensitivity in the ZR-75-1 breast cancer cell line. Our integrative omic analysis facilitated the discovery of pharmacogenomic biomarkers that potentially affect tamoxifen sensitivity.
Parametric sensitivity analysis of a test cell thermal model using spectral analysis
Mara, Thierry Alex; Garde, François
2012-01-01
The paper deals with an empirical validation of a building thermal model. We put the emphasis on sensitivity analysis and on the search for input/residual correlations to improve our model. In this article, we apply a sensitivity analysis technique in the frequency domain to point out the most important parameters of the model. Then, we compare measured and predicted data of indoor dry-air temperature. When the model is not accurate enough, recourse to time-frequency analysis is of great help to identify the inputs responsible for the major part of the error. In our approach, two samples of experimental data are required: the first is used to calibrate our model; the second to validate the optimized model.
Sensitivity Analysis of the ALMANAC Model's Input Variables
XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da
2002-01-01
Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation, for both maize and sorghum, especially for the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables was larger for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc, are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated cases?
Sensitivity Analysis of the Bone Fracture Risk Model
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements, and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying those parameter uncertainties that most affect the model forecasts, in order to determine what areas of the model needed enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
Models for patients' recruitment in clinical trials and sensitivity analysis.
Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas
2012-07-20
Taking a decision on the feasibility and estimating the duration of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who investigate modelling of the enrolment period using Gamma-Poisson processes; this allows the development of statistical tools that can help the manager of the clinical trial answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] allow the parameters of the model to be calibrated; these are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and to suggest corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of fit to the data, and the sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question to deal with in the setting of our data set since, in fact, these dates are not known; for this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
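The Gamma-Poisson recruitment idea can be sketched by simulation; every number below (number of centres, target, horizon, and the Gamma parameters, which in practice would be calibrated on the data collected up to t(1)) is invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Anisimov-style Gamma-Poisson model: centre i recruits at a random rate
# lambda_i ~ Gamma(alpha, scale=1/beta); its count over horizon T is
# Poisson(lambda_i * T)
alpha, beta = 2.0, 1.0
n_centres, target, horizon = 20, 400, 12.0   # e.g. patients and months

def prob_on_time(n_centres, n_sim=20000):
    lam = rng.gamma(alpha, 1.0 / beta, size=(n_sim, n_centres))
    total = rng.poisson(lam.sum(axis=1) * horizon)
    return (total >= target).mean()

p = prob_on_time(n_centres)

# the corrective-action question: how does opening more centres change things?
p_more = prob_on_time(n_centres + 5)
```

Rerunning `prob_on_time` with perturbed alpha and beta is the simulation analogue of the paper's sensitivity of the predicted duration to parameter errors.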
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to the factors on which they depend (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
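A coupled-kinetics calculation of this kind amounts to integrating an ODE system for the species concentrations. A two-reaction toy analogue (invented rate constants, explicit Euler stepping; the report's actual network has 111 reactions and 40 species) illustrates the structure:

```python
import numpy as np

# toy analogue of the radiolysis network: radiolytic production of H2 and O2
# from water vapour, plus their recombination back to water
k_prod, k_rec = 1e-3, 5e-2        # invented rate constants (per year)

y = np.array([1.0, 0.0, 0.0])     # [H2O, H2, O2], normalized concentrations
dt, t_end = 0.01, 300.0           # 300-year storage lifetime

for _ in range(int(t_end / dt)):
    h2o, h2, o2 = y
    prod = k_prod * h2o           # radiolytic source (G-value lumped into k_prod)
    rec = k_rec * h2 * o2         # recombination sink
    y = y + dt * np.array([-prod + rec, prod - rec, 0.5 * (prod - rec)])

h2_final = y[1]
# hydrogen accumulates toward the balance point of production and recombination
```

Real radiolysis networks are stiff, so production codes use implicit solvers rather than explicit Euler; the toy system is gentle enough for the simple scheme.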
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
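The basis-set-expansion step can be sketched with an SVD-based principal component analysis of an ensemble of invented displacement curves; the two made-up parameters stand in for the slip-surface properties:

```python
import numpy as np

rng = np.random.default_rng(7)

t = np.linspace(0.0, 1.0, 200)

# invented stand-in for the landslide model: displacement time series driven
# by two parameters (playing the role of slip-surface properties)
def displacement(friction, stiffness):
    return (1.0 - friction) * t + 0.3 * stiffness * np.sin(2.0 * np.pi * t)

params = rng.random((100, 2))
Y = np.array([displacement(f, s) for f, s in params])   # 100 runs x 200 times

# basis set expansion: centre the ensemble, take leading modes via SVD
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U[:, :2] * S[:2]             # per-run coordinates on the two modes
explained = S[:2] ** 2 / (S ** 2).sum()
```

The Sobol' indices are then computed on each scalar column of `scores`, typically through a cheap meta-model fitted to (`params`, `scores`), which is what keeps the required number of long-running simulations small.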
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10⁻² to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and gas infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling employs well-defined and validated mechanisms for the CO/H₂/oxidant systems.
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy, and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, especially in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at the regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage
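The Elementary Effects screening the abstract above relies on can be sketched in a few lines. The following is a minimal, illustrative radial one-at-a-time variant (not the authors' sequential screening method); the three-parameter toy model and all bounds are hypothetical:

```python
import numpy as np

def elementary_effects(model, lower, upper, n_traj=20, delta=0.25, seed=0):
    """Radial elementary-effects screening: mean absolute effect (mu*) per
    parameter, from one-at-a-time steps taken from random base points."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    k = lower.size
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in the unit cube
        y0 = model(lower + x * (upper - lower))
        for i in range(k):
            x_step = x.copy()
            x_step[i] += delta                      # perturb parameter i only
            y1 = model(lower + x_step * (upper - lower))
            effects[i].append((y1 - y0) / delta)
    return np.array([np.mean(np.abs(e)) for e in effects])

# Hypothetical 3-parameter surrogate: strong in p[0], weak in p[1], inert p[2].
mu_star = elementary_effects(lambda p: 10.0 * p[0] + 0.1 * p[1],
                             lower=[0, 0, 0], upper=[1, 1, 1])
```

Parameters with small mu* (here the third) are the candidates one would fix before an expensive Sobol or calibration run.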
Batterbee, D C; Sims, N D; Becker, W; Worden, K; Rowson, J
2011-11-01
Non-accidental head injury in infants, or shaken baby syndrome, is a highly controversial and disputed topic. Biomechanical studies often suggest that shaking alone cannot cause the classical symptoms, yet many medical experts believe the contrary. Researchers have turned to finite element modelling for a more detailed understanding of the interactions between the brain, skull, cerebrospinal fluid (CSF), and surrounding tissues. However, the uncertainties in such models are significant; these can arise from theoretical approximations, lack of information, and inherent variability. Consequently, this study presents an uncertainty analysis of a finite element model of a human head subject to shaking. Although the model geometry was greatly simplified, fluid-structure-interaction techniques were used to model the brain, skull, and CSF using a Eulerian mesh formulation with penalty-based coupling. Uncertainty and sensitivity measurements were obtained using Bayesian sensitivity analysis, which is a technique that is relatively new to the engineering community. Uncertainty in nine different model parameters was investigated for two different shaking excitations: sinusoidal translation only, and sinusoidal translation plus rotation about the base of the head. The level and type of sensitivity in the results was found to be highly dependent on the excitation type.
Kinetic modeling and sensitivity analysis of plasma-assisted combustion
Togai, Kuninori
Plasma-assisted combustion (PAC) is a promising combustion enhancement technique that shows great potential for applications to a number of different practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which the fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for the conditions corresponding to the experiments and the results are compared against each other. Important reaction paths were identified through path flux and sensitivity analyses. Reaction systems studied in this work are oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel oxidation studies, reaction schemes that control the fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in plasma play an important role in initiating the reaction sequence. At low temperatures where the system exhibits a chain-terminating nature, reactions of HO2 were found to play important roles in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the
Morshed, Monjur; Ingalls, Brian; Ilie, Silvana
2017-01-01
Sensitivity analysis characterizes the dependence of a model's behaviour on system parameters. It is a critical tool in the formulation, characterization, and verification of models of biochemical reaction networks, for which confident estimates of parameter values are often lacking. In this paper, we propose a novel method for sensitivity analysis of discrete stochastic models of biochemical reaction systems whose dynamics occur over a range of timescales. This method combines finite-difference approximations and adaptive tau-leaping strategies to efficiently estimate parametric sensitivities for stiff stochastic biochemical kinetics models, with negligible loss in accuracy compared with previously published approaches. We analyze several models of interest to illustrate the advantages of our method.
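A finite-difference parametric sensitivity for a discrete stochastic model, in the spirit described above (though with plain Gillespie simulation rather than the authors' adaptive tau-leaping), might look as follows; the birth-death model, rates, and sample sizes are illustrative assumptions:

```python
import numpy as np

def birth_death_mean(k_birth, k_death, x0=0, t_end=5.0, n_runs=2000, seed=1):
    """Mean copy number at t_end for a birth-death process, via Gillespie's
    direct method. Re-using the same seed across parameter perturbations
    (common random numbers) reduces finite-difference noise."""
    rng = np.random.default_rng(seed)
    finals = np.empty(n_runs)
    for r in range(n_runs):
        t, x = 0.0, x0
        while True:
            a_total = k_birth + k_death * x         # total propensity
            t += rng.exponential(1.0 / a_total)     # time to next event
            if t > t_end:
                break
            if rng.random() < k_birth / a_total:
                x += 1                              # birth
            else:
                x -= 1                              # death
        finals[r] = x
    return float(finals.mean())

# Forward finite difference d<mean>/dk_birth with common random numbers.
# For this linear model the exact value is (1 - exp(-k_death*t_end))/k_death,
# roughly 1 here.
h = 0.5
sens = (birth_death_mean(2.0 + h, 1.0) - birth_death_mean(2.0, 1.0)) / h
```

For stiff systems with timescale separation, the exact-SSA inner loop above is what the paper's tau-leaping strategy replaces.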
Global sensitivity analysis applied to drying models for one or a population of granules
Mortier, Severine Therese F. C.; Gernaey, Krist; De Beer, Thomas
2014-01-01
The development of mechanistic models for pharmaceutical processes is of increasing importance due to a noticeable shift toward continuous production in the industry. Sensitivity analysis is a powerful tool during the model building process. A global sensitivity analysis (GSA), exploring sensitivity in a broad parameter space, is performed to detect the most sensitive factors in two models, that is, one for the drying of a single granule and one for the drying of a population of granules [using a population balance model (PBM)], which was extended by including the gas velocity as an extra input compared to our earlier work. beta(2) was found to be the most important factor for the single-particle model, which is useful information when performing model calibration. For the PBM model, the granule radius and gas temperature were found to be most sensitive. The former indicates that granulator...
Sensitivity analysis in the WWTP modelling community – new opportunities and applications
Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.
2010-01-01
A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity ...
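The "mainstream" one-parameter-at-a-time differential analysis described in this abstract can be sketched directly; the Monod-type rate expression used as the model here is a hypothetical stand-in for a wastewater process model:

```python
import numpy as np

def oat_sensitivities(model, p0, rel_step=0.01):
    """First-order local sensitivities dy/dp_i, obtained by perturbing one
    parameter at a time by a small relative factor."""
    p0 = np.asarray(p0, float)
    y0 = model(p0)
    sens = np.empty_like(p0)
    for i, p in enumerate(p0):
        dp = rel_step * p if p != 0 else rel_step
        p_pert = p0.copy()
        p_pert[i] = p + dp
        sens[i] = (model(p_pert) - y0) / dp       # forward difference
    return sens

# Hypothetical Monod-type rate r = mu_max * S / (Ks + S), p = (mu_max, S, Ks).
monod = lambda p: p[0] * p[1] / (p[2] + p[1])
s = oat_sensitivities(monod, [4.0, 10.0, 2.0])
```

The alternative (global) viewpoint the abstract alludes to would instead vary all parameters simultaneously over their full ranges.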
Rakovec, O.; Hill, M.C.; Clark, M.P.; Weerts, A.H.; Teuling, A.J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
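The adjoint-based derivative information described above can be illustrated on a deliberately tiny example (a scalar linear ODE, not the plague model); the discretization choices are ours:

```python
import numpy as np

def adjoint_gradient(p, x0=1.0, T=2.0, n=2000):
    """dJ/dp for J = x(T), dx/dt = f(x, p) = -p*x, via the adjoint equation
    d(lam)/dt = -lam * df/dx = p*lam with terminal condition lam(T) = 1,
    and dJ/dp = integral_0^T lam(t) * df/dp dt, where df/dp = -x(t)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):                      # forward pass, explicit Euler
        x[i + 1] = x[i] - dt * p * x[i]
    lam = np.empty(n + 1)
    lam[n] = 1.0                            # lam(T) = dJ/dx(T)
    for i in range(n, 0, -1):               # backward pass
        lam[i - 1] = lam[i] - dt * p * lam[i]
    return float(dt * np.sum(lam[:-1] * (-x[:-1])))  # quadrature of lam*df/dp

grad = adjoint_gradient(p=0.5)
exact = -2.0 * np.exp(-1.0)                 # analytic dJ/dp = -T*x0*exp(-p*T)
```

The appeal, as in the paper, is that one backward solve yields the derivative of the output with respect to every parameter, at roughly the cost of a single extra simulation.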
van Voorn, George A. K.; Kooi, Bob W.
2017-06-01
Plato's well-known allegory of the cave describes an observer chained in a cave facing a blank wall on which shadows are projected of objects that are outside the cave. Only by breaking free from the chains can the observer emerge from the cave to see what the objects really look like. Ecological model features compare to the objects outside the cave in this allegory. By performing model analysis light is shed on these features, creating projections that researchers can see. Model analysis methodologies like bifurcation analysis and sensitivity analysis each focus on particular model features and thus allow researchers to uncover only part of the model behaviour. By combining methodologies for model analysis possibilities arise for unravelling more of the model's behaviour, allowing researchers to `break free'. In this paper benefits and issues of combining model analysis methodologies are discussed using a case study. The case study involves three representations of the well-known Rosenzweig-MacArthur predator-prey model, namely the usual one where state variables and parameters have dimensions, a dimensionless representation, and a generalized representation. Based on the results we argue that researchers should combine bifurcation and sensitivity analysis methodologies when analyzing ecological models.
Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland
Perk, van der M.; Bierkens, M.F.P.
1995-01-01
A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally,the model was validated to test the model capabil
Integrated Sensitivity Analysis Workflow
Friedman-Hill, Ernest J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Edward L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gibson, Marcus J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Clay, Robert L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
GCR Environmental Models I: Sensitivity Analysis for GCR Environments
Slaba, Tony C.; Blattnig, Steve R.
2014-01-01
Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.
Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan
2009-02-01
Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.
Sensitivity Analysis and Statistical Convergence of a Saltating Particle Model
Maldonado, S
2016-01-01
Saltation models provide considerable insight into near-bed sediment transport. This paper outlines a simple, efficient numerical model of stochastic saltation, which is validated against previously published experimental data on saltation in a channel with a nearly horizontal bed. Convergence tests are systematically applied to ensure the model is free from statistical errors emanating from the number of particle hops considered. Two criteria for statistical convergence are derived; according to the first criterion, at least 10^3 hops appear to be necessary for convergent results, whereas 10^4 saltations seem to be the minimum required in order to achieve statistical convergence in accordance with the second criterion. Two empirical formulae for lift force are considered: one dependent on the slip (relative) velocity of the particle multiplied by the vertical gradient of the horizontal flow velocity component; the other dependent on the difference between the squares of the slip velocity components at the to...
A global sensitivity analysis approach for morphogenesis models
Boas, S.E.M.; Navarro Jimenez, M.I.; Merks, R.M.H.; Blom, J.G.
2015-01-01
Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the
Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model
Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.
2015-04-01
As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction - SCF) for a distributed rainfall-runoff model to investigate when the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, has been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall
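The "traditional" normalized sensitivity function defined in this abstract is straightforward to compute; the one-compartment bolus model below is a hypothetical simplification, not the authors' 14-tissue PBPK model:

```python
import numpy as np

def normalized_sensitivity(f, p, i, t, rel_step=1e-4):
    """Normalized sensitivity (p_i / y) * dy/dp_i: the relative change in a
    model state per relative change in a parameter."""
    dp = rel_step * p[i]
    p_hi = list(p)
    p_hi[i] += dp                       # perturb parameter i only
    y = f(p, t)
    return (p[i] / y) * (f(p_hi, t) - y) / dp

# Hypothetical one-compartment i.v. bolus model: C(t) = (dose/V)*exp(-(CL/V)*t).
def conc(p, t):
    dose, vol, cl = p
    return (dose / vol) * np.exp(-(cl / vol) * t)

# Sensitivity of concentration to clearance at t = 5 with V = 10, CL = 2;
# the analytic value is -CL*t/V = -1.
s_cl = normalized_sensitivity(conc, [100.0, 10.0, 2.0], i=2, t=5.0)
```

Being dimensionless, such normalized sensitivities can be compared across parameters with different units, which is what makes the tissue and parameter rankings above possible.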
Using sensitivity analysis to validate the predictions of a biomechanical model of bite forces.
Sellers, William Irvin; Crompton, Robin Huw
2004-02-01
Biomechanical modelling has become a very popular technique for investigating functional anatomy. Modern computer simulation packages make producing such models straightforward, and it is tempting to take the results produced at face value. However, the predictions of a simulation are only valid when both the model and the input parameters are accurate, and little work has been done to verify this. In this paper a model of the human jaw is produced and a sensitivity analysis is performed to validate the results. The model is built using the ADAMS multibody dynamic simulation package, incorporating the major occlusive muscles of mastication (temporalis, masseter, medial and lateral pterygoids) as well as a highly mobile temporomandibular joint. This model is used to predict the peak three-dimensional bite forces at each tooth location, the joint reaction forces, and the contributions made by each individual muscle. The results for occlusive bite force (1080 N at M1) match those previously published, suggesting the model is valid. The sensitivity analysis was performed by sampling the input parameters from likely ranges and running the simulation many times rather than using single, best-estimate values. This analysis shows that the magnitudes of the peak retractive forces on the lower teeth were highly sensitive to the chosen origin (and hence fibre direction) of the temporalis and masseter muscles, as well as the laxity of the TMJ. Peak protrusive force was also sensitive to the masseter origin. These results show that the model is insufficiently complex to estimate these values reliably, although the much lower sensitivity values obtained for the bite forces in the other directions and for the joint reaction forces suggest that those predictions are sound. Without the sensitivity analysis it would not have been possible to identify these weaknesses, which strongly supports the use of sensitivity analysis as a validation technique for biomechanical modelling.
Stability and Sensitive Analysis of a Model with Delay Quorum Sensing
Zhonghua Zhang
2015-01-01
This paper formulates a delay model characterizing the competition between bacteria and immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the most sensitive parameter of the threshold parameter R0 and should be targeted in the controlling strategies.
Siamphukdee, Kanjana; Collins, Frank; Zou, Roger
2013-06-01
Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
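Of the three methods compared above, the multivariate-regression approach can be sketched as standardized regression coefficients fitted to Monte Carlo samples; the linear toy output standing in for a corrosion-rate model, and all bounds, are assumptions:

```python
import numpy as np

def src_indices(model, lower, upper, n=2000, seed=0):
    """Multivariate-regression sensitivity: sample inputs uniformly, fit a
    linear regression of the output on the inputs, and report standardized
    regression coefficients beta_i * sd(x_i) / sd(y)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    X = rng.uniform(lower, upper, size=(n, lower.size))
    y = np.array([model(x) for x in X])
    A = np.column_stack([np.ones(n), X])             # design matrix + intercept
    coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]  # drop the intercept
    return coef * X.std(axis=0) / y.std()

# Toy stand-in for a corrosion-rate model: dominated by the first input.
src = src_indices(lambda x: 5.0 * x[0] + 1.0 * x[1] + 0.0 * x[2],
                  lower=[0, 0, 0], upper=[1, 1, 1])
```

For a linear model the squared coefficients sum to one, so each index can be read as a share of output variance; the univariate-regression and sensitivity-index variants differ mainly in fitting one input at a time.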
Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.
2016-09-01
In this work, a simplified electrochemical and thermal model that can predict both physicochemical and aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to find out their influence on the model output based on simulations under various conditions. The results gave hints on whether a parameter needs particular attention when measured or identified and on the conditions (e.g. temperature, discharge rate) under which it is the most sensitive. A specific simulation profile is designed for parameters involved in aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fitting of the simulated cell voltage with experimental data.
A sensitivity analysis using different spatial resolution terrain models and flood inundation models
Papaioannou, George; Aronica, Giuseppe T.; Loukas, Athanasios; Vasiliades, Lampros
2014-05-01
Terrain spatial resolution and accuracy can strongly affect the water depths and flood extents predicted by hydraulic flood modeling. Another significant factor affecting hydraulic flood modeling outputs is the selection of the hydrodynamic model (1D, 2D, 1D/2D). Human mortality, ravaged infrastructure, and other damages can result from extreme flash flood events, which prevail in lowlands in suburban and urban areas. These incidents make a detailed description of the terrain and the use of advanced hydraulic models essential for accurately mapping the spatial distribution of flooded areas. In this study, a sensitivity analysis is undertaken using different spatial resolutions of Digital Elevation Models (DEMs) and several hydraulic modeling approaches (1D, 2D, 1D/2D), including their effect on the results of river flow modeling and floodplain mapping. Three digital terrain models (DTMs) were generated from different elevation sources: Terrestrial Laser Scanning (TLS) point cloud data, classic land surveying, and digitization of elevation contours from 1:5000-scale topographic maps. HEC-RAS and MIKE 11 are the one-dimensional hydraulic models used; MLFP-2D (Aronica et al., 1998) and MIKE 21 are the two-dimensional hydraulic models. The last case consists of the integration of MIKE 11/MIKE 21, where the 1D MIKE 11 and 2D MIKE 21 hydraulic models are coupled through the MIKE FLOOD platform. The validation of water depths and flood extents is achieved through historical flood records. Observed flood inundation areas, in terms of simulated maximum water depth and flood extent, were used to assess the validity of each application's results. The methodology has been applied to the suburban section of the Xerias river at Volos, Greece. Each dataset has been used to create a flood inundation map for different cross-section configurations using different hydraulic models. The comparison of the resulting flood inundation maps indicates
Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen
Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J
2014-01-01
A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight to the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
Bastidas, Luis A.; Knighton, James; Kline, Shaun W.
2016-09-01
Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
Bayesian sensitivity analysis of incomplete data: bridging pattern-mixture and selection models.
Kaciroti, Niko A; Raghunathan, Trivellore
2014-11-30
Pattern-mixture models (PMM) and selection models (SM) are alternative approaches for statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations to identify PMM for different types of outcome with distribution in the exponential family; then we translate these to their equivalent SM approach. This provides a unified framework for performing sensitivity analysis under either setting. These new parameterizations are transparent, easy-to-use, and provide dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences using informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling.
Stochastic sensitivity analysis of the attractors for the randomly forced Ricker model with delay
Bashkirtseva, Irina; Ryashko, Lev
2014-11-14
Stochastically forced regular attractors (equilibria, cycles, closed invariant curves) of discrete-time nonlinear systems are studied. For the analysis of noisy attractors, a unified approach based on the stochastic sensitivity function technique is suggested and discussed. Potentialities of the elaborated theory are demonstrated in the parametric analysis of the stochastic Ricker model with delay near the Neimark–Sacker bifurcation.
Ziliani, L.; Surian, N.; Coulthard, T. J.; Tarantola, S.
2013-12-01
This paper addresses an important question of modeling stream dynamics: how may numerical models of braided stream morphodynamics be rigorously and objectively evaluated against a real case study? Using simulations from the Cellular Automaton Evolutionary Slope and River (CAESAR) reduced-complexity model (RCM) of a 33 km reach of a large gravel bed river (the Tagliamento River, Italy), this paper aims to (i) identify a sound strategy for calibration and validation of RCMs, (ii) investigate the effectiveness of multiperformance model assessments, and (iii) assess the potential of using CAESAR at mesospatial and mesotemporal scales. The approach used has three main steps: first sensitivity analysis (using a screening method and a variance-based method), then calibration, and finally validation. This approach allowed us to analyze 12 input factors initially and then to focus calibration only on the factors identified as most important. Sensitivity analysis and calibration were performed on a 7.5 km subreach, using a hydrological time series of 20 months, while validation was carried out on the whole 33 km study reach over a period of 8 years (2001-2009). CAESAR was able to reproduce the macromorphological changes of the study reach and gave good results for annual bed load sediment estimates, which turned out to be consistent with measurements in other large gravel bed rivers, but it showed a poorer performance in reproducing the characteristics of the braided channel (e.g., braiding intensity). The approach developed in this study can be effectively applied in other similar RCM contexts, allowing the use of RCMs not only in an explorative manner but also for obtaining quantitative results and scenarios.
Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models
Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)
2011-04-15
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
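The Sobol'-Saltelli method compared in the abstract estimates first-order indices with a pick-freeze sampling design. A minimal sketch of that estimator (not the authors' implementation), using a toy additive model in place of the wheat crop model:

```python
import numpy as np

def sobol_first_order(model, k, n=20000, rng=None):
    # Saltelli-style pick-freeze estimator for first-order Sobol indices:
    # S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), where AB_i equals A
    # except that column i is taken from B.
    rng = np.random.default_rng(rng)
    A = rng.random((n, k))
    B = rng.random((n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # freeze all factors except factor i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy model: x0 dominates, x1 is weak, x2 is inert
S = sobol_first_order(lambda X: 4 * (X[:, 0] - 0.5) + (X[:, 1] - 0.5), 3, rng=1)
```

For this additive toy the exact first-order indices are 16/17, 1/17 and 0, so the estimate should rank the factors accordingly.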
Multi-objective global sensitivity analysis of the WRF model parameters
Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen
2015-04-01
Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as the Weather Research and Forecasting (WRF) model. However, this is a very complicated process, as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure that all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model with respect to different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing longwave radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters affecting the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities for 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs, but three to four of the parameters are sensitive for all model outputs considered. The sensitivity results from this research can form the basis for future parameter optimization of the WRF model.
Price, Jason Anthony; Nordblad, Mathias; Woodley, John
2014-01-01
This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper proposes a portfolio selection problem that incorporates an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem involves both randomness and subjectivity represented by fuzzy numbers, it is formulated as a random fuzzy programming problem and is not well-defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis for the fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen;
2010-01-01
organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvents degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data...... and global sensitivity analysis is performed....
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...
Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.
2001-01-01
This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis investig
Spatial sensitivity analysis of snow cover data in a distributed rainfall–runoff model
T. Berezowski
2014-10-01
As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is growing strongly, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, the sensitivity of model results to spatial input data is not well understood. In this paper we develop a spatial sensitivity analysis (SA) method for snow cover fraction (SCF) input data for a distributed rainfall-runoff model, to investigate whether the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focused on the relation between the SCF sensitivity and the physical, spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate two years of daily runoff. The SA uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, with different response functions for each 4 km × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by the co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for the spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model.
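The LH-OAT algorithm combines Latin-hypercube sampling of base points with one-factor-at-a-time perturbations around each of them. A minimal sketch of the idea (a simplified version, not the WetSpa implementation; the toy model and bounds are assumptions for illustration):

```python
import numpy as np

def lh_oat(model, bounds, n_points=50, frac=0.05, rng=None):
    # Latin-Hypercube One-factor-At-a-Time sensitivity: sample stratified
    # base points, perturb each factor by a fixed fraction, and average the
    # absolute relative partial effects (assumes model output is nonzero).
    rng = np.random.default_rng(rng)
    k = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    # Latin hypercube: one sample per stratum, independently shuffled per factor
    u = (rng.random((n_points, k)) + np.arange(n_points)[:, None]) / n_points
    for j in range(k):
        rng.shuffle(u[:, j])
    X = lo + u * (hi - lo)
    effects = np.zeros(k)
    for x in X:
        y0 = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] *= 1.0 + frac
            effects[i] += abs((model(xp) - y0) / (y0 * frac))
    return effects / n_points

# Toy model: the first factor has the stronger partial effect
eff = lh_oat(lambda x: x[0] ** 2 + x[1], [(1, 2), (1, 2)], rng=0)
```

In the study above, the same scheme is applied per snow zone, with a separate response function for each zone.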
Global Sensitivity Analysis for Multiple Scenarios and Models of Nitrogen Processes
Chen, Z.; Shi, L.; Ye, M.
2015-12-01
Modeling nitrogen processes in soil is a long-lasting challenge, partly because of the uncertainties arising from parameters, models and scenarios; it may be difficult to identify a suitable model and its corresponding parameters. This study assesses the global sensitivity indices for parameters of multiple models and scenarios of nitrogen processes. The majority of existing nitrogen dynamics models treat nitrification and denitrification as a first-order decay process or a Michaelis-Menten model, while various reduction functions are used to reflect the impact of environmental soil conditions. To address the model uncertainty, 9 alternative models were designed based on the NP2D model in this study. These models have similar descriptions of the nitrogen processes but differ in the reduction functions for soil water and temperature. A global sensitivity analysis of each model under various scenarios was performed. Results show that, in our synthetic cases of nitrogen transport and transformation, the global sensitivity indices vary across models and scenarios. Larger indices are obtained for the nitrification parameters than for the denitrification parameters in 6 models, while the inverse relationship is revealed in the remaining 3 models. Parameters of the soil temperature reduction functions are more sensitive than those of the soil water reduction functions. When soil water and temperature increase, separately or together, the denitrification parameters gain sensitivity, while the indices for the soil temperature reduction function parameters decrease simultaneously. Our results indicate that the identification of important parameters may be biased if model and scenario uncertainties are ignored. This problem can be resolved by using global sensitivity indices computed over multiple models and multiple scenarios. The new indices are also useful for determining the relative contributions of different models and scenarios.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately to weakly sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were weakly sensitive to sediment and particulate pollutants, whereas the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for the nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were weakly sensitive to the corresponding outputs. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for the selection and calibration of AnnAGNPS model parameters. The runoff simulation results for the study area also showed that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
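The perturbation method referred to above can be sketched as a normalized local sensitivity coefficient: the relative change in output divided by the relative change in one parameter. A minimal illustration (the toy model and values are assumptions, not AnnAGNPS quantities):

```python
def relative_sensitivity(model, x0, frac=0.001):
    # Perturbation-method sensitivity: S_i = (dY / Y) / (dX_i / X_i),
    # estimated by increasing parameter i by a small fraction `frac`
    # while holding the other parameters fixed.
    y0 = model(x0)
    sens = []
    for i in range(len(x0)):
        xp = list(x0)
        xp[i] *= 1.0 + frac
        sens.append((model(xp) - y0) / (y0 * frac))
    return sens

# Toy model y = a^2 * b: the exact relative sensitivities are 2 and 1
S = relative_sensitivity(lambda p: p[0] ** 2 * p[1], [2.0, 3.0])
```

Because the coefficient is dimensionless, parameters with different units can be ranked on the same scale.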
A sensitivity analysis of the WIPP disposal room model: Phase 1
Labreche, D.A.; Beikmann, M.A. [RE/SPEC, Inc., Albuquerque, NM (United States); Osnes, J.D. [RE/SPEC, Inc., Rapid City, SD (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States)
1995-07-01
The WIPP Disposal Room Model (DRM) is a numerical model with three major components -- constitutive models of TRU waste, crushed salt backfill, and intact halite -- and several secondary components, including air gap elements, slidelines, and assumptions on symmetry and geometry. A sensitivity analysis of the Disposal Room Model was initiated on two of the three major components (the waste and backfill models) and on several secondary components as a group. The immediate goal of this component sensitivity analysis (Phase I) was to sort (rank) model parameters in terms of their relative importance to model response so that a Monte Carlo analysis on a reduced set of DRM parameters could be performed under Phase II. The goal of the Phase II analysis will be to develop a probabilistic definition of a disposal room porosity surface (porosity, gas volume, time) that could be used in WIPP Performance Assessment analyses. This report documents a literature survey which quantifies the relative importance of the secondary room components to room closure, a differential analysis of the creep consolidation model and definition of a follow-up Monte Carlo analysis of the model, and an analysis and refitting of the waste component data on which a volumetric plasticity model of TRU drum waste is based. A summary, evaluation of progress, and recommendations for future work conclude the report.
Supplementary Material for: A global sensitivity analysis approach for morphogenesis models
Boas, Sonja
2015-01-01
Abstract Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparing the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
[Local sensitivity and its stationarity analysis for urban rainfall runoff modelling].
Lin, Jie; Huang, Jin-Liang; Du, Peng-Fei; Tu, Zhen-Shun; Li, Qing-Sheng
2010-09-01
Sensitivity analysis of urban runoff simulation is a crucial procedure for parameter identification and uncertainty analysis. A local sensitivity analysis using the Morris screening method was carried out for urban rainfall-runoff modelling based on the Storm Water Management Model (SWMM). The results showed that Area, % Imperv and Dstore-Imperv are the most sensitive parameters for both total runoff volume and peak flow. For total runoff volume, the sensitivity indices of Area, % Imperv and Dstore-Imperv were 0.46-1.0, 0.61-1.0 and -0.050(-) - 5.9, respectively; with respect to peak flow, they were 0.48-0.89, 0.59-0.83 and 0(-) -9.6, respectively. In comparison, the largest sensitivity indices (Morris) for all parameters with regard to total runoff volume and peak flow appeared in the rainfall event with the least rainfall, while smaller indices occurred in the events with heavier rainfall. Furthermore, there is considerable variability in the sensitivity indices across rainfall events. The coefficients of variation of % Zero-Imperv were the largest among all parameters for total runoff volume and peak flow, namely 221.24% and 228.10%. On the contrary, the coefficients of variation of conductivity were the smallest among all parameters for both total runoff volume and peak flow, namely 0.
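The Morris screening method used above builds random one-at-a-time trajectories and summarizes each parameter by the mean and spread of its elementary effects. A minimal sketch on the unit hypercube (not the SWMM setup; the linear toy model is an assumption for illustration):

```python
import numpy as np

def morris_ee(model, k, r=10, levels=4, rng=None):
    # Morris elementary effects: r random trajectories on a `levels`-level
    # grid; returns mu (mean effect), mu* (mean |effect|) and sigma
    # (spread of effects, indicating nonlinearity/interactions).
    rng = np.random.default_rng(rng)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, levels - 1, k) / (levels - 1)  # grid base point
        y = model(x)
        for i in rng.permutation(k):  # perturb one factor at a time
            step = delta if x[i] + delta <= 1.0 else -delta
            x = x.copy()
            x[i] += step
            y_new = model(x)
            ee[t, i] = (y_new - y) / step
            y = y_new
    return ee.mean(axis=0), np.abs(ee).mean(axis=0), ee.std(axis=0)

# Linear toy model: the elementary effects are exactly the coefficients
mu, mu_star, sigma = morris_ee(lambda x: 3 * x[0] + x[1], 2, rng=0)
```

For a linear model sigma is zero; nonzero sigma flags interactions or nonlinearity.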
Comprehensive, Population-Based Sensitivity Analysis of a Two-Mass Vocal Fold Model.
Daniel Robertson
Previous vocal fold modeling studies have generally focused on generating detailed data regarding a narrow subset of possible model configurations. Such studies can be interpreted as the investigation of a single subject under one or more vocal conditions. In this study, a broad population-based sensitivity analysis is employed to examine the behavior of a virtual population of subjects and to identify trends between virtual individuals, as opposed to investigating a single subject or model instance. Four different sensitivity analysis techniques were used in accomplishing this task. Influential relationships between model input parameters and model outputs were identified, and an exploration of the model's parameter space was conducted. Results indicate that the behavior of the selected two-mass model is largely dominated by complex interactions, and that few input-output pairs have a consistent effect on the model. Results from the analysis can be used to increase the efficiency of optimization routines of reduced-order models used to investigate voice abnormalities. The results also demonstrate the types of challenges and difficulties to be expected when applying sensitivity analyses to more complex vocal fold models. Such challenges are discussed and recommendations are made for future studies.
Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool
Wenjie Tian
2014-01-01
Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for the precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated. This provides designers and field engineers with an informative guideline for accuracy improvement by suitable measures, that is, component tolerancing in the design, manufacturing and assembly processes, and error compensation. A sensitivity analysis method is then proposed, and the sensitivities of the compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.
Time-dependent global sensitivity analysis with active subspaces for a lithium ion battery model
Constantine, Paul G
2016-01-01
Renewable energy researchers use computer simulation to aid the design of lithium ion storage devices. The underlying models contain several physical input parameters that affect model predictions. Effective design and analysis must understand the sensitivity of model predictions to changes in model parameters, but global sensitivity analyses become increasingly challenging as the number of input parameters increases. Active subspaces are part of an emerging set of tools to reveal and exploit low-dimensional structures in the map from high-dimensional inputs to model outputs. We extend a linear model-based heuristic for active subspace discovery to time-dependent processes and apply the resulting technique to a lithium ion battery model. The results reveal low-dimensional structure that a designer may exploit to efficiently study the relationship between parameters and predictions.
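The active-subspace idea described above can be sketched in a few lines: estimate the matrix C = E[∇f ∇fᵀ] by sampling gradients, then eigendecompose it; a sharp drop in the eigenvalues reveals a low-dimensional subspace that dominates the input-output map. A toy illustration (not the battery model; the gradient function is an assumed example):

```python
import numpy as np

def active_subspace(grad_f, k, n=500, rng=None):
    # Monte Carlo estimate of C = E[grad f grad f^T] over the input box,
    # followed by an eigendecomposition; eigenvectors with large
    # eigenvalues span the active subspace.
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1.0, 1.0, (n, k))
    G = np.array([grad_f(x) for x in X])
    C = G.T @ G / n
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]  # sort eigenpairs in descending order
    return evals[order], evecs[:, order]

# Toy f = 0.5*(x0 + x1)^2 varies only along the direction (1, 1, 0)/sqrt(2)
grad = lambda x: np.array([x[0] + x[1], x[0] + x[1], 0.0])
evals, evecs = active_subspace(grad, 3, rng=2)
```

Here a single dominant eigenvector recovers the one active direction; a designer can then study the model along that direction instead of over all inputs.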
Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan
2009-01-01
The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predict...
Sayar, N.A.; Chen, B.H.; Lye, G.J.
2009-01-01
In this paper we have used a proposed mathematical model, describing the carbon-carbon bond formation reaction between beta-hydroxypyruvate and glycolaldehyde to synthesise L-erythrulose, catalysed by the enzyme transketolase, for the analysis of the sensitivity of the process to its kinetic par....
Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC
J. Yang
2014-03-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple, often conflicting, objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, combining two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with a multiobjective optimization (MOO) approach, ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error of logarithmically transformed discharge, the water balance index, and the mean absolute error of the logarithmically transformed flow duration curve, and its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder–Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that: (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash–Sutcliffe efficiency is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of these objective functions. Multiobjective sensitivity analysis and
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol
Farzin Shabani
Using CLIMEX and the Taguchi method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution, and validation was carried out in terms of its extensive distribution in Iran. To identify the model parameters having the greatest influence on the distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping the regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications had the greatest impact, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by calculating the area changes within the suitable and highly suitable categories. The low temperature limit (DV2), high temperature limit (DV3), upper optimal temperature (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while the other parameters showed relatively little sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. The results of this study demonstrate a more cost-effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining data collection requirements in potential distribution modeling for other species as well.
Neumann, Marc B
2012-09-01
Five sensitivity analysis methods based on derivatives, screening, regression, variance decomposition and entropy are introduced, applied and compared for a model predicting micropollutant degradation in drinking water treatment. The sensitivity analysis objectives considered are factors prioritisation (detecting important factors), factors fixing (detecting non-influential factors) and factors mapping (detecting which factors are responsible for causing pollutant limit exceedances). It is shown how the applicability of methods changes in view of increasing interactions between model factors and increasing non-linearity between the model output and the model factors. A high correlation is observed between the indices obtained for the objectives factors prioritisation and factors mapping due to the positive skewness of the probability distributions of the predicted residual pollutant concentrations. The entropy-based method which uses the Kullback-Leibler divergence is found to be particularly suited when assessing pollutant limit exceedances.
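The entropy-based index mentioned above compares output distributions via the Kullback-Leibler divergence. A minimal sketch of one such variant (an illustrative assumption, not the paper's exact estimator): measure how far the output distribution moves when a factor is conditioned on the upper half of its range.

```python
import numpy as np

def kl_sensitivity(model, k, n=20000, bins=30, rng=None):
    # Per-factor KL divergence between the conditional output histogram
    # p(y | x_i > 0.5) and the marginal histogram p(y); inputs are
    # sampled uniformly on [0, 1]^k.
    rng = np.random.default_rng(rng)
    X = rng.random((n, k))
    y = model(X)
    edges = np.histogram_bin_edges(y, bins)
    p = np.histogram(y, edges)[0] + 1e-12  # smoothed marginal counts
    p = p / p.sum()
    kl = np.zeros(k)
    for i in range(k):
        q = np.histogram(y[X[:, i] > 0.5], edges)[0] + 1e-12
        q = q / q.sum()
        kl[i] = np.sum(q * np.log(q / p))
    return kl

# Toy model: x0 dominates, x1 is weak, x2 is inert
kl = kl_sensitivity(lambda X: 5 * X[:, 0] + X[:, 1], 3, rng=3)
```

A large divergence for a factor means fixing that factor visibly reshapes the output distribution, which is the factors-mapping objective discussed in the abstract.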
Sensitivity analysis of CLIMEX parameters in modelling potential distribution of Lantana camara L.
Subhashni Taylor
A process-based niche model of L. camara L. (lantana), a highly invasive shrub species, was developed to estimate its potential distribution using CLIMEX. Model development was carried out using its native and invasive distribution, and validation was carried out with the extensive Australian distribution. A good fit was observed, with 86.7% of herbarium specimens collected in Australia occurring within the suitable and highly suitable categories. A sensitivity analysis was conducted to identify the model parameters that had the most influence on lantana distribution. The changes in suitability were assessed by mapping the regions where the distribution changed with each parameter alteration. This allowed an assessment of where, within Australia, the modification of each parameter had the most impact, particularly in terms of the suitable and highly suitable locations. The sensitivity of the various parameters was also evaluated by calculating the changes in area within the suitable and highly suitable categories. The limiting low temperature (DV0), limiting high temperature (DV3) and limiting low soil moisture (SM0) showed the highest sensitivity to change; the other model parameters were relatively insensitive to change. Highly sensitive parameters require extensive research and data collection to be fitted accurately in species distribution models. The results from this study can inform more cost-effective development of species distribution models for lantana. Such models form an integral part of the management of invasive species, and the results can be used to streamline data collection requirements for potential distribution modelling.
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
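The Sobol' analysis applied above to LISFLOOD-FP outputs can be sketched in a few lines on a hypothetical additive test function. This is the standard Saltelli pick-and-freeze estimator of first-order indices, not the authors' implementation; the model, its three inputs and the sample size are all placeholders.

```python
import random

random.seed(0)

def model(x):
    # Hypothetical additive test function: x[0] dominates the response
    return 4.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]

d, N = 3, 4096
# Two independent sample matrices over the unit cube (Saltelli design)
A = [[random.random() for _ in range(d)] for _ in range(N)]
B = [[random.random() for _ in range(d)] for _ in range(N)]
yA = [model(a) for a in A]
yB = [model(b) for b in B]

mean = sum(yA + yB) / (2 * N)
var = sum((y - mean) ** 2 for y in yA + yB) / (2 * N)

S1 = []
for i in range(d):
    # AB_i: rows of A with column i taken from B
    yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    # First-order index estimator: E[yB * (yAB_i - yA)] / Var(y)
    S1.append(sum(yb * (yab - ya) for yb, yab, ya in zip(yB, yABi, yA)) / N / var)
print(S1)
```

For an additive model the first-order indices sum to roughly one, and the ranking directly answers the attribution question posed in the abstract: which uncertain input (boundary conditions, parameters, DEM resolution, DEM choice) contributes most to output variance.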
Application of nonlinear optimization method to sensitivity analysis of numerical model
XU Hui; MU Mu; LUO Dehai
2004-01-01
A nonlinear optimization method is applied to the sensitivity analysis of a numerical model. Theoretical analysis and numerical experiments indicate that this method can give not only a quantitative assessment of whether the numerical model is able to simulate the observations, but also the initial field that yields the optimal simulation. In particular, even when the simulation results appear satisfactory while both model error and initial error are considerably large, the nonlinear optimization method can, under some conditions, identify which error plays the dominant role.
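The core idea, searching for the perturbation within a prescribed bound that most degrades the simulation, can be sketched with a toy nonlinear model. Here a simple random search over a scalar initial perturbation stands in for the paper's optimization method; the logistic map, the bound and the cost function are all illustrative assumptions.

```python
import random

random.seed(2)

def forecast(x0, steps=10, r=3.7):
    # Toy nonlinear model: logistic map iterated forward in time
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x_ref, delta = 0.3, 0.01  # reference initial state, perturbation bound

# Random search for the initial perturbation (|p| <= delta) that
# maximizes the forecast departure -- the spirit of optimal
# initial-error analysis via nonlinear optimization
best_p, best_cost = 0.0, -1.0
base = forecast(x_ref)
for _ in range(2000):
    p = random.uniform(-delta, delta)
    cost = abs(forecast(x_ref + p) - base)
    if cost > best_cost:
        best_p, best_cost = p, cost
print(best_p, best_cost)
```

In practice a gradient-based optimizer (with the adjoint of the model) replaces the random search, but the objective, the worst-case perturbation subject to a norm constraint, is the same.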
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
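The conflict between "intuitive" definitions of sensitivity described above is easy to reproduce. In the hypothetical function below, the second input is locally flat at the centre of the domain (zero derivative) yet contributes most of the output variance, so a derivative-based local method and a variance-based global method rank the inputs in opposite orders. The function and coefficients are chosen purely for illustration.

```python
def f(x1, x2):
    # Locally flat in x2 at the centre, yet x2 drives most of the variance
    return x1 + 5.0 * (x2 ** 2 - x2)

h = 1e-5
# Local (derivative-based) sensitivities at the centre of the unit square
local = [abs(f(0.5 + h, 0.5) - f(0.5 - h, 0.5)) / (2 * h),
         abs(f(0.5, 0.5 + h) - f(0.5, 0.5 - h)) / (2 * h)]

# Global (variance-based) first-order indices via conditional means on a grid
n = 200
grid = [(i + 0.5) / n for i in range(n)]
Y = [f(a, b) for a in grid for b in grid]
mu = sum(Y) / len(Y)
V = sum((y - mu) ** 2 for y in Y) / len(Y)

def S1(axis):
    # Var over x_i of E[Y | x_i], normalised by total variance
    cond = []
    for v in grid:
        ys = [f(v, b) for b in grid] if axis == 0 else [f(a, v) for a in grid]
        cond.append(sum(ys) / n)
    m = sum(cond) / n
    return sum((c - m) ** 2 for c in cond) / n / V

gsa = [S1(0), S1(1)]
print(local, gsa)
```

The local method reports x2 as inert while the variance-based method reports it as dominant, which is precisely the kind of conflicting conclusion the abstract warns about.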
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
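A DELSA-style calculation can be sketched as follows: at many points sampled across the parameter space, compute local derivatives, scale them by the prior parameter variances, and normalise into local first-order indices. The two-parameter model below and its bounds are hypothetical; the point is the distribution of parameter importance across the space, not a specific hydrologic model.

```python
import random

random.seed(4)

def model(k, s):
    # Hypothetical two-parameter nonlinear response
    return k * s ** 2 + s

bounds = [(0.0, 2.0), (0.0, 1.0)]
var_prior = [(hi - lo) ** 2 / 12 for lo, hi in bounds]  # uniform prior variances

def delsa_point(theta, h=1e-6):
    # Local first-order indices at one parameter set (DELSA-style):
    # squared derivative times prior variance, normalised to sum to 1
    g = []
    for j in range(2):
        up = list(theta); up[j] += h
        dn = list(theta); dn[j] -= h
        g.append((model(*up) - model(*dn)) / (2 * h))
    contrib = [g[j] ** 2 * var_prior[j] for j in range(2)]
    tot = sum(contrib)
    return [c / tot for c in contrib]

# Distribution of local indices across the parameter space
samples = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(1000)]
SL1 = [delsa_point(t) for t in samples]
share_k = [s[0] for s in SL1]
print(min(share_k), max(share_k))
```

The spread between the minimum and maximum share mirrors the abstract's finding that a parameter can dominate in some regions of parameter space and be unimportant in others, at a fraction of the cost of a full Sobol' analysis.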
Improvement of reflood model in RELAP5 code based on sensitivity analysis
Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn
2016-07-15
Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of a nuclear reactor during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks in current system code development. RELAP5, a widely used system code, can simulate this process but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared with the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are identified as the models most influential on the results of interest. Studies and discussions then focus on these sensitive models, and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data is obtained for both cladding temperature and quench time.
PANG Lei; ZHANG Jixian; YAN Qin
2010-01-01
For the high-resolution airborne synthetic aperture radar (SAR) stereo geolocation application, the final geolocation accuracy is influenced by various error parameter sources. In this paper, an airborne SAR stereo geolocation parameter error model, involving the parameter errors derived from the navigation system on the flight platform, is put forward. Moreover, a near-direct method for modeling and sensitivity analysis of navigation parameter errors is given. This method directly uses the ground reference to calculate the covariance relationship between the parameter errors and the eventual geolocation errors for ground target points. In addition, using the errors of true flight track parameters, the paper verifies the method, presents a corresponding sensitivity analysis for the airborne SAR stereo geolocation model, and demonstrates its efficiency.
Decomposition method of complex optimization model based on global sensitivity analysis
Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu
2014-07-01
Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems or components. However, numerous coupling variables appear among the decomposed sub-models, which lowers the efficiency and degrades the quality of the decomposed optimization. Although collaborative optimization methods have been proposed to handle the coupling variables, there is no upfront strategy for reducing the coupling degree among the sub-models at the moment a complex optimization model is decomposed. This paper therefore proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed so as to minimize the sum of sensitivities between design functions and design variables belonging to different sub-models: design functions and design variables that are sensitive to each other are assigned to the same sub-model as far as possible, reducing the impact on other sub-models caused by changes of coupling variables in any one sub-model. Two different collaborative optimization models of a gear reducer were built in the multidisciplinary design optimization software iSIGHT; the optimization results show that the proposed decomposition method requires fewer analyses and increases computational efficiency by 29.6%. The method is also successfully applied to the complex optimization problem of hydraulic excavator working devices, showing that it can reduce the mutual coupling degree between sub-models. This research proposes a decomposition method based on global sensitivity information that minimizes the linkages among sub-models after decomposition, provides a reference for decomposing complex optimization models, and has practical engineering significance.
Test and Sensitivity Analysis of Hydrological Modeling in the Coupled WRF-Urban Modeling System
Wang, Z.; yang, J.
2013-12-01
Rapid urbanization has emerged as the source of many adverse effects that challenge the environmental sustainability of cities under changing climatic patterns. One essential key to addressing these challenges is to physically resolve the dynamics of urban-land-atmosphere interactions. To investigate the impact of urbanization on regional climate, a physically based single-layer urban canopy model (SLUCM) has been developed and implemented in the Weather Research and Forecasting (WRF) platform. However, because it lacks a realistic representation of urban hydrological processes, simulation of urban climatology by the current coupled WRF-SLUCM is inevitably inadequate. Aiming to improve the accuracy of simulations, we recently implemented urban hydrological processes into the model, including (1) anthropogenic latent heat, (2) urban irrigation, (3) evaporation over impervious surfaces, and (4) the urban oasis effect. In addition, we couple a green roof system into the model to assess its capacity to alleviate the urban heat island effect at regional scale. Driven by different meteorological forcings, offline tests show that the enhanced model is more accurate in predicting turbulent fluxes arising from built terrain. Though the coupled WRF-SLUCM has been extensively tested against various field measurement datasets, an accurate input parameter space needs to be specified for good model performance. As realistic measurements of all input parameters to the modeling framework are rarely possible, understanding the model sensitivity to individual parameters is essential to determine the relative importance of parameter uncertainty to model performance. Thus we further use an advanced Monte Carlo approach to quantify the relative sensitivity of the input parameters of the hydrological model. In particular, performance of two widely used soil hydraulic models, namely the van Genuchten model (based on generic soil physics) and an empirical model (viz. the CHC model currently adopted in WRF
Berezowski, Tomasz; Chormański, Jarosław; Nossent, Jiri; Batelaan, Okke
2014-05-01
Distributed hydrological models enhance the analysis and explanation of environmental processes. As more spatial input data and time series become available, more analysis is required of the sensitivity of the simulations to the data. Most research so far has focused on the sensitivity of distributed hydrological models to precipitation data. However, such results cannot be compared until a universal approach to quantify the sensitivity of a model to spatial data is available. A frequently tested and used remote sensing input for distributed models is snow cover. Snow cover fraction (SCF) remote sensing products are easily available online, e.g., the MODIS snow cover product MOD10A1 (daily snow cover fraction at 500 m spatial resolution). In this work a spatial sensitivity analysis (SA) of remotely sensed SCF from MOD10A1 was conducted with the distributed WetSpa model. The aim is to investigate whether the WetSpa model is differently subject to SCF uncertainty in different areas of the model domain. The analysis was extended to look not only at SA quantities but also to relate them to the physical parameters and processes of the study area. The study area is the Biebrza River catchment, Poland, which is considered a semi-natural catchment subject to a spring snowmelt regime. Hydrological simulations are performed with the distributed WetSpa model over a simulation period of 2 hydrological years. For the SA the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm is used, with a set of different response functions in a regular 4 x 4 km grid. The results show that the spatial patterns of sensitivity can be easily interpreted through the co-occurrence of different landscape features. Moreover, the spatial patterns of the SA results are related to the WetSpa spatial parameters and to different physical processes. Based on the study results, it is clear that a spatial approach to SA can be performed with the proposed algorithm and the MOD10A1 SCF is spatially sensitive in
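The LH-OAT algorithm used above combines Latin Hypercube stratification with one-factor-at-a-time perturbations around each sampled point. A minimal sketch follows; the two-parameter runoff response, the perturbation fraction and the sample size are hypothetical stand-ins, not WetSpa.

```python
import random

random.seed(5)

def model(p):
    # Hypothetical runoff response to two parameters in (0, 1)
    return 10.0 * p[0] + p[1] ** 2

d, N, frac = 2, 50, 0.05

def latin_hypercube(n, d):
    # One stratified sample per interval, per dimension, independently shuffled
    cols = []
    for _ in range(d):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return [[cols[j][i] for j in range(d)] for i in range(n)]

effects = [[] for _ in range(d)]
for p in latin_hypercube(N, d):
    y0 = model(p)
    for j in range(d):  # one-factor-at-a-time around each LH point
        q = list(p)
        q[j] = q[j] * (1 + frac)
        effects[j].append(abs((model(q) - y0) / (frac * (abs(y0) + 1e-12))))
means = [sum(e) / N for e in effects]
print(means)
```

Averaging the relative partial effects over all Latin Hypercube points gives a ranking that, unlike a single-point OAT analysis, reflects behaviour across the whole parameter space; applied per grid cell with spatial response functions, the same quantity yields the sensitivity maps described in the abstract.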
Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael
2016-05-01
Increasing pressures on water quality due to the intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers motivated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance-based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16-year study period. As observed time series of groundwater contributions to streamflow are not available at catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performance. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivities to the internal pathways in the models are not reflected in the performance criteria results. This demonstrates that simulated groundwater contribution should be constrained by independent data to ensure results
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
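The design-of-experiments approach mentioned in the tutorial can be illustrated with a full 2^3 factorial design: run the model at every combination of low/high factor levels and estimate each main effect as the difference of mean responses. The System Dynamics response below is a hypothetical linear-plus-interaction surrogate, not the tutorial's model.

```python
from itertools import product

def simulate(a, b, c):
    # Hypothetical System Dynamics response at coded factor levels (-1 / +1)
    return 50 + 8 * a + 3 * b + 0.5 * c + 2 * a * b

# Full 2^3 factorial design: every combination of low/high levels
design = list(product([-1, 1], repeat=3))
Y = [simulate(*run) for run in design]

# Main effect of factor j: mean response at +1 minus mean response at -1
effects = []
for j in range(3):
    hi = [y for run, y in zip(design, Y) if run[j] == 1]
    lo = [y for run, y in zip(design, Y) if run[j] == -1]
    effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
print(effects)  # → [16.0, 6.0, 1.0]
```

Because the design is balanced, the a-b interaction averages out of the main-effect estimates, and fitting a regression metamodel to the same eight runs recovers the coefficients directly; this is the what-if machinery the tutorial applies to System Dynamics models.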
Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos
2015-01-01
...the dynamics of electrical and communication systems interact with each other. Apart from that, the communication characteristics also affect the dynamics of the system. Due to the discrete nature of information exchange in the communication network, Laplace-domain analysis is not accurate enough for this kind of dynamic study. The aim of this paper is to model the complete DC microgrid system in the z-domain and perform sensitivity analysis for the complete system. A generalized modeling method is proposed, and the system dynamics under different control parameters, communication topologies and communication speeds...
Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won [KHNP Central Research Institute, Daejeon (Korea, Republic of)
2015-10-15
In this paper, we focus on risk insights for Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). After incorporating the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', in place of the old version, RCP seal integrity became even more important for Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full power PSA models, we have tried to standardize the methodology for CCF (Common Cause Failure) and HRA (Human Reliability Analysis), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Pressurized Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all reactor types. This paper presents the new RCP seal leakage modeling method and the results of a sensitivity analysis in which the detailed method is applied to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree for LOCCW considering all RCP seal failure scenarios. We also performed HRA based on the T/H analysis using the leakage rates for each scenario. We found that HRA was a sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.
Detection and Analysis of High Temperature Sensitivity of TGMS Lines in Rice Using AMMI Model
FU Li-zhong; XUE Qing-zhong
2004-01-01
With the AMMI (additive main effects and multiplicative interaction) analysis model, the sensitivity to temperature of different TGMS (thermo-sensitive genic male sterile) lines was determined. To assess the genetic differences due to high-temperature stress at the fertility-sensitive stage (10-20 d before heading), seven genotypes (six TGMS lines and the control Pei-Ai64S) were grown from May 4 in seven different sowings at 10-d intervals. The temperatures at the fertility-sensitive stages covered twelve levels, from <20 to >30 deg C, under natural conditions in Hangzhou, China. There was considerable variation in pollen fertility among genotypes in response to high temperature. Five genotypes were identified as TGMS lines, as their percentages of fertile pollen were lower than or close to that of the control; the exception was the unstable line RTS19 (V6). When the temperatures at the fertility-sensitive stage were at levels I-IV, V-VI and VII-XII, the percentages of fertile pollen varied in the ranges 46.46-48.49%, 19.62-22.79% and 3.49-5.87%, respectively. The critical temperatures of sterility and fertility in the five TGMS lines were 25.1 and 23.0 deg C, respectively. Considering the magnitudes and directions of the main effects and their IPCA (interaction principal components analysis) scores, the lines and temperature levels can be classified into different groups and the characteristics of the genotype x temperature interaction described, offering information and tools for the development and use of thermo-sensitive male sterile lines. Several TGMS rice lines whose reproductive sensitivity to high temperature can be screened using the AMMI model may add valuable germplasm to hybrid rice breeding programs.
Mockler, E. M.; O'Loughlin, F.; Bruen, M. P.
2013-12-01
Conceptual rainfall runoff (CRR) models aim to capture the dominant hydrological processes in a catchment in order to predict the flows in a river. Most flood forecasting models focus on predicting total outflows from a catchment and often perform well without the correct distribution between individual pathways. However, modelling of water flow paths within a catchment, rather than its overall response, is specifically needed to investigate the physical and chemical transport of matter through the various elements of the hydrological cycle. Focus is increasingly turning to accurately quantifying the internal movement of water within these models to investigate if the simulated processes contributing to the total flows are realistic in the expectation of generating more robust models. Parameter regionalisation is required if such models are to be widely used, particularly in ungauged catchments. However, most regionalisation studies to date have typically consisted of calibrations and correlations of parameters with catchment characteristics, or some variations of this. In order for a priori parameter estimation in this manner to be possible, a model must be parametrically parsimonious while still capturing the dominant processes of the catchment. The presence of parameter interactions within most CRR model structures can make parameter prediction in ungauged basins very difficult, as the functional role of the parameter within the model may not be uniquely identifiable. We use a variance based sensitivity analysis method to investigate parameter sensitivities and interactions in the global parameter space of three CRR models, simulating a set of 30 Irish catchments within a variety of hydrological settings over a 16 year period. The exploration of sensitivities of internal flow path partitioning was a specific focus and correlations between catchment characteristics and parameter sensitivities were also investigated to assist in evaluating model performances
Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia
Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica
2017-01-01
We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish which model parameters are the most important in the success or failure of leukemia remission under treatment using a sensitivity analysis of the model parameters. For the most significant parameters of the model which affect the evolution of CML disease during Imatinib treatment we try to estimate the realistic values using some experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.
2014-07-01
Barrier Shoreline Wetland Value Assessment Model, by S. Kyle McKay and J. Craig Fischenich. OVERVIEW: Sensitivity analysis is a technique for... relevance of questions posed during an Independent External Peer Review (IEPR). BARATARIA BASIN BARRIER SHORELINE (BBBS) STUDY: On average... scale restoration projects to reduce marsh loss and maintain these wetlands as healthy, functioning ecosystems. The Barataria Basin Barrier Shoreline
Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson
2009-01-01
Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as the case study. The input uncertainty resulting from the model's assumptions was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. The uncertainty in the biomass, glucose, ammonium and base-consumption predictions was found to be low compared with the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris screening and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters are parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret model outputs properly. These tools are part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control.
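The Morris method compared in this study screens parameters by averaging the absolute "elementary effects" of stepping each factor in turn. The sketch below uses a simplified variant (a fresh random base point per trajectory rather than a levelled grid) and a hypothetical three-parameter response; it is not the fermentation model.

```python
import random

random.seed(6)

def model(x):
    # Hypothetical response: x0 strong linear, x1 nonlinear, x2 inert
    return 5.0 * x[0] + 20.0 * (x[1] - 0.5) ** 3 + 0.0 * x[2]

d, r, delta = 3, 40, 0.25

mu_star = [0.0] * d
for _ in range(r):
    # Simplified Morris trajectory: random base, then step each factor once
    x = [random.uniform(0, 1 - delta) for _ in range(d)]
    y0 = model(x)
    for j in random.sample(range(d), d):
        x[j] += delta
        y1 = model(x)
        mu_star[j] += abs(y1 - y0) / delta  # elementary effect of factor j
        y0 = y1
mu_star = [m / r for m in mu_star]
print(mu_star)
```

The mu-star statistic flags the inert factor with a value of zero at a cost of only r*(d+1) model runs, which is why Morris screening is a common first pass before more expensive variance-based or differential analyses.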
A global sensitivity analysis of the PlumeRise model of volcanic plumes
Woodhouse, Mark J.; Hogg, Andrew J.; Phillips, Jeremy C.
2016-10-01
Integral models of volcanic plumes allow predictions of plume dynamics to be made and the rapid estimation of volcanic source conditions from observations of the plume height by model inversion. Here we introduce PlumeRise, an integral model of volcanic plumes that incorporates a description of the state of the atmosphere, includes the effects of wind and the phase change of water, and has been developed as a freely available web-based tool. The model can be used to estimate the height of a volcanic plume when the source conditions are specified, or to infer the strength of the source from an observed plume height through a model inversion. The predictions of the volcanic plume dynamics produced by the model are analysed in four case studies in which the atmospheric conditions and the strength of the source are varied. A global sensitivity analysis of the model to a selection of model inputs is performed, and the results are analysed using parallel coordinate plots for visualisation and variance-based sensitivity indices to quantify the sensitivity of the model outputs. We find that if the atmospheric conditions do not vary widely then there is a small set of model inputs that strongly influence the model predictions. When estimating the height of the plume, the source mass flux has a controlling influence on the model prediction, while variations in the plume height strongly affect the inferred value of the source mass flux when performing inversion studies. The values taken for the entrainment coefficients have a particularly important effect on the quantitative predictions. The dependencies of the model outputs on variations in the inputs are discussed and compared with simple algebraic expressions that relate source conditions to the height of the plume.
Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu
2016-03-01
This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow that was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range
Augusta Neto, Maria; Yu, Wenbin; Pereira Leal, Rogerio
2008-10-01
This article describes a new approach to designing the cross-section layer orientations of composite laminated beam structures. The beams are modelled with realistic cross-sectional geometry and material properties instead of a simplified model. The VABS (variational asymptotic beam section analysis) methodology is used to compute the cross-sectional model for a generalized Timoshenko model, which was embedded in the finite element solver FEAP. Optimal design is performed with respect to the layer orientations. The design sensitivity analysis is analytically formulated and implemented. The direct differentiation method is used to evaluate the response sensitivities with respect to the design variables. Thus, the design sensitivities of the Timoshenko stiffness computed by the VABS methodology are embedded into the modified VABS program and linked to the beam finite element solver. The modified method of feasible directions and sequential quadratic programming algorithms are used to seek the optimal continuous solution for a set of numerical examples. The buckling load associated with the twist-bend instability of cantilever composite beams, which may have several cross-section geometries, is improved in the optimization procedure.
Optimal control and sensitivity analysis of an influenza model with treatment and vaccination.
Tchuenche, J M; Khamis, S A; Agusto, F B; Mpeshe, S C
2011-03-01
We formulate and analyze the dynamics of an influenza pandemic model with vaccination and treatment using two preventive scenarios: increase and decrease in vaccine uptake. Due to the seasonality of the influenza pandemic, the dynamics are studied over a finite time interval. We focus primarily on controlling the disease at a possible minimal cost and with minimal side effects using control theory, applied via Pontryagin's maximum principle; it is observed that full treatment effort should be given while increasing vaccination at the onset of the outbreak. Next, sensitivity analysis and simulations (using the fourth-order Runge-Kutta scheme) are carried out in order to determine the relative importance of different factors responsible for disease transmission and prevalence. The most sensitive parameter of the various reproductive numbers apart from the death rate is the inflow rate, while the proportion of new recruits and the vaccine efficacy are the most sensitive parameters for the endemic equilibrium point.
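The fourth-order Runge-Kutta scheme used for the simulations can be illustrated on a deliberately reduced SIR-type model with a constant vaccination rate; the model and all rate values below are illustrative assumptions, not the paper's full influenza model with treatment:

```python
import numpy as np

def sir_v(t, y, beta=0.4, gamma=0.2, nu=0.05):
    # Minimal SIR model with constant vaccination rate nu: a toy
    # stand-in for the paper's influenza model with treatment.
    S, I, R = y
    return np.array([-beta * S * I - nu * S,
                     beta * S * I - gamma * I,
                     gamma * I + nu * S])

def rk4(f, y0, t0, t1, n):
    # Classical fourth-order Runge-Kutta integrator.
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Integrate over a finite "season" with 1% initially infected.
y_end = rk4(sir_v, [0.99, 0.01, 0.0], 0.0, 60.0, 600)
```

Because the right-hand side conserves S + I + R, the RK4 solution preserves the total population up to roundoff, a useful sanity check on the integrator.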
Vezzaro, Luca; Mikkelsen, Peter Steen
2012-01-01
The need for estimating micropollutant fluxes in stormwater systems increases the role of stormwater quality models as support for urban water managers, although the application of such models is affected by high uncertainty. This study presents a procedure for identifying the major sources of uncertainty in a conceptual lumped dynamic stormwater runoff quality model that is used in a study catchment to estimate (i) copper loads, (ii) compliance with dissolved Cu concentration limits on stormwater discharge and (iii) the fraction of Cu loads potentially intercepted by a planned treatment facility. The analysis is based on the combination of variance-decomposition Global Sensitivity Analysis (GSA) with the Generalized Likelihood Uncertainty Estimation (GLUE) technique. The GSA-GLUE approach highlights the correlation between the model factors defining the mass of pollutant in the system…
Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.
2009-04-01
Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
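The core computation, a Jacobian of the simulated response with respect to the parameters followed by an SVD, can be sketched on a toy response function. Here finite differences stand in for the far cheaper adjoint derivatives, and the three-parameter `runoff` function is purely illustrative, not the paper's flash flood model:

```python
import numpy as np

def runoff(theta, n=50):
    # Toy "hydrograph": hypothetical stand-in for the distributed
    # flash-flood model (kinematic wave + Green-Ampt infiltration).
    k, s, a = theta
    t = np.linspace(0.0, 5.0, n)
    return a * np.exp(-k * t) + s * t / (1.0 + t)

def jacobian_fd(f, theta, eps=1e-6):
    # Finite-difference Jacobian; the adjoint method yields the same
    # derivatives at a small fraction of the cost.
    y0 = f(theta)
    J = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        J[:, j] = (f(tp) - y0) / eps
    return J

theta = np.array([0.8, 1.5, 2.0])
J = jacobian_fd(runoff, theta)
U, sing, Vt = np.linalg.svd(J, full_matrices=False)
# Leading right singular vectors (rows of Vt) give the parameter
# directions that most influence the simulated response; the squared
# singular values show how much variability each direction captures.
explained = sing**2 / np.sum(sing**2)
```

The observation that a few leading singular vectors capture most of the variability is exactly what motivates the SVD-based parametrization discussed in the abstract.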
Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)
1989-01-01
We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
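The two-stage strategy, screening by linear sensitivity coefficients followed by a Monte Carlo response surface, can be sketched as follows; the four-parameter `growth` function is a hypothetical stand-in for the soybean growth model:

```python
import numpy as np

rng = np.random.default_rng(0)

def growth(p):
    # Hypothetical scalar output of a growth model with 4 parameters.
    return p[0] * np.exp(0.1 * p[1]) + 0.01 * p[2] + 1e-4 * p[3]

p0 = np.array([2.0, 1.0, 3.0, 5.0])
y0 = growth(p0)

# Stage 1: screening by normalized local sensitivity coefficients,
# (dY/Y) / (dp/p), estimated with a +1% perturbation of each parameter.
coef = np.empty(4)
for j in range(4):
    pp = p0.copy()
    pp[j] *= 1.01
    coef[j] = (growth(pp) - y0) / (0.01 * y0)
keep = np.argsort(-np.abs(coef))[:2]      # retain the two most sensitive

# Stage 2: Monte Carlo response surface over the retained parameters.
n = 500
X = p0 + np.zeros((n, 4))
X[:, keep] *= rng.uniform(0.8, 1.2, size=(n, len(keep)))
Y = np.array([growth(x) for x in X])
A = np.column_stack([np.ones(n), X[:, keep]])
beta, *_ = np.linalg.lstsq(A, Y, rcond=None)   # linear response surface
```

The screening step keeps the response-surface stage affordable, which is the main advantage the abstract claims for the combined strategy.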
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
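The variogram analogy at the heart of VARS can be illustrated with a minimal directional variogram of model output; plain Monte Carlo pairs replace the paper's star-based sampling, and the two-factor test function is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy two-factor response: factor 0 is far more influential than factor 1.
    return np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1]

def directional_variogram(f, dim, h, n=2000):
    # gamma_i(h) = 0.5 * E[(f(x + h e_i) - f(x))^2]: the building block
    # of VARS, estimated here by plain Monte Carlo pairs rather than the
    # star-based sampling of STAR-VARS.
    x = rng.uniform(0.0, 1.0 - h, size=(n, 2))
    xh = x.copy()
    xh[:, dim] += h
    return 0.5 * np.mean((f(xh) - f(x)) ** 2)

lags = [0.05, 0.1, 0.2, 0.3]
# Aggregating the variogram over a range of lags gives an IVARS-like
# index that characterizes sensitivity across scales in factor space.
ivars = [sum(directional_variogram(model, d, h) for h in lags) for d in (0, 1)]
```

Small lags probe derivative-like (Morris) behaviour while large lags approach variance-based (Sobol) behaviour, which is the sense in which those methods are limiting cases of the variogram view.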
Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.
2014-09-01
Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamic coupling of Telemac-2D and Sisyphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, to support the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from the dissolved trace metal contamination information. Using the two monitored hydrological events for calibration and validation, it was found that the model is able to satisfactorily predict suspended sediment and dissolved pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential deposition zones identified by the model are realistic.
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
M. Ratto
2006-09-01
In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters; the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time and explains the flow data a little less well than the DBM model. The main differences in the modelling results lie in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near-surface processes, and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as an instantaneous surface runoff generated by rainfall falling on areas of
Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis
Lyu Kehong; Tan Xiaodong; Liu Guanjun; Zhao Chenxu
2014-01-01
In helicopter transmission systems, it is important to monitor and track tooth damage evolution using numerous sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann-Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and that it is effective in reducing test cost and improving system reliability.
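The two trend statistics named in the abstract are standard and easy to sketch; the condition-indicator series below is hypothetical:

```python
import numpy as np
from itertools import combinations

def mann_kendall_S(x):
    # Mann-Kendall trend statistic: concordant minus discordant pairs.
    return sum(np.sign(x[j] - x[i]) for i, j in combinations(range(len(x)), 2))

def sens_slope(x):
    # Sen's slope estimator: median of all pairwise slopes, a robust
    # trend estimate used here to turn CIs into a health indicator.
    slopes = [(x[j] - x[i]) / (j - i) for i, j in combinations(range(len(x)), 2)]
    return np.median(slopes)

# Hypothetical condition-indicator series tracking damage growth.
ci = np.array([0.10, 0.12, 0.11, 0.15, 0.18, 0.17, 0.22])
S = mann_kendall_S(ci)
slope = sens_slope(ci)
```

A large positive S flags a monotonic upward trend (a CI worth keeping), and the Sen's slope gives its robust rate of growth.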
Global sensitivity analysis in the identification of cohesive models using full-field kinematic data
Alfano, Marco
2015-03-01
Failure of adhesive bonded structures often occurs concurrent with the formation of a non-negligible fracture process zone in front of a macroscopic crack. For this reason, the analysis of damage and fracture is effectively carried out using the cohesive zone model (CZM). The crucial aspect of the CZM approach is the precise determination of the traction-separation relation. Yet it is usually determined empirically, by using calibration procedures combining experimental data, such as load-displacement or crack length data, with finite element simulation of fracture. Thanks to the recent progress in image processing, and the availability of low-cost CCD cameras, it is nowadays relatively easy to access surface displacements across the fracture process zone using for instance Digital Image Correlation (DIC). The rich information provided by correlation techniques prompted the development of versatile inverse parameter identification procedures combining finite element (FE) simulations and full field kinematic data. The focus of the present paper is to assess the effectiveness of these methods in the identification of cohesive zone models. In particular, the analysis is developed in the framework of the variance based global sensitivity analysis. The sensitivity of kinematic data to the sought cohesive properties is explored through the computation of the so-called Sobol sensitivity indexes. The results show that the global sensitivity analysis can help to ascertain the most influential cohesive parameters which need to be incorporated in the identification process. In addition, it is shown that suitable displacement sampling in time and space can lead to optimized measurements for identification purposes.
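First-order Sobol indices of the kind computed in the paper can be estimated with the standard two-matrix (Saltelli-style) Monte Carlo scheme; the toy `model` below stands in for the FE fracture simulation and is not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Toy stand-in for the FE fracture simulation: the output depends
    # strongly on x0 (say, cohesive strength) and weakly on x1.
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] + 0.2 * x[:, 0] * x[:, 1]

n, d = 20000, 2
A = rng.uniform(0.0, 1.0, (n, d))   # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # column i from B, the rest from A
    # Saltelli (2010) estimator of the first-order Sobol index S_i.
    S1[i] = np.mean(yB * (model(ABi) - yA)) / var
```

Parameters with negligible indices can be fixed, shrinking the identification problem exactly as the abstract suggests.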
Bedane, T.; Di Maio, L.; Scarfato, P.; Incarnato, L.; Marra, F. (Dipartimento di Ingegneria Industriale, Università degli Studi di Salerno, Fisciano, Italy)
2015-12-17
The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen-scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites, and these parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films, and knowledge of the parameter values is important for predicting performance. Inverse modeling and sensitivity analysis are the only practical way to find reasonable values of poorly defined, unmeasured parameters and to determine the most influential ones. Thus, the objective of this work was to develop a model to predict the barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffusing into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results have shown that sensitivity analysis can provide physical understanding of the parameters which most affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film. Multilayer films slightly modified the steady transport properties in comparison to net PET, giving a small reduction
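The governing equation, one-dimensional diffusion with a first-order scavenging reaction in the core layer, can be sketched with an explicit finite-difference scheme; all material values below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

# Explicit finite-difference sketch of 1-D oxygen diffusion with a
# first-order scavenging reaction confined to the core layer.
D, k = 1e-12, 5e-4          # diffusivity (m^2/s), reaction rate (1/s): assumed
L, nx = 100e-6, 101         # film thickness 100 um, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D        # below the explicit-scheme stability limit
core = (np.arange(nx) > nx // 3) & (np.arange(nx) < 2 * nx // 3)

c = np.zeros(nx)
c[0] = c[-1] = 1.0          # normalized oxygen concentration at both surfaces
for _ in range(20000):
    lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
    # Reaction term -k*c acts only where the scavenging core layer sits.
    c[1:-1] += dt * (D * lap - k * np.where(core[1:-1], c[1:-1], 0.0))
    c[0] = c[-1] = 1.0
```

The resulting profile dips in the reactive core, which is how the scavenger lowers the oxygen flux reaching the food side of the film.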
Sensitivity Analysis to Select the Most Influential Risk Factors in a Logistic Regression Model
Jassim N. Hussain
2008-01-01
The traditional variable selection methods for survival data depend on iteration procedures, and control of this process assumes tuning parameters that are problematic and time consuming, especially if the models are complex and have a large number of risk factors. In this paper, we propose a new method based on global sensitivity analysis (GSA) to select the most influential risk factors. This contributes to simplification of the logistic regression model by excluding the irrelevant risk factors, thus eliminating the need to fit and evaluate a large number of models. Data from medical trials are suggested as a way to test the efficiency and capability of this method and as a way to simplify the model. This leads to construction of an appropriate model. The proposed method ranks the risk factors according to their importance.
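The idea of ranking risk factors by a sensitivity measure on a fitted logistic model can be sketched as follows; a plain coefficient-magnitude ranking on standardized synthetic data stands in for the paper's variance-based GSA ranking:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic trial data: 3 standardized risk factors, only two of which
# actually influence the outcome (coefficients 1.5, -1.0, 0.0).
n = 2000
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Plain gradient-ascent fit of the logistic model (no intercept, for brevity).
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

# Rank risk factors by coefficient magnitude; with standardized inputs
# this is a simple proxy for an importance ranking.
ranking = np.argsort(-np.abs(w))
```

The irrelevant factor lands last in the ranking and can be dropped, which is the model-simplification step the abstract describes.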
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni (Idaho National Laboratory); Swiler, L.P. (Sandia National Laboratories); Hales, J.D.; Novascone, S.R.; Perez, D.M.; Spencer, B.W. (Idaho National Laboratory); Luzzi, L. (Politecnico di Milano); Van Uffelen, P. (European Commission, JRC Institute for Transuranium Elements); Williamson, R.L. (Idaho National Laboratory)
2015-01-15
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
A Workflow for the Application of Sensitivity Analysis to Earth System Models
Pianosi, F.; Wagener, T.; Rougier, J.; Freer, J. E.; Hall, J.
2013-12-01
Predictions of any earth system model are affected by unavoidable and potentially large uncertainty. When models are used to support risk management of natural hazards, such uncertainties can undermine the transparency and defensibility of the risk assessment. When models are applied to understand dominant controls or other aspects of the system under study, uncertainties will reduce our ability to choose between competing hypotheses. Sensitivity Analysis (SA) provides quantitative information about the contribution of the different input factors (e.g. parameters, boundary conditions or forcing data) to such uncertainty. SA thus provides insights into model behavior and the potential for model simplification, indicates where further data collection and research is needed or would be beneficial, and enhances the credibility of modelling results. The value of such analysis has motivated an increasing research effort in the development, application and comparison of SA techniques. Still, the comprehensive understanding needed to guide the choice between available SA methods, and practical guidelines for their application in the context of earth system models, remain insufficient. In this contribution, we aim to fill this gap by (i) providing a map of the existing SA techniques and their appropriateness in different contexts of earth system modeling; (ii) developing a workflow for the choice and application of SA techniques for environmental models; (iii) presenting a suite of visualization tools that can support the assessment and communication of SA results; and (iv) defining challenges and opportunities for future research.
Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.
2013-12-01
Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and
Sensitivity analysis and calibration of a dynamic physically based slope stability model
Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens
2017-06-01
Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
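A local one-at-a-time sensitivity analysis of the kind described can be illustrated with the classical infinite-slope factor of safety, a drastic simplification of the paper's coupled hydrological-geomechanical model; all soil values below are hypothetical:

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma=19e3, z=2.0, beta_deg=35.0, m=0.5,
                     gamma_w=9.81e3):
    # Infinite-slope factor of safety with partial saturation m of the
    # soil column (a textbook simplification, not the paper's model).
    # c: effective cohesion (Pa), phi_deg: friction angle (deg),
    # gamma: soil unit weight (N/m^3), z: slip-plane depth (m).
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    u = m * gamma_w * z * np.cos(beta) ** 2          # pore pressure on slip plane
    resist = c + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    drive = gamma * z * np.sin(beta) * np.cos(beta)
    return resist / drive

base = {"c": 5e3, "phi_deg": 30.0}
fs0 = factor_of_safety(**base)

# One-at-a-time: relative change in FS for a +10% change in each parameter.
sens = {}
for name in base:
    pert = dict(base)
    pert[name] *= 1.10
    sens[name] = (factor_of_safety(**pert) - fs0) / fs0
```

Parameters with the largest relative effect (here the friction angle) are the ones worth sampling systematically in the subsequent calibration, which is how the paper narrows its calibration set.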
Global sensitivity analysis and uncertainties in SEA models of vibroacoustic systems
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2017-06-01
The effect of parametric uncertainties on the dispersion of Statistical Energy Analysis (SEA) models of structural-acoustic coupled systems is studied with the Fourier amplitude sensitivity test (FAST) method. The method is first applied to an academic example representing a transmission suite, then to a more complex industrial structure from the space industry. Two sets of parameters are considered: errors on the SEA model's coefficients, or directly the engineering parameters. The first case is an intrusive approach, but makes it possible to identify the dominant phenomena taking place in a given configuration. The second is non-intrusive and appeals more to engineering considerations, studying the effect of input parameters such as geometry or material characteristics on the SEA outputs. A study of the distribution of results in each frequency band with the same sampling shows some interesting features, such as bimodal distributions in some ranges.
Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea
Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li
2017-09-01
Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state (namely an increase in sea surface temperature (SST) and loss of sea ice) and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, the composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs of the Eulerian model Polyphemus/Polair3D for the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations, which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
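The Sobol' estimator behind this kind of study can be sketched in a few lines. This is a generic illustration on a toy two-input model with known first-order indices (S1 = 0.8, S2 = 0.2), not the Polyphemus/Polair3D setup; the model function and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # hypothetical toy model with known first-order indices for
    # independent U(0,1) inputs: Var(Y) = 5/12, S1 = 0.8, S2 = 0.2
    return 2.0 * X[:, 0] + X[:, 1]

N, d = 100_000, 2
A = rng.random((N, d))            # base sample
B = rng.random((N, d))            # independent resample
fA, fB = model(A), model(B)
var = fA.var()

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # "freeze" column i from B into A
    # Saltelli-style first-order estimator
    S1[i] = np.mean(fB * (model(ABi) - fA)) / var
```

With a costly simulator, `model` would be replaced by the Gaussian-process emulator; the estimator needs N x (d + 2) evaluations, which is exactly why the emulation step in the study is necessary.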
Parameter sensitivity analysis of stochastic models provides insights into cardiac calcium sparks.
Lee, Young-Seon; Liu, Ona Z; Hwang, Hyun Seok; Knollmann, Bjorn C; Sobie, Eric A
2013-03-05
We present a parameter sensitivity analysis method that is appropriate for stochastic models, and we demonstrate how this analysis generates experimentally testable predictions about the factors that influence local Ca²⁺ release in heart cells. The method involves randomly varying all parameters, running a single simulation with each of hundreds of parameter sets, and then statistically relating the parameters to the simulation results using regression methods. We tested this method on a stochastic model, containing 18 parameters, of the cardiac Ca²⁺ spark. Results show that multivariable linear regression can successfully relate parameters to continuous model outputs such as Ca²⁺ spark amplitude and duration, and multivariable logistic regression can provide insight into how parameters affect Ca²⁺ spark triggering (a probabilistic process that is all-or-none in a single simulation). Benchmark studies demonstrate that this method is less computationally intensive than standard methods by a factor of 16. Importantly, predictions were tested experimentally by measuring Ca²⁺ sparks in mice with knockout of the sarcoplasmic reticulum protein triadin. These mice exhibit multiple changes in Ca²⁺ release unit structures, and the regression model both accurately predicts changes in Ca²⁺ spark amplitude (30% decrease in model, 29% decrease in experiments) and provides an intuitive and quantitative understanding of how much each alteration contributes to the result. This approach is therefore an effective, efficient, and predictive method for analyzing stochastic mathematical models to gain biological insight.
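The regression step of this approach can be illustrated with a toy stochastic "spark" model. The power-law form, parameter count and noise level below are invented for illustration; the real model has 18 parameters and simulates Ca²⁺ release explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_params = 500, 3
# lognormal multiplicative perturbations around baseline parameter values
scale = rng.lognormal(mean=0.0, sigma=0.1, size=(n_trials, n_params))

def spark_model(p):
    # hypothetical stand-in for one stochastic spark simulation:
    # amplitude grows with p[0], shrinks with p[1], ignores p[2]
    return p[0] ** 1.0 * p[1] ** -0.5 * np.exp(rng.normal(0.0, 0.05))

amplitude = np.array([spark_model(row) for row in scale])

# standardized multivariable linear regression of log-output on log-inputs
X = np.log(scale)
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = np.log(amplitude)
y = (y - y.mean()) / y.std()
coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_trials), X], y, rcond=None)
# coef[1:] are the standardized sensitivities of amplitude to each parameter
```

The signs and relative magnitudes of the standardized coefficients recover the influence of each parameter; for probabilistic outputs such as spark triggering, the same design matrix would feed a logistic rather than linear regression.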
Matasci, G.; Pozdnoukhov, A.; Kanevski, M.
2009-04-01
The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist in avalanche forecasting. While it is not straightforward to directly obtain stability factors with the available technologies, snowpack profiles and especially meteorological parameters are becoming available at ever finer spatial and temporal scales. Besides being very useful for improving physical modelling, these data are also of particular interest for use with contemporary data-driven machine learning techniques. In particular, a support vector machine classifier opens the way to discriminating "safe" from "dangerous" conditions in the feature space of factors related to avalanche activity, based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations, pre-processed with heuristic and physical models into a high-dimensional, spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region, in Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming "black-box" modelling, a disadvantage for which machine learning methods are often criticized. It explores what the support vector machine has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting the important features that led to a particular prediction (both in time and space), and presents an analysis of the sensitivity of the prediction to the varying input parameters. The purpose of the
Melnikova, N B; Sloot, P M A
2012-01-01
The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulating porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module to be compared with simulated pore pressure dynamics. The module has been implemented for a real-world test case: an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic calibration of diffusivities for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.
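A classic one-dimensional analytical solution for tidal propagation in a saturated aquifer is the Ferris-type damped sinusoid, which yields an initial diffusivity estimate directly from the observed amplitude damping at a sensor. The sketch below is generic; the M2 tidal period, sensor distance and diffusivity value are assumed for illustration, not taken from the Groningen case.

```python
import numpy as np

M2_PERIOD = 12.42 * 3600.0  # semidiurnal tidal period [s]

def tidal_head(x, t, D, A0=1.0, period=M2_PERIOD):
    """Ferris-type solution: a sinusoidal tide of amplitude A0 decays
    and lags as it propagates distance x [m] into a homogeneous
    aquifer with hydraulic diffusivity D [m^2/s]."""
    omega = 2.0 * np.pi / period
    k = np.sqrt(omega / (2.0 * D))
    return A0 * np.exp(-k * x) * np.sin(omega * t - k * x)

def estimate_diffusivity(x, amp_ratio, period=M2_PERIOD):
    """Initial diffusivity estimate from observed amplitude damping
    A(x)/A(0) at a pore-pressure sensor located x metres inland."""
    omega = 2.0 * np.pi / period
    return omega * x ** 2 / (2.0 * np.log(amp_ratio) ** 2)

# round trip: the damping produced by D_true recovers D_true
D_true, x = 0.05, 20.0
omega = 2.0 * np.pi / M2_PERIOD
amp_ratio = np.exp(-x * np.sqrt(omega / (2.0 * D_true)))
D_est = estimate_diffusivity(x, amp_ratio)
```

Such closed-form estimates are exactly the kind of initial guess the calibration algorithm described above would refine against the measured pore pressure dynamics.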
Sensitivity of a numerical wave model on wind re-analysis datasets
Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel
2017-03-01
Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcast and forecast, providing reliable characterisations. This study reports the sensitivity of a numerical wave model for the Scottish region to its wind inputs. Two re-analysis wind datasets with different spatio-temporal characteristics are used: the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information on the most appropriate wind dataset for the specific region, in temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results driven by the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance, and that, depending on location and study needs, different wind inputs should be considered.
Plummer, S.E. [NERC/RSADU, Cambridgeshire (United Kingdom); Malthus, T.J. [Univ. of Edinburgh (United Kingdom); Clark, C.D. [Univ. of Sheffield (United Kingdom)
1997-06-01
Seagrass meadows are a key component of shallow coastal environments, acting as a food resource and nursery and contributing to water oxygenation. Given the importance of these meadows and their susceptibility to anthropogenic disturbance, it is vital that the extent and growth of seagrass are monitored. Remote sensing techniques offer the potential to determine biophysical characteristics of seagrass. This paper presents observations on the development and testing of an invertible model of seagrass canopy reflectance. The model is an adaptation of a land surface reflectance model that incorporates the effects of attenuation and scattering of the incoming radiative flux in water. Sensitivity analysis reveals that the subsurface reflectance is strongly dependent on, in decreasing order, water depth, vegetation amount (the parameter we wish to determine), and turbidity. By contrast, the chlorophyll concentration of the water and gelbstoff are relatively unimportant. Water depth and turbidity therefore need to be known or accommodated in any inversion as free parameters.
A Protocol for the Global Sensitivity Analysis of Impact Assessment Models in Life Cycle Assessment.
Cucurachi, S; Borgonovo, E; Heijungs, R
2016-02-01
The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need to integrate the LCA and risk analysis methodologies, owing to their several common aspects. One way to achieve such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting the increasing attention of practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis (SA) in LCA. The article focuses on life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for quantifying the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of: (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them.
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-05
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis of a numerical vapor intrusion model for nine input parameters: soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m³). Model sensitivity and uncertainty for shallow, high-concentration vapor sources (3 m and 400 g/m³) are much smaller than for deep, low-concentration sources (8 m and 1 g/m³). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil like sand) are the key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate and soil air permeability are the key contributors. Another important finding is that the impact of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons is negligible when the vapor source concentration is high, because insufficient oxygen supply limits aerobic biodegradation activity.
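A minimal OAT loop looks as follows. The surrogate output function and baseline values below are placeholders for illustration, not the Abreu & Johnson equations or the study's actual parameter values.

```python
def indoor_concentration(params):
    # hypothetical surrogate for a vapor intrusion model output:
    # rises with soil air permeability and building depressurization,
    # falls with the indoor air exchange rate
    return 1e-4 * params["k_soil"] * params["depress"] / params["ach"]

# assumed baseline values (illustrative units only)
baseline = {"k_soil": 1e-11, "depress": 5.0, "ach": 0.5}

def oat_sensitivity(model, baseline, rel_step=0.10):
    """One-at-a-time analysis: perturb each input by +10% of its
    baseline value and report the normalized output change
    (relative output change per relative input change)."""
    y0 = model(baseline)
    sens = {}
    for name in baseline:
        p = dict(baseline)
        p[name] = baseline[name] * (1.0 + rel_step)
        sens[name] = (model(p) - y0) / (y0 * rel_step)
    return sens

s = oat_sensitivity(indoor_concentration, baseline)
```

OAT is cheap (one extra run per parameter) but, unlike the variance-based methods in other entries of this listing, it cannot detect interactions between inputs.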
Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.
2010-09-01
Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology, with dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the influence of the parameters on the solution.
Zhao, J.; Tiede, C.
2011-05-01
An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST) method. This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the overall fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different random seeds are used to assess the stability of the method.
Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud
Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.
2014-12-01
In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize an agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe an agent's beliefs in its prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards fluctuations in crop profits. These human behavioral parameters, as inputs to the MAS model, are highly uncertain and not directly measurable. We therefore estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.
2012-12-01
Residential and commercial buildings are a major source of energy consumption and carbon dioxide emissions in the United States, accounting for 41% of energy consumption and 40% of carbon emissions in 2011. Integrated assessment models (IAMs) historically have been used to estimate the impact of energy consumption on greenhouse gas emissions at the national and international level. Increasingly they are being asked to evaluate mitigation and adaptation policies that have a subnational dimension. In the United States, for example, building energy codes are adopted and enforced at the state and local level. Adoption of more efficient appliances and building equipment is sometimes directed or actively promoted by subnational governmental entities for mitigation or adaptation to climate change. The presentation reports on new example results from the Global Change Assessment Model (GCAM) IAM, one of a flexibly-coupled suite of models of human and earth system interactions known as the integrated Regional Earth System Model (iRESM) system. iRESM can evaluate subnational climate policy in the context of the important uncertainties represented by national policy and the earth system. We have added a 50-state detailed U.S. building energy demand capability to GCAM that is sensitive to national climate policy, technology, regional population and economic growth, and climate. We are currently using GCAM in a prototype stakeholder-driven uncertainty characterization process to evaluate regional climate mitigation and adaptation options in a 14-state pilot region in the U.S. upper Midwest. The stakeholder-driven decision process involves several steps, beginning with identifying policy alternatives and decision criteria based on stakeholder outreach, identifying relevant potential uncertainties, then performing sensitivity analysis, characterizing the key uncertainties from the sensitivity analysis, and propagating and quantifying their impact on the relevant decisions. In the
Liu, M.; He, B.; Lü, A.; Zhou, L.; Wu, J.
2014-06-01
Parameter sensitivity analysis is a crucial step in effective model calibration. It quantitatively apportions the variation of model output to different sources of variation, and identifies how "sensitive" a model is to changes in the values of its parameters. By calibrating the parameters that are sensitive to model outputs, parameter estimation becomes more efficient. Due to uncertainties associated with yield estimates in a regional assessment, field-based models that perform well at the field scale are not accurate enough at the regional scale. Conducting parameter sensitivity analysis at the regional scale and analyzing the differences in parameter sensitivity between stations would make model calibration and validation in different sub-regions more efficient, and would benefit application of the model at the regional scale. By simulating 2000 × 22 samples for 10 stations in the Huanghuaihai Plain, this study found that TB (optimal temperature), HI (normal harvest index), WA (potential radiation use efficiency), BN2 (normal fraction of N in crop biomass at mid-season) and RWPC1 (fraction of root weight at emergence) are more sensitive than the other parameters. Parameters that determine nutrient supply and LAI development have higher global sensitivity indices than first-order indices. For spatial application, soil diversity is crucial, because soil accounts for the differences in crop parameter sensitivity indices between sites.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective-function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for the evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the calibrated model has predictive ability typical of hydrologic models.
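The composite scaled sensitivity (CSS) statistic used here is cheap to compute from finite-difference perturbations, which is why so few model runs were needed. The sketch below uses an invented two-parameter exponential recession in place of TOPKAPI; the formula follows the standard Hill & Tiedeman definition.

```python
import numpy as np

def simulated_flows(b):
    # hypothetical stand-in for a TOPKAPI run: two parameters,
    # five observation times, simple exponential recession
    t = np.linspace(1.0, 5.0, 5)
    return b[0] * np.exp(-t / b[1])

def composite_scaled_sensitivities(model, b, weights, db=1e-6):
    """css_j = sqrt( (1/ND) * sum_i ((dy_i/db_j) * b_j)^2 * w_i ),
    with the derivatives taken by forward finite differences."""
    y0 = model(b)
    css = np.empty(b.size)
    for j in range(b.size):
        bp = b.copy()
        bp[j] += db * b[j]                      # relative perturbation
        dyd = (model(bp) - y0) / (db * b[j])    # dy_i / db_j
        css[j] = np.sqrt(np.mean((dyd * b[j]) ** 2 * weights))
    return css

b = np.array([10.0, 2.0])   # assumed parameter values
w = np.ones(5)              # unit observation weights
css = composite_scaled_sensitivities(simulated_flows, b, w)
```

Large CSS values flag parameters the data can constrain; near-zero values flag parameters that calibration cannot estimate from the available observations.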
Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis
J. Ferreira
2010-09-01
Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA) project, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance.
MEGAN was first applied to a large area covering much of West Africa, from the Gulf of Guinea in the south to the desert in the north, and was able to capture the large-scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular, the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in the driving parameters, namely Leaf Area Index (LAI), Emission Factors (EF), temperature and solar radiation.
A high-resolution simulation was made of a limited area south of Niamey, Niger, where the highest concentrations of isoprene were observed. This is used to evaluate the model's ability to simulate smaller-scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006.
This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions
Baldacchino, Tara; Cross, Elizabeth J.; Worden, Keith; Rowson, Jennifer
2016-01-01
Most physical systems in reality exhibit a nonlinear relationship between input and output variables. This nonlinearity can manifest itself in terms of piecewise continuous functions or bifurcations, between some or all of the variables. The aims of this paper are two-fold. Firstly, a mixture of experts (MoE) model was trained on different physical systems exhibiting these types of nonlinearities. MoE models separate the input space into homogeneous regions, and a different expert is responsible for each region. In this paper, the experts were low-order polynomial regression models, thus avoiding the need for high-order polynomials. The model was trained within a Bayesian framework using variational Bayes, whereby an approach novel within the MoE literature was used to determine the number of experts in the model. Secondly, Bayesian sensitivity analysis (SA) of the systems under investigation was performed using the identified probabilistic MoE model, in order to assess how uncertainty in the output can be attributed to uncertainty in the different inputs. The proposed methodology was first tested on a bifurcating Duffing oscillator, and it was then applied to real data sets obtained from the Tamar and Z24 bridges. In all cases, the MoE model was successful in identifying bifurcations and different physical regimes in the data by accurately dividing the input space, including identifying boundaries that were not parallel to the coordinate axes.
Flow analysis with WaSiM-ETH – model parameter sensitivity at different scales
J. Cullmann
2006-01-01
WaSiM-ETH (Gurtz et al., 2001), a widely used water balance simulation model, is tested for its suitability for flow analysis in the context of rainfall-runoff modelling and flood forecasting. In this paper, special focus is on the resolution of the process domain in space as well as in time. We try to couple model runs with different calculation time steps in order to reduce the effort of calculating the whole flow hydrograph at the hourly time step. We aim at modelling at the daily time step for water balance purposes, switching to the hourly time step whenever high-resolution information is necessary (flood forecasting). WaSiM-ETH is used at different grid resolutions to establish whether the model can be transferred across spatial resolutions. We further use two different approaches for the overland flow time calculation within the sub-basins of the test watershed to gain insight into the process dynamics portrayed by the model. Our findings indicate that the model is very sensitive to time and space resolution and cannot be transferred across scales without recalibration.
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. In spite of the difficulties that the consideration of modeling uncertainty presents for the decision process, it should not be avoided, or the value of, and science behind, the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of a model in the face of increasing complexity would be valuable, because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
Wu, Xueran; Jacob, Birgit
2015-01-01
The controllability of advection-diffusion systems, subject to uncertain initial values and emission rates, is estimated, given sparse and error-affected observations of prognostic state variables. In predictive geophysical model systems, such as atmospheric chemistry simulations, different parameter families influence the temporal evolution of the system. This renders initial-value-only optimisation by traditional data assimilation methods insufficient. In this paper, a quantitative method is introduced for assessing and validating measurement configurations used to optimize initial values and emission rates, and for balancing the two. In this theoretical approach, the Kalman filter and smoother and their ensemble-based versions are combined with a singular value decomposition to evaluate the potential improvement associated with specific observational network configurations. Further, with the same singular vector analysis of the efficiency of observations, their sensitivity to model control can be identified by deter...
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
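For a system with a known analytic form, the instantaneous sensitivities that the neural network estimates statistically are simply the Jacobian of the tendencies. A finite-difference check on the Lorenz-63 model (the state point and step size below are arbitrary choices for illustration):

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # classic Lorenz-63 tendencies with standard parameter values
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jacobian_fd(f, state, eps=1e-6):
    """Instantaneous sensitivities d(dx_i/dt)/dx_j by central
    differences; a statistical fit would estimate the same matrix."""
    n = state.size
    J = np.empty((n, n))
    for j in range(n):
        dp = np.zeros(n)
        dp[j] = eps
        J[:, j] = (f(state + dp) - f(state - dp)) / (2.0 * eps)
    return J

J = jacobian_fd(lorenz_rhs, np.array([1.0, 1.0, 20.0]))
```

At state (1, 1, 20) the analytic Jacobian rows are (-sigma, sigma, 0), (rho - z, -1, -x) and (y, x, -beta), so the agreement can be verified exactly, which is the point of using the Lorenz model as a test bed.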
Feyissa, Aberham Hailu; Gernaey, Krist; Adler-Nissen, Jens
2012-01-01
Similar to other processes, the modelling of heat and mass transfer during food processing involves uncertainty in the values of input parameters (heat and mass transfer coefficients, evaporation rate parameters, thermo-physical properties, initial and boundary conditions), which leads to uncertainty in the model predictions. The aim of the current paper is to address this uncertainty challenge in the modelling of food production processes using a combination of uncertainty and sensitivity analysis, where the uncertainty analysis and global sensitivity analysis were applied to a heat and mass transfer model of a contact baking process. The Monte Carlo procedure was applied for propagating uncertainty in the input parameters to uncertainty in the model predictions. Monte Carlo simulations and the least squares method were used in the sensitivity analysis: for each model output, a linear...
Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique
Gwo, J.P.; Toran, L.E.; Morris, M.D. [Oak Ridge National Lab., TN (United States); Wilson, G.V. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Plant and Soil Science
1994-09-01
Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone.
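A Latin-hypercube design of the kind used in this study can be generated in a few lines: each parameter's unit range is cut into as many equal bins as there are samples, with exactly one sample per bin and the bin order shuffled independently per parameter. The sample count and the mapping to a hydraulic conductivity range below are arbitrary illustrations.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Stratified LHS on the unit hypercube: one sample in each of
    n_samples equal bins per parameter, bins shuffled per column."""
    u = (rng.random((n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])    # shuffles the column in place
    return u

rng = np.random.default_rng(42)
design = latin_hypercube(10, 3, rng)
```

Each unit-interval column is then mapped to a physical range, e.g. a log-uniform hydraulic conductivity via `K = 10 ** (-8 + 4 * design[:, 0])` (bounds assumed for illustration). Compared with plain Monte Carlo, LHS covers each parameter's range evenly with far fewer model runs, which is why it suits expensive subsurface flow simulations.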
Integrated Direct and Indirect Flood Risk Modeling: Development and Sensitivity Analysis.
Koks, E E; Bočkarjova, M; de Moel, H; Aerts, J C J H
2015-05-01
In this article, we propose an integrated direct and indirect flood risk model for small- and large-scale flood events, allowing for dynamic modeling of total economic losses from a flood event to a full economic recovery. A novel approach is taken that translates direct losses of both capital and labor into production losses using the Cobb-Douglas production function, aiming at improved consistency in loss accounting. The recovery of the economy is modeled using a hybrid input-output model and applied to the port region of Rotterdam, using six different flood events (1/10 up to 1/10,000). This procedure allows gaining a better insight regarding the consequences of both high- and low-probability floods. The results show that in terms of expected annual damage, direct losses remain more substantial relative to the indirect losses (approximately 50% larger), but for low-probability events the indirect losses outweigh the direct losses. Furthermore, we explored parameter uncertainty using a global sensitivity analysis, and varied critical assumptions in the modeling framework related to, among others, flood duration and labor recovery, using a scenario approach. Our findings have two important implications for disaster modelers and practitioners. First, high-probability events are qualitatively different from low-probability events in terms of the scale of damages and full recovery period. Second, there are substantial differences in parameter influence between high-probability and low-probability flood modeling. These findings suggest that a detailed approach is required when assessing the flood risk for a specific region. © 2014 Society for Risk Analysis.
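The Cobb-Douglas translation of direct capital and labor losses into production losses can be illustrated as follows; the output elasticity and loss fractions are assumed values for illustration, not those calibrated for the Rotterdam case:

```python
# Cobb-Douglas production function Y = A * K**alpha * L**(1 - alpha):
# direct losses of capital (K) and labor (L) translate into a production loss.
def production(K, L, A=1.0, alpha=0.3):
    return A * K**alpha * L**(1 - alpha)

Y0 = production(100.0, 100.0)        # pre-flood output
Y1 = production(100.0 * (1 - 0.20),  # 20% of capital destroyed (assumed)
                100.0 * (1 - 0.05))  # 5% of labor unavailable (assumed)
loss_frac = 1.0 - Y1 / Y0            # relative production loss
```

Because the function is concave in each factor, the resulting production loss lies between the two input loss fractions, weighted by the output elasticities.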
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty sources can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which different uncertainty components are represented as uncertain nodes. Through this framework, variance-based sensitivity analysis can be implemented with great flexibility, using different grouping strategies for uncertainty components. Variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can
Identification of model structure for aquatic ecosystems using regionalized sensitivity analysis.
Osidele, O O; Beck, M B
2001-01-01
The Regionalized Sensitivity Analysis (RSA) was developed in 1978, for identifying critical unknown processes in poorly defined systems, thus directing the focus of further scientific investigations. Here, we demonstrate its application to model structure identification, by ranking the constituent hypotheses and identifying the critical elements for progressive revision of the model. Our case study is Lake Oglethorpe--a small monomictic impoundment in South-eastern Georgia, USA. Recent studies indicate that the warm temperate regional climate affords an extended growing season--typically from March to October--which promotes bacterial productivity in the lake. The result is a summer food web dominated by microbial processes, in contrast to the conventional phytoplankton-dominated food chains typically observed in the cold temperate lakes of Europe and North America. Starting with a simple phytoplankton-based food web model and a qualitative definition of system behaviour, we use the RSA procedure to establish the critical role of bacteria-mediated decomposition in Lake Oglethorpe, thus justifying the inclusion of microbial processes. Further analysis reveals the importance of size-dependent selective consumption of phytoplankton and bacteria. Finally, we discuss important practical implications of this novel application of the RSA regarding sampling efficiency and statistical robustness.
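The core of the RSA procedure (Monte Carlo sampling, behaviour/non-behaviour classification, and Kolmogorov-Smirnov separation of the parameter distributions in the two groups) can be sketched with a toy model; the model function and behaviour definition below are illustrative, not the Lake Oglethorpe model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an ecosystem model: "summer chlorophyll" driven by a
# growth rate mu and a decomposition rate k_dec (names illustrative).
def model(mu, k_dec):
    return 10.0 * mu / (0.1 + k_dec)

n = 2000
mu = rng.uniform(0.1, 1.0, n)
k_dec = rng.uniform(0.01, 0.5, n)
out = model(mu, k_dec)

# Behaviour definition: output within a plausible "observed" range.
behav = (out > 5.0) & (out < 20.0)

def ks_distance(x_b, x_nb):
    """Max vertical distance between the two empirical CDFs."""
    grid = np.sort(np.concatenate([x_b, x_nb]))
    cdf_b = np.searchsorted(np.sort(x_b), grid, side="right") / len(x_b)
    cdf_nb = np.searchsorted(np.sort(x_nb), grid, side="right") / len(x_nb)
    return float(np.max(np.abs(cdf_b - cdf_nb)))

d_mu = ks_distance(mu[behav], mu[~behav])
d_kdec = ks_distance(k_dec[behav], k_dec[~behav])
```

In RSA, a large KS distance means the parameter discriminates behaviour from non-behaviour and is therefore critical; a distance near zero flags a parameter (and the process it represents) as unimportant for reproducing the defined behaviour.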
Plot-scale testing and sensitivity analysis of Be-7 based soil erosion conversion models
Taylor, Alex; Abdelli, Wahid; Barri, Bashar Al; Iurian, Andra; Gaspar, Leticia; Mabit, Lionel; Millward, Geoff; Ryken, Nick; Blake, Will
2016-04-01
Over the past 2 decades, a growing number of studies have recognised the potential for short-lived cosmogenic Be-7 (half-life 53 days) to be used as a tracer to evaluate soil erosion from short-term inter-rill erosion to hillslope sediment budgets. While conversion modelling approaches are now established for event-scale and extended-time-series applications, there is a lack of validation and sensitivity analysis to underpin confidence in their use across a full range of agro-climatic zones. This contribution aims to close this gap in the context of the maritime temperate climate of southwest UK. Two plots of 4 x 35 m were ploughed and tilled at the beginning of winter 2013/2014 to create (1) a bare, sloped soil surface and (2) a bare flat reference site. The bounded lower edge of the plot fed into a collection bin for overland flow and associated sediment. The tilled surface had a low bulk density and high permeability at the start of the experiment (ksat > 100 mm/hr). Hence, despite high rainfall in December (200 mm), notable overland flow was observed only after intense rain storms during late 2013 and early January 2014 when the soil profile was saturated, i.e. driven by Saturation Overland Flow (SOF). At the time of SOF initiation, ca. 70% of the final Be-7 inventory had been delivered to the site. Subsequent to a series of SOF events across a 1-month period, the plot soil surface was intensively sampled to quantify Be-7 inventory patterns and develop a tracer budget. Captured eroded sediment was dried, weighed and analysed for Be-7. All samples were analysed for particle size by laser granulometry. Be-7 inventory data were converted to soil erosion estimates using (1) the standard profile distribution model, (2) the extended time series distribution model and (3) a new 'antecedent rainfall' extended time series model to account for the lack of soil erosion prior to soil saturation. Results were scaled up to deliver a plot-scale sediment budget to include
Zhang Zhonghua
2014-01-01
In this paper, a plant disease model with a continuous cultural control strategy and time delay is formulated. Then, how the time delay affects the overall disease progression and, mathematically, how the delay affects the dynamics of the model are investigated. By analyzing the transcendental characteristic equation, stability conditions related to the time delay are derived for the disease-free equilibrium. Specifically, when R0=1, the Jacobi matrix of the model at the disease-free equilibrium always has a simple zero eigenvalue for all τ≥0. The center manifold reduction and the normal form theory are used to discuss the stability and the steady-state bifurcations of the model near the nonhyperbolic disease-free equilibrium. Then, the sensitivity analysis of the threshold parameter R0 and the positive equilibrium E* is carried out in order to determine the relative importance of different factors responsible for disease transmission. Finally, numerical simulations are employed to support the qualitative results.
Teodoreanu, Ana-Maria; Leendertz, Caspar; Sontheimer, Tobias; Rech, Bernd [Helmholtz-Zentrum Berlin, Kekulestr. 5. 12489 Berlin (Germany)
2011-07-01
To gain a better insight into the efficiency-limiting processes in polycrystalline silicon (poly-Si) thin film solar cells, we developed a simulation model for the J-V characteristics and minority carrier lifetime based on experimental results using the numerical 1D simulation program AFORS-HET. The calibration of the model has been achieved through simultaneously fitting the measured dark and light J-V curves of twelve poly-Si thin film minimodules with dissimilar thickness and absorber doping concentration. Effective defect density-capture cross section products of 10-100 cm^-1 have been determined in the poly-Si absorber by this procedure. Transient photoconductance decay measurements of the poly-Si absorbers have also been conducted in the low injection regime (4.5×10^14 cm^-3). High lifetimes of 100 μs have been found, which can be explained within our simulation model by field effect passivation. Furthermore, simulations indicate that this field effect leads to a strong injection dependence of carrier lifetime in the operation range of the solar cell. The sensitivity analysis performed with our calibrated model shows that the defects in the absorber layer are crucial for the cell efficiency. Thus, the improvement of the emitter and back surface field layers becomes important only if the absorber itself is of better quality. Moreover, we discuss the optimum absorber thickness subject to different doping levels and absorber defect densities.
Eva Fišerová
2014-06-01
The paper is focused on the decomposition of mixed partitioned multivariate models into two seemingly unrelated submodels in order to obtain more efficient estimators. The multiresponses are independently normally distributed with the same covariance matrix. The partitioned multivariate model is considered either with or without an intercept. The elimination transformation of the intercept that preserves the BLUEs of parameter matrices and the MINQUE of the variance components in multivariate models with and without an intercept is stated. Procedures for testing the decomposition of the partitioned model are presented. The properties of plug-in test statistics as functions of variance components are investigated by sensitivity analysis, and insensitivity regions for the significance level are proposed. The insensitivity region is a safe region in the parameter space of the variance components where the approximation of the variance components can be used without any essential deterioration of the significance level of the plug-in test statistic. The behavior of plug-in test statistics and insensitivity regions is studied by simulations.
Kinetic modeling and sensitivity analysis of acetone-butanol-ethanol production.
Shinto, Hideaki; Tashiro, Yukihiro; Yamashita, Mayu; Kobayashi, Genta; Sekiguchi, Tatsuya; Hanai, Taizo; Kuriya, Yuki; Okamoto, Masahiro; Sonomoto, Kenji
2007-08-01
A kinetic simulation model of metabolic pathways that describes the dynamic behaviors of metabolites in acetone-butanol-ethanol (ABE) production by Clostridium saccharoperbutylacetonicum N1-4 was proposed using a novel simulator, WinBEST-KIT. This model was validated by comparison with experimental time-course data of metabolites in batch cultures over a wide range of initial glucose concentrations (36.1-295 mM). By introducing substrate inhibition, product inhibition by butanol, and activation by butyrate, and by considering the cessation of metabolic reactions in the case of insufficient energy after glucose exhaustion, the revised model showed a squared correlation coefficient (r^2) of 0.901 between experimental and calculated time courses of metabolites. Thus, the final revised model is assumed to be one of the best candidates for kinetic simulation describing the dynamic behavior of metabolites in ABE production. Sensitivity analysis revealed that a 5% increase in the reverse pathway of butyrate production (R17) and a 5% decrease in the CoA transferase reaction for butyrate (R15) contribute strongly to high production of butanol. These system analyses should be effective in elucidating which pathway is the metabolic bottleneck for high production of butanol.
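The paper's 5%-perturbation sensitivity scan can be illustrated on a minimal kinetic model; the two-step mass-action scheme and rate constants below are hypothetical stand-ins for the full ABE pathway model, not its actual reactions:

```python
# Minimal batch-kinetics stand-in: glucose -> butyrate -> butanol with
# mass-action rate constants k1, k2 (illustrative values), integrated
# with a simple forward-Euler scheme.
def simulate(k1, k2, t_end=10.0, dt=1e-3):
    g, b, u = 100.0, 0.0, 0.0   # glucose, butyrate, butanol (mM)
    for _ in range(int(t_end / dt)):
        r1, r2 = k1 * g, k2 * b
        g += -r1 * dt
        b += (r1 - r2) * dt
        u += r2 * dt
    return u  # final butanol concentration

base = simulate(0.5, 0.3)
# 5% perturbation of each rate constant, mimicking the paper's scan:
s_k1 = (simulate(0.5 * 1.05, 0.3) - base) / base
s_k2 = (simulate(0.5, 0.3 * 1.05) - base) / base
```

Each relative output change per 5% parameter change plays the role of a normalized local sensitivity; ranking these identifies the reaction whose perturbation most affects final butanol.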
Ciriello, V.; Di Federico, V.; Riva, M.; Cadini, F.; De Sanctis, J.; Zio, E.; Guadagnini, A.
2012-04-01
We perform a Global Sensitivity Analysis (GSA) of a transport model used to compute the peak radionuclide concentration at a given control location in a randomly heterogeneous aquifer, following a release from a near surface repository of radioactive waste and subsequent contaminant migration within the host porous medium. We illustrate how uncertainty stemming from incomplete characterization of (a) the correlation scale of the variogram of hydraulic conductivity, (b) the partition coefficient associated with sorption of the migrating radionuclide, and (c) the effective dispersivity at the scale of interest propagates to the first two (ensemble) moments of the peak solute concentration detected at a target location within a two-dimensional randomly heterogeneous hydraulic conductivity field. We treat the uncertain system parameters as independent random variables and perform a variance-based GSA within a numerical Monte Carlo framework. Groundwater flow and transport are solved by randomly sampling the space of the uncertain parameters for an ensemble of generated hydraulic conductivity realizations. The Sobol indices are adopted as sensitivity measures. These are calculated by employing a Polynomial Chaos Expansion (PCE) technique. The PCE-based representation of the response surface of the adopted transport model is then used as a surrogate model of the transport process to reduce the computational burden associated with a standard Monte Carlo solution of the original governing equations. This methodology allows identifying the relative influence of the selected uncertain parameters on the target (ensemble) moments of peak concentrations. Our results suggest that the ensemble mean of peak concentration is strongly influenced by the partition coefficient and the longitudinal dispersivity for the scenario analyzed. On the other hand, the hydraulic conductivity correlation scale plays an important role in the variance of the calculated peak concentration values.
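A sketch of variance-based GSA with Sobol indices estimated by plain Monte Carlo (the Saltelli/Jansen pick-freeze estimators); the three-input test function is an illustrative stand-in for the transport model, and in the paper a PCE surrogate would replace the direct model calls:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative surrogate for the peak-concentration model: additive terms
# plus one interaction, in three uncertain inputs on [0, 1].
def model(x):
    kd, alpha_l, lam = x[:, 0], x[:, 1], x[:, 2]
    return 4.0 * kd + 2.0 * alpha_l + 0.5 * lam + kd * alpha_l

n = 20000
A = rng.random((n, 3))   # two independent sample matrices
B = rng.random((n, 3))

yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1, ST = [], []          # first-order and total-order Sobol indices
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # "pick-freeze": column i taken from B
    yABi = model(ABi)
    S1.append(float(np.mean(yB * (yABi - yA)) / var_y))   # Saltelli (2010)
    ST.append(float(0.5 * np.mean((yA - yABi) ** 2) / var_y))  # Jansen
```

First-order indices measure each input's direct variance contribution; total-order indices add its interactions, so ST >= S1 for every input.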
Majid Namdari
2011-05-01
This study examines the energy consumption of inputs and output in mandarin production and the relationship between energy inputs and yield in Mazandaran, Iran. The Marginal Physical Product (MPP) method was used to analyze the sensitivity of energy inputs on mandarin yield, and the returns to scale of the econometric model were calculated. For this purpose, data were collected from 110 mandarin orchards selected by random sampling. The results indicated that total energy input was 77501.17 MJ/ha. The energy use efficiency, energy productivity and net energy of mandarin production were found to be 0.77, 0.41 kg/MJ and -17651.17 MJ/ha, respectively. About 41% of the total energy input used in mandarin production was indirect, while about 59% was direct. Econometric estimation results revealed that the impact of human labor energy (0.37) was the highest among the inputs in mandarin production. The results also showed that direct, indirect, renewable and non-renewable energy forms had a positive and statistically significant impact on output level. The sensitivity analysis of the energy inputs showed that an additional 1 MJ of human labor, farmyard manure and chemical fertilizer energy would increase yield by 2.05, 1.80 and 1.26 kg, respectively. The results also showed that the MPP values of direct and renewable energy were higher.
Sensitivity analysis of the relative biological effectiveness predicted by the local effect model.
Friedrich, T; Grün, R; Scholz, U; Elsässer, T; Durante, M; Scholz, M
2013-10-07
The relative biological effectiveness (RBE) is a central quantity in particle radiobiology and depends on many physical and biological factors. The local effect model (LEM) allows one to predict the RBE for radiobiologic experiments and particle therapy. In this work the sensitivity of the RBE to its determining factors is elucidated based on monitoring the RBE dependence on the input parameters of the LEM. The relevance and meaning of all parameters are discussed within the formalism of the LEM. While most of the parameters are fixed by experimental constraints, one parameter, the threshold dose Dt, may remain free and is then regarded as a fit parameter to the high LET dose response curve. The influence of each parameter on the RBE is understood in terms of theoretic considerations. The sensitivity analysis has been systematically carried out for fictitious in vitro cell lines or tissues with α/β = 2 Gy and 10 Gy, either irradiated under track segment conditions with a monoenergetic beam or within a spread out Bragg peak. For both irradiation conditions, a change of each of the parameters typically causes an approximately equal or smaller relative change of the predicted RBE values. These results may be used for the assessment of treatment plans and for general uncertainty estimations of the RBE.
Witholder, R.E.
1980-04-01
The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.
Wagener, T.; Pianosi, F.; Almeida, S.; Holcombe, E.
2016-12-01
We can define epistemic uncertainty as those uncertainties that are not well determined by historical observations. This lack of determination can arise because the future is not like the past, because the historical data are unreliable (imperfectly recorded from proxies, or missing), or because they are scarce (either because measurements are not available at the right scale or because there is simply no observation network available). This kind of uncertainty is typical of earth system modelling, but our approaches to address it are poorly developed. Because epistemic uncertainties cannot easily be characterised by probability distributions, traditional uncertainty analysis techniques based on Monte Carlo simulation and forward propagation of uncertainty are not adequate. Global Sensitivity Analysis (GSA) can provide an alternative approach where, rather than quantifying the impact of poorly defined or even unknown uncertainties on model predictions, one can investigate at what level such uncertainties would start to matter and whether this level is likely to be reached within the relevant time period analysed. The underlying objective of GSA in this case lies in mapping the uncertain input factors onto critical regions of the model output, e.g. where the output exceeds a certain threshold. Methods to implement this mapping step have so far received less attention and significant improvement is needed. We will present an example from landslide modelling - a field where observations are scarce, sub-surface characteristics are poorly constrained, and potential future rainfall triggers can be highly uncertain due to climate change. We demonstrate an approach that combines GSA and advanced Classification and Regression Trees (CART) to understand the risk of slope failure for an application in the Caribbean. We close with a discussion of opportunities for further methodological advancement.
Bi-directional exchange of ammonia in a pine forest ecosystem - a model sensitivity analysis
Moravek, Alexander; Hrdina, Amy; Murphy, Jennifer
2016-04-01
Ammonia (NH3) is a key component in the global nitrogen cycle and of great importance for atmospheric chemistry, neutralizing atmospheric acids and leading to the formation of aerosol particles. For understanding the role of NH3 in both natural and anthropogenically influenced environments, the knowledge of processes regulating its exchange between ecosystems and the atmosphere is essential. A two-layer canopy compensation point model is used to evaluate the NH3 exchange in a pine forest in the Colorado Rocky Mountains. The net flux comprises the NH3 exchange of leaf stomata, its deposition to leaf cuticles and exchange with the forest ground. As key parameters the model uses in-canopy NH3 mixing ratios as well as leaf and soil emission potentials measured at the site in summer 2015. A sensitivity analysis is performed to evaluate the major exchange pathways as well as the model's constraints. In addition, the NH3 exchange is examined for an extended range of environmental conditions, such as droughts or varying concentrations of atmospheric pollutants, in order to investigate their influence on the overall net exchange.
Saad, Bilal M.
2017-09-18
This work focuses on the simulation of CO2 storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO2 leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic model response. A non-intrusive approach is used to determine the PC coefficients. We establish the accuracy of the PC representations within a reasonable error threshold through systematic convergence studies. In addition to characterizing the distributions of model observables, we compute probabilities of excess CO2 leakage. Moreover, we consider the injection rate as a design parameter and compute an optimum injection rate that ensures that the risk of excess pressure buildup at the leaky well remains below acceptable levels. We also provide a comprehensive analysis of sensitivities of CO2 leakage, where we compute the contributions of the random parameters, and their interactions, to the variance by computing first, second, and total order Sobol’ indices.
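The non-intrusive PC expansion underlying such analyses can be sketched in one dimension: project a model y = f(x) with x uniform on [-1, 1] onto Legendre polynomials by quadrature, then read the output mean and variance directly off the coefficients. The exponential test function is illustrative, not the CO2 storage model:

```python
import numpy as np

# Non-intrusive PCE of a scalar model y = f(x), x ~ U(-1, 1), via Legendre
# polynomials and Gauss-Legendre quadrature (toy stand-in for the real model).
f = np.exp

deg = 8
nodes, weights = np.polynomial.legendre.leggauss(deg + 1)
y = f(nodes)  # the only model evaluations needed (non-intrusive)

coeffs = []
for k in range(deg + 1):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    # c_k = (2k+1)/2 * integral f(x) P_k(x) dx  (uniform density 1/2 on [-1,1])
    coeffs.append((2 * k + 1) / 2 * np.sum(weights * y * Pk))

pce_mean = coeffs[0]
# Var = sum_{k>=1} c_k^2 * E[P_k^2], with E[P_k^2] = 1/(2k+1) under U(-1, 1)
pce_var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))
```

With only nine model runs, the PCE mean and variance match the analytical moments of exp(U(-1, 1)) to high accuracy, which is the computational advantage exploited by surrogate-based GSA.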
Modeling and sensitivity analysis of electron capacitance for Geobacter in sedimentary environments.
Zhao, Jiao; Fang, Yilin; Scheibe, Timothy D; Lovley, Derek R; Mahadevan, R
2010-03-01
In situ stimulation of the metabolic activity of Geobacter species through acetate amendment has been shown to be a promising bioremediation strategy to reduce and immobilize hexavalent uranium [U(VI)] as insoluble U(IV). Although Geobacter species are reducing U(VI), they primarily grow via Fe(III) reduction. Unfortunately, the biogeochemistry and the physiology of simultaneous reduction of multiple metals are still poorly understood. A detailed model is therefore required to better understand the pathways leading to U(VI) and Fe(III) reduction by Geobacter species. Based on recent experimental evidence of temporary electron capacitors in Geobacter we propose a novel kinetic model that physically distinguishes planktonic cells into electron-loaded and -unloaded states. Incorporation of an electron load-unload cycle into the model provides insight into U(VI) reduction efficiency, and elucidates the relationship between U(VI)- and Fe(III)-reducing activity and further explains the correlation of high U(VI) removal with high fractions of planktonic cells in subsurface environments. Global sensitivity analysis was used to determine the level of importance of geochemical and microbial processes controlling Geobacter growth and U(VI) reduction, suggesting that the electron load-unload cycle and the resulting repartition of the microbes between aqueous and attached phases are critical for U(VI) reduction. As compared with conventional Monod modeling approaches without inclusion of the electron capacitance, the new model attempts to incorporate a novel cellular mechanism that has a significant impact on the outcome of in situ bioremediation.
Sensitivity analysis of the secondary settling tank double-exponential function model
Abusam, A.; Keesman, K.J.
2002-01-01
The secondary settling tank plays a crucial role in achieving the very strict effluent standards of wastewater treatment plants. To investigate the ability of the widely used secondary settling tank model, the double-exponential model, to predict the dynamic behavior, a factorial sensitivity analysis
Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T
2014-01-01
This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in the clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used.
Virus-sized colloid transport in a single pore: Model development and sensitivity analysis
Seetha, N.; Mohan Kumar, M. S.; Majid Hassanizadeh, S.; Raoof, Amir
2014-08-01
A mathematical model is developed to simulate the transport and deposition of virus-sized colloids in a cylindrical pore throat considering various processes such as advection, diffusion, colloid-collector surface interactions and hydrodynamic wall effects. The pore space is divided into three different regions, namely, bulk, diffusion and potential regions, based on the dominant processes acting in each of these regions. In the bulk region, colloid transport is governed by advection and diffusion whereas in the diffusion region, colloid mobility due to diffusion is retarded by hydrodynamic wall effects. Colloid-collector interaction forces dominate the transport in the potential region where colloid deposition occurs. The governing equations are non-dimensionalized and solved numerically. A sensitivity analysis indicates that the virus-sized colloid transport and deposition is significantly affected by various pore-scale parameters such as the surface potentials on colloid and collector, ionic strength of the solution, flow velocity, pore size and colloid size. The adsorbed concentration and hence, the favorability of the surface for adsorption increases with: (i) decreasing magnitude and ratio of surface potentials on colloid and collector, (ii) increasing ionic strength and (iii) increasing pore radius. The adsorbed concentration increases with increasing Pe, reaching a maximum value at Pe = 0.1 and then decreases thereafter. Also, the colloid size significantly affects particle deposition with the adsorbed concentration increasing with increasing particle radius, reaching a maximum value at a particle radius of 100 nm and then decreasing with increasing radius. System hydrodynamics is found to have a greater effect on larger particles than on smaller ones. The secondary minimum contribution to particle deposition has been found to increase as the favorability of the surface for adsorption decreases. The sensitivity of the model to a given parameter will be high
Lunde, Torleif Markussen; Korecha, Diriba; Loha, Eskindir; Sorteberg, Asgeir; Lindtjørn, Bernt
2013-01-23
Most of the current biophysical models designed to address the large-scale distribution of malaria assume that transmission of the disease is independent of the vector involved. Another common assumption in these type of model is that the mortality rate of mosquitoes is constant over their life span and that their dispersion is negligible. Mosquito models are important in the prediction of malaria and hence there is a need for a realistic representation of the vectors involved. We construct a biophysical model including two competing species, Anopheles gambiae s.s. and Anopheles arabiensis. Sensitivity analysis highlight the importance of relative humidity and mosquito size, the initial conditions and dispersion, and a rarely used parameter, the probability of finding blood. We also show that the assumption of exponential mortality of adult mosquitoes does not match the observed data, and suggest that an age dimension can overcome this problem. This study highlights some of the assumptions commonly used when constructing mosquito-malaria models and presents a realistic model of An. gambiae s.s. and An. arabiensis and their interaction. This new mosquito model, OMaWa, can improve our understanding of the dynamics of these vectors, which in turn can be used to understand the dynamics of malaria.
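The paper's point that constant (exponential) mosquito mortality differs sharply from age-dependent mortality can be illustrated numerically; the hazard parameters below are hypothetical, not fitted values:

```python
import numpy as np

# Adult survival under two mortality assumptions (illustrative parameters):
# constant hazard (exponential survival) vs. age-increasing (Gompertz) hazard.
days = np.arange(0, 40)

mu0 = 0.08                      # constant daily mortality rate
surv_exp = np.exp(-mu0 * days)  # exponential: hazard independent of age

a, b = 0.01, 0.15               # Gompertz: hazard a*exp(b*t) grows with age
surv_gomp = np.exp(-(a / b) * (np.exp(b * days) - 1.0))

# Similar early survival, but very different old-age survival, which is
# what matters for malaria transmission (only old mosquitoes are infectious).
ratio = surv_exp[30] / surv_gomp[30]
```

Because only mosquitoes that survive the parasite's incubation period transmit malaria, the exponential assumption can strongly overestimate the infectious fraction relative to an age-structured model.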
P. A. Garambois
2013-01-01
This paper presents a detailed analysis of 10 flash flood events in the Mediterranean region using the distributed hydrological model MARINE. Characterizing a catchment's response during flash flood events may provide new and valuable insight into the processes involved in extreme flood response and their dependency on catchment properties and flood severity. The main objective of this study is to analyze hydrologic model sensitivity in the case of flash floods with an approach new to hydrology, allowing variance decomposition of model outputs for temporal patterns of parameter sensitivity analysis. Such approaches enable ranking of uncertainty sources for non-linear and non-monotonic mappings at a low computational cost. This study uses the hydrologic model and sensitivity analysis as learning tools to derive temporal sensitivity analysis with a variance-based method for 10 flash floods that occurred in the French Pyrenees and Cévennes foothills. This constitutes a large dataset given the scarcity of data on flash flood events. With Nash performances above 0.73 on average for this extended set of validation events, the five sensitive parameters of the MARINE distributed physically based model are analyzed. This contribution shows that soil depth explains more than 80% of model output variance when most hydrographs are peaking. Moreover, lateral subsurface transfer is responsible for 80% of model variance for some catchment-flood events' hydrographs during slowly declining limbs. The unexplained variance of model output, representing interactions between parameters, turns out to be very low during modeled flood peaks, indicating that the model's parsimonious parameterization is appropriate for tackling the problem of flash floods. Interactions observed after model initialization or rainfall intensity peaks suggest improving the representation of water partitioning between flow components, as well as the initialization itself. This paper gives a practical framework for
Models for Risk Aggregation and Sensitivity Analysis: An Application to Bank Economic Capital
Hulusi Inanoglu
2009-12-01
A challenge in enterprise risk measurement for diversified financial institutions is developing a coherent approach to aggregating different risk types, motivated by rapid financial innovation, developments in supervisory standards (Basel 2) and the recent financial turmoil. The main risks faced - market, credit and operational - have distinct distributional properties and have historically been modeled in differing frameworks. We contribute to the modeling effort by providing tools and insights to practitioners and regulators. First, we extend the scope of the analysis to liquidity and interest rate risk, which have Pillar II implications under Basel. Second, we utilize loss-experience data from major banking institutions' supervisory call reports, which allows us to explore the impact of business mix and inter-risk correlations on total risk. Third, we estimate and compare alternative established frameworks for risk aggregation (including copula models) on the same data-sets across banks, comparing absolute total risk measures (Value-at-Risk - VaR) and proportional diversification benefits (PDB), goodness-of-fit (GOF) of the models to the data, as well as the variability of the VaR estimate with respect to sampling error in the parameters. This benchmarking and sensitivity analysis suggests that practitioners consider implementing a simple non-parametric methodology (empirical copula simulation - ECS) in order to quantify integrated risk, as it is found to be more conservative and stable than the other models. We observe that ECS produces 20% to 30% higher VaR relative to the standard Gaussian copula simulation (GCS), while the variance-covariance approximation (VCA) is much lower. ECS yields the highest PDBs of the methodologies (127% to 243%), while Archimedean Gumbel copula simulation (AGCS) is the lowest (10-21%). Across the five largest banks we fail to find the effect of business mix to exert a directionally consistent impact on
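The empirical copula simulation the authors recommend can be sketched in a few lines: resampling observed joint loss vectors preserves the historical dependence structure without fitting a parametric copula. The loss series, parameter values and the PDB formula below are illustrative stand-ins, not the supervisory call-report data or the exact measures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Synthetic loss histories for two risk types (hypothetical data, standing
# in for the banks' call-report loss series used in the paper).
market = rng.lognormal(mean=1.0, sigma=0.8, size=n)
credit = 0.5 * market + rng.lognormal(mean=0.8, sigma=0.6, size=n)

def empirical_copula_var(losses, n_sim=20000, alpha=0.99, rng=rng):
    """Empirical copula simulation (ECS): resample observed joint loss
    vectors row-wise, so the empirical dependence structure is kept
    without assuming a Gaussian or Archimedean copula."""
    losses = np.column_stack(losses)
    idx = rng.integers(0, losses.shape[0], size=n_sim)
    total = losses[idx].sum(axis=1)  # joint resampling keeps dependence
    return np.quantile(total, alpha)

var_total = empirical_copula_var([market, credit])
# One common diversification measure: standalone VaRs vs aggregated VaR.
var_standalone = np.quantile(market, 0.99) + np.quantile(credit, 0.99)
pdb = (var_standalone - var_total) / var_total
```

Because the aggregated VaR comes straight from resampled historical joint tails, the approach is non-parametric and, as the abstract notes, tends to be more conservative than a Gaussian copula fit to the same data.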
Xi, Jinxiang; Si, Xiuhua A.; Kim, JongWon; Mckee, Edward; Lin, En-Bing
2014-01-01
Background Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. Objective and Methods In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three diseased conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish the exhaled aerosol patterns. Findings Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions between normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma. Conclusion Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities. PMID:25105680
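The fractal dimension mentioned among the fingerprint parameters can be estimated from a binary deposition image by box counting. The sketch below uses synthetic test patterns to illustrate the idea; it is not the paper's CFD pipeline.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary pattern as the slope of
    log N(s) versus log(1/s), where N(s) is the number of s-by-s boxes
    containing any part of the pattern."""
    counts = []
    for s in sizes:
        S = (img.shape[0] // s) * s          # crop to a multiple of s
        blocks = img[:S, :S].reshape(S // s, s, -1, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks on synthetic patterns: a filled square has dimension 2,
# a straight line has dimension 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(box_counting_dimension(square), box_counting_dimension(line))
```

A real exhaled-aerosol image would fall between these extremes, and differences in the estimated slope across airway conditions are what the fingerprint comparison exploits.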
Jinxiang Xi
Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases. In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three diseased conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish the exhaled aerosol patterns. Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions between normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma. Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities.
Modeling Analysis of Signal Sensitivity and Specificity by Vibrio fischeri LuxR Variants.
Colton, Deanna M; Stabb, Eric V; Hagen, Stephen J
2015-01-01
The LuxR protein of the bacterium Vibrio fischeri belongs to a family of transcriptional activators that underlie pheromone-mediated signaling by responding to acyl-homoserine lactones (acyl-HSLs) or related molecules. V. fischeri produces two acyl-HSLs, N-3-oxo-hexanoyl-HSL (3OC6-HSL) and N-octanoyl-HSL (C8-HSL), each of which interacts with LuxR to facilitate its binding to a "lux box" DNA sequence, thereby enabling LuxR to activate transcription of the lux operon responsible for bioluminescence. We have investigated the HSL sensitivity of four different variants of V. fischeri LuxR: two derived from wild-type strains ES114 and MJ1, and two derivatives of LuxRMJ1 generated by directed evolution. For each LuxR variant, we measured the bioluminescence induced by combinations of C8-HSL and 3OC6-HSL. We fit these data to a model in which the two HSLs compete with each other to form multimeric LuxR complexes that directly interact with lux to activate bioluminescence. The model reproduces the observed effects of HSL combinations on the bioluminescence responses directed by LuxR variants, including competition and non-monotonic responses to C8-HSL and 3OC6-HSL. The analysis yields robust estimates for the underlying dissociation constants and cooperativities (Hill coefficients) of the LuxR-HSL complexes and their affinities for the lux box. It also reveals significant differences in the affinities of LuxRMJ1 and LuxRES114 for 3OC6-HSL. Further, LuxRMJ1 and LuxRES114 differed sharply from LuxRs retrieved by directed evolution in the cooperativity of LuxR-HSL complex formation and the affinity of these complexes for lux. These results show how computational modeling of in vivo experimental data can provide insight into the mechanistic consequences of directed evolution.
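A minimal version of such a competitive-activation model can be written directly. The parameter values below (dissociation constants K, efficacies a, Hill coefficient n) are illustrative placeholders, not the fitted values reported for any LuxR variant.

```python
import numpy as np

def lux_response(c1, c2, K1=0.1, K2=1.0, a1=1.0, a2=0.3, n=2.0):
    """Toy competitive-activation model (not the paper's fitted model):
    two HSLs compete for the same LuxR pool, and each LuxR-HSL complex
    activates lux with its own efficacy a_i. Occupancies follow a
    competitive Hill-type binding scheme."""
    s1, s2 = (c1 / K1) ** n, (c2 / K2) ** n
    denom = 1.0 + s1 + s2
    return (a1 * s1 + a2 * s2) / denom

# Strong agonist alone gives near-maximal output; adding a weaker agonist
# on top of it displaces the strong one and lowers the output.
print(lux_response(1.0, 0.0), lux_response(1.0, 10.0))
```

Because the weaker agonist (a2 < a1) competes for the same LuxR pool, adding it on top of a saturating strong agonist reduces the response, which is the kind of non-monotonic behavior the abstract describes.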
Neumann, Marc B; Gujer, Willi; von Gunten, Urs
2009-03-01
This study quantifies the uncertainty involved in predicting micropollutant oxidation during drinking water ozonation in a pilot plant reactor. The analysis is conducted for geosmin, methyl tert-butyl ether (MTBE), isopropylmethoxypyrazine (IPMP), bezafibrate, beta-cyclocitral and ciprofloxacin. These compounds are representative of a wide range of substances, with second-order rate constants between 0.1 and 1.9 x 10^4 M^-1 s^-1 for the reaction with ozone and between 2 x 10^9 and 8 x 10^9 M^-1 s^-1 for the reaction with OH radicals. Uncertainty ranges are derived for second-order rate constants, hydraulic parameters, flow and ozone concentration data, and water characteristic parameters. The uncertain model factors are propagated via Monte Carlo simulation and the resulting probability distributions of the relative residual micropollutant concentrations are assessed. The importance of factors in determining model output variance is quantified using Extended Fourier Amplitude Sensitivity Testing (Extended FAST). For substances that react slowly with ozone (MTBE, IPMP, geosmin), the water characteristic Rct value (ratio of OH-radical to ozone concentration) is the most influential factor, explaining 80% of the output variance. In the case of bezafibrate, the Rct value and the second-order rate constant for the reaction with ozone each contribute about 30% to the output variance. For beta-cyclocitral and ciprofloxacin (fast-reacting with ozone), the second-order rate constant for the reaction with ozone and the hydraulic model structure become the dominant sources of uncertainty.
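The Monte Carlo propagation step can be sketched as follows for a single compound. The distributions and parameter values are illustrative assumptions, not the calibrated pilot-plant inputs; the exposure-based expression ln(c/c0) = -(k_O3 + k_OH * Rct) * integral([O3] dt) follows from defining Rct as the ratio of OH-radical to ozone concentration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
# Uncertain inputs, sampled from assumed (illustrative) distributions:
k_o3 = rng.lognormal(np.log(10.0), 0.2, n)   # M^-1 s^-1, ozone rate constant
k_oh = rng.lognormal(np.log(5e9), 0.1, n)    # M^-1 s^-1, OH-radical rate constant
rct = rng.lognormal(np.log(1e-8), 0.3, n)    # Rct = [OH]/[O3]
o3_exposure = rng.normal(2e-3, 2e-4, n)      # M s, integrated ozone exposure

# Relative residual concentration after ozonation:
# ln(c/c0) = -(k_O3 + k_OH * Rct) * integral([O3] dt)
residual = np.exp(-(k_o3 + k_oh * rct) * o3_exposure)
print(np.percentile(residual, [5, 50, 95]))
```

The spread of the resulting percentiles is exactly the kind of output distribution the study assesses; a variance-based method such as Extended FAST then attributes that spread to the individual input factors.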
2012-09-01
ATMOSPHERIC MODELS INCLUDING ENSEMBLE METHODS. Scott E. Miller, Lieutenant Commander, United States Navy; B.S., University of South Carolina, 2000. [Abstract not recoverable: the record contains only front-matter residue (a list of figures and an acronym glossary).]
Smith, G.P.
1999-03-01
The author has examined the kinetic reliability of ozone model predictions by computing direct first-order sensitivities of model species concentrations to input parameters: S_ij = (dC_i/C_i)/(dk_j/k_j), where C_i is the abundance of species i (e.g., ozone) and k_j is the rate constant of step j (reaction, photolysis, or transport), for localized boxes from the LLNL 2-D diurnally averaged atmospheric model. An ozone sensitivity survey of boxes at altitudes of 10-55 km and latitudes of 2-62 N, for spring, equinox, and winter is presented. Ozone sensitivities are used to evaluate the response of model predictions of ozone to input rate coefficient changes, to propagate laboratory rate uncertainties through the model, and to select processes and regions suited to more precise measurements. By including the local chemical feedbacks, the sensitivities quantify the important roles of oxygen and ozone photolysis, transport from the tropics, and the relation of key catalytic steps and cycles in regulating stratospheric ozone as a function of altitude, latitude, and season. A sensitivity-uncertainty analysis uses the sensitivity coefficients to propagate laboratory error bars in input photochemical parameters and estimate the net model uncertainties of predicted ozone in isolated boxes; it was applied to potential problems in the upper stratospheric ozone budget, and also highlights superior regions for model validation.
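Normalized sensitivity coefficients of this form can be computed for any box model by central finite differences. The toy steady-state model below (production balanced by quadratic loss) is an illustration with an analytic answer, not the LLNL 2-D model.

```python
import numpy as np

def ozone_ss(k):
    """Toy steady-state odd-oxygen abundance (illustrative only):
    production P = k[0], loss L = k[1] * C^2, so C = sqrt(k[0]/k[1])."""
    return np.sqrt(k[0] / k[1])

def normalized_sensitivities(model, k, h=1e-4):
    """S_j = (dC/C)/(dk_j/k_j), estimated by central finite differences
    with a relative step h on each rate constant in turn."""
    k = np.asarray(k, dtype=float)
    c0 = model(k)
    s = np.empty_like(k)
    for j in range(k.size):
        up, dn = k.copy(), k.copy()
        up[j] *= 1 + h
        dn[j] *= 1 - h
        s[j] = (model(up) - model(dn)) / (2 * h * c0)
    return s

# Analytic values for this model are +0.5 and -0.5.
print(normalized_sensitivities(ozone_ss, [1.0, 2.0]))
```

Because S_j is dimensionless, a laboratory uncertainty of, say, 20% in k_j translates directly into roughly S_j x 20% uncertainty in the predicted abundance, which is the propagation step the abstract describes.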
On the Juno radio science experiment: models, algorithms and sensitivity analysis
Tommei, G.; Dimare, L.; Serra, D.; Milani, A.
2015-01-01
Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive at the planet in 2016 and will be placed for one year in a highly eccentric polar orbit to study the planet's composition, gravity and magnetic field. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying Jupiter's deep structure by mapping the planet's gravity; this instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, as part of the Juno Italian team, is developing orbit determination and parameter estimation software for processing the real data independently of the NASA software ODP. This paper has a twofold goal: first, to describe the development of this software, highlighting the models used; second, to perform a sensitivity analysis on the parameters of interest to the mission.
On the Juno Radio Science Experiment: models, algorithms and sensitivity analysis
Tommei, Giacomo; Serra, Daniele; Milani, Andrea
2014-01-01
Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive at the planet in 2016 and will be placed for one year in a highly eccentric polar orbit to study the planet's composition, gravity and magnetic field. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying Jupiter's deep structure by mapping the planet's gravity; this instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, as part of the Juno Italian team, is developing orbit determination and parameter estimation software for processing the real data independently of the NASA software ODP. This paper has a twofold goal: first, to describe the development of this software, highlighting the models used; second, to perform a sensitivity analysis on the parameters ...
X. Hu
2014-01-01
The atmospheric transport and ground deposition of the radioactive isotopes 131I and 137Cs during and after the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident (March 2011) are investigated using the Weather Research and Forecasting/Chemistry (WRF/Chem) model. The aim is to assess the skill of WRF in simulating these processes and the sensitivity of the model's performance to various parameterizations of unresolved physics. The WRF/Chem model is first upgraded by implementing a radioactive decay term in the advection-diffusion solver and adding three parameterizations for dry deposition and two for wet deposition. Different microphysics and horizontal turbulent diffusion schemes are then tested for their ability to reproduce observed meteorological conditions. Subsequently, the influence on the simulated transport and deposition of the characteristics of the emission source, including the emission rate, the gas partitioning of 131I and the size distribution of 137Cs, is examined. The results show that the model can predict the wind fields and rainfall realistically. The ground deposition of the radionuclides can also potentially be captured well, but it is very sensitive to the emission characterization. It is found that the total deposition is most influenced by the emission rate for both 131I and 137Cs, while it is less sensitive to the dry deposition parameterizations. Moreover, for 131I, the deposition is also sensitive to the microphysics schemes, the horizontal diffusion schemes, gas partitioning and wet deposition parameterizations; for 137Cs, the deposition is very sensitive to the microphysics and wet deposition parameterizations, and it is also sensitive to the horizontal diffusion schemes and the size distribution.
Morris, Edgar [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-10-01
The Used Fuel Disposition Campaign (UFDC), as part of the DOE Office of Nuclear Energy's (DOE-NE) Fuel Cycle Technology program (FCT), is investigating the disposal of high-level radioactive waste (HLW) and spent nuclear fuel (SNF) in a variety of geologic media. The feasibility of disposing of SNF and HLW in clay media has been investigated and shown to be promising [Ref. 1]. In addition, the disposal of these wastes in clay media is being investigated in Belgium, France, and Switzerland. Thus, argillaceous media is one of the environments being considered by UFDC. As identified by researchers at Sandia National Laboratory, potentially suitable formations that may exist in the U.S. include mudstone, clay, shale, and argillite formations [Ref. 1]. These formations encompass a broad range of material properties; in this report, reference to clay media is intended to cover the full range. This report presents the status of the development of a simulation model for evaluating the performance of generic clay media. The clay Generic Disposal System Model (GDSM) repository performance simulation tool has been developed with the flexibility to evaluate not only different properties, but also different waste streams/forms and different repository designs and engineered barrier configurations/materials that could be used to dispose of these wastes.
Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu
2017-03-01
Developing predictive capability for future atmospheric oxidation capacity requires a detailed analysis of model uncertainties and of the sensitivity of the modeled oxidation capacity to model input variables. Using oxidant mixing ratios modeled by the GEOS-Chem chemical transport model and measured on the NASA DC-8 aircraft, uncertainty and global sensitivity analyses were performed on the GEOS-Chem chemical transport model for the modeled oxidants hydroxyl (OH), hydroperoxyl (HO2), and ozone (O3). The sensitivity of modeled OH, HO2, and ozone to model inputs perturbed simultaneously within their respective uncertainties was found for the flight tracks of NASA's Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) A and B campaigns (2008) in the North American Arctic. For the spring deployment (ARCTAS-A), ozone was most sensitive to the photolysis rate of NO2, the NO2 + OH reaction rate, and various emissions, including bromoform (CHBr3). OH and HO2 were overwhelmingly sensitive to aerosol particle uptake of HO2, with this one factor contributing upwards of 75% of the uncertainty in HO2. For the summer deployment (ARCTAS-B), ozone was most sensitive to emission factors, such as soil NOx and isoprene. OH and HO2 were most sensitive to biomass emissions and aerosol particle uptake of HO2. With modeled HO2 showing a factor of 2 underestimation compared to measurements in the lowest 2 km of the troposphere, lower uptake rates (γHO2 < 0.055), regardless of whether the product of the uptake is H2O or H2O2, produced better agreement between modeled and measured HO2.
Huang, Jiacong; Gao, Junfeng; Yan, Renhua
2016-08-15
Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource to help water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, a sensitivity analysis was carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of the model coupling and its application in investigating potential strategies to support pollution control in polder systems.
Sin, Gürkan; Gernaey, Krist; Neumann, Marc B.
2011-01-01
a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R2...... in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters, e.g., ash content explains 70% of the variance...
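The standardized regression coefficient (SRC) method referred to above can be sketched as follows. The toy linear "plant model" and the input distributions are invented for illustration; when the regression R2 is near 1, each squared SRC approximates the fraction of output variance contributed by that input.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Monte Carlo sample of three uncertain inputs with different spreads
# (hypothetical stand-ins for uncertain plant parameters).
X = rng.normal(size=(n, 3)) * np.array([1.0, 0.5, 0.2])
# Toy performance model (illustrative, not an actual plant model):
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

# Regress y on the inputs, then standardize the slopes by the ratio of
# input to output standard deviations to obtain the SRCs.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
src = beta[1:] * X.std(axis=0) / y.std()
r2 = 1 - np.var(y - A @ beta) / np.var(y)
# src[j]**2 ~ fraction of output variance explained by input j (if R2 ~ 1)
print(src, r2)
```

Here the first input dominates because both its coefficient and its spread are largest, which mirrors the abstract's point that the method quantifies each parameter's contribution to the variance of a performance criterion.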
Sensitivity Assessment of Ozone Models
Shorter, Jeffrey A.; Rabitz, Herschel A.; Armstrong, Russell A.
2000-01-24
The activities under this contract effort were aimed at developing sensitivity analysis techniques and fully equivalent operational models (FEOMs) for applications in the DOE Atmospheric Chemistry Program (ACP). MRC developed a new model representation algorithm that uses a hierarchical, correlated function expansion containing a finite number of terms. A full expansion of this type is an exact representation of the original model, and each of the expansion functions is explicitly calculated using the original model. After calculating the expansion functions, they are assembled into a fully equivalent operational model (FEOM) that can directly replace the original model.
Sullivan, Adam John
In chapter 1, we consider the biases that may arise when an unmeasured confounder is omitted from a structural equation model (SEM) and sensitivity analysis techniques to correct for such biases. We give an analysis of which effects in an SEM are and are not biased by an unmeasured confounder. It is shown that a single unmeasured confounder will bias not just one but numerous effects in an SEM. We present sensitivity analysis techniques to correct for biases in total, direct, and indirect effects when using SEM analyses, and illustrate these techniques with a study of aging and cognitive function. In chapter 2, we consider longitudinal mediation with latent growth curves. We define the direct and indirect effects using counterfactuals and consider the assumptions needed for identifiability of those effects. We develop models with a binary treatment/exposure, followed by a model where treatment/exposure changes with time, allowing for treatment/exposure-mediator interaction. We thus formalize mediation analysis with latent growth curve models using counterfactuals, make clear the assumptions, and extend these methods to allow for exposure-mediator interactions. We present and illustrate the techniques with a study on Multiple Sclerosis (MS) and depression. In chapter 3, we report on a pilot study in blended learning that took place during the Fall 2013 and Summer 2014 semesters here at Harvard. We blended the traditional BIO 200: Principles of Biostatistics and created ID 200: Principles of Biostatistics and Epidemiology. We used materials from the edX course PH207x: Health in Numbers: Quantitative Methods in Clinical & Public Health Research. These materials served as a video textbook, with students watching a given number of the videos prior to class. Using surveys as well as exam data, we informally assess these blended classes from the student's perspective and compare these students with students in another course, BIO 201
Renewable Energy Deployment in Colorado and the West: A Modeling Sensitivity and GIS Analysis
Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Haase, Scott [National Renewable Energy Lab. (NREL), Golden, CO (United States); Melius, Jennifer [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mooney, Meghan [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-03-01
The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.
Sensitivity analysis of a forest gap model concerning current and future climate variability
Lasch, P.; Suckow, F.; Buerger, G.; Lindner, M.
1998-07-01
The ability of a forest gap model to simulate the effects of climate variability and extreme events depends on the temporal resolution of the weather data that are used and the internal processing of these data for growth, regeneration and mortality. The climatological driving forces of most current gap models are based on monthly means of weather data and their standard deviations, and long-term monthly means are used for calculating yearly aggregated response functions for ecological processes. In this study, the results of sensitivity analyses using the forest gap model FORSKA-P and involving climate data of different resolutions, from long-term monthly means to daily time series including extreme events, are presented for the current climate and for a climate change scenario. The model was applied at two sites with differing soil conditions in the federal state of Brandenburg, Germany. The sensitivity of the model to climate variations and different climate input resolutions is analysed and evaluated. The climate variability used for the model investigations affected the behaviour of the model substantially.
Parametric sensitivity analysis for the helium dimers on a model potential
Nelson Henrique Teixeira Lemes
2012-01-01
Sensitivity analysis of potential parameters for the heteronuclear helium dimers HeNe, HeAr, HeKr and HeXe is the subject of this work. The number of bound states these rare-gas dimers can support, for different angular momenta, is presented and discussed. The variable phase method, together with Levinson's theorem, is used to explore the quantum scattering process at very low collision energy using the Tang and Toennies potential. These diatomic dimers can support a bound state even for relative angular momentum equal to five, as in HeXe. Vibrationally excited states with zero angular momentum are also possible for HeKr and HeXe. Results from the sensitivity analysis give acceptable orders of magnitude for the potential parameters.
P. A. Garambois
2013-06-01
This paper presents a detailed analysis of 10 flash flood events in the Mediterranean region using the distributed hydrological model MARINE. Characterizing catchment response during flash flood events may provide new and valuable insight into the dynamics involved in extreme catchment response and their dependency on physiographic properties and flood severity. The main objective of this study is to analyze the sensitivity of a flash-flood-dedicated hydrologic model with a new approach in hydrology, allowing decomposition of model output variance into temporal patterns of parameter sensitivity. Such approaches enable ranking of uncertainty sources for nonlinear and nonmonotonic mappings at a low computational cost. The hydrologic model and sensitivity analysis are used as learning tools on a large flash flood dataset. With Nash performances above 0.73 on average for this extended set of 10 validation events, the five sensitive parameters of the MARINE process-oriented distributed model are analyzed. This contribution shows that soil depth explains more than 80% of model output variance when most hydrographs are peaking. Moreover, lateral subsurface transfer is responsible for 80% of model variance for some catchment-flood hydrographs during slowly declining limbs. The unexplained variance of model output, representing interactions between parameters, proves to be very low during modeled flood peaks, indicating that the model's parsimonious parameterization is appropriate for tackling the problem of flash floods. Interactions observed after model initialization or rainfall intensity peaks suggest improving the representation of water partitioning between flow components, and the initialization itself. This paper gives a practical framework for application of this method to other models, landscapes and climatic conditions, potentially helping to improve process understanding and representation.
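The variance decomposition used here (first-order Sobol-type indices, computed at each output time step to obtain temporal sensitivity patterns) can be sketched with a pick-freeze estimator. The test function below is a stand-in for one time step of the hydrologic model output, not MARINE itself.

```python
import numpy as np

def sobol_first_order(model, d, n=20000, seed=0):
    """First-order Sobol indices via the pick-freeze (Saltelli-type)
    scheme on U(0,1)^d inputs; a minimal sketch, not the authors'
    implementation. S_j ~ mean(yB * (y_ABj - yA)) / Var(y), where ABj
    is sample A with column j swapped in from sample B."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    s = np.empty(d)
    for j in range(d):
        ABj = A.copy()
        ABj[:, j] = B[:, j]
        s[j] = np.mean(yB * (model(ABj) - yA)) / var
    return s

# Check on an additive test function y = 4*x1 + 2*x2, whose analytic
# first-order indices are S = [0.8, 0.2].
s = sobol_first_order(lambda X: 4 * X[:, 0] + 2 * X[:, 1], d=2)
print(s)
```

Applying such an estimator to the simulated discharge at every time step yields the temporal sensitivity patterns described above, e.g. soil depth dominating the variance around the flood peak.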
Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club; Pourgol-Mohammad, Mohammad [Sahand Univ. of Technology, Tabriz (Iran, Islamic Republic of). Dept. of Mechanical Engineering; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)
2017-03-15
This paper deals with simulation, sensitivity and uncertainty analysis of the LP-FP-2 experiment at the LOFT test facility. The test facility simulates the major components and system response of a pressurized water reactor during a LOCA. The MELCOR code is used for predicting the fission product release from the core fuel elements in the LOFT LP-FP-2 experiment. Moreover, sensitivity and uncertainty analyses are performed for the different CORSOR models that simulate the release of fission products in severe accident calculations for nuclear power plants. The calculated values for the fission product release are compared under different modeling options to the experimental data available from the experiment. In conclusion, the performance of 8 CORSOR modeling options is assessed for the available modeling alternatives in the code structure.
Donckels, B M R; Kroll, S; Van Dorpe, M; Weemaes, M
2014-01-01
The presence of high concentrations of hydrogen sulfide in the sewer system can result in corrosion of the concrete sewer pipes. The formation and fate of hydrogen sulfide in the sewer system is governed by a complex system of biological, chemical and physical processes. Therefore, mechanistic models have been developed to describe the underlying processes. In this work, global sensitivity analysis was applied to an in-sewer process model (aqua3S) to determine the most important model input factors with regard to sulfide formation in rising mains and the concrete corrosion rate downstream of a rising main. The results of the sensitivity analysis revealed the most influential model parameters, but also the importance of the characteristics of the organic matter, the alkalinity of the concrete and the movement of the sewer gas phase.
Yan Gao; Zhiqiang Hu; Jin Wang
2014-01-01
Increasing marine activity in the Arctic has brought growing interest in ship-iceberg collision studies. The purpose of this paper is to study the effect of iceberg geometry on the collision process. In order to estimate parameter sensitivity, five different iceberg geometry models and two iceberg material models are adopted in the analysis. FEM numerical simulation is used to predict the collision scenario and the related responses. The simulation results including energy dissipation ...
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
Sensitivity analysis of a simple linear model of a savanna ecosystem at Nylsvley
Getz, WA
1975-12-01
… parameters is analysed. The results obtained from this analysis are discussed and some general statements on important structures in the Nylsvley ecosystem that emerge from the analysis of the model are made. In particular certain conclusions are drawn...
de'Michieli Vitturi, M.; Engwell, S. L.; Neri, A.; Barsotti, S.
2016-10-01
The behavior of plumes associated with explosive volcanic eruptions is complex and dependent on eruptive source parameters (e.g. exit velocity, gas fraction, temperature and grain-size distribution). It is also well known that the atmospheric environment interacts with volcanic plumes produced by explosive eruptions in a number of ways. The wind field can bend the plume but also affect atmospheric air entrainment into the column, enhancing its buoyancy and in some cases, preventing column collapse. In recent years, several numerical simulation tools and observational systems have investigated the action of eruption parameters and wind field on volcanic column height and column trajectory, revealing an important influence of these variables on plume behavior. In this study, we assess these dependencies using the integral model PLUME-MoM, whereby the continuous polydispersity of pyroclastic particles is described using a quadrature-based moment method, an innovative approach in volcanology well-suited for the description of the multiphase nature of magmatic mixtures. Application of formalized uncertainty quantification and sensitivity analysis techniques enables statistical exploration of the model, providing information on the extent to which uncertainty in the input or model parameters propagates to model output uncertainty. In particular, in the framework of the IAVCEI Commission on tephra hazard modeling inter-comparison study, PLUME-MoM is used to investigate the parameters exerting a major control on plume height, applying it to a weak plume scenario based on 26 January 2011 Shinmoe-dake eruptive conditions and a strong plume scenario based on the climatic phase of the 15 June 1991 Pinatubo eruption.
A sensitivity analysis of a radiological assessment model for Arctic waters
Nielsen, S.P.
1998-01-01
A model based on compartment analysis has been developed to simulate the dispersion of radionuclides in Arctic waters for an assessment of doses to man. The model predicts concentrations of radionuclides in the marine environment and doses to man from a range of exposure pathways. A parameter sen...
Ferrari, A.; Gutierrez, S.; Sin, Gürkan
2016-01-01
A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using sensitiv...
Motion analysis study on sensitivity of finite element model of the cervical spine to geometry.
Zafarparandeh, Iman; Erbulut, Deniz U; Ozer, Ali F
2016-07-01
Numerous finite element models of the cervical spine have been proposed, with exact geometry or with a symmetric approximation of the geometry. However, few studies have investigated the sensitivity of predicted motion responses to the geometry of the cervical spine. The goal of this study was to evaluate the effect of the symmetric assumption on the motion predicted by a finite element model of the cervical spine. We developed two finite element models of the cervical spine C2-C7. One model was based on the exact geometry of the cervical spine (asymmetric model), whereas the other was symmetric (symmetric model) about the mid-sagittal plane. The predicted range of motion of both models (main and coupled motions) was compared with published experimental data for all motion planes under a full range of loads. The maximum differences between the asymmetric model and symmetric model predictions for the principal motion were 31%, 78%, and 126% for flexion-extension, right-left lateral bending, and right-left axial rotation, respectively. For flexion-extension and lateral bending, the minimum difference was 0%, whereas it was 2% for axial rotation. The maximum coupled motions predicted by the symmetric model were 1.5° axial rotation and 3.6° lateral bending, under applied lateral bending and axial rotation, respectively. Those coupled motions predicted by the asymmetric model were 1.6° axial rotation and 4° lateral bending, under applied lateral bending and axial rotation, respectively. In general, the motion response of the cervical spine predicted by the symmetric model was in the acceptable range, and the nonlinearity of the moment-rotation curve for the cervical spine was properly predicted.
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...
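As a concrete illustration of the Jacobian computation described above, a brute-force finite-difference version can be sketched as follows (the adjoint approach the book emphasizes is more economical when there are many parameters, but longer to sketch). The two-parameter forward model is a hypothetical stand-in, not one from the book:

```python
import numpy as np

def jacobian_fd(forward, x, eps=1e-6):
    """Finite-difference Jacobian of a forward model R^n -> R^m."""
    y0 = forward(x)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps                       # perturb one parameter at a time
        J[:, j] = (forward(xp) - y0) / eps
    return J

# Hypothetical two-observable forward model: attenuated signals depending
# on an optical depth tau and a surface albedo (illustrative only).
def forward(p):
    tau, albedo = p
    return np.array([albedo * np.exp(-tau), albedo * np.exp(-2 * tau)])

x = np.array([0.5, 0.8])
J = jacobian_fd(forward, x)
# Analytic check for the first row: d/d tau = -albedo*exp(-tau),
#                                   d/d albedo = exp(-tau)
```

Each column of J is one "what if this parameter moved" experiment, which is exactly the information an inversion scheme consumes.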
Application of system-identification by ARMarkov and sensitivity analysis to noise-amplifier models
Dovetta, Nicolas; Schmid, Peter; Sipp, Denis; McKeon, Beverley
2011-11-01
Separated flows often exhibit amplification of external noise sources via an interaction with shear layer instabilities. In order to manipulate this amplification process, we consider a data-based control design strategy. The first step is to build a state-space representation of the input-output transfer function. An auto-regressive representation is used that explicitly includes Markov parameters (ARMarkov). This is then coupled with the eigensystem realization algorithm (ERA), which yields a reduced-order state-space representation of the problem. In real experiments the data is contaminated by measurement noise or by non-linearities which are not accounted for by the present approach. In order to enforce robustness of the identification-realization procedure, a sensitivity analysis of the algorithm is performed. These sensitivities provide quantitative criteria to find the most robust way of identifying the system using the ARMarkov/ERA algorithm. The system-identification and sensitivity framework will be demonstrated on the Ginzburg-Landau equation. Support from the Partner University Fund (PUF) is gratefully acknowledged.
Sensitivity analysis of dispersion modeling of volcanic ash from Eyjafjallajökull in May 2010
Devenish, B. J.; Francis, P. N.; Johnson, B. T.; Sparks, R. S. J.; Thomson, D. J.
2012-10-01
We analyze the sensitivity of a mathematical model of volcanic ash dispersion in the atmosphere to the representation of key physical processes. These include the parameterization of subgrid-scale atmospheric processes and source parameters such as the height of the eruption column, the mass emission rate, the size of the particulates, and the amount of ash that falls out close to the source. By comparing the results of the mathematical model with satellite and airborne observations of the ash cloud that erupted from Eyjafjallajökull volcano in May 2010, we are able to gain some insight into the processes and parameters that govern the long-range dispersion of ash in the atmosphere. The structure of the ash cloud, particularly its width and depth, appears to be sensitive to the source profile (i.e., whether ash is released over a deep vertical column or not) and to the level of subgrid diffusion. Of central importance to the quantitative estimates of ash concentration in the distal ash cloud is the fallout of ash close to the source. By comparing the mass of the ash and the column loadings in the modeled and observed distal ash cloud, we estimate the fraction of fine ash that survives into the distal ash cloud albeit with considerable uncertainty. The processes that contribute to this uncertainty are discussed.
Wang, Chenguang; Daniels, Michael J
2011-09-01
Pattern mixture modeling is a popular approach for handling incomplete longitudinal data. Such models are not identifiable by construction. Identifying restrictions is one approach to mixture model identification (Little, 1995, Journal of the American Statistical Association 90, 1112-1121; Little and Wang, 1996, Biometrics 52, 98-111; Thijs et al., 2002, Biostatistics 3, 245-265; Kenward, Molenberghs, and Thijs, 2003, Biometrika 90, 53-71; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis) and is a natural starting point for missing not at random sensitivity analysis (Thijs et al., 2002, Biostatistics 3, 245-265; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis). However, when the pattern specific models are multivariate normal, identifying restrictions corresponding to missing at random (MAR) may not exist. Furthermore, identification strategies can be problematic in models with covariates (e.g., baseline covariates with time-invariant coefficients). In this article, we explore conditions necessary for identifying restrictions that result in MAR to exist under a multivariate normality assumption and strategies for identifying sensitivity parameters for sensitivity analysis or for a fully Bayesian analysis with informative priors. In addition, we propose alternative modeling and sensitivity analysis strategies under a less restrictive assumption for the distribution of the observed response data. We adopt the deviance information criterion for model comparison and perform a simulation study to evaluate the performances of the different modeling approaches. We also apply the methods to a longitudinal clinical trial. Problems caused by baseline covariates with time-invariant coefficients are investigated and an alternative identifying restriction based on residuals is proposed as a solution.
Multiparameter Symbolic Sensitivity Analysis Enhanced by Nullor Model and Modified Coates Flow Graph
Irina Asenova
2013-01-01
In symbolic sensitivity analysis, the number of additionally generated expressions, and consequently the number of additional arithmetic operations, plays a very important role. The main drawback of some methods based on the adjoint graph or on the two-graph technique, i.e. the necessity of analyzing the corresponding graph multiple times, is avoided. Advantages of the suggested method are that matrix inversion is not required and the Coates graph is significantly simplified. The simplifications introduced in this paper lead to a significant reduction of the final symbolic expressions without loss of accuracy. This simplification method can be considered as SBG-type and has an important impact on symbolic analysis. A special software tool called "HoneySen" has been developed to implement the suggested method. In the paper, it is shown that the presented method is more effective than the transimpedance method, taking the number of arithmetic operations and circuit insight into consideration. Comparison results for multiparameter sensitivity calculations of the voltage transfer function for a fourth-order low-pass filter and a second-order high-pass filter are presented.
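For readers new to symbolic sensitivity, the quantity tools like the one above generate is the classical relative sensitivity S_p^H = (p/H) dH/dp of a network function with respect to a parameter. A minimal SymPy sketch on a first-order RC low-pass (deliberately far simpler than the fourth-order filter of the paper):

```python
import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)
H = 1 / (1 + s*R*C)              # first-order RC low-pass transfer function

def sensitivity(H, p):
    """Classical relative (Bode) sensitivity S_p^H = (p/H) * dH/dp."""
    return sp.simplify(p / H * sp.diff(H, p))

SR = sensitivity(H, R)           # equals -s*R*C/(1 + s*R*C)
```

Expressions like SR are exactly what symbolic sensitivity tools trade in; the method in the paper is about reducing how many such expressions, and how many arithmetic operations, must be generated for larger circuits.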
Sensitivity analysis of the boundary layer height on idealised cities (model study)
Schayes, G. [Univ. of Louvain, Louvain-la-Neuve (Belgium); Grossi, P. [Joint Research Center, Ispra (Italy)
1997-10-01
The behaviour of the typical diurnal variation of the atmospheric boundary layer (ABL) over cities is a complex function of numerous environmental parameters. Two types of geographical situations have been retained: (i) an inland city surrounded only by uniform fields, and (ii) a coastal city, influenced by the sea/land breeze effect. We have used the three-dimensional Thermal Vorticity-mode Mesoscale (TVM) model developed jointly by the UCL (Belgium) and JRC (Italy). In this study it has been used in 2-D mode, allowing us to perform many sensitivity runs. This implies that a kind of infinitely wide city has effectively been simulated, but this does not affect the conclusions for the ABL height. The sensitivity study has been performed for two turbulence closure schemes, for various assumptions for the ABL height definition in the model, and for a selected parameter, the soil water content.
Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen
2011-01-01
Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter … uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based … The outcome of this study contributes to a better understanding of uncertainty in WWTPs, and explicitly demonstrates the significance of secondary settling processes that are crucial elements of model prediction under dry and wet-weather loading conditions.
2015-10-28
Considerable research has been conducted on the topic of decision-aiding methods such as Multi-Criteria and Multi-Objective Decision Analysis to … the best compromise solution amongst multiple or infinite possibilities. This is generally known as Multi-Objective Decision Making (MODM). For the … from those of the program manager, resource sponsor, or even the user. This research focuses on the use of recursive sensitivity analysis to mitigate …
Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David
2013-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
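The abstract's point about coherency of the imputation model can be reproduced in a toy simulation: imputing missing biomarker values from the marginal distribution (ignoring case/control status, i.e. a mis-specified imputation model) dilutes the signal and pulls the pooled AUC toward 0.5. Sample sizes, the effect size and m = 20 imputations below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC."""
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Biomarker shifted by 1 SD in cases; 20% of values missing at random
labels = rng.integers(0, 2, 500)
x = rng.normal(labels * 1.0, 1.0)
miss = rng.random(500) < 0.2

aucs = []
for _ in range(20):                          # multiple imputation, m = 20
    xi = x.copy()
    # Mis-specified imputation: draw from the marginal fit, ignoring labels
    xi[miss] = rng.normal(xi[~miss].mean(), xi[~miss].std(), miss.sum())
    aucs.append(auc(xi, labels))
auc_mi = np.mean(aucs)                       # pooled point estimate
```

Because the imputed values are independent of the outcome, every case-control pair involving an imputed value contributes no discrimination, so auc_mi sits below the complete-data AUC.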
Vanthoor, B.H.E.; Henten, van E.J.; Stanghellini, C.; Visser, de P.H.B.
2011-01-01
Greenhouse design is an optimisation problem that might be solved by a model-based greenhouse design method. A sensitivity analysis of a combined greenhouse climate-crop yield model of tomato was done to identify the parameters, i.e. greenhouse design parameters, outdoor climate and climate set-poin
Drouet, J.-L., E-mail: Jean-Louis.Drouet@grignon.inra.fr [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Capian, N. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Fiorelli, J.-L. [INRA, UR 0055 Agro-Systemes Territoires Ressources (ASTER), F-88500 Mirecourt (France); Blanfort, V. [INRA, UR 0874 Unite de Recherche sur l'Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France); CIRAD, Systemes d'Elevage, F-97387 Kourou (France); Capitaine, M. [ENITA, Agronomie et Fertilite Organique des Sols (AFOS), F-63370 Lempdes (France); Duretz, S.; Gabrielle, B. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Martin, R.; Lardy, R. [INRA, UR 0874 Unite de Recherche sur l'Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France); Cellier, P. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), F-78850 Thiverval-Grignon (France); Soussana, J.-F. [INRA, UR 0874 Unite de Recherche sur l'Ecosysteme Prairial (UREP), F-63100 Clermont-Ferrand (France)
2011-11-15
Modelling complex systems such as farms often requires quantification of a large number of input factors. Sensitivity analyses are useful to reduce the number of input factors that are required to be measured or estimated accurately. Three methods of sensitivity analysis (the Morris method, the rank regression and correlation method and the Extended Fourier Amplitude Sensitivity Test method) were compared in the case of the CERES-EGC model applied to crops of a dairy farm. The qualitative Morris method provided a screening of the input factors. The two other quantitative methods were used to investigate more thoroughly the effects of input factors on output variables. Despite differences in terms of concepts and assumptions, the three methods provided similar results. Among the 44 factors under study, N2O emissions were mainly sensitive to the fraction of N2O emitted during denitrification, the maximum rate of nitrification, the soil bulk density and the cropland area. Highlights: Three methods of sensitivity analysis were compared in the case of a soil-crop model. The qualitative Morris method provided a screening of the input factors. The quantitative EFAST method provided a thorough analysis of the input factors. The three methods provided similar results regarding sensitivity of N2O emissions. N2O emissions were mainly sensitive to a few, especially four, input factors. Three methods of sensitivity analysis were compared to analyse their efficiency in assessing the sensitivity of a complex soil-crop model to its input factors.
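For reference, the qualitative Morris method mentioned above averages "elementary effects" from one-at-a-time perturbations: a large mean |EE| flags an influential factor, and a large standard deviation flags nonlinearity or interactions. A compact sketch on a made-up three-input model (not CERES-EGC; all values illustrative):

```python
import numpy as np

def morris_screening(f, d, r, delta=0.25, rng=None):
    """Morris elementary-effects screening on [0, 1]^d.

    Returns mu* (mean absolute elementary effect, overall influence) and
    sigma (its standard deviation, nonlinearity/interactions) per input,
    from r random one-at-a-time perturbations."""
    rng = rng or np.random.default_rng(0)
    ee = np.empty((r, d))
    for k in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        for j in range(d):
            xp = x.copy()
            xp[j] += delta
            ee[k, j] = (f(xp) - f(x)) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: input 0 strong and linear, input 1 nonlinear, input 2 inert
f = lambda x: 5 * x[0] + x[1] ** 2
mu_star, sigma = morris_screening(f, d=3, r=50)
```

Here mu_star ranks input 0 above input 1 above the inert input 2, while sigma is essentially zero for the linear input and positive for the nonlinear one, which is how the method screens factors before a more expensive quantitative analysis.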
Tsujita-Inoue, Kyoko; Hirota, Morihiko; Ashikaga, Takao; Atobe, Tomomi; Kouzuki, Hirokazu; Aiba, Setsuya
2014-06-01
The sensitizing potential of chemicals is usually identified and characterized using in vivo methods such as the murine local lymph node assay (LLNA). Due to regulatory constraints and ethical concerns, alternatives to animal testing are needed to predict skin sensitization potential of chemicals. For this purpose, combined evaluation using multiple in vitro and in silico parameters that reflect different aspects of the sensitization process seems promising. We previously reported that LLNA thresholds could be well predicted by using an artificial neural network (ANN) model, designated iSENS ver.1 (integrating in vitro sensitization tests version 1), to analyze data obtained from two in vitro tests: the human Cell Line Activation Test (h-CLAT) and the SH test. Here, we present a more advanced ANN model, iSENS ver.2, which additionally utilizes the results of antioxidant response element (ARE) assay and the octanol-water partition coefficient (LogP, reflecting lipid solubility and skin absorption). We found a good correlation between predicted LLNA thresholds calculated by iSENS ver.2 and reported values. The predictive performance of iSENS ver.2 was superior to that of iSENS ver.1. We conclude that ANN analysis of data from multiple in vitro assays is a useful approach for risk assessment of chemicals for skin sensitization.
A sensitivity analysis of a radiological assessment model for Arctic waters
Nielsen, S.P.
1998-01-01
A model based on compartment analysis has been developed to simulate the dispersion of radionuclides in Arctic waters for an assessment of doses to man. The model predicts concentrations of radionuclides in the marine environment and doses to man from a range of exposure pathways. A parameter sen… scavenging, water-sediment interaction, biological uptake, ice transport and fish migration. Two independent evaluations of the release of radioactivity from dumped nuclear waste in the Kara Sea have been used as source terms for the dose calculations.
Petelet, M
2008-07-01
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of input-data uncertainty on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology requires some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Petelet, M
2007-10-15
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of input-data uncertainty on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology requires some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
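The non-intrusive surrogate idea here (fit polynomials to an ensemble of runs, then read statistics off the coefficients) can be sketched in one dimension with Legendre polynomials, which are orthogonal for a uniform input. The one-parameter stand-in model below is purely illustrative; the actual plume model has six uncertain inputs:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Stand-in for an expensive model with one uniform input xi on [-1, 1]
model = lambda xi: np.exp(0.5 * xi)

xi = np.linspace(-1, 1, 200)              # ensemble of input samples
y = model(xi)                             # "ensemble calculations"
coef = L.legfit(xi, y, deg=6)             # least-squares PC coefficients

# Statistics follow from Legendre orthogonality, with no further model runs:
# E[P_k^2] = 1/(2k+1) under the uniform density on [-1, 1]
mean = coef[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coef[1:], start=1))
```

The surrogate itself is evaluated cheaply with `L.legval(new_xi, coef)`, and for this stand-in the recovered mean and variance match the analytic values 2·sinh(0.5) ≈ 1.042 and ≈ 0.089.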
Sensitivity Analysis of Wavelet Neural Network Model for Short-Term Traffic Volume Prediction
Jinxing Shen
2013-01-01
In order to achieve a more accurate and robust traffic volume prediction model, the sensitivity of the wavelet neural network model (WNNM) is analyzed in this study. Based on real loop detector data provided by the traffic police detachment of Maanshan, the WNNM is discussed with different numbers of input neurons, different numbers of hidden neurons, and traffic volumes for different time intervals. The test results show that the performance of the WNNM depends heavily on network parameters and the time interval of the traffic volume. The WNNM with 4 input neurons and 6 hidden neurons is the optimal predictor, with greater accuracy, stability, and adaptability, and much better predictions are achieved when the time interval of the traffic volume is 15 minutes. Finally, the optimized WNNM is compared with the widely used back-propagation neural network (BPNN). The comparison results indicate that the WNNM produces much lower values of MAE, MAPE, and VAPE than the BPNN, which shows that the WNNM performs better on short-term traffic volume prediction.
Dynamic Air-Route Adjustments - Model, Algorithm, and Sensitivity Analysis
GENG Rui; CHENG Peng; CUI Deguang
2009-01-01
Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments, instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for these coefficients. The computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.
Anonymous
2007-01-01
With the fast growth of the Chinese economy, more and more capital will be invested in environmental projects. How to select environmental investment projects (alternatives) to obtain the best environmental quality and economic benefits is an important problem for decision makers. The purpose of this paper is to develop a decision-making model to rank a finite number of alternatives with several, and sometimes conflicting, criteria. A model for ranking the projects of municipal sewage treatment plants is proposed by using experts' information and the data of real projects, and the ranking result is given based on the PROMETHEE method. Furthermore, by means of the concept of weight stability intervals (WSI), the sensitivity of the ranking results to the size of criteria values and to changes in the weights of criteria is discussed. The results show that some criteria, such as "proportion of benefit to project cost", influence the ranking of alternatives very strongly while others do not, and the influence comes not only from the value of the criterion but also from changes in its weight. Criteria such as "proportion of benefit to project cost" are therefore key criteria for ranking the projects, and decision makers must treat them with caution.
ZHENG Wei; SHI Honghua; SONG Xikun; HUANG Dongren; HU Long
2012-01-01
Prediction and sensitivity models, to elucidate the response of phytoplankton biomass to environmental factors in Quanzhou Bay, Fujian, China, were developed using a back propagation (BP) network. The environmental indicators of coastal phytoplankton biomass were determined, and monitoring data for the bay from 2008 was used to train, test and build a three-layer BP artificial neural network with multiple inputs and a single output. Ten water quality parameters were used to forecast phytoplankton biomass (measured as chlorophyll-a concentration). The correlation coefficient between biomass values predicted by the model and those observed was 0.964, whilst the average relative error of the network was -3.46% and the average absolute error was 10.53%. The model thus has a high level of accuracy and is suitable for analysis of the influence of aquatic environmental factors on phytoplankton biomass. A global sensitivity analysis was performed to determine the influence of different environmental indicators on phytoplankton biomass. Indicators were classified according to the sensitivity of the response and its degree of risk. The results indicate that the parameters most relevant to phytoplankton biomass are estuary-related and include pH, sea surface temperature, sea surface salinity, chemical oxygen demand and ammonium.
Costa, Vander Menengoy da; Rosa, Arlei Lucas de Sousa [Department of Electrical Engineering, Federal University of Juiz de Fora, Campus Universitario - Bairro Martelos, 36036-330 Juiz de Fora - MG (Brazil); Guedes, Magda Rocha [Federal Center of Technologic Education of Minas Gerais - CEFET, Rua Jose Peres, 558 36700-000 Leopoldina - MG (Brazil); Cantarino, Marcelo [Centrais Eletricas Brasileiras S.A - ELETROBRAS, Av. Rio Branco, 53, Centro, 14 andar, 20090-004 Rio de Janeiro - RJ (Brazil)
2010-05-15
This paper presents new mathematical models to compute the loading margin, as well as to perform the sensitivity analysis of loading margin with respect to different electric system parameters. The innovative idea consists of evaluating the performance of these methods when the power flow equations are expressed with the voltages in rectangular coordinates. The objective is to establish a comparative process with the conventional models expressed in terms of power flow equations with the voltages in polar coordinates. IEEE test system and a South-Southeastern Brazilian network are used in the simulations. (author)
Ramin, Elham; Flores Alsina, Xavier; Sin, Gürkan;
2014-01-01
This study investigates the sensitivity of wastewater treatment plant (WWTP) model performance to the selection of one-dimensional secondary settling tanks (1-D SST) models with first-order and second-order mathematical structures. We performed a global sensitivity analysis (GSA) on the benchmark......, the settling parameters were found to be as influential as the biokinetic parameters on the uncertainty of WWTP model predictions, particularly for biogas production and treated water quality. However, the sensitivity measures were found to be dependent on the 1-D SST models selected. Accordingly, we suggest...... have, however, no physical meaning, and might additionally obtain unrealistic values. In contrast, using second-order SST models, the focus of calibration should be on providing measured values for the hindered settling parameters. This approach is in close agreement with the recommendations made...
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
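The two-level structure exploited above can be sketched as a nested Monte Carlo loop: an outer loop over sampled parameter values and an inner loop over simulated patients, with an ANOVA-style correction that removes patient-sampling noise from the estimated parameter-uncertainty variance. The model, noise level, and sample sizes below are toy stand-ins, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

def patient_level_run(theta, n_patients, rng):
    # Toy stand-in for a patient-level simulation: each patient's net benefit
    # is theta plus patient-level noise with standard deviation 2.0.
    return (theta + rng.normal(0.0, 2.0, n_patients)).mean()

K, n = 200, 100                                   # outer and inner sample sizes
thetas = rng.normal(10.0, 1.0, K)                 # parameter-uncertainty draws
run_means = np.array([patient_level_run(t, n, rng) for t in thetas])

overall_mean = run_means.mean()                   # PSA estimate of expected output
total_var = run_means.var(ddof=1)
# ANOVA-style correction: the variance of the run means mixes parameter
# uncertainty with patient-sampling noise of variance sigma^2 / n; subtract it.
param_var = total_var - 2.0**2 / n
```

The correction term shrinks as the inner sample size grows, which is the basis for trading off runs against patients per run when choosing optimal sample sizes.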
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
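The surrogate-plus-ANOVA workflow can be sketched in miniature: fit a low-order polynomial chaos surrogate to an ensemble of model runs by least squares, then read variance contributions off the orthogonal terms. The two-input "plume model" and its coefficients below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def plume_model(q, g):
    # Toy stand-in for the plume model's trap height as a function of a scaled
    # flow rate q and gas-to-oil ratio g; coefficients are illustrative only.
    return 100.0 + 30.0 * q + 5.0 * g + 4.0 * q * g

# Ensemble of model runs at inputs sampled uniformly on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.array([plume_model(q, g) for q, g in X])

# Least-squares fit of a degree-1 PC surrogate with an interaction term:
# y ~ c0 + c1*q + c2*g + c3*q*g (these basis terms are orthogonal for
# independent uniform inputs).
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
c = np.linalg.lstsq(A, y, rcond=None)[0]

# Analysis of variance from the surrogate: for q, g ~ U(-1, 1),
# Var(c1*q) = c1^2/3, Var(c2*g) = c2^2/3, Var(c3*q*g) = c3^2/9.
v_q, v_g, v_qg = c[1]**2 / 3, c[2]**2 / 3, c[3]**2 / 9
share_q = v_q / (v_q + v_g + v_qg)    # flow rate's share of output variance
```

With the large flow-rate coefficient chosen here, the flow rate dominates the variance budget, mirroring the high-flow-rate-uncertainty experiment described above.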
K. Zhang; Y.S. Wu; J.E. Houseworth
2006-03-21
The unsaturated fractured volcanic deposits at Yucca Mountain have been intensively investigated as a possible repository site for storing high-level radioactive waste. Field studies at the site have revealed that there exist large variabilities in hydrological parameters over the spatial domain of the mountain. This paper reports on a systematic analysis of hydrological parameters using the site-scale 3-D unsaturated zone (UZ) flow model. The objectives of the sensitivity analyses are to evaluate the effects of uncertainties in hydrologic parameters on modeled UZ flow and contaminant transport results. Sensitivity analyses are carried out relative to fracture and matrix permeability and capillary strength (van Genuchten α), through variation of these parameter values by one standard deviation from the base-case values. The parameter variation results in eight parameter sets. Modeling results for the eight UZ flow sensitivity cases have been compared with field observed data and simulation results from the base-case model. The effects of parameter uncertainties on the flow fields are discussed and evaluated through comparison of results for flow and transport. In general, this study shows that uncertainties in matrix parameters cause larger uncertainty in simulated moisture flux than corresponding uncertainties in fracture properties for unsaturated flow through heterogeneous fractured rock.
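The eight sensitivity cases follow a one-at-a-time design: each parameter group is shifted by plus or minus one standard deviation while the others stay at base-case values. A sketch with placeholder names and values (not the calibrated Yucca Mountain parameters):

```python
# One-at-a-time sensitivity cases: perturb each (log-)parameter by +/- one
# standard deviation from its base-case value. Names and values here are
# illustrative placeholders only.
base = {"frac_perm": -11.0, "matrix_perm": -15.0,   # log10 permeability
        "frac_alpha": -3.0, "matrix_alpha": -4.5}   # log10 van Genuchten alpha
sigma = {"frac_perm": 0.8, "matrix_perm": 1.2,
         "frac_alpha": 0.5, "matrix_alpha": 0.6}

cases = []
for name in base:
    for sign in (+1, -1):
        case = dict(base)                 # all other parameters at base case
        case[name] = base[name] + sign * sigma[name]
        cases.append((f"{name}{'+' if sign > 0 else '-'}1sd", case))
```

Four parameter groups times two directions yields the eight parameter sets the study compares against the base-case model.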
Xu, Pengpeng; Huang, Helai; Dong, Ni; Abdel-Aty, Mohamed
2014-09-01
A wide array of spatial units has been explored in current regional safety analysis. Since traffic crashes exhibit extreme spatiotemporal heterogeneity, which has rarely been a consideration in partitioning these zoning systems, research based on these areal units may be subject to the modifiable areal unit problem (MAUP). This study conducted a sensitivity analysis to quantitatively investigate the MAUP effect in the context of regional safety modeling. The emerging regionalization method REDCAP (regionalization with dynamically constrained agglomerative clustering and partitioning) was employed to aggregate 738 traffic analysis zones in the county of Hillsborough into 14 zoning schemes at an incremental step size of 50 zones, based on spatial homogeneity of crash risk. At each level of aggregation, a Bayesian Poisson-lognormal model and a Bayesian spatial model were calibrated to explain observed variations in total/severe crash counts given a number of zone-level factors. Results revealed that as the number of zones increases, the spatial autocorrelation of crash data increases. The Bayesian spatial model outperforms the Bayesian Poisson-lognormal model in accounting for spatial autocorrelation effects, unbiased parameter estimates, and model performance, especially at more disaggregated levels. Zoning schemes with more zones tend to have a larger number of significant variables, more stable coefficient estimates and smaller standard errors, but worse model performance. The variables of population density and median household income show consistently significant effects on crash risk and are robust to variation in data aggregation. The MAUP effects may be significantly reduced if at least about 50% of the original number of zones (350 or larger) is maintained. The present study highlights MAUP, which is generally ignored by transportation safety analysts, and provides insights into the nature of parameter sensitivity to
Yan Gao
2014-01-01
The increasing marine activities in the Arctic have brought growing interest in the study of ship-iceberg collisions. The purpose of this paper is to study the effect of iceberg geometry on the collision process. To assess this sensitivity, five different iceberg geometry models and two iceberg material models are adopted in the analysis. FEM numerical simulation is used to predict the collision scenario and the related responses. The simulation results, including energy dissipation and impact force, are investigated and compared. It is shown that the collision process and energy dissipation are more sensitive to the local shape of the iceberg than to other factors when the elastic-plastic iceberg material model is applied. The blunt iceberg models act rigidly while the sharp ones crush easily during the simulation process. With respect to the crushable foam iceberg material model, the iceberg geometry has relatively small influence on the collision process. The spherical iceberg model behaves the most rigidly for both iceberg material models and should receive the most attention in ice-resistant design for ships.
Sensitivity and uncertainty in flood inundation modelling – concept of an analysis framework
T. Weichel
2007-01-01
After the extreme flood event of the Elbe in 2002, the legal definition of flood risk areas and their simulation became more important in Germany. This paper describes the concept of an analysis framework to improve the localisation and the duration of validity of flood inundation maps. The two-dimensional finite difference model TrimR2D is used and linked to a Monte Carlo routine for parameter sampling as well as to selected performance measures. The purpose is to investigate the impact of different spatial resolutions and the influence of changing land uses on the simulation of flood inundation areas. The technical assembly of the framework has been realised and, besides the model calibration, first tests with different parameter ranges have been done. Preliminary results show good correlations with observed data, but the investigation of shifting land uses shows only small changes in the flood extent.
The power of sensitivity analysis and thoughts on models with large numbers of parameters
Hlavacek, William [Los Alamos National Laboratory
2008-01-01
The regulatory systems that allow cells to adapt to their environments are exceedingly complex, and although we know a great deal about the intricate mechanistic details of many of these systems, our ability to make accurate predictions about their system-level behaviors is severely limited. We would like to make such predictions for a number of reasons. How can we reverse dysfunctional molecular changes of these systems that cause disease? More generally, how can we harness and direct cellular activities for beneficial purposes? Our ability to make accurate predictions about a system is also a measure of our fundamental understanding of that system. As evidenced by our mastery of technological systems, a useful understanding of a complex system can often be obtained through the development and analysis of a mathematical model, but predictive modeling of cellular regulatory systems, which necessarily relies on quantitative experimentation, is still in its infancy. There is much that we need to learn before modeling for practical applications becomes routine. In particular, we need to address a number of issues surrounding the large number of parameters that are typically found in a model for a cellular regulatory system.
Chen, XinJian
2012-06-01
This paper presents a sensitivity study of simulated availability of low salinity habitats by a hydrodynamic model for the Manatee River estuary located in the southwest portion of the Florida peninsula. The purpose of the modeling study was to establish a regulatory minimum freshwater flow rate required to prevent the estuarine ecosystem from significant harm. The model used in the study was a multi-block model that dynamically couples a three-dimensional (3D) hydrodynamic model with a laterally averaged (2DV) hydrodynamic model. The model was calibrated and verified against measured real-time data of surface elevation and salinity at five stations during March 2005-July 2006. The calibrated model was then used to conduct a series of scenario runs to investigate effects of the flow reduction on salinity distributions in the Manatee River estuary. Based on simulated salinity distribution in the estuary, water volumes, bottom areas and shoreline lengths for salinity less than certain predefined values were calculated and analyzed to help establish the minimum freshwater flow rate for the estuarine system. The sensitivity analysis conducted during the modeling study for the Manatee River estuary examined effects of the bottom roughness, ambient vertical eddy viscosity/diffusivity, horizontal eddy viscosity/diffusivity, and ungauged flow on the model results and identified the relative importance of these model parameters (input data) to the outcome of the availability of low salinity habitats. It is found that the ambient vertical eddy viscosity/diffusivity is the most influential factor controlling the model outcome, while the horizontal eddy viscosity/diffusivity is the least influential one.
Chen, Jiajia; Pitchai, Krishnamoorthy; Birla, Sohan; Negahban, Mehrdad; Jones, David; Subbiah, Jeyamkondan
2014-10-01
A 3-dimensional finite-element model coupling electromagnetics and heat and mass transfer was developed to understand the interactions between the microwaves and fresh mashed potato in a 500 mL tray. The model was validated by performing heating of mashed potato from 25 °C on a rotating turntable in a microwave oven, rated at 1200 W, for 3 min. The simulated spatial temperature profiles on the top and bottom layer of the mashed potato showed similar hot and cold spots when compared to the thermal images acquired by an infrared camera. Transient temperature profiles at 6 locations collected by fiber-optic sensors showed good agreement with predicted results, with the root mean square error ranging from 1.6 to 11.7 °C. The predicted total moisture loss matched well with the observed result. Several input parameters, such as the evaporation rate constant, the intrinsic permeability of water and gas, and the diffusion coefficient of water and gas, are not readily available for mashed potato, and they cannot be easily measured experimentally. Reported values for raw potato were used as baseline values. The sensitivity of the temperature profiles and the total moisture loss to these input parameters was evaluated by varying each parameter between 10% and 1000% of its baseline value. The sensitivity analysis showed that the gas diffusion coefficient, intrinsic water permeability, and the evaporation rate constant greatly influenced the predicted temperature and total moisture loss, while the intrinsic gas permeability and the water diffusion coefficient had little influence. This model can be used by food product developers to understand microwave heating of food products spatially and temporally. This tool will allow food product developers to design food package systems that would heat more uniformly in various microwave ovens. The sensitivity analysis of this study will help us determine the most significant parameters that need to be measured accurately for reliable
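The screening strategy used above, scaling each uncertain input to 10% and 1000% of its baseline, can be sketched as a simple sweep. The response function below is a toy placeholder, not the coupled electromagnetics/heat-and-mass-transfer model:

```python
def moisture_loss(evap_rate_k, water_perm):
    # Toy response standing in for the coupled model: moisture loss rises
    # with the evaporation rate constant and (weakly) with water permeability.
    return 5.0 * evap_rate_k**0.5 + 0.1 * water_perm**0.2

baseline = {"evap_rate_k": 1.0, "water_perm": 1.0}   # hypothetical baselines

def scaled_outputs(param, factors=(0.1, 1.0, 10.0)):
    # Evaluate the model with one parameter at 10%, 100% and 1000% of baseline,
    # holding the other parameters fixed.
    outputs = []
    for f in factors:
        args = dict(baseline)
        args[param] *= f
        outputs.append(moisture_loss(**args))
    return outputs

sweep_evap = scaled_outputs("evap_rate_k")
sweep_perm = scaled_outputs("water_perm")
```

The spread of outputs across the sweep is the screening signal: a parameter whose 10%-to-1000% sweep barely moves the output, like the weak permeability term here, would be classed as having little influence.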
Daniele Cavalli
2016-09-01
Two features distinguishing soil organic matter simulation models are the type of kinetics used to calculate pool decomposition rates, and the algorithm used to handle the effects of nitrogen (N) shortage on carbon (C) decomposition. Compared to widely used first-order kinetics, Monod kinetics more realistically represent organic matter decomposition, because they relate decomposition to both substrate and decomposer size. Most models impose a fixed C to N ratio for microbial biomass. When the N required by microbial biomass to decompose a given amount of substrate-C is larger than soil available N, carbon decomposition rates are limited proportionally to the N deficit (N inhibition hypothesis). Alternatively, C-overflow was proposed as a way of getting rid of excess C, by allocating it to a storage pool of polysaccharides. We built six models to compare the combinations of three decomposition kinetics (first-order, Monod, and reverse Monod) and two ways to simulate the effect of N shortage on C decomposition (N inhibition and C-overflow). We conducted sensitivity analysis to identify the model parameters that most affected CO2 emissions and soil mineral N during a simulated 189-day laboratory incubation assuming constant water content and temperature. We evaluated the sensitivity of model outputs at different stages of organic matter decomposition in a soil amended with three inputs of increasing C to N ratio: liquid manure, solid manure, and low-N crop residue. Only a few model parameters and their interactions were responsible for consistent variations of CO2 and soil mineral N. These parameters were mostly related to microbial biomass and to the partitioning of applied C among input pools, as well as their decomposition constants. In addition, in models with Monod kinetics, CO2 was also sensitive to variation of the half-saturation constants. C-overflow enhanced pool decomposition compared to the N inhibition hypothesis when N shortage occurred. Accumulated C in the
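The contrast between the two kinetics can be made concrete: a first-order rate scales with substrate alone, while a Monod rate also scales with decomposer biomass and saturates at high substrate. Parameter names and values below are generic illustrations, not the study's calibrated values:

```python
def first_order_rate(C, k):
    # First-order kinetics: decomposition rate depends on substrate C only.
    return k * C

def monod_rate(C, B, mu_max, Ks):
    # Monod kinetics: rate depends on both substrate C and decomposer biomass
    # B, saturating at mu_max * B for large C (Ks is the half-saturation
    # constant the abstract's sensitivity result refers to).
    return mu_max * B * C / (Ks + C)

r_low = monod_rate(C=1.0, B=10.0, mu_max=0.5, Ks=50.0)      # substrate-limited
r_high = monod_rate(C=1000.0, B=10.0, mu_max=0.5, Ks=50.0)  # near saturation
```

At high substrate the first-order rate grows without bound while the Monod rate is capped by decomposer size, which is why Monod models respond to both substrate amendments and microbial biomass parameters.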
Mahmoudi Hoda
2014-09-01
A reverse supply chain is configured by a sequence of elements forming a continuous process to treat return-products until they are properly recovered or disposed. The activities in a reverse supply chain include collection, cleaning, disassembly, test and sorting, storage, transport, and recovery operations. This paper presents a mathematical programming model with the objective of minimizing the total costs of a reverse supply chain, including transportation, fixed opening, operation, maintenance and remanufacturing costs of centers. The proposed model considers the design of a multi-layer, multi-product reverse supply chain that consists of returning, disassembly, processing, recycling, remanufacturing, materials and distribution centers. This integer linear programming model is solved using Lingo 9 software and the results are reported. Finally, a sensitivity analysis of the proposed model is also presented.
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
S. TATTARI
2008-12-01
Modeling tools are needed to assess (i) the amounts of loading from agricultural sources to water bodies as well as (ii) the alternative management options in varying climatic conditions. These days, the implementation of the Water Framework Directive (WFD) has placed entirely new requirements on modeling approaches. Physically based models are commonly not operational and thus their usability is restricted to a few selected catchments. But the rewarding feature of these process-based models is the option to study the effect of protection measures on a catchment scale and, up to a certain point, the possibility to upscale the results. In this study, the parameterization of the SWAT model was developed in terms of discharge dynamics and nutrient loads, and a sensitivity analysis regarding discharge and sediment concentration was made. The SWAT modeling exercise was carried out for a 2nd order catchment (Yläneenjoki, 233 km²) of the Eurajoki river basin in southwestern Finland. The Yläneenjoki catchment has been intensively monitored during the last 14 years. Hence, there was enough background information available for both parameter setup and calibration. In addition to load estimates, SWAT also offers the possibility to assess the effects of various agricultural management actions like fertilization, tillage practices, choice of cultivated plants, buffer strips, sedimentation ponds and constructed wetlands (CWs) on loading. Moreover, information on local agricultural practices and the implemented and planned protective measures was readily available thanks to aware farmers and active authorities. Here, we studied how CWs can reduce the nutrient load at the outlet of the Yläneenjoki river basin. The results suggested that the sensitivity analysis and autocalibration tools incorporated in the model are useful in pointing out the most influential parameters, and that flow dynamics and annual loading values can be modeled with reasonable
Yuan, Hao; Sin, Gürkan
2011-01-01
filtration coefficients, while deposition is more sensitive to filtration coefficients. More experimental measurements at these moments are suggested to determine dispersion coefficients more accurately. More measurements of the steady-state effluent concentration or deposition are suggested to determine...
Chen, Libin; Yang, Zhifeng; Liu, Haifei
2017-06-01
Inter-basin water transfers containing a great deal of nitrogen are great threats to human health, biodiversity, and air and water quality in the recipient area. Danjiangkou Reservoir, the source reservoir for China's South-to-North Water Diversion Middle Route Project, suffers from total nitrogen pollution and threatens the water transfer to a number of metropolises including the capital, Beijing. To locate the main source of nitrogen pollution into the reservoir, especially near the Taocha canal head, where the intake of water transfer begins, we constructed a 3-D water quality model. We then used an inflow sensitivity analysis method to analyze the significance of inflows from each tributary that may contribute to the total nitrogen pollution and affect water quality. The results indicated that the Han River was the most significant river with a sensitivity index of 0.340, followed by the Dan River with a sensitivity index of 0.089, while the Guanshan River and the Lang River were not significant, with the sensitivity indices of 0.002 and 0.001, respectively. This result implies that the concentration and amount of nitrogen inflow outweighs the geographical position of the tributary for sources of total nitrogen pollution to the Taocha canal head of the Danjiangkou Reservoir.
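The inflow sensitivity analysis above can be illustrated as a normalised response: perturb one tributary's inflow, record the change in the water-quality output, and divide the relative change in output by the relative change in input. The concentrations below are invented so that the toy example reproduces the ordering reported in the abstract; they are not outputs of the 3-D model:

```python
def sensitivity_index(base_output, perturbed_output, perturbation=0.1):
    # Normalised sensitivity: relative change in output per relative change
    # applied to one tributary's inflow (an illustrative definition; the
    # paper's exact formulation may differ).
    return abs(perturbed_output - base_output) / (abs(base_output) * perturbation)

# Hypothetical total-nitrogen concentrations (mg/L) at the canal head after a
# 10% reduction of each tributary's load, chosen for illustration only.
base_tn = 1.20
perturbed = {"Han River": 1.159, "Dan River": 1.189,
             "Guanshan River": 1.1998, "Lang River": 1.1999}
indices = {river: sensitivity_index(base_tn, tn) for river, tn in perturbed.items()}
ranked = sorted(indices, key=indices.get, reverse=True)
```

Ranking tributaries by such an index is what lets the study conclude that the concentration and amount of nitrogen inflow outweighs a tributary's geographical position.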
无
2002-01-01
This paper presents an error modeling methodology that enables the tolerance design, assembly and kinematic calibration of a class of 3-DOF parallel kinematic machines with parallelogram struts to be integrated into a unified framework. The error mapping function is formulated to identify the source errors affecting the uncompensable pose error. The sensitivity analysis in the sense of statistics is also carried out to investigate the influences of source errors on the pose accuracy. An assembly process that can effectively minimize the uncompensable pose error is proposed as one of the results of this investigation.
Wong, J. S.; Freer, J.; Bates, P. D.; Sear, D. A.
2012-04-01
Recent research into modelling floodplain inundation processes has primarily concentrated on the simulation of inundation flow without considering the influences of channel morphology and sediment delivery from upstream. River channels are often represented by simplified geometry and implicitly assumed to remain unchanged. However, during and after flood episodes the river bed elevation can change quickly and in some cases drastically. Despite this, the effect of channel geometry and topographic complexity on model results has been largely unexplored. To address this issue, the impact of channel cross-section geometry and channel long-profile variability on flood inundation extent is examined using a simplified 1D-2D hydraulic model (LISFLOOD-FP) of the Cockermouth floods of November 2009 within an uncertainty analysis framework. The Cockermouth region provides a useful test site for such a study because of the availability of channel and floodplain data, the collection of post-event water and wrack marks, and the presence of pre- and post-event morphological survey data. More importantly, in some areas the river has undergone significant course change, with additional deposition of stones and debris on the floodplain. The use of relatively simple formulations of critical velocities in the initiation-of-motion formula enables the construction of a series of hypothetical bedform scenarios among cross-sections. These scenarios can be used as input to LISFLOOD-FP. Slope gradient, Manning roughness coefficients, grain size characteristics, and critical shear stress will be considered in a Monte Carlo simulation framework. The November 2009 Cockermouth flood is simulated and the results are analysed to quantify the accuracy associated with each bedform scenario and to assess how different channel long-profiles affect the performance of LISFLOOD-FP. The study will further analyse and quantify the variability and uncertainty of flood inundation extent resulting from
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
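The complex-variable approach used for the DYMORE structural sensitivities is the complex-step derivative: evaluate the function at x + ih and take the imaginary part, which yields the derivative without subtractive cancellation. A self-contained sketch on a toy analytic response function (not a DYMORE output):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-20):
    # Complex-step sensitivity: df/dx ~ Im(f(x + i*h)) / h. Because there is
    # no difference of nearly equal values, h can be made tiny and the result
    # is accurate to machine precision.
    return f(complex(x, h)).imag / h

def response(x):
    # Toy response standing in for a structural output; any function written
    # with complex-capable operations works.
    return cmath.exp(x) * cmath.sin(x)

d = complex_step_derivative(response, 0.7)
exact = math.exp(0.7) * (math.sin(0.7) + math.cos(0.7))  # analytic derivative
```

This is also why complex-variable results serve as a reference for verifying adjoint-based sensitivities: both should agree to many digits, unlike finite differences.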
Stamile, Claudio; Kocevar, Gabriel; Cotton, François; Durand-Dubief, Françoise; Hannoun, Salem; Frindel, Carole; Guttmann, Charles R. G.; Rousseau, David; Sappey-Marinier, Dominique
2016-01-01
Diffusion tensor imaging (DTI) is a sensitive tool for the assessment of microstructural alterations in brain white matter (WM). We propose a new processing technique to detect local and global longitudinal changes of diffusivity metrics in homologous regions along WM fiber-bundles. To this end, a reliable and automatic processing pipeline was developed in three steps: 1) co-registration and diffusion metrics computation, 2) tractography, bundle extraction and processing, and 3) longitudinal fiber-bundle analysis. The last step was based on an original Gaussian mixture model providing a fine analysis of fiber-bundle cross-sections, and allowing a sensitive detection of longitudinal changes along fibers. This method was tested on simulated and clinical data. High levels of F-Measure were obtained on simulated data. Experiments on the cortico-spinal tract and inferior fronto-occipital fasciculi of five patients with Multiple Sclerosis (MS) included in a weekly follow-up protocol highlighted the greater sensitivity of this fiber-scale approach to detect small longitudinal alterations. PMID:27224308
Sensitivity analysis of PBL schemes by comparing WRF model and experimental data
Balzarini, A.; Angelini, F.; Ferrero, L.; Moscatelli, M.; Perrone, M. G.; Pirovano, G.; Riva, G. M.; Sangiorgi, G.; Toppetti, A. M.; Gobbi, G. P.; Bolzacchini, E.
2014-09-01
This work discusses the sources of model biases in reconstructing the Planetary Boundary Layer (PBL) height among five commonly used PBL parameterizations. The Weather Research and Forecasting (WRF) Model was applied over the critical area of Northern Italy with 5 km of horizontal resolution, and compared against a wide set of experimental data for February 2008. Three non-local closure PBL schemes (Asymmetrical Convective Model version 2, ACM2; Medium Range Forecast, MRF; Yonsei University, YSU) and two local closure parameterizations (Mellor Yamada Janjic, MYJ; University of Washington Moist Turbulence, UW) were selected for the analysis. Vertical profiles of aerosol number concentrations and Lidar backscatter profiles were collected in the metropolitan area of Milan in order to derive the PBL hourly evolution. Moreover, radio-soundings of Milano Linate airport as well as surface temperature, mixing ratio and wind speed of several meteorological stations were considered too. Results show that all five parameterizations produce similar performances in terms of temperature, mixing ratio and wind speed in the city of Milan, implying some systematic errors in all simulations. However, UW and ACM2 use the same local closure during nighttime conditions, allowing smaller mean biases (MB) of temperature (ACM2 MB = 0.606 K, UW MB = 0.209 K), and wind speed (ACM2 MB = 0.699 m s-1, UW MB = 0.918 m s-1). All schemes have the same variations of the diurnal PBL height, since over predictions of temperature and wind speed are found to cause a general overestimation of mixing during its development in winter. In particular, temperature estimates seem to impact the early evolution of the PBL height, while entrainment fluxes parameterizations have major influence on the afternoon development. MRF, MYJ and ACM2 use the same approach in reconstructing the entrainment process, producing the largest overestimations of PBL height (MB ranges from 85.51-179.10 m). On the contrary, the
Bakopoulou, C.; Bulygina, N.; Butler, A. P.; McIntyre, N. R.
2012-04-01
Land surface models (LSMs) are recognised as important components of Global Circulation Models (GCMs). Simulating exchanges of moisture, carbon and energy between the land surface and the atmosphere in a consistent manner requires physics-based LSMs of high complexity, fine vertical resolution and a large number of parameters that need to be estimated. The "physics" incorporated in such models is generally based on our knowledge of point (or very small) scale hydrological processes. Therefore, while larger GCM grid-scale performance may be the ultimate goal, the ability of the model to simulate point-scale processes is, intuitively, a prerequisite for its reliable use at larger scales. Critical evaluation of model performance and parameter uncertainty at the point scale is therefore a rational starting point for critical evaluation of LSMs, and identification of optimal parameter sets at the point scale is a significant stage of model evaluation at larger scales. The Joint UK Land Environment Simulator (JULES) is a complex LSM used to represent surface exchanges in the UK Met Office's forecast and climate change models. This complexity necessitates a large number of model parameters (108 in total), some of which cannot be measured directly at large (i.e. kilometre) scales. For this reason, a parameter sensitivity analysis is a vital confidence-building process within the framework of every LSM, and a part of the calibration strategy. The problem of JULES parameter estimation and uncertainty at the point scale is addressed, with a view to assessing the accuracy and uncertainty of the default parameter values. The sensitivity of the JULES soil moisture output is examined using parameter response surface analysis. The implemented technique is based on the Regional Sensitivity Analysis (RSA) method, which evaluates the model response surface over a region of parameter space using Monte Carlo sampling. The modified version of RSA
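As a sketch of the RSA procedure described above: sample parameters by Monte Carlo, split the runs into behavioural and non-behavioural sets against an output criterion, and compare the two marginal parameter distributions with a Kolmogorov-Smirnov distance. The "model" here is a hypothetical two-parameter stand-in, not JULES:

```python
import bisect
import random

random.seed(1)

def toy_soil_moisture(k, theta_max):
    # Hypothetical stand-in for a JULES-like output: equilibrium soil
    # moisture from a crude storage model (not the real physics).
    return theta_max * k / (k + 0.5)

target = 0.3  # "observed" soil moisture

samples = []
for _ in range(5000):
    k = random.uniform(0.01, 2.0)
    theta_max = random.uniform(0.1, 0.6)
    err = abs(toy_soil_moisture(k, theta_max) - target)
    samples.append((k, theta_max, err))

# Behavioural = error below a tolerance; non-behavioural = the rest.
behav = [s for s in samples if s[2] < 0.05]
nonbehav = [s for s in samples if s[2] >= 0.05]

def ks_distance(xs, ys):
    # Two-sample Kolmogorov-Smirnov distance between empirical CDFs.
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for g in xs + ys:
        fx = bisect.bisect_right(xs, g) / len(xs)
        fy = bisect.bisect_right(ys, g) / len(ys)
        d = max(d, abs(fx - fy))
    return d

# A large KS distance flags a parameter that the behavioural split is
# sensitive to; a small one flags an insensitive parameter.
for i, name in [(0, "k"), (1, "theta_max")]:
    d = ks_distance([s[i] for s in behav], [s[i] for s in nonbehav])
    print(name, round(d, 3))
```

The same split-and-compare logic underlies the modified RSA variant the abstract refers to; only the behavioural criterion and the model change.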
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/-20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.
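A minimal illustration of the Morris elementary-effects screening used above, with a made-up three-parameter "yield" function standing in for APSIM-Wheat:

```python
import random

random.seed(42)

def toy_yield(x):
    # Hypothetical crop response: p1 acts directly and through an
    # interaction with p2; p3 is nearly inert (not real APSIM-Wheat).
    p1, p2, p3 = x
    return 4.0 * p1 + 2.0 * p1 * p2 + 0.1 * p3

def morris_mu_star(model, k, trajectories=50, delta=0.25):
    """One-at-a-time elementary effects along random trajectories;
    returns mu* (mean absolute elementary effect) per input."""
    ee = [[] for _ in range(k)]
    for _ in range(trajectories):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = model(x)
        for i in random.sample(range(k), k):  # random perturbation order
            x2 = list(x)
            x2[i] += delta
            ee[i].append(abs(model(x2) - base) / delta)
            x, base = x2, model(x2)
    return [sum(e) / len(e) for e in ee]

mu_star = morris_mu_star(toy_yield, 3)
# Ranking by mu* screens out inert inputs before any expensive
# variance-based analysis: p1 should rank first, p3 last.
print([round(m, 2) for m in mu_star])
```

At a few model runs per trajectory per input, this is what keeps an 82-million-run trait × environment × management design tractable.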
Wind Farm Group Efficiency - A Sensitivity Analysis with a Mesoscale Model
Volker, Patrick; Badger, Jake; Hahmann, Andrea N.
2014-01-01
The total installed capacity in the North Sea was 5 GW in 2012, and it is estimated that it will grow to 40 GW by 2020 (EWEA). This will lead to an increasing wind farm density in regions with the most favourable conditions. In this study, we investigate the sensitivity of power density losses to wi...
Jang, Jinwoo; Smyth, Andrew W.
2017-01-01
The objective of structural model updating is to reduce inherent modeling errors in Finite Element (FE) models due to simplifications, idealized connections, and uncertainties of material properties. Updated FE models, which have fewer discrepancies with the real structures, give more precise predictions of dynamic behavior for future analyses. However, model updating becomes more difficult when applied to civil structures with a large number of structural components and complicated connections. In this paper, a full-scale FE model of a major long-span bridge has been updated for improved consistency with real measured data. Two methods are applied to improve the model updating process. The first method focuses on improving the agreement of the updated mode shapes with the measured data. A nonlinear inequality constraint equation is added to an optimization procedure, providing the capability to keep the updated mode shapes in reasonable agreement with those observed. An interior-point algorithm deals with the nonlinearity in the objective function and constraints. The second method finds highly efficient updating parameters in a more systematic way. The selection of updating parameters in FE models is essential to a successful updating result because the parameters are directly related to the modal properties of dynamic systems. An in-depth sensitivity analysis is carried out in an effort to precisely understand the effects of physical parameters in the FE model on the natural frequencies. Based on the sensitivity analysis, cluster analysis is conducted to find a highly efficient set of updating parameters.
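The sensitivity of natural frequencies to physical parameters can be sketched with finite differences on a toy two-degree-of-freedom spring-mass chain (a stand-in for the bridge FE model; all parameter values are illustrative):

```python
import math

def natural_freqs(k1, k2, m1, m2):
    """Undamped natural frequencies (rad/s) of the 2-DOF chain
    ground -k1- m1 -k2- m2, from the eigenvalues of M^-1 K."""
    a = (k1 + k2) / m1; b = -k2 / m1
    c = -k2 / m2;       d = k2 / m2
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return [math.sqrt((tr - disc) / 2), math.sqrt((tr + disc) / 2)]

def freq_sensitivities(params, h=1e-6):
    """Finite-difference sensitivity of each natural frequency to each
    physical parameter; the basis for ranking updating parameters."""
    base = natural_freqs(**params)
    sens = {}
    for name in params:
        bumped = dict(params, **{name: params[name] * (1 + h)})
        f = natural_freqs(**bumped)
        sens[name] = [(fn - fb) / (params[name] * h)
                      for fn, fb in zip(f, base)]
    return sens

params = {"k1": 2.0e4, "k2": 1.0e4, "m1": 10.0, "m2": 5.0}
for name, s in freq_sensitivities(params).items():
    print(name, [round(v, 5) for v in s])
```

In a full-scale model the same table, with thousands of parameters, is what the cluster analysis groups into an efficient updating set.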
Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models
Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles
2012-01-01
This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
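A sketch of the quantities behind such power curves for Warner's model, under the usual large-sample normal approximation (the sample sizes and design probability below are illustrative):

```python
import math

def warner_estimate(n_yes, n, p):
    """Warner's model: each respondent answers the sensitive statement
    with probability p and its complement with probability 1 - p."""
    lam = n_yes / n                       # observed "yes" proportion
    pi_hat = (lam + p - 1) / (2 * p - 1)  # prevalence estimate
    var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, var

def wald_power(pi_true, pi0, n, p, z_crit=1.96):
    """Approximate power of a one-sided Wald test of H0: pi = pi0."""
    lam = p * pi_true + (1 - p) * (1 - pi_true)
    se = math.sqrt(lam * (1 - lam) / (n * (2 * p - 1) ** 2))
    lam0 = p * pi0 + (1 - p) * (1 - pi0)
    se0 = math.sqrt(lam0 * (1 - lam0) / (n * (2 * p - 1) ** 2))
    # P(reject) = P(Z > (pi0 + z_crit * se0 - pi_true) / se)
    z = (pi0 + z_crit * se0 - pi_true) / se
    return 0.5 * math.erfc(z / math.sqrt(2))

# The randomisation protects privacy but inflates the variance, so
# detecting a small prevalence (e.g. doping) needs a large sample.
print(round(wald_power(0.05, 0.0, 2000, 0.8), 3))
print(round(wald_power(0.05, 0.0, 200, 0.8), 3))
```

The factor (2p - 1)^2 in the variance is exactly the privacy penalty the article's power curves trade off against sample size and design probability.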
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration and sensitivity and uncertainty analysis capabilities, through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications, including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis, in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., for inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
Pianosi, Francesca; Wagener, Thorsten
2016-04-01
Simulations from environmental models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation, and to prioritize efforts for uncertainty reduction. Global Sensitivity Analysis (GSA) provides the theoretical framework and the numerical tools to gain this understanding. However, in traditional applications of GSA, model outputs are an aggregation of the full set of simulated variables. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. In this work, we propose a time-varying version of a recently developed density-based GSA method, called PAWN, as a viable option to reduce this loss of information. We apply our approach to a medium-complexity hydrological model in order to address two questions: [1] Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? [2] Do these influences change in catchments with different characteristics? The results present the first quantitative investigation on the relative importance of parameter and data uncertainty across time. They also provide a demonstration of the value of time-varying GSA to investigate the propagation of uncertainty through numerical models and therefore guide additional data collection needs and model calibration/assessment.
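The density-based PAWN index described above, a Kolmogorov-Smirnov distance between unconditional and conditional output distributions, can be computed per time step to obtain a time-varying sensitivity. The sketch below uses a hypothetical one-reservoir hydrograph in place of the actual medium-complexity model:

```python
import bisect
import math
import random

random.seed(7)

def toy_hydrograph(k, s0, t):
    # Hypothetical linear-reservoir discharge q(t) = s0 * k * exp(-k*t):
    # a stand-in for the hydrological model, not the real one.
    return s0 * k * math.exp(-k * t)

def ecdf_ks(xs, ys):
    # Two-sample KS distance between empirical CDFs.
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for g in xs + ys:
        d = max(d, abs(bisect.bisect_right(xs, g) / len(xs)
                       - bisect.bisect_right(ys, g) / len(ys)))
    return d

def pawn_index(t, which, n=400, n_cond=10):
    """KS distance between the unconditional output CDF at time t and
    CDFs conditioned on fixed values of one input, maximised over the
    conditioning values (a crude version of the PAWN statistic)."""
    uncond = [toy_hydrograph(random.uniform(0.1, 1.0),
                             random.uniform(0.5, 2.0), t)
              for _ in range(n)]
    d = 0.0
    for _ in range(n_cond):
        if which == "k":
            fixed = random.uniform(0.1, 1.0)
            cond = [toy_hydrograph(fixed, random.uniform(0.5, 2.0), t)
                    for _ in range(n)]
        else:
            fixed = random.uniform(0.5, 2.0)
            cond = [toy_hydrograph(random.uniform(0.1, 1.0), fixed, t)
                    for _ in range(n)]
        d = max(d, ecdf_ks(uncond, cond))
    return d

# Indices computed per time step reveal how influence shifts in time.
for t in (0.5, 5.0):
    print(t, round(pawn_index(t, "k"), 2), round(pawn_index(t, "s0"), 2))
```

Running this over every output time step, rather than over an aggregate statistic, is precisely the "time-varying GSA" the abstract advocates.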
Land Sensitivity Analysis of Degradation using MEDALUS model: Case Study of Deliblato Sands, Serbia
Kadović Ratko
2016-12-01
This paper studies the assessment of sensitivity to land degradation of the Deliblato sands (the northern part of Serbia), a special nature reserve. The sandy soils of the Deliblato sands are highly sensitive to degradation (given their fragility), while the system of land use is regulated by law and consists of three protection zones. Based on the MEDALUS approach and the characteristics of the study area, four main factors were considered for evaluation: soil, climate, vegetation and management. Several indicators affecting the quality of each factor were identified. Each indicator was quantified according to its quality and given a weighting between 1.0 and 2.0. ArcGIS 9 was used to analyze and prepare the quality-map layers, using the geometric mean to integrate the individual indicator maps. In turn, the geometric mean of all four quality indices was used to generate the map of sensitivity to land degradation. Results showed that 56.26% of the area is classified as critical; 43.18% as fragile; 0.55% as potentially affected and 0.01% as not affected by degradation. The values of the vegetation quality index, expressed through vegetation cover, diversity of vegetation functions and management policy during the protection regime, are clearly represented by the correlation coefficients (0.87 and 0.47).
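The MEDALUS aggregation described above reduces to nested geometric means. A sketch with illustrative indicator scores (hypothetical numbers, not the Deliblato sands data):

```python
def quality_index(indicator_scores):
    """Geometric mean of indicator scores, each weighted in [1.0, 2.0]
    (1.0 = best quality, 2.0 = worst), as in the MEDALUS approach."""
    prod = 1.0
    for s in indicator_scores:
        prod *= s
    return prod ** (1.0 / len(indicator_scores))

def esa_index(sqi, cqi, vqi, mqi):
    """Environmental sensitivity index: geometric mean of the soil,
    climate, vegetation and management quality indices."""
    return (sqi * cqi * vqi * mqi) ** 0.25

# Illustrative scores for a sandy, well-vegetated, regulated site.
sqi = quality_index([1.8, 1.6, 1.7])  # e.g. texture, depth, drainage
cqi = quality_index([1.2, 1.3])       # e.g. rainfall, aridity
vqi = quality_index([1.1, 1.4, 1.2])  # e.g. cover, fire risk, diversity
mqi = quality_index([1.1, 1.2])       # e.g. use intensity, protection
print(round(esa_index(sqi, cqi, vqi, mqi), 3))
```

In the GIS workflow the same two formulas are simply applied cell by cell to the raster layers, and the resulting index map is classified into the critical/fragile/potential/non-affected classes reported above.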
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
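The advantage of smoothing-based procedures over linear regression appears whenever an input acts non-monotonically. The sketch below substitutes a crude binned-conditional-mean smoother for LOESS, on a toy model with a quadratic input:

```python
import random
import statistics

random.seed(3)

# Toy model with a nonmonotonic input: y depends on x1 quadratically,
# so linear regression sees almost nothing while a smoother does.
xs1 = [random.uniform(-1, 1) for _ in range(2000)]
xs2 = [random.uniform(-1, 1) for _ in range(2000)]
ys = [x1 ** 2 + 0.1 * x2 + random.gauss(0, 0.02)
      for x1, x2 in zip(xs1, xs2)]

def r2_linear(x, y):
    # Squared Pearson correlation = R^2 of a one-input linear fit.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def r2_smoother(x, y, bins=20):
    """Crude stand-in for LOESS: equal-count local means on a sorted
    grid; the share of output variance explained by the conditional
    mean, analogous to a nonparametric sensitivity measure."""
    pairs = sorted(zip(x, y))
    n = len(pairs)
    my = statistics.fmean(y)
    explained = 0.0
    for b in range(bins):
        chunk = [v for _, v in pairs[b * n // bins:(b + 1) * n // bins]]
        explained += len(chunk) * (statistics.fmean(chunk) - my) ** 2
    return explained / sum((v - my) ** 2 for v in y)

print("x1 linear", round(r2_linear(xs1, ys), 3),
      "smoothed", round(r2_smoother(xs1, ys), 3))
print("x2 linear", round(r2_linear(xs2, ys), 3),
      "smoothed", round(r2_smoother(xs2, ys), 3))
```

The gap between the two columns for x1 is the information that rank or quadratic regression-based sensitivity procedures can miss and that the nonparametric techniques above are designed to capture.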
Vibration Sensitive Keystroke Analysis
Lopatka, M.; Peetz, M.-H.; van Erp, M.; Stehouwer, H.; van Zaanen, M.
2009-01-01
We present a novel method for performing non-invasive biometric analysis on habitual keystroke patterns using a vibration-based feature space. With the increasing availability of 3-D accelerometer chips in laptop computers, conventional methods using time vectors may be augmented using a distinct fe
Tang, Kunkun, E-mail: ktg@illinois.edu [The Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC), University of Illinois at Urbana–Champaign, 1308 W Main St, Urbana, IL 61801 (United States); Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Congedo, Pietro M. [Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence (France); Abgrall, Rémi [Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
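The link between a polynomial decomposition and Sobol' indices can be illustrated on a toy two-variable function. The sketch below replaces the paper's sparse stepwise regression with direct Monte Carlo projection onto an orthonormal Legendre basis (valid because the basis is orthonormal under the uniform input measure); the variance shares then read off from squared coefficients exactly as in PDD/ANOVA:

```python
import math
import random

random.seed(5)

# Orthonormal Legendre polynomials on U(-1, 1), degrees 0..2.
LEG = [lambda x: 1.0,
       lambda x: math.sqrt(3.0) * x,
       lambda x: math.sqrt(5.0) * (3.0 * x * x - 1.0) / 2.0]

def model(x1, x2):
    # Toy stochastic system standing in for the expensive simulator.
    return x1 + 0.5 * x1 * x2 + x2 * x2

# Tensor basis up to total degree 2 (the "PDD terms").
terms = [(i, j) for i in range(3) for j in range(3) if 0 < i + j <= 2]

n = 4000
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
ys = [model(*x) for x in xs]
ybar = sum(ys) / n

# Orthonormality reduces regression to Monte Carlo projections
# <f, phi_ij>; no linear solve is needed in this special case.
coeff = {}
for i, j in terms:
    coeff[(i, j)] = sum((y - ybar) * LEG[i](x1) * LEG[j](x2)
                        for (x1, x2), y in zip(xs, ys)) / n

# ANOVA: squared coefficients partition the variance into Sobol' shares.
total = sum(c * c for c in coeff.values())
S1 = sum(c * c for (i, j), c in coeff.items() if j == 0) / total
S2 = sum(c * c for (i, j), c in coeff.items() if i == 0) / total
S12 = sum(c * c for (i, j), c in coeff.items() if i > 0 and j > 0) / total
print(round(S1, 2), round(S2, 2), round(S12, 2))
```

The paper's adaptive machinery (truncated dimensionality, active dimensions, stepwise regression) exists to keep `terms` sparse when the input vector is large; the variance bookkeeping itself is unchanged.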
S. Wang
2012-12-01
Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available on model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model MIKE SHE to contrast a lumped calibration protocol, which used streamflow measured at one single watershed outlet, with a multi-site calibration method that employed streamflow measurements at three stations within the large Chaohe River basin in northern China. Simulation results showed that the single-site calibrated model was able to sufficiently simulate the hydrographs for two of the three stations (Nash-Sutcliffe coefficient of 0.65–0.75, and correlation coefficient 0.81–0.87 during the testing period), but the model performed poorly for the third station (Nash-Sutcliffe coefficient only 0.44). Sensitivity analysis suggested that streamflow of the upstream area of the watershed was dominated by slow groundwater, whilst streamflow of the middle- and downstream areas was dominated by relatively quick interflow. Therefore, a multi-site calibration protocol was deemed necessary. Due to the potential errors and uncertainties with respect to the representation of spatial variability, performance measures from the multi-site calibration protocol slightly decreased for two of the three stations, whereas they improved greatly for the third station. We concluded that the multi-site calibration protocol reached a compromise in terms of model performance for the three stations, reasonably representing the hydrographs of all three stations with Nash-Sutcliffe coefficients ranging from 0.59 to 0.72. The multi-site calibration protocol thus generally has advantages over the single-site calibration protocol.
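The Nash-Sutcliffe coefficient used as the performance measure above is straightforward to compute; the streamflow values below are made up for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance
    to the variance of the observations. 1 is a perfect fit; values
    <= 0 mean the model is no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical daily streamflows (m3/s), not the Chaohe River data.
obs = [5.0, 7.0, 12.0, 30.0, 22.0, 14.0, 9.0, 6.0]
sim = [4.5, 8.0, 11.0, 26.0, 24.0, 15.0, 8.0, 6.5]
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.954
```

Because the denominator is dominated by high flows, NSE rewards fitting flood peaks; that is one reason multi-site scores can drop slightly at some stations while the overall spatial representation improves.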
Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup
2011-01-01
been modeled using modified Michaelis–Menten kinetics and has been implemented in the geochemical code PHREEQC. The model has been calibrated using a Shuffled Complex Evolution Metropolis algorithm against observations of chlorinated solvents, organic acids, and H2 concentrations in laboratory batch...
Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification
A. Sarri
2012-06-01
Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics corresponding to the uncertain trigger features (e.g. either position, shape and speed of a landslide, or sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes the uncertainty quantification impractical. Statistical emulators run almost instantaneously and may represent well the outputs of the computer model. In this paper, we use the outer product emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the leave-one-out method.
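The idea of a fast statistical surrogate can be sketched with a generic Gaussian-process emulator (a simpler relative of the outer product emulator) trained on a handful of runs of a made-up "expensive" model:

```python
import math

def rbf(a, b, ell=0.4):
    # Squared-exponential covariance between two scalar inputs.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for tiny systems).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def emulate(train_x, train_y, query_x, noise=1e-8):
    """Gaussian-process surrogate: a generic stand-in for the outer
    product emulator, trained on a few expensive model runs."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    alpha = solve(K, train_y)
    return sum(w * rbf(x, query_x) for w, x in zip(alpha, train_x))

# "Expensive" model: max wave amplitude vs landslide speed (made up).
def expensive(s):
    return 2.0 * s / (1.0 + s * s)

train_x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
train_y = [expensive(x) for x in train_x]
# The emulator answers in microseconds what the model answers in hours.
print(round(emulate(train_x, train_y, 1.2), 3), round(expensive(1.2), 3))
```

Leave-one-out validation, as in the paper, would refit the emulator seven times, each time predicting the held-out run and checking the error against the emulator's own predictive uncertainty.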
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a "virtual population" from which "virtual individuals" can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the "virtual individuals" that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models.
Giulia Carreras
2012-09-01
Background: parameter uncertainty in the Markov model’s description of a disease course was addressed. Probabilistic sensitivity analysis (PSA) is now considered the only tool that properly permits examination of parameter uncertainty. It consists in sampling values from the parameters’ probability distributions.
Methods: Markov models fitted with microsimulation were considered, and methods for carrying out a PSA on transition probabilities were studied. Two Bayesian solutions were developed: for each row of the modeled transition matrix, the prior distribution was assumed to be a product of Betas or a Dirichlet. The two solutions differ in the source of information: several different sources for each transition in the Beta approach, and a single source for each transition from a given health state in the Dirichlet. The two methods were applied to a simple cervical cancer model.
Results: differences between posterior estimates from the two methods were negligible. Results showed that the prior variability highly influences the posterior distribution.
Conclusions: the novelty of this work is the Bayesian approach that integrates the two distributions with a product-of-Binomials likelihood. Such methods could also be applied to cohort data, and their application to more complex models could be useful and unique in the cervical cancer context, as well as in other disease modeling.
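The Dirichlet variant described above has a convenient sampling form: each row of the transition matrix is drawn from a Dirichlet whose parameters are the observed transition counts, and each draw is propagated through the model. A sketch with hypothetical counts for a three-state model:

```python
import random

random.seed(11)

def dirichlet(counts):
    # Sample from Dirichlet(counts) via normalised Gamma draws.
    g = [random.gammavariate(c, 1.0) for c in counts]
    s = sum(g)
    return [x / s for x in g]

# Hypothetical observed transition counts, one source per health state:
# well -> {well, ill, dead} and ill -> {well, ill, dead}.
counts = {"well": [900, 80, 20], "ill": [100, 700, 200]}

def simulate_cohort(P, n_cycles=20):
    # Cohort simulation: propagate state occupancy over cycles.
    states = ["well", "ill", "dead"]
    occ = {"well": 1.0, "ill": 0.0, "dead": 0.0}
    for _ in range(n_cycles):
        new = {"well": 0.0, "ill": 0.0, "dead": 0.0}
        for s in ("well", "ill"):
            for j, t in enumerate(states):
                new[t] += occ[s] * P[s][j]
        new["dead"] += occ["dead"]  # absorbing state
        occ = new
    return occ["dead"]

# PSA: draw a transition matrix per iteration, propagate to the output.
deaths = []
for _ in range(500):
    P = {s: dirichlet(c) for s, c in counts.items()}
    deaths.append(simulate_cohort(P))

deaths.sort()
print("median", round(deaths[250], 3),
      "interval", round(deaths[12], 3), round(deaths[487], 3))
```

The Beta variant differs only in how each row is built (independent Beta draws from separate sources, renormalised); the propagation step is identical.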
Vesselinov, V. V. (Velimir V.); Keating, E. H. (Elizabeth H.); Zyvoloski, G. A. (George Anthony)
2002-01-01
Predictions and their uncertainty are key aspects of any modeling effort. The prediction uncertainty can be significant when the predictions depend on uncertain system parameters. We analyze prediction uncertainties through constrained nonlinear second-order optimization of an inverse model. The optimized objective function is the weighted squared-difference between observed and simulated system quantities (flux and time-dependent head data). The constraints are defined by the maximization/minimization of the prediction within a given objective-function range. The method is applied in capture-zone analyses of groundwater-supply systems using a three-dimensional numerical model of the Espanola Basin aquifer. We use the finite-element simulator FEHM coupled with parameter-estimation/predictive-analysis code PEST. The model is run in parallel on a multi-processor supercomputer. We estimate sensitivity and uncertainty of model predictions such as capture-zone identification and travel times. While the methodology is extremely powerful, it is numerically intensive.
Schotanus, D; Meeussen, J C L; Lissner, H; van der Ploeg, M J; Wehrer, M; Totsche, K U; van der Zee, S E A T M
2014-01-01
Transport and degradation of a de-icing chemical (containing propylene glycol, PG) in the vadose zone were studied with a lysimeter experiment and a model in which transient water flow, kinetic degradation of PG and soil chemistry were combined. The lysimeter experiment indicated that both aerobic and anaerobic degradation occur in the vadose zone. Therefore, the model included both types of degradation, which was made possible by assuming advection-controlled (mobile) and diffusion-controlled (immobile) zones. In the mobile zone, oxygen can be transported by diffusion in the gas phase. The immobile zone is always water-saturated, and oxygen only diffuses slowly in the water phase. The model is therefore designed so that the redox potential can decrease when PG is degraded, and thus anaerobic degradation can occur. In our model, manganese oxide (MnO2, which is present in the soil) and NO3- (applied to enhance biodegradation) can be used as electron acceptors for anaerobic degradation. The application of NO3- results neither in lower leaching of PG nor in slower depletion of MnO2. The thickness of the snow cover influences the leached fraction of PG: with a high infiltration rate, transport is fast, there is less time for degradation, and thus more PG leaches. The model showed that, in this soil, the effect of the water flow on leaching at 1-m depth dominates over the effect of the degradation parameters.
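The kinetic degradation described above follows a Michaelis-Menten form. A zero-dimensional sketch (hypothetical rate parameters, forward Euler; the real model couples this kinetics to transient flow and PHREEQC chemistry):

```python
def mm_degrade(c0, vmax, km, residence_time, dt=0.01):
    """Michaelis-Menten decay dC/dt = -Vmax * C / (Km + C), integrated
    with forward Euler over the time the solute spends in the
    degrading zone. A zero-dimensional stand-in for the coupled model."""
    c, t = c0, 0.0
    while t < residence_time:
        c = max(c - dt * vmax * c / (km + c), 0.0)
        t += dt
    return c

c0 = 10.0            # initial PG concentration (arbitrary units)
vmax, km = 1.5, 2.0  # hypothetical rate parameters

# Faster infiltration means a shorter residence time in the degrading
# zone, so more of the PG survives to leach; this is the snow-cover
# effect in miniature.
for tau in (2.0, 5.0, 10.0):
    surviving = mm_degrade(c0, vmax, km, tau) / c0
    print("residence", tau, "surviving fraction", round(surviving, 3))
```

The monotone dependence on residence time is why, in the lysimeter soil, the water-flow regime dominates the leached fraction over the degradation parameters themselves.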
Duives, Dorine C.; Daamen, Winnie; Hoogendoorn, Serge P.
2016-04-01
In recent years numerous pedestrian simulation tools have been developed that can support crowd managers and government officials in their tasks. New technologies to monitor pedestrian flows are in dire need of models that allow for rapid state-estimation. Many contemporary pedestrian simulation tools model the movements of pedestrians at a microscopic level, which does not provide an exact solution. Macroscopic models capture the fundamental characteristics of the traffic state at a more aggregate level, and generally have a closed form solution which is necessary for rapid state estimation for traffic management purposes. This contribution presents a next step in the calibration and validation of the macroscopic continuum model detailed in Hoogendoorn et al. (2014). The influence of global and local route choice on the development of crowd movement phenomena, such as dissipation, lane-formation and stripe-formation, is studied. This study shows that most self-organization phenomena and behavioural trends only develop under very specific conditions, and as such can only be simulated using specific parameter sets. Moreover, all crowd movement phenomena can be reproduced by means of the continuum model using one parameter set. This study concludes that the incorporation of local route choice behaviour and the balancing of the aptitude of pedestrians with respect to their own class and other classes are both essential in the correct prediction of crowd movement dynamics.
Schroeter, Jens; Wunsch, Carl
1986-01-01
The paper uses finite-difference nonlinear circulation models to study the uncertainties in flow properties of interest, such as western boundary current transport and potential and kinetic energy, owing to uncertainty in the driving surface boundary condition. The procedure is based upon nonlinear optimization methods. The same calculations permit quantitative study of the value of new information as a function of its type, region of measurement and accuracy, providing a method for evaluating observing strategies. Uncertainty in a model parameter, the bottom friction coefficient, is studied in conjunction with uncertain measurements. The model is free to adjust the bottom friction coefficient so that an objective function is minimized while fitting a set of data to within prescribed bounds. The relative importance of accurate knowledge of the friction coefficient with respect to various kinds of observations is then quantified, and the possible range of the friction coefficient is calculated.
Mehta, Piyush M.; Kubicek, Martin; Minisci, Edmondo; Vasile, Massimiliano
2017-01-01
Well-known tools developed for satellite and debris re-entry perform break-up and trajectory simulations in a deterministic sense and do not perform any uncertainty treatment. The treatment of uncertainties associated with the re-entry of a space object requires a probabilistic approach. A Monte Carlo campaign is the intuitive approach to performing a probabilistic analysis; however, it is computationally very expensive. In this work, we use a recently developed approach, based on a new derivation of the high-dimensional model representation method, to implement a computationally efficient probabilistic re-entry analysis. Both aleatoric and epistemic uncertainties that affect the aerodynamic trajectory and ground impact location are considered. The method is applicable to both controlled and uncontrolled re-entry scenarios. The resulting ground impact distributions are far from the typically used Gaussian or ellipsoid distributions.
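The Monte Carlo campaign the authors contrast with their surrogate-based method can be sketched as follows; the flat-earth ballistic model and the uncertainty magnitudes are hypothetical stand-ins, not the re-entry break-up code or the HDMR surrogate described above.

```python
import math
import random

# Minimal Monte Carlo sketch of a probabilistic impact analysis.
# The ballistic model and parameter uncertainties are hypothetical.

def impact_range(v, angle_deg, g=9.81):
    # range of a simple ballistic arc for speed v and flight-path angle
    a = math.radians(angle_deg)
    return v ** 2 * math.sin(2.0 * a) / g

def monte_carlo(n=5000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v = rng.gauss(300.0, 15.0)    # aleatoric speed uncertainty
        ang = rng.gauss(45.0, 3.0)    # aleatoric angle uncertainty
        samples.append(impact_range(v, ang))
    mean = sum(samples) / n
    std = (sum((s - mean) ** 2 for s in samples) / (n - 1)) ** 0.5
    return mean, std

mean, std = monte_carlo()
# for a nonlinear model the impact distribution need not be Gaussian
```

Even this toy version needs thousands of model evaluations for stable statistics, which is the cost the surrogate approach avoids.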
Liang Tang
2010-01-01
A mathematical model for M/G/1-type queueing networks with multiple user applications and limited resources is established. The goal is to develop a dynamic distributed algorithm for this model, which supports all data traffic as efficiently as possible and makes optimally fair decisions about how to minimize the network performance cost. An online policy gradient optimization algorithm based on a single sample path is provided to avoid the “curse of dimensionality”. The asymptotic convergence properties of this algorithm are proved. Numerical examples provide valuable insights for bridging mathematical theory with engineering practice.
Schotanus, D.; Meeussen, J.C.L.; Lissner, H.; Ploeg, van der M.J.; Wehrer, M.; Totsche, K.U.; Zee, van der S.E.A.T.M.
2014-01-01
Transport and degradation of de-icing chemical (containing propylene glycol, PG) in the vadose zone were studied with a lysimeter experiment and a model, in which transient water flow, kinetic degradation of PG and soil chemistry were combined. The lysimeter experiment indicated that aerobic as well
Spectral sensitivity analysis of FWI in a constant-gradient background velocity model
Kazei, V.; Kashtan, B.M.; Troyan, V.N.; Mulder, W.A.
2013-01-01
Full waveform inversion suffers from local minima, due to a lack of low frequencies in the data. A reflector below the zone of interest may help in recovering the long-wavelength components of a velocity perturbation, as demonstrated in a paper by Mora. Because smooth models are more popular as init
Sensitivity analysis of ground level ozone in India using WRF-CMAQ models
Sharma, Sumit; Chatani, Satoru; Mahtta, Richa; Goel, Anju; Kumar, Atul
2016-01-01
Ground level ozone is emerging as a pollutant of concern in India. Limited surface monitoring data reveals that ozone concentrations are well above the prescribed national standards. This study aims to simulate the regional and urban scale ozone concentrations in India using WRF-CMAQ models. Sector-
Sensitivity Analysis of a Cognitive Architecture for the Cultural Geography Model
2011-12-01
Theory of Planned Behavior (TPB) proposed by Ajzen (1991). TPB postulates three key factors that determine an individual's intention, leading to a behavior (Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211).
Sensitivity Analysis of Empirical Parameters in the Ionosphere-Plasmasphere Model
2011-03-01
drift that is calculated within the model. The most significant results from this comparison occur during the day near Madagascar (45◦E) and in the ... cases. Although the decrease near Madagascar occurs for this case with a maximum decrease of 50% (29 TECU) at 0600 UT, the increase in the Southeast
Snoek, J.W.; Stigter, J.D.; Ogink, N.W.M.; Groot Koerkamp, P.W.G.
2014-01-01
Ammonia (NH3) emission can cause acidification and eutrophication of the environment, is an indirect source of nitrous oxide, and is a precursor of fine dust. The current mechanistic NH3 emission base model for explaining and predicting NH3 emissions from dairy cow houses with cubicles, a floor and
Abdallah, Wael
2011-05-18
Interfacial tension (IFT) measurements of dodecane/brine systems at different concentrations, and of dodecane/deionized water subjected to different dodecane purification cycles, were taken over extended durations at room temperature and pressure to investigate the impact of aging. When a fresh droplet was formed, a sharp drop in IFT was observed, assumed to be a result of intrinsic impurity adsorption at the interface. The subsequent measurements exhibited a prolonged equilibration period consistent with diffusion from the bulk phase to the interface. Our results indicate that minute amounts of impurities present in experimental fluids used "as received" have a drastic impact on the properties of the interface. Initial and equilibrium IFT are shown to be dramatically different; it is therefore important to be cautious when utilizing IFT values in numerical models. The study demonstrates the impact these variations in IFT have on relative permeability relationships by adopting a simple pore-network model simulation.
Sensitivity analysis of a biofilm model describing mixed growth of nitrite oxidisers in a CSTR.
Kornaros, M; Dokianakis, S N; Lyberatos, G
2006-01-01
A simple kinetic model has been developed for describing nitrite oxidation by autotrophic aerobic nitrifiers in a CSTR reactor, in which mixed (suspended and attached) growth conditions are prevailing. In this work, a critical dimensionless parameter is identified containing both biofilm characteristics and microbial kinetic parameters, as well as the specific (per volume) surface of the reactor configuration used. Evaluation of this dimensionless parameter can easily provide information on whether or not wall attachment is critical, and should be taken into account either in kinetic studies or in reactor design, when specific pollutants are to be removed from the waste influent stream. The effect of bulk dissolved oxygen (DO) concentration on the validity of this model is addressed and minimum non-limiting DO concentrations are proposed depending on the reactor configuration.
Scaling the Earth: A Sensitivity Analysis of Terrestrial Exoplanetary Interior Models
Unterborn, Cayman T; Panero, Wendy R
2015-01-01
An exoplanet's structure and composition are first-order controls of the planet's habitability. We explore which aspects of bulk terrestrial planet composition and interior structure affect the chief observables of an exoplanet: its mass and radius. We apply these perturbations to the Earth, the planet we know best. Using the mineral physics toolkit BurnMan to self-consistently calculate mass-radius models, we find that core radius, the presence of light elements in the core, and an upper mantle consisting of low-pressure silicates have the largest effects on the final calculated mass at a given radius, with mantle composition being secondary. We further apply this model to determine the interior composition of Kepler-36b, finding that it is likely structurally similar to the Earth with Si/Fe = 1.14, compared to Earth's Si/Fe = 1 and the Sun's Si/Fe = 1.19. We expand these results to provide a grid of terrestrial mass-radius models for determining whether exoplanets are indeed "Earth-like" as bound by their composition and...
Eigenfrequency sensitivity analysis of flexible rotors
Šašek J.
2007-10-01
This paper deals with sensitivity analysis of eigenfrequencies with respect to design parameters. The sensitivity analysis is applied to a rotor consisting of a shaft and a disk; the design parameters are the disk radius and the disk width. The shaft is modeled as a 1D continuum using shaft finite elements. The disks of rotating systems are commonly modeled as rigid bodies; the presented approach instead models the disk as a 3D flexible continuum discretized using hexahedral finite elements. Both components of the rotor are connected by specially proposed couplings. The whole rotor is modeled in a rotating coordinate system, taking rotation effects into account (gyroscopic and dynamic stiffness matrices).
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. In contrast to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a handful of runs covering a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, whose solution is the parameter sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
Jiang, S C; Zhang, X X
2005-12-01
A two-dimensional model was developed to simulate laser-induced interstitial thermotherapy (LITT) treatment procedures with temperature monitoring, accounting for the effects of dynamic changes in physical properties on tissue temperature and damage. A modified Monte Carlo method was used to simulate photon transport in the tissue in the non-uniform optical property field, with the finite volume method used to solve the Pennes bioheat equation for the temperature distribution and the Arrhenius equation used to predict the extent of thermal damage. The laser light transport, the heat transfer and the damage accumulation were calculated iteratively at each time step. The influences of different laser sources, applicator sizes, and irradiation modes on the final damage volume were analyzed to optimize the LITT treatment. The numerical results showed that the damage volume was smallest for the 1,064-nm laser, with much larger, similar damage volumes for the 980- and 850-nm lasers at normal blood perfusion rates; with temporally interrupted blood perfusion, the damage volume was largest for the 1,064-nm laser, with significantly smaller, similar damage volumes for the 980- and 850-nm lasers. The numerical results also showed that variations in applicator size, laser power, heating duration and temperature monitoring range significantly affected the shapes and sizes of the thermal damage zones, which can therefore be optimized by selecting these settings appropriately.
Quan Zhou
2015-01-01
The eddy current brake (ECB) is an attractive contactless brake, but it suffers from braking-torque attenuation as the rotating speed increases. To stabilize the ECB's torque-generation property, this paper introduces the concept of anti-magnetomotive force to develop an ECB model on the basis of magnetic circuits. In the developed model, the eddy current demagnetization and the influence of temperature, which cause the braking-torque attenuation, are clearly represented. Using the developed model, the external and internal characteristics of the ECB are simulated in MATLAB. To determine the sensitivity of the influences on the ECB's torque-generation stability, stability indexes are defined, followed by a sensitivity analysis of the internal parameters of an ECB. Finally, this paper indicates that (i) the stability of the ECB's torque-generating property can be enhanced by obtaining the optimal combination of demagnetization speed point and nominal maximum braking torque; (ii) the most remarkable factor shifting the demagnetization speed point of the ECB is the thickness of the air gap; and (iii) the radius of the pole shoe's cross-section and the distance from the pole-shoe center to the rotation center have the most significant influence on the nominal maximum braking torque.
Y. Y. Yu
2013-01-01
To accurately estimate past terrestrial carbon pools is the key to understanding the global carbon cycle and its relationship with the climate system. SoilGen2 is a useful tool to obtain aspects of soil properties (including carbon content) by simulating soil formation processes; thus it offers an opportunity for both past soil carbon pool reconstruction and future carbon pool prediction. In order to apply it to various environmental conditions, parameters related to the carbon cycle process in SoilGen2 are calibrated based on six soil pedons from two typical loess deposition regions (Belgium and China). Sensitivity analysis using the Morris method shows that the decomposition rate of humus (k_{HUM}), the fraction of incoming plant material as leaf litter (fr_{ecto}) and the decomposition rate of resistant plant material (k_{RPM}) are the three most sensitive parameters that would cause the greatest uncertainty in the simulated change of soil organic carbon in both regions. According to the principle of minimizing the difference between simulated and measured organic carbon by comparing quality indices, the suited values of k_{HUM}, fr_{ecto} and k_{RPM} in the model are deduced step by step and validated for independent soil pedons. The difference in calibrated parameters between Belgium and China may be attributed to their different vegetation types and climate conditions. This calibrated model allows more accurate simulation of carbon change in the whole pedon and has potential for future modeling of the carbon cycle over long timescales.
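The Morris screening step used above can be illustrated with a minimal one-at-a-time elementary-effects sketch; the three-parameter linear model and the parameter ranges below are assumptions for illustration, not SoilGen2 itself:

```python
import random

# One-at-a-time elementary-effects (Morris-style) screening sketch.
# The linear model is a toy stand-in for the simulated
# soil-organic-carbon output.

def model(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.01 * x[2]

def morris_mu_star(model, k=3, r=20, delta=0.1, seed=1):
    """Mean absolute elementary effect (mu*) for each of k parameters,
    averaged over r random base points in the unit cube."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = [rng.random() * (1.0 - delta) for _ in range(k)]
        base = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta                       # perturb one factor
            effects[i].append(abs((model(xp) - base) / delta))
    return [sum(e) / r for e in effects]

mu_star = morris_mu_star(model)
# mu_star ranks the parameters: x0 dominates, x2 is negligible
```

Ranking parameters by mu* is how the three most sensitive SoilGen2 parameters were identified.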
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
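Two of the kinds of measures such comparisons contrast, a local normalized partial derivative and a sampling-based correlation coefficient, can be illustrated on a toy multiplicative dose model; the model and its parameter values are hypothetical, not the tritium dosimetry model:

```python
import random

# Toy comparison of two relative-sensitivity measures.
# dose = release * dilution factor * intake (all values hypothetical)

def dose(x):
    return x[0] * x[1] * x[2]

nominal = [2.0, 0.5, 1.5]

def local_sensitivity(i, h=1e-6):
    # normalized derivative (x_i / y) * dy/dx_i at the nominal point
    xp = list(nominal)
    xp[i] += h
    dydx = (dose(xp) - dose(nominal)) / h
    return nominal[i] * dydx / dose(nominal)

def correlation_sensitivity(i, n=2000, spread=0.1, seed=0):
    # Pearson correlation between sampled x_i and the model output
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = [v * (1.0 + spread * (rng.random() - 0.5)) for v in nominal]
        xs.append(x[i])
        ys.append(dose(x))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

s_local = local_sensitivity(0)       # exactly 1 for a multiplicative model
s_corr = correlation_sensitivity(0)  # well below 1 when all inputs vary
```

The two measures can rank parameters similarly yet report very different magnitudes, which is why comparing rankings across techniques matters.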
Rankinen, K.; Granlund, K. [Finnish Environmental Inst., Helsinki (Finland); Futter, M. N. [Swedish Univ. of Agricultural Sciences, Uppsala (Sweden)
2013-11-01
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low flow periods; so effects on total loads were small. (orig.)
Sensitivity Analysis Using Simple Additive Weighting Method
Wayne S. Goodridge
2016-05-01
The output of a multiple-criteria decision method often has to be analyzed using some sensitivity analysis technique. The SAW MCDM method is commonly used in the management sciences, and there is a critical need for a robust approach to sensitivity analysis given that uncertain data are often present in decision models. Most sensitivity analysis techniques for the SAW method involve Monte Carlo simulation on the initial data. These methods are computationally intensive and often require complex software. In this paper, the SAW method is extended to include an objective function which makes it easy to analyze the influence of specific changes in certain criteria values, thus making it easy to perform sensitivity analysis.
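A minimal sketch of SAW scoring with a one-at-a-time weight-perturbation check is given below; the decision matrix and weights are invented for illustration, and the paper's objective-function extension is not reproduced:

```python
# Simple additive weighting (SAW) with a basic weight-sensitivity check.

def saw_scores(matrix, weights):
    # normalize each (benefit) criterion column by its column maximum
    maxima = [max(col) for col in zip(*matrix)]
    return [sum(w * v / m for w, v, m in zip(weights, row, maxima))
            for row in matrix]

def rank_stability(matrix, weights, step=0.05):
    """Index of the top alternative, and whether it changes when each
    criterion weight is increased by `step` (weights re-normalized)."""
    def top(w):
        scores = saw_scores(matrix, w)
        return max(range(len(scores)), key=scores.__getitem__)
    base_top = top(weights)
    flips = []
    for j in range(len(weights)):
        w = list(weights)
        w[j] += step
        total = sum(w)
        flips.append(top([x / total for x in w]) != base_top)
    return base_top, flips

decisions = [[7, 9, 9],     # alternative A
             [8, 7, 8],     # alternative B
             [9, 6, 8]]     # alternative C
base_top, flips = rank_stability(decisions, [0.4, 0.3, 0.3])
# A wins, and its rank is stable under these small weight perturbations
```

A Monte Carlo variant would instead resample the matrix entries themselves, which is the computational burden the paper's extension aims to avoid.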
Rodrigo-Ilarri, Javier; Segura-Sobrino, Francisco; Rodrigo-Clavero, Maria-Elena
2014-05-01
Landfills are commonly used as the final deposit of urban solid waste. Although the waste is previously processed at a treatment plant, the amount of organic matter that finally reaches the landfill is nevertheless large. The biodegradation of this organic matter forms a mixture of greenhouse gases (essentially methane and carbon dioxide, as well as ammonia and hydrogen sulfide). From the environmental point of view, solid waste landfills are therefore considered to be one of the main greenhouse gas sources. Different mathematical models are usually applied to predict the amount of biogas produced in real landfills. The waste chemical composition and the availability of water in the solid waste appear to be the main parameters of these models. Results obtained when performing a sensitivity analysis of the biogas production model parameters under real conditions are shown. The importance of a proper characterization of the waste, as well as the necessity of improving the understanding of the behaviour of water in the unsaturated mass of waste, are emphasized.
Fate and transport of mercury in soil systems : a numerical model in HP1 and sensitivity analysis
Leterme, Bertrand; Jacques, Diederik
2013-04-01
demethylation was not implemented, because it could be neglected in an oxidising environment. However, if the model is to be tested in more reducing conditions (e.g. a shallow groundwater table), methyl- and dimethylmercury formation can be non-negligible. Using a 50-year time series of daily weather observations in Dessel (Belgium) and a typical sandy soil with deep groundwater (free drainage, oxic conditions), a sensitivity analysis was performed to assess the relative importance of processes and parameters within the model. We used the elementary effects method (Morris, 1991; Campolongo et al., 2007), which draws trajectories across the parameter space to derive information on the global sensitivity of the selected input parameters. The impact of different initial contamination phases (solid, NAPL, aqueous and combinations of these) was also tested. Simulation results are presented in terms of (i) Hg volatilized to the atmosphere; (ii) Hg leached out of the soil profile; (iii) Hg still present in the soil horizon originally polluted; and (iv) Hg still present in the soil profile but below the original contaminated horizon. The processes and parameters identified as critical by the sensitivity analysis differ from one scenario to another, depending on the pollution type (cinnabar, NAPL, aqueous Hg), on the indicator assessed and on time (after 5, 25 or 50 years). In general, however, DOM in soil water was the most critical parameter. Other important parameters were those related to Hg sorption on SOM (thiols, and humic and fulvic acids), and to Hg complexation with DOM. The initial Hg concentration was also often identified as a sensitive parameter. Interactions between factors and non-linear effects as measured by the elementary effects method were generally important, but also dependent on the type of contamination and on time. No model calibration was performed until now. The numerical tool could greatly benefit from partial model calibration and/or validation.
Ideally, detailed
Sensitivity analysis and application in exploration geophysics
Tang, R.
2013-12-01
In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that are unavoidably contaminated by various noises and sampled at a limited number of observation sites. Furthermore, owing to the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data, making interpretation of the result difficult. We applied sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also called the Jacobian matrix or sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and reciprocity theory. We obtain visualized sensitivity plots by calculating the sensitivity matrix; the less-resolved parts of the model are thereby indicated and should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively reliable. Sensitivity analysis is hereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem in exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of different survey sensitivities with respect to the
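The perturbation approach to building the sensitivity (Jacobian) matrix can be sketched with a toy forward model; the two-datum, three-parameter function below is a hypothetical stand-in, not an MT response code:

```python
# Perturbation construction of the sensitivity matrix: each model
# parameter is perturbed in turn and the change in every predicted
# datum is recorded.

def forward(m):
    return [m[0] * m[1],        # datum 1
            m[1] + m[2] ** 2]   # datum 2

def sensitivity_matrix(forward, m, h=1e-6):
    d0 = forward(m)
    cols = []
    for j in range(len(m)):
        mp = list(m)
        mp[j] += h                              # perturb parameter j
        cols.append([(a - b) / h for a, b in zip(forward(mp), d0)])
    # transpose so rows index data and columns index parameters
    return [list(row) for row in zip(*cols)]

J = sensitivity_matrix(forward, [2.0, 3.0, 1.0])
# analytic Jacobian at this point: [[3, 2, 0], [0, 1, 2]]
```

A column of near-zero entries flags a parameter the data barely constrain, which is exactly what the visualized sensitivity plots expose.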
Ciffroy, P; Alfonso, B; Altenpohl, A; Banjac, Z; Bierkens, J; Brochot, C; Critto, A; De Wilde, T; Fait, G; Fierens, T; Garratt, J; Giubilato, E; Grange, E; Johansson, E; Radomyski, A; Reschwann, K; Suciu, N; Tanaka, T; Tediosi, A; Van Holderbeke, M; Verdonck, F
2016-10-15
MERLIN-Expo is a library of models that was developed in the framework of the FP7 EU project 4FUN in order to provide an integrated assessment tool for state-of-the-art exposure assessment for environment, biota and humans, allowing the detection of scientific uncertainties at each step of the exposure process. This paper describes the main features of the MERLIN-Expo tool. The main challenges in exposure modelling that MERLIN-Expo has tackled are: (i) the integration of multimedia (MM) models simulating the fate of chemicals in environmental media, and of physiologically based pharmacokinetic (PBPK) models simulating the fate of chemicals in the human body; MERLIN-Expo thus allows the determination of internal effective chemical concentrations; (ii) the incorporation of a set of functionalities for uncertainty/sensitivity analysis, from screening to variance-based approaches; the availability of such tools for uncertainty and sensitivity analysis is intended to facilitate the incorporation of such issues in future decision making; (iii) the integration of human and wildlife biota targets with common fate modelling in the environment. MERLIN-Expo is composed of a library of fate models dedicated to non-biological receptor media (surface waters, soils, outdoor air), biological media of concern for humans (several cultivated crops, mammals, milk, fish), as well as wildlife biota (primary producers in rivers, invertebrates, fish) and humans. These models can be linked together to create flexible scenarios relevant for both human and wildlife biota exposure. Standardized documentation for each model and training material were prepared to support an accurate use of the tool by end-users. One of the objectives of the 4FUN project was also to increase the confidence in the applicability of the MERLIN-Expo tool through targeted realistic case studies. In particular, we aimed at demonstrating the feasibility of building complex realistic exposure scenarios and the accuracy of the
Schwarz, Massimiliano; Cohen, Denis
2017-04-01
The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneities of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. Coupling these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for modeling shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface due to changes in soil matrix saturation. Furthermore, the hydro-mechanical changes of soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three types of loading: tension, compression, and shearing. The present analysis shows how the deformation conditions due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, in densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, preventing crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection forest management and bioengineering techniques.
Phantom pain: A sensitivity analysis
Borsje, Susanne; Bosmans, JC; Van der Schans, CP; Geertzen, JHB; Dijkstra, PU
2004-01-01
Purpose: To analyse how decisions to dichotomise the frequency and impediment of phantom pain into absent and present influence the outcome of studies by performing a sensitivity analysis on an existing database. Method: Five hundred and thirty-six subjects were recruited from the database of an o
Jiménez-Murcia, Susana; Fernández-Aranda, Fernando; Mestre-Bach, Gemma; Granero, Roser; Tárrega, Salomé; Torrubia, Rafael; Aymamí, Neus; Gómez-Peña, Mónica; Soriano-Mas, Carles; Steward, Trevor; Moragas, Laura; Baño, Marta; Del Pino-Gutiérrez, Amparo; Menchón, José M
2017-06-01
Most individuals will gamble during their lifetime, yet only a select few will develop gambling disorder. Gray's Reinforcement Sensitivity Theory holds promise for providing insight into gambling disorder etiology and symptomatology as it ascertains that neurobiological differences in reward and punishment sensitivity play a crucial role in determining an individual's affect and motives. The aim of the study was to assess a mediational pathway, which included patients' sex, personality traits, reward and punishment sensitivity, and gambling-severity variables. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire, the South Oaks Gambling Screen, the Symptom Checklist-Revised, and the Temperament and Character Inventory-Revised were administered to a sample of gambling disorder outpatients (N = 831), diagnosed according to DSM-5 criteria, attending a specialized outpatient unit. Sociodemographic variables were also recorded. A structural equation model found that both reward and punishment sensitivity were positively and directly associated with increased gambling severity, sociodemographic variables, and certain personality traits while also revealing a complex mediational role for these dimensions. To this end, our findings suggest that the Sensitivity to Punishment and Sensitivity to Reward Questionnaire could be a useful tool for gaining a better understanding of different gambling disorder phenotypes and developing tailored interventions.
Kang, Daiwen; Aneja, Viney P.; Mathur, Rohit; Ray, John D.
2003-10-01
A detailed modeling analysis is conducted focusing on nonmethane hydrocarbons and ozone in three southeast United States national parks for a 15-day time period (14-29 July 1995) characterized by high O3 surface concentrations. The three national parks are Great Smoky Mountains National Park (GRSM), Mammoth Cave National Park (MACA), and Shenandoah National Park (SHEN), Big Meadows. A base emission scenario and eight variant predictions are analyzed, and predictions are compared with data observed at the three locations for the same time period. Model-predicted concentrations are higher than observed values for O3 (with a cutoff of 40 ppbv) by 3.0% at GRSM, 19.1% at MACA, and 9.0% at SHEN (mean normalized bias error). They are very similar to observations for overall mean ozone concentrations at GRSM and SHEN. They generally agree (the same order of magnitude) with observed values for lumped paraffin compounds but are an order of magnitude lower for other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene). Model sensitivity analyses here indicate that each location differs in terms of volatile organic compound (VOC) capacity to produce O3, but a maximum VOC capacity point (MVCP) exists at all locations that changes the influence of VOCs on O3 from net production to production suppression. Analysis of individual model processes shows that more than 50% of daytime O3 concentrations at the high-elevation rural locations (GRSM and SHEN) are transported from other areas; local chemistry is the second largest O3 contributor. At the low-elevation location (MACA), about 80% of daytime O3 is produced by local chemistry and 20% is transported from other areas. Local emissions (67-95%) are predominantly responsible for VOCs at all locations, the rest coming from transport. Chemistry processes are responsible for about 50% removal of VOCs for all locations; less than 10% are lost to surface deposition and the rest are exported to other areas.
An analysis of sensitivity tests
Neyer, B.T.
1992-03-06
A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, μ, and the standard deviation, σ) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
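The likelihood-ratio construction can be sketched on a hypothetical go/no-go data set, assuming a normal (probit) threshold distribution; the stimulus levels, outcomes, grid ranges, and 95% level below are illustrative choices, not values from the original work:

```python
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import minimize

# Hypothetical go/no-go sensitivity-test data: stimulus levels and outcomes.
levels = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4])
fired = np.array([0, 0, 1, 0, 1, 1, 1, 1])

def neg_log_lik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    p = norm.cdf(levels, loc=mu, scale=sigma)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(fired * np.log(p) + (1 - fired) * np.log(1 - p))

# Maximum-likelihood estimate of (mu, sigma).
mle = minimize(neg_log_lik, x0=[1.5, 0.5], method="Nelder-Mead")
ll_max = -mle.fun

# Likelihood-ratio confidence region: grid points whose deviance
# 2*(ll_max - ll) stays below the chi-square (2 dof) 95% quantile.
thresh = chi2.ppf(0.95, df=2)
mus = np.linspace(1.0, 2.2, 60)
sigmas = np.linspace(0.05, 1.5, 60)
region = [(m, s) for m in mus for s in sigmas
          if 2.0 * (ll_max + neg_log_lik([m, s])) <= thresh]
```

Unlike asymptotic (Wald-style) ellipses, the region here follows the actual likelihood surface, which is why it does not shrink unrealistically for small or poorly overlapped test series.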
George P. Petropoulos
2015-05-01
In today's changing climate, the development of robust, accurate and globally applicable models is imperative for a wider understanding of Earth's terrestrial biosphere. Moreover, an understanding of the representation, sensitivity and coherence of such models is vital for the operationalisation of any physically based model. A Global Sensitivity Analysis (GSA) was conducted on the SimSphere land biosphere model, in which a meta-modelling method adopting Bayesian theory was implemented. Initially, the effects of assuming uniform probability distribution functions (PDFs) for the model inputs, when examining the sensitivity of key quantities simulated by SimSphere at different output times, were examined. Topographic model input parameters (e.g., slope, aspect, and elevation) were derived within a Geographic Information System (GIS) before implementation within the model. The effect of time of the simulation on the sensitivity of previously examined outputs was also analysed. Results showed that simulated outputs were significantly influenced by changes in topographic input parameters, fractional vegetation cover, vegetation height and surface moisture availability, in agreement with previous studies. Time of model output simulation had a significant influence on the absolute values of the output variance decomposition, but it did not seem to change the relative importance of each input parameter. Sensitivity Analysis (SA) results for the newly modelled outputs allowed identification of the most responsive model inputs and interactions. Our study presents an important step forward in SimSphere verification given the increasing interest in its use both as an independent modelling and educational tool. Furthermore, this study is very timely given on-going efforts towards the development of operational products based on the synergy of SimSphere with Earth Observation (EO) data. In this context, results also provide additional support for the
Furfaro, R.; Morris, R. D.; Kottas, A.; Taddy, M.; Ganapol, B. D.
2007-12-01
Analyzing, quantifying and reporting the uncertainty in remotely sensed data products is critical for our understanding of Earth's coupled system. It is the only way in which the uncertainty of further analyses using these data products as inputs can be quantified. Analyzing the source of the data product uncertainties can identify where the models must be improved, or where better input information must be obtained. Here we focus on developing a probabilistic framework for analysis of uncertainties occurring when satellite data (e.g., MODIS) are employed to retrieve biophysical properties of vegetation. Indeed, the process of remotely estimating vegetation properties involves inverting a Radiative Transfer Model (RTM), as in the case of the MOD15 algorithm where seven atmospherically corrected reflectance factors are ingested and compared to a set of computed, RTM-based, reflectances (look-up table) to infer the Leaf Area Index (LAI). Since inversion is generally ill-conditioned, and since a-priori information is important in constraining the inverse model, sensitivity analysis plays a key role in defining which parameters have the greatest impact on the computed observation. We develop a framework to perform global sensitivity analysis, i.e., to determine how the output changes as all inputs vary continuously. We used a coupled Leaf-Canopy radiative transfer Model (LCM) to approximate the functional relationship between the observed reflectance and vegetation biophysical parameters. LCM was designed to study the feasibility of detecting leaf/canopy biochemistry using remotely sensed observations and has the unique capability to include leaf biochemistry (e.g., chlorophyll, water, lignin, protein) as input parameters. The influence of LCM input parameters (including canopy morphological and biochemical parameters) on the hemispherical reflectance is captured by computing the "main effects", which give information about the influence of each input, and the "sensitivity
W. Zhang
2012-03-01
The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity analysis of ISORROPIA, the inorganic aerosol module of CMAQ. A case-specific approach has been applied, and the sensitivities of activity coefficients and water content are explicitly computed. Stand-alone tests are performed for ISORROPIA by comparing the sensitivities (first- and second-order) computed by HDDM and the brute force (BF) approximations. A similar comparison has also been carried out for CMAQ sensitivities simulated using a week-long winter episode for a continental US domain. Second-order sensitivities of aerosol species (e.g., sulfate, nitrate, and ammonium) with respect to domain-wide SO2, NOx, and NH3 emissions show agreement with BF results, yet exhibit less noise in locations where BF results are demonstrably inaccurate. Second-order sensitivity analysis elucidates poorly understood nonlinear responses of secondary inorganic aerosols to their precursors and competing species. Adding second-order sensitivity terms to the Taylor series projection of the nitrate concentrations with a 50% reduction in domain-wide NOx or SO2 emissions rates improves the prediction with statistical significance.
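The Taylor-series projection step the abstract refers to can be sketched with invented sensitivity coefficients (c0, s1 and s2 below are placeholders for illustration; in HDDM they are computed from the chemistry-transport model):

```python
# Hypothetical base-case nitrate concentration and its HDDM-style
# sensitivities to domain-wide NOx emissions (illustrative numbers only).
c0 = 4.0    # base nitrate concentration, ug/m3
s1 = 2.5    # first-order semi-normalized sensitivity
s2 = -1.8   # second-order sensitivity

def taylor_projection(delta, order=2):
    """Project the concentration for a fractional emission change `delta`
    (e.g. -0.5 for a 50% reduction) with a truncated Taylor series."""
    c = c0 + s1 * delta
    if order >= 2:
        c += 0.5 * s2 * delta**2
    return c

first = taylor_projection(-0.5, order=1)    # 4.0 + 2.5*(-0.5)        = 2.75
second = taylor_projection(-0.5, order=2)   # 2.75 + 0.5*(-1.8)*0.25  = 2.525
```

The second-order term captures the curvature of the response, which is why the abstract reports a statistically significant improvement over the first-order projection for large (50%) emission changes.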
A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...
Wei, Wei; Larrey-Lassalle, Pyrene; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis
2015-01-06
Sensitivity analysis (SA) is a significant tool for studying the robustness of results and their sensitivity to uncertainty factors in life cycle assessment (LCA). It highlights the most important set of model parameters to determine whether data quality needs to be improved, and to enhance interpretation of results. Interactions within the LCA calculation model and correlations within Life Cycle Inventory (LCI) input parameters are two main issues in the LCA calculation process. Here we propose a methodology for conducting a proper SA which takes into account the effects of these two issues. This study first presents the SA in an uncorrelated case, comparing local and independent global sensitivity analysis. Independent global sensitivity analysis aims to analyze the variability of results due to the variation of input parameters over the whole domain of uncertainty, together with interactions among input parameters. We then apply a dependent global sensitivity approach that makes minor modifications to traditional Sobol indices to address the correlation issue. Finally, we propose some guidelines for choosing the appropriate SA method depending on the characteristics of the model and the goals of the study. Our results clearly show that the choice of sensitivity methods should be made according to the magnitude of uncertainty and the degree of correlation.
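The first-order Sobol index that these traditional indices build on can be sketched, for the uncorrelated case, with a pick-freeze estimator on a toy linear model whose analytic index is known (the model and its coefficients are illustrative, not an LCA system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LCA-style response: an impact score linear in two independent
# inventory parameters (coefficients chosen for illustration).
def model(x1, x2):
    return 3.0 * x1 + 1.0 * x2

n = 200_000
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
b2 = rng.standard_normal(n)

# Pick-freeze estimator of the first-order Sobol index of x1:
# correlate runs that share x1 but redraw x2.
y = model(a1, a2)
y1 = model(a1, b2)          # x1 "frozen", x2 resampled
s1 = (np.mean(y * y1) - np.mean(y) * np.mean(y1)) / y.var()

# Analytic value for this linear model: 3^2 / (3^2 + 1^2) = 0.9.
```

For correlated inputs this estimator is no longer valid as-is, which is exactly the situation the dependent global sensitivity approach in the paper is designed to handle.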
Fujimura, Kazumasa; Iseri, Yoshihiko; Kanae, Shinjiro; Murakami, Masahiro
2014-05-01
Accurate estimation of low flow can contribute to better water resources management and also lead to more reliable evaluation of climate change impacts on water resources. In an early study, Horton (1937) described the nonlinearity of low flow related to basin storage by the power function Q = K S^N, where Q is the discharge, S is the storage, K is a constant and N is the exponent. A more recent study by Ding (2011) gave the general storage-discharge equation Q = K^N S^N. Since the constant K is defined as the fractional recession constant and symbolized as Au by Ando et al. (1983), in this study we rewrite this equation as Qg = Au^N Sg^N, where Qg is the groundwater runoff and Sg is the groundwater storage. Although Ding applied this equation to short-term runoff events of less than 14 hours using the unit hydrograph method, it had not yet been applied to long-term records, including low flow, of more than 10 years. This study performed a sensitivity analysis of the two parameters, the constant Au and the exponent N, using an hourly hydrological model for two mountainous basins in Japan. The hourly hydrological model used in this study was presented by Fujimura et al. (2012) and comprises the Diskin-Nazimov infiltration model, groundwater recharge and groundwater runoff calculations, and a direct runoff component. The study basins are the Sameura Dam basin (SAME basin, 472 km²), located in western Japan, which has variable rainfall, and the Shirakawa Dam basin (SIRA basin, 205 km²), located in a region of heavy snowfall in eastern Japan; the two basins differ in climate and geology. The period of available hourly data is 20 years (1 January 1991 to 31 December 2010) for the SAME basin and 10 years (1 October 2003 to 30 September 2013) for the SIRA basin. In the sensitivity analysis, we prepared 19,900 sets of the two parameters Au and N; the Au value ranges from 0.0001 to 0.0100 in steps of 0
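A minimal sketch of the storage-discharge relation Qg = Au^N Sg^N driving a recession, with dSg/dt = -Qg integrated by an explicit Euler step (units, initial storage, and parameter values are illustrative, chosen within the range the study scans):

```python
import numpy as np

def recession(sg0, au, n_exp, hours, dt=1.0):
    """Euler integration of a groundwater recession with
    Qg = (Au * Sg)**N, i.e. Qg = Au^N * Sg^N, and dSg/dt = -Qg.
    Illustrative units: storage in mm, time step in hours."""
    sg = sg0
    q = []
    for _ in range(int(hours / dt)):
        qg = (au * sg) ** n_exp
        sg = max(sg - qg * dt, 0.0)   # storage depleted by outflow
        q.append(qg)
    return np.array(q)

# Au near the scanned range (0.0001-0.0100); N and Sg0 are placeholders.
q = recession(sg0=100.0, au=0.005, n_exp=2.0, hours=240)
```

For N > 1 the recession is hyperbolic rather than exponential, which is the nonlinearity of low flow the Horton and Ding formulations both aim to represent.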
Whitehead, P G; Leckie, H; Rankinen, K; Butterfield, D; Futter, M N; Bussi, G
2016-12-01
Pathogens are an ongoing issue for catchment water management and quantifying their transport, loss and potential impacts at key locations, such as water abstractions for public supply and bathing sites, is an important aspect of catchment and coastal management. The Integrated Catchment Model (INCA) has been adapted to model the sources and sinks of pathogens and to capture the dominant dynamics and processes controlling pathogens in catchments. The model simulates the stores of pathogens in soils, sediments, rivers and groundwaters and can account for diffuse inputs of pathogens from agriculture, urban areas or atmospheric deposition. The model also allows for point source discharges from intensive livestock units or from sewage treatment works or any industrial input to river systems. Model equations are presented and the new pathogens model has been applied to the River Thames in order to assess total coliform (TC) responses under current and projected future land use. A Monte Carlo sensitivity analysis indicates that the input coliform estimates from agricultural sources and decay rates are the crucial parameters controlling pathogen behaviour. Whilst there are a number of uncertainties associated with the model that should be accounted for, INCA-Pathogens potentially provides a useful tool to inform policy decisions and manage pathogen loading in river systems.
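A hedged sketch of the Monte Carlo screening idea on a deliberately simplified stand-in for INCA-Pathogens: first-order coliform die-off during in-stream travel, with the input load and decay rate sampled and ranked by Spearman correlation (all ranges below are illustrative, not the Thames calibration):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

def coliform_at_outlet(load, decay, travel_h):
    """First-order die-off during in-stream travel: C = C0 * exp(-k t)."""
    return load * np.exp(-decay * travel_h)

# Monte Carlo sampling of the two parameters the INCA analysis flagged
# as crucial (ranges are placeholders for illustration).
n = 10_000
load = rng.uniform(1e4, 1e6, n)     # agricultural TC input, cfu/100 ml
decay = rng.uniform(0.01, 0.2, n)   # die-off rate, 1/h
out = coliform_at_outlet(load, decay, travel_h=24.0)

# Rank correlations as a simple global sensitivity measure.
rho_load, _ = spearmanr(load, out)
rho_decay, _ = spearmanr(decay, out)
```

A larger |rho| marks the parameter whose uncertainty dominates the outlet concentration, which is the kind of ranking the abstract reports for source loads and decay rates.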
Francone, C.; Cassardo, C.; Richiardone, R.; Confalonieri, R.
2012-09-01
We used sensitivity-analysis techniques to investigate the behaviour of the land-surface model UTOPIA while simulating the micrometeorology of a typical northern Italy vineyard ( Vitis vinifera L.) under average climatic conditions. Sensitivity-analysis experiments were performed by sampling the vegetation parameter hyperspace using the Morris method and quantifying the parameter relevance across a wide range of soil conditions. This method was used since it proved its suitability for models with high computational time or with a large number of parameters, in a variety of studies performed on different types of biophysical models. The impact of input variability was estimated on reference model variables selected among energy (e.g. net radiation, sensible and latent heat fluxes) and hydrological (e.g. soil moisture, surface runoff, drainage) budget components. Maximum vegetation cover and maximum leaf area index were ranked as the most relevant parameters, with sensitivity indices exceeding the remaining parameters by about one order of magnitude. Soil variability had a high impact on the relevance of most of the vegetation parameters: coefficients of variation calculated on the sensitivity indices estimated for the different soils often exceeded 100 %. The only exceptions were represented by maximum vegetation cover and maximum leaf area index, which showed a low variability in sensitivity indices while changing soil type, and confirmed their key role in affecting model results.
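The Morris elementary-effects method can be sketched on the unit hypercube with a toy response standing in for UTOPIA (the function, step size, and number of trajectories below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def morris(f, k, r=50, delta=0.25):
    """Elementary-effects (Morris) screening on the unit hypercube:
    r random base points, one-at-a-time steps of size delta per factor."""
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y0 = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (f(xp) - y0) / delta
    mu_star = np.abs(effects).mean(axis=0)   # overall parameter importance
    sigma = effects.std(axis=0)              # nonlinearity / interactions
    return mu_star, sigma

# Toy land-surface response: one dominant factor, one weak, one inert.
f = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu_star, sigma = morris(f, k=3)
```

The mu* ranking is what identifies dominant parameters such as maximum vegetation cover and leaf area index in the study, while sigma flags parameters whose effect depends on the rest of the parameter space (here zero, since the toy response is linear).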
La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto
2016-09-01
With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
W. Zhang
2011-10-01
The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity analysis of ISORROPIA, the inorganic aerosol module of CMAQ. A case-specific approach has been applied, and the sensitivities of activity coefficients and water content are explicitly computed. Stand-alone tests are performed for ISORROPIA by comparing the sensitivities (first- and second-order) computed by HDDM and the brute force (BF) approximations. A similar comparison has also been carried out for CMAQ results simulated using a week-long winter episode for a continental US domain. Second-order sensitivities of aerosol species (e.g., sulfate, nitrate, and ammonium) with respect to domain-wide SO2, NOx, and NH3 emissions show agreement with BF results, yet exhibit less noise in locations where BF results are demonstrably inaccurate. Second-order sensitivity analysis elucidates nonlinear responses of secondary inorganic aerosols to their precursors and competing species that had not been well understood using other approaches. Including second-order sensitivity coefficients in the Taylor series projection of the nitrate concentrations with a 50% reduction in domain-wide NOx emissions shows a statistically significant improvement compared to the first-order Taylor series projection.
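The brute-force (BF) approximations used as the benchmark here are ordinary finite differences; a minimal sketch on a made-up nonlinear response (the function is illustrative, not CMAQ output):

```python
def bf_sensitivities(f, e0, h):
    """Brute-force (central finite-difference) first- and second-order
    sensitivities of a scalar response f at emission level e0."""
    s1 = (f(e0 + h) - f(e0 - h)) / (2.0 * h)
    s2 = (f(e0 + h) - 2.0 * f(e0) + f(e0 - h)) / h**2
    return s1, s2

# Illustrative nonlinear aerosol response to a precursor emission rate.
f = lambda e: 5.0 * e - 0.8 * e**2
s1, s2 = bf_sensitivities(f, e0=1.0, h=1e-3)
# Analytic values at e0 = 1: f'(1) = 3.4, f''(1) = -1.6
```

BF estimates like these require extra model runs per parameter and grow noisy when h is small relative to model noise, which is the behaviour HDDM avoids by propagating sensitivities analytically.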
Zhang, Keni; Wu, Yu-Shu; Houseworth, James E
2006-02-01
The unsaturated fractured volcanic deposits at Yucca Mountain in Nevada, USA, have been intensively investigated as a possible repository site for storing high-level radioactive waste. Field studies at the site have revealed that there exist large variabilities in hydrological parameters over the spatial domain of the mountain. Systematic analyses of hydrological parameters using a site-scale three-dimensional unsaturated zone (UZ) flow model have been undertaken. The main objective of the sensitivity analyses was to evaluate the effects of uncertainties in hydrologic parameters on modeled UZ flow and contaminant transport results. Sensitivity analyses were carried out relative to fracture and matrix permeability and capillary strength (van Genuchten {alpha}) through variation of these parameter values by one standard deviation from the base-case values. The parameter variation resulted in eight parameter sets. Modeling results for the eight UZ flow sensitivity cases have been compared with field observed data and simulation results from the base-case model. The effects of parameter uncertainties on the flow fields were evaluated through comparison of results for flow and transport. In general, this study shows that uncertainties in matrix parameters cause larger uncertainty in simulated moisture flux than corresponding uncertainties in fracture properties for unsaturated flow through heterogeneous fractured rock.
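Generating one-at-a-time ±1 standard-deviation cases can be sketched as follows; the four property names and all numbers are placeholders for illustration, not the calibrated Yucca Mountain values:

```python
# Hypothetical base-case log10 values and standard deviations for the
# perturbed properties: fracture/matrix permeability and the two van
# Genuchten alpha (capillary strength) parameters.
base = {"k_fracture": -11.0, "k_matrix": -16.0,
        "alpha_fracture": -3.5, "alpha_matrix": -5.0}
sd = {"k_fracture": 0.8, "k_matrix": 1.1,
      "alpha_fracture": 0.5, "alpha_matrix": 0.6}

# Eight sensitivity cases: each parameter shifted by plus or minus one
# standard deviation while the others stay at their base-case values.
cases = [{**base, p: base[p] + s * sd[p]} for p in base for s in (-1, 1)]
```

Each case is then run through the UZ flow model and compared against the base case, so the spread across the eight runs expresses how parameter uncertainty maps into moisture-flux uncertainty.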
Barbato, Michele; Conte, J P
2005-01-01
This paper focuses on a comparison between displacement-based and force-based elements for static and dynamic response sensitivity analysis of frame type structures. Previous research has shown that force-based frame elements are superior to classical displacement-based elements enabling, at no significant additional computational costs, a drastic reduction in the number of elements required for a given level of accuracy in the simulated response. The present work shows that this advantage of...
Reliability Sensitivity Analysis for Location Scale Family
洪东跑; 张海瑞
2011-01-01
Many products operate under complex and varying environmental conditions. To describe the dynamic influence of environment factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of the relevant environment variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the radial basis function method. Using the varied-environment test data, the log-likelihood function is transformed into a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimates of the model coefficients are obtained. With the reliability model, the reliability sensitivity is obtained. An example analysis shows that the method is feasible for analyzing how reliability varies dynamically with environment factors and is straightforward for engineering application.
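A hedged sketch of the idea: a location-scale reliability model whose location parameter is a radial-basis-function of an environment variable, with the sensitivity taken by central difference (all constants below are invented for illustration; the paper estimates the coefficients by maximum likelihood from test data):

```python
import numpy as np
from scipy.stats import norm

# RBF location function: mu(e) = sum_j w_j * exp(-(e - c_j)^2 / (2 b^2)).
centers = np.array([0.0, 1.0, 2.0])   # illustrative RBF centers
weights = np.array([5.0, 3.0, 1.0])   # illustrative RBF weights
b = 0.8                               # RBF width
sigma = 0.5                           # unknown-but-constant scale parameter
x0 = 6.5                              # stress threshold (placeholder)

def mu(e):
    return np.sum(weights * np.exp(-(e - centers) ** 2 / (2 * b**2)))

def reliability(e):
    # Location-scale family with a normal kernel: R = 1 - F((x0 - mu)/sigma).
    return 1.0 - norm.cdf((x0 - mu(e)) / sigma)

def dR_de(e, h=1e-6):
    # Reliability sensitivity to the environment variable (central difference).
    return (reliability(e + h) - reliability(e - h)) / (2 * h)
```

The sensitivity dR/de is exactly the "dynamic variety character" of reliability along an environment factor that the method is meant to expose.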
Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan
2017-06-01
Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶ to 10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹ to 10² (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻² to 10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
Lauvernet, Claire; Noll, Dorothea; Muñoz-Carpena, Rafael; Carluer, Nadia
2014-05-01
agricultural field and the VFS characteristics. These scenarios are based on: 2 types of climates (North and South-west of France), different rainfall intensities and durations, different lengths and slopes of hillslope, different humidity conditions, 4 soil types (silt loam, sandy loam, clay loam, sandy clay loam), 2 crops (wheat and corn) for the contributive area, 2 water table depths (1 m and 2.5 m) and 4 soil types for the VFS. The sizing method was applied for all these scenarios, and a sensitivity analysis of the VFS optimal length was performed for all the input parameters in order to understand their influence, and to identify those to which special care must be given. Based on that sensitivity analysis, a metamodel has been developed. The idea is to simplify the whole toolchain and to make it possible to perform the buffer sizing using a single tool and a smaller set of parameters, given the available information from the end users. We first compared several mathematical methods to compute the metamodel, and then validated them on an agricultural watershed with real data in the North-West of France.
Winck, Flavia V.; Melo, David O. Páez; Riaño-Pachón, Diego M.; Martins, Marina C. M.; Caldana, Camila; Barrios, Andrés F. González
2016-01-01
The development of microalgae sustainable applications needs better understanding of microalgae biology. Moreover, how cells coordinate their metabolism toward biomass accumulation is not fully understood. In this present study, flux balance analysis (FBA) was performed to identify sensitive metabolic pathways of Chlamydomonas reinhardtii under varied CO2 inputs. The metabolic network model of Chlamydomonas was updated based on the genome annotation data, and sensitivity analysis revealed CO2-sensitive reactions. Biological experiments were performed with cells cultivated at 0.04% (air), 2.5, 5, 8, and 10% CO2 concentration under controlled conditions, and cell growth profiles and biomass content were measured. Pigments, lipids, proteins, and starch were further quantified for the reference low (0.04%) and high (10%) CO2 conditions. The expression level of candidate genes of sensitive reactions was measured and validated by quantitative real time PCR. The sensitivity analysis revealed the mitochondrial compartment as the one most affected by changes in the CO2 concentration, and glycolysis/gluconeogenesis and glyoxylate and dicarboxylate metabolism among the affected metabolic pathways. Genes coding for glycerate kinase (GLYK), glycine cleavage system, H-protein (GCSH), NAD-dependent malate dehydrogenase (MDH3), low-CO2 inducible protein A (LCIA), carbonic anhydrase 5 (CAH5), E1 component, alpha subunit (PDC3), dual function alcohol dehydrogenase/acetaldehyde dehydrogenase (ADH1), and phosphoglucomutase (GPM2), were defined, among other genes, as sensitive nodes in the metabolic network simulations. These genes were experimentally responsive to the changes in the carbon fluxes in the system. We performed metabolomics analysis using mass spectrometry, validating the modulation of carbon dioxide responsive pathways and metabolites. The changes in CO2 levels mostly affected the metabolism of amino acids found in the photorespiration pathway. Our updated metabolic network was
Flavia Vischi Winck
2016-02-01
The development of microalgae sustainable applications needs better understanding of microalgae biology. Moreover, how cells coordinate their metabolism towards biomass accumulation is not fully understood. In this present study, flux balance analysis (FBA) was performed to identify sensitive metabolic pathways of Chlamydomonas reinhardtii under varied CO2 inputs. The metabolic network model of Chlamydomonas was updated based on the genome annotation data and sensitivity analysis revealed CO2-sensitive reactions. Biological experiments were performed with cells cultivated at 0.04% (air), 2.5%, 5%, 8% and 10% CO2 concentration under controlled conditions and cell growth profiles and biomass content were measured. Pigments, lipids, proteins and starch were further quantified for the reference low (0.04%) and high (10%) CO2 conditions. The expression level of candidate genes of sensitive reactions was measured and validated by quantitative real-time qPCR. The sensitivity analysis revealed the mitochondrial compartment as the one most affected by high CO2 levels, and glycolysis/gluconeogenesis and glyoxylate and dicarboxylate metabolism among the affected metabolic pathways. Genes coding for glycerate kinase (GLYK), glycine cleavage system, H-protein (GCSH), NAD-dependent malate dehydrogenase (MDH3), low-CO2 inducible protein A (LCIA), carbonic anhydrase 5 (CAH5), E1 component, alpha subunit (PDC3), dual function alcohol dehydrogenase/acetaldehyde dehydrogenase (ADH1) and phosphoglucomutase (GPM2) were defined, among other genes, as sensitive nodes in the metabolic network simulations. These genes were experimentally responsive to the changes in the carbon fluxes in the system. We performed metabolomics analysis using mass spectrometry, validating the modulation of carbon dioxide responsive pathways and metabolites. The changes in CO2 levels mostly affected the metabolism of amino acids found in the photorespiration pathway. Our updated metabolic network was compared to
Rahman, Tanzina; Millwater, Harry; Shipley, Heather J
2014-11-15
Aluminum oxide nanoparticles have been widely used in various consumer products and there are growing concerns regarding their exposure in the environment. This study deals with the modeling, sensitivity analysis and uncertainty quantification of one-dimensional transport of nano-sized (~82 nm) aluminum oxide particles in saturated sand. The transport of aluminum oxide nanoparticles was modeled using a two-kinetic-site model with a blocking function. The modeling was done at different ionic strengths, flow rates, and nanoparticle concentrations. The two sites representing fast and slow attachments along with a blocking term yielded good agreement with the experimental results from the column studies of aluminum oxide nanoparticles. The same model was used to simulate breakthrough curves under different conditions using experimental data and calculated 95% confidence bounds of the generated breakthroughs. The sensitivity analysis results showed that slow attachment was the most sensitive parameter for high influent concentrations (e.g. 150 mg/L Al2O3) and the maximum solid phase retention capacity (related to blocking function) was the most sensitive parameter for low concentrations (e.g. 50 mg/L Al2O3).
Li Wang
2017-02-01
The ability to obtain appropriate parameters for an advanced pressurized water reactor (PWR) unit model is of great significance for power system analysis. The model involves nonlinear relationships, long transition times, and intercoupled parameters that are difficult to obtain from practical tests, which makes parameter identification difficult. In this paper, a model and a parameter identification method for the PWR primary loop system were investigated. A parameter identification process was proposed, using a particle swarm optimization algorithm based on random perturbation (RP-PSO). The identification process included model variable initialization based on the differential equations of each sub-module and a program setting method, parameter estimation through sub-module identification in the Matlab/Simulink software (MathWorks Inc., Natick, MA, USA), and adaptation analysis for the integrated model. Extensive parameter identification work was carried out, and the results verified the effectiveness of the method. It was found that changes in some parameters, such as the fuel temperature and coolant temperature feedback coefficients, changed the model gain, whose trajectory sensitivities were not zero; obtaining their appropriate values therefore had significant effects on the simulation results. The trajectory sensitivities of some parameters in the core neutron dynamics module were interrelated, making those parameters difficult to identify. Model parameter sensitivity could differ with the model input conditions, reflecting how difficult a parameter is to identify under various input conditions.
Arcella, D; Soggiu, M E; Leclercq, C
2003-10-01
For the assessment of exposure to food-borne chemicals, the most commonly used methods in the European Union follow a deterministic approach based on conservative assumptions. Over the past few years, to get a more realistic view of exposure to food chemicals, risk managers have become more interested in the probabilistic approach. Within the EU-funded 'Monte Carlo' project, a stochastic model of exposure to chemical substances from the diet and a computer software program were developed. The aim of this paper was to validate the model with respect to the intake of saccharin from table-top sweeteners and of cyclamate from soft drinks by Italian teenagers with the use of the software, and to evaluate the impact of the inclusion/exclusion of indicators on market share and brand loyalty through a sensitivity analysis. Data on food consumption and the concentration of sweeteners were collected. A food frequency questionnaire aimed at identifying females who were high consumers of sugar-free soft drinks and/or of table-top sweeteners was filled in by 3982 teenagers living in the District of Rome. Moreover, 362 subjects participated in a detailed food survey by recording, at brand level, all foods and beverages ingested over 12 days. Producers were asked to provide the concentrations of intense sweeteners in their sugar-free products. Results showed that consumer behaviour with respect to brands has an impact on exposure assessments. Only probabilistic models that took into account indicators of market share and brand loyalty met the validation criteria.
Hemmati, Reza; Saboori, Hedayat
2016-05-01
Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits in various areas of electric power systems. One of the main benefits of an ESS, especially a bulk unit, is smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Program (MILP) which can be easily solved by strong commercial solvers (for instance, CPLEX) and is appropriate for use in practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimum storage scheduling leads to considerable operation cost reduction with respect to the storage system characteristics.
Antolin, M. Q.; Marinho, F.; Palma, D. A. P.; Martinez, A. S.
2014-04-01
A simulation of the time evolution of the MYRRHA conceptual reactor was developed. The SERPENT code was used to simulate the nuclear fuel depletion, and the spallation source which drives the system was simulated using both the MCNPX and GEANT4 packages. The results obtained for the neutron energy spectrum from the spallation are consistent with each other and were used as input for the SERPENT code, which simulated a constant-power operation regime. The results show that the criticality of the system is not sensitive to the spallation models employed, and only relatively small deviations with respect to the inverse kinetic model derived from the point kinetics equations proposed by Gandini were observed.
He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi
2012-06-15
This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Another advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from the global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; and (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from the uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m after 20 years) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping and then the skimmer. The recovery rates of the three schemes would rapidly decrease after 2 years (to less than 0.05 m³/day), so short-term remediation is not suggested. Copyright © 2012 Elsevier B.V. All rights reserved.
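Quasi-Monte Carlo sampling of uncertain parameters, followed by a rank-correlation screening of the kind GUSA uses to drop globally insensitive parameters, can be sketched with SciPy's Sobol' sampler. The parameter names, bounds and the toy response below are hypothetical stand-ins, not the site model.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Illustrative stand-ins for three uncertain parameters (names hypothetical):
# initial free-product thickness h0 [m], residual saturation s_r [-], porosity n [-]
bounds_lo = np.array([0.5, 0.05, 0.25])
bounds_hi = np.array([2.0, 0.30, 0.45])

sampler = qmc.Sobol(d=3, scramble=True, seed=1)
u = sampler.random_base2(m=8)            # 256 quasi-random points in [0, 1)^3
x = qmc.scale(u, bounds_lo, bounds_hi)   # parameter realizations

# Toy response standing in for total recovery volume (not the authors' model)
def recovery_volume(h0, s_r, n):
    return 100.0 * h0 * (1 - s_r) * n

y = recovery_volume(x[:, 0], x[:, 1], x[:, 2])

# Rank-correlation screening: parameters with near-zero rho are candidates
# for removal from the uncertain-parameter set.
for name, col in zip(["h0", "s_r", "n"], x.T):
    rho, _ = spearmanr(col, y)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```

With these (assumed) ranges the initial thickness spans a factor of four while the other two vary far less, so its rank correlation with the toy recovery volume dominates, mirroring the screening logic the abstract describes.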
Chen, Mingjie; Abriola, Linda M.; Amos, Benjamin K.; Suchomel, Eric J.; Pennell, Kurt D.; Löffler, Frank E.; Christ, John A.
2013-08-01
Reductive dechlorination catalyzed by organohalide-respiring bacteria is often considered for remediation of non-aqueous phase liquid (NAPL) source zones due to cost savings, ease of implementation, regulatory acceptance, and sustainability. Despite knowledge of the key dechlorinators, an understanding of the processes and factors that control NAPL dissolution rates and detoxification (i.e., ethene formation) is lacking. A recent column study demonstrated a 5-fold cumulative enhancement in tetrachloroethene (PCE) dissolution and ethene formation (Amos et al., 2009). Spatial and temporal monitoring of key geochemical and microbial (i.e., Geobacter lovleyi and Dehalococcoides mccartyi strains) parameters in the column generated a data set used herein as the basis for refinement and testing of a multiphase, compositional transport model. The refined model is capable of simulating the reactive transport of multiple chemical constituents produced and consumed by organohalide-respiring bacteria and accounts for substrate limitations and competitive inhibition. Parameter estimation techniques were used to optimize the values of sensitive microbial kinetic parameters, including maximum utilization rates, biomass yield coefficients, and endogenous decay rates. Comparison and calibration of model simulations with the experimental data demonstrate that the model is able to accurately reproduce measured effluent concentrations, while delineating trends in dechlorinator growth and reductive dechlorination kinetics along the column. Sensitivity analyses performed on the optimized model parameters indicate that the rates of PCE and cis-1,2-dichloroethene (cis-DCE) transformation and Dehalococcoides growth govern bioenhanced dissolution, as long as electron donor (i.e., hydrogen flux) is not limiting. Dissolution enhancements were shown to be independent of cis-DCE accumulation; however, accumulation of cis-DCE, as well as column length and flow rate (i.e., column residence time
NIR sensitivity analysis with the VANE
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation is the DMK FireWire monochrome camera on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. Standoff distance had the greatest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Figueiro, Thiago; Choi, Kang-Hoon; Gutsch, Manuela; Freitag, Martin; Hohle, Christoph; Tortai, Jean-Hervé; Saib, Mohamed; Schiavone, Patrick
2012-11-01
In electron proximity effect correction (PEC), the quality of a correction is highly dependent on the quality of the model. It is therefore of primary importance to have a reliable methodology to extract the parameters and assess the quality of a model. Among other things, the model describes how the energy of the electrons spreads out in the target material (via the point spread function, PSF) as well as the influence of the resist process. Previous studies offer several models, along with different approaches to obtaining appropriate parameter values; however, these are restricted in complexity or require a prohibitively large number of measurements for a given PSF model. In this work, we propose a straightforward approach to obtaining the parameter values of a PSF. The methodology is general enough to apply to more sophisticated models as well. It focuses on improving the three steps of the model calibration procedure: first, using a good set of calibration patterns; second, securing the optimization step to avoid falling into a local optimum; and finally, providing an improved analysis of the calibration step, which allows quantifying the quality of the model as well as comparing different models. The methodology described in the paper is implemented as a specific module in a commercial tool.
Lolli, Simone; Madonna, Fabio; Rosoldi, Marco; Pappalardo, Gelsomina; Welton, Ellsworth J.
2016-04-01
The aerosol and cloud impact on climate change is evaluated in terms of enhancement or reduction of the radiative energy, or heat, available in the atmosphere and at the Earth's surface, from the surface (SFC) to the top of the atmosphere (TOA), covering a spectral range from the UV (extraterrestrial shortwave solar radiation) to the far-IR (outgoing terrestrial longwave radiation). Systematic lidar network measurements from permanent observational sites across the globe have been available since the beginning of the current millennium. From the retrieved lidar atmospheric extinction profiles, used as input to the Fu-Liou-Gu (FLG) radiative transfer code, it is possible to evaluate the net radiative effect and heating rate of the different aerosol species and clouds. Nevertheless, lidar instruments may use different techniques (elastic lidar, Raman lidar, multi-wavelength lidar, etc.), which translates into uncertainty in the lidar extinction retrieval. The goal of this study is to assess, by applying a Monte Carlo technique and the FLG radiative transfer model, the sensitivity in calculating the net radiative effect and heating rate of aerosols and clouds for the different lidar techniques, using both synthetic and real lidar data. This sensitivity study is the first step toward implementing an automatic algorithm to retrieve the net radiative forcing effect of aerosols and clouds from the long records of aerosol measurements available in the frame of the EARLINET and MPLNET lidar networks.
E. Simon
2005-04-01
Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe coupled water-carbon exchange between terrestrial vegetation and the lower atmosphere. In the present study, a modified CANVEG scheme is described. A generic parameterization and characterization of the biophysical properties of Amazon rain forest canopies is inferred using available field measurements of canopy structure, in-canopy profiles of horizontal wind speed and radiation, canopy albedo, soil heat flux and soil respiration, photosynthetic capacity and leaf nitrogen, as well as leaf-level enclosure measurements made on sunlit and shaded branches of several Amazonian tree species during the wet and dry seasons. The sensitivity of calculated canopy energy and CO2 fluxes to the uncertainty of individual parameter values is assessed. In the companion paper, the predicted seasonal exchange of energy, CO2, ozone and isoprene is compared to observations.
A bi-modal distribution of leaf area density with a total leaf area index of 6 is inferred from several observations in Amazonia. Predicted light attenuation within the canopy agrees reasonably well with observations made at different field sites. A comparison of predicted and observed canopy albedo shows a high model sensitivity to the leaf optical parameters for near-infrared short-wave radiation (NIR). The predictions agree much better with observations when the leaf reflectance and transmission coefficients for NIR are reduced by 25–40%. Available vertical distributions of photosynthetic capacity and leaf nitrogen concentration suggest a low but significant light acclimation of the rain forest canopy that scales nearly linearly with accumulated leaf area.
Evaluation of the biochemical leaf model, using the enclosure measurements, showed that recommended parameter
Nicoulaud-Gouin, V.; Metivier, J.M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Garcia-Sanchez, L. [Institut de Radioprotection et de Surete Nucleaire-PRPENV/SERIS/L2BT (France)
2014-07-01
The increasing spatial and temporal complexity of models demands methods capable of ranking the influence of their large numbers of parameters. This question specifically arises in assessment studies on the consequences of the Fukushima accident. Sensitivity analysis aims at measuring the influence of input variability on the output response. Generally, two main approaches are distinguished (Saltelli, 2001; Iooss, 2011): screening approaches, less expensive in computation time and allowing identification of non-influential parameters; and measures of importance, introducing finer quantitative indices. In the latter category, there are regression-based methods, assuming a linear or monotonic response (Pearson coefficient, Spearman coefficient), and variance-based methods, which make no assumptions on the model but require an increasingly prohibitive number of evaluations as the number of parameters increases. These approaches are available in various statistical programs (notably R) but are still poorly integrated in modelling platforms for radioecological risk assessment. This work aimed at illustrating the benefits of sensitivity analysis in the course of radioecological risk assessments. This study used two complementary state-of-the-art global sensitivity analysis methods: the screening method of Morris (Morris, 1991; Campolongo et al., 2007), based on limited model evaluations with a one-at-a-time (OAT) design; and variance-based Sobol' sensitivity analysis (Saltelli, 2002), based on a large number of model evaluations in the parameter space with quasi-random sampling (Owen, 2003). The sensitivity analyses were applied to a dynamic Soil-Plant Deposition Model (Gonze et al., submitted to this conference) predicting foliar concentration in weeds after atmospheric radionuclide fallout. The Soil-Plant Deposition Model considers two foliage pools and a root pool, and describes foliar biomass growth with a Verhulst model. The developed semi-analytic formulation of foliar concentration
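A minimal version of the Morris one-at-a-time screening mentioned above can be written in a few lines. The toy response is a hypothetical stand-in for the Soil-Plant Deposition Model, chosen so that one input dominates, one is weak, and one is inert.

```python
import numpy as np

rng = np.random.default_rng(42)

def morris_screening(model, n_params, n_traj=20, delta=0.25):
    """Minimal Morris (1991) elementary-effects screening on [0, 1]^k.

    Returns mu_star, the mean absolute elementary effect per parameter;
    parameters with mu_star near zero are candidates for exclusion.
    """
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)
        y0 = model(x)
        for i in rng.permutation(n_params):   # one-at-a-time trajectory moves
            x2 = x.copy()
            x2[i] += delta
            y1 = model(x2)
            effects[i].append(abs(y1 - y0) / delta)
            x, y0 = x2, y1
    return np.array([np.mean(e) for e in effects])

# Toy foliar-transfer response (illustrative, not the Soil-Plant model):
# strongly driven by p[0], weakly and nonlinearly by p[1], not at all by p[2].
def toy_model(p):
    return 10.0 * p[0] + np.sin(2 * p[1]) + 0.0 * p[2]

mu_star = morris_screening(toy_model, n_params=3)
```

Each trajectory perturbs one parameter at a time and reuses the previous evaluation, so k parameters cost only k+1 model runs per trajectory, which is exactly why the screening step is cheap compared with variance-based Sobol' analysis.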
N. Montaldo
2003-01-01
Recent developments have made land-surface models (LSMs) more complex through the inclusion of more processes and controlling variables, increasing the numbers of parameters and the uncertainty in their estimates. To address these uncertainties, prior to applying a distributed LSM over the whole Toce basin (Italian Alps), a field campaign was carried out at an experimental plot within the basin to explore the skill and parameter importance (sensitivity) of the TOPLATS model, an existing LSM. In the summer and autumn of 1999, which included both wet (atmosphere-controlled) and dry (soil-controlled) periods, actual evapotranspiration estimates were obtained using the Bowen ratio and, for a short period, eddy correlation methods. Measurements performed with the two methods are in good agreement. The calibrated LSM predicts actual evapotranspiration quite well over the whole observation period. A sensitivity analysis of the evapotranspiration to model parameters was performed using a global multivariate technique during both wet and dry periods of the campaign. This approach studies the influence of each parameter without conditioning on particular values of the other variables: all parameters are varied simultaneously using, for instance, a uniform sampling strategy within a Monte Carlo simulation framework. The evapotranspiration is highly sensitive to the soil parameters, especially during wet periods. However, the evapotranspiration is also sensitive to some vegetation parameters and, during dry periods, the wilting point is the most critical for evapotranspiration predictions. This result confirms the importance of a correct representation of vegetation properties which, in water-limited conditions, control evapotranspiration. Keywords: evapotranspiration, sensitivity analysis, land surface model, eddy correlation, Alpine basin
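The global multivariate approach, varying all parameters simultaneously with uniform Monte Carlo sampling, can be sketched as below using standardized regression coefficients as the sensitivity measure. The parameter names, ranges and the toy dry-period evapotranspiration response are illustrative assumptions, not TOPLATS.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Illustrative parameter ranges (hypothetical, not TOPLATS-calibrated):
# saturated conductivity ks [m/s], wilting point wp [-], stomatal resistance rs [s/m]
ks = rng.uniform(1e-6, 1e-4, n)
wp = rng.uniform(0.05, 0.20, n)
rs = rng.uniform(50.0, 300.0, n)

# Toy evapotranspiration response for a dry period: soil moisture near the
# wilting point throttles ET strongly (purely a stand-in for the LSM).
theta = 0.18  # assumed soil moisture
et = 4.0 * np.clip((theta - wp) / 0.15, 0, 1) / (1 + rs / 200.0) + 1e3 * ks

# Standardized regression coefficients (SRCs) as a global sensitivity measure:
# all parameters varied simultaneously, none held fixed.
X = np.column_stack([ks, wp, rs])
Xs = (X - X.mean(0)) / X.std(0)
ys = (et - et.mean()) / et.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, b in zip(["ks", "wp", "rs"], src):
    print(f"SRC({name}) = {b:+.2f}")
```

In this toy dry-period setting the wilting point carries the largest (negative) coefficient, echoing the abstract's finding that wilting point dominates evapotranspiration predictions under water-limited conditions.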
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
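The residual-resampling scheme described above can be sketched as follows for a two-model chain. The residual spreads, the 18% conversion efficiency and the chain itself are illustrative assumptions; synthetic draws stand in for the empirical residual distributions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical residuals of two chained sub-models (e.g. POA irradiance and
# DC power), expressed as fractions of the predicted value. In practice these
# would be empirical residuals from validation data; here they are synthetic.
resid_poa = rng.normal(0.0, 0.03, 500)   # ~3% scatter in POA irradiance model
resid_dc = rng.normal(0.0, 0.01, 500)    # ~1% scatter in DC power model

def propagate(poa_pred, n_samples=10000):
    """Propagate uncertainty through the chain by resampling each
    model's residual distribution (bootstrap-style)."""
    poa = poa_pred * (1 + rng.choice(resid_poa, n_samples))
    dc = 0.18 * poa * (1 + rng.choice(resid_dc, n_samples))  # toy 18% efficiency
    return dc

dc_samples = propagate(poa_pred=1000.0)  # W/m^2 in, W per m^2 of array out
lo, hi = np.percentile(dc_samples, [2.5, 97.5])
print(f"DC power 95% band: [{lo:.1f}, {hi:.1f}] W/m^2")
```

Because the chain is multiplicative, the relative widths of the two residual distributions combine, and the wider one (the irradiance model here) dominates the output band, which is the same conclusion the abstract reaches for POA and effective irradiance.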
Van Winkle, W.; Christensen, S.W.; Kauffman, G.
1976-12-01
The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. A sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
Moriyama, Kiyofumi; Park, Hyun Sun, E-mail: hejsunny@postech.ac.kr; Hwang, Byoungcheol; Jung, Woo Hyun
2016-06-15
Highlights: • Application of the JASMINE code to melt jet breakup and coolability under APR1400 conditions. • Coolability indexes for the quasi-steady-state breakup and cooling process. • Typical case with complete breakup/solidification; film boiling quench not reached. • Significant impact of water depth and melt jet size; weak impact of model parameters. - Abstract: The breakup of a melt jet falling in a water pool and the coolability of the melt particles produced by such jet breakup are important phenomena for the mitigation of severe accident consequences in light water reactors, because the molten and relocated core material is the primary heat source that governs the accident progression. We applied a modified version of the fuel–coolant interaction simulation code JASMINE, developed at the Japan Atomic Energy Agency (JAEA), to a plant-scale simulation of melt jet breakup and cooling assuming an ex-vessel condition in the APR1400, a Korean advanced pressurized water reactor. We also examined the sensitivity to seven model parameters and five initial/boundary condition variables. The results showed that the melt cooling performance of a 6 m deep water pool in the reactor cavity is sufficient to remove the initial melt enthalpy and achieve solidification for a melt jet of 0.2 m initial diameter. The impacts of the model parameters were relatively weak, whereas those of some initial/boundary condition variables, namely the water depth and melt jet diameter, were very strong. The present model indicated that a significant fraction of the melt jet is not broken up and forms a continuous melt pool on the containment floor in cases with a large melt jet diameter, 0.5 m, or a shallow water pool depth, ≤3 m.
Huclova, Sonja; Baumann, Dirk; Talary, Mark S.; Fröhlich, Jürg
2011-12-01
The sensitivity and specificity of dielectric spectroscopy for the detection of dielectric changes inside a multi-layered structure is investigated. We focus on providing a base for sensing physiological changes in the human skin, i.e. in the epidermal and dermal layers. The correlation between changes of the human skin's effective permittivity and changes of dielectric parameters and layer thickness of the epidermal and dermal layers is assessed using numerical simulations. Numerical models include fringing-field probes placed directly on a multi-layer model of the skin. The resulting dielectric spectra in the range from 100 kHz up to 100 MHz for different layer parameters and sensor geometries are used for a sensitivity and specificity analysis of this multi-layer system. First, employing a coaxial probe, a sensitivity analysis is performed for specific variations of the parameters of the epidermal and dermal layers. Second, the specificity of this system is analysed based on the roots and corresponding sign changes of the computed dielectric spectra and their first and second derivatives. The transferability of the derived results is shown by a comparison of the dielectric spectra of a coplanar probe and a scaled coaxial probe. Additionally, a comparison of the sensitivity of a coaxial probe and an interdigitated probe as a function of electrode distance is performed. It is found that the sensitivity for detecting changes of dielectric properties in the epidermal and dermal layers strongly depends on frequency. Based on an analysis of the dielectric spectra, changes in the effective dielectric parameters can theoretically be uniquely assigned to specific changes in permittivity and conductivity. However, in practice, measurement uncertainties may degrade the performance of the system.
Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C
2012-06-01
Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations, and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivity of the measurements taken at the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49 × 10⁻⁴ m and 1.48 × 10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.
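Fitting transverse dispersivity from a steady-state transverse concentration profile with the standard 2D analytical solution (a pair of error functions for a source strip of half-width b, longitudinal dispersion neglected) can be sketched as below; the geometry, noise level and "true" value are synthetic, not the paper's data.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Steady-state transverse spreading of a tracer strip of half-width b [m]
# injected into uniform flow; profile evaluated at downstream distance x [m].
def c_profile(y, alpha_t, x=1.0, b=0.01):
    s = 2.0 * np.sqrt(alpha_t * x)
    return 0.5 * (erf((y + b) / s) - erf((y - b) / s))

# Synthetic "measured" profile with 2% absolute noise; true alpha_t = 1.5e-5 m
rng = np.random.default_rng(5)
y_obs = np.linspace(-0.05, 0.05, 41)
c_obs = c_profile(y_obs, 1.5e-5) + rng.normal(0, 0.02, y_obs.size)

# Nonlinear least squares with alpha_t as the single fitting parameter
alpha_fit, cov = curve_fit(c_profile, y_obs, c_obs, p0=[1e-5],
                           bounds=(1e-7, 1e-3))
print(f"fitted transverse dispersivity: {alpha_fit[0]:.2e} m")
```

This mirrors the evaluation procedure the abstract describes: generate a profile with a known dispersivity, fit it back with the analytical solution, and compare the fitted value against the "true" one to quantify the estimation error.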
Avolio, E.; Federico, S.; Miglietta, M. M.; Lo Feudo, T.; Calidonna, C. R.; Sempreviva, A. M.
2017-08-01
The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern Italy), in an area characterized by a complex orography near the sea. Results of 1 km × 1 km grid spacing simulations are compared with the data collected during a measurement campaign in summer 2009, considering hourly model outputs. Measurements from several instruments are taken into account for the performance evaluation: near surface variables (2 m temperature and relative humidity, downward shortwave radiation, 10 m wind speed and direction) from a surface station and a meteorological mast; vertical wind profiles from Lidar and Sodar; also, the aerosol backscattering from a ceilometer to estimate the PBL height. Results covering the whole measurement campaign show a cold and moist bias near the surface, mostly during daytime, for all schemes, as well as an overestimation of the downward shortwave radiation and wind speed. Wind speed and direction are also verified at vertical levels above the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes with the best performance in representing parameters near the surface and in the boundary layer during the analyzed campaign.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.
Metzger, Christine; Nilsson, Mats B.; Peichl, Matthias; Jansson, Per-Erik
2016-12-01
In contrast to previous peatland carbon dioxide (CO2) model sensitivity analyses, which usually focussed on only one or a few processes, this study investigates interactions between various biotic and abiotic processes and their parameters by comparing CoupModel v5 results with multiple observation variables. Many interactions were found not only within but also between various process categories simulating plant growth, decomposition, radiation interception, soil temperature, aerodynamic resistance, transpiration, soil hydrology and snow. Each measurement variable was sensitive to up to 10 (out of 54) parameters, from up to 7 different process categories. The constrained parameter ranges varied, depending on the variable and performance index chosen as criteria, and on other calibrated parameters (equifinalities). Therefore, transferring parameter ranges between models needs to be done with caution, especially if such ranges were achieved by only considering a few processes. The identified interactions and constrained parameters will be of great interest to use for comparisons with model results and data from similar ecosystems. All of the available measurement variables (net ecosystem exchange, leaf area index, sensible and latent heat fluxes, net radiation, soil temperatures, water table depth and snow depth) improved the model constraint. If hydraulic properties or water content were measured, further parameters could be constrained, resolving several equifinalities and reducing model uncertainty. The presented results highlight the importance of considering biotic and abiotic processes together and can help modellers and experimentalists to design and calibrate models as well as to direct experimental set-ups in peatland ecosystems towards modelling needs.
Sensitivity Analysis of Component Reliability
Zhenhua Ge
2004-01-01
In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability is changed, the effect on system reliability is not the same for every component. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and the associated matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. These results help to analyze and improve system reliability.
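The notion of component reliability sensitivity described in this abstract corresponds closely to the classical Birnbaum importance measure: the partial derivative of system reliability with respect to a component's reliability. A minimal sketch (our own illustration on a toy series-parallel system, not the paper's formulation):

```python
import itertools

def system_reliability(p, structure):
    """Exact system reliability by total enumeration of component states.
    structure(states) -> True if the system works for that tuple of 0/1 states."""
    r = 0.0
    for states in itertools.product([0, 1], repeat=len(p)):
        prob = 1.0
        for pi, s in zip(p, states):
            prob *= pi if s == 1 else (1.0 - pi)
        if structure(states):
            r += prob
    return r

def birnbaum_sensitivity(p, structure, i):
    """dR_sys/dp_i: system reliability with component i forced up, minus forced down."""
    up = system_reliability([1.0 if j == i else pj for j, pj in enumerate(p)], structure)
    dn = system_reliability([0.0 if j == i else pj for j, pj in enumerate(p)], structure)
    return up - dn

# Toy system: component 0 in series with a parallel pair (1, 2)
struct = lambda s: s[0] == 1 and (s[1] == 1 or s[2] == 1)
p = [0.9, 0.8, 0.7]
sens = [birnbaum_sensitivity(p, struct, i) for i in range(3)]
# The series component dominates: its sensitivity equals P(parallel pair works)
```

Because system reliability is multilinear in the component reliabilities, this difference is exactly the partial derivative, which is why the same component can matter far more in one structural position than another.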
Iorio, Lorenzo
2012-01-01
We analytically work out the long-term rates of change of the six osculating Keplerian orbital elements of a test particle acted upon by the Lorentz-violating gravitomagnetic acceleration due to a static body, as predicted by the Standard Model Extension (SME). We neither restrict to any specific spatial orientation for the symmetry-violating vector s nor make a priori simplifying assumptions concerning the orbital configuration of the perturbed test particle. Thus, our results are quite general, and can be applied for sensitivity analyses to a variety of specific astronomical and astrophysical scenarios. We find that, apart from the semimajor axis a, all the other orbital elements undergo non-vanishing secular variations. By comparing our results to the latest determinations of the supplementary advances of the perihelia of some planets of the solar system we preliminarily obtain s_x = (0.9 +/- 1.5) 10^-8, s_y = (-4 +/- 6) 10^-9, s_z = (0.3 +/- 1) 10^-9. Bounds from the terrestrial LAGEOS and LAGEOS II satel...
A Sensitivity Analysis of SOLPS Plasma Detachment
Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration
2016-10-01
Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.
The Sensitivity of State Differential Game Vessel Traffic Model
Lisowski Józef
2016-04-01
The paper presents the application of the theory of deterministic sensitivity of control systems to sensitivity analysis of game control systems for moving objects, such as ships, airplanes and cars. The sensitivity of a parametric model of the game ship control process in collision situations is presented. First-order and k-th order sensitivity functions of the parametric model of the control process are described. The structure of the game ship control system in collision situations and the mathematical model of the game control process, in the form of state equations, are given. Characteristics of the sensitivity functions of the game ship control process model, obtained by computer simulation in Matlab/Simulink software, are presented. Finally, proposals are given regarding the use of sensitivity analysis in the practical synthesis of a computer-aided navigator system for potential collision situations.
Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.
2014-02-01
Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds they can be responsible for soil pollution. In this context, this paper proposes a hydro-morphodynamic modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamical coupling of Telemac-2D and Sisyphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. In parallel to the modelling exercise, an extensive hydrological/geochemical database has been set up during two flood events. The main sensitive parameters were found to be the hydraulic friction coefficient and the sediment particle settling velocity in water. Using the two monitored hydrological events as calibration and validation, it was found that the model is able to satisfactorily predict suspended sediment and dissolved pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential zones for deposition identified by the model are realistic.
Del Giudice, Giuseppe; Padulano, Roberta
2016-10-01
An integrated Visual Basic Application interface is described that allows for sensitivity analysis, calibration and routing of hydraulic-hydrological models. The routine consists in the combination of three freeware tools performing hydrological modelling, hydraulic modelling and calibration. With such an approach, calibration is made possible even if information about sewer geometrical features is incomplete. Model parameters involve storage coefficient, time of concentration, runoff coefficient, initial abstraction and Manning coefficient; literature formulas are considered and manipulated to obtain novel expressions and variation ranges. A sensitivity analysis with a local method is performed to obtain information about collinearity among parameters and a ranking of influence. The least important parameters are given a fixed value, and for the remaining ones calibration is performed by means of a genetic algorithm implemented in GANetXL. Single-event calibration is performed with a selection of six rainfall events, which are chosen so as to avoid non-uniform rainfall distribution; results are then successfully validated with a sequence of four events.
Steiner, Jakob; Pellicciotti, Francesca; Buri, Pascal; Brock, Ben
2016-04-01
Although some recent studies have attempted to model melt below debris cover in the Himalaya as well as the European Alps, field measurements remain rare and the uncertainties of a number of parameters are difficult to constrain. The difficulty of accurately measuring sub-debris melt at one location over a longer period of time with stakes adds to the challenge of calibrating models adequately, as moving debris tends to tilt stakes. Based on measurements of sub-debris melt with stakes as well as air and surface temperature at the same location during three years from 2012 to 2014 at Lirung Glacier in the Nepalese Himalaya, we investigate results with the help of an earlier developed energy balance model. We compare stake readings to cumulative melt as well as observed to modelled surface temperatures. With time series stretching through the pre-monsoon, monsoon and post-monsoon seasons of different years, we can show how the sensitive parameters differ between these seasons. Using radiation measurements from the AWS, we can use a temporally variable time series of albedo. A thorough analysis of thermistor data showing the stratigraphy of the temperature through the debris layer allows a detailed discussion of the variability as well as the uncertainty range of thermal conductivity. Distributed wind data as well as results from a distributed surface roughness assessment allow us to constrain the variability of turbulent fluxes between the different locations of the stakes. We show that model results are especially sensitive to thermal conductivity, a value that changes substantially between the seasons. Values obtained from the field are compared to earlier studies, which shows large differences within locations in the Himalaya. We also show that wind varies by more than a factor of two between depressions and debris mounds, which has a significant influence on turbulent fluxes. Albedo decreases from the dry to the wet season and likely has some spatial variability that is
Shape design sensitivity analysis using domain information
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Towards More Efficient and Effective Global Sensitivity Analysis
Razavi, Saman; Gupta, Hoshin
2014-05-01
Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus and identify different properties of the model response and may lead to different, sometimes even conflicting conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
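As a concrete point of reference for the "rigorous Sobol-type analysis-of-variance approaches" mentioned in this abstract, a first-order Sobol index can be estimated with the Saltelli Monte Carlo scheme. The sketch below is an illustrative choice of ours, not material from the presentation; it uses the classic Ishigami test function, whose analytical first-order indices are approximately 0.314, 0.442 and 0.0:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard Ishigami test function with inputs uniform on (-pi, pi)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def sobol_first_order(model, d, n, rng):
    """First-order Sobol indices via the Saltelli (2010) pick-freeze estimator."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))   # two independent sample matrices
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # A with column i taken from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

rng = np.random.default_rng(0)
S1 = sobol_first_order(ishigami, d=3, n=50000, rng=rng)
```

Note how the third input gets a first-order index of essentially zero even though it strongly affects the output through its interaction with the first input; this is exactly the kind of property on which different SA methods can reach different conclusions.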
A. Móring
2015-07-01
In this paper a new process-based, weather-driven model for ammonia (NH3) emission from a urine patch has been developed and its sensitivity to various factors assessed. This model, the GAG model (Generation of Ammonia from Grazing), was developed as part of a suite of weather-driven NH3 exchange models, as a necessary basis for assessing the effects of climate change on NH3-related atmospheric processes. GAG is capable of simulating the TAN (Total Ammoniacal Nitrogen) content, pH and the water content of the soil under a urine patch. To calculate the TAN budget, GAG takes into account urea hydrolysis as a TAN input and NH3 volatilization as a loss. In the water budget, in addition to the water content of urine, precipitation and evaporation are also considered. In the pH module we assumed that the main regulating processes are the dissociation and dissolution equilibria related to the two products of urea hydrolysis: ammonium and bicarbonate. Finally, in the NH3 exchange flux calculation we adapted a canopy compensation point model that accounts for exchange with soil pores and stomata as well as deposition to the leaf surface. We validated our model against measurements and carried out a sensitivity analysis. The validation showed that the simulated parameters (NH3 exchange flux, soil pH, TAN budget and water budget) are well captured by the model (r > 0.5 for every parameter). In addition, our results suggested that more sophisticated simulation of CO2 emission in the model could potentially improve the modelling of pH. The sensitivity analysis highlighted the vital role of temperature in NH3 exchange; however, presumably due to the TAN limitation, the GAG model currently provides only a modest overall temperature dependence in total NH3 emission compared with the values in the literature. Since all the input parameters can be obtained for studies at larger scales, GAG is potentially suitable for larger scale application, such as
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
AFRL-RQ-WP-TM-2017-0017. Richard D. Snyder, Design & Analysis Branch, Aerospace Vehicles Division. Approved for public release; distribution is unlimited. An AFRL-NASA collaboration to provide economical, accurate sensitivities for multidisciplinary design and analysis.
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the frame
Pohl, E.; Knoche, M.; Gloaguen, R.; Andermann, C.; Krause, P.
2015-07-01
the effect of interannual climatic variability on river flow to be inferred. We infer the existence of two subsurface reservoirs. The groundwater reservoir (providing 40 % of annual discharge) recharges in spring and summer and releases slowly during autumn and winter, when it provides the only source for river discharge. A not fully constrained shallow reservoir with very rapid retention times buffers meltwaters during spring and summer. The negative glacier mass balance (-0.6 m w.e. yr
Sensitivity Analysis on Computational Fluid Dynamics Modeling of Gas Dispersion
Zhang Bo; Chen Guoming
2011-01-01
The definition of sensitivity analysis for Computational Fluid Dynamics (CFD) modeling of gas dispersion is presented. It is an uncertainty analysis method applied during the modeling process which first identifies the most sensitive factors among a number of uncertain ones, then monitors and analyzes their impacts on the simulation results, and finally selects the most suitable modeling parameters. The advised order for the analysis is grid dependency, boundary conditions, turbulence models, and parameter analysis for solution controls, successively. The analysis theories and methods for two key factors, grid dependency and turbulence model sensitivity, are discussed. A case study of gas dispersion at a gathering station for natural gas with high hydrogen sulfide content has also been carried out. The results show that grid dependency analysis can find a balance point between model prediction accuracy and computational cost, and that the most suitable turbulence description method can be selected by comparing the prediction results of the turbulence models with empirical formulas. Sensitivity analysis is important for establishing proper computational models, enhancing prediction accuracy, and reducing computational cost, and it is an indispensable step in CFD modeling of gas dispersion.
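The grid dependency analysis recommended in this abstract is commonly formalized with Richardson extrapolation and Roache's grid convergence index (GCI). The sketch below is a standard textbook formulation, not code from the paper; it assumes three values of a scalar output (e.g. a peak gas concentration) computed on systematically refined grids with a constant refinement ratio:

```python
import math

def grid_convergence(f_coarse, f_medium, f_fine, r=2.0, safety=1.25):
    """Richardson-extrapolation-based grid convergence check (Roache's GCI).
    Returns the observed order of accuracy, the extrapolated grid-independent
    value, and the fine-grid GCI (a conservative relative-error estimate)."""
    # Observed order from the ratio of successive solution differences
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    # Richardson extrapolation toward zero grid spacing
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # Fine-grid grid convergence index with the usual safety factor
    gci = safety * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)
    return p, f_exact, gci

# Toy monotonically converging sequence, mimicking f(h) = 1 + 0.5*h^2 at h = 0.4, 0.2, 0.1
p, f_exact, gci = grid_convergence(1.08, 1.02, 1.005, r=2.0)
```

A small GCI on the fine grid is the "balance point" signal: further refinement would change the prediction by less than the estimated error band, so extra computational cost buys little accuracy.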
The sensitivity analysis of population projections
Hal Caswell
2015-10-01
Background: Population projections using the cohort component method can be written as time-varying matrix population models. The matrices are parameterized by schedules of mortality, fertility, immigration, and emigration over the duration of the projection. A variety of dependent variables are routinely calculated from such projections (the population vector, various weighted population sizes, dependency ratios, etc.). Objective: Our goal is to derive and apply theory to compute the sensitivity and the elasticity (proportional sensitivity) of any projection outcome to changes in any of the parameters, where those changes are applied at any time during the projection interval. Methods: We use matrix calculus to derive a set of equations for the sensitivity and elasticity of any vector-valued outcome ξ(t) at time t to any perturbation of a parameter vector θ(s) at any time s. Results: The results appear in the form of a set of dynamic equations for the derivatives that are integrated in parallel with the dynamic equations for the projection itself. We show results for single-sex projections and for the more detailed case of projections including age distributions for both sexes. We apply the results to a projection of the population of Spain, from 2012 to 2052, prepared by the Instituto Nacional de Estadística, and determine the sensitivity and elasticity of (1) total population, (2) the school-age population, (3) the population subject to dementia, (4) the total dependency ratio, and (5) the economic support ratio. Conclusions: Writing population projections in matrix form makes sensitivity analysis possible. Such analyses are a powerful tool for the exploration of how detailed aspects of the projection output are determined by the mortality, fertility, and migration schedules that underlie the projection.
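The paper's matrix-calculus machinery is not reproduced here, but the underlying idea, that a projection outcome has a well-defined derivative with respect to any vital-rate entry at any time, can be illustrated with a finite-difference sketch on a hypothetical Leslie matrix (all numbers illustrative, not from the Spanish projection):

```python
import numpy as np

def project(A, n0, t):
    """Project population vector n0 forward t steps with projection matrix A."""
    n = n0.astype(float)
    for _ in range(t):
        n = A @ n
    return n

def sensitivity_fd(A, n0, t, i, j, eps=1e-6):
    """Finite-difference sensitivity of final total population to entry A[i, j]."""
    Ap = A.copy()
    Ap[i, j] += eps
    return (project(Ap, n0, t).sum() - project(A, n0, t).sum()) / eps

# Hypothetical 3-age-class Leslie matrix: fertilities in the top row, survivals below
A = np.array([[0.0, 1.2, 0.8],
              [0.6, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
n0 = np.array([100.0, 50.0, 20.0])
s_fert = sensitivity_fd(A, n0, 10, 0, 1)   # sensitivity to age-class-2 fertility
s_surv = sensitivity_fd(A, n0, 10, 1, 0)   # sensitivity to juvenile survival
```

The matrix-calculus approach of the paper delivers these same derivatives exactly, for all parameters and all perturbation times at once, by integrating the derivative equations alongside the projection rather than re-running it per parameter.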
Precipitates/Salts Model Sensitivity Calculation
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
Quicken, Sjeng; Donders, Wouter P; van Disseldorp, Emiel M J; Gashi, Kujtim; Mees, Barend M E; van de Vosse, Frans N; Lopata, Richard G P; Delhaas, Tammo; Huberts, Wouter
2016-12-01
When applying models to patient-specific situations, the impact of model input uncertainty on the model output uncertainty has to be assessed. Proper uncertainty quantification (UQ) and sensitivity analysis (SA) techniques are indispensable for this purpose. An efficient approach for UQ and SA is the generalized polynomial chaos expansion (gPCE) method, where model response is expanded into a finite series of polynomials that depend on the model input (i.e., a meta-model). However, because of the intrinsic high computational cost of three-dimensional (3D) cardiovascular models, performing the number of model evaluations required for the gPCE is often computationally prohibitively expensive. Recently, Blatman and Sudret (2010, "An Adaptive Algorithm to Build Up Sparse Polynomial Chaos Expansions for Stochastic Finite Element Analysis," Probab. Eng. Mech., 25(2), pp. 183-197) introduced the adaptive sparse gPCE (agPCE) in the field of structural engineering. This approach reduces the computational cost with respect to the gPCE, by only including polynomials that significantly increase the meta-model's quality. In this study, we demonstrate the agPCE by applying it to a 3D abdominal aortic aneurysm (AAA) wall mechanics model and a 3D model of flow through an arteriovenous fistula (AVF). The agPCE method was indeed able to perform UQ and SA at a significantly lower computational cost than the gPCE, while still retaining accurate results. Cost reductions ranged between 70-80% and 50-90% for the AAA and AVF model, respectively.
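The adaptive sparse gPCE itself is beyond a short sketch, but the regression-based gPCE it builds on can be illustrated in one dimension: expand a toy model in Legendre polynomials (orthogonal under a uniform input distribution) and read the mean and variance of the meta-model directly off the coefficients. Everything below is an illustrative assumption of ours, not the authors' code:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_pce(model, degree, n_samples, rng):
    """Least-squares polynomial chaos fit for a model with one U(-1, 1) input,
    using the (unnormalized) Legendre basis."""
    x = rng.uniform(-1.0, 1.0, n_samples)
    V = legendre.legvander(x, degree)            # Vandermonde matrix in the Legendre basis
    coef, *_ = np.linalg.lstsq(V, model(x), rcond=None)
    return coef

model = lambda x: np.exp(0.5 * x)                # stand-in for an expensive simulation
coef = fit_pce(model, degree=8, n_samples=2000, rng=np.random.default_rng(1))

# For U(-1, 1), E[P_j P_k] = delta_jk / (2k + 1), so moments follow from the coefficients:
mean = coef[0]
variance = sum(c**2 / (2 * k + 1) for k, c in enumerate(coef[1:], start=1))
```

Sobol indices follow the same way in higher dimensions, by summing squared coefficients over the basis terms involving each input; the adaptive sparse variant reduces cost by retaining only the coefficients that measurably improve the fit.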
Latent sensitization: a model for stress-sensitive chronic pain.
Marvizon, Juan Carlos; Walwyn, Wendy; Minasyan, Ani; Chen, Wenling; Taylor, Bradley K
2015-04-01
Latent sensitization is a rodent model of chronic pain that reproduces both its episodic nature and its sensitivity to stress. It is triggered by a wide variety of injuries ranging from injection of inflammatory agents to nerve damage. It follows a characteristic time course in which a hyperalgesic phase is followed by a phase of remission. The hyperalgesic phase lasts between a few days to several months, depending on the triggering injury. Injection of μ-opioid receptor inverse agonists (e.g., naloxone or naltrexone) during the remission phase induces reinstatement of hyperalgesia. This indicates that the remission phase does not represent a return to the normal state, but rather an altered state in which hyperalgesia is masked by constitutive activity of opioid receptors. Importantly, stress also triggers reinstatement. Here we describe in detail procedures for inducing and following latent sensitization in its different phases in rats and mice. Copyright © 2015 John Wiley & Sons, Inc.
Qiu, Linjing; Liu, Xiaodong
2016-04-01
Increases in the atmospheric CO2 concentration affect both the global climate and plant metabolism, particularly for high-altitude ecosystems. Because of the limitations of field experiments, it is difficult to evaluate the responses of vegetation to CO2 increases and separate the effects of CO2 and associated climate change using direct observations at a regional scale. Here, we used the Community Earth System Model (CESM, version 1.0.4) to examine these effects. Initiated from bare ground, we simulated the vegetation composition and productivity under two CO2 concentrations (367 and 734 ppm) and associated climate conditions to separate the comparative contributions of doubled CO2 and CO2-induced climate change to the vegetation dynamics on the Tibetan Plateau (TP). The results revealed that the individual effects of doubled CO2 and of its induced climate change, as well as their combined effect, caused a decrease in the foliage projective cover (FPC) of C3 arctic grass on the TP. Both doubled CO2 and climate change had a positive effect on the FPC of the temperate and tropical tree plant functional types (PFTs) on the TP, but doubled CO2 led to FPC decreases of C4 grass and broadleaf deciduous shrubs, whereas the climate change resulted in FPC decreases in C3 non-arctic grass and boreal needleleaf evergreen trees. Although the combination of the doubled CO2 and associated climate change increased the area-averaged leaf area index (LAI), the effect of doubled CO2 on the LAI increase (95 %) was larger than the effect of CO2-induced climate change (5 %). Similarly, the simulated gross primary productivity (GPP) and net primary productivity (NPP) were primarily sensitive to the doubled CO2, compared with the CO2-induced climate change, which alone increased the regional GPP and NPP by 251.22 and 87.79 g C m-2 year-1, respectively. Regionally, the vegetation response was most noticeable in the south-eastern TP. Although both doubled CO2 and associated climate change had a
An overview of the design and analysis of simulation experiments for sensitivity analysis
Kleijnen, J.P.C.
2005-01-01
Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs
SENSITIVE ERROR ANALYSIS OF CHAOS SYNCHRONIZATION
HUANG XIAN-GAO; XU JIAN-XUE; HUANG WEI; LÜ ZE-JUN
2001-01-01
We study the synchronizing sensitive errors of chaotic systems when other signals are added to the synchronizing signal. Based on the model of Henon map masking, we examine the cause of the sensitive errors of chaos synchronization. The modulation ratio and the mean square error are defined to measure the synchronizing sensitive errors quantitatively. Numerical simulation results of the synchronizing sensitive errors are given for masking direct current, sinusoidal and speech signals, separately. Finally, we give the mean square error curves of chaos synchronizing sensitivity and three-dimensional phase plots of the drive system and the response system for masking the three kinds of signals.
Sensitivity analysis of soil parameters based on interval
Anonymous
2008-01-01
Interval analysis is a new uncertainty analysis method for engineering structures. In this paper, a new sensitivity analysis method is presented by introducing interval analysis, which can expand the applications of the interval analysis method. The interval analysis process for the sensitivity factor matrix of soil parameters is given. A method of parameter intervals and decision-making target intervals is given according to the interval analysis method. With FEM, secondary development is done for Marc and the Duncan-Chang nonlinear elastic model. Mutual transfer between FORTRAN and Marc is implemented. With practical examples, rationality and feasibility are validated. Comparison is made with some published results.
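The interval propagation underlying such analyses can be sketched with elementary interval arithmetic; this is a generic illustration with hypothetical parameter names, not the paper's FEM-based formulation with Marc:

```python
class Interval:
    """Closed interval [lo, hi] with the basic arithmetic used in
    interval-based uncertainty propagation."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction pairs opposite endpoints to stay conservative
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Multiplication: take the extremes over all endpoint products
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

# Hypothetical uncertain soil parameters as intervals (illustrative values)
c = Interval(18.0, 22.0)      # e.g. a cohesion-like parameter
k = Interval(0.9, 1.1)        # e.g. a dimensionless correction factor
response = c * k              # guaranteed bounds on a toy response c*k
```

The output interval width per unit input interval width is one natural interval analogue of a sensitivity factor; comparing these widths across parameters yields the kind of sensitivity ranking the abstract describes.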
Advancing sensitivity analysis to precisely characterize temporal parameter dominance
Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola
2016-04-01
Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information as derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (upland and lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological
Campbell, G. Garrett [Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO (United States); Kittel, Timothy G.F. [Natural Resource Ecology Laboratory, Colorado State University, Fort Collins, CO (United States); Meehl, Gerald A.; Washington, Warren M. [Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO (United States)
1995-04-15
We used empirical orthogonal function (EOF) analysis to examine the monthly variance structure of several general circulation model (GCM) simulations to look for possible systematic changes of variability, not only due to increased carbon dioxide (CO₂) concentration in the atmosphere but also due to model configuration. We evaluated four simulations, comprising present-day and doubled-CO₂ experiments with the same atmospheric GCM coupled to (1) a simple nondynamic mixed-layer ocean (termed 'mixed-layer model') and (2) an ocean GCM (termed 'coupled model'). Model-generated variability, as represented by EOFs of 700-mb height, is similar in all cases for global analyses and is mainly characterized by an opposition of sign between mid- and high latitudes in both hemispheres. There are regional changes between 1×CO₂ and 2×CO₂ runs which are similar for the mixed-layer and coupled models. Changes in model configuration give rise to more extensive changes in the overall pattern of variation, with variability in the Northern and Southern Hemispheres more tightly linked in the coupled model than in the mixed-layer model. We also computed EOFs using only model data for the tropics (between 30°N and 30°S). Consequently, different model configuration has a stronger effect on simulated interannual variability globally than does altered CO₂ forcing. Because ENSO is not represented in the mixed-layer model, CO₂-induced changes in variability are not credible in that model. For the coupled model, regional increases in variability, such as over the monsoon region of south Asia, are consistent with results from other analyses. We also evaluated the CO₂ sensitivity of the coupled model's seasonal cycle of surface air temperatures using a harmonic analysis. Strongly different seasonal cycles appear in the high latitudes of the Northern Hemisphere in the coupled model under different CO₂ conditions. (Abstract Truncated)
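An EOF analysis of the kind used here reduces, for a (time × space) anomaly matrix, to a singular value decomposition. The sketch below uses random data in place of 700-mb height fields; everything else is the standard EOF recipe.

```python
import numpy as np

# EOF analysis of a (time x space) anomaly field via SVD: rows of vt are the
# spatial EOF patterns, columns of u*s are the principal-component time series.
rng = np.random.default_rng(1)
n_time, n_space = 120, 50          # e.g. 120 monthly fields at 50 grid points
field = rng.standard_normal((n_time, n_space))

anom = field - field.mean(axis=0)              # remove the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                      # rows: spatial patterns
pcs = u * s                                    # columns: PC time series
explained = s**2 / np.sum(s**2)                # fraction of variance per mode
# anom is exactly reconstructed by pcs @ eofs; the leading EOF is the spatial
# pattern explaining the largest fraction of the monthly variance.
```

Comparing the leading EOFs (and their explained-variance fractions) between runs is the kind of diagnostic the abstract describes for detecting configuration- versus CO₂-driven changes in variability.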
Knorr, Wolfgang; Heimann, Martin
2001-01-01
Modeling the terrestrial biosphere's carbon exchanges constitutes a key tool for investigation of the global carbon cycle, which has led to the recent development of numerous terrestrial biosphere models...
Sensitivity analysis of distributed volcanic source inversion
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation offered by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole dataset while maintaining some 3D regularity conditions. Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and the noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises. Scientific Reports, 5 (10970), doi:10.1038/srep
Scale Sensitive Analysis of Cellular Automata Model
2011-01-01
The authors present an analysis of how scale issues affect a cellular automata model of land use change developed for a research area in Longhua Town, Shenzhen City. The scale dependence of the model is explored by varying the resolution of the 1990 input data used to calibrate the model and by changing the length of the model simulation period. To explore the impact of these scale relationships, the model is run with input datasets constructed at spatial resolutions of 30, 60, 90, 120, 150, 180, 210 and 240 m to simulate land use in 1995 and 2000. Three indicators, i.e., point-by-point accuracy, Kappa, and real-change accuracy, are used to assess the scale sensitivity of the model. The results show that 1) the finer the cell size, the higher the accuracy of the simulation results; 2) path dependence of isolated cells is an important source of the spatial scale sensitivity of the CA model; and 3) the specific geographical processes in different periods of time are an important source of the temporal scale sensitivity of the CA model. The results have great significance for the scale selection of CA models.
Støy, Ann Cathrine Findal; Heegaard, Peter M. H.; Sangild, Per T.
2013-01-01
The IPEC-J2 cell line was studied as a simple model for investigating responses of the newborn intestinal epithelium to diets. Especially, the small intestine of immature newborns is sensitive to diet-induced inflammation. We investigated gene expression of epithelial- and immune response-related genes in IPEC-J2 cells stimulated for 2 h with milk formula (CELL-FORM), colostrum (CELL-COLOS), or growth medium (CELL-CONTR) and in distal small intestinal tissue samples from preterm pigs fed milk formula (PIG-FORM) or colostrum (PIG-COLOS). High-throughput quantitative PCR analysis of 48 genes revealed the expression of 22 genes in IPEC-J2 cells and 31 genes in intestinal samples. Principal component analysis (PCA) discriminated the gene expression profile of IPEC-J2 cells from that of intestinal samples. The expression profile of intestinal tissue was separated by PCA into 2 groups according...
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...
Global and local sensitivity analysis methods for a physical system
Morio, Jerome, E-mail: jerome.morio@onera.fr [Onera-The French Aerospace Lab, F-91761, Palaiseau Cedex (France)
2011-11-15
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.
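The local/global distinction reviewed here can be demonstrated on a toy black-box model: a finite-difference derivative at a nominal point (local) versus a crude variance-based first-order index over the full input range (global). The model and the binning estimator below are illustrative choices, not taken from the paper.

```python
import numpy as np

def model(x1, x2):
    # Toy black box: nearly flat in x2 near 0, but strongly nonlinear globally
    return x1 + 5.0 * x2**3

# --- Local sensitivity: finite-difference derivatives at a nominal point ---
x0 = np.array([0.5, 0.0])
h = 1e-6
dy_dx1 = (model(x0[0] + h, x0[1]) - model(x0[0] - h, x0[1])) / (2 * h)
dy_dx2 = (model(x0[0], x0[1] + h) - model(x0[0], x0[1] - h)) / (2 * h)
# Locally x2 looks unimportant: its derivative vanishes at x2 = 0

# --- Global sensitivity: variance of conditional means over the full range ---
rng = np.random.default_rng(2)
n, bins = 200_000, 50
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = model(x1, x2)

def first_order_index(x, y, bins):
    """Crude Var(E[Y|X_i]) / Var(Y) estimate by binning X_i."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_index(x1, y, bins)
s2 = first_order_index(x2, y, bins)
# Globally x2 dominates the output variance despite its zero local derivative
```

This is exactly the failure mode that motivates global methods: a local derivative taken at one nominal point can rank the inputs in the opposite order to their contribution over the whole input domain.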
Schryer, David W; Peterson, Pearu; Illaste, Ardo; Vendelin, Marko
2012-01-01
To characterize intracellular energy transfer in the heart, two organ-level methods have frequently been employed: ³¹P-NMR inversion and saturation transfer, and dynamic ¹⁸O labeling. Creatine kinase (CK) fluxes obtained by following oxygen labeling have been considerably smaller than the fluxes determined by ³¹P-NMR saturation transfer. It has been proposed that dynamic ¹⁸O labeling determines the net flux through the CK shuttle, whereas ³¹P-NMR saturation transfer measures the total unidirectional flux. However, to our knowledge, no sensitivity analysis of flux determination by oxygen labeling has been performed, limiting our ability to compare flux distributions predicted by different methods. Here we analyze oxygen labeling in a physiological heart phosphotransfer network with active CK and adenylate kinase (AdK) shuttles and establish which fluxes determine the labeling state. A mathematical model consisting of a system of ordinary differential equations was composed describing ¹⁸O enrichment in each phosphoryl group and in inorganic phosphate. By varying flux distributions in the model and calculating the labeling, we analyzed labeling sensitivity to different fluxes in the heart. We observed that the labeling state is predominantly sensitive to total unidirectional CK and AdK fluxes and not to net fluxes. We conclude that measuring dynamic incorporation of ¹⁸O into the high-energy phosphotransfer network in the heart does not permit unambiguous determination of energetic fluxes with a magnitude higher than the ATP synthase rate when the bidirectionality of fluxes is taken into account. Our analysis suggests that the flux distributions obtained using dynamic ¹⁸O labeling, after removing the net flux assumption, are comparable with those from ³¹P-NMR inversion and saturation transfer.
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at the global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, the parameters used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the temporal sensitivity of the RRM to its time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are the most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
An ESDIRK Method with Sensitivity Analysis Capabilities
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
A new algorithm for numerical sensitivity analysis of ordinary differential equations (ODEs) is presented. The underlying ODE solver belongs to the Runge-Kutta family. The algorithm calculates sensitivities with respect to problem parameters and initial conditions, exploiting the special structure...
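The sensitivity-equation approach underlying such solvers can be illustrated on a scalar ODE: augment dy/dt = f(y, p) with ds/dt = (∂f/∂y)·s + ∂f/∂p for s = ∂y/∂p. The sketch below uses a classic explicit RK4 step as a stand-in for the paper's ESDIRK scheme, on a problem with a known analytic sensitivity.

```python
import numpy as np

# Forward sensitivity for dy/dt = -p*y: the sensitivity s = dy/dp obeys
# ds/dt = (df/dy)*s + df/dp = -p*s - y, with s(0) = 0.
def rhs(t, z, p):
    y, s = z
    return np.array([-p * y, -p * s - y])

def rk4(rhs, z0, t0, t1, n, p):
    """Classic 4th-order Runge-Kutta (a simple stand-in for an ESDIRK scheme)."""
    h = (t1 - t0) / n
    t, z = t0, np.array(z0, dtype=float)
    for _ in range(n):
        k1 = rhs(t, z, p)
        k2 = rhs(t + h / 2, z + h / 2 * k1, p)
        k3 = rhs(t + h / 2, z + h / 2 * k2, p)
        k4 = rhs(t + h, z + h * k3, p)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

p, y0, T = 1.3, 2.0, 1.0
y_T, s_T = rk4(rhs, [y0, 0.0], 0.0, T, 200, p)
# Analytic solution: y(T) = y0*exp(-p*T) and dy/dp(T) = -y0*T*exp(-p*T)
```

The paper's point is that an implicit Runge-Kutta solver can reuse its stage structure and Jacobian factorizations when integrating the augmented system, which a naive finite-difference approach cannot exploit.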
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate between verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are solved for directly as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can then be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
Fixed point sensitivity analysis of interacting structured populations.
Barabás, György; Meszéna, Géza; Ostling, Annette
2014-03-01
Sensitivity analysis of structured populations is a useful tool in population ecology. Historically, methodological development of sensitivity analysis has focused on the sensitivity of eigenvalues in linear matrix models, and on single populations. More recently there have been extensions to the sensitivity of nonlinear models, and to communities of interacting populations. Here we derive a fully general mathematical expression for the sensitivity of equilibrium abundances in communities of interacting structured populations. Our method yields the response of an arbitrary function of the stage class abundances to perturbations of any model parameters. As a demonstration, we apply this sensitivity analysis to a two-species model of ontogenetic niche shift where each species has two stage classes, juveniles and adults. In the context of this model, we demonstrate that our theory is quite robust to violating two of its technical assumptions: the assumption that the community is at a point equilibrium and the assumption of infinitesimally small parameter perturbations. Our results on the sensitivity of a community are also interpreted in a niche theoretical context: we determine how the niche of a structured population is composed of the niches of the individual states, and how the sensitivity of the community depends on niche segregation.
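The core calculation behind such a fixed-point sensitivity analysis is the implicit function theorem: if the equilibrium satisfies g(x*, p) = 0, then dx*/dp = −J⁻¹ ∂g/∂p, with J the Jacobian of g at x*. A minimal sketch on a hypothetical two-species competition model (not the ontogenetic niche-shift model of the paper):

```python
import numpy as np

# Sensitivity of an interior equilibrium x* (where g(x*, p) = 0) to a
# parameter p, via the implicit function theorem: dx*/dp = -J^{-1} dg/dp.
r1, r2, a, b = 1.0, 1.0, 0.4, 0.6

# Interior equilibrium of two-species Lotka-Volterra competition:
#   g1 = r1 - x1 - a*x2 = 0,   g2 = r2 - x2 - b*x1 = 0
A = np.array([[1.0, a], [b, 1.0]])
x_star = np.linalg.solve(A, np.array([r1, r2]))

J = np.array([[-1.0, -a], [-b, -1.0]])      # dg/dx at x*
dg_dr1 = np.array([1.0, 0.0])               # dg/dr1
sens = -np.linalg.solve(J, dg_dr1)          # dx*/dr1

# Finite-difference check of the implicit-function-theorem result
eps = 1e-6
x_pert = np.linalg.solve(A, np.array([r1 + eps, r2]))
fd = (x_pert - x_star) / eps
```

As in the paper's community-level results, increasing one species' growth rate raises its own equilibrium abundance and depresses its competitor's (`sens[0] > 0`, `sens[1] < 0`).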
Analysis of MEMS Accelerometer for Optimized Sensitivity
Khairun Nisa Khamil; Kok Swee Leong; Norizan Bin Mohamad; Norhayati Soin; Norshahida Saba
2014-01-01
.... The geometry of the accelerometer (mass width, beam length and width) and the device's sensitivity are analyzed theoretically and also using the finite element analysis software COMSOL Multiphysics...
A Modified Sensitive Driving Cellular Automaton Model
GE Hong-Xia; DAI Shi-Qiang; DONG Li-Yun; LEI Li
2005-01-01
A modified cellular automaton model for traffic flow on highways is proposed with a novel concept of a variable security gap. The concept is first introduced into the original Nagel-Schreckenberg model, which is called the non-sensitive driving cellular automaton model, and then incorporated into a sensitive driving NaSch model, in which the randomization brake is applied before the deterministic deceleration. A parameter related to the variable security gap is determined through simulation. Comparison of the simulation results indicates that the variable security gap has different influences on the two models. The fundamental diagram obtained by simulation with the modified sensitive driving NaSch model shows that the maximum flow is in good agreement with the observed data, indicating that the presented model is more reasonable and realistic.
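The NaSch update cycle, with the randomization step moved before the deterministic deceleration as in the sensitive driving variant, can be sketched as follows. Parameter values are illustrative, and the variable security gap of the paper is not implemented here.

```python
import numpy as np

def nasch_step(pos, vel, L, vmax, p_brake, rng):
    """One update of a Nagel-Schreckenberg-type model on a ring of length L.
    In the sensitive driving variant the randomization brake is applied
    *before* the deterministic deceleration; this sketch uses that ordering."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    # 1. acceleration toward the speed limit
    vel = np.minimum(vel + 1, vmax)
    # 2. randomization brake (before deceleration, per the sensitive variant)
    brake = rng.random(len(pos)) < p_brake
    vel = np.where(brake, np.maximum(vel - 1, 0), vel)
    # 3. deterministic deceleration to the gap ahead (no collisions)
    gap = (np.roll(pos, -1) - pos - 1) % L
    vel = np.minimum(vel, gap)
    # 4. movement on the ring
    pos = (pos + vel) % L
    return pos, vel

rng = np.random.default_rng(3)
L, n, vmax, p_brake = 100, 20, 5, 0.3
pos = np.sort(rng.choice(L, size=n, replace=False))
vel = np.zeros(n, dtype=int)
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L, vmax, p_brake, rng)
flow = vel.mean()   # mean speed at this density; density * flow gives a
                    # point on the fundamental diagram
```

Sweeping the density n/L and recording `flow` traces out the fundamental diagram that the paper compares against observed data.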
Simon, E.; Meixner, F.X.; Ganzeveld, L.N.; Kesselmeier, J.
2005-01-01
Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe the coupled water-carbon exchange between terrestrial vegetation and the lower atmosphere. Within the present study, a modified CANVEG scheme is described.
Kruizinga, A.G.; Briggs, D.; Crevel, R.W.R.; Knulst, A.C.; Bosch, L.M.C.v.d.; Houben, G.F.
2008-01-01
Previously, TNO developed a probabilistic model to predict the likelihood of an allergic reaction, resulting in a quantitative assessment of the risk associated with unintended exposure to food allergens. The likelihood is estimated by including in the model the proportion of the population who is a
Anderson, Brian Curtis [Iowa State Univ., Ames, IA (United States)
2002-01-01
The underlying theme of this thesis is the use of polymeric materials in bioapplications. Chapters 2-5 either develop a fundamental understanding of current materials used for bioapplications or establish protocols and procedures used in characterizing and synthesizing novel materials. In chapters 6 and 7 these principles and procedures are applied to the development of materials to be used for gene therapy and drug delivery. Chapter one is an introduction to the ideas that will be necessary to understand the subsequent chapters, as well as a literature review of these topics. Chapter two is a paper that has been published in the Journal of Controlled Release that examines the mechanism of drug release from a polymer gel, as well as experimental design suggestions for the evaluation of water-soluble drug delivery systems. Chapter three is a paper that has been published in the Journal of Pharmaceutical Sciences that discusses the effect ionic salts have on properties of the polymer systems examined in chapter two. Chapter four is a paper published in the Materials Research Society Fall 2000 Symposium Series dealing with the design and synthesis of a pH-sensitive polymeric drug delivery device. Chapter five is a paper that has been published in the journal Biomaterials proposing a novel polymer/metal composite for use as a biomaterial in hip arthroplasty surgery. Chapter six is a paper that will appear in an upcoming volume of the journal Biomaterials dealing with the synthesis of a novel water-soluble cationic polymer with possible applications in non-viral gene therapy. Chapter seven is a paper that has been submitted to Macromolecules discussing several novel block copolymers based on poly(ethylene glycol) and poly(diethylamino ethyl methacrylate) that possess both pH-sensitive and temperature-sensitive properties. Chapter eight contains a
Finite element model of needle electrode sensitivity
Høyum, P.; Kalvøy, H.; Martinsen, Ø. G.; Grimnes, S.
2010-04-01
We used the Finite Element (FE) Method to estimate the sensitivity of a needle electrode for bioimpedance measurement. This current-conducting needle with an insulated shaft was inserted into a saline solution, and the current was measured at the neutral electrode. The FE model resistance and reactance were calculated and successfully compared with measurements on a laboratory model. The sensitivity field was described graphically based on these FE simulations.
Sensitivity Analysis of Hydrological and Water Quality Parameters of the HSPF Model
罗川; 李兆富; 席庆; 潘剑君
2014-01-01
Parameter sensitivity analysis is an important step in quantifying model uncertainty; it helps identify key parameters, reduces the influence of parameter uncertainty, and thereby improves the efficiency of parameter optimization. Taking a typical small watershed in the Taihu Lake region as the study area, a perturbation analysis was used to assess the sensitivity of the parameters governing the hydrology, sediment, and nitrogen and phosphorus transport modules of the HSPF (Hydrological Simulation Program-Fortran) model. The results show that 7 of the 17 selected hydrology parameters are sensitive: UZSN, INFILT and AGWRC are class III sensitive for runoff, while LZSN, DEEPFR, INTFW and IRC are class II. Of the 9 parameters selected in the pervious-land sediment module, KSER, KGER and JGER are class III sensitive and JSER is class IV; of the 4 parameters in the impervious-land module, KEIM, JEIM and ACCSDP are class III sensitive for sediment yield; and of the 5 parameters in the channel module, KSAND and EXPSND are class III sensitive while TAUCS and TAUCD are class II. For total nitrogen simulation, 23 parameters were analyzed: WSQOP, SQOLIM and MON-GRND-CONC are class IV sensitive, KATM20 and MON-IFLW-CONC are class III, and TCNIT, PHYSET and MALGR are class II. For phosphorus transport, 12 parameters were selected: MON-GRND-CONC is class III sensitive, and MON-POTFW, MON-IFLW-CONC, MALGR and PHYSET are class II. These results provide a reference for parameter selection in HSPF-based watershed hydrology and water quality studies, and are particularly relevant for selecting sensitive parameters when applying the HSPF model to the many low-hill small watersheds around Taihu Lake. Model sensitivity analysis measures the variability of output variables caused by perturbations in parameter values and input data. It is important for parameter selection, model calibration, and model improvement. As an integrated watershed model, the HSPF model has many parameters related to the physical characteristics of the local watershed.
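A perturbation (one-at-a-time) sensitivity analysis of the kind used in this study can be sketched as below. The "runoff" model is purely hypothetical; only the parameter names INFILT and LZSN are borrowed from the abstract, and the sensitivity-class cutoffs are not reproduced.

```python
import numpy as np

def oat_sensitivity(model, params, delta=0.1):
    """One-at-a-time relative sensitivity: S_i = (dO/O) / (dP_i/P_i),
    perturbing each parameter by +/-delta around its base value."""
    base = model(params)
    out = {}
    for name, value in params.items():
        up = dict(params, **{name: value * (1 + delta)})
        dn = dict(params, **{name: value * (1 - delta)})
        out[name] = ((model(up) - model(dn)) / base) / (2 * delta)
    return out

# Hypothetical toy "runoff" response with an infiltration-like and a
# storage-like parameter (NOT the HSPF equations)
def runoff(p):
    return 100.0 * np.exp(-p["INFILT"]) / (1.0 + p["LZSN"])

S = oat_sensitivity(runoff, {"INFILT": 0.5, "LZSN": 2.0})
# |S| values can then be ranked into sensitivity classes (e.g. I-IV cutoffs)
```

For this toy response the relative sensitivities have closed forms (−INFILT and −LZSN/(1+LZSN)), which makes the perturbation estimate easy to verify.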
Tarantola, Stefano [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]. E-mail: stefano.tarantola@jrc.it; Nardo, Michela [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]; Saisana, Michaela [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]; Gatelli, Debora [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]
2006-10-15
In this paper we propose and test a generalisation of the method originally proposed by Sobol', and recently extended by Saltelli, to estimate the first-order and total effect sensitivity indices. Exploiting the symmetries and the dualities of the formulas, we obtain additional estimates of first-order and total indices at no extra computational cost. We test the technique on a case study involving the construction of a composite indicator of e-business readiness, which is part of the initiative 'e-Readiness of European enterprises' of the European Commission 'e-Europe 2005' action plan. The method is used to assess the contribution of uncertainties in (a) the weights of the component indicators and (b) the imputation of missing data on the composite indicator values for several European countries.
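The first-order and total-effect indices discussed here can be estimated with the standard Saltelli sampling scheme. The sketch below uses the Saltelli (2010) first-order estimator and the Jansen total-effect estimator on an additive toy model with known indices; it is a generic illustration, not the paper's generalised estimator.

```python
import numpy as np

def saltelli_indices(f, k, n, rng):
    """Monte Carlo estimates of first-order (S1) and total (ST) Sobol' indices
    using two sample matrices A, B and the 'radial' matrices AB_i, with
    inputs uniform on [0, 1]."""
    A = rng.random((n, k))
    B = rng.random((n, k))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # column i taken from B
        yABi = f(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var        # Saltelli 2010
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # Jansen estimator
    return S1, ST

f = lambda x: x[:, 0] + 2.0 * x[:, 1]   # additive toy model
S1, ST = saltelli_indices(f, k=2, n=100_000, rng=np.random.default_rng(4))
# Analytically S1 = ST = (1/12, 4/12) / (5/12) = (0.2, 0.8)
```

For a purely additive model the first-order and total indices coincide; interactions in a composite-indicator setting would show up as ST exceeding S1.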
Alonso, Rocio [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: rocio.alonso@ciemat.es; Elvira, Susana [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: susana.elvira@ciemat.es; Sanz, Maria J. [Fundacion CEAM, Charles Darwin 14, 46980 Paterna, Valencia (Spain)], E-mail: mjose@ceam.es; Gerosa, Giacomo [Department of Mathematics and Physics, Universita Cattolica del Sacro Cuore, via Musei 41, 25121 Brescia (Italy)], E-mail: giacomo.gerosa@unicatt.it; Emberson, Lisa D. [Stockholm Environment Institute, University of York, York YO 10 5DD (United Kingdom)], E-mail: lde1@york.ac.uk; Bermejo, Victoria [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: victoria.bermejo@ciemat.es; Gimeno, Benjamin S. [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: benjamin.gimeno@ciemat.es
2008-10-15
A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against g_s measured in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.
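DO3SE-style models compute stomatal conductance as a product of modifying functions, g_s = g_max · f_phen · f_light · max(f_min, f_T · f_VPD · f_SWP). The sketch below illustrates that multiplicative structure; the functional forms and parameter values are illustrative assumptions, not the Quercus ilex parameterization under test.

```python
def f_temp(T, T_min=1.0, T_opt=23.0, T_max=39.0):
    """Illustrative bell-shaped temperature response scaled to [0, 1]."""
    if T <= T_min or T >= T_max:
        return 0.0
    b = (T_max - T_opt) / (T_opt - T_min)
    return ((T - T_min) / (T_opt - T_min)) * ((T_max - T) / (T_max - T_opt)) ** b

def g_s(gmax, f_phen, f_light, f_T, f_vpd, f_swp, f_min=0.02):
    """Multiplicative stomatal conductance, DO3SE-style:
    g_s = gmax * f_phen * f_light * max(f_min, f_T * f_vpd * f_swp)."""
    return gmax * f_phen * f_light * max(f_min, f_T * f_vpd * f_swp)

# At the temperature optimum, with moderate VPD and soil-water limitation
g = g_s(gmax=200.0, f_phen=1.0, f_light=0.8, f_T=f_temp(23.0),
        f_vpd=0.9, f_swp=0.6)
```

The `max(f_min, ...)` floor is what keeps a severe single stress (e.g. drought, via f_swp) from driving g_s entirely to zero, which is also why the relative weighting of f_phen versus f_SWP matters in the sensitivity analysis above.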
Hostache, Renaud; Hissler, Christophe; Matgen, Patrick; Guignard, Cédric; Bates, Paul
2014-05-01
Recent years have seen a growing awareness of the central role that fine sediment loads play in the transport and diffusion of pollutants by rivers and streams. Suspended sediment can potentially carry important amounts of nutrients and contaminants, such as trace metals, among which some are recognized as Potentially Harmful Elements (PHE). These threaten water quality in rivers and wetlands and soil quality in floodplains. Currently, many studies focusing on sediment transport modelling deal with marine and estuarine areas. Some studies evaluate sediment transport at basin scales, often estimating yearly sediment fluxes using hydrologic and simplified hydraulic models. Other, more theoretical studies develop and improve numerical models on the basis of physical model experiments. As a matter of fact, sediment transport modelling in small rivers at the reach/floodplain scale is a rather new research field. In this study, we aim to simulate sediment transport at the floodplain scale and the single-flood-event scale in order to predict sediment spreading on alluvial soils. This simulation will help estimate the potential pollution of soils due to the transport of PHEs by suspended sediments. The model is based upon the Telemac hydro-informatic system (i.e., a dynamic coupling of Telemac-2D and Sisyphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. In parallel to the modelling exercise, an extensive hydrological/geochemical database has been set up for two flood events. The most sensitive parameters were found to be the hydraulic friction coefficient and the sediment particle settling velocity in water. Using the two monitored hydrological events for calibration and validation, it was found that the model is able to satisfactorily predict suspended sediment and dissolved pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment
Silva, Nayana G M; von Sperling, Marcos
2008-01-01
Downstream of the Capim Branco I hydroelectric dam (Minas Gerais state, Brazil), there is the need to keep a minimum flow of 7 m³/s. This low-flow reach (LFR) has a length of 9 km. In order to raise the water level in the low-flow reach, the construction of intermediate dikes along the river bed was decided. The LFR has a tributary that receives the discharge of treated wastewater. As part of this study, the water quality of the low-flow reach was modelled in order to gain insight into its possible behaviour under different scenarios (without and with intermediate dikes). The QUAL2E equations were implemented in FORTRAN code. The model takes into account point-source pollution and diffuse pollution. Uncertainty analysis was performed, presenting probabilistic results and allowing identification of the more important coefficients in the LFR water-quality model. The simulated results indicate, in general, very good conditions for most of the water quality parameters. The most influential variables found in the sensitivity analysis were the conversion coefficients (without and with dikes), the initial conditions in the reach (without dikes), the non-point incremental contributions (without dikes) and the hydraulic characteristics of the reach (with dikes).
Riedmann, R A; Gasic, B; Vernez, D
2015-02-01
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and TOxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine the dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor; a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate of 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART. © 2015 Society for Risk Analysis.
Bashkirtseva, Irina; Neiman, Alexander B.; Ryashko, Lev
2015-05-01
We study the stochastic dynamics of a Hodgkin-Huxley neuron model in a regime of coexistent stable equilibrium and a limit cycle. In this regime, noise may suppress periodic firing by switching the neuron randomly to a quiescent state. We show that at a critical value of the injected current, the mean firing rate depends weakly on noise intensity, while the neuron exhibits giant variability of the interspike intervals and spike count. To reveal the dynamical origin of this noise-induced effect, we develop a stochastic sensitivity analysis and use the Mahalanobis metric for this four-dimensional stochastic dynamical system. We show that the critical point of giant variability corresponds to the matching of the Mahalanobis distances from the attractors (stable equilibrium and limit cycle) to a three-dimensional surface separating their basins of attraction.
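The comparison of attractors above reduces to computing covariance-weighted distances. Below is a minimal sketch of the Mahalanobis distance computation; the 2-D state and the sensitivity matrix `W` are hypothetical illustrations, not the paper's four-dimensional neuron model.

```python
import numpy as np

# Mahalanobis distance from a state x to an attractor with mean mu, weighted by
# an assumed stochastic-sensitivity (covariance) matrix W.
def mahalanobis(x, mu, W):
    d = np.asarray(x, float) - np.asarray(mu, float)
    return float(np.sqrt(d @ np.linalg.solve(W, d)))

mu = np.array([0.0, 0.0])
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])              # noise spreads 4x more along axis 0

# equal Euclidean distance, different Mahalanobis distance:
print(mahalanobis([2.0, 0.0], mu, W))   # sqrt(2): "close" along the noisy direction
print(mahalanobis([0.0, 2.0], mu, W))   # sqrt(8): "far" along the quiet direction
```

The point of the metric is visible here: two states at the same Euclidean distance from the attractor are not equally reachable by noise.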
Burattini, Roberto; Bini, Silvia
2011-07-01
Physiological relevance of parameters of three arterial models, denominated W4P, W4S and IVW, was assessed by computation of parameter-related generalized sensitivity functions (GSFs), which allow the definition of heart-cycle time intervals where the information content of experimental data, useful for estimation of each model parameter, is concentrated. The W4P and W4S are derived from the three-element windkessel by connecting an inductance, L, in parallel or in series, respectively, with aortic characteristic impedance, R(c). In the IVW, L is placed in series at the input of a viscoelastic windkessel, incorporating a Voigt cell (a resistor, R(d), in series with a capacitor, C). Pressure and flow measured in the ascending aorta of five ferrets and five dogs were used to estimate all model parameters, by fitting to pressure. For each model structure, parameter-related GSFs were generated. Focusing on controversial L, R(c) and R(d) physical meaning, our GSF analysis yielded the conclusion that, in both the W4S and the IVW, but not in the W4P, the L-term is suitable to represent the inertial properties of blood motion. Moreover, the meaning of aortic characteristic impedance ascribed to R(c) is questionable; while R(d) is likely to account for viscous losses of arterial wall motion.
Reiter, Karsten; Hergert, Tobias; Heidbach, Oliver
2016-04-01
The in situ stress conditions are of key importance for the evaluation of radioactive waste repositories. In stage two of the Swiss site selection program, the three siting areas for high-level radioactive waste are located in the Alpine foreland in northern Switzerland. The sedimentary succession overlies the basement, which consists of Variscan crystalline rocks as well as partly preserved Permo-Carboniferous deposits in graben structures. The Mesozoic sequence represents nearly the complete era and is covered by Cenozoic Molasse deposits as well as Quaternary sediments, mainly in the valleys. The target horizon (designated host rock) is a >100-m-thick argillaceous Jurassic deposit (Opalinus Clay). To elucidate the impact of site-specific features on the state of stress within the sedimentary succession, 3-D geomechanical-numerical models with elasto-plastic rock properties are set up for the three potential siting areas. The lateral extent of the models ranges between 12 and 20 km; the vertical extent reaches a depth of 2.5 or 5 km below sea level. The sedimentary sequence plus the basement are separated into 10 to 14 rock-mechanical units. The Mesozoic succession is intersected by regional fault zones; two or three of them are present in each model. The numerical problem is solved with the finite element method at a resolution of 100-150 m laterally and 10-30 m vertically. An initial stress state is established for all models, taking into account the depth-dependent overconsolidation ratio of the Opalinus Clay in northern Switzerland. The influence of topography, rock properties, and friction on the faults, as well as the impact of tectonic shortening on the state of stress, is investigated. The tectonic stress is implemented with lateral displacement boundary conditions, calibrated on stress data compiled in northern Switzerland. The model results indicate that the stress perturbation by the topography is significant to depths greater than the relief contrast. The
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and, for the remaining potentially sensitive parameters, accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
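The variance-reduction idea behind the second step can be illustrated in miniature. The sketch below uses a hypothetical toy stochastic model (a sum of p-scaled exponential jumps, not the paper's reaction networks) and compares a coupled finite-difference estimator, which reuses the same random draws for both perturbed runs, against a naive estimator with independent draws.

```python
import random

# Hypothetical toy stochastic model standing in for a reaction network: the
# output is a sum of ten p-scaled unit-exponential jumps, so E[output] = 10*p
# and the exact parametric sensitivity is d/dp E[output] = 10.
def simulate(p, rng):
    return sum(p * rng.expovariate(1.0) for _ in range(10))

h, p, n = 1e-3, 2.0, 5_000

# coupled estimator: both runs consume the SAME random stream (same seed)
coupled = [(simulate(p + h, random.Random(s)) - simulate(p, random.Random(s))) / h
           for s in range(n)]
# naive estimator: independent streams for the two runs
independent = [(simulate(p + h, random.Random(s)) - simulate(p, random.Random(s + n))) / h
               for s in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(sum(coupled) / n)                            # close to the exact value, 10
print(variance(coupled) < variance(independent))   # True: coupling slashes variance
```

With independent draws the noise of the two runs does not cancel and is amplified by the 1/h factor; sharing the stream makes the difference quotient nearly deterministic, which is the effect the stochastic coupling techniques exploit.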
George D Loizou
2015-06-01
Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on the variability of venous blood and exhaled m-xylene and of urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This information informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-01
Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
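Sobol's variance decomposition underlying this analysis can be sketched for the parametric part alone. The model below is a hypothetical deterministic toy, f(x1, x2) = x1 + 0.2*x2 with independent U(0, 1) inputs, not the paper's stochastic simulator; the estimator is the standard pick-freeze (Saltelli-style) form of the first-order index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with independent uniform inputs; analytically,
# S1 = Var(x1) / Var(f) = (1/12) / (1.04/12) ~ 0.96.
def f(x):
    return x[:, 0] + 0.2 * x[:, 1]

n = 500_000
A = rng.random((n, 2))
B = rng.random((n, 2))
AB = A.copy()
AB[:, 0] = B[:, 0]        # A with the x1 column "re-picked" from B

fA, fB, fAB = f(A), f(B), f(AB)
total_var = np.var(np.concatenate([fA, fB]))
S1 = np.mean(fB * (fAB - fA)) / total_var   # first-order Sobol index of x1
print(S1)   # analytic value: 1/1.04 ~ 0.96
```

In the paper's setting the same variance decomposition is carried further, with the stochastic reaction channels entering as additional "inputs" alongside the kinetic parameters.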
Terjung, W. H.; Hayes, J. T.; O'Rourke, P. A.; Burt, J. E.; Todhunter, P. E.
1982-10-01
WATER, a parametric crop water use model, employs climatic data to calculate water consumption for a variety of crops, using a modification of the Penman equation that includes specific crop and growth-stage effects. The objective of this paper was to demonstrate the response of WATER, for a grain corn crop, to changes in a variety of important environmental and decision-making inputs: air temperature, solar radiation, relative humidity, irrigation frequency, and amount of irrigation water applied. Five temperature, five solar radiation, and six relative humidity regimes were examined for an entire growing season. Also, five different water application schemes and four irrigation frequencies were included in this experiment. Additionally, the effect of different soil types, wind regimes, and groundwater depths on crop water requirements was investigated. These analyses were performed using four annual climatic scenario combinations. Among the results, evapotranspiration (ET) increased on average by about 2.5% per 1°C increase in air temperature. A one percent change in solar radiation resulted in a 1.5% change in ET, while a similar change in relative humidity caused a 0.4% response in ET. Contrasting soil types, in addition to affecting irrigation frequency, were capable of changing the responding ET by over 10%.
Supercritical extraction of oleaginous: parametric sensitivity analysis
Santos M.M.
2000-01-01
The economy has become global and competitive; thus, vegetable oil extraction industries must advance toward minimising production costs while, at the same time, generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as the solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are necessary to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that it is possible to propose strategies for high-performance operation.
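A two-level factorial sensitivity screen of the kind described can be sketched as follows. The response function and the factor names and levels here are hypothetical placeholders, not the extraction model.

```python
from itertools import product

# Hypothetical response standing in for the extraction model: yield as a
# function of a solvent-density factor D and a mass-transfer factor k.
def response(D, k):
    return 10.0 * D + 2.0 * k + 0.5 * D * k

low, high = 0.8, 1.2                         # the two levels of each factor
runs = {}
for cD, ck in product((-1, +1), repeat=2):   # all 2^2 coded combinations
    D = high if cD > 0 else low
    k = high if ck > 0 else low
    runs[(cD, ck)] = response(D, k)

def main_effect(i):
    """Mean response at a factor's high level minus mean at its low level."""
    hi = [y for c, y in runs.items() if c[i] > 0]
    lo = [y for c, y in runs.items() if c[i] < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(0), main_effect(1))   # the D factor dominates in this toy
```

With k factors the full design needs 2^k runs, which is why factorial screening is attractive only when the parameter count is modest, as in the model described here.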
Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John
2013-01-01
A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes, and is tested by comparison with data from a tracer experiment in which fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide, MCPP (mecoprop), which is mobile and relatively persistent; glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide; and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: the MCPP concentration at the drinking water well can be up to 7% of the surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark, where pesticides are found at higher concentrations at shallow depths and close to streams.
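The Morris method used here perturbs one factor at a time and averages absolute elementary effects to rank factors. Below is a minimal radial one-at-a-time sketch (a simplified variant of Morris trajectories), applied to a hypothetical three-factor response rather than the transport model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response: strong linear effect of x0, weak nonlinear effect
# of x1, no effect of x2.
def model(x):
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

k, r, delta = 3, 50, 0.25                  # factors, repetitions, step size
effects = np.zeros((r, k))
for t in range(r):
    base_x = rng.random(k) * (1 - delta)   # keep base_x + delta inside [0, 1]
    base_y = model(base_x)
    for i in range(k):                     # move one factor at a time
        x = base_x.copy()
        x[i] += delta
        effects[t, i] = (model(x) - base_y) / delta

mu_star = np.abs(effects).mean(axis=0)     # Morris mu*: screening measure
print(mu_star)   # x0 dominates, x2 is inert
```

mu* is a screening measure only: it separates influential from non-influential factors cheaply, which is why Morris is typically run before any more expensive variance-based analysis.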
Quantifying uncertainty and sensitivity in sea ice models
Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-15
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
Global Sensitivity Analysis of Runoff Parameters of the SWMM Model
孙艳伟; 把多铎; 王文川; 姜体胜; 王富强
2012-01-01
Based on a practicability analysis of SWMM model parameters in the calibration process, four parameters (subcatchment slope, subcatchment width, Manning coefficient and depression depth on pervious area) and three infiltration parameters were selected. Two popular infiltration models, Horton and Green-Ampt, were examined respectively. The global sensitivity analysis method of Morris was used. The flow metrics of total rainfall depth and peak discharge were simulated for single rainfall events with different rainfall types and return periods, while the runoff coefficient was examined for the long-term rainfall data. The main results were: sensitivity analysis results for T1 and T2 rainfall events showed great differences, and T2 rainfall events with small return periods were not suitable for parameter calibration; for the Horton model, the peak discharge of large T1 rainfall can be used to calibrate subcatchment width and slope, while the total runoff of large T2 rainfall can be used to calibrate the infiltration parameters; for the Green-Ampt model, the peak discharge of small T1 rainfall can be used to calibrate subcatchment width, and that of large T2 rainfall can be used to calibrate the minimum infiltration rate and water deficiency; for the runoff coefficient, the sensitivity analysis results of the two methods are similar.
Sensitivity analysis on parameters and processes affecting vapor intrusion risk
Picone, S.; Valstar, J.R.; Gaans, van P.; Grotenhuis, J.T.C.; Rijnaarts, H.H.M.
2012-01-01
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the v
Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations
Petzold, L; Cao, Y; Li, S; Serban, R
2005-08-09
Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
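The forward method outlined above augments the state equations with their parameter derivatives and integrates both together. A minimal sketch for a single ODE (a scalar linear equation, not a DAE or PDE system): differentiating dy/dt = -p*y with respect to p yields the sensitivity equation ds/dt = -p*s - y.

```python
import math

# Forward sensitivity for dy/dt = -p*y, y(0) = 1: the sensitivity s = dy/dp
# obeys ds/dt = -p*s - y with s(0) = 0, obtained by differentiating the ODE
# with respect to p. Integrate the augmented system (y, s) with classical RK4.
def rhs(p, y, s):
    return -p * y, -p * s - y

def integrate(p, t_end, n=10_000):
    h = t_end / n
    y, s = 1.0, 0.0
    for _ in range(n):
        k1y, k1s = rhs(p, y, s)
        k2y, k2s = rhs(p, y + h / 2 * k1y, s + h / 2 * k1s)
        k3y, k3s = rhs(p, y + h / 2 * k2y, s + h / 2 * k2s)
        k4y, k4s = rhs(p, y + h * k3y, s + h * k3s)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        s += h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)
    return y, s

y, s = integrate(p=2.0, t_end=1.0)
print(y, s)   # exact solution: y = exp(-2), s = -t*exp(-p*t) = -exp(-2)
```

The adjoint method mentioned in the abstract solves the complementary problem: one backward integration yields the gradient of a single output with respect to many parameters, whereas the forward approach above scales with the number of parameters.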
Attard, Guillaume; Rossier, Yvan; Eisenlohr, Laurent
2017-09-01
In a previous paper published in Journal of Hydrology, it was shown that underground structures are responsible for a mixing process between shallow and deep groundwater that can favour the spreading of urban contamination. In this paper, the impact of underground structures on the intrinsic vulnerability of urban aquifers was investigated. A sensitivity analysis was performed using a 2D deterministic modelling approach based on the reservoir theory generalized to hydrodispersive systems to better understand this mixing phenomenon and the mixing affected zone (MAZ) caused by underground structures. It was shown that the maximal extent of the MAZ caused by an underground structure is reached approximately 20 years after construction. Consequently, underground structures represent a long-term threat for deep aquifer reservoirs. Regarding the construction process, draining operations have a major impact and favour large-scale mixing between shallow and deep groundwater. Consequently, dewatering should be reduced and enclosed as much as possible. The role played by underground structures' dimensions was assessed. The obstruction of the first aquifer layer caused by construction has the greatest influence on the MAZ. The cumulative impact of several underground structures was assessed. It was shown that the total MAZ area increases linearly with underground structures' density. The role played by materials' properties and hydraulic gradient were assessed. Hydraulic conductivity, anisotropy and porosity have the strongest influence on the development of MAZ. Finally, an empirical law was derived to estimate the MAZ caused by an underground structure in a bi-layered aquifer under unconfined conditions. This empirical law, based on the results of the sensitivity analysis developed in this paper, allows for the estimation of MAZ dimensions under known material properties and underground structure dimensions. This empirical law can help urban planners assess the area of
Model Driven Development of Data Sensitive Systems
Olsen, Petur
2014-01-01
Model-driven development strives to use formal artifacts during the development process. Formal artifacts enable automatic analyses of some aspects of the system under development. This serves to increase the understanding of the (intended) behavior of the system as well as increasing error detection and pushing error detection to earlier stages of development. The complexity of modeling and the size of systems which can be analyzed are severely limited when introducing data variables. The state space grows exponentially in the number of variables and the domain size of the variables … to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first, we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing …
Kang, D.; Aneja, V. P.; Mathur, R.; Ray, J. D.
2001-12-01
A comprehensive modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and ozone in three southeast United States national parks for a 15-day period (July 14 to July 29, 1995) characterized by high surface O3 concentrations. Nine emission scenarios, including the base scenario, are analyzed. Model predictions are compared with and contrasted against observed data at the three locations for the same time period. Model predictions (base scenario) tend to give lower daily maximum O3 concentrations than observations by 10.8% at Cove Mountain, Great Smoky Mountains National Park (GRSM), 26.8% at Mammoth Cave National Park (MACA), and 17.6% at Big Meadows, Shenandoah National Park (SHEN). Overall mean ozone concentrations are very similar at GRSM and SHEN (observed data at MACA are not available). Model-predicted concentrations of lumped paraffin compounds match the observed values to the same order of magnitude, while the observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Sensitivity analyses indicate that each location has its own characteristics in terms of the capacity of volatile organic compounds (VOCs) to produce O3, but a maximum VOC capacity point (MVCP) exists at all locations that changes the influence of VOCs on O3 from production to destruction. Analysis of individual model process budgets shows that more than 50% of daytime O3 concentrations at these rural locations are transported from other areas; local chemistry is the second largest contributor (13% to 42%); all other processes combined contribute less than 10% of the daytime O3 concentrations. Local emissions (>99%) are predominantly responsible for VOCs at all locations, while vertical diffusion (>70%) is the predominant process moving VOCs away from the modeling grid. Dry deposition (~10%) and chemistry (2
Stochastic sensitivity analysis using HDMR and score function
Rajib Chowdhury; B N Rao; A Meher Prasad
2009-12-01
Probabilistic sensitivities provide important insight in reliability analysis and are often crucial to understanding the physical behaviour underlying failure and to modifying the design to mitigate and manage risk. This article presents a new computational approach for calculating stochastic sensitivities of mechanical systems with respect to distribution parameters of random variables. The method involves high dimensional model representation and score functions associated with the probability distribution of a random input. The proposed approach facilitates first- and second-order approximation of stochastic sensitivity measures and statistical simulation. The formulation is general, such that any simulation method can be used for the computation, e.g., Monte Carlo, importance sampling, or Latin hypercube sampling. Both the probabilistic response and its sensitivities can be estimated from a single probabilistic analysis, without requiring gradients of the performance function. Numerical results indicate that the proposed method provides accurate and computationally efficient estimates of sensitivities of statistical moments or reliability of structural systems.
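The score-function ingredient can be illustrated on its own with a toy case that is not from the article: for X ~ N(theta, 1) the score with respect to the mean is dlog p/dtheta = x - theta, so the sensitivity of E[f(X)] to theta is estimable from one batch of samples without differentiating f, a miniature of the single-analysis property noted above.

```python
import random
import statistics

random.seed(0)

# Score-function (likelihood-ratio) sensitivity for a toy case: X ~ N(theta, 1)
# and f(x) = x**2, so d/dtheta E[f(X)] = 2*theta analytically. The estimator is
# the sample mean of f(X) * score = x**2 * (x - theta).
theta, n = 1.5, 400_000
samples = [random.gauss(theta, 1.0) for _ in range(n)]
grad = statistics.fmean(x * x * (x - theta) for x in samples)
print(grad)   # analytic sensitivity: 2 * theta = 3.0
```

Because the derivative is taken on the density rather than the performance function, the same samples can serve any f, which is what makes the single-run estimation of many sensitivities possible.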
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems-theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to variogram analysis, that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
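The directional variogram of a response surface that VARS builds on can be sketched directly. The two-factor model below is a hypothetical stand-in (not an EESM); gamma_i(h) measures how much the output varies when factor i is perturbed by a step of size h.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical response surface: a linear factor x0 and an oscillatory factor x1.
def f(x):
    return 3.0 * x[:, 0] + np.sin(2 * np.pi * x[:, 1])

# Directional variogram gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2],
# estimated by Monte Carlo over base points x with x_i + h kept inside [0, 1].
def variogram(i, h, n=100_000):
    x = rng.random((n, 2)) * (1 - h)
    xh = x.copy()
    xh[:, i] += h
    return 0.5 * np.mean((f(xh) - f(x)) ** 2)

for h in (0.1, 0.3, 0.5):
    print(h, variogram(0, h), variogram(1, h))
```

For the linear factor, gamma_0(h) = 0.5*(3h)^2 exactly; the oscillatory factor dominates at small h but falls behind at h = 0.5. That scale-dependent ranking, which a single derivative-based or variance-based index would collapse into one number, is the spectrum of information VARS is designed to expose.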
Sensitivity Analysis of a Dynamical System Using C++
Donna Calhoun
1993-01-01
This article introduces basic principles of first-order sensitivity analysis and presents an algorithm that can be used to compute the sensitivity of a dynamical system to a selected parameter. The analysis is performed by extending the set of differential equations describing the dynamical system with sensitivity equations. These additional equations require the evaluation of partial derivatives, and so a technique known as the table algorithm, which can be used to compute these derivatives exactly and automatically, is described. A C++ class which can be used to implement the table algorithm is presented, along with a driver routine for evaluating the output of a model and its sensitivity to a single parameter. The use of this driver routine is illustrated with a specific application from environmental hazards modeling.
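The table algorithm is described as evaluating partial derivatives exactly and automatically; forward-mode automatic differentiation with dual numbers is a close analogue of that idea (an assumed stand-in, sketched in Python rather than the article's C++).

```python
# Forward-mode automatic differentiation with dual numbers: each value carries
# its derivative, and arithmetic propagates both exactly (no finite differences).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

# d/dp of f(p) = 3*p*p + 2*p at p = 4: exact answer is 6*4 + 2 = 26
p = Dual(4.0, 1.0)          # seed the derivative of the chosen parameter
f = 3 * p * p + 2 * p
print(f.val, f.der)         # 56.0 26.0
```

Seeding `der=1.0` on one parameter and 0 elsewhere yields the sensitivity to that single parameter in one pass, mirroring the driver routine described in the abstract.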
SENSITIVITY ANALYSIS FOR PARAMETERIZED VARIATIONAL INEQUALITY PROBLEMS
Li Fei
2004-01-01
This paper presents sensitivity analysis for parameterized variational inequality problems (VIP). Under appropriate assumptions, it is shown that the perturbed solution to the parameterized VIP exists and is unique, continuous, and differentiable with respect to the perturbation parameter. In the case of differentiability, we derive the equations for calculating the derivative of the solution variables with respect to the perturbation parameters.
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solution of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when solving the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time delays.
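The automatic-differentiation step described above, evaluating a Jacobian without hand-derived or symbolic partial derivatives, can be illustrated with a minimal forward-mode sketch using dual numbers; this is an assumption-laden toy in Python, not the authors' implementation:

```python
class Dual:
    """Minimal forward-mode automatic differentiation (dual numbers)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return Dual(o) - self

def jacobian(f, x):
    """Jacobian of f: R^n -> R^m, one dual-number sweep per input."""
    n = len(x)
    cols = []
    for j in range(n):
        seeds = [Dual(x[i], 1.0 if i == j else 0.0) for i in range(n)]
        cols.append([y.dot for y in f(seeds)])
    return [list(row) for row in zip(*cols)]  # columns -> rows

# Right-hand side of a toy two-species interaction model
def rhs(u):
    x, y = u
    return [x * y - x, 2.0 * y - x * y]

J = jacobian(rhs, [1.0, 3.0])
print(J)   # analytic Jacobian at (1,3): [[2.0, 1.0], [-3.0, 1.0]]
```

Each sweep propagates exact derivatives through the arithmetic, so the Jacobian entries match the analytic partials without any symbolic manipulation, which is the property the abstract relies on.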
Sensitivity analysis of retrovirus HTLV-1 transactivation.
Corradin, Alberto; Di Camillo, Barbara; Ciminale, Vincenzo; Toffolo, Gianna; Cobelli, Claudio
2011-02-01
Human T-cell leukemia virus type 1 is a human retrovirus endemic in many areas of the world. Although many studies indicated a key role of the viral protein Tax in the control of viral transcription, the mechanisms controlling HTLV-1 expression and its persistence in vivo are still poorly understood. To assess Tax effects on viral kinetics, we developed a HTLV-1 model. Two parameters that capture both its deterministic and stochastic behavior were quantified: Tax signal-to-noise ratio (SNR), which measures the effect of stochastic phenomena on Tax expression as the ratio between the protein steady-state level and the variance of the noise causing fluctuations around this value; t(1/2), a parameter representative of the duration of Tax transient expression pulses, that is, of Tax bursts due to stochastic phenomena. Sensitivity analysis indicates that the major determinant of Tax SNR is the transactivation constant, the system parameter weighting the enhancement of retrovirus transcription due to transactivation. In contrast, t(1/2) is strongly influenced by the degradation rate of the mRNA. In addition to shedding light into the mechanism of Tax transactivation, the obtained results are of potential interest for novel drug development strategies since the two parameters most affecting Tax transactivation can be experimentally tuned, e.g. by perturbing protein phosphorylation and by RNA interference.
Application of Stochastic Sensitivity Analysis to Integrated Force Method
X. F. Wei
2012-01-01
As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.
Sensitivity of a Shallow-Water Model to Parameters
Kazantsev, Eugene
2011-01-01
An adjoint-based technique is applied to a shallow-water model in order to estimate the influence of the model's parameters on the solution. The parameters considered are the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress tension. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, that is, the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. possibility of improving the model by controlling the parameter, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in the classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...
Engqvist, A. [A and I Engqvist Konsult HB, Vaxholm (Sweden); Andrejev, O. [Finnish Inst. of Marine Research, Helsinki (Finland)
2000-02-15
A sensitivity analysis with regard to variations of physical forcing has been performed using a 3D baroclinic model of the Oeregrundsgrepen area for a whole-year period with data pertaining to 1992. The results of these variations are compared to a nominal run with unaltered physical forcing. This nominal simulation is based on the experience gained in an earlier whole-year modelling of the same area; the difference is mainly that the present nominal simulation is run with identical parameters for the whole year. From a computational economy point of view it has been necessary to vary the time step between the month-long simulation periods. For all simulations with varied forcing, the same time step as for the nominal run has been used. The analysis also comprises the water turnover of a hypsographically defined subsection, the Bio Model area, located above the SFR depository. The external forcing factors that have been varied are the following (with their relative impact on the volume average of the retention time of the Bio Model area over one year given in parentheses): atmospheric temperature increased/reduced by 2.5 deg C (-0.1% resp. +0.6%), local freshwater discharge rate doubled/halved (-1.6% resp. +0.01%), salinity range at the border increased/reduced by a factor of 2 (-0.84% resp. 0.00%), wind speed forcing reduced by 10% (+8.6%). The results of these simulations, at least the yearly averages, permit a reasonably direct physical explanation, while the detailed dynamics is for natural reasons more intricate. Two additional full-year simulations of possible future hydrographic regimes have also been performed. The first mimics a hypothetical situation with permanent ice cover, which increases the average retention time by 87%. The second regime entails the future hypsography with its anticipated shoreline displacement due to an 11 m land-rise in the year 4000 AD, which also considerably increases the average retention times for the two remaining layers of the
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Lee HY
2013-07-01
Hwa-young Lee,1 Bong-Min Yang,1 Ji-min Hong,1 Tae-Jin Lee,1 Byoung-Gie Kim,2 Jae-Weon Kim,3 Young-Tae Kim,4 Yong-Man Kim,5 Sokbom Kang6; 1Graduate School of Public Health, Seoul National University, Seoul, South Korea; 2Department of Obstetrics and Gynecology, Samsung Medical Center, Seoul, South Korea; 3Department of Obstetrics and Gynecology, Seoul National University, Seoul, South Korea; 4Department of Obstetrics and Gynecology, Yonsei University, Seoul, South Korea; 5Department of Obstetrics and Gynecology, University of Ulsan, Ulsan, South Korea; 6Department of Obstetrics and Gynecology, National Cancer Center, Kyeonggi-do, South Korea. Objective: We performed a cost-utility analysis to assess the cost-effectiveness of a chemotherapy sequence including a combination of polyethylene glycolated liposomal doxorubicin (PLD)/carboplatin versus paclitaxel/carboplatin as a second-line treatment in women with platinum-sensitive ovarian cancer. Methods: A Markov model was constructed with a 10-year time horizon. The treatment sequence consisted of first- to sixth-line chemotherapies and best supportive care (BSC) before death. Cycle length, the time interval for efficacy evaluation of chemotherapy, was 9 weeks. The model consisted of four health states: responsive, progressive, clinical remission, and death. At any given time, a patient may have remained on a current therapy or made a transition to the next therapy or death. Median time-to-progression and overall survival data were obtained through a systematic literature review and were pooled using a meta-analytical approach. If unavailable, these were elicited from an expert panel (eg, for BSC). These outcomes were converted to transition probabilities using an appropriate formula. Direct costs included drug-acquisition costs for chemotherapies, premedication, adverse-event treatment and monitoring, efficacy evaluation, BSC, drug administration, and follow-up tests during remission. Indirect costs were
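One technical step in such Markov cost-utility models, converting a pooled median time-to-progression into a per-cycle transition probability, can be sketched under an assumed exponential time-to-event distribution (one common reading of "an appropriate formula" in the abstract); the 36-week median below is an illustrative value, not a figure from the study:

```python
import math

def cycle_transition_prob(median_ttp_weeks, cycle_weeks=9.0):
    """Per-cycle probability of progression, assuming exponentially
    distributed time-to-progression with the given median."""
    rate = math.log(2.0) / median_ttp_weeks   # hazard from median
    return 1.0 - math.exp(-rate * cycle_weeks)

# e.g. a hypothetical pooled median TTP of 36 weeks with 9-week Markov cycles
p = cycle_transition_prob(36.0)
print(round(p, 4))   # -> 0.1591
```

The same conversion applies to overall-survival medians for the transition to death; a sanity check is that a median equal to the cycle length gives exactly a 0.5 per-cycle probability.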
Uncertainty and Sensitivity in Surface Dynamics Modeling
Kettner, Albert J.; Syvitski, James P. M.
2016-05-01
This special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stems from papers submitted after the 2014 annual meeting of the Community Surface Dynamics Modeling System (CSDMS). CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigate the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting, and disseminating integrated open-source software modules. By organizing more than 1500 researchers, CSDMS is well placed to identify community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of the Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.
Sensitivity analysis approach to multibody systems described by natural coordinates
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders automated modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first-order direct sensitivity analysis, and the related solving strategy are provided based on the previous modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful in reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization of multibody systems.
Sensitivities and uncertainties of modeled ground temperatures in mountain environments
S. Gubler
2013-08-01
Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently of ground truth measurements, these analyses are suitable for testing the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, which allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on input factors such as topography or soil type. The analysis shows that model evaluation performed at single locations may not be representative of the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect, and ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to the shorter duration of the snow cover. The sensitivity to the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow, and time has an impact on modeled mean annual ground temperature (MAGT) that cannot be neglected (more than 1 °C for several
Ibáñez, J; Lavado Contador, J F; Schnabel, S; Martínez Valderrama, J
2016-02-15
An integrated dynamic model was used to evaluate the influence of climatic, soil, pastoral, economic and managerial factors on sheet erosion in rangelands of SW Spain (dehesas). This was achieved by means of a variance-based sensitivity analysis. Topsoil erodibility, climate change and a combined factor related to soil water storage capacity and the pasture production function were the factors which influenced water erosion the most. Of them, climate change is the main source of uncertainty, though in this study it caused a reduction in the mean and the variance of long-term erosion rates. The economic and managerial factors showed scant influence on soil erosion, meaning that it is unlikely to find such influence in the study area for the time being. This is because the low profitability of the livestock business maintains stocking rates at low levels. However, the potential impact of livestock, through which economic and managerial factors affect soil erosion, proved to be greater in absolute value than the impact of climate change. Therefore, if changes in some economic or managerial factors led to higher stocking rates in the future, significant increases in erosion rates would be expected.
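A variance-based sensitivity analysis of the kind used in the study above can be sketched with a crude double-loop Monte Carlo estimate of first-order Sobol indices, S_i = Var(E[Y|X_i]) / Var(Y); the toy "erosion" response and the uniform input distributions below are illustrative assumptions, not the authors' dehesa model:

```python
import random
import statistics

def erosion(climate, erodibility, stocking):
    # Hypothetical toy response; NOT the authors' integrated model.
    return climate * erodibility + 0.1 * stocking

def first_order_index(model, which, n_outer=200, n_inner=200, seed=1):
    """Crude double-loop Monte Carlo estimate of the first-order
    Sobol index S_i = Var(E[Y|X_i]) / Var(Y), with inputs ~ U(0,1)."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # fix X_i at a sampled value
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(3)]
            x[which] = xi                      # vary the other inputs
            ys.append(model(*x))
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

for name, i in (("climate", 0), ("erodibility", 1), ("stocking", 2)):
    print(name, round(first_order_index(erosion, i), 3))
```

Production variance-based analyses use more efficient estimators (e.g. Saltelli sampling) rather than this O(N²) double loop, but the quantity estimated is the same.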
Demonstration sensitivity analysis for RADTRAN III
Neuhauser, K S; Reardon, P C
1986-10-01
A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves.
Reyes F, M. C.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. IPN s/n, Col. Lindavista, 07738 Ciudad de Mexico (Mexico); Gomez T, A. M. [ININ, Departamento de Sistemas Nucleares, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Sanchez E, V., E-mail: rf.melisa@gmail.com [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany)
2015-09-15
A methodology was implemented to carry out a sensitivity and uncertainty analysis of the cross sections used in a coupled Trace/Parcs model for a control-rod-drop transient of a BWR-5. A model of the reactor core for the neutronic code Parcs was used, in which the assemblies located in the core are described. The thermal-hydraulic model in Trace was simple: a single CHAN-type component was designed to represent all the core assemblies, placed within a single vessel, and boundary conditions were established. The thermal-hydraulic part was coupled with the neutronic part, first for the steady state, and then the control-rod-drop transient was run for the sensitivity and uncertainty analysis. To analyze the cross sections used in the coupled Trace/Parcs model during the transient, probability density functions were generated for 22 parameters selected from the full set of neutronic parameters used by Parcs, yielding 100 different cases for the coupled model, each with a different cross-section database. All these cases were executed with the coupled model, producing 100 different output files for the control-rod-drop transient, with emphasis on the nominal power, for which an uncertainty analysis was performed to generate the uncertainty band. With this analysis it is possible to observe the ranges of the selected responses as the chosen uncertain parameters vary. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the results, so that attention can be focused on them in order to better understand their effects. Beyond the results obtained, because this is not a model based on real operating data, the importance of this work lies in demonstrating the application of the methodology for carrying out sensitivity and uncertainty analyses. (Author)
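The sampling workflow described above (drawing perturbed parameter sets from assumed PDFs, running the model once per set, and reading an uncertainty band off the output ensemble) might be sketched as follows; the Gaussian multipliers and the stand-in response function are assumptions, since the real response comes from the coupled Trace/Parcs runs:

```python
import random

N_PARAMS, N_CASES = 22, 100

def power_response(multipliers):
    # Stand-in for the coupled Trace/Parcs transient output (nominal
    # power, % of rated); the real response requires the coupled-code run.
    return 100.0 * sum(multipliers) / len(multipliers)

rng = random.Random(42)
# One multiplier per uncertain cross-section parameter, e.g. N(1, 0.02)
cases = [[rng.gauss(1.0, 0.02) for _ in range(N_PARAMS)]
         for _ in range(N_CASES)]
powers = sorted(power_response(c) for c in cases)

# Empirical ~95% uncertainty band from the 100-member ensemble
lo, hi = powers[int(0.025 * N_CASES)], powers[int(0.975 * N_CASES) - 1]
print(f"uncertainty band (~95%): [{lo:.2f}, {hi:.2f}] % nominal power")
```

With 100 samples the empirical percentiles are coarse; codes such as SUSA/XSUSA justify similar sample sizes via order-statistics (Wilks) arguments.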
Lower extremity angle measurement with accelerometers - error and sensitivity analysis
Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, Herman B.K.
1991-01-01
The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found
Sensitivities in global scale modeling of isoprene
R. von Kuhlmann
2004-01-01
A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on upper-tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased, compared to the background methane chemistry, by 26±9 Tg(O3), from 273 to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty, and the much larger local deviations found in the test runs, suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.
Applying incentive sensitization models to behavioral addiction
Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne
2014-01-01
The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to behavioral addictions like Gambling Disorder, despite shared clinical symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment.
Sensitivity Analysis of Fire Dynamics Simulation
Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.
2007-01-01
In fire dynamics simulation, requirements for reliable results are most often very high due to the severe consequences of erroneous results. At the same time, it is a well known fact that fire dynamics simulation involves rather complex physical phenomena which, apart from the flow and energy equations, require solution of the issues of combustion and gas radiation, to mention a few. This paper performs a sensitivity analysis of a fire dynamics simulation on a benchmark case where measurement results are available for comparison. The analysis is performed using the method of Elementary Effects.
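The method of Elementary Effects (Morris screening) mentioned above can be sketched on a toy model: for each input, one-at-a-time perturbations yield a sample of elementary effects whose mean flags overall influence and whose standard deviation flags nonlinearity or interactions. The response function, perturbation size, and trajectory count below are illustrative assumptions:

```python
import random
import statistics

def model(x):
    # Toy response: x[0] strong linear, x[1] nonlinear, x[2] inert.
    return 4.0 * x[0] + 3.0 * x[1] ** 2 + 0.0 * x[2]

def elementary_effects(f, n_inputs, delta=0.25, n_traj=50, seed=0):
    """Morris screening: for each input, collect one-at-a-time
    elementary effects (f(x + delta*e_i) - f(x)) / delta."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_inputs)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_inputs)]
        fx = f(x)
        for i in range(n_inputs):
            xp = list(x)
            xp[i] += delta
            effects[i].append((f(xp) - fx) / delta)
    return [(statistics.fmean(e), statistics.pstdev(e)) for e in effects]

for i, (mu, sigma) in enumerate(elementary_effects(model, 3)):
    print(f"x[{i}]: mu={mu:.2f} sigma={sigma:.2f}")
```

Here the linear input shows a large mean with near-zero spread, the nonlinear input a nonzero spread, and the inert input zeros for both, which is exactly the screening signal the method provides at low computational cost.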
Water quality models are used to predict effects of conservation practices to mitigate the transport of herbicides to water bodies. We used two models - the Agricultural Policy/Environmental eXtender (APEX) and the Riparian Ecosystem Management Model (REMM) to predict the movement of atrazine from ...
Sensitivity of Footbridge Response to Load Modeling
Pedersen, Lars; Frier, Christian
2012-01-01
The paper considers a stochastic approach to modeling the actions of walking and has focus on the vibration serviceability limit state of footbridges. The use of a stochastic approach is novel but useful as it is more advanced than the quite simplistic deterministic load models seen in many design...... matter to foresee their impact. The paper contributes by examining how some of these decisions influence the outcome of serviceability evaluations. The sensitivity study is made focusing on vertical footbridge response to single person loading....
Móring, Andrea; Vieno, Massimo; M. Doherty, Ruth
2016-01-01
In this paper a new process-based, weather-driven model for ammonia (NH3) emission from a urine patch has been developed and its sensitivity to various factors assessed. The GAG model (Generation of Ammonia from Grazing) is capable of simulating the TAN (total ammoniacal nitrogen) and water content of the soil under a urine patch, as well as soil pH dynamics. The model tests suggest that ammonia volatilization from a urine patch can be affected by the possible restart of urea hydrolysis after a rain event, as well as by CO2 emission from the soil. The vital role of temperature in NH3 exchange...
Sensitivity Analysis of Automated Ice Edge Detection
Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien
2016-08-01
The importance of highly detailed and time sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing to which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.
Weber, Benjamin; Hochhaus, Guenther
2015-01-01
The role of plasma pharmacokinetics (PK) for assessing bioequivalence at the target site, the lung, for orally inhaled drugs remains unclear. A validated semi-mechanistic model, considering the presence of mucociliary clearance in central lung regions, was expanded for quantifying the sensitivity of PK studies in detecting differences in the pulmonary performance (total lung deposition, central-to-peripheral lung deposition ratio, and pulmonary dissolution characteristics) between test (T) an...
G. Yarwood
2013-09-01
Photochemical grid models (PGMs) are used to simulate tropospheric ozone and quantify its response to emission changes. PGMs are often applied for annual simulations to provide both maximum concentrations for assessing compliance with air quality standards and frequency distributions for assessing human exposure. Efficient methods for computing ozone at different emission levels can improve the quality of ozone air quality management efforts. This study demonstrates the feasibility of using the decoupled direct method (DDM) to calculate first- and second-order sensitivity of ozone to anthropogenic NOx and VOC emissions in annual PGM simulations at continental scale. Algebraic models are developed that use Taylor series to produce complete annual frequency distributions of hourly ozone at any location and any anthropogenic emission level between zero and 100%, adjusted independently for NOx and VOC. We recommend computing the sensitivity coefficients at the midpoint of the emissions range over which they are intended to be applied, in this case with 50% anthropogenic emissions. The algebraic model predictions can be improved by combining sensitivity coefficients computed at 10% and 50% anthropogenic emissions. Compared to brute force simulations, algebraic model predictions tend to be more accurate in summer than winter, at rural than urban locations, and with 100% than zero anthropogenic emissions. Equations developed to combine sensitivity coefficients computed with 10% and 50% anthropogenic emissions are able to reproduce brute force simulation results with zero and 100% anthropogenic emissions with a mean bias of less than 2 ppb and a mean error of less than 3 ppb averaged over 22 US cities.
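The algebraic Taylor-series model described above might look like the following sketch; the sensitivity coefficients are invented placeholders, not values from the study, and only the single midpoint expansion (without the 10%/50% combination) is shown:

```python
def ozone_taylor(e_frac, o3_mid, s1, s2, e_mid=0.5):
    """Second-order Taylor estimate of ozone at anthropogenic emission
    level e_frac (0..1), from DDM sensitivities computed at e_mid."""
    d = e_frac - e_mid
    return o3_mid + s1 * d + 0.5 * s2 * d * d

# Hypothetical DDM outputs for one hour/location: ozone and its first-
# and second-order sensitivities to NOx emissions at the 50% level.
o3_mid, s1, s2 = 58.0, 30.0, -12.0   # ppb; ppb per unit emission fraction

for e in (0.0, 0.5, 1.0):
    print(f"{int(e * 100):3d}% emissions -> "
          f"{ozone_taylor(e, o3_mid, s1, s2):.1f} ppb")
```

Applying this to every hour of an annual run reconstructs a full frequency distribution at any emission level from a single sensitivity simulation, which is the efficiency gain over brute-force reruns.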
A. M. Dieye; Roy, D.P.; N. P. Hanan; Liu, S.; Hansen, M.; Touré, A.
2011-01-01
Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data, with classification schemes and uncertainties constrained by the sensing system, classification algorithms, and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data was used to assess the sensitivity of SO...
Gires, A.; Tchiguirinskaia, I.; Schertzer, D. J.; Lovejoy, S.
2011-12-01
In large urban areas, storm water management is a challenge as impervious areas expand. Many cities have implemented real time control (RTC) of their urban drainage systems to reduce overflow or limit urban contamination. A basic component of RTC is a hydraulic/hydrologic model. In this paper we use the multifractal framework to suggest an innovative way to test the sensitivity of such a model to the spatio-temporal variability of its rainfall input. Indeed, rainfall variability is often neglected in an urban context, being considered a non-relevant issue at the scales involved. Our results show that, on the contrary, rainfall variability should be taken into account. Universal multifractals (UM) rely on the concept of the multiplicative cascade and are a standard tool to analyze and simulate, with a reduced number of parameters, geophysical processes that are extremely variable over a wide range of scales. This study is conducted on a 3,400 ha urban area located in Seine-Saint-Denis, north of Paris (France). We use the operational semi-distributed model calibrated by the local authority in charge of urban drainage (Direction Eau et Assainissement du 93). The rainfall data come from the C-band radar of Trappes operated by Météo-France. The rainfall event of February 9th, 2009 was used. A stochastic ensemble approach was implemented to quantify the uncertainty in discharge associated with the rainfall variability occurring at scales smaller than 1 km x 1 km x 5 min, the resolution usually available with C-band radar networks. An analysis of the quantiles of the simulated peak flow showed that the uncertainty exceeds 20% for upstream links. To evaluate the potential gain from a direct use of rainfall data available at the resolution of X-band radar, we performed a similar analysis on rainfall fields degraded to a resolution of 9 km x 9 km x 20 min. The results show a clear decrease in uncertainty when the original resolution of C
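The multiplicative-cascade idea behind universal multifractals can be illustrated with a minimal discrete sketch. This is not the UM generator used in the study (which involves Lévy-stable noise and the parameters alpha and C1); it is a simplified 1-D lognormal cascade with arbitrary parameters, shown only to convey how small-scale rainfall variability is produced by repeated multiplicative subdivision:

```python
import numpy as np

rng = np.random.default_rng(42)

def multiplicative_cascade(levels, mu=-0.05, sigma=0.3):
    """Illustrative 1-D discrete multiplicative cascade: at each level,
    every interval splits in two and each half is multiplied by an
    independent lognormal weight (parameter values are arbitrary)."""
    field = np.ones(1)
    for _ in range(levels):
        field = np.repeat(field, 2)   # halve every interval
        field *= rng.lognormal(mean=mu, sigma=sigma, size=field.size)
    return field

field = multiplicative_cascade(10)    # 2**10 = 1024 cells of variability
print(field.size, field.min(), field.max())
```

In an ensemble setting like the one described above, many such stochastic realizations of sub-radar-pixel variability would be fed to the hydraulic/hydrologic model and the spread of simulated peak flows quantified.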
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
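One of the global methods named above, the partial rank correlation coefficient (PRCC), can be sketched briefly. This is a generic PRCC computation, not SBML-SAT's actual implementation: rank-transform inputs and output, regress out the other inputs on the ranks, and correlate the residuals. The toy model is invented for illustration:

```python
import numpy as np
from scipy import stats

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y,
    controlling for the remaining columns (a common monotone global
    sensitivity measure)."""
    Xr = np.apply_along_axis(stats.rankdata, 0, X)
    yr = stats.rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        # Regress out the other (ranked) inputs from input j and from y.
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

# Toy model: y depends strongly on x0, weakly on x1, not at all on x2.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
print(prcc(X, y))   # |PRCC| is largest for x0, near zero for x2
```

Because it works on ranks, PRCC remains informative for nonlinear but monotone input-output relationships, which is why it appears alongside variance-based methods in global sensitivity toolkits.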
Sensitivity Analysis of a Bioinspired Refractive Index Based Gas Sensor
Yang Gao; Qi Xia; Guanglan Liao; Tielin Shi
2011-01-01
It was found that a change in the refractive index of the ambient gas can lead to an obvious change in the color of the Morpho butterfly's wing. This phenomenon has been employed as a sensing principle for detecting gas. In the present study, Rigorous Coupled-Wave Analysis (RCWA) was described briefly, and the partial derivative of the optical reflection efficiency with respect to the refractive index of the ambient gas, i.e., the sensitivity of the sensor, was derived based on RCWA. A bioinspired grating model was constructed by mimicking the nanostructure on the ground scales of the Morpho didius butterfly's wing. The analytical sensitivity was verified, and the effects of the grating shape on the reflection spectra and the sensitivity were discussed. The results show that by tuning the shape parameters of the grating we can obtain the desired reflection spectra and sensitivity, which can be applied to the design of bioinspired refractive-index-based gas sensors.
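The sensitivity quantity in this abstract, the derivative of reflection efficiency with respect to the ambient refractive index, can be illustrated without a full RCWA solver. The sketch below substitutes a simple normal-incidence Fresnel reflectance at a gas/structure interface for the RCWA-computed grating efficiency (the chitin-like index 1.56 is an assumption for illustration) and estimates the derivative by central differences:

```python
def fresnel_R(n_gas, n_struct=1.56):
    """Normal-incidence Fresnel reflectance at a gas/structure interface,
    a toy stand-in for the RCWA-computed reflection efficiency."""
    r = (n_gas - n_struct) / (n_gas + n_struct)
    return r * r

def sensitivity(n_gas, h=1e-6):
    """Central-difference estimate of dR/dn_gas, i.e. the sensor
    sensitivity with respect to the ambient refractive index."""
    return (fresnel_R(n_gas + h) - fresnel_R(n_gas - h)) / (2 * h)

print(sensitivity(1.0))   # ≈ -0.208 per refractive index unit
```

In the paper, the analogous derivative is taken analytically through the RCWA formulation, so the full grating geometry (and hence its tunable shape parameters) enters the sensitivity directly.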
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
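Step (3) above, quantitative sensitivity analysis on a reduced parameter set, is often done with variance-based (Sobol) indices. The sketch below is one standard pick-freeze Monte Carlo estimator of first-order indices, not PSUADE's actual algorithm; the linear test model and its coefficients are invented so the result can be checked against the analytic values S = [16/21, 4/21, 1/21]:

```python
import numpy as np

def first_order_sobol(model, dim, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices:
    the fraction of output variance explained by each input alone.
    Inputs are assumed independent uniform on [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(1)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # freeze column i from sample A
        S[i] = np.mean(yA * (model(ABi) - yB)) / var
    return S

# Toy linear model: y = 4*x0 + 2*x1 + x2, so S ≈ [16, 4, 1] / 21.
S = first_order_sobol(lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2], 3)
print(S)   # ≈ [0.76, 0.19, 0.05]
```

In the methodology described above, such an estimate would be computed only for the parameters surviving the screening step, keeping the number of expensive multi-physics simulation runs manageable.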
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of