Derivative based sensitivity analysis of gamma index
Directory of Open Access Journals (Sweden)
Biplab Sarkar
2015-01-01
Full Text Available Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously be poor when compared with the smooth profile. Considering the smooth GTP an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the
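The pass test the abstract describes reduces to a small computation. Below is a minimal, hypothetical 1-D sketch (the profile data, function name, and normalization of the dose difference by the local reference dose are illustrative assumptions, not taken from the paper):

```python
import math

def gamma_index(ref, ev, dd_tol=0.01, dta_tol=1.0):
    """For each reference point (position_mm, dose), take the minimum over
    evaluated points of sqrt((dose_diff/DD)^2 + (distance/DTA)^2); the
    reference point passes when this gamma value is <= 1."""
    gammas = []
    for xr, dr in ref:
        g = min(
            math.hypot((de - dr) / (dd_tol * dr), (xe - xr) / dta_tol)
            for xe, de in ev
        )
        gammas.append(g)
    return gammas

# An evaluated profile identical to the reference but shifted by 0.5 mm
# stays within the 1%/1 mm criteria, so every reference point passes.
ref = [(float(x), 1.0 + 0.1 * x) for x in range(5)]
ev = [(x + 0.5, 1.0 + 0.1 * x) for x in range(5)]
print(all(g <= 1.0 for g in gamma_index(ref, ev)))  # True
```

Note that, exactly as the abstract points out, this computation keeps only the minimum gamma per point: a jagged evaluated curve and a smooth one can score identically.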
Sensitivity analysis (medlineplus.gov/ency/article/003741.htm): Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...
Beyond the GUM: variance-based sensitivity analysis in metrology
International Nuclear Information System (INIS)
Lira, I
2016-01-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
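The article's claim about linear models can be checked numerically. The sketch below (a toy model of my own, not from the paper) estimates the first-order Sobol index of a linear measurement model with a pick-freeze estimator and compares it with the value the GUM's law of propagation of uncertainties yields directly:

```python
import random

random.seed(0)

def model(x1, x2, a1=2.0, a2=1.0):
    # A linear measurement model Y = a1*X1 + a2*X2 (illustrative coefficients)
    return a1 * x1 + a2 * x2

n = 100_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
x2_fresh = [random.gauss(0, 1) for _ in range(n)]

# Pick-freeze estimator of the first-order Sobol index S1:
# keep X1 fixed, resample X2, then S1 = Cov(Y, Y') / Var(Y).
y = [model(a, b) for a, b in zip(x1, x2)]
y_pf = [model(a, b) for a, b in zip(x1, x2_fresh)]

m = sum(y) / n
var_y = sum((v - m) ** 2 for v in y) / n
cov = sum((u - m) * (v - m) for u, v in zip(y, y_pf)) / n
s1 = cov / var_y

# The law of propagation of uncertainties gives the same answer directly:
# S1 = a1^2*Var(X1) / (a1^2*Var(X1) + a2^2*Var(X2)) = 4/5 for a1=2, a2=1.
print(abs(s1 - 0.8) < 0.02)
```

For this linear model the sampling-based index simply recovers the normalized squared LPU term, which is the article's point; the advantage of sensitivity analysis appears only once the model is non-linear around the best estimates.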
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be employed.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier
Sensitivity analysis practices: Strategies for model-based inference
International Nuclear Information System (INIS)
Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca
2006-01-01
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA
Sensitivity analysis practices: Strategies for model-based inference
Energy Technology Data Exchange (ETDEWEB)
Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)
2006-10-15
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and easy to implement. These methods also allow the concept of factors importance to be defined rigorously, thus making the factors importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
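The OAT failure mode this abstract criticizes is easy to demonstrate on a toy model (my own example, not from the paper): a purely interactive function shows zero one-at-a-time effects at the nominal point, yet its output variance is substantial.

```python
import random

random.seed(1)

def model(x1, x2):
    # A purely interactive response: Y = X1 * X2 (illustrative toy model)
    return x1 * x2

# One-factor-at-a-time (OAT) around the nominal point (0, 0):
base = model(0.0, 0.0)
effect_x1 = model(0.5, 0.0) - base  # perturbing X1 alone changes nothing
effect_x2 = model(0.0, 0.5) - base  # perturbing X2 alone changes nothing

# Yet the output varies substantially when both factors move together.
n = 100_000
ys = [model(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n  # analytically Var(Y) = 1/9

print(effect_x1, effect_x2, round(var_y, 2))
```

An OAT screen at the nominal point would declare both factors unimportant; a variance-based analysis attributes all of Var(Y) to the interaction, which is exactly the kind of case where the authors argue OAT is unjustified for non-linear models.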
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
Sensitivity Analysis Based on Markovian Integration by Parts Formula
Directory of Open Access Journals (Sweden)
Yongsheng Hang
2017-10-01
Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variations brought about by changes of parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and give closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivity on Markovian models.
Cross-covariance based global dynamic sensitivity analysis
Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng
2018-02-01
For identifying the cross-covariance source of dynamic output at each time instant for a structural system involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of an input to the cross-covariance of the output. The GDS main effect indices and the GDS total effect indices can then be easily defined after the integration, and they are effective in identifying, respectively, the important inputs and the non-influential inputs for the cross-covariance of the output at each time instant. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effects of inputs on the output variance, and the second-order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. An MCS procedure and the Kriging surrogate method are developed to solve for the proposed GDS indices. Several examples illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
Weighting-Based Sensitivity Analysis in Causal Mediation Studies
Hong, Guanglei; Qin, Xu; Yang, Fan
2018-01-01
Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…
Improved Extreme Learning Machine based on the Sensitivity Analysis
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
The extreme learning machine (ELM) and its improved variants are weak on some points, such as computational complexity, learning error, and so on. After deep analysis, and drawing on the importance of hidden nodes in SVMs, a novel method of sensitivity analysis is proposed which matches people's cognitive habits. Based on this, an improved ELM is proposed which can remove hidden nodes before meeting the learning error and can efficiently manage the number of hidden nodes, so as to improve its performance. Comparative tests show it is better in learning time, accuracy, and so on.
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises to substitute the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis
Directory of Open Access Journals (Sweden)
S. Razmyan
2012-01-01
Full Text Available Discriminant analysis (DA) is used to obtain estimates of a discriminant function by minimizing group misclassifications, in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks' variables on the overall variance of the overlap in a DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables, to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and in different deposit variables are more significant than uncertainties in other banks' variables in decision making.
Survey of sampling-based methods for uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.
2006-01-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs; (ii) generation of samples from uncertain analysis inputs; (iii) propagation of sampled inputs through an analysis; (iv) presentation of uncertainty analysis results; and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
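Two of the techniques the survey lists, correlation analysis and rank transformations, can be illustrated with a short sketch (the toy data and helper names are mine, not from the survey). For a monotonic but strongly non-linear response, the rank-transformed correlation recovers the association that the raw correlation understates:

```python
import random

random.seed(2)

def rank(values):
    """Return 1-based ranks of a sequence, as used in rank transformations."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Monotonic but strongly non-linear input-output relation: Y = X^10.
xs = [random.uniform(0, 1) for _ in range(2000)]
ys = [x ** 10 for x in xs]

raw = pearson(xs, ys)                      # understates the dependence
ranked = pearson(rank(xs), rank(ys))       # Spearman-style rank correlation
print(raw < ranked)  # True
```

Applying the correlation to ranks rather than raw values is exactly the trick that makes sampling-based sensitivity measures robust to monotonic non-linearity.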
Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade
Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.
2010-01-01
This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of
Sensitivity based reduced approaches for structural reliability analysis
Indian Academy of Sciences (India)
captured by a safety-factor based approach due to the intricate nonlinear ... give the accounts of extensive research works which have been done over ... (ii) simulation based methods, for example, importance sampling (Bucher 1988; Mahade-.
Energy Technology Data Exchange (ETDEWEB)
Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance
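The quasi-Monte Carlo sampling mentioned above typically relies on low-discrepancy sequences. As an illustrative aside (not the authors' actual sampler), the radical-inverse building block of a Halton sequence takes only a few lines:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base;
    pairing bases 2 and 3 gives a 2-D Halton low-discrepancy sequence."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# First few 2-D quasi-random points; they fill the unit square more evenly
# than independent pseudo-random draws, which is why such sequences are
# preferred for sampling high-dimensional parameter spaces.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
print(points[0])  # (0.5, 0.3333333333333333)
```

In a study like the one above, each point of such a sequence would be rescaled to the ranges of the tunable parameters to define one ensemble member.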
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Structure and sensitivity analysis of individual-based predator–prey models
International Nuclear Information System (INIS)
Imron, Muhammad Ali; Gergs, Andre; Berger, Uta
2012-01-01
The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening approach with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analyses of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as the attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and of screening design methods in particular, during the early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
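A simplified version of the Morris screening idea, one-at-a-time steps from random base points, can be sketched as follows (the toy model and names are my own; the full Morris design uses economical trajectories rather than independent radial steps):

```python
import random

random.seed(3)

def model(p):
    # Toy response: p[0] strongly influential, p[1] weakly, p[2] inert
    return 10.0 * p[0] + 0.1 * p[1] + 0.0 * p[2]

def mu_star(model, k, r=50, delta=0.25):
    """Mean absolute elementary effect per factor, estimated from r radial
    one-at-a-time steps at random base points in the unit cube; used to
    rank factor influence cheaply before any expensive variance-based SA."""
    totals = [0.0] * k
    for _ in range(r):
        base = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = model(base)
        for i in range(k):
            stepped = list(base)
            stepped[i] += delta
            totals[i] += abs(model(stepped) - y0) / delta
    return [t / r for t in totals]

mu = mu_star(model, k=3)
print([round(m, 1) for m in mu])  # [10.0, 0.1, 0.0]
```

The ranking (factor 0 ≫ factor 1 ≫ factor 2) is what a screening design delivers: a cheap shortlist of influential parameters, at the cost of only (k+1)·r model runs in the full trajectory design.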
International Nuclear Information System (INIS)
Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei
2013-01-01
The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and make clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the various orders of derivative variance contributions into the form of an expectation via a kernel function, the proposed main and total sensitivity indices can be seen as a “by-product” of Sobol′s variance-based sensitivity analysis without any additional output evaluation. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.
System-Level Sensitivity Analysis of SiNW-bioFET-Based Biosensing Using Lockin Amplification
DEFF Research Database (Denmark)
Patou, François; Dimaki, Maria; Kjærgaard, Claus
2017-01-01
carry out for the first time the system-level sensitivity analysis of a generic SiNW-bioFET model coupled to a custom-design instrument based on the lock-in amplifier. By investigating a large parametric space spanning over both sensor and instrumentation specifications, we demonstrate that systemwide...
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Directory of Open Access Journals (Sweden)
Goutsias John
2010-05-01
Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region of the Taihu Lake basin, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with a deviation of less than 10% during 2005-2010. These results have direct reference value for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also proved that the sensitivity analysis was practicable for parameter adjustment, showed the adaptability of the model to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's promotion in China.
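The perturbation method used above for parameter ranking can be sketched as a one-at-a-time relative sensitivity index, S = (ΔY/Y)/(ΔX/X). A minimal Python illustration with a hypothetical runoff-like model (the function and parameter names are placeholders, not AnnAGNPS internals):

```python
def relative_sensitivity(model, base, name, delta=0.1):
    """One-at-a-time perturbation index:
    S = (dY / Y) / (dX / X), with X perturbed by +/- delta (10% by default).
    |S| near or above 1 flags a sensitive parameter."""
    y0 = model(base)
    hi = dict(base); hi[name] *= 1 + delta
    lo = dict(base); lo[name] *= 1 - delta
    return ((model(hi) - model(lo)) / y0) / (2 * delta)

# Hypothetical runoff response, for illustration only (not the AnnAGNPS code).
runoff = lambda p: p["CN"] ** 2 * p["LS"] * 0.01
base = {"CN": 70.0, "LS": 1.5}
for name in base:
    print(name, relative_sensitivity(runoff, base, name))
```

For this toy model the index is 2 for CN (quadratic dependence) and 1 for LS (linear), matching the intuition that CN dominates the runoff output.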
Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory
International Nuclear Information System (INIS)
Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.
2008-01-01
The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory
Energy Technology Data Exchange (ETDEWEB)
Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, D-76021 Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)
2008-07-01
The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and can produce accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates ordinary differential equation and stochastic differential equation models to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies them to the optimization of possible treatments as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored.
Sensitivity Analysis of features in tolerancing based on constraint function level sets
International Nuclear Information System (INIS)
Ziegler, Philipp; Wartzack, Sandro
2015-01-01
Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on the functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis tools are widely used in engineering design. However, current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges which arise from the specific framework in tolerancing. This paper presents an approach to adopting Sensitivity Analysis methods in current tolerance simulations via an interface module, which is based on level sets of constraint functions for the parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational cost of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use
Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis
Ou, Liuli; Liu, Lei; Dong, Shuai; Wang, Yongji
2014-01-01
To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametrical uncertainties, a new clearance framework based on structural singular value (μ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as the preprocess of the uncertain model to be analysed, to help engineers determine which uncertainties affect the stability of the closed loop system less significantly. By ig...
Toward a more robust variance-based global sensitivity analysis of model outputs
Energy Technology Data Exchange (ETDEWEB)
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
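Sobol's Monte Carlo estimation of first-order indices, referred to above, can be sketched with the standard A/B sample-matrix scheme. This is a textbook estimator, not the paper's adaptive procedure; the test function is an arbitrary additive model whose exact first-order indices are 0.2 and 0.8:

```python
import random

def sobol_first_order(f, d, n=40000, seed=0):
    """Monte Carlo estimate of Sobol' first-order indices S_i for a model
    f: [0,1]^d -> R, using two independent sample matrices A and B and the
    hybrid matrices AB_i (column i of A replaced by column i of B)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(d):
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        # Estimator of V_i = Var(E[f | x_i]) (Saltelli-style)
        Vi = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        S.append(Vi / var)
    return S

# Additive test function x1 + 2*x2 on [0,1]^2: exact indices are 1/5 and 4/5.
print(sobol_first_order(lambda x: x[0] + 2 * x[1], 2))
```

The practical issues the abstract raises are visible here: the estimates carry Monte Carlo noise of order 1/√n, which is exactly what an adaptive accuracy-assessment procedure is meant to control.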
Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.
Di Simone, Alessio
2016-06-25
Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.
Analysis of DNA methylation in Arabidopsis thaliana based on methylation-sensitive AFLP markers.
Cervera, M T; Ruiz-García, L; Martínez-Zapater, J M
2002-12-01
AFLP analysis using restriction enzyme isoschizomers that differ in their sensitivity to methylation of their recognition sites has been used to analyse the methylation state of anonymous CCGG sequences in Arabidopsis thaliana. The technique was modified to improve the quality of fingerprints and to visualise larger numbers of scorable fragments. Sequencing of amplified fragments indicated that detection was generally associated with non-methylation of the cytosine to which the isoschizomer is sensitive. Comparison of EcoRI/HpaII and EcoRI/MspI patterns in different ecotypes revealed that 35-43% of CCGG sites were differentially digested by the isoschizomers. Interestingly, the pattern of digestion among different plants belonging to the same ecotype is highly conserved, with the rate of intra-ecotype methylation-sensitive polymorphisms being less than 1%. However, pairwise comparisons of methylation patterns between samples belonging to different ecotypes revealed differences in up to 34% of the methylation-sensitive polymorphisms. The lack of correlation between inter-ecotype similarity matrices based on methylation-insensitive or methylation-sensitive polymorphisms suggests that whatever the mechanisms regulating methylation may be, they are not related to nucleotide sequence variation.
Zhong, Lin-sheng; Tang, Cheng-cai; Guo, Hua
2010-07-01
Based on the statistical data of natural ecology and social economy in Jinyintan Grassland Scenic Area in Qinghai Province in 2008, an evaluation index system for the ecological sensitivity of this area was established from the aspects of protected area rank, vegetation type, slope, and land use type. The ecological sensitivity of the sub-areas with higher tourism value and ecological function in the area was evaluated, and the tourism function zoning of these sub-areas was made by the technology of GIS and according to the analysis of eco-environmental characteristics and ecological sensitivity of each sensitive sub-area. It was suggested that the Jinyintan Grassland Scenic Area could be divided into three ecological sensitivity sub-areas (high, moderate, and low), three tourism functional sub-areas (restricted development ecotourism, moderate development ecotourism, and mass tourism), and six tourism functional sub-areas (wetland protection, primitive ecological sightseeing, agriculture and pasture tourism, grassland tourism, town tourism, and rural tourism).
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Pantazis, Yannis; Katsoulakis, Markos A; Vlachos, Dionisios G
2013-10-22
Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the FIM can allow to efficiently address
DEFF Research Database (Denmark)
Liu, Zhou; Chen, Zhe; Sun, Haishun Sun
2012-01-01
In order to prevent long term voltage instability and induced cascading events, a load shedding strategy based on the sensitivity of relay operation margin to load powers is discussed and proposed in this paper. The operation margin of the critical impedance backup relay is defined to identify the runtime emergent states of related system components. Based on sensitivity analysis between the relay operation margin and power system state variables, an optimal load shedding strategy is applied to adjust the emergent states in time, before the unwanted relay operation. Load dynamics is also taken into account to compensate the load shedding amount calculation, and multi-agent technology is applied for the whole strategy implementation. A test system is built in a real time digital simulator (RTDS) and has demonstrated the effectiveness of the proposed strategy.
Sensitivity analysis of a Pelton hydropower station based on a novel approach of turbine torque
International Nuclear Information System (INIS)
Xu, Beibei; Yan, Donglin; Chen, Diyi; Gao, Xiang; Wu, Changzhi
2017-01-01
Highlights: • A novel approach to the turbine torque is proposed. • A unified model captures the dynamic characteristics of Pelton hydropower stations. • Sensitivity analyses of hydraulic, mechanic and electric parameters are performed. • Numerical simulations show the sensitivity ranges of the above three parameter classes. - Abstract: Long-running operation of hydraulic turbine generator units may cause the values of hydraulic, mechanic or electric parameters to change gradually, which raises a new question: whether the operating stability of these units will change over the next thirty or forty years. This paper is an attempt to seek a relatively unified model for sensitivity analysis from three aspects: hydraulic parameters (turbine flow and turbine head), mechanic parameters (axis coordinates and axial misalignment) and electric parameters (generator speed and excitation current). First, a novel approach to the Pelton turbine torque is proposed, which makes connections between the hydraulic turbine governing system and the shafting system of the hydro-turbine generator unit. Moreover, the correctness of this approach is verified by comparison with three other models of hydropower stations. Second, the model is analyzed to obtain the sensitivity of the electric parameter (excitation current) and the mechanic parameters (axial misalignment, upper guide bearing rigidity, lower guide bearing rigidity, and turbine guide bearing rigidity), as well as the hydraulic parameters, with respect to the operating stability of the unit. In addition, some critical values and ranges are proposed. Finally, these results can provide a basis for the design and stable operation of Pelton hydropower stations.
Sensitivity analysis of an individual-based model for simulation of influenza epidemics.
Directory of Open Access Journals (Sweden)
Elaine O Nsoesie
Full Text Available Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions) with demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across the social networks investigated in this study; and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
Sensitivity analysis of dynamic characteristic of the fixture based on design variables
International Nuclear Information System (INIS)
Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa
2002-01-01
The sensitivity of structural natural frequencies to structural design parameters is dealt with. A typical fixture for vibration testing is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequencies to design parameters is analyzed by the matrix perturbation method. The results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures.
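The matrix perturbation method mentioned above rests on the first-order eigenvalue sensitivity dλ/dp = φᵀ(∂K/∂p)φ / (φᵀMφ) for a structure with stiffness matrix K and mass matrix M. A self-contained sketch on a hypothetical 2-DOF spring-mass chain (not the I-DEAS fixture model), checked against a finite-difference derivative:

```python
import math

def eig2(K, m):
    """Eigenpairs of the 2-DOF problem K x = lam * m * x (M = m*I),
    via the closed form for a symmetric 2x2 matrix; eigenvalues ascending."""
    a, b, c = K[0][0] / m, K[0][1] / m, K[1][1] / m
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    lams = [tr / 2 - disc, tr / 2 + disc]
    vecs = []
    for lam in lams:
        # (a - lam) v1 + b v2 = 0  =>  v = (b, lam - a), then normalize
        v = (b, lam - a) if abs(b) > 1e-12 else (1.0, 0.0)
        n = math.hypot(*v)
        vecs.append((v[0] / n, v[1] / n))
    return lams, vecs

def dlam_dk2(phi, m):
    """First-order perturbation: dlam/dk2 = phi^T (dK/dk2) phi / (phi^T M phi),
    with dK/dk2 = [[1, -1], [-1, 1]] for the chain K = [[k1+k2, -k2], [-k2, k2]]."""
    num = (phi[0] - phi[1]) ** 2
    den = m * (phi[0] ** 2 + phi[1] ** 2)
    return num / den

k1, k2, m = 100.0, 50.0, 1.0  # hypothetical stiffnesses (N/m) and mass (kg)
K = [[k1 + k2, -k2], [-k2, k2]]
for lam, phi in zip(*eig2(K, m)):
    print("lambda =", lam, " dlambda/dk2 =", dlam_dk2(phi, m))
```

The same formula is what a finite element package differentiates mode by mode; here the 2x2 case keeps the algebra visible.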
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
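The process-level idea proposed above can be sketched by perturbing all parameters belonging to one process together and measuring the relative output change, instead of varying parameters one by one. A minimal Python illustration with a hypothetical two-process model (names and groupings are invented for illustration):

```python
def process_sensitivity(model, params, processes, delta=0.05):
    """Process-level one-at-a-time sensitivity: scale every parameter
    belonging to a process by (1 +/- delta) together, and report the
    relative output change per relative process perturbation."""
    y0 = model(params)
    out = {}
    for proc, names in processes.items():
        hi = dict(params); lo = dict(params)
        for n in names:
            hi[n] *= 1 + delta
            lo[n] *= 1 - delta
        out[proc] = ((model(hi) - model(lo)) / y0) / (2 * delta)
    return out

# Hypothetical ecosystem model: a "growth" process (a*b) plus a "mortality"
# term (c). Purely illustrative, not a real process-based simulator.
model = lambda p: p["a"] * p["b"] + p["c"]
params = {"a": 2.0, "b": 3.0, "c": 4.0}
procs = {"growth": ["a", "b"], "mortality": ["c"]}
print(process_sensitivity(model, params, procs))
```

Grouping parameters by process shrinks the effective dimension of the analysis, which is the point the abstract makes about large parameter spaces.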
Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis
Directory of Open Access Journals (Sweden)
Lyu Kehong
2014-06-01
Full Text Available In helicopter transmission systems, it is important to monitor and track tooth damage evolution using a variety of sensors and detection methods. This paper develops a novel approach to sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann-Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and that it is effective in reducing the test cost and improving the system's reliability.
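The Mann-Kendall statistic and Sen slope estimator used above for CI selection and HI generation are both simple pairwise computations over a time series. A minimal Python sketch (the CI series below is made up for illustration):

```python
def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs.
    Large positive S indicates a monotonically increasing trend."""
    n = len(x)
    return sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )

def sen_slope(x):
    """Sen's slope estimator: the median of all pairwise slopes
    (x_j - x_i) / (j - i), a robust trend magnitude."""
    slopes = sorted(
        (x[j] - x[i]) / (j - i)
        for i in range(len(x) - 1) for j in range(i + 1, len(x))
    )
    m = len(slopes)
    return slopes[m // 2] if m % 2 else (slopes[m // 2 - 1] + slopes[m // 2]) / 2

ci = [0.10, 0.15, 0.13, 0.22, 0.25, 0.31]  # hypothetical CI time series
print(mann_kendall_s(ci), sen_slope(ci))
```

A CI whose S statistic is consistently large across damage levels tracks degradation monotonically, which is exactly the selection criterion the abstract describes.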
Sakamoto, Yuri; Uemura, Kohei; Ikuta, Takashi; Maehashi, Kenzo
2018-04-01
We have succeeded in fabricating a hydrogen gas sensor based on palladium-modified graphene field-effect transistors (FETs). The negative-voltage shift in the transfer characteristics was observed with exposure to hydrogen gas, which was explained by the change in work function. The hydrogen concentration dependence of the voltage shift was investigated using graphene FETs with palladium deposited by three different evaporation processes. The results indicate that the hydrogen detection sensitivity of the palladium-modified graphene FETs is strongly dependent on the palladium configuration. Therefore, the palladium-modified graphene FET is a candidate for breath analysis.
A new measure of uncertainty importance based on distributional sensitivity analysis for PSA
International Nuclear Information System (INIS)
Han, Seok Jung; Tak, Nam Il; Chun, Moon Hyun
1996-01-01
The main objective of the present study is to propose a new measure of uncertainty importance based on distributional sensitivity analysis. The new measure is developed to utilize a metric distance obtained from cumulative distribution functions (cdfs). The measure is evaluated for two cases: one in which the cdf is given by a known analytical distribution, and the other in which it is given by an empirical distribution generated by a crude Monte Carlo simulation. To study its applicability, the present measure has been applied to two different cases, and the results are compared with those of three existing methods. The present approach is a useful measure of uncertainty importance based on cdfs: it is simple, and uncertainty importance can be calculated without any complex procedure. On the basis of the results obtained in the present work, the method is recommended as a tool for the analysis of uncertainty importance
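A metric distance between cdfs, of the kind the proposed measure is built on, can be illustrated with the Kolmogorov (sup-norm) distance between two empirical distributions: the output cdf with all inputs uncertain versus the cdf with one input fixed. This sketch is a generic cdf distance, not necessarily the paper's exact measure; the samples are synthetic:

```python
import random

def ecdf(sample):
    """Empirical cdf of a sample, as a callable step function."""
    xs = sorted(sample)
    n = len(xs)
    def F(t):
        # fraction of sample points <= t (linear scan is fine for a sketch)
        return sum(1 for x in xs if x <= t) / n
    return F

def kolmogorov_distance(a, b):
    """Sup-norm distance between two empirical cdfs, evaluated at all
    sample points, where the maximum over the step functions occurs."""
    Fa, Fb = ecdf(a), ecdf(b)
    return max(abs(Fa(t) - Fb(t)) for t in a + b)

rng = random.Random(1)
base = [rng.gauss(0.0, 1.0) for _ in range(500)]      # all inputs uncertain
reduced = [rng.gauss(0.5, 1.0) for _ in range(500)]   # one input fixed (toy)
print(kolmogorov_distance(base, reduced))
```

The larger the shift in the output cdf when an input is fixed, the more important that input's uncertainty, which is the ranking logic behind a cdf-distance importance measure.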
Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis
Directory of Open Access Journals (Sweden)
Liuli Ou
2014-01-01
Full Text Available To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametrical uncertainties, a new clearance framework based on structural singular value (μ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as the preprocess of the uncertain model to be analysed, to help engineers determine which uncertainties affect the stability of the closed loop system less significantly. By ignoring these unimportant uncertainties, the calculation of μ can be simplified. Instead of analysing the effect of uncertainties on μ, which involves solving optimal problems repeatedly, a simpler stability analysis function which represents the effect of uncertainties on the closed loop poles is proposed. Based on this stability analysis function, Sobol's method, the most widely used global SA method, is extended and applied to the new clearance framework due to its suitability for systems with strong nonlinearity and input factors varying over large intervals, as well as input factors subject to random distributions. In this method, the sensitivity indices can be estimated via Monte Carlo simulation conveniently. An example is given to illustrate the efficiency of the proposed method.
Wang, Zhihui; Deisboeck, Thomas S.; Cristini, Vittorio
2014-01-01
There are two challenges that researchers face when performing global sensitivity analysis (GSA) on multiscale in silico cancer models. The first is increased computational intensity, since a multiscale cancer model generally takes longer to run than does a scale-specific model. The second problem is the lack of a best GSA method that fits all types of models, which implies that multiple methods and their sequence need to be taken into account. In this article, we therefore propose a sampling-based GSA workflow consisting of three phases – pre-analysis, analysis, and post-analysis – by integrating Monte Carlo and resampling methods with the repeated use of analysis of variance (ANOVA); we then exemplify this workflow using a two-dimensional multiscale lung cancer model. By accounting for all parameter rankings produced by multiple GSA methods, a summarized ranking is created at the end of the workflow based on the weighted mean of the rankings for each input parameter. For the cancer model investigated here, this analysis reveals that ERK, a downstream molecule of the EGFR signaling pathway, has the most important impact on regulating both the tumor volume and expansion rate in the algorithm used. PMID:25257020
Energy Technology Data Exchange (ETDEWEB)
Kelsey, Adrian [Health and Safety Laboratory, Harpur Hill, Buxton (United Kingdom)
2015-12-15
Uncertainty in model predictions of the behaviour of fires is an important issue in fire safety analysis in nuclear power plants. A global sensitivity analysis can help identify the input parameters or sub-models that have the most significant effect on model predictions. However, to perform a global sensitivity analysis using Monte Carlo sampling might require thousands of simulations to be performed and therefore would not be practical for an analysis based on a complex fire code using computational fluid dynamics (CFD). An alternative approach is to perform a global sensitivity analysis using an emulator. Gaussian process emulators can be built using a limited number of simulations and once built a global sensitivity analysis can be performed on an emulator, rather than using simulations directly. Typically reliable emulators can be built using ten simulations for each parameter under consideration, therefore allowing a global sensitivity analysis to be performed, even for a complex computer code. In this paper we use an example of a large scale pool fire to demonstrate an emulator based approach to global sensitivity analysis. In that work an emulator based global sensitivity analysis was used to identify the key uncertain model inputs affecting the entrainment rates and flame heights in large Liquefied Natural Gas (LNG) fire plumes. The pool fire simulations were performed using the Fire Dynamics Simulator (FDS) software. Five model inputs were varied: the fire diameter, burn rate, radiative fraction, computational grid cell size and choice of turbulence model. The ranges used for these parameters in the analysis were determined from experiment and literature. The Gaussian process emulators used in the analysis were created using 127 FDS simulations. The emulators were checked for reliability, and then used to perform a global sensitivity analysis and uncertainty analysis. Large-scale ignited releases of LNG on water were performed by Sandia National
DEFF Research Database (Denmark)
Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad
2018-01-01
of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
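The core of the Monte Carlo step described above is to perturb the criteria weights and observe how the MCDA output responds. A minimal sketch of that idea, using a weighted linear combination over invented weights and factor scores for two hypothetical sites (none of the numbers come from the paper):

```python
import random

random.seed(42)

# Illustrative criteria weights (e.g. slope, land cover, distance to faults)
# and factor scores for two locations on a 0-1 susceptibility scale.
base_weights = [0.5, 0.3, 0.2]
site_a = [0.95, 0.4, 0.3]
site_b = [0.5, 0.8, 0.7]

def weighted_sum(weights, scores):
    return sum(w * s for w, s in zip(weights, scores))

def perturb(weights, sd=0.1):
    """Add Gaussian noise to each weight, clip at zero, renormalise to sum 1."""
    noisy = [max(1e-9, w + random.gauss(0.0, sd)) for w in weights]
    total = sum(noisy)
    return [w / total for w in noisy]

# Monte Carlo simulation: how often does weight uncertainty flip the ranking?
n_runs = 10_000
flips = 0
for _ in range(n_runs):
    w = perturb(base_weights)
    if weighted_sum(w, site_a) < weighted_sum(w, site_b):
        flips += 1

print(f"rank reversal rate: {flips / n_runs:.3f}")
```

A high reversal rate signals that the susceptibility map is sensitive to the weight elicitation, which is exactly what the paper's error-propagation phase quantifies at map scale.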
Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach
Aguilar, José G.; Magri, Luca; Juniper, Matthew P.
2017-07-01
Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
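The central adjoint identity (the sensitivity of an eigenvalue to every entry of the operator, obtained from a single pair of direct and adjoint eigenvectors) can be illustrated on a toy 2x2 matrix rather than a thermoacoustic network. The matrix, eigenvectors and tolerances below are purely illustrative; for a real eigenvalue lambda with right eigenvector v and left eigenvector w, d(lambda)/dA_ij = w_i v_j / (w . v).

```python
import math

def eigs_2x2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Toy operator with eigenvalues 3 and 2.
A = [[2.0, 1.0],
     [0.0, 3.0]]

# Direct (right) and adjoint (left) eigenvectors for lambda = 3, found by
# hand: A v = 3 v with v = (1, 1); w^T A = 3 w^T with w = (0, 1).
v = [1.0, 1.0]
w = [0.0, 1.0]
norm = sum(wi * vi for wi, vi in zip(w, v))

# Adjoint sensitivity: d(lambda)/dA_ij = w_i * v_j / (w . v)
def adjoint_sens(i, j):
    return w[i] * v[j] / norm

# Finite-difference check for every entry of A: one adjoint solve gives all
# four sensitivities, whereas finite differences need one perturbed solve each.
eps = 1e-6
for i in range(2):
    for j in range(2):
        Ap = [row[:] for row in A]
        Ap[i][j] += eps
        fd = (eigs_2x2(Ap)[0] - eigs_2x2(A)[0]) / eps
        assert abs(adjoint_sens(i, j) - fd) < 1e-4, (i, j)

print("adjoint sensitivities match finite differences")
```

The computational advantage scales with the number of parameters: the adjoint pair is computed once, no matter how many base-state or feedback modifications are screened.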
Kanchanaketu, T; Sangduen, N; Toojinda, T; Hongtrakul, V
2012-04-13
Genetic analysis of 56 samples of Jatropha curcas L. collected from Thailand and other countries was performed using the methylation-sensitive amplification polymorphism (MSAP) technique. Nine primer combinations were used to generate MSAP fingerprints. When the data were interpreted as amplified fragment length polymorphism (AFLP) markers, 471 markers were scored. All 56 samples were classified into three major groups: γ-irradiated, non-toxic and toxic accessions. Genetic similarity among the samples was extremely high, ranging from 0.95 to 1.00, which indicated very low genetic diversity in this species. The MSAP fingerprint was further analyzed for DNA methylation polymorphisms. The results revealed differences in the DNA methylation level among the samples. However, the samples collected from saline areas and some species hybrids showed specific DNA methylation patterns. AFLP data were used, together with methylation-sensitive AFLP (MS-AFLP) data, to construct a phylogenetic tree, resulting in higher efficiency to distinguish the samples. This combined analysis separated samples previously grouped in the AFLP analysis. This analysis also distinguished some hybrids. Principal component analysis was also performed; the results confirmed the separation in the phylogenetic tree. Some polymorphic bands, involving both nucleotide and DNA methylation polymorphism, that differed between toxic and non-toxic samples were identified, cloned and sequenced. BLAST analysis of these fragments revealed differences in DNA methylation in some known genes and nucleotide polymorphism in chloroplast DNA. We conclude that MSAP is a powerful technique for the study of genetic diversity for organisms that have a narrow genetic base.
Sensitivity Analysis Based SVM Application on Automatic Incident Detection of Rural Road in China
Directory of Open Access Journals (Sweden)
Xingliang Liu
2018-01-01
Full Text Available Traditional automatic incident detection methods such as artificial neural networks, backpropagation neural networks, and Markov chains are not well suited to the incident detection problem of rural roads in China, which have a relatively high accident rate and slow incident response owing to their characteristically small traffic volumes. This study applies support vector machine (SVM) and parameter sensitivity analysis methods to build an incident detection algorithm for rural road conditions, based on real-time data collected in a field experiment. The sensitivity of four parameters (speed, front distance, vehicle group time interval, and free driving ratio) is analyzed, and the data sets of the two parameters with significant sensitivity are chosen to form the traffic state feature vector. The SVM and k-fold cross-validation (K-CV) methods are used to build the incident detection algorithm, which shows excellent detection accuracy (98.15% on the training data set and 87.5% on the testing data set). Therefore, the problem of slow incident response on rural roads in China could be solved to some extent.
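The k-fold cross-validation (K-CV) step used to tune the SVM is mechanical and worth making concrete: the data are shuffled, split into k folds, and each fold serves once as the test set. A dependency-free sketch of the fold construction (sample count and k are illustrative):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def k_fold_splits(n_samples, k, seed=0):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n_samples, k, seed)
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test

# Example: 10 samples, 5 folds -- every sample is tested exactly once.
for train, test in k_fold_splits(10, 5):
    assert len(test) == 2 and len(train) == 8
    assert not set(train) & set(test)
print("5-fold split OK")
```

In the paper's setting, each (train, test) pair would feed an SVM fit and an accuracy evaluation, and the k accuracies would be averaged to score a candidate parameter set.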
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-1D satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC
Sensitivity analysis and calibration of a dynamic physically based slope stability model
Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens
2017-06-01
Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
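The local one-at-a-time sensitivity analysis used to screen parameters above can be sketched on the classical infinite-slope factor of safety rather than the full coupled hydrological-geomechanical model. The formula, parameter values, and the +/-10% perturbation below are illustrative stand-ins, not the paper's model or calibration:

```python
import math

def factor_of_safety(p):
    """Infinite-slope factor of safety (no pore pressure), a simple stand-in
    for the full coupled slope stability model."""
    beta = math.radians(p["slope_deg"])
    phi = math.radians(p["friction_deg"])
    weight = p["unit_weight"] * p["depth"]  # overburden, kN/m^2
    resisting = p["cohesion"] + weight * math.cos(beta) ** 2 * math.tan(phi)
    driving = weight * math.sin(beta) * math.cos(beta)
    return resisting / driving

base = {"cohesion": 10.0,      # kPa
        "unit_weight": 19.0,   # kN/m^3
        "depth": 2.0,          # m
        "slope_deg": 30.0,
        "friction_deg": 35.0}

fs0 = factor_of_safety(base)

# Local one-at-a-time index: perturb each parameter by +/-10 % in turn and
# normalise the central-difference response of the factor of safety.
def oat_index(name, delta=0.10):
    hi = dict(base, **{name: base[name] * (1 + delta)})
    lo = dict(base, **{name: base[name] * (1 - delta)})
    return (factor_of_safety(hi) - factor_of_safety(lo)) / (2 * delta * fs0)

ranking = sorted(base, key=lambda n: -abs(oat_index(n)))
for name in ranking:
    print(f"{name:13s} {oat_index(name):+.3f}")
```

Parameters at the top of the ranking are the ones worth carrying forward into systematic sampling and calibration, which is the role the one-at-a-time screening plays in the study.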
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species and 784 reactions is demonstrated. The resulting skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and
Directory of Open Access Journals (Sweden)
Iulian N. BUJOREANU
2011-01-01
Full Text Available Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial motivation for implementing sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on its use and utility. Finally, a round-up conclusion brings us to the question of whether it is possible to generate the future and analyze it before it unfolds so that, when it happens, it brings less uncertainty.
Risk Assessment Method for Offshore Structure Based on Global Sensitivity Analysis
Directory of Open Access Journals (Sweden)
Zou Tao
2012-01-01
Full Text Available Based on global sensitivity analysis (GSA), this paper proposes a new risk assessment method for offshore structure design. The method first quantifies the significance of all random variables and their parameters; by comparing degrees of importance, minor factors can be neglected, which simplifies the subsequent global uncertainty analysis. Global uncertainty analysis (GUA) is an effective way to study the complexity and randomness of natural events. Since field-measured data and statistical results often contain inevitable errors and uncertainties that lead to inaccurate prediction and analysis, the risk in the design stage of offshore structures caused by uncertainties in environmental loads, sea level, and marine corrosion must be taken into account. In this paper, the multivariate compound extreme value distribution model (MCEVD) is applied to predict the extreme sea state of wave, current, and wind. The maximum structural stress and deformation of a jacket platform are analyzed and compared with different design standards. The calculation results sufficiently demonstrate the new risk assessment method's rationality and security.
Sensitivity Analysis Without Assumptions.
Ding, Peng; VanderWeele, Tyler J
2016-05-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
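The bounding factor described above has a closed form: writing RR_EU for the exposure-confounder association and RR_UD for the confounder-outcome association, unmeasured confounding can shift an observed risk ratio by at most BF = RR_EU * RR_UD / (RR_EU + RR_UD - 1). The sketch below also includes the related E-value (the strength both parameters must jointly reach to explain an estimate away entirely); the numerical inputs are illustrative.

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Ding & VanderWeele bounding factor: the maximum factor by which
    unmeasured confounding with these two parameters can shift a risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def adjusted_lower_bound(rr_obs, rr_eu, rr_ud):
    """Smallest true risk ratio consistent with the observed one."""
    return rr_obs / bounding_factor(rr_eu, rr_ud)

def e_value(rr_obs):
    """Minimum strength of association (on both sensitivity parameters) an
    unmeasured confounder needs to fully explain away an observed RR > 1."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

# Illustrative numbers: an observed RR of 2.0, and a hypothetical confounder
# associated with both exposure and outcome by RR = 2.
print(bounding_factor(2.0, 2.0))          # 4/3: the RR can fall to at most 1.5
print(adjusted_lower_bound(2.0, 2.0, 2.0))
print(e_value(2.0))                       # roughly 3.41
```

Note the asymmetry the abstract highlights: both sensitivity parameters, and in particular their maximum, must be large before the bounding factor becomes substantial.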
International Nuclear Information System (INIS)
Keissar, K; Gilad, O; Maestri, R; Pinna, G D; La Rovere, M T
2010-01-01
A novel approach for the estimation of baroreflex sensitivity (BRS) is introduced based on time–frequency analysis of the transfer function (TF). The TF method (TF-BRS) is a well-established non-invasive technique which assumes stationarity. This condition is difficult to meet, especially in cardiac patients. In this study, the classical TF was replaced with a wavelet transfer function (WTF) and the classical coherence was replaced with wavelet transform coherence (WTC), adding the time domain as an additional degree of freedom with dynamic error estimation. Error analysis and comparison between WTF-BRS and TF-BRS were performed using simulated signals with a known transfer function and added noise. Similar comparisons were performed for ECG and blood pressure signals, in the supine position, of 19 normal subjects, 44 patients with a history of previous myocardial infarction (MI) and 45 patients with chronic heart failure. This yielded an excellent linear association (R > 0.94, p < 0.001) for time-averaged WTF-BRS, validating the new method as consistent with a known method. The additional advantage of dynamic analysis of coherence and TF estimates was illustrated in two physiological examples of supine rest and change of posture, showing the evolution of BRS synchronized with its error estimations and sympathovagal balance.
Directory of Open Access Journals (Sweden)
Hui Wan
2015-06-01
Full Text Available Sensitivity analysis is a fundamental approach to identifying the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on the Sobol' method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions: (1) Nash–Sutcliffe efficiency (ENS); (2) water balance coefficient (WB); (3) peak discharge efficiency (EP); and (4) time to peak efficiency (ETP) were applied to the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Contrastive results for the sensitivity and interaction analysis were also illustrated among small, medium, and large flood magnitudes. Results demonstrated that the choice of objective function had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to various flood conditions. The results also indicated that pairwise parameter interactions made a non-negligible contribution to the model output variance. Parameters with high first or total order sensitivity indices presented correspondingly high second order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research lays a basic foundation for improving the understanding of complex model behavior.
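The Sobol' first-order and total-order indices referred to above are usually estimated with a pick-and-freeze sampling scheme. The sketch below applies the standard Saltelli first-order and Jansen total-order estimators to the Ishigami test function (a common benchmark, not the Liuxihe model), for which the analytical indices are known:

```python
import math
import random

random.seed(0)

# Ishigami test function on [-pi, pi]^3; analytical Sobol indices are known
# (S1 ~ 0.31, S2 ~ 0.44, S3 = 0, but x3 interacts with x1, so ST3 ~ 0.24).
def ishigami(x, a=7.0, b=0.1):
    return (math.sin(x[0]) + a * math.sin(x[1]) ** 2
            + b * x[2] ** 4 * math.sin(x[0]))

def sample_point():
    return [random.uniform(-math.pi, math.pi) for _ in range(3)]

N, d = 20_000, 3
A = [sample_point() for _ in range(N)]
B = [sample_point() for _ in range(N)]
fA = [ishigami(x) for x in A]
fB = [ishigami(x) for x in B]

mu = sum(fA) / N
var = sum((y - mu) ** 2 for y in fA) / N

S, ST = [], []
for i in range(d):
    # A with column i replaced by column i of B ("pick and freeze").
    fABi = [ishigami(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    # Saltelli (2010) first-order and Jansen total-order estimators.
    S.append(sum(yb * (yab - ya) for ya, yb, yab in zip(fA, fB, fABi)) / N / var)
    ST.append(0.5 * sum((ya - yab) ** 2 for ya, yab in zip(fA, fABi)) / N / var)

for i in range(d):
    print(f"x{i + 1}: S = {S[i]:.3f}, ST = {ST[i]:.3f}")
```

A gap between ST_i and S_i, as for x3 here, is exactly the interaction signature the study reports for pairwise parameter interactions. Handling *correlated* inputs, as the paper does, requires extensions beyond this independent-input estimator.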
Leurent, Baptiste; Gomes, Manuel; Faria, Rita; Morris, Stephen; Grieve, Richard; Carpenter, James R
2018-04-20
Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply-imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
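The accessible approach the tutorial focuses on, imputing under missing-at-random and then shifting the imputed values by a sensitivity parameter delta to represent MNAR, can be sketched as follows. The "trial" data, the simple Gaussian imputation model, and the delta grid are all illustrative assumptions, not the weight-loss trial from the paper:

```python
import random

random.seed(7)

# Illustrative quality-of-life scores in [0, 1]; None marks a missing response.
observed = [0.82, 0.74, None, 0.91, 0.55, None, 0.68, 0.77, None, 0.88]

def impute_mar(data):
    """One stochastic imputation under MAR: draw each missing value from a
    Gaussian around the observed mean (a crude stand-in for a proper model)."""
    seen = [x for x in data if x is not None]
    mu = sum(seen) / len(seen)
    sd = (sum((x - mu) ** 2 for x in seen) / len(seen)) ** 0.5
    return [x if x is not None else random.gauss(mu, sd) for x in data]

def delta_adjusted_mean(data, delta, n_imputations=20):
    """Multiply impute, shift only the imputed values down by delta (MNAR:
    non-responders assumed delta worse than MAR predicts), pool the means."""
    estimates = []
    for _ in range(n_imputations):
        imputed = impute_mar(data)
        adjusted = [x if orig is not None else x - delta
                    for orig, x in zip(data, imputed)]
        estimates.append(sum(adjusted) / len(adjusted))
    return sum(estimates) / len(estimates)

# Sensitivity analysis: sweep delta and watch the estimate degrade.
for delta in (0.0, 0.1, 0.2, 0.3):
    print(f"delta = {delta:.1f}  mean QoL = "
          f"{delta_adjusted_mean(observed, delta):.3f}")
```

In a real trial-based CEA the same sweep would be run through the full cost-effectiveness model, and the decision maker would inspect how large delta must be before the conclusion changes.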
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension based on Sobol indices, in terms of the Pearson correlation, which determines the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed in this method. The Pearson correlation coefficient proves to be an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
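Ranking design parameters by the Pearson correlation between a sampled parameter and an objective (such as sprung-mass acceleration) reduces to the textbook formula. The toy "suspension response" below is an invented linear relationship, used only to show the screening mechanics:

```python
import random

random.seed(3)

def pearson(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Toy response: sprung-mass acceleration dominated by spring stiffness and
# weakly reduced by the damper setting (purely illustrative relationship).
stiffness = [random.uniform(15.0, 25.0) for _ in range(500)]
damping = [random.uniform(1.0, 2.0) for _ in range(500)]
accel = [0.08 * k - 0.3 * c + random.gauss(0.0, 0.2)
         for k, c in zip(stiffness, damping)]

print(f"stiffness vs accel: r = {pearson(stiffness, accel):+.2f}")
print(f"damping   vs accel: r = {pearson(damping, accel):+.2f}")
```

The member whose parameter shows the largest |r| against the objective is the one whose redesign pays off most, which is how the correlation-based ranking feeds the suspension design.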
Status of XSUSA for sampling based nuclear data uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Zwermann, W.; Gallner, L.; Klein, M.; Krzykacz-Hausmann; Pasichnyk, I.; Pautz, A.; Velkov, K.
2013-01-01
In the present contribution, an overview of the sampling based XSUSA method for sensitivity and uncertainty analysis with respect to nuclear data is given. The focus is on recent developments and applications of XSUSA. These applications include calculations for critical assemblies, fuel assembly depletion calculations, and steady state as well as transient reactor core calculations. The analyses are partially performed in the framework of international benchmark working groups (UACSA - Uncertainty Analyses for Criticality Safety Assessment, UAM - Uncertainty Analysis in Modelling). It is demonstrated that particularly for full-scale reactor calculations the influence of the nuclear data uncertainties on the results can be substantial. For instance, for the radial fission rate distributions of mixed UO2/MOX light water reactor cores, the 2σ uncertainties in the core centre and periphery can reach values exceeding 10%. For a fast transient, the resulting time behaviour of the reactor power was covered by a wide uncertainty band. Overall, the results confirm the necessity of adding systematic uncertainty analyses to best-estimate reactor calculations. (authors)
Uncertainty and sensitivity analysis of electro-mechanical impedance based SHM system
International Nuclear Information System (INIS)
Rosiek, M; Martowicz, A; Uhl, T
2010-01-01
The paper deals with uncertainty and sensitivity analysis performed on FE simulations for an electro-mechanical impedance based SHM system. Measurement of the electro-mechanical impedance makes it possible to track changes in the mechanical properties of the monitored structure, and can therefore be used effectively to infer the presence of damage. Coupled FE simulations have been carried out to simultaneously consider both the structural dynamics and the piezoelectric properties of a simple beam with a bonded transducer. Several indexes have been used to assess damage growth. In the paper the results obtained with both deterministic and stochastic simulations are shown and discussed. First, the relationship between the size of the introduced damage and its indexes has been studied. Second, ranges of variation of selected model properties have been assumed in order to find relationships between them and the damage indexes. The most influential parameters have been identified. Finally, the overall propagation of the considered uncertainty has been assessed and the related histograms plotted to discuss the effectiveness and robustness of the tested damage indexes based on the measurement of electro-mechanical impedance.
Unger, André J. A.
2010-02-01
This work is the second installment in a two-part series, and focuses on object-oriented programming methods to implement an augmented-state variable approach to aggregate the PCS index and introduce the Bermudan-style call feature into the proposed CAT bond model. The PCS index is aggregated quarterly using a discrete Asian running-sum formulation. The resulting aggregate PCS index augmented-state variable is used to specify the payoff (principal) on the CAT bond based on reinsurance layers. The purpose of the Bermudan-style call option is to allow the reinsurer to minimize their interest rate risk exposure on making fixed coupon payments under prevailing interest rates. A sensitivity analysis is performed to determine the impact of uncertainty in the frequency and magnitude of hurricanes on the price of the CAT bond. Results indicate that while the CAT bond is highly sensitive to the natural variability in the frequency of landfalling hurricanes between El Niño and non-El Niño years, it remains relatively insensitive to uncertainty in the magnitude of damages. In addition, results indicate that the maximum price of the CAT bond is insensitive to whether it is engineered to cover low-frequency high-magnitude events in a 'high' reinsurance layer relative to high-frequency low-magnitude events in a 'low' reinsurance layer. Also, while it is possible for the reinsurer to minimize their interest rate risk exposure on the fixed coupon payments, the impact of this risk on the price of the CAT bond appears small relative to the natural variability in the CAT bond price, and consequently catastrophic risk, due to uncertainty in the frequency and magnitude of landfalling hurricanes.
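The quarterly aggregation and layered payoff can be sketched directly: the PCS index is accumulated with a discrete running sum, and the bond's principal loss is the part of the aggregate falling inside a reinsurance layer. The attachment/exhaustion points, face value, and index values below are illustrative, not the paper's calibration:

```python
def layer_loss(aggregate, attachment, exhaustion):
    """Loss to a reinsurance layer: the part of the aggregate index that
    falls between the attachment and exhaustion points."""
    return min(max(aggregate - attachment, 0.0), exhaustion - attachment)

def cat_bond_principal(quarterly_index, attachment, exhaustion, face=100.0):
    """Principal repaid at maturity after running-sum aggregation of the
    quarterly PCS index against a single reinsurance layer."""
    aggregate = 0.0
    for q in quarterly_index:
        aggregate += q  # discrete Asian running sum
    width = exhaustion - attachment
    return face * (1.0 - layer_loss(aggregate, attachment, exhaustion) / width)

# Illustrative: a quiet year vs. an active hurricane season, layer 40-80.
quiet = [5.0, 2.0, 8.0, 4.0]      # aggregate 19 -> below attachment
active = [5.0, 10.0, 35.0, 10.0]  # aggregate 60 -> half the layer burned
print(cat_bond_principal(quiet, 40.0, 80.0))   # 100.0
print(cat_bond_principal(active, 40.0, 80.0))  # 50.0
```

The sensitivity analysis in the paper then asks how the priced expectation of this payoff moves as the frequency and magnitude distributions of landfalling hurricanes are perturbed.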
Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu
2017-09-01
The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism where agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis of special cases provides a deep understanding of the roles of both the resource allocation parameter and the confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value (= 1.25 + 2/k_a, for k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics occurs when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling compared to mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of the model response is quantified by means of Latin hypercube sampling-based sensitivity analysis.
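Latin hypercube sampling, used above to probe parameter importance, stratifies each dimension into N equal bins and places exactly one sample in each bin per dimension. A dependency-free sketch (sample counts and seed are illustrative):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on the unit hypercube: for each dimension,
    shuffle the n strata and draw uniformly inside each assigned stratum."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # Transpose the per-dimension columns into per-sample points.
    return [list(point) for point in zip(*columns)]

points = latin_hypercube(10, 3)

# Property check: projected onto any axis, the 10 samples occupy all 10 bins.
for dim in range(3):
    bins = sorted(int(p[dim] * 10) for p in points)
    assert bins == list(range(10))
print("stratification OK")
```

Each point would then be mapped to a (confidence threshold, resource allocation, mindless-agent fraction) triple, and the spread of model responses across the design attributed to the three parameters.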
DEFF Research Database (Denmark)
Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos
2015-01-01
of dynamic study. The aim of this paper is to model the complete DC microgrid system in z-domain and perform sensitivity analysis for the complete system. A generalized modeling method is proposed and the system dynamics under different control parameters, communication topologies and communication speed...
Directory of Open Access Journals (Sweden)
Rehan Balqis M.
2016-01-01
Full Text Available Current practice in flood frequency analysis assumes that the stochastic properties of extreme floods follow those of stationary conditions. As the influences of human intervention and anthropogenic climate change on hydrometeorological variables are becoming evident in some places, there have been suggestions that nonstationary statistics would better represent the stochastic properties of extreme floods. The probabilistic estimation of non-stationary models, however, is surrounded by uncertainty related to the scarcity of observations and modelling complexities, hence the difficulty of projecting the future condition. In the face of an uncertain future and the subjectivity of model choices, this study attempts to demonstrate the practical implications of applying a nonstationary model and compares it with a stationary model in flood risk assessment. A fully integrated framework to simulate decision makers' behaviour in flood frequency analysis is thereby developed. The framework is applied to hypothetical flood risk management decisions and the outcomes are compared with those of known underlying future conditions. Uncertainty of the economic performance of the risk-based decisions is assessed through Monte Carlo simulations. Sensitivity of the results is also tested by varying the possible magnitude of future changes. The application provides quantitative and qualitative comparative results that support a preliminary analysis of whether the nonstationary model complexity should be applied to improve the economic performance of decisions. Results obtained from the case study show that the relative differences between competing models for all considered possible future changes are small, suggesting that stationary assumptions are preferred to a shift to nonstationary statistics for practical application of flood risk management. Nevertheless, the nonstationary assumption should also be considered during a planning stage in addition to the stationary assumption.
Directory of Open Access Journals (Sweden)
A. Nallathambi
2015-05-01
Full Text Available In this paper a Micro-Electromechanical System (MEMS) diaphragm-based pressure sensor for environmental applications is discussed. The main focus of this paper is to design, simulate and analyze the sensitivity of MEMS-based diaphragms using different structures to measure low and high pressure values. The simulation is done with a finite element tool, and specifications related to the maximum induced stress, deflection and sensitivity of the diaphragms have been analyzed using the software INTELLISUITE 8.7v. A change in pressure leads to bending of the diaphragm, which modifies the measured displacement between the substrate and the diaphragm. This change in displacement gives the measure of the pressure in that environment. The design studies can be used to improve the sensitivity of these devices. Among the diaphragm structures studied, the best displacement, sensitivity and stress output responses are obtained from the square diaphragm. The pressure range is from 0.6 MPa to 25 MPa, and the maximum displacement is 59 mm over a pressure range of 0 to 2 MPa. Its sensitivity is therefore 2.35 × 10^-12/Pa.
Directory of Open Access Journals (Sweden)
J. Li
2013-08-01
Full Text Available Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes of the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
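The Morris screening idea used in this abstract can be illustrated with a minimal elementary-effects routine. The function name, the unit-hypercube parameterization and the fixed step `delta` are illustrative assumptions, not the study's implementation; the point is only how one-at-a-time perturbations along random trajectories yield a per-parameter influence score mu*.

```python
import numpy as np

def morris_screening(f, n_params, n_traj=50, delta=0.1, seed=0):
    """Morris screening: estimate the mean absolute elementary effect mu*
    for each parameter of f (larger mu* => more influential).
    Inputs are assumed scaled to the unit hypercube [0, 1]^n_params."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)  # trajectory start point
        y0 = f(x)
        for i in rng.permutation(n_params):      # perturb each param once
            x2 = x.copy()
            x2[i] += delta
            y1 = f(x2)
            effects[i].append(abs(y1 - y0) / delta)
            x, y0 = x2, y1                       # continue from perturbed point
    return np.array([np.mean(e) for e in effects])
```

For a linear test function the elementary effects recover the coefficients exactly, which is a convenient sanity check before applying the routine to an expensive model.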
Interference and Sensitivity Analysis.
VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth
2014-11-01
Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine given to one person in protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.
Beccali, Marco; Cellura, Maurizio; Iudicello, Maria; Mistretta, Marina
2010-07-01
Though many studies concern the agro-food sector in the EU and Italy, and its environmental impacts, the literature is quite lacking in works regarding LCA application to citrus products. This paper represents one of the first studies on the environmental impacts of citrus products, aiming to suggest feasible strategies and actions to improve their environmental performance. In particular, it is part of a research project aimed at estimating the environmental burdens associated with the production of the following citrus-based products: essential oil, natural juice and concentrated juice from oranges and lemons. The life cycle assessment of these products, published in a previous paper, had highlighted significant environmental issues in terms of energy consumption, associated CO2 emissions, and water consumption. Starting from such results, the authors carry out an improvement analysis of the assessed production system, whereby sustainable scenarios for saving water and energy are proposed to reduce the environmental burdens of the examined production system. In addition, a sensitivity analysis to estimate the effects of the chosen methods is performed, giving data on the outcome of the study. Uncertainty related to allocation methods, secondary data sources, and initial assumptions on cultivation, transport modes, and waste management is analysed. The results of the performed analyses show that every assessed eco-profile is differently influenced by the uncertainty study. Different assumptions on initial data and methods showed considerable variations in the energy and environmental performances of the final products. Besides, the results show energy and environmental benefits that clearly indicate an improvement of the products' eco-profile, by reusing purified water for irrigation, using the railway mode for the delivery of final products, when possible, and adopting efficient technologies, such as mechanical vapour recompression, in the pasteurisation and
Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D
2015-01-01
A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.
2007-05-01
In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology; and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than that it is within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the
Sensitivity theory for reactor burnup analysis based on depletion perturbation theory
International Nuclear Information System (INIS)
Yang, Wonsik.
1989-01-01
The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. And second, DPT was extended to the constrained equilibrium cycle which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation
Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita
2017-05-01
Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
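The simple phenomenological cooling models the abstract contrasts with FE simulation reduce, in the most basic lumped-capacitance case, to Newtonian cooling. The sketch below shows how a measured temperature is inverted for a cooling time under that model; the function names, the rate parameter `k` and all numeric values are illustrative assumptions, not the forensic models actually used in practice.

```python
import math

def cooling_curve(t, T0, T_env, k):
    """Newtonian (lumped-capacitance) cooling: exponential relaxation of the
    body temperature from T0 toward the ambient temperature T_env."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

def estimate_cooling_time(T_measured, T0, T_env, k):
    """Invert the cooling curve for the elapsed cooling time, the quantity a
    death time estimate is ultimately based on."""
    return -math.log((T_measured - T_env) / (T0 - T_env)) / k
```

A sensitivity analysis like the one in the paper would then propagate uncertainty in `k`, `T0` and `T_env` through this inversion (or through the full FE model) to the estimated time.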
International Nuclear Information System (INIS)
Danilov, A A; Rudnev, S G; V Vassilevski, Yu; Kramarenko, V K; Nikolaev, D V; Smirnov, A V; Salamatova, V Yu
2013-01-01
In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.
International Nuclear Information System (INIS)
Zhai, Qingqing; Yang, Jun; Zhao, Yu
2014-01-01
Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performances are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed based on the minimized estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
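The scatter-plot partitioning idea can be sketched as follows: sort the samples by one input, split them into equal-count bins, and estimate the first-order index as Var(E[y|x_i])/Var(y) from the conditional bin means. The function name and equal-count binning are illustrative assumptions; the paper's optimal partition scheme and higher-order procedure are not reproduced here.

```python
import numpy as np

def first_order_index(x, y, n_bins=20):
    """Estimate the first-order (main-effect) Sobol index S_i for input x from
    a single bunch of samples, via space partitioning of the range of x."""
    order = np.argsort(x)
    bins = np.array_split(y[order], n_bins)          # equal-count partitions
    cond_means = np.array([b.mean() for b in bins])  # E[y | x in bin]
    weights = np.array([len(b) for b in bins]) / len(y)
    var_cond_mean = np.sum(weights * (cond_means - y.mean()) ** 2)
    return var_cond_mean / y.var()                   # Var(E[y|x]) / Var(y)
```

For y = x1 + x2 with independent uniform inputs, the true first-order index of either input is 0.5, which the estimator recovers from a single sample set, illustrating the "make full use of each model run" point.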
Sensitivity analysis of land unit suitability for conservation using a knowledge-based system.
Humphries, Hope C; Bourgeron, Patrick S; Reynolds, Keith M
2010-08-01
The availability of spatially continuous data layers can have a strong impact on selection of land units for conservation purposes. The suitability of ecological conditions for sustaining the targets of conservation is an important consideration in evaluating candidate conservation sites. We constructed two fuzzy logic-based knowledge bases to determine the conservation suitability of land units in the interior Columbia River basin using NetWeaver software in the Ecosystem Management Decision Support application framework. Our objective was to assess the sensitivity of suitability ratings, derived from evaluating the knowledge bases, to fuzzy logic function parameters and to the removal of data layers (land use condition, road density, disturbance regime change index, vegetation change index, land unit size, cover type size, and cover type change index). The amount and geographic distribution of suitable land polygons was most strongly altered by the removal of land use condition, road density, and land polygon size. Removal of land use condition changed suitability primarily on private or intensively-used public land. Removal of either road density or land polygon size most strongly affected suitability on higher-elevation US Forest Service land containing small-area biophysical environments. Data layers with the greatest influence differed in rank between the two knowledge bases. Our results reinforce the importance of including both biophysical and socio-economic attributes to determine the suitability of land units for conservation. The sensitivity tests provided information about knowledge base structuring and parameterization as well as prioritization for future data needs.
Directory of Open Access Journals (Sweden)
Guo Ruijiang
1995-01-01
Full Text Available A finite element based sensitivity analysis procedure is developed for buckling and postbuckling of composite plates. This procedure is based on the direct differentiation approach combined with the reference volume concept. A linear elastic material model and nonlinear geometric relations are used. The sensitivity analysis technique results in a set of linear algebraic equations which are easy to solve. The procedure developed provides the sensitivity derivatives directly from the current load and responses by solving the set of linear equations. Numerical results are presented and are compared with those obtained using the finite difference technique. The results show good agreement except at points near the critical buckling load, where discontinuities occur. The procedure is very efficient computationally.
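The direct-differentiation versus finite-difference comparison made in this abstract can be illustrated on a toy stand-in for the composite-plate FE model: the Euler buckling load of a simply supported rectangular strut. The geometry, the choice of thickness as the design variable, and all function names are assumptions for illustration only.

```python
import math

def buckling_load(E, b, t, L):
    """Euler critical load P_cr = pi^2 * E * I / L^2 for a simply supported
    strut with rectangular cross-section (I = b*t^3/12)."""
    I = b * t ** 3 / 12.0
    return math.pi ** 2 * E * I / L ** 2

def dP_dt_direct(E, b, t, L):
    """Direct (analytic) differentiation of P_cr with respect to thickness t."""
    return math.pi ** 2 * E * b * t ** 2 / (4.0 * L ** 2)

def dP_dt_fd(E, b, t, L, h=1e-6):
    """Central finite difference, the reference the paper compares against."""
    return (buckling_load(E, b, t + h, L) - buckling_load(E, b, t - h, L)) / (2 * h)
```

Away from the critical load the two derivatives agree closely; the paper's observed discrepancies arise only near the buckling point, where the response is non-smooth, a feature this smooth toy problem does not exhibit.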
Chemical kinetic functional sensitivity analysis: Elementary sensitivities
International Nuclear Information System (INIS)
Demiralp, M.; Rabitz, H.
1981-01-01
Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the i-th species concentration and α_j is the j-th system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. The derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research.
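Elementary sensitivities of the kind δu_i/δα_j can also be obtained by forward integration of the sensitivity equations alongside the state, shown here for the single first-order reaction A → B with rate constant k (so du/dt = -k·u and s = du/dk obeys ds/dt = -k·s - u). This is a minimal forward-sensitivity sketch, not the paper's Green's function method; the explicit Euler scheme and all values are assumptions for illustration.

```python
import math

def forward_sensitivity(k=0.5, u0=1.0, T=2.0, n=20000):
    """Integrate the state u and the elementary sensitivity s = du/dk for
    du/dt = -k*u, using ds/dt = -k*s - u with s(0) = 0 (explicit Euler).
    Analytically, u(T) = u0*exp(-k*T) and s(T) = -T*u0*exp(-k*T)."""
    dt = T / n
    u, s = u0, 0.0
    for _ in range(n):
        # tuple assignment evaluates both right-hand sides with the old u, s
        u, s = u + dt * (-k * u), s + dt * (-k * s - u)
    return u, s
```

The numerical sensitivity matches the closed-form -T·u0·exp(-kT), the same quantity a Green's function formulation would deliver for this scalar problem.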
High order depletion sensitivity analysis
International Nuclear Information System (INIS)
Naguib, K.; Adib, M.; Morcos, H.N.
2002-01-01
A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results of the calculation show that, in the case of the EK-10 fuel (low burn-up), the first order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order was found necessary to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.
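Why more perturbation orders are needed at high burn-up can be illustrated on single-nuclide depletion, N(t) = N0·exp(-σφt), expanded as a Taylor series in a cross-section perturbation δσ. All parameter values below are illustrative assumptions, not the EK-10/MTR-20 data; the point is that the expansion variable -φt·δσ grows with burn-up, so truncation error grows too.

```python
import math

def taylor_perturbed_density(sigma, phi, t, dsigma, order):
    """Taylor expansion in the cross-section perturbation dsigma of
    N(t) = exp(-(sigma + dsigma) * phi * t), with N0 = 1. The exact result
    is the base solution times exp(-phi*t*dsigma); we truncate that
    exponential at the requested perturbation order."""
    base = math.exp(-sigma * phi * t)
    x = -phi * t * dsigma  # expansion variable; grows with burn-up (phi*t)
    return base * sum(x ** m / math.factorial(m) for m in range(order + 1))
```

With a 5% cross-section perturbation, the first-order result stays within 1% of exact at low fluence (φt = 0.3) but misses by several percent at high fluence (φt = 5), where a fifth-order expansion restores the accuracy, qualitatively matching the abstract's findings.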
Hardy, Jason; Campbell, Mark; Miller, Isaac; Schimpf, Brian
2008-10-01
The local path planner implemented on Cornell's 2007 DARPA Urban Challenge entry vehicle Skynet utilizes a novel mixture of discrete and continuous path planning steps to facilitate a safe, smooth, and human-like driving behavior. The planner first solves for a feasible path through the local obstacle map using a grid based search algorithm. The resulting path is then refined using a cost-based nonlinear optimization routine with both hard and soft constraints. The behavior of this optimization is influenced by tunable weighting parameters which govern the relative cost contributions assigned to different path characteristics. This paper studies the sensitivity of the vehicle's performance to these path planner weighting parameters using a data driven simulation based on logged data from the National Qualifying Event. The performance of the path planner in both the National Qualifying Event and in the Urban Challenge is also presented and analyzed.
International Nuclear Information System (INIS)
Song, H X; Wang, X D; Ma, L Q; Cai, M Z; Cao, T Z
2006-01-01
By using a PSD as the sensitive element and a laser diode as the emitting element, laser displacement sensors based on the triangulation method have been widely used. From a design point of view, the sensor and its performance were studied. Two different sensor configurations are described. Determination of the dimensions, the sensing resolution and a comparison of the two different configurations are presented. The factors affecting the performance of the laser displacement sensor are discussed, and two methods that can eliminate the effects of dark current and ambient light are proposed.
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
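The combination this abstract proposes, Monte Carlo filtering on PCA-rotated variables, can be sketched minimally as below. The assumptions are that "behavioral" runs are those at or below an output threshold, and that a per-component mean-shift stands in for the Kolmogorov-Smirnov statistic usually used in MC filtering; the function name and all values are illustrative.

```python
import numpy as np

def mc_filter_pca_scores(inputs, outputs, threshold):
    """Monte Carlo filtering on principal components: split the sample into
    behavioral (output <= threshold) and non-behavioral sets, then report,
    per principal component, the absolute shift in mean score between the
    two sets. Large shifts flag the directions driving the output subset."""
    X = inputs - inputs.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal directions
    scores = X @ Vt.T                                 # rotated, decorrelated inputs
    behav = outputs <= threshold
    return np.abs(scores[behav].mean(axis=0) - scores[~behav].mean(axis=0))
```

Rotating into principal components addresses the dimension-independence limitation the abstract mentions: the filtering statistic is computed on decorrelated directions rather than on raw, possibly correlated inputs.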
Uncertainty and Sensitivity Analysis for an Ibuprofen Synthesis Model Based on Hoechst Path
DEFF Research Database (Denmark)
da Conceicao Do Carmo Montes, Frederico; Gernaey, Krist V.; Sin, Gürkan
2017-01-01
The pharmaceutical industry faces several challenges and barriers when implementing new or improving current pharmaceutical processes, such as competition from generic drug manufacturers and stricter regulations from the U.S. Food and Drug Administration and the European Medicines Agency. The demand for efficient and reliable models to simulate and design/improve pharmaceutical processes is therefore increasing. For the case of ibuprofen, a well-known anti-inflammatory drug, the existing models do not include its complete synthesis path, usually referring only to one out of a set of different reactions ... taking into consideration the effects of temperature, acidity, and the choice of the catalyst. Parameter estimation and uncertainty analysis were conducted on the kinetic model parameters using experimental data available in the literature. Finally, one factor at a time sensitivity analysis in the form of deviations ...
MOVES regional level sensitivity analysis
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...
Moment-based metrics for global sensitivity analysis of hydrological systems
Directory of Open Access Journals (Sweden)
A. Dell'Oca
2017-12-01
Full Text Available We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE); other model reduction techniques are fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model with a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist) the development of analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiments, uncertainty quantification and risk assessment.
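Sampling-based stand-ins for such moment-based metrics can be sketched as below: the sensitivity of the mean and variance of y to conditioning on one input is measured from equal-count bins. This replaces the authors' gPCE surrogate with plain binning, and the function name, normalizations and binning scheme are assumptions for illustration, not the paper's exact indices.

```python
import numpy as np

def moment_based_metrics(x, y, n_bins=20):
    """Moment-based GSA sketch: average absolute change of the first two
    conditional moments of y when input x is (approximately) fixed within a
    bin, normalized by the unconditional moments. A parameter can move the
    mean of y while leaving its variance untouched, or vice versa."""
    order = np.argsort(x)
    parts = np.array_split(y[order], n_bins)  # equal-count conditioning bins
    mean_idx = np.mean([abs(p.mean() - y.mean()) for p in parts]) / abs(y.mean())
    var_idx = np.mean([abs(p.var() - y.var()) for p in parts]) / y.var()
    return mean_idx, var_idx
```

For y = x1 + x2 with independent uniform inputs, conditioning on x1 shifts the conditional mean (index 0.25) and removes half the variance (index 0.5), showing how the two metrics separate distinct kinds of influence on the output pdf.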
Energy Technology Data Exchange (ETDEWEB)
Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)
2010-07-01
This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance; aerodynamic drag; motor efficiency; and vehicle mass. Drive cycles that were presented included: New York City Cycle (NYCC); urban dynamometer drive cycle; and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. In terms of aerodynamic drag, there was a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II
Energy Technology Data Exchange (ETDEWEB)
Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)
2016-10-15
In order to assess the uncertainty quantification of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of Anticipated Transient Without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution work are performed using the PAPIRUS program. The sensitivity analysis is carried out with the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the figure of merit (FoM) are picked out. The dominant parameters selected are closely related to the development of the ULOF event.
Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.
2018-04-01
Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF is provided, including kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with the Darcy-Weisbach friction formula (K-D) over single and multiple planes. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of K-M considering zero-upstream and uniform flow depth (so-called K-M1 and K-M2) and of the K-D formula to estimate tSHF over a single plane surface was assessed using several sets of experimental data collected from previous studies. The compatibility of the developed models to estimate tSHF over multiple planes considering the temporal rainfall distributions of the Natural Resources Conservation Service, NRCS (I, Ia, II, and III), is scrutinized by several real-world examples. The results obtained demonstrate that the main controlling parameters of tSHF through the K-D and K-M formulae are the length of the surface plane (mean sensitivity index T̂i = 0.72) and the flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and the initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i of 0.11 and 0.12, respectively). The significant role of the flow regime in the estimation of tSHF over a single plane and a cascade of planes is also demonstrated. Results reveal that the K-D formulation provides a more precise tSHF over the single plane surface, with an average percentage of error, APE, equal to 9.23% (the APE for the K-M1 and K-M2 formulae was 13.8% and 36.33%, respectively). The superiority of Manning-jointed formulae in estimation of tSHF is due to the incorporation of effects from different flow regimes as
Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance
Kato, H.; Ito, K.
2009-01-01
A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using the analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
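The response-surface step can be sketched with a plain Gaussian RBF network fitted to sampled evaluations of a cheap stand-in model. The toy "performance" function, sample sizes and shape parameter below are assumptions for illustration; the paper's network is trained on adaptively sampled RANS results instead.

```python
import numpy as np

# Gaussian RBF network as a lightweight response surface.
def rbf_matrix(A, B, eps):
    r2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-eps * r2)

def model(X):   # toy "intake performance" over two geometric parameters
    return np.sin(3*X[:, 0]) + 0.5*X[:, 1]**2

rng = np.random.default_rng(2)
Xc = rng.uniform(size=(150, 2))        # sampled training designs (here: random)
eps = 25.0
K = rbf_matrix(Xc, Xc, eps)
# Small ridge term keeps the kernel system well conditioned.
w = np.linalg.solve(K + 1e-6*np.eye(len(Xc)), model(Xc))

X_test = rng.uniform(size=(500, 2))
pred = rbf_matrix(X_test, Xc, eps) @ w
rmse = np.sqrt(np.mean((pred - model(X_test))**2))
print(rmse)   # small surrogate error at unseen designs
```

Once such a surrogate is in hand, the Sobol/ANOVA indices mentioned in the abstract can be estimated by sampling the surrogate instead of the expensive solver.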
Super-Efficiency and Sensitivity Analysis Based on Input-Oriented DEA-R
Directory of Open Access Journals (Sweden)
M. R. Mozaffari∗
2012-03-01
Full Text Available This paper suggests a method of finding super-efficiency scores and modification of input-oriented models for sensitivity analysis of decision making units. First, by using DEA-R (ratio-based DEA) models in the input orientation, the models of super-efficiency and also models of super-efficiency modification are suggested. Second, the worst-case scenarios are considered, where the efficiency of the test DMU is deteriorating while the efficiencies of the other DMUs are improving. Then, by combining these two ideas, a model is suggested which increases the super-efficiency score and modifies the change ranges in order to preserve the performance class. In the end, the super-efficiency and change interval of efficient decision making units for 23 branches of Zone 1 of the Islamic Azad University are calculated.
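For readers unfamiliar with super-efficiency, the sketch below solves the standard input-oriented radial (CCR) Andersen-Petersen model with a linear program: the evaluated DMU is excluded from its own reference set, so efficient units can score above 1. This is the classical radial form, not the DEA-R ratio-based form developed in the paper, and the three-DMU data set is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented Andersen-Petersen (radial CCR) super-efficiency of DMU k.

    X: (n, m) inputs, Y: (n, s) outputs. DMU k is excluded from its own
    reference set, so efficient units can score above 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    idx = [j for j in range(n) if j != k]
    # decision variables: [theta, lambda_j for j != k]
    c = np.zeros(1 + len(idx)); c[0] = 1.0          # minimize theta
    # sum_j lambda_j x_j <= theta * x_k  ->  -x_k*theta + X^T lambda <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X[idx].T])
    # sum_j lambda_j y_j >= y_k          ->  -Y^T lambda <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -Y[idx].T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(0, None)] * (1 + len(idx)))
    return res.fun

X = np.array([[1.0], [1.0], [2.0]])   # single input per DMU
Y = np.array([[2.0], [1.0], [1.0]])   # single output per DMU
print(super_efficiency(X, Y, 0))      # ~2.0: DMU 0 remains efficient
print(super_efficiency(X, Y, 1))      # ~0.5: DMU 1 is inefficient
```

Sensitivity analysis of the kind described in the abstract then asks how far the data of DMU k can deteriorate (and the others improve) before this score crosses 1.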
Centelleghe, Cinzia; Beffagna, Giorgia; Zanetti, Rossella; Zappulli, Valentina; Di Guardo, Giovanni; Mazzariol, Sandro
2016-09-01
Cetacean Morbillivirus (CeMV) has been identified as the most pathogenic virus for cetaceans. Over the past three decades, this RNA virus has caused several outbreaks of lethal disease in odontocetes and mysticetes worldwide. Isolation and identification of CeMV RNA is very challenging in whales because of the poor preservation status frequently shown by tissues from stranded animals. Nested reverse transcription polymerase chain reaction (nested RT-PCR) is used instead of conventional RT-PCR when it is necessary to increase the sensitivity and the specificity of the reaction. This study describes a new nested RT-PCR technique useful to amplify small amounts of the cDNA copy of Cetacean morbillivirus (CeMV) when it is present in scant quantity in whales' biological specimens. This technique was used to analyze different tissues (lung, brain, spleen and other lymphoid tissues) from one seal under human care and seven cetaceans stranded along the Italian coastline between October 2011 and September 2015. A well-characterized, 200 base pair (bp) fragment of the dolphin Morbillivirus (DMV) haemagglutinin (H) gene, obtained by nested RT-PCR, was sequenced and used to confirm DMV positivity in all the eight marine mammals under study. In conclusion, this nested RT-PCR protocol can represent a sensitive detection method to identify CeMV-positive, poorly preserved tissue samples. Furthermore, this is also a rather inexpensive molecular technique, relatively easy to apply. Copyright © 2016 Elsevier B.V. All rights reserved.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
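The Monte Carlo step over criteria weights can be sketched as follows: perturb the nominal weights, renormalize, recompute the weighted susceptibility score per map cell, and take the spread across runs as the weight-induced uncertainty. The criteria scores, nominal weights and perturbation range are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Criteria scores for three hypothetical map cells (rows) on four criteria
# (columns), plus nominal AHP-style weights; all values are illustrative.
scores = np.array([[0.9, 0.2, 0.5, 0.7],
                   [0.4, 0.8, 0.6, 0.3],
                   [0.6, 0.5, 0.9, 0.2]])
w0 = np.array([0.4, 0.3, 0.2, 0.1])

rng = np.random.default_rng(7)
n_runs = 5000
susceptibility = np.empty((n_runs, len(scores)))
for r in range(n_runs):
    w = w0 * rng.uniform(0.8, 1.2, size=4)   # +/-20% weight perturbation
    w /= w.sum()                             # weights must stay normalized
    susceptibility[r] = scores @ w

# Spread of each cell's score across runs = uncertainty attributable to weights.
print(susceptibility.std(axis=0))
```

Cells whose scores vary strongly across runs are the ones whose susceptibility class is most sensitive to the subjective weighting, which is exactly what the decomposition in the abstract targets.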
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.
Missing data in clinical trials: control-based mean imputation and sensitivity analysis.
Mehrotra, Devan V; Liu, Fang; Permutt, Thomas
2017-09-01
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations. Copyright © 2017 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Farshad Hedayati Dezfuli
2016-01-01
Full Text Available Fiber-reinforced elastomeric isolators (FREIs) are a new type of elastomeric base isolation system. Producing FREIs in the form of long laminated pads and cutting them to the required size significantly reduces the time and cost of the manufacturing process. Due to the lack of adequate information on the performance of FREIs in bonded applications, the goal of this study is to assess the performance sensitivity of 1/4-scale carbon-FREIs based on experimental tests. The scaled carbon-FREIs are manufactured using a fast cold-vulcanization process. The effects of several factors, including the vertical pressure, the lateral cyclic rate, the number of rubber layers, and the thickness of the carbon fiber-reinforced layers, on the cyclic behavior of the rubber bearings are explored. Results show that the effect of vertical pressure on the lateral response of the base isolators is negligible. However, decreasing the cyclic loading rate increases the lateral flexibility and the damping capacity. Additionally, carbon fiber-reinforced layers can be considered as a minor source of energy dissipation.
Sensitivity Analysis of Viscoelastic Structures
Directory of Open Access Journals (Sweden)
A.M.G. de Lima
2006-01-01
Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
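The verification strategy in the last sentence, comparing analytic response derivatives against first-order finite differences, can be illustrated on a single-degree-of-freedom receptance, a deliberately simplified stand-in for the sandwich-plate FRF (the parameter values are assumptions):

```python
import numpy as np

# Single-DOF receptance H(w) = 1/(k - m w^2 + i w c). The analytic
# sensitivity with respect to stiffness is dH/dk = -H^2; we check it
# against a first-order finite-difference approximation.
m, c, k = 1.0, 0.05, 4.0
w = np.linspace(0.1, 4.0, 50)

def H(k):
    return 1.0 / (k - m*w**2 + 1j*w*c)

analytic = -H(k)**2
dk = 1e-6
finite_diff = (H(k + dk) - H(k)) / dk
print(np.max(np.abs(analytic - finite_diff)))  # small discrepancy over the band
```

The same comparison, with the design variable being a layer thickness and H replaced by a finite element FRF, is what the paper reports for the three-layer plate.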
A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines
Directory of Open Access Journals (Sweden)
Zhe Lu
2017-08-01
Full Text Available Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and the pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively assessed the effects of uncertainties on leak detectability of a commonly used real-time transient model-based leak detection system. Uncertainties in fluid properties, field sensors, and the data acquisition system were evaluated. Errors were introduced into the input variables of the leak detection system individually and collectively, and the changes in leak detectability caused by the uncertainties were quantified using simulated leaks. This study provides valuable quantitative results contributing towards a better understanding of how real-world uncertainties affect leak detection. A general ranking of the importance of the uncertainty sources was obtained: from high to low it is time skew, bulk modulus error, viscosity error, and polling time. It was also shown that inertia-dominated pipeline systems were less sensitive to uncertainties compared to friction-dominated systems.
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied over gait dynamics and kinematics data of nine subjects and with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated only from a force-plate and kinematics data without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
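Because the equations of motion are linear in the inertial parameters, tau = Y(q, qd, qdd) phi, a simple per-parameter sensitivity index is the contribution of that parameter's regressor column to the total moment. The sketch below uses synthetic data and a hypothetical threshold; it mirrors the idea of the paper's indices, not their exact definition.

```python
import numpy as np

# tau = Y @ phi with Y the regressor matrix (robotics identification form).
# Relative sensitivity per parameter: ||Y[:, i]|| * |phi_i| / ||tau||.
rng = np.random.default_rng(3)
n_samples, n_params = 200, 6
Y = rng.normal(size=(n_samples, n_params))          # synthetic regressor
phi = np.array([5.0, 4.0, 0.01, 0.02, 3.0, 0.005])  # masses large, some inertias tiny
tau = Y @ phi

sensitivity = np.abs(phi) * np.linalg.norm(Y, axis=0) / np.linalg.norm(tau)
influential = sensitivity > 0.05   # hypothetical cut-off for "influential"
print(influential)                 # the large "mass-like" parameters stand out
```

In the paper the analogous indices, computed from measured gait data, are what flag 76 of the 150 parameters as not influential.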
Probabilistic sensitivity analysis of biochemical reaction systems.
Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John
2009-09-07
Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
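Variance-based indices of the kind advocated here can be estimated with the standard pick-freeze scheme: first-order indices measure each parameter's own contribution to output variance, and total-order indices additionally capture interactions. The toy "reaction rate" below is a hypothetical stand-in, not the MAPK cascade model of the paper.

```python
import numpy as np

# Toy rate law of three parameters on [0, 1]: k3 is inert, k1 and k2 interact.
def rate(k):   # k: (N, 3) parameter samples
    return k[:, 0] + k[:, 0] * k[:, 1]

rng = np.random.default_rng(5)
N = 20000
A = rng.uniform(size=(N, 3))
B = rng.uniform(size=(N, 3))
fA, fB = rate(A), rate(B)
V = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(3):
    ABi = A.copy(); ABi[:, i] = B[:, i]   # "freeze" all inputs except the i-th
    fABi = rate(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / V)        # first-order index
    ST.append(0.5 * np.mean((fA - fABi)**2) / V)    # total-order index (Jansen)
print(np.round(S1, 2), np.round(ST, 2))
```

The gap ST - S1 for k1 and k2 exposes their interaction (a "high-order effect" in the abstract's terminology), while both indices vanish for the inert k3; the cost is many model evaluations, which is the computational price the authors acknowledge.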
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
With the fast growth of the Chinese economy, more and more capital will be invested in environmental projects. How to select environmental investment projects (alternatives) to obtain the best environmental quality and economic benefits is an important problem for decision makers. The purpose of this paper is to develop a decision-making model to rank a finite number of alternatives with several, sometimes conflicting, criteria. A model for ranking the projects of municipal sewage treatment plants is proposed using experts' information and the data of real projects, and the ranking result is given based on the PROMETHEE method. Furthermore, by means of the concept of weight stability intervals (WSI), the sensitivity of the ranking results to the criteria values and to changes in the criteria weights is discussed. The results show that some criteria, such as "proportion of benefit to project cost", influence the ranking of alternatives strongly, while others do not; the influence comes not only from the criterion values but also from changes in the criterion weights. Criteria such as "proportion of benefit to project cost" are therefore key criteria for ranking the projects, and decision makers must treat them with caution.
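A weight stability interval can be found by sweeping a criterion's weight and watching for the point where the PROMETHEE II ranking flips. The sketch below uses the "usual" (strict-dominance) preference function and three invented projects on two criteria; it is a toy, not the paper's data or preference functions.

```python
import numpy as np

# Three hypothetical sewage-plant projects scored on two criteria.
scores = np.array([[7.0, 5.0],    # project A: criterion 1, criterion 2
                   [3.0, 6.0],    # project B
                   [5.0, 4.0]])   # project C

def net_flows(weights):
    """PROMETHEE II net outranking flows with the 'usual' preference function."""
    n = len(scores)
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref = (scores[a] > scores[b]).astype(float)
            anti = (scores[b] > scores[a]).astype(float)
            phi[a] += weights @ (pref - anti)
    return phi / (n - 1)

# Sweep the weight of criterion 1 and watch the ranking change.
for w in (0.2, 0.5):
    print(w, np.argsort(-net_flows(np.array([w, 1 - w]))))
# With this data, project A leads only once w1 exceeds 1/3: the interval
# (1/3, 1] is the weight stability interval for A's top rank.
```

A criterion whose stability interval is narrow around the nominal weight is exactly the kind of "key criterion" the abstract warns decision makers about.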
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
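The complex-variable approach used to verify the DYMORE sensitivities is the classic complex-step derivative: f'(x) = Im(f(x + ih))/h, which avoids the subtractive cancellation of finite differences and so works even with a tiny step. The function below is a hypothetical stand-in for a structural response, not anything from FUN3D or DYMORE.

```python
import numpy as np

# Complex-step derivative: evaluate f at a complex-perturbed point and
# read the derivative off the imaginary part; no cancellation error.
def f(x):
    return np.exp(x) * np.sin(x)    # toy "structural response"

x, h = 1.3, 1e-30
complex_step = np.imag(f(x + 1j*h)) / h
analytic = np.exp(x) * (np.sin(x) + np.cos(x))
print(complex_step - analytic)      # agreement to machine precision
```

Because the step can be made arbitrarily small without loss of accuracy, complex-step results serve as a near-exact reference against which adjoint sensitivities are compared, as in the paper's verification of the FUN3D/DYMORE interfaces.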
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
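The redistribution idea can be sketched directly on arm-level counts: each arm's missing observations are split between favorable and unfavorable outcomes according to a chosen fraction, and the adjusted proportions are compared across a sweep of splits. The counts below are hypothetical, and this sketch omits the paper's covariance estimates and Mantel-Haenszel adjustments.

```python
# Sensitivity analysis for possibly-MNAR missing dichotomous outcomes.
def adjusted_proportion(favorable, unfavorable, missing, p_fav_missing):
    """Proportion favorable after assigning missing counts with split p_fav_missing."""
    n = favorable + unfavorable + missing
    return (favorable + missing * p_fav_missing) / n

treat = dict(favorable=60, unfavorable=25, missing=15)
ctrl = dict(favorable=45, unfavorable=40, missing=15)

# Sweep: missing treated patients counted increasingly as failures while
# missing controls are counted increasingly as successes (worst case for drug).
for p in (1.0, 0.5, 0.0):
    diff = (adjusted_proportion(**treat, p_fav_missing=p)
            - adjusted_proportion(**ctrl, p_fav_missing=1 - p))
    print(p, round(diff, 3))
```

Reporting the treatment difference across the whole sweep shows how strongly the conclusion depends on untestable assumptions about the missing data, which is the purpose of the closed-form method in the abstract.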
Directory of Open Access Journals (Sweden)
Florian Schumacher
2016-01-01
Full Text Available Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth’s interior remains of high interest in Earth sciences. Here, we give a description from a user’s and programmer’s perspective of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).
Corominas, Lluís; Neumann, Marc B
2014-10-01
Urban wastewater systems discharge organic matter, nutrients and other pollutants (including toxic substances) to receiving waters, even after removing more than 90% of incoming pollutants from human activities. Understanding their interactions with the receiving water bodies is essential for the implementation of ecosystem-based management strategies. Using mathematical modeling and sensitivity analysis we quantified how 19 operational variables of an urban wastewater system affect river water quality. The mathematical model of the Congost system (in the Besòs catchment, Spain) characterizes the dynamic interactions between sewers, storage tanks, wastewater treatment plants and the river. The sensitivity analysis shows that the use of storage tanks for peak shaving and the use of a connection between two neighboring wastewater treatment plants are the most important factors influencing river water quality. We study how the sensitivity of the water quality variables towards changes in the operational variables varies along the river due to discharge locations and river self-purification processes. We demonstrate how to use the approach to identify interactions and how to discard non-influential operational variables. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey
2017-02-01
Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic (ROC) curve improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
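The idea of variably-sized windows determined by inter-wavenumber correlation can be sketched as a simple greedy grouping: extend a window while neighboring channels remain highly correlated across spectra, and close it when they decorrelate. The synthetic spectra and the 0.9 threshold below are assumptions for illustration.

```python
import numpy as np

# Synthetic "spectra": 60 measurements over 40 channels, with one strongly
# correlated band (channels 0-19) followed by uncorrelated channels.
rng = np.random.default_rng(4)
n_spectra, n_wavenumbers = 60, 40
base = rng.normal(size=(n_spectra, 1))
spectra = np.hstack([
    base + 0.05*rng.normal(size=(n_spectra, 20)),   # correlated band
    rng.normal(size=(n_spectra, 20)),               # independent channels
])

corr = np.corrcoef(spectra.T)
windows, start = [], 0
for i in range(1, n_wavenumbers):
    if corr[i, i-1] < 0.9:          # neighbor decorrelated: close the window
        windows.append((start, i - 1))
        start = i
windows.append((start, n_wavenumbers - 1))
print(windows[0], len(windows))     # first window spans the correlated band
```

Feature selection (stepwise regression or LASSO, as in the paper) then operates on these windows rather than on individual wavenumbers, which reduces redundancy among highly correlated channels.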
Sensitivity analysis of limit state functions for probability-based plastic design
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure with random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The relative importance of the various uncertainties involved in the computational process on the resulting bounds of P_f is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
(abstract) Sensitivity to Forest Biomass Based on Analysis of Scattering Mechanism
Way, JoBea; Bachman, Jennifer E.; Paige, David A.
1993-01-01
The estimation of forest biomass on a global scale is an important input to global climate and carbon cycle models. Remote sensing using synthetic aperture radar offers a means to obtain such a data set. Although it has been clear for some time that radar signals penetrate forest canopies, only recently has it been demonstrated that these signals are indeed sensitive to biomass. Inasmuch as the majority of a forest's biomass is in the trunks, it is important that the radar is sensing the trunk biomass as opposed to the branch or leaf biomass. In this study we use polarimetric AIRSAR P- and L-band data from a variety of forests to determine if the radar penetrates to the trunk by examining the scattering mechanism as determined using van Zyl's scattering interaction model, and the levels at which saturation occurs with respect to sensitivity of radar backscatter to total biomass. In particular, the added sensitivity of P-band relative to L-band is addressed. Results using data from the Duke Forest in North Carolina, the Bonanza Creek Experimental Forest in Alaska, Shasta Forest in California, the Black Forest in Germany, the temperate/boreal transition forests in northern Michigan, and coastal forests along the Oregon Transect will be presented.
TEMAC, Top Event Sensitivity Analysis
International Nuclear Information System (INIS)
Iman, R.L.; Shortencarier, M.J.
1988-01-01
1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement.
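A toy illustration of a cut-set-based top event, far below TEMAC's size limits: exact top-event probability by inclusion-exclusion over minimal cut sets, plus a finite-difference Birnbaum importance as a base-event sensitivity. The cut sets and probabilities below are invented for the example:

```python
import itertools
import numpy as np

# Hypothetical top event with three minimal cut sets over five basic events.
cut_sets = [(0, 1), (1, 2), (3, 4)]
p = np.array([0.01, 0.02, 0.05, 0.03, 0.04])   # basic-event probabilities

def top_probability(p, cut_sets):
    """Exact top-event probability by inclusion-exclusion over cut sets,
    assuming independent basic events (fine for tiny problems; TEMAC-scale
    problems need smarter Boolean algebra)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in itertools.combinations(cut_sets, k):
            events = set().union(*combo)        # union of basic events
            total += (-1) ** (k + 1) * np.prod(p[list(events)])
    return total

def birnbaum(p, cut_sets, i, h=1e-6):
    """Birnbaum importance: dP(top)/dp_i by central difference."""
    up, dn = p.copy(), p.copy()
    up[i] += h
    dn[i] -= h
    return (top_probability(up, cut_sets) - top_probability(dn, cut_sets)) / (2 * h)

print(top_probability(p, cut_sets))
print([round(birnbaum(p, cut_sets, i), 6) for i in range(5)])
```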
Maternal sensitivity: a concept analysis.
Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae
2008-11-01
The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.
A hybrid approach for global sensitivity analysis
International Nuclear Information System (INIS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-01-01
Distribution based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained, to some extent, indicate that the proposed approach can be utilized for sensitivity analysis of large scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.
Sensitivity Analysis in Ictneo
Energy Technology Data Exchange (ETDEWEB)
Bielza, C.; Rios-Insua, S.; Gomez, M.; Fernandez del Pozo, J. A.
2001-07-01
Neonatal jaundice is a common medical problem which arises in a healthy newborn because of the breakdown of excess red blood cells in his system. Bilirubin accumulates when the liver does not excrete it at a normal rate. Pathological jaundice may cause potentially serious central nervous system damage. Current recommendations try to balance out the risks of under-treatment and over-treatment, but the current protocol does not delimit clearly when it is best to start each treatment and which treatment to administer. As a consequence of this difficulty, among others, the Neonatology Service of Gregorio Maranon Hospital in Madrid suggested the development of a decision support system, IctNeo, to provide the doctors with an automated problem-solving tool as an aid to improving jaundice management. The development of the system has been very complex and time-consuming, both in the structuring of the diagram and in the elicitation of probabilities and utilities. IctNeo finds a maximum expected utility treatment strategy based on an influence diagram (ID). Due to the computational intractability of its large ID, IctNeo incorporates some procedures into the standard evaluation algorithm. A user-friendly interface allows for data entry of a patient already treated by the doctor. Then, we can compare the system recommendations and the doctor's decisions in order to draw conclusions. (Author) 5 refs.
Variance-based sensitivity analysis of BIOME-BGC for gross and net primary production
Raj, R.; Hamm, N.A.S.; van der Tol, C.; Stein, A.
2014-01-01
Parameterization and calibration of a process-based simulator (PBS) is a major challenge when simulating gross and net primary production (GPP and NPP). The large number of parameters makes the calibration computationally expensive and is complicated by the dependence of several parameters on other
Directory of Open Access Journals (Sweden)
Huawei Zhou
2016-10-01
Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC) dams that couples the finite element method (FEM) with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith, based on the actual characteristics of an RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.
International Nuclear Information System (INIS)
Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man
2014-01-01
When using perturbation theory, the uncertainty of the response can be estimated from a single transport simulation, and therefore it requires only a small computational load. However, it has the disadvantage that the computational methodology must be modified whenever a different response type, such as multiplication factor, flux, or power distribution, is estimated. Hence, it is suitable for analyzing few responses with many perturbed parameters. The statistical approach is a sampling based method which uses randomly sampled cross sections from covariance data for analyzing the uncertainty of the response. XSUSA is a code based on the statistical approach. The cross sections are only modified with the sampling based method; thus, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution from the result, the code simulation must be repeated many times with randomly sampled cross sections. This inefficiency is known as a disadvantage of the stochastic method. In this study, to increase the estimation efficiency of the sampling based S/U method, an advanced sampling and estimation method for the cross sections is proposed and verified. The main feature of the proposed method is that the cross section averaged from each single sampled cross section is used. For the use of the proposed method, the validation was performed using the perturbation theory.
Directory of Open Access Journals (Sweden)
Ting Yuan
2015-03-01
Full Text Available Synthetic Aperture Radar (SAR) has been successfully used to map wetland inundation extents and types of vegetation, based on the fact that the SAR backscatter signal from the wetland is mainly controlled by the wetland vegetation type and water level changes. This study describes the relation between L-band PALSAR backscatter and seasonal water level changes obtained from Envisat altimetry over the island of Île Mbamou in the Congo Basin, where two distinctly different vegetation types are found. We found positive correlations between backscatter and water level changes over the forested southern Île Mbamou, whereas both positive and negative correlations were observed over the non-forested northern Île Mbamou depending on the amount of water level increase. Based on the sensitivity analysis, we found that a denser vegetation canopy leads to backscatter that is less sensitive to water level changes, regardless of forested or non-forested canopy. Furthermore, we attempted to estimate water level changes, which were then compared with the Envisat altimetry and InSAR results. Our results demonstrate a potential to generate two-dimensional maps of water level changes over the wetlands, and thus may have substantial synergy with the planned Surface Water and Ocean Topography (SWOT) mission.
Multiple predictor smoothing methods for sensitivity analysis
International Nuclear Information System (INIS)
Helton, Jon Craig; Storlie, Curtis B.
2006-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
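A minimal LOESS-style local linear smoother illustrates why nonparametric regression can reveal sensitivity that linear regression misses on a nonlinear response. The toy model y = x² + noise and the tricube-weighted fit are a sketch, not the WIPP analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 500
x = rng.uniform(-1, 1, n)
y = x ** 2 + 0.05 * rng.normal(size=n)      # nonlinear, near-zero linear trend

def loess_fit(x, y, x0, frac=0.3):
    """Locally weighted linear fit at x0 with tricube weights (LOESS-style)."""
    k = int(frac * len(x))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                  # k nearest neighbours
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3
    sw = np.sqrt(w)                          # weighted least squares via sqrt(w)
    A = np.vstack([np.ones(k), x[idx]]).T
    beta = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)[0]
    return beta[0] + beta[1] * x0

yhat_loess = np.array([loess_fit(x, y, xi) for xi in x])
yhat_lin = np.polyval(np.polyfit(x, y, 1), x)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(r2(y, yhat_lin), r2(y, yhat_loess))
```

The linear fit reports almost no input-output relationship here, while the smoother recovers it, which is exactly the failure mode of linear-regression-based sensitivity measures that the abstract describes.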
Global optimization and sensitivity analysis
International Nuclear Information System (INIS)
Cacuci, D.G.
1990-01-01
A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints
A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines
Zhe Lu; Yuntong She; Mark Loewen
2017-01-01
Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and the pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively ass...
Shi, Kan; Chen, Gong; Pistolozzi, Marco; Xia, Fenggeng; Wu, Zhenqiang
2016-09-01
Monascus pigments, a mixture of azaphilones mainly composed of red, orange and yellow pigments, are usually prepared in aqueous ethanol and analysed by ultraviolet-visible (UV-Vis) spectroscopy. The pH of aqueous ethanol used during sample preparation and analysis has never been considered a key parameter to control; however, this study shows that the UV-Vis spectra and colour characteristics of the six major pigments are strongly influenced by the pH of the solvent employed. In addition, the increase of solvent pH results in a remarkable increase of the amination reaction of orange pigments with amino compounds, and at higher pH (≥ 6.0) a significant amount of orange pigment derivatives rapidly form. The consequent impact of these pH-sensitive properties on pigment analysis is further discussed. Based on the presented results, we propose that the sample preparation and analysis of Monascus pigments should be uniformly performed at low pH (≤ 2.5) to avoid variations of UV-Vis spectra and the creation of artefacts due to the occurrence of amination reactions, and ensure an accurate analysis that truly reflects pigment characteristics in the samples.
Energy Technology Data Exchange (ETDEWEB)
Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)
1997-05-01
The method for generation of the input data for the calculations with OSPM is presented in this report. The described method, which is based on information provided from a questionnaire, will be used for model calculations of long term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed on a few locations for which detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long term averages, whereas the uncertainties are high when short term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short and long term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimation of street width based on traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NOₓ and 30% for NO₂). (3) The street orientation and geometry are important for prediction of short term concentrations, but this importance diminishes for longer term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short term averages, but are less important for long term averages. The correlation is good between modelled and measured concentrations when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the comparison between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.
DEFF Research Database (Denmark)
Nielsen, Anker; Wittchen, Kim Bjarne; Bertelsen, Niels Haldor
2014-01-01
The EU Directive on the Energy Performance of Buildings requires that energy certification of buildings should be implemented in Denmark, so that houses that are sold or let should have an energy performance certificate. The result is that only a small part of existing houses has an energy performance certificate. The Danish Building Research Institute has described a method that can be applied for estimating the energy demand of dwellings. This is based on the information in the Danish Building and Dwelling Register and requirements in the Danish Building Regulations from the year of construction of the house. The result is an estimate of the energy demand of each building with a variation. This makes it possible to make an automatic classification of all buildings. The paper discusses the uncertainties and makes a sensitivity analysis to find the important parameters.
Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model
Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán
2014-05-01
The model version used here is Biome-BGC MuSo (Biome-BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu), an open source and domain independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models, including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services, such as retrieval of meteorological datasets and other parameters; preparation of single run or spatial run model simulations; desktop grid technology based Monte Carlo experiments with parallel processing; model sensitivity analysis, etc. The newly developed, Monte Carlo experiment based sensitivity analysis is described in this study and results are presented about differences in the sensitivity of the original and the developed Biome-BGC model.
Sensitivity analysis of brain morphometry based on MRI-derived surface models
Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.
1998-07-01
Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repetitive studies following progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from nonbrain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise and surface tiling artifacts. The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface
Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis
2015-04-01
The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of our model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained by the MODIS sensor was evaluated thoroughly. Also, detailed comparisons of the model predictions were performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, results evidenced the ability of the model to reproduce the frost conditions reasonably well, following largely explainable patterns with respect to the study site and local weather characteristics. Implementation of our proposed frost risk model is based primarily on satellite imagery analysis, provided nowadays globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort in comparison, for example, to field surveying. Finally, the method is adjustable and can potentially be integrated with other high resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to perform sensitivity analysis of the results of simulation of thermal hydraulic codes within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices and makes use of a meta-model. It also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
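A small sketch of Sobol' index estimation, run directly on a cheap test function rather than on a meta-model as in the paper: first-order indices by the pick-freeze estimator on the Ishigami function, whose analytic indices are known:

```python
import numpy as np

rng = np.random.default_rng(4)

def ishigami(x, a=7.0, b=0.1):
    """Classic Ishigami test function for sensitivity analysis."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n = 200_000
A = rng.uniform(-np.pi, np.pi, size=(n, 3))
B = rng.uniform(-np.pi, np.pi, size=(n, 3))
yA = ishigami(A)

# First-order Sobol' indices via pick-freeze:
#   S_i = Cov(f(A), f(B with column i taken from A)) / Var(f(A))
S = []
for i in range(3):
    AB = B.copy()
    AB[:, i] = A[:, i]
    S.append(np.cov(yA, ishigami(AB))[0, 1] / yA.var())
print([round(s, 3) for s in S])   # analytic values: ~0.314, ~0.442, 0.0
```

In a BEPU setting each evaluation is an expensive thermal-hydraulic run, which is why the indices are computed on a cheap meta-model fitted to a limited set of code runs.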
Energy Technology Data Exchange (ETDEWEB)
Guo, Kong-Ming, E-mail: kmguo@xidian.edu.cn [School of Electromechanical Engineering, Xidian University, P.O. Box 187, Xi'an 710071 (China); Jiang, Jun, E-mail: jun.jiang@mail.xjtu.edu.cn [State Key Laboratory for Strength and Vibration, Xi'an Jiaotong University, Xi'an 710049 (China)
2014-07-04
To apply stochastic sensitivity function method, which can estimate the probabilistic distribution of stochastic attractors, to non-autonomous dynamical systems, a 1/N-period stroboscopic map for a periodic motion is constructed in order to discretize the continuous cycle into a discrete one. In this way, the sensitivity analysis of a cycle for discrete map can be utilized and a numerical algorithm for the stochastic sensitivity analysis of periodic solutions of non-autonomous nonlinear dynamical systems under stochastic disturbances is devised. An external excited Duffing oscillator and a parametric excited laser system are studied as examples to show the validity of the proposed method. - Highlights: • A method to analyze sensitivity of stochastic periodic attractors in non-autonomous dynamical systems is proposed. • Probabilistic distribution around periodic attractors in an external excited Φ⁶ Duffing system is obtained. • Probabilistic distribution around a periodic attractor in a parametric excited laser system is determined.
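The stroboscopic-map idea, sampling a noisy forced oscillator once per forcing period so that the continuous cycle becomes a discrete attractor, can be sketched as follows. A damped linear oscillator stands in for the Duffing and laser systems, and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy, periodically forced, damped linear oscillator:
#   x' = v,   v' = -2*zeta*v - x + cos(omega*t) + noise
omega, zeta, eps = 1.0, 0.2, 0.05
T = 2 * np.pi / omega                  # forcing period
steps = 500                            # Euler-Maruyama steps per period
dt = T / steps

def stroboscopic_points(n_periods):
    """Integrate and record (x, v) once per forcing period: the periodic
    orbit becomes a fixed point of the stroboscopic map, and the noise
    scatters the sampled points around it."""
    x, v, t = 0.0, 0.0, 0.0
    pts = []
    for _ in range(n_periods):
        for _ in range(steps):
            a = -2.0 * zeta * v - x + np.cos(omega * t)
            x += v * dt
            v += a * dt + eps * np.sqrt(dt) * rng.normal()
            t += dt
        pts.append((x, v))
    return np.array(pts)

pts = stroboscopic_points(200)[40:]    # discard transient periods
print(pts.mean(axis=0), pts.std(axis=0))
```

The spread of the stroboscopic points around their mean is the kind of dispersion that the stochastic sensitivity function characterizes analytically.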
International Nuclear Information System (INIS)
Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.
1990-01-01
A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
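The derivative-propagation idea that GRESS/ADGEN automates for FORTRAN programs can be illustrated in miniature with forward-mode automatic differentiation (ADGEN's adjoint technique propagates derivatives in reverse to get all input sensitivities of one response cheaply, but the chain-rule bookkeeping is the same in spirit). The `response` function below is a made-up stand-in, not EQ3 chemistry:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df/dp together, the
    chain rule applied operation by operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def exp(x):
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def response(k1, k2):
    """Hypothetical response r(k1, k2) = k1*exp(k2) + k1*k2."""
    return k1 * exp(k2) + k1 * k2

k1, k2 = 2.0, 0.5
d_dk1 = response(Dual(k1, 1.0), Dual(k2, 0.0)).dot   # = exp(k2) + k2
d_dk2 = response(Dual(k1, 0.0), Dual(k2, 1.0)).dot   # = k1*exp(k2) + k1
print(d_dk1, d_dk2)
```

Forward mode needs one pass per input parameter; the adjoint mode used by ADGEN gets sensitivities to all 31,000 inputs of a response in a single reverse pass, which is the source of the efficiency quoted in the abstract.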
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
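The "pinching" idea can be sketched with interval inputs: replace one input's interval by a point value and measure how much the output interval shrinks. The model, the intervals, and the vertex-enumeration shortcut (valid only for models monotone in each input) are assumptions for illustration:

```python
import itertools

def model(a, b, c):
    # hypothetical response, monotone in each input over the ranges below
    return a * b + c ** 2

intervals = {"a": (1.0, 2.0), "b": (0.5, 1.5), "c": (0.0, 1.0)}

def output_interval(ivs):
    """Output bounds by evaluating every vertex of the input box; this
    shortcut is valid here only because the model is monotone in each input."""
    vals = [model(a, b, c) for a, b, c in itertools.product(*ivs.values())]
    return min(vals), max(vals)

lo0, hi0 = output_interval(intervals)
base_width = hi0 - lo0

# Pinch each input to its midpoint; the fractional shrink of the output
# interval serves as the sensitivity measure.
reduction = {}
for name in intervals:
    pinched = dict(intervals)
    mid = sum(intervals[name]) / 2.0
    pinched[name] = (mid, mid)
    lo, hi = output_interval(pinched)
    reduction[name] = 1.0 - (hi - lo) / base_width
print({k: round(v, 3) for k, v in reduction.items()})
```

In full PBA the inputs are probability boxes rather than bare intervals, and pinching to a precise distribution plays the same role as pinching to a point here.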
Energy Technology Data Exchange (ETDEWEB)
Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)
2008-07-01
In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components, and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and the system's availability at the final mission time, respectively. The QUEFT/MARKOMAG-S/MCADJSEN code system has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity-preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offer an inclusive view of ethical sensitivity that addresses some of the limitations of prior conceptualizations.
DEFF Research Database (Denmark)
Vangsgaard, Anna Katrine; Mauricio Iglesias, Miguel; Gernaey, Krist
2012-01-01
A comprehensive and global sensitivity analysis was conducted under a range of operating conditions. The relative importance of mass transfer resistance versus kinetic parameters was studied and found to depend on the operating regime as follows: Operating under the optimal loading ratio of 1.90 ...
International Nuclear Information System (INIS)
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
Zhao, Zhenlong
2013-01-17
Part 1 (10.1021/ef3014103) of this series describes a new rotary reactor for gas-fueled chemical-looping combustion (CLC), in which a solid wheel with microchannels rotates between the reducing and oxidizing streams. The oxygen carrier (OC) coated on the surfaces of the channels periodically adsorbs oxygen from air and releases it to oxidize the fuel. A one-dimensional model is also developed in part 1 (10.1021/ef3014103). This paper presents the simulation results based on the base-case design parameters. The results indicate that both the fuel conversion efficiency and the carbon separation efficiency are close to unity. Because of the relatively low reduction rate of copper oxide, fuel conversion occurs gradually from the inlet to the exit. A total of 99.9% of the fuel is converted within 75% of the channel, leaving 25% redundant length near the exit to ensure robustness. In the air sector, the OC is rapidly regenerated while consuming a large amount of oxygen from air. Velocity fluctuations are observed during the transition between sectors because of the complete reactions of OCs. The gas temperature increases monotonically from 823 to 1315 K, which is mainly determined by the solid temperature, whose variations with time are limited to within 20 K. The overall energy in the solid phase is balanced between the reaction heat release, conduction, and convective cooling. In the sensitivity analysis, important input parameters are identified and varied around their base-case values. The resulting changes in the model-predicted performance revealed that the most important parameters are the reduction kinetics, the operating pressure, and the feed stream temperatures. © 2012 American Chemical Society.
Directory of Open Access Journals (Sweden)
Nan-Hung Hsieh
2018-06-01
Full Text Available Traditionally, the solution to reduce parameter dimensionality in a physiologically-based pharmacokinetic (PBPK) model is through expert judgment. However, this approach may lead to bias in parameter estimates and model predictions if important parameters are fixed at uncertain or inappropriate values. The purpose of this study was to explore the application of global sensitivity analysis (GSA) to ascertain which parameters in the PBPK model are non-influential, and therefore can be assigned fixed values in Bayesian parameter estimation with minimal bias. We compared the elementary effect-based Morris method and three variance-based Sobol indices in their ability to distinguish “influential” parameters to be estimated from “non-influential” parameters to be fixed. We illustrated this approach using a published human PBPK model for acetaminophen (APAP) and its two primary metabolites, APAP-glucuronide and APAP-sulfate. We first applied GSA to the original published model, comparing Bayesian model calibration results using all 21 originally calibrated model parameters (OMP), determined by the “expert judgment”-based approach, vs. the subset of original influential parameters (OIP), determined by GSA from the OMP. We then applied GSA to all the PBPK parameters, including those fixed in the published model, comparing the model calibration results using this full set of 58 model parameters (FMP) vs. the full set of influential parameters (FIP), determined by GSA from the FMP. We also examined the impact of different cut-off points to distinguish the influential and non-influential parameters. We found that Sobol indices calculated by eFAST provided the best combination of reliability (consistency with other variance-based methods) and efficiency (lowest computational cost to achieve convergence) in identifying influential parameters. We identified several originally calibrated parameters that were not influential, and could be fixed to improve computational
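The Morris elementary-effects screening used above can be sketched in a few lines. This is a generic, self-contained illustration on an invented three-parameter toy model, not the PBPK model from the study; the mu* statistic (mean absolute elementary effect) ranks parameter influence:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in: the second parameter dominates, the third is negligible.
    return x[0] + 10.0 * x[1] ** 2 + 0.01 * x[2]

def morris_mu_star(model, k=3, trajectories=50, delta=0.25):
    """One-at-a-time trajectories; mu* = mean |elementary effect| per input."""
    effects = np.zeros((trajectories, k))
    for t in range(trajectories):
        x = rng.uniform(0, 1 - delta, size=k)   # keep x + delta inside [0, 1]
        y = model(x)
        for i in rng.permutation(k):            # step each input once, in random order
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[t, i] = abs(y_new - y) / delta
            x, y = x_new, y_new
    return effects.mean(axis=0)

mu_star = morris_mu_star(model)
print(mu_star)   # second parameter dominates; third is screened out as non-influential
```

Parameters with mu* near zero are the candidates for fixing before the more expensive variance-based (Sobol/eFAST) analysis.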
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and on this basis an efficient space-partition sampling-based approach is proposed in this paper. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is well satisfied. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
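The core idea, estimating every first-order (main-effect) index from a single sample set by binning the output according to subintervals of each input, can be sketched as follows. The binning scheme and test model are illustrative, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def main_effects_by_partition(X, Y, n_bins=50):
    """S_i ~= Var(E[Y | X_i in bin]) / Var(Y), from one common sample set."""
    total_var = Y.var()
    S = []
    for i in range(X.shape[1]):
        # Partition samples into equal-width subintervals of input i
        # (inputs assumed scaled to [0, 1]).
        bins = np.floor(X[:, i] * n_bins).clip(max=n_bins - 1).astype(int)
        cond_means = np.array([Y[bins == b].mean() for b in range(n_bins)])
        counts = np.array([(bins == b).sum() for b in range(n_bins)])
        var_cond = np.average((cond_means - Y.mean()) ** 2, weights=counts)
        S.append(var_cond / total_var)
    return np.array(S)

# Additive test model Y = X1 + 2*X2 with X1, X2 ~ U(0, 1):
# analytically S1 = 1/5 and S2 = 4/5.
X = rng.uniform(size=(200_000, 2))
Y = X[:, 0] + 2.0 * X[:, 1]
print(main_effects_by_partition(X, Y))   # approximately [0.2, 0.8]
```

All main effects come from the same 200,000 model evaluations; a Saltelli-type design would instead need a separate resampled matrix per input.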
Hannah, M. A.; Simeone, M.
2017-12-01
On interdisciplinary teams, expertise is varied, as is evidenced by differences in team members' language use. Developing strategies to combine that expertise and bridge differentiated language practices is especially difficult between geoscience subdisciplines as researchers assume they use a shared language—vocabulary, jargon, codes, linguistic styles. In our paper, we discuss a network-based approach used to identify varied expertise and language practices between geoscientists (n=29) on a NSF team funded to study how deep and surface Earth processes worked together to give rise to the Great Oxygenation Event. We describe how we modeled the team's expertise from a language corpus consisting of 220 oxygen-related terms frequently used by team members and then compared their understanding of the terms to develop interventions to bridge the team's expertise. Corpus terms were identified via team member interviews, observations of members' interactions at research meetings, and discourse analysis of members' publications. Comparisons of members' language use were based on a Likert scale survey that asked members to assess how they understood a term; how frequently they used a term; and whether they conceptualized a term as an object or process. Rather than use our method as a communication audit tool (Zwijze-Koning & de Jong, 2015), teams can proactively use it in a project's early stages to assess the contours of the team's differentiated expertise and show where specialized knowledge resides in the team, where latent or non-obvious expertise exists, where expertise overlaps, and where gaps are in the team's knowledge. With this information, teams can make evidence based recommendations to forward their work such as allocating resources; identifying and empowering members to serve as connectors and lead cross-functional project initiatives; and developing strategies to avoid communication barriers. The method also generates models for teaching language
Directory of Open Access Journals (Sweden)
Domingos M. C. Rodrigues
2017-12-01
Full Text Available Conventional pathogen detection methods require trained personnel, specialized laboratories and can take days to provide a result. Thus, portable biosensors with rapid detection response are vital for the current needs for in-loco quality assays. In this work the authors analyze the characteristics of an immunosensor based on the evanescent field in plastic optical fibers with macro curvature by comparing experimental with simulated results. The work studies different shapes of evanescent-wave based fiber optic sensors, adopting computational modeling to evaluate the probes with the best sensitivity. The simulation showed that for a U-shaped sensor, the best results can be achieved with a sensor of 980 µm diameter by 5.0 mm in curvature for refractive index sensing, whereas the meander-shaped sensor with 250 μm in diameter and a radius of curvature of 1.5 mm showed better sensitivity for both bacteria and refractive index (RI) sensing. Then, an immunosensor was developed, first to measure refractive index and then functionalized to detect Escherichia coli. Based on the results of the simulation, we conducted studies with a real sensor for RI measurements and for Escherichia coli detection, aiming to establish the best diameter and curvature radius in order to obtain an optimized sensor. On comparing the experimental results with predictions made from the modelling, good agreements were obtained. The simulations performed allowed the evaluation of new geometric configurations of biosensors that can be easily constructed and that promise improved sensitivity.
UMTS Common Channel Sensitivity Analysis
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António; Santos, Frederico
2006-01-01
and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....
Use of sensitivity analysis on a physiologically based pharmacokinetic (PBPK) model for chloroform in rats to determine age-related toxicity
Eklund, C. R.; Evans, M. V.; Simmons, J. E.
US EPA, ORD, NHEERL, ETD, PKB, Research Triangle Park, NC. Chloroform (CHCl3) is a disinfec...
Sinha, D.; De, D.; Ayaz, A.
2018-03-01
Environmentally friendly natural dye curcumin extracted from low-cost Curcuma longa stem is used as a photo-sensitizer for the fabrication of ZnO-based dye-sensitized solar cells (DSSC). Nanostructured ZnO is fabricated on a transparent conducting glass (TCO) using a cost-effective chemical bath deposition technique. Scanning electron microscopic images show hexagonal patterned ZnO nano-towers decorated with several nanosteps. The average length of a ZnO nano-tower is 5 μm and its diameter is 1.2 μm. A UV-Vis spectroscopic study of the curcumin dye is used to understand the light absorption behavior as well as the band gap energy of the extracted natural dye. The dye shows broad absorption bands over 350-470 nm and 500-600 nm with two peaks positioned at 425 nm and 525 nm. The optical band gap energy and energy band position of the dye are derived, which supports its stability and high electron affinity, making it suitable for light harvesting and effortless electron transfer from the dye to the semiconductor or the interface between them. The FTIR spectrum of the curcumin dye-sensitized ZnO-based DSSC shows the presence of anchoring groups and colouring constituents. The I-V and P-V curves of the fabricated DSSC are measured under simulated light (100 mW/cm2). The highest visible-light-to-electric conversion efficiency of 0.266% (using ITO) and 0.33% (using FTO) is achieved from the curcumin dye-sensitized cell.
Impedance-based analysis and study of phase sensitivity in slow-wave two-beam accelerators
International Nuclear Information System (INIS)
Wurtele, J.S.; Whittum, D.H.; Sessler, A.M.
1992-06-01
This paper presents a new formalism which makes the analysis and understanding of both the relativistic klystron (RK) and the standing-wave free-electron laser (SWFEL) two-beam accelerator (TBA) available to a wide audience of accelerator physicists. A ''coupling impedance'' for both the RK and SWFEL is introduced, which can include realistic cavity features, such as beam and vacuum ports, in a simple manner. The RK and SWFEL macroparticle equations, which govern the energy and phase evolution of successive bunches in the beam, are of identical form, differing only by multiplicative factors. Expressions are derived for the phase and amplitude sensitivities of the TBA schemes to errors (shot-to-shot jitter) in current and energy. The analysis allows, for the first time, relative comparisons of the RK and the SWFEL TBAs.
Ji, Xiaoyu; Liu, Xiaoqiang; Peng, Yuanxia; Zhan, Ruoting; Xu, Hui; Ge, Xijin
2017-12-09
Emodin has strong antibacterial activity, including against methicillin-resistant Staphylococcus aureus (MRSA). However, the mechanism by which emodin induces growth inhibition of MRSA remains unclear. In this study, the isobaric tags for relative and absolute quantitation (iTRAQ) proteomics approach was used to investigate the modes of action of emodin on a MRSA isolate and methicillin-sensitive S. aureus ATCC29213 (MSSA). Proteomic analysis showed that expression levels of 145 and 122 proteins were changed significantly in MRSA and MSSA, respectively, after emodin treatment. Comparative analysis of the functions of differentially expressed proteins between the two strains was performed via the bioinformatics tool Blast2GO and the STRING database. Proteins related to pyruvate pathway imbalance induction, protein synthesis inhibition, and DNA synthesis suppression were found in both the methicillin-sensitive and resistant strains. Moreover, interference proteins related to a membrane damage mechanism were also observed in MRSA. Our findings indicate that emodin is a potential antibacterial agent targeting MRSA via multiple mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Lagravère, M O; Major, P W; Carey, J
2010-01-01
Objectives: The purpose of this study was to evaluate the potential errors associated with superimposition of serial cone beam CT (CBCT) images utilizing reference planes based on cranial base landmarks using a sensitivity analysis. Methods: CBCT images from 62 patients participating in a maxillary expansion clinical trial were analysed. The left and right auditory external meatus (AEM), dorsum foramen magnum (DFM) and the midpoint between the left and right foramen spinosum (ELSA) were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Intraclass correlation coefficients for all four landmarks were obtained. Transformation of the reference system was carried out using the four landmarks and mathematical comparison of values. Results: Excellent intrareliability values for each dimension were obtained for each landmark. Evaluation of the method to transform the co-ordinate system was first done by comparing interlandmark distances before and after transformations, giving errors in lengths on the order of 10–14% (software rounding error). A sensitivity evaluation was performed by adding 0.25 mm, 0.5 mm and 1 mm of error in one axis of the ELSA. A positioning error of 0.25 mm in the ELSA can produce up to 1.0 mm of error in other cranial base landmark co-ordinates. These errors could be magnified at distant landmarks, where in some cases the menton and infraorbital landmarks were displaced 4–6 mm. Conclusions: Minor variations in the location of the ELSA, the AEM and the DFM landmarks produce large and potentially clinically significant uncertainty in co-ordinate system alignment. PMID:20841457
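The amplification effect reported above (a sub-millimetre landmark shift producing a multi-millimetre displacement at distant landmarks) is a lever-arm property of any landmark-defined frame. A toy illustration with invented landmark coordinates, not the study's data:

```python
import numpy as np

def frame_from_landmarks(origin, p_axis, p_plane):
    """Orthonormal frame: x toward p_axis, z normal to the plane through p_plane."""
    x = p_axis - origin
    x /= np.linalg.norm(x)
    z = np.cross(x, p_plane - origin)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return origin, np.vstack([x, y, z])

def coords(point, frame):
    origin, R = frame
    return R @ (point - origin)

# Invented positions (mm), roughly anatomical in scale:
elsa = np.array([0.0, 0.0, 0.0])       # midpoint between the foramina spinosa
aem_r = np.array([40.0, 0.0, 0.0])     # right auditory external meatus
dfm = np.array([0.0, -30.0, 0.0])      # dorsum foramen magnum
menton = np.array([0.0, 80.0, -60.0])  # distant mandibular landmark

base = coords(menton, frame_from_landmarks(elsa, aem_r, dfm))
# Perturb the origin landmark by 0.25 mm along one axis and re-derive the frame:
shifted = coords(menton, frame_from_landmarks(elsa + [0.0, 0.25, 0.0], aem_r, dfm))
print(np.linalg.norm(shifted - base))   # displacement exceeds the 0.25 mm input error
```

Because the landmark shift tilts the frame slightly, the coordinate error at a point ~100 mm away is larger than the original 0.25 mm perturbation.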
Zou, Bin; Lee, Victor H F; Yan, Hong
2018-03-07
Non-small cell lung cancer (NSCLC) with activating EGFR mutations, especially exon 19 deletions and the L858R point mutation, is particularly responsive to gefitinib and erlotinib. However, the sensitivity varies for less common and rare EGFR mutations. There are various explanations for the low sensitivity of EGFR exon 20 insertions and the exon 20 T790M point mutation to gefitinib/erlotinib. However, few studies discuss, from a structural perspective, why less common mutations, like G719X and L861Q, have moderate sensitivity to gefitinib/erlotinib. To decode the drug sensitivity/selectivity of EGFR mutants, it is important to analyze the interaction between EGFR mutants and EGFR inhibitors. In this paper, the 30 most common EGFR mutants were selected and the technique of protein-ligand interaction fingerprints (IFP) was applied to analyze and compare the binding modes of EGFR mutant-gefitinib/erlotinib complexes. Molecular dynamics simulations were employed to obtain the dynamic trajectory and a matrix of IFPs for each EGFR mutant-inhibitor complex. Multilinear Principal Component Analysis (MPCA) was applied for dimensionality reduction and feature selection. The selected features were further analyzed for use as a drug sensitivity predictor. The results showed that the accuracy of prediction of drug sensitivity was very high for both gefitinib and erlotinib. Targeted Projection Pursuit (TPP) was used to show that the data points can be easily separated based on their sensitivities to gefitinib/erlotinib. We can conclude that the IFP features of EGFR mutant-TKI complexes and the MPCA-based tensor object feature extraction are useful for predicting the drug sensitivity of EGFR mutants. The findings provide new insights for studying and predicting drug resistance/sensitivity of EGFR mutations in NSCLC and can be beneficial to the design of future targeted therapies and innovative drug discovery.
Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya
2017-04-21
We investigate the epistemic uncertainties of parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca2+ patterns of different frequencies as inputs. We explore the effects of parameters on the key outputs of the model to discover the most sensitive ones using GSA and the partial rank correlation coefficient (PRCC), and to understand, based on the biology of the problem, why they are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release so as to have action potentials of different frequencies as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA and hence for the possible model reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
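The PRCC measure used above can be computed with nothing beyond rank transforms and linear regression: for each parameter, regress both its ranks and the output ranks on the ranks of all other parameters, then correlate the residuals. A minimal sketch on an invented toy model, not the CaMKII model:

```python
import numpy as np

rng = np.random.default_rng(2)

def ranks(a):
    # 0..n-1 ranks (no ties expected for continuous samples)
    return np.argsort(np.argsort(a)).astype(float)

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    Xr = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    yr = ranks(y)
    out = []
    for i in range(Xr.shape[1]):
        others = np.delete(Xr, i, axis=1)
        A = np.column_stack([np.ones(len(yr)), others])
        # residuals after removing the (rank-)linear influence of the others
        res_x = Xr[:, i] - A @ np.linalg.lstsq(A, Xr[:, i], rcond=None)[0]
        res_y = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

X = rng.uniform(size=(2000, 3))
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(2000)
print(prcc(X, y))   # strongly positive, strongly negative, near zero
```

The sign of each PRCC indicates the direction of the monotone influence, which is why PRCC landscapes are a natural companion to variance-based GSA.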
Jiao, J; Wu, J; Lv, Z; Sun, C; Gao, L; Yan, X; Cui, L; Tang, Z; Yan, B; Jia, Y
2015-11-26
This study aimed to investigate cytosine methylation profiles in different tobacco (Nicotiana tabacum) cultivars grown in China. Methylation-sensitive amplified polymorphism was used to analyze genome-wide global methylation profiles in four tobacco cultivars (Yunyan 85, NC89, K326, and Yunyan 87). Amplicons with methylated C motifs were cloned by reamplified polymerase chain reaction, sequenced, and analyzed. The results show that geographical location had a greater effect on methylation patterns in the tobacco genome than did sampling time. Analysis of the CG dinucleotide distribution in methylation-sensitive polymorphic restriction fragments suggested that a CpG dinucleotide cluster-enriched area is a possible site of cytosine methylation in the tobacco genome. The sequence alignments of the Nia1 gene (that encodes nitrate reductase) in Yunyan 87 in different regions indicate that a C-T transition might be responsible for the tobacco phenotype. T-C nucleotide replacement might also be responsible for the tobacco phenotype and may be influenced by geographical location.
Zhou, Yaoyu; Tang, Lin; Zeng, Guangming; Chen, Jun; Cai, Ye; Zhang, Yi; Yang, Guide; Liu, Yuanyuan; Zhang, Chen; Tang, Wangwang
2014-11-15
Herein, we report a promising biosensor that takes advantage of the unique ordered mesoporous carbon nitride material (MCN) to convert recognition information into a detectable signal via an enzyme, enabling the sensitive and, especially, selective detection of catechol and phenol in compost bioremediation samples. The mechanism, including the MCN-based electrochemistry, biosensor assembly, enzyme immobilization, and enzyme kinetics (elucidating the lower detection limit and the different linear ranges and sensitivities), is discussed in detail. Under optimal conditions, the GCE/MCN/Tyr biosensor was evaluated by chronoamperometry measurements, and the reduction currents of phenol and catechol were proportional to their concentrations in the ranges of 5.00 × 10(-8)-9.50 × 10(-6) M and 5.00 × 10(-8)-1.25 × 10(-5) M, with correlation coefficients of 0.9991 and 0.9881, respectively. The detection limits of catechol and phenol were 10.24 nM and 15.00 nM (S/N = 3), respectively. Besides, the data obtained from interference experiments indicated that the biosensor had good specificity. All the results showed that this material is suitable for enzyme loading and application in biosensors, as the proposed biosensor exhibited improved analytical performance in terms of detection limit and specificity, providing a powerful tool for the rapid, sensitive and, especially, selective simultaneous monitoring of catechol and phenol. Moreover, the obtained results may open the way to other MCN-enzyme applications in the environmental field. Copyright © 2014 Elsevier B.V. All rights reserved.
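A detection limit at S/N = 3, as quoted above, is conventionally derived from the calibration slope and the blank noise: LOD = 3·σ_blank / slope. The sketch below uses synthetic numbers chosen to land in the same regime as the abstract, not the study's raw data:

```python
import numpy as np

def calibration_slope(conc, current):
    # least-squares slope of the linear range (polyfit returns [slope, intercept])
    return np.polyfit(conc, current, 1)[0]

# Synthetic calibration points spanning a linear range (assumed values):
conc = np.array([0.05, 0.5, 1.0, 5.0, 9.5])          # µM
current = np.array([0.026, 0.25, 0.51, 2.49, 4.76])  # µA
sigma_blank = 0.0017                                 # µA, blank noise (assumed)

slope = calibration_slope(conc, current)   # ~0.5 µA/µM
lod = 3.0 * sigma_blank / slope            # µM
print(round(lod * 1000, 1), "nM")          # ~10.2 nM at S/N = 3
```

With a blank noise near 1.7 nA and a ~0.5 µA/µM slope, the S/N = 3 criterion yields a detection limit of roughly 10 nM, the same order as the reported value for catechol.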
Infrared sensing based sensitive skin
Institute of Scientific and Technical Information of China (English)
CAO Zheng-cai; FU Yi-li; WANG Shu-guo; JIN Bao
2006-01-01
The developed robotic sensitive skin is a modular, flexible, miniature array of infrared sensors with data-processing capabilities, which can be used to cover the body of a robot. Relying on the infrared sensors and peripheral processing circuitry, the robotic sensitive skin can provide, in real time, presence and distance information about obstacles within its sensory areas. The methodology for designing the sensitive skin and the algorithm for fusing the mass of IR data are presented. The experimental results show that a multi-joint robot with this sensitive skin can work autonomously in an unknown environment.
Radtke, J.; Sponner, J.; Jakobi, C.; Schneider, J.; Sommer, M.; Teichmann, T.; Ullrich, W.; Henniger, J.; Kormoll, T.
2018-01-01
Single photon detection applied to optically stimulated luminescence (OSL) dosimetry is a promising approach due to the low level of luminescence light and the known statistical behavior of single photon events. Time-resolved detection allows a variety of different and independent data analysis methods to be applied. Furthermore, using amplitude-modulated stimulation impresses time and frequency information onto the OSL light and therefore allows for additional means of analysis. Considering the impressed frequency information, data analysis using Fourier transform algorithms or other digital filters can separate the OSL signal from unwanted light or from events generated by other phenomena. This potentially lowers the detection limits of low-dose measurements and might improve the reproducibility and stability of the obtained data. In this work, an OSL system based on a single photon detector, a fast and accurate stimulation unit and an FPGA is presented. Different analysis algorithms which are applied to the single photon data are discussed.
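The frequency-domain separation described above amounts to digital lock-in detection: the amplitude-modulated stimulation tags the genuine OSL photons with the modulation frequency, while unmodulated background light stays at DC. A simulated sketch with invented count rates and modulation parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

fs, f_mod, T = 1000.0, 13.0, 10.0            # bin rate (Hz), mod freq (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

# OSL photon rate follows the amplitude-modulated stimulation; the background
# (ambient light, dark counts) does not. Units: expected counts per bin.
osl_rate = 50.0 * (1.0 + 0.8 * np.sin(2 * np.pi * f_mod * t))
background = 200.0
counts = rng.poisson(osl_rate + background)  # single-photon counting statistics

# Demodulate: project the count stream onto a complex reference at f_mod.
ref = np.exp(-2j * np.pi * f_mod * t)
amplitude = 2.0 * np.abs(np.mean(counts * ref))
print(amplitude)   # recovers ~40, the modulated OSL component (50 * 0.8)
```

The large unmodulated background (200 counts/bin vs. a 40 counts/bin modulation depth) averages to zero in the projection, which is the mechanism by which such filtering lowers detection limits.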
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
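The probabilistic multivariate idea above (sample the uncertain parameters jointly, re-solve the MDP per sample, and report the fraction of samples in which the base-case policy remains optimal) can be sketched on a deliberately tiny model. The two-state MDP, rewards, and parameter priors below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def solve_mdp(p_fail_treat, p_fail_wait, gamma=0.97, iters=500):
    """Value iteration; state 0 = well, state 1 = sick (absorbing, zero reward)."""
    r = {"treat": 0.9, "wait": 1.0}          # treatment carries a quality cost
    p = {"treat": p_fail_treat, "wait": p_fail_wait}
    v = np.zeros(2)
    for _ in range(iters):
        q = {a: r[a] + gamma * ((1 - p[a]) * v[0] + p[a] * v[1])
             for a in ("treat", "wait")}
        v = np.array([max(q.values()), 0.0])
    return max(q, key=q.get)                 # optimal action in the well state

base_policy = solve_mdp(0.02, 0.10)          # base-case parameter values

# Joint parameter uncertainty: re-solve the MDP for each sampled parameter set.
samples = [solve_mdp(rng.beta(2, 98), rng.beta(10, 90)) for _ in range(500)]
confidence = np.mean([s == base_policy for s in samples])
print(base_policy, confidence)               # confidence in the base-case policy
```

Sweeping a stakeholder-relevant threshold (e.g., willingness to accept the base-case policy) over such confidence estimates is what traces out a policy acceptability curve; a real application would replace the exhaustive re-solve with the paper's more scalable estimators.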
Global sensitivity analysis by polynomial dimensional decomposition
Energy Technology Data Exchange (ETDEWEB)
Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2011-07-15
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
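As a toy illustration of expansion-based sensitivity indices, the sketch below expands f(x1, x2) = x1 + x2^2 over an orthonormal Legendre basis (independent uniform inputs on [-1, 1]) and reads the first-order indices off the squared coefficients. Plain Monte Carlo stands in for the paper's dimension-reduction integration, and the test function is an assumption.

```python
import itertools
import math
import random

# Legendre polynomials, orthonormal w.r.t. the uniform density on [-1, 1]
PSI = [
    lambda x: 1.0,
    lambda x: math.sqrt(3.0) * x,
    lambda x: math.sqrt(5.0) * 0.5 * (3.0 * x * x - 1.0),
]

def f(x1, x2):
    return x1 + x2 * x2  # toy model with independent uniform inputs

def expansion_coefficients(n=100_000, seed=0):
    """Estimate coefficients c[(i, j)] = E[f * psi_i(x1) * psi_j(x2)]
    by Monte Carlo integration."""
    rng = random.Random(seed)
    c = {k: 0.0 for k in itertools.product(range(3), repeat=2)}
    for _ in range(n):
        x1, x2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
        y = f(x1, x2)
        p1 = [psi(x1) for psi in PSI]
        p2 = [psi(x2) for psi in PSI]
        for (i, j) in c:
            c[(i, j)] += y * p1[i] * p2[j] / n
    return c

def sobol_indices(c):
    """First-order indices: squared coefficients of terms involving
    only one variable, divided by the total variance (sum of all
    squared non-constant coefficients)."""
    var = sum(v * v for k, v in c.items() if k != (0, 0))
    s1 = sum(c[(i, 0)] ** 2 for i in range(1, 3)) / var
    s2 = sum(c[(0, j)] ** 2 for j in range(1, 3)) / var
    return s1, s2
```

For this function, Var(x1) = 1/3 and Var(x2^2) = 4/45, so the exact first-order indices are S1 = 15/19 ≈ 0.79 and S2 = 4/19 ≈ 0.21, and the estimates can be checked analytically.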
Pizzoli, Giuliano; Lobello, Maria Grazia; Carlotti, Benedetta; Elisei, Fausto; Nazeeruddin, Mohammad K; Vitillaro, Giuseppe; De Angelis, Filippo
2012-10-14
We report a combined spectro-photometric and computational investigation of the acid-base equilibria of the N3 solar cell sensitizer [Ru(dcbpyH(2))(2)(NCS)(2)] (dcbpyH(2) = 4,4'-dicarboxyl-2,2' bipyridine) in aqueous/ethanol solutions. The absorption spectra of N3 recorded at various pH values were analyzed by Singular Value Decomposition techniques, followed by Global Fitting procedures, allowing us to identify four separate acid-base equilibria and their corresponding ground state pK(a) values. DFT/TDDFT calculations were performed for the N3 dye in solution, investigating the possible relevant species obtained by sequential deprotonation of the four dye carboxylic groups. TDDFT excited state calculations provided UV-vis absorption spectra which nicely agree with the experimental spectral shapes at various pH values. The calculated pK(a) values are also in good agreement with experimental data, within <1 pK(a) unit. Based on the calculated energy differences, a tentative assignment of the N3 deprotonation pathway is reported.
Data fusion qualitative sensitivity analysis
International Nuclear Information System (INIS)
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables
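The earth model described above can be pictured as a set of polynomial surfaces in the horizontal coordinates. A minimal sketch, with a hypothetical coefficient layout (the CRC software's actual parameterization is not documented here):

```python
def layer_depth(coeffs, x, y):
    """Evaluate one earth-model surface: depth(x, y) as a bivariate
    polynomial, with coeffs[(i, j)] multiplying x**i * y**j.
    The same form could hold a layer property (velocity, resistivity)
    instead of depth. Illustrative only."""
    return sum(c * x ** i * y ** j for (i, j), c in coeffs.items())

# Hypothetical surface: depth = 10 + 0.5*x - 0.2*y + 0.01*x*y (metres)
example = {(0, 0): 10.0, (1, 0): 0.5, (0, 1): -0.2, (1, 1): 0.01}
```

Sensitivity analysis of such a model then amounts to perturbing the coefficients and observing how the fused surface responds.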
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSMs), coupled with River Routing schemes (RRMs), are used in Global Climate Models (GCMs) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, those used in RRMs, such as river width, depth, and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, the temporal sensitivity of the RRM to its time-constant parameters must be analyzed. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water levels and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
Comparative analysis of the sensitivity of the scanner rSPECT: using GAMOS: a Geant4-based framework
International Nuclear Information System (INIS)
Martinez Turtos, Rosana; Diaz Garcia, Angelina; Abreu Alfonso, Yamiel; Arteche, Jossue; Leyva Pernia, Diana
2012-01-01
The molecular imaging of cellular processes in vivo using preclinical animal studies and the SPECT technique is one of the main reasons for the design of new devices with high spatial resolution. As an auxiliary tool, Monte Carlo simulation has allowed effective characterization and optimization of such medical imaging systems. A new simulation framework, GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations), is now available; its code, libraries, and particle transport methods correspond to those developed in GEANT4, and it contains specific applications for nuclear medicine. This tool has already been validated for PET by comparison with experimental data, but a corresponding evaluation of GAMOS for SPECT systems had not yet been done. The present work demonstrates the potential of GAMOS for obtaining realistic simulated data with this nuclear imaging technique. For this purpose, the novel 'rSPECT' installation, dedicated to the study of rodents, was simulated. The study comprises the collimation and detection geometries and the fundamental characteristics of previously published experimental measurements for the rSPECT installation. Studies were done using 99mTc and a 20% energy window. Sensitivity values obtained by simulation showed acceptable agreement with experimental values. This allowed the behavior of the GAMOS platform in SPECT applications to be estimated and demonstrated the feasibility of reproducing experimental data. (author)
Directory of Open Access Journals (Sweden)
Hue-Yu Wang
Full Text Available BACKGROUND: An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. METHODS: The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. CONCLUSIONS: The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.
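The six indices are straightforward to reproduce. A minimal sketch, noting that definitions vary slightly across papers (the R² used here is the 'absolute fraction of variance' form common in ANN studies):

```python
import math

def fit_indices(observed, predicted):
    """Six goodness-of-fit indices commonly used in predictive
    microbiology. Observed values must be positive and nonzero."""
    n = len(observed)
    resid = [p - o for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mape = 100.0 * sum(abs(r / o) for o, r in zip(observed, resid)) / n
    sep = 100.0 * rmse / (sum(observed) / n)       # RMSE relative to the mean
    logs = [math.log10(p / o) for o, p in zip(observed, predicted)]
    bf = 10.0 ** (sum(logs) / n)                   # bias factor
    af = 10.0 ** (sum(abs(l) for l in logs) / n)   # accuracy factor
    r2 = 1.0 - sum(r * r for r in resid) / sum(o * o for o in observed)
    return {"MAPE": mape, "RMSE": rmse, "SEP": sep, "Bf": bf, "Af": af, "R2": r2}
```

A perfect prediction gives MAPE = RMSE = SEP = 0, Bf = Af = 1, and R² = 1, which makes the implementation easy to verify.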
International Nuclear Information System (INIS)
Boyaghchi, Fateme Ahmadi; Molaie, Hanieh
2015-01-01
Highlights: • The advanced exergy destruction components of a real CCPP are calculated. • The TIT and r_c variations are investigated for the exergy destruction parts of the cycle. • TIT and r_c growth increase the improvement potential in most of the components. • TIT and r_c growth decrease the unavoidable part in some components. - Abstract: Advanced exergy analysis extends engineering knowledge beyond the respective conventional methods by improving the design and operation of energy conversion systems. In advanced exergy analysis, the exergy destruction is split into endogenous/exogenous and avoidable/unavoidable parts. In this study, an advanced exergy analysis of a real combined cycle power plant (CCPP) with supplementary firing is performed. The endogenous/exogenous irreversibilities of each component, as well as their combination with the avoidable/unavoidable irreversibilities, are determined. A parametric study discusses the sensitivity of various performance indicators to the turbine inlet temperature (TIT) and compressor pressure ratio (r_c). It is observed that the thermal and exergy efficiencies increase when TIT and r_c rise. Results show that the combustion chamber (CC) concentrates most of the exergy destruction (more than 62%), dominantly in unavoidable endogenous form, which decreases by 11.89% and 13.12%, while the avoidable endogenous exergy destruction increases, multiplied by factors of 1.3 and 8.6, with increasing TIT and r_c, respectively. In addition, TIT growth strongly increases the endogenous avoidable exergy destruction in the high pressure superheater (HP.SUP), CC, and low pressure evaporator (LP.EVAP). It also increases the exogenous avoidable exergy destruction of the HP.SUP and low pressure steam turbine (LP.ST) and leads to a large decrement in the endogenous exergy destruction of the preheater (PRE), by about 98.8%. Furthermore, r_c growth sharply raises the endogenous avoidable exergy destruction of gas
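The four-way split described above can be sketched as simple bookkeeping. The function below follows the usual advanced-exergy conventions (unavoidable destruction obtained from the component's best achievable E_D/E_P ratio); the numbers in the example are invented for illustration, not the plant's data:

```python
def split_exergy_destruction(ed_total, ed_endogenous, ep_total,
                             ep_endogenous, un_ratio):
    """Four-way split of a component's exergy destruction E_D.
    `un_ratio` is the (E_D / E_P) ratio of the component operated at
    its best achievable ('unavoidable') conditions."""
    ed_un = ep_total * un_ratio            # unavoidable
    ed_av = ed_total - ed_un               # avoidable
    ed_un_en = ep_endogenous * un_ratio    # unavoidable endogenous
    ed_av_en = ed_endogenous - ed_un_en    # avoidable endogenous
    ed_un_ex = ed_un - ed_un_en            # unavoidable exogenous
    ed_av_ex = ed_av - ed_av_en            # avoidable exogenous
    return {"UN,EN": ed_un_en, "AV,EN": ed_av_en,
            "UN,EX": ed_un_ex, "AV,EX": ed_av_ex}

# Hypothetical component: E_D = 100 kW (70 kW endogenous),
# E_P = 500 kW (350 kW endogenous), unavoidable E_D/E_P = 0.1
parts = split_exergy_destruction(100.0, 70.0, 500.0, 350.0, 0.1)
```

By construction the four parts sum back to the total exergy destruction, which is the basic consistency check of the decomposition.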
Systemization of burnup sensitivity analysis code. 2
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2005-02-01
Towards the practical use of fast reactors, it is a very important subject to improve the prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high performance cores and of improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as required. For
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
In this analysis, we study the sensitivity due to variation of the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
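For the simple single-predictor generalized matching law, log(B1/B2) = a·log(M1/M2) + log b, sensitivity a and bias log b can be recovered by ordinary least squares in log coordinates; the concatenated model of the paper adds a second predictor for reinforcer price. A minimal sketch, with synthetic data rather than the study's choice data:

```python
import math

def fit_matching_law(mag_ratios, choice_ratios):
    """Fit log(B1/B2) = a * log(M1/M2) + log b by ordinary least
    squares; returns (sensitivity a, bias log_b)."""
    xs = [math.log10(m) for m in mag_ratios]
    ys = [math.log10(c) for c in choice_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_b = my - a * mx   # intercept = bias toward alternative 1
    return a, log_b
```

Feeding the function data generated exactly from the law recovers the generating parameters, which is the natural first test before fitting behavioral data.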
Tan, Ming-pu
2010-01-01
Water stress is known to alter cytosine methylation, which generally represses transcription. However, little is known about the role of methylation alteration in maize under osmotic stress. Here, methylation-sensitive amplified polymorphism (MSAP) was used to screen PEG- or NaCl-induced methylation alterations in maize seedlings. The sequences of 25 differentially amplified fragments relevant to stress were successfully obtained. Two stress-specific fragments from leaves, LP166 and LPS911, shown to be homologous to retrotransposon Gag-Pol protein genes, suggested that osmotic stress induces methylation of retrotransposons. Three MSAP fragments, representing drought-induced or salt-induced methylation in leaves, were homologous to a maize aluminum-induced transporter. Besides these, heat shock protein HSP82, poly [ADP-ribose] polymerase 2, lipoxygenase, casein kinase (CK2), and dehydration-responsive element-binding (DREB) factor were also homologs of MSAP sequences from salt-treated roots. One MSAP fragment amplified from salt-treated roots, designated RS39, was homologous to the first intron of maize protein phosphatase 2C (zmPP2C), whereas LS103, absent from salt-treated leaves, was homologous to maize glutathione S-transferases (zmGST). Expression analysis showed that salt-induced intron methylation of root zmPP2C significantly downregulated its expression, while salt-induced demethylation of leaf zmGST weakly upregulated its expression. The results suggested that salinity-induced methylation downregulated zmPP2C expression, a negative regulator of the stress response, while salinity-induced demethylation upregulated zmGST expression, a positive effector of the stress response. Altered methylation, in response to stress, might also be involved in stress acclimation. Copyright 2009 Elsevier Masson SAS. All rights reserved.
Contributions to sensitivity analysis and generalized discriminant analysis
International Nuclear Information System (INIS)
Jacques, J.
2005-12-01
Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the latter react to variations of its inputs. The methods based on the study of variance quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Indeed, since classical sensitivity indices have no meaning (from an interpretation point of view) in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
Systemization of burnup sensitivity analysis code
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2004-02-01
Towards the practical use of fast reactors, it is a very important subject to improve the prediction accuracy for neutronic properties in LMFBR cores, from the viewpoints of improving plant efficiency with rationally high performance cores and of improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore it is necessary to systemize the current code for burnup sensitivity analysis into functional component blocks that can be divided or assembled as needed. For this
Sensitivity analysis of the Two Geometry Method
International Nuclear Information System (INIS)
Wichers, V.A.
1993-09-01
The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposit activity larger than several times the gas activity, pipe diameters smaller than 40 mm, and pressures below 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case assumptions with regard to the measurement conditions, and on realistic assumptions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty that are experimentally inaccessible. (orig.)
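The false-alarm and non-detection probabilities mentioned above can be illustrated with a minimal Monte Carlo model: a noisy enrichment measurement compared against an alarm threshold. The Gaussian noise model and the numbers (4.0% declared, 5.0% diverted, 0.2% sigma) are assumptions for illustration, not the report's instrument model:

```python
import random

def alarm_probability(true_enrichment, threshold, sigma, n=50_000, seed=7):
    """Monte Carlo estimate of the probability that a noisy enrichment
    measurement exceeds the alarm threshold."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.gauss(true_enrichment, sigma) > threshold)
    return hits / n

# False alarm: material is as declared (4.0 % U-235) but alarms anyway
p_fa = alarm_probability(4.0, threshold=4.5, sigma=0.2)
# Non-detection: material is diverted (5.0 %) but stays below threshold
p_nd = 1.0 - alarm_probability(5.0, threshold=4.5, sigma=0.2)
```

With the threshold 2.5 sigma away from both hypotheses, both error probabilities come out below one percent, which is the kind of trade-off the report's sensitivity analysis quantifies for realistic instrument uncertainties.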
Directory of Open Access Journals (Sweden)
Jin Zhou
2018-01-01
Full Text Available During the arrested landing of a carrier-based aircraft on deck, the impact loads on the landing gears and airframe are closely related to the landing states. The distribution and extreme values of the landing loads obtained during life-cycle analysis provide an important basis for buffer parameter design and fatigue design. In this paper, the effect of the multivariate distribution was studied based on military standards and guides. A virtual prototype was established, and the extended Fourier amplitude sensitivity test (EFAST) method was applied to the sensitivity analysis of the landing variables. The results show that sinking speed and rolling angle are the main factors influencing the landing gear's course load and vertical load; sinking speed, rolling angle, and yawing angle are the main factors influencing the landing gear's lateral load; and sinking speed is the main factor influencing the barycenter overload. The extreme values of the loads show that the typical condition design in the structural strength analysis is safe. The maximum difference in the vertical load of the main landing gear is 12.0%. This research may provide a reference for the structural design of landing gears and the compilation of load spectra for carrier-based aircraft.
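The classical FAST idea behind EFAST can be sketched compactly: drive each input along a space-filling search curve at its own frequency, then attribute variance to an input from the Fourier amplitudes at that frequency's harmonics. The sketch below is plain FAST on a linear test function, not the paper's EFAST implementation or its landing-dynamics model:

```python
import math

def fast_first_order(f, omegas, n=512, harmonics=4):
    """Classical FAST estimate of first-order sensitivity indices.
    Input i follows the search curve x_i(s) = 0.5 + asin(sin(w_i*s))/pi,
    which sweeps [0, 1] uniformly; frequencies must be chosen so their
    low harmonics do not collide."""
    s_vals = [2.0 * math.pi * k / n for k in range(n)]
    xs = [[0.5 + math.asin(math.sin(w * s)) / math.pi for w in omegas]
          for s in s_vals]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n

    def power(j):
        # squared Fourier amplitude of the output at integer frequency j
        a = sum((y - mean) * math.cos(j * s) for y, s in zip(ys, s_vals)) / n
        b = sum((y - mean) * math.sin(j * s) for y, s in zip(ys, s_vals)) / n
        return a * a + b * b

    total = 2.0 * sum(power(j) for j in range(1, n // 2))
    return [2.0 * sum(power(p * w) for p in range(1, harmonics + 1)) / total
            for w in omegas]
```

For f(x) = x1 + 2·x2 with both inputs uniform on [0, 1], the exact first-order indices are 0.2 and 0.8, so the estimator is easy to validate.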
Directory of Open Access Journals (Sweden)
A. M. Hashemi
2000-01-01
Full Text Available Regionalized and at-site flood frequency curves exhibit considerable variability in their shapes, but the factors controlling the variability (other than sampling effects) are not well understood. An application of the Monte Carlo simulation-based derived distribution approach is presented in this two-part paper to explore the influence of climate, described by simulated rainfall and evapotranspiration time series, and basin factors on the flood frequency curve (ffc). The sensitivity analysis conducted in the paper should not be interpreted as reflecting possible climate changes, but the results can provide an indication of the changes to which the flood frequency curve might be sensitive. A single-site Neyman-Scott point process model of rainfall, with convective and stratiform cells (Cowpertwait, 1994; 1995), has been employed to generate synthetic rainfall inputs to a rainfall-runoff model. The time series of the potential evapotranspiration (ETp) demand has been represented through an AR(n) model with a seasonal component, while a simplified version of the ARNO rainfall-runoff model (Todini, 1996) has been employed to simulate the continuous discharge time series. All these models have been parameterised in a realistic manner using observed data and results from previous applications, to obtain 'reference' parameter sets for a synthetic case study. Subsequently, perturbations to the model parameters have been made one at a time, and the sensitivities of the generated annual maximum rainfall and flood frequency curves (unstandardised, and standardised by the mean) have been assessed. Overall, the sensitivity analysis described in this paper suggests that the soil moisture regime and, in particular, the probability distribution of soil moisture content at the storm arrival time can be considered as a unifying link between the perturbations to the several parameters and their effects on the standardised and unstandardised ffcs, thus revealing the
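The derived distribution approach can be caricatured in a few lines: simulate many years of synthetic rainfall, route it through a rainfall-runoff model, and build the flood frequency curve from the annual maxima. The exponential-rainfall and linear-reservoir stand-ins below are assumptions replacing the Neyman-Scott and ARNO models of the paper:

```python
import random

def simulate_annual_maxima(n_years, seed=3):
    """Toy derived-distribution chain: daily exponential rainfall ->
    single linear-reservoir runoff -> annual maximum discharge."""
    rng = random.Random(seed)
    maxima = []
    storage, k = 0.0, 0.2              # reservoir state, outflow coefficient
    for _ in range(n_years):
        peak = 0.0
        for _ in range(365):
            # 30 % wet days, mean depth 8 mm (illustrative values)
            rain = rng.expovariate(1.0 / 8.0) if rng.random() < 0.3 else 0.0
            storage += rain
            q = k * storage            # linear-reservoir discharge
            storage -= q
            peak = max(peak, q)
        maxima.append(peak)
    return maxima

def frequency_curve(maxima):
    """Empirical ffc: (return period, discharge) pairs from Weibull
    plotting positions, largest flood first."""
    n = len(maxima)
    ordered = sorted(maxima, reverse=True)
    return [((n + 1) / rank, q) for rank, q in enumerate(ordered, start=1)]
```

Perturbing the rainfall or reservoir parameters and regenerating the curve is the one-at-a-time sensitivity experiment the paper performs with its far more realistic model chain.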
Risk Characterization uncertainties associated description, sensitivity analysis
International Nuclear Information System (INIS)
Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.
2013-01-01
The PowerPoint presentation covers risks at the estimated levels of exposure; uncertainty and variability in the analysis; sensitivity analysis; risks from exposure to multiple substances; the formulation of guidelines for carcinogenic and genotoxic compounds; and risks for subpopulations.
Global sensitivity analysis using polynomial chaos expansions
International Nuclear Information System (INIS)
Sudret, Bruno
2008-01-01
Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
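The post-processing step the abstract refers to is simple once the PCE coefficients are indexed by multi-indices: for an orthonormal basis the variance is the sum of squared non-constant coefficients, and each Sobol index collects the terms supported on the corresponding variable subset. A sketch with invented coefficients:

```python
def sobol_from_pce(coeffs):
    """Turn PCE coefficients {multi-index: coefficient} into Sobol
    indices. The index of a variable subset u gathers the squared
    coefficients whose multi-index is nonzero exactly on u."""
    var = sum(c * c for idx, c in coeffs.items() if any(idx))
    indices = {}
    for idx, c in coeffs.items():
        if not any(idx):
            continue  # constant term carries no variance
        support = tuple(i for i, d in enumerate(idx) if d > 0)
        indices[support] = indices.get(support, 0.0) + c * c / var
    return indices

# Invented expansion: y = 1 + 2*psi_1(x1) + 1*psi_1(x2)
#                       + 0.5*psi_1(x1)*psi_1(x2)
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5}
S = sobol_from_pce(coeffs)
```

Here the variance is 4 + 1 + 0.25 = 5.25, and the first-order and interaction indices sum to one, which is the standard consistency check.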
Choi, Jongwan; Kim, Felix Sunjoo
2018-03-01
We studied the influence of photoanode thickness on the photovoltaic characteristics and impedance responses of dye-sensitized solar cells based on a ruthenium dye containing a hexyloxyl-substituted carbazole unit (Ru-HCz). As the thickness of the photoanode increases from 4.2 μm to 14.8 μm, the dye-loading amount and the efficiency increase. Devices with still thicker photoanodes show a decrease in efficiency due to the higher probability that electron-hole pairs recombine before charge extraction. We also analyzed the electron-transfer and recombination characteristics as a function of photoanode thickness through detailed electrochemical impedance spectroscopy analysis.
Pustil'Nik, Lev
We consider the possible influence of unfavorable states of space weather on agriculture markets through the chain of connections: "space weather" - "earth weather" - "agriculture crops" - "price reaction". We show that recently discovered manifestations of the "space weather" - "earth weather" relation allow a wide range of the expected solar-terrestrial connections to be revised. In previous works we proposed possible mechanisms for the wheat market's reaction to specific unfavorable states of space weather, in the form of price bursts and price asymmetry. We point out that the considered "price reaction scenarios" can be realized only when several necessary conditions hold simultaneously: high sensitivity of the local earth weather in the selected region to space weather; a state of "high risk agriculture" in the selected agricultural zone; and high sensitivity of the agricultural market to a possible deficit of yield. The results of our previous works (I, II), including application of this approach to the Medieval England wheat market (1250-1700) and to the modern USA durum market (1910-1992), showed that the connection between wheat price bursts and the state of space weather in these cases was real. The aim of the present work is to answer the question of why wheat markets in one selected region may be sensitive to a space weather factor, while wheat markets in other regions are completely indifferent to space weather. To this end, we consider the sensitivity of wheat markets to space weather as a function of their location in different climatic zones of Europe. We analyze a database of 95 European wheat markets from 14 countries over a 600-year period (1260-1912). We show that the observed sensitivity of wheat markets to space weather effects is controlled, first of all, by the type of predominant climate in the different zones of agricultural production. Wheat markets in the Northern and, partly, in
Variance-based sensitivity indices for models with dependent inputs
International Nuclear Information System (INIS)
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis and the prediction of future outcomes, and uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and on ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
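The independent-input baseline that this record extends can be sketched with a Saltelli-style pick-freeze Monte Carlo estimator of the first-order Sobol' indices. This is not the authors' code; the additive toy model, with known analytic indices S1 = 0.9 and S2 = 0.1, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy additive model Y = 3*X1 + X2; analytically S1 = 9/10, S2 = 1/10
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

n, d = 200_000, 2
A = rng.standard_normal((n, d))   # two independent base samples
B = rng.standard_normal((n, d))
yA, yB = model(A), model(B)

S = []
for i in range(d):
    BAi = B.copy()
    BAi[:, i] = A[:, i]           # "freeze" input i, resample all the others
    # pick-freeze estimator of the first-order index S_i
    S.append(np.mean(yA * (model(BAi) - yB)) / np.var(yA))
```

With correlated inputs this estimator no longer isolates individual contributions, which is exactly the gap the proposed orthogonalisation-based indices address.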
Directory of Open Access Journals (Sweden)
Ting Zhao
2015-01-01
Full Text Available Accurate and reliable state of charge (SOC) estimation is a key enabling technique for large-format lithium-ion battery packs due to its vital role in battery safety and effective management. This paper makes three contributions to the existing literature through robust algorithms. (1) An observer-based SOC estimation error model is established, in which the parameters crucial to SOC estimation accuracy are determined by quantitative analysis, forming a basis for parameter updates. (2) An estimation method for a battery pack that takes the inconsistency of cells into consideration is proposed, ensuring that each cell's SOC remains between 0 and 1 and effectively preventing the battery from being overcharged or overdischarged. Online estimation of the parameters is also presented in this paper. (3) The SOC estimation accuracy of the battery pack is verified using a hardware-in-the-loop simulation platform. The experimental results under various dynamic test conditions, temperatures, and initial SOC differences between two cells demonstrate the efficacy of the proposed method.
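The observer idea behind contribution (1) can be sketched with a deliberately simple single-state cell model. Everything here is a hypothetical stand-in for the paper's model: a linearized open-circuit voltage ocv(s) = 3.0 + 1.2*s, invented cell parameters, and a plain Luenberger gain rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical 1-state cell: soc' = -i/Q, terminal voltage y = ocv(soc) - R*i
Q, R, dt = 7200.0, 0.01, 1.0   # capacity [A s], resistance [ohm], step [s]
a, b = 3.0, 1.2                # linearized OCV coefficients (invented)
L = 0.1                        # Luenberger observer gain

soc_true, soc_hat = 0.8, 0.5   # observer starts with a large initial SOC error
for _ in range(600):
    i = 2.0                                             # constant discharge [A]
    y = a + b * soc_true - R * i + rng.normal(0, 1e-3)  # noisy voltage reading
    y_hat = a + b * soc_hat - R * i
    soc_true -= i * dt / Q                              # plant: coulomb counting
    soc_hat += -i * dt / Q + L * (y - y_hat)            # observer correction
```

The estimation error decays geometrically at rate (1 - L*b) per step, so after 600 steps the observer tracks the true SOC to well within 1% despite the wrong initial guess.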
Sensitivity analysis of a PWR pressurizer
International Nuclear Information System (INIS)
Bruel, Renata Nunes
1997-01-01
A sensitivity analysis relative to the parameters and to the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by implementing the key parameters and theoretical modellings, generating a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)
Sensitivity analysis of a greedy heuristic for knapsack problems
Ghosh, D; Chakravarti, N; Sierksma, G
2006-01-01
In this paper, we carry out a parametric analysis as well as a tolerance-limit-based sensitivity analysis of a greedy heuristic for two knapsack problems - the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis
Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi
To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to specify the frontier intentionally, whereas traditional DEA and Super-DEA determine the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functionality of traditional DEA and Super-DEA, so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.
Energy Technology Data Exchange (ETDEWEB)
Tang, Chun Xiang [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Zhang, Long Jiang, E-mail: kevinzhlj@163.com [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Han, Zong Hong; Zhou, Chang Sheng [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Krazinski, Aleksander W.; Silverman, Justin R. [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Schoepf, U. Joseph [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China); Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC (United States); Lu, Guang Ming, E-mail: cjr.luguangming@vip.163.com [Department of Medical Imaging, Jinling Hospital, Clinical School of Medical College, Nanjing University, Nanjing, Jiangsu 210002 (China)
2013-12-01
Purpose: To evaluate the performance of dual-energy CT (DECT) based vascular iodine analysis for the detection of acute peripheral pulmonary embolism (PE) in a canine model with histopathological findings as the reference standard. Materials and methods: The study protocol was approved by our institutional animal committee. Thrombi (n = 12) or saline (n = 4) were intravenously injected via the right femoral vein in sixteen dogs. CT pulmonary angiography (CTPA) in DECT mode was performed, and conventional CTPA images and DECT based vascular iodine studies using the Lung Vessels application were reconstructed. Two radiologists visually evaluated the number and location of PEs using the conventional CTPA and DECT series on a per-animal and a per-clot basis. Detailed histopathological examination of lung specimens and catheter angiography served as the reference standard. Sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of DECT and CTPA were calculated on a segmental and a subsegmental or more distal pulmonary artery basis. Weighted κ values were computed to evaluate inter-modality and inter-reader agreement. Results: Thirteen dogs were enrolled for final image analysis (experimental group = 9, control group = 4). Histopathological results revealed 237 emboli in 45 lung lobes in the 9 experimental dogs: 11 emboli in segmental pulmonary arteries, 49 in subsegmental pulmonary arteries, and 177 in fifth-order or more distal pulmonary arteries. Overall sensitivity, specificity, accuracy, PPV, and NPV for CTPA plus DECT were 93.1%, 76.9%, 87.8%, 89.4%, and 84.2% for the detection of pulmonary emboli. With CTPA versus DECT, sensitivities, specificities, accuracies, PPVs, and NPVs were all 100% for the detection of pulmonary emboli on a segmental pulmonary artery basis; 88.9%, 100%, 96.0%, 100%, and 94.1% for CTPA and 90.4%, 93.0%, 92.0%, 88.7%, and 94.1% for DECT on a subsegmental pulmonary artery basis; 23.8%, 96.4%, 50.4%, 93
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
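The sensitivity-derivative computation described here can be illustrated on a toy framed structure: a two-spring chain (ground - k1 - node 1 - k2 - node 2) with a unit load at node 2. The structure and numbers are invented for illustration; the key point, as in the abstract, is that the direct-differentiation solve reuses the already-assembled stiffness matrix:

```python
import numpy as np

# stiffness matrix of a two-spring chain
def K(k1, k2):
    return np.array([[k1 + k2, -k2], [-k2, k2]])

f = np.array([0.0, 1.0])          # unit load at node 2
k1, k2 = 100.0, 50.0
u = np.linalg.solve(K(k1, k2), f)  # static displacements

# direct differentiation: K (du/dk1) = -(dK/dk1) u, reusing the same K
dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
du_direct = np.linalg.solve(K(k1, k2), -dK_dk1 @ u)

# central finite-difference check of the sensitivity derivative
h = 1e-4
du_fd = (np.linalg.solve(K(k1 + h, k2), f)
         - np.linalg.solve(K(k1 - h, k2), f)) / (2 * h)
```

For this series chain u1 = F/k1, so du1/dk1 = -1/k1^2 = -1e-4, which both approaches recover.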
Sensitivity analysis in life cycle assessment
Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.
2014-01-01
Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case stud-ies. Two
Dye-sensitized solar cells based on purple corn sensitizers
International Nuclear Information System (INIS)
Phinjaturus, Kawin; Maiaugree, Wasan; Suriharn, Bhalang; Pimanpaeng, Samuk; Amornkitbamrung, Vittaya; Swatsitang, Ekaphan
2016-01-01
Graphical abstract: - Highlights: • Extract from husk, cob and silk of purple corn was used as a photosensitizer in DSSC. • Effect of solvents i.e. acetone, ethanol and DI water on DSSC efficiency was studied. • The highest efficiency of 1.06% was obtained in DSSC based on acetone extraction. - Abstract: Natural dye extracted from husk, cob and silk of purple corn, were used for the first time as photosensitizers in dye sensitized solar cells (DSSCs). The dye sensitized solar cells fabrication process has been optimized in terms of solvent extraction. The resulting maximal efficiency of 1.06% was obtained from purple corn husk extracted by acetone. The ultraviolet–visible (UV–vis) spectroscopy, Fourier transform infrared spectroscopy (FTIR), electrochemical impedance spectroscopy (EIS) and incident photon-to-current efficiency (IPCE) were employed to characterize the natural dye and the DSSCs.
Dye-sensitized solar cells based on purple corn sensitizers
Energy Technology Data Exchange (ETDEWEB)
Phinjaturus, Kawin [Materials Science and Nanotechnology Program, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Maiaugree, Wasan [Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Suriharn, Bhalang [Department of Plant Science and Agricultural Resources, Faculty of Agriculture, Khon Kaen University, Khon Kaen 40002 (Thailand); Pimanpaeng, Samuk; Amornkitbamrung, Vittaya [Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Integrated Nanotechnology Research Center (INRC), Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Swatsitang, Ekaphan, E-mail: ekaphan@kku.ac.th [Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Integrated Nanotechnology Research Center (INRC), Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen 40002 (Thailand); Nanotec-KKU Center of Excellence on Advanced Nanomaterials for Energy Production and Storage, Khon Kaen 40002 (Thailand)
2016-09-01
Graphical abstract: - Highlights: • Extract from husk, cob and silk of purple corn was used as a photosensitizer in DSSC. • Effect of solvents i.e. acetone, ethanol and DI water on DSSC efficiency was studied. • The highest efficiency of 1.06% was obtained in DSSC based on acetone extraction. - Abstract: Natural dye extracted from husk, cob and silk of purple corn, were used for the first time as photosensitizers in dye sensitized solar cells (DSSCs). The dye sensitized solar cells fabrication process has been optimized in terms of solvent extraction. The resulting maximal efficiency of 1.06% was obtained from purple corn husk extracted by acetone. The ultraviolet–visible (UV–vis) spectroscopy, Fourier transform infrared spectroscopy (FTIR), electrochemical impedance spectroscopy (EIS) and incident photon-to-current efficiency (IPCE) were employed to characterize the natural dye and the DSSCs.
Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
user
of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
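The bootstrap convergence criteria described above can be sketched on a toy problem. This is not the study's code: the model, the crude correlation-based index, and the bootstrap size are all invented, but the pattern (resample, recompute indices, check CI width for value convergence and ordering for ranking convergence) matches the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy model with three inputs of very different influence
n = 5000
X = rng.uniform(-1, 1, size=(n, 3))
y = 4 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, n)

def indices(X, y):
    # squared Pearson correlation as a crude sensitivity measure per input
    return np.array([np.corrcoef(X[:, i], y)[0, 1] ** 2 for i in range(X.shape[1])])

base = indices(X, y)

# bootstrap: resample rows, recompute indices; CI width tests value convergence,
# ordering across resamples tests ranking convergence
boot = np.array([indices(X[idx], y[idx])
                 for idx in rng.integers(0, n, size=(200, n))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
width = hi - lo
ranking_stable = bool((boot.argsort(axis=1) == base.argsort()).all())
```

A narrow CI with a stable ranking at a given n is precisely the situation the study identifies where ranking/screening converge before the index values do.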
DEFF Research Database (Denmark)
Blaabjerg, Frede; Chiarantoni, Ernesto; Aquila, Antonio Dell
2004-01-01
Three-phase active rectifiers based on the voltage source converter topology can successfully replace traditional thyristor-based rectifiers or a diode bridge plus chopper in interfacing dc-systems to the grid. However, if the application in which they are employed has a high safety issue......, to the grid side stiffness and to the parameters of the controller has never been considered in detail. In this paper the experimental results of an LCL-filter-based three-phase active rectifier are analysed with the circuit theory approach. A "virtual circuit" is synthesized in the role of the digital controller...
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, thereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF-based approaches...
Sensitivity analysis in optimization and reliability problems
International Nuclear Information System (INIS)
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
2008-01-01
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
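The closed-formula LP sensitivities mentioned here boil down to a familiar fact: at a nondegenerate optimum, the duals of the active constraints give the derivative of the optimal value with respect to the right-hand-side data. A textbook LP (not taken from the paper) makes this concrete:

```python
import numpy as np

# textbook LP: max 3*x1 + 5*x2
# s.t. x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18
# optimum at (2, 6) with z* = 36, where constraints 2 and 3 are active
c = np.array([3.0, 5.0])
A_act = np.array([[0.0, 2.0], [3.0, 2.0]])   # rows of the two active constraints

# closed-form sensitivity: duals of the active constraints solve A_act^T y = c,
# and y_i = d z*/d b_i (the shadow prices)
y = np.linalg.solve(A_act.T, c)

# finite-difference check of d z*/d b3 at the same optimal vertex
def optimum(b3):
    x = np.linalg.solve(A_act, np.array([12.0, b3]))  # vertex of the active set
    return c @ x

eps = 1e-6
fd = (optimum(18.0 + eps) - optimum(18.0)) / eps
```

Here y = [1.5, 1.0], so relaxing the third constraint by one unit raises the optimum by one, which the finite difference confirms.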
Sensitivity analysis in optimization and reliability problems
Energy Technology Data Exchange (ETDEWEB)
Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es
2008-12-15
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
Techniques for sensitivity analysis of SYVAC results
International Nuclear Information System (INIS)
Prust, J.O.
1985-05-01
Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values be examined. (author)
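The Kruskal-Wallis screening idea mentioned in the recommendations can be sketched as follows: bin the Monte Carlo sample by quantiles of one input parameter and test whether the output (here a synthetic "dose") shifts across bins. The model and parameters are invented, not SYVAC's:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)

# synthetic screening exercise: dose depends strongly on x1, not at all on x2
n = 2000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
dose = np.exp(3 * x1) + rng.normal(0, 0.5, n)

def kw_pvalue(x, y, bins=5):
    # split the output into quantile bins of the input, test for location shifts
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.digitize(x, edges[1:-1])
    groups = [y[idx == k] for k in range(bins)]
    return kruskal(*groups).pvalue

p_influential = kw_pvalue(x1, dose)   # tiny p-value: x1 drives the dose
p_inert = kw_pvalue(x2, dose)         # large p-value: x2 does not
```

Being rank-based, the test captures the monotone-but-nonlinear dependence on x1 that a linear correlation coefficient would understate.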
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...
Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH
2017-12-01
The Solo-Ngawi toll road project is part of the mega-project of Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with its business plan. The determination of appropriate toll rates will affect progress in regional economic sustainability and decrease traffic congestion; such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to find out the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated-preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo artery road segment, and the data were then analysed with logit and probit models. Based on the analysis, it is found that the WTP in the binomial logit model is more sensitive to fare changes than in the probit model under the same travel conditions: the range of tariff change against values of WTP in the binomial logit model is 20% greater than the corresponding range in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
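The binomial logit model used in this kind of WTP analysis, and its fare sensitivity, can be sketched in a few lines. The coefficients b0 and b1 are invented for illustration and are not estimates from the study:

```python
import math

# hypothetical binomial logit for route choice (coefficients invented):
# P(choose toll road) = 1 / (1 + exp(-(b0 + b1 * fare)))
b0, b1 = 2.0, -0.5   # b1 < 0: higher fares reduce willingness to pay

def p_toll(fare):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * fare)))

# fare sensitivity of demand: dP/dfare = b1 * P * (1 - P), steepest at P = 0.5
def dp_dfare(fare):
    p = p_toll(fare)
    return b1 * p * (1.0 - p)
```

Because the derivative is proportional to P(1 - P), the logit curve is most fare-sensitive where the market is split roughly 50/50, which is why logit and probit can give similar probabilities yet different sensitivity ranges.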
Subset simulation for structural reliability sensitivity analysis
International Nuclear Information System (INIS)
Song Shufang; Lu Zhenzhou; Qiao Hongwei
2009-01-01
Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of the basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that distribution parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes.
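The quantity being estimated here, the derivative of a failure probability with respect to a distribution parameter, can be sketched without the subset-simulation machinery by using plain Monte Carlo with a score-function estimator on a one-dimensional problem with known answers (the limit state and parameters are invented):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# failure when g(x) = b - x < 0, with x ~ N(mu, sigma); exact answers known
mu, sigma, b = 0.0, 1.0, 2.0
n = 1_000_000
x = rng.normal(mu, sigma, n)
fail = x > b

pf = fail.mean()
# score-function estimator of the sensitivity: dPf/dmu = E[1{fail} (x - mu)/sigma^2]
dpf_dmu = np.mean(fail * (x - mu) / sigma**2)

exact_pf = 1.0 - norm.cdf((b - mu) / sigma)     # Pf = 1 - Phi((b - mu)/sigma)
exact_dpf = norm.pdf((b - mu) / sigma) / sigma  # dPf/dmu = phi((b - mu)/sigma)/sigma
```

Plain Monte Carlo needs very large n for small failure probabilities; the abstract's MCMC- and IS-based Subsim procedures exist precisely to estimate the same sensitivities far more efficiently in that regime.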
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
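The central claim, that smoothing-based sensitivity measures reveal nonlinear input-output relationships that linear regression misses, can be sketched with a crude moving-window smoother standing in for LOESS (the model, noise level and window width are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# a purely nonlinear dependence that linear-regression-based measures miss
n = 4000
x = rng.uniform(-1, 1, n)
y = x**2 + rng.normal(0, 0.05, n)

# linear-regression sensitivity: R^2 of y ~ x is essentially zero here
r2_linear = np.corrcoef(x, y)[0, 1] ** 2

# smoothing-based sensitivity: variance explained by a moving-window estimate
# of E[y | x] (a crude stand-in for LOESS; window width chosen by hand)
order = np.argsort(x)
ys = y[order]
window = 101
smooth = np.convolve(ys, np.ones(window) / window, mode="same")
r2_smooth = 1.0 - np.var(ys - smooth) / np.var(ys)
```

The linear measure reports the input as unimportant while the smoothed conditional mean recovers most of the variance, which is the qualitative behaviour the two-part presentation demonstrates with LOESS, additive models, projection pursuit and recursive partitioning.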
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Sha, A H; Lin, X H; Huang, J B; Zhang, D P
2005-07-01
DNA methylation is known to play an important role in the regulation of gene expression in eukaryotes. The rice cultivar Wase Aikoku 3 becomes resistant to the blight pathogen Xanthomonas oryzae pv. oryzae at the adult stage. Using methylation-sensitive amplified polymorphism (MSAP) analysis, we compared the patterns of cytosine methylation in seedlings and adult plants of the rice cultivar Wase Aikoku 3 that had been inoculated with the pathogen Xanthomonas oryzae pv. oryzae, subjected to mock inoculation or left untreated. In all, 2000 DNA fragments, each representing a recognition site cleaved by either or both of two isoschizomers, were amplified using 60 pairs of selective primers. A total of 380 sites were found to be methylated. Of these, 45 showed differential cytosine methylation among the seedlings and adult plants subjected to different treatments, and overall levels of methylation were higher in adult plants than in seedlings. All polymorphic fragments were sequenced, and six showed homology to genes that code for products of known function. Northern analysis of three fragments indicated that their expression varied with methylation pattern, with hypermethylation being correlated with repression of transcription, as expected. The results suggest that significant differences in cytosine methylation exist between seedlings and adult plants, and that hypermethylation or hypomethylation of specific genes may be involved in the development of adult plant resistance (APR) in rice plants.
Dynamic Resonance Sensitivity Analysis in Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2017-01-01
(PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...
International Nuclear Information System (INIS)
Kamyab, Shahabeddin; Nematollahi, Mohammadreza; Shafiee, Golnoush
2013-01-01
Highlights: ► Importance and sensitivity analysis has been performed for a digitized reactor trip system. ► The results show acceptable trip unavailability for software failure probabilities below 1E-4. ► However, the value of Fussell-Vesely indicates that software common cause failure is still risk significant. ► Diversity and effective testing are found beneficial in reducing the software contribution. - Abstract: The reactor trip system has been digitized in advanced nuclear power plants, since the programmable nature of computer-based systems has a number of advantages over non-programmable systems. However, software is still vulnerable to common cause failure (CCF). Residual software faults represent a CCF concern, which threatens the achievements of digitization. This study attempts to assess the effectiveness of so-called defensive strategies against software CCF with respect to reliability. Sensitivity analysis has been performed by re-quantifying the models upon changing the software failure probability. Importance measures have then been estimated in order to reveal the specific contribution of software CCF to the trip failure probability. The results reveal the importance and effectiveness of signal and software diversity as applicable strategies to ameliorate inefficiencies due to software CCF in the reactor trip system (RTS). No significant change has been observed in the RTS failure probability for basic software CCF probabilities greater than 1 × 10⁻⁴; however, the related Fussell-Vesely measure has been greater than 0.005 for the lower values. The study concludes that the risk associated with software-based systems is a multi-variant function, which requires compromises among the variants to be examined in more precise and comprehensive studies.
Attard, Guillaume; Rossier, Yvan; Eisenlohr, Laurent
2017-09-01
In a previous paper published in Journal of Hydrology, it was shown that underground structures are responsible for a mixing process between shallow and deep groundwater that can favour the spreading of urban contamination. In this paper, the impact of underground structures on the intrinsic vulnerability of urban aquifers was investigated. A sensitivity analysis was performed using a 2D deterministic modelling approach based on the reservoir theory generalized to hydrodispersive systems to better understand this mixing phenomenon and the mixing affected zone (MAZ) caused by underground structures. It was shown that the maximal extent of the MAZ caused by an underground structure is reached approximately 20 years after construction. Consequently, underground structures represent a long-term threat for deep aquifer reservoirs. Regarding the construction process, draining operations have a major impact and favour large-scale mixing between shallow and deep groundwater. Consequently, dewatering should be reduced and enclosed as much as possible. The role played by underground structures' dimensions was assessed. The obstruction of the first aquifer layer caused by construction has the greatest influence on the MAZ. The cumulative impact of several underground structures was assessed. It was shown that the total MAZ area increases linearly with underground structures' density. The role played by materials' properties and hydraulic gradient were assessed. Hydraulic conductivity, anisotropy and porosity have the strongest influence on the development of MAZ. Finally, an empirical law was derived to estimate the MAZ caused by an underground structure in a bi-layered aquifer under unconfined conditions. This empirical law, based on the results of the sensitivity analysis developed in this paper, allows for the estimation of MAZ dimensions under known material properties and underground structure dimensions. This empirical law can help urban planners assess the area of
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited as the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of this hazard is 1.39 ×, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
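The ordering of the priors reported above can be checked directly from their means: Beta(r, s) has mean r/(r+s), so Beta(8, 2) concentrates on high disruption probabilities while the three-expert prior averages much smaller values. This sketch only compares prior means of p; the abstract's hazard figures additionally model disruption over the 10,000-year horizon:

```python
# expected disruption probability E[p] under the Beta priors listed above
priors = [(2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), (1, 1)]
prior_mean = {(r, s): r / (r + s) for (r, s) in priors}   # Beta(r, s) mean

# the "three-expert prior": p equally likely at 1e-3, 1e-2 and 1e-1
three_expert_mean = sum([1e-3, 1e-2, 1e-1]) / 3
```

Beta(8, 2) has the largest prior mean (0.8) and the three-expert prior the smallest (0.037), consistent with those priors producing the maximum and minimum hazard estimates, respectively.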
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and that H₂O, CH, OH, CO₂ and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; F-tests, lack-of-fit tests and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER
Directory of Open Access Journals (Sweden)
Richard E. Wendell
2010-12-01
Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.
Directory of Open Access Journals (Sweden)
M. Franchini
2000-01-01
Full Text Available The sensitivity analysis described in Hashemi et al. (2000) is based on one-at-a-time perturbations to the model parameters. This type of analysis cannot highlight the presence of parameter interactions which might indeed affect the characteristics of the flood frequency curve (ffc) even more than the individual parameters. For this reason, the effects of the parameters of the rainfall and rainfall-runoff models and of the potential evapotranspiration demand on the ffc are investigated here through an analysis of the results obtained from a factorial experimental design, where all the parameters are allowed to vary simultaneously. This latter, more complex, analysis confirms the results obtained in Hashemi et al. (2000), thus making the conclusions drawn there of wider validity and not related strictly to the reference set selected. However, it is shown that two-factor interactions are present not only between different pairs of parameters of an individual model, but also between pairs of parameters of different models, such as the rainfall and rainfall-runoff models, thus demonstrating the complex interaction between climate and basin characteristics affecting the ffc and in particular its curvature. Furthermore, the wider range of climatic regime behaviour produced within the factorial experimental design shows that the probability distribution of soil moisture content at the storm arrival time is no longer sufficient to explain the link between the perturbations to the parameters and their effects on the ffc, as was suggested in Hashemi et al. (2000). Other factors have to be considered, such as the probability distribution of the soil moisture capacity, and the rainfall regime, expressed through the annual maximum rainfalls over different durations. Keywords: Monte Carlo simulation; factorial experimental design; analysis of variance (ANOVA)
Water-Based Pressure-Sensitive Paints
Jordan, Jeffrey D.; Watkins, A. Neal; Oglesby, Donald M.; Ingram, JoAnne L.
2006-01-01
Water-based pressure-sensitive paints (PSPs) have been invented as alternatives to conventional organic-solvent-based pressure-sensitive paints, which are used primarily for indicating distributions of air pressure on wind-tunnel models. Typically, PSPs are sprayed onto aerodynamic models after they have been mounted in wind tunnels. When conventional organic-solvent-based PSPs are used, this practice creates a problem of removing toxic fumes from inside the wind tunnels. The use of water-based PSPs eliminates this problem. The water-based PSPs offer high performance as pressure indicators, plus all the advantages of common water-based paints (low toxicity, low concentrations of volatile organic compounds, and easy cleanup by use of water).
Directory of Open Access Journals (Sweden)
I. A. A. C. Esteves
2016-01-01
Full Text Available A sensitive method was developed and experimentally validated for the in-line analysis and quantification of gaseous feed and product streams of separation processes under research and development based on column chromatography. The analysis uses a specific mass spectrometry method coupled to engineering processes such as Pressure Swing Adsorption (PSA) and Simulated Moving Bed (SMB), which are examples of popular continuous separation technologies that can be used in applications such as natural gas and biogas purification or carbon dioxide sequestration. These processes employ column adsorption equilibria on adsorbent materials, thus requiring real-time quantification of the gas stream composition. For this assay, an internal standard is assumed and a single-point calibration is used in a simple mixture-specific algorithm. The accuracy of the method was found to be between 0.01% and 0.25% (mol) for mixtures of CO2, CH4, and N2, tested as case studies. This makes the method feasible at quality-control levels, so that it can be used as a standard monitoring and analysis procedure.
Sensitivity Analysis of Centralized Dynamic Cell Selection
DEFF Research Database (Denmark)
Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.
2016-01-01
and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...
Sensitivity analysis in a structural reliability context
International Nuclear Information System (INIS)
Lemaitre, Paul
2014-01-01
This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, quantifying the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not within the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology reflecting the impact of an input density modification on the failure probability is then explored. The proposed methods are then applied to the CWNR case, which motivated this thesis. (author)
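The need for subset methods when failure probabilities are small can be seen in a toy crude Monte Carlo estimator. The limit state below (a standard normal response exceeding a threshold) is invented for illustration and is not from the thesis; it shows why the estimator's relative error blows up as the failure probability shrinks.

```python
import math
import random

random.seed(1)

# Toy reliability problem: "failure" when a standard normal response
# exceeds a threshold. Exact P_f = 1 - Phi(3) ≈ 1.35e-3.
THRESHOLD = 3.0

def crude_mc_failure_probability(n):
    failures = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > THRESHOLD)
    return failures / n

n = 1_000_000
p_f = crude_mc_failure_probability(n)

# The coefficient of variation of the crude estimator grows as P_f
# shrinks -- the reason subset simulation and similar methods exist:
# CoV = sqrt((1 - P_f) / (n * P_f))
cov = math.sqrt((1.0 - p_f) / (n * p_f))
print(f"P_f ≈ {p_f:.2e}, estimator CoV ≈ {cov:.1%}")
```

For a target probability of 10⁻⁶ the same formula demands on the order of 10⁸ model runs for a 10% CoV, which is infeasible for expensive numerical models.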
Applications of advances in nonlinear sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Werbos, P J
1982-01-01
The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
Sensitivity Analysis of a Physiochemical ...
African Journals Online (AJOL)
Michael Horsfall
The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.
Sensitivity Enhancement of FBG-Based Strain Sensor.
Li, Ruiya; Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Li, Tianliang; Mao, Jian
2018-05-17
A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by pasting the FBG on a substrate with a lever structure. This mechanical configuration amplifies the strain of the FBG to enhance the overall sensitivity. As the configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis. The strain sensitivity of the developed sensor is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor were investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent sensitivity-enhancing property over a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments.
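The reported 5.2× enhancement is consistent with a simple lever-arm amplification model. The bare-FBG sensitivity of ~1.2 pm/με used below is a typical textbook value for gratings near 1550 nm, not a figure taken from the paper:

```python
# Illustrative lever-arm amplification of FBG strain sensitivity.
# Assumption (not from the paper): a bare FBG near 1550 nm shifts its
# Bragg wavelength by roughly 1.2 pm per microstrain.
BARE_SENSITIVITY_PM_PER_UE = 1.2

def enhanced_sensitivity(lever_ratio, bare=BARE_SENSITIVITY_PM_PER_UE):
    """The lever mechanically multiplies the strain seen by the FBG."""
    return lever_ratio * bare

s = enhanced_sensitivity(5.2)
print(f"enhanced sensitivity ≈ {s:.2f} pm/ue")  # ≈ 6.24, close to the 6.2 pm/ue reported
```

The small gap between 6.24 and the measured 6.2 pm/με would be absorbed by adhesive compliance and the exact bare-grating coefficient.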
The identification of model effective dimensions using global sensitivity analysis
International Nuclear Information System (INIS)
Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang
2011-01-01
It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
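The first-order and total Sobol' indices on which the effective-dimension estimates above rest can be computed with the standard Saltelli/Jansen pick-freeze estimators. The sketch below applies them to a simple additive test function whose indices are known analytically; it illustrates the technique and is not the authors' code.

```python
import random

random.seed(2)

def f(x):
    # Additive test function: nearly all output variance comes from x1.
    return x[0] + 0.1 * x[1]

def sobol_indices(f, d, n=20_000):
    """First-order S_i (Saltelli 2010) and total T_i (Jansen) estimators."""
    A = [[random.random() for _ in range(d)] for _ in range(n)]
    B = [[random.random() for _ in range(d)] for _ in range(n)]
    yA = [f(a) for a in A]
    yB = [f(b) for b in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S, T = [], []
    for i in range(d):
        # AB_i: rows of A with column i replaced by the column from B.
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya)
                     for yb, yab, ya in zip(yB, yABi, yA)) / n / var)
        T.append(sum((ya - yab) ** 2
                     for ya, yab in zip(yA, yABi)) / (2 * n) / var)
    return S, T

S, T = sobol_indices(f, d=2)
# Analytically S1 = T1 = 1/1.01 ≈ 0.99 and S2 = T2 ≈ 0.01.
print(S, T)
```

Because S ≈ T for every input here, the function is effectively additive (effective dimension one in the superposition sense), exactly the kind of classification the paper uses to predict QMC efficiency.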
Risk and sensitivity analysis in relation to external events
International Nuclear Information System (INIS)
Alzbutas, R.; Urbonas, R.; Augutis, J.
2001-01-01
This paper presents a risk and sensitivity analysis of the impacts of external events on safe operation in general, and in particular on the Ignalina Nuclear Power Plant safety systems. The analysis is based on deterministic and probabilistic assumptions and assessment of the external hazards. Real statistical data are used, as well as initial external-event simulations. Preliminary screening criteria are applied. The analysis of external-event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire and flying turbine parts are analysed. Models are developed and probabilities are calculated. As an example for sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainties raised by an external event and its model. Even in cases where the external-events analysis shows rather limited danger, the sensitivity analysis can determine the causes with the highest influence. These possible future variations can be significant for the safety level and for risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the events' occurrence and propagation can be significantly uncertain. (author)
Global sensitivity analysis in wind energy assessment
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
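The roles of the Weibull parameters and the cut-in/cut-out ("ineffective time") losses described above can be illustrated with a small Monte Carlo sketch. The power curve and all parameter values below are invented for illustration; they are not Masdar City data.

```python
import math
import random

random.seed(3)

# Illustrative (not site-specific) wind and turbine parameters.
SHAPE, SCALE = 2.0, 8.0                   # Weibull k and c [m/s]
CUT_IN, RATED, CUT_OUT = 3.0, 12.0, 25.0  # speeds [m/s]
RATED_POWER_KW = 1500.0

def weibull_speed():
    # Inverse-CDF sampling of a Weibull-distributed wind speed.
    return SCALE * (-math.log(1.0 - random.random())) ** (1.0 / SHAPE)

def power(v):
    # Simplified power curve with cut-in / cut-out losses.
    if v < CUT_IN or v > CUT_OUT:
        return 0.0
    if v >= RATED:
        return RATED_POWER_KW
    return RATED_POWER_KW * ((v - CUT_IN) / (RATED - CUT_IN)) ** 3

n = 100_000
samples = [weibull_speed() for _ in range(n)]
mean_power = sum(power(v) for v in samples) / n
ineffective = sum(1 for v in samples if v < CUT_IN or v > CUT_OUT) / n
print(f"capacity factor ≈ {mean_power / RATED_POWER_KW:.2f}, "
      f"ineffective time ≈ {ineffective:.1%}")
```

Re-running this with perturbed Weibull shape and scale values is exactly the kind of input variation a global SA decomposes into first-order and total effect indices.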
High sensitivity MOSFET-based neutron dosimetry
International Nuclear Information System (INIS)
Fragopoulou, M.; Konstantakos, V.; Zamani, M.; Siskos, S.; Laopoulos, T.; Sarrabayrouse, G.
2010-01-01
A new dosemeter based on a metal-oxide-semiconductor field effect transistor sensitive to both neutrons and gamma radiation was manufactured at LAAS-CNRS Laboratory, Toulouse, France. In order to be used for neutron dosimetry, a thin film of lithium fluoride was deposited on the surface of the gate of the device. The characteristics of the dosemeter, such as the dependence of its response to neutron dose and dose rate, were investigated. The studied dosemeter was very sensitive to gamma rays compared to other dosemeters proposed in the literature. Its response in thermal neutrons was found to be much higher than in fast neutrons and gamma rays.
Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information
Directory of Open Access Journals (Sweden)
Chuanqi Li
2014-11-01
Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
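Latin Hypercube Sampling, on which both sensitivity measures above rely, can be implemented in a few lines. This is a generic sketch, not SWMM-specific code:

```python
import random

random.seed(4)

def latin_hypercube(n, d):
    """n samples in [0, 1)^d with exactly one sample per stratum per dimension."""
    samples = [[0.0] * d for _ in range(n)]
    for j in range(d):
        strata = list(range(n))
        random.shuffle(strata)  # independent random permutation per dimension
        for i in range(n):
            # One point, uniformly placed, inside stratum [k/n, (k+1)/n).
            samples[i][j] = (strata[i] + random.random()) / n
    return samples

pts = latin_hypercube(10, 3)
# In every dimension the 10 samples occupy all 10 strata of width 0.1.
for j in range(3):
    occupied = sorted(int(p[j] * 10) for p in pts)
    print(occupied)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The stratification guarantees each marginal is evenly covered, which is why a single LHS dataset can feed both the PRCC and the mutual information estimators.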
Sensitivity Analysis in Two-Stage DEA
Directory of Open Access Journals (Sweden)
Athena Forghani
2015-07-01
Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Application of Stochastic Sensitivity Analysis to Integrated Force Method
Directory of Open Access Journals (Sweden)
X. F. Wei
2012-01-01
Full Text Available As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces in computation. It is now being further extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A set of stochastic sensitivity analysis formulations for the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.
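The perturbation method used above propagates input variance through first-order derivatives of the response. A generic first-order second-moment (FOSM) sketch, with an invented response function standing in for the structural model, is:

```python
import random

random.seed(5)

def g(x1, x2):
    # Invented structural response, mildly nonlinear in x2 (not from the paper).
    return 2.0 * x1 + x2 ** 2

# Input means and standard deviations (illustrative values).
mu = (1.0, 1.0)
sigma = (0.05, 0.05)

# First-order perturbation: Var[g] ≈ sum_i (dg/dx_i)^2 * sigma_i^2,
# with the derivatives taken at the mean point via central differences.
h = 1e-6
d1 = (g(mu[0] + h, mu[1]) - g(mu[0] - h, mu[1])) / (2 * h)
d2 = (g(mu[0], mu[1] + h) - g(mu[0], mu[1] - h)) / (2 * h)
var_fosm = d1 ** 2 * sigma[0] ** 2 + d2 ** 2 * sigma[1] ** 2

# Direct Monte Carlo check of the perturbation estimate.
n = 200_000
ys = [g(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
      for _ in range(n)]
mean = sum(ys) / n
var_mc = sum((y - mean) ** 2 for y in ys) / (n - 1)
print(f"FOSM variance {var_fosm:.5f} vs Monte Carlo {var_mc:.5f}")
```

For small input scatter the two agree closely, which is the abstract's point: the perturbation formulation reproduces direct Monte Carlo results at a fraction of the cost.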
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First ... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state ... performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved and a preliminary control structure selection has been made.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Lueders, Tillmann; Manefield, Mike; Friedrich, Michael W
2004-01-01
Stable isotope probing (SIP) of nucleic acids allows the detection and identification of active members of natural microbial populations that are involved in the assimilation of an isotopically labelled compound into nucleic acids. SIP is based on the separation of isotopically labelled DNA or rRNA by isopycnic density gradient centrifugation. We have developed a highly sensitive protocol for the detection of 'light' and 'heavy' nucleic acids in fractions of centrifugation gradients. It involves the fluorometric quantification of total DNA or rRNA, and the quantification of either 16S rRNA genes or 16S rRNA in gradient fractions by real-time PCR with domain-specific primers. Using this approach, we found that fully 13C-labelled DNA or rRNA of Methylobacterium extorquens was quantitatively resolved from unlabelled DNA or rRNA of Methanosarcina barkeri by cesium chloride or cesium trifluoroacetate density gradient centrifugation, respectively. However, a constant low background of unspecific nucleic acids was detected in all DNA or rRNA gradient fractions, which is important for the interpretation of environmental SIP results. Consequently, quantitative analysis of gradient fractions provides higher precision and finer resolution for retrieval of isotopically enriched nucleic acids than possible using ethidium bromide or gradient fractionation combined with fingerprinting analyses. This is a prerequisite for the fine-scale tracing of microbial populations metabolizing 13C-labelled compounds in natural ecosystems.
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large-CPU-time computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
Ligmann-Zielinska, A.; Kramer, D. B.; Spence Cheruvelil, K.; Soranno, P.
2012-12-01
Socio-ecological systems are dynamic and nonlinear. To account for this complexity, we employ agent-based models (ABMs) to study macro-scale phenomena resulting from micro-scale interactions among system components. Because ABMs typically have many parameters, it is challenging to identify which parameters contribute to the emerging macro-scale patterns. In this paper, we address the following question: What is the extent of participation in agricultural land conservation programs given heterogeneous landscape, economic, social, and individual decision making criteria in complex lakesheds? To answer this question, we: [1] built an ABM for our model system; [2] simulated land use change resulting from agent decision making; and [3] estimated the uncertainty of the model output, decomposed it, and apportioned it to each of the parameters in the model. Our model system is a freshwater socio-ecological system - that of farmland and lake water quality within a region containing a large number of lakes and high proportions of agricultural lands. Our study focuses on examining how agricultural land conversion from active to fallow reduces freshwater nutrient loading and improves water quality. Consequently, our ABM is composed of farmer agents who make decisions related to participation in a government-sponsored Conservation Reserve Program (CRP) managed by the Farm Service Agency (FSA). We also include an FSA agent, who selects enrollment offers made by farmers and announces the signup results leading to land use change. The model is executed in a Monte Carlo simulation framework to generate a distribution of maps of fallow lands that are used for calculating nutrient loading to lakes. What follows is a variance-based sensitivity analysis of the results. We compute sensitivity indices for individual parameters and their combinations, allowing for identification of the most influential as well as the insignificant inputs. In the case study, we observe that farmland
Demonstration sensitivity analysis for RADTRAN III
International Nuclear Information System (INIS)
Neuhauser, K.S.; Reardon, P.C.
1986-10-01
A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.
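The elasticity examined above measures the fractional change in dose per fractional change in an input variable. A generic finite-difference sketch follows; the toy dose model is invented for illustration and is not RADTRAN:

```python
def dose(shipments):
    # Toy dose model, quadratic in its input purely for illustration.
    return 0.5 * shipments ** 2

def elasticity(f, x, rel_step=1e-4):
    """Elasticity (x / f(x)) * df/dx, via a central finite difference."""
    h = x * rel_step
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return x * dfdx / f(x)

e = elasticity(dose, 100.0)
print(f"elasticity ≈ {e:.3f}")  # a power law y = a*x^b has elasticity b, here 2
```

An elasticity near 2 flags a variable whose percentage changes are amplified in the dose, exactly the kind of ranking a demonstration sensitivity analysis reports.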
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
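The birth-death model used as a test case above can be simulated with Gillespie's stochastic simulation algorithm, the direct counterpart of the random-time-change representation the paper builds on. Parameter values below are illustrative; with birth rate kb and per-capita death rate kd, the stationary population is Poisson with mean kb/kd.

```python
import random

random.seed(6)

KB, KD = 10.0, 0.5  # birth rate, per-capita death rate -> stationary mean 20

def gillespie_birth_death(t_end, n0=0):
    """Simulate the birth-death chain up to time t_end; return the final count."""
    t, n = 0.0, n0
    while True:
        rates = (KB, KD * n)                 # propensities of the two channels
        total = rates[0] + rates[1]
        t += random.expovariate(total)       # exponential time to next reaction
        if t > t_end:
            return n
        # Pick which reaction fires, proportional to its propensity.
        n += 1 if random.random() < rates[0] / total else -1

reps = 500
mean_n = sum(gillespie_birth_death(t_end=20.0) for _ in range(reps)) / reps
print(f"mean population ≈ {mean_n:.1f} (theory: {KB / KD:.0f})")
```

In the proposed method, the kinetic parameters KB and KD would be uncertain and the exponential clocks of the two channels would be the "inherent" stochastic inputs entering the Sobol' decomposition.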
Dye-sensitized solar cells based on purple corn sensitizers
Phinjaturus, Kawin; Maiaugree, Wasan; Suriharn, Bhalang; Pimanpaeng, Samuk; Amornkitbamrung, Vittaya; Swatsitang, Ekaphan
2016-09-01
Natural dyes extracted from the husk, cob and silk of purple corn were used for the first time as photosensitizers in dye-sensitized solar cells (DSSCs). The DSSC fabrication process was optimized in terms of the extraction solvent. The maximal efficiency of 1.06% was obtained with dye extracted from purple corn husk using acetone. Ultraviolet-visible (UV-vis) spectroscopy, Fourier transform infrared spectroscopy (FTIR), electrochemical impedance spectroscopy (EIS) and incident photon-to-current efficiency (IPCE) measurements were employed to characterize the natural dyes and the DSSCs.
Optical indicators based on environment sensitive fluorophors
Energy Technology Data Exchange (ETDEWEB)
Shakhsher, Z.M.; Seitz, W.R. (Univ. of New Hampshire, Durham, NH (USA))
1990-01-01
The authors are interested in the development of optical indicators based on environment-sensitive fluorophors. The fluorophor is immobilized on a solid substrate. Interaction with analyte modifies the fluorophor environment, leading to a shift in the distribution of emission wavelengths. Because the indicator is based on a spectral shift, it is possible to relate analyte concentration to a ratio of intensities at two different wavelengths. This parameter is insensitive to instrumental drift and slow loss of indicator. Two indicator systems have been investigated. Both involve dansyl derivatives, i.e., derivatives of 5-dimethylamino-1-naphthalenesulfonic acid.
Coordinate sensitive detectors based on microchannel plates
International Nuclear Information System (INIS)
Gruntman, M.A.
1984-01-01
Coordinate-sensitive detectors (CSD) based on microchannel plates determine, in digital form, the coordinates of every recorded particle and are used in many areas of experimental physics. The sensitive surface of such detectors can reach 10 cm in diameter, with a spatial resolution of about 10 μm. In this review, microchannel-plate CSDs are classified according to the way the coordinates are determined, and the different types of detectors, the peculiarities of their design, and their electronic readout schemes are described. It is pointed out that there are grounds for introducing CSDs into laboratory practice in any field where the recorded particle is an electron or can produce a secondary electron. This applies to nuclear physics, the physics of electron and atom collisions, optics, mass spectrometry, electron microscopy, X-ray analysis, and the investigation of surfaces.
Sensitivity analysis of the nuclear data for MYRRHA reactor modelling
International Nuclear Information System (INIS)
Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan
2014-01-01
A global sensitivity analysis of the effective neutron multiplication factor k_eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k_eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k_eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)
Sensitivity analysis of periodic errors in heterodyne interferometry
International Nuclear Information System (INIS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-01-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
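The local step of such a study can be sketched with central finite differences about the nominal parameter values. The error model below is a hypothetical stand-in, not the analytical periodic-error model used in the paper:

```python
import numpy as np

def first_order_error(alpha, beta, dtheta):
    # Hypothetical stand-in for a periodic-error amplitude model: grows with
    # frequency non-orthogonality (dtheta) and a misalignment product (alpha*beta).
    return 0.1 * np.sin(dtheta) + 0.01 * alpha * beta

def local_sensitivities(f, nominal, h=1e-6):
    """Central-difference partial derivatives of f about the nominal point."""
    base = np.asarray(nominal, dtype=float)
    sens = []
    for i in range(base.size):
        up, dn = base.copy(), base.copy()
        up[i] += h
        dn[i] -= h
        sens.append((f(*up) - f(*dn)) / (2 * h))
    return np.array(sens)

# Nominal values (radians, illustrative only).
s = local_sensitivities(first_order_error, [0.02, 0.03, 0.01])
```

For this stand-in model the non-orthogonality term dominates, mirroring the paper's qualitative finding for the first-order error.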
Water-Based Pressure Sensitive Paint
Oglesby, Donald M.; Ingram, JoAnne L.; Jordan, Jeffrey D.; Watkins, A. Neal; Leighty, Bradley D.
2004-01-01
Preparation and performance of a water-based pressure sensitive paint (PSP) is described. A water emulsion of an oxygen permeable polymer and a platinum porphyrin type luminescent compound were dispersed in a water matrix to produce a PSP that performs well without the use of volatile, toxic solvents. The primary advantages of this PSP are reduced contamination of wind tunnels in which it is used, lower health risk to its users, and easier cleanup and disposal. This also represents a cost reduction by eliminating the need for elaborate ventilation and user protection during application. The water-based PSP described has all the characteristics associated with water-based paints (low toxicity, very low volatile organic chemicals, and easy water cleanup) but also has high performance as a global pressure sensor for PSP measurements in wind tunnels. The use of a water-based PSP virtually eliminates the toxic fumes associated with the application of PSPs to a model in wind tunnels.
International Nuclear Information System (INIS)
Barber, A. D.; Busch, R.
2009-01-01
The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine its overall uncertainty. Small-perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only as well known as its geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...
African Journals Online (AJOL)
This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...
International Nuclear Information System (INIS)
Christie, W.H.
1978-01-01
Sheathed Chromel versus Alumel thermocouples decalibrate when exposed to temperatures in excess of 1100 °C. Thermocouples sheathed in Inconel-600 and type 304 stainless steel were studied in this work. Quantified SIMS data showed that the observed decalibrations were due to significant alterations that took place in the Chromel and Alumel thermoelements. The amount of alteration was different for each thermocouple and was influenced by the particular sheath material used in the thermocouple construction. Relative sensitivity factors, indexed by a matrix ion species ratio, were used to quantify SIMS data for three nickel-based alloys, Chromel, Alumel, and Inconel-600, and an iron-based alloy, type 304 stainless steel. Oxygen pressure >2 × 10^-6 torr in the sputtering region gave enhanced sensitivity and superior quantitative results as compared to data obtained at instrumental residual pressure.
Variance estimation for sensitivity analysis of poverty and inequality measures
Directory of Open Access Journals (Sweden)
Christian Dudel
2017-04-01
Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and which allows one to derive variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
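The core of such a sensitivity analysis is recomputing a poverty measure under alternative equivalence scales. A minimal sketch with made-up household data (the two scales shown, OECD-modified and square-root, are standard; all the figures are illustrative):

```python
import numpy as np

# Toy household data: (disposable income, number of adults, number of children).
households = np.array([
    (18000, 1, 0), (35000, 2, 2), (22000, 1, 2),
    (60000, 2, 1), (28000, 2, 0), (15000, 1, 1),
], dtype=float)

def headcount(scale):
    """Poverty headcount ratio under one equivalence scale.
    `scale(adults, children)` converts a household to equivalent-adult size."""
    income, adults, kids = households.T
    eq = income / scale(adults, kids)
    line = 0.6 * np.median(eq)        # 60%-of-median poverty line
    return np.mean(eq < line)

oecd = lambda a, k: 1 + 0.5 * (a - 1) + 0.3 * k   # OECD-modified scale
sqrt_scale = lambda a, k: np.sqrt(a + k)          # square-root scale

rates = {name: headcount(s) for name, s in [("oecd", oecd), ("sqrt", sqrt_scale)]}
```

Even in this tiny sample the headcount moves from 0 to 1/6 depending on the scale chosen, which is the kind of variability the paper's framework quantifies.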
Probabilistic and sensitivity analysis of Botlek Bridge structures
Directory of Open Access Journals (Sweden)
Králik Juraj
2017-01-01
Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. Model and resistance uncertainties were taken into account using the LHS simulation method.
Automated sensitivity analysis using the GRESS language
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies.
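GRESS adds derivative propagation to FORTRAN source at precompile time; the same idea can be sketched in miniature with forward-mode automatic differentiation via operator overloading. This is an illustrative analogue, not GRESS itself:

```python
class Dual:
    """Forward-mode automatic differentiation: carry a value and its derivative
    together through ordinary arithmetic (only + and * implemented here)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # Any straight-line numerical code now yields d(output)/d(input) for free.
    flux = 3.0 * k * k + 2.0 * k + 1.0
    return flux

out = model(Dual(2.0, 1.0))   # seed the input derivative with 1.0
```

Here `out.val` is the normal result (17.0) and `out.der` is the exact sensitivity d(flux)/dk = 6k + 2 = 14.0, with no code changes inside `model` — the same effect GRESS achieves by source transformation.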
Continuous integration congestion cost allocation based on sensitivity
International Nuclear Information System (INIS)
Wu, Z.Q.; Wang, Y.N.
2004-01-01
Congestion cost allocation is a very important topic in congestion management. Allocation methods based on the Aumann-Shapley value use discrete numerical integration, which requires solving the incremented OPF solution many times and is therefore not suitable for practical application to large-scale systems. The optimal solution, and the tendency of its sensitivity to change during congestion removal, is analysed using a DC optimal power flow (OPF) process. A simple continuous integration method based on the sensitivity is proposed for congestion cost allocation. The proposed sensitivity analysis method needs less computation time than methods based on the quadratic method and interior-point iteration. The proposed congestion cost allocation method uses continuous rather than discrete numerical integration. The method does not need to solve the incremented OPF solutions, which allows its use in large-scale systems. The method can also be used for AC OPF congestion management. (author)
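The Aumann-Shapley idea can be sketched numerically: each participant is charged the integral of its marginal cost along the ray from zero to the actual injections. The quadratic cost function below is a hypothetical stand-in for a DC-OPF congestion cost, and finite differences stand in for OPF sensitivities:

```python
import numpy as np

def cost(x):
    # Hypothetical congestion cost: quadratic in the two injections.
    Q = np.array([[2.0, 0.5], [0.5, 1.0]])
    return 0.5 * x @ Q @ x

def aumann_shapley(cost_fn, x, steps=1000, h=1e-6):
    """Allocate cost_fn(x) by integrating marginal costs along t*x, t in [0, 1]."""
    x = np.asarray(x, dtype=float)
    alloc = np.zeros_like(x)
    ts = (np.arange(steps) + 0.5) / steps       # midpoint quadrature rule
    for t in ts:
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            grad_i = (cost_fn(t * x + e) - cost_fn(t * x - e)) / (2 * h)
            alloc[i] += grad_i * x[i] / steps
    return alloc

shares = aumann_shapley(cost, [1.0, 2.0])
```

Because the toy cost is quadratic and homogeneous, the allocation recovers the total cost exactly; the paper's contribution is replacing this repeated marginal evaluation with a continuous integral of the OPF sensitivity.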
International Nuclear Information System (INIS)
Di Maio, Francesco; Nicola, Giancarlo; Borgonovo, Emanuele; Zio, Enrico
2016-01-01
Sensitivity Analysis (SA) is performed to gain fundamental insights on a system behavior that is usually reproduced by a model and to identify the most relevant input variables whose variations affect the system model functional response. For the reliability analysis of passive safety systems of Nuclear Power Plants (NPPs), the models are Best Estimate (BE) Thermal Hydraulic (TH) codes that predict the system functional response in normal and accidental conditions and, in this paper, an ensemble of three alternative invariant SA methods is innovatively set up for a SA on the TH code input variables. The ensemble aggregates the input variable ranking orders provided by the Pearson correlation ratio, the Delta method and the Beta method. The capability of the ensemble is shown on a BE-TH code of the Passive Containment Cooling System (PCCS) of an Advanced Pressurized water reactor AP1000, during a Loss Of Coolant Accident (LOCA), whose output probability density function (pdf) is approximated by a Finite Mixture Model (FMM), on the basis of a limited number of simulations. - Highlights: • We perform the reliability analysis of a passive safety system of a Nuclear Power Plant (NPP). • We use a Thermal Hydraulic (TH) code for predicting the NPP response to accidents. • We propose an ensemble of invariant methods for the sensitivity analysis of the TH code. • The ensemble aggregates the rankings of the Pearson correlation, Delta and Beta methods. • The approach is tested on a Passive Containment Cooling System of an AP1000 NPP.
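One simple way to aggregate the ranking orders produced by several SA methods is to average the per-method ranks. The scores below are made up for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical importance scores for five TH-code inputs from three SA methods.
scores = {
    "pearson": [0.9, 0.4, 0.7, 0.1, 0.3],
    "delta":   [0.8, 0.5, 0.6, 0.2, 0.4],
    "beta":    [0.7, 0.3, 0.8, 0.1, 0.2],
}

def ensemble_ranking(scores):
    """Aggregate method-specific rankings by averaging ranks (1 = most important)."""
    ranks = []
    for vals in scores.values():
        # Double argsort of the negated scores gives 0-based descending ranks.
        ranks.append(np.argsort(np.argsort(-np.asarray(vals))) + 1)
    return np.mean(ranks, axis=0)

mean_rank = ensemble_ranking(scores)
```

The lowest mean rank identifies the input the three methods jointly consider most important, damping disagreements between any single method's ordering.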
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
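For a continuous-time linearization dx/dt = Ax, reactivity is the largest eigenvalue of the symmetric part of A, which is the quantity whose parameter sensitivities the paper derives. A minimal computation with an illustrative (not the paper's) predator-prey Jacobian:

```python
import numpy as np

def reactivity(A):
    """Reactivity of dx/dt = A x: the maximum eigenvalue of (A + A^T)/2.
    Positive reactivity means some perturbations grow initially,
    even when A itself is asymptotically stable."""
    H = (A + A.T) / 2.0
    return np.linalg.eigvalsh(H)[-1]    # eigvalsh returns ascending eigenvalues

# Hypothetical predator-prey Jacobian at equilibrium (stable but reactive).
A = np.array([[-0.1, -1.0],
              [ 0.5, -0.2]])

rho = reactivity(A)
stable = bool(np.all(np.linalg.eigvals(A).real < 0))
```

Here both eigenvalues of A have negative real part (the equilibrium is stable), yet the reactivity is positive, so a suitably chosen perturbation is transiently amplified before decaying.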
Xiao, Hong; Lin, Xiao-ling; Dai, Xiang-yu; Gao, Li-dong; Chen, Bi-yun; Zhang, Xi-xing; Zhu, Pei-juan; Tian, Huai-yu
2012-05-01
To analyze the periodicity of pandemic influenza A (H1N1) in Changsha in 2009 and its correlation with sensitive climatic factors. The information of 5439 cases of influenza A (H1N1) and synchronous meteorological data during the period between May 22nd and December 31st, 2009 (223 days in total) in Changsha city were collected. The classification and regression tree (CART) was employed to screen the sensitive climatic factors for influenza A (H1N1); meanwhile, cross wavelet transform and wavelet coherence analysis were applied to assess and compare the periodicity of the pandemic disease and its association with the time-lag phase features of the sensitive climatic factors. The results of CART indicated that the daily minimum temperature and daily absolute humidity were the sensitive climatic factors for the spread of influenza A (H1N1) in Changsha. The peak of the incidence of influenza A (H1N1) was in the period between October and December (Median (M) = 44.00 cases per day), while the daily minimum temperature (M = 13°C) and daily absolute humidity (M = 6.69 g/m(3)) were relatively low. The results of wavelet analysis demonstrated that a period of 16 days was found in the epidemic threshold in Changsha, while the daily minimum temperature and daily absolute humidity were the relatively sensitive climatic factors. The number of daily reported patients was statistically relevant to the daily minimum temperature and daily absolute humidity. The frequency domain was mostly in the period of (16 ± 2) days. In the initial stage of the disease (from August 9th to September 8th), a 6-day lag was found between the incidence and the daily minimum temperature. In the peak period of the disease, the daily minimum temperature and daily absolute humidity were negatively relevant to the incidence of the disease. In the pandemic period, the incidence of influenza A (H1N1) showed periodic features; and the sensitive climatic factors did have a "driving
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
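A minimal PRCC computation follows the recipe in the abstract: rank-transform the inputs and output, regress out the other inputs from both the input of interest and the output, and correlate the residuals. The toy model below is illustrative, not IMM data:

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr, yr = rank(X), rank(y[:, None])[:, 0]
    n, k = Xr.shape
    out = []
    for i in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        # Residuals after removing linear effects of the other (ranked) inputs.
        rx = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
# Nonlinear but monotone toy model; the third input is inert.
y = np.exp(2 * X[:, 0]) - 3 * X[:, 1] + 0.1 * rng.normal(size=500)
coeffs = prcc(X, y)
```

The rank transform linearizes the monotone exponential, so the first coefficient comes out strongly positive, the second strongly negative, and the inert input near zero.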
Simple Sensitivity Analysis for Orion GNC
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
Sensitivity analysis of floating offshore wind farms
International Nuclear Information System (INIS)
Castro-Santos, Laura; Diaz-Casas, Vicente
2015-01-01
Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps investors to make decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm, showing how much the output variables vary when the input variables change. For this purpose two different scenarios are taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of the economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors take these variables into account in the development of floating offshore wind farms in the future.
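The economic indexes named above follow standard definitions; a small sketch with hypothetical cash flows (all figures illustrative, not from the paper):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, cashflows):
    """First year in which cumulative discounted cash flow turns non-negative."""
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None

def lcoe(rate, costs, energy):
    """Levelized cost of energy: discounted costs over discounted production."""
    disc = lambda xs: sum(x / (1 + rate) ** t for t, x in enumerate(xs))
    return disc(costs) / disc(energy)

# Hypothetical 5-year farm: 100 upfront, 30/year net revenue, 5% discount rate.
flows = [-100.0] + [30.0] * 5
value = npv(0.05, flows)
cost_per_mwh = lcoe(0.05, [100, 5, 5, 5, 5, 5], [0, 50, 50, 50, 50, 50])
```

A sensitivity analysis in the paper's sense then reruns these indexes while varying inputs such as the discount rate, tariff, or distance-driven costs, and records how strongly each output responds.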
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1990-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems.
Croicu, Ana-Maria; Jarrett, Angela M; Cogan, N G; Hussaini, M Yousuff
2017-11-01
HIV infection is one of the most difficult infections to control and manage. The most recent recommendations for controlling this infection vary according to the guidelines used (US, European, WHO) and are not patient-specific. Unfortunately, no two individuals respond to infection and treatment in quite the same way. The purpose of this paper is to make use of uncertainty and sensitivity analysis to investigate possible short-term treatment options that are patient-specific. We are able to identify the most significant parameters responsible for ART outcome and to formulate some insights into ART success.
Soltysik, David A; Thomasson, David; Rajan, Sunder; Biassou, Nadia
2015-02-15
Functional magnetic resonance imaging (fMRI) time series are subject to corruption by many noise sources, especially physiological noise and motion. Researchers have developed many methods to reduce physiological noise, including RETROICOR, which retroactively removes cardiac and respiratory waveforms collected during the scan, and CompCor, which applies principal components analysis (PCA) to remove physiological noise components without any physiological monitoring during the scan. We developed four variants of the CompCor method. The optimized CompCor method applies PCA to time series in a noise mask, but orthogonalizes each component to the BOLD response waveform and uses an algorithm to determine a favorable number of components to use as "nuisance regressors." Whole brain component correction (WCompCor) is similar, except that it applies PCA to time-series throughout the whole brain. Low-pass component correction (LCompCor) identifies low-pass filtered components throughout the brain, while high-pass component correction (HCompCor) identifies high-pass filtered components. We compared the new methods with the original CompCor method by examining the resulting functional contrast-to-noise ratio (CNR), sensitivity, and specificity. (1) The optimized CompCor method increased the CNR and sensitivity compared to the original CompCor method and (2) the application of WCompCor yielded the best improvement in the CNR and sensitivity. The sensitivity of the optimized CompCor, WCompCor, and LCompCor methods exceeded that of the original CompCor method. However, regressing noise signals showed a paradoxical consequence of reducing specificity for all noise reduction methods attempted. Published by Elsevier B.V.
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult … in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers focus their attention on the most important design parameters when exploring …
Probabilistic sensitivity analysis of system availability using Gaussian processes
International Nuclear Information System (INIS)
Daneshkhah, Alireza; Bedford, Tim
2013-01-01
The availability of a system under a given failure/repair process is a function of time that can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case, where the failure/repair process is (continuous-time/discrete-state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without a closed-form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO), which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent, one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis.
Sensitivity analysis of a modified energy model
International Nuclear Information System (INIS)
Suganthi, L.; Jagadeesan, T.R.
1997-01-01
Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower consumption limits at the 95% confidence level are determined. Consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because the rate of energy consumption increases over the years, and the confidence level decreases as the projection extends further into the future. (author)
Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I
National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
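The local sensitivity analysis described in this review is often computed as a normalized finite-difference derivative of the output with respect to one parameter. A minimal sketch assuming a scalar-output model (function and parameter names are illustrative):

```python
def local_sensitivity(f, x0, i, rel_step=0.01):
    # Normalized local sensitivity coefficient S_i = (x_i / f(x)) * df/dx_i,
    # with df/dx_i estimated by a central finite difference around x0.
    h = rel_step * x0[i]
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    dfdx = (f(xp) - f(xm)) / (2.0 * h)
    return x0[i] * dfdx / f(x0)
```

For a power-law response f = x_i ** a, the normalized coefficient equals the exponent a, which makes the values easy to interpret across parameters of different scales.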
A new importance measure for sensitivity analysis
International Nuclear Information System (INIS)
Liu, Qiao; Homma, Toshimitsu
2010-01-01
Uncertainty is an integral part of risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution was proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear, nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
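The proposed measure, the expected deviation of the output CDF when one input is fixed, can be estimated by double-loop Monte Carlo. A rough sketch assuming inputs uniform on [0, 1) (not the authors' code; the sample sizes and helper names are arbitrary):

```python
import random

def empirical_cdf(sample, t):
    # Fraction of sample values less than or equal to t.
    return sum(1 for s in sample if s <= t) / len(sample)

def cdf_deviation_importance(f, n_inputs, i, n_outer=50, n_inner=200, seed=1):
    # Outer loop: fix input i at a random value; inner loop: sample the rest.
    # Importance = average absolute shift of the output CDF when input i is known.
    rng = random.Random(seed)
    base = [f([rng.random() for _ in range(n_inputs)]) for _ in range(n_inner)]
    grid = sorted(base)  # evaluate both CDFs at the unconditional output values
    total = 0.0
    for _ in range(n_outer):
        fixed = rng.random()
        cond = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(n_inputs)]
            x[i] = fixed
            cond.append(f(x))
        total += sum(abs(empirical_cdf(cond, t) - empirical_cdf(base, t))
                     for t in grid) / len(grid)
    return total / n_outer
```

An input that fully determines the output collapses the conditional CDF to a step function and scores high; an irrelevant input leaves the CDF unchanged up to sampling noise and scores near zero.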
DEA Sensitivity Analysis for Parallel Production Systems
Directory of Open Access Journals (Sweden)
J. Gerami
2011-06-01
Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel. Meanwhile, each subunit works independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered as efficient DMUs. Then we introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery
Sensitivity and uncertainty analysis of NET/ITER shielding blankets
International Nuclear Information System (INIS)
Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.
1990-09-01
Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Energy Technology Data Exchange (ETDEWEB)
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
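The parameter-screening step (step 2) is commonly carried out with Morris-style elementary effects. A simplified one-at-a-time sketch, not the PSUADE implementation (the sampling scheme and defaults here are illustrative):

```python
import random

def elementary_effects(f, k, delta=0.1, n_traj=30, seed=2):
    # One-at-a-time screening: for each of n_traj random base points in [0, 1)^k,
    # perturb each input by delta and record |change in f| / delta.
    # The mean of these elementary effects (mu*) ranks input importance.
    rng = random.Random(seed)
    mu_star = [0.0] * k
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(f(xp) - fx) / delta
    return [m / n_traj for m in mu_star]
```

Inputs with small mu* can be frozen at nominal values, shrinking the parameter set passed to the quantitative analysis in step 3.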
Touch sensitive electrorheological fluid based tactile display
Liu, Yanju; Davidson, Rob; Taylor, Paul
2005-12-01
A tactile display is a programmable device whose controlled surface is intended to be investigated by human touch. It has a great number of potential applications in the field of virtual reality and elsewhere. In this research, a 5 × 5 tactile display array including electrorheological (ER) fluid has been developed and investigated. Force responses of the tactile display array have been measured while a probe was moved across the upper surface. The purpose of this was to simulate the action of touch performed by a human finger. Experimental results show that the sensed surface information could be controlled effectively by adjusting the voltage activation pattern imposed on the tactels. The performance of the tactile display is durable and repeatable. The touch sensitivity of this ER fluid based tactile display array has also been investigated in this research. The results show that it is possible to sense the touching force normal to the display's surface by monitoring the change of current passing through the ER fluid. These encouraging results are helpful for constructing a new type of tactile display based on ER fluid which can act as both sensor and actuator at the same time.
Calibration, validation, and sensitivity analysis: What's what
International Nuclear Information System (INIS)
Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.
2006-01-01
One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
Time-dependent reliability sensitivity analysis of motion mechanisms
International Nuclear Information System (INIS)
Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng
2016-01-01
Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for the motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of the small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
Fines Classification Based on Sensitivity to Pore-Fluid Chemistry
Jang, Junbong
2015-12-28
The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil "electrical sensitivity." Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems. © ASCE.
Fines classification based on sensitivity to pore-fluid chemistry
Jang, Junbong; Santamarina, J. Carlos
2016-01-01
The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil “electrical sensitivity.” Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems.
International Nuclear Information System (INIS)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man
2014-01-01
The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses such as keff, reaction rates, flux and power distribution can be directly obtained all at one time without code modification. However, a major drawback with the sampling-based method is that it requires a heavy computational load to obtain statistically reliable results (within the 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to each active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by both the proposed and the conventional methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of keff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
Sun, Aili; Qi, Qingan; Wang, Xuannian; Bie, Ping
2014-07-15
For the first time, a sensitive electrochemical aptasensor for thrombin (TB) was developed by using porous platinum nanotubes (PtNTs) labeled with hemin/G-quadruplex and glucose dehydrogenase (GDH) as labels. Porous PtNTs with large surface area exhibited peroxidase-like activity. Coupling with GDH and hemin/G-quadruplex as NADH oxidase and HRP-mimicking DNAzyme, cascade signal amplification was achieved in the following way: in the presence of glucose and NAD(+) in the working buffer, GDH electrocatalyzed the oxidation of glucose with the production of NADH. Then, hemin/G-quadruplex as NADH oxidase catalyzed the oxidation of NADH to generate H2O2 in situ. Based on the cooperative electrocatalysis of PtNTs and hemin/G-quadruplex toward H2O2, the electrochemical signal was significantly amplified, lowering the detection limit for TB to the 0.15 pM level. Moreover, the proposed strategy was simple because the intercalated hemin provided the readout signal, avoiding the addition of an extra redox mediator as a signal donor. Such an electrochemical aptasensor is highly promising for sensitive detection of other proteins in clinical diagnostics. Copyright © 2014 Elsevier B.V. All rights reserved.
Sensitivity analysis and power for instrumental variable studies.
Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S
2018-03-31
In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
DEFF Research Database (Denmark)
Móring, A; Vieno, M.; Doherty, R M
2015-01-01
In this paper a new process-based, weather-driven model for ammonia (NH3) emission from a urine patch has been developed and its sensitivity to various factors assessed. This model, the GAG model (Generation of Ammonia from Grazing), was developed as part of a suite of weather-driven NH3 exchange models, as a necessary basis for assessing the effects of climate change on NH3-related atmospheric processes. GAG is capable of simulating the TAN (Total Ammoniacal Nitrogen) content, pH and the water content of the soil under a urine patch. To calculate the TAN budget, GAG takes into account urea hydrolysis as a TAN input and NH3 volatilization as a loss. In the water budget, in addition to the water content of urine, precipitation and evaporation are also considered. In the pH module we assumed that the main regulating processes are the dissociation and dissolution equilibria related to the two …
International Nuclear Information System (INIS)
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas, the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited number of parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
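The LHS technique mentioned above can be sketched in a few lines: each input's range is split into n equal strata, one value is drawn per stratum, and the strata are randomly paired across dimensions. An illustrative implementation assuming unit-interval inputs (not the code used in the study):

```python
import random

def latin_hypercube(n, k, seed=3):
    # n samples in k dimensions: each dimension's [0, 1) range is split into
    # n equal strata, one point is drawn per stratum, and the stratum order
    # is shuffled independently per dimension.
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[j][i] for j in range(k)] for i in range(n)]
```

Because every stratum of every input is sampled exactly once, n can stay moderate while the marginal distributions are still covered evenly, which is why LHS suits codes with a moderate number of parameters.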
Global sensitivity analysis using a Gaussian Radial Basis Function metamodel
International Nuclear Information System (INIS)
Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua
2016-01-01
Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
Lebedeva, Galina; Sorokin, Anatoly; Faratian, Dana; Mullen, Peter; Goltsov, Alexey; Langdon, Simon P.; Harrison, David J.; Goryanin, Igor
2012-01-01
High levels of variability in cancer-related cellular signalling networks and a lack of parameter identifiability in large-scale network models hamper translation of the results of modelling studies into the process of anti-cancer drug development. Recently global sensitivity analysis (GSA) has been recognised as a useful technique, capable of addressing the uncertainty of the model parameters and generating valid predictions on parametric sensitivities. Here we propose a novel implementation of model-based GSA specially designed to explore how multi-parametric network perturbations affect signal propagation through cancer-related networks. We use area-under-the-curve for time course of changes in phosphorylation of proteins as a characteristic for sensitivity analysis and rank network parameters with regard to their impact on the level of key cancer-related outputs, separating strong inhibitory from stimulatory effects. This allows interpretation of the results in terms which can incorporate the effects of potential anti-cancer drugs on targets and the associated biological markers of cancer. To illustrate the method we applied it to an ErbB signalling network model and explored the sensitivity profile of its key model readout, phosphorylated Akt, in the absence and presence of the ErbB2 inhibitor pertuzumab. The method successfully identified the parameters associated with elevation or suppression of Akt phosphorylation in the ErbB2/3 network. From analysis and comparison of the sensitivity profiles of pAkt in the absence and presence of targeted drugs we derived predictions of drug targets, cancer-related biomarkers and generated hypotheses for combinatorial therapy. Several key predictions have been confirmed in experiments using human ovarian carcinoma cell lines. We also compared GSA-derived predictions with the results of local sensitivity analysis and discuss the applicability of both methods. We propose that the developed GSA procedure can serve as a
Importance measures in global sensitivity analysis of nonlinear models
International Nuclear Information System (INIS)
Homma, Toshimitsu; Saltelli, Andrea
1996-01-01
The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol' treatment of the measure of importance is the most general, his formalism is employed throughout this paper where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is both flexible, accurate and informative, and which can be achieved at reasonable computational cost
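Sobol' first-order and total-effect indices, including the 'total effect' index introduced here, are commonly estimated with pick-freeze sampling. A sketch using Jansen's estimators for inputs uniform on [0, 1) (illustrative, not the authors' code; sample size and seed are arbitrary):

```python
import random

def sobol_indices(f, k, n=2000, seed=4):
    # Pick-freeze sampling: two independent input matrices A and B, plus for
    # each input i a hybrid matrix AB_i (A with column i taken from B).
    # Jansen's estimators give first-order S_i and total-effect S_Ti.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    S, ST = [], []
    for i in range(k):
        fAB = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append((var - sum((y - z) ** 2 for y, z in zip(fB, fAB)) / (2 * n)) / var)
        ST.append((sum((y - z) ** 2 for y, z in zip(fA, fAB)) / (2 * n)) / var)
    return S, ST
```

For a purely additive model the total-effect index equals the first-order index; a gap between the two signals interaction terms of the kind the total-effect index is designed to capture.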
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials.
Wear-Out Sensitivity Analysis Project Abstract
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
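The workflow described above — Weibull lifetime models, Monte Carlo simulation of a population, and a sweep over the wear-out characteristic (the Weibull shape parameter) — can be sketched as follows. Every number here (fleet size, spares, scale life, mission length) is hypothetical, not the actual ISS data.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_of_sufficiency(shape, scale_hours=100_000, n_units=20, spares=3,
                        mission_hours=26_280, n_sims=20_000):
    """Estimate P(failures over the mission <= available spares) for a
    population of identical units with Weibull(shape, scale) lifetimes.
    shape > 1 models wear-out; shape = 1 is the constant-rate case."""
    lifetimes = scale_hours * rng.weibull(shape, (n_sims, n_units))
    failures = (lifetimes < mission_hours).sum(axis=1)
    return (failures <= spares).mean()

results = {}
for beta in (1.0, 2.0, 4.0):   # sweep the wear-out characteristic
    results[beta] = prob_of_sufficiency(beta)
    print(f"shape={beta}: P(sufficiency) = {results[beta]:.3f}")
```

Note that the direction of the shift depends on where the mission length sits relative to the characteristic life: with a mission much shorter than the scale life, as in these illustrative numbers, a stronger wear-out shape actually suppresses early failures and raises sufficiency, whereas missions approaching the characteristic life see the opposite.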
Supercritical extraction of oleaginous: parametric sensitivity analysis
Directory of Open Access Journals (Sweden)
Santos M.M.
2000-01-01
Full Text Available The economy has become global and competitive, so the vegetable oil extraction industries must advance toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that it is possible to propose strategies for high-performance operation.
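A two-level factorial sensitivity screening of the kind described can be sketched in a few lines: run the model at all combinations of coded low/high levels and compute each parameter's main effect as the difference of means between its high and low levels. The response surface and parameter names below are hypothetical stand-ins, not the paper's diffusive extraction model.

```python
import numpy as np
from itertools import product

def yield_model(D, u, T):
    # hypothetical coded response surface for extraction yield:
    # D ~ effective diffusivity, u ~ solvent velocity, T ~ temperature
    return 50 + 8 * D + 3 * u - 2 * T + 1.5 * D * u

levels = [-1, +1]                       # coded low/high factor levels
runs = list(product(levels, repeat=3))  # full 2^3 factorial design
y = np.array([yield_model(*r) for r in runs])
X = np.array(runs)

effects = {}
for j, name in enumerate(["D", "u", "T"]):
    # main effect = mean response at high level minus mean at low level
    effects[name] = y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
    print(f"main effect of {name}: {effects[name]:+.1f}")
```

Because the design is balanced, the D-u interaction term averages out of every main effect, which is why two-level factorials isolate main effects so cheaply.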
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
Cyanex based uranyl sensitive polymeric membrane electrodes.
Badr, Ibrahim H A; Zidan, W I; Akl, Z F
2014-01-01
Novel uranyl selective polymeric membrane electrodes were prepared using three different low-cost and commercially available Cyanex extractants, namely bis(2,4,4-trimethylpentyl) phosphinic acid [L1], bis(2,4,4-trimethylpentyl) monothiophosphinic acid [L2] and bis(2,4,4-trimethylpentyl) dithiophosphinic acid [L3]. Optimization and performance characteristics of the developed Cyanex based polymer membrane electrodes were determined. The influence of membrane composition (e.g., amount and type of ionic sites, as well as type of plasticizer) on the potentiometric responses of the prepared membrane electrodes was studied. Optimized Cyanex-based membrane electrodes exhibited Nernstian responses for the UO₂²⁺ ion over wide concentration ranges with fast response times. The optimized membrane electrodes based on L1, L2 and L3 exhibited Nernstian responses towards uranyl ion with slopes of 29.4, 28.0 and 29.3 mV decade⁻¹, respectively. The optimized membrane electrodes based on L1-L3 showed detection limits of 8.3 × 10⁻⁵, 3.0 × 10⁻⁵ and 3.3 × 10⁻⁶ mol L⁻¹, respectively. The selectivity studies showed that the optimized membrane electrodes exhibited high selectivity towards the UO₂²⁺ ion over a large number of other cations. Membrane electrodes based on L3 exhibited superior potentiometric response characteristics compared to those based on L1 and L2 (e.g., widest linear range and lowest detection limit). The analytical utility of uranyl membrane electrodes formulated with Cyanex extractant L3 was demonstrated by the analysis of uranyl ion in different real samples for nuclear safeguards verification purposes. The results obtained using direct potentiometry and flow-injection methods were compared with those measured using the standard UV-visible and inductively coupled plasma spectroscopic methods. © 2013 Published by Elsevier B.V.
Fang, J.
2015-12-01
Marine sediments cover more than two-thirds of the Earth's surface and represent a major part of the deep biosphere. Microbial cells and microbial activity appear to be widespread in these sediments. Recently, we reported the isolation of gram-positive anaerobic spore-forming piezophilic bacteria and the detection of bacterial endospores in marine subsurface sediment from the Shimokita coalbed, Japan. However, modern molecular microbiological methods (e.g., DNA-based microbial detection techniques) cannot detect bacterial endospores, because endospores are impermeable and are not stained by fluorescent DNA dyes or by ribosomal RNA staining techniques such as catalysed reporter deposition fluorescence in situ hybridization. Thus, the total microbial cell abundance in the deep biosphere may have been globally underestimated. This emphasizes the need for a new cultivation-independent approach for the quantification of bacterial endospores in the deep subsurface. Dipicolinic acid (DPA, pyridine-2,6-dicarboxylic acid) is a universal and specific component of bacterial endospores, representing 5-15 wt% of the dry spore, and is therefore a useful indicator and quantifier of bacterial endospores, permitting estimation of total spore numbers in the subsurface biosphere. We developed a sensitive analytical method to quantify DPA content in environmental samples using gas chromatography-mass spectrometry. The method is sensitive and more convenient to use than other traditional methods. We applied this method to sediment samples from the South China Sea (obtained from IODP Exp. 349) to determine the abundance of spore-forming bacteria in deep marine subsurface sediment. Our results suggest that gram-positive, endospore-forming bacteria may be the "unseen majority" in the deep biosphere.
SENSIT: a cross-section and design sensitivity and uncertainty analysis code
International Nuclear Information System (INIS)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-02
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.
Sensitivity analysis of LOFT L2-5 test calculations
International Nuclear Information System (INIS)
Prosek, Andrej
2014-01-01
The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates the large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. For the influence of input parameters on the calculated results the FFTBM-SM was used. The only difference between FFTBM-SM and original FFTBM is that in the FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so called edge is the difference between the first and last data point of one period of the signal) in calculating average amplitude. It is very important to eliminate unphysical contribution to the average amplitude, which is used as a figure of merit for input parameter influence on output parameters. The idea is to use reference calculation as 'experimental signal', 'sensitivity run' as 'calculated signal', and average amplitude as figure of merit for sensitivity instead for code accuracy. The larger is the average amplitude the larger is the influence of varied input parameter. The results show that with FFTBM-SM the analyst can get good picture of the contribution of the parameter variation to the results. They show when the input parameters are influential and how big is this influence. FFTBM-SM could be also used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
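The average-amplitude figure of merit with signal mirroring can be sketched compactly. This is a simplified illustration of the FFTBM-SM idea, not the BEMUSE tooling: the real FFTBM applies frequency cutoffs and weighting factors that are omitted here, and the reference and run signals below are synthetic.

```python
import numpy as np

def average_amplitude(exp, calc):
    """FFTBM-style figure of merit: AA = sum|FFT(calc-exp)| / sum|FFT(exp)|.
    Signal mirroring (the SM variant) appends the reversed signal so the
    periodic extension has no jump between the last and first samples,
    eliminating the edge effect described in the abstract."""
    def mirrored(s):
        return np.concatenate([s, s[::-1]])
    diff = np.abs(np.fft.rfft(mirrored(calc - exp)))
    ref = np.abs(np.fft.rfft(mirrored(exp)))
    return diff.sum() / ref.sum()

t = np.linspace(0.0, 10.0, 512)
reference = np.exp(-0.3 * t)                     # reference calculation
run = np.exp(-0.3 * t) + 0.05 * np.sin(2 * t)    # a sensitivity run
aa = average_amplitude(reference, run)
print(f"AA = {aa:.3f}")
```

A larger AA indicates a larger influence of the varied input parameter on the output, exactly the ranking role the abstract assigns to the average amplitude.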
Sensitivity analysis in multi-parameter probabilistic systems
International Nuclear Information System (INIS)
Walker, J.R.
1987-01-01
Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
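A minimal version of this workflow — Monte Carlo sampling of input parameters, a probabilistic output, and a sensitivity ranking — can be sketched with rank (Spearman) correlations, one common technique in this family. The model and parameter names below are hypothetical, not the MCROC microcracking model.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 50_000
# hypothetical uncertain inputs of a probabilistic engineering model
k = rng.lognormal(0.0, 0.5, n)     # e.g. a material property
L = rng.normal(10.0, 1.0, n)       # e.g. a geometric dimension
eps = rng.normal(0.0, 0.1, n)      # unresolved variability
y = k * L ** 2 + eps               # Monte Carlo output consequence

def rank_corr(a, b):
    # Spearman rho: Pearson correlation of the ranks (no ties here)
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rhos = {}
for name, p in (("k", k), ("L", L)):
    rhos[name] = rank_corr(p, y)
    print(f"{name}: Spearman rho with output = {rhos[name]:+.2f}")
```

Ranking the inputs by |rho| identifies which parameter most influences the output consequence, which is the information needed to optimize the engineering properties of the system.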
Directory of Open Access Journals (Sweden)
Holbrook Michael R
2009-07-01
Full Text Available Abstract Nipah virus (NiV and Hendra virus (HeV are the only paramyxoviruses requiring Biosafety Level 4 (BSL-4 containment. Thus, study of henipavirus entry at less than BSL-4 conditions necessitates the use of cell-cell fusion or pseudotyped reporter virus assays. Yet, these surrogate assays may not fully emulate the biological properties unique to the virus being studied. Thus, we developed a henipaviral entry assay based on a β-lactamase-Nipah Matrix (βla-M fusion protein. We first codon-optimized the bacterial βla and the NiV-M genes to ensure efficient expression in mammalian cells. The βla-M construct was able to bud and form virus-like particles (VLPs that morphologically resembled paramyxoviruses. βla-M efficiently incorporated both NiV and HeV fusion and attachment glycoproteins. Entry of these VLPs was detected by cytosolic delivery of βla-M, resulting in enzymatic and fluorescent conversion of the pre-loaded CCF2-AM substrate. Soluble henipavirus receptors (ephrinB2 or antibodies against the F and/or G proteins blocked VLP entry. Additionally, a Y105W mutation engineered into the catalytic site of βla increased the sensitivity of our βla-M based infection assays by 2-fold. In toto, these methods will provide a more biologically relevant assay for studying henipavirus entry at less than BSL-4 conditions.
International Nuclear Information System (INIS)
Bidaud, A.
2005-10-01
Neutron transport simulation of nuclear reactors is based on knowledge of the neutron-nucleus interaction (cross-sections, fission neutron yields and spectra...) for the dozens of nuclei present in the core over a very large energy range (fractions of eV to several MeV). To reach the goal of sustainable development of nuclear power, future reactors must satisfy new and stricter design constraints: optimization of ore materials will necessitate breeding (generation of fissile material from fertile material), and waste management will require transmutation. Innovative reactors that could achieve such objectives - generation IV or ADS (accelerator driven systems) - are loaded with new fuels (thorium, heavy actinides) and operate with neutron spectra for which nuclear data do not benefit from 50 years of industrial experience, and thus present particular challenges. After validation on an experimental reactor using an international benchmark, we take classical reactor physics tools along with available nuclear data uncertainties to calculate the sensitivities and uncertainties of the criticality and temperature coefficient of a thorium molten salt reactor. In addition, a study based on the reaction rates important for the calculation of the cycle's equilibrium allows us to estimate the efficiency of different reprocessing strategies and the contribution of these reaction rates to the uncertainty of the breeding and hence to the uncertainty of the size of the reprocessing plant. Finally, we use this work to propose an improvement of the high priority experimental request list. (author)
Sensitivity Analysis of Fire Dynamics Simulation
DEFF Research Database (Denmark)
Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.
2007-01-01
(Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
Global sensitivity analysis of multiscale properties of porous materials
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
Zhao, Ye; Chen, Muyan; Storey, Kenneth B; Sun, Lina; Yang, Hongsheng
2015-03-01
DNA methylation plays an important role in regulating transcriptional change in response to environmental stimuli. In the present study, DNA methylation levels of tissues of the sea cucumber Apostichopus japonicus were analyzed by the fluorescence-labeled methylation-sensitive amplified polymorphism (F-MSAP) technique over three stages of the aestivation cycle. Overall, a total of 26,963 fragments were amplified including 9112 methylated fragments among four sea cucumber tissues using 18 pairs of selective primers. Results indicated an average DNA methylation level of 33.79% for A. japonicus. The incidence of DNA methylation was different across tissue types in the non-aestivation stage: intestine (30.16%), respiratory tree (27.61%), muscle (27.94%) and body wall (56.25%). Our results show that hypermethylation accompanied deep-aestivation in A. japonicus, which suggests that DNA methylation may have an important role in regulating global transcriptional suppression during aestivation. Further analysis indicated that the main DNA modification sites were focused on intestine and respiratory tree tissues and that full-methylation but not hemi-methylation levels exhibited significant increases in the deep-aestivation stage. Copyright © 2014 Elsevier Inc. All rights reserved.
Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.
2013-01-01
We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
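The one-step retrieval described — principal components from SO2-free spectra plus an SO2 Jacobian, fitted jointly by least squares — can be sketched on synthetic data. Every spectral shape below (background, variability mode, Jacobian) is a synthetic stand-in, not the OMI algorithm's actual basis functions.

```python
import numpy as np

rng = np.random.default_rng(3)

nw, nspec = 60, 500                         # wavelengths, training spectra
base = np.sin(np.linspace(0, 3, nw))        # stand-in background radiance
# SO2-free training set: background + one variability mode + noise
train = (base + 0.02 * rng.standard_normal((nspec, nw))
         + rng.standard_normal((nspec, 1)) * np.cos(np.linspace(0, 5, nw)) * 0.05)

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:5]                                # leading principal components

jac = np.exp(-np.linspace(0, 4, nw))        # stand-in SO2 Jacobian dI/dVCD
true_vcd = 2.5
meas = mean + true_vcd * jac + 0.03 * pcs[0] + 0.01 * rng.standard_normal(nw)

# one-step linear fit: spectrum - mean = PCs @ a + VCD * Jacobian
A = np.vstack([pcs, jac]).T
coef, *_ = np.linalg.lstsq(A, meas - mean, rcond=None)
print(f"retrieved SO2 VCD = {coef[-1]:.2f} (true {true_vcd})")
```

Because the principal components absorb the background variability and artifacts, the single remaining fitted coefficient is the SO2 column itself, which is what makes the scheme fast and free of instrument-specific corrections.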
Stochastic sensitivity analysis and Langevin simulation for neural network learning
International Nuclear Information System (INIS)
Koda, Masato
1997-01-01
A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
Sensitivity analysis for improving nanomechanical photonic transducers biosensors
International Nuclear Information System (INIS)
Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C
2015-01-01
The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry) based on two butt-coupled misaligned waveguides displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would increase both microcantilever bending during the biorecognition process and increase optical sensitivity to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than other similar opto-mechanical devices. Moreover, the analysis shows that a single mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal. (paper)
Superconducting Accelerating Cavity Pressure Sensitivity Analysis
International Nuclear Information System (INIS)
Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.
2014-01-01
The sensitivity of the cavity was evaluated and is fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs or by modifying the rigidity of the fog we may reduce the HWR sensitivity. During cool-down and warm-up we have to analyze the stresses on the HWR to avoid plastic deformation of the HWR, since the yield strength of niobium is an order of magnitude lower at room temperature.
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the missing-at-random (MAR) assumption, a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random (MNAR) data in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
Huang, Xiaojia; Qiu, Ningning; Yuan, Dongxing
2009-11-13
A simple, rapid, and sensitive method for the quantitative monitoring of five sulfonamide antibacterial residues (SAs) in milk was developed using stir bar sorptive extraction (SBSE) coupled to high performance liquid chromatography with diode array detection. The analytes were concentrated by SBSE using a poly(vinylimidazole-divinylbenzene) monolithic material as coating. The extraction procedure was very simple: milk was diluted with water and then subjected directly to sorptive extraction, with no elimination of fats and proteins from the samples required. To achieve optimum extraction performance for SAs, several parameters, including extraction and desorption time, desorption solvent, ionic strength and pH value of the sample matrix, were investigated. Under the optimized experimental conditions, low detection limits (S/N=3) and quantification limits (S/N=10) of the proposed method for the target compounds were achieved, within the ranges of 1.30-7.90 ng/mL and 4.29-26.3 ng/mL in spiked milk, respectively. Good linearities were obtained for SAs, with correlation coefficients (R²) above 0.996. Finally, the proposed method was successfully applied to the determination of SA compounds in different milk samples, and satisfactory recoveries of spiked target compounds in real samples were obtained.
Chen, Lei; Mei, Meng; Huang, Xiaojia; Yuan, Dongxing
2016-05-15
A simple, sensitive and environmentally friendly method using polymeric ionic liquid-based stir cake sorptive extraction followed by high performance liquid chromatography with diode array detection (HPLC/DAD) has been developed for efficient quantification of six selected estrogens in environmental waters. To extract trace estrogens effectively, a poly(1-allyl-3-vinylimidazolium chloride-co-ethylene dimethacrylate) monolithic cake was prepared and used as the sorbent for stir cake sorptive extraction (SCSE). The effects of the preparation conditions of the sorbent and the extraction parameters of SCSE for estrogens were investigated and optimized. Under optimal conditions, the developed method showed satisfactory analytical performance for the targeted analytes. Low detection limits (S/N=3) and quantification limits (S/N=10) were achieved, within the ranges of 0.024-0.057 µg/L and 0.08-0.19 µg/L, respectively. Good linearity of the method was obtained for the analytes, with correlation coefficients (R²) above 0.99. At the same time, satisfactory method repeatability and reproducibility were achieved in terms of intra- and inter-day precisions, respectively. Finally, the established SCSE-HPLC/DAD method was successfully applied to the determination of estrogens in different environmental water samples. Recoveries obtained for the determination of estrogens in spiked samples ranged from 71.2% to 108%, with RSDs below 10% in all cases. Copyright © 2016 Elsevier B.V. All rights reserved.
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular failure mode does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each failure mode of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity of each random variable is evaluated by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with increasing variation of φ1, while R_f for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
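The Monte Carlo estimate of P_f for one failure mode can be sketched for a sliding limit state. The limit-state formula (Rankine active thrust versus base friction) and all distributions and numbers below are illustrative choices, not the paper's wall geometry or data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# hypothetical random variables for a sliding check of a gravity wall
phi1 = np.radians(rng.normal(32.0, 2.0, n))   # backfill friction angle
phi2 = np.radians(rng.normal(28.0, 2.0, n))   # foundation friction angle
gamma1 = rng.normal(18.0, 0.8, n)             # backfill unit weight, kN/m^3
H, W = 5.0, 150.0                             # wall height (m), weight (kN/m)

Ka = np.tan(np.pi / 4 - phi1 / 2) ** 2        # Rankine active coefficient
thrust = 0.5 * Ka * gamma1 * H ** 2           # horizontal active thrust
resistance = W * np.tan(phi2)                 # base sliding resistance
pf = np.mean(resistance < thrust)             # Monte Carlo P_f (sliding)
print(f"P_f (sliding) ≈ {pf:.4f}")
```

Repeating this for each failure mode (overturning, bearing) and combining with per-variable sensitivities is what yields the risk factor R_f described in the abstract.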
MOVES2010a regional level sensitivity analysis
2012-12-10
This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agencys (EPAs) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
Understanding dynamics using sensitivity analysis: caveat and solution
2011-01-01
Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
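The contrast between PSA (persistent perturbation) and iPSA (impulse perturbation at a chosen time) can be shown by finite differences on a toy linear kinetics model. The one-state ODE below is a minimal stand-in, not the paper's switch-activation examples.

```python
import numpy as np

def simulate(k1, k2, T=10.0, dt=0.01, bump=None):
    """Euler integration of dx/dt = k1 - k2*x with x(0) = 0.
    `bump` = (t0, t1, dk) applies a transient perturbation to k1 on [t0, t1)."""
    x = 0.0
    for nstep in range(int(round(T / dt))):
        t = nstep * dt
        k = k1 + (bump[2] if bump and bump[0] <= t < bump[1] else 0.0)
        x += dt * (k - k2 * x)
    return x

k1, k2, dk = 1.0, 0.5, 0.01
base = simulate(k1, k2)

# PSA: the perturbation persists over the whole horizon
psa = (simulate(k1 + dk, k2) - base) / dk
print(f"PSA coefficient: {psa:.3f}")

# iPSA: a short impulse applied at different times tau
ipsas = []
for tau in (1.0, 5.0, 9.0):
    ipsa = (simulate(k1, k2, bump=(tau, tau + 0.1, dk)) - base) / (dk * 0.1)
    ipsas.append(ipsa)
    print(f"iPSA at t={tau}: {ipsa:.3f}")
```

The single PSA number integrates the effect over the whole horizon, while the iPSA values reveal *when* the parameter matters: here late impulses have far more effect on x(T) than early ones, information the PSA coefficient cannot provide.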
Systemization of burnup sensitivity analysis code (2) (Contract research)
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2008-08-01
Towards the practical use of fast reactors, improving prediction accuracy for neutronic properties in LMFBR cores is a very important subject, both for improving plant economic efficiency through rationally high-performance cores and for ensuring reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished through the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained from actual cores such as the experimental fast reactor JOYO. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient: the complexity of burnup sensitivity theory and the limitations of the system place a large burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Simply unifying each computational component is not sufficient, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. It is therefore necessary to systemize the current burnup sensitivity analysis code into functional component blocks that can be divided or assembled as the occasion demands
Regional and parametric sensitivity analysis of Sobol' indices
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2015-01-01
Nowadays, utilizing Monte Carlo estimators for variance-based sensitivity analysis has gained considerable popularity in many research fields. These estimators are usually based on n+2 sample matrices well designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs is reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes in the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 sample matrices originally used for computing the main and total effect indices, thus no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are utilized for illustrating the developed sensitivity concepts, and for demonstrating the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • The Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of Sobol' indices
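For reference, the baseline n+2 sample-matrix design the paper builds on can be sketched as follows. This is a minimal Saltelli-type implementation on the Ishigami function mentioned in the abstract; the specific estimator forms (Saltelli 2010 for main effects, Jansen 1999 for total effects) and the sample size are standard choices, not taken from the paper.

```python
# Monte Carlo estimation of Sobol' main and total effect indices from
# n+2 sample matrices: two base matrices A, B plus n "hybrid" matrices.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def sobol_indices(f, n_dim, n_samples, rng):
    # Two independent base matrices; inputs uniform on [-pi, pi].
    A = rng.uniform(-np.pi, np.pi, (n_samples, n_dim))
    B = rng.uniform(-np.pi, np.pi, (n_samples, n_dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(n_dim), np.empty(n_dim)
    for i in range(n_dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # i-th hybrid matrix
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var        # main effect
        ST[i] = 0.5 * np.mean((fA - fABi)**2) / var   # total effect
    return S, ST

rng = np.random.default_rng(0)
S, ST = sobol_indices(ishigami, 3, 100_000, rng)
# Analytical values for a=7, b=0.1: S ~ (0.314, 0.442, 0); ST ~ (0.558, 0.442, 0.244)
```

The paper's contribution is to reuse exactly these n+2 matrices to compute the regional and parametric effect functions, so no model runs beyond the ones above are needed.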
Sensitivity analysis: Interaction of DOE SNF and packaging materials
International Nuclear Information System (INIS)
Anderson, P.A.; Kirkham, R.J.; Shaber, E.L.
1999-01-01
A sensitivity analysis was conducted to evaluate the technical issues pertaining to possible destructive interactions between spent nuclear fuels (SNFs) and the stainless steel canisters. When issues are identified through such an analysis, they provide the technical basis for answering 'what if' questions and, if needed, for conducting additional analyses, testing, or other efforts to resolve them, so that licensing rests on solid technical grounds. The analysis reported herein systematically assessed the chemical and physical properties and the potential interactions of the materials that comprise typical US Department of Energy (DOE) SNFs and the stainless steel canisters in which they will be stored, transported, and placed in a geologic repository for final disposition. The primary focus in each step of the analysis was to identify any phenomena that could potentially compromise the structural integrity of the canisters and to assess their thermodynamic feasibility
Linear regression and sensitivity analysis in nuclear reactor design
International Nuclear Information System (INIS)
Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.
2015-01-01
Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using the Brayton cycle for the design of a GCFBR. • Performed detailed sensitivity analysis on a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in a nuclear reactor design. The analysis helps to determine the parameters on which a LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for applying LR to reactor design are performed based on a set of tests. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal–hydraulics calculations. A gas-cooled fast breeder reactor (GCFBR) design, with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of data
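As a minimal illustration of the regression-based sensitivity idea (not the paper's MCNP6/R/Java workflow), standardized regression coefficients (SRCs) from a least-squares fit rank input influence when the response is close to linear, and the sum of their squares approximates R², which checks the linearity assumption the paper validates. The toy response below stands in for a reactor model.

```python
# Standardized regression coefficients as linear sensitivity measures.
# The synthetic response is illustrative: X[:, 0] dominates, X[:, 2] is inert.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                    # three sampled design parameters
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Least-squares fit with intercept, then standardize the slope coefficients.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
src = beta * X.std(axis=0) / y.std()

# For a near-linear model, src**2 approximates each input's share of the
# output variance, and their sum approximates R^2.
print(src**2, (src**2).sum())
```

If the SRC² values summed well below 1, the linearity assumption would be suspect and a linear surrogate would be inappropriate, mirroring the paper's validation step.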
Weber, Benjamin; Hochhaus, Guenther
2015-07-01
The role of plasma pharmacokinetics (PK) for assessing bioequivalence at the target site, the lung, for orally inhaled drugs remains unclear. A validated semi-mechanistic model, considering the presence of mucociliary clearance in central lung regions, was expanded for quantifying the sensitivity of PK studies in detecting differences in the pulmonary performance (total lung deposition, central-to-peripheral lung deposition ratio, and pulmonary dissolution characteristics) between test (T) and reference (R) inhaled fluticasone propionate (FP) products. PK bioequivalence trials for inhaled FP were simulated based on this PK model for a varying number of subjects and T products. The statistical power to conclude bioequivalence when T and R products are identical was demonstrated to be 90% for approximately 50 subjects. Furthermore, the simulations demonstrated that PK metrics (area under the concentration-time curve (AUC) and Cmax) are capable of detecting differences between T and R formulations of inhaled FP products when the products differ by more than 20%, 30%, and 25% for total lung deposition, central-to-peripheral lung deposition ratio, and pulmonary dissolution characteristics, respectively. These results were derived using a rather conservative risk assessment approach with an error rate of <10%. The simulations thus indicated that PK studies might be a viable alternative to clinical studies comparing pulmonary efficacy biomarkers for slowly dissolving inhaled drugs. PK trials for pulmonary efficacy equivalence testing should be complemented by in vitro studies to avoid false positive bioequivalence assessments that are theoretically possible for some specific scenarios. Moreover, a user-friendly web application for simulating such PK equivalence trials with inhaled FP is provided.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
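A minimal sketch of the approach described above, with invented numbers rather than the paper's H. pylori model: parameters are re-estimated from bootstrap resamples of patient-level trial data (instead of assumed theoretical distributions), and the decision model is re-evaluated on each draw, yielding an uncertainty distribution for the cost-effectiveness result.

```python
# Probabilistic sensitivity analysis via the bootstrap: resample the raw
# trial data, re-derive the model parameters, and re-run the decision model.
# All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-patient trial data: eradication success (0/1) and cost.
success = rng.binomial(1, 0.85, size=200)
cost = rng.gamma(shape=4.0, scale=50.0, size=200)

def icer(p_success, mean_cost):
    """Toy decision model: cost per eradication achieved."""
    return mean_cost / p_success

draws = []
for _ in range(5000):
    idx = rng.integers(0, 200, size=200)    # bootstrap resample of patients
    draws.append(icer(success[idx].mean(), cost[idx].mean()))

# Percentile interval summarizing the uncertainty in the model output.
lo, hi = np.percentile(draws, [2.5, 97.5])
print(lo, hi)
```

Because the parameters are drawn from the empirical data rather than fitted distributions, the interval reflects the trial's actual sampling variability, which is the advantage over purely theoretical distributions the abstract notes.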
Global sensitivity analysis for models with spatially dependent outputs
International Nuclear Information System (INIS)
Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.
2011-01-01
The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
NPV Sensitivity Analysis: A Dynamic Excel Approach
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
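The same dynamic recalculation can be sketched outside a spreadsheet (cash flows and rate/growth grids here are illustrative, not from the paper): recompute NPV across a grid of interdependent inputs instead of evaluating a single static formula.

```python
# NPV sensitivity sweep: vary the discount rate and a growth factor on
# the inflows together, and tabulate the resulting NPV surface.

def npv(rate, cash_flows):
    """Net present value of cash flows indexed from period 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

base_flows = [-1000.0, 300.0, 400.0, 500.0, 200.0]

for rate in (0.05, 0.10, 0.15):
    for growth in (0.9, 1.0, 1.1):
        flows = [base_flows[0]] + [cf * growth for cf in base_flows[1:]]
        print(f"rate={rate:.2f} growth={growth:.1f} NPV={npv(rate, flows):9.2f}")
```

Scanning the printed grid shows at a glance how NPV responds to joint changes in the two inputs, which is exactly what a single static NPV cell hides.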
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
Parametric Sensitivity Analysis of the WAVEWATCH III Model
Directory of Open Access Journals (Sweden)
Beng-Chun Lee
2009-01-01
Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, both to determine how many of these parameters should be considered for further discussion and to rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.
Extended forward sensitivity analysis of one-dimensional isothermal flow
International Nuclear Information System (INIS)
Johnson, M.; Zhao, H.
2013-01-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of the time step with respect to other physical parameters, the simulation can be run at optimized time steps without affecting confidence in the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
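The core idea of forward sensitivity analysis, integrating sensitivity equations alongside the state, can be sketched on a scalar ODE (an illustrative model, not the paper's 1-D flow equations): the sensitivity s = dy/dp obeys its own linear ODE, ds/dt = (∂f/∂y)s + ∂f/∂p.

```python
# Forward sensitivity analysis on dy/dt = -p*y, y(0) = 1:
# augment the state with s = dy/dp and integrate both together.
import numpy as np
from scipy.integrate import solve_ivp

def augmented(t, z, p):
    y, s = z
    dydt = -p * y
    dsdt = -p * s - y      # (df/dy)*s + df/dp, with df/dy = -p, df/dp = -y
    return [dydt, dsdt]

p, t_end = 0.7, 2.0
sol = solve_ivp(augmented, (0.0, t_end), [1.0, 0.0], args=(p,),
                rtol=1e-10, atol=1e-12)
y_T, s_T = sol.y[:, -1]

# Analytical check: y = exp(-p*t), so dy/dp = -t*exp(-p*t).
print(y_T, s_T, -t_end * np.exp(-p * t_end))
```

The same augmentation applied to a time-step-size parameter is what lets the paper's method quantify accumulated discretization error within a single run instead of a separate convergence study.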
Sensitivity Analysis of BLISK Airfoil Wear †
Directory of Open Access Journals (Sweden)
Andreas Kellersmann
2018-05-01
Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one and leads to a changed aerodynamic behavior, and therefore to a change in performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than a conventional HPC. To ensure effective maintenance for BLISKs, the impact of coupled misshapen blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. To that end, a design of experiments (DoE) was performed to identify the geometric properties which lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber are identified as the most important coupled parameters for all operating points.
The role of sensitivity analysis in probabilistic safety assessment
International Nuclear Information System (INIS)
Hirschberg, S.; Knochenhauer, M.
1987-01-01
The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)
Mønster, Jacob G; Samuelsson, Jerker; Kjeldsen, Peter; Rella, Chris W; Scheutz, Charlotte
2014-08-01
Using a dual species methane/acetylene instrument based on cavity ring down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with different trace gas configurations, (2) landfill with unknown emission source locations, (3) landfill with closely located emission sources, and (4) comparison with a Fourier transform infrared spectroscopy (FTIR) instrument using multiple trace gases for source separation. The new real-time, high precision instrument can measure methane plumes more than 1.2 km away from small sources (about 5 kg h(-1)) in urban areas with a measurement frequency allowing plume crossing at normal driving speed. The method can be used for quantification of total methane emissions from diffuse area sources down to 1 kg per hour and can be used to quantify individual sources with the right choice of wind direction and road distance. The placement of the trace gas is important for obtaining correct quantification, and uncertainty of up to 36% can be incurred when the trace gas is not co-located with the methane source. Measurements made at greater distances are less sensitive to errors in trace gas placement, and model calculations showed an uncertainty of less than 5% in both urban and open-country settings for placing the trace gas 100 m from the source, when measurements were done more than 3 km away. Using the ratio of the integrated plume concentrations of tracer gas and methane gives the most reliable results for measurements at various distances to the source, compared to the ratio of the highest concentration in the plume, the direct concentration ratio and using a Gaussian plume model. Under suitable weather and road conditions, the CRDS system can quantify the emission from different sources located close to each other using only one kind of trace gas due to the high time resolution, while the FTIR
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constructed from the orthogonal polynomial kernel function and the Gaussian radial basis kernel function, so it combines the global approximation capability of the polynomial kernel with the local approximation capability of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
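The mixed-kernel idea can be sketched with scikit-learn's SVR and a callable kernel. This is a simplified stand-in: the 50/50 weight, an ordinary (non-orthogonal) polynomial kernel, and the settings are assumptions, and the paper's Sobol'-index post-processing of the SVR coefficients is omitted.

```python
# SVR with a mixed kernel: a convex combination of a polynomial kernel
# (global trend) and a Gaussian RBF kernel (local detail).
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=1.0):
    poly = (X @ Y.T + 1.0) ** degree                          # global part
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)                                 # local part
    return w * poly + (1.0 - w) * rbf

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] ** 3 + np.sin(3 * X[:, 1])    # smooth illustrative response

model = SVR(kernel=mixed_kernel, C=100.0, epsilon=0.01).fit(X, y)
r2 = model.score(X, y)                    # fit quality of the meta-model
print(r2)
```

A sum of valid kernels is itself a valid kernel, so the combined Gram matrix remains positive semi-definite and the SVR optimization stays well-posed.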
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
Personalization of models with many model parameters : an efficient sensitivity analysis approach
Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.
2015-01-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of
Abbasi, Amirali; Sardroodi, Jaber Jahanbin
2018-02-01
We presented a density functional theory study of the adsorption of O3 and NO2 molecules on ZnO nanoparticles. Various adsorption geometries of O3 and NO2 over the nanoparticles were considered. For both O3 and NO2 adsorption systems, it was found that adsorption on the N-doped nanoparticle is more favorable in energy than that on the pristine one. Therefore, N-doped ZnO has better efficiency for use as an O3 and NO2 detection device. For all cases, the binding sites were located on the zinc atoms of the nanoparticle. The charge analysis based on natural bond orbital (NBO) analysis indicates that charge was transferred from the surface to the adsorbed molecule. The projected density of states of the interacting atoms represents the formation of chemical bonds at the interface region. Molecular orbitals of the adsorption systems indicate that the HOMOs were mainly localized on the adsorbed O3 and NO2 molecules, whereas the electronic densities in the LUMOs were dominant at the ZnO nanocrystal surface. By examining the distribution of spin densities, we found that the magnetization was mainly located over the adsorbed molecules. For the NO2 adsorbate, we found that the symmetric and asymmetric stretches were shifted to lower frequencies, while the bending mode was shifted to a higher frequency. Our DFT results thus provide a theoretical basis for why the adsorption of O3 and NO2 molecules on N-doped ZnO nanoparticles may be enhanced, informing the design and development of innovative and highly efficient sensor devices for O3 and NO2 recognition.
Wen, Mingjian; Shirodkar, Sharmila N.; Plecháč, Petr; Kaxiras, Efthimios; Elliott, Ryan S.; Tadmor, Ellad B.
2017-12-01
Two-dimensional molybdenum disulfide (MoS2) is a promising material for the next generation of switchable transistors and photodetectors. In order to perform large-scale molecular simulations of the mechanical and thermal behavior of MoS2-based devices, an accurate interatomic potential is required. To this end, we have developed a Stillinger-Weber potential for monolayer MoS2. The potential parameters are optimized to reproduce the geometry (bond lengths and bond angles) of MoS2 in its equilibrium state and to match as closely as possible the forces acting on the atoms along a dynamical trajectory obtained from ab initio molecular dynamics. Verification calculations indicate that the new potential accurately predicts important material properties including the strain dependence of the cohesive energy, the elastic constants, and the linear thermal expansion coefficient. The uncertainty in the potential parameters is determined using a Fisher information theory analysis. It is found that the parameters are fully identified, and none are redundant. In addition, the Fisher information matrix provides uncertainty bounds for predictions of the potential for new properties. As an example, bounds on the average vibrational thickness of a MoS2 monolayer at finite temperature are computed and found to be consistent with the results from a molecular dynamics simulation. The new potential is available through the OpenKIM interatomic potential repository at https://openkim.org/cite/MO_201919462778_000.
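The Fisher-information step described above can be sketched for a generic least-squares fit (a toy two-parameter model, not the actual MoS2 potential): with Gaussian noise of variance sigma^2, F = J^T J / sigma^2 for the residual Jacobian J, the inverse F^-1 bounds the parameter covariance (Cramer-Rao), and strictly positive eigenvalues of F mean there are no redundant parameter directions, the "fully identified" conclusion the abstract reports.

```python
# Fisher information analysis of a toy model y = a*exp(-b*t) fitted to
# noisy data. Model, parameter values, and noise level are illustrative.
import numpy as np

t = np.linspace(0.0, 5.0, 50)
a, b, sigma = 2.0, 0.8, 0.05

# Jacobian of the model predictions with respect to (a, b).
J = np.column_stack([np.exp(-b * t),           # dy/da
                     -a * t * np.exp(-b * t)]) # dy/db
F = J.T @ J / sigma**2                         # Fisher information matrix

eigvals = np.linalg.eigvalsh(F)                # > 0 in all directions => identified
cov = np.linalg.inv(F)                         # Cramer-Rao bound on covariance
std_a, std_b = np.sqrt(np.diag(cov))
print(eigvals, std_a, std_b)
```

Propagating `cov` through the gradient of any new predicted property gives the kind of uncertainty bound on predictions (e.g., vibrational thickness) that the paper derives.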
Directory of Open Access Journals (Sweden)
Zhong Wu
2017-04-01
Full Text Available Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method maintains many advantages of global sensitivity analysis, while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software was provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches that in a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method were further discussed.
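The MCF mechanics can be sketched generically (a toy rutting model stands in for Pavement ME; all coefficients are invented): sample the inputs, filter the runs into "pass"/"fail" against a design criterion, and rank each input by the Kolmogorov-Smirnov distance between its distributions in the two groups.

```python
# Monte Carlo filtering: inputs whose pass/fail distributions differ most
# are the ones that matter near the design criterion.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 5000
thickness = rng.uniform(100, 300, n)   # mm; strongly influential in this toy
modulus = rng.uniform(2000, 4000, n)   # MPa; mildly influential
binder = rng.uniform(0, 1, n)          # inert in this toy model

# Toy distress model: lower rutting is better.
rutting = 20.0 - 0.05 * thickness - 0.001 * modulus + rng.normal(0, 0.5, n)
passed = rutting < 5.0                 # design criterion

for name, x in [("thickness", thickness), ("modulus", modulus), ("binder", binder)]:
    stat = ks_2samp(x[passed], x[~passed]).statistic
    print(f"{name}: KS = {stat:.3f}")
```

A large KS statistic means the criterion filters that input's distribution heavily, i.e. the input is regionally sensitive near the design limit, even if its global variance contribution elsewhere is modest.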
Sensitivity Analysis of OECD Benchmark Tests in BISON
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1986-01-01
An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
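The underlying mechanism, propagating derivatives through the same arithmetic the program performs, can be sketched with forward-mode dual numbers. This is a Python illustration of the general idea only; GRESS itself is a FORTRAN compiler-based system and works differently in detail.

```python
# Forward-mode automatic differentiation via dual numbers: each value
# carries its derivative (dot) through the program's own arithmetic.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dsin(x):
    """Chain rule for sin applied to a dual number."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# "Program" to be differentiated: f(x) = x*sin(x) + 3x
def f(x):
    return x * dsin(x) + 3.0 * x

x = Dual(2.0, 1.0)    # seed dx/dx = 1
y = f(x)
# y.val is f(2); y.dot is f'(2) = sin(2) + 2*cos(2) + 3, exact to machine precision
```

Unlike finite differences, the derivative emerges exactly (to rounding) from one augmented execution, which is the efficiency GRESS exploits when setting up sensitivity equations automatically.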
Developing context sensitive BIM based applications
Hartmann, Timo; Underwood, J.; Isikdag, U.
2010-01-01
Current Building Information Model (BIM) based applications do not integrate well with the varying and frequently changing work processes of Architectural, Engineering, and Construction (AEC) professionals. One cause for this problem is that traditionally software developers apply software design
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Directory of Open Access Journals (Sweden)
Georgios Arampatzis
Full Text Available Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of
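The two-step screen-then-estimate strategy described above can be sketched in miniature. The following is a minimal Python sketch, assuming a scalar quantity of interest `f(theta)`, its output variance, and the diagonal of an estimated Fisher Information Matrix are already available; the function name and the simple tolerance rule are illustrative, not the paper's estimator:

```python
import numpy as np

def screen_then_estimate(f, theta, var_f, fisher_diag, h=1e-3, tol=1e-6):
    """Two-step sensitivity sketch: screen with a Fisher-information-based
    bound, then finite-difference only the surviving parameters."""
    theta = np.asarray(theta, dtype=float)
    # Step 1: bound |dE[f]/dtheta_k| <= sqrt(Var(f) * I_kk) and screen
    bounds = np.sqrt(var_f * np.asarray(fisher_diag, dtype=float))
    survivors = np.where(bounds > tol)[0]
    # Step 2: central finite differences, but only for the survivors
    sens = np.zeros_like(theta)
    for k in survivors:
        tp, tm = theta.copy(), theta.copy()
        tp[k] += h
        tm[k] -= h
        sens[k] = (f(tp) - f(tm)) / (2 * h)
    return bounds, sens
```

The bound discards parameters whose Fisher information is negligible, so the costlier finite-difference step runs only over the (potentially) sensitive survivors.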
Shi, Meng-Ting; Yang, Xin-An; Qin, Li-Ming; Zhang, Wang-Bing
2018-09-26
A gold-particle-deposited glassy carbon electrode (Au/GCE) was used for the first time in electrochemical vapor generation (ECVG) and was demonstrated to have excellent catalytic properties for the electrochemical conversion of aqueous mercury, especially methylmercury (CH₃Hg⁺), to gaseous mercury. Systematic research has shown that highly consistent or distinctly different atomic fluorescence spectroscopy signals of CH₃Hg⁺ and Hg²⁺ can be achieved by controlling the electrolytic parameters of ECVG. Thereby, a new green and accurate method for mercury speciation analysis, based on the distinct electrochemical reaction behavior of Hg²⁺ and CH₃Hg⁺ on the modified electrode, was established. Furthermore, electrochemical impedance spectra and square wave voltammetry indicated that the ECVG reaction of CH₃Hg⁺ may follow an electrocatalytic mechanism. Under the selected conditions, the limits of detection of Hg²⁺ and CH₃Hg⁺ are 5.3 ng L⁻¹ and 4.4 ng L⁻¹ for liquid samples and 0.53 pg mg⁻¹ and 0.44 pg mg⁻¹ for solid samples, respectively. The precision of 5 measurements is less than 6% for Hg²⁺ and CH₃Hg⁺ concentrations ranging from 0.2 to 15.0 μg L⁻¹. The accuracy and practicability of the proposed method were verified by analyzing the mercury content in a certified reference material and in several fish and water samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Sensitivity analysis for the effects of multiple unmeasured confounders.
Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate
2016-09-01
Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring the Thomsen parameter δ in the laboratory, even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and its sensitivity to the source-receiver offset, vertical interval velocity error, and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
Dannhauer, Torben; Sattler, Martina; Wirth, Wolfgang; Hunter, David J; Kwoh, C Kent; Eckstein, Felix
2014-08-01
Biomechanical measurement of muscle strength represents established technology in evaluating limb function. Yet, analysis of longitudinal change suffers from relatively large between-measurement variability. Here, we determine the sensitivity to change of magnetic resonance imaging (MRI)-based measurement of thigh muscle anatomical cross-sectional areas (ACSAs) versus isometric strength in limbs with and without structurally progressive knee osteoarthritis (KOA), with a focus on the quadriceps. Of 625 "Osteoarthritis Initiative" participants with radiographic KOA, 20 showed MRI cartilage loss and radiographic joint space width loss in the right knee and had isometric muscle strength measurements and axial T1-weighted spin-echo acquisitions of the thigh. Muscle ACSAs were determined from manual segmentation at 33% femoral length (distal to proximal). In progressor knees, the reduction in quadriceps ACSA between baseline and 2-year follow-up was -2.8 ± 7.9% (standardized response mean [SRM] = -0.35), and it was -1.8 ± 6.8% (SRM = -0.26) in matched, non-progressive KOA controls. The decline in extensor strength was more variable than that in ACSAs, both in progressors (-3.9 ± 20%; SRM = -0.20) and in non-progressive controls (-4.5 ± 28%; SRM = -0.16). MRI-based analysis of quadriceps muscle ACSAs appears to be more sensitive to longitudinal change than isometric extensor strength and is suggestive of greater loss in limbs with structurally progressive KOA than in non-progressive controls.
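The standardized response mean (SRM) reported above is simply the mean longitudinal change divided by the standard deviation of that change, which makes it a dimensionless measure of sensitivity to change. A minimal sketch (the function name is ours):

```python
import numpy as np

def standardized_response_mean(baseline, follow_up):
    """SRM: mean of the baseline-to-follow-up change divided by the
    sample standard deviation of that change."""
    change = np.asarray(follow_up, dtype=float) - np.asarray(baseline, dtype=float)
    return change.mean() / change.std(ddof=1)
```

An |SRM| of 0.35 for quadriceps ACSA versus 0.20 for extensor strength is precisely what marks the MRI measure as the more sensitive to change despite the smaller raw percentage loss.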
Sensitive Spectroscopic Analysis of Biomarkers in Exhaled Breath
Bicer, A.; Bounds, J.; Zhu, F.; Kolomenskii, A. A.; Kaya, N.; Aluauee, E.; Amani, M.; Schuessler, H. A.
2018-06-01
We have developed a novel optical setup based on a high-finesse cavity and absorption laser spectroscopy in the near-IR spectral region. In pilot experiments, spectrally resolved absorption measurements of biomarkers in exhaled breath, such as methane and acetone, were carried out using cavity ring-down spectroscopy (CRDS). With a 172-cm-long cavity, an effective optical path length of 132 km was achieved. The CRDS technique is well suited for such measurements due to its high sensitivity and good spectral resolution. Detection limits of 8 ppbv for methane and 2.1 ppbv for acetone, with a spectral sampling of 0.005 cm-1, were achieved, which made it possible to analyze multicomponent gas mixtures and to observe absorption peaks of 12CH4 and 13CH4. Further improvements of the technique have the potential to realize diagnostics of health conditions based on a multicomponent analysis of breath samples.
Topological sensitivity based far-field detection of elastic inclusions
Directory of Open Access Journals (Sweden)
Tasawar Abbas
2018-03-01
Full Text Available The aim of this article is to present and rigorously analyze topological sensitivity based algorithms for the detection of diametrically small inclusions in an isotropic homogeneous elastic formation, using single and multiple measurements of the far-field scattering amplitudes. An L2-cost functional is considered and a location indicator is constructed from its topological derivative. The performance of the indicator is analyzed in terms of the topological sensitivity for location detection and of its stability with respect to measurement and medium noises. It is established that the location indicator does not guarantee inclusion detection and achieves only a low resolution when there is mode conversion in the elastic formation. Accordingly, a weighted location indicator is designed to tackle the mode-conversion phenomenon. It is substantiated that the weighted indicator renders the location of an inclusion stably, with a resolution conforming to the Rayleigh criterion. 2000 MSC: 35R30, 35L05, 74B05, 47A52, 65J20. Keywords: Inverse elastic scattering, Elasticity imaging, Topological derivative, Resolution analysis, Stability analysis
Advanced Fuel Cycle Economic Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4-ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
The role of sensitivity analysis in assessing uncertainty
International Nuclear Information System (INIS)
Crick, M.J.; Hill, M.D.
1987-01-01
Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.
Low Power and High Sensitivity MOSFET-Based Pressure Sensor
International Nuclear Information System (INIS)
Zhang Zhao-Hua; Ren Tian-Ling; Zhang Yan-Hong; Han Rui-Rui; Liu Li-Tian
2012-01-01
Based on the stress-sensitive behavior of the metal-oxide-semiconductor field-effect transistor (MOSFET), a low-power MOSFET pressure sensor is proposed. Compared with the traditional piezoresistive pressure sensor, the present sensor performs better in both sensitivity and power consumption: the sensitivity of the MOSFET sensor is raised by 87%, while the power consumption is decreased by 20%. (cross-disciplinary physics and related areas of science and technology)
Analysis of Sensitivity Experiments - An Expanded Primer
2017-03-08
conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS (Gradient Enhanced Software System). Application of the automated procedure, with 'direct' and 'adjoint' sensitivity theory, to the analysis of non-linear, iterative systems of equations is discussed. Computational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
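The "computer calculus" idea behind GRESS, propagating exact derivatives through a program alongside its values, can be illustrated with forward-mode automatic differentiation on dual numbers. This toy Python class is only an analogy for what the FORTRAN compiler generates, not its implementation:

```python
class Dual:
    """A value paired with its derivative, (a, a'). Arithmetic on Duals
    propagates derivatives exactly via the sum and product rules."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at (x, 1) and read off the propagated derivative."""
    return f(Dual(x, 1.0)).dot
```

Seeding the input with derivative 1 and running the program once yields the exact derivative of the output, which is the same effect GRESS achieves by augmenting FORTRAN source code.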
Global and Local Sensitivity Analysis Methods for a Physical System
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Complex finite element sensitivity method for creep analysis
International Nuclear Information System (INIS)
Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry
2015-01-01
The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
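The complex-variable formulation underlying ZFEM is closely related to complex-step differentiation, which recovers derivatives without the subtractive cancellation that plagues finite differences. A minimal sketch of the complex-step rule (illustrative of the principle; ZFEM embeds this idea inside the finite element formulation itself):

```python
def complex_step_derivative(f, x, h=1e-30):
    """Complex-step differentiation: f'(x) ~ Im(f(x + i*h)) / h.
    Because no subtraction of nearly equal values occurs, h can be
    made tiny and the derivative is accurate to machine precision."""
    return f(complex(x, h)).imag / h
```

A single perturbed evaluation gives the partial derivative; this is why a single ZFEM run can deliver the response derivatives that would otherwise require multiple standard finite element runs.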
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
A global sensitivity analysis of crop virtual water content
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, no assessments of data sensitivity to model parameters performed at the global scale are known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5×5 arc-minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
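The one-at-a-time sensitivity index described above, the relative change of output divided by the relative change of the input parameter, fits in a few lines. In this sketch the function and parameter names are illustrative; the toy model uses the abstract's own definition of VWC as evapotranspiration divided by yield:

```python
def sensitivity_index(model, params, name, rel_step=0.01):
    """One-at-a-time local sensitivity index:
    S = (relative change of output) / (relative change of input)."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] = params[name] * (1.0 + rel_step)
    return ((model(perturbed) - base) / base) / rel_step

# Toy VWC model per the abstract: evapotranspiration / actual yield
vwc = lambda p: p["evapotranspiration"] / p["yield"]
```

A positive index marks a direct sensitivity (output moves with the parameter, as for evapotranspiration), a negative index an inverse one (as for yield).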
Dispersion sensitivity analysis & consistency improvement of APFSDS
Directory of Open Access Journals (Sweden)
Sangeeta Sharma Panda
2017-08-01
In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw; a correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modifications for existing ammunition. A number of designs were evaluated numerically before five designs were frozen for further study. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results were validated by free-flight trials of the finalised design.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation, regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
Sensitivity analysis of the RESRAD, a dose assessment code
International Nuclear Information System (INIS)
Yu, C.; Cheng, J.J.; Zielen, A.J.
1991-01-01
The RESRAD code is a pathway analysis code designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational.
A sensitivity analysis approach to optical parameters of scintillation detectors
International Nuclear Information System (INIS)
Ghal-Eh, N.; Koohi-Fayegh, R.
2008-01-01
In this study, an extended version of the Monte Carlo light transport code PHOTRACK has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of the light collection process in scintillators.
Sobol’ sensitivity analysis for stressor impacts on honeybee colonies
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
Global sensitivity analysis using low-rank tensor approximations
International Nuclear Information System (INIS)
Konakli, Katerina; Sudret, Bruno
2016-01-01
In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
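For comparison with the meta-model route above, reference Sobol' indices can be estimated by plain pick-freeze Monte Carlo with independent uniform inputs. This sketch shows the brute-force estimator that the low-rank tensor approach is designed to avoid (our function name; standard estimator form):

```python
import numpy as np

def sobol_first_order(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' indices
    of a vectorized model f with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA = f(A)
    var = fA.var()
    indices = []
    for k in range(dim):
        BAk = B.copy()
        BAk[:, k] = A[:, k]              # share only coordinate k with A
        cov_k = np.mean(fA * f(BAk)) - fA.mean() ** 2
        indices.append(cov_k / var)
    return np.array(indices)
```

Each index needs an extra n model evaluations per input dimension, which is exactly the cost that post-processing polynomial or low-rank tensor meta-model coefficients sidesteps.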
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
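The regression metamodel idea is compact: fit a first-order polynomial to the simulated input/output pairs and read input importance from the standardized coefficients. A minimal sketch, assuming a first-order metamodel is adequate for the simulation at hand (the function name is ours):

```python
import numpy as np

def regression_sensitivity(X, y):
    """Fit a linear regression metamodel y ~ b0 + b.X to simulation
    inputs X (n x d) and outputs y (n,), and return the standardized
    coefficients b_k * std(x_k) / std(y) as importance measures."""
    Xc = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta[1:] * X.std(axis=0) / y.std()
```

Standardizing by the input and output spreads makes the coefficients comparable across inputs with different units, which is what lets them serve as sensitivity rankings.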
Sensitivity analysis of numerical solutions for environmental fluid problems
International Nuclear Information System (INIS)
Tanaka, Nobuatsu; Motoyama, Yasunori
2003-01-01
In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions using sensitivity analysis. If a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
DEFF Research Database (Denmark)
Mønster, Jacob; Samuelsson, Jerker; Kjeldsen, Peter
2014-01-01
Using a dual species methane/acetylene instrument based on cavity ring down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with differe...
ECOS - analysis of sensitivity to database and input parameters
International Nuclear Information System (INIS)
Sumerling, T.J.; Jones, C.H.
1986-06-01
The sensitivity of doses calculated by the generic biosphere code ECOS to parameter changes has been investigated by the authors for the Department of the Environment as part of its radioactive waste management research programme. The sensitivity of the results to radionuclide-dependent parameters has been tested by specifying reasonable parameter ranges and performing code runs for best-estimate, upper-bound, and lower-bound parameter values. The work indicates that doses are most sensitive to scenario parameters: geosphere input fractions, area of contaminated land, land use and diet, flux of contaminated waters, and water use. Recommendations are made based on the results of the sensitivity analysis. (author)
High sensitivity analysis of atmospheric gas elements
International Nuclear Information System (INIS)
Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo
2006-01-01
We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10¹⁷ atoms/cm³ for H, of 3 × 10¹⁶ atoms/cm³ for C and of 2 × 10¹⁶ atoms/cm³ for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.
Hasegawa, Raiden; Small, Dylan
2017-12-01
In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
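For binary outcomes in matched pairs, the worst-case calibration described above has a compact form: if a hidden bias can shift the within-pair odds of treatment by at most a factor Γ, the chance that a discordant pair's event falls in the treated unit is at most Γ/(1 + Γ), which bounds the one-sided McNemar p-value by a binomial tail. A minimal sketch (function names and example counts are illustrative, not taken from the study):

```python
from math import comb

def binom_tail(t, n, p):
    # P(X >= t) for X ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

def worst_case_pvalue(t_discordant_treated, n_discordant, gamma):
    """Upper bound on the one-sided McNemar p-value when the odds of
    treatment within a pair may differ by at most a factor gamma
    (gamma = 1 recovers the usual randomization p-value)."""
    p_plus = gamma / (1.0 + gamma)
    return binom_tail(t_discordant_treated, n_discordant, p_plus)

# Example: 15 of 20 discordant pairs have the event in the treated unit.
p_randomized = worst_case_pvalue(15, 20, 1.0)  # assumes no hidden bias
p_gamma2 = worst_case_pvalue(15, 20, 2.0)      # allows 2-fold hidden bias
```

Raising Γ inflates the p-value bound; the average-case calibration proposed in the paper replaces the worst-case Γ/(1 + Γ) with a bound on the average bias across pairs, which yields a less conservative analysis.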
Probability density adjoint for sensitivity analysis of the Mean of Chaos
Energy Technology Data Exchange (ETDEWEB)
Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
2014-08-01
Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
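The breakdown the authors describe is easy to reproduce: for a chaotic map, a finite-time average responds noisily to a parameter, so naive finite differences of long-time averages fluctuate with the trajectory length and initial condition. A minimal sketch of that baseline for the logistic map (the density-adjoint method itself is not reproduced here; all names are illustrative):

```python
def logistic_orbit_mean(r, n_steps=100_000, n_transient=1_000, x0=0.3):
    """Long-time average of x_n for the chaotic logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(n_transient):  # discard the transient before averaging
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        total += x
    return total / n_steps

# Naive central finite difference of the mean with respect to r; its value
# fluctuates with n_steps and x0, which is the difficulty that adjoint-type
# methods operating on the invariant density are designed to avoid.
r, dr = 3.9, 1e-3
sens = (logistic_orbit_mean(r + dr) - logistic_orbit_mean(r - dr)) / (2 * dr)
```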
Sensitive determination of citrinin based on molecular imprinted electrochemical sensor
Energy Technology Data Exchange (ETDEWEB)
Atar, Necip [Department of Chemical Engineering, Faculty of Engineering, Pamukkale University, Denizli (Turkey); Yola, Mehmet Lütfi, E-mail: mehmetyola@gmail.com [Department of Metallurgical and Materials Engineering, Faculty of Engineering, Sinop University, Sinop (Turkey); Eren, Tanju [Department of Chemical Engineering, Faculty of Engineering, Pamukkale University, Denizli (Turkey)
2016-01-30
Graphical abstract: - Highlights: • Citrinin-imprinted electrochemical sensor is developed for the sensitive detection of citrinin. • The nanomaterial and citrinin-imprinted surfaces were characterized by several methods. • Citrinin-imprinted electrochemical sensor is sensitive and selective in analysis of food. • Citrinin-imprinted electrochemical sensor is preferred to the other methods. - Abstract: In this report, a novel molecular imprinted voltammetric sensor based on glassy carbon electrode (GCE) modified with platinum nanoparticles (PtNPs) involved in a polyoxometalate (H{sub 3}PW{sub 12}O{sub 40}, POM) functionalized reduced graphene oxide (rGO) was prepared for the determination of citrinin (CIT). The developed surfaces were characterized by using scanning electron microscope (SEM), transmission electron microscope (TEM), X-ray photoelectron spectroscopy (XPS) and X-ray diffraction (XRD) method. CIT imprinted GCE was prepared via electropolymerization process of 80.0 mM pyrrole as monomer in the presence of phosphate buffer solution (pH 6.0) containing 20.0 mM CIT. The linearity range and the detection limit of the developed method were calculated as 1.0 × 10{sup −12}–1.0 × 10{sup −10} M and 2.0 × 10{sup −13} M, respectively. In addition, the voltammetric sensor was applied to rye samples. The stability and selectivity of the voltammetric sensor were also reported.
Sensitive determination of citrinin based on molecular imprinted electrochemical sensor
International Nuclear Information System (INIS)
Atar, Necip; Yola, Mehmet Lütfi; Eren, Tanju
2016-01-01
Graphical abstract: - Highlights: • Citrinin-imprinted electrochemical sensor is developed for the sensitive detection of citrinin. • The nanomaterial and citrinin-imprinted surfaces were characterized by several methods. • Citrinin-imprinted electrochemical sensor is sensitive and selective in analysis of food. • Citrinin-imprinted electrochemical sensor is preferred to the other methods. - Abstract: In this report, a novel molecular imprinted voltammetric sensor based on glassy carbon electrode (GCE) modified with platinum nanoparticles (PtNPs) involved in a polyoxometalate (H3PW12O40, POM) functionalized reduced graphene oxide (rGO) was prepared for the determination of citrinin (CIT). The developed surfaces were characterized by using scanning electron microscope (SEM), transmission electron microscope (TEM), X-ray photoelectron spectroscopy (XPS) and X-ray diffraction (XRD) method. CIT imprinted GCE was prepared via electropolymerization process of 80.0 mM pyrrole as monomer in the presence of phosphate buffer solution (pH 6.0) containing 20.0 mM CIT. The linearity range and the detection limit of the developed method were calculated as 1.0 × 10^−12–1.0 × 10^−10 M and 2.0 × 10^−13 M, respectively. In addition, the voltammetric sensor was applied to rye samples. The stability and selectivity of the voltammetric sensor were also reported.
Sharman, James E; Boutouyrie, Pierre; Perier, Marie-Cécile; Thomas, Frédérique; Guibout, Catherine; Khettab, Hakim; Pannier, Bruno; Laurent, Stéphane; Jouven, Xavier; Empana, Jean-Philippe
2018-02-14
People with exaggerated exercise blood pressure (BP) have adverse cardiovascular outcomes. Mechanisms are unknown but could be explained through impaired neural baroreflex sensitivity (BRS) and/or large artery stiffness. This study aimed to determine the associations of carotid BRS and carotid stiffness with exaggerated exercise BP. Blood pressure was recorded at rest and following an exercise step-test among 8976 adults aged 50 to 75 years from the Paris Prospective Study III. Resting carotid BRS (low frequency gain, from carotid distension rate, and heart rate) and stiffness were measured by high-precision echotracking. A systolic BP threshold of ≥150 mmHg defined exaggerated exercise BP and ≥140/90 mmHg defined resting hypertension (±antihypertensive treatment). Participants with exaggerated exercise BP had significantly lower BRS [median (Q1; Q3) 0.10 (0.06; 0.16) vs. 0.12 (0.08; 0.19) (ms2/mm) 2×108; P < 0.001] but higher stiffness [mean ± standard deviation (SD) 7.34 ± 1.37 vs. 6.76 ± 1.25 m/s; P < 0.001] compared to those with non-exaggerated exercise BP. However, only lower BRS (per 1 SD decrement) was associated with exaggerated exercise BP among people without hypertension at rest {specifically among those with optimal BP; odds ratio (OR) 1.16 [95% confidence interval (95% CI) 1.01; 1.33], P = 0.04, and high-normal BP; OR 1.19 (95% CI 1.07; 1.32), P = 0.001} after adjustment for age, sex, body mass index, smoking, alcohol, total cholesterol, high-density lipoprotein cholesterol, resting heart rate, and antihypertensive medications. Impaired BRS, but not carotid stiffness, is independently associated with exaggerated exercise BP even among those with well-controlled resting BP. This indicates a potential pathway from depressed neural baroreflex function to abnormal exercise BP and clinical outcomes. © The Author 2017. Published on behalf of the European Society of Cardiology. All rights reserved.
Emissivity compensated spectral pyrometry—algorithm and sensitivity analysis
International Nuclear Information System (INIS)
Hagqvist, Petter; Sikström, Fredrik; Christiansson, Anna-Karin; Lennartson, Bengt
2014-01-01
In order to solve the problem of non-contact temperature measurements on an object with varying emissivity, a new method is herein described and evaluated. The method uses spectral radiance measurements and converts them to temperature readings. It proves to be resilient towards changes in spectral emissivity and tolerates noisy spectral measurements. It is based on an assumption of smooth changes in emissivity and uses historical values of spectral emissivity and temperature for estimating current spectral emissivity. The algorithm, its constituent steps and accompanying parameters are described and discussed. A thorough sensitivity analysis of the method is carried out through simulations. No rigorous instrument calibration is needed for the presented method and it is therefore industrially tractable. (paper)
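The core step of any spectral pyrometer, converting a spectral radiance sample to a temperature under an assumed emissivity, is a closed-form inversion of Planck's law at a single wavelength. A minimal single-wavelength sketch (the paper's algorithm additionally estimates the emissivity history from past readings, which is not reproduced here):

```python
from math import exp, log

H = 6.62607015e-34  # Planck constant [J s]
C = 2.99792458e8    # speed of light [m/s]
KB = 1.380649e-23   # Boltzmann constant [J/K]

def planck_radiance(wavelength, temperature, emissivity=1.0):
    """Spectral radiance [W / (m^2 sr m)] of a grey body at one wavelength."""
    return (emissivity * 2 * H * C**2 / wavelength**5
            / (exp(H * C / (wavelength * KB * temperature)) - 1.0))

def temperature_from_radiance(wavelength, radiance, emissivity=1.0):
    """Invert Planck's law at a single wavelength for an assumed emissivity."""
    a = 2 * emissivity * H * C**2 / (wavelength**5 * radiance)
    return H * C / (wavelength * KB * log(a + 1.0))

# Round trip at 900 nm and 1500 K with emissivity 0.8:
L = planck_radiance(900e-9, 1500.0, emissivity=0.8)
T = temperature_from_radiance(900e-9, L, emissivity=0.8)  # recovers ~1500 K
```

The round trip recovers the temperature exactly when the assumed emissivity matches the true one; assuming a too-high emissivity biases the reading low, which is precisely the error source the compensation algorithm targets.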
Sequence length variation, indel costs, and congruence in sensitivity analysis
DEFF Research Database (Denmark)
Aagesen, Lone; Petersen, Gitte; Seberg, Ole
2005-01-01
The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which the cost of indels was varied. Indels were treated either as a fifth character state, or strings of contiguous gaps were considered single events by using a linear affine gap cost. Congruence consistently improved when indels were treated as single events, but no congruence measure appeared as the obviously preferable one. However, when combining enough data, all congruence measures clearly tended to select the same alignment cost set as the optimal one. Disagreement among congruence measures was mostly caused by a dominant fragment or a data partition that included all or most of the length variation.
The EVEREST project: sensitivity analysis of geological disposal systems
International Nuclear Information System (INIS)
Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus
1997-01-01
The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
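The reference technique mentioned for parameter sensitivity, Monte Carlo simulation followed by linear regression, can be sketched in a few lines. For independent inputs and a near-linear model, each standardized regression coefficient reduces to the correlation between that input and the output (the three-parameter model below is a toy stand-in for a performance-assessment code; all names are illustrative):

```python
import random

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def model(a, b, c):
    # Toy linear model with independent uniform inputs
    return 4.0 * a + 2.0 * b + 0.5 * c

random.seed(1)
samples = [(random.random(), random.random(), random.random()) for _ in range(5000)]
outputs = [model(*s) for s in samples]

# For independent inputs, the standardized regression coefficient of each
# input equals its correlation with the output; ranking these correlations
# is the regression step of the Monte Carlo + linear regression technique.
src = [pearson([s[i] for s in samples], outputs) for i in range(3)]
```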
Molin, Laura; Cristoni, Simone; Crotti, Sara; Bernardi, Luigi Rossi; Seraglia, Roberta; Traldi, Pietro
2008-11-01
Spraying of oligonucleotide-matrix solutions through a stainless steel (ss) sieve (38 µm, 450 mesh) leads to the formation, on the matrix-assisted laser desorption/ionization (MALDI) sample holder, of uniformly distributed microcrystals, well separated from each other. When the resulting sample holder surface is irradiated by laser, abundant molecular species form, with a clear increase in both intensity and resolution with respect to values obtained by 'Dried Droplet', 'Double Layer', and 'Sandwich' deposition methods. In addition, unlike the usual situation, the sample is perfectly homogeneous, and identical spectra are obtained by irradiating different areas. On one hand, the data indicate that this method is highly effective for oligonucleotide MALDI analysis, and on the other, that it can be validly employed for fully automated MALDI procedures.
Sensitivity analysis techniques applied to a system of hyperbolic conservation laws
International Nuclear Information System (INIS)
Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.
2012-01-01
Sensitivity analysis is comprised of techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing samples sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender, we used the multiple allergen simultaneous test (MAST), which is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.
A general first-order global sensitivity analysis method
International Nuclear Information System (INIS)
Xu Chonggang; Gertner, George Zdzislaw
2008-01-01
Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
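The FAST mechanism described, one integer frequency per parameter with variance shares read off the Fourier spectrum, can be sketched as follows. This is the classical single-curve FAST with illustrative frequencies; note that integer frequencies carry exactly the aliasing risk the paper's improvements address:

```python
from math import sin, asin, cos, pi

def fast_first_order(f, freqs, harmonics=4):
    """Classical FAST: drive all inputs along one search curve, giving each
    input its own integer frequency, then read each input's variance share
    from the Fourier amplitudes at that frequency and its harmonics."""
    n = 2 * harmonics * max(freqs) + 1  # resolve the highest harmonic kept
    s_vals = [pi * (2 * k + 1 - n) / n for k in range(n)]  # s in (-pi, pi)
    ys = []
    for s in s_vals:
        # Triangle-wave search curve mapping each input into [0, 1]
        x = [0.5 + asin(sin(w * s)) / pi for w in freqs]
        ys.append(f(*x))
    mean = sum(ys) / n
    total_var = sum((y - mean) ** 2 for y in ys) / n
    indices = []
    for w in freqs:
        var_i = 0.0
        for m in range(1, harmonics + 1):
            a = sum(ys[k] * cos(m * w * s_vals[k]) for k in range(n)) / n
            b = sum(ys[k] * sin(m * w * s_vals[k]) for k in range(n)) / n
            var_i += 2.0 * (a * a + b * b)  # variance at harmonic m*w
        indices.append(var_i / total_var)
    return indices

# Linear test model y = 3*x1 + x2: variance shares should be near 9:1
S = fast_first_order(lambda x1, x2: 3.0 * x1 + x2, freqs=[11, 21])
```

For the linear test model the shares come out near 0.9 and 0.1, matching the squared coefficients, since each transformed input is uniform with equal variance.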
Directory of Open Access Journals (Sweden)
Xiao-meng Song
2013-01-01
Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computation cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
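The Morris screening step used in the first stage can be sketched as a trajectory-based elementary-effects computation on the unit hypercube. The toy model below stands in for the hydrological model; mu* ranks overall influence while sigma flags nonlinearity or interactions (all names are illustrative):

```python
import random

def morris_screen(f, dim, n_trajectories=20, levels=4, seed=0):
    """Morris elementary-effects screening on the unit hypercube.
    Returns (mu_star, sigma): the mean absolute elementary effect and its
    spread for each input."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))  # standard Morris step size
    grid = [i / (levels - 1) for i in range(levels)]
    effects = [[] for _ in range(dim)]
    for _ in range(n_trajectories):
        # Random start on the grid such that every +delta move stays in [0, 1]
        x = [rng.choice([g for g in grid if g + delta <= 1.0]) for _ in range(dim)]
        y = f(x)
        for i in rng.sample(range(dim), dim):  # move each input once, in random order
            x_new = x[:]
            x_new[i] += delta
            y_new = f(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    mu_star = [sum(abs(e) for e in es) / len(es) for es in effects]
    sigma = []
    for es in effects:
        m = sum(es) / len(es)
        sigma.append((sum((e - m) ** 2 for e in es) / len(es)) ** 0.5)
    return mu_star, sigma

# Toy model: x0 strong linear, x1 weak linear, x2 interacting with x0
mu_star, sigma = morris_screen(
    lambda x: 5.0 * x[0] + 0.5 * x[1] + 4.0 * x[2] * x[0], 3)
```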
Sensitivity analysis of energy demands on performance of CCHP system
International Nuclear Information System (INIS)
Li, C.Z.; Shi, Y.M.; Huang, X.H.
2008-01-01
Sensitivity analysis of energy demands is carried out in this paper to study their influence on the performance of a CCHP system. Energy demand is a very important and complex factor in the optimization model of a CCHP system. Averages, uncertainty, and historical peaks are adopted to describe energy demands. A mixed-integer nonlinear programming (MINLP) model that can reflect these three aspects of energy demands is established. Numerical studies are carried out based on the energy demands of a hotel and a hospital. The influence of the average, uncertainty, and peaks of energy demands on the optimal facility scheme and the economic advantages of the CCHP system is investigated. The optimization results show that the optimal GT capacity and the economy of the CCHP system depend mainly on the average energy demands. The sum of the capacities of GB and HE equals the historical heating demand peak, and the sum of the capacities of AR and ER equals the historical cooling demand peak. The maximum of PG is sensitive to the historical peaks of energy demands and is not influenced by their uncertainty, while the corresponding influence on DH is the opposite.
A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors
Directory of Open Access Journals (Sweden)
Xi Yu
2014-01-01
Full Text Available Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts through the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need for an approach that can reflect the relationship between design parameters and a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors which have significant influence on the product's environmental impacts can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.
Directory of Open Access Journals (Sweden)
Ma Ning
2013-09-01
Full Text Available Purpose: Nowadays, governments around the world are active in constructing high-speed railways. Therefore, it is significant to conduct research on this increasingly prevalent transport mode. Design/methodology/approach: In this paper, we simulate the process of the passenger's travel mode choice by adjusting the ticket fare and the run-time, based on a multi-agent system (MAS). Findings: From the research we conclude that increasing the run-time appropriately and reducing the ticket fare to some extent are effective ways to enhance the passenger share of the high-speed railway. Originality/value: We hope it can provide policy recommendations for the railway sectors in developing long-term plans for high-speed railways in the future.
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
International Nuclear Information System (INIS)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae
2016-01-01
Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no behavior common to all cases could be identified. A sensitivity analysis of criticality is therefore required whenever the subject to be analyzed changes
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
Energy Technology Data Exchange (ETDEWEB)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)
2016-10-15
Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. Criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but for intervals greater than 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no behavior common to all cases could be identified. A sensitivity analysis of criticality is therefore required whenever the subject to be analyzed changes.
Energy Technology Data Exchange (ETDEWEB)
Driscoll, Donald D [Case Western Reserve Univ., Cleveland, OH (United States)
2004-05-01
The Cryogenic Dark Matter Search (CDMS) uses cryogenically-cooled detectors made of germanium and silicon in an attempt to detect dark matter in the form of Weakly-Interacting Massive Particles (WIMPs). The expected interaction rate of these particles is on the order of 1/kg/day, far below the 200/kg/day expected rate of background interactions after passive shielding and an active cosmic ray muon veto. Our detectors are instrumented to make a simultaneous measurement of both the ionization energy and thermal energy deposited by the interaction of a particle with the crystal substrate. A comparison of these two quantities allows for the rejection of a background of electromagnetically-interacting particles at a level of better than 99.9%. The dominant remaining background at a depth of ~ 11 m below the surface comes from fast neutrons produced by cosmic ray muons interacting in the rock surrounding the experiment. Contamination of our detectors by a beta emitter can add an unknown source of unrejected background. In the energy range of interest for a WIMP study, electrons will have a short penetration depth and preferentially interact near the surface. Some of the ionization signal can be lost to the charge contacts there and a decreased ionization signal relative to the thermal signal will cause a background event which interacts at the surface to be misidentified as a signal event. We can use information about the shape of the thermal signal pulse to discriminate against these surface events. Using a subset of our calibration set which contains a large fraction of electron events, we can characterize the expected behavior of surface events and construct a cut to remove them from our candidate signal events. This thesis describes the development of the 6 detectors (4 x 250 g Ge and 2 x 100 g Si) used in the 2001-2002 CDMS data run at the Stanford Underground Facility with a total of 119 livedays of data. The preliminary results presented are based on the first use
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series, and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability-analysis-based engineering practice with finite element models.
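As a cross-check of the kind the authors perform against Monte Carlo simulation, a failure probability and its sensitivity to a distribution parameter can be estimated by plain sampling. The sketch below uses the score-function (likelihood-ratio) estimator for a scalar normal variable, a sampling alternative to the perturbation method used in the paper; the limit state and all numbers are illustrative:

```python
import random

def failure_prob_and_sensitivity(mu, sigma, limit, n=100_000, seed=7):
    """Monte Carlo estimate of P_f = P(X > limit) for X ~ N(mu, sigma),
    together with dP_f/dmu via the score-function (likelihood-ratio)
    estimator: E[ 1{X > limit} * (X - mu) / sigma^2 ]."""
    rng = random.Random(seed)
    hits = score = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        if x > limit:
            hits += 1.0
            score += (x - mu) / sigma ** 2
    return hits / n, score / n

pf, dpf_dmu = failure_prob_and_sensitivity(mu=100.0, sigma=10.0, limit=120.0)
# Analytic values: 1 - Phi(2) ~ 0.0228 and phi(2)/sigma ~ 0.0054
```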
A tool model for predicting atmospheric kinetics with sensitivity analysis
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients, using sparse matrix technology applied to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate a model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation for the model equation and its sensitivity coefficient equations, and their Jacobian matrix, is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
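The direct method at the core of such a package, integrating the model ODE together with its auxiliary sensitivity equation, can be illustrated on a single first-order reaction A → B with rate constant k, where s = dA/dk obeys ds/dt = −A − k s. A Python sketch with a hand-rolled RK4 step in place of the Gear integrator (all names are illustrative):

```python
def rk4_step(f, t, y, h):
    # One classical Runge-Kutta 4 step for a system of ODEs given as lists
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def decay_with_sensitivity(k, a0=1.0, t_end=2.0, n=200):
    """Integrate dA/dt = -k A together with its coupled sensitivity
    equation ds/dt = -A - k s, where s = dA/dk (the 'direct method')."""
    rhs = lambda t, y: [-k * y[0], -y[0] - k * y[1]]
    y, h = [a0, 0.0], t_end / n
    for i in range(n):
        y = rk4_step(rhs, i * h, y, h)
    return y  # [A(t_end), dA/dk at t_end]

A, s = decay_with_sensitivity(0.7)
# Analytic check: A = exp(-k t) and dA/dk = -t exp(-k t)
```

Against the analytic solution A = A₀e^(−kt), s = −A₀ t e^(−kt), the coupled integration agrees to the order of the integrator.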
DEFF Research Database (Denmark)
Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa
2015-01-01
… it is getting more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of an HT-ATES system with a single well is conducted to select key parameters. For a fractional factorial design used to choose input parameters with uniformity, … with the Smoothly Clipped Absolute Deviation penalty, is utilized. Finally, the sensitivity analysis is performed based on the variance decomposition. According to the result of the sensitivity analysis, the most important input variables are selected and confirmed, considering the interaction effects for each case …
Contribution to the sample mean plot for graphical and numerical sensitivity analysis
International Nuclear Information System (INIS)
Bolado-Lavin, R.; Castaings, W.; Tarantola, S.
2009-01-01
The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling.
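The contribution-to-the-sample-mean curve is straightforward to compute from a plain Monte Carlo sample: sort the runs by one input and accumulate that input's share of the output sum. A minimal sketch (the linear toy model is illustrative; in the benchmark the output would be a dose calculation):

```python
import random

def csm_curve(xs, ys):
    """Contribution-to-the-sample-mean curve: fraction of the output sum
    accumulated as the input sweeps its range. A curve hugging the diagonal
    means the input barely influences the output mean."""
    n = len(xs)
    total = sum(ys)
    order = sorted(range(n), key=lambda i: xs[i])
    curve, acc = [], 0.0
    for rank, i in enumerate(order, start=1):
        acc += ys[i]
        curve.append((rank / n, acc / total))
    return curve

random.seed(3)
x1 = [random.random() for _ in range(2000)]
x2 = [random.random() for _ in range(2000)]
y = [10.0 * a + b for a, b in zip(x1, x2)]

curve_influential = csm_curve(x1, y)  # bows well below the diagonal
curve_weak = csm_curve(x2, y)         # stays close to the diagonal
```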
Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory
International Nuclear Information System (INIS)
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2005-01-01
This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik
2009-01-01
satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....
Sensitivity analysis of network DEA illustrated in branch banking
N. Avkiran
2010-01-01
Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...
N-annulated perylene-based push-pull-type sensitizers
Qi, Qingbiao; Wang, Xingzhu; Fan, Li; Zheng, Bin; Zeng, Wangdong; Luo, Jie; Huang, Kuo-Wei; Wang, Qing; Wu, Jishan
2015-01-01
Alkoxy-wrapped N-annulated perylene (NP) was synthesized and used as a rigid and coplanar π-linker for three push-pull-type metal-free sensitizers QB1-QB3. Their optical and electrochemical properties were tuned by varying the structure of the acceptor. These new dyes were applied in Co(II)/(III)-based dye-sensitized solar cells, and a power conversion efficiency of up to 6.95% was achieved, indicating that NP could be used as a new building block for the design of high-performance sensitizers in the future.
Variable screening and ranking using sampling-based sensitivity measures
International Nuclear Information System (INIS)
Wu, Y-T.; Mohanty, Sitakanta
2006-01-01
This paper presents a methodology for screening out insignificant random variables and ranking the significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from hypothesis testing, to classify random variables as significant or insignificant. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have many random variables but relatively few significant ones.
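One member of this family of CDF-based measures can be sketched generically: compare each input's unconditional sample CDF with its CDF conditioned on the output exceeding a high quantile; insignificant variables give a near-zero Kolmogorov-Smirnov distance, which an acceptance limit can then threshold. The estimator and the toy model below are illustrative assumptions, not the authors' exact measures.

```python
import numpy as np

def ks_sensitivity(X, y, q=0.9):
    """For each input column, the Kolmogorov-Smirnov distance between
    its unconditional sample CDF and its CDF conditioned on the output
    exceeding the q-quantile.  Near zero suggests an insignificant input."""
    sel = y > np.quantile(y, q)
    stats = []
    for j in range(X.shape[1]):
        grid = np.sort(X[:, j])                  # evaluate both CDFs here
        cdf_all = np.arange(1, len(y) + 1) / len(y)
        cdf_cond = np.searchsorted(np.sort(X[sel, j]), grid,
                                   side="right") / sel.sum()
        stats.append(np.max(np.abs(cdf_all - cdf_cond)))
    return np.array(stats)

rng = np.random.default_rng(3)
X = rng.uniform(size=(20_000, 3))        # third column is a dummy input
y = X[:, 0]**2 + 0.1 * X[:, 1]
d = ks_sensitivity(X, y)                 # d[0] large, d[2] near zero
```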
A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.
Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P
2018-04-01
Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The two-stage approach significantly reduced the computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures, as well as extending the framework to consider a whole-body PBPK model.
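The Morris screening stage works from elementary effects: r randomly placed one-at-a-time perturbations per parameter, summarized by mu* (mean absolute effect, used for ranking) and sigma (spread, flagging nonlinearity or interactions). A minimal sketch on the unit hypercube (generic, not the GastroPlus/AutoIt framework):

```python
import numpy as np

def morris_screen(f, k, r=50, delta=0.1, seed=0):
    """Crude Morris screening: r one-at-a-time perturbations per input.
    Returns mu_star (mean |elementary effect|) and sigma (its std)."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # random base point
        y0 = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                          # perturb one input
            ee[i, j] = (f(xp) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: x0 strong and linear, x1*x2 an interaction, x3 inert.
f = lambda x: 4.0 * x[0] + x[1] * x[2]
mu_star, sigma = morris_screen(f, k=4)
```

Parameters whose mu* clears a screening threshold would then go forward to the quantitative Sobol' stage.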
Sensitivity analysis of hybrid power systems using Power Pinch Analysis considering Feed-in Tariff
International Nuclear Information System (INIS)
Mohammad Rozali, Nor Erniza; Wan Alwi, Sharifah Rafidah; Manan, Zainuddin Abdul; Klemeš, Jiří Jaromír
2016-01-01
Feed-in Tariff (FiT) has been one of the most effective policies in accelerating the development of renewable energy (RE) projects. The amount of RE electricity in the FiT purchase agreement is an important decision that has to be made by the RE project developers. They have to consider various crucial factors associated with RE system operation as well as its stochastic nature. The presented work aims to assess the sensitivity and profitability of a hybrid power system (HPS) in cases of RE system failure or shutdown. The amount of RE electricity for the FiT purchase agreement in various scenarios was determined using a novel tool called On-Grid Problem Table based on the Power Pinch Analysis (PoPA). A sensitivity table has also been introduced to assist planners to evaluate the effects of the RE system's failure on the profitability of the HPS. This table offers insights on the variance of the RE electricity. The sensitivity analysis of various possible scenarios shows that the RE projects can still provide financial benefits via the FiT, despite the losses incurred from the penalty levied. - Highlights: • A Power Pinch Analysis (PoPA) tool to assess the economics of an HPS with FiT. • The new On-Grid Problem Table for targeting the available RE electricity for FiT sale. • A sensitivity table showing the effect of RE electricity changes on the HPS profitability.
2012-01-01
Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of study.
Sensitivity analysis of the reactor safety study. Final report
International Nuclear Information System (INIS)
Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.
1979-01-01
The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. To give further insight into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and the BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were the release category probabilities, the core melt probability, and the risk parameters of early fatalities, latent cancers, and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for reductions in system or generic failure probabilities by factors as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates.
Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone
International Nuclear Information System (INIS)
Hayes, F.; Jones, M.L.M.; Mills, G.; Ashmore, M.
2007-01-01
This study identified 83 species from existing publications suitable for inclusion in a database of sensitivity of species to ozone (OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships in comparison to plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density and maximum altitude were found. The relative sensitivity of species and relationships with plant characteristics identified in this study could be used to predict sensitivity to ozone of untested species and communities. - Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone showed some relationships with physiological and ecological characteristics
Energy Technology Data Exchange (ETDEWEB)
Cheng, Jun; Cao, Yaxiong; Liang, Xiaozhong; Zheng, Jingxia; Zhang, Fang [Ministry of Education Key Laboratory of Interface Science and Engineering in Advanced Materials, Research Center of Advanced Materials Science and Technology, Taiyuan University of Technology, Taiyuan 030024 (China); Wei, Shuxian; Lu, Xiaoqing [College of Science, China University of Petroleum, Qingdao, Shandong 266555 (China); Guo, Kunpeng, E-mail: guokunpeng@tyut.edu.cn [Ministry of Education Key Laboratory of Interface Science and Engineering in Advanced Materials, Research Center of Advanced Materials Science and Technology, Taiyuan University of Technology, Taiyuan 030024 (China); Yang, Shihe, E-mail: chsyang@ust.hk [Ministry of Education Key Laboratory of Interface Science and Engineering in Advanced Materials, Research Center of Advanced Materials Science and Technology, Taiyuan University of Technology, Taiyuan 030024 (China); Department of Chemistry, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (China)
2017-05-01
Three dithiafulvene-based metal-free organic sensitizers, all using pyridine as the acceptor but with different π-bridges of phenyl (DTF-Py1), thienyl (DTF-Py2), and phenyl-thienyl (DTF-Py3), have been designed, synthesized, and used as photosensitizers for dye-sensitized solar cells (DSCs). Introducing a thienyl unit into the π-bridge, as well as extending the π-bridge, can dramatically improve their light-harvesting ability and suppress electron recombination, thus improving the performance of the DSCs. The DSC based on DTF-Py3 shows the highest overall power conversion efficiency, 2.61%, with a short-circuit photocurrent density of 7.99 mA cm⁻², an open-circuit photovoltage of 630 mV, and a fill factor of 0.52 under standard global AM 1.5 solar light conditions. More importantly, the long-term stability of the DTF-Py3-based DSCs under 500 h of light soaking has been demonstrated. - Highlights: • Dithiafulvene sensitizers using a pyridine ring as the acceptor were synthesized for the first time. • A power conversion efficiency of 2.61% was obtained for the DTF-Py3-sensitized cell. • The DTF-Py3-loaded TiO₂ film shows improved light-harvesting ability and suppressed electron recombination.
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions, which reflect source term uncertainties, to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks
Directory of Open Access Journals (Sweden)
Harry R. Millwater
2006-01-01
Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel and common damage mechanisms (inherent defects or surface damage can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk analysis considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
Application of sensitivity analysis for optimized piping support design
International Nuclear Information System (INIS)
Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.
1993-01-01
The objective of this study was to see whether recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems that use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Optimization of the design of the piping system supports was then investigated, selecting the support locations and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
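The partial correlation coefficient used for the sensitivity ranking can be computed from the same random sample as the uncertainty analysis: regress both the parameter and the output on the remaining parameters, then correlate the residuals. The sketch below uses an illustrative linear toy model, not PATHWAY itself.

```python
import numpy as np

def partial_corr(X, y, j):
    """Partial correlation of input column j with output y,
    controlling for the remaining columns via least squares."""
    A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(y))])
    res_x = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    res_y = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(np.corrcoef(res_x, res_y)[0, 1])

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))                    # sampled parameters
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=5_000)
pcc = [partial_corr(X, y, j) for j in range(3)]    # ranks x0 > x1 > x2
```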
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
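A first-order Sobol' index of the kind computed here can be estimated with a pick-freeze scheme. The sketch below uses the Jansen estimator on a uniform unit hypercube with an illustrative linear test function; it is not VarroaPop or its actual inputs.

```python
import numpy as np

def sobol_first_order(f, k, n=2**14, seed=0):
    """First-order Sobol' indices via the Jansen pick-freeze estimator:
    S_j = 1 - E[(f(B) - f(AB_j))^2] / (2 Var f),
    where AB_j is sample matrix A with column j swapped in from B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for j in range(k):
        AB = A.copy()
        AB[:, j] = B[:, j]                   # "freeze" input j
        S[j] = 1.0 - np.mean((fB - f(AB)) ** 2) / (2.0 * var)
    return S

f = lambda X: X[:, 0] + 2.0 * X[:, 1]   # exact indices: 0.2 and 0.8
S = sobol_first_order(f, k=2)
```

Second-order indices extend the same idea with pairs of swapped columns.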
Korayem, M H; Mahmoodi, Z; Mohammadi, M
2018-01-07
Because the same AFM probe serves as both the imaging and the manipulation tool, modeling and simulation of AFM-based manipulation processes has become necessary. In earlier studies, the dynamic behavior of biological particles in the course of manipulation was modeled and simulated two-dimensionally. With the advancements made in modeling techniques, a 3D model of the manipulation of biological particles is more accurate than its 2D counterpart. In this paper, the effect of humidity has been taken into consideration in the three-dimensional modeling of the manipulation. By employing this model, the equations for the motion modes of particles (sliding, rolling, and spinning) at the onset of movement have been derived and the critical force magnitude has been obtained. In order to reduce the potential damage to the manipulated biological particle, the maximum radius of the tip has been determined. The effective parameters in this process have been extracted by performing a sensitivity analysis using the Sobol method. In comparison with the results obtained for a dry environment, the results obtained by simulating the manipulation of a yeast particle in a wet environment show that the critical force for the onset of particle movement diminishes when the moisture effect (high humidity levels) is considered. The parameters influencing the magnitude of the critical force include the particle radius, particle material, surface energy of the chosen substrate, amount of preload, and the contact angle. The results of the performed sensitivity analysis also indicate a very high influence of particle radius on the critical manipulation force and a very low impact of cantilever width. Copyright © 2017 Elsevier Ltd. All rights reserved.
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost
Relative performance of academic departments using DEA with sensitivity analysis.
Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P
2009-05-01
The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, the UK, and Australia, but to the best of our knowledge we are applying it for the first time in the Indian context. Applying DEA models, we calculate technical, pure technical, and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance, and teaching performance are assessed separately using sensitivity analysis.
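Each department's technical efficiency score comes from one small linear program. A hedged sketch of the input-oriented CCR envelopment model with SciPy follows; the three-DMU data set is a made-up illustration, not the IIT Roorkee inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    For each DMU o, solve:
        min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]           # variables: [theta, lam_1..lam_n]
        A_ub = np.zeros((m + s, n + 1))
        A_ub[:m, 0] = -X[o]                   # X^T lam - theta * x_o <= 0
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                   # -Y^T lam <= -y_o
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[2.0], [4.0], [4.0]])   # one input per DMU
Y = np.array([[2.0], [4.0], [2.0]])   # one output per DMU
eff = dea_ccr_input(X, Y)             # third DMU is inefficient
```

The sensitivity analysis in the paper then re-solves such programs under the different input-output specifications.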
Global sensitivity analysis using sparse grid interpolation and polynomial chaos
International Nuclear Information System (INIS)
Buzzard, Gregery T.
2012-01-01
Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon
2018-05-18
We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.
Sensitivity analysis of water consumption in an office building
Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan
2018-02-01
This article deals with sensitivity analysis of real water consumption in an office building. During a long-term real study, reducing of pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time step duration. Correlations between maximal coefficients of water demand variation during working time and provided pressure were suggested. The influence of provided pressure in the water connection on mean coefficients of water demand variation was pointed out, altogether for working hours of all days and separately for days with identical working hours.
Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities
Directory of Open Access Journals (Sweden)
Thi Thanh Huyen Nguyen
2015-11-01
Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher-education institutes with PhD training programs) in Vietnam by applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, obtained through the replacement as well as the aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the choice of variable sets. The findings also show the impact of the variables on efficiency and its "sustainability".
Seismic analysis of steam generator and parameter sensitivity studies
International Nuclear Information System (INIS)
Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun
2013-01-01
Background: The steam generator (SG) serves as the primary means of removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, including the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS piping and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the sensitivity of the seismic analysis results to various parameters is studied, such as the effect of the other SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, which should be a focus of future new-type NPP SG research and design. (authors)
Reliability-based sensitivity of mechanical components with arbitrary distribution parameters
International Nuclear Information System (INIS)
Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling
2010-01-01
This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, reliability-based design theory, and sensitivity analysis were employed to calculate the reliability-based sensitivity of mechanical components directly, on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of mechanical components can be obtained accurately and quickly using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretical basis for the reliability-based design of mechanical components.
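The core idea above can be illustrated with a deliberately simplified two-moment version (the paper itself uses four moments and Edgeworth-series corrections for non-normal inputs). For a margin g = R − S with independent strength R and load S, the reliability index and its parameter sensitivities have closed forms; the numbers below are invented:

```python
import math

# Simplified first-order illustration of reliability-based sensitivity:
# beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2) for the margin g = R - S,
# with sensitivities given by the partial derivatives of beta.
def beta(mu_r, sig_r, mu_s, sig_s):
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def dbeta_dmu_r(mu_r, sig_r, mu_s, sig_s):
    # raising the mean strength always raises beta
    return 1.0 / math.sqrt(sig_r ** 2 + sig_s ** 2)

def dbeta_dsig_r(mu_r, sig_r, mu_s, sig_s):
    # raising the strength scatter lowers beta (for mu_r > mu_s)
    sg2 = sig_r ** 2 + sig_s ** 2
    return -(mu_r - mu_s) * sig_r / sg2 ** 1.5

# invented component data: strength mean 300, load mean 200
b = beta(300, 30, 200, 20)
print(round(b, 3))                       # reliability index ~2.77
print(dbeta_dmu_r(300, 30, 200, 20))    # positive: mean strength helps
print(dbeta_dsig_r(300, 30, 200, 20))   # negative: strength scatter hurts
```

The signs and magnitudes of such derivatives are exactly the "reliability-based sensitivity information" that tells a designer which parameter is worth tightening.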
Monte Carlo evaluation of derivative-based global sensitivity measures
Energy Technology Data Exchange (ETDEWEB)
Kucherenko, S. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)], E-mail: s.kucherenko@ic.ac.uk; Rodriguez-Fernandez, M. [Process Engineering Group, Instituto de Investigaciones Marinas, Spanish Council for Scientific Research (C.S.I.C.), C/ Eduardo Cabello, 6, 36208 Vigo (Spain); Pantelides, C.; Shah, N. [Centre for Process Systems Engineering, Imperial College London, London SW7 2AZ (United Kingdom)
2009-07-15
A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.
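The DGSM of the abstract is, in its simplest form, the mean squared partial derivative over the input domain, ν_i = E[(∂f/∂x_i)²]. A plain Monte Carlo sketch (not the paper's QMC implementation) with a central finite-difference gradient and an invented test function:

```python
import random

# Hedged sketch of a derivative-based global sensitivity measure (DGSM):
# nu_i = E[(df/dx_i)^2], estimated by Monte Carlo over U(0,1)^dim with a
# central finite-difference gradient. The test function f is invented.
random.seed(0)

def f(x):
    return x[0] + x[1] ** 2  # df/dx0 = 1, df/dx1 = 2*x1

def dgsm(f, dim, n=20000, h=1e-5):
    measures = [0.0] * dim
    for _ in range(n):
        x = [random.random() for _ in range(dim)]  # uniform sample point
        for i in range(dim):
            xp, xm = x[:], x[:]
            xp[i] += h
            xm[i] -= h
            g = (f(xp) - f(xm)) / (2 * h)  # central difference
            measures[i] += g * g
    return [m / n for m in measures]

nu = dgsm(f, 2)
print(nu)  # analytically nu = [1, E[4*x1^2]] = [1, 4/3]
```

As the abstract notes, each sample requires only one gradient (here 2·dim model evaluations), which is why DGSM is orders of magnitude cheaper than estimating full Sobol' indices.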
Monte Carlo evaluation of derivative-based global sensitivity measures
International Nuclear Information System (INIS)
Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N.
2009-01-01
A novel approach for evaluation of derivative-based global sensitivity measures (DGSM) is presented. It is compared with the Morris and the Sobol' sensitivity indices methods. It is shown that there is a link between DGSM and Sobol' sensitivity indices. DGSM are very easy to implement and evaluate numerically. The computational time required for numerical evaluation of DGSM is many orders of magnitude lower than that for estimation of the Sobol' sensitivity indices. It is also lower than that for the Morris method. Efficiencies of Monte Carlo (MC) and quasi-Monte Carlo (QMC) sampling methods for calculation of DGSM are compared. It is shown that the superiority of QMC over MC depends on the problem's effective dimension, which can also be estimated using DGSM.
Energy Technology Data Exchange (ETDEWEB)
Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)
2015-08-15
Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach outperforms a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach to Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs), which approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. To keep the computational cost low, an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables and thereby identify those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering the Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.
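The FMM/EM building block can be sketched in a toy 1-D setting: a two-component Gaussian mixture fitted by Expectation-Maximization to approximate an output density from samples. This is our own minimal illustration with invented data; the paper's approach is multivariate and adds a saliency calculation on top:

```python
import math
import random

# Toy 1-D two-component Gaussian mixture fitted by EM, sketching the
# Finite Mixture Model used to approximate a code output pdf from a
# limited sample. Data are invented: two well-separated modes at 0 and 10.
random.seed(3)
data = ([random.gauss(0.0, 1.0) for _ in range(300)] +
        [random.gauss(10.0, 1.0) for _ in range(300)])

def pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

w, mu, sig = [0.5, 0.5], [1.0, 8.0], [2.0, 2.0]  # crude initial guesses
for _ in range(50):
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        p = [w[k] * pdf(x, mu[k], sig[k]) for k in range(2)]
        tot = sum(p)
        resp.append([v / tot for v in p])
    # M-step: re-estimate weights, means, and spreads
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sig[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                               for r, x in zip(resp, data)) / nk)

print([round(m, 1) for m in sorted(mu)])  # recovers means near 0 and 10
```

With a handful of components, such a mixture can represent a multimodal output density that a single-Gaussian assumption would miss, which is the point of using FMMs when each TH code run is expensive.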
A cell-based in vitro alternative to identify skin sensitizers by gene expression
International Nuclear Information System (INIS)
Hooyberghs, Jef; Schoeters, Elke; Lambrechts, Nathalie; Nelissen, Inge; Witters, Hilda; Schoeters, Greet; Heuvel, Rosette van den
2008-01-01
The ethical and economic burden associated with animal testing for the assessment of skin sensitization has triggered intensive research effort towards the development and validation of alternative methods. In addition, new legislation on the registration and use of cosmetics and chemicals promotes the use of suitable alternatives for hazard assessment. Our previous studies demonstrated that human CD34+ progenitor-derived dendritic cells from cord blood express specific gene profiles upon exposure to low molecular weight sensitizing chemicals. This paper presents a classification model based on this cell type which successfully discriminates sensitizing from non-sensitizing chemicals on the basis of transcriptome analysis of 13 genes. Expression profiles of a set of 10 sensitizers and 11 non-sensitizers were analyzed by RT-PCR using 9 different exposure conditions and a total of 73 donor samples. Based on these data, a predictive dichotomous classifier for skin sensitizers has been constructed. In a first step, the dimensionality of the input data was reduced by selectively rejecting a number of exposure conditions and genes. Next, the generalization of a linear classifier was evaluated by cross-validation, which resulted in a prediction performance with a concordance of 89%, a specificity of 97%, and a sensitivity of 82%. These results show that the present model may be a useful human in vitro alternative for further use in a test strategy towards the reduction of animal use for skin sensitization.
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described, and their impact and applicability in the general areas of modeling, performance assessment, and decision making for radioactive waste isolation problems are discussed.
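The principle behind derivative-augmenting compilers such as GRESS can be illustrated with forward-mode automatic differentiation via dual numbers: every value carries its derivative, and arithmetic propagates both exactly. This is a toy Python illustration of the idea, not the GRESS implementation (which instruments FORTRAN source):

```python
import math

# Minimal forward-mode automatic differentiation with dual numbers:
# each Dual carries (value, derivative) and operator overloads apply
# the product and chain rules exactly, with no finite-difference error.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def dsin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)  # chain rule

def derivative(f, x0):
    return f(Dual(x0, 1.0)).der  # seed the input's derivative with 1

# d/dx [x*x + sin(x)] at x = 1 is 2 + cos(1)
print(derivative(lambda x: x * x + dsin(x), 1.0))
```

Because the derivative propagates alongside the computation itself, the sensitivity of any output with respect to any input comes "for free" once the code is instrumented, which is what made such tools cost-effective compared with rerunning a model for each perturbed parameter.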
Sensitization trajectories in childhood revealed by using a cluster analysis
DEFF Research Database (Denmark)
Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik
2017-01-01
BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent patterns of sensitization.
Directory of Open Access Journals (Sweden)
Timofeeva Maria
2012-03-01
Full Text Available A theoretical and experimental comparison of optimized search-coil-based magnetometers, operating either in the flux mode or in the classical Lenz-Faraday mode, is presented. The improvements provided by the flux mode in terms of bandwidth and measuring range of the sensor are detailed. Theory, SPICE model, and measurements are in good agreement. The spatial resolution of the sensor is studied, which is an important parameter for applications in non-destructive evaluation. A general expression of the magnetic sensitivity of search-coil sensors is derived. Solutions are proposed to design magnetometers with reduced weight and volume without degrading the magnetic sensitivity. An original differential search-coil-based magnetometer, made of coupled coils, operating in flux mode and connected to a differential transimpedance amplifier is proposed. It is shown that this structure is better in terms of volume occupancy than magnetometers using two separate coils, without any degradation in magnetic sensitivity. Experimental results are in good agreement with calculations.
A high sensitivity nanomaterial based SAW humidity sensor
Energy Technology Data Exchange (ETDEWEB)
Wu, T-T; Chou, T-H [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Chen, Y-Y [Department of Mechanical Engineering, Tatung University, Taipei 104, Taiwan (China)], E-mail: wutt@ndt.iam.ntu.edu.tw
2008-04-21
In this paper, a highly sensitive humidity sensor is reported. The humidity sensor is configured by a 128° YX-LiNbO₃ based surface acoustic wave (SAW) resonator whose operating frequency is at 145 MHz. A dual delay line configuration is realized to eliminate external temperature fluctuations. Moreover, for nanostructured materials possessing high surface-to-volume ratio, large penetration depth and fast charge diffusion rate, camphor sulfonic acid doped polyaniline (PANI) nanofibres are synthesized by the interfacial polymerization method and further deposited on the SAW resonator as selective coating to enhance sensitivity. The humidity sensor is used to measure various relative humidities in the range 5-90% at room temperature. Results show that the PANI nanofibre based SAW humidity sensor exhibits excellent sensitivity and short-term repeatability.
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters are often encountered in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
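A simplified first-order (delta-method) version of analytic propagation with correlated inputs makes the role of the covariance terms explicit: Var(y) ≈ Σᵢⱼ gᵢ gⱼ Covᵢⱼ, where g is the model gradient at the input means. This sketch (our own illustration, not the paper's full method) uses a linear model, for which the formula is exact:

```python
# First-order analytic uncertainty propagation with correlated inputs:
# Var(y) ~ sum_ij g_i * g_j * Cov_ij, where g is the gradient at the mean.
# For a linear model the approximation is exact, which lets us verify it.
def propagate(grad, cov):
    dim = len(grad)
    return sum(grad[i] * grad[j] * cov[i][j]
               for i in range(dim) for j in range(dim))

# invented example: y = 2*x1 + 3*x2, correlated inputs
s1, s2, rho = 0.5, 0.2, 0.6
cov = [[s1 * s1, rho * s1 * s2],
       [rho * s1 * s2, s2 * s2]]
var_y = propagate([2.0, 3.0], cov)

# closed form: (2*s1)^2 + (3*s2)^2 + 2*2*3*rho*s1*s2
exact = (2 * s1) ** 2 + (3 * s2) ** 2 + 2 * 2 * 3 * rho * s1 * s2
print(var_y, exact)  # both 2.08
```

Setting rho to 0 drops the cross term (here 0.72 of the 2.08 total), which is exactly the error one commits by wrongly assuming independent inputs.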
Sensitivity Analysis Applied in Design of Low Energy Office Building
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik
2008-01-01
satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind
2007-01-01
satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Sensitivity analysis for contagion effects in social networks
VanderWeele, Tyler J.
2014-01-01
Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert "contagion effects" on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
Sensitivity analysis of the Ohio phosphorus risk index
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Sensitivity analysis of railpad parameters on vertical railway track dynamics
Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.
2016-01-01
This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad
Methods for global sensitivity analysis in life cycle assessment
Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.
2017-01-01
Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to
Sensitivity analysis on ultimate strength of aluminium stiffened panels
DEFF Research Database (Denmark)
Rigo, P.; Sarghiuta, R.; Estefen, S.
2003-01-01
This paper presents the results of an extensive sensitivity analysis carried out by Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...
Sensitivity and specificity of coherence and phase synchronization analysis
International Nuclear Information System (INIS)
Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens
2006-01-01
In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems
Sensitivity Analysis of Structures by Virtual Distortion Method
DEFF Research Database (Denmark)
Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard
1991-01-01
are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of the sensitivity derivatives. This method has originally been applied to structural remodelling and collapse analysis, see...
Design tradeoff studies and sensitivity analysis. Appendix B
Energy Technology Data Exchange (ETDEWEB)
1979-05-25
The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)
Tuning pentacene based dye-sensitized solar cells.
Kunzmann, Andreas; Gruber, Marco; Casillas, Rubén; Tykwinski, Rik R; Costa, Rubén D; Guldi, Dirk M
2018-05-10
We report on the synthesis, as well as photophysical and electrochemical characterization of a new family of pentacene derivatives, which are applied in n-type dye-sensitized solar cells (DSSCs). As far as the molecular structure of the pentacene is concerned, the synthetic design focuses on cyano acrylic tethered at the 13-position of the pentacene chromophore. The electrolyte composition features increasing amounts of Li+ ions as an additive. In general, the increase of Li+ concentrations extrinsically reduces the quasi Fermi level of the photoanode and as such facilitates the electron injection process. We demonstrate that pentacene derivatives give rise to a unique charge injection process, which is controlled by the positioning of the quasi Fermi level energies as a function of the Li+ concentration. As a result of the enhanced charge injection, device efficiencies as high as 1.5% are achieved, representing a 3-fold increase from previously reported efficiencies in pentacene-based DSSCs. These findings are supported by device analysis in combination with transient absorption and electrochemical impedance spectroscopy assays.
Monte Carlo sensitivity analysis of an Eulerian large-scale air pollution model
International Nuclear Information System (INIS)
Dimov, I.; Georgieva, R.; Ostromsky, Tz.
2012-01-01
Variance-based approaches for global sensitivity analysis have been applied and analyzed to study the sensitivity of air pollutant concentrations to variations in the rates of chemical reactions. The Unified Danish Eulerian Model has been used as a mathematical model simulating the remote transport of air pollutants. Various Monte Carlo algorithms for numerical integration have been applied to compute Sobol's global sensitivity indices. A newly developed Monte Carlo algorithm based on Sobol's quasi-random points, MCA-MSS, has been applied for numerical integration. It has been compared with some existing approaches, namely Sobol's LPτ sequences, an adaptive Monte Carlo algorithm, the plain Monte Carlo algorithm, as well as eFAST and Sobol's sensitivity approaches, both implemented in the SIMLAB software. The analysis and numerical results show advantages of MCA-MSS for relatively small sensitivity indices in terms of accuracy and efficiency. Practical guidelines on the estimation of Sobol's global sensitivity indices in the presence of computational difficulties have been provided. - Highlights: ► Variance-based global sensitivity analysis is performed for the air pollution model UNI-DEM. ► The main effect of input parameters dominates over higher-order interactions. ► Ozone concentrations are influenced mostly by variability of three chemical reaction rates. ► The newly developed MCA-MSS for multidimensional integration is compared with other approaches. ► More precise approaches like MCA-MSS should be applied when the needed accuracy has not been achieved.
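The Sobol' indices at the heart of the entry above can be estimated with a plain Monte Carlo "pick-freeze" scheme: two independent sample matrices A and B, plus matrices AB_i where column i of A is swapped in from B. This is a generic Saltelli-type sketch with an invented additive model, not the paper's MCA-MSS algorithm:

```python
import random

# Plain Monte Carlo estimator for Sobol' first-order indices (pick-freeze):
# S_i ~ (1/n) * sum_k f(B_k) * (f(AB_i,k) - f(A_k)) / Var(f),
# where AB_i is A with column i replaced by B's column i.
random.seed(42)

def f(x):
    return x[0] + 2.0 * x[1]  # additive model: analytically S = [1/5, 4/5]

def sobol_first_order(f, dim, n=50000):
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((v - mean) ** 2 for v in fA + fB) / (2 * n)
    S = []
    for i in range(dim):
        # A-sample with coordinate i "frozen" from the B-sample
        fABi = [f(A[k][:i] + [B[k][i]] + A[k][i + 1:]) for k in range(n)]
        S.append(sum(fB[k] * (fABi[k] - fA[k]) for k in range(n)) / n / var)
    return S

S = sobol_first_order(f, 2)
print([round(s, 2) for s in S])  # close to [0.2, 0.8]
```

Plain Monte Carlo converges as O(n^-1/2); the paper's point is that quasi-random (Sobol' sequence) sampling and schemes like MCA-MSS reach a given accuracy with far fewer model runs, especially when the indices themselves are small.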
Sensitivity-based research prioritization through stochastic characterization modeling
DEFF Research Database (Denmark)
Wender, Ben A.; Prado-Lopez, Valentina; Fantke, Peter
2018-01-01
to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according...
Examining Appearance-Based Rejection Sensitivity during Early Adolescence
Bowker, Julie C.; Thomas, Katelyn K.; Spencer, Sarah V.; Park, Lora E.
2013-01-01
The present study of 150 adolescents ("M" age = 13.05 years) examined the associations between appearance-based rejection sensitivity (Appearance-RS) and psychological adjustment during early adolescence, and evaluated three types of other-gender peer experiences (other-gender friendship, peer acceptance, and romantic relationships) as…
Content sensitivity based access control framework for Hadoop
Directory of Open Access Journals (Sweden)
T.K. Ashwin Kumar
2017-11-01
Full Text Available Big data technologies have seen tremendous growth in recent years. They are widely used in both industry and academia. In spite of such exponential growth, these technologies lack adequate measures to protect data from misuse/abuse. Corporations that collect data from multiple sources are at risk of liabilities due to the exposure of sensitive information. In the current implementation of Hadoop, only file-level access control is feasible. Providing users with the ability to access data based on the attributes in a dataset or the user's role is complicated because of the sheer volume and multiple formats (structured, unstructured and semi-structured) of data. In this paper, we propose an access control framework, which enforces access control policies dynamically based on the sensitivity of the data. This framework enforces access control policies by harnessing the data context, usage patterns and information sensitivity. Information sensitivity changes over time with the addition and removal of datasets, which can lead to modifications in access control decisions. The proposed framework accommodates these changes. The proposed framework is automated to a large extent as the data itself determines the sensitivity with minimal user intervention. Our experimental results show that the proposed framework is capable of enforcing access control policies on non-multimedia datasets with minimal overhead.
International Nuclear Information System (INIS)
Song, Kee Nam
1998-01-01
The elastic stiffness formula of the leaf-type hold-down spring (HDS) assembly is verified by comparing its stiffness values with the characteristic test results of HDS specimens. The comparisons show that the derived elastic stiffness formula is useful for reliably estimating the elastic stiffness of the leaf-type HDS assembly. The elastic stiffness sensitivity of the leaf-type HDS assembly is analyzed using the formula and its gradient vectors obtained from the mid-point formula. As a result of the sensitivity analysis, the elastic stiffness sensitivity with respect to each design variable is quantified, and the design variables with large sensitivity are identified. Among the design variables, leaf thickness is identified as the most sensitive design variable for the elastic stiffness of the leaf-type HDS assembly. In addition, the elastic stiffness sensitivity with respect to each design variable shows a power-law-type correlation to the base thickness of the leaf. (author)
Sensitivity analysis methods and a biosphere test case implemented in EIKOS
International Nuclear Information System (INIS)
Ekstroem, P.A.; Broed, R.
2006-05-01
Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations, which depend on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Usually, models of complex natural systems are non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
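One of the simpler techniques in the EIKOS toolbox, standardized regression coefficients (SRC), can be sketched compactly: standardize inputs and output, then the regression coefficients rank the inputs, and for two regressors they follow in closed form from the pairwise correlations. The model and data below are invented for illustration:

```python
import math
import random

# Standardized Regression Coefficients (SRC) for two inputs: with all
# variables standardized, b1 = (r1y - r12*r2y) / (1 - r12^2) and
# symmetrically for b2, where r.. are sample Pearson correlations.
random.seed(7)

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((x - mv) ** 2 for x in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

# invented nearly linear model y = 3*x1 + x2 + noise, independent inputs
x1 = [random.gauss(0, 1) for _ in range(4000)]
x2 = [random.gauss(0, 1) for _ in range(4000)]
y = [3 * a + 1 * b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

r1y, r2y, r12 = corr(x1, y), corr(x2, y), corr(x1, x2)
src1 = (r1y - r12 * r2y) / (1 - r12 ** 2)
src2 = (r2y - r12 * r1y) / (1 - r12 ** 2)
print(round(src1, 2), round(src2, 2))  # analytically 3/sqrt(10) and 1/sqrt(10)
```

For this linear model src1² + src2² ≈ 1, i.e. the SRCs partition the output variance; for strongly non-linear or non-monotone models that decomposition breaks down, which is exactly why EIKOS also implements Sobol', EFAST, and the other variance-based methods.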
Sensitivity analysis methods and a biosphere test case implemented in EIKOS
Energy Technology Data Exchange (ETDEWEB)
Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)
2006-05-15
Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of those variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation, and is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, for example correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, as well as non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), the Sobol' method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
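The contrast between linear and rank-based measures that motivates a toolbox like EIKOS can be sketched in a few lines. The following is an illustrative sketch only (in Python, not EIKOS's Matlab); the test model and variable names are invented: a monotone but strongly non-linear model, where the rank correlation (RCC) recovers the influence of x2 better than the linear correlation (CC), and a dummy input x3 scores near zero under both.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
x3 = rng.uniform(size=n)              # dummy input, not in the model
y = 0.2 * x1 + np.exp(4.0 * x2)       # monotone but strongly non-linear in x2

def pearson(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    # Pearson correlation of the rank-transformed samples
    r = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(r(a), r(b))

inputs = {"x1": x1, "x2": x2, "x3": x3}
cc  = {k: pearson(v, y)  for k, v in inputs.items()}   # linear CC
rcc = {k: spearman(v, y) for k, v in inputs.items()}   # rank RCC
```

For this model the linear coefficient understates x2 (the relation is convex, not linear), while the rank coefficient is close to 1 because the relation is monotone.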
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Nakshatrala, Praveen B.; Tortorelli, Daniel A.
2014-01-01
Gradient-based topology optimization typically involves thousands or millions of design variables. This makes efficient sensitivity analysis essential and for this the adjoint variable method (AVM) is indispensable. For transient problems it has been observed that the traditional AVM, based on a ...
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
International Nuclear Information System (INIS)
Vaurio, J.K.
1985-01-01
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds) of input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if preferred, and still use the rest of SCREEN for identifying important input variables
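SCREEN's actual algorithm is not spelled out in this abstract, but the general idea of screening many uncertain inputs from one batch of sampled runs can be sketched with standardized regression coefficients (SRC). Everything below is a hypothetical stand-in (toy model, invented dimensions), not SCREEN itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
# ten uncertain inputs, only two of which actually drive the output
X = rng.normal(size=(n, 10))
y = 3.0 * X[:, 2] - 1.5 * X[:, 7] + rng.normal(scale=0.1, size=n)

# one least-squares fit over the sampled runs yields standardized
# regression coefficients; ranking |SRC| screens the important inputs
Z = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(Z, y, rcond=None)[0][1:]
src = beta * X.std(axis=0) / y.std()
ranked = np.argsort(-np.abs(src))      # most important inputs first
```

The same sampled runs could then feed a response-surface code in the PROSA-2 style, which is the compatibility the abstract describes.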
Sensitivity analysis of critical experiments with evaluated nuclear data libraries
International Nuclear Information System (INIS)
Fujiwara, D.; Kosaka, S.
2008-01-01
Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to identify the sources of difference in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)
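To first order, decomposing a k_eff discrepancy between two libraries is a sum of sensitivity-weighted relative cross-section differences, Δk/k ≈ Σᵢ Sᵢ·(Δσᵢ/σᵢ). A toy sketch with invented numbers (not MVP2 data; the reaction names and values are illustrative only):

```python
# hypothetical relative sensitivities S = (dk/k)/(dsigma/sigma) and relative
# cross-section differences between two libraries, per reaction (invented)
S = {"U235 nu-bar": 0.95, "U238 capture": -0.12, "H-1 elastic": 0.25}
d_sigma = {"U235 nu-bar": 0.002, "U238 capture": -0.010, "H-1 elastic": 0.004}

# first-order decomposition of the k_eff discrepancy between the libraries
contrib = {r: S[r] * d_sigma[r] for r in S}
dk_over_k = sum(contrib.values())
```

Note that a negative sensitivity times a negative library difference (here U238 capture) contributes positively, which is why the per-reaction breakdown, not just the total, is needed to trace the source of a discrepancy.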
International Nuclear Information System (INIS)
Lee, Tae Hee; Yoo, Jung Hun; Choi, Hyeong Cheol
2002-01-01
A finite element package is often used as a daily design tool by engineering designers in order to analyze and improve a design. Although finite element analysis can provide the responses of a system for given design variables quite well, it cannot provide the information needed to improve the design, such as design sensitivity coefficients. Design sensitivity analysis is an essential step in predicting the change in responses due to a change in design variables and in optimizing a system with the aid of gradient-based optimization techniques. To develop a numerical method of design sensitivity analysis, analytical derivatives based on analytical differentiation of the continuous or discrete finite element equations are effective, but they are difficult to obtain because of the lack of internal information in the commercial finite element package, such as shape functions. Therefore, design sensitivity analysis outside of the finite element package is necessary for practical application in an industrial setting. In this paper, the semi-analytic method for design sensitivity analysis is used to develop a design sensitivity module outside of the commercial finite element package ANSYS. The direct differentiation method is employed to compute the design derivatives of the response, and the pseudo-load for design sensitivity analysis is effectively evaluated by using the design variation of the related internal nodal forces. In particular, an effective method for stress and nonlinear design sensitivity analyses that is independent of the commercial finite element package is also suggested. Numerical examples are illustrated to show the accuracy and efficiency of the developed method and to provide insights for implementation of the suggested method into other commercial finite element packages
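The semi-analytic pseudo-load idea can be shown on a two-spring toy "finite element" model: differentiate the assembled stiffness matrix numerically, form the pseudo-load −(dK/dp)·u, and reuse the already-solved system. This is a minimal sketch under invented dimensions, not ANSYS-specific code:

```python
import numpy as np

def assemble_K(k1, k2):
    # global stiffness of two springs in series, fixed at one end
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

k1, k2 = 2.0, 3.0
F = np.array([0.0, 1.0])               # unit tip load
K = assemble_K(k1, k2)
u = np.linalg.solve(K, F)              # displacement response

# semi-analytic step: dK/dk1 by finite differences on the assembly routine,
# pseudo-load = dF/dk1 - (dK/dk1) u   (dF/dk1 = 0 here)
h = 1e-6
dK = (assemble_K(k1 + h, k2) - K) / h
pseudo_load = -dK @ u
du_dk1 = np.linalg.solve(K, pseudo_load)   # reuse the factorized system
```

For this chain the exact answer is du/dk1 = −1/k1² = −0.25 at both nodes, so the finite-differenced stiffness reproduces the analytic derivative while needing only the assembly routine, exactly the "outside the package" property the abstract argues for.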
Sensitivity analysis of alkaline plume modelling: influence of mineralogy
International Nuclear Information System (INIS)
Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.
2010-01-01
Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the considered initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both concrete composition and clay mineralogical assemblies, since most published studies considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of concrete phases, (2) the type of clayey materials and (3) the choice of secondary phases that are allowed to precipitate during calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.; Navarro Jimenez, Maria I.; Merks, Roeland M. H.; Blom, Joke G.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
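A global, variance-based sensitivity analysis of the kind applied to the CPM can be sketched with the Saltelli/Jansen Monte Carlo estimators of first-order and total-effect Sobol indices. The sketch below uses a toy additive model with known answers (S1 = 0.2 and 0.8, no interactions), not the morphogenesis model itself:

```python
import numpy as np

def model(X):
    # toy model with known Sobol indices: S1 = 1/5, S2 = 4/5, no interactions
    return X[:, 0] + 2.0 * X[:, 1]

rng = np.random.default_rng(1)
n, d = 20000, 2
A = rng.uniform(size=(n, d))           # two independent sample matrices
B = rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1, ST = [], []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                 # column i from B, the rest from A
    yAB = model(AB)
    S1.append(np.mean(yB * (yAB - yA)) / var)        # Saltelli first-order
    ST.append(0.5 * np.mean((yA - yAB) ** 2) / var)  # Jansen total-effect
```

A gap between the total-effect and first-order index of a parameter flags interactions with other parameters, which is exactly the information the abstract says single-parameter experiments cannot provide; here the model is additive, so the two coincide.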
Anisotropic analysis for seismic sensitivity of groundwater monitoring wells
Pan, Y.; Hsu, K.
2011-12-01
Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes of groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change of groundwater may appear as oscillations or step changes. The former is caused by seismic waves; the latter is caused by the volumetric strain and reflects the strain status. Since setting up a groundwater monitoring well is easier and cheaper than setting up a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of seismic sensitivity of groundwater monitoring wells and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of seismic sensitivity. GIS is used to map the sensitive area of the existing groundwater monitoring well.
Sensitivity analysis of predictive models with an automated adjoint generator
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.
1987-01-01
The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
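The adjoint economy described here, sensitivities with respect to every parameter from a single extra solve, can be shown on a small linear model. This is an illustrative sketch only (ADGEN itself instruments existing codes automatically; the matrices and dimensions below are invented): for a response R = cᵀu with K(p)u = f, one adjoint solve Kᵀλ = c gives dR/dpᵢ = −λᵀ(dK/dpᵢ)u for all i.

```python
import numpy as np

def K_of(p):
    # parameter-dependent system matrix (two springs in series)
    return np.array([[p[0] + p[1], -p[1]],
                     [-p[1],        p[1]]])

p = np.array([2.0, 3.0])
f = np.array([0.0, 1.0])         # load
c = np.array([0.0, 1.0])         # response R = c @ u (tip displacement)

K = K_of(p)
u = np.linalg.solve(K, f)
lam = np.linalg.solve(K.T, c)    # ONE adjoint solve, reused for every parameter

h = 1e-6
grads = []
for i in range(len(p)):
    q = p.copy(); q[i] += h
    dK = (K_of(q) - K) / h       # dK/dp_i (df/dp_i = 0 here)
    grads.append(float(-lam @ (dK @ u)))
```

The forward-sensitivity alternative would need one extra linear solve per parameter; with hundreds of parameters and one response, the adjoint route wins, which is the point of the paper.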
Sensitivity analysis of time-dependent laminar flows
International Nuclear Information System (INIS)
Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.
2004-01-01
This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
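A continuous sensitivity equation method in miniature: augment a model equation with the ODE satisfied by its parameter sensitivity and integrate both together. The flow problems in the paper are far richer; this sketch uses the scalar model u' = −a·u (so s = ∂u/∂a obeys s' = −a·s − u) with a hand-rolled RK4, and the exact answers u(T) = e^(−aT), s(T) = −T·e^(−aT) are known for checking.

```python
import numpy as np

a, t_end, nsteps = 0.5, 2.0, 2000
h = t_end / nsteps

def rhs(state):
    u, s = state
    # model ODE u' = -a*u and its continuous sensitivity equation
    # s' = -a*s - u, where s = du/da
    return np.array([-a * u, -a * s - u])

state = np.array([1.0, 0.0])     # u(0) = 1, s(0) = du/da(0) = 0
for _ in range(nsteps):          # classical RK4
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

u_T, s_T = state                 # exact: e^(-1) and -2*e^(-1)
```

As in the paper's verification by manufactured solutions, comparing the computed pair against the closed form confirms the scheme's convergence; the sensitivity field s is also what enables the fast "nearby flow" estimates u(a + δa) ≈ u(a) + s·δa.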
Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety
International Nuclear Information System (INIS)
Broadhead, B.L.; Childs, R.L.; Rearden, B.T.
1999-01-01
Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community
Generic Reliability-Based Inspection Planning for Fatigue Sensitive Details
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Straub, Daniel; Faber, Michael Havbro
2005-01-01
The generic approach for planning of in-service NDT inspections is extended to cover the case where the fatigue load is modified during the design lifetime of the structure. Generic reliability-based inspection planning has been developed as a practical approach to perform inspection planning of fatigue sensitive details in fixed offshore steel jacket platforms and FPSO ship structures. Inspection and maintenance activities are planned such that code based requirements to the safety of personnel and environment for the considered structure are fulfilled, and at the same time such that the overall expected costs for design, inspections, repairs and failures are minimized. The method is based on the assumption of "no-finds" of cracks during inspections. Each fatigue sensitive detail is categorized according to their type of details (SN curves), FDF values, RSR values, inspection, repair and failure...
International Nuclear Information System (INIS)
Pi Ting; Zhang Yunqing; Chen Liping
2012-01-01
Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates to describe the motion of multibody systems has a great influence on the efficiency and accuracy of both the dynamic and sensitivity analyses. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently utilized to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been developed recently and focuses on modeling of beams and plates in large deformation problems, is extended to design sensitivity analysis of flexible multibody systems. The motion equations of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are applied for sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to deduce the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, with the advantage that the complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. Besides, the dynamic and sensitivity equations formed with the proposed method can be efficiently integrated using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation method
Restructuring of burnup sensitivity analysis code system by using an object-oriented design approach
International Nuclear Information System (INIS)
Kenji, Yokoyama; Makoto, Ishikawa; Masahiro, Tatsumi; Hideaki, Hyoudou
2005-01-01
A new burnup sensitivity analysis code system was developed with help from object-oriented techniques and written in the Python language. It was confirmed that these techniques are powerful in supporting complex numerical calculation procedures such as reactor burnup sensitivity analysis. The new burnup sensitivity analysis code system PSAGEP was restructured from a complicated old code system and reborn as a user-friendly code system that can calculate the sensitivity coefficients of nuclear characteristics considering the multicycle burnup effect based on generalized perturbation theory (GPT). A new encapsulation framework for conventional codes written in Fortran was developed. This framework supported the restructuring of the software architecture of the old code system by hiding implementation details, and allows users of the new code system to easily calculate the burnup sensitivity coefficients. The framework can be applied to other development projects, since it is carefully designed to be independent of PSAGEP. Numerical results for the burnup sensitivity coefficients of a typical fast breeder reactor are given with components based on GPT, and the multicycle burnup effects on the sensitivity coefficients are discussed. (authors)
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Sensitivity analysis of a low-level waste environmental transport code
International Nuclear Information System (INIS)
Hiromoto, G.
1989-01-01
Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters to the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general non-linear environmental radionuclide transport models. (author)
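A partial rank correlation coefficient, one of the two estimators used here, can be computed by rank-transforming all variables, regressing out the other inputs, and correlating the residuals. A sketch on a made-up model with one dummy input (not the transport code of the abstract):

```python
import numpy as np

def ranks(v):
    # rank transform (0..n-1); inputs are continuous, so ties are ignored
    return np.argsort(np.argsort(v)).astype(float)

def prcc(X, y, i):
    # partial rank correlation: correlate the parts of rank(x_i) and rank(y)
    # not explained linearly by the ranks of the remaining inputs
    R = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    Z = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
    res_i = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
    res_y = ranks(y) - Z @ np.linalg.lstsq(Z, ranks(y), rcond=None)[0]
    return float(res_i @ res_y / np.sqrt((res_i @ res_i) * (res_y @ res_y)))

rng = np.random.default_rng(2)
X = rng.uniform(size=(5000, 3))
y = np.exp(2.0 * X[:, 0]) + 0.5 * X[:, 1]   # third input is a dummy
p = [prcc(X, y, i) for i in range(3)]
```

Because the rank transform linearizes monotone relationships, the strongly non-linear (but monotone) effect of the first input still scores near 1, while the dummy stays near 0.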
Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization
Directory of Open Access Journals (Sweden)
Jianjun Tang
2014-01-01
Full Text Available Assembly precision optimization of a complex product has a huge benefit in improving product quality. Due to the coupling of a variety of deviation sources, the goal of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences and locating scheme, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, the deviation source sensitivities are calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
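The definition of deviation-source sensitivity as a ratio of variations is, to first order in the Taylor expansion, a partial derivative of the assembly dimension with respect to each source dimension. A sketch on a hypothetical two-element dimension chain (the wing-flap geometry itself is not reproduced here; the function and nominal values are invented):

```python
import math

def assembly_gap(d1, d2, theta):
    # hypothetical two-link dimension chain: the assembly dimension is d1
    # plus the projection of d2 at angle theta (radians)
    return d1 + d2 * math.cos(theta)

nominal = (10.0, 5.0, math.pi / 6)

def sensitivity(f, args, i, h=1e-6):
    # deviation-source sensitivity: ratio of assembly dimension variation
    # to deviation-source variation (central difference, first-order Taylor)
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

s_d1 = sensitivity(assembly_gap, nominal, 0)   # exact: 1
s_d2 = sensitivity(assembly_gap, nominal, 1)   # exact: cos(pi/6)
s_th = sensitivity(assembly_gap, nominal, 2)   # exact: -5*sin(pi/6) = -2.5
```

The large angular sensitivity relative to the length sensitivities shows why ranking sources this way directs the optimization effort before any tolerance is tightened.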
An extremely sensitive monoboronic acid based fluorescent sensor for glucose
International Nuclear Information System (INIS)
Sun Xiangying; Liu Bin; Jiang Yunbao
2004-01-01
An extremely sensitive monoboronic acid based fluorescent sensor for glucose was developed. This was carried out by assembling a fluorescent monoboronic acid, 3-aminophenylboronic acid (PBA), indirectly onto a gold surface via its electrostatic interaction with cysteine (Cys) that was directly assembled on the gold surface. The formation of self-assembled bilayers (SAB) was confirmed and characterized primarily by cyclic voltammetry and X-ray photoelectron spectroscopy (XPS). The SAB containing PBA was found to be fluorescent, and its fluorescence showed an extremely high sensitivity to the presence of glucose and other monosaccharides such as galactose and fructose, with quenching constants on the order of 10^8 M^-1, compared to those on the order of 10^2 M^-1 in bulk solutions. The quenching constants were found to vary in the order D-glucose > D-galactose > D-fructose > D-mannose, which differs from the order in bulk solution, where the binding affinity is highest toward D-fructose and the sensitivity toward glucose is very low. The reported monoboronic acid based SAB fluorescent sensor showed the highest sensitivity towards glucose, with the capacity to detect saccharides at concentrations down to the nanomolar level. It was also demonstrated that the fluorescence from PBA/Cys/Au can be easily recovered after each measurement event, and the sensor therefore also represents a new reusable method for immobilizing reagents in fabricating chemosensors
Environmentally Sensitive Fluorescent Sensors Based on Synthetic Peptides
Directory of Open Access Journals (Sweden)
Laurence Choulier
2010-03-01
Full Text Available Biosensors allow the direct detection of molecular analytes, by associating a biological receptor with a transducer able to convert the analyte-receptor recognition event into a measurable signal. We review recent work aimed at developing synthetic fluorescent molecular sensors for a variety of analytes, based on peptidic receptors labeled with environmentally sensitive fluorophores. Fluorescent indicators based on synthetic peptides are highly interesting alternatives to protein-based sensors, since they can be synthesized chemically, are stable, and can be easily modified in a site-specific manner for fluorophore coupling and for immobilization on solid supports.
Depletion GPT-free sensitivity analysis for reactor eigenvalue problems
International Nuclear Information System (INIS)
Kennedy, C.; Abdel-Khalik, H.
2013-01-01
This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based on perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)
Energy Technology Data Exchange (ETDEWEB)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
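For a single integral response, the cross-section uncertainty propagation SENSIT performs reduces to the first-order "sandwich rule": the relative variance of the response is SᵀCS, with S the relative sensitivity profile and C the relative covariance matrix. A sketch with invented sensitivity and covariance values (three cross-section groups, not real SENSIT input):

```python
import numpy as np

# relative sensitivity profile of the response to three cross-section
# groups (invented values, as is the covariance matrix below)
S = np.array([0.6, -0.3, 0.1])

# relative covariance matrix of those cross sections (symmetric)
C = np.array([[0.0004, 0.0001, 0.0000],
              [0.0001, 0.0009, 0.0002],
              [0.0000, 0.0002, 0.0016]])

rel_var = float(S @ C @ S)       # first-order "sandwich rule"
rel_std = rel_var ** 0.5         # relative standard deviation of the response
```

The off-diagonal covariance terms can add to or cancel against the diagonal contributions depending on the signs of the sensitivities, which is why the full matrix, not just per-group variances, is needed.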
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
Energy Technology Data Exchange (ETDEWEB)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models
Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko
2015-01-01
Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
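The kind of local sensitivity analysis described above can be sketched on a toy Gompertz-type tumor growth law (this is not the Hahnfeldt model studied in the paper, and all parameter values are invented): perturb each parameter around its nominal value and record the normalized change in final tumor volume, S_p = (p/V) dV/dp.

```python
import numpy as np

# Hedged sketch of parameter sensitivity analysis on a toy Gompertz-type
# tumor growth law (NOT the paper's model; values are illustrative).
# Normalized finite-difference sensitivity: S_p = (p / V) * dV/dp.

def tumor_volume(growth_rate, carrying_cap, v0=1.0, t_end=30.0, dt=0.01):
    """Integrate dV/dt = growth_rate * V * ln(carrying_cap / V) with Euler steps."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * growth_rate * v * np.log(carrying_cap / v)
    return v

params = {"growth_rate": 0.1, "carrying_cap": 100.0}
base = tumor_volume(**params)
for name, p in params.items():
    bumped = dict(params, **{name: p * 1.01})      # +1% perturbation
    s = (tumor_volume(**bumped) - base) / base / 0.01
    print(f"normalized sensitivity to {name}: {s:+.3f}")
```

With these nominal values the final volume is near 80 (the analytic Gompertz solution gives about 79.5), and both sensitivities come out positive, with the carrying capacity dominating at this tumor size.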
Sensitivity analysis overlaps of friction elements in cartridge seals
Directory of Open Access Journals (Sweden)
Žmindák Milan
2018-01-01
Full Text Available Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and a gland plate. Mechanical seals have numerous applications; the most common example is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of the overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials used for manufacturing seals, approaches commonly used to solve hyperelastic material problems by FEM, and gives a short introduction to wheel bearings. The practical part presents an approach for measuring friction torque, the results of which were used to refine the methodology and assess the precision of FEM calculations performed in ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of the friction elements.

An overview of the design and analysis of simulation experiments for sensitivity analysis
Kleijnen, J.P.C.
2005-01-01
Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs
Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks
Harry R. Millwater; R. Wesley Osborn
2006-01-01
A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...
Influence analysis to assess sensitivity of the dropout process
Molenberghs, Geert; Verbeke, Geert; Thijs, Herbert; Lesaffre, Emmanuel; Kenward, Michael
2001-01-01
Diggle and Kenward (Appl. Statist. 43 (1994) 49) proposed a selection model for continuous longitudinal data subject to possible non-random dropout. It provoked a large debate about the role of such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions upon which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. One of their examples is a set of da...
Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)
2015-04-01
... determined. From the results of the study, UN is safe to store under normal operating conditions. Subject terms: urea, nitrate, sensitivity, thermal ... HNO3). Due to its simple composition, ease of manufacture, and higher detonation parameters than ammonium nitrate, it has become one of the ... an H50 value of 10.054 ± 0.620 inches. ... From the results of the thermal analysis study, it can be concluded that urea nitrate is
Applications of the TSUNAMI sensitivity and uncertainty analysis methodology
International Nuclear Information System (INIS)
Rearden, Bradley T.; Hopper, Calvin M.; Elam, Karla R.; Goluoglu, Sedat; Parks, Cecil V.
2003-01-01
The TSUNAMI sensitivity and uncertainty analysis tools under development for the SCALE code system have recently been applied in four criticality safety studies. TSUNAMI is used to identify applicable benchmark experiments for criticality code validation, assist in the design of new critical experiments for a particular need, reevaluate previously computed computational biases, and assess the validation coverage and propose a penalty for noncoverage for a specific application. (author)
Time-sensitive Customer Churn Prediction based on PU Learning
Wang, Li; Chen, Chaochao; Zhou, Jun; Li, Xiaolong
2018-01-01
With the fast development of Internet companies throughout the world, customer churn has become a serious concern. To better help the companies retain their customers, it is important to build a customer churn prediction model to identify the customers who are most likely to churn ahead of time. In this paper, we propose a Time-sensitive Customer Churn Prediction (TCCP) framework based on Positive and Unlabeled (PU) learning technique. Specifically, we obtain the recent data by shortening the...
Sensitivity Analysis of Launch Vehicle Debris Risk Model
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
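A simple stand-in makes the sensitivity study above concrete. The sketch below is a hedged toy model, not NASA's debris-catalog model: the function `strike_prob`, the per-piece hit probability `p0`, and the decay constant `tau` are all invented for illustration. It gives strike probability as a closed form in two of the inputs the abstract names (debris count and abort-to-destruct delay), then takes finite-difference sensitivities of the same kind the study performed.

```python
import math

# Hedged toy model (NOT the paper's debris risk model): strike probability
# as a function of debris count N and abort-to-destruct delay time, assuming
# each piece hits independently with probability p0 * exp(-delay / tau).
# p0 and tau are invented illustrative constants.

def strike_prob(n_debris, delay_s, p0=0.01, tau=5.0):
    """P(at least one strike) under independent per-piece hit probabilities."""
    p_piece = p0 * math.exp(-delay_s / tau)
    return 1.0 - (1.0 - p_piece) ** n_debris

base = strike_prob(200, 2.0)
d_delay = (strike_prob(200, 2.2) - base) / 0.2    # dP/d(delay), finite difference
d_count = (strike_prob(220, 2.0) - base) / 20     # dP/dN, finite difference
print(f"P_strike = {base:.4f}, dP/d(delay) = {d_delay:+.4f}, dP/dN = {d_count:+.5f}")
```

As expected, a longer delay lowers the strike probability (negative sensitivity) while more debris raises it; a response surface fitted over such evaluations would feed the overall risk model in the same way.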
Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model
International Nuclear Information System (INIS)
Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.
2016-01-01
Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
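The "main effect index" mentioned above is the first-order Sobol index S_i = Var(E[f|X_i]) / Var(f). GEM-SA estimates it cheaply via Gaussian-process emulation; the sketch below instead uses brute-force double-loop Monte Carlo on a toy linear model (invented here, with exact answers S = (16, 4, 1)/21), purely to show what the index measures.

```python
import numpy as np

# Hedged sketch: brute-force Monte Carlo estimate of the main effect index
#     S_i = Var(E[f | X_i]) / Var(f)
# for a toy linear model with independent X_i ~ U(0,1). This illustrates the
# quantity GEM-SA computes, not the GP-emulation method it uses.

rng = np.random.default_rng(0)
f = lambda x: 4 * x[..., 0] + 2 * x[..., 1] + x[..., 2]   # exact S = (16,4,1)/21

def main_effect(i, n_outer=500, n_inner=500):
    """Estimate S_i by conditioning on X_i and averaging over the rest."""
    cond_means = []
    for _ in range(n_outer):
        x = rng.random((n_inner, 3))
        x[:, i] = rng.random()                 # freeze X_i at one sampled value
        cond_means.append(f(x).mean())         # inner mean ~ E[f | X_i]
    total_var = f(rng.random((100_000, 3))).var()
    return np.var(cond_means) / total_var

for i in range(3):
    print(f"S_{i + 1} ~ {main_effect(i):.3f}")
```

The estimates should recover the ordering S_1 > S_2 > S_3 (roughly 0.76, 0.19, 0.05); the point of emulation in the paper is to get the same indices from far fewer model runs.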
Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India
Kumar, M.; Raghubanshi, A. S.
2011-08-01
A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.
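The screening described above can be sketched with one-at-a-time (OAT) perturbation on a toy NPP stand-in (this is NOT BIOME-BGC; the light-use-efficiency form, the function `npp`, and every constant below are invented): bump each ecophysiological parameter by +10% and rank parameters by the relative change in NPP.

```python
import math

# Hedged toy stand-in for one-at-a-time (OAT) parameter screening (NOT
# BIOME-BGC): NPP modeled as absorbed light times a conductance-scaled
# efficiency, with Beer's-law light absorption
#     fAPAR = 1 - exp(-k * LAI),  LAI = SLA * leaf_carbon.
# All parameter values are illustrative.

def npp(k=0.5, sla=12.0, gs_max=0.006, leaf_c=0.15, par=2000.0):
    lai = sla * leaf_c                       # leaf area index from SLA
    fapar = 1.0 - math.exp(-k * lai)         # Beer's-law light absorption
    return par * fapar * gs_max * 25.0       # crude efficiency scaling

base = npp()
ranking = sorted(
    ((name, abs(npp(**{name: val * 1.1}) - base) / base)
     for name, val in [("k", 0.5), ("sla", 12.0), ("gs_max", 0.006)]),
    key=lambda t: -t[1])
for name, rel_change in ranking:
    print(f"{name}: {rel_change:.1%} NPP change for +10% parameter change")
```

In this toy form gs_max enters linearly (a 10% bump gives exactly 10% more NPP), while k and SLA enter only through the saturating exponential and so rank lower and equal to each other; a real screening of BIOME-BGC would reveal the nonlinear rankings the abstract reports.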
International Nuclear Information System (INIS)
Britti, F.; Cesarano, L.; Costantini, M.; Gentile, V.; Minati, F.; Pietranera, L.
2013-01-01
The COSMO-SkyMed Constellation, four VHR Earth Observation SAR satellites, can be an extremely useful source of information for monitoring programs, and in particular for monitoring of nuclear facilities safeguards, ranging from environmental analysis to human activity characterization. Thanks to its very high revisit coupled with the all weather capability and its dawn to dusk operations, the COSMO-SkyMed constellation is an ideal tool for improving already existing VHR (Very High Resolution) optical satellites monitoring by enhancing classical change detection activities. Thanks to its multi-mode acquisition capability with resolution up to one meter, the COSMO-SkyMed constellation can cover large areas in a very short time to monitor nuclear sites and surrounding areas, thereby providing additional information for the potential detection of undeclared nuclear activities. In particular, thanks to the interferometric capabilities of the SAR sensor, coherence analysis introduces additional information closely related to the changes occurred and occurring over the area of interest within the desired time interval (up to one day at best conditions). Indeed, thanks to the high sensitivity to variations of this added-value product, available only with SAR data, guaranteed by the wavelength used by COSMO-SkyMed sensors (3 cm), in-time analysis through coherence can be a strong indicator of human activity, particularly over areas characterized by a stable environment (i.e. coherent areas), such as deserts/arid zones or ice or snow-covered areas. The aim of this work is to provide a detailed description of how COSMO-SkyMed data and e-GEOS added-value products are able to improve intelligence analysis over critical sites (and their surrounding areas), allowing: -) enhanced change detection through both amplitude and coherence information, -) high frequency site monitoring, -) data integration with other sources of information (optical or on-ground measurements). e-GEOS, a