WorldWideScience

Sample records for sensitivity analysis methods

  1. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
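The contrast this abstract draws between linear fits and nonparametric smoothers is easy to demonstrate. The sketch below is not the authors' procedure; it uses a crude moving-average smoother as a stand-in for LOESS, on a hypothetical model that is quadratic in one input:

```python
import random
import statistics

def smooth_r2(x, y, window=0.2):
    """Fraction of output variance explained when y is predicted from x by a
    simple moving-average smoother (a crude stand-in for LOESS)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    half = max(1, int(window * n) // 2)
    yhat = [0.0] * n
    for rank, i in enumerate(order):
        lo, hi = max(0, rank - half), min(n, rank + half + 1)
        yhat[i] = statistics.mean(y[order[j]] for j in range(lo, hi))
    ybar = statistics.mean(y)
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((y[i] - yhat[i]) ** 2 for i in range(n))
    return 1.0 - ss_res / ss_tot

random.seed(1)
n = 2000
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]
y = [x1[i] ** 2 + 0.1 * x2[i] for i in range(n)]  # y depends on x1 only through x1**2

r2_x1 = smooth_r2(x1, y)
r2_x2 = smooth_r2(x2, y)
```

Because the x1 relationship is symmetric about zero, linear and rank correlations of x1 with y are near zero and a regression-based index would miss x1 entirely, while the smoother attributes most of the output variance to it.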

  2. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  3. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering because of its accurate estimation of forces in computation. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite element analysis and stochastic design sensitivity are almost identical.

  4. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    The Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out from the model output at that characteristic frequency. Although FAST has been widely applied, it has two limitations: (1) an aliasing effect among parameters caused by using integer characteristic frequencies and (2) suitability only for models with independent parameters. In this paper, we synthesize the improvement that overcomes the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006;91(6):717-27] and the improvement that overcomes the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can serve as a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with those derived by the correlation ratio method, a non-parametric method for models with correlated parameters.
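For readers unfamiliar with the mechanics of FAST, here is a minimal sketch of the classical method with integer characteristic frequencies (the very variant whose aliasing limitation the paper addresses), assuming independent inputs on [0,1]; the two-input linear test model is hypothetical:

```python
import math

def fast_first_order(model, freqs, n=1025, harmonics=4):
    """Classical FAST: drive all inputs along one search curve, then read each
    parameter's first-order variance share off its characteristic frequency."""
    s = [math.pi * (2 * k - n - 1) / n for k in range(1, n + 1)]
    # standard search curve: x_i(s) = 0.5 + arcsin(sin(w_i * s)) / pi, uniform on [0,1]
    X = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in freqs] for sk in s]
    y = [model(x) for x in X]
    ybar = sum(y) / n
    total_var = sum((v - ybar) ** 2 for v in y) / n
    indices = []
    for w in freqs:
        var_w = 0.0
        for h in range(1, harmonics + 1):  # sum spectral power at w and its harmonics
            a = sum(y[k] * math.cos(h * w * s[k]) for k in range(n)) / n
            b = sum(y[k] * math.sin(h * w * s[k]) for k in range(n)) / n
            var_w += 2.0 * (a * a + b * b)
        indices.append(var_w / total_var)
    return indices

# y = x1 + 5*x2 on [0,1]^2: analytic first-order indices are 1/26 and 25/26
S = fast_first_order(lambda x: x[0] + 5.0 * x[1], freqs=[11, 21])
```

The frequencies 11 and 21 are chosen so that no low-order harmonics coincide; with carelessly chosen integer frequencies the harmonic sums overlap, which is exactly the aliasing problem the abstract describes.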

  5. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. (Facilia AB, Stockholm, Sweden)

    2006-05-15

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance of each uncertain input parameter to the output variation, and is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, as well as non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), the Sobol' method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several
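As an illustration of one of the simpler techniques on the EIKOS list, the following sketch computes standardized regression coefficients (SRC) from scratch; this is not EIKOS code, and the two-input Gaussian test model is hypothetical:

```python
import random
import statistics

def src(X, y):
    """Standardized regression coefficients: fit a linear model on standardized
    inputs/output by solving the normal equations (fine for a handful of inputs)."""
    n, d = len(X), len(X[0])

    def standardize(col):
        m, s = statistics.mean(col), statistics.pstdev(col)
        return [(v - m) / s for v in col]

    Z = [standardize([row[j] for row in X]) for j in range(d)]
    yz = standardize(y)
    # normal equations A b = c, where A is the input correlation matrix
    A = [[sum(Z[i][k] * Z[j][k] for k in range(n)) / n for j in range(d)] for i in range(d)]
    c = [sum(Z[i][k] * yz[k] for k in range(n)) / n for i in range(d)]
    for p in range(d):  # Gauss-Jordan elimination
        piv = A[p][p]
        A[p] = [v / piv for v in A[p]]
        c[p] /= piv
        for i in range(d):
            if i != p:
                f = A[i][p]
                A[i] = [A[i][j] - f * A[p][j] for j in range(d)]
                c[i] -= f * c[p]
    return c

random.seed(7)
n = 5000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2 * x1[i] + 3 * x2[i] + random.gauss(0, 0.1) for i in range(n)]

b = src([[x1[i], x2[i]] for i in range(n)], y)
# for this near-linear model, b[0] ~ 2/sqrt(13) and b[1] ~ 3/sqrt(13)
```

For a truly linear model the squared SRCs sum to roughly one and partition the output variance, which is why the abstract warns that such measures degrade on non-linear models.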

  6. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance of each uncertain input parameter to the output variation, and is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, as well as non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), the Sobol' method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  7. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease of implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
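A minimal sketch of the variance-based index that Sobol's method estimates may help here; this pick-and-freeze Monte Carlo estimator is a generic textbook construction, not the setup used in the study, and the two-input linear model is hypothetical:

```python
import random

def sobol_first_order(model, d, n=20000, seed=0):
    """First-order Sobol' indices by a Monte Carlo pick-and-freeze estimator:
    two independent sample matrices A and B, plus d matrices that take
    column i from A and all remaining columns from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    f0 = sum(yA) / n
    var = sum(v * v for v in yA) / n - f0 * f0  # total output variance
    S = []
    for i in range(d):
        # rows of B with column i "frozen" to the values from A
        yABi = [model(B[k][:i] + [A[k][i]] + B[k][i + 1:]) for k in range(n)]
        S.append(sum(yA[k] * (yABi[k] - yB[k]) for k in range(n)) / n / var)
    return S

# y = x1 + 2*x2 with independent U(0,1) inputs: S1 = 0.2, S2 = 0.8
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], d=2)
```

The cost scales as n*(d+2) model runs, which is why the study found Sobol's method more expensive than ANOVA while yielding more robust rankings.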

  8. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and the complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show up as cross terms in the sensitivity analysis. Because of the large uncertainties of the inputs, these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes, and more sophisticated methods are typically needed. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady-state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs; in some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.

  9. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and for using models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and with the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
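The hybrid idea behind DELSA (local derivative-based indices evaluated at many points across the parameter space) can be sketched roughly as follows; this illustrates the concept on a hypothetical two-parameter model, not the published algorithm:

```python
import random

def local_first_order(model, lows, highs, n=500, eps=1e-5, seed=3):
    """DELSA-style analysis: compute local, derivative-based first-order
    sensitivity shares at many points sampled across the parameter space."""
    rng = random.Random(seed)
    d = len(lows)
    prior_var = [(highs[j] - lows[j]) ** 2 / 12.0 for j in range(d)]  # uniform priors
    out = []
    for _ in range(n):
        theta = [rng.uniform(lows[j], highs[j]) for j in range(d)]
        y0 = model(theta)
        grads = []
        for j in range(d):  # forward-difference gradient, one extra run per parameter
            h = eps * (highs[j] - lows[j])
            tp = theta[:]
            tp[j] += h
            grads.append((model(tp) - y0) / h)
        v_local = sum(grads[j] ** 2 * prior_var[j] for j in range(d))
        out.append([grads[j] ** 2 * prior_var[j] / v_local for j in range(d)])
    return out

# y = x1**2 + x2: the local share of x1 grows from ~0 to ~0.8 across [0,1]^2,
# detail that a single global index would hide
shares = local_first_order(lambda x: x[0] ** 2 + x[1], [0.0, 0.0], [1.0, 1.0])
s1 = [s[0] for s in shares]
```

Each sampled point costs only d+1 model runs, which is the source of the "computationally frugal" claim relative to the thousands of runs a Sobol' analysis typically needs.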

  10. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  11. Sensitivity Analysis of Structures by Virtual Distortion Method

    DEFF Research Database (Denmark)

    Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard

    1991-01-01

    are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to calculation of the sensitivity derivatives. This method has originally been applied to structural remodelling and collapse analysis, see...

  12. Overview of methods for uncertainty analysis and sensitivity analysis in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.; Helton, J.C.

    1985-01-01

    Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, uncertainty analysis and sensitivity analysis in the context of PRA are as yet relatively immature fields. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. A number of areas must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas, and their synthesis into a complete PRA, is discussed.

  13. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  14. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of a deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and by the DUA method. The DUA method gives a more accurate result based on only two model executions, compared to fifty executions in the statistical case.
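The derivative-based propagation idea behind DUA can be illustrated with a first-order (delta-method) sketch; the three-parameter model below is a hypothetical stand-in for the borehole flow problem, and the derivatives come from finite differences rather than from a computer-calculus compiler such as GRESS or ADGEN:

```python
import math

def propagate(model, mu, sigma, eps=1e-6):
    """First-order (delta-method) uncertainty propagation: approximate the
    output standard deviation from derivatives evaluated at the input means."""
    y0 = model(mu)
    var = 0.0
    for i in range(len(mu)):
        x = list(mu)
        h = eps * (abs(mu[i]) + 1.0)
        x[i] += h
        g = (model(x) - y0) / h  # forward-difference derivative
        var += (g * sigma[i]) ** 2  # independent inputs assumed
    return y0, math.sqrt(var)

# hypothetical stand-in for the borehole model: y = a * b**2 / c
flow = lambda x: x[0] * x[1] ** 2 / x[2]
y0, sd = propagate(flow, mu=[2.0, 3.0, 4.0], sigma=[0.1, 0.1, 0.2])
```

With exact derivatives the propagated variance here is (2.25*0.1)^2 + (3*0.1)^2 + (1.125*0.2)^2 = 0.19125, so sd is about 0.437; only one base run plus one run per input was needed, which mirrors the abstract's two-runs-versus-fifty comparison.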

  15. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, the squared rank differences/rank correlation coefficient test, the two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, the top-down coefficient of concordance, and variance decomposition.
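The value of the rank transformation on this list is easy to demonstrate: for a monotone but strongly nonlinear response, the raw correlation coefficient understates the input-output relationship, while the rank correlation recovers it. A minimal sketch on a hypothetical exponential model:

```python
import math
import random
import statistics

def ranks(v):
    """Replace each value by its rank (1..n); ties are not handled here."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for k, i in enumerate(order):
        r[i] = k + 1
    return r

def pearson(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((a[i] - ma) * (b[i] - mb) for i in range(len(a)))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((x - mb) ** 2 for x in b))
    return num / den

random.seed(11)
x = [random.uniform(0, 1) for _ in range(1000)]
y = [math.exp(6 * v) for v in x]  # strongly nonlinear but monotone response

cc = pearson(x, y)                 # raw correlation coefficient understates the link
rcc = pearson(ranks(x), ranks(y))  # rank correlation (Spearman) recovers it exactly
```

Because the response is deterministic and monotone, the rank correlation is 1 up to floating-point error, while the raw coefficient stays noticeably below 1; this is the standard motivation for the rank-transformed variants (RCC, PCC, SRC) in the survey.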

  16. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, the squared rank differences/rank correlation coefficient test, the two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, the top-down coefficient of concordance, and variance decomposition.

  17. A method of the sensitivity analysis of build-up and decay of actinides

    International Nuclear Information System (INIS)

    Mitani, Hiroshi; Koyama, Kinji; Kuroi, Hideo

    1977-07-01

    To perform sensitivity analysis of the build-up and decay of actinides, the mathematical methods related to this problem have been investigated in detail. The application of the time-dependent perturbation technique and the Bateman method to sensitivity analysis is mainly studied. For this purpose, a basic equation and its adjoint equation for the build-up and decay of actinides are systematically solved by introducing Laplace and modified Laplace transforms and their convolution theorems. The mathematical method of sensitivity analysis is then formulated with this technique, and its physical significance is discussed. Finally, application of the eigenvalue method is investigated; sensitivity coefficients can be calculated directly by this method. (auth.)
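The Bateman solution underlying the analysis can be sketched for a linear chain with distinct decay constants; the two-member chain and half-lives below are hypothetical, and the sensitivity coefficient is approximated here by a simple finite difference rather than by the paper's perturbation or eigenvalue techniques:

```python
import math

def bateman_last(n0, lam, t):
    """Amount of the last member of a linear decay chain at time t, given an
    initial amount n0 of the first nuclide and distinct decay constants lam
    (the classical Bateman solution)."""
    n = len(lam)
    prefix = n0
    for i in range(n - 1):
        prefix *= lam[i]  # production rates along the chain
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= lam[j] - lam[i]
        total += math.exp(-lam[i] * t) / denom
    return prefix * total

# sensitivity of the daughter inventory to the parent decay constant,
# by finite difference on a hypothetical two-member chain
lam = [math.log(2) / 10.0, math.log(2) / 2.0]  # half-lives 10 and 2 (arbitrary units)
t, eps = 5.0, 1e-7
base = bateman_last(1.0, lam, t)
sens = (bateman_last(1.0, [lam[0] + eps, lam[1]], t) - base) / eps
```

For a one-member chain the formula reduces to simple exponential decay, which is a convenient sanity check on the implementation.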

  18. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low-enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, pipe diameters smaller than 40 mm, and pressures below 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case assumptions about the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)

  19. Application of Monte Carlo filtering method in regional sensitivity analysis of AASHTOWare Pavement ME design

    Directory of Open Access Journals (Sweden)

    Zhong Wu

    2017-04-01

    Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method retains many advantages of global sensitivity analysis while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than over the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software is provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches that of a previous study based on global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method are further discussed.
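The core of Monte Carlo filtering is to split the sampled runs into behavioural and non-behavioural sets by a design criterion, then compare each input's two conditional distributions, for example with the Smirnov statistic. A minimal sketch on a hypothetical two-input model (not the Pavement ME design model):

```python
import random

def ks_statistic(a, b):
    """Two-sample Smirnov statistic: maximum distance between the two ECDFs."""
    a, b = sorted(a), sorted(b)
    ia = ib = 0
    d = 0.0
    for v in sorted(a + b):
        while ia < len(a) and a[ia] <= v:
            ia += 1
        while ib < len(b) and b[ib] <= v:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(5)
n = 2000
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [4.0 * x1[i] + 0.2 * x2[i] for i in range(n)]  # hypothetical distress output

# Monte Carlo filtering: runs meeting the design criterion are "behavioural"
behav = [i for i in range(n) if y[i] <= 2.0]
nonb = [i for i in range(n) if y[i] > 2.0]
d1 = ks_statistic([x1[i] for i in behav], [x1[i] for i in nonb])
d2 = ks_statistic([x2[i] for i in behav], [x2[i] for i in nonb])
```

A large statistic (here for x1) means the criterion strongly filters that input, i.e. the design outcome is regionally sensitive to it near the threshold, which is the kind of information the paper argues is most useful to a design engineer.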

  20. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indices. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
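A compact sketch of the Morris Elementary Effects screening method mentioned above may be useful (a generic implementation on a hypothetical two-parameter model, not the MVA convergence test itself):

```python
import random

def morris(model, d, r=50, levels=4, seed=2):
    """Morris screening: mean absolute elementary effect (mu*) and its spread
    (sigma) per parameter, from r one-step-at-a-time trajectories on [0,1]^d."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))  # standard Morris step size
    grid = [i / (levels - 1) for i in range(levels)]
    starts = [g for g in grid if g + delta <= 1.0]
    effects = [[] for _ in range(d)]
    for _ in range(r):
        x = [rng.choice(starts) for _ in range(d)]
        y = model(x)
        order = list(range(d))
        rng.shuffle(order)
        for j in order:  # perturb each parameter once, in random order
            xn = x[:]
            xn[j] += delta
            yn = model(xn)
            effects[j].append((yn - y) / delta)
            x, y = xn, yn
    mu_star, sigma = [], []
    for es in effects:
        m = sum(es) / len(es)
        mu_star.append(sum(abs(e) for e in es) / len(es))
        sigma.append((sum((e - m) ** 2 for e in es) / (len(es) - 1)) ** 0.5)
    return mu_star, sigma

# linear in x1 (constant effect), quadratic in x2 (effect varies with location)
mu_star, sigma = morris(lambda x: 2.0 * x[0] + x[1] ** 2, d=2)
```

A high mu* flags an important parameter, and a high sigma flags nonlinearity or interactions; the whole screening costs only r*(d+1) model runs, which is why Morris is the usual first pass before an expensive Sobol' analysis.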

  1. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method, a method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension uses a multi-objective Pareto optimal search to locate the minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account show the importance of ensuring that any sensitivity analysis accounts for such changes.
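The idea of searching for Pareto-minimal parameter changes that flip the preferred option can be sketched with a brute-force grid search; the two scoring functions below are hypothetical stand-ins for the integrated assessment model, and a real application would use a multi-objective optimizer rather than enumeration:

```python
import itertools

# two hypothetical management options scored by a model with two parameters
def score_a(p):
    return 3.0 + 0.5 * p[0] - 1.0 * p[1]

def score_b(p):
    return 2.5 + 1.5 * p[0] + 0.2 * p[1]

base = [0.0, 0.0]  # at the base point, option A is preferred (3.0 > 2.5)

# enumerate non-negative parameter changes and keep those that flip the ranking
steps = [i * 0.05 for i in range(21)]  # changes from 0 to 1 in each parameter
flips = []
for dp in itertools.product(steps, steps):
    p = [base[0] + dp[0], base[1] + dp[1]]
    if score_b(p) > score_a(p):
        flips.append(dp)

# keep only Pareto-minimal change vectors: no other flip is smaller in both parameters
pareto = [f for f in flips
          if not any(g != f and g[0] <= f[0] and g[1] <= f[1] for g in flips)]
```

The Pareto set traces the "decision boundary" in parameter-change space: a short distance to it in some direction means the preferred option is fragile with respect to those parameters, which is exactly the sensitivity information MORE is after.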

  2. Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2015-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestion that might occur in a distribution network with a high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized ... control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case...

  3. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
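
    As a concrete illustration of one of the global methods surveyed here, the sketch below estimates first-order Sobol' indices with a Saltelli-style estimator on the Ishigami function, a standard SA benchmark. Plain Monte Carlo sampling is used for brevity; production codes typically use quasi-random sequences:

```python
import numpy as np

# Minimal Saltelli-type estimator of first-order Sobol' indices on the
# Ishigami test function with inputs uniform on [-pi, pi].
rng = np.random.default_rng(0)
a_coef, b_coef = 7.0, 0.1

def ishigami(x):
    return (np.sin(x[:, 0]) + a_coef * np.sin(x[:, 1]) ** 2
            + b_coef * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 8192, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # resample only column i
    S[i] = np.mean(fB * (ishigami(ABi) - fA)) / var_y

print("first-order indices:", np.round(S, 3))   # analytic: 0.314, 0.442, 0.0
```

    The `AB_i` matrices reuse the two base samples, so all three indices cost only d + 2 batches of model runs.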

  4. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation ...

  5. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation ...

  6. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  7. Application of Wielandt method in continuous-energy nuclear data sensitivity analysis with RMC code

    International Nuclear Information System (INIS)

    Qiu Yishu; Wang Kan; She Ding

    2015-01-01

    The Iterated Fission Probability (IFP) method, an accurate method to estimate adjoint-weighted quantities in continuous-energy Monte Carlo criticality calculations, has been widely used for calculating kinetic parameters and nuclear data sensitivity coefficients. By using a strategy of waiting, however, this method faces the challenge of high memory usage to store the tallies of original contributions, whose size is proportional to the number of particle histories in each cycle. Recently, the Wielandt method, applied by the Monte Carlo code McCARD to calculate kinetic parameters, estimates adjoint fluxes within a single particle history and thus can save memory usage. In this work, the Wielandt method has been applied in the Reactor Monte Carlo code RMC for nuclear data sensitivity analysis. The methodology and algorithm of applying the Wielandt method to the estimation of adjoint-based sensitivity coefficients are discussed. Verification is performed by comparing the sensitivity coefficients calculated by the Wielandt method with analytical solutions, with those computed by the IFP method, which is also implemented in RMC for sensitivity analysis, and with those from the multi-group TSUNAMI-3D module in the SCALE code package. (author)

  8. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis
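
    The screening step mentioned above can be sketched with a minimal Morris-style elementary-effects computation. This is a simplified radial one-at-a-time design on a hypothetical three-input model, not the repository model itself:

```python
import numpy as np

# Morris-style elementary effects on a toy model with one influential,
# one mildly influential and one inert input (all on [0, 1]).
rng = np.random.default_rng(1)

def model(x):
    return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

d, r, delta = 3, 50, 0.25
effects = np.zeros((r, d))
for t in range(r):
    x = rng.uniform(0, 1 - delta, d)     # keep x + delta inside the unit cube
    y0 = model(x)
    for i in range(d):                   # one-at-a-time perturbations
        xp = x.copy()
        xp[i] += delta
        effects[t, i] = (model(xp) - y0) / delta

mu_star = np.abs(effects).mean(axis=0)   # Morris mu*: mean absolute effect
sigma = effects.std(axis=0)              # spread flags nonlinearity/interaction
print("mu* =", np.round(mu_star, 2), " sigma =", np.round(sigma, 2))
```

    Inputs with small mu* (here the third one) can be frozen before running a more expensive variance-based method such as FAST on the survivors.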

  9. An easily implemented static condensation method for structural sensitivity analysis

    Science.gov (United States)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
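
    The static condensation underlying the black-box approach can be sketched in a few lines: partition the stiffness matrix, eliminate the slave DOFs, and check that the retained master DOFs solve to the same displacements. A random symmetric positive definite matrix stands in for a real finite element stiffness:

```python
import numpy as np

# Static (Guyan) condensation sketch: eliminate "slave" DOFs s from
# K u = f and verify the retained "master" DOFs m get the same solution.
rng = np.random.default_rng(2)
n, m_idx, s_idx = 6, [0, 1, 2], [3, 4, 5]

A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)          # symmetric positive definite stiffness
f = np.zeros(n)
f[m_idx] = [1.0, -2.0, 0.5]          # loads on master DOFs only

Kmm = K[np.ix_(m_idx, m_idx)]
Kms = K[np.ix_(m_idx, s_idx)]
Kss = K[np.ix_(s_idx, s_idx)]
Kc = Kmm - Kms @ np.linalg.solve(Kss, Kms.T)   # condensed stiffness

u_masters = np.linalg.solve(Kc, f[m_idx])      # reduced solve
u_full = np.linalg.solve(K, f)                 # reference full solve
print(np.allclose(u_masters, u_full[m_idx]))   # True
```

    The reduction is exact for static loads applied only to master DOFs, which is why finite-difference sensitivities computed on the condensed system match the full model.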

  10. Complex finite element sensitivity method for creep analysis

    International Nuclear Information System (INIS)

    Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry

    2015-01-01

    The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
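
    The complex-variable trick that ZFEM builds on can be shown in miniature with the complex-step derivative, which avoids the subtractive cancellation of finite differences. A toy scalar function is used here, not the creep UEL:

```python
import cmath
import math

# Complex-step derivative: f'(x) ~= Im(f(x + i*h)) / h is accurate to
# machine precision because no subtraction of nearly equal values occurs,
# so h can be made arbitrarily small (here 1e-30).
def f(x):
    return x ** 3 * cmath.sin(x)

x0, h = 1.3, 1e-30
d_cs = (f(x0 + 1j * h)).imag / h                    # complex step
d_fd = ((f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6).real  # central difference
d_exact = 3 * x0 ** 2 * math.sin(x0) + x0 ** 3 * math.cos(x0)
print(abs(d_cs - d_exact), abs(d_fd - d_exact))
```

    The same idea, promoted from scalars to complex-valued element matrices, is what lets a single perturbed ZFEM run return exact partial derivatives of the response.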

  11. Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems : Method Development for Multi-Physics in Reactors

    NARCIS (Netherlands)

    Perkó, Z.

    2015-01-01

    This thesis presents novel adjoint and spectral methods for the sensitivity and uncertainty (S&U) analysis of multi-physics problems encountered in the field of reactor physics. The first part focuses on the steady state of reactors and extends the adjoint sensitivity analysis methods well ...

  12. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c...

  13. An ESDIRK Method with Sensitivity Analysis Capabilities

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove

    2004-01-01

    ... of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...
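
    The sensitivity-equation idea (integrating the state together with its parameter sensitivities) can be sketched on a scalar decay model y' = -p*y, whose sensitivity s = dy/dp obeys s' = -p*s - y. Classic RK4 stands in here for an ESDIRK scheme:

```python
import math

# Forward sensitivity integration: advance state y and sensitivity s
# together with fourth-order Runge-Kutta. Analytic reference:
# y(t) = exp(-p*t), s(t) = dy/dp = -t*exp(-p*t).
p, dt, t_end = 0.7, 1e-3, 2.0

def rhs(y, s):
    return -p * y, -p * s - y

y, s = 1.0, 0.0                      # y(0) = 1, s(0) = dy(0)/dp = 0
for _ in range(int(round(t_end / dt))):
    k1y, k1s = rhs(y, s)
    k2y, k2s = rhs(y + 0.5 * dt * k1y, s + 0.5 * dt * k1s)
    k3y, k3s = rhs(y + 0.5 * dt * k2y, s + 0.5 * dt * k2s)
    k4y, k4s = rhs(y + dt * k3y, s + dt * k3s)
    y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    s += dt / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)

print(y, s)   # analytic: exp(-1.4) ~ 0.2466 and -2*exp(-1.4) ~ -0.4932
```

    Note how the sensitivity equation reuses the state right-hand side already evaluated at each stage, the same reuse the abstract highlights for the ESDIRK implementation.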

  14. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  15. Sensitive emission spectrometric method for the analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    Sugimae, A.

    1975-01-01

    A rapid and sensitive emission spectrometric method for the routine analysis of airborne particulate matter collected on glass fiber filters is reported. The method is a powder dc-arc technique involving no chemical pre-enrichment procedures. The elements Ag, Ba, Be, Bi, Cd, Co, Cr, Cu, Fe, Ga, La, Mn, Ni, Pb, Sn, V, Y, Yb, and Zn were determined. (U.S.)

  16. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

    This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
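
    The economy of the adjoint approach can be seen on a small algebraic analogue: for a response J = c'u of a linear system K(p)u = f, a single adjoint solve K'lam = c yields dJ/dp for any number of parameters via dJ/dp = -lam'(dK/dp)u. Random matrices stand in for the two-phase flow operators:

```python
import numpy as np

# Adjoint sensitivity on a linear algebraic analogue: one adjoint solve
# replaces one extra forward solve per parameter.
rng = np.random.default_rng(3)
n = 5
K0 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base operator
D = rng.normal(size=(n, n))                    # dK/dp for one scalar parameter
f = rng.normal(size=n)
c = rng.normal(size=n)
p0 = 0.3

def J(p):
    return c @ np.linalg.solve(K0 + p * D, f)

u = np.linalg.solve(K0 + p0 * D, f)            # forward solve
lam = np.linalg.solve((K0 + p0 * D).T, c)      # single adjoint solve
dJ_adj = -lam @ D @ u                          # dJ/dp from adjoint identity

h = 1e-6
dJ_fd = (J(p0 + h) - J(p0 - h)) / (2 * h)      # finite-difference check
print(dJ_adj, dJ_fd)
```

    The report's construction is the differential-equation counterpart of this identity, with Gateaux differentials playing the role of the matrix derivative dK/dp.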

  17. The Role of Numerical Methods in the Sensitivity Analysis of a ...

    African Journals Online (AJOL)

    The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics relies on an initial value problem defined by a first-order ordinary differential equation. Two numerical methods for studying the sensitivity analysis of physiochemical interaction data are developed.

  18. Application of perturbation methods for sensitivity analysis for nuclear power plant steam generators

    International Nuclear Information System (INIS)

    Gurjao, Emir Candeia

    1996-02-01

    The differential and GPT (Generalized Perturbation Theory) formalisms of perturbation theory were applied in this work to a simplified U-tube steam generator model to perform sensitivity analysis. The adjoint and importance equations, with the corresponding expressions for the sensitivity coefficients, were derived for this steam generator model. The system was numerically solved in a Fortran program, called GEVADJ, in order to calculate the sensitivity coefficients. A transient loss of forced primary coolant in the nuclear power plant Angra-1 was used as an example case. The average and final values of the functionals secondary pressure and enthalpy were studied in relation to changes in the secondary feedwater flow, enthalpy and total volume of the secondary circuit. Absolute variations in the above functionals were calculated using the perturbative methods, considering variations in the feedwater flow and total secondary volume. Comparison with the same variations obtained via the direct model showed in general good agreement, demonstrating the potential of perturbative methods for sensitivity analysis of nuclear systems. (author)

  19. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    Science.gov (United States)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  20. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    Science.gov (United States)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, residual vibration reduction is more and more important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that takes this dependence into account. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
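
    The Lyapunov-equation shortcut for the integrated quadratic index can be sketched as follows: for a stable system x' = Ax, the index J = integral of x'Qx dt equals x0'Px0, where P solves A'P + PA + Q = 0. A toy two-state oscillator is used, and the Lyapunov equation is solved by Kronecker vectorisation to keep the example self-contained:

```python
import numpy as np

# Integrated quadratic residual-vibration index via the Lyapunov equation,
# cross-checked against direct time integration.
A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])        # lightly damped oscillator (stable)
Q = np.eye(2)
x0 = np.array([1.0, 0.0])
n = 2

# Solve A'P + PA = -Q by vectorisation (column-major vec convention).
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.flatten('F')).reshape((n, n), order='F')
J_lyap = x0 @ P @ x0                # closed-form value of the index

# Cross-check: integrate x' = Ax with RK4 and accumulate x'Qx.
x, dt, J_num = x0.copy(), 1e-3, 0.0
for _ in range(40000):              # 40 s, long after the response decays
    J_num += (x @ Q @ x) * dt
    k1 = A @ x
    k2 = A @ (x + 0.5 * dt * k1)
    k3 = A @ (x + 0.5 * dt * k2)
    k4 = A @ (x + dt * k3)
    x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print(J_lyap, J_num)
```

    For this A and Q the hand-solved Lyapunov solution gives J = 3.225 exactly; the quadrature agrees to the accuracy of the rectangle rule. Differentiating J through P is what makes the index convenient for gradient-based residual-vibration optimization.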

  1. Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis

    International Nuclear Information System (INIS)

    Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.

    1991-01-01

    The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs

  2. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    International Nuclear Information System (INIS)

    Lee, Seokje; Kim, Ingul; Jang, Moonho; Kim, Jaeki; Moon, Jungwon

    2013-01-01

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle

  3. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seokje; Kim, Ingul [Chungnam National Univ., Daejeon (Korea, Republic of); Jang, Moonho; Kim, Jaeki; Moon, Jungwon [LIG Nex1, Yongin (Korea, Republic of)

    2013-04-15

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle.
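
    A response-surface reliability calculation of this kind can be sketched generically: fit a cheap surrogate to a handful of model runs, then run Monte Carlo on the surrogate. The deflection model, distribution and limit below are hypothetical stand-ins, not the stabilizer model of the paper:

```python
import numpy as np

# Response-surface sketch: quadratic surrogate fitted to a few "expensive"
# model runs, then a cheap Monte Carlo failure-probability estimate.
rng = np.random.default_rng(5)

def tip_deflection(E):               # hypothetical model: deflection vs stiffness
    return 1000.0 / E

# Design points around the mean value, then a least-squares quadratic fit.
E_pts = np.linspace(80.0, 120.0, 9)
d_pts = tip_deflection(E_pts)
coef = np.polyfit(E_pts, d_pts, 2)

E_samples = rng.normal(100.0, 8.0, 500_000)   # random material property
d_surr = np.polyval(coef, E_samples)          # surrogate evaluations only
p_fail = np.mean(d_surr > 11.5)               # hypothetical deflection limit
print(p_fail)
```

    For this toy limit state the exact failure probability is about 0.052 (a normal tail probability), so the surrogate-based estimate can be checked directly; in the paper's setting the surrogate replaces finite element runs.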

  4. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single batch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performances are investigated. Since the method depends heavily on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
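
    The scatter-plot partitioning idea is easy to state in code: sort the given sample by one input, split it into equal-count bins, and take the variance of the bin means of Y over the total variance as the first-order index. A hypothetical linear test model with one inert input is used:

```python
import numpy as np

# Space-partition (scatter-plot binning) estimate of first-order indices
# from a single batch of data: S_i ~= Var_bins(E[Y | X_i in bin]) / Var(Y).
rng = np.random.default_rng(4)
n, n_bins = 200_000, 50
X = rng.uniform(0, 1, (n, 3))
Y = 4.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, n)   # X3 is inert

S = []
for i in range(3):
    order = np.argsort(X[:, i])
    bins = np.array_split(Y[order], n_bins)           # equal-count partition
    bin_means = np.array([b.mean() for b in bins])
    S.append(bin_means.var() / Y.var())
print(np.round(S, 3))
```

    One batch of runs yields all three indices; the bin count is the partition-scheme choice whose optimisation the paper addresses. Analytic values for this model are about 0.93, 0.06 and 0.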

  5. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.
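
    The subspace idea that underlies these hybrid methods can be shown in miniature with a randomized range finder: a handful of random probes identifies the few directions in a high-dimensional input space that actually drive the response. The rank-3 linear "sensitivity operator" below is a hypothetical stand-in for a reactor model's input-output map:

```python
import numpy as np

# Randomized subspace identification: a few random "model runs" A @ Omega
# span the active subspace of a low-rank sensitivity operator A.
rng = np.random.default_rng(6)
n_out, n_in, rank = 60, 500, 3

U = rng.normal(size=(n_out, rank))
V = rng.normal(size=(n_in, rank))
A = U @ V.T                                # operator with effective rank 3

Omega = rng.normal(size=(n_in, rank + 5))  # 8 random input perturbations
Q, _ = np.linalg.qr(A @ Omega)             # orthonormal basis of the range
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(err)
```

    Eight probe runs recover the full action of an operator on a 500-dimensional input space, which is the sense in which subspace methods tame the explosion in dimensionality.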

  6. 'PSA-SPN' - A Parameter Sensitivity Analysis Method Using Stochastic Petri Nets: Application to a Production Line System

    International Nuclear Information System (INIS)

    Labadi, Karim; Saggadi, Samira; Amodeo, Lionel

    2009-01-01

    The dynamic behavior of a discrete event dynamic system can be significantly affected by uncertain changes in its decision parameters, so parameter sensitivity analysis is a useful way of studying the effects of these changes on system performance. In the past, sensitivity analysis approaches were frequently based on simulation models. In recent years, formal methods based on stochastic processes, including Markov processes, have been proposed in the literature. In this paper, we are interested in the parameter sensitivity analysis of discrete event dynamic systems, using stochastic Petri net models as a tool for modelling and performance evaluation. A sensitivity analysis approach based on stochastic Petri nets, called the PSA-SPN method, is proposed, with an application to a production line system.

  7. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
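
    The local side of the local/global distinction can be made concrete with the classic normalised sensitivity coefficient S = (p/y)*dy/dp, here evaluated by central differences on a Michaelis-Menten rate law with hypothetical parameter values:

```python
# Normalised local sensitivity coefficients for the Michaelis-Menten rate
# v = Vmax * s / (Km + s), via small central-difference perturbations.
def v(Vmax, Km, s):
    return Vmax * s / (Km + s)

Vmax, Km, s = 10.0, 2.0, 1.0     # hypothetical enzyme parameters
h = 1e-6
S_Vmax = (Vmax / v(Vmax, Km, s)) * (v(Vmax + h, Km, s) - v(Vmax - h, Km, s)) / (2 * h)
S_Km = (Km / v(Vmax, Km, s)) * (v(Vmax, Km + h, s) - v(Vmax, Km - h, s)) / (2 * h)
print(S_Vmax, S_Km)   # analytic: 1 and -Km/(Km+s) = -2/3
```

    Because the rate is linear in Vmax its normalised sensitivity is exactly 1; a global method would additionally report how these coefficients shift when Km and the substrate level vary over their full ranges.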

  8. Screening, sensitivity, and uncertainty for the CREAM method of Human Reliability Analysis

    International Nuclear Information System (INIS)

    Bedford, Tim; Bayley, Clare; Revie, Matthew

    2013-01-01

    This paper reports a sensitivity analysis of the Cognitive Reliability and Error Analysis Method (CREAM) for Human Reliability Analysis. We consider three different aspects: the difference between the outputs of the Basic and Extended methods on the same HRA scenario; the variability in outputs through the choices made for common performance conditions (CPCs); and the variability in outputs through the assignment of choices for cognitive function failures (CFFs). We discuss the problem of interpreting categories when applying the method, compare its quantitative structure to that of first-generation methods, and discuss how dependence is modelled within the approach. We show that the control mode intervals used in the Basic method are too narrow to be consistent with the Extended method. This motivates a new screening method that gives improved accuracy with respect to the Basic method, in the sense that it (on average) halves the uncertainty associated with the Basic method. We make some observations on the design of a screening method that are generally applicable in Risk Analysis. Finally, we propose a new method of combining CPC weights with nominal probabilities so that the calculated probabilities are always in range (i.e. between 0 and 1), while satisfying sensible properties that are consistent with the overall CREAM method.

  9. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanism of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's promotion in China.
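    The one-at-a-time perturbation index used in such studies (relative output change divided by relative input change) can be sketched as follows. The runoff function and parameter values below are invented stand-ins for illustration, not AnnAGNPS itself.

```python
import numpy as np

# Hypothetical runoff response standing in for an AnnAGNPS run (assumed).
def runoff(params):
    cn, ls, k = params["CN"], params["LS"], params["K"]
    return 0.05 * cn**1.8 + 3.0 * ls + 10.0 * k

base = {"CN": 75.0, "LS": 1.2, "K": 0.3}
y0 = runoff(base)

def perturbation_index(name, delta=0.10):
    """Relative output change per relative input change for a +10% perturbation."""
    p = dict(base)
    p[name] *= (1.0 + delta)
    return ((runoff(p) - y0) / y0) / delta

for name in base:
    print(f"S({name}) = {perturbation_index(name):+.3f}")
```

Because CN enters the assumed response non-linearly and dominates the output, its index is near its exponent (about 1.8), while LS and K come out much smaller.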

  10. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution-based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in the distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis-of-variance decomposition, extended bases and the homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.

  11. PROMETHEE Method and Sensitivity Analysis in the Software Application for the Support of Decision-Making

    Directory of Open Access Journals (Sweden)

    Petr Moldrik

    2008-01-01

    Full Text Available PROMETHEE is one of the methods of multi-criteria analysis (MCA). MCA, as the name indicates, deals with the evaluation of particular variants according to several criteria. The software application (MCA8) developed for the support of multi-criteria decision-making was extended with the PROMETHEE method and a graphic tool that enables the execution of sensitivity analysis. This analysis is used to ascertain how a given model output depends upon the input parameters. The MCA8 software application with the mentioned graphic upgrade was developed for solving multi-criteria decision tasks. In MCA8 it is possible to perform sensitivity analysis in a simple form, through column charts: the criteria significances (weights) can be changed directly in these charts, and the resulting changes in the order of variants observed immediately.
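    A minimal PROMETHEE II sketch with the "usual" preference function shows the kind of weight sensitivity such a tool exposes. The decision matrix, weights and variant count are assumptions for illustration, not MCA8 data.

```python
import numpy as np

# Decision matrix: 3 variants x 2 criteria, both to maximize (assumed data).
A = np.array([[8.0, 3.0],
              [5.0, 7.0],
              [6.0, 5.0]])

def promethee_ii(A, weights):
    """Net outranking flows with the 'usual' preference function P(d) = 1 if d > 0."""
    n = len(A)
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = np.sum(weights * (A[a] > A[b]))  # weighted preference a over b
            pref_ba = np.sum(weights * (A[b] > A[a]))
            phi[a] += (pref_ab - pref_ba) / (n - 1)
    return phi

# Sensitivity analysis: sweep the weight of criterion 1 and watch the ranking.
for w1 in (0.3, 0.5, 0.7):
    w = np.array([w1, 1.0 - w1])
    phi = promethee_ii(A, w)
    order = np.argsort(-phi)          # best variant first
    print(f"w1={w1:.1f}  phi={np.round(phi, 2)}  ranking={order}")
```

As the weight of criterion 1 rises from 0.3 to 0.7, the best variant flips from the second to the first, which is exactly the kind of rank reversal the column-chart weight sweep makes visible.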

  12. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (the response function to be analysed or the cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment-scale hydrology deserves consideration. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found to be very efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found to be very promising, but should be combined with another regularization strategy in order to prevent overfitting.
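    The adjoint mechanics can be sketched on a minimal discrete model; the scalar decay equation, cost function and step sizes below are assumptions for illustration, not the flash flood model of the paper. One backward sweep delivers dJ/dp at the cost of roughly one extra model run, and the result is checked against finite differences.

```python
import numpy as np

dt, N, u0, p = 0.01, 100, 1.0, 0.5

def forward(p):
    """Explicit Euler on du/dt = -p*u (the 'model run')."""
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):
        u[n + 1] = u[n] * (1.0 - p * dt)
    return u

def cost(p):
    return 0.5 * forward(p)[-1] ** 2     # J = 0.5 * u_N^2

# Discrete adjoint: lam_N = dJ/du_N, then sweep backwards through the steps.
u = forward(p)
lam = u[-1]                              # lam_N = u_N
grad = 0.0
for n in reversed(range(N)):
    grad += lam * (-dt * u[n])           # contribution of d u_{n+1} / dp
    lam = lam * (1.0 - p * dt)           # lam_n = lam_{n+1} * d u_{n+1} / d u_n

# Finite-difference check of the adjoint gradient.
eps = 1e-6
fd = (cost(p + eps) - cost(p - eps)) / (2 * eps)
print(f"adjoint dJ/dp = {grad:.8f}, finite diff = {fd:.8f}")
```

The backward sweep reuses the stored forward trajectory, which is why the cost of the full gradient is independent of the number of parameters, the property exploited for spatially distributed sensitivities.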

  13. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  14. Risk Assessment Method for Offshore Structure Based on Global Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Zou Tao

    2012-01-01

    Full Text Available Based on global sensitivity analysis (GSA), this paper proposes a new risk assessment method for offshore structure design. The method first quantifies the significance of all random variables and their parameters; by comparing degrees of importance, minor factors can be neglected, simplifying the global uncertainty analysis. Global uncertainty analysis (GUA) is an effective way to study the complexity and randomness of natural events. Since field-measured data and statistical results often carry inevitable errors and uncertainties that lead to inaccurate prediction and analysis, the risk in the design stage of offshore structures caused by uncertainties in environmental loads, sea level, and marine corrosion must be taken into account. In this paper, the multivariate compound extreme value distribution model (MCEVD) is applied to predict the extreme sea state of wave, current, and wind. The maximum structural stress and deformation of a jacket platform are analyzed and compared with different design standards. The calculation results sufficiently demonstrate the rationality and safety of the new risk assessment method.

  15. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for the 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of the time step compared with other physical parameters, the simulation can be run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
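    The forward sensitivity idea can be sketched on a scalar analogue; the decay equation and values below are assumptions for illustration, not the paper's 1-D isothermal flow equations. The sensitivity s = ∂u/∂k is integrated alongside the state using the linearised equation ds/dt = (∂f/∂u)s + ∂f/∂k, and both are checked against the analytic solution.

```python
import numpy as np

k, u0, dt, T = 2.0, 1.0, 1e-4, 1.0   # model du/dt = -k*u, u(0) = u0 (assumed)
steps = int(T / dt)

u, s = u0, 0.0                        # s = du/dk, zero at t = 0
for _ in range(steps):
    du = -k * u
    ds = -k * s - u                   # d/dt(du/dk) = (df/du)*s + df/dk
    u += dt * du                      # explicit Euler for state and sensitivity
    s += dt * ds

exact_u = u0 * np.exp(-k * T)         # analytic state
exact_s = -T * u0 * np.exp(-k * T)    # analytic sensitivity du/dk
print(f"u(T)  numeric {u:.6f}  exact {exact_u:.6f}")
print(f"du/dk numeric {s:.6f}  exact {exact_s:.6f}")
```

Shrinking dt and watching both errors fall together is the scalar analogue of the time-step sensitivity idea: the discretization error of the sensitivity is quantified by the same forward machinery that produces it.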

  16. Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    Directory of Open Access Journals (Sweden)

    J. D. Herman

    2013-07-01

    Full Text Available The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
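    The method of Morris can be sketched in a few lines; the three-input test function, grid levels and trajectory count below are illustrative assumptions, not the HL-RDHM setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Assumed test function: strong effect of x0, weak x1, inert x2.
    return 5.0 * x[0] + 0.5 * np.sin(2.0 * np.pi * x[1]) + 0.0 * x[2]

def morris_mu_star(model, dim, trajectories=20, levels=4):
    """Mean absolute elementary effects (mu*) on the unit hypercube."""
    delta = levels / (2.0 * (levels - 1))            # standard Morris step
    effects = np.zeros((trajectories, dim))
    for t in range(trajectories):
        # Random grid start low enough that +delta stays inside [0, 1].
        x = rng.integers(0, levels // 2, size=dim) / (levels - 1)
        y = model(x)
        for i in rng.permutation(dim):               # move one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[t, i] = abs(y_new - y) / delta
            x, y = x_new, y_new                      # dim + 1 runs per trajectory
    return effects.mean(axis=0)

mu_star = morris_mu_star(model, dim=3)
print("mu* =", np.round(mu_star, 3))                 # x0 dominates, x2 is inert
```

Each trajectory costs dim + 1 model runs, so the total budget scales linearly with the number of parameters, which is the source of the enormous savings over full variance decomposition reported in the note.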

  17. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves the use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
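    A generic Saltelli-type Monte Carlo estimator of Sobol' first-order and total indices can be sketched as follows; this is not the authors' improved Homma–Saltelli variant, and the Ishigami-style test function is an assumed stand-in for the tree growth model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(X):
    # Ishigami-type test function with known strong interactions (assumed choice).
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    yABi = model(ABi)
    S1.append(np.mean(yB * (yABi - yA)) / var)        # first-order index
    ST.append(0.5 * np.mean((yA - yABi) ** 2) / var)  # total effect (Jansen form)
print("S1 =", np.round(S1, 3))
print("ST =", np.round(ST, 3))
```

For this test function the analytical indices are S1 ≈ (0.31, 0.44, 0) and ST3 ≈ 0.24, so x3 acts purely through its interaction with x1: exactly the gap between first-order and total indices that variance decomposition is designed to expose. The total cost is n(d + 2) model runs, which is why re-sampling strategies and error estimators for the sampling size matter.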

  18. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  19. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model

  20. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research

  1. Sensitivity analysis of an environmental model: an application of different analysis methods

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Saltelli, Andrea

    1997-01-01

    A parametric sensitivity analysis (SA) was conducted on a well-known model for the production of a key sulphur-bearing compound from algal biota. The model is of interest because of the climatic relevance of the gas (dimethylsulphide, DMS), an initiator of cloud particles. A screening test at low sample size (the Morris method) is applied first, followed by a computationally intensive variance-based measure. Standardised regression coefficients are also computed. The various SA measures are compared with each other, and the use of the bootstrap is suggested to extract empirical confidence bounds on the SA estimators. For some of the input factors, the investigators' guesses about parameter relevance were confirmed; for others, the results shed new light on the system mechanism and on the data parametrisation
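    The standardised regression coefficients and bootstrap confidence bounds mentioned above can be sketched as follows; the linear toy model below is purely an assumption standing in for the DMS model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed toy model in place of the DMS biogeochemistry model.
n = 2000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

def src(X, y):
    """Standardised regression coefficients via least squares."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Bootstrap confidence bounds on the SRCs, as suggested in the abstract.
boot = np.array([src(X[idx], y[idx])
                 for idx in rng.integers(0, n, size=(500, n))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for i, (b, l, h) in enumerate(zip(src(X, y), lo, hi)):
    print(f"SRC(x{i+1}) = {b:+.3f}  [{l:+.3f}, {h:+.3f}]")
```

When a bootstrap interval for a factor straddles zero, that factor's apparent influence cannot be distinguished from sampling noise, which is precisely what the empirical confidence bounds are meant to reveal.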

  2. Application of perturbation methods and sensitivity analysis to water hammer problems in hydraulic networks

    International Nuclear Information System (INIS)

    Balino, Jorge L.; Larreteguy, Axel E.; Andrade Lima, Fernando R.

    1995-01-01

    The differential method was applied to the sensitivity analysis of water hammer problems in hydraulic networks. Starting from the classical water hammer equations for a single-phase liquid with friction, the state vector comprising the piezometric head and the velocity was defined. Applying the differential method, the adjoint operator, the adjoint equations with the general form of their boundary conditions, and the general form of the bilinear concomitant were calculated. The discretized adjoint equations and the corresponding boundary conditions were programmed and solved using the so-called method of characteristics. As an example, a constant-level tank connected through a pipe to a valve discharging to the atmosphere was considered, and the bilinear concomitant was calculated for this particular case. The corresponding sensitivity coefficients due to the variation of different parameters were also calculated, using both the differential method and the response surface generated by the computer code WHAT. The results obtained with these methods show excellent agreement. (author). 11 refs, 2 figs, 2 tabs

  3. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The method was applied to gait dynamics and kinematics data of nine subjects, with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations of its inputs. The methods based on the study of the variance quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  5. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  6. Global and Local Sensitivity Analysis Methods for a Physical System

    Science.gov (United States)

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  7. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-based design in finite element modeling engineering practice.

  8. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  9. Testing of the derivative method and Kruskal-Wallis technique for sensitivity analysis of SYVAC

    International Nuclear Information System (INIS)

    Prust, J.O.; Edwards, H.H.

    1985-04-01

    The Kruskal-Wallis method of one-way analysis of variance by ranks has proved successful in identifying input parameters which have an important influence on dose. This technique was extended to test for first-order interactions between parameters. In view of a number of practical difficulties and the computing resources required to carry out a large number of runs, this test is not recommended for detecting interactions between parameters. The derivative method of sensitivity analysis examines the partial derivatives of dose with respect to each input parameter at various points across the parameter range. Important input parameters are associated with high derivatives, and the results agreed well with previous sensitivity studies. The derivative values also provided information on the data generation distributions to be used for the input parameters, in order to concentrate sampling in the high-dose region of the parameter space and so improve sampling efficiency. Furthermore, the derivative values provided information on parameter interactions and the feasibility of developing a high-dose algorithm, and formed the basis for developing a regression equation. (author)
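    The Kruskal-Wallis statistic used for screening can be computed directly; this is a generic sketch with made-up dose samples grouped by bands of an input parameter, not SYVAC output.

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (one-way ANOVA on ranks) with tie correction."""
    data = np.concatenate(groups)
    N = data.size
    order = np.argsort(data, kind="mergesort")
    ranks = np.empty(N)
    ranks[order] = np.arange(1, N + 1, dtype=float)
    for v in np.unique(data):               # mid-ranks for tied values
        tie = data == v
        ranks[tie] = ranks[tie].mean()
    h, start = 0.0, 0
    for g in groups:                        # sum of group deviations from mean rank
        r = ranks[start:start + g.size]
        h += g.size * (r.mean() - (N + 1) / 2.0) ** 2
        start += g.size
    h *= 12.0 / (N * (N + 1))
    ties = [np.sum(data == v) for v in np.unique(data)]
    correction = 1.0 - sum(t**3 - t for t in ties) / (N**3 - N)
    return h / correction

# Doses for three bands of an input parameter (assumed values):
low  = np.array([1.2, 1.4, 1.1, 1.3])
mid  = np.array([2.1, 2.3, 2.0, 2.4])
high = np.array([3.5, 3.8, 3.6, 3.9])
print(f"H = {kruskal_wallis_h(low, mid, high):.3f}")   # H ≈ 9.846
```

For three perfectly separated groups of four, H ≈ 9.846, far in the upper tail of the chi-squared distribution with 2 degrees of freedom, so the input parameter defining the bands would be flagged as influential on dose.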

  10. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure and quantifying the effects of each random source, or of their distribution parameters, on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method, combined with a first-order approximation of the motion error function, is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
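    The parametric reliability sensitivity idea, a derivative of a failure probability with respect to a distribution parameter, can be sketched with a score-function (likelihood-ratio) estimator for a scalar Gaussian input. This is a generic illustration with an assumed failure threshold, not the envelope-function method of the paper.

```python
import numpy as np
from math import exp, pi, sqrt

rng = np.random.default_rng(3)
mu, sigma, threshold = 0.0, 1.0, 2.0   # failure when X > threshold (assumed)

n = 2_000_000
x = rng.normal(mu, sigma, n)
fail = x > threshold

pf = fail.mean()
# Score-function estimator of dPf/dmu:
#   d/dmu E[1{fail}] = E[1{fail} * (X - mu) / sigma^2]
dpf_dmu = np.mean(fail * (x - mu) / sigma**2)

# Analytic check: Pf = 1 - Phi((t - mu)/sigma), dPf/dmu = phi((t - mu)/sigma)/sigma
z = (threshold - mu) / sigma
exact = exp(-0.5 * z * z) / (sqrt(2.0 * pi) * sigma)
print(f"Pf ~ {pf:.5f},  dPf/dmu ~ {dpf_dmu:.5f}  (exact {exact:.5f})")
```

The same sample that estimates the failure probability also estimates its derivative with respect to the distribution mean, so no re-simulation at perturbed parameter values is needed: the efficiency argument that motivates dedicated PRS estimators.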

  11. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, quantifying the impact of the uncertainty of each input parameter on the output is of interest; this step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability context. This thesis aims to test existing sensitivity analysis methods and to propose more efficient original ones. A bibliographical review of sensitivity analysis on one hand, and of the estimation of small failure probabilities on the other, is first proposed; it raises the need to develop appropriate techniques. Two variable ranking methods are then explored. The first makes use of binary classifiers (random forests). The second measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology, reflecting the impact of input density modification on the failure probability, is then explored. The proposed methods are applied to the CWNR case, which motivates this thesis. (author)

  12. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas are supplied for linear programming problems. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.

  13. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas are supplied for linear programming problems. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
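The kind of optimization sensitivity discussed here can be seen in miniature with a linear program: at a non-degenerate optimum, the dual variables (shadow prices) give the rate of change of the optimal objective with respect to the constraint data. A minimal sketch with hypothetical product-mix data, assuming SciPy's HiGHS backend (which exposes duals as `res.ineqlin.marginals`), cross-checked by re-solving with perturbed right-hand sides:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical product-mix LP: maximize 3x + 5y
# subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
c = [-3.0, -5.0]                         # linprog minimizes, so negate
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs")
duals = res.ineqlin.marginals            # d(objective)/d(b_i) at the optimum

# Cross-check the closed-formula sensitivities by re-solving with perturbed b
eps = 1e-3
fd = np.empty(3)
for i in range(3):
    bp = b.copy()
    bp[i] += eps
    fd[i] = (linprog(c, A_ub=A, b_ub=bp, method="highs").fun - res.fun) / eps

print(res.fun, duals, fd)                # duals and finite differences agree
```

The first constraint is slack at the optimum, so its dual is zero: relaxing it does not change the objective.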

  14. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which perturbations of all orders were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results show that, in the case of EK-10 fuel (low burn-up), the first order sensitivity was enough to achieve an accuracy of 1%, while in the case of MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.

  15. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and application experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if so preferred, and still use the rest of SCREEN for identifying important input variables.

  16. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive, ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
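The variance-reduction idea behind the coupled finite-difference estimator can be shown on a toy expectation: using the same noise realization in both perturbed runs (a simple form of coupling, i.e. common random numbers) removes most of the estimator variance. This is a sketch under that assumption, not the paper's trajectory-coupling construction for reaction networks:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, h, n, reps = 2.0, 0.1, 1_000, 200

def mean_sq(t, z):
    # Monte Carlo estimate of E[(t + Z)^2]; the true value is t**2 + 1
    return np.mean((t + z) ** 2)

indep, coupled = [], []
for _ in range(reps):
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    indep.append((mean_sq(theta + h, z1) - mean_sq(theta - h, z2)) / (2 * h))
    z = rng.standard_normal(n)           # coupling: identical noise in both runs
    coupled.append((mean_sq(theta + h, z) - mean_sq(theta - h, z)) / (2 * h))

indep, coupled = np.array(indep), np.array(coupled)
print(indep.var(), coupled.var())        # the coupled estimator is far less noisy
print(coupled.mean())                    # both target d/dtheta E = 2 * theta
```

With independent noise the difference of two noisy estimates is divided by a small step 2h, which amplifies the noise; with shared noise the common fluctuations cancel before the division.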

  17. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Full Text Available Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive, ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  18. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in

  19. Multitarget global sensitivity analysis of n-butanol combustion.

    Science.gov (United States)

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-02

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.

  20. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  1. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    Full Text Available In the last decade there were some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently, airports were considered infrastructure elements, and they were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need for other, more efficiency-oriented ways of evaluation. The existing methods for assessing the efficiency of other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years, several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations of some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
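The input-oriented CCR envelopment model commonly used in such DEA benchmarking reduces to one linear program per decision-making unit. A minimal sketch with hypothetical single-input, single-output data (the study itself uses many inputs and outputs per airport):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (m, n) inputs and Y: (s, n) outputs for n decision-making units.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]          # decision vector [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, j0]              # X @ lam - theta * x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                    # -Y @ lam <= -y0  (i.e. Y @ lam >= y0)
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

X = np.array([[2.0, 4.0, 4.0, 8.0]])     # hypothetical input (e.g. staff hours)
Y = np.array([[2.0, 4.0, 2.0, 4.0]])     # hypothetical output (e.g. flights served)
effs = [dea_efficiency(X, Y, j) for j in range(X.shape[1])]
print(effs)
```

Units on the efficient frontier score 1; the others score the factor by which their inputs could be shrunk while a convex combination of peers still matches their outputs.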

  2. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  3. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    International Nuclear Information System (INIS)

    Williams, Mark L.; Rearden, Bradley T.

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.

  4. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  5. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
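The PRCC half of this approach can be sketched directly: rank-transform all variables, linearly regress the remaining parameters out of both the parameter of interest and the output, and correlate the residuals. A minimal version on a hypothetical monotonic model (the mutual-information measure is not shown):

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    rank-transform, regress the other columns out of both sides, then
    correlate the residuals."""
    n, k = X.shape
    R = np.column_stack([rankdata(X[:, i]) for i in range(k)])
    ry = rankdata(y)
    out = np.empty(k)
    for i in range(k):
        Z = np.c_[np.ones(n), np.delete(R, i, axis=1)]
        rx_res = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        ry_res = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = pearsonr(rx_res, ry_res)[0]
    return out

# Hypothetical model: strongly monotonic in x0, weaker in x1, x2 inert
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = np.exp(2.0 * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(500)
p = prcc(X, y)
print(p)
```

Because ranks linearize any monotone transform, the exponential dependence on x0 is captured almost perfectly, while the inert x2 scores near zero.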

  6. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values.
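The pinching idea can be illustrated with plain interval arithmetic, a crude stand-in for full probability boxes: replace one uncertain input by a point value and measure how much the output's uncertainty shrinks. A sketch on a hypothetical response, using naive (dependency-ignoring, hence conservative) interval operations:

```python
def i_add(a, b):
    # Interval sum [a_lo + b_lo, a_hi + b_hi]
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    # Interval product: extremes occur at endpoint combinations
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def width(iv):
    return iv[1] - iv[0]

# Hypothetical response z = x * y + x with interval-valued inputs
def response(x, y):
    return i_add(i_mul(x, y), x)

x, y = (1.0, 3.0), (2.0, 5.0)
base = width(response(x, y))
pinch_x = width(response((2.0, 2.0), y))      # pinch x to its midpoint
pinch_y = width(response(x, (3.5, 3.5)))      # pinch y to its midpoint
sens_x = 1.0 - pinch_x / base                 # share of uncertainty removed
sens_y = 1.0 - pinch_y / base
print(base, pinch_x, pinch_y, sens_x, sens_y)
```

Here learning x exactly removes a larger fraction of the output width than learning y, so x would be ranked as the more valuable input to refine.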

  7. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  8. The use of graph theory in the sensitivity analysis of the model output: a second order screening method

    International Nuclear Information System (INIS)

    Campolongo, Francesca; Braddock, Roger

    1999-01-01

    Sensitivity analysis screening methods aim to isolate the most important factors in experiments involving a large number of significant factors and interactions. This paper extends the one-factor-at-a-time screening method proposed by Morris. The new method, in addition to the 'overall' sensitivity measures already provided by the traditional Morris method, offers estimates of the two-factor interaction effects. The number of model evaluations required is O(k²), where k is the number of model input factors. The efficient sampling strategy in the parameter space is based on concepts of graph theory and on the solution of the 'handcuffed prisoner problem'.
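The traditional Morris method that this paper extends can be sketched as follows (the paper's second-order, graph-theory-based sampling is not reproduced here). Each random trajectory perturbs one factor at a time, yielding elementary effects whose mean absolute value (mu*) measures overall influence and whose spread (sigma) flags nonlinearity or interactions:

```python
import numpy as np

rng = np.random.default_rng(3)

def morris_screening(f, k, r=50, delta=0.25):
    """Basic Morris one-factor-at-a-time screening with r random trajectories."""
    ee = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # stay inside [0, 1]^k
        fx = f(x)
        for i in rng.permutation(k):                # perturb factors in random order
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            ee[i].append((f_new - fx) / delta)      # elementary effect of factor i
            x, fx = x_new, f_new
    mu_star = np.array([np.mean(np.abs(e)) for e in ee])  # overall importance
    sigma = np.array([np.std(e) for e in ee])             # nonlinearity/interactions
    return mu_star, sigma

# Hypothetical test model: x0 purely linear, x1 and x2 interact, x3 nearly inert
f = lambda x: 4.0 * x[0] + 5.0 * x[1] * x[2] + 0.1 * x[3]
mu_star, sigma = morris_screening(f, k=4)
print(mu_star)   # x3 is screened out as unimportant
print(sigma)     # the x1*x2 interaction shows up as large sigma for x1 and x2
```

The cost is r(k+1) model runs; the paper's extension spends O(k²) runs to additionally estimate two-factor interaction effects.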

  9. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  10. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, built using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  11. Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

    2012-01-01

    Sensitivity analysis is comprised of techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing samples sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
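The Sobol' variance-based indices discussed above have a standard sampling ("pick-freeze") estimator. A sketch on the Ishigami test function, whose first-order indices are known exactly (about 0.314, 0.442 and 0), rather than on the Riemann problem studied in the paper:

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    # Standard sensitivity-analysis test function with known exact indices
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(4)
n, k = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, k))
B = rng.uniform(-np.pi, np.pi, size=(n, k))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.r_[fA, fB])

S1 = np.empty(k)
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]                   # "freeze" coordinate i at B's values
    S1[i] = np.mean(fB * (ishigami(AB) - fA)) / var_y   # first-order estimator

print(S1)   # exact values are 0.3139, 0.4424, 0.0
```

Convergence studies of the kind described in the abstract repeat this at increasing n and compare the estimates against the exact indices.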

  12. A practical sensitivity analysis method for ranking sources of uncertainty in thermal–hydraulics applications

    Energy Technology Data Exchange (ETDEWEB)

    Pourgol-Mohammad, Mohammad, E-mail: pourgolmohammad@sut.ac.ir [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mojtaba [Building & Housing Research Center, Tehran (Iran, Islamic Republic of); Sepanloo, Kamran [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2016-08-15

    Highlights: • Existing uncertainty ranking methods prove inconsistent for TH applications. • Introduction of a new method for ranking sources of uncertainty in TH codes. • Modified PIRT qualitatively identifies and ranks uncertainty sources more precisely. • The importance of parameters is calculated by a limited number of TH code executions. • Methodology is applied successfully on LOFT-LB1 test facility. - Abstract: In application to thermal–hydraulic calculations by system codes, sensitivity analysis plays an important role for managing the uncertainties of code output and risk analysis. Sensitivity analysis is also used to confirm the results of qualitative Phenomena Identification and Ranking Table (PIRT). Several methodologies have been developed to address uncertainty importance assessment. Generally, uncertainty importance measures, mainly devised for the Probabilistic Risk Assessment (PRA) applications, are not affordable for computationally demanding calculations of the complex thermal–hydraulics (TH) system codes. In other words, for effective quantification of the degree of the contribution of each phenomenon to the total uncertainty of the output, a practical approach is needed by considering high computational burden of TH calculations. This study aims primarily to show the inefficiency of the existing approaches and then introduces a solution to cope with the challenges in this area by modification of variance-based uncertainty importance method. Important parameters are identified by the modified PIRT approach qualitatively then their uncertainty importance is quantified by a local derivative index. The proposed index is attractive from its practicality point of view on TH applications. It is capable of calculating the importance of parameters by a limited number of TH code executions. Application of the proposed methodology is demonstrated on LOFT-LB1 test facility.

  13. A practical sensitivity analysis method for ranking sources of uncertainty in thermal–hydraulics applications

    International Nuclear Information System (INIS)

    Pourgol-Mohammad, Mohammad; Hoseyni, Seyed Mohsen; Hoseyni, Seyed Mojtaba; Sepanloo, Kamran

    2016-01-01

    Highlights: • Existing uncertainty ranking methods prove inconsistent for TH applications. • Introduction of a new method for ranking sources of uncertainty in TH codes. • Modified PIRT qualitatively identifies and ranks uncertainty sources more precisely. • The importance of parameters is calculated by a limited number of TH code executions. • Methodology is applied successfully on LOFT-LB1 test facility. - Abstract: In application to thermal–hydraulic calculations by system codes, sensitivity analysis plays an important role for managing the uncertainties of code output and risk analysis. Sensitivity analysis is also used to confirm the results of qualitative Phenomena Identification and Ranking Table (PIRT). Several methodologies have been developed to address uncertainty importance assessment. Generally, uncertainty importance measures, mainly devised for the Probabilistic Risk Assessment (PRA) applications, are not affordable for computationally demanding calculations of the complex thermal–hydraulics (TH) system codes. In other words, for effective quantification of the degree of the contribution of each phenomenon to the total uncertainty of the output, a practical approach is needed by considering high computational burden of TH calculations. This study aims primarily to show the inefficiency of the existing approaches and then introduces a solution to cope with the challenges in this area by modification of variance-based uncertainty importance method. Important parameters are identified by the modified PIRT approach qualitatively then their uncertainty importance is quantified by a local derivative index. The proposed index is attractive from its practicality point of view on TH applications. It is capable of calculating the importance of parameters by a limited number of TH code executions. Application of the proposed methodology is demonstrated on LOFT-LB1 test facility.
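The local derivative index in this record is described only at a high level; a generic derivative-based importance measure in the same spirit (not necessarily the authors' exact formulation) scales each output gradient by the input's standard deviation and needs only 2k code runs around the nominal point:

```python
import numpy as np

def local_importance(model, x0, sigma, h=1e-4):
    """Derivative-based importance: (dY/dx_i * sigma_i)^2, normalized.
    Needs only 2k model runs around the nominal point x0 (central differences),
    matching the 'limited number of code executions' constraint."""
    k = len(x0)
    grad = np.empty(k)
    for i in range(k):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (model(xp) - model(xm)) / (2 * h)
    contrib = (grad * sigma) ** 2        # first-order variance contribution
    return contrib / contrib.sum()

# Hypothetical cheap surrogate standing in for an expensive TH code output
model = lambda x: 2.0 * x[0] + 0.5 * x[1] ** 2 + 0.1 * x[2]
x0 = np.array([1.0, 2.0, 3.0])
sigma = np.array([0.1, 0.1, 1.0])
imp = local_importance(model, x0, sigma)
print(imp)
```

Note how x2, despite its large uncertainty, ranks below x0 and x1 because its gradient is small: the index weighs sensitivity and input uncertainty together.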

  14. Methylated site display (MSD)-AFLP, a sensitive and affordable method for analysis of CpG methylation profiles.

    Science.gov (United States)

    Aiba, Toshiki; Saito, Toshiyuki; Hayashi, Akiko; Sato, Shinji; Yunokawa, Harunobu; Maruyama, Toru; Fujibuchi, Wataru; Kurita, Hisaka; Tohyama, Chiharu; Ohsako, Seiichiroh

    2017-03-09

    It has been pointed out that environmental factors or chemicals can cause diseases that are developmental in origin. To detect abnormal epigenetic alterations in DNA methylation, convenient and cost-effective methods are required for such research, in which multiple samples are processed simultaneously. We here present methylated site display (MSD), a unique technique for the preparation of DNA libraries. By combining it with amplified fragment length polymorphism (AFLP) analysis, we developed a new method, MSD-AFLP. Methylated site display libraries consist of only DNAs derived from DNA fragments that are CpG methylated at the 5' end in the original genomic DNA sample. To test the effectiveness of this method, CpG methylation levels in liver, kidney, and hippocampal tissues of mice were compared to examine if MSD-AFLP can detect subtle differences in the levels of tissue-specific differentially methylated CpGs. As a result, many CpG sites suspected of being tissue-specifically differentially methylated were detected. Nucleotide sequences adjacent to these methyl-CpG sites were identified, and we determined the methylation level by methylation-sensitive restriction endonuclease (MSRE)-PCR analysis to confirm the accuracy of the AFLP analysis. The differences in methylation level among tissues were almost identical between these methods. By MSD-AFLP analysis, we detected many CpGs showing less than 5% statistically significant tissue-specific difference and less than 10% degree of variability. Additionally, MSD-AFLP analysis could be used to identify CpG methylation sites in other organisms including humans. MSD-AFLP analysis can potentially be used to measure slight changes in CpG methylation level. Given the remarkable precision, sensitivity, and throughput of MSD-AFLP analysis, this method will be advantageous in a variety of epigenetics-based research studies.

  15. Sensitivity and Interaction Analysis Based on Sobol’ Method and Its Application in a Distributed Flood Forecasting Model

    Directory of Open Access Journals (Sweden)

    Hui Wan

    2015-06-01

Sensitivity analysis is a fundamental approach to identify the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on the Sobol' method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions: (1) Nash–Sutcliffe efficiency (ENS); (2) water balance coefficient (WB); (3) peak discharge efficiency (EP); and (4) time to peak efficiency (ETP) were applied to the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Contrastive results for the sensitivity and interaction analysis were also illustrated among small, medium, and large flood magnitudes. Results demonstrated that the choice of objective function had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to various flood conditions. The results also indicated that pairwise parameter interactions made a non-negligible contribution to the model output variance. Parameters with high first or total order sensitivity indices presented correspondingly high second order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research lays a foundation for improving the understanding of complex model behavior.

  16. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Science.gov (United States)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
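The variance reduction idea can be sketched with a first-order Taylor control variate: the cheap sensitivity derivative supplies an approximation whose expectation is known exactly, and only the residual is sampled. The function and distribution below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.exp(0.5 * x)          # quantity of interest

def df(x):
    return 0.5 * np.exp(0.5 * x)    # "cheap" sensitivity derivative

mu, sigma, n = 0.0, 1.0, 4000
x = rng.normal(mu, sigma, n)

# Plain Monte Carlo estimate of E[f(X)]
plain = float(f(x).mean())

# Control variate: first-order Taylor expansion about the mean,
# g(x) = f(mu) + f'(mu) * (x - mu), whose expectation is f(mu) exactly.
g = f(mu) + df(mu) * (x - mu)
cv = float((f(x) - g).mean()) + f(mu)

exact = np.exp(sigma**2 / 8)        # E[exp(X/2)] for X ~ N(0, 1)
print(abs(plain - exact), abs(cv - exact))
```

Because the residual f - g has much smaller variance than f itself, the control-variate estimate is typically far closer to the exact value for the same sample size.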

  17. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult ... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring ...

  18. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  19. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  20. Sensitive method for the analysis of carbohydrates by gas chromatography of 3H-labeled alditol acetates

    International Nuclear Information System (INIS)

    Prehm, P.; Scheid, A.

    1978-01-01

A highly sensitive method has been developed for the analysis of carbohydrates from glycoproteins or lipopolysaccharides. The method is based on labeling the carbohydrates with [3H]sodium borohydride, acetylating the resulting alditols and separating them by gas chromatography. The gas effluent is fractionated by trapping on silicone-coated glass beads and the amount of radioactivity is determined. This permits the quantitation of as little as 0.2 nmol of monosaccharide with an accuracy of 10 to 15%. (Auth)

  1. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity-preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offer an inclusive view of ethical sensitivity that addresses some of the limitations of prior conceptualizations.

  2. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, the subjective probability distributions assigned to the input parameters and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends the development now of a method for evaluating the derivative of dose with respect to parameter value, and the extension of the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)

  3. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it to the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel ...

  4. System reliability assessment via sensitivity analysis in the Markov chain scheme

    International Nuclear Information System (INIS)

    Gandini, A.

    1988-01-01

Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they allow the identification of important components, assisting the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given. [fr]
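The Birnbaum importance mentioned here is the partial derivative of system reliability with respect to each component's reliability. A minimal sketch for a hypothetical three-component system (component 1 in series with a parallel pair), with the derivatives taken by central differences:

```python
import numpy as np

def reliability(p):
    # Hypothetical system: component 1 in series with the
    # parallel pair (2, 3).
    p1, p2, p3 = p
    return p1 * (1.0 - (1.0 - p2) * (1.0 - p3))

p = np.array([0.95, 0.80, 0.70])

# Birnbaum importance: dR/dp_i, here estimated by central differences.
h = 1e-6
birnbaum = []
for i in range(3):
    up, dn = p.copy(), p.copy()
    up[i] += h
    dn[i] -= h
    birnbaum.append((reliability(up) - reliability(dn)) / (2 * h))

print([round(b, 3) for b in birnbaum])
```

The series component dominates, which is the kind of ranking such an analysis is meant to surface.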

  5. *Corresponding Author Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  6. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
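A minimal sketch of one standard probabilistic sensitivity analysis output, the cost-effectiveness acceptability curve, under purely illustrative distributions for incremental cost and effect (none of these numbers come from the article):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

# Hypothetical two-strategy decision model: per simulation draw, sample
# the uncertain parameters and record incremental effect and cost.
d_effect = rng.normal(0.5, 0.2, n)      # incremental QALYs (illustrative)
d_cost = rng.normal(10000, 3000, n)     # incremental cost (illustrative)

# Summarize the PSA as a cost-effectiveness acceptability curve:
# P(net monetary benefit > 0) as a function of willingness to pay.
ceac = {}
for wtp in (10000, 20000, 50000):
    nmb = wtp * d_effect - d_cost
    ceac[wtp] = float((nmb > 0).mean())
    print(wtp, round(ceac[wtp], 2))
```

The probability that the new strategy is cost-effective rises with the willingness-to-pay threshold, which is exactly what the acceptability curve is designed to display.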

  7. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  8. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment.
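The DUA idea of propagating parameter distributions through derivative information can be sketched with first-order (linear) variance propagation. The model, nominal values, and standard deviations below are hypothetical, and finite differences stand in for the automatically generated derivatives a system like GRESS or ADGEN would supply:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(p):
    # Hypothetical response of a large code, stood in by a cheap function.
    return p[0] ** 2 + np.sin(p[1]) + 3.0 * p[2]

p0 = np.array([1.0, 0.5, 2.0])        # nominal parameter values
sigma = np.array([0.1, 0.2, 0.05])    # parameter standard deviations

# Derivatives at the nominal point, here by central differences.
h = 1e-6
grad = np.array([(model(p0 + h * e) - model(p0 - h * e)) / (2 * h)
                 for e in np.eye(3)])

# First-order propagation of the (independent) parameter variances.
var_lin = float(np.sum((grad * sigma) ** 2))

# Cross-check against brute-force Monte Carlo sampling.
samples = p0 + sigma * rng.normal(size=(50000, 3))
var_mc = float(np.var([model(s) for s in samples], ddof=1))
print(round(var_lin, 4), round(var_mc, 4))
```

For a nearly linear response, the derivative-based variance agrees closely with the sampled one at a tiny fraction of the cost, which is the trade-off the abstract describes.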

  9. Lake and Reservoir Evaporation Estimation: Sensitivity Analysis and Ranking Existing Methods

    Directory of Open Access Journals (Sweden)

    maysam majidi

    2016-02-01

were acquired from the Doosti Dam weather station. Relative humidity, wind speed, atmospheric pressure and precipitation were acquired from the Pol-Khatoon weather station. Dew point temperature and sunshine data were collected from the Sarakhs weather station. Lake area was estimated from the hypsometric curve in relation to lake level data. Temperature measurements were often performed in 16-day periods or biweekly from September 2011 to September 2012. The temperature profile of the lake (required for lake evaporation estimation) was measured at different points of the reservoir using a portable multi-meter. The eighteen existing methods were compared and ranked against the Bowen ratio energy balance method (BREB). Results and Discussion: The annual evaporation values estimated by all of the applied methods in this study ranged from 21 to 113 mcm (million cubic meters). The BREB annual evaporation value was 69.86 mcm and the evaporation rate averaged 5.47 mm d-1 during the study period. According to the results, there is a relatively large difference between the evaporation values obtained from the adopted methods. The sensitivity analysis of the evaporation methods for some input parameters indicated that the Hamon method (Eq. 16) was the most sensitive to the input parameters, followed by the Brutsaert-Stricker and BREB methods, while the radiation-temperature methods (Makkink, Jensen-Haise and Stephen-Stewart) had the least sensitivity to input data. Besides, the air temperature, solar radiation (sunshine data), water surface temperature and wind speed data had the most effect on lake evaporation estimates, respectively. Finally, all evaporation estimation methods in this study have been ranked based on RMSD values. On a daily basis, the Jensen-Haise and Makkink (solar radiation, temperature group), Penman (combination group) and Hamon (temperature, day length group) methods had a relatively reasonable performance. As the results on a monthly scale, the Jensen-Haise and

  10. WHAT IF (Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

Sensitivity analysis represents such a well known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are so many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of the possibility of generating the future and analyzing it before it unfolds so that, when it happens, it brings less uncertainty.

  11. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

Highlights: • Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. • The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. • Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters.

  12. Sensitivity analysis of the use of Life Cycle Impact Assessment methods: a case study on building materials

    DEFF Research Database (Denmark)

    Bueno, Cristiane; Hauschild, Michael Zwicky; Rossignolo, Joao Adriano

    2016-01-01

    The main aim of this research is to perform a sensitivity analysis of a Life Cycle Assessment (LCA) case study to understand if the use of different Life Cycle Impact Assessment (LCIA) methods may lead to different conclusions by decision makers and stakeholders. A complete LCA was applied to non...

  13. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    Science.gov (United States)

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different methods, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and the standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various methods with the LLOQ shows a considerable difference. This significant difference between the LOD and LOQ calculated by various methods and the LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
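For reference, a common way to compute LOD and LOQ from a calibration slope and the standard deviation of blank replicates; the 3.3 and 10 multipliers follow the usual ICH-style convention, and the calibration data below are made up:

```python
import numpy as np

# Calibration data (hypothetical): concentration vs. instrument response
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
resp = np.array([0.02, 1.05, 1.98, 4.10, 7.95])

slope, intercept = np.polyfit(conc, resp, 1)

# Standard deviation of the blank from replicate blank measurements
blanks = np.array([0.018, 0.022, 0.015, 0.025, 0.020])
sd_blank = blanks.std(ddof=1)

# Common blank-SD / slope definitions of the detection limits
lod = 3.3 * sd_blank / slope
loq = 10.0 * sd_blank / slope
print(round(lod, 4), round(loq, 4))
```

The abstract's point is precisely that such formula-based values can differ markedly from an empirically validated LLOQ, so they should not be reported uncritically.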

  14. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
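One classical way to estimate a grid-converged ("strict") solution and the numerical error from results at two resolutions is Richardson extrapolation; the sketch below, using a second-order trapezoidal quadrature as the "numerical solution", illustrates that general idea rather than the authors' specific sensitivity-based scheme:

```python
import math

def trap(n):
    # Composite trapezoidal rule for the integral of sin(x) on [0, pi]
    h = math.pi / n
    return h * (0.5 * math.sin(0.0) + 0.5 * math.sin(math.pi)
                + sum(math.sin(i * h) for i in range(1, n)))

def richardson(f_h, f_h2, p=2, r=2.0):
    # Estimate the grid-converged solution from results at spacings
    # h and h/r, for a discretization of known order p.
    f_exact = f_h2 + (f_h2 - f_h) / (r ** p - 1.0)
    return f_exact, f_exact - f_h2   # (estimate, error of the fine grid)

f_exact, err = richardson(trap(16), trap(32))
print(round(f_exact, 6), round(abs(err), 6))
```

The extrapolated value is far closer to the true integral (2) than either grid solution, and the returned error term quantifies the reliability of the fine-grid result.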

  15. Sensitivity analysis of dynamic characteristic of the fixture based on design variables

    International Nuclear Information System (INIS)

    Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa

    2002-01-01

This research deals with the sensitivity analysis of structural natural frequencies with respect to structural design parameters. A typical fixture for vibration testing is designed. Using the I-DEAS Finite Element programs, the sensitivity of its natural frequencies to design parameters is analyzed by the matrix perturbation method. The results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures.
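Matrix-perturbation eigenvalue sensitivity can be sketched as follows: for mass-normalized mode shapes, the first-order sensitivity of an eigenvalue to a design parameter is the quadratic form of the mode with the derivative of the stiffness matrix. The 2-DOF spring-mass system below is hypothetical:

```python
import numpy as np

# 2-DOF system whose stiffness matrix depends on a design parameter k2.
def K(k2):
    return np.array([[2.0 + k2, -k2],
                     [-k2,       k2]])

M = np.eye(2)   # unit masses, so modes from eigh are mass-normalized

k2 = 1.0
w2, V = np.linalg.eigh(K(k2))     # eigenvalues = squared natural freqs

# First-order (matrix perturbation) sensitivity of eigenvalue i to k2:
# d(lambda_i)/dk2 = v_i^T (dK/dk2) v_i for mass-normalized modes.
dK = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
sens = [float(V[:, i] @ dK @ V[:, i]) for i in range(2)]

# Finite-difference check of the perturbation formula
h = 1e-6
w2p, _ = np.linalg.eigh(K(k2 + h))
fd = (w2p - w2) / h
print(np.allclose(sens, fd, atol=1e-4))
```

The closed-form sensitivities match the brute-force finite differences, which is why the perturbation approach serves as a cheap re-analysis tool when design parameters are modified.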

  16. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
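The bootstrap criterion for convergence of a ranking can be sketched as below; the cheap correlation-based index stands in for the paper's sensitivity measures (EET, RSA, variance-based), and the three-input model is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(X):
    return 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]

def indices(X, y):
    # Cheap sensitivity proxy: squared correlation of each input with y
    return np.array([np.corrcoef(X[:, i], y)[0, 1] ** 2
                     for i in range(X.shape[1])])

n = 500
X = rng.uniform(0, 1, (n, 3))
y = model(X)

# Bootstrap the sample to assess convergence of the *ranking*:
# how often does a resampled data set reproduce the base ordering?
base_rank = np.argsort(-indices(X, y))
B, agree = 200, 0
for _ in range(B):
    idx = rng.integers(0, n, n)
    if np.array_equal(np.argsort(-indices(X[idx], y[idx])), base_rank):
        agree += 1
print(agree / B)   # fraction of resamples with an identical ranking
```

A high agreement fraction indicates the ranking has converged at this sample size even if the index values themselves still fluctuate, which is the paper's central observation.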

  17. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  18. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost.
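The 'total effect' index can be estimated with the now-standard two-matrix sampling scheme; the sketch below uses Jansen's estimator (a later refinement of the same idea) on a hypothetical model whose interaction term makes the total effects exceed the first-order effects:

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # Interaction between x1 and x2, so total effects exceed
    # first-order effects for both of them.
    return x[:, 0] + x[:, 0] * x[:, 1] + 0.05 * x[:, 2]

n, d = 20000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA = model(A)
var = np.var(fA, ddof=1)

ST = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    fABi = model(ABi)
    # Jansen's estimator of the total effect index
    ST.append(float(np.mean((fA - fABi) ** 2) / (2.0 * var)))

print([round(s, 2) for s in ST])
```

Here ST for the first two inputs includes their shared interaction term, while the third input's total effect is nearly zero, exactly the kind of information first-order indices alone cannot provide.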

  19. Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter

    2014-01-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...

  20. First order sensitivity analysis of flexible multibody systems using absolute nodal coordinate formulation

    International Nuclear Information System (INIS)

    Pi Ting; Zhang Yunqing; Chen Liping

    2012-01-01

Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates to describe the motion of multibody systems has a great influence on the efficiency and accuracy of both the dynamic and sensitivity analysis. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently utilized to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been recently developed and focuses on the modeling of beams and plates in large deformation problems, is extended to the design sensitivity analysis of flexible multibody systems. The motion equations of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are applied to perform the sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to deduce the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, which has the advantage that the complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. Besides that, the dynamic and sensitivity equations formed with the proposed method can be efficiently integrated using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation method.
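The direct differentiation method can be sketched on a scalar ODE: the sensitivity s = dx/dp obeys a linear ODE driven by the partial derivatives of the right-hand side and is integrated alongside the state. This toy example is far simpler than the DAE/ANCF setting of the paper, but the structure is the same:

```python
import math

# Direct differentiation for dx/dt = -p * x, x(0) = 1: integrate the
# state together with its sensitivity s = dx/dp, which obeys
#   ds/dt = (df/dx) * s + df/dp = -p * s - x,   s(0) = 0.
def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

p, h, T = 0.8, 0.01, 2.0

def rhs(t, y):
    x, s = y
    return [-p * x, -p * s - x]

y, t = [1.0, 0.0], 0.0
while t < T - 1e-12:
    y = rk4_step(rhs, y, t, h)
    t += h

exact_s = -T * math.exp(-p * T)   # analytic dx/dp at t = T
print(round(y[1], 6), round(exact_s, 6))
```

The integrated sensitivity matches the analytic derivative of the solution, confirming that augmenting the state with its sensitivities recovers exact design gradients.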

  1. A New Approach for the Analysis of Hyperspectral Data: Theory and Sensitivity Analysis of the Moment Distance Method

    Directory of Open Access Journals (Sweden)

    Eric Ariel L. Salas

    2013-12-01

    Full Text Available We present the Moment Distance (MD) method to advance spectral analysis in vegetation studies. It was developed to take advantage of the information latent in the shape of the reflectance curve that is not available from other spectral indices. Being mathematically simple but powerful, the approach does not require any curve transformation, such as smoothing or derivatives. Here, we show the formulation of the MD index (MDI) and demonstrate its potential for vegetation studies. We simulated leaf and canopy reflectance samples derived from the combination of the PROSPECT and SAIL models to understand the sensitivity of the new method to leaf and canopy parameters. We observed reasonable agreements between vegetation parameters and the MDI when using the 600 to 750 nm wavelength range, and we saw stronger agreements in the narrow red-edge region (720 to 730 nm). Results suggest that the MDI is more sensitive to the Chl content, especially at higher amounts (Chl > 40 mg/cm2), compared to other indices such as NDVI, EVI, and WDRVI. Finally, we found an indirect relationship of MDI against the changes of the magnitude of the reflectance around the red trough with differing values of LAI.
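
The moment-distance construction described above can be sketched in a few lines. This is a minimal reading of the index, assuming the common formulation in which each pivot's moment distance is the sum of Euclidean distances from the pivot wavelength on the baseline to every point of the curve between the pivots, with MDI = MD_RP − MD_LP; the reflectance curve below is synthetic, not from PROSPECT/SAIL:

```python
import numpy as np

def moment_distance_index(wavelengths, reflectance, lp, rp):
    """Moment Distance Index (MDI): difference between the summed
    Euclidean distances from the right pivot (rp) and left pivot (lp)
    to every point of the reflectance curve between the pivots."""
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    mask = (w >= lp) & (w <= rp)
    w, r = w[mask], r[mask]
    md_lp = np.sum(np.sqrt(r**2 + (w - lp) ** 2))  # distances from left pivot
    md_rp = np.sum(np.sqrt(r**2 + (rp - w) ** 2))  # distances from right pivot
    return md_rp - md_lp

# toy red-edge-like curve: reflectance rising across 600-750 nm
wl = np.arange(600, 751, 10)
refl = np.linspace(0.05, 0.50, wl.size)
mdi = moment_distance_index(wl, refl, 600, 750)
```

For a flat spectrum the two pivot sums coincide and the index vanishes, so a nonzero MDI isolates curve-shape information that value-based indices miss.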

  2. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy, considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
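
The multivariate confidence idea can be illustrated on a deliberately tiny, hypothetical MDP (the rewards, transitions, and Beta prior below are invented for illustration, not taken from the paper's case study): sample the uncertain parameter, re-solve the MDP each time, and report the fraction of samples in which the base-case optimal policy remains optimal.

```python
import numpy as np

rng = np.random.default_rng(0)

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a,s,s'] transition probabilities, R[a,s] rewards;
    returns the optimal (greedy) policy, one action per state."""
    n = R.shape[1]
    V = np.zeros(n)
    while True:
        Q = R + gamma * P @ V          # Q[a,s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=0)
        V = V_new

R = np.array([[1.0, 0.0],   # reward of action 0 in states 0, 1
              [0.8, 0.9]])  # reward of action 1

def build_P(p):
    """Transition tensor; p is the uncertain parameter."""
    return np.array([[[p, 1 - p], [0.5, 0.5]],
                     [[0.2, 0.8], [0.1, 0.9]]])

base_p = 0.7
base_policy = value_iteration(build_P(base_p), R)

# Probabilistic multivariate SA: sample p from its prior and count how
# often the base-case optimal policy stays optimal.
samples = rng.beta(7, 3, size=1000)           # prior centred near 0.7
agree = sum(np.array_equal(value_iteration(build_P(p), R), base_policy)
            for p in samples)
confidence = agree / len(samples)
```

Plotting this confidence against a willingness-to-accept threshold is, in essence, one point on the policy acceptability curve the authors describe.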

  3. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    Science.gov (United States)

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  4. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
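
As a rough illustration of the single-stage building block behind such models, an input-oriented CCR efficiency score can be computed with a small linear program. This is a sketch using scipy.optimize.linprog with made-up data; the paper's combined two-stage model and its sensitivity conditions are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmu, n_in) inputs, Y: (n_dmu, n_out) outputs.
    Decision vector: [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    # inputs:  sum_j lam_j X[j,i] - theta * X[o,i] <= 0
    A_in = np.c_[-X[o], X.T]
    # outputs: -sum_j lam_j Y[j,r] <= -Y[o,r]
    A_out = np.c_[np.zeros(s), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0, 3.0], [4.0, 2.0], [6.0, 6.0]])  # two inputs per DMU
Y = np.array([[1.0], [1.0], [1.0]])                  # single unit output
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

In this toy data set the first two DMUs lie on the frontier (score 1), while the third is dominated by a convex combination of them; perturbing its data and re-solving is the brute-force counterpart of the sensitivity conditions the paper derives analytically.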

  5. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  6. Justification of investment projects of biogas systems by the sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Perebijnos Vasilij Ivanovich

    2015-06-01

    Full Text Available The article shows the methodical features of applying sensitivity analysis to the evaluation of biogas plant investment projects. Risk factors of these investment projects are studied, and a methodical basis for the use of sensitivity analysis and the calculation of elasticity coefficients is worked out. Sensitivity analysis and elasticity coefficients are calculated for three biogas plant projects, which differ in the direction of biogas transformation: use in a co-generation plant, use of biomethane as motor fuel, and sale of the resulting carbon dioxide as a marketable product. Factors strongly affecting project efficiency are identified.
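
The elasticity coefficient used in such appraisals is typically the ratio of the percentage change in a project criterion (here NPV) to the percentage change in a risk factor. A sketch with a made-up cash-flow model, not the article's plant data:

```python
def npv(cash_flows, rate=0.1):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def project_npv(energy_price, capex=100.0, output=50.0, opex=10.0,
                years=10, rate=0.1):
    """Toy biogas project: initial investment, then constant net revenue."""
    annual = energy_price * output - opex
    return npv([-capex] + [annual] * years, rate)

def elasticity(price, delta=0.01):
    """Elasticity of NPV w.r.t. the energy price: %dNPV / %dprice."""
    base = project_npv(price)
    bumped = project_npv(price * (1 + delta))
    return ((bumped - base) / base) / delta

e = elasticity(1.0)
```

An elasticity coefficient above 1 flags the factor as one to which project efficiency is strongly sensitive, which is exactly how such factors are ranked in the appraisal.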

  7. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational
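
Method (2), direct perturbation, amounts to bumping one input at a time and forming a normalised sensitivity ratio. A sketch on a stand-in dose function (RESRAD itself is not reproduced; the model form and parameter names below are hypothetical):

```python
def dose(params):
    """Stand-in dose model (NOT RESRAD): dose proportional to soil
    concentration times ingestion rate, attenuated by soil depth."""
    return (params["conc"] * params["ingestion"]) / (1.0 + params["depth"])

base = {"conc": 2.0, "ingestion": 0.5, "depth": 1.0}

def perturbation_sensitivity(model, params, name, delta=0.05):
    """Normalised sensitivity: (% change in output) / (% change in input)."""
    d0 = model(params)
    bumped = dict(params, **{name: params[name] * (1 + delta)})
    return ((model(bumped) - d0) / d0) / delta

sens = {k: perturbation_sensitivity(dose, base, k) for k in base}
```

Parameters entering the model linearly come out with sensitivity exactly 1, while the depth parameter here shows a damped negative sensitivity; ranking inputs by |sensitivity| is the essence of the perturbation approach.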

  8. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory

  9. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first order approximations can be used, or numerically intensive methods must be used
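
The sequential strategy can be mimicked with plain Monte Carlo on a toy model that is linear in its parameter: first compute the output variance with the parameter held at its nominal value (regressive variable only), then with the parameter sampled as well; the increase is the contribution arising from parameter uncertainty. The model and distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, theta):
    """Toy model, linear in the parameter theta (the analytical case)."""
    return theta * x + 0.5 * x**2

N = 20000
x = rng.normal(1.0, 0.3, N)        # regressive variable (controllable)
theta = rng.normal(2.0, 0.2, N)    # model parameter (not controllable)

# Step 1: variance driven by the regressive variable alone,
# with the parameter fixed at its nominal value.
var_x_only = model(x, 2.0).var()

# Step 2: include parameter uncertainty; the variance increase is the
# contribution of the model parameters.
total_var = model(x, theta).var()
param_contrib = total_var - var_x_only
```

By the law of total variance, for a model linear in theta with independent inputs the expected increase is Var(theta)·E[x²], which is the kind of closed-form result the paper exploits.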

  10. Semianalytic Design Sensitivity Analysis of Nonlinear Structures With a Commercial Finite Element Package

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Yoo, Jung Hun; Choi, Hyeong Cheol

    2002-01-01

    A finite element package is often used as a daily design tool by engineering designers to analyze and improve designs. Finite element analysis can provide the responses of a system for given design variables. Although finite element analysis can provide the structural behaviors for given design variables quite well, it cannot provide enough information to improve the design, such as design sensitivity coefficients. Design sensitivity analysis is an essential step in predicting the change in responses due to a change in design variables and in optimizing a system with the aid of gradient-based optimization techniques. To develop a numerical method for design sensitivity analysis, analytical derivatives based on analytical differentiation of the continuous or discrete finite element equations are effective, but they are difficult to obtain because of the lack of internal information in commercial finite element packages, such as shape functions. Therefore, design sensitivity analysis outside of the finite element package is necessary for practical application in an industrial setting. In this paper, the semi-analytic method for design sensitivity analysis is used to develop a design sensitivity module outside of the commercial finite element package ANSYS. The direct differentiation method is employed to compute the design derivatives of the response, and the pseudo-load for design sensitivity analysis is effectively evaluated by using the design variation of the related internal nodal forces. In particular, an effective method for stress and nonlinear design sensitivity analyses that is independent of the commercial finite element package is also suggested. Numerical examples are illustrated to show the accuracy and efficiency of the developed method and to provide insights for implementation of the suggested method into other commercial finite element packages
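
For a linear system K(b) u = F(b), the direct differentiation step solves K (du/db) = dF/db − (dK/db) u, where the semianalytic ingredient is a finite-difference estimate of dK/db feeding the pseudo-load. A sketch on a two-spring toy model (not the paper's ANSYS module):

```python
import numpy as np

def stiffness(k1, k2):
    """Two springs in series, fixed at one end; DOFs at the two nodes."""
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

P = 10.0                      # tip load
k1, k2 = 100.0, 50.0          # spring stiffnesses; design variable is k2
F = np.array([0.0, P])

K = stiffness(k1, k2)
u = np.linalg.solve(K, F)

# Semianalytic step: dK/db by finite difference of the element matrices,
# then the pseudo-load  dF/db - (dK/db) u  drives the sensitivity solve.
db = 1e-6 * k2
dK_db = (stiffness(k1, k2 + db) - K) / db
pseudo_load = -dK_db @ u      # dF/db = 0 for this load case
du_db = np.linalg.solve(K, pseudo_load)
```

For this model the analytic answer is du2/dk2 = −P/k2² with du1/dk2 = 0, which the sketch reproduces; in a real package the same pseudo-load is assembled from the design variation of the internal nodal forces.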

  11. Gravimetric and titrimetric methods of analysis

    International Nuclear Information System (INIS)

    Rives, R.D.; Bruks, R.R.

    1983-01-01

    Gravimetric and titrimetric methods of analysis are considered. Methods of complexometric titration are mentioned, as well as methods of increasing sensitivity in titrimetry. Gravimetry and titrimetry are applied during analysis for traces of geological materials

  12. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received the greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost one to two orders of magnitude lower than the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.

  13. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
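
The first (screening) step can be illustrated with a bare-bones one-at-a-time estimate of Morris-style elementary effects. This is a simplified radial variant without the original trajectory design, and the test model is made up:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy model: x0 strong, x1 weak, x2 inert."""
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

def morris_elementary_effects(model, k=3, r=20, delta=0.2):
    """One-at-a-time elementary effects; returns mu* = mean |EE|
    per input, the usual Morris screening measure."""
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # base point in [0,1)^k
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (model(xp) - y0) / delta
    return np.abs(ee).mean(axis=0)

mu_star = morris_elementary_effects(model, k=3)
```

Inputs with small mu* (like x2 here) are dropped before the expensive gPCE step, which is exactly how the two-step approach saves model runs.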

  14. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  15. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  16. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  17. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
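
The Schlögl model mentioned above is a standard single-species bistable network. A plain Gillespie direct-method simulation of it (without the parallel replicas, which are the paper's actual contribution) looks like this, using Gillespie's classic rate constants as an assumed parameterisation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Schlögl model rate constants (Gillespie's classic parameterisation,
# assumed here as a stand-in for the paper's benchmark values).
c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
N1, N2 = 1e5, 2e5  # buffered species B1, B2

def propensities(x):
    return np.array([
        c1 / 2 * N1 * x * (x - 1),          # B1 + 2X -> 3X
        c2 / 6 * x * (x - 1) * (x - 2),     # 3X -> B1 + 2X
        c3 * N2,                            # B2 -> X
        c4 * x,                             # X -> B2
    ])

stoich = np.array([+1, -1, +1, -1])         # net change in X per reaction

def ssa(x0=250, t_end=5.0):
    """Stochastic simulation algorithm (direct method), one trajectory."""
    x, t, traj = x0, 0.0, [x0]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1 / a0)                 # time to next event
        x += stoich[rng.choice(4, p=a / a0)]         # which reaction fires
        traj.append(x)
    return np.array(traj)

traj = ssa()
```

Long runs of this kind rarely cross between the two metastable states, which is precisely the rare-transition sampling problem the parallel replica method is designed to accelerate.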

  18. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis.

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-21

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  19. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of improving plant efficiency with rationally high-performance cores and of increasing reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may be changed for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized with functional component blocks that can be divided or combined as the occasion demands.

  20. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
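
The complex-variable approach referred to here is the complex-step derivative: evaluating f(x + ih) and taking the imaginary part involves no subtractive cancellation, so the step size can be made tiny and the derivative is exact to machine precision. A generic sketch (not DYMORE's implementation):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: Im(f(x + i*h)) / h.
    No subtraction of nearby values, hence no cancellation error."""
    return f(x + 1j * h).imag / h

f = lambda x: np.exp(x) * np.sin(x)          # analytic test function
dfdx_exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
dfdx_cs = complex_step(f, 1.0)
```

This is why complex-variable sensitivities make a trustworthy reference for verifying adjoint formulations, as done for the FUN3D/DYMORE interfaces above.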

  1. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    Full Text Available This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified in two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.

  2. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)

  3. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis

  4. Mehar Methods for Fuzzy Optimal Solution and Sensitivity Analysis of Fuzzy Linear Programming with Symmetric Trapezoidal Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Sukhpreet Kaur Sidhu

    2014-01-01

    Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems, in which the coefficients of the constraints are represented by real numbers and all other parameters as well as variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out, and to resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing ones, is proposed for the sensitivity analysis of the same type of linear programming problems.

  5. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening technique with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structures and results of the sensitivity analyses of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and of the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model, as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
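The Morris screening used in this study can be sketched generically in Python. This is a simplified, continuous one-at-a-time elementary-effects variant on the unit hypercube; the three-parameter `toy_model` is purely illustrative and unrelated to the PPP or NFM models:

```python
import random

def morris_screening(model, num_params, num_trajectories=20, delta=0.25, seed=1):
    """Estimate Morris mu* (mean absolute elementary effect) per input.

    Inputs are scaled to [0, 1]. A larger mu* marks a more influential
    parameter; mu* near zero flags a candidate for fixing (screening out).
    """
    rng = random.Random(seed)
    effects = [[] for _ in range(num_params)]
    for _ in range(num_trajectories):
        # random base point, kept away from the upper edge so x + delta stays in [0, 1]
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(num_params)]
        y = model(x)
        # perturb one factor at a time (OAT) along a random order
        for i in rng.sample(range(num_params), num_params):
            x_new = list(x)
            x_new[i] += delta
            y_new = model(x_new)
            effects[i].append(abs(y_new - y) / delta)
            x, y = x_new, y_new
    return [sum(e) / len(e) for e in effects]

# toy model: output strongly driven by x0, weakly by x1, not at all by x2
def toy_model(x):
    return 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

mu_star = morris_screening(toy_model, num_params=3)
# for this linear model the elementary effects are exact: [10.0, 1.0, 0.0]
```

Full Morris designs use a discrete p-level grid; the continuous version above keeps the sketch short while preserving the OAT trajectory structure.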

  6. An adjoint sensitivity-based data assimilation method and its comparison with existing variational methods

    Directory of Open Access Journals (Sweden)

    Yonghan Choi

    2014-01-01

    Full Text Available An adjoint sensitivity-based data assimilation (ASDA) method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS) related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS)-type for the earlier period, and back-building (BB)-type for the later period. In the ASDA method, an adjoint model is run backwards with the forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var) method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.

  7. Probabilistic and sensitivity analysis of Botlek Bridge structures

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2017-01-01

    Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account using the LHS simulation method.
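The LHS simulation mentioned above stratifies each input's range so that n samples cover all n strata of every variable exactly once. A minimal pure-Python sketch of generic unit-hypercube Latin Hypercube Sampling (not the bridge model itself; the sample counts are illustrative):

```python
import random

def latin_hypercube(num_samples, num_vars, seed=42):
    """Latin Hypercube Sampling on the unit hypercube.

    Each variable's range is split into num_samples equal strata; each
    stratum is sampled exactly once, in an independent random order per
    variable, giving better marginal coverage than plain Monte Carlo.
    """
    rng = random.Random(seed)
    samples = [[0.0] * num_vars for _ in range(num_samples)]
    for j in range(num_vars):
        # one random point inside each stratum, then shuffle strata across samples
        points = [(k + rng.random()) / num_samples for k in range(num_samples)]
        rng.shuffle(points)
        for i in range(num_samples):
            samples[i][j] = points[i]
    return samples

lhs = latin_hypercube(num_samples=10, num_vars=2)
# every stratum [k/10, (k+1)/10) contains exactly one sample per variable
```

In practice the unit-cube samples would be mapped through each input's inverse CDF to impose the desired marginal distributions.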

  8. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors

  9. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    Science.gov (United States)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.

  10. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows one to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows one to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  11. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
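First- and total-order Sobol' indices of the kind computed for VarroaPop can be estimated with pick-freeze Monte Carlo sampling. A generic sketch using the Saltelli/Jansen estimators (plain pseudo-random sampling in place of quasi-random sequences, and a toy additive model in place of VarroaPop):

```python
import random

def sobol_indices(model, num_vars, num_samples=8192, seed=7):
    """Monte Carlo estimates of first-order (S_i) and total-order (ST_i)
    Sobol' indices via pick-freeze sampling (Saltelli/Jansen estimators).
    Inputs are assumed independent and uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(num_vars)] for _ in range(num_samples)]
    B = [[rng.random() for _ in range(num_vars)] for _ in range(num_samples)]
    fA = [model(a) for a in A]
    fB = [model(b) for b in B]
    mean = sum(fA + fB) / (2 * num_samples)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * num_samples)
    S, ST = [], []
    for i in range(num_vars):
        # A_B^i: matrix A with column i replaced by column i of B
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA))
                 / num_samples / var)
        ST.append(sum((fa - fab) ** 2 for fa, fab in zip(fA, fABi))
                  / (2 * num_samples) / var)
    return S, ST

# additive toy model: analytic first-order indices are [0.2, 0.8];
# with no interactions, the total-order indices coincide with them
S, ST = sobol_indices(lambda x: x[0] + 2.0 * x[1], num_vars=2)
```

Interaction effects appear as a gap between ST_i and S_i; for the additive toy model that gap is (up to Monte Carlo noise) zero.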

  12. Adjoint sensitivity analysis of high frequency structures with Matlab

    CERN Document Server

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  13. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    For the practical use of fast reactors, it is a very important subject to improve the prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improving plant efficiency with rationally high-performance cores and that of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example, reaction rate distribution and control rod worth, but also burnup characteristics, for example, burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. An analysis of burnup sensitivity is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of a large burden on the user, due to the complexity of the theory of burnup sensitivity and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, for several reasons: the computational sequence may be changed for each item being analyzed, or for purposes such as interpretation of physical meaning. Therefore it is necessary to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or reconstructed as needed. 
For this

  14. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
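The role of input correlations in first-order uncertainty propagation can be seen from the classical delta-method approximation Var(Y) ≈ gᵀCg, where g is the model gradient at the input mean and C the input covariance matrix. A sketch with illustrative numbers (this is the textbook first-order formula, not the paper's own analytic method, which the abstract does not reproduce):

```python
def taylor_variance(grad, cov):
    """First-order Taylor (delta-method) approximation of output variance
    for possibly correlated inputs: Var(Y) ~ g^T C g."""
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j] for i in range(n) for j in range(n))

# linear model Y = 2*x1 + 3*x2, so the gradient is (2, 3) everywhere
g = [2.0, 3.0]

# unit input variances with correlation rho = 0.5 between x1 and x2
C = [[1.0, 0.5],
     [0.5, 1.0]]

var_corr = taylor_variance(g, C)                          # 4 + 9 + 2*2*3*0.5 = 19
var_indep = taylor_variance(g, [[1.0, 0.0], [0.0, 1.0]])  # 4 + 9 = 13
```

The difference (19 vs. 13) is exactly the cross-covariance contribution; ignoring a positive correlation here would understate the output uncertainty.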

  15. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  16. An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research.

    Science.gov (United States)

    Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A

    2013-12-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.
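The abstract does not name the seven techniques it reviews. As a concrete, later-published example of the genre, the E-value of VanderWeele and Ding (2017) reports the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed association:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio rr (VanderWeele & Ding, 2017):
    E = RR + sqrt(RR * (RR - 1)) for RR > 1; protective associations
    (RR < 1) are handled by taking the reciprocal first."""
    if rr < 1.0:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# an observed risk ratio of 2 requires a confounder associated with both
# exposure and outcome at RR >= 2 + sqrt(2) ~ 3.41 to explain it away
ev = e_value(2.0)
```

Larger E-values indicate results more robust to unobserved confounding, which matches the qualitative conclusion drawn for the maternal-suicide example above.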

  17. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  18. Development of the high-order decoupled direct method in three dimensions for particulate matter: enabling advanced sensitivity analysis in air quality models

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2012-03-01

    Full Text Available The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity analysis of ISORROPIA, the inorganic aerosol module of CMAQ. A case-specific approach has been applied, and the sensitivities of activity coefficients and water content are explicitly computed. Stand-alone tests are performed for ISORROPIA by comparing the sensitivities (first- and second-order) computed by HDDM and the brute force (BF) approximations. Similar comparison has also been carried out for CMAQ sensitivities simulated using a week-long winter episode for a continental US domain. Second-order sensitivities of aerosol species (e.g., sulfate, nitrate, and ammonium) with respect to domain-wide SO2, NOx, and NH3 emissions show agreement with BF results, yet exhibit less noise in locations where BF results are demonstrably inaccurate. Second-order sensitivity analysis elucidates poorly understood nonlinear responses of secondary inorganic aerosols to their precursors and competing species. Adding second-order sensitivity terms to the Taylor series projection of the nitrate concentrations with a 50% reduction in domain-wide NOx or SO2 emissions rates improves the prediction with statistical significance.
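The Taylor-series projection described above, applied to a single concentration/emission pair, is simply C(E+ΔE) ≈ C(E) + S₁ΔE + ½S₂ΔE². A sketch with purely illustrative coefficients (hypothetical values, not CMAQ output), showing how the second-order term corrects the linear projection for a 50% emission cut:

```python
def taylor_projection(c0, s1, s2, delta_e):
    """Project a concentration response to a fractional emission change
    using first- and second-order sensitivity coefficients:
        C(E + dE) ~ C(E) + S1*dE + 0.5*S2*dE**2
    where delta_e is the fractional change (e.g., -0.5 for a 50% cut)."""
    return c0 + s1 * delta_e + 0.5 * s2 * delta_e ** 2

# hypothetical nitrate response to a 50% NOx emission reduction (ug/m3)
c0, s1, s2 = 4.0, 3.0, -2.0                      # illustrative values only

linear = c0 + s1 * (-0.5)                        # 4.0 - 1.5 = 2.5
quadratic = taylor_projection(c0, s1, s2, -0.5)  # 2.5 - 0.25 = 2.25
```

With a nonzero S₂ the linear and quadratic projections diverge for large ΔE, which is why second-order terms matter for 50% emission-reduction scenarios.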

  19. Sensitivity Analysis of features in tolerancing based on constraint function level sets

    International Nuclear Information System (INIS)

    Ziegler, Philipp; Wartzack, Sandro

    2015-01-01

    Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on the functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis tools are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In adopting Sensitivity Analysis methods, several challenges arise from the specific framework in tolerancing. This paper presents an approach to adapt Sensitivity Analysis methods to current tolerance simulations with an interface module, which is based on level sets of constraint functions for parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use

  20. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and for execution of the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.

  1. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    Science.gov (United States)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The correlation is termed partial because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
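For a two-input model, PRCC reduces to the partial correlation of rank-transformed data controlling for the single remaining input, which has a closed form in the three pairwise rank correlations. A pure-Python sketch on a toy monotone model (IMM has far more inputs, for which a regression-residual formulation of partial correlation would be used instead):

```python
import math
import random

def ranks(values):
    """Rank transform (1 = smallest); ties are not handled, for brevity."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def prcc(x, y, z):
    """Partial rank correlation of x and y, controlling for one covariate z,
    via the standard partial-correlation identity on rank correlations."""
    rx, ry, rz = ranks(x), ranks(y), ranks(z)
    rxy, rxz, ryz = pearson(rx, ry), pearson(rx, rz), pearson(ry, rz)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = random.Random(0)
x1 = [rng.random() for _ in range(500)]
x2 = [rng.random() for _ in range(500)]
# monotone nonlinear toy model: output dominated by x1
y = [math.exp(3 * a) + 0.1 * b for a, b in zip(x1, x2)]

rho1 = prcc(x1, y, x2)  # x1 drives the output
rho2 = prcc(x2, y, x1)  # weaker contribution from x2
```

Because the model is monotone but strongly nonlinear in x1, the rank transform is what keeps the x1 coefficient near its maximum; a raw Pearson correlation would understate it.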

  2. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeated transport calculations with sampling of the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One of the problems in uncertainty analysis with the statistical approach is that cross-section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Then, criticality calculations with the sampled nuclear data are performed, and the results are compared with those from the normal distribution conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross-section sampling was pursued with the normal and lognormal distributions. The uncertainties caused by the covariance of (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
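Matching a target mean m and standard deviation s with a lognormal distribution uses the underlying-normal parameters σ² = ln(1 + s²/m²) and μ = ln(m) − σ²/2, and every draw is strictly positive by construction. A sketch with a hypothetical cross section (2.0 b mean, 50% relative standard deviation; the numbers are illustrative, not from the paper):

```python
import math
import random

def lognormal_params(mean, std):
    """Convert a target mean/std of the physical quantity into the
    (mu, sigma) parameters of the underlying normal distribution."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

# hypothetical cross section: mean 2.0 barn, 50% relative standard deviation
mean, std = 2.0, 1.0
mu, sigma = lognormal_params(mean, std)

rng = random.Random(123)
samples = [rng.lognormvariate(mu, sigma) for _ in range(20000)]
# all samples are strictly positive, unlike Gaussian sampling with a large std,
# while the sample mean and std reproduce the targets
```

The same Gaussian parameters (mean 2.0, std 1.0) would put roughly 2.3% of draws below zero, which is the negative-sampling problem the lognormal scheme avoids.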

  3. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeated transport calculations with sampling of the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One of the problems in uncertainty analysis with the statistical approach is that cross-section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. Then, criticality calculations with the sampled nuclear data are performed, and the results are compared with those from the normal distribution conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross-section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross-section sampling was pursued with the normal and lognormal distributions. The uncertainties caused by the covariance of (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis

  4. Invariant methods for an ensemble-based sensitivity analysis of a passive containment cooling system of an AP1000 nuclear power plant

    International Nuclear Information System (INIS)

    Di Maio, Francesco; Nicola, Giancarlo; Borgonovo, Emanuele; Zio, Enrico

    2016-01-01

    Sensitivity Analysis (SA) is performed to gain fundamental insights on a system behavior that is usually reproduced by a model and to identify the most relevant input variables whose variations affect the system model functional response. For the reliability analysis of passive safety systems of Nuclear Power Plants (NPPs), the models are Best Estimate (BE) Thermal Hydraulic (TH) codes that predict the system functional response in normal and accidental conditions and, in this paper, an ensemble of three alternative invariant SA methods is innovatively set up for an SA of the TH code input variables. The ensemble aggregates the input-variable ranking orders provided by the Pearson correlation ratio, the Delta method and the Beta method. The capability of the ensemble is shown on a BE–TH code of the Passive Containment Cooling System (PCCS) of an Advanced Pressurized water reactor AP1000, during a Loss Of Coolant Accident (LOCA), whose output probability density function (pdf) is approximated by a Finite Mixture Model (FMM), on the basis of a limited number of simulations. - Highlights: • We perform the reliability analysis of a passive safety system of a Nuclear Power Plant (NPP). • We use a Thermal Hydraulic (TH) code for predicting the NPP response to accidents. • We propose an ensemble of Invariant Methods for the sensitivity analysis of the TH code. • The ensemble aggregates the rankings of the Pearson correlation, Delta and Beta methods. • The approach is tested on a Passive Containment Cooling System of an AP1000 NPP.
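The abstract does not spell out the ensemble's aggregation rule; a common Borda-style average-rank aggregation of several methods' orderings might look like the sketch below (the three TH-code input names are hypothetical):

```python
def aggregate_rankings(rankings):
    """Aggregate rankings from several sensitivity methods by average rank
    (Borda-style): each method orders variables from most influential
    (rank 1) to least, and variables are re-ordered by their mean rank."""
    variables = rankings[0]
    mean_rank = {
        v: sum(r.index(v) + 1 for r in rankings) / len(rankings)
        for v in variables
    }
    return sorted(variables, key=mean_rank.get)

# hypothetical rankings of three TH-code inputs by three SA methods
pearson_r = ["wall_conductivity", "air_temperature", "initial_pressure"]
delta_m   = ["wall_conductivity", "initial_pressure", "air_temperature"]
beta_m    = ["air_temperature", "wall_conductivity", "initial_pressure"]

consensus = aggregate_rankings([pearson_r, delta_m, beta_m])
# wall_conductivity has mean rank (1+1+2)/3 ~ 1.33 and is ranked first
```

Aggregating over several invariant measures, as the paper proposes, guards against any single measure's pathologies dominating the final ranking.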

  5. Sensitivity analysis of a radionuclide transfer model describing contaminated vegetation (weeds) in Fukushima Prefecture, using the Morris method and Sobol' indices

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Metivier, J.M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Garcia-Sanchez, L. [Institut de Radioprotection et de Surete Nucleaire-PRPENV/SERIS/L2BT (France)

    2014-07-01

    The increasing spatial and temporal complexity of models demands methods capable of ranking the influence of their large numbers of parameters. This question specifically arises in assessment studies on the consequences of the Fukushima accident. Sensitivity analysis aims at measuring the influence of input variability on the output response. Generally, two main approaches are distinguished (Saltelli, 2001; Iooss, 2011): screening approaches, which are less expensive in computation time and allow non-influential parameters to be identified; and measures of importance, which introduce finer quantitative indices. The latter category includes regression-based methods, assuming a linear or monotonic response (Pearson coefficient, Spearman coefficient), and variance-based methods, which make no assumptions on the model but require a number of evaluations that becomes prohibitive as the number of parameters increases. These approaches are available in various statistical programs (notably R) but are still poorly integrated in modelling platforms for radioecological risk assessment. This work aimed at illustrating the benefits of sensitivity analysis in the course of radioecological risk assessments. This study used two complementary state-of-the-art global sensitivity analysis methods: the screening method of Morris (Morris, 1991; Campolongo et al., 2007), based on a limited number of model evaluations with a one-at-a-time (OAT) design; and the variance-based Sobol' sensitivity analysis (Saltelli, 2002), based on a large number of model evaluations in the parameter space with quasi-random sampling (Owen, 2003). The sensitivity analyses were applied to a dynamic Soil-Plant Deposition Model (Gonze et al., submitted to this conference) predicting foliar concentration in weeds after atmospheric radionuclide fallout. The Soil-Plant Deposition Model considers two foliage pools and a root pool, and describes foliar biomass growth with a Verhulst model. The developed semi-analytic formulation of foliar concentration
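The Morris screening idea mentioned above can be sketched in a few lines: repeat one-at-a-time perturbations from random base points and summarize the absolute elementary effects per factor (the mu* statistic). The toy three-input model below stands in for the transfer model and is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the soil-plant transfer model:
    # x0 strong linear effect, x1 mild nonlinear effect, x2 inert.
    return 4 * x[0] + x[1] ** 2 + 0 * x[2]

def morris_mu_star(model, k, r=50, delta=0.25, rng=rng):
    """Crude Morris screening: r one-at-a-time designs in [0,1]^k,
    returning mu* (mean absolute elementary effect) per factor."""
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, k)
        fx = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (model(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

mu_star = morris_mu_star(model, k=3)
print(mu_star)  # x0 dominates, x2 is screened out as non-influential
```

Each of the r designs costs k+1 model runs, which is why Morris is the cheap first step before a variance-based analysis.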

  6. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  7. Sensitivity analysis for publication bias in meta-analysis of diagnostic studies for a continuous biomarker.

    Science.gov (United States)

    Hattori, Satoshi; Zhou, Xiao-Hua

    2018-02-10

    Publication bias is one of the most important issues in meta-analysis. For standard meta-analyses examining intervention effects, the funnel plot and the trim-and-fill method are simple and widely used techniques for assessing and adjusting for the influence of publication bias, respectively. However, their use may be subjective and can then produce misleading insights. To make a more objective inference for publication bias, various sensitivity analysis methods have been proposed, including the Copas selection model. For meta-analysis of diagnostic studies evaluating a continuous biomarker, the summary receiver operating characteristic (sROC) curve is a very useful method in the presence of heterogeneous cutoff values. To the best of our knowledge, no methods are available for evaluating the influence of publication bias on the estimation of the sROC curve. In this paper, we introduce a Copas-type selection model for meta-analysis of diagnostic studies and propose a sensitivity analysis method for publication bias. Our method enables us to assess the influence of publication bias on the estimation of the sROC curve and then judge whether the result of the meta-analysis is sufficiently credible or should be interpreted with much caution. We illustrate our proposed method with real data. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification: it can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long time and high computational cost required to quantitatively assess the sensitivity of a multi-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and ten parameters were then selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
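The meta-model idea in step (2) can be sketched as: fit a cheap response surface to a handful of expensive model runs, then estimate Sobol first-order indices by brute force on the surrogate. Everything below (the two-input stand-in model, the quadratic feature set, the sample sizes) is an assumption for illustration, not the RSMSobol implementation itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def hydro_model(x):
    # Stand-in for an expensive hydrological model run (additive, so the
    # analytic first-order indices are 1/5 and 4/5 on U[0,1] inputs).
    return x[:, 0] + 2.0 * x[:, 1]

# Step 1: fit a quadratic response surface (the meta-model) to 50 runs.
X = rng.uniform(0, 1, (50, 2))
y = hydro_model(X)
feats = lambda X: np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                   X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
surrogate = lambda X: feats(X) @ coef

# Step 2: brute-force first-order Sobol indices on the cheap surrogate:
# S_i = Var(E[Y | X_i]) / Var(Y), with the conditional means estimated
# by fixing X_i and averaging over the other input.
def first_order_sobol(f, i, n_outer=1000, n_inner=500):
    cond_means = []
    for xi in rng.uniform(0, 1, n_outer):
        Z = rng.uniform(0, 1, (n_inner, 2))
        Z[:, i] = xi
        cond_means.append(f(Z).mean())
    total = f(rng.uniform(0, 1, (20_000, 2))).var()
    return np.var(cond_means) / total

s1, s2 = first_order_sobol(surrogate, 0), first_order_sobol(surrogate, 1)
print(s1, s2)  # roughly 0.2 and 0.8 for this additive test function
```

The expensive model is evaluated only 50 times; the millions of evaluations needed for the Sobol estimates all hit the surrogate, which is the cost reduction the abstract claims.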

  9. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  10. Increased sensitivity of OSHA method analysis of diacetyl and 2,3-pentanedione in air.

    Science.gov (United States)

    LeBouf, Ryan; Simmons, Michael

    2017-05-01

    Gas chromatography/mass spectrometry (GC/MS) operated in selected ion monitoring mode was used to enhance the sensitivity of OSHA Methods 1013/1016 for measuring diacetyl and 2,3-pentanedione in air samples. The original methods use flame ionization detection, which cannot achieve the required sensitivity to quantify samples at or below the NIOSH recommended exposure limits (REL: 5 ppb for diacetyl and 9.3 ppb for 2,3-pentanedione) when sampling for both diacetyl and 2,3-pentanedione. OSHA Method 1012 was developed to measure diacetyl at lower levels but requires an electron capture detector and a sample preparation time of 36 hours. Using GC/MS allows detection of these two alpha-diketones at lower levels than OSHA Method 1012 for diacetyl and OSHA Method 1016 for 2,3-pentanedione. Acetoin and 2,3-hexanedione may also be measured using this technique. Method quantification limits were 1.1 ppb for diacetyl (22% of the REL), 1.1 ppb for 2,3-pentanedione (12% of the REL), 1.1 ppb for 2,3-hexanedione, and 2.1 ppb for acetoin. Average extraction efficiencies above the limit of quantitation were 100% for diacetyl, 92% for 2,3-pentanedione, 89% for 2,3-hexanedione, and 87% for acetoin. Mass spectrometry with OSHA Methods 1013/1016 could be used by analytical laboratories to provide more sensitive and accurate measures of exposure to diacetyl and 2,3-pentanedione.
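The percent-of-REL figures quoted in the abstract follow directly from the stated quantification limits, as this quick check shows (values copied from the abstract):

```python
# NIOSH recommended exposure limits and method quantification limits (ppb)
rel = {"diacetyl": 5.0, "2,3-pentanedione": 9.3}
loq = {"diacetyl": 1.1, "2,3-pentanedione": 1.1}

for analyte in rel:
    pct = 100 * loq[analyte] / rel[analyte]
    print(f"{analyte}: LOQ is {pct:.0f}% of the REL")
# diacetyl: 22% of the REL; 2,3-pentanedione: 12% of the REL
```

Both ratios agree with the percentages reported in the abstract, confirming the method quantifies well below the exposure limits.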

  11. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    Directory of Open Access Journals (Sweden)

    T. S. Kalra

    2017-12-01

    Full Text Available Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean–Atmosphere–Wave–Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for parameter sensitivity analysis. The evaluation of the Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of the parameter space for future observational and modeling work.

  12. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics]

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  13. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found nothing but very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome the shortcomings of OAT and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
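Saltelli et al.'s central objection to OAT can be reproduced in a few lines: for a purely interactive model, one-at-a-time perturbations around a nominal point report zero effect for every factor, while the variance over the full input space shows the factors clearly matter. The toy model and nominal point below are assumptions chosen to make the failure explicit:

```python
import numpy as np

rng = np.random.default_rng(7)
model = lambda x1, x2: x1 * x2          # purely interactive, non-linear

# One-factor-at-a-time around the nominal point (0, 0): both look inert.
nominal = 0.0
oat_x1 = model(nominal + 0.1, nominal) - model(nominal, nominal)
oat_x2 = model(nominal, nominal + 0.1) - model(nominal, nominal)
print("OAT effects:", oat_x1, oat_x2)   # both 0.0

# Variance-based view over the full input space [-1, 1]^2.
x1, x2 = rng.uniform(-1, 1, (2, 100_000))
var_y = model(x1, x2).var()
print("Var(Y):", var_y)                 # about 1/9: the factors do matter
```

OAT concludes both inputs are irrelevant, yet the output variance (analytically 1/9 here, carried entirely by the interaction) is substantial; a variance-based total-effect index would attribute it to both factors.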

  14. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found nothing but very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome the shortcomings of OAT and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  15. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-parameter capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
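The defining property of Latin Hypercube Sampling, one sample in each of the n equal-probability strata of every input dimension, can be implemented in a few lines (a generic textbook construction on the unit hypercube, not the specific sampler used in the study):

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n samples in [0,1]^k with exactly one sample per 1/n stratum
    in every dimension (the coverage guarantee plain Monte Carlo lacks)."""
    u = rng.uniform(size=(n, k))                      # jitter inside each stratum
    strata = np.array([rng.permutation(n) for _ in range(k)]).T
    return (strata + u) / n

rng = np.random.default_rng(3)
X = latin_hypercube(8, 3, rng)
# every dimension has exactly one point in each of the 8 equal-width bins
print(np.sort((X * 8).astype(int), axis=0))  # each column reads 0..7
```

Mapping each column through the inverse CDF of the desired input distribution then yields a stratified sample of the physical parameters.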

  16. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
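The bounding factor and the derived threshold can be computed directly. The closed forms below follow the published results of Ding and VanderWeele (the threshold on the maximum relative risk was later popularized as the "E-value"); the function names are mine:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Maximum relative amount by which an unmeasured confounder with
    exposure-confounder risk ratio rr_eu and confounder-outcome risk
    ratio rr_ud can explain away an observed risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr_obs):
    """Smallest equal pair rr_eu = rr_ud whose bounding factor matches
    the observed risk ratio, i.e. just strong enough to explain it away."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

print(bounding_factor(2.0, 2.0))   # 4/3: such a confounder explains little
print(e_value(2.0))                # 2 + sqrt(2) ≈ 3.41
```

Reading the example: an observed risk ratio of 2 can only be fully explained by an unmeasured confounder if both confounding relative risks reach about 3.41, which quantifies the "high threshold" the abstract describes.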

  17. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment: small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty. The results of a critical experiment are only known as well as its geometric and material properties. The goal of this correlation is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
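A direct-perturbation sensitivity coefficient of the kind compared against TSUNAMI-3D output can be sketched as a relative central difference, S = (dk/k)/(dp/p). The power-law stand-in for the criticality model (`k_toy`) and the 1% step are assumptions for illustration:

```python
def sensitivity_coefficient(k_model, p0, rel_step=0.01):
    """Direct-perturbation sensitivity S = (dk/k)/(dp/p), estimated by
    a small central difference about the nominal parameter value p0."""
    dp = rel_step * p0
    k_plus, k_minus, k0 = k_model(p0 + dp), k_model(p0 - dp), k_model(p0)
    return ((k_plus - k_minus) / (2 * dp)) * (p0 / k0)

# toy stand-in: if k scales as p**0.3, the exact coefficient is 0.3
k_toy = lambda p: 1.0 * p ** 0.3
s = sensitivity_coefficient(k_toy, p0=2.0)
print(s)  # ≈ 0.3
```

Running one such pair of perturbed calculations per uncertain geometric or material property, then combining the coefficients with the parameter uncertainties, gives the total experimental uncertainty the abstract describes.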

  18. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    Science.gov (United States)

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
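One of the global measures listed, the partial rank correlation coefficient (PRCC), can be illustrated with a generic residual-based construction: rank-transform inputs and output, regress out the other inputs, and correlate the residuals. This is a common textbook implementation, not necessarily SBML-SAT's own code, and the three-parameter toy model is an assumption:

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    rank-transform everything, then correlate the residuals left after
    linearly regressing out the remaining inputs (on ranks)."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr, yr = rank(X), rank(y[:, None])[:, 0]
    out = []
    for i in range(X.shape[1]):
        Z = np.column_stack([np.ones(len(X)), np.delete(Xr, i, axis=1)])
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (500, 3))
y = 5 * X[:, 0] - 3 * X[:, 1] + 0.1 * rng.normal(size=500)  # x2 is inert
p = prcc(X, y)
print(p)  # strongly positive, strongly negative, near zero
```

Because it works on ranks, PRCC captures any monotone parameter-output relationship, not only linear ones, while controlling for the other parameters.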

  19. Parametric sensitivity analysis for biochemical reaction networks based on pathwise information theory.

    Science.gov (United States)

    Pantazis, Yannis; Katsoulakis, Markos A; Vlachos, Dionisios G

    2013-10-22

    Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the FIM can allow one to efficiently address

  20. Radioimmunoassay (RIA), a highly specific, extremely sensitive quantitative method of analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strecker, H; Hachmann, H; Seidel, L [Farbwerke Hoechst A.G., Frankfurt am Main (Germany, F.R.). Radiochemisches Lab.

    1979-02-01

    Radioimmunoassay is an analytical method combining the sensitivity of radioactivity measurements with the specificity of the antigen-antibody reaction. Substances can thus be measured at concentrations as low as picograms per milliliter of serum, even in the presence of a millionfold excess of otherwise interfering material (for example, in serum). The method is simple to perform and is at present mainly used in the field of endocrinology. Further possible areas of application are the diagnosis of infectious diseases, drug research, environmental protection, forensic medicine, and general analytics. The quantities of radioactivity, used exclusively in vitro, are in the nano-curie range; the radiation dose is therefore negligible.

  1. Perturbative methods for sensitivity calculation in safety problems of nuclear reactors: state-of-the-art

    International Nuclear Information System (INIS)

    Lima, Fernando R.A.; Lira, Carlos A.B.O.; Gandini, Augusto

    1995-01-01

    During the last two decades, perturbative methods have become an efficient tool for performing sensitivity analysis in nuclear reactor safety problems. In this paper, a comparative study of perturbation formalisms (the Differential and Matricial methods and Generalized Perturbation Theory - GPT) is considered. A number of applications are then described, analyzing the sensitivity of functions relevant to the thermal-hydraulic design or safety analysis of nuclear reactor cores and steam generators. The behaviour of the nuclear reactor cores and steam generators is simulated by the COBRA-IV-I and GEVAP codes, respectively. The results of the sensitivity calculations show good agreement with those obtained directly with the mentioned codes, so a significant saving in computational time can be achieved by using perturbative methods for sensitivity analysis in nuclear power plants. (author). 25 refs., 5 tabs

  2. An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research

    Science.gov (United States)

    Kuramoto, S. Janet; Stuart, Elizabeth A.

    2013-01-01

    Although randomization is the gold standard for estimating causal relationships, many questions in prevention science must be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles' heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods are categorized into two groups reflecting differences in their conceptualization of sensitivity analysis, as well as in their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring's risk of hospitalization for a suicide attempt. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282

  3. The adjoint sensitivity method, a contribution to the code uncertainty evaluation

    International Nuclear Information System (INIS)

    Ounsy, A.; Crecy, F. de; Brun, B.

    1993-01-01

    The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to functional-type responses of a thermalhydraulic model is demonstrated. The problem has been analyzed on a simple example of a nonlinear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature on this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the Continuous ASM and the Discrete ASM. The equivalence of both methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermalhydraulic codes. The application of the DASM to the thermalhydraulic safety code CATHARE is then presented for two examples. They demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs

  4. The adjoint sensitivity method. A contribution to the code uncertainty evaluation

    International Nuclear Information System (INIS)

    Ounsy, A.; Brun, B.

    1993-01-01

    The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to functional-type responses of a thermalhydraulic model is demonstrated. The problem has been analyzed on a simple example of a nonlinear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature on this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the Continuous ASM and the Discrete ASM. The equivalence of both methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermalhydraulic codes. The application of the DASM to the thermalhydraulic safety code CATHARE is then presented for two examples. They demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs

  5. The adjoint sensitivity method. A contribution to the code uncertainty evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ounsy, A; Brun, B

    1994-12-31

    The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to functional-type responses of a thermalhydraulic model is demonstrated. The problem has been analyzed on a simple example of a nonlinear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature on this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the Continuous ASM and the Discrete ASM. The equivalence of both methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermalhydraulic codes. The application of the DASM to the thermalhydraulic safety code CATHARE is then presented for two examples. They demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.

  6. The adjoint sensitivity method, a contribution to the code uncertainty evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ounsy, A; Crecy, F de; Brun, B

    1994-12-31

    The application of the ASM (Adjoint Sensitivity Method) to thermohydraulic codes is examined. The advantage of the method is that it consumes very little CPU time compared with the usual approach, which requires one complete code run per sensitivity determination. The mathematical aspects of the problem are first described, and the applicability of the method to functional-type responses of a thermalhydraulic model is demonstrated. The problem has been analyzed on a simple example of a nonlinear hyperbolic equation (the Burgers equation). It is shown that the formalism used in the literature on this subject is not appropriate, and a new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the Continuous ASM and the Discrete ASM. The equivalence of both methods is demonstrated; nevertheless, only the DASM constitutes a practical solution for thermalhydraulic codes. The application of the DASM to the thermalhydraulic safety code CATHARE is then presented for two examples. They demonstrate that the ASM constitutes an efficient tool for the analysis of code sensitivity. (authors) 7 figs., 5 tabs., 8 refs.
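The economy claimed for the discrete adjoint (DASM) in the records above, one forward plus one backward sweep regardless of the number of parameters, can be seen on a minimal discrete example. This is a forward-Euler scheme for du/dt = -a·u rather than anything resembling CATHARE; the toy ODE, step sizes, and the functional J = u_N are all assumptions for illustration:

```python
import numpy as np

a, u0, h, N = 0.8, 1.0, 0.01, 1000

# forward sweep, storing the trajectory u_n = (1 - a*h)^n * u0
u = np.empty(N + 1)
u[0] = u0
for n in range(N):
    u[n + 1] = (1 - a * h) * u[n]

# backward (adjoint) sweep: lam_n = dJ/du_n, with J = u_N
lam = np.empty(N + 1)
lam[N] = 1.0
for n in range(N - 1, -1, -1):
    lam[n] = (1 - a * h) * lam[n + 1]

# assemble dJ/da from the stored states and adjoints
dJ_da_adjoint = sum(lam[n + 1] * (-h * u[n]) for n in range(N))

# cross-check against extra forward runs (the costly "one run per
# sensitivity" approach the adjoint method avoids)
def J(a):
    v = u0
    for _ in range(N):
        v = (1 - a * h) * v
    return v

eps = 1e-6
dJ_da_fd = (J(a + eps) - J(a - eps)) / (2 * eps)
print(dJ_da_adjoint, dJ_da_fd)   # the two estimates agree closely
```

For this linear scheme the adjoint result is exact: it equals the analytic derivative -N·h·(1-a·h)^(N-1)·u0, and the finite-difference check requires two additional full solves per parameter, which is precisely the cost the ASM eliminates.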

  7. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, together with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying the relative influence of parameters, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to both the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
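
    As a toy illustration of the parameter-sensitivity coefficients surveyed in this record, the sketch below (our own, not from the paper) estimates normalized sensitivities S_p = (p/x)·∂x/∂p of a logistic population model by central finite differences; the model and all parameter values are illustrative assumptions.

```python
import math

def logistic(t, r, K, x0):
    """Logistic population model x(t) = K / (1 + (K/x0 - 1) * exp(-r t))."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

def normalized_sensitivity(f, params, name, t, h=1e-6):
    """Normalized sensitivity S = (p / x) * dx/dp via central differences
    with a relative perturbation of size h on parameter `name`."""
    p = dict(params)
    base = f(t, **p)
    hi = dict(p); hi[name] = p[name] * (1 + h)
    lo = dict(p); lo[name] = p[name] * (1 - h)
    dxdp = (f(t, **hi) - f(t, **lo)) / (2 * h * p[name])
    return p[name] * dxdp / base

params = {"r": 0.5, "K": 100.0, "x0": 5.0}
sens = {name: normalized_sensitivity(logistic, params, name, t=4.0)
        for name in params}
```

    As t grows, the population saturates at K, so the normalized sensitivity to K approaches 1 while the sensitivities to r and x0 vanish; this is the kind of relative-influence ranking the record describes.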

  8. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
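
    The variance-based approach cited here (Saltelli et al.) can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The following is a minimal illustration, not the authors' implementation; the toy model and the choice of i.i.d. standard-normal inputs are our assumptions.

```python
import random

def first_order_sobol(model, n_inputs, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.
    Inputs are drawn i.i.d. standard normal (an illustrative choice)."""
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n)]
    B = [[rng.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(n_inputs):
        ABi = [row[:] for row in A]          # matrix A with column i from B
        for j in range(n):
            ABi[j][i] = B[j][i]
        yAB = [model(x) for x in ABi]
        # First-order partial variance estimator: E[ yB * (yABi - yA) ]
        Vi = sum(yB[j] * (yAB[j] - yA[j]) for j in range(n)) / n
        indices.append(Vi / var)
    return indices

# Toy additive model y = x1 + 2*x2: analytically S1 = 1/5 and S2 = 4/5
S = first_order_sobol(lambda x: x[0] + 2.0 * x[1], 2)
```

    The estimates converge to the analytic indices as the sample size grows, at the Monte Carlo cost the abstract warns about (several model evaluations per index).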

  9. Sensitivity analysis in Gaussian Bayesian networks using a symbolic-numerical technique

    International Nuclear Information System (INIS)

    Castillo, Enrique; Kjaerulff, Uffe

    2003-01-01

    The paper discusses the problem of sensitivity analysis in Gaussian Bayesian networks. The algebraic structure of the conditional means and variances, as rational functions involving linear and quadratic functions of the parameters, is used to simplify the sensitivity analysis. In particular, the probabilities of conditional variables exceeding given values, and related probabilities, are analyzed. Two examples of application illustrate the concepts and methods.
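
    The algebraic structure exploited in the paper rests on Gaussian conditioning: conditional means are linear in the evidence and conditional variances do not depend on it. A bivariate sketch of the conditioning step and an exceedance probability (all numerical values illustrative, not from the paper):

```python
import math

def conditional_gaussian(mu1, mu2, s11, s12, s22, x2):
    """X1 | X2 = x2 for a bivariate Gaussian: the conditional mean is
    linear in x2, the conditional variance is independent of x2."""
    mean = mu1 + (s12 / s22) * (x2 - mu2)
    var = s11 - s12 ** 2 / s22
    return mean, var

def prob_exceeds(mean, var, t):
    """P(X > t) for a Gaussian with the given mean and variance."""
    return 0.5 * math.erfc((t - mean) / math.sqrt(2.0 * var))

m, v = conditional_gaussian(0.0, 0.0, 1.0, 0.5, 1.0, 2.0)  # mean 1.0, var 0.75
p = prob_exceeds(m, v, 1.0)                                # 0.5 by symmetry
```

    Propagating symbolic parameters through these two formulas is what yields the rational functions of the parameters that the paper analyzes.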

  10. Code development for eigenvalue total sensitivity analysis and total uncertainty analysis

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei

    2015-01-01

    Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis of the origins of the implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact the eigenvalue of the neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and the statistical sampling method. In order to consider the contributions of the various basic cross sections and the implicit effects, which are indirect results of multigroup cross sections through the resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with a WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain steadier and more reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing a TMI-1 pin-cell case. The numerical results show that the total uncertainty of the eigenvalue caused by cross sections can reach about 0.72%. The contributions of the basic cross sections and their implicit effects are therefore not negligible.
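
    The bootstrap step mentioned here can be sketched as follows: resample the sampled outputs with replacement and read off a percentile interval for the uncertainty statistic. The eigenvalue sample below is synthetic, chosen only to mimic the quoted ~0.72% spread; it is our illustration, not UNICORN output.

```python
import random
import statistics

def bootstrap_ci_of_std(sample, n_boot=2000, seed=1):
    """Percentile bootstrap interval for the sample standard deviation:
    resample with replacement, collect the statistic, take 2.5%/97.5%."""
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(
        statistics.stdev(rng.choices(sample, k=n)) for _ in range(n_boot)
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Hypothetical sampled eigenvalues with ~0.72% relative spread (synthetic)
rng = random.Random(0)
keff = [rng.gauss(1.0, 0.0072) for _ in range(200)]
lo, hi = bootstrap_ci_of_std(keff)
```

    The interval quantifies how steady the sampled uncertainty estimate itself is, which is the stated motivation for adding the bootstrap to the statistical sampling method.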

  11. SENSITIVITY ANALYSIS as a methodical approach to the development of design strategies for environmentally sustainable buildings

    DEFF Research Database (Denmark)

    Hansen, Hanne Tine Ring

    Recent decades have seen an increase in scientific and political awareness, which has led to an escalation in the number of research publications in the field, as well as legislative demands for the energy consumption of buildings. The publications in the field refer to many different approaches to environmentally sustainable architecture, such as: ecological, green, bio-climatic, sustainable, passive, low-energy and environmental architecture. This PhD project sets out to gain a better understanding of environmentally sustainable architecture and the methodical approaches applied in the development of this type of architecture. The research methodology applied in the project combines a literature study of descriptions of methodical approaches and built examples with a sensitivity analysis and a qualitative interview with two designers from a best practice example of a practice that has achieved environmentally sustainable architecture.

  12. Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis

    Directory of Open Access Journals (Sweden)

    M. Bocquet

    2008-02-01

    To begin, recent techniques devoted to the reconstruction of sources of an atmospheric tracer at continental scale are introduced. A first method is based on the principle of maximum entropy on the mean and is briefly reviewed here. A second approach, which has not been applied in this field yet, is based on an exact Bayesian approach, through a maximum a posteriori estimator. The methods share common grounds, and both perform equally well in practice. When specific prior hypotheses on the sources, such as positivity or boundedness, are taken into account, both methods lead to purposefully devised cost functions. These cost functions are not necessarily quadratic, because the underlying assumptions are not Gaussian. As a consequence, several mathematical tools developed in data assimilation on the basis of quadratic cost functions in order to establish a posteriori analyses need to be extended to this non-Gaussian framework. Concomitantly, the second-order sensitivity analysis needs to be adapted, as do the computations of the averaging kernels of the source and of the errors obtained in the reconstruction. All of these developments are applied to a real case of tracer dispersion: the European Tracer Experiment (ETEX). Comparisons are made between a least-squares cost function (similar to the so-called 4D-Var approach) and a cost function which is not based on Gaussian hypotheses. Besides, the information content of the observations used in the reconstruction is computed and studied for the application case. A connection with the degrees of freedom for signal is also established. As a by-product of these methodological developments, conclusions are drawn on the information content of the ETEX dataset as seen from the inverse modelling point of view.

  13. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state dependent parameter meta-modelling.
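
    Sinclair's contribution-to-the-sample-mean plot can be computed in a few lines: sort the runs by one input and accumulate that input's share of the output sum. This is our own minimal sketch, not the authors' code; for a non-influential input the curve hugs the diagonal, and departures from it signal influence.

```python
def csm_curve(x, y):
    """Contribution to the sample mean: sort the runs by the input value
    and return (fraction of runs, cumulative share of the output sum)."""
    pairs = sorted(zip(x, y))
    total = sum(y)
    n = len(pairs)
    acc, frac, csm = 0.0, [], []
    for k, (_, yk) in enumerate(pairs, start=1):
        acc += yk
        frac.append(k / n)
        csm.append(acc / total)
    return frac, csm

# Influential input: the output grows with x, so the curve sags below the diagonal
frac, csm = csm_curve([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

    Plotting csm against frac and measuring the maximum departure from the diagonal is the basis for the kind of statistical test the record proposes.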

  14. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare . However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop

  15. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    Directory of Open Access Journals (Sweden)

    Karolina Chwialkowska

    2017-11-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation

  16. Advanced surrogate model and sensitivity analysis methods for sodium fast reactor accident assessment

    International Nuclear Information System (INIS)

    Marrel, A.; Marie, N.; De Lozzo, M.

    2015-01-01

    Within the framework of generation IV Sodium Fast Reactors, safety in case of severe accidents is assessed. In this context, CEA has developed a new physical tool to model the accident initiated by the Total Instantaneous Blockage (TIB) of a sub-assembly. This TIB simulator depends on many uncertain input parameters. This paper proposes a global methodology combining several advanced statistical techniques in order to perform a global sensitivity analysis of the TIB simulator. The objective is to identify the most influential uncertain inputs for the various TIB outputs involved in the safety analysis. The methodology takes into account the constraints on the TIB simulator outputs (positivity constraints) and deals simultaneously with the various outputs. To do this, a space-filling design is used and the corresponding TIB model simulations are performed. Based on this learning sample, an efficient constrained Gaussian process metamodel is fitted to each TIB model output. Then, using the metamodels, classical sensitivity analyses are performed for each TIB output. Multivariate global sensitivity analyses based on aggregated indices are also performed, providing additional valuable information. Main conclusions on the influence of each uncertain input are derived. - Highlights: • Physical-statistical tool for the Sodium Fast Reactor TIB accident. • 27 uncertain parameters (core state, lack of physical knowledge) are highlighted. • A constrained Gaussian process efficiently predicts TIB outputs (safety criteria). • Multivariate sensitivity analyses reveal that three inputs are mainly influential. • The type of corium propagation (thermal or hydrodynamic) is the most influential.

  17. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Assembly precision optimization of complex products has a substantial benefit in improving product quality. Because a variety of deviation sources couple with one another, however, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, a sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences and locating scheme, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
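
    The first-order Taylor step can be sketched numerically: the sensitivity of an assembly dimension to each deviation source is the partial derivative of the closure equation with respect to that source. The two-link dimension chain below is a hypothetical example, not the wing flap rocker of the paper.

```python
import math

def deviation_sensitivities(f, x, h=1e-6):
    """First-order Taylor sensitivities dY/dx_i of an assembly dimension Y
    with respect to each deviation source, via central differences."""
    sens = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        sens.append((f(xp) - f(xm)) / (2.0 * h))
    return sens

# Hypothetical closure equation of a simple two-link dimension chain:
# a gap depending on two lengths and one angle
def gap(x):
    l1, l2, theta = x
    return l1 + l2 * math.cos(theta)

s = deviation_sensitivities(gap, [100.0, 50.0, 0.3])
```

    The resulting sensitivities (1 for l1, cos(theta) for l2, -l2·sin(theta) for theta) rank the deviation sources exactly as the ratio-of-variations definition in the abstract prescribes, to first order.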

  18. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

    The objective of this study was to determine whether recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Optimization of the piping system support design was then investigated, selecting the support locations and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  19. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    Science.gov (United States)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentrations and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
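
    The nonlinearity argument can be made concrete with a toy concentration-emission relationship (our illustration, not from the paper): as soon as a nonlinear cross term is present, brute-force impacts no longer sum to the total concentration change.

```python
def conc(e1, e2):
    """Toy nonlinear concentration-emission relationship (illustrative)."""
    return e1 + e2 + 0.5 * e1 * e2

e1, e2 = 2.0, 3.0
total = conc(e1, e2) - conc(0.0, 0.0)      # total change vs. zero emissions
impact1 = conc(e1, e2) - conc(0.0, e2)     # brute force: switch source 1 off
impact2 = conc(e1, e2) - conc(e1, 0.0)     # brute force: switch source 2 off
mismatch = (impact1 + impact2) - total     # nonzero because of the cross term
```

    Here the impacts sum to 11 while the total is 8: treating impacts as contributions would double-count the interaction, which is exactly the pitfall the review warns about.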

  20. Uncertainty and sensitivity analysis applied to coupled code calculations for a VVER plant transient

    International Nuclear Information System (INIS)

    Langenbuch, S.; Krzykacz-Hausmann, B.; Schmidt, K. D.

    2004-01-01

    The development of coupled codes, combining thermal-hydraulic system codes and 3D neutron kinetics, is an important step to perform best-estimate plant transient calculations. It is generally agreed that the application of best-estimate methods should be supplemented by an uncertainty and sensitivity analysis to quantify the uncertainty of the results. The paper presents results from the application of the GRS uncertainty and sensitivity method for a VVER-440 plant transient, which was already studied earlier for the validation of coupled codes. For this application, the main steps of the uncertainty method are described. Typical results of the method applied to the analysis of the plant transient by several working groups using different coupled codes are presented and discussed. The results demonstrate the capability of an uncertainty and sensitivity analysis. (authors)
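
    The GRS method propagates sampled input uncertainties through the code and commonly sizes the number of code runs with Wilks' tolerance-limit formula; a minimal sketch of that sizing step (our illustration, not from the paper):

```python
def wilks_one_sided(coverage=0.95, confidence=0.95):
    """Smallest sample size n such that the largest of n random code runs
    is a one-sided tolerance limit: 1 - coverage**n >= confidence (Wilks)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_runs = wilks_one_sided()   # 59 runs for the usual 95%/95% statement
```

    The appeal of this sizing rule is that the required number of runs is independent of the number of uncertain input parameters, which is what makes the sampling-based GRS approach tractable for coupled-code transients.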

  1. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    Science.gov (United States)

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
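
    A bare-bones version of the bootstrap step described here, applied to a hypothetical two-arm cost-effectiveness comparison (all data below are made up for illustration; the real analysis concerned H. pylori eradication strategies):

```python
import random

def bootstrap_icer(cost_a, eff_a, cost_b, eff_b, n_boot=1000, seed=0):
    """Bootstrap the incremental cost-effectiveness ratio of B vs. A by
    resampling patient-level (cost, effect) pairs with replacement per arm."""
    rng = random.Random(seed)

    def resample(costs, effs):
        idx = [rng.randrange(len(costs)) for _ in costs]
        return [costs[i] for i in idx], [effs[i] for i in idx]

    icers = []
    for _ in range(n_boot):
        ca, ea = resample(cost_a, eff_a)
        cb, eb = resample(cost_b, eff_b)
        d_cost = sum(cb) / len(cb) - sum(ca) / len(ca)
        d_eff = sum(eb) / len(eb) - sum(ea) / len(ea)
        if d_eff:
            icers.append(d_cost / d_eff)
    icers.sort()
    return icers[len(icers) // 2]     # median of the bootstrap ICERs

# Hypothetical arms: B costs 100 more on average and adds 0.5 units of effect
icer = bootstrap_icer([1000.0] * 20, [1.0] * 20, [1100.0] * 20, [1.5] * 20)
```

    Resampling observed pairs rather than drawing from fitted theoretical distributions is what lets the bootstrap sidestep the distributional assumptions the authors found problematic.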

  2. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili, E-mail: wangnsrl@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029 (China); Zhang, Kai [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Zhu, Peiping; Wu, Ziyu, E-mail: wuzy@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China and Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  3. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili; Zhang, Kai; Zhu, Peiping; Wu, Ziyu

    2015-01-01

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations

  4. An efficient sensitivity analysis method for modified geometry of Macpherson suspension based on Pearson correlation coefficient

    Science.gov (United States)

    Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh

    2017-06-01

    The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of the Pearson correlation, which determines the importance of each member in the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed. The Pearson correlation coefficient proves to be an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
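
    Pearson-correlation sensitivity ranking of the kind used here can be sketched in a few lines; the "suspension response" below is a hypothetical linear stand-in dominated by its first parameter, not the paper's ADAMS/CAR model.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical sampled design parameters and a response dominated by p1
rng = random.Random(0)
p1 = [rng.uniform(-1, 1) for _ in range(2000)]
p2 = [rng.uniform(-1, 1) for _ in range(2000)]
y = [5.0 * a + 0.5 * b for a, b in zip(p1, p2)]

r1, r2 = pearson(p1, y), pearson(p2, y)   # |r1| >> |r2|: p1 is more influential
```

    Ranking members by |r| against each objective function (sprung mass acceleration, etc.) is the sensitivity screening step the abstract describes; for strongly nonlinear responses a rank-based correlation would be a safer choice.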

  5. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, it is very important to improve the prediction accuracy of neutronic properties in LMFBR cores, both to improve plant economic efficiency through rationally high-performance cores and to improve reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor JOYO. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient: the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Simply unifying every computational component is not sufficient, because the computational sequence may change for each item being analyzed or for purposes such as the interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands.

  6. Quasi-random Monte Carlo application in CGE systematic sensitivity analysis

    NARCIS (Netherlands)

    Chatzivasileiadis, T.

    2017-01-01

    The uncertainty and robustness of Computable General Equilibrium (CGE) models can be assessed by conducting a Systematic Sensitivity Analysis (SSA). Different methods have been used in the literature for SSA of CGE models, such as Gaussian Quadrature and Monte Carlo methods. This paper explores the use of
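
    Quasi-random (low-discrepancy) sampling of the kind explored here can be illustrated with a hand-rolled Halton sequence; this sketch is ours, not the paper's, and the two "CGE parameters" are purely illustrative.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base;
    Halton points use a distinct prime base per dimension."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# 2-D low-discrepancy points (bases 2 and 3), e.g. for sampling two
# uncertain model parameters more evenly than plain Monte Carlo
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

    Because the points fill the unit square more evenly than pseudo-random draws, a quasi-random SSA can reach a given accuracy with fewer model solves, which is the usual motivation over plain Monte Carlo.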

  7. The adjoint sensitivity method, a contribution to the code uncertainty evaluation

    International Nuclear Information System (INIS)

    Ounsy, A.; Brun, B.; De Crecy, F.

    1994-01-01

    This paper deals with the application of the adjoint sensitivity method (ASM) to thermal hydraulic codes. The advantage of the method is to use small central processing unit time in comparison with the usual approach requiring one complete code run per sensitivity determination. In the first part the mathematical aspects of the problem are treated, and the applicability of the method of the functional-type response of a thermal hydraulic model is demonstrated. On a simple example of non-linear hyperbolic equation (Burgers equation) the problem has been analysed. It is shown that the formalism used in the literature treating this subject is not appropriate. A new mathematical formalism circumventing the problem is proposed. For the discretized form of the problem, two methods are possible: the continuous ASM and the discrete ASM. The equivalence of both methods is demonstrated; nevertheless only the discrete ASM constitutes a practical solution for thermal hydraulic codes. The application of the discrete ASM to the thermal hydraulic safety code CATHARE is then presented for two examples. They demonstrate that the discrete ASM constitutes an efficient tool for the analysis of code sensitivity. ((orig.))
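
    The discrete ASM idea can be demonstrated on a tiny linear stand-in for a discretized model: a single adjoint solve A^T lam = c yields dJ/dp for every parameter, matching a finite-difference check that would otherwise cost one extra "code run" per parameter. The 2x2 operator below is purely illustrative (certainly not CATHARE).

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

def A_of(p):
    # Illustrative parameter-dependent discrete operator
    return [[2.0 + p, -1.0], [-0.5, 2.0]]

b, c, p = [1.0, 0.0], [0.0, 1.0], 0.5
u = solve2(A_of(p), b)                     # forward solve: A(p) u = b

# Discrete ASM: one adjoint solve A^T lam = c serves every parameter,
# since dJ/dp = -lam^T (dA/dp) u for the response J = c^T u
AT = [[A_of(p)[j][i] for j in range(2)] for i in range(2)]
lam = solve2(AT, c)
dA_dp = [[1.0, 0.0], [0.0, 0.0]]           # dA/dp for this operator
dJdp_adj = -sum(lam[i] * dA_dp[i][j] * u[j]
                for i in range(2) for j in range(2))

# Reference: a finite difference costs one extra forward solve per parameter
h = 1e-6
u_h = solve2(A_of(p + h), b)
dJdp_fd = (u_h[1] - u[1]) / h              # J = c^T u = u[1]
```

    With many parameters the advantage compounds: the adjoint route still needs only the one extra solve, which is the CPU-time argument made in the abstract.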

  8. An UPLC-MS/MS method for highly sensitive high-throughput analysis of phytohormones in plant tissues

    Directory of Open Access Journals (Sweden)

    Balcke Gerd Ulrich

    2012-11-01

    Background: Phytohormones are the key metabolites participating in the regulation of multiple functions of the plant organism. Among them, jasmonates, as well as abscisic and salicylic acids, are responsible for triggering and modulating plant reactions targeted against pathogens and herbivores, as well as resistance to abiotic stress (drought, UV irradiation and mechanical wounding). These factors induce dramatic changes in phytohormone biosynthesis and transport, leading to rapid local and systemic stress responses. Understanding the underlying mechanisms is of principal interest for scientists working in various areas of plant biology. However, highly sensitive, precise and high-throughput methods for quantification of these phytohormones in small samples of plant tissues are still missing. Results: Here we present an LC-MS/MS method for fast and highly sensitive determination of jasmonates, abscisic and salicylic acids. A single-step sample preparation procedure based on mixed-mode solid phase extraction was efficiently combined with essential improvements in mobile phase composition, yielding higher efficiency of chromatographic separation and MS sensitivity. This strategy resulted in a dramatic increase in overall sensitivity, allowing successful determination of phytohormones in small (less than 50 mg of fresh weight) tissue samples. The method was completely validated in terms of analyte recovery, sensitivity, linearity and precision. Additionally, it was cross-validated with a well-established GC-MS-based procedure and its applicability to a variety of plant species and organs was verified. Conclusion: The method can be applied for the analysis of target phytohormones in small tissue samples obtained from any plant species and/or plant part, relying on any commercially available (even less sensitive) tandem mass spectrometry instrumentation.

  9. Optimum shape design of incompressible hyperelastic structures with analytical sensitivity analysis

    International Nuclear Information System (INIS)

    Jarraya, A.; Wali, M.; Dammark, F.

    2014-01-01

    This paper is focused on the structural shape optimization of incompressible hyperelastic structures. An analytical sensitivity is developed for rubber-like materials. The whole shape optimization process is carried out by coupling a closed geometric shape in R² with boundaries defined by B-spline curves, exact sensitivity analysis, and a mathematical programming method (SQP: sequential quadratic programming). The design variables are the control point coordinates. The objective is to minimize the von Mises stress, subject to the constraint that the total material volume of the structure remains constant. In order to validate the exact Jacobian method, the sensitivity calculation is performed both numerically, by an efficient finite difference scheme, and by the exact Jacobian method. Numerical optimization examples are presented for elastic and hyperelastic materials using the proposed method.

  10. Sensitivity analysis for unobserved confounding of direct and indirect effects using uncertainty intervals.

    Science.gov (United States)

    Lindmark, Anita; de Luna, Xavier; Eriksson, Marie

    2018-05-10

    To estimate direct and indirect effects of an exposure on an outcome from observed data, strong assumptions about unconfoundedness are required. Since these assumptions cannot be tested using the observed data, a mediation analysis should always be accompanied by a sensitivity analysis of the resulting estimates. In this article, we propose a sensitivity analysis method for parametric estimation of direct and indirect effects when the exposure, mediator, and outcome are all binary. The sensitivity parameters consist of the correlations between the error terms of the exposure, mediator, and outcome models. These correlations are incorporated into the estimation of the model parameters and identification sets are then obtained for the direct and indirect effects for a range of plausible correlation values. We take the sampling variability into account through the construction of uncertainty intervals. The proposed method is able to assess sensitivity to both mediator-outcome confounding and confounding involving the exposure. To illustrate the method, we apply it to a mediation study based on the data from the Swedish Stroke Register (Riksstroke). An R package that implements the proposed method is available. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
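    The Monte Carlo workflow described above (assign a distribution to each uncertain input, sample, evaluate the ensemble) can be sketched in a few lines. The travel-time "model" and the distribution parameters below are hypothetical stand-ins, not values from the NETPATH study.

```python
# Monte Carlo propagation of parametric uncertainty: sample each
# uncertain input from its distribution and summarize the ensemble.

import random

random.seed(42)  # reproducible draws

def travel_time(mix_fraction, calcite_mmol):
    # hypothetical response: more old-water mixing -> longer apparent age
    return 5000.0 * mix_fraction + 800.0 * calcite_mmol

N = 10_000
samples = [travel_time(random.gauss(0.6, 0.05),   # mixing fraction
                       random.gauss(1.2, 0.2))    # calcite dissolved (mmol)
           for _ in range(N)]
mean = sum(samples) / N
var = sum((s - mean) ** 2 for s in samples) / (N - 1)
print(round(mean), round(var ** 0.5))
```

    The one-at-a-time sensitivity analysis the abstract also mentions would instead perturb a single input by a small amount while holding the others at their nominal values.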

  12. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates the large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used for the influence of input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data points of one period of the signal) when calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of input parameters on output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results. They show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
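    The figure of merit described above can be sketched as follows. This is an assumption-laden toy: the signals are invented, and details of the full FFTBM method (frequency cut-off, windowing) are omitted; only the mirroring and the average-amplitude ratio are illustrated.

```python
# Sketch of the FFTBM-SM figure of merit: the difference between a
# reference signal and a sensitivity-run signal is symmetrized by
# mirroring (suppressing the edge effect), transformed, and summarized
# as an average amplitude AA = sum|F(diff)| / sum|F(ref)|.

import cmath

def dft_magnitudes(x):
    """Magnitudes of the discrete Fourier transform of a real signal."""
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n))) / n
            for j in range(n)]

def mirror(x):
    """Symmetrize a signal so its first and last points coincide."""
    return x + x[::-1]

def average_amplitude(ref, run):
    diff = [r - s for r, s in zip(ref, run)]
    num = sum(dft_magnitudes(mirror(diff)))
    den = sum(dft_magnitudes(mirror(ref)))
    return num / den

ref = [1.0, 1.2, 1.5, 1.4, 1.1, 0.9, 0.8, 0.85]   # reference calculation
run = [1.0, 1.25, 1.45, 1.5, 1.0, 0.95, 0.75, 0.9]  # sensitivity run
aa = average_amplitude(ref, run)
print(round(aa, 4))
```

    An identical sensitivity run yields AA = 0; larger deviations from the reference give a larger AA, i.e. a more influential input parameter.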

  13. The surface analysis methods

    International Nuclear Information System (INIS)

    Deville, J.P.

    1998-01-01

    Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Costly in time and investment, these methods must be used deliberately. This article is addressed to non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples subjected to treatments in aggressive media. (O.M.)

  14. Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2005-01-01

    This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning

  15. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of one person's vaccination on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  16. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    Science.gov (United States)

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop

  17. An efficient method of reducing glass dispersion tolerance sensitivity

    Science.gov (United States)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.

  18. Computing eigenvalue sensitivity coefficients to nuclear data based on the CLUTCH method with RMC code

    International Nuclear Information System (INIS)

    Qiu, Yishu; She, Ding; Tang, Xiao; Wang, Kan; Liang, Jingang

    2016-01-01

    Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there is a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most Monte Carlo codes that have the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though theoretically accurate, the IFP method faces the challenge of huge memory consumption. Therefore, it may sometimes produce poor sensitivity coefficients, since the number of particles in each active cycle is not sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce the memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates of every collision, the F-CLUTCH algorithm stores the relevant reaction rates of fission points only. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified by a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate the accuracy of the F-CLUTCH algorithm is the same as the C

  19. Sensitivity analysis of numerical results of one- and two-dimensional advection-diffusion problems

    International Nuclear Information System (INIS)

    Motoyama, Yasunori; Tanaka, Nobuatsu

    2005-01-01

    Numerical simulation has been playing an increasingly important role in the fields of science and engineering. However, every numerical result contains errors such as modeling, truncation, and computing errors, and the magnitude of the errors quantitatively contained in the results is unknown. This situation forces large design margins in analysis-based design and prevents further cost reduction through design optimization. To overcome this situation, we developed a new method to numerically analyze the quantitative error of a numerical solution by using the sensitivity analysis method and the modified equation approach. If a reference case of typical parameters is calculated once by this method, no additional calculation is required to estimate the results for other numerical parameters, such as those with higher resolutions. Furthermore, we can predict the exact solution from the sensitivity analysis results and quantitatively evaluate the error of numerical solutions. Since the method incorporates the features of the conventional sensitivity analysis method, it can evaluate the effect of the modeling error as well as the truncation error. In this study, we confirm the effectiveness of the method through numerical benchmark problems of one- and two-dimensional advection-diffusion problems. (author)
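    A minimal version of the idea above, predicting the exact solution from results at different resolutions, is Richardson extrapolation: for a scheme of known order p, two grids can be combined to cancel the leading truncation-error term. This is a related classical technique used here only as an illustration; it is not the paper's sensitivity/modified-equation method, and the integrand is an arbitrary example.

```python
# Richardson extrapolation: combine a coarse and a fine result of a
# p-th order scheme to estimate the exact answer and hence the error.

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (second order)."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda x: x ** 3                     # exact integral on [0,1] is 0.25
coarse = trapezoid(f, 0.0, 1.0, 8)
fine = trapezoid(f, 0.0, 1.0, 16)
p = 2                                    # order of the trapezoid rule
extrap = (2**p * fine - coarse) / (2**p - 1)

err_fine = abs(fine - 0.25)
err_extrap = abs(extrap - 0.25)
print(err_fine > err_extrap)
```

    The gap between the fine-grid result and the extrapolated value also serves as a quantitative estimate of the truncation error, which is the kind of error information the abstract argues designers currently lack.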

  20. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    Science.gov (United States)

    Eno, Larry; Rabitz, Herschel

    1981-08-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  1. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can have a significant contribution to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides a satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method
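    The non-linearity the abstract refers to can be seen in the simplest deep-penetration model, where the transmitted flux varies as exp(-sigma·t) and is therefore nonlinear in the cross section. A first-order sensitivity estimate misses much of a large perturbation; adding the second-order term recovers most of it. The numbers below are illustrative assumptions, not the paper's 297 keV sodium data.

```python
# First- vs second-order sensitivity estimates for a nonlinear response
# J(sigma) = exp(-sigma * t), compared against direct substitution.

import math

sigma, t = 0.5, 10.0                 # cm^-1 and cm (hypothetical values)
J = lambda s: math.exp(-s * t)

dJ = -t * J(sigma)                   # first derivative dJ/dsigma
d2J = t * t * J(sigma)               # second derivative

dsigma = 0.1 * sigma                 # a 10% cross-section change
exact = J(sigma + dsigma) - J(sigma)        # direct substitution
first = dJ * dsigma                         # first-order theory
second = first + 0.5 * d2J * dsigma ** 2    # second-order theory (SOST-like)

print(round(exact, 6), round(first, 6), round(second, 6))
```

    The second-order estimate is closer to the direct-substitution result, yet it reuses derivatives computed at the unperturbed point, which is the efficiency argument made in the abstract.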

  2. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM

  3. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.

  4. Development of a method for comprehensive and quantitative analysis of plant hormones by highly sensitive nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry

    International Nuclear Information System (INIS)

    Izumi, Yoshihiro; Okazawa, Atsushi; Bamba, Takeshi; Kobayashi, Akio; Fukusaki, Eiichiro

    2009-01-01

    In recent plant hormone research, there is an increased demand for a highly sensitive and comprehensive analytical approach to elucidate the hormonal signaling networks, functions, and dynamics. We have demonstrated the high sensitivity of a comprehensive and quantitative analytical method developed with nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry (LC-ESI-IT-MS/MS) under multiple-reaction monitoring (MRM) in plant hormone profiling. Unlabeled and deuterium-labeled isotopomers of four classes of plant hormones and their derivatives, auxins, cytokinins (CK), abscisic acid (ABA), and gibberellins (GA), were analyzed by this method. The optimized nanoflow-LC-ESI-IT-MS/MS method showed ca. 5-10-fold greater sensitivity than capillary-LC-ESI-IT-MS/MS, and the detection limits (S/N = 3) of several plant hormones were in the sub-fmol range. The results showed excellent linearity (R 2 values of 0.9937-1.0000) and reproducibility of elution times (relative standard deviations, RSDs, <1.1%) and peak areas (RSDs, <10.7%) for all target compounds. Further, sample purification using Oasis HLB and Oasis MCX cartridges significantly decreased the ion-suppressing effects of biological matrix as compared to the purification using only Oasis HLB cartridge. The optimized nanoflow-LC-ESI-IT-MS/MS method was successfully used to analyze endogenous plant hormones in Arabidopsis and tobacco samples. The samples used in this analysis were extracted from only 17 tobacco dry seeds (1 mg DW), indicating that the efficiency of analysis of endogenous plant hormones strongly depends on the detection sensitivity of the method. Our analytical approach will be useful for in-depth studies on complex plant hormonal metabolism.

  5. Development of a method for comprehensive and quantitative analysis of plant hormones by highly sensitive nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Izumi, Yoshihiro; Okazawa, Atsushi; Bamba, Takeshi; Kobayashi, Akio [Department of Biotechnology, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871 (Japan); Fukusaki, Eiichiro, E-mail: fukusaki@bio.eng.osaka-u.ac.jp [Department of Biotechnology, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871 (Japan)

    2009-08-26

    In recent plant hormone research, there is an increased demand for a highly sensitive and comprehensive analytical approach to elucidate the hormonal signaling networks, functions, and dynamics. We have demonstrated the high sensitivity of a comprehensive and quantitative analytical method developed with nanoflow liquid chromatography-electrospray ionization-ion trap mass spectrometry (LC-ESI-IT-MS/MS) under multiple-reaction monitoring (MRM) in plant hormone profiling. Unlabeled and deuterium-labeled isotopomers of four classes of plant hormones and their derivatives, auxins, cytokinins (CK), abscisic acid (ABA), and gibberellins (GA), were analyzed by this method. The optimized nanoflow-LC-ESI-IT-MS/MS method showed ca. 5-10-fold greater sensitivity than capillary-LC-ESI-IT-MS/MS, and the detection limits (S/N = 3) of several plant hormones were in the sub-fmol range. The results showed excellent linearity (R{sup 2} values of 0.9937-1.0000) and reproducibility of elution times (relative standard deviations, RSDs, <1.1%) and peak areas (RSDs, <10.7%) for all target compounds. Further, sample purification using Oasis HLB and Oasis MCX cartridges significantly decreased the ion-suppressing effects of biological matrix as compared to the purification using only Oasis HLB cartridge. The optimized nanoflow-LC-ESI-IT-MS/MS method was successfully used to analyze endogenous plant hormones in Arabidopsis and tobacco samples. The samples used in this analysis were extracted from only 17 tobacco dry seeds (1 mg DW), indicating that the efficiency of analysis of endogenous plant hormones strongly depends on the detection sensitivity of the method. Our analytical approach will be useful for in-depth studies on complex plant hormonal metabolism.

  6. Instrumental neutron activation analysis as a routine method for rock analysis

    International Nuclear Information System (INIS)

    Rosenberg, R.J.

    1977-06-01

    Instrumental neutron activation methods for the analysis of geological samples have been developed. Special emphasis has been laid on the improvement of sensitivity and accuracy in order to maximize the quality of the analyses. Furthermore, the procedures have been automated as far as possible in order to minimize the cost of the analysis. A short review of the basic literature is given, followed by a description of the principles of the method. All aspects concerning the sensitivity are discussed thoroughly in view of the analyst's possibility of influencing them. Experimentally determined detection limits for Na, Al, K, Ca, Sc, Cr, Ti, V, Mn, Fe, Ni, Co, Rb, Zr, Sb, Cs, Ba, La, Ce, Nd, Sm, Eu, Gd, Tb, Dy, Yb, Lu, Hf, Ta, Th and U are given. The errors of the method are discussed, followed by the actions taken to avoid them. The most significant error was caused by flux deviation, but this was avoided by building a rotating sample holder to rotate the samples during irradiation. A scheme for the INAA of 32 elements is proposed. The method has been automated as far as possible, and an automatic γ-spectrometer and a computer program for the automatic calculation of the results are described. Furthermore, a completely automated uranium analyzer based on delayed neutron counting is described. The methods are discussed in view of their applicability to rock analysis. It is stated that the sensitivity varies considerably from element to element, and instrumental activation analysis is an excellent method for the analysis of some specific elements like lanthanides, thorium and uranium, but less so for many other elements. The accuracy is good, varying from 2% to 10% for most elements. For most elements, instrumental activation analysis is rather an expensive method; there are, however, a few exceptions. The most important of these is uranium. The analysis of uranium by delayed neutron counting is an inexpensive means for the analysis of large numbers of samples needed for

  7. A novel method of sensitivity analysis testing by applying the DRASTIC and fuzzy optimization methods to assess groundwater vulnerability to pollution: the case of the Senegal River basin in Mali

    Science.gov (United States)

    Souleymane, Keita; Zhonghua, Tang

    2017-08-01

    Vulnerability to groundwater pollution in the Senegal River basin was studied by two different but complementary methods: the DRASTIC method (which evaluates the intrinsic vulnerability) and the fuzzy method (which assesses the specific vulnerability by taking into account the continuity of the parameters). The application was validated by comparing the spatial distribution of the established vulnerability classes with the nitrate distribution in the study area. Three vulnerability classes (low, medium and high) were identified by both the DRASTIC method and the fuzzy method (between which the normalized DRASTIC model was also used). An integrated analysis reveals that the high classes, with 14.64% (DRASTIC method), 21.68% (normalized DRASTIC method) and 18.92% (fuzzy method), are not the most dominant. In addition, a new sensitivity analysis method was used to identify (and confirm) the main parameters which impact the vulnerability to pollution with fuzzy membership. The results showed that the vadose zone is the parameter with the greatest impact on groundwater vulnerability to pollution, while net recharge contributes least to pollution in the study area. It was also found that the fuzzy method better assesses the vulnerability to pollution, with a coincidence rate of 81.13% versus 77.35% for the DRASTIC method. These results can serve as a guide for policymakers to identify areas sensitive to pollution before such sites are used for socioeconomic infrastructure.
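
    The DRASTIC part of the approach is a weighted-sum index, which can be sketched as follows. The seven weights are the standard DRASTIC weights; the ratings for the example cell and the class breaks are hypothetical, since the paper derives its own class boundaries from the basin data:

```python
# Standard DRASTIC weights; ratings (1-10) are assigned per hydrogeological setting.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC parameter ratings."""
    return sum(WEIGHTS[p] * r for p, r in ratings.items())

def vulnerability_class(index):
    # Illustrative class breaks (the index can range from 23 to 230).
    if index < 100:
        return "low"
    if index < 140:
        return "medium"
    return "high"

# Hypothetical ratings for one grid cell.
cell = {"D": 7, "R": 6, "A": 6, "S": 5, "T": 9, "I": 8, "C": 6}
idx = drastic_index(cell)   # 154 for this cell
```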

  8. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement.
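
    The Monte Carlo evaluation of a top event from its cut-set representation can be sketched as follows. The fault tree, event names and probabilities below are invented for illustration; TEMAC itself works on SETS-generated Boolean expressions in a matrix form:

```python
import random

def top_event(cut_sets, state):
    # The top event occurs if every basic event of at least one cut set occurs.
    return any(all(state[e] for e in cs) for cs in cut_sets)

def mc_top_probability(cut_sets, probs, n=20000, seed=1):
    """Crude Monte Carlo estimate of the top event probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {e: rng.random() < p for e, p in probs.items()}
        hits += top_event(cut_sets, state)
    return hits / n

# Hypothetical two-cut-set top event: (A AND B) OR C.
cut_sets = [("A", "B"), ("C",)]
probs = {"A": 0.1, "B": 0.2, "C": 0.05}
p_hat = mc_top_probability(cut_sets, probs)   # exact value: 1 - (1-0.02)(1-0.05) = 0.069
```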

  9. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach improves on a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach to Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs), which approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. The proposed SA method keeps the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables, identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.
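
    The FMM building block, fitting a mixture to code-output samples with Expectation Maximization, can be sketched in one dimension as follows. Synthetic bimodal data stand in for TH code outputs; the component count and starting values are assumptions of this sketch:

```python
import math
import random

rng = random.Random(42)
# Synthetic "code output" samples drawn from two well-separated modes.
data = [rng.gauss(0.0, 1.0) for _ in range(1000)] + \
       [rng.gauss(5.0, 1.0) for _ in range(1000)]

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_gmm(data, mus, sigmas, weights, iters=50):
    """EM for a 1-D Gaussian mixture: the finite-mixture approximation of a pdf."""
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in data:
            p = [w * norm_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(p)
            resp.append([pi / tot for pi in p])
        # M-step: re-estimate means, standard deviations and weights.
        for k in range(len(mus)):
            rk = sum(r[k] for r in resp)
            mus[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / rk
            sigmas[k] = math.sqrt(var)
            weights[k] = rk / len(data)
    return mus, sigmas, weights

mus, sigmas, weights = em_gmm(data, [1.0, 4.0], [1.0, 1.0], [0.5, 0.5])
```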

  10. Alternative global goodness metrics and sensitivity analysis: heuristics to check the robustness of conclusions from studies comparing virtual screening methods.

    Science.gov (United States)

    Sheridan, Robert P

    2008-02-01

    We introduce two ways of testing the robustness of conclusions from studies comparing virtual screening methods: alternative "global goodness" metrics and sensitivity analysis. While the robustness tests cannot eliminate all biases in virtual screening comparisons, they are useful as a "reality check" for any given study. To illustrate this, we apply them to a set of enrichments published in McGaughey et al. (J. Chem. Inf. Model. 2007, 47, 1504-1519) where 11 target protein/ligand combinations are tested on 2D and 3D similarity methods, plus docking. The major conclusions in that paper, for instance, that ligand-based methods are better than docking methods, hold up. However, some minor conclusions, such as Glide being the best docking method, do not.
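
    A minimal sketch of one common "global goodness" metric, the enrichment factor on a ranked hit list, follows. The labels below are invented; the paper's analysis works with the published enrichments of McGaughey et al.:

```python
def enrichment_factor(labels_ranked, fraction):
    """EF = hit rate among the top fraction of the ranked list, divided by the
    hit rate of the whole list (1 = active, 0 = decoy, best-scored first)."""
    n = len(labels_ranked)
    n_top = max(1, int(n * fraction))
    top_rate = sum(labels_ranked[:n_top]) / n_top
    overall_rate = sum(labels_ranked) / n
    return top_rate / overall_rate

# Hypothetical ranked screen: 3 actives among 10 compounds.
labels = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
ef20 = enrichment_factor(labels, 0.2)   # 1.0 / 0.3 = 3.33...
```

Sensitivity analysis in this spirit would recompute such metrics while leaving out one target at a time and check whether the method ranking survives.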

  11. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors ( 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
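
    The variogram concept behind VARS can be sketched as a directional variogram of the model response along each input: a large variogram value at small lags flags a sensitive, fast-varying input. The two-input test function below is hypothetical, not the one used in the study:

```python
import math
import random

def model(x1, x2):
    # Hypothetical response: x1 acts on a much shorter length scale than x2.
    return math.sin(6 * math.pi * x1) + 0.3 * x2

def directional_variogram(f, dim, h, n=4000, seed=0):
    """gamma(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2], estimated by Monte Carlo."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = [rng.random() * (1 - h), rng.random() * (1 - h)]
        xp = list(x)
        xp[dim] += h
        acc += (f(*xp) - f(*x)) ** 2
    return 0.5 * acc / n

g1 = directional_variogram(model, 0, 0.05)   # large: x1 varies quickly
g2 = directional_variogram(model, 1, 0.05)   # tiny: x2 is smooth and weak
```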

  12. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-01-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  13. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
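
    The Common Random Number baseline mentioned above can be sketched on a toy parametric-sensitivity problem (an exponential sojourn time rather than a full spatial KMC model): driving both parameter values with the same uniform stream couples the two processes and sharply reduces the variance of the finite-difference estimate.

```python
import math
import random

# Finite-difference sensitivity of E[X] for X ~ Exp(rate) w.r.t. the rate.
# Exact derivative of 1/rate is -1/rate**2 = -0.25 at rate = 2.
rate, h, n = 2.0, 0.05, 20000

# (a) Independent streams at the two parameter values: noisy difference.
rng1, rng2 = random.Random(1), random.Random(2)
mean_hi = sum(rng1.expovariate(rate + h) for _ in range(n)) / n
mean_lo = sum(rng2.expovariate(rate) for _ in range(n)) / n
d_indep = (mean_hi - mean_lo) / h

# (b) Common random numbers: reuse the same uniforms for both parameter
# values via inverse-transform sampling, coupling the two processes.
rng = random.Random(3)
us = [rng.random() for _ in range(n)]
crn_hi = sum(-math.log(1.0 - u) / (rate + h) for u in us) / n
crn_lo = sum(-math.log(1.0 - u) / rate for u in us) / n
d_crn = (crn_hi - crn_lo) / h
```

With coupling, the per-sample noise largely cancels in the difference; the goal-oriented couplings of the paper push this idea further by tailoring the coupling to the observable.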

  14. Comparison of neutron activation analysis with other instrumental methods for elemental analysis of airborne particulate matter

    International Nuclear Information System (INIS)

    Regge, P. de; Lievens, F.; Delespaul, I.; Monsecour, M.

    1976-01-01

    A comparison of instrumental methods, including neutron activation analysis, X-ray fluorescence spectrometry, atomic absorption spectrometry and emission spectrometry, for the analysis of heavy metals in airborne particulate matter is described. The merits and drawbacks of each method for the routine analysis of a large number of samples are discussed. The sample preparation technique, calibration and statistical data relevant to each method are given. Concordant results are obtained by the different methods for Co, Cu, Ni, Pb and Zn. Agreement is poorer for Fe, Mn and V, and the results do not agree for Cd and Cr. Using data obtained on the dust sample distributed by Euratom-ISPRA within the framework of an interlaboratory comparison, the accuracy of each method for the various elements is estimated. Neutron activation analysis was found to be the most sensitive and accurate of the non-destructive analysis methods. Only atomic absorption spectrometry has a comparable sensitivity, but it requires considerable preparation work. X-ray fluorescence spectrometry is less sensitive and shows biases for Cr and V. Automatic emission spectrometry with simultaneous measurement of the beam intensities by photomultipliers is the fastest and most economical technique, though at the expense of some precision and sensitivity. (author)

  15. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

    Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, comprising the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS pipe and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the sensitivity of the seismic analysis results to several parameters is studied, such as the effect of the other SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus in the research and design of SGs for future new types of NPP. (authors)

  16. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and which allows one to derive variance estimates of the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
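
    The underlying sensitivity problem, the same poverty headcount computed under different equivalence scales, can be sketched as follows. The seven households and the 60%-of-median poverty line are invented; the OECD-modified and square-root scales are standard choices from the literature:

```python
import math
from statistics import median

# Hypothetical households: (disposable income, adults, children).
households = [(40000, 2, 0), (20000, 1, 0), (30000, 2, 3), (12000, 1, 1),
              (15000, 2, 2), (8000, 1, 0), (60000, 2, 1)]

def equivalised(income, adults, children, scale):
    if scale == "oecd_modified":   # 1 + 0.5 per extra adult + 0.3 per child
        return income / (1 + 0.5 * (adults - 1) + 0.3 * children)
    if scale == "sqrt":            # square root of household size
        return income / math.sqrt(adults + children)
    raise ValueError(scale)

def poverty_rate(households, scale):
    """Share of households below 60% of median equivalised income."""
    eq = [equivalised(i, a, c, scale) for i, a, c in households]
    line = 0.6 * median(eq)
    return sum(e < line for e in eq) / len(eq)

rate_oecd = poverty_rate(households, "oecd_modified")   # 1/7 here
rate_sqrt = poverty_rate(households, "sqrt")            # 2/7 here
```

Even on this tiny dataset the headcount changes with the scale, which is exactly the variability the paper's framework quantifies alongside sampling variance.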

  17. B1-sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and from -40% to >100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that they depend substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
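
    The mechanism studied here, the propagation of a B1 error through a VFA T1 fit, can be sketched with a two-point linearized VFA fit on a noiseless spoiled gradient-echo signal model. TR, flip angles and T1 below are illustrative values, not those of the paper:

```python
import math

def vfa_signal(t1, tr, alpha_deg, b1, m0=1.0):
    """Steady-state spoiled gradient-echo signal; the actual flip angle is
    the nominal angle scaled by the relative B1 field."""
    e1 = math.exp(-tr / t1)
    a = math.radians(alpha_deg) * b1
    return m0 * math.sin(a) * (1.0 - e1) / (1.0 - e1 * math.cos(a))

def vfa_fit_t1(s1, s2, tr, a1_deg, a2_deg):
    """Two-point linearized VFA fit that assumes the nominal flip angles
    were actually achieved (slope of the linearized model is exp(-TR/T1))."""
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    x1, y1 = s1 / math.tan(a1), s1 / math.sin(a1)
    x2, y2 = s2 / math.tan(a2), s2 / math.sin(a2)
    slope = (y2 - y1) / (x2 - x1)
    return -tr / math.log(slope)

tr, t1_true = 15.0, 900.0   # ms; illustrative brain-like values
# Perfect B1: the fit recovers T1 exactly (the linearization is exact).
t1_ok = vfa_fit_t1(vfa_signal(t1_true, tr, 3.0, 1.0),
                   vfa_signal(t1_true, tr, 18.0, 1.0), tr, 3.0, 18.0)
# A +20% B1 error biases the fitted T1 substantially.
t1_biased = vfa_fit_t1(vfa_signal(t1_true, tr, 3.0, 1.2),
                       vfa_signal(t1_true, tr, 18.0, 1.2), tr, 3.0, 18.0)
```

It is this T1 bias that partially cancels inside the qMT fit when VFA is used, which is why F ends up comparatively robust.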

  18. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs
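
    The sequential bifurcation idea can be sketched as follows: evaluate the joint effect of a whole group of inputs, discard the group if the effect is negligible, and bisect it otherwise. The toy model is hypothetical, and the sketch assumes effects within a group do not cancel (the usual same-sign assumption of sequential bifurcation):

```python
def model(x):
    # Hypothetical code: only inputs 2 and 9 matter (out of 12).
    return 5.0 * x[2] + 3.0 * x[9]

def group_effect(group, n_inputs):
    """Effect of switching a whole group of inputs from 0 to 1."""
    lo = [0.0] * n_inputs
    hi = [1.0 if i in group else 0.0 for i in range(n_inputs)]
    return model(hi) - model(lo)

def sequential_bifurcation(group, n_inputs, tol=1e-9):
    """Recursively bisect groups whose joint effect is non-negligible."""
    if abs(group_effect(group, n_inputs)) <= tol:
        return []                 # whole group inactive: discarded in one run
    if len(group) == 1:
        return list(group)
    mid = len(group) // 2
    return (sequential_bifurcation(group[:mid], n_inputs, tol)
            + sequential_bifurcation(group[mid:], n_inputs, tol))

important = sequential_bifurcation(list(range(12)), 12)   # -> [2, 9]
```

Applied to a functional input discretized into piecewise constant pieces, the same recursion localizes the influential parts of the functional domain.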

  19. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    Science.gov (United States)

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

    A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.
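
    The quantification step behind such a validated range can be sketched as a linear calibration fit followed by back-calculation of an unknown. The slope, intercept and responses below are synthetic, not the paper's data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibrators spanning a 0.02-12.5 ng/mg range, with a
# synthetic linear detector response (2000 area units per ng/mg, offset 15).
conc = [0.02, 0.1, 0.5, 2.5, 12.5]
resp = [c * 2000 + 15 for c in conc]
slope, intercept = fit_line(conc, resp)

# Back-calculate the concentration of an unknown from its response.
unknown = (5015 - intercept) / slope   # 2.5 ng/mg
```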

  20. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  1. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined to the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
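
    The likelihood ratio trick, obtaining a derivative from a single set of samples by weighting each observation with the score of the sampling density, can be sketched on an analytically solvable toy case (a single exponential waiting time, not a full KMC model):

```python
import random

def lr_sensitivity(theta, n=200_000, seed=3):
    """Likelihood-ratio estimate of d/d(theta) E[X] for X ~ Exp(theta).
    All gradient information comes from one sample set; no finite differences."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)
        score = 1.0 / theta - x   # d/d(theta) of log density theta*exp(-theta*x)
        acc += x * score
    return acc / n

grad = lr_sensitivity(2.0)   # exact value: d(1/theta)/d(theta) = -0.25
```

For a fast reaction (large theta) the score term becomes huge, which is exactly the noise blow-up the paper addresses by treating fast and slow manifolds separately.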

  2. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
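
    One simple form of such a sensitivity analysis, a delta-adjustment that shifts imputed values away from the MAR assumption to probe MNAR bias, can be sketched as follows. The outcome scores and missingness mechanism are simulated, and this is a deliberately simpler technique than the paper's posterior predictive checking:

```python
import random
import statistics

rng = random.Random(5)
complete = [rng.gauss(50.0, 10.0) for _ in range(5000)]   # true outcome scores

# MNAR mechanism (hypothetical): higher scores are more likely to be missing.
observed = [x for x in complete
            if rng.random() > min(0.9, max(0.1, (x - 30) / 40))]

def delta_adjusted_mean(observed, n_total, delta):
    """Impute every missing value as the observed mean shifted by delta;
    delta = 0 reproduces a MAR-style mean imputation."""
    n_miss = n_total - len(observed)
    imputed = statistics.mean(observed) + delta
    return (sum(observed) + n_miss * imputed) / n_total

mar_est = delta_adjusted_mean(observed, len(complete), 0.0)
shifted = delta_adjusted_mean(observed, len(complete), 10.0)  # MNAR scenario
true_mean = statistics.mean(complete)
```

Scanning delta and checking when conclusions flip gives a "worst reasonable case" bound in the spirit of point (1) above.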

  3. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBRs) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by the modelling approach for MBRs, which takes into account biological and physical processes simultaneously. © 2013.
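
    The quantity behind variance-based methods such as Extended-FAST is the first-order index S_i = Var(E[Y|X_i]) / Var(Y). It can be sketched with a brute-force double-loop estimator on a hypothetical two-input model whose exact indices are 0.2 and 0.8; FAST itself computes the same indices far more efficiently from frequency-tagged samples:

```python
import random

rng = random.Random(7)

def model(x1, x2):
    return x1 + 2.0 * x2   # exact indices: S1 = 0.2, S2 = 0.8 for U(0,1) inputs

def first_order_sobol(i, n_outer=500, n_inner=200):
    """Brute-force S_i = Var_{x_i}( E[Y | x_i] ) / Var(Y)."""
    cond_means = []
    for _ in range(n_outer):
        xi = rng.random()
        acc = 0.0
        for _ in range(n_inner):
            other = rng.random()
            args = (xi, other) if i == 0 else (other, xi)
            acc += model(*args)
        cond_means.append(acc / n_inner)
    m = sum(cond_means) / n_outer
    var_cond = sum((c - m) ** 2 for c in cond_means) / n_outer
    ys = [model(rng.random(), rng.random()) for _ in range(20000)]
    my = sum(ys) / len(ys)
    var_y = sum((y - my) ** 2 for y in ys) / len(ys)
    return var_cond / var_y

s1 = first_order_sobol(0)
s2 = first_order_sobol(1)
```

When the indices do not sum to one, the remainder is carried by interactions, the non-additive behaviour reported in the abstract.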

  4. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    International Nuclear Information System (INIS)

    Eno, L.; Rabitz, H.

    1981-01-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed

  5. Probabilistic sensitivity analysis of optimised preventive maintenance strategies for deteriorating infrastructure assets

    International Nuclear Information System (INIS)

    Daneshkhah, A.; Stocks, N.G.; Jeffrey, P.

    2017-01-01

    Efficient life-cycle management of civil infrastructure systems under continuous deterioration can be improved by studying the sensitivity of optimised preventive maintenance decisions with respect to changes in model parameters. Sensitivity analysis in maintenance optimisation problems is important because, if the calculation of the cost of preventive maintenance strategies is not sufficiently robust, the use of the maintenance model can generate optimised maintenance strategies that are not cost-effective. Probabilistic sensitivity analysis methods (particularly variance-based ones) only partially respond to this issue, and their use is limited to evaluating the extent to which uncertainty in each input contributes to the overall output's variance. These methods do not take account of the decision-making problem in a straightforward manner. To address this issue, we use the concept of the Expected Value of Perfect Information (EVPI) to perform decision-informed sensitivity analysis: to identify the key parameters of the problem and quantify the value of learning about certain aspects of the life-cycle management of civil infrastructure systems. This approach allows us to quantify the benefits of the maintenance strategies in terms of expected costs and in the light of accumulated information about the model parameters and aspects of the system, such as the ageing process. We use a Gamma process model to represent the uncertainty associated with asset deterioration, illustrating the use of EVPI to perform sensitivity analysis on the optimisation problem for age-based and condition-based preventive maintenance strategies. The evaluation of EVPI indices is computationally demanding, and Markov Chain Monte Carlo techniques would not be helpful. To overcome this computational difficulty, we approximate the EVPI indices using Gaussian process emulators. The implications of the worked numerical examples discussed in the context of analytical efficiency and organisational
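
    The EVPI computation itself can be sketched by plain Monte Carlo over the prior. The two-strategy cost model and the uniform prior on the deterioration rate below are invented; the paper uses a Gamma deterioration process and Gaussian process emulators precisely because this brute-force route becomes too expensive for realistic models:

```python
import random

rng = random.Random(11)

def cost(decision, theta):
    """Hypothetical life-cycle cost: maintain early (d=0) or late (d=1);
    theta is an uncertain deterioration rate."""
    if decision == 0:
        return 10.0 + theta        # early maintenance: fixed cost, mild exposure
    return 2.0 + 4.0 * theta       # late maintenance: cheap unless deterioration is fast

thetas = [rng.uniform(0.0, 6.0) for _ in range(100_000)]   # prior samples

# Expected cost of each fixed strategy under current uncertainty.
exp_costs = [sum(cost(d, t) for t in thetas) / len(thetas) for d in (0, 1)]
best_under_uncertainty = min(exp_costs)

# Expected cost if theta were known before deciding (perfect information).
with_perfect_info = sum(min(cost(0, t), cost(1, t)) for t in thetas) / len(thetas)

evpi = best_under_uncertainty - with_perfect_info   # value of resolving theta
```

For this toy model the exact EVPI is 16/9 ≈ 1.78; a large EVPI flags a parameter worth learning about before committing to a strategy.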

  6. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1990-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
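
    What systems like GRESS automate can be illustrated with forward-mode automatic differentiation, here via a minimal dual-number class. GRESS itself instruments FORTRAN source; this sketch only shows the chain-rule propagation idea that makes a model differentiate itself:

```python
class Dual:
    """Forward-mode automatic differentiation: carries (value, derivative)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # Hypothetical model response: y = 3*k^2 + 2*k
    return 3 * k * k + 2 * k

x = Dual(2.0, 1.0)   # seed derivative dk/dk = 1
y = model(x)         # y.val = 16, y.der = dy/dk = 6*k + 2 = 14
```

Running the unmodified model on `Dual` inputs yields the normalized first derivatives alongside the nominal result, which is conceptually what the instrumented FORTRAN codes do.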

  7. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab

  8. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Full Text Available Global sensitivity is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that a static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol’ indices are employed as the time-dependent global sensitivity measure since they provide accurate information on the selected uncertain inputs. In order to compute Sobol’ indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol’ indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method; this approach is then applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
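The "analytical postprocessing" step, reading Sobol' indices directly off PCE coefficients, can be sketched for a static toy model expanded in an orthonormal Hermite basis. With such a basis, the output variance is the sum of the squared non-constant coefficients, and each Sobol' index is the share contributed by the matching basis terms. The model and coefficients below are invented for illustration, not the paper's degeneracy model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x1, x2 = rng.standard_normal((2, n))

# Hypothetical model with two standard-normal inputs and an interaction.
y = 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2

# Orthonormal Hermite basis up to total degree 2.
basis = np.column_stack([
    np.ones(n),                 # He0
    x1,                         # He1(x1)
    x2,                         # He1(x2)
    x1 * x2,                    # He1(x1) * He1(x2)
    (x1**2 - 1) / np.sqrt(2),   # He2(x1), normalised
    (x2**2 - 1) / np.sqrt(2),   # He2(x2), normalised
])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# Variance decomposition directly from the coefficients.
var = np.sum(coef[1:] ** 2)
S1 = (coef[1] ** 2 + coef[4] ** 2) / var   # terms involving x1 only
S2 = (coef[2] ** 2 + coef[5] ** 2) / var   # terms involving x2 only
S12 = coef[3] ** 2 / var                   # interaction term
```

For this model the exact values are S1 = 4/5.25, S2 = 1/5.25 and S12 = 0.25/5.25; a time-dependent version would simply repeat this bookkeeping for the PCE coefficients at each time instant.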

  9. An approach of sensitivity and uncertainty analyses methods installation in a safety calculation

    International Nuclear Information System (INIS)

    Pepin, G.; Sallaberry, C.

    2003-01-01

    Simulation of the migration in deep geological formations leads to solving convection-diffusion equations in porous media, associated with the computation of hydrogeologic flow. Different time scales (simulation over 1 million years), scales of space, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through the use of uncertainty and sensitivity analysis. ANDRA (the French national agency for the management of radioactive wastes) carries out studies on the treatment of input data uncertainties and their propagation in safety models, in order to quantify the influence of the models' input data uncertainties on the various safety indicators selected. ANDRA's approach initially consists of two studies undertaken in parallel: - the first is an international review of the choices retained by ANDRA's foreign counterparts to carry out their uncertainty and sensitivity analyses; - the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA

  10. Sensitivity analysis for reactivity and power density investigations in nuclear reactors

    International Nuclear Information System (INIS)

    Naguib, K.; Morcos, H.N.; Sallam, O.H.; Abdelsamei, SH.

    1993-01-01

    Sensitivity analysis theory based on the variational functional approach was applied to evaluate sensitivities of eigenvalues and power densities due to variation of the absorber concentration in the reactor core. The practical usefulness of this method is illustrated by considering test cases. The results indicate that this method is as accurate as direct calculations, yet it provides an economical means of saving computational time since it requires fewer calculations. The SARC-1/2 code has been written in FORTRAN-77 to solve this problem. 3 tabs, 1 fig

  11. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
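The main effect (correlation ratio) named above is Var(E[Y|Xi]) / Var(Y). A crude but self-contained way to estimate it, using simple quantile binning rather than the replicated Latin hypercube design of the paper, looks like this (the three-input test model is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(-1, 1, size=(n, 3))

# Hypothetical nonlinear test model: x0 dominates, x1 enters quadratically.
y = x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def main_effect(xi, y, bins=50):
    """Estimate Var(E[Y|Xi]) / Var(Y) by binning Xi into quantile bins."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    cond_mean = np.bincount(idx, weights=y, minlength=bins) / counts
    return np.var(cond_mean[idx]) / np.var(y)

S = [main_effect(x[:, i], y) for i in range(3)]
```

For this additive model the three main effects sum to roughly one; the adaptive procedure in the paper addresses how many samples are needed before such estimates can be trusted.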

  12. Application of sensitivity analysis for assessment of de-desertification alternatives in the central Iran by using Triantaphyllou method.

    Science.gov (United States)

    Sadeghi Ravesh, Mohammad Hassan; Ahmadi, Hassan; Zehtabian, Gholamreza

    2011-08-01

    Desertification, land degradation in arid, semi-arid, and dry sub-humid regions, is a global environmental problem. Given the increasing importance of desertification and its complexity, attention to optimal de-desertification alternatives is essential. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select the optimal de-desertification alternatives based on the results of interviews with experts in the Khezr Abad region, central Iran, as the case study. This model was used in the Yazd Khezr Abad region to evaluate its efficiency in identifying better alternatives related to personal and environmental situations. The obtained results indicate that the criterion "proportion and adaptation to the environment", with a weighted average of 33.6%, is the most important criterion from the experts' viewpoint, while prevention of unsuitable land use and conversion, with a 22.88% mean weight, and vegetation cover development and reclamation, with a 21.9% mean weight, are recognized as the most important de-desertification alternatives in the region. Finally, sensitivity analysis is performed in detail by varying the objective factor decision weight, the priority weight of subjective factors, and the gain factors. After the sensitivity analysis and the determination of the most sensitive criteria and alternatives, the former classification and ranking of alternatives did not change substantially, and the unsuitable land use alternative, with a preference degree of 22.7%, remained in the first order of priority. The final priority of the livestock grazing control alternative was replaced with the alternative of modification of ground water harvesting.
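The AHP weights reported in studies like this come from pairwise comparison judgements on the Saaty 1-9 scale, with priorities extracted as the principal eigenvector of the judgement matrix and a consistency ratio checked before the weights are used. A sketch with an invented 3x3 judgement matrix (not the study's actual data):

```python
import numpy as np

# Hypothetical pairwise comparison matrix on the Saaty scale:
# criterion i vs criterion j; reciprocal entries below the diagonal.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights: principal eigenvector, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()

# Consistency ratio CR = CI / RI, with Saaty's random index RI(3) = 0.58;
# judgements are conventionally acceptable when CR < 0.1.
lam_max = vals.real[k]
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58
```

The sensitivity analysis the authors describe then amounts to perturbing entries of matrices like `A` (or the factor weights derived from them) and checking whether the ranking of alternatives changes.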

  13. A rapid and sensitive method for the simultaneous analysis of aliphatic and polar molecules containing free carboxyl groups in plant extracts by LC-MS/MS

    Directory of Open Access Journals (Sweden)

    Bonaventure Gustavo

    2009-11-01

    Full Text Available Abstract Background Aliphatic molecules containing free carboxyl groups are important intermediates in many metabolic and signalling reactions; however, they accumulate to low levels in tissues and are not efficiently ionized by electrospray ionization (ESI) compared to more polar substances. Quantification of aliphatic molecules therefore becomes difficult when small amounts of tissue are available for analysis. Traditional methods for analysis of these molecules require purification or enrichment steps, which are onerous when multiple samples need to be analyzed. In contrast to aliphatic molecules, more polar substances containing free carboxyl groups, such as some phytohormones, are efficiently ionized by ESI and suitable for analysis by LC-MS/MS. Thus, the development of a method with which aliphatic and polar molecules - whose unmodified forms differ dramatically in their efficiencies of ionization by ESI - can be simultaneously detected with similar sensitivities would substantially simplify the analysis of complex biological matrices. Results A simple, rapid, specific and sensitive method for the simultaneous detection and quantification of free aliphatic molecules (e.g., free fatty acids (FFA) and small polar molecules (e.g., jasmonic acid (JA), salicylic acid (SA)) containing free carboxyl groups by direct derivatization of leaf extracts with Picolinyl reagent followed by LC-MS/MS analysis is presented. The presence of the N atom in the esterified pyridine moiety allowed the efficient ionization of 25 compounds tested irrespective of their chemical structure. The method was validated by comparing the results obtained after analysis of Nicotiana attenuata leaf material with previously described analytical methods. Conclusion The method presented was used to detect 16 compounds in leaf extracts of N. attenuata plants. Importantly, the method can be adapted based on the specific analytes of interest with the only consideration that the

  14. Excitation methods for energy dispersive analysis

    International Nuclear Information System (INIS)

    Jaklevic, J.M.

    1976-01-01

    The rapid development in recent years of energy dispersive x-ray fluorescence analysis has been based primarily on improvements in semiconductor detector x-ray spectrometers. However, the whole analysis system performance is critically dependent on the availability of optimum methods of excitation for the characteristic x rays in specimens. A number of analysis facilities based on various methods of excitation have been developed over the past few years. A discussion is given of the features of various excitation methods including charged particles, monochromatic photons, and broad-energy band photons. The effects of the excitation method on background and sensitivity are discussed from both theoretical and experimental viewpoints. Recent developments such as pulsed excitation and polarized photons are also discussed

  15. Sensitization trajectories in childhood revealed by using a cluster analysis

    DEFF Research Database (Denmark)

    Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik

    2017-01-01

    BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent sensitization patterns.

  16. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis relative to the parameters and to the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by implementing the key parameters and theoretical modellings, which generated a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  17. Sensitivity and uncertainty analyses applied to criticality safety validation, methods development. Volume 1

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Childs, R.L.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the available S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently used by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The S/U methods that are presented in this volume are designed to provide a formal means of establishing the range (or area) of applicability for criticality safety data validation studies. The development of parameters that are analogous to the standard trending parameters forms the key to the technique. These parameters are the D parameters, which represent the differences by group of sensitivity profiles, and the ck parameters, which are the correlation coefficients for the calculational uncertainties between systems; each set of parameters gives information relative to the similarity between pairs of selected systems, e.g., a critical experiment and a specific real-world system (the application)

  18. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    Science.gov (United States)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  19. OPTIMIZATION OF THE TEMPERATURE CONTROL SCHEME FOR ROLLER COMPACTED CONCRETE DAMS BASED ON FINITE ELEMENT AND SENSITIVITY ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Huawei Zhou

    2016-10-01

    Full Text Available Achieving an effective combination of various temperature control measures is critical for temperature control and crack prevention of concrete dams. This paper presents a procedure for optimizing the temperature control scheme of roller compacted concrete (RCC dams that couples the finite element method (FEM with a sensitivity analysis method. In this study, seven temperature control schemes are defined according to variations in three temperature control measures: concrete placement temperature, water-pipe cooling time, and thermal insulation layer thickness. FEM is employed to simulate the equivalent temperature field and temperature stress field obtained under each of the seven designed temperature control schemes for a typical overflow dam monolith based on the actual characteristics of a RCC dam located in southwestern China. A sensitivity analysis is subsequently conducted to investigate the degree of influence each of the three temperature control measures has on the temperature field and temperature tensile stress field of the dam. Results show that the placement temperature has a substantial influence on the maximum temperature and tensile stress of the dam, and that the placement temperature cannot exceed 15 °C. The water-pipe cooling time and thermal insulation layer thickness have little influence on the maximum temperature, but both demonstrate a substantial influence on the maximum tensile stress of the dam. The thermal insulation thickness is significant for reducing the probability of cracking as a result of high thermal stress, and the maximum tensile stress can be controlled under the specification limit with a thermal insulation layer thickness of 10 cm. Finally, an optimized temperature control scheme for crack prevention is obtained based on the analysis results.

  20. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity due to variation of the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  1. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors-with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
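The kind of prior sensitivity check recommended above can be illustrated outside SEM with the simplest conjugate case: the posterior for a normal mean under increasingly informative "default" priors. A hypothetical sketch (the data and prior settings are invented, not from the article):

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented data: 20 observations from a normal model with known variance.
y = rng.normal(loc=2.0, scale=1.0, size=20)
n, ybar, sigma2 = len(y), y.mean(), 1.0

def posterior(mu0, tau2):
    """Conjugate posterior mean/variance for the prior N(mu0, tau2)."""
    var = 1.0 / (n / sigma2 + 1.0 / tau2)
    mean = var * (n * ybar / sigma2 + mu0 / tau2)
    return mean, var

# Three "default" priors of increasing informativeness.
priors = {"vague": (0.0, 100.0), "unit": (0.0, 1.0), "tight": (0.0, 0.01)}
post = {name: posterior(*p) for name, p in priors.items()}
```

Comparing the entries of `post` makes the article's point concrete: with only 20 observations, the tight prior pulls the posterior mean sharply toward zero, while the vague prior essentially reproduces the sample mean; a default prior is never innocuous at small sample sizes.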

  2. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
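The economy of the adjoint formulation that ADGEN automates can be seen in a minimal linear example: for a model A(p) x = b with response R = cᵀx, a single adjoint solve Aᵀλ = c yields the sensitivity of R to every parameter at once, via dR/dA_ij = -λ_i x_j. A sketch with a hypothetical 2x2 system (not GRESS output):

```python
import numpy as np

# Hypothetical linear model A x = b with scalar response R = c^T x.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

x = np.linalg.solve(A, b)        # one forward solve
lam = np.linalg.solve(A.T, c)    # one adjoint solve

# Sensitivities of R to every entry of A from the single adjoint solution.
dR_dA = -np.outer(lam, x)

# Verify one coefficient against a finite-difference perturbation.
h = 1e-6
Ap = A.copy()
Ap[0, 1] += h
fd = (c @ np.linalg.solve(Ap, b) - c @ x) / h
```

The contrast with the direct method is the point: a forward-difference check of all four entries of `A` would need four extra solves, whereas the adjoint solution prices them all simultaneously.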

  3. Global sensitivity analysis of Alkali-Surfactant-Polymer enhanced oil recovery processes

    Energy Technology Data Exchange (ETDEWEB)

    Carrero, Enrique; Queipo, Nestor V.; Pintos, Salvador; Zerpa, Luis E. [Applied Computing Institute, Faculty of Engineering, University of Zulia, Zulia (Venezuela)

    2007-08-15

    After conventional waterflooding processes the residual oil in the reservoir remains as a discontinuous phase in the form of oil drops trapped by capillary forces and is likely to be around 70% of the original oil in place (OOIP). The EOR method known as Alkaline-Surfactant-Polymer (ASP) flooding has been proved effective in reducing the residual oil saturation in laboratory experiments and field projects through reduction of interfacial tension and mobility ratio between oil and water phases. A critical step for the optimal design and control of ASP recovery processes is to find the relative contributions of design variables, such as slug size and chemical concentrations, to the variability of given performance measures (e.g., net present value, cumulative oil recovery), considering a heterogeneous and multiphase petroleum reservoir (sensitivity analysis). Previously reported works using reservoir numerical simulation have been limited to local sensitivity analyses because a global sensitivity analysis may require hundreds or even thousands of computationally expensive evaluations (field-scale numerical simulations). To overcome this issue, a surrogate-based approach is suggested. Surrogate-based analysis/optimization refers to the idea of constructing an alternative fast model (surrogate) from numerical simulation data and using it for analysis/optimization purposes. This paper presents an efficient global sensitivity approach based on Sobol's method and multiple surrogates (i.e., Polynomial Regression, Kriging, Radial Basis Functions and a Weighted Adaptive Model), with the multiple surrogates used to address the uncertainty in the analysis derived from plausible alternative surrogate-modeling schemes. The proposed approach was evaluated in the context of the global sensitivity analysis of a field scale Alkali-Surfactant-Polymer flooding process. The design variables and the performance measure in the ASP process were selected as slug size

  4. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
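The stability notion in the framework above can be sketched for the smallest possible tree: check whether the expected-value-maximizing action survives perturbations of a probability within a given radius. The payoffs and radii below are invented for illustration, not taken from the paper's software tool:

```python
import numpy as np

# Minimal two-action decision tree (hypothetical payoffs):
# action "A": payoff 100 with probability p, else 0; action "B": certain 60.
def expected_value(action, p):
    return 100.0 * p if action == "A" else 60.0

p0 = 0.7  # baseline probability estimate
best = max(("A", "B"), key=lambda a: expected_value(a, p0))

def stable(radius):
    """Does the baseline-optimal action stay optimal for every p
    within `radius` of the baseline estimate?"""
    grid = np.clip(np.linspace(p0 - radius, p0 + radius, 101), 0.0, 1.0)
    return all(
        max(("A", "B"), key=lambda a: expected_value(a, p)) == best
        for p in grid
    )
```

Here the strategy "A" is optimal at the baseline and survives small perturbations, but a pessimistic shift of p by 0.2 flips the decision, which is exactly the kind of distributional-uncertainty diagnostic the framework formalizes.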

  5. Remote Sensing of Seagrass Leaf Area Index and Species: The Capability of a Model Inversion Method Assessed by Sensitivity Analysis and Hyperspectral Data of Florida Bay

    Directory of Open Access Journals (Sweden)

    John D. Hedley

    2017-11-01

    Full Text Available The capability for mapping two species of seagrass, Thalassia testudinium and Syringodium filiforme, by remote sensing using a physics-based model inversion method was investigated. The model was based on a three-dimensional canopy model combined with a model for the overlying water column. The model included uncertainty propagation based on variation in leaf reflectances, canopy structure, water column properties, and the air-water interface. The uncertainty propagation enabled both a priori predictive sensitivity analysis of potential capability and the generation of per-pixel error bars when applied to imagery. A primary aim of the work was to compare the sensitivity analysis to results achieved in a practical application using airborne hyperspectral data, to gain insight on the validity of sensitivity analyses in general. Results showed that while the sensitivity analysis predicted a weak but positive discrimination capability for species, in a practical application the relevant spectral differences were extremely small compared to discrepancies in the radiometric alignment of the model with the imagery, even though this alignment was very good. Complex interactions between spectral matching and uncertainty propagation also introduced biases. Ability to discriminate LAI was good, and comparable to previously published methods using different approaches. The main limitation in this respect was spatial alignment of the imagery with in situ data, which were heterogeneous on scales of a few meters. The results provide insight on the limitations of physics-based inversion methods and seagrass mapping in general. Complex models can degrade unpredictably when radiometric alignment of the model and imagery is not perfect, and incorporating uncertainties can have non-intuitive impacts on method performance.
Sensitivity analyses are upper bounds to practical capability, incorporating a term for potential systematic errors in radiometric alignment may

  6. Technique for sensitivity analysis of space- and energy-dependent burn-up calculations

    International Nuclear Information System (INIS)

    Williams, M.L.; White, J.R.

    1979-01-01

    A practical method is presented for sensitivity analysis of the very complex, space- and energy-dependent burn-up equations, in which the neutron and nuclide fields are coupled nonlinearly. The adjoint burn-up equations given here are in a form which can be directly implemented into multi-dimensional depletion codes, such as VENTURE/BURNER. The data sensitivity coefficients can be used to determine the effect of data uncertainties on time-dependent depletion responses. Initial-condition sensitivity coefficients provide a very effective method for computing the change in end-of-cycle parameters (such as k_eff, fissile inventory, etc.) due to changes in nuclide concentrations at beginning of cycle.

  7. Procedures for uncertainty and sensitivity analysis in repository performance assessment

    International Nuclear Information System (INIS)

    Poern, K.; Aakerlund, O.

    1985-10-01

    The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models, such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of functions, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In the first, the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the second, the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
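The second basic approach described, replacing the model with an approximating response surface and then sampling the surrogate, can be sketched as follows. The "expensive" model here is an invented stand-in, not PROPER or any repository model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical expensive model: one output, two uncertain inputs.
def model(x1, x2):
    return np.exp(0.3 * x1) + 0.5 * x2 ** 2

# Step 1: a small design of experiments (the costly model runs).
X = rng.uniform(-1, 1, size=(30, 2))
y = model(X[:, 0], X[:, 1])

# Step 2: a quadratic response surface fitted by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Step 3: cheap Monte Carlo on the surrogate for the output distribution.
Xmc = rng.uniform(-1, 1, size=(100_000, 2))
y_surrogate = features(Xmc) @ beta
```

Thirty model runs buy a surrogate that can then be sampled a hundred thousand times; the trade-off, as the report notes, is that the response surface must actually approximate the model over the input ranges of interest.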

  8. Emissivity compensated spectral pyrometry—algorithm and sensitivity analysis

    International Nuclear Information System (INIS)

    Hagqvist, Petter; Sikström, Fredrik; Christiansson, Anna-Karin; Lennartson, Bengt

    2014-01-01

    In order to solve the problem of non-contact temperature measurements on an object with varying emissivity, a new method is herein described and evaluated. The method uses spectral radiance measurements and converts them to temperature readings. It proves to be resilient towards changes in spectral emissivity and tolerates noisy spectral measurements. It is based on an assumption of smooth changes in emissivity and uses historical values of spectral emissivity and temperature for estimating current spectral emissivity. The algorithm, its constituent steps and accompanying parameters are described and discussed. A thorough sensitivity analysis of the method is carried out through simulations. No rigorous instrument calibration is needed for the presented method and it is therefore industrially tractable. (paper)
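The paper's algorithm tracks historical spectral emissivity estimates; a much simpler relative, shown here only to make the radiance-to-temperature conversion concrete, is ratio (two-color) pyrometry, where a wavelength-independent emissivity cancels under Wien's approximation. The wavelengths, temperature and emissivity below are invented:

```python
import numpy as np

C2 = 1.4388e-2  # m K, second radiation constant

def wien_radiance(lam, T, eps):
    """Spectral radiance up to a constant factor, Wien approximation."""
    return eps * lam ** -5 * np.exp(-C2 / (lam * T))

def two_color_temperature(L1, L2, lam1, lam2):
    """Invert the radiance ratio for T, assuming a graybody
    (equal emissivity at both wavelengths)."""
    lhs = np.log(L1 / L2) - 5 * np.log(lam2 / lam1)
    return -C2 * (1 / lam1 - 1 / lam2) / lhs

T_true, eps = 1500.0, 0.4           # emissivity cancels in the ratio
lam1, lam2 = 0.9e-6, 1.05e-6
L1 = wien_radiance(lam1, T_true, eps)
L2 = wien_radiance(lam2, T_true, eps)
T_est = two_color_temperature(L1, L2, lam1, lam2)
```

The graybody assumption is exactly what fails when spectral emissivity varies with wavelength and time, which is the failure mode the paper's history-based estimation of spectral emissivity is designed to handle.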

  9. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)

  10. Total sensitivity and uncertainty analysis for LWR pin-cells with improved UNICORN code

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Shen, Wei

    2017-01-01

    Highlights: • A new model is established for the total sensitivity and uncertainty analysis. • The NR approximation applied in S&U analysis can be avoided by the new model. • Sensitivity and uncertainty analysis is performed for PWR pin-cells with the new model. • The effects of the NR approximation for the PWR pin-cells are quantified. - Abstract: In this paper, improvements to the multigroup cross-section perturbation model have been proposed and applied in the self-developed UNICORN code, which is capable of performing total sensitivity and total uncertainty analysis for neutron-physics calculations by applying the direct numerical perturbation method and the statistical sampling method, respectively. The narrow resonance (NR) approximation was applied in the multigroup cross-section perturbation model implemented in UNICORN. To improve on the NR approximation and refine the multigroup cross-section perturbation model, an ultrafine-group cross-section perturbation model has been established, in which the actual perturbations are applied to the ultrafine-group cross-section library and the resonance cross sections are reconstructed by solving the neutron slowing-down equation. The total sensitivity and total uncertainty analyses were then applied to the LWR pin-cells, using both the multigroup and the ultrafine-group cross-section perturbation models. The numerical results show that the NR approximation overestimates the relative sensitivity coefficients and the corresponding uncertainty results for the LWR pin-cells, and that its effects are significant for σ(n,γ) and σ(n,elas) of ²³⁸U. Therefore, the effects of the NR approximation should be taken into account in total sensitivity and total uncertainty analysis of LWR neutron-physics calculations.

  11. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. To identify the influential distribution parameters and clarify how they influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol's variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the variance contributions of various orders into expectations via a kernel function, the proposed main and total sensitivity indices can be obtained as a "by-product" of Sobol's variance-based sensitivity analysis without any additional output evaluations. Since Sobol's variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.

  12. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constrained damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.

  13. Probability and sensitivity analysis of machine foundation and soil interaction

    Directory of Open Access Journals (Sweden)

    Králik J., jr.

    2009-06-01

    Full Text Available This paper deals with the sensitivity and probabilistic analysis of the reliability of a machine foundation depending on the variability of the soil stiffness, the structure geometry and the compressor operation. The requirements for the design of foundations under rotating machines have increased with the development of calculation methods and computer tools. During the structural design process, an engineer has to consider the problems of soil-foundation and foundation-machine interaction from the point of view of the safety, reliability and durability of the structure. The advantages and disadvantages of deterministic and probabilistic analyses of machine foundation resistance are discussed. The sensitivity of the machine foundation to the uncertainties of the soil properties due to the long-term rotating movement of the machine is not negligible for design engineers. The effectiveness of the probabilistic design methodology is demonstrated on the example of a compressor and turbine foundation by SIEMENS AG. The Latin Hypercube Sampling (LHS) simulation method was used for the analysis of the compressor foundation reliability within the ANSYS program. The 200 simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more comprehensive information about the soil-foundation-machine interaction than the deterministic analysis.
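
The Latin Hypercube Sampling scheme mentioned above can be sketched in a few lines: each input's range is split into as many equal-probability strata as there are simulations, and exactly one point is drawn per stratum. This is a generic LHS sketch on the unit cube, not the ANSYS implementation; the 200 samples and 5 inputs mirror the study's scale only loosely.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin Hypercube Sample on the unit cube: each variable's
    range is split into n_samples equal strata, one point per stratum,
    with strata shuffled independently per dimension."""
    u = rng.uniform(size=(n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_dims):
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

rng = np.random.default_rng(1)
X = latin_hypercube(200, 5, rng)   # e.g. 200 simulations over 5 uncertain inputs
print(X.shape)
```

Mapping each column through the inverse CDF of the corresponding input distribution (soil stiffness, geometry, load) then yields the simulation inputs.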

  14. Eigenvalue sensitivity analysis and uncertainty quantification in SCALE6.2.1 using continuous-energy Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Labarile, A.; Barrachina, T.; Miró, R.; Verdú, G., E-mail: alabarile@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es [Institute for Industrial, Radiophysical and Environmental Safety - ISIRYM, Valencia (Spain); Pereira, C., E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    The use of Best-Estimate computer codes is one of the greatest concerns in the nuclear industry, especially for licensing analysis. Of paramount importance is the estimation of the uncertainties of the whole system in order to establish safety margins based on highly reliable results. These uncertainties should be estimated by applying a methodology that propagates the uncertainties in the input parameters and the models implemented in the code to the output parameters. This study employs two different approaches for Sensitivity Analysis (SA) and Uncertainty Quantification (UQ): the adjoint-based perturbation theory of TSUNAMI-3D and the stochastic sampling technique of SAMPLER/KENO. The cases studied are two models of Light Water Reactors in the framework of the OECD/NEA UAM-LWR benchmark, a Boiling Water Reactor (BWR) and a Pressurized Water Reactor (PWR), both at Hot Full Power (HFP) and Hot Zero Power (HZP) conditions, with and without control rods. This work presents the k{sub eff} results from the different simulations and compares the two methods employed. In particular, it discusses a list of the major contributors to the uncertainty of k{sub eff} in terms of microscopic cross sections; their sensitivity coefficients; a comparison between the results of the two modules and with reference values; statistical information from the stochastic approach; and the probability and statistical confidence reached in the simulations. (author)

  15. A sensitive, reproducible and objective immunofluorescence analysis method of dystrophin in individual fibers in samples from patients with duchenne muscular dystrophy.

    Directory of Open Access Journals (Sweden)

    Chantal Beekman

    Full Text Available Duchenne muscular dystrophy (DMD is characterized by the absence or reduced levels of dystrophin expression on the inner surface of the sarcolemmal membrane of muscle fibers. Clinical development of therapeutic approaches aiming to increase dystrophin levels requires sensitive and reproducible measurement of differences in dystrophin expression in muscle biopsies of treated patients with DMD. This, however, poses a technical challenge due to intra- and inter-donor variance in the occurrence of revertant fibers and low trace dystrophin expression throughout the biopsies. We have developed an immunofluorescence and semi-automated image analysis method that measures the sarcolemmal dystrophin intensity per individual fiber for the entire fiber population in a muscle biopsy. Cross-sections of muscle co-stained for dystrophin and spectrin have been imaged by confocal microscopy, and image analysis was performed using Definiens software. Dystrophin intensity has been measured in the sarcolemmal mask of spectrin for each individual muscle fiber and multiple membrane intensity parameters (mean, maximum, quantiles per fiber were calculated. A histogram can depict the distribution of dystrophin intensities for the fiber population in the biopsy. This method was tested by measuring dystrophin in DMD, Becker muscular dystrophy, and healthy muscle samples. Analysis of duplicate or quadruplicate sections of DMD biopsies on the same or multiple days, by different operators, or using different antibodies, was shown to be objective and reproducible (inter-assay precision, CV 2-17% and intra-assay precision, CV 2-10%. Moreover, the method was sufficiently sensitive to detect consistently small differences in dystrophin between two biopsies from a patient with DMD before and after treatment with an investigational compound.
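
The per-fiber intensity measurement at the core of the method above reduces to averaging a marker channel over each labeled membrane region. The sketch below is a generic numpy version of that step on a toy image, not the Definiens pipeline; the label values and intensities are invented for illustration.

```python
import numpy as np

def per_fiber_intensity(labels, intensity):
    """Mean marker intensity per labeled fiber region.
    labels: int array, 0 = background, 1..n = per-fiber membrane masks
    (e.g. a spectrin-derived sarcolemmal mask); intensity: the
    dystrophin channel, same shape."""
    n = labels.max()
    sums = np.bincount(labels.ravel(), weights=intensity.ravel(), minlength=n + 1)
    counts = np.bincount(labels.ravel(), minlength=n + 1)
    means = sums / np.maximum(counts, 1)
    return means[1:]          # drop background (label 0)

# Toy 4x4 image with two "fibers"
labels = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 2],
                   [0, 0, 0, 2],
                   [0, 0, 0, 0]])
signal = np.array([[10., 12., 0., 40.],
                   [ 8., 10., 0., 44.],
                   [ 0.,  0., 0., 42.],
                   [ 0.,  0., 0., 0.]])
print(per_fiber_intensity(labels, signal))   # fiber means: [10., 42.]
```

A histogram of the returned per-fiber values is then the population-level readout the abstract describes.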

  16. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices
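
The post-processing step described above (Sobol' indices computed analytically from PCE coefficients) follows from the orthonormality of the basis: the variance is the sum of squared non-constant coefficients, and each index is a partial sum over multi-indices. The sketch below uses a tiny hand-set expansion; the multi-indices and coefficients are illustrative, not from the paper.

```python
import numpy as np

def sobol_from_pce(multi_indices, coeffs):
    """First-order and total Sobol' indices from an orthonormal PCE.
    multi_indices: (n_terms, d) array of polynomial degrees per input;
    coeffs: (n_terms,) PCE coefficients (orthonormal basis assumed)."""
    alpha = np.asarray(multi_indices)
    c2 = np.asarray(coeffs, dtype=float) ** 2
    total_var = c2[alpha.sum(axis=1) > 0].sum()   # constant term carries no variance
    d = alpha.shape[1]
    first = np.empty(d)
    total = np.empty(d)
    for i in range(d):
        # first-order: terms involving only input i; total: any term involving i
        only_i = (alpha[:, i] > 0) & (np.delete(alpha, i, axis=1).sum(axis=1) == 0)
        first[i] = c2[only_i].sum() / total_var
        total[i] = c2[alpha[:, i] > 0].sum() / total_var
    return first, total

# Toy 2-variable expansion: constant + degree-1 in x1 + degree-2 in x2 + interaction
alpha = [[0, 0], [1, 0], [0, 2], [1, 1]]
c = [1.0, 0.5, 0.3, 0.1]
S1, ST = sobol_from_pce(alpha, c)
print(np.round(S1, 3), np.round(ST, 3))
```

Because this is pure bookkeeping over the coefficients, the sensitivity analysis is essentially free once the PCE has been fitted, which is the point of the paper.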

  17. Global sensitivity analysis using polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Sudret, Bruno [Electricite de France, R and D Division, Site des Renardieres, F 77818 Moret-sur-Loing Cedex (France)], E-mail: bruno.sudret@edf.fr

    2008-07-15

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.

  18. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis in order to evaluate the model's sensitivity to each of them, determine how many of these parameters should be considered further, and rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  19. Perturbative methods applied to sensitivity coefficient calculations in thermal-hydraulic systems

    International Nuclear Information System (INIS)

    Andrade Lima, F.R. de

    1993-01-01

    The differential formalism and the Generalized Perturbation Theory (GPT) are applied to the sensitivity analysis of thermal-hydraulics problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactor cores, as used in the COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficients of this response with respect to various selected parameters are obtained by using Differential and Generalized Perturbation Theory. The comparison between the results obtained with these perturbative methods and those obtained directly with the model developed in the COBRA-IV-I code shows very good agreement. (author)

  20. SENSITIVITY ANALYSIS OF BUILDING STRUCTURES WITHIN THE SCOPE OF ENERGY, ENVIRONMENT AND INVESTMENT

    Directory of Open Access Journals (Sweden)

    František Kulhánek

    2015-10-01

    Full Text Available The primary objective of this paper is to demonstrate the feasibility of sensitivity analysis with the dominant weight method for structural parts of the building envelope, covering energy, ecological and financial assessments, and to determine the best among different designs for the same structural part via multi-criteria assessment, illustrated with theoretical example designs. Multi-criteria assessment (MCA) of different structural designs, i.e. alternatives, aims to find the best available alternative. The sensitivity analysis technique applied in this paper is based on the dominant weighting method. To choose the best thermal insulation design when several criteria apply simultaneously, the criteria of total thickness (T), heat transfer coefficient (U) through the cross section, global warming potential (GWP), acidification potential (AP), non-renewable primary energy content (PEI) and cost per m² (C) are investigated for all designs via sensitivity analysis. Three different designs for an external wall (over soil), each meeting the globally suggested energy targets for passive house design, are investigated against these six criteria. Sensitivity analysis is carried out over a given set of scenarios that vary the importance assigned to each criterion. In conclusion, uncertainty in the model output is attributed to the different sources in the model input, and in this manner the best available design is determined. The rankings before and after the sensitivity analysis are visualized, which makes it easy to choose the optimum design within the scope of the verified components.

  1. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  2. Sensitive spectrophotometric methods for determination of some organophosphorus pesticides in vegetable samples

    Directory of Open Access Journals (Sweden)

    MAGDA A. AKL

    2010-03-01

    Full Text Available Three rapid, simple, reproducible and sensitive spectrophotometric methods (A, B and C) are described for the determination of two organophosphorus pesticides (malathion and dimethoate) in formulations and vegetable samples. Methods A and B involve the addition of an excess of Ce4+ in sulphuric acid medium and the determination of the unreacted oxidant via the decrease in the red color of chromotrope 2R (C2R) at λmax = 528 nm for method A, or the decrease in the orange-pink color of rhodamine 6G (Rh6G) at λmax = 525 nm for method B. Method C is based on the oxidation of malathion or dimethoate with a slight excess of N-bromosuccinimide (NBS) and the determination of the unreacted oxidant by reacting it with amaranth dye (AM) in hydrochloric acid medium at λmax = 520 nm. A regression analysis of Beer-Lambert plots showed good correlation in the concentration range of 0.1-4.2 μg mL−1. The apparent molar absorptivity, Sandell sensitivity, and the detection and quantification limits were calculated. For more accurate analysis, the Ringbom optimum concentration ranges are 0.25-4.0 μg mL−1. The developed methods were successfully applied to the determination of malathion and dimethoate in their formulations and in environmental vegetable samples.
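
The calibration workflow behind such methods (a linear Beer-Lambert fit, then detection and quantification limits from the residual scatter and slope) can be sketched as follows. The absorbance values below are invented for illustration, and the 3.3σ/slope and 10σ/slope formulas are the common ICH-style conventions, not figures from this abstract.

```python
import numpy as np

# Hypothetical calibration data: absorbance vs pesticide concentration
# (ug/mL) over the linear Beer-Lambert range reported above.
conc = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 4.2])
absorbance = np.array([0.012, 0.061, 0.118, 0.242, 0.355, 0.501])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # residual standard deviation of the fit

# ICH-style detection and quantification limits from the calibration line
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(round(slope, 4), round(lod, 3), round(loq, 3))
```

The slope of the fit, multiplied by the analyte's molar mass, gives the apparent molar absorptivity that abstracts like this one report.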

  3. A Sensitivity Study for an Evaluation of Input Parameters Effect on a Preliminary Probabilistic Tsunami Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, Hyun-Me; Kim, Min Kyu; Choi, In-Kil [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Sheen, Dong-Hoon [Chonnam National University, Gwangju (Korea, Republic of)

    2014-10-15

    Tsunami hazard analysis has been based on seismic hazard analysis, which can be performed using either a deterministic or a probabilistic method. To account for the uncertainties in the hazard analysis, the probabilistic method is regarded as the more attractive approach; the various parameters and their weights are treated using the logic tree approach. Because many parameters are used in the hazard analysis, their uncertainties should be characterized through sensitivity analysis. To apply probabilistic tsunami hazard analysis, a preliminary study for the Ulchin NPP site had been performed, using the information on fault sources published by the Atomic Energy Society of Japan (AESJ). The tsunami propagation was simulated using TSUNAMI{sub 1}.0, developed by the Japan Nuclear Energy Safety Organization (JNES), and the wave parameters were estimated from the results of the tsunami simulation. In this study, a sensitivity analysis for the fault sources selected in the previous studies has been performed. To analyze the effect of the parameters, the sensitivity analysis was performed for the E3 fault source published by AESJ, evaluating the effects of the recurrence interval, the potential maximum magnitude, and the beta value. The level of annual exceedance probability is affected by the recurrence interval, while wave heights are influenced by the potential maximum magnitude and the beta value. In the future, a sensitivity analysis for all fault sources in the western part of Japan published by AESJ will be performed.

  4. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  5. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, the sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
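
The emulation idea above (train a Gaussian process on a handful of expensive simulations, then run the cheap Monte Carlo analysis on the emulator) can be sketched with a minimal GP regressor. Everything here is an assumption for illustration: the analytic stand-in "model", the fixed RBF length scale, and the sample sizes; the paper's emulator and vascular models are far richer.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPEmulator:
    """Minimal Gaussian process regressor with fixed hyperparameters,
    standing in for a full emulator."""
    def fit(self, X, y, noise=1e-6):
        self.X = X
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)    # K^{-1} y, the GP weights
        return self
    def predict(self, Xs):
        return rbf_kernel(Xs, self.X) @ self.alpha

def model(x):  # cheap analytic stand-in for an expensive 1D vascular simulation
    return np.sin(2 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, (40, 2))     # few "expensive" runs, O(d) scale
gp = GPEmulator().fit(X_train, model(X_train))

X_mc = rng.uniform(-1, 1, (20_000, 2))    # cheap Monte Carlo on the emulator
y_mc = gp.predict(X_mc)
print(round(float(np.var(y_mc)), 3))
```

Variance-based sensitivity indices are then estimated from emulator predictions rather than mechanistic runs, which is where the reported 99.96% time saving comes from.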

  6. The surface analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Deville, J.P. [Institut de Physique et Chimie, 67 - Strasbourg (France)

    1998-11-01

    Nowadays there are many surface analysis methods, each having its own specificity, qualities, constraints (for instance, vacuum) and limits. Expensive in time and investment, these methods have to be used deliberately. This article is aimed at non-specialists. It offers guidance for choosing a method according to the information sought, the sensitivity required, the constraints of use, or the ability to answer a precise question. After recalling the fundamental principles that govern these analysis methods, based on the interaction of radiation (ultraviolet, X-rays) or particles (ions, electrons) with matter, two methods are described in particular: Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples submitted to treatments in aggressive media. (O.M.) 11 refs.

  7. High sensitivity neutron activation analysis of environmental and biological standard reference materials

    International Nuclear Information System (INIS)

    Greenberg, R.R.; Fleming, R.F.; Zeisler, R.

    1984-01-01

    Neutron activation analysis is a sensitive method with unique capabilities for the analysis of environmental and biological samples. Since it is based upon the nuclear properties of the elements, it does not suffer from many of the chemical effects that plague other methods of analysis. Analyses can be performed either with no chemical treatment of the sample (instrumentally), or with separations of the elements of interest after neutron irradiation (radiochemically). Typical examples of both types of analysis are discussed, and data obtained for a number of environmental and biological SRMs are presented. (author)
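
The sensitivity of neutron activation analysis rests on the standard activation equation, A = Nσφ(1 − e^(−λt)). The sketch below evaluates it for wholly illustrative numbers (the element, cross section, flux and half-life are assumptions, not values from this abstract).

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s):
    """Induced activity after irradiation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr))  [decays/s]."""
    lam = math.log(2) / half_life_s
    return n_atoms * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr_s))

# Illustrative (assumed) numbers: 1 ug of an element with molar mass
# 56 g/mol, a 2 barn capture cross section, a thermal flux of
# 1e13 n/cm^2/s, a 2.6 h product half-life, and 1 h of irradiation.
N_A = 6.02214076e23
n = 1e-6 / 56.0 * N_A
A = induced_activity(n, 2e-24, 1e13, 2.6 * 3600.0, 3600.0)
print(f"{A:.3e} Bq")
```

Tens of kilobecquerels from a microgram of analyte is what makes the method sensitive enough for trace elements in environmental and biological matrices.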

  8. Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method

    Directory of Open Access Journals (Sweden)

    Muqing Du

    2015-01-01

    Full Text Available The throughput of a given transportation network is always of interest to the traffic administration, in order to evaluate the benefit of a transportation construction or expansion project before its implementation. The model of transportation network capacity, formulated as a mathematical program with equilibrium constraints (MPEC), well defines this problem. For practical applications, a modified sensitivity analysis based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model that is integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of SAB is constrained to be feasible to guarantee the success of the heuristic search. The numerical experiments show that the accuracy of the derivatives used for the linear approximation can significantly affect the convergence of the SAB method. The results also show that the proposed method can obtain good suboptimal solutions from different starting points in the test examples.

  9. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE

  10. The importance of input interactions in the uncertainty and sensitivity analysis of nuclear fuel behavior

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, T., E-mail: timo.ikonen@vtt.fi; Tulkki, V.

    2014-08-15

    Highlights: • Uncertainty and sensitivity analysis of modeled nuclear fuel behavior is performed. • Burnup dependency of the uncertainties and sensitivities is characterized. • Input interactions significantly increase output uncertainties for irradiated fuel. • Identification of uncertainty sources is greatly improved with higher order methods. • Results stress the importance of using methods that take interactions into account. - Abstract: The propagation of uncertainties in a PWR fuel rod under steady-state irradiation is analyzed by computational means. A hypothetical steady-state scenario of the Three Mile Island 1 reactor fuel rod is modeled with the fuel performance code FRAPCON, using realistic input uncertainties for the fabrication and model parameters, boundary conditions and material properties. The uncertainty and sensitivity analysis is performed by extensive Monte Carlo sampling of the inputs’ probability distributions and by applying correlation coefficient and Sobol’ variance decomposition analyses. The latter includes evaluation of the second order and total effect sensitivity indices, allowing the study of interactions between input variables. The results show that the interactions play a large role in the propagation of uncertainties, and first order methods such as the correlation coefficient analyses are in general insufficient for sensitivity analysis of the fuel rod. Significant improvement over the first order methods can be achieved by using higher order methods. The results also show that both the magnitude of the uncertainties and their propagation depends not only on the output in question, but also on burnup. The latter is due to onset of new phenomena (such as the fission gas release) and the gradual closure of the pellet-cladding gap with increasing burnup. Increasing burnup also affects the importance of input interactions. Interaction effects are typically highest in the moderate burnup (of the order of 10–40 MWd
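
The central claim above, that first-order correlation measures can miss variance driven by input interactions, is easy to demonstrate on a toy case (a deliberately extreme example, not the FRAPCON model): for y = x1·x2 with independent zero-mean inputs, each input is uncorrelated with the output, yet together the two inputs explain all of its variance.

```python
import numpy as np

# y = x1 * x2 with independent symmetric inputs: correlation-based
# (first-order) sensitivity measures see nothing, while interaction
# terms in a Sobol' decomposition carry the entire variance.
rng = np.random.default_rng(7)
x1 = rng.uniform(-1, 1, 100_000)
x2 = rng.uniform(-1, 1, 100_000)
y = x1 * x2

r1 = np.corrcoef(x1, y)[0, 1]     # ~0: x1 looks unimportant
r2 = np.corrcoef(x2, y)[0, 1]     # ~0: x2 looks unimportant
print(round(r1, 3), round(r2, 3), round(float(np.var(y)), 3))
```

The output variance is nonzero (analytically 1/9 for these uniforms) even though both correlations vanish, which is exactly why higher-order indices such as Sobol's second-order and total effects are needed.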

  11. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Directory of Open Access Journals (Sweden)

    L.-P. Wang

    2015-09-01

    Full Text Available. Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2; Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system

  12. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Science.gov (United States)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system
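The local singularity analysis can be sketched roughly as estimating, at each point, how local averages scale with window size: a smooth field gives a flat log-log slope, while an isolated extreme gives a strongly negative one. This is a simplified reading of the idea, not the authors' algorithm; the exponent convention and window scheme below are my assumptions.

```python
import numpy as np

def singularity_exponents(x, scales=(1, 2, 4, 8)):
    """Estimate a pointwise singularity exponent from the log-log slope of
    local window averages versus window radius (a crude sketch of the idea)."""
    ls = np.log(scales)
    alpha = np.empty(len(x))
    for i in range(len(x)):
        m = [np.mean(x[max(0, i - s):i + s + 1]) for s in scales]
        slope = np.polyfit(ls, np.log(np.maximum(m, 1e-12)), 1)[0]
        alpha[i] = slope + 1.0   # smooth field -> 1, isolated spike -> near 0
    return alpha

rain = np.ones(64)
rain[32] = 100.0                 # an isolated extreme, i.e. a "singularity"
a = singularity_exponents(rain)
print(round(a[10], 2), round(a[32], 2))
```

A singularity-sensitive adjustment would preserve the low-exponent points (the local extremes) that a Gaussian smoothing step would otherwise flatten out.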

  13. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    Science.gov (United States)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm the low sensitivity to parameter variability, making the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
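The SCS-CN half of the procedure computes total rainfall excess in closed form; a minimal metric-units sketch follows (the CN4GA calibration of the Green-Ampt conductivity is not reproduced here, and the storm depth and curve number are illustrative values).

```python
def scs_cn_excess(p_mm, cn):
    """Total rainfall excess (mm) from the SCS-CN method, metric units."""
    s = 25400.0 / cn - 254.0     # potential maximum retention (mm)
    ia = 0.2 * s                 # initial abstraction (standard 0.2*S)
    if p_mm <= ia:
        return 0.0               # all rainfall abstracted, no excess
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_excess(100.0, 80.0)   # 100 mm storm on CN = 80 soil
print(round(q, 1))
```

In CN4GA, the excess computed this way would then constrain the Green-Ampt hydraulic conductivity so that GA infiltration reproduces the same storm-total mass balance, distributed in time.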

  14. Temperature sensitive surfaces and methods of making same

    Science.gov (United States)

    Liang, Liang [Richland, WA; Rieke, Peter C [Pasco, WA; Alford, Kentin L [Pasco, WA

    2002-09-10

    Poly-n-isopropylacrylamide surface coatings demonstrate the useful property of being able to switch characteristics depending upon temperature. More specifically, these coatings switch from being hydrophilic at low temperature to hydrophobic at high temperature. Research has been conducted for many years to better characterize and control the properties of temperature sensitive coatings. The present invention provides novel temperature sensitive coatings on articles and novel methods of making temperature sensitive coatings that are disposed on the surfaces of various articles. These novel coatings contain the reaction products of n-isopropylacrylamide and are characterized by their properties such as advancing contact angles. Numerous other characteristics such as coating thickness, surface roughness, and hydrophilic-to-hydrophobic transition temperatures are also described. The present invention includes articles having temperature-sensitive coatings with improved properties as well as improved methods for forming temperature sensitive coatings.

  15. A tool model for predicting atmospheric kinetics with sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. A new direct method for calculating the first-order sensitivity coefficients, applying sparse matrix technology to chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity coefficient equations. The FORTRAN subroutines of the model equation, the sensitivity coefficient equations, and their Jacobian analytical expressions are generated automatically from a chemical mechanism. The kinetic representation for the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
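The direct method described here augments the model ODE with coupled sensitivity equations ds/dt = J s + ∂f/∂k. A minimal sketch for first-order decay dc/dt = -kc (a stand-in for a chemical mechanism), with a hand-rolled RK4 step in place of the Gear procedure, where s = dc/dk obeys ds/dt = -ks - c:

```python
import numpy as np

def rhs(t, y, k):
    c, s = y
    dc = -k * c           # model equation: dc/dt = -k c
    ds = -k * s - c       # coupled sensitivity equation for s = dc/dk
    return np.array([dc, ds])

def rk4(y0, k, t_end, n_steps):
    """Classical fixed-step RK4 integrator (stand-in for a Gear solver)."""
    h = t_end / n_steps
    t, y = 0.0, np.array(y0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(t, y, k)
        k2 = rhs(t + h / 2, y + h / 2 * k1, k)
        k3 = rhs(t + h / 2, y + h / 2 * k2, k)
        k4 = rhs(t + h, y + h * k3, k)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

c0, k, T = 1.0, 0.5, 2.0
c, s = rk4([c0, 0.0], k, T, 200)   # s(0) = 0: initial condition has no k-dependence
print(c, s)
```

The integrated sensitivity matches the analytic solution c = c0 e^(-kT), s = -c0 T e^(-kT), which is the kind of consistency check one would run before trusting the machinery on a full mechanism.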

  16. Parametric sensitivity analysis for stochastic molecular systems using information theoretic metrics

    Energy Technology Data Exchange (ETDEWEB)

    Tsourtis, Anastasios, E-mail: tsourtis@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, Crete (Greece); Pantazis, Yannis, E-mail: pantazis@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States); Harmandaris, Vagelis, E-mail: harman@uoc.gr [Department of Mathematics and Applied Mathematics, University of Crete, and Institute of Applied and Computational Mathematics (IACM), Foundation for Research and Technology Hellas (FORTH), GR-70013 Heraklion, Crete (Greece)

    2015-07-07

    In this paper, we present a parametric sensitivity analysis (SA) methodology for continuous-time and continuous-space Markov processes represented by stochastic differential equations. Particularly, we focus on stochastic molecular dynamics as described by the Langevin equation. The utilized SA method is based on the computation of the information-theoretic (and thermodynamic) quantity of relative entropy rate (RER) and the associated Fisher information matrix (FIM) between path distributions, and it is an extension of the work proposed by Y. Pantazis and M. A. Katsoulakis [J. Chem. Phys. 138, 054115 (2013)]. A major advantage of the pathwise SA method is that both RER and pathwise FIM depend only on averages of the force field; therefore, they are tractable and computable as ergodic averages from a single run of the molecular dynamics simulation both in equilibrium and in non-equilibrium steady state regimes. We validate the performance of the extended SA method on two different stochastic molecular systems, a standard Lennard-Jones fluid and an all-atom methane liquid, and compare the obtained parameter sensitivities with those of three popular and well-studied observable functions, namely, the radial distribution function, the mean squared displacement, and the pressure. Results show that the RER-based sensitivities are highly correlated with the observable-based sensitivities.

  17. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling.

    Science.gov (United States)

    Núñez, M; Robie, T; Vlachos, D G

    2017-10-28

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
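The likelihood ratio method itself can be illustrated on the simplest jump process, a constant-rate Poisson process (a didactic stand-in, not the lattice KMC of the paper): the pathwise score n/k - T turns a single batch of simulations into a derivative estimate of an expectation with respect to the rate constant.

```python
import random

random.seed(0)
k, T, m = 2.0, 5.0, 100_000      # rate constant, time horizon, replicates
acc = 0.0
for _ in range(m):
    t, n = 0.0, 0
    while True:                  # simulate one trajectory: exponential waits
        t += random.expovariate(k)
        if t > T:
            break
        n += 1
    score = n / k - T            # d(log-likelihood)/dk of this trajectory
    acc += n * score             # likelihood-ratio sensitivity estimator
est = acc / m
print(est)                       # estimates dE[N]/dk; analytically E[N] = kT, so this is T
```

No rerun with a perturbed rate is needed, which is the speedup over finite differences the abstract refers to; rate rescaling matters because stiff networks make the score variance, and hence the required sample size, blow up.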

  18. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method

  19. A Sensitive Method Approach for Chromatographic Analysis of Gas Streams in Separation Processes Based on Columns Packed with an Adsorbent Material

    Directory of Open Access Journals (Sweden)

    I. A. A. C. Esteves

    2016-01-01

    Full Text Available. A sensitive method was developed and experimentally validated for the in-line analysis and quantification of gaseous feed and product streams of separation processes under research and development based on column chromatography. The analysis uses a specific mass spectrometry method coupled to engineering processes, such as Pressure Swing Adsorption (PSA) and Simulated Moving Bed (SMB), which are examples of popular continuous separation technologies that can be used in applications such as natural gas and biogas purifications or carbon dioxide sequestration. These processes employ column adsorption equilibria on adsorbent materials, thus requiring real-time gas stream composition quantification. For this assay, an internal standard is assumed and a single-point calibration is used in a simple mixture-specific algorithm. The accuracy of the method was found to be between 0.01% and 0.25% (mol) for mixtures of CO2, CH4, and N2, tested as case studies. This makes the method feasible for quality control of process streams and suitable as a standard monitoring and analysis procedure.

  20. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  1. Robust Stability Clearance of Flight Control Law Based on Global Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Liuli Ou

    2014-01-01

    Full Text Available. To validate the robust stability of the flight control system of a hypersonic flight vehicle, which suffers from a large number of parametric uncertainties, a new clearance framework based on structural singular value (μ) theory and global uncertainty sensitivity analysis (SA) is proposed. In this framework, SA serves as a preprocessing step on the uncertain model to be analysed, helping engineers determine which uncertainties affect the stability of the closed-loop system least. By ignoring these unimportant uncertainties, the calculation of μ can be simplified. Instead of analysing the effect of uncertainties on μ, which involves solving optimization problems repeatedly, a simpler stability analysis function that represents the effect of uncertainties on the closed-loop poles is proposed. Based on this stability analysis function, Sobol’s method, the most widely used global SA method, is extended and applied to the new clearance framework due to its suitability for systems with strong nonlinearity and input factors varying over large intervals, as well as input factors subject to random distributions. In this method, the sensitivity indices can be estimated conveniently via Monte Carlo simulation. An example is given to illustrate the efficiency of the proposed method.

  2. Introducing AAA-MS, a rapid and sensitive method for amino acid analysis using isotope dilution and high-resolution mass spectrometry.

    Science.gov (United States)

    Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie

    2012-07-06

    Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method to quantify peptides and proteins is amino acid analysis (AAA). This consists of an acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of amino acids. Although widely used, this method displays some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared to AAA assay, AAA-MS was much faster and was 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.

  3. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
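The core DELSA quantity can be sketched as a local, derivative-based variance share evaluated at many points in parameter space; this is a simplified reading of the method, and the toy model, finite-difference derivatives, and prior variances below are my assumptions.

```python
import numpy as np

def delsa_first_order(model, thetas, var_prior, h=1e-6):
    """Sketch of DELSA-style local first-order indices: at each sampled
    parameter set, each parameter's share of the locally linearized variance."""
    out = []
    for th in thetas:
        base = model(th)
        g = np.empty(len(th))
        for j in range(len(th)):
            tp = th.copy()
            tp[j] += h
            g[j] = (model(tp) - base) / h    # forward-difference derivative
        contrib = g ** 2 * var_prior         # local variance contribution
        out.append(contrib / contrib.sum())  # normalize to shares
    return np.array(out)

# toy nonlinear model: importance of theta_0 changes across parameter space
model = lambda th: th[0] ** 2 + th[1]
rng = np.random.default_rng(1)
thetas = rng.uniform(0.0, 1.0, (500, 2))
S = delsa_first_order(model, thetas, var_prior=np.array([1 / 12, 1 / 12]))
print(np.round(S.mean(axis=0), 2))
```

The spread of S across samples, rather than a single global number, is what lets DELSA show that a parameter can dominate in one region of parameter space and be irrelevant in another, as in the time-delay example above.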

  4. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which make the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods (Sobol, FAST, and a sparse-grid stochastic collocation technique based on the Smolyak algorithm) were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location-dependent and temporally dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit less sensitivity on pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Sensitivity analysis of hydraulic fracturing Using an extended finite element method for the PKN model

    NARCIS (Netherlands)

    Garikapati, Hasini; Verhoosel, Clemens V.; van Brummelen, Harald; Diez, Pedro; Papadrakakis, M.; Papadopoulos, V.; Stefanou, G.; Plevris, V.

    2016-01-01

    Hydraulic fracturing is a process that is surrounded by uncertainty, as available data on e.g. rock formations is scant and available models are still rudimentary. In this contribution sensitivity analysis is carried out as first step in studying the uncertainties in the model. This is done to

  6. Stability and Sensitive Analysis of a Model with Delay Quorum Sensing

    Directory of Open Access Journals (Sweden)

    Zhonghua Zhang

    2015-01-01

    Full Text Available. This paper formulates a delay model characterizing the competition between bacteria and the immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the threshold parameter R0 is most sensitive to the growth rate of bacteria, which should therefore be targeted in control strategies.

  7. EPR method for the detection of sensitization in stainless steels

    International Nuclear Information System (INIS)

    Clarke, W.L.; Cowan, R.L.

    1980-01-01

    The overall objective of the program was to improve the reliability of reactor system piping by increasing knowledge of failure causing mechanisms and by enhancing the capability for design evaluation and analysis. Toward the attainment of that objective, a technique has been developed to measure the degree of sensitization quantitatively in thermally treated AISI-304, -304L, -316 and 316L stainless steels. The Electrochemical Potentiokinetic Reactivation (EPR) test was developed because of an industrial need for a rapid, nondestructive, quantitative field test which could be used for assessing sensitization in reactor components. The EPR method consists of developing potentiokinetic curves on a polarized sample obtained by controlled potential sweep from the passive to the active region (reactivation) in a specific electrolyte; details of the test technique have been reported

  8. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    Science.gov (United States)

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  9. Sensitivity analysis of the Galerkin finite element method neutron diffusion solver to the shape of the elements

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, Seyed Abolfaz [Dept. of Energy Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2017-02-15

    The purpose of the present study is the presentation of the appropriate element and shape function in the solution of the neutron diffusion equation in two-dimensional (2D) geometries. To this end, the multigroup neutron diffusion equation is solved using the Galerkin finite element method in both rectangular and hexagonal reactor cores. The spatial discretization of the equation is performed using unstructured triangular and quadrilateral finite elements. Calculations are performed using both linear and quadratic approximations of the shape function in the Galerkin finite element method, based on which results are compared. Using the power iteration method, the neutron flux distributions with the corresponding eigenvalue are obtained. The results are then validated against the valid results for the IAEA-2D and BIBLIS-2D benchmark problems. To investigate the dependence of the results on the type and number of the elements and on the shape function order, a sensitivity analysis of the calculations with respect to these parameters is performed. It is shown that triangular elements and a second-order shape function in each element give the best results compared with the other cases.

  10. Application of Sensitivity and Uncertainty Analysis Methods to a Validation Study for Weapons-Grade Mixed-Oxide Fuel

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2001-01-01

    At the Oak Ridge National Laboratory (ORNL), sensitivity and uncertainty (S/U) analysis methods and a Generalized Linear Least-Squares Methodology (GLLSM) have been developed to quantitatively determine the similarity or lack thereof between critical benchmark experiments and an application of interest. The S/U and GLLSM methods provide a mathematical approach, which is less judgment based relative to traditional validation procedures, to assess system similarity and estimate the calculational bias and uncertainty for an application of interest. The objective of this paper is to gain experience with the S/U and GLLSM methods by revisiting a criticality safety evaluation and associated traditional validation for the shipment of weapons-grade (WG) MOX fuel in the MO-1 transportation package. In the original validation, critical experiments were selected based on a qualitative assessment of the MO-1 and MOX contents relative to the available experiments. Subsequently, traditional trending analyses were used to estimate the Δk bias and associated uncertainty. In this paper, the S/U and GLLSM procedures are used to re-evaluate the suite of critical experiments associated with the original MO-1 evaluation. Using the S/U procedures developed at ORNL, critical experiments that are similar to the undamaged and damaged MO-1 package are identified based on sensitivity and uncertainty analyses of the criticals and the MO-1 package configurations. Based on the trending analyses developed for the S/U and GLLSM procedures, the Δk bias and uncertainty for the most reactive MO-1 package configurations are estimated and used to calculate an upper subcritical limit (USL) for the MO-1 evaluation. The calculated bias and uncertainty from the S/U and GLLSM analyses lead to a calculational USL that supports the original validation study for the MO-1

  11. Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders.

    Science.gov (United States)

    Vanderweele, Tyler J; Arah, Onyebuchi A

    2011-01-01

    Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for additive, risk-ratio and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with constant effect on the outcome across treatment levels.
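For the simplest special case discussed in this literature, a binary unmeasured confounder whose effect on the outcome is constant across treatment levels, the risk-ratio bias factor takes a closed form; the numbers below are purely illustrative sensitivity inputs, not results from any study.

```python
def rr_bias_factor(rr_ud, p_u_treated, p_u_untreated):
    """Bias factor on the risk-ratio scale for a single binary confounder U,
    where rr_ud is U's risk ratio on the outcome (assumed constant across
    treatment levels) and the p_u_* are the prevalences of U by treatment."""
    num = 1.0 + (rr_ud - 1.0) * p_u_treated
    den = 1.0 + (rr_ud - 1.0) * p_u_untreated
    return num / den

observed_rr = 2.0
b = rr_bias_factor(rr_ud=3.0, p_u_treated=0.6, p_u_untreated=0.3)
adjusted_rr = observed_rr / b    # confounding-adjusted risk ratio
print(round(b, 3), round(adjusted_rr, 3))
```

Sweeping rr_ud and the prevalence gap over plausible ranges shows how strong the unmeasured confounding would have to be to explain away an observed association.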

  12. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices making use of a meta model. It also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  13. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis on a set of parameters in a nuclear reactor power system. • Modeled and developed reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in a nuclear reactor design. The analysis helps to determine the parameters on which a LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal hydraulics calculations. A design of a gas cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of the data
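One common way to use linear regression for sensitivity ranking, as in sampling-based SA generally, is via standardized regression coefficients (SRCs); the sampled toy model and coefficients below are assumptions for illustration, not the GCFBR model. The sum of squared SRCs approximates R² and so doubles as a check on the linearity assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 3))   # sampled design parameters (already standardized)
# toy response: one dominant, one moderate, one weak parameter, plus noise
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# fit y = b0 + b.X by least squares, then standardize: src_j = b_j*std(x_j)/std(y)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
src = beta[1:] * X.std(axis=0) / y.std()
rsq = (src ** 2).sum()        # ~R^2: near 1 means the linear model is adequate

print("SRC ranking:", np.round(src, 2), " R^2 ~", round(rsq, 2))
```

When this R² proxy drops well below 1, the linearity assumption fails and rank regression or the nonparametric smoothing methods discussed elsewhere in this collection become preferable.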

  14. Analytical methods for large-scale sensitivity analysis using GRESS [GRadient Enhanced Software System] and ADGEN [Automated Adjoint Generator]

    International Nuclear Information System (INIS)

    Pin, F.G.

    1988-04-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and ADGEN now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed. 7 refs., 2 figs

  15. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
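
The worst-case calibration that the authors take as their starting point can be sketched for matched binary pairs: under Rosenbaum's sensitivity model with within-pair treatment-odds bias at most Γ, the one-sided McNemar p-value is bounded by a binomial tail with success probability Γ/(1+Γ). The counts below are toy values, not from the cellphone study.

```python
from scipy.stats import binom

def rosenbaum_pvalue_bound(t, d, gamma):
    """Worst-case upper bound on the one-sided McNemar p-value when
    t of d discordant pairs have the exposed member as the case and
    an unmeasured confounder may bias the within-pair odds of
    exposure by at most gamma (gamma = 1 is the randomized case)."""
    p_plus = gamma / (1.0 + gamma)
    return binom.sf(t - 1, d, p_plus)

# Toy numbers: 70 of 100 discordant pairs have the exposed member
# as the case; the bound weakens as gamma grows.
for gamma in (1.0, 1.5, 2.0):
    print(gamma, rosenbaum_pvalue_bound(70, 100, gamma))
```

The average-case calibration discussed in the abstract replaces this worst-case bias with the average bias across pairs, yielding a smaller (less conservative) bound when the two differ.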

  16. Application of perturbation methods for sensitivity analysis for nuclear power plant steam generators; Aplicacao da teoria de perturbacao a analise de sensibilidade em geradores de vapor de usinas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Gurjao, Emir Candeia

    1996-02-01

    The differential and GPT (Generalized Perturbation Theory) formalisms of perturbation theory were applied in this work to a simplified U-tube steam generator model to perform sensitivity analysis. The adjoint and importance equations, with the corresponding expressions for the sensitivity coefficients, were derived for this steam generator model. The system was numerically solved in a Fortran program, called GEVADJ, in order to calculate the sensitivity coefficients. A transient loss of forced primary coolant in the nuclear power plant Angra-1 was used as an example case. The average and final values of the functionals secondary pressure and enthalpy were studied in relation to changes in the secondary feedwater flow, enthalpy and total volume of the secondary circuit. Absolute variations in the above functionals were calculated using the perturbative methods, considering the variations in the feedwater flow and total secondary volume. Comparison with the same variations obtained via the direct model showed in general good agreement, demonstrating the potentiality of perturbative methods for sensitivity analysis of nuclear systems. (author) 22 refs., 7 figs., 8 tabs.

  17. Robust and sensitive analysis of mouse knockout phenotypes.

    Directory of Open Access Journals (Sweden)

    Natasha A Karp

    Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should not be used. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare them to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype, and will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models in in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
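
A batch-aware mixed model of the kind advocated above can be sketched with a random intercept per batch; the simulated data and variable names below are illustrative, not the Sanger phenotyping pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated phenotyping data: controls and knockouts measured in ten
# batches, each batch carrying its own day-to-day offset.
batch_effect = rng.normal(0, 1.0, size=10)
rows = []
for batch in range(10):
    for genotype in ("control", "knockout"):
        for _ in range(5):
            y = (10.0 + (0.5 if genotype == "knockout" else 0.0)
                 + batch_effect[batch] + rng.normal(0, 0.5))
            rows.append({"batch": batch, "genotype": genotype, "y": y})
df = pd.DataFrame(rows)

# Mixed model: genotype as fixed effect, batch as random intercept,
# so the batch-to-batch variation is absorbed rather than inflating
# the residual against which the genotype effect is judged.
model = smf.mixedlm("y ~ genotype", df, groups=df["batch"]).fit()
print(model.params["genotype[T.knockout]"])  # estimated genotype effect
```

A naive t-test pooling across batches would have to treat the batch offsets as noise, which is exactly the artefact the abstract warns about.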

  18. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...
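
The Morris screening named in the abstract can be sketched with a minimal one-at-a-time elementary-effects estimator on the unit hypercube; this is a toy illustration of the method, not the authors' dechlorination model.

```python
import numpy as np

def morris_screening(model, ndim, n_traj=50, delta=0.25, seed=0):
    """Minimal elementary-effects screening on [0, 1]^ndim.
    Returns mu_star: the mean absolute elementary effect per input,
    a standard Morris influence measure."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(ndim)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=ndim)
        y0 = model(x)
        for i in range(ndim):
            x2 = x.copy()
            x2[i] += delta            # perturb one input at a time
            effects[i].append((model(x2) - y0) / delta)
    return np.array([np.mean(np.abs(e)) for e in effects])

# Toy model: input 0 dominates, input 1 is weaker, input 2 is inert.
mu_star = morris_screening(lambda x: 5 * x[0] + x[1] ** 2, 3)
print(mu_star)
```

A full Morris design moves along random trajectories to reuse model runs; the one-at-a-time version above conveys the same ranking idea at the cost of extra evaluations.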

  19. Resonance-induced sensitivity enhancement method for conductivity sensors

    Science.gov (United States)

    Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)

    2009-01-01

    Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor. The sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements of the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
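
The cancellation at resonance can be verified numerically on a simplified equivalent circuit: a solution resistance R in parallel with a parasitic capacitance C and the added inductor L. The component values below are illustrative, not taken from the patent.

```python
import numpy as np

# Illustrative equivalent circuit: R parallel C parallel L.
R, C, L = 50e3, 10e-12, 1e-3

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # resonant frequency
w = 2 * np.pi * f0

# Parallel admittance: at f0 the inductive and capacitive terms
# cancel, leaving a purely resistive impedance equal to R.
y = 1 / R + 1j * w * C + 1 / (1j * w * L)
z = 1 / y
print(f0, z)
```

Off resonance, the capacitive term shunts the signal and the measured impedance no longer tracks the solution conductance, which is why operating exactly at f0 recovers the sensitivity.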

  20. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    International Nuclear Information System (INIS)

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution and high-sensitivity method when applied to nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with multiple γ-ray detection, an approach called multiple prompt γ-ray analysis (MPGA). In this review we show the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments in urban areas and sea sediment samples, are also described. (author)

  1. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  2. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...

  3. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
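
The derivative propagation that GRESS performs at the FORTRAN source level can be illustrated with a minimal forward-mode dual-number class; this is a toy of the underlying computer-calculus idea, not the GRESS implementation.

```python
class Dual:
    """Forward-mode automatic differentiation: each value carries its
    derivative, and arithmetic propagates both by the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # d/dx = 6x + 2

r = f(Dual(2.0, 1.0))   # seed the input derivative with 1
print(r.val, r.der)     # 17.0 14.0
```

A source-transformation tool like GRESS achieves the same effect by emitting augmented code rather than overloading operators, which is what makes the sensitivities cost-effective for large legacy models.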

  4. Development of a sensitive GC-C-IRMS method for the analysis of androgens.

    Science.gov (United States)

    Polet, Michael; Van Gansbeke, Wim; Deventer, Koen; Van Eenoo, Peter

    2013-02-01

    The administration of anabolic steroids is one of the most important issues in doping control and is detectable through a change in the carbon isotopic composition of testosterone and/or its metabolites. Gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS), however, remains a very laborious and expensive technique and substantial amounts of urine are needed to meet the sensitivity requirements of the IRMS. This can be problematic because only a limited amount of urine is available for anti-doping analysis on a broad spectrum of substances. In this work we introduce a new type of injection that increases the sensitivity of GC-C-IRMS by a factor of 13 and reduces the limit of detection, simply by using solvent vent injections instead of splitless injection. This drastically reduces the amount of urine required. On top of that, by only changing the injection technique, the detection parameters of the IRMS are not affected and there is no loss in linearity. Copyright © 2012 John Wiley & Sons, Ltd.

  5. NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method

    International Nuclear Information System (INIS)

    Ogbomo, Henry; Hahn, Anke; Geiler, Janina; Michaelis, Martin; Doerr, Hans Wilhelm; Cinatl, Jindrich

    2006-01-01

    The measurement of natural killer (NK) cell cytotoxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances, which could be toxic or radioactive. Effective killing of tumor cells was achieved at low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for the measurement of NK cell activity with small blood samples such as those obtained from pediatric patients.

  6. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    International Nuclear Information System (INIS)

    Alhossen, I; Bugarin, F; Segonds, S; Villeneuve-Faure, C; Baudoin, F

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC. (paper)
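
Fitting the EFDC to a four-parameter logistic law, as the paper does before running Sobol and DOE analyses, can be sketched with a standard least-squares fit. The exact functional form used by the authors is not given in the abstract, so the common 4PL parameterization below is an assumption, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(z, a, b, c, d):
    """Common four-parameter logistic law (assumed form): upper
    asymptote a, slope b, inflection scale c, lower asymptote d."""
    return d + (a - d) / (1.0 + (z / c) ** b)

# Synthetic force-distance data standing in for a measured EFDC.
z = np.linspace(0.1, 10, 50)
true = (1.0, 2.0, 3.0, 0.1)
rng = np.random.default_rng(2)
f = logistic4(z, *true) + rng.normal(0, 0.005, z.size)

popt, _ = curve_fit(logistic4, z, f, p0=(1, 1, 1, 0.05), bounds=(0, 10))
print(popt)  # recovered (a, b, c, d) close to the true values
```

Once every curve is reduced to four law parameters, the sensitivity question becomes tractable: Sobol indices and factorial designs are computed on the parameters rather than on the raw curves.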

  7. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    Science.gov (United States)

    Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.

  8. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    Science.gov (United States)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.

  9. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to choose the frontier intentionally, while traditional DEA and Super-DEA determine the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functionality of traditional DEA and Super-DEA, so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.

  10. Contributions to sensitivity analysis and generalized discriminant analysis; Contributions a l'analyse de sensibilite et a l'analyse discriminante generalisee

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, J

    2005-12-15

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations of its inputs. Variance-based methods quantify the share of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Because classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting of expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists of classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
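
The variance-based indices the thesis builds on can be sketched with the textbook pick-freeze Monte Carlo estimator for first-order Sobol indices; this assumes independent inputs and is not the thesis's correlated-input extension.

```python
import numpy as np

def sobol_first_order(model, ndim, n=100_000, seed=0):
    """First-order Sobol indices via the pick-freeze estimator on
    independent uniform inputs over [0, 1]^ndim."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n, ndim))
    b = rng.uniform(size=(n, ndim))
    ya = model(a)
    var = ya.var()
    s = np.empty(ndim)
    for i in range(ndim):
        ab = b.copy()
        ab[:, i] = a[:, i]   # "freeze" input i from the first sample
        s[i] = np.mean(ya * (model(ab) - model(b))) / var
    return s

# Toy model: y = x1 + 0.5*x2, so S1 = 0.8 and S2 = 0.2 analytically.
s = sobol_first_order(lambda x: x[:, 0] + 0.5 * x[:, 1], 2)
print(s)
```

When inputs are correlated this decomposition is no longer unique, which is precisely the interpretability problem that motivates the thesis's group-wise approach.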

  11. Stochastic sensitivity analysis of periodic attractors in non-autonomous nonlinear dynamical systems based on stroboscopic map

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Kong-Ming, E-mail: kmguo@xidian.edu.cn [School of Electromechanical Engineering, Xidian University, P.O. Box 187, Xi' an 710071 (China); Jiang, Jun, E-mail: jun.jiang@mail.xjtu.edu.cn [State Key Laboratory for Strength and Vibration, Xi' an Jiaotong University, Xi' an 710049 (China)

    2014-07-04

    To apply the stochastic sensitivity function method, which can estimate the probabilistic distribution of stochastic attractors, to non-autonomous dynamical systems, a 1/N-period stroboscopic map for a periodic motion is constructed in order to discretize the continuous cycle into a discrete one. In this way, the sensitivity analysis of a cycle for a discrete map can be utilized, and a numerical algorithm for the stochastic sensitivity analysis of periodic solutions of non-autonomous nonlinear dynamical systems under stochastic disturbances is devised. An externally excited Duffing oscillator and a parametrically excited laser system are studied as examples to show the validity of the proposed method. - Highlights: • A method to analyze the sensitivity of stochastic periodic attractors in non-autonomous dynamical systems is proposed. • The probabilistic distribution around periodic attractors in an externally excited Φ⁶ Duffing system is obtained. • The probabilistic distribution around a periodic attractor in a parametrically excited laser system is determined.

  12. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
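
The SampEn statistic used above to quantify schedule regularity can be implemented directly: count template matches of length m and m+1 within a tolerance r and take the negative log of their ratio. The signals below are illustrative, not DASim schedules.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -log of the ratio of (m+1)-length
    to m-length template matches within Chebyshev tolerance r,
    self-matches excluded. Lower values mean a more regular series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m               # same number of templates for both lengths
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - n   # subtract the diagonal self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # highly regular
noisy = rng.normal(size=400)                       # irregular
print(sample_entropy(regular), sample_entropy(noisy))
```

Tuning an activity's regularity, as described in the abstract, then amounts to searching parameter space for schedules whose SampEn matches a target value.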

  13. Methods of high-sensitive analysis of actinides in liquid radioactive waste

    International Nuclear Information System (INIS)

    Diakov, Alexandre A.; Perekhozheva, Tatiana N.; Zlokazova, Elena I.

    2002-01-01

    A complex of methods has been developed to determine actinides in liquid radioactive wastes for solving the problems of radiation, nuclear and ecological safety of nuclear reactors. The main method is based on the radiochemical separation of U, Np-Pu, Am-Cm on ion-exchange and extraction columns. An identification of radionuclides and determination of their content are performed using alpha-spectrometry. The microconcentrations of the sum of the main fissile materials U-235 and Pu-239 are determined using plastic track detectors. An independent method of U-238 content determination is neutron activation analysis. Am-241 content can be determined by gamma-spectrometry. (author)

  14. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  15. A new method of removing the high value feedback resistor in the charge sensitive preamplifier

    International Nuclear Information System (INIS)

    Xi Deming

    1993-01-01

    A new method of removing the high value feedback resistor in the charge sensitive preamplifier is introduced. The circuit analysis of this novel design is described and the measured performances of a practical circuit are provided

  16. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
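
The "pinching" idea can be sketched in a simplified, purely aleatory Monte Carlo setting: compare the output variance under full input uncertainty with the variance when one input is fixed at a nominal value. The two-input model below is a toy, not the dike assessment.

```python
import numpy as np

def pinch_effect(model, sampler, pinch_index, n=50_000, seed=0):
    """Fractional reduction in output variance when one input is
    'pinched' to its sample mean (a simplified sketch of the
    report's strategy, without the epistemic/aleatory split)."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    base_var = model(x).var()
    x[:, pinch_index] = x[:, pinch_index].mean()  # pinch to a point
    return 1.0 - model(x).var() / base_var

sampler = lambda rng, n: rng.normal(0, [3.0, 1.0], size=(n, 2))
model = lambda x: x[:, 0] + x[:, 1]
for i in (0, 1):
    print(i, pinch_effect(model, sampler, i))
# pinching the wider input removes most of the output variance
```

In the full method the pinch can target only the epistemic or only the aleatory component of an uncertain number, telling the analyst which kind of additional information would pay off most.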

  17. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
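
A mixed-kernel SVR of the flavor described can be sketched with a convex combination of a polynomial kernel (global trend) and a Gaussian RBF kernel (local detail). The paper's MKF uses orthogonal polynomials, so the plain polynomial kernel below is a stand-in, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, weight=0.5, degree=3, gamma=1.0):
    """Convex combination of polynomial and RBF Gram matrices; a sum
    of positive semi-definite kernels is itself a valid kernel."""
    poly = (X @ Y.T + 1.0) ** degree
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)
    return weight * poly + (1 - weight) * rbf

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 3 + np.sin(3 * X[:, 1])

svr = SVR(kernel=mixed_kernel).fit(X, y)
print(svr.score(X, y))  # training R^2
```

In the paper's approach the Sobol indices are then read off from the fitted meta-model's coefficients rather than by extra model runs, which is where the computational saving comes from.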

  18. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of fission intensity for the international PWR benchmark are performed. (authors)

  19. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  20. Assessing the Risk of Secondary Transfer Via Fingerprint Brush Contamination Using Enhanced Sensitivity DNA Analysis Methods.

    Science.gov (United States)

    Bolivar, Paula-Andrea; Tracey, Martin; McCord, Bruce

    2016-01-01

    Experiments were performed to determine the extent of cross-contamination of DNA resulting from secondary transfer due to fingerprint brushes used on multiple items of evidence. Analysis of both standard and low copy number (LCN) STR was performed. Two different procedures were used to enhance sensitivity, post-PCR cleanup and increased cycle number. Under standard STR typing procedures, some additional alleles were produced that were not present in the controls or blanks; however, there was insufficient data to include the contaminant donor as a contributor. Inclusion of the contaminant donor did occur for one sample using post-PCR cleanup. Detection of the contaminant donor occurred for every replicate of the 31 cycle amplifications; however, using LCN interpretation recommendations for consensus profiles, only one sample would include the contaminant donor. Our results indicate that detection of secondary transfer of DNA can occur through fingerprint brush contamination and is enhanced using LCN-DNA methods. © 2015 American Academy of Forensic Sciences.

  1. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    Science.gov (United States)

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier.
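As a concrete sketch of the constraint-based (FBA) setting that TMSA builds on, the toy problem below maximizes a biomass flux under a steady-state mass balance. The three-metabolite network, the flux bounds, and the use of SciPy's `linprog` are illustrative assumptions, not part of the TMSA method itself:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: R1 (uptake) -> A, R2: A -> B, R3: A -> C,
# R4: B + C -> biomass.  Rows = metabolites A, B, C; steady state: S v = 0.
S = np.array([[1, -1, -1,  0],
              [0,  1,  0, -1],
              [0,  0,  1, -1]], dtype=float)

res = linprog(c=[0, 0, 0, -1],            # minimize -v4, i.e. maximize biomass
              A_eq=S, b_eq=np.zeros(3),
              bounds=[(0, 10), (0, None), (0, None), (0, None)],
              method="highs")
v_biomass = res.x[3]                      # uptake cap of 10 forces v4 <= 5
```

Tightening a bound on a single metabolite's flux shrinks the feasible solution space; ranking metabolites by how strongly a measurement would shrink that space is the effect TMSA quantifies.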

  2. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice.

  3. Improved sensitivity testing of explosives using transformed Up-Down methods

    International Nuclear Information System (INIS)

    Brown, Geoffrey W

    2014-01-01

    Sensitivity tests provide data that help establish guidelines for the safe handling of explosives. Any sensitivity test is based on assumptions to simplify the method or reduce the number of individual sample evaluations. Two common assumptions that are not typically checked after testing are 1) explosive response follows a normal distribution as a function of the applied stimulus levels and 2) the chosen test level spacing is close to the standard deviation of the explosive response function (for Bruceton Up-Down testing for example). These assumptions and other limitations of traditional explosive sensitivity testing can be addressed using Transformed Up-Down (TUD) test methods. TUD methods have been developed extensively for psychometric testing over the past 50 years and generally use multiple tests at a given level to determine how to adjust the applied stimulus. In the context of explosive sensitivity we can use TUD methods that concentrate testing around useful probability levels. Here, these methods are explained and compared to Bruceton Up-Down testing using computer simulation. The results show that the TUD methods are more useful for many cases but that they do require more tests as a consequence. For non-normal distributions, however, the TUD methods may be the only accurate assessment method.
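The contrast between a classic Bruceton staircase and a transformed up-down rule can be seen in a small simulation. The Gaussian threshold model, step size, and 2-down/1-up rule below are illustrative assumptions; the 2-down/1-up rule concentrates testing near the ~70.7% response level rather than the median:

```python
import random
import statistics

def bruceton(mu, sigma, step, start, n, rng):
    # classic 1-up/1-down staircase: converges near the 50% response point
    level, levels = start, []
    for _ in range(n):
        go = rng.gauss(mu, sigma) < level   # item fires if its threshold < stimulus
        levels.append(level)
        level += -step if go else step
    return statistics.mean(levels[n // 4:])  # discard burn-in

def transformed_2down_1up(mu, sigma, step, start, n, rng):
    # step down only after two consecutive "go"s: P(2 gos) = 0.5 at the
    # ~70.7% response level, so testing concentrates there instead
    level, levels, streak = start, [], 0
    for _ in range(n):
        go = rng.gauss(mu, sigma) < level
        levels.append(level)
        if go:
            streak += 1
            if streak == 2:
                level -= step
                streak = 0
        else:
            level += step
            streak = 0
    return statistics.mean(levels[n // 4:])

rng = random.Random(1)
m50 = bruceton(10.0, 1.0, 0.5, 8.0, 4000, rng)           # near mu = 10
m707 = transformed_2down_1up(10.0, 1.0, 0.5, 8.0, 4000, rng)  # above m50
```

As the abstract notes, targeting a higher probability level costs more individual tests per level adjustment, which is the price of the TUD approach.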

  4. Exploring Intercultural Sensitivity in Early Adolescence: A Mixed Methods Study

    Science.gov (United States)

    Mellizo, Jennifer M.

    2017-01-01

    The purpose of this mixed methods study was to explore levels of intercultural sensitivity in a sample of fourth to eighth grade students in the United States (n = 162). "Intercultural sensitivity" was conceptualised through Bennett's Developmental Model of Intercultural Sensitivity, and assessed through the Adapted Intercultural Sensitivity Index.…

  5. Sensitivity analysis of U238 cross section in thermal nuclear systems

    International Nuclear Information System (INIS)

    Amorim, E.S. do; D'Oliveira, A.B.; Oliveira, E.C. de; Moura Neto, C. de.

    1980-01-01

    A sensitivity analysis system is developed for assessing the implications of uncertainties in nuclear data and related computational methods for light water power reactors. Sensitivities, at equilibrium cycle conditions, are calculated for the few-group macroscopic cross sections of U-238 with respect to the 35-group microscopic absorption cross sections, using the batch depletion code SENTEAV and calculation methods similar to those used in industry. This investigation indicates that improvements are required over specific energy ranges. These results point out the direction for worthwhile experimental measurements based on an analysis of costs and economic benefits. (Author) [pt

  6. Comparison of two ultra-sensitive methods for the determination of 232Th by recovery corrected pre-concentration radiochemical neutron activation analysis

    International Nuclear Information System (INIS)

    Glover, S.E.; Qu, H.; LaMont, S.P.; Grimm, C.A.; Filby, R.H.

    2001-01-01

    The determination of isotopic thorium by alpha spectrometric methods is a routine practice for bioassay and environmental measurement programs. Alpha spectrometry has excellent detection limits (by mass) for all isotopes of thorium except 232Th, due to its extremely long half-life. Improvements in the detection limit and sensitivity over previously reported methods of pre-concentration neutron activation analysis (PCNAA) for the recovery-corrected, isotopic determination of thorium in various matrices are discussed. Following irradiation, the samples were dissolved, 231Pa was added as a tracer, and Pa was isolated and compared by two different methods (extraction chromatography and anion exchange chromatography), followed by alpha spectrometry for recovery correction. Ion exchange chromatography was found to be superior for this application at this time, principally for reliability. The detection limit for 232Th of 3.5 x 10^-7 Bq is almost three orders of magnitude lower than for alpha spectrometry using the PCRNAA method, and one order of magnitude below previously reported PCNAA methods. (author)

  7. Parametric sensitivity analysis for the helium dimers on a model potential

    Directory of Open Access Journals (Sweden)

    Nelson Henrique Teixeira Lemes

    2012-01-01

    Sensitivity analysis of the potential parameters for the helium heteronuclear dimers HeNe, HeAr, HeKr and HeXe is the subject of this work. The number of bound states these rare-gas dimers can support, for different angular momenta, is presented and discussed. The variable phase method, together with Levinson's theorem, is used to explore the quantum scattering process at very low collision energy using the Tang and Toennies potential. These diatomic dimers can support a bound state even for relative angular momentum equal to five, as in HeXe. Vibrationally excited states, with zero angular momentum, are also possible for HeKr and HeXe. Results from the sensitivity analysis give acceptable orders of magnitude for the potential parameters.

  8. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  9. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of semi-volatile SOA to non-volatile SOA is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
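A variance-based first-order index of the kind computed in such an analysis can be estimated with a pick-freeze scheme. The three-input linear response below is a made-up stand-in for the chemical transport model, with uniform inputs on [0, 1]:

```python
import numpy as np

def first_order_indices(f, d, n, rng):
    # pick-freeze estimator: S_i = Cov(f(A), f(B_i)) / Var(f(A)),
    # where B_i copies column i from A and the rest from an independent B
    A, B = rng.random((n, d)), rng.random((n, d))
    yA = f(A)
    S = []
    for i in range(d):
        Bi = B.copy()
        Bi[:, i] = A[:, i]
        yBi = f(Bi)
        S.append(((yA * yBi).mean() - yA.mean() * yBi.mean()) / yA.var())
    return np.array(S)

# hypothetical response: input 0 dominates, input 1 is weak, input 2 is inert
f = lambda x: 4.0 * x[:, 0] + 1.0 * x[:, 1]
S = first_order_indices(f, 3, 100_000, np.random.default_rng(0))
# analytic values for this toy model: S = [16/17, 1/17, 0]
```

In the paper's setting each "model evaluation" is a full regional simulation, which is why a 250-member quasi-Monte Carlo ensemble plus a fitted surrogate replaces a brute-force estimator like this one.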

  10. Sensitivity analysis for missing data in regulatory submissions.

    Science.gov (United States)

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  11. Sensitivity analysis for heat diffusion in a fin on a nuclear fuel element

    International Nuclear Information System (INIS)

    Tito, Max Werner de Carvalho

    2001-11-01

    Modern thermal systems, such as nuclear power plants, are increasingly complex. Sophisticated computational and mathematical tools are needed to increase operating efficiency, reduce costs and maximize profits while maintaining the integrity of system components. Sensitivity calculations play an important role in this process, providing relevant information on how variations or perturbations of the parameters influence the system as it operates. This technique, known as sensitivity analysis, makes it possible to understand the effects of the parameters, which is fundamental for project design and for developing preventive and corrective maintenance measures for much of the equipment of modern engineering. Sensitivity calculations are generally based on the response-surface technique (a graphical description of the functions of interest built from the results of varying the system parameters). This method has many disadvantages and is sometimes impracticable, since many parameters can perturb the system and the model used to analyse it can be very complex. Perturbative methods are an appropriate and practical solution to this problem, especially in the presence of complex equations, and they considerably reduce the computational time required. These methods thus become an essential tool for simplifying sensitivity analysis. In this dissertation, the differential perturbative method is applied to a heat conduction problem in a thermal system consisting of a one-dimensional circumferential fin on a nuclear fuel element. Fins are used to extend the surfaces over which convection occurs, increasing the heat transfer of many thermal components. The finned claddings are

  12. Sensitivity analysis of longitudinal cracking on asphalt pavement using MEPDG in permafrost region

    Directory of Open Access Journals (Sweden)

    Chen Zhang

    2015-02-01

    Longitudinal cracking is one of the most important distresses of asphalt pavement in permafrost regions. Sensitivity analysis of the design parameters of asphalt pavement can be used to study the influence of each parameter on longitudinal cracking, which can help optimize the design of the pavement structure. In this study, 20 test sections of the Qinghai–Tibet Highway were selected for a sensitivity analysis of longitudinal cracking with respect to material parameters, based on the Mechanistic-Empirical Pavement Design Guide (MEPDG) and the single-factor sensitivity analysis method. Computer-aided engineering (CAE) simulation techniques, such as the Latin hypercube sampling (LHS) technique and multiple regression analysis, are used as auxiliary means. Finally, the sensitivity spectrum of material parameters for longitudinal cracking was established. The results show that multiple regression analysis can be used to determine the most influential factors more efficiently and to perform the qualitative analysis when applying the MEPDG software to sensitivity analysis of longitudinal cracking in permafrost regions. The effect weights of the three parameters on longitudinal cracking, in descending order, are air void, effective binder content and PG grade. The influence of air void on the top layer is bigger than that on the middle and bottom layers. The influence of effective asphalt content on the top layer is bigger than that on the middle and bottom layers, and the influence on the bottom layer is slightly bigger than on the middle layer. The accumulated longitudinal cracking of the middle and bottom layers over the design life begins to increase when the design temperature of the PG grade is increased.
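The LHS-plus-regression workflow described above can be sketched with standardized regression coefficients (SRCs) as the sensitivity measure. The three inputs and the linear distress model below are assumptions standing in for air void, effective binder content, and PG grade:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # one draw per equal-probability stratum, independently permuted per dimension
    return (rng.random((n, d)) + np.argsort(rng.random((n, d)), axis=0)) / n

rng = np.random.default_rng(42)
X = latin_hypercube(500, 3, rng)
# hypothetical cracking response: input 0 dominates, then input 1, then input 2
y = 5.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, 500)

# standardized regression coefficients as sensitivity / effect-weight measures
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))
```

The magnitudes of the SRCs play the role of the "effect weights" in the abstract; their ordering gives the sensitivity spectrum.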

  13. Risk and sensitivity analysis in relation to external events

    International Nuclear Information System (INIS)

    Alzbutas, R.; Urbonas, R.; Augutis, J.

    2001-01-01

    This paper presents a risk and sensitivity analysis of the impact of external events on safe operation in general, and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and on assessment of the external hazards. Real statistical data are used, as well as initial external-event simulations. Preliminary screening criteria are applied. The analysis of external-event impact on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire and flying turbine parts are analysed. Models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty raised by an external event and its model. Even when the external events analysis shows rather limited danger, the sensitivity analysis can determine the causes with the highest influence. Such possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, the events' occurrence and propagation can be substantially uncertain. (author)

  14. Anisotropic analysis for seismic sensitivity of groundwater monitoring wells

    Science.gov (United States)

    Pan, Y.; Hsu, K.

    2011-12-01

    Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes of groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change of groundwater may appear as oscillations or as step changes. The former is caused by seismic waves; the latter is caused by the volumetric strain and reflects the strain status. Since installing a groundwater monitoring well is easier and cheaper than installing a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of seismic sensitivity of groundwater monitoring wells and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyse the anisotropy of the seismic sensitivity, and GIS is used to map the sensitive area of the existing groundwater monitoring well.

  15. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
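GRESS instrumented FORTRAN to carry derivatives through a calculation; the same idea can be sketched in miniature with forward-mode automatic differentiation via dual numbers (a toy stand-in, not the GRESS system):

```python
class Dual:
    """Forward-mode AD value: carries a value and its derivative together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._lift(o)  # product rule for the derivative part
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def value_and_sensitivity(f, x):
    r = f(Dual(x, 1.0))  # seed derivative 1.0 to get df/dx alongside f(x)
    return r.val, r.der

# hypothetical model response: y = 3x^2 - 2x + 1, so dy/dx = 6x - 2
y, dydx = value_and_sensitivity(lambda x: 3 * x * x - 2 * x + 1, 2.0)
# y == 9.0, dydx == 10.0
```

This is the "direct" (forward) mode; the adjoint mode emphasized in the abstract propagates derivatives backwards instead, which is cheaper when one output depends on many inputs.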

  16. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  17. On uncertainty and local sensitivity analysis for transient conjugate heat transfer problems

    International Nuclear Information System (INIS)

    Rauch, Christian

    2012-01-01

    The need to simulate the real-world behavior of automobiles has led to more and more sophisticated models of various physical phenomena being coupled together. This increases the number of parameters to be set and, consequently, the required knowledge of their relative importance for the solution and of the theory behind them. Sensitivity and uncertainty analysis provides that knowledge of parameter importance. In this paper a thermal radiation solver is considered that performs conduction calculations and receives the heat transfer coefficient and fluid temperature at a thermal node. The equations of local, discrete, and transient sensitivities for the conjugate heat transfer model solved by the finite difference method are derived for selected parameters. In the past, formulations for the finite element method have been published; this paper builds on the steady-state formulation published previously by the author. A numerical analysis of the stability of the solution matrix is conducted. Dimensionless uncertainty factors are then calculated from the normalized sensitivity coefficients. On a simplified example, the relative importance of the heat transfer modes at various locations is then investigated through these uncertainty factors and their changes over time.

  18. Global sensitivity analysis in wind energy assessment

    Science.gov (United States)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified from the ranking of the total-effect sensitivity indices. The results of the present
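The practical difference between two of the sampling strategies mentioned (PRS and LHS) can be demonstrated with a numpy-only sketch; the smooth two-input "energy" integrand is a made-up assumption, and Sobol' sequences are omitted to keep the sketch dependency-free:

```python
import numpy as np

rng = np.random.default_rng(7)

def prs(n, d, r):
    return r.random((n, d))               # plain pseudo-random sampling

def lhs(n, d, r):
    # Latin hypercube: one draw per equal-probability bin, shuffled per dimension
    return (r.random((n, d)) + np.argsort(r.random((n, d)), axis=0)) / n

# hypothetical normalized energy-production integrand: smooth, additive-dominant
f = lambda x: x[:, 0] ** 2 + 0.5 * x[:, 1]

def estimator_spread(sampler, reps=200, n=64):
    # standard deviation of the mean-value estimate over repeated designs
    return np.std([f(sampler(n, 2, rng)).mean() for _ in range(reps)])

s_prs = estimator_spread(prs)
s_lhs = estimator_spread(lhs)             # stratification shrinks the spread
```

For a fixed simulation budget, a lower estimator spread translates directly into tighter sensitivity-index estimates, which is why the choice of sampling strategy matters in the proposed methodology.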

  19. Parameter Identification with the Random Perturbation Particle Swarm Optimization Method and Sensitivity Analysis of an Advanced Pressurized Water Reactor Nuclear Power Plant Model for Power Systems

    Directory of Open Access Journals (Sweden)

    Li Wang

    2017-02-01

    The ability to obtain appropriate parameters for an advanced pressurized water reactor (PWR) unit model is of great significance for power system analysis. The model involves nonlinear relationships, long transition times and intercoupled parameters that are difficult to obtain from practical tests, which poses complexity and makes parameter identification difficult. In this paper, a model and a parameter identification method for the PWR primary loop system were investigated. A parameter identification process was proposed, using a particle swarm optimization (PSO) algorithm based on random perturbation (RP-PSO). The identification process included model variable initialization based on the differential equations of each sub-module and the program setting method, parameter obtainment through sub-module identification in the Matlab/Simulink software (MathWorks Inc., Natick, MA, USA), as well as adaptation analysis for the integrated model. Extensive parameter identification work was carried out, and the results verified the effectiveness of the method. It was found that changes of some parameters, such as the fuel temperature and coolant temperature feedback coefficients, changed the model gain, whose trajectory sensitivities were not zero; thus, obtaining their appropriate values had significant effects on the simulation results. The trajectory sensitivities of some parameters in the core neutron dynamics module were interrelated, making these parameters difficult to identify. Model parameter sensitivities can differ depending on the model input conditions, reflecting how difficult the parameters are to identify under various input conditions.

  20. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    International Nuclear Information System (INIS)

    Saltelli, A.

    1987-01-01

    Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high-level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximally exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin hypercube method is used for sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability on the basis of two test cases. The conclusions are that no estimator can be considered the best from all points of view, and that the use of more than one estimator in sensitivity analysis is recommended.
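The motivation for comparing parametric and rank-based (non-parametric) estimators can be seen on a monotonic but strongly nonlinear toy response; the power-law "dose rate" model below is an assumption for illustration:

```python
import numpy as np

def ranks(a):
    # 0..n-1 ranks (no tie handling needed for continuous samples)
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(3)
x = rng.random(5000)
y = x ** 10                       # monotonic but strongly nonlinear response

pcc = corr(x, y)                  # Pearson (parametric) understates the link
rcc = corr(ranks(x), ranks(y))    # rank-based (Spearman-like) recovers it
```

This is why rank transformations improve sensitivity estimates for the nonlinear but monotonic dose-rate responses studied here, while no single estimator wins in all cases.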

  1. Highly Sensitive and High-Throughput Method for the Analysis of Bisphenol Analogues and Their Halogenated Derivatives in Breast Milk.

    Science.gov (United States)

    Niu, Yumin; Wang, Bin; Zhao, Yunfeng; Zhang, Jing; Shao, Bing

    2017-12-06

    The structural analogs of bisphenol A (BPA) and their halogenated derivatives (together termed BPs) have been found in the environment, food, and even the human body. Limited research has shown that some of them exhibit toxicities similar to or even greater than that of BPA. Therefore, adverse health effects of BPs are expected for humans with low-dose exposure in early life. Breast milk is an excellent matrix and can reflect fetuses' and babies' exposure to contaminants. Some of the emerging BPs may be present at trace or ultratrace levels in humans. However, existing analytical methods for breast milk cannot quantify these BPs simultaneously with high sensitivity using a small sampling weight, which is important for human biomonitoring studies. In this paper, a method based on Bond Elut Enhanced Matrix Removal-Lipid purification, pyridine-3-sulfonyl chloride derivatization, and liquid chromatography electrospray tandem mass spectrometry was developed. The method requires only a small quantity of sample (200 μL) and allowed for the simultaneous determination of 24 BPs in breast milk with ultrahigh sensitivity. The limits of quantitation of the proposed method were 0.001-0.200 μg L-1, which were 1-6.7 times lower than the only study for the simultaneous analysis of bisphenol analogs in breast milk, based on a 3 g sample weight. The mean recoveries ranged from 86.11% to 119.05% with relative standard deviation (RSD) ≤ 19.5% (n = 6). Matrix effects were within 20%. Bisphenol F (BPF), bisphenol S (BPS), and bisphenol AF (BPAF) were detected; BPA was still the dominant BP, followed by BPF. This is the first report describing the occurrence of BPF and BPAF in breast milk.

  2. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    OpenAIRE

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  3. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)

    2016-10-15

    Rod-type nuclear fuel was mainly developed in the past, but recent studies have extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to the shape of the nuclear fuel. The criticality analysis was performed using MCNP5, a well-known general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. For the intervals between plates, the criticality increases with the interval, but when the interval exceeds 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no common rule could be established for all cases. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.

  4. Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes

    International Nuclear Information System (INIS)

    Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae

    2016-01-01

Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. For the interval between plates, the criticality increases with the interval, but above 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single trend could be identified that holds in common for all cases, so a sensitivity analysis of criticality is required whenever the subject of analysis changes.

  5. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.
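The elementary effect method mentioned above can be sketched in a few lines. The model below is a hypothetical stand-in for the reactive transport model, and the sampler is a simplified radial one-at-a-time variant of Morris screening; parameter ranges are assumed normalized to [0, 1].

```python
import random

def model(x):
    # Hypothetical stand-in for the Hg fate model: a nonlinear
    # response of three normalized inputs in [0, 1].
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

def morris_mu_star(model, n_params, n_trajectories=20, delta=0.25, seed=1):
    """Mean absolute elementary effect (mu*) for each input parameter."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # random base point, kept away from the upper edge so x + delta stays in range
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        base = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs(model(xp) - base) / delta)
    return [sum(e) / len(e) for e in effects]

mu_star = morris_mu_star(model, 3)  # rank inputs by influence
```

A full Morris design would walk randomized trajectories over a multi-level grid; mu* then flags which parameters (here, the first two) deserve a more expensive variance-based analysis.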

  6. Method of estimating the sensitivity of a calculated nuclide vector to deviations in initial data

    International Nuclear Information System (INIS)

    Ivanov, E.A.

    1998-12-01

The application of perturbation theory algorithms in the modelling of nuclide transmutation is considered. Perturbation theory is used to construct an analytical technique of sensitivity analysis. It is shown that such algorithms should be used when modelling the lifetime performance of nuclear power installations with the Monte Carlo method. The present approach differs from others in its consistent use of analytical methods. (author)

  7. A simple in chemico method for testing skin sensitizing potential of chemicals using small endogenous molecules.

    Science.gov (United States)

    Nepal, Mahesh Raj; Shakya, Rajina; Kang, Mi Jeong; Jeong, Tae Cheon

    2018-06-01

Among the many validated methods for testing skin sensitization, the direct peptide reactivity assay (DPRA) employs no cells or animals. Although no immune cells are involved, this assay reliably predicts the skin sensitization potential of a chemical in chemico. Herein, a new method was developed using the endogenous small-molecular-weight compounds cysteamine and glutathione, rather than synthetic peptides, to differentiate skin sensitizers from non-sensitizers with an accuracy as high as that of DPRA. The percent depletion of cysteamine and glutathione by test chemicals was measured by HPLC equipped with a PDA detector. To detect small molecules such as cysteamine and glutathione, derivatization with 4-(4-dimethylaminophenylazo)benzenesulfonyl chloride (DABS-Cl) was employed prior to the HPLC analysis. Following test method optimization, a cut-off criterion of 7.14% depletion was applied to differentiate skin sensitizers from non-sensitizers, combining a ratio of 1:25 for cysteamine:test chemical with 1:50 for glutathione:test chemical for the best predictivity among the various single or combined conditions. Although overlapping HPLC peaks could not be fully resolved for some test chemicals, high levels of sensitivity (100.0%), specificity (81.8%), and accuracy (93.3%) were obtained for the 30 chemicals tested, comparable to or better than those achieved with DPRA. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. The role of sensitivity analysis in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.; Knochenhauer, M.

    1987-01-01

The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)

  9. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  10. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.

    Science.gov (United States)

    McCandless, Lawrence C; Gustafson, Paul

    2017-08-15

Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes' theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes' theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
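The MCSA side of the comparison can be sketched as follows. This is a generic illustration, not the authors' model: the bias-factor formula for a single binary unmeasured confounder and all prior choices are assumptions made for the example.

```python
import math
import random
import statistics

def mcsa_adjusted_rr(rr_obs, n_draws=5000, seed=7):
    """Monte Carlo sensitivity analysis: draw bias parameters from their
    priors (never updated by the data, unlike BSA) and correct the
    observed risk ratio with the classic bias factor for one binary
    unmeasured confounder."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        rr_cu = math.exp(rng.gauss(math.log(2.0), 0.2))  # confounder-outcome RR
        p1 = rng.uniform(0.4, 0.6)  # confounder prevalence among exposed
        p0 = rng.uniform(0.2, 0.4)  # confounder prevalence among unexposed
        bias = (p1 * (rr_cu - 1.0) + 1.0) / (p0 * (rr_cu - 1.0) + 1.0)
        draws.append(rr_obs / bias)
    draws.sort()
    lo, hi = draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)]
    return statistics.median(draws), (lo, hi)

median_rr, interval = mcsa_adjusted_rr(1.8)
```

Because the resulting interval reflects only the prior, it cannot shrink when the data rule out some confounding patterns; that is exactly the discrepancy with BSA that the paper reports.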

  11. Development of a sensitivity analysis systems in nuclear reactors through generalized perturbation theory at first order in 2 D geometries

    International Nuclear Information System (INIS)

    Garcia, Juan Matias

    2005-01-01

Perturbation methods represent a powerful tool for sensitivity analysis, and they have found many applications in nuclear engineering. As an introduction to this kind of analysis, we developed a program that applies the Generalized Perturbation Theory (GPT) method to bidimensional systems of rectangular geometry. We first consider a homogeneous system of non-multiplying material and then a heterogeneous system with a region of multiplying material, with the intention of making concrete applications of the perturbation method to nuclear engineering problems. The program, which we called Pert, determines neutron fluxes and importance functions applying multigroup diffusion theory, and also solves the integrals required to calculate sensitivity coefficients. Using these perturbation methods we verified the low computational cost required for this kind of analysis and the simplicity of the equation systems involved, allowing us to carry out elaborate sensitivity analyses for the responses of interest

  12. BEMUSE Phase III Report - Uncertainty and Sensitivity Analysis of the LOFT L2-5 Test

    International Nuclear Information System (INIS)

    Bazin, P.; Crecy, A. de; Glaeser, H.; Skorek, T.; Joucla, J.; Probst, P.; Chung, B.; Oh, D.Y.; Kyncl, M.; Pernica, R.; Macek, J.; Meca, R.; Macian, R.; D'Auria, F.; Petruzzi, A.; Perez, M.; Reventos, F.; Fujioka, K.

    2007-02-01

This report summarises the contributions of the ten participants to phase 3 of BEMUSE: Uncertainty and Sensitivity Analyses of the LOFT L2-5 experiment, a Large-Break Loss-of-Coolant-Accident (LB-LOCA). For this phase, precise step-by-step requirements were provided to the participants. Four main parts are defined: 1. List and uncertainties of the uncertain input parameters. 2. Uncertainty analysis results. 3. Sensitivity analysis results. 4. Improved methods, assessment of the methods (optional). The 5% and 95% percentiles have to be estimated for 6 output parameters, which are of two kinds: 1. Scalar output parameters (first Peak Cladding Temperature (PCT), second Peak Cladding Temperature, time of accumulator injection, time of complete quenching); 2. Time-trend output parameters (maximum cladding temperature, upper plenum pressure). The main lessons learnt from phase 3 of the BEMUSE programme are the following: - For uncertainty analysis, all the participants use a probabilistic method associated with the use of Wilks' formula, except for UNIPI with its CIAU method (Code with the capability of Internal Assessment of Uncertainty). Use of both methods has been successfully mastered. - Compared with the experiment, the results of uncertainty analysis are good on the whole. For example, for the cladding temperature-type output parameters (first PCT, second PCT, time of complete quenching, maximum cladding temperature), 8 participants out of 10 find upper and lower bounds which envelop the experimental data. - Sensitivity analysis has been successfully performed by all the participants using the probabilistic method. All the influence measures used take the range of variation of the input parameters into account. Synthesis tables of the most influential phenomena and parameters have been drawn up, and participants will be able to use them for the continuation of the BEMUSE programme
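The sample sizes behind the Wilks'-formula approach are easy to reproduce. A minimal sketch (not BEMUSE code): find the smallest number of code runs N such that the k-th largest output bounds the 95th percentile with 95% one-sided confidence.

```python
from math import comb

def wilks_sample_size(gamma=0.95, beta=0.95, order=1):
    """Smallest N such that the `order`-th largest of N independent runs
    bounds the gamma-quantile with one-sided confidence beta."""
    n = order
    while True:
        n += 1
        # P(at least `order` of the N outputs exceed the gamma-quantile)
        confidence = 1.0 - sum(
            comb(n, k) * gamma ** (n - k) * (1.0 - gamma) ** k
            for k in range(order)
        )
        if confidence >= beta:
            return n

n_runs = wilks_sample_size()  # first-order 95%/95% one-sided limit
```

First order gives the familiar 59 runs; taking the second-largest value (order=2) tightens the statistical bound at the price of more code runs.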

  13. Fluorimetry as a Simple and Sensitive Method for Determination of Catalase

    Directory of Open Access Journals (Sweden)

    Mehdi Hedayati

    2014-02-01

    Full Text Available Background: Catalase enzyme plays an important role in the anti-oxidation defense of body so it is important to measure its activity. Nowadays catalase activity measurement is performed by expensive imported kits in various scientific fields. The purpose of this study was to design a sensitive fluorimetry method for measuring catalase activity with improved sensitivity, accuracy and speed. Materials and Methods: In this study, the reaction of hydrogen peroxide with peroxidase (as a reaction accelerator was used in fluorimetry for catalase activity measuring in serum samples in order to increase the sensitivity of the assay. The sensitivity and intra- and inter-assay accuracy, verification test, recovery and parallelism tests, comparison method and correlation and coherence investigation methods were also performed. In order to increase the accuracy and speed of reading, the assay was performed in microplates and reading was done in fluorimetry plates. Results: The percentage of intra- and inter-assay variation coefficients were measured 3.8- 6.6 % and 4.1-7.3%, respectively. Comparison of the results of mentioned method for 50 serum samples with common colorimetric method showed a good correlation (0.917. In assessing the accuracy, the recovery percent was obtained 91% to 107%. The test sensitivity was measured 0.02 IU/ml. Conclusion: The fluorimetry method by microplate reading has a sufficient precision, accuracy and efficiency for catalase activity measuring as well as speed of measurement. Thus it can be an alternative method to conventional imported colorimetric methods.

  14. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

Highlights: • A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. • The thermal conductivity of UO{sub 2} is modeled as a random variable. • The first source of uncertainty is the linear heat rate. • The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during the irradiation in reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO{sub 2}, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed in different experimental irradiations of UO{sub 2} fuel. At every time step of the simulated experiments, we get a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We can then estimate confidence intervals of the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using first-order Sobol' indices.
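First-order Sobol' indices of the kind used above can be estimated with a pick-and-freeze sampler. The response below is a hypothetical two-input stand-in (think conductivity-like and heat-rate-like inputs), not METEOR/URANIE output.

```python
import random

def response(x):
    # Hypothetical stand-in for the calculated fuel temperature:
    # the second input dominates, with a small interaction term.
    return x[0] + 4.0 * x[1] + 0.5 * x[0] * x[1]

def sobol_first_order(f, n_params, n=20000, seed=3):
    """First-order indices S_i = V_i / V via the Saltelli-style
    pick-and-freeze estimator on two independent sample matrices."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(n_params):
        # AB_i: rows of A with column i taken from B
        yAB = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        Vi = sum(yb * (yab - ya) for ya, yb, yab in zip(yA, yB, yAB)) / n
        indices.append(Vi / var)
    return indices

S = sobol_first_order(response, 2)
```

For this near-linear response the indices nearly sum to one; a large gap between their sum and 1 would signal strong interactions, which total-order indices would then quantify.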

  15. LHS (latin hypercubes) sampling of the material properties of steels for the analysis of the global sensitivity in welding numerical simulation

    International Nuclear Information System (INIS)

    Petelet, Matthieu; Asserin, Olivier; Iooss, Bertrand; Petelet, Matthieu; Loredo, Alexandre

    2006-01-01

In this work, a sensitivity analysis method is used to identify the input data most influential on the variability of the responses (residual stresses and distortions). Classically, sensitivity analysis is carried out locally, which limits its validity domain to a given material. A global sensitivity analysis method is proposed; it covers a material domain as wide as that of the steel series. A probabilistic model of the variability of the material parameters within the steel series is proposed. The original aspect of this work is the use of Latin hypercube sampling (LHS) of the material parameters, which form the (temperature-dependent) input data of the numerical simulations. Thus, a statistical approach has been applied to welding numerical simulation: LHS sampling of the material properties followed by global sensitivity analysis, which allowed the material parameterization to be reduced. (O.M.)
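The LHS design itself is compact to implement. A minimal sketch on the unit hypercube (mapping the strata onto real, temperature-dependent parameter ranges is left out):

```python
import random

def latin_hypercube(n_samples, n_params, seed=42):
    """Latin hypercube sample on [0, 1)^d: each parameter's range is cut
    into n_samples equal strata and each stratum is hit exactly once."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one jittered point per stratum, in shuffled order
        column = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    # transpose: one row per sampled parameter set
    return list(map(list, zip(*columns)))

# e.g. 10 material-parameter sets over 5 uncertain inputs
design = latin_hypercube(10, 5)
```

Compared with plain Monte Carlo, the stratification guarantees that each parameter's whole range is covered even with few simulation runs, which is the point of using LHS with expensive welding simulations.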

  16. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    International Nuclear Information System (INIS)

    Qiu, Yishu; Liang, Jingang; Wang, Kan; Yu, Jiankai

    2015-01-01

    Highlights: • Data decomposition techniques are proposed for memory reduction. • New strategies are put forward and implemented in RMC code to improve efficiency and accuracy for sensitivity calculations. • A capability to compute region-specific sensitivity coefficients is developed in RMC code. - Abstract: The iterated fission probability (IFP) method has been demonstrated to be an accurate alternative for estimating the adjoint-weighted parameters in continuous-energy Monte Carlo forward calculations. However, the memory requirements of this method are huge especially when a large number of sensitivity coefficients are desired. Therefore, data decomposition techniques are proposed in this work. Two parallel strategies based on the neutron production rate (NPR) estimator and the fission neutron population (FNP) estimator for adjoint fluxes, as well as a more efficient algorithm which has multiple overlapping blocks (MOB) in a cycle, are investigated and implemented in the continuous-energy Reactor Monte Carlo code RMC for sensitivity analysis. Furthermore, a region-specific sensitivity analysis capability is developed in RMC. These new strategies, algorithms and capabilities are verified against analytic solutions of a multi-group infinite-medium problem and against results from other software packages including MCNP6, TSUANAMI-1D and multi-group TSUNAMI-3D. While the results generated by the NPR and FNP strategies agree within 0.1% of the analytic sensitivity coefficients, the MOB strategy surprisingly produces sensitivity coefficients exactly equal to the analytic ones. Meanwhile, the results generated by the three strategies in RMC are in agreement with those produced by other codes within a few percent. Moreover, the MOB strategy performs the most efficient sensitivity coefficient calculations (offering as much as an order of magnitude gain in FoMs over MCNP6), followed by the NPR and FNP strategies, and then MCNP6. The results also reveal that these

  17. Spectroscopic Chemical Analysis Methods and Apparatus

    Science.gov (United States)

    Hug, William F. (Inventor); Reid, Ray D. (Inventor); Bhartia, Rohit (Inventor); Lane, Arthur L. (Inventor)

    2018-01-01

    Spectroscopic chemical analysis methods and apparatus are disclosed which employ deep ultraviolet (e.g. in the 200 nm to 300 nm spectral range) electron beam pumped wide bandgap semiconductor lasers, incoherent wide bandgap semiconductor light emitting devices, and hollow cathode metal ion lasers to perform non-contact, non-invasive detection of unknown chemical analytes. These deep ultraviolet sources enable dramatic size, weight and power consumption reductions of chemical analysis instruments. In some embodiments, Raman spectroscopic detection methods and apparatus use ultra-narrow-band angle tuning filters, acousto-optic tuning filters, and temperature tuned filters to enable ultra-miniature analyzers for chemical identification. In some embodiments Raman analysis is conducted along with photoluminescence spectroscopy (i.e. fluorescence and/or phosphorescence spectroscopy) to provide high levels of sensitivity and specificity in the same instrument.

  18. Sensitivity Analysis of Corrosion Rate Prediction Models Utilized for Reinforced Concrete Affected by Chloride

    Science.gov (United States)

    Siamphukdee, Kanjana; Collins, Frank; Zou, Roger

    2013-06-01

    Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
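Of the three comparison methods, the sensitivity index is the simplest to reproduce. The corrosion-rate model below is a hypothetical stand-in (the paper compares nine published models), and the index used is the common one-at-a-time form SI = (y_max - y_min) / y_max.

```python
def corrosion_rate(resistivity, chloride, duration):
    # Hypothetical stand-in for a corrosion-rate model: rate rises with
    # chloride content and falls with resistivity and corrosion duration.
    return 10.0 * chloride / (1.0 + 0.05 * resistivity) / (1.0 + 0.1 * duration)

def sensitivity_index(model, base, name, low, high):
    """SI = (y_max - y_min) / y_max, varying one input over its plausible
    range while the other inputs stay fixed at their base values."""
    ys = []
    for value in (low, high):
        args = dict(base)
        args[name] = value
        ys.append(model(**args))
    y_min, y_max = min(ys), max(ys)
    return (y_max - y_min) / y_max

base = {"resistivity": 20.0, "chloride": 1.0, "duration": 5.0}
si_res = sensitivity_index(corrosion_rate, base, "resistivity", 5.0, 100.0)
si_cl = sensitivity_index(corrosion_rate, base, "chloride", 0.2, 3.0)
```

This index only sees the endpoints, so it is reliable for monotonic responses; the regression-based methods in the paper complement it when inputs interact.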

  19. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  20. Local sensitivity analysis for inverse problems solved by singular value decomposition

    Science.gov (United States)

    Hill, M.C.; Nolan, B.T.

    2010-01-01

regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and/or sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo, given common nonlinear processes and the often even more nonlinear models.

  1. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. Emphasis is placed on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions: SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of the observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...

  2. X-ray fluorescence method for trace analysis and imaging

    International Nuclear Information System (INIS)

    Hayakawa, Shinjiro

    2000-01-01

X-ray fluorescence analysis has a long history as a conventional bulk elemental analysis technique with medium sensitivity. However, with the use of synchrotron radiation, the x-ray fluorescence method has become a unique analytical technique which can provide trace elemental information with spatial resolution. To obtain quantitative information on trace elemental distributions with the x-ray fluorescence method, a theoretical description of the x-ray fluorescence yield is given. Moreover, methods and instruments for trace characterization with a scanning x-ray microprobe are described. (author)

  3. Contributions to sensitivity analysis and generalized discriminant analysis; Contributions a l'analyse de sensibilite et a l'analyse discriminante generalisee

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, J

    2005-12-15

Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations of its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  4. Overview and application of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) toolbox

    Science.gov (United States)

    For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...

  5. Studies on analytical method and nondestructive measuring method on the sensitization of austenitic stainless steels

    International Nuclear Information System (INIS)

    Onimura, Kichiro; Arioka, Koji; Horai, Manabu; Noguchi, Shigeru.

    1982-03-01

Austenitic stainless steels are widely used as structural materials for the machines and equipment of various kinds of plants, such as thermal power, nuclear power, and chemical plants. Machines and equipment made of this kind of material, however, may suffer corrosion damage while in service, and in some cases these damages are considered to be largely due to sensitization of the material. It is therefore necessary, in order to prevent corrosion damage, to develop an analytical method for grasping the sensitization of the material in more detail and a quantitative nondestructive measuring method applicable to various kinds of structures. From the above viewpoint, studies have been made on an analytical method based on the theory of diffusion of chromium in austenitic stainless steels and on the Electrochemical Potentiokinetic Reactivation method (EPR method) as a nondestructive measuring method, using 304 and 316 austenitic stainless steels having different carbon contents in the base metals. This paper introduces the results of the EPR test on the sensitization of austenitic stainless steels and the correlation between analytical and experimental results. (author)

  6. A relative quantitative Methylation-Sensitive Amplified Polymorphism (MSAP) method for the analysis of abiotic stress

    OpenAIRE

Bednarek, Piotr T.; Orłowska, Renata; Niedziela, Agnieszka

    2017-01-01

    Background We present a new methylation-sensitive amplified polymorphism (MSAP) approach for the evaluation of relative quantitative characteristics such as demethylation, de novo methylation, and preservation of methylation status of CCGG sequences, which are recognized by the isoschizomers HpaII and MspI. We applied the technique to analyze aluminum (Al)-tolerant and non-tolerant control and Al-stressed inbred triticale lines. The approach is based on detailed analysis of events affecting H...

  7. Global sensitivity analysis in wastewater treatment plant model applications: Prioritizing sources of uncertainty

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Neumann, Marc B.

    2011-01-01

This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R2...), providing insight into devising useful ways for reducing uncertainties in the plant performance. This information can help engineers design robust WWTP plants.
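The standardized regression coefficients (SRCs) used in such a sampling-based analysis can be sketched as follows; the plant criterion below is a hypothetical linear stand-in, not BSM1 output, and the inputs are assumed independent.

```python
import random

def plant_criterion(x):
    # Hypothetical linear stand-in for a plant performance criterion
    # driven by three uncertain inputs.
    return 3.0 * x[0] - 1.5 * x[1] + 0.2 * x[2]

def standardized_regression_coefficients(model, n_params, n=4000, seed=11):
    """SRCs beta_i = b_i * sd(x_i) / sd(y), with the slope b_i estimated
    from a Monte Carlo sample of independent inputs; sum(beta_i^2) close
    to 1 signals a near-linear model (high R2)."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    y = [model(x) for x in X]

    def cov(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

    sd_y = cov(y, y) ** 0.5
    betas = []
    for i in range(n_params):
        xi = [x[i] for x in X]
        b = cov(xi, y) / cov(xi, xi)  # per-input least-squares slope
        betas.append(b * cov(xi, xi) ** 0.5 / sd_y)
    return betas

betas = standardized_regression_coefficients(plant_criterion, 3)
```

Each beta is dimensionless, so the inputs can be ranked directly by |beta|; when sum(beta^2) drops well below 1, the linear decomposition no longer explains the variance and variance-based methods are needed instead.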

  8. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    Science.gov (United States)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow an understanding of the half-life ranges that the detector can probe, and an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
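
    The Metropolis-Hastings algorithm referred to above is the sampler underlying BAT. As an illustration only (not the CUORE analysis code; the function name, step size, and toy target are hypothetical), a minimal random-walk Metropolis-Hastings sampler for a one-dimensional log-posterior can be sketched as:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=30000, step=1.0, seed=1):
    """Minimal random-walk Metropolis-Hastings sampler for a 1-D target.
    `log_post` returns the log of the (unnormalized) posterior density."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(proposal)
        if math.log(random.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = proposal, lp_prop
        samples.append(x)                              # keep current state either way
    return samples

# Toy target: a standard normal, so the chain's mean and variance
# should approach 0 and 1 as the chain grows.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

    Hamiltonian Monte Carlo replaces the blind random-walk proposal with gradient-guided trajectories, which is the efficiency gain the abstract investigates.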

  9. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis, using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from the three selected febrile patients was subjected to each of the three methods of gene expression analysis. In this comparison, the detection limits of one-step RT-PCR and one-step RT-LAMP were the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and the experimental result is easy to determine. One-step RT-LAMP is thus a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  10. Sensitivity and Uncertainty Analysis for coolant void reactivity in a CANDU Fuel Lattice Cell Model

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Seung Yeol; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2016-10-15

    In this study, the method based on eigenvalue perturbation theory (EPBM) is implemented in the Seoul National University Monte Carlo (MC) code McCARD, which has a k uncertainty evaluation capability based on the adjoint-weighted perturbation (AWP) method. The EPBM utilizes the first-order AWP technique to estimate the sensitivity of a k-eigenvalue difference, and can be easily applied in any S/U analysis code system equipped with an eigenvalue sensitivity calculation capability. The implementation is verified by comparing the sensitivities of the k-eigenvalue difference to the microscopic cross sections computed by the EPBM with those obtained by direct subtraction for the TMI-1 pin-cell problem, showing good agreement with the reference solution. The McCARD S/U analysis is then performed with the EPBM module to evaluate the uncertainty of the coolant void reactivity (CVR) in a CANDU fuel lattice model due to the ENDF/B-VII.1 covariance data, using sensitivities estimated by the EPBM. The results show that the uncertainty contributions of nu of {sup 235}U and the gamma reaction of {sup 238}U are dominant.

  11. The delayed neutron method of uranium analysis

    International Nuclear Information System (INIS)

    Wall, T.

    1989-01-01

    The technique of delayed neutron analysis (DNA) is discussed. The DNA rig installed on the MOATA reactor, the assay standards and the types of samples which have been assayed are described. Of the total sample throughput of about 55,000 units since the uranium analysis service began, some 78% has been concerned with analysis of uranium ore samples derived from mining and exploration. Delayed neutron analysis provides a high sensitivity, low cost uranium analysis method for both uranium exploration and other applications. It is particularly suitable for analysis of large batch samples and for non-destructive analysis over a wide range of matrices. 8 refs., 4 figs., 3 tabs

  12. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamic performance in the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  13. Global sensitivity analysis of bogie dynamics with respect to suspension components

    International Nuclear Information System (INIS)

    Mousavi Bideleh, Seyed Milad; Berbyuk, Viktor

    2016-01-01

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamic performance in the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  14. Analysis of Methanol Sensitivity on SnO2-ZnO Nanocomposite

    Science.gov (United States)

    Bassey, Enobong E.; Sallis, Philip; Prasad, Krishnamachar

    This research reports on the sensing behavior of a nanocomposite of tin dioxide (SnO2) and zinc oxide (ZnO). SnO2-ZnO nanocomposites were fabricated into sensor devices by the radio frequency sputtering method and used to characterize the devices' sensitivity to methanol vapor. The sensor devices were subjected to a methanol concentration of 200 ppm at operating temperatures of 150, 250 and 350 °C. A fractional difference model was used to normalize the sensor response and determine the sensors' sensitivity to methanol. The SnO2-ZnO sensors were most sensitive to methanol at 350 °C, followed by 250 and 150 °C. Supported by morphology (FE-SEM, AFM) analyses of the thin films, this sensitivity behavior confirmed that the nanoparticles of coupled SnO2-ZnO nanocomposites can promote charge transport and can be used to fine-tune the sensitivity to methanol and the sensor selectivity to a desired target gas.
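
    The "fractional difference" normalization mentioned above is commonly written for metal-oxide gas sensors as the resistance change relative to the baseline resistance. A minimal sketch (the function name, example resistances, and sign convention are assumptions, not taken from the paper):

```python
def fractional_response(r_baseline, r_gas):
    """Fractional-difference sensor response |R_baseline - R_gas| / R_baseline,
    a common normalization for metal-oxide gas sensor readings."""
    return abs(r_baseline - r_gas) / r_baseline

# A resistance drop from 1000 ohm in clean air to 250 ohm in methanol
# vapor corresponds to a normalized response of 0.75.
print(fractional_response(1000.0, 250.0))  # 0.75
```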

  15. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

    Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2–8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
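
    Of the qualitative screening methods listed above, the Morris method is the easiest to sketch. The following is an illustrative elementary-effects implementation on the unit hypercube, not the Common Land Model setup; the function names, trajectory count, and toy model are all assumptions:

```python
import numpy as np

def morris_screening(model, n_params, n_trajectories=50, delta=0.25, seed=0):
    """Morris elementary-effects screening on the unit hypercube.
    Returns (mu_star, sigma): mu_star ranks the overall influence of each
    input; a large sigma flags nonlinearity or interaction effects."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)  # leave room for +delta
        y = model(x)
        for i in rng.permutation(n_params):  # one-at-a-time perturbations
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[t, i] = (y_new - y) / delta  # elementary effect of input i
            x, y = x_new, y_new                  # walk the trajectory onward
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy model with five inputs, of which only the first two are influential.
toy = lambda x: 3.0 * x[0] + x[1] ** 2
mu_star, sigma = morris_screening(toy, n_params=5)
```

    For the toy model, mu_star is largest for the first input, nonzero for the second, and zero for the three inert inputs, which is exactly the screening behavior the study relies on.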

  16. Sensitivity analysis and comparison of two methods of using heart rate to represent energy expenditure during walking.

    Science.gov (United States)

    Karimi, Mohammad Taghi

    2015-01-01

    Heart rate is an accurate and easy-to-use measure for representing energy expenditure during walking, via the physiological cost index (PCI). However, in some conditions the heart rate during walking does not reach a steady state, and it is then not possible to determine the energy expenditure by use of the PCI. The total heart beat index (THBI) is a newer method intended to solve this problem. The aim of this research project was to find the sensitivity of both the physiological cost index (PCI) and the total heart beat index (THBI). Fifteen normal subjects, ten patients with flatfoot disorder and two subjects with spinal cord injury were recruited in this research project. The PCI and THBI were determined from heart beats with respect to walking speed and total distance walked, respectively. The sensitivity of the PCI was higher than that of the THBI in all three groups of subjects. Although the PCI and THBI are easy-to-use and reliable parameters for representing energy expenditure during walking, their sensitivity is not high enough to detect the influence of some orthotic interventions, such as the use of insoles or shoes, on energy expenditure during walking.
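
    The two indices compared above have simple standard definitions: PCI is the heart-rate rise above rest divided by walking speed (beats per metre), and THBI is the total beats counted over the walk divided by total distance. A sketch with hypothetical example numbers (the exact protocol details of the study are not reproduced here):

```python
def pci(hr_walking, hr_resting, speed_m_per_min):
    """Physiological cost index in beats/metre:
    (steady-state walking HR - resting HR) / walking speed, HR in beats/min."""
    return (hr_walking - hr_resting) / speed_m_per_min

def thbi(total_heart_beats, total_distance_m):
    """Total heart beat index in beats/metre:
    total beats counted over the walk / total distance walked."""
    return total_heart_beats / total_distance_m

# Example: HR rises from 70 to 100 beats/min while walking at 60 m/min.
print(pci(100, 70, 60))   # 0.5
# Example: 540 beats counted over a 900 m walk.
print(thbi(540, 900))     # 0.6
```

    The contrast is visible in the formulas: PCI needs a steady-state walking heart rate, while THBI only needs cumulative beats and distance, which is why it remains usable when no steady state is reached.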

  17. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  18. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    Science.gov (United States)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used among the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of a metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.

  19. Extracting sensitive spectrum bands of rapeseed using multiscale multifractal detrended fluctuation analysis

    Science.gov (United States)

    Jiang, Shan; Wang, Fang; Shen, Luming; Liao, Guiping; Wang, Lin

    2017-03-01

    Spectrum technology has been widely used in non-destructive crop testing and diagnosis for crop information acquisition. Since a spectrum covers a wide range of bands, it is of critical importance to extract the sensitive bands. In this paper, we propose a methodology to extract the sensitive spectrum bands of rapeseed using multiscale multifractal detrended fluctuation analysis. The obtained sensitive bands are relatively robust in the range of 534–574 nm. Further, by using the multifractal parameter (Hurst exponent) of the extracted sensitive bands, we propose a prediction model to forecast the Soil and Plant Analyzer Development (SPAD) values, often used as a parameter to indicate chlorophyll content, and an identification model to distinguish different planting patterns. Three vegetation indices (VIs) based on previous work are used for comparison. Three evaluation indicators, namely the root mean square error, the correlation coefficient, and the relative error, employed in the SPAD value prediction model all demonstrate that our Hurst exponent has the best performance. Four rapeseed compound planting factors, namely seeding method, planting density, fertilizer type, and weed control method, are considered in the identification model. The Youden indices calculated by the random decision forest method and the K-nearest neighbor method show that our Hurst exponent is superior to the other three VIs, and to their combination, for the seeding method factor. In addition, there is no significant difference among the five features for the other three planting factors. This interesting finding suggests that transplanting and direct seeding make a big difference in the growth of rapeseed.

  20. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of the sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  1. Photocleavable DNA barcode-antibody conjugates allow sensitive and multiplexed protein analysis in single cells.

    Science.gov (United States)

    Agasti, Sarit S; Liong, Monty; Peterson, Vanessa M; Lee, Hakho; Weissleder, Ralph

    2012-11-14

    DNA barcoding is an attractive technology, as it allows sensitive and multiplexed target analysis. However, DNA barcoding of cellular proteins remains challenging, primarily because barcode amplification and readout techniques are often incompatible with the cellular microenvironment. Here we describe the development and validation of a photocleavable DNA barcode-antibody conjugate method for rapid, quantitative, and multiplexed detection of proteins in single live cells. Following target binding, this method allows DNA barcodes to be photoreleased in solution, enabling easy isolation, amplification, and readout. As a proof of principle, we demonstrate sensitive and multiplexed detection of protein biomarkers in a variety of cancer cells.

  2. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
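
    As a toy illustration of the idea described above (estimate a variance-based sensitivity index from model samples, then attach a bootstrap confidence interval to quantify the estimation error), the following sketch uses a crude quantile-binning estimator of Var(E[Y|Xi])/Var(Y) rather than the authors' meta-model machinery; every name, bin count, and test function here is illustrative:

```python
import numpy as np

def first_order_index(x, y, n_bins=10):
    """Crude first-order sensitivity index Var(E[Y|X_i]) / Var(Y),
    estimated by quantile-binning a single input column x."""
    bins = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    between_var = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return between_var / y.var()

def bootstrap_ci(x, y, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the index."""
    rng = np.random.default_rng(seed)
    n = len(y)
    stats = [first_order_index(x[r], y[r])
             for r in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy model where the first input dominates the output variance.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=2000), rng.uniform(size=2000)
y = 4.0 * x1 + 0.5 * x2
s1 = first_order_index(x1, y)
lo, hi = bootstrap_ci(x1, y)
```

    For this toy model the analytic first-order index of x1 is about 0.98, and the bootstrap interval conveys how much of the gap from that value is estimation noise, which is the drawback of point estimates the abstract addresses.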

  3. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p ... Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.

  4. Cross section and method uncertainties: the application of sensitivity analysis to study their relationship in radiation transport benchmark problems

    International Nuclear Information System (INIS)

    Weisbin, C.R.; Oblow, E.M.; Ching, J.; White, J.E.; Wright, R.Q.; Drischler, J.

    1975-08-01

    Sensitivity analysis is applied to the study of an air transport benchmark calculation to quantify and distinguish between cross-section and method uncertainties. The boundary detector response was converged with respect to spatial and angular mesh size, P{sub l} expansion of the scattering kernel, and the number and location of energy grid boundaries. The uncertainty in the detector response due to uncertainties in nuclear data is 17.0 percent (one standard deviation, not including uncertainties in energy and angular distribution) based upon the ENDF/B-IV 'error files' including correlations in energy and reaction type. Differences of approximately 6 percent can be attributed exclusively to differences in processing multigroup transfer matrices. Formal documentation of the PUFF computer program for the generation of multigroup covariance matrices is presented. (47 figures, 14 tables) (U.S.)

  5. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations.

  6. Neutron activation analysis: principle and methods

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.

    2006-01-01

    Neutron activation analysis (NAA) is a powerful isotope-specific nuclear analytical technique for simultaneous determination of the elemental composition of major, minor and trace elements in diverse matrices. The technique is capable of yielding high analytical sensitivity and low detection limits (ppm to ppb). Due to the high penetrating power of neutrons and gamma rays, NAA experiences negligible matrix effects in samples of different origins. Depending on the sample matrix and the element of interest, the NAA technique is used either non-destructively, known as instrumental neutron activation analysis (INAA), or through chemical NAA methods. The present article describes the principle of NAA and its different methods, and gives an overview of some applications in fields such as the environment, biology, geology, materials science, nuclear technology and forensic science. (author)
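
    The sensitivity of NAA rests on the induced radioactivity, which follows the standard activation equation A = N·σ·φ·(1 − e^(−λ·t_irr))·e^(−λ·t_decay). A small numerical sketch of that textbook formula (the nuclide numbers below are invented for illustration, not taken from the article):

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_decay_s=0.0):
    """Induced activity (decays/s) from the standard activation equation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr)) * exp(-lambda * t_decay),
    with lambda = ln(2) / half-life."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)   # build-up toward N*sigma*phi
    decay = math.exp(-lam * t_decay_s)            # cooling before counting
    return n_atoms * sigma_cm2 * flux * saturation * decay

# Irradiating for many half-lives saturates the activity at N*sigma*phi:
# 1e18 atoms, 1 barn (1e-24 cm^2), flux 1e13 n/cm^2/s -> about 1e7 decays/s.
a_sat = induced_activity(1e18, 1e-24, 1e13, half_life_s=60.0, t_irr_s=6000.0)
```

    Irradiating for exactly one half-life instead gives half the saturation activity, which is why irradiation time is one of the handles used to tune sensitivity.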

  7. The sensitivity analysis as a method of quantifying the degree of uncertainty

    Directory of Open Access Journals (Sweden)

    Manole Tatiana

    2013-01-01

    In this article the author discusses the uncertainty inherent in any proposed investment or government policy. Given this situation, it is necessary to analyze the projects proposed for implementation and, from multiple choices, to choose the project that is most advantageous. This is a general principle. Financial science provides researchers with a set of tools with which to identify the best project. The author examines three projects that have the same features, applying to them various methods of financial analysis, such as net present value (NPV), the discount rate (SAR), recovery time (TR), additional income (VS) and return on investment (RR). All these tools of financial analysis belong to cost-benefit analysis (CBA) and aim at the efficient use of the public money invested, in order to achieve successful performance.
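
    Of the indicators listed, net present value and recovery (payback) time are the easiest to sketch in code; the cash-flow figures below are hypothetical and the function names are not from the article:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[0..T], with cashflows[0] at time zero:
    NPV = sum_t CF_t / (1 + r)^t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Years until the cumulative (undiscounted) cash flow turns non-negative;
    returns None if the investment is never recovered."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0.0:
            return t
    return None

# Hypothetical project: invest 1000 now, recover 400/year for 4 years.
project = [-1000, 400, 400, 400, 400]
print(round(npv(0.10, project), 2))   # 267.95
print(payback_period(project))        # 3
```

    Ranking competing projects by such indicators, and seeing whether the ranking survives changes in the discount rate, is exactly the sensitivity-analysis use the article describes.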

  8. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  9. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First, equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process with respect to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  10. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    Science.gov (United States)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition that were hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  11. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.

  12. Sensitivity of radiation methods of diagnosis of electric potentials in dielectric materials

    International Nuclear Information System (INIS)

    Sapozhkov, Yu.I.; Smekalin, L.F.; Yagushkin, N.I.

    1985-01-01

On the basis of the albedo method, the characteristics of radiation methods for diagnosing electric potentials inside dielectrics, such as sensitivity and resolution, are considered. Investigations are carried out for electron energies of tens of keV. It is shown that the sensitivity to the electric field in the dielectric volume drops as the energy grows. Increasing the target atomic number Z reduces the sensitivity approximately as 1/ln Z. The resolution of the albedo method is constant over the investigated energy range. The results obtained testify to the usability of radiation diagnostic methods for monitoring electric fields in dielectric structural materials during operation

  13. Robust prediction of anti-cancer drug sensitivity and sensitivity-specific biomarker.

    Directory of Open Access Journals (Sweden)

    Heewon Park

Full Text Available The personal genomics era has attracted a large amount of attention for anti-cancer therapy by patient-specific analysis. Patient-specific analysis enables discovery of individual genomic characteristics for each patient, and thus we can effectively predict individual genetic risk of disease and perform personalized anti-cancer therapy. Although existing methods for patient-specific analysis have successfully uncovered crucial biomarkers, their performance takes a sudden turn for the worse in the presence of outliers, since the methods are based on non-robust estimators. In practice, clinical and genomic alteration datasets usually contain outliers from various sources (e.g., experimental error, coding error), and the outliers may significantly affect the result of patient-specific analysis. We propose a robust methodology for patient-specific analysis in line with the NetworkProfiler. In the proposed method, outliers in high-dimensional gene expression levels and drug response datasets are simultaneously controlled by a robust Mahalanobis distance in a robust principal component space. Thus, we can effectively predict anti-cancer drug sensitivity and identify sensitivity-specific biomarkers for individual patients. We observe through Monte Carlo simulations that the proposed robust method performs outstandingly for predicting the response variable in the presence of outliers. We also apply the proposed methodology to the Sanger dataset in order to uncover cancer biomarkers and predict anti-cancer drug sensitivity, and show the effectiveness of our method.
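A minimal, numpy-only sketch of outlier scoring by a robust Mahalanobis-type distance in principal component space; the paper's actual estimator (robust PCA) likely differs in detail, and the data here are synthetic:

```python
import numpy as np

def robust_pc_outlier_scores(X, n_components=2):
    """Robust Mahalanobis-type distance in principal component space.

    Simplified sketch: median-centered classical PCA, then per-component
    robust scaling with median/MAD (a stand-in for full robust PCA).
    """
    Xc = X - np.median(X, axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    scores = Xc @ Vt[:n_components].T
    med = np.median(scores, axis=0)
    mad = 1.4826 * np.median(np.abs(scores - med), axis=0)
    z = (scores - med) / mad
    return np.sqrt((z ** 2).sum(axis=1))   # one robust distance per sample

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))             # 100 "samples", 20 "genes"
X[0] += 10.0                               # implant a single gross outlier
d = robust_pc_outlier_scores(X)
print(int(np.argmax(d)))                   # the implanted outlier should rank first
```

Because the centering and scaling use medians rather than means, the implanted outlier inflates its own distance without dragging the reference statistics along with it.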

  14. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration subset (55%), a training subset (27%), and a test subset (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid; it shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
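The "error cell" criterion is easy to reproduce on synthetic grids; the head and elevation fields below are invented stand-ins for the ANFIS output and the digital elevation model:

```python
import numpy as np

# Hypothetical gridded fields (values are made up for illustration).
rng = np.random.default_rng(1)
nx = ny = 40
soil = 100.0 + rng.normal(0.0, 2.0, size=(ny, nx))       # ground surface (m)
head = soil - 3.0 + rng.normal(0.0, 1.5, size=(ny, nx))  # water table (m)

# "Error cells": head predicted above the ground surface, which is
# physically suspect here, so fewer error cells means a better model.
error_cells = int(np.sum(head > soil))
error_rate = error_cells / head.size
print(error_cells, round(error_rate, 3))
```

Ranking candidate models by this count is then a one-line comparison of their `error_cells` values.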

  15. New experimental and analysis methods in I-DLTS

    International Nuclear Information System (INIS)

    Pandey, S.U.; Middelkamp, P.; Li, Z.; Eremin, V.

    1998-02-01

    A new experimental apparatus to perform I-DLTS measurements is presented. The method is shown to be faster and more sensitive than traditional double boxcar I-DLTS systems. A novel analysis technique utilizing multiple exponential fits to the I-DLTS signal from a highly neutron irradiated silicon sample is presented with a discussion of the results. It is shown that the new method has better resolution and can deconvolute overlapping peaks more accurately than previous methods
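A multiple-exponential fit of the kind described can be sketched with scipy's curve_fit; the two-component transient below is synthetic, with illustrative amplitudes and time constants:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Sum of two exponential decays, the simplest multi-exponential model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic I-DLTS-like transient with two overlapping decay components
# (amplitudes and time constants are invented, not from the paper).
rng = np.random.default_rng(2)
t = np.linspace(0.1, 200.0, 400)
y = two_exp(t, 1.0, 5.0, 0.5, 50.0) + rng.normal(0.0, 0.005, t.size)

# Nonlinear least squares deconvolutes the overlapping components.
popt, _ = curve_fit(two_exp, t, y, p0=[0.8, 4.0, 0.4, 40.0])
a1, tau1, a2, tau2 = popt
print(round(tau1, 1), round(tau2, 1))   # recovered time constants
```

In practice the number of components and the initial guesses matter; a third exponential would be added the same way if the residuals warranted it.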

  16. Sensitivity analysis of VERA-CS and FRAPCON coupling in a multiphysics environment

    International Nuclear Information System (INIS)

    Blakely, Cole; Zhang, Hongbin; Ban, Heng

    2018-01-01

Highlights: •VERA-CS and FRAPCON coupling. •Uncertainty quantification and sensitivity analysis for coupled VERA-CS and FRAPCON simulations in the multiphysics environment LOTUS. -- Abstract: A demonstration and description of the LOCA Toolkit for US light water reactors (LOTUS) is presented. Through LOTUS, the core simulator VERA-CS developed by CASL is coupled with the fuel performance code FRAPCON. The coupling is performed with consistent uncertainty propagation, with all model inconsistencies documented. Monte Carlo sampling is performed on a single 17 × 17 fuel assembly with a three-cycle depletion case. Both uncertainty quantification (UQ) and sensitivity analysis (SA) are used at multiple states within the simulation to elucidate the behavior of minimum departure from nucleate boiling ratio (MDNBR), maximum fuel centerline temperature (MFCT), and gap conductance at peak power (GCPP). The SA metrics used are the Pearson correlation coefficient, Sobol sensitivity indices, and density-based, delta moment-independent measures. Results for MDNBR show consistency among all SA measures, as well as across all states throughout the fuel lifecycle. MFCT results contain consistent rankings between SA measures, but show differences throughout the lifecycle. GCPP exhibits predominantly linear relations at low and high burnup, but highly nonlinear relations at intermediate burnup due to abrupt shifts between models. Such behavior is largely undetectable to traditional regression or variance-based methods and demonstrates the utility of density-based methods.
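Why different SA measures can disagree is easy to demonstrate: for a non-monotonic input-output relation, a linear measure such as the Pearson coefficient reports almost nothing, while a variance-based first-order index does not. The toy model below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x1 = rng.uniform(-1.0, 1.0, n)
x2 = rng.uniform(-1.0, 1.0, n)
y = x1 ** 2 + 0.1 * x2          # non-monotonic in x1, weakly linear in x2

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def first_order_index(x, y, bins=50):
    """Crude first-order sensitivity index Var(E[Y|X]) / Var(Y),
    estimated by binning X (a correlation-ratio estimator)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return float(var_cond / y.var())

print(round(pearson(x1, y), 2))            # near 0: linear measure misses x1
print(round(first_order_index(x1, y), 2))  # large: variance-based finds x1
```

Density-based delta measures go one step further still, catching distributional shifts that leave even the variance unchanged.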

  17. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. The objective was to investigate allergen sensitization characteristics according to sex. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.

  18. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    Science.gov (United States)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  19. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    Science.gov (United States)

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called 'law of propagation of uncertainties' have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
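The claim about linear models can be checked numerically: for Y = c1·X1 + c2·X2 with independent inputs, the variance-based first-order indices are exactly the normalized law-of-propagation terms (ci·ui)². A short sketch with arbitrary values:

```python
import numpy as np

# Linear measurement model Y = c1*X1 + c2*X2 with independent inputs.
# The GUM's law of propagation gives u(Y)^2 = (c1*u1)^2 + (c2*u2)^2, and
# the first-order Sobol indices are exactly those terms divided by u(Y)^2.
c1, c2, u1, u2 = 2.0, 3.0, 0.5, 0.2
analytic = np.array([(c1 * u1) ** 2, (c2 * u2) ** 2])
analytic /= analytic.sum()

rng = np.random.default_rng(4)
n = 100000
x1 = rng.normal(0.0, u1, n)
x2 = rng.normal(0.0, u2, n)
y = c1 * x1 + c2 * x2

# First-order index of X1 from the conditional-variance definition
# S1 = Var(E[Y|X1]) / Var(Y); for this model E[Y|X1] = c1*X1.
s1_mc = float(np.var(c1 * x1) / np.var(y))
print(round(float(analytic[0]), 3), round(s1_mc, 3))
```

The Monte Carlo estimate and the propagation-of-uncertainty ratio agree, which is the article's point that sensitivity analysis adds nothing new in the linear case.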

  1. Nonintrusive Polynomial Chaos Expansions for Sensitivity Analysis in Stochastic Differential Equations

    KAUST Repository

Jimenez, M. Navarro; Le Maître, O. P.; Knio, Omar

    2017-01-01

    A Galerkin polynomial chaos (PC) method was recently proposed to perform variance decomposition and sensitivity analysis in stochastic differential equations (SDEs), driven by Wiener noise and involving uncertain parameters. The present paper extends the PC method to nonintrusive approaches enabling its application to more complex systems hardly amenable to stochastic Galerkin projection methods. We also discuss parallel implementations and the variance decomposition of the derived quantity of interest within the framework of nonintrusive approaches. In particular, a novel hybrid PC-sampling-based strategy is proposed in the case of nonsmooth quantities of interest (QoIs) but smooth SDE solution. Numerical examples are provided that illustrate the decomposition of the variance of QoIs into contributions arising from the uncertain parameters, the inherent stochastic forcing, and joint effects. The simulations are also used to support a brief analysis of the computational complexity of the method, providing insight on the types of problems that would benefit from the present developments.

  3. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for continuous-time, discrete-space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled fault-tree/Markov-chain technique, i.e., the physical system is abstracted using the fault tree as the high-level interface, which is then automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to the CTMC equations are performed to study the influence of uncertainties in input data on the reliability measures and to gain confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented in the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using the Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF accelerator-system facilities. The reliability studies using the Markov chain have been concentrated on the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have also been performed.
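A minimal CTMC reliability calculation of the kind ASAP accelerates can be sketched as follows; the three-state parallel-system model and its rates are invented, and the parameter sensitivity is obtained here by brute-force finite differences rather than the adjoint procedure:

```python
import numpy as np
from scipy.linalg import expm

def availability(lam, mu=1e-1, t=1000.0):
    """Mission availability of a toy 2-component parallel system modeled
    as a 3-state CTMC (states: 2 up, 1 up, 0 up). Illustrative rates only."""
    Q = np.array([
        [-2 * lam,      2 * lam,      0.0],
        [      mu, -(mu + lam),       lam],
        [     0.0,      2 * mu,  -2 * mu],
    ])                                    # generator matrix (rows sum to 0)
    p0 = np.array([1.0, 0.0, 0.0])        # start with both components up
    pt = p0 @ expm(Q * t)                 # state probabilities at time t
    return float(pt[0] + pt[1])           # up if at least one component works

a = availability(1e-3)
# Sensitivity to the failure rate, which ASAP would deliver from one
# adjoint solve, approximated here with a central finite difference:
h = 1e-6
dA_dlam = (availability(1e-3 + h) - availability(1e-3 - h)) / (2 * h)
print(round(a, 6), dA_dlam)
```

The finite-difference route needs two extra model solves per parameter; the adjoint route needs one extra solve total, which is the efficiency argument made above.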

  4. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  5. Adjoint sensitivity analysis of the thermomechanical behavior of repositories

    International Nuclear Information System (INIS)

    Wilson, J.L.; Thompson, B.M.

    1984-01-01

The adjoint sensitivity method is applied to thermomechanical models for the first time. The method provides an efficient and inexpensive answer to the question: how sensitive are thermomechanical predictions to the assumed parameters? The answer is exact, in the sense that it yields exact derivatives of response measures with respect to parameters, and approximate, in the sense that projections of the response to other parameter assumptions are only first-order correct. The method is applied to linear finite element models of thermomechanical behavior. Extensions to more complicated models are straightforward but often laborious. An illustration of the method with a two-dimensional repository corridor model reveals that the chosen stress response measure was most sensitive to Poisson's ratio for the rock matrix
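The adjoint recipe for a linear model K(p)u = f with scalar response R = cᵀu is compact: one extra adjoint solve yields exact derivatives for any number of parameters. A toy 2-DOF spring sketch (not the repository model from the paper):

```python
import numpy as np

def K_of(p):
    """Stiffness of a 2-spring chain; p is the stiffness of the second spring."""
    k1 = 10.0
    return np.array([[k1 + p, -p],
                     [-p,      p]])

f = np.array([0.0, 1.0])       # unit load at the tip
c = np.array([0.0, 1.0])       # response = tip displacement
p = 4.0

K = K_of(p)
u = np.linalg.solve(K, f)      # forward solve
lam = np.linalg.solve(K.T, c)  # one adjoint solve, reused for every parameter

dK_dp = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])
dR_dp_adjoint = float(-lam @ dK_dp @ u)   # exact derivative: -lam^T dK/dp u

# cross-check against a central finite difference
h = 1e-6
dR_dp_fd = (np.linalg.solve(K_of(p + h), f)[1]
            - np.linalg.solve(K_of(p - h), f)[1]) / (2 * h)
print(dR_dp_adjoint, dR_dp_fd)
```

For this series chain the tip displacement is 1/k1 + 1/p, so the exact derivative is -1/p² = -0.0625, which both routes reproduce; with many parameters the adjoint route reuses the single `lam` solve.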

  6. Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.

    Science.gov (United States)

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2017-12-01

This study proposes a mathematical model of anthroponotic visceral leishmaniasis epidemics with a saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantification of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. The disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
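The next-generation method itself is easy to sketch: R0 is the spectral radius of F V⁻¹ evaluated at the disease-free equilibrium. The two-compartment host-vector matrices below are generic placeholders, not the paper's model:

```python
import numpy as np

# Assumed illustrative rates: host<->vector transmission, host recovery,
# vector mortality (none of these values come from the paper).
beta_hv, beta_vh = 0.30, 0.20
gamma, mu_v = 0.10, 0.25

# New-infection matrix F and transition matrix V at the disease-free state:
F = np.array([[0.0,     beta_hv],
              [beta_vh, 0.0    ]])
V = np.array([[gamma, 0.0],
              [0.0,   mu_v]])

K = F @ np.linalg.inv(V)                   # next-generation matrix
R0 = float(max(abs(np.linalg.eigvals(K)))) # spectral radius
print(round(R0, 3))

# Normalized sensitivity (elasticity) of R0 to beta_hv, by finite difference:
h = 1e-6
Fh = F.copy(); Fh[0, 1] += h
R0h = float(max(abs(np.linalg.eigvals(Fh @ np.linalg.inv(V)))))
elasticity = (R0h - R0) / h * (beta_hv / R0)
print(round(elasticity, 2))
```

For this symmetric two-compartment structure R0 = sqrt(beta_hv·beta_vh/(gamma·mu_v)), so the elasticity with respect to either transmission rate is exactly 0.5, which the finite difference confirms.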

  7. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    Science.gov (United States)

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models, which estimate nitrogen dynamics under irrigation, are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, when modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters and their interactions contribute most to the variance of the model output. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics is reported. The objective was to compare different global sensitivity analysis (GSA) methods on the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one of the soil physical parameters, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verify that more effort should be focused on quantifying soil parameters to obtain more accurate nitrogen- and crop-related predictions, and they stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of GSA for such an integrated model.
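A bare-bones version of the Morris screening step can be written with one-at-a-time elementary effects; this simplified radial design and toy model are for illustration only (EFAST is considerably more involved):

```python
import numpy as np

def morris_mu_star(f, dim, r=200, delta=0.1, seed=0):
    """One-at-a-time (Morris-style) elementary effects on [0, 1]^dim.
    Simplified radial sampling, not the full trajectory design."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, dim))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, dim)   # random base point
        y0 = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta                        # perturb one input at a time
            ee[j, i] = (f(xp) - y0) / delta       # elementary effect
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu* and sigma

# Toy model: strong linear x0, nonlinear x1, inert x2 (for illustration).
f = lambda x: 5.0 * x[0] + 4.0 * x[1] ** 2 + 0.0 * x[2]
mu_star, sigma = morris_mu_star(f, dim=3)
print(np.argsort(-mu_star))   # screening order, most to least influential
```

A large mu* flags an influential input, a large sigma flags nonlinearity or interactions; inert inputs (here x2) score zero on both and are dropped before the more expensive variance-based stage.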

  8. Application of status uncertainty analysis methods for AP1000 LBLOCA calculation

    International Nuclear Information System (INIS)

    Zhang Shunxiang; Liang Guoxing

    2012-01-01

Parameter uncertainty analysis establishes, by a suitable method, the response relations between input parameter uncertainties and output uncertainties. The application of parameter uncertainty analysis makes the simulation of the plant state more accurate and improves the plant economy while providing reasonable safety assurance. The AP1000 LBLOCA was analyzed in this paper, and the results indicate that the random sampling statistical analysis method, the sensitivity analysis numerical method and the traditional error propagation analysis method can all provide a quite large peak cladding temperature (PCT) safety margin, which is helpful for choosing a suitable uncertainty analysis method to improve the plant economy. Additionally, the random sampling statistical analysis method, which applies mathematical statistics theory, yields the largest safety margin because it reduces conservatism. Compared with the traditional conservative bounding parameter analysis method, the random sampling method can provide a PCT margin of 100 K, while the other two methods can only provide 50-60 K. (authors)
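The "random sampling statistical analysis method" in LOCA practice is commonly a Wilks-style tolerance-limit approach (the abstract does not spell out its exact procedure): run N random code calculations and take the maximum PCT as a probability/confidence bound. The minimum N for a one-sided, first-order statement follows from 1 - p^N >= c:

```python
# Smallest sample size N such that the largest of N random runs bounds the
# p-quantile of the output with confidence c (first-order Wilks formula).
def wilks_sample_size(prob=0.95, conf=0.95):
    n = 1
    while 1.0 - prob ** n < conf:
        n += 1
    return n

print(wilks_sample_size())          # classic 95/95 one-sided case
print(wilks_sample_size(0.95, 0.99))
```

The classic 95/95 answer is 59 runs, which is why best-estimate-plus-uncertainty LOCA analyses so often quote that sample size.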

  9. Sensitivity analysis of effective population size to demographic parameters in house sparrow populations.

    Science.gov (United States)

    Stubberud, Marlene Waege; Myhre, Ane Marlene; Holand, Håkon; Kvalnes, Thomas; Ringsby, Thor Harald; Saether, Bernt-Erik; Jensen, Henrik

    2017-05-01

The ratio between the effective and the census population size, Ne/N, is an important measure of the long-term viability and sustainability of a population. Understanding which demographic processes affect Ne/N most will improve our understanding of how genetic drift and the probability of fixation of alleles are affected by demography. This knowledge may also be of vital importance in the management of endangered populations and species. Here, we use data from 13 natural populations of house sparrow (Passer domesticus) in Norway to calculate the demographic parameters that determine Ne/N. Using the global variance-based Sobol' method for the sensitivity analyses, we found that Ne/N was most sensitive to demographic variance, especially among older individuals. Furthermore, the individual reproductive values (which determine the demographic variance) were most sensitive to variation in fecundity. Our results draw attention to the applicability of sensitivity analyses in population management and conservation. For population management aiming to reduce the loss of genetic variation, a sensitivity analysis may indicate the demographic parameters towards which resources should be focused. The result of such an analysis may depend on the life history and mating system of the population or species under consideration, because the vital rates and sex-age classes to which Ne/N is most sensitive may change accordingly. © 2017 John Wiley & Sons Ltd.

  10. Cross-covariance based global dynamic sensitivity analysis

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng

    2018-02-01

To identify the cross-covariance source of the dynamic output at each time instant for structural systems involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of an input to the cross-covariance of the output. The GDS main-effect indices and the GDS total-effect indices can then be easily defined after the integration, and they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effects of inputs on the output variance, and the second-order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. A Monte Carlo simulation procedure and the Kriging surrogate method are developed to compute the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.

  11. On the consistency of adjoint sensitivity analysis for structural optimization of linear dynamic problems

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Nakshatrala, Praveen B.; Tortorelli, Daniel A.

    2014-01-01

    Gradient-based topology optimization typically involves thousands or millions of design variables. This makes efficient sensitivity analysis essential and for this the adjoint variable method (AVM) is indispensable. For transient problems it has been observed that the traditional AVM, based on a ...

  12. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

Participation factors (PFs) are calculated by critical-eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...

  13. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed using the efficient linear flux approximation to decouple the neutron/nuclide field equations. Second, DPT was extended to the constrained equilibrium cycle, which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations, and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation

  14. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
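The reference technique named above, Monte Carlo simulation followed by linear regression, reduces to computing standardized regression coefficients (SRCs). A sketch with an invented stand-in model (not an actual repository dose model):

```python
import numpy as np

# Monte Carlo sample of three input parameters, then a linear regression of
# the output; the standardized coefficients rank parameter importance.
rng = np.random.default_rng(7)
n = 5000
X = rng.uniform(0.5, 1.5, size=(n, 3))                       # sampled inputs
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.1, n)    # X[:, 2] is inert

# Least-squares fit with intercept, then standardize each coefficient:
# SRC_i = b_i * std(X_i) / std(y), so SRC_i^2 approximates the variance share.
A = np.column_stack([np.ones(n), X])
b = np.linalg.lstsq(A, y, rcond=None)[0][1:]
src = b * X.std(axis=0) / y.std()
print(np.round(src, 2))
```

For a nearly linear model the squared SRCs sum to roughly the regression R², which is also the diagnostic that tells the analyst when this linear approach stops being trustworthy and the stochastic or conceptual-model studies above are needed instead.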

  15. Probabilistic Sensitivity Analysis for Launch Vehicles with Varying Payloads and Adapters for Structural Dynamics and Loads

    Science.gov (United States)

    McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.

    2012-01-01

This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and center-of-gravity location and requirements on adapter stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis in which payload and payload adapter parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some MPP (most probable point) based approaches are also examined.

  16. Development of rapid urine analysis method for uranium

    Energy Technology Data Exchange (ETDEWEB)

    Kuwabara, J.; Noguchi, H. [Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan)

    2000-05-01

ICP-MS has begun to spread in the field of individual monitoring for internal exposure as a very effective instrument for uranium analysis. Although ICP-MS has very high sensitivity, it requires more time than conventional analysis, such as fluorescence analysis, because the matrix must be sufficiently removed from a urine sample. To shorten the time required for urine bioassay by ICP-MS, a rapid uranium analysis method using ICP-MS connected to a flow injection system was developed. Since this method does not involve chemical separation steps, the time required is equivalent to that of conventional analysis. A measurement test was carried out using 10 urine solutions prepared from a single urine sample. The required volume of urine solution is 5 ml. The main chemical treatment is only digestion with 5 ml of nitric acid in a microwave oven to decompose organic matter and to dissolve suspended or precipitated material. The microwave oven can digest 10 samples at once within an hour. The volume of the digested sample solution was adjusted to 10 ml. The prepared sample solutions were introduced directly into the ICP-MS without any chemical separation procedure. The ICP-MS was connected to a flow injection system and an autosampler. The flow injection system can minimize the matrix effects caused by salts dissolved in high-matrix solutions, such as urine samples without chemical separation, because it introduces only a micro volume of sample solution into the ICP-MS. The ICP-MS detected uranium within 2 min/sample using the autosampler. The 10 solutions prepared from one urine sample showed an average uranium concentration in urine of 7.5 ng/l with a 10% standard deviation. The detection limit is about 1 ng/l. The total time required was less than 4 hours for the analysis of 10 samples. In the series of measurements, no memory effect was observed. The present analysis method using the ICP-MS equipped with the flow injection system demonstrated that the shortening of time required on high

  17. Development of rapid urine analysis method for uranium

    International Nuclear Information System (INIS)

    Kuwabara, J.; Noguchi, H.

    2000-01-01

ICP-MS has begun to spread in the field of individual monitoring for internal exposure as a very effective instrument for uranium analysis. Although ICP-MS has very high sensitivity, it requires more time than conventional analysis, such as fluorescence analysis, because the matrix must be sufficiently removed from a urine sample. To shorten the time required for urine bioassay by ICP-MS, a rapid uranium analysis method using ICP-MS connected to a flow injection system was developed. Since this method does not involve chemical separation steps, the time required is equivalent to that of conventional analysis. A measurement test was carried out using 10 urine solutions prepared from a single urine sample. The required volume of urine solution is 5 ml. The main chemical treatment is only digestion with 5 ml of nitric acid in a microwave oven to decompose organic matter and to dissolve suspended or precipitated material. The microwave oven can digest 10 samples at once within an hour. The volume of the digested sample solution was adjusted to 10 ml. The prepared sample solutions were introduced directly into the ICP-MS without any chemical separation procedure. The ICP-MS was connected to a flow injection system and an autosampler. The flow injection system can minimize the matrix effects caused by salts dissolved in high-matrix solutions, such as urine samples without chemical separation, because it introduces only a micro volume of sample solution into the ICP-MS. The ICP-MS detected uranium within 2 min/sample using the autosampler. The 10 solutions prepared from one urine sample showed an average uranium concentration in urine of 7.5 ng/l with a 10% standard deviation. The detection limit is about 1 ng/l. The total time required was less than 4 hours for the analysis of 10 samples. In the series of measurements, no memory effect was observed. The present analysis method using the ICP-MS equipped with the flow injection system demonstrated that the shortening of time required on high

  18. Estimation and analysis of the sensitivity of monoenergetic electron radiography of composite materials with fluctuating composition

    International Nuclear Information System (INIS)

    Rudenko, V.N.; Yunda, N.T.

    1978-01-01

A sensitivity analysis of the electron defectoscopy method for composite materials with fluctuating composition has been carried out. Quantitative estimates of the testing sensitivity as a function of inspection conditions have been obtained, and calculations of the instrumental error are shown. Based on numerical calculations, the errors of high-energy electron and X-ray testing have been compared. It is shown that when testing composite materials with a surface density of up to 7-10 g/cm², the advantage of the electron defectoscopy method over the X-ray method is its higher sensitivity and lower instrumental error. The sensitivity advantage of the electron defectoscopy method over the X-ray method is greater when a light-atom component predominates in the composition. A monoenergetic electron beam from a betatron with an energy of up to 30 MeV should be used for testing materials with a surface density of up to 15 g/cm²

  19. Fast, sensitive, and selective gas chromatography tandem mass spectrometry method for the target analysis of chemical secretions from femoral glands in lizards.

    Science.gov (United States)

    Sáiz, Jorge; García-Roa, Roberto; Martín, José; Gómara, Belén

    2017-09-08

Chemical signaling is a widespread mode of communication among living organisms that is used to establish social organization, territoriality and/or for mate choice. In lizards, femoral and precloacal glands are important sources of chemical signals. These glands produce chemical secretions used to mark territories and also to provide valuable information from the bearer to other individuals. Ecologists have studied these chemical secretions for decades in order to increase the knowledge of chemical communication in lizards. Although several studies have focused on the chemical analysis of these secretions, there is a lack of faster, more sensitive and more selective analytical methodologies for their study. In this work a new GC coupled to tandem triple quadrupole MS (GC-QqQ (MS/MS)) methodology is developed and proposed for the targeted study of 12 relevant compounds often found in lizard secretions (i.e. 1-hexadecanol, palmitic acid, 1-octadecanol, oleic acid, stearic acid, 1-tetracosanol, squalene, cholesta-3,5-diene, α-tocopherol, cholesterol, ergosterol and campesterol). The method baseline-separated the analytes in less than 7 min, with instrumental limits of detection ranging from 0.04 to 6.0 ng/mL. It was possible to identify differences in the composition of the samples from the lizards analyzed, which depended on the species, the habitat occupied and the diet of the individuals. Moreover, α-tocopherol has been determined for the first time in a lizard species that was thought not to express it in chemical secretions. Globally, the methodology has been proven to be a valuable alternative to other published methods, with important improvements in terms of analysis time, sensitivity, and selectivity.

  20. Demodulation method for tilted fiber Bragg grating refractometer with high sensitivity

    Science.gov (United States)

    Pham, Xuantung; Si, Jinhai; Chen, Tao; Wang, Ruize; Yan, Lihe; Cao, Houjun; Hou, Xun

    2018-05-01

    In this paper, we propose a demodulation method for refractive index (RI) sensing with tilted fiber Bragg gratings (TFBGs). It operates by monitoring the TFBG cladding mode resonance "cut-off wavelengths." The idea of a "cut-off wavelength" and its determination method are introduced. The RI sensitivities of TFBGs are significantly enhanced in certain RI ranges by using our demodulation method. The temperature-induced cross sensitivity is eliminated. We also demonstrate a parallel-double-angle TFBG (PDTFBG), in which two individual TFBGs are inscribed in the fiber core in parallel using a femtosecond laser and a phase mask. The RI sensing range of the PDTFBG is significantly broader than that of a conventional single-angle TFBG. In addition, its RI sensitivity can reach 1023.1 nm/refractive index unit in the 1.4401-1.4570 RI range when our proposed demodulation method is used.

  1. A laboratory measurement method for pressure sensitive adhesives in agglomeration deinking of mixed office waste paper: The high-low scanning contrast method

    OpenAIRE

    Guolin Tong; Shuang Sun; Cuixia Wang; Kecheng Fu; Yungchang F. Chin

    2012-01-01

    A simple measurement method for pressure sensitive adhesives (PSA) in an agglomeration deinking system of mixed office waste paper was studied. This method was based on the different scanning performance of ink and PSA specks in hot-pressed and oven-dried handsheets with the change of contrast values that had been selected and set in the image analysis software. The numbers of ink specks per square meter (NPM) were well recognized at both low and high contrast values and exhibited a very good...

  2. Sensitivity analysis of large system of chemical kinetic parameters for engine combustion simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, H; Sanz-Argent, J; Petitpas, G; Havstad, M; Flowers, D

    2012-04-19

In this study, the authors applied state-of-the-art sensitivity methods to down-select the system parameters from more than 4000 to 8 (23,000+ -> 4000+ -> 84 -> 8). This analysis procedure paves the way for future work: (1) calibrating the system response against existing experimental observations, and (2) predicting future experimental results using the calibrated system.

  3. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    Science.gov (United States)

    Hoffmann, Sebastian

    2015-01-01

The development of non-animal skin sensitization test methods and strategies is progressing quickly. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of the reference test results, here the LLNA, has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed, once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and we provide estimates thereof.

  4. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates
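The point-value sensitivity idea in this abstract, scaling one failure probability and observing the change in total risk, can be sketched with invented numbers. When other accident sequences dominate, even a hundred-fold improvement in one system moves total risk by well under a factor of two:

```python
# Toy illustration (all probabilities and consequences invented) of why a
# large factor reduction in one system failure probability can change total
# risk by much less: risk is a sum over sequences, and the untouched
# sequences still contribute.

def total_risk(p_sys_a, p_sys_b):
    # Each sequence: (sequence probability, consequence in arbitrary units).
    sequences = [
        (p_sys_a * 1e-3, 100.0),   # sequence requiring system A to fail
        (p_sys_b * 1e-3, 500.0),   # sequence requiring system B to fail
    ]
    return sum(p * c for p, c in sequences)

base = total_risk(1e-3, 1e-2)
# Reduce system A's failure probability by a factor of 100:
improved = total_risk(1e-5, 1e-2)

print(improved < base)         # risk decreases...
print(base / improved < 2.0)   # ...but by well under a factor of two
```

The same structure explains the abstract's finding that monitoring the most sensitive systems can matter more than backfitting a less dominant one.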

  5. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    Science.gov (United States)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulakx, Ronald F.

    2013-01-01

A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover, which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smoothed Particle Hydrodynamics (SPH) meshfree methods. The nominal benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process

  6. Sensitivity analysis for contagion effects in social networks

    Science.gov (United States)

    VanderWeele, Tyler J.

    2014-01-01

Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations, and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding, and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies of contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
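One concrete summary statistic from this line of sensitivity-analysis work (a later formalization by Ding and VanderWeele, not necessarily the exact procedure of this paper) is the E-value: the minimum strength of association an unmeasured factor such as latent homophily would need with both tie formation and the outcome to fully explain away an observed risk ratio.

```python
# E-value sketch: how strong would unmeasured homophily/confounding have to
# be, on the risk-ratio scale, to fully account for an observed association?

import math

def e_value(rr):
    """E-value for an observed risk ratio; protective estimates are flipped."""
    if rr < 1.0:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(1.0), 2))   # null estimate: no confounding needed
# A contagion-sized risk ratio, e.g. 1.57, can only be explained away by an
# unmeasured factor associated with both exposure and outcome by RR > ~2.5:
print(e_value(1.57) > 2.0)
```

The larger the E-value, the more robust the supposed contagion effect is to the latent-homophily critique discussed in the abstract.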

  7. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
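The main effect index and total sensitivity index named above are variance-based (Sobol) measures; GEM-SA estimates them cheaply via Gaussian process emulation, but a brute-force Monte Carlo sketch on an invented additive toy model shows what the main effect index measures:

```python
# First-order Sobol (main effect) index by brute-force double loop:
# S_i = Var( E[Y | X_i] ) / Var(Y).  The toy model y = x1 + 2*x2 with
# independent uniform inputs has known values S1 = 0.2, S2 = 0.8.

import random

random.seed(1)

def model(x1, x2):
    return x1 + 2.0 * x2

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def main_effect(index, n_outer=500, n_inner=300):
    """Estimate the first-order index of input `index` (0 or 1)."""
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = random.random()            # fix the input under study
        ys = []
        for _ in range(n_inner):
            xo = random.random()        # resample the other input
            ys.append(model(xi, xo) if index == 0 else model(xo, xi))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    return var(cond_means) / var(all_y)

s1, s2 = main_effect(0), main_effect(1)
print(s2 > s1)   # x2 (coefficient 2) dominates, as the analytic values say
```

For this additive model the total sensitivity indices coincide with the main effects; an emulator such as GEM-SA becomes necessary when, as for the control rod model, each model run is expensive.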

  8. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L⁻¹ range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L⁻¹ in wine for IBMP), highly sensitive analytical methods to quantify methoxypyrazines at trace levels are necessary. Here we were able to achieve resolution of IBMP as well as IPMP, EMP, and SBMP from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE extraction techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L⁻¹ for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high throughput, with resolution of all compounds possible with a relatively rapid 27 min GC oven program.

  9. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  10. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  11. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters for the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general nonlinear environmental radionuclide transport models. (author)
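The rank-based estimators named above work by replacing sampled values with their ranks before computing correlations, which captures monotone but nonlinear input-output relations. A minimal sketch, with an invented model, using a plain Spearman coefficient as a stand-in for the full partial-rank-correlation machinery:

```python
# Rank-transform the sampled input and output, then correlate the ranks.
# A simple Spearman coefficient is shown here; PRCC additionally partials
# out the other inputs, which is omitted for brevity.

import random

random.seed(7)

def ranks(xs):
    """Rank of each element (1 = smallest); ties are not expected here."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Monotone but strongly nonlinear model: ranks recover the dependence
# that a raw linear correlation understates.
xs = [random.uniform(0, 1) for _ in range(1000)]
ys = [x ** 10 for x in xs]

spearman = corr(ranks(xs), ranks(ys))
pearson = corr(xs, ys)
print(spearman > 0.99)      # ranks see an essentially perfect relation
print(spearman > pearson)   # the raw (linear) correlation understates it
```

This is why the abstract's nonlinear transport model calls for rank-based rather than raw regression estimators.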

  12. Using Real-time Event Tracking Sensitivity Analysis to Overcome Sensor Measurement Uncertainties of Geo-Information Management in Drilling Disasters

    Science.gov (United States)

    Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.

    2012-04-01

This paper introduces an application of a novel EventTracker platform for instantaneous sensitivity analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods, because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on system state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event tracking sensitivity analysis method describes the system variables and states as a collection of events. The higher the occurrence of an input variable during the triggering of an event, the greater its potential impact will be on the final analysis of the system state. Experiments were designed to compare the proposed event tracking sensitivity analysis with existing entropy-based sensitivity analysis methods. The results have shown a 10% improvement in computational efficiency with no compromise in accuracy. They have also shown that the computational time needed to perform the sensitivity analysis is 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs with the final scope of recovering the content

  13. High sensitive quench detection method using an integrated test wire

    International Nuclear Information System (INIS)

    Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.

    1981-01-01

A highly sensitive quench detection method that works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference between the voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding can consist of a "zero-current strand" of the superconducting cable not connected to one of the winding terminals, or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field

  14. Sensitivity Analysis of the Influence of Structural Parameters on Dynamic Behaviour of Highly Redundant Cable-Stayed Bridges

    Directory of Open Access Journals (Sweden)

    B. Asgari

    2013-01-01

The model tuning through sensitivity analysis is a prominent procedure for assessing the structural behavior and dynamic characteristics of cable-stayed bridges. Most of the previous sensitivity-based model tuning methods are automatic iterative processes; however, the results of recent studies show that the most reasonable results are achievable by applying manual methods to update the analytical models of cable-stayed bridges. This paper presents a model updating algorithm for highly redundant cable-stayed bridges that can be used as an iterative manual procedure. The updating parameters are selected through the sensitivity analysis, which helps to better understand the structural behavior of the bridge. The finite element model of the Tatara Bridge is considered for the numerical studies. The results of the simulations indicate the efficiency and applicability of the presented manual tuning method for updating the finite element model of cable-stayed bridges. The new aspects regarding effective material and structural parameters and the model tuning procedure presented in this paper will be useful for the analysis and model updating of cable-stayed bridges.

  15. A relative quantitative Methylation-Sensitive Amplified Polymorphism (MSAP) method for the analysis of abiotic stress.

    Science.gov (United States)

    Bednarek, Piotr T; Orłowska, Renata; Niedziela, Agnieszka

    2017-04-21

    We present a new methylation-sensitive amplified polymorphism (MSAP) approach for the evaluation of relative quantitative characteristics such as demethylation, de novo methylation, and preservation of the methylation status of CCGG sequences, which are recognized by the isoschizomers HpaII and MspI. We applied the technique to analyze aluminum (Al)-tolerant and non-tolerant control and Al-stressed inbred triticale lines. The approach is based on detailed analysis of events affecting HpaII and MspI restriction sites in control and stressed samples, and takes advantage of molecular marker profiles generated by EcoRI/HpaII and EcoRI/MspI MSAP platforms. Five Al-tolerant and five non-tolerant triticale lines were exposed to aluminum stress using a physiological test. Total genomic DNA was isolated from root tips of all tolerant and non-tolerant lines before and after Al stress, following the metAFLP and MSAP approaches. Based on codes reflecting events affecting cytosines within a given restriction site recognized by HpaII and MspI in control and stressed samples, demethylation (DM), de novo methylation (DNM), preservation of methylated sites (MSP), and preservation of non-methylated sites (NMSP) were evaluated. MSAP profiles were used for agglomerative hierarchical clustering (AHC) based on squared Euclidean distance and Ward's agglomeration method, whereas the MSAP characteristics were analyzed by ANOVA. Relative quantitative MSAP analysis revealed that both Al-tolerant and non-tolerant triticale lines subjected to Al stress underwent demethylation, with demethylation of CG predominating over CHG. The rate of de novo methylation in the CG context was ~3-fold lower than that of demethylation, whereas de novo methylation of CHG was observed only in Al-tolerant lines. Our relative quantitative MSAP approach, based on methylation events affecting cytosines within HpaII-MspI recognition sequences, was capable of quantifying de novo methylation, demethylation, methylation, and non-methylated status in control and stressed samples.
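The event-coding idea behind this relative quantification can be sketched as follows: each HpaII/MspI band pattern is mapped to a methylation status, and the control-to-stressed transition at each locus is tallied as DM, DNM, MSP or NMSP. The pattern-to-status mapping below is the common textbook simplification, not the authors' exact coding scheme.

```python
# Hedged sketch of MSAP event counting. Band patterns are (HpaII, MspI)
# presence tuples; 1 = band present (site cut by that enzyme).
STATUS = {
    (1, 1): "unmethylated",  # cut by both HpaII and MspI
    (0, 1): "methylated",    # CG methylation: MspI cuts, HpaII does not
    (1, 0): "methylated",    # CHG (hemi)methylation: HpaII cuts only
    (0, 0): "methylated",    # no cut: treated here as methylated
}

def classify(control, stressed):
    before, after = STATUS[control], STATUS[stressed]
    if before == "methylated" and after == "unmethylated":
        return "DM"   # demethylation
    if before == "unmethylated" and after == "methylated":
        return "DNM"  # de novo methylation
    return "MSP" if before == "methylated" else "NMSP"  # preserved status

# One locus per tuple pair: (control pattern, stressed pattern).
loci = [((0, 1), (1, 1)),   # methylated -> unmethylated: DM
        ((1, 1), (1, 0)),   # unmethylated -> methylated: DNM
        ((0, 1), (0, 1)),   # stays methylated: MSP
        ((1, 1), (1, 1))]   # stays unmethylated: NMSP
counts = {}
for c, s in loci:
    e = classify(c, s)
    counts[e] = counts.get(e, 0) + 1
print(counts)  # {'DM': 1, 'DNM': 1, 'MSP': 1, 'NMSP': 1}
```

Relative quantities (e.g. percent demethylation) would then be ratios of these counts over all scored loci.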

  16. Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance

    Science.gov (United States)

    Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.

    2016-01-01

    Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.

  17. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel.

  18. Modeling and sensitivity analysis of consensus algorithm based distributed hierarchical control for dc microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2015-01-01

    ... of dynamic study. The aim of this paper is to model the complete DC microgrid system in the z-domain and perform sensitivity analysis for the complete system. A generalized modeling method is proposed, and the system dynamics under different control parameters, communication topologies and communication speeds...

  19. Effect of cantilever geometry on the optical lever sensitivities and thermal noise method of the atomic force microscope.

    Science.gov (United States)

    Sader, John E; Lu, Jianing; Mulvaney, Paul

    2014-11-01

    Calibration of the optical lever sensitivities of atomic force microscope (AFM) cantilevers is especially important for determining the force in AFM measurements. These sensitivities depend critically on the cantilever mode used and are known to differ for static and dynamic measurements. Here, we calculate the ratio of the dynamic and static sensitivities for several common AFM cantilevers, whose shapes vary considerably, and experimentally verify these results. The dynamic-to-static optical lever sensitivity ratio is found to range from 1.09 to 1.41 for the cantilevers studied - in stark contrast to the constant value of 1.09 used widely in current calibration studies. This analysis shows that accuracy of the thermal noise method for the static spring constant is strongly dependent on cantilever geometry - neglect of these dynamic-to-static factors can induce errors exceeding 100%. We also discuss a simple experimental approach to non-invasively and simultaneously determine the dynamic and static spring constants and optical lever sensitivities of cantilevers of arbitrary shape, which is applicable to all AFM platforms that have the thermal noise method for spring constant calibration.
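The size of the ">100%" error claimed above follows from a simple scaling: in the thermal noise method the inferred spring constant scales with the square of the optical lever sensitivity, so neglecting a dynamic-to-static sensitivity ratio r biases the result by a factor r². The numbers below are the ratio range quoted in the abstract; the arithmetic is illustrative, not the paper's full calibration procedure.

```python
# Numeric sketch of the ratio-squared bias in the thermal noise method.
def relative_error(ratio):
    # Bias factor ratio**2 expressed as a percentage error in k.
    return (ratio ** 2 - 1.0) * 100.0

for r in (1.09, 1.41):  # range of dynamic-to-static ratios in the abstract
    print(round(relative_error(r), 1), "% error at ratio", r)
```

For the largest reported ratio the bias approaches 100%, consistent with the abstract's statement that neglecting these factors can induce errors exceeding 100% for some geometries.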

  20. Statistical Sensitive Data Protection and Inference Prevention with Decision Tree Methods

    National Research Council Canada - National Science Library

    Chang, LiWu

    2003-01-01

    .... We consider inference as correct classification and approach it with decision tree methods. As in our previous work, sensitive data are viewed as classes of those test data and non-sensitive data are the rest attribute values...

  1. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
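The persistent-versus-impulse distinction can be made concrete on a toy one-state system dx/dt = k - d·x: a classic PSA coefficient perturbs k over the whole trajectory, while an iPSA-style coefficient perturbs k only during a short window starting at a chosen time t0. The model, integrator and numbers below are illustrative, not the paper's examples.

```python
# Minimal sketch of PSA vs impulse (iPSA-style) sensitivity coefficients.
def simulate(k, d, T=10.0, dt=1e-3, t0=None, dk=0.0, width=0.1):
    # Forward-Euler integration of dx/dt = k_eff - d*x, x(0) = 0, where
    # k is perturbed by dk only inside the window [t0, t0 + width).
    x, t = 0.0, 0.0
    while t < T:
        k_eff = k + dk if (t0 is not None and t0 <= t < t0 + width) else k
        x += (k_eff - d * x) * dt
        t += dt
    return x

k, d, eps = 1.0, 0.5, 1e-4
base = simulate(k, d)
# PSA: persistent perturbation of k over the whole interval [0, T].
psa = (simulate(k + eps, d) - base) / eps
# iPSA-style: short perturbation of k applied near a chosen time t0.
ipsa_early = (simulate(k, d, t0=0.0, dk=eps) - base) / eps
ipsa_late = (simulate(k, d, t0=9.0, dk=eps) - base) / eps
print(psa, ipsa_early, ipsa_late)
```

The integrated PSA coefficient is large, but the impulse coefficients reveal the timing information it hides: an early perturbation of k has almost no effect on x(T), while a late one does.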

  2. Estimating Sobol Sensitivity Indices Using Correlations

    Science.gov (United States)

    Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
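A first-order Sobol index can be estimated by plain Monte Carlo. The sketch below uses the common "pick-and-freeze" estimator on a toy model Y = X1 + 0.1·X2 with independent uniform inputs; it is not the correlation-based variant this record refers to, just the standard baseline it builds on.

```python
import random

# Monte Carlo sketch of a first-order Sobol index S1 for Y = X1 + 0.1*X2.
# Analytically S1 = Var(X1) / Var(Y) = (1/12) / (1.01/12) ~ 0.99.
random.seed(0)

def model(x1, x2):
    return x1 + 0.1 * x2

N = 100_000
a = [(random.random(), random.random()) for _ in range(N)]
b = [(random.random(), random.random()) for _ in range(N)]

ya = [model(x1, x2) for x1, x2 in a]
mean = sum(ya) / N
var = sum((y - mean) ** 2 for y in ya) / N

# Pick-and-freeze: keep X1 from sample A, redraw X2 from sample B.
y_ab = [model(a[i][0], b[i][1]) for i in range(N)]
s1 = (sum(ya[i] * y_ab[i] for i in range(N)) / N - mean ** 2) / var
print(s1)  # close to the analytic value of about 0.99
```

Because the index is a ratio of variances, it directly answers "what fraction of output variance is attributable to X1 alone".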

  3. Prediction of skin sensitizers using alternative methods to animal experimentation.

    Science.gov (United States)

    Johansson, Henrik; Lindstedt, Malin

    2014-07-01

    Regulatory frameworks within the European Union demand that chemical substances are investigated for their ability to induce sensitization, an adverse health effect caused by the human immune system in response to chemical exposure. A recent ban on the use of animal tests within the cosmetics industry has led to an urgent need for alternative animal-free test methods that can be used for assessment of chemical sensitizers. To date, no such alternative assay has yet completed formal validation. However, a number of assays are in development and the understanding of the biological mechanisms of chemical sensitization has greatly increased during the last decade. In this MiniReview, we aim to summarize and give our view on the recent progress of method development for alternative assessment of chemical sensitizers. We propose that integrated testing strategies should comprise complementary assays, providing measurements of a wide range of mechanistic events, to perform well-educated risk assessments based on weight of evidence. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  4. Gene targeting associated with the radiation sensitivity in squamous cell carcinoma by using microarray analysis

    International Nuclear Information System (INIS)

    Nimura, Yoshinori; Kumagai, Ken; Kouzu, Yoshinao; Higo, Morihiro; Kato, Yoshikuni; Seki, Naohiko; Yamada, Shigeru

    2005-01-01

    In order to identify a set of genes related to the radiation sensitivity of squamous cell carcinoma (SCC) and establish a predictive method, we compared expression profiles of radio-sensitive and radio-resistant SCC cell lines, using an in-house cDNA microarray consisting of 2,201 human genes derived from full-length enriched SCC cDNA libraries and the Human oligo chip 30 K (Hitachi Software Engineering). Surviving fractions (SF) after heavy-ion irradiation were calculated by colony formation assay. Three pairs of cell lines (TE2-TE13, YES5-YES6, and HSC3-HSC2), sensitive (SF1 0.6), were selected for the microarray analysis. The cDNA microarray analysis showed that 20 genes in resistant cell lines and 5 genes in sensitive cell lines were upregulated more than 1.5-fold relative to the sensitive and resistant cell lines, respectively. The expression profiles of 14 out of 25 genes were confirmed by real-time polymerase chain reaction (PCR). Twenty-seven genes identified by the Human oligo chip 30 K are candidate markers to distinguish radio-sensitive from radio-resistant lines. These results suggest that the isolated 27 genes are candidates that might be used as specific molecular markers to predict radiation sensitivity. (author)

  5. Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial.

    Science.gov (United States)

    Leurent, Baptiste; Gomes, Manuel; Faria, Rita; Morris, Stephen; Grieve, Richard; Carpenter, James R

    2018-04-20

    Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply-imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
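The delta-adjustment approach the tutorial focuses on can be sketched in a few lines: impute the missing outcomes under a missing-at-random model, then shift the imputed values by a user-chosen delta to represent an MNAR scenario (e.g. "patients with missing data score 0.1 lower than imputation predicts"). The data, the crude single-imputation model, and the delta below are all invented for illustration; a real analysis would use proper multiple imputation and Rubin's rules for the variance as well.

```python
import random
import statistics

# Simulated quality-of-life scores: 80 observed, 20 missing.
random.seed(1)
observed = [round(random.gauss(0.7, 0.1), 3) for _ in range(80)]
n_missing = 20

def mar_impute():
    # Crude single imputation: draw from the observed distribution (MAR).
    mu, sd = statistics.mean(observed), statistics.stdev(observed)
    return [random.gauss(mu, sd) for _ in range(n_missing)]

def mean_under_delta(delta, n_imputations=50):
    # Repeat imputation, shifting each imputed value by delta (MNAR),
    # and pool the point estimates by simple averaging.
    means = []
    for _ in range(n_imputations):
        imputed = [v + delta for v in mar_impute()]
        means.append(statistics.mean(observed + imputed))
    return statistics.mean(means)

mar = mean_under_delta(0.0)    # MAR reference analysis
mnar = mean_under_delta(-0.1)  # sensitivity scenario: missing are 0.1 worse
print(round(mar, 2), round(mnar, 2))
```

Repeating this over a range of deltas (and separately by trial arm) shows how far the MNAR assumption must depart from MAR before the cost-effectiveness conclusion changes.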

  6. An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.

    Science.gov (United States)

    Rosenbaum, Paul R; Small, Dylan S

    2017-06-01

    In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect-sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup, one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.

  7. Deterministic sensitivity analysis for the numerical simulation of contaminants transport

    International Nuclear Information System (INIS)

    Marchand, E.

    2007-12-01

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
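The deterministic idea described above, taking the SVD of the model's derivative, can be illustrated on a tiny scale: build a finite-difference Jacobian of a small model and read off its singular values, which rank the input directions to which the outputs are most sensitive. The two-parameter "model" below is made up for the example and has nothing to do with the actual flow and transport equations.

```python
import math

def model(p):
    k, d = p  # two hypothetical input parameters
    return [k * d, k + 2.0 * d]  # two scalar outputs

def jacobian(p, h=1e-6):
    # Forward finite differences, one column per parameter.
    base = model(p)
    cols = []
    for j in range(len(p)):
        q = list(p)
        q[j] += h
        out = model(q)
        cols.append([(out[i] - base[i]) / h for i in range(len(base))])
    # Transpose so rows index outputs and columns index parameters.
    return [[cols[j][i] for j in range(len(p))] for i in range(len(base))]

def singular_values_2x2(J):
    # Singular values of a 2x2 matrix via the eigenvalues of J^T J.
    a, b = J[0]; c, d = J[1]
    g11 = a*a + c*c; g22 = b*b + d*d; g12 = a*b + c*d
    tr, det = g11 + g22, g11*g22 - g12*g12
    disc = math.sqrt(max(tr*tr/4 - det, 0.0))
    return [math.sqrt(tr/2 + disc), math.sqrt(max(tr/2 - disc, 0.0))]

J = jacobian([2.0, 3.0])
s = singular_values_2x2(J)
print([round(v, 3) for v in s])
```

A large ratio between the singular values indicates that the outputs respond strongly to one combination of parameters and only weakly to the other, which is exactly the local information the deterministic approach delivers cheaply.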

  8. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    International Nuclear Information System (INIS)

    Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C

    2015-01-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness in the range of 150 nm to 400 nm would increase both microcantilever bending during the biorecognition process and the optical sensitivity, which reaches 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than that of other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could lead to misinterpretation of the readout signal. (paper)

  9. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States

    Directory of Open Access Journals (Sweden)

    Min-Uk Kim

    2018-05-01

    Full Text Available We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH3), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO3). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  10. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    Science.gov (United States)

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.
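The dummy-regression technique used in this study can be sketched generically: a categorical weather variable (atmospheric stability) is coded as a 0/1 dummy alongside a continuous one (air temperature), and the fitted coefficients indicate how strongly each variable drives the response. The data, coefficients and noise level below are synthetic, not KORA or ALOHA outputs.

```python
import random

# Hedged sketch of dummy regression via ordinary least squares.
random.seed(3)

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Synthetic "impact distance": strongly driven by stability, weakly by temp.
rows, y = [], []
for _ in range(500):
    temp = random.uniform(0, 30)
    stable = random.choice([0, 1])  # dummy variable: 1 = stable atmosphere
    rows.append([1.0, temp, stable])
    y.append(100 + 0.5 * temp + 40 * stable + random.gauss(0, 5))

# OLS via the normal equations X'X beta = X'y.
k = 3
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(rows[n][i] * y[n] for n in range(len(y))) for i in range(k)]
beta = solve(XtX, Xty)
print([round(b, 1) for b in beta])  # roughly [100, 0.5, 40]
```

The much larger dummy coefficient mirrors the paper's finding that impact distance is most sensitive to atmospheric stability; in practice coefficients would be standardized before comparison.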

  11. Sensitive high performance liquid chromatographic method for the ...

    African Journals Online (AJOL)

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  12. Sensitivity analysis of Repast computational ecology models with R/Repast.

    Science.gov (United States)

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms which generate some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.

  13. Development of a rapid, simple and sensitive HPLC-FLD method for determination of rhodamine B in chili-containing products.

    Science.gov (United States)

    Qi, Ping; Lin, Zhihao; Li, Jiaxu; Wang, ChengLong; Meng, WeiWei; Hong, Hong; Zhang, Xuewu

    2014-12-01

    In this work, a simple, rapid and sensitive analytical method for the determination of rhodamine B in chili-containing foodstuffs is described. The dye is extracted from samples with methanol and analysed, without a further cleanup procedure, by high-performance liquid chromatography (HPLC) coupled to fluorescence detection (FLD). The influence of matrix fluorescent compounds (capsaicin and dihydrocapsaicin) on the analysis was overcome by optimisation of the mobile-phase composition. The limit of detection (LOD) and limit of quantification (LOQ) were 3.7 and 10 μg/kg, respectively. Validation data show good repeatability and within-lab reproducibility, with low relative standard deviations, for rhodamine B in foodstuffs. This method is suitable for the routine analysis of rhodamine B due to its sensitivity, simplicity, and reasonable time and cost. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Application of generalized perturbation theory to sensitivity analysis in boron neutron capture therapy

    International Nuclear Information System (INIS)

    Garcia, Vanessa S.; Silva, Fernando C.; Silva, Ademir X.; Alvarez, Gustavo B.

    2011-01-01

    Boron neutron capture therapy (BNCT) is a binary cancer treatment used in brain tumors. The tumor is loaded with a boron compound and subsequently irradiated by thermal neutrons. The therapy is based on the ¹⁰B(n,α)⁷Li nuclear reaction, which emits two types of high-energy particles, the α particle and the ⁷Li nucleus. The total kinetic energy released in this nuclear reaction, when deposited in the tumor region, destroys the cancer cells. Since the success of BNCT is linked to the differential selectivity between tumor and healthy tissue, it is necessary to carry out a sensitivity analysis to determine the boron concentration. Computational simulations are very important in this context because they help in treatment planning by calculating the lowest effective absorbed dose rate, to reduce the damage to healthy tissue. The objective of this paper is to present a deterministic method based on generalized perturbation theory (GPT) to perform sensitivity analysis with respect to the ¹⁰B concentration and to estimate the absorbed dose rate for patients undergoing this therapy. The advantage of the method is a significant reduction in the computational time required to perform these calculations. To simulate the neutron flux in all brain regions, the method relies on a two-dimensional neutron transport equation whose spatial, angular and energy variables are discretized by the diamond difference method, the discrete ordinates method and the multigroup formulation, respectively. The results obtained through GPT are consistent with those obtained using other methods, demonstrating the efficacy of the proposed method. (author)

  15. Application of generalized perturbation theory to sensitivity analysis in boron neutron capture therapy

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Vanessa S. [Universidade Federal Fluminense (EEIMVR/UFF-RJ), Volta Redonda, RJ (Brazil). Escola de Engenharia Industrial e Metalurgica. Programa de Pos-Graduacao em Modelagem Computacional em Ciencia e Tecnologia; Silva, Fernando C.; Silva, Ademir X., E-mail: fernando@con.ufrj.b, E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Alvarez, Gustavo B. [Universidade Federal Fluminense (EEIMVR/UFF-RJ), Volta Redonda, RJ (Brazil). Escola de Engenharia Industrial e Metalurgica. Dept. de Ciencias Exatas

    2011-07-01

    Boron neutron capture therapy (BNCT) is a binary cancer treatment used in brain tumors. The tumor is loaded with a boron compound and subsequently irradiated by thermal neutrons. The therapy is based on the ¹⁰B(n,α)⁷Li nuclear reaction, which emits two types of high-energy particles, the α particle and the ⁷Li nucleus. The total kinetic energy released in this nuclear reaction, when deposited in the tumor region, destroys the cancer cells. Since the success of BNCT is linked to the differential selectivity between tumor and healthy tissue, it is necessary to carry out a sensitivity analysis to determine the boron concentration. Computational simulations are very important in this context because they help in treatment planning by calculating the lowest effective absorbed dose rate, to reduce the damage to healthy tissue. The objective of this paper is to present a deterministic method based on generalized perturbation theory (GPT) to perform sensitivity analysis with respect to the ¹⁰B concentration and to estimate the absorbed dose rate for patients undergoing this therapy. The advantage of the method is a significant reduction in the computational time required to perform these calculations. To simulate the neutron flux in all brain regions, the method relies on a two-dimensional neutron transport equation whose spatial, angular and energy variables are discretized by the diamond difference method, the discrete ordinates method and the multigroup formulation, respectively. The results obtained through GPT are consistent with those obtained using other methods, demonstrating the efficacy of the proposed method. (author)

  16. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  17. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  18. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    A scatterometer is an instrument which provides all-day and large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Certain factors cause large direction errors, so it is important to find where the error mainly comes from. Does it mainly result from the background field, the normalized radar cross-section (NRCS), or the method of wind retrieval? This is valuable to investigate. First, depending on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated ‘true’ wind under a non-divergence constraint. Likewise, the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Then sensitivity experiments are carried out, and the new regularization method is used to improve ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large error in the background. This work provides important information and a new method for wind retrieval with real data.
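The stabilizing effect of Tikhonov regularization can be shown on a deliberately ill-conditioned toy problem: the regularized solution solves (AᵀA + λI)x = Aᵀb instead of the raw normal equations. The 2×2 matrix and noise level below are invented; real wind retrieval minimizes a variational cost function over the wind field, not this toy system.

```python
# Bare-bones Tikhonov regularization on a nearly singular 2x2 system,
# solved in closed form by Cramer's rule on the regularized normal
# equations (A'A + lam*I) x = A'b.
def tikhonov_2x2(A, b, lam):
    (a11, a12), (a21, a22) = A
    g11 = a11*a11 + a21*a21 + lam
    g12 = a11*a12 + a21*a22
    g22 = a12*a12 + a22*a22 + lam
    r1 = a11*b[0] + a21*b[1]
    r2 = a12*b[0] + a22*b[1]
    det = g11*g22 - g12*g12
    return [(r1*g22 - r2*g12) / det, (g11*r2 - g12*r1) / det]

A = [[1.0, 1.0],
     [1.0, 1.0001]]          # nearly singular: tiny noise blows up x
b_noisy = [2.0, 2.0 + 1e-3]  # clean system has the solution [1, 1]

naive = tikhonov_2x2(A, b_noisy, 0.0)    # unregularized least squares
damped = tikhonov_2x2(A, b_noisy, 1e-4)  # small Tikhonov parameter
print([round(v, 2) for v in naive], [round(v, 2) for v in damped])
# -> [-8.0, 10.0] [1.0, 1.0]
```

Choosing λ trades fidelity to the data against stability, which is exactly the role of the "appropriate regularization parameter" emphasized in the abstract.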

  19. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for fixing some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for SSMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
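
    Extended-FAST, used above, estimates variance-based sensitivity indices. A minimal way to see what such indices measure is the Monte Carlo pick-freeze estimator of first-order Sobol indices, sketched below on the standard Ishigami test function. This is an illustration of the same class of indices, not the Extended-FAST algorithm or the drainage model itself.

```python
import numpy as np

def first_order_sobol(f, bounds, n=2**14, seed=1):
    """Saltelli-style pick-freeze estimator of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" all inputs except the i-th
        s1[i] = np.mean(yB * (f(ABi) - yA)) / var
    return s1

# Ishigami test function, whose first-order indices are known analytically.
def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

s1 = first_order_sobol(ishigami, bounds=[(-np.pi, np.pi)] * 3)
print(np.round(s1, 2))  # roughly the analytic values (0.31, 0.44, 0.00)
```

    A first-order index near zero (like the third input here) marks a candidate for "fixing" a non-identifiable factor, exactly the model-simplification use described in the abstract.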

  20. Sensitivity Analysis of Hydraulic Methods Regarding Hydromorphologic Data Derivation Methods to Determine Environmental Water Requirements

    Directory of Open Access Journals (Sweden)

    Alireza Shokoohi

    2015-07-01

    Full Text Available This paper studies the accuracy of hydraulic methods in determining environmental flow requirements. Despite the vital importance of river cross-sectional data for hydraulic methods, few studies have focused on the criteria for deriving these data. The present study shows that the depth of the cross section has a meaningful effect on the results obtained from hydraulic methods and that, considering fish as the index species for river habitat analysis, an optimum depth of 1 m should be assumed when deriving information from cross sections. The second important parameter required for extracting the geometric and hydraulic properties of rivers is the selection of an appropriate depth increment, ∆y; in the present research, this parameter was found to be equal to 1 cm. The uncertainty of the environmental discharge evaluation, when allocating water in areas with water scarcity, should be kept as low as possible. The Manning friction coefficient (n) is an important factor in river discharge calculation. Using a range of n equal to 3 times the standard deviation for the study area, it is shown that the friction coefficient influences the estimated environmental flow much less than it influences the calculated river discharge.
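
    The role of Manning's n in the discharge calculation can be made concrete with a tiny example. The channel geometry, slope, and the ±3σ range for n below are hypothetical illustrations, not values from the study.

```python
import math

def manning_discharge(n, area, wetted_perimeter, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * sqrt(S)."""
    r = area / wetted_perimeter              # hydraulic radius (m)
    return (1.0 / n) * area * r ** (2.0 / 3.0) * math.sqrt(slope)

# Hypothetical rectangular channel: 10 m wide, 1 m flow depth, 0.1% bed slope.
width, depth, slope = 10.0, 1.0, 0.001
area = width * depth
perimeter = width + 2 * depth

q_mid = manning_discharge(0.035, area, perimeter, slope)
# Perturb n by +/- 3 sigma (hypothetical sigma = 0.005) to see the spread in Q.
q_lo = manning_discharge(0.035 + 0.015, area, perimeter, slope)
q_hi = manning_discharge(0.035 - 0.015, area, perimeter, slope)
print(q_lo < q_mid < q_hi)  # Q decreases monotonically as n grows
```

    Because Q scales as 1/n, a relative error in n maps directly into a relative error in discharge, which is why the abstract compares the sensitivity of environmental flow estimates against that of the discharge itself.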

  1. Aroma analysis and quality control of food using highly sensitive analytical methods

    International Nuclear Information System (INIS)

    Mayr, D.

    2003-02-01

    This thesis deals with the development of quality control methods for food based on headspace measurements by Proton-Transfer-Reaction Mass-Spectrometry (PTR-MS) and with aroma analysis of food using PTR-MS and Gas Chromatography-Olfactometry (GC-O). An objective method was developed for determining the quality of a herb extract, a quality that until now had been checked by sensory analysis. The concentrations of the volatile organic compounds (VOCs) in the headspace of 81 different batches were measured by PTR-MS. Based on the sensory judgment of the customer, characteristic differences in the emissions of 'good' and 'bad' quality samples were identified and a method for the quality control of this herb extract was developed. This novel method enables the producing company to check and ensure that it sells only high-quality products and thereby avoid customer complaints. Furthermore, this method can be used for controlling, optimizing and automating the production process. VOCs emitted by meat were investigated using PTR-MS to develop a rapid, non-destructive and quantitative technique for determining the microbial contamination of meat. Meat samples (beef, pork and poultry) wrapped in different kinds of packaging (air and vacuum) were stored at 4 °C for up to 13 days. The emitted VOCs were measured as a function of storage time and partly identified. The concentrations of many of the measured VOCs, e.g. sulfur compounds like methanethiol, dimethylsulfide and dimethyldisulfide, increased strongly over the storage time. There were large differences in the emissions of normal air- and vacuum-packed meat. VOCs typically emitted by air-packaged meat were methanethiol, dimethylsulfide and dimethyldisulfide, while ethanol and methanol were found in vacuum-packaged meat. A comparison of the PTR-MS results with those obtained by a bacteriological examination performed at the same time showed strong correlations (up to 99 %) between the

  2. Comparison of the sensitivity of mass spectrometry atmospheric pressure ionization techniques in the analysis of porphyrinoids.

    Science.gov (United States)

    Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold

    2013-10-01

    Porphyrinoid chemistry depends greatly on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques, electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization, was tested for several porphyrinoids and their metallocomplexes. Electrospray ionization was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly for corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply much relevant information on the methodology of porphyrinoid analysis by mass spectrometry, which can be useful in designing future MS or liquid chromatography-MS experiments. Copyright © 2013 John Wiley & Sons, Ltd.

  3. A combined sensitivity analysis and kriging surrogate modeling for early validation of health indicators

    International Nuclear Information System (INIS)

    Lamoureux, Benjamin; Mechbal, Nazih; Massé, Jean-Rémi

    2014-01-01

    To increase the dependability of complex systems, one solution is to assess their state of health continuously through the monitoring of variables sensitive to potential degradation modes. When computed in an operating environment, these variables, known as health indicators, are subject to many uncertainties. Hence, the stochastic nature of health assessment combined with the lack of data in design stages makes it difficult to evaluate the efficiency of a health indicator before the system enters into service. This paper introduces a method for early validation of health indicators during the design stages of a system development process. This method uses physics-based modeling and uncertainty propagation to create simulated stochastic data. However, because of the large number of parameters defining the model and its computation duration, the necessary runtime for uncertainty propagation is prohibitive. Thus, kriging is used to obtain low-computation-time estimations of the model outputs. Moreover, sensitivity analysis techniques are performed upstream to determine the hierarchization of the model parameters and to reduce the dimension of the input space. The validation is based on three types of numerical key performance indicators corresponding to the detection, identification and prognostic processes. After having introduced and formalized the framework of uncertain systems modeling and the different performance metrics, the issues of sensitivity analysis and surrogate modeling are addressed. The method is subsequently applied to the validation of a set of health indicators for the monitoring of an aircraft engine's pumping unit
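
    The kriging idea used above is to replace the expensive physics model with a Gaussian-process interpolant fitted to a handful of runs. Below is a minimal zero-mean (simple) kriging sketch with fixed, hand-picked hyperparameters; a real application would estimate them and train on the physics model rather than the toy sine function used here.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, var=1.0):
    """Squared-exponential covariance between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

class SimpleKriging:
    """Zero-mean Gaussian-process (simple kriging) surrogate."""
    def __init__(self, length=1.0, var=1.0, noise=1e-8):
        self.length, self.var, self.noise = length, var, noise

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.length, self.var) + self.noise * np.eye(len(X))
        L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return self

    def predict(self, Xs):
        return rbf_kernel(Xs, self.X, self.length, self.var) @ self.alpha

# Surrogate for an "expensive" 1-D model, trained on just 8 runs.
f = lambda x: np.sin(3 * x[:, 0])
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
model = SimpleKriging(length=0.5).fit(X_train, f(X_train))

X_test = np.array([[0.5], [1.3]])
print(model.predict(X_test))   # close to sin(1.5), sin(3.9)
```

    Once fitted, the surrogate is cheap enough to evaluate inside Monte Carlo uncertainty propagation, which is precisely why the paper pairs kriging with sensitivity analysis.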

  4. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    Science.gov (United States)

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects are described procedurally as functions of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques for analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for the ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  5. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system.

    Science.gov (United States)

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. 
We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
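
    The Morris screening method named above can be sketched compactly: each trajectory perturbs one input at a time and records the resulting elementary effects. The three-input toy model below is hypothetical; it simply shows how the statistics separate a strong, a weak, and an inert input.

```python
import numpy as np

def morris_screen(f, d, r=50, levels=4, seed=0):
    """Morris elementary-effects screening on [0, 1]^d.

    Returns (mu_star, sigma) per input: mu_star ranks overall influence,
    sigma flags nonlinearity and interactions.
    """
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, d))
    for k in range(r):
        x = rng.integers(0, levels // 2, d) / (levels - 1)  # grid base point
        y = f(x)
        for i in rng.permutation(d):                        # one-at-a-time moves
            x2 = x.copy()
            x2[i] += delta
            y2 = f(x2)
            ee[k, i] = (y2 - y) / delta
            x, y = x2, y2
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: x0 strong linear, x1 weak, x2 inert.
g = lambda x: 10 * x[0] + 1 * x[1] + 0 * x[2]
mu_star, sigma = morris_screen(g, d=3)
print(np.round(mu_star))  # -> [10.  1.  0.]
```

    Because the model is linear, sigma is zero here; in the thyroid model above, large sigma values are what justify following Morris screening with a full Gaussian-emulation analysis.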

  6. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    Directory of Open Access Journals (Sweden)

    Annie eLumen

    2015-05-01

    Full Text Available A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local

  7. Linearization of the Principal Component Analysis method for radiative transfer acceleration: Application to retrieval algorithms and sensitivity studies

    International Nuclear Information System (INIS)

    Spurr, R.; Natraj, V.; Lerot, C.; Van Roozendael, M.; Loyola, D.

    2013-01-01

    Principal Component Analysis (PCA) is a promising tool for enhancing radiative transfer (RT) performance. When applied to binned optical property data sets, PCA exploits redundancy in the optical data and restricts the number of full multiple-scatter calculations to those optical states corresponding to the most important principal components, while still maintaining high accuracy in the radiance approximations. We show that the entire PCA RT enhancement process is analytically differentiable with respect to any atmospheric or surface parameter, thus allowing for accurate and fast approximations of Jacobian matrices, in addition to radiances. This linearization greatly extends the power and scope of the PCA method to many remote sensing retrieval applications and sensitivity studies. In the first example, we examine accuracy for PCA-derived UV-backscatter radiance and Jacobian fields over a 290–340 nm window. In a second application, we show that performance for UV-based total ozone column retrieval is considerably improved without compromising accuracy. -- Highlights: •Principal Component Analysis (PCA) of spectrally-binned atmospheric optical properties. •PCA-based accelerated radiative transfer with 2-stream model for fast multiple-scatter. •Atmospheric and surface property linearization of this PCA performance enhancement. •Accuracy of PCA enhancement for radiances and bulk-property Jacobians, 290–340 nm. •Application of PCA speed enhancement to UV backscatter total ozone retrievals
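
    The core PCA step — exploiting redundancy in binned optical data so that only a few leading components matter — can be sketched with a plain SVD. The "optical property" matrix below is synthetic with a built-in rank-2 structure; it is not radiative-transfer data.

```python
import numpy as np

# Hypothetical "binned optical property" matrix: 500 spectral bins described
# by 6 optical variables with strong redundancy between them.
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))            # two true degrees of freedom
mixing = rng.standard_normal((2, 6))
data = latent @ mixing + 0.01 * rng.standard_normal((500, 6))

# PCA via SVD of the mean-centred data.
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)

explained = S**2 / np.sum(S**2)
k = 2                                             # keep the leading components
approx = mean + (U[:, :k] * S[:k]) @ Vt[:k]

print(explained[:k].sum() > 0.999)                # two PCs capture ~all variance
print(np.abs(approx - data).max() < 0.1)          # rank-2 reconstruction is tight
```

    In the paper's scheme, full multiple-scatter RT runs are needed only for the handful of optical states spanned by these leading components, with the differentiability of every step (centering, projection, reconstruction) enabling the Jacobian linearization.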

  8. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    four approximation techniques considered in this paper is orders of magnitude smaller than traditional Monte Carlo estimation. Software, coded in MATLAB®, which implements all sensitivity analysis techniques discussed in this paper, is available free of charge. Conclusions Estimating variance-based sensitivity indices of a large biochemical reaction system is a computationally challenging task that can only be addressed via approximations. Among the methods presented in this paper, a technique based on orthonormal Hermite polynomials seems to be an acceptable candidate for the job, producing very good approximation results for a wide range of uncertainty levels in a fraction of the time required by traditional Monte Carlo sampling.
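
    The Hermite-polynomial technique singled out in this conclusion rests on a simple fact: for a standard normal input, the variance of a model output equals a weighted sum of squared coefficients of its Hermite expansion (Parseval's identity), so variance-based indices can be read off the coefficients instead of sampled. A one-dimensional sketch with a toy polynomial model, not a biochemical reaction system:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He

def pce_variance(f, order=6, quad=40):
    """Variance of f(X), X ~ N(0,1), via a probabilists'-Hermite expansion.

    Coefficients come from Gauss-Hermite quadrature; by Parseval,
    Var(f) = sum_{n>=1} c_n^2 * n!.
    """
    x, w = He.hermegauss(quad)           # nodes/weights for weight exp(-x^2/2)
    fx = f(x)
    var = 0.0
    for n in range(1, order + 1):
        basis = He.hermeval(x, [0.0] * n + [1.0])     # He_n at the nodes
        c_n = (w * fx * basis).sum() / (sqrt(2.0 * pi) * factorial(n))
        var += c_n**2 * factorial(n)
    return var

# Toy model x + x^2: Var = Var(x) + Var(x^2) = 1 + 2 = 3 analytically.
print(round(pce_variance(lambda x: x + x**2), 6))  # -> 3.0
```

    The multi-dimensional version partitions this variance over tensor-product Hermite terms, which is what yields Sobol-type indices far faster than Monte Carlo sampling.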

  9. Sensitivity analysis of local uncertainties in large break loss-of-coolant accident (LB-LOCA) thermo-mechanical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Arkoma, Asko, E-mail: asko.arkoma@vtt.fi; Ikonen, Timo

    2016-08-15

    Highlights: • A sensitivity analysis using the data from EPR LB-LOCA simulations is done. • A procedure to analyze such complex data is outlined. • Both visual and quantitative methods are used. • Input factors related to core design are identified as most significant. - Abstract: In this paper, a sensitivity analysis for the data originating from a large break loss-of-coolant accident (LB-LOCA) analysis of an EPR-type nuclear power plant is presented. In the preceding LOCA analysis, the number of fuel rods failing in the accident was established (Arkoma et al., 2015). However, the underlying causes for rod failures were not addressed. It is essential to bring out which input parameters and boundary conditions are significant for the outcome of the analysis, i.e. the ballooning and burst of the rods. Due to the complexity of the existing data, the first part of the analysis consists of defining the relevant input parameters for the sensitivity analysis. Then, selected sensitivity measures are calculated between the chosen input and output parameters. The ultimate goal is to develop a systematic procedure for the sensitivity analysis of statistical LOCA simulations that takes into account the various sources of uncertainty in the calculation chain. In the current analysis, the most relevant parameters with respect to cladding integrity are the decay heat power during the transient, the thermal-hydraulic conditions at the rod's location in the reactor, and the steady-state irradiation history of the rod. Meanwhile, the tolerances in fuel manufacturing parameters were found to have a negligible effect on cladding deformation.

  10. Sensitive and comprehensive analysis of O-glycosylation in biotherapeutics: a case study of novel erythropoiesis stimulating protein.

    Science.gov (United States)

    Kim, Unyong; Oh, Myung Jin; Seo, Youngsuk; Jeon, Yinae; Eom, Joon-Ho; An, Hyun Joo

    2017-09-01

    Glycosylation of recombinant human erythropoietins (rhEPOs) is significantly associated with the drug's quality and potency. Thus, comprehensive characterization of glycosylation is vital to assess biotherapeutic quality and establish the equivalency of biosimilar rhEPOs. However, current glycan analysis mainly focuses on N-glycans due to the absence of analytical tools to liberate O-glycans with high sensitivity. We developed a selective and sensitive method to profile native O-glycans on rhEPOs. O-glycosylation on rhEPO, including O-acetylation on a sialic acid, was comprehensively characterized. Details such as O-glycan structure and O-acetyl-modification site were obtained from tandem MS. This method may be applied to QC and batch analysis not only of rhEPOs but also of other biotherapeutics bearing multiple O-glycosylations.

  11. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

    Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.

  12. Privacy Protection Method for Multiple Sensitive Attributes Based on Strong Rule

    Directory of Open Access Journals (Sweden)

    Tong Yi

    2015-01-01

    Full Text Available At present, most studies on data publishing consider only a single sensitive attribute, and works on multiple sensitive attributes are still few. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationships between sensitive attributes into account, so an adversary can use background knowledge about these relationships to attack the privacy of users. This paper presents an attack model based on the association rules between sensitive attributes and, accordingly, presents a data publication method for multiple sensitive attributes. Through proof and analysis, the new model can prevent an adversary from using background knowledge about association rules to attack privacy, and it is able to release high-quality information. Finally, this paper verifies the above conclusions with experiments.

  13. Application of sensitivity analysis to a simplified coupled neutronic thermal-hydraulics transient in a fast reactor using Adjoint techniques

    International Nuclear Information System (INIS)

    Gilli, L.; Lathouwers, D.; Kloosterman, J.L.; Van der Hagen, T.H.J.J.

    2011-01-01

    In this paper a method to perform sensitivity analysis for a simplified multi-physics problem is presented. The method is based on the Adjoint Sensitivity Analysis Procedure, which is used to apply first order perturbation theory to linear and nonlinear problems using adjoint techniques. The multi-physics problem considered includes a neutronic, a thermo-kinetics, and a thermal-hydraulics part, and it is used to model the time dependent behavior of a sodium cooled fast reactor. The adjoint procedure is applied to calculate the sensitivity coefficients with respect to the kinetic parameters of the problem for two reference transients using two different model responses; the results obtained are then compared with the values given by a direct sampling of the forward nonlinear problem. Our first results show that, thanks to modern numerical techniques, the procedure is relatively easy to implement and provides good estimates for most perturbations, making the method appealing for more detailed problems. (author)

  14. The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method

    International Nuclear Information System (INIS)

    Smirnov, V.; Orlov, V.; Mourogov, A.; Lecarpentier, D.; Ivanova, T.

    2005-01-01

    The lead-cooled fast neutron reactor is one of the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead-cooled fast reactor developed by NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning requires new technologies and new nuclear data. In this connection, the tools and neutron data used for the calculational analysis of reactor characteristics require thorough validation. NIKIET developed a reactor benchmark suited to design-type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), results of this benchmark calculation concerning the principal parameters of fuel evolution and safety parameters have been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactor. Different codes and cross-section data have been used, and sensitivity studies have been performed to understand and quantify the sources of uncertainty. The comparison of results shows that the difference in the k eff value between the ERANOS code with the ERALIB1 library and the reference is of the same order of magnitude as the delayed neutron fraction; on the other hand, the discrepancy is more than twice as big if the JEF2.2 library is used with ERANOS. Analysis of the discrepancies in the calculation results reveals that the main effect comes from differences in nuclear data, namely 238U and 239Pu fission and capture cross sections and lead inelastic cross sections

  15. Sensitive rapid analysis of iodine-labelled protein mixture on flat substrates with high spatial resolution

    International Nuclear Information System (INIS)

    Zanevskij, Yu.V.; Ivanov, A.B.; Movchan, S.A.; Peshekhonov, V.D.; Chan Dyk Tkhan'; Chernenko, S.P.; Kaminir, L.B.; Krejndlin, Eh.Ya.; Chernyj, A.A.

    1983-01-01

    The usability of rapid electrophoretic analysis of mixtures of 125I-labelled proteins on flat samples is studied by means of a URAN-type installation based on a multiwire proportional chamber. The sensitivity of the method is better than 200 cpm/cm2 and the spatial resolution is approximately 1 mm. The rapid analysis procedure takes no longer than several tens of minutes

  16. Further development of LLNA:DAE method as stand-alone skin-sensitization testing method and applied for evaluation of relative skin-sensitizing potency between chemicals.

    Science.gov (United States)

    Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi

    2015-04-01

    To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed, and previously reported, a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that give borderline positive results. To develop the LLNA:DAE method into a useful stand-alone testing method, we investigated the complete procedure for the LLNA:DAE method using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of chemical are applied to the dorsum of the right ear on days 1, 2, and 3, and to the dorsum of both ears on day 10. Ear thickness and skin irritation score are measured on days 1, 3, 5, 10, and 12. Local lymph nodes are excised and weighed on day 12. The test dose for the primary LLNA:DAE study is selected as the dose that gave the highest left-ear lymph node weight in the dose-finding study, or the lowest dose that produced a left-ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, a qualitative relationship was observed between the degree of elicitation response in the left-ear lymph node and the skin-sensitizing potency of the 32 chemicals tested in this study and the previous study. These results indicate that the LLNA:DAE method is the first LLNA method able to evaluate skin-sensitizing potential and potency from the elicitation response.

  17. A Sensitive Validated Spectrophotometric Method for the Determination of Flucloxacillin Sodium

    Directory of Open Access Journals (Sweden)

    R. Singh Gujral

    2009-01-01

    Full Text Available A simple and sensitive spectrophotometric method has been proposed for the determination of flucloxacillin sodium. The method is based on the charge transfer complexation reaction of the drug with iodine in methanol-dichloromethane medium. The absorbance was measured at 362 nm against the reagent blank. Under optimized experimental conditions, Beer's law is obeyed in the concentration range 1-9 μg/mL for flucloxacillin. The method was validated for specificity, linearity, precision, and accuracy. The degree of linearity of the calibration curves, the percent recoveries, and the limits of detection and quantitation for the spectrophotometric method were determined. No interference could be observed from the additives commonly present in pharmaceutical formulations. The method was successfully applied to the in vitro determination of the drug in human urine samples with low RSD values. The method is thus simple, specific, accurate and sensitive.

  18. Sensitivity analysis of the nuclear data for MYRRHA reactor modelling

    International Nuclear Information System (INIS)

    Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan

    2014-01-01

    A global sensitivity analysis of the effective neutron multiplication factor k eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, 56Fe and 238Pu. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k eff, the reaction cross-sections and multiplicities in one evaluation were substituted by the corresponding data from other evaluations. (authors)

  19. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    Science.gov (United States)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using the DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with a bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with the Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed with the analysis-of-variance (ANOVA) technique of Sobol'. This approach makes it possible to perform a generalized multi-input multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol' influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
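    The surrogate-based workflow above (train an RBF model on expensive simulations, then estimate Sobol' indices cheaply on the surrogate) can be sketched with a toy analytic stand-in for the RANS solver; the function, sample sizes, and parameter ranges are all assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_model(x):
    """Toy stand-in for a RANS run: an output such as pressure recovery
    as a function of two normalized geometric parameters."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 1]

# Train a radial-basis-function surrogate on a modest sample of "runs"
x_train = rng.uniform(0.0, 1.0, size=(200, 2))
surrogate = RBFInterpolator(x_train, expensive_model(x_train))

# First-order Sobol' indices via a pick-and-freeze estimator, evaluated
# on the cheap surrogate instead of the expensive model.
n = 20000
a = rng.uniform(0.0, 1.0, size=(n, 2))
b = rng.uniform(0.0, 1.0, size=(n, 2))
fa = surrogate(a)
var = fa.var()

s1 = []
for i in range(2):
    ab = b.copy()
    ab[:, i] = a[:, i]  # freeze coordinate i at the values from sample A
    s1.append(np.mean(surrogate(ab) * (fa - fa.mean())) / var)
```

    The gap between the sum of first-order indices and 1 indicates how much variance is carried by parameter interactions, which is the quantity the Sobol'/ANOVA decomposition exposes beyond one-at-a-time studies.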

  20. Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis

    Science.gov (United States)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    Polyphemus/Polair3D, from which IRSN's operational model ldX is derived, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that (1) the sensitivities depend strongly on the considered output; (2) only a few of the inputs are non-influential for all considered outputs; and (3) most influential inputs have either non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators were built for each considered output in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, yielded poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared with available observations, for instance for calibration purposes. A thorough inspection of the prediction residuals suggested that the model response to wind perturbations often behaves in very distinct regimes on either side of certain thresholds. Complementing the initial sample with wind perturbations set to their extreme values noticeably improved some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies.
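    The emulation-and-validation step described above (fit a Gaussian process to a sample of model runs, then check prediction residuals on held-out test simulations) can be sketched with a toy stand-in for the dispersion model; the function, inputs, and sample sizes are assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def model_output(x):
    """Toy stand-in for a dispersion-model output (e.g. an aggregated gamma
    dose rate) as a smooth function of two perturbed inputs."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1]

# "Training" simulations
x_train = rng.uniform(0.0, 1.0, size=(60, 2))
y_train = model_output(x_train)

# Gaussian process emulator; a small alpha regularizes the covariance
# matrix even though the toy model is deterministic.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.3),
    alpha=1e-8,
    normalize_y=True,
).fit(x_train, y_train)

# Held-out "test simulations": large residuals would flag outputs that the
# emulator cannot reproduce reliably, as happened for localized outputs.
x_test = rng.uniform(0.0, 1.0, size=(200, 2))
residuals = gp.predict(x_test) - model_output(x_test)
rmse = np.sqrt(np.mean(residuals ** 2))
```

    Only emulators that pass such a residual check would then be trusted inside a Sobol' analysis; a regime-dependent response, as observed for the wind perturbations, would show up here as structured rather than random residuals.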