WorldWideScience

Sample records for parameter sensitivity analysis

  1. Sensitivity analysis in multi-parameter probabilistic systems

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
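A minimal sketch of this style of multi-parameter Monte Carlo sensitivity analysis, using Spearman rank correlation between sampled inputs and the output as a simple, distribution-free sensitivity measure. The three-parameter model below is a hypothetical stand-in; the report's MCROC model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical three-parameter model standing in for a probabilistic system;
# parameter names and ranges are illustrative assumptions.
x = rng.uniform([1.0, 0.1, 10.0], [2.0, 0.5, 20.0], size=(n, 3))
y = x[:, 0] ** 2 * np.exp(-x[:, 1]) + 0.05 * x[:, 2] + rng.normal(0.0, 0.1, n)

def ranks(a):
    """Return the rank of each element (0 = smallest)."""
    return np.argsort(np.argsort(a))

# Spearman rank correlation between each input and the output quantifies how
# strongly that input drives the consequence estimate.
sens = {}
for i, name in enumerate(["gain", "decay", "offset"]):
    sens[name] = np.corrcoef(ranks(x[:, i]), ranks(y))[0, 1]
    print(f"{name}: rank correlation = {sens[name]:+.3f}")
```

With enough samples, the correlations stabilize, which is what gives such techniques known confidence limits.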

  2. A sensitivity analysis approach to optical parameters of scintillation detectors

    International Nuclear Information System (INIS)

    Ghal-Eh, N.; Koohi-Fayegh, R.

    2008-01-01

    In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of light collection process in scintillators

  3. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

    This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
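The first-order sensitivity machinery for a ratio of linear flux functionals can be sketched on a generic linear system (purely illustrative, not a transport code). For R = (c1·φ)/(c2·φ) with Aφ = q and a perturbation A → A + αB, the derivative follows from dφ/dα = −A⁻¹Bφ and can be checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Generic well-conditioned linear system A phi = q; the matrices and response
# vectors below are arbitrary illustrative assumptions.
A = np.eye(n) * 5.0 + rng.uniform(-0.5, 0.5, (n, n))
B = rng.uniform(-1.0, 1.0, (n, n))  # perturbation direction
q = rng.uniform(0.5, 1.5, n)
c1, c2 = rng.uniform(0.5, 1.5, n), rng.uniform(0.5, 1.5, n)

phi = np.linalg.solve(A, q)
R = (c1 @ phi) / (c2 @ phi)

# First-order sensitivity: dphi/dalpha = -A^{-1} B phi at alpha = 0, so
# dR/dalpha = (c1.dphi)/(c2.phi) - R (c2.dphi)/(c2.phi).
dphi = -np.linalg.solve(A, B @ phi)
dR = (c1 @ dphi) / (c2 @ phi) - R * (c2 @ dphi) / (c2 @ phi)

# Central finite-difference check; agreement degrades for large perturbations,
# which is exactly the first-order limitation noted in the abstract.
def R_of(a):
    f = np.linalg.solve(A + a * B, q)
    return (c1 @ f) / (c2 @ f)

eps = 1e-6
dR_fd = (R_of(eps) - R_of(-eps)) / (2.0 * eps)
print(f"first-order dR/da = {dR:.6f}, finite difference = {dR_fd:.6f}")
```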

  4. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, in which a sensitivity analysis is first performed with respect to the regressive variables; in the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be employed
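The two-step partitioning can be sketched with a hypothetical model (not the paper's): first compute the output variance driven by the regressive variables with the parameters at nominal values, then add parameter uncertainty and attribute the variance increase to the parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical model y = th1 * x1 + th2 * x2**2 (an illustrative assumption):
# x1, x2 are regressive variables; th1, th2 are uncertain model parameters.
def model(x1, x2, th1, th2):
    return th1 * x1 + th2 * x2 ** 2

x1 = rng.normal(1.0, 0.2, n)
x2 = rng.normal(2.0, 0.3, n)

# Step 1: output variance from the regressive variables alone, with the
# parameters held at their nominal values.
v_regressive = model(x1, x2, 3.0, 0.5).var()

# Step 2: add parameter uncertainty and recompute the output variance; the
# increase is attributable to the model parameters.
th1 = rng.normal(3.0, 0.1, n)
th2 = rng.normal(0.5, 0.05, n)
v_total = model(x1, x2, th1, th2).var()

print(f"variance from regressive variables: {v_regressive:.3f}")
print(f"variance with parameter uncertainty: {v_total:.3f}")
```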

  5. Sensitivity analysis of railpad parameters on vertical railway track dynamics

    NARCIS (Netherlands)

    Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.

    2016-01-01

    This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad

  6. Seismic analysis of steam generator and parameter sensitivity studies

    International Nuclear Information System (INIS)

    Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun

    2013-01-01

Background: The steam generator (SG) serves as the primary means of removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: An analysis model of the SG, comprising the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS piping and the reactor pressure vessel (RPV). Results: The seismic stress results for the SG are obtained. In addition, the sensitivity of the seismic analysis results to various parameters is studied, such as the effect of the other SG, the supports, and the anti-vibration bars (AVBs). Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus of future research and design of new-type NPP SGs. (authors)

  7. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    Science.gov (United States)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

  8. ECOS - analysis of sensitivity to database and input parameters

    International Nuclear Information System (INIS)

    Sumerling, T.J.; Jones, C.H.

    1986-06-01

    The sensitivity of doses calculated by the generic biosphere code ECOS to parameter changes has been investigated by the authors for the Department of the Environment as part of its radioactive waste management research programme. The sensitivity of results to radionuclide dependent parameters has been tested by specifying reasonable parameter ranges and performing code runs for best estimate, upper-bound and lower-bound parameter values. The work indicates that doses are most sensitive to scenario parameters: geosphere input fractions, area of contaminated land, land use and diet, flux of contaminated waters and water use. Recommendations are made based on the results of sensitivity. (author)

  9. Sensitivity analysis on parameters and processes affecting vapor intrusion risk

    KAUST Repository

    Picone, Sara

    2012-03-30

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion. © 2012 SETAC.

  10. Impact Responses and Parameters Sensitivity Analysis of Electric Wheelchairs

    Directory of Open Access Journals (Sweden)

    Song Wang

    2018-06-01

The shock and vibration of electric wheelchairs traversing road irregularities are inevitable. The road excitation causes an uneven magnetic gap in the motor, and the harmful vibration decreases the recovery rate of rehabilitation patients. To effectively suppress the shock and vibration, this paper introduces a dynamic absorber (DA) to the electric wheelchair. Firstly, a vibration model of the human-wheelchair system with the DA was created. Models of the road excitation for a wheelchair going up a step and going down a step were proposed. To reasonably evaluate the impact level of the human-wheelchair system undergoing the step-road transition, evaluation indexes were given. Moreover, the vibration model and the road-step model were validated via tests. Then, to reveal the vibration suppression performance of the DA, the impact responses and the amplitude-frequency characteristics were numerically simulated and compared. Finally, a sensitivity analysis of the impact responses to the tire static radius r and the characteristic parameters was carried out. The results show that the DA can effectively suppress the shock and vibration of the human-wheelchair system. Moreover, the vibration behaviors differ somewhat between the wheelchair going up a step and going down a step.
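The suppression idea can be sketched with a generic two-mass model: a lightly damped main mass (a lumped human-wheelchair stand-in) given an impact-like initial velocity, with and without a tuned absorber. All parameter values below are illustrative assumptions, not the paper's wheelchair data.

```python
import numpy as np

# Generic 2-DOF sketch of a tuned dynamic absorber (DA); masses, stiffnesses
# and damping values are assumed for illustration only.
m1, k1, c1 = 100.0, 2.0e4, 50.0            # main mass (human-wheelchair lump)
w1 = np.sqrt(k1 / m1)                      # main natural frequency
m2 = 5.0                                   # absorber mass
k2, c2 = m2 * w1 ** 2, 30.0                # absorber tuned to the main frequency

def simulate(with_da, dt=1e-3, t_end=10.0):
    x1 = x2 = v2 = 0.0
    v1 = 1.0                               # impact-like initial velocity (m/s)
    xs = []
    for _ in range(int(t_end / dt)):
        f12 = k2 * (x2 - x1) + c2 * (v2 - v1) if with_da else 0.0
        a1 = (-k1 * x1 - c1 * v1 + f12) / m1
        a2 = -f12 / m2 if with_da else 0.0
        v1 += a1 * dt; v2 += a2 * dt       # semi-implicit Euler step
        x1 += v1 * dt; x2 += v2 * dt
        xs.append(x1)
    return np.array(xs)

rms_no = np.sqrt(np.mean(simulate(False) ** 2))
rms_da = np.sqrt(np.mean(simulate(True) ** 2))
print(f"RMS response without DA: {rms_no:.4f} m, with DA: {rms_da:.4f} m")
```

The absorber drains vibration energy from the main mass and dissipates it through its own damper, lowering the response over the decay window.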

  11. Reliability analysis of a sensitive and independent stabilometry parameter set.

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
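The reliability indices used above can be computed from an n-subjects × k-trials score matrix. A sketch with synthetic data follows; the ICC(2,1) two-way random-effects, absolute-agreement form (Shrout and Fleiss) is assumed, and the exact variant used in the paper may differ.

```python
import numpy as np

# Synthetic test-retest data for one stabilometry parameter (e.g. CoP path
# length): 30 subjects, 2 trials; values are illustrative only.
rng = np.random.default_rng(2)
n, k = 30, 2
true_score = rng.normal(100.0, 15.0, n)                     # between-subject
scores = true_score[:, None] + rng.normal(0.0, 5.0, (n, k))  # trial noise

# Two-way ANOVA mean squares.
grand = scores.mean()
ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
ss_err = ((scores - scores.mean(axis=1, keepdims=True)
           - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_err = ss_err / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measurement.
icc = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
sem = scores.std(ddof=1) * np.sqrt(1.0 - icc)   # standard error of measurement
mdc = 1.96 * np.sqrt(2.0) * sem                 # minimal detectable change, 95%

print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f}, MDC = {mdc:.2f}")
```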

  13. Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input choices, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied the sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.

  15. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    Science.gov (United States)

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  16. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification: it can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, shortcomings in the classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multi-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and ten parameters were then selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
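The Morris screening step can be sketched in a few lines: build random one-at-a-time trajectories, collect elementary effects per parameter, and rank by the mean absolute effect mu*. The four-parameter test function below is a hypothetical stand-in for the Xin'anjiang model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical test function (parameters scaled to [0, 1]); p[3] is inert.
def f(p):
    return p[0] + 2.0 * p[1] ** 2 + 0.1 * p[2]

n_params, n_traj = 4, 50
delta = 0.5
effects = [[] for _ in range(n_params)]

for _ in range(n_traj):
    p = rng.uniform(0.0, 1.0 - delta, n_params)  # keep p + delta inside [0, 1]
    for i in rng.permutation(n_params):          # one-at-a-time moves
        base = f(p)
        p[i] += delta
        effects[i].append((f(p) - base) / delta)

# mu* (mean absolute elementary effect) ranks influence; sigma flags
# nonlinearity or interaction.
mu_star = [np.mean(np.abs(e)) for e in effects]
sigma = [np.std(e) for e in effects]
ranking = np.argsort(mu_star)[::-1]
print("mu* =", [round(m, 3) for m in mu_star])
print("ranking (most to least influential):", ranking.tolist())
```

The screening cheaply discards inert parameters before the expensive variance-based step.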

  17. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    Directory of Open Access Journals (Sweden)

    J. Li

    2013-08-01

Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating the parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the parameters most influential on model outputs. In this study, we conducted parameter screening for six output fluxes of the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including the local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and the sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and that about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.
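First-order and total Sobol' indices of the kind used here can be estimated with the standard Saltelli/Jansen pick-and-freeze estimators. The sketch below uses the Ishigami benchmark function, not the Common Land Model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50000

# Ishigami benchmark (a = 7, b = 0.1), a standard sensitivity test function.
def f(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

A = rng.uniform(-np.pi, np.pi, (n, 3))
B = rng.uniform(-np.pi, np.pi, (n, 3))
fA, fB = f(A), f(B)
var_y = np.var(np.concatenate([fA, fB]))

first, total = [], []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # swap in column i from the B sample
    fABi = f(ABi)
    first.append(np.mean(fB * (fABi - fA)) / var_y)       # Saltelli S_i
    total.append(0.5 * np.mean((fA - fABi) ** 2) / var_y)  # Jansen S_Ti

print("first-order:", [round(s, 2) for s in first])
print("total:      ", [round(s, 2) for s in total])
```

For Ishigami, x3 has a zero first-order index but a sizeable total effect, illustrating why total effects are the right screen for interacting parameters.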

  18. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991 utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland. The sensitive model parameters (of 11 total considered include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  19. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    Science.gov (United States)

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost

  20. Groundwater pathway sensitivity analysis and hydrogeologic parameters identification for waste disposal in porous media

    International Nuclear Information System (INIS)

    Yu, C.

    1986-01-01

The migration of radionuclides in a geologic medium is controlled by the hydrogeologic parameters of the medium, such as the dispersion coefficient, pore water velocity, retardation factor, degradation rate, mass transfer coefficient, water content, and fraction of dead-end pores. These hydrogeologic parameters are often used to predict the migration of buried wastes in nuclide transport models such as the conventional advection-dispersion model, the mobile-immobile pores model, the nonequilibrium adsorption-desorption model, and the general group transfer concentration model. One of the most important factors determining the accuracy of predicted waste migration is the accuracy of the parameter values used in the model. More sensitive parameters have a greater influence on the results and hence should be determined (measured or estimated) more accurately than less sensitive parameters. A formal parameter sensitivity analysis is carried out in this paper. Parameter identification techniques to determine the hydrogeologic parameters of the flow system are discussed. The dependence of the accuracy of the estimated parameters upon the parameter sensitivity is also discussed.
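A sketch of local parameter sensitivity for such a transport model, using a simplified textbook analytical solution for an instantaneous source in 1-D with retardation R and first-order decay (an assumed form, not the paper's full model set). The normalized sensitivity S_p = (p/C) ∂C/∂p, estimated by central differences, shows which parameters need the most accurate determination.

```python
import numpy as np

# Simplified 1-D advection-dispersion solution with retardation R and decay
# rate lam; parameter values are illustrative assumptions.
def conc(x, t, D, v, R, lam, M=1.0):
    Dr, vr = D / R, v / R                    # retarded dispersion and velocity
    return (M / (R * np.sqrt(4.0 * np.pi * Dr * t))
            * np.exp(-(x - vr * t) ** 2 / (4.0 * Dr * t))
            * np.exp(-lam * t))

x, t = 50.0, 100.0
nominal = dict(D=1.0, v=0.5, R=2.0, lam=1e-3)

# Normalized local sensitivity via central differences: larger |S_p| means the
# parameter must be measured or estimated more accurately.
sens = {}
for name, p0 in nominal.items():
    h = 1e-4 * p0
    hi = dict(nominal); hi[name] = p0 + h
    lo = dict(nominal); lo[name] = p0 - h
    dCdp = (conc(x, t, **hi) - conc(x, t, **lo)) / (2.0 * h)
    sens[name] = p0 * dCdp / conc(x, t, **nominal)
    print(f"{name}: normalized sensitivity = {sens[name]:+.2f}")
```

At this observation point, ahead of the plume center, the velocity dominates while the decay rate barely matters (its normalized sensitivity is just −λt).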

  1. Sensitivity analysis of specific activity model parameters for environmental transport of 3H and dose assessment

    International Nuclear Information System (INIS)

    Rout, S.; Mishra, D.G.; Ravi, P.M.; Tripathi, R.M.

    2016-01-01

Tritium is one of the radionuclides likely to be released to the environment from pressurized heavy water reactors. Environmental models are extensively used to quantify the complex environmental transport processes of radionuclides and to assess the impact on the environment. Model parameters exerting a significant influence on model results are identified through a sensitivity analysis (SA). SA is the study of how the variation (uncertainty) in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input parameters. This study was designed to identify the sensitive parameters of the specific activity model (TRS 1616, IAEA) for environmental transfer of 3H following release to air and then to vegetation and animal products. The model includes parameters such as the air-to-soil transfer factor (CRs), the ratio of tissue free water 3H to organically bound 3H (Rp), relative humidity (RH), fractional water content (WCP) and the water equivalent factor (WEQp). Any change in these parameters leads to a change in the 3H level in vegetation and animal products and, consequently, in the ingestion dose. All these parameters are functions of climate and/or plant, which change with time, space and species. Estimating these parameters each time is time consuming and requires sophisticated instrumentation. It is therefore necessary to identify the sensitive parameters and freeze the values of the least sensitive parameters at constant values, for more accurate estimation of the 3H dose in a short time for routine assessment.

  2. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

Sensitivity analysis of hydrology and water quality parameters has great significance for integrated model construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration, demonstrate that the sensitivity analysis is practicable for parameter adjustment, show the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
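The perturbation method used here can be sketched as a one-at-a-time analysis: perturb each parameter by ±10% and express the relative output change per relative parameter change. The runoff-like toy model below is a hypothetical stand-in, not AnnAGNPS itself, and the parameter names are illustrative.

```python
# One-at-a-time perturbation sensitivity index:
#   S = (dY / Y) / (dp / p), averaged over + and - perturbations.
def runoff(cn, ls, k):
    # Toy curve-number-style response (assumed rainfall of 100 mm);
    # illustrative only, not the AnnAGNPS formulation.
    s = 25400.0 / cn - 254.0
    q = (100.0 - 0.2 * s) ** 2 / (100.0 + 0.8 * s)
    return q * ls ** 0.4 * k ** 0.1

nominal = dict(cn=75.0, ls=1.2, k=0.3)
y0 = runoff(**nominal)

sens = {}
for name, p0 in nominal.items():
    ratios = []
    for rel in (-0.10, +0.10):
        perturbed = dict(nominal)
        perturbed[name] = p0 * (1.0 + rel)
        ratios.append(((runoff(**perturbed) - y0) / y0) / rel)
    sens[name] = sum(ratios) / len(ratios)
    print(f"{name}: sensitivity index = {sens[name]:+.2f}")
```

For a pure power-law dependence the index approximately recovers the exponent, which makes the ranking easy to interpret.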

  3. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    Science.gov (United States)

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.

  4. Multi-parameters sensitivity analysis of natural vibration modal for steel arch bridge

    Directory of Open Access Journals (Sweden)

    WANG Ying

    2014-02-01

    Full Text Available Because of vehicle loads and environmental factors, the behavior of a bridge structure deteriorates over its service life. Modal parameters are important indexes of a structure, so sensitivity analysis of the natural vibration modes is an important way to evaluate the behavior of a bridge structure. In this paper, a calculation model of a steel arch bridge was built using the finite element software ANSYS, and its natural vibration modes were obtained. To compare the sensitivities of the material parameters that may affect the natural vibration modes, five factors were chosen for the calculation. The results indicated that the five factors had different sensitivities: the leading factor was the elastic modulus of the arch rib, while the elastic modulus of the suspenders had little effect. Another finding was that the elastic modulus and the density of the material had opposite sensitivity effects.

  5. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    Science.gov (United States)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

    We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important for the success or failure of leukemia remission under treatment. For the parameters that most affect the evolution of CML during imatinib treatment, we estimate realistic values from experimental data. For these parameters, the steady states are calculated and their stability is analyzed and interpreted biologically.

  6. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  7. Sensitivity Analysis of WEC Array Layout Parameters Effect on the Power Performance

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Ferri, Francesco; Kofoed, Jens Peter

    2015-01-01

    This study assesses the effect that the array layout choice has on the power performance. To this end, a sensitivity analysis is carried out with six array layout parameters, as the simulation inputs, the array power performance (q-factor), as the simulation output, and a simulation model special...

  8. Sensitivity analysis of large system of chemical kinetic parameters for engine combustion simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, H; Sanz-Argent, J; Petitpas, G; Havstad, M; Flowers, D

    2012-04-19

    In this study, the authors applied state-of-the-art sensitivity methods to down-select the system parameters from more than 4000 to 8 (23000+ -> 4000+ -> 84 -> 8). This analysis procedure paves the way for future work: (1) calibrating the system response using existing experimental observations, and (2) predicting future experimental results using the calibrated system.

  9. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    Science.gov (United States)

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  10. Parameter sensitivity and uncertainty of the forest carbon flux model FORUG : a Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Verbeeck, H.; Samson, R.; Lemeur, R. [Ghent Univ., Ghent (Belgium). Laboratory of Plant Ecology; Verdonck, F. [Ghent Univ., Ghent (Belgium). Dept. of Applied Mathematics, Biometrics and Process Control

    2006-06-15

    The FORUG model is a multi-layer process-based model that simulates carbon dioxide (CO{sub 2}) and water exchange between forest stands and the atmosphere. Its main outputs are net ecosystem exchange (NEE), total ecosystem respiration (TER), gross primary production (GPP) and evapotranspiration. This study used a sensitivity analysis to identify the parameters contributing to NEE uncertainty in the FORUG model, with the aim of determining whether it is necessary to estimate the uncertainty of all parameters of a model to determine the overall output uncertainty. The data used were meteorological and flux data for beech trees in Hesse. The Monte Carlo method, in combination with multiple linear regression, was used to rank parameters by sensitivity and uncertainty. In the simulations, parameters were assigned probability distributions, the effect of variance in the parameters on the output distribution was assessed, and the uncertainty of the NEE output was estimated. Based on an arbitrary uncertainty in 10 key parameters, a standard deviation of 0.88 Mg C per year was found for NEE, equal to 24 per cent of its mean value. The sensitivity analysis showed that the overall output uncertainty of the FORUG model could be determined by accounting for only a few key parameters, which were identified as corresponding to critical parameters in the literature; the 10 most important parameters determined more than 90 per cent of the output uncertainty. High-ranking parameters included those for soil respiration, photosynthesis and crown architecture. It was concluded that the Monte Carlo technique is a useful tool for ranking the uncertainty of parameters of process-based forest flux models. 48 refs., 2 tabs., 2 figs.
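
The Monte Carlo plus multiple-linear-regression ranking described in this record can be sketched with standardized regression coefficients. The toy "model" and its parameter effects below are invented for illustration, not the FORUG equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in: the output depends strongly on parameter 0,
    # weakly on parameter 1, and not at all on parameter 2.
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0.0, 0.01, x.shape[0])

n = 2000
x = rng.uniform(0.0, 1.0, size=(n, 3))  # samples from assumed parameter distributions
y = model(x)

# Standardized regression coefficients (SRC): fit a linear surrogate to the
# Monte Carlo sample after scaling inputs and output to unit variance,
# then rank parameters by |coefficient|.
xs = (x - x.mean(axis=0)) / x.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))
print(ranking)  # parameter 0 ranks first, parameter 2 last
```

The squared SRCs approximate each parameter's share of the output variance when the response is close to linear, which is why a few high-ranking parameters can account for most of the output uncertainty.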

  11. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
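
The first step of the two-step approach above, Morris-style screening, can be sketched with elementary effects. This simplified one-at-a-time version (with an invented test function) omits the trajectory design of the full Morris method:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Invented test function: x1 acts nonlinearly and dominates,
    # x0 acts linearly, x2 is inert.
    return x[0] + 3.0 * x[1] ** 2

def morris_mu_star(f, k, r=200, delta=0.1):
    """Mean absolute elementary effect (mu*) per parameter, estimated
    from r random base points with one-at-a-time steps of size delta."""
    effects = np.empty((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y0 = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            effects[j, i] = abs(f(xp) - y0) / delta
    return effects.mean(axis=0)

mu_star = morris_mu_star(model, k=3)
print(mu_star)  # a large mu* flags a parameter as influential; x2 screens out
```

Parameters with negligible mu* are fixed at nominal values, so the expensive variance-based step (gPCE or Saltelli sampling) only has to cover the surviving subset.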

  12. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and clarify how they influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various variance contributions into the form of an expectation via a kernel function, the proposed main and total sensitivity indices can be seen as a “by-product” of Sobol′s variance-based sensitivity analysis, obtained without any additional output evaluations. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  13. Sensitivity analysis in oxidation ditch modelling: the effect of variations in stoichiometric, kinetic and operating parameters on the performance indices

    NARCIS (Netherlands)

    Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.

    2001-01-01

    This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis

  14. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity of input parameters of the dynamic food chain model DYNACON was analyzed as a function of deposition data for the long-lived radionuclides 137Cs and 90Sr. The influence of the input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was also investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as PRCCs. The sensitivity indices were strongly dependent on the contamination period as well as the deposition data. For deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important for long-term as well as short-term contamination; they were also important for short-term contamination when deposition occurred during the non-growing stages. For long-term contamination, the influence of the input parameters associated with foliar absorption decreased, while that of the parameters associated with root uptake increased. These phenomena were more remarkable for deposition during the non-growing stages than the growing stages, and for 90Sr deposition than for 137Cs deposition. For deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate, were relatively important for the contamination of milk
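
The LHS-plus-PRCC procedure used in this record can be sketched as follows; the three-parameter test output is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, k):
    """Basic Latin hypercube sample on [0, 1]^k: one point per stratum
    in each dimension, with an independent permutation per column."""
    u = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        u[:, j] = u[rng.permutation(n), j]
    return u

def prcc(x, y):
    """Partial rank correlation coefficient of each input column with y:
    rank-transform, regress out the other inputs, correlate residuals."""
    data = np.column_stack([x, y])
    ranks = np.argsort(np.argsort(data, axis=0), axis=0).astype(float)
    out = np.empty(x.shape[1])
    for i in range(x.shape[1]):
        others = [j for j in range(x.shape[1]) if j != i]
        a = np.column_stack([np.ones(len(y)), ranks[:, others]])
        res_x = ranks[:, i] - a @ np.linalg.lstsq(a, ranks[:, i], rcond=None)[0]
        res_y = ranks[:, -1] - a @ np.linalg.lstsq(a, ranks[:, -1], rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

x = latin_hypercube(500, 3)
y = 5.0 * x[:, 0] - 2.0 * x[:, 1] + rng.normal(0.0, 0.1, 500)  # x2 is inert
p = prcc(x, y)
print(p)
```

Because PRCC works on ranks, it captures any monotonic input-output relation and its sign, which is why indices near +1 or -1 mark the sensitive parameters here.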

  15. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    Science.gov (United States)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.

  16. Sensitivity analysis of effective population size to demographic parameters in house sparrow populations.

    Science.gov (United States)

    Stubberud, Marlene Waege; Myhre, Ane Marlene; Holand, Håkon; Kvalnes, Thomas; Ringsby, Thor Harald; Saether, Bernt-Erik; Jensen, Henrik

    2017-05-01

    The ratio between the effective and the census population size, Ne/N, is an important measure of the long-term viability and sustainability of a population. Understanding which demographic processes affect Ne/N most will improve our understanding of how genetic drift and the probability of fixation of alleles are affected by demography. This knowledge may also be of vital importance in the management of endangered populations and species. Here, we use data from 13 natural populations of house sparrow (Passer domesticus) in Norway to calculate the demographic parameters that determine Ne/N. Using the global variance-based Sobol' method for the sensitivity analyses, we found that Ne/N was most sensitive to demographic variance, especially among older individuals. Furthermore, the individual reproductive values (which determine the demographic variance) were most sensitive to variation in fecundity. Our results draw attention to the applicability of sensitivity analyses in population management and conservation. For population management aiming to reduce the loss of genetic variation, a sensitivity analysis may indicate the demographic parameters towards which resources should be focused. The result of such an analysis may depend on the life history and mating system of the population or species under consideration, because the vital rates and sex-age classes to which Ne/N is most sensitive may change accordingly. © 2017 John Wiley & Sons Ltd.
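
The variance-based Sobol' method referenced above can be sketched with a Saltelli-style pick-freeze estimator of the first-order indices. The additive test function is invented for illustration (it is not the sparrow demographic model):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Invented additive test function: x1 contributes most of the
    # output variance, x0 contributes the rest, x2 contributes none.
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

n = 20000
a = rng.uniform(-np.pi, np.pi, (n, 3))  # two independent sample matrices
b = rng.uniform(-np.pi, np.pi, (n, 3))
fa, fb = model(a), model(b)
var = np.var(np.concatenate([fa, fb]))

# Saltelli-style estimator of the first-order Sobol index:
# S_i = E[f(B) * (f(A with column i taken from B) - f(A))] / Var(f)
s1 = []
for i in range(3):
    ab = a.copy()
    ab[:, i] = b[:, i]
    s1.append(np.mean(fb * (model(ab) - fa)) / var)
print(s1)  # roughly [0.39, 0.61, 0.0] for this function
```

Each S_i is the fraction of output variance explained by parameter i alone; for this additive function the indices sum to about one, since there are no interaction effects.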

  17. Parametric sensitivity analysis for techno-economic parameters in Indian power sector

    International Nuclear Information System (INIS)

    Mallah, Subhash; Bansal, N.K.

    2011-01-01

    Sensitivity analysis is a technique that evaluates the model response to changes in input assumptions. Given the uncertain prices of primary fuels on the world market, government regulations for sustainability, and various other technical factors, there is a need to analyze the techno-economic parameters that play an important role in policy formulation. This paper examines the variations in technical as well as economic parameters that can most affect the energy policy of India. The MARKAL energy simulation model has been used to analyze the uncertainty in all techno-economic parameters, with the ranges of the input parameters adopted from previous studies. The results show that at a lower discount rate coal is the least preferred technology, with a corresponding reduction in carbon emissions. With increased gas and nuclear fuel prices, these technologies disappear from the energy-mix allocation.

  18. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally under eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for the different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effect of the double layer capacitance on the electrochemical domain and the dynamic effect of the thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC have been given and discussed in detail. A parameter sensitivity study has also been performed, using the statistical Multi Parameter Sensitivity Analysis (MPSA) method, to investigate the impact of the parameters on the modeling accuracy

  19. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS, with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even when a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. The parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  1. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive, spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (a response function to be analysed or a cost function to be optimised) with respect to the model inputs.

    In this contribution, it is shown that the potential of variational methods should be considered for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
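
The SVD-of-the-Jacobian idea in this abstract can be illustrated on a toy model. Here finite differences stand in for the adjoint computation, and the model with its redundant parameter direction is invented:

```python
import numpy as np

def model(p):
    # Toy "distributed" model with 5 outputs and 4 parameters; p[2] and
    # p[3] act only through their sum, so one parameter direction is
    # redundant and the Jacobian is rank-deficient.
    s = p[2] + p[3]
    return np.array([p[0] + s, 2.0 * p[0], 0.1 * p[1], s, p[0] - 0.05 * p[1]])

def fd_jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian (an adjoint code would deliver the
    same derivatives far more cheaply when parameters are numerous)."""
    y0 = f(p)
    jac = np.zeros((y0.size, p.size))
    for i in range(p.size):
        dp = p.copy()
        dp[i] += eps
        jac[:, i] = (f(dp) - y0) / eps
    return jac

p = np.array([1.0, 1.0, 0.5, 0.5])
sv = np.linalg.svd(fd_jacobian(model, p), compute_uv=False)
print(sv)  # a sharp drop in the spectrum: few directions explain the response
```

The leading singular vectors span the parameter directions the output actually responds to, which is the basis for the SVD parametrization (and the overfitting caveat) discussed above.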

  2. Parameter sensitivity analysis of nonlinear piezoelectric probe in tapping mode atomic force microscopy for measurement improvement

    Energy Technology Data Exchange (ETDEWEB)

    McCarty, Rachael; Nima Mahmoodi, S., E-mail: nmahmoodi@eng.ua.edu [Department of Mechanical Engineering, The University of Alabama, Box 870276, Tuscaloosa, Alabama 35487 (United States)

    2014-02-21

    The equations of motion for a piezoelectric microcantilever are derived for a nonlinear contact force. Analytical expressions for the natural frequencies and mode shapes are obtained, and the method of multiple scales is used to analyze the analytical frequency response of the piezoelectric probe. The effects of the nonlinear excitation force on the microcantilever beam's frequency and amplitude are studied analytically. The results show a frequency shift in the response resulting from the force nonlinearities. This frequency shift during contact mode is an important consideration in the modeling of AFM mechanics for generating more accurate images. A sensitivity analysis of the system parameters with respect to the nonlinearity effect is also performed. The results show that it is possible to choose parameters such that the frequency shift is minimized. Certain parameters, such as the tip radius, the microcantilever beam dimensions, and the modulus of elasticity, have more influence on the nonlinearity of the system than others. By changing only three parameters (tip radius, thickness, and modulus of elasticity of the microbeam), a reduction of more than 70% in the nonlinearity effect was achieved.

  3. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM

  5. Sensitivity coefficients of reactor parameters in fast critical assemblies and uncertainty analysis

    International Nuclear Information System (INIS)

    Aoyama, Takafumi; Suzuki, Takayuki; Takeda, Toshikazu; Hasegawa, Akira; Kikuchi, Yasuyuki.

    1986-02-01

    Sensitivity coefficients of reactor parameters in several fast critical assemblies to various cross sections were calculated in a 16-group structure by means of the SAGEP code, which is based on generalized perturbation theory. The sensitivity coefficients were tabulated and the differences among them were discussed. Furthermore, the uncertainties of calculated reactor parameters due to cross-section uncertainty were estimated using the sensitivity coefficients and cross-section covariance data. (author)
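    The last step of the abstract is the standard "sandwich rule": the relative variance of a reactor parameter is S^T C S, with S the group-wise sensitivity vector and C the relative covariance matrix. The 4-group numbers below are invented for illustration, not SAGEP or JENDL values:

```python
import numpy as np

# Sandwich rule: uncertainty of a calculated reactor parameter from group-wise
# sensitivity coefficients S and a cross-section covariance matrix C.
S = np.array([0.10, 0.30, 0.40, 0.20])          # relative sensitivities (made up)
rel_sd = np.array([0.02, 0.03, 0.05, 0.04])     # relative 1-sigma per group (made up)
corr = np.full((4, 4), 0.5) + 0.5 * np.eye(4)   # assumed 0.5 inter-group correlation
C = np.outer(rel_sd, rel_sd) * corr             # relative covariance matrix

var = S @ C @ S                                 # sandwich rule
print(f"relative uncertainty = {np.sqrt(var):.4f}")
```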

  6. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    Science.gov (United States)

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing data acquisition requirements from a 16-s to a 5.33- to 8-s breath-holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
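    The linearized confidence-region idea can be illustrated with a toy model: the parameter covariance is approximated by sigma^2 (J^T W J)^{-1}, with J the Jacobian of the model with respect to the parameters. The two-parameter impedance-like model below is a hypothetical stand-in, not the paper's four- or six-element respiratory models:

```python
import numpy as np

# Illustrative model (not the paper's): impedance magnitude with a resistance R
# and a compliance C, |Z| ~ R + 1/(2*pi*f*C).
def model(f, R, C):
    return R + 1.0 / (2.0 * np.pi * f * C)

def jacobian(f, R, C):
    dR = np.ones_like(f)                       # d|Z|/dR
    dC = -1.0 / (2.0 * np.pi * f * C**2)       # d|Z|/dC
    return np.column_stack([dR, dC])

f = np.linspace(0.125, 4.0, 32)                # Hz, the abstract's lower band
truth = (2.0, 0.05)                            # assumed "true" parameters

# Linearized parameter covariance for weighted least squares:
# cov ~ sigma^2 * (J^T W J)^{-1}, evaluated at the fitted parameters.
w = np.ones_like(f)                            # uniform weights for the sketch
J = jacobian(f, *truth)
JtWJ = J.T @ (w[:, None] * J)
sigma2 = 0.1**2                                # assumed measurement variance
cov = sigma2 * np.linalg.inv(JtWJ)
se_R, se_C = np.sqrt(np.diag(cov))
print(f"SE(R)={se_R:.4f}, SE(C)={se_C:.5f}")
```

    Repeating this for different frequency ranges or criterion variables is how the paper compares experiment designs without refitting noisy data each time.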

  7. A sensitivity analysis of hazardous waste disposal site climatic and soil design parameters using HELP3

    International Nuclear Information System (INIS)

    Adelman, D.D.; Stansbury, J.

    1997-01-01

    The Resource Conservation and Recovery Act (RCRA) Subtitle C, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and subsequent amendments have formed a comprehensive framework to deal with hazardous wastes on the national level. Key to this waste management is guidance on design (e.g., cover and bottom leachate control systems) of hazardous waste landfills. The objective of this research was to investigate the sensitivity of leachate volume at hazardous waste disposal sites to climatic, soil cover, and vegetative cover (Leaf Area Index) conditions. The computer model HELP3, which has the capability to simulate double bottom liner systems as called for in hazardous waste disposal sites, was used in the analysis. HELP3 was used to model 54 combinations of climatic conditions, disposal site soil surface curve numbers, and leaf area index values to investigate how sensitive disposal site leachate volume was to these three variables. Results showed that leachate volume from the bottom double liner system was not sensitive to these parameters. However, the cover liner system leachate volume was quite sensitive to climatic conditions and less sensitive to Leaf Area Index and curve number values. Since humid locations had considerably more cover liner system leachate volume than arid locations, different design standards may be appropriate for humid conditions than for arid conditions.

  8. Reduction of low frequency vibration of truck driver and seating system through system parameter identification, sensitivity analysis and active control

    Science.gov (United States)

    Wang, Xu; Bi, Fengrong; Du, Haiping

    2018-05-01

    This paper aims to develop a 5-degree-of-freedom driver and seating system model for optimal vibration control. A new method for identification of the driver seating system parameters from experimental vibration measurements has been developed. The parameter sensitivity analysis has been conducted considering the random excitation frequency and system parameter uncertainty. The most and least sensitive system parameters for the transmissibility ratio have been identified. Optimised PID controllers have been developed to reduce the driver's body vibration.

  9. Sensitivity analysis of reactor safety parameters with automated adjoint function generation

    International Nuclear Information System (INIS)

    Kallfelz, J.M.; Horwedel, J.E.; Worley, B.A.

    1992-01-01

    A project at the Paul Scherrer Institute (PSI) involves the development of simulation models for the transient analysis of the reactors in Switzerland (STARS). This project, funded in part by the Swiss Federal Nuclear Safety Inspectorate, also involves the calculation and evaluation of certain transients for Swiss light water reactors (LWRs). For best-estimate analyses, a key element in quantifying reactor safety margins is uncertainty evaluation to determine the uncertainty in calculated integral values (responses) caused by modeling, calculational methodology, and input data (parameters). The work reported in this paper is a joint PSI/Oak Ridge National Laboratory (ORNL) application to a core transient analysis code of an ORNL software system for automated sensitivity analysis. The Gradient-Enhanced Software System (GRESS) is a software package that can in principle enhance any code so that it can calculate the sensitivity (derivative) with respect to input parameters of any integral value (response) calculated in the original code. The studies reported are the first application of the GRESS capability to core neutronics and safety codes.

  10. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential for the assessment of internal doses and a radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr); one was reported by the International Commission on Radiological Protection (ICRP), and one was developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the model parameter with the most influence on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper large intestine contributes substantially at later times, i.e., after day 300. The alimentary tract absorption factor (fA) influences mostly the integral of radioactivity of Zr in most source organs after ingestion.
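    The transfer-rate sensitivities described above can be sketched with a toy two-compartment model and normalized finite-difference sensitivities; the rate constants below are hypothetical, not the ICRP or HMGU Zr values:

```python
import numpy as np

# Toy two-compartment linear biokinetic model (blood <-> bone), Euler-integrated.
# Rate constants k12, k21 (1/day) are assumed for illustration only.
def bone_content(k12, k21, days=1000, dt=0.1):
    blood, bone = 1.0, 0.0          # unit intake into blood at t = 0
    for _ in range(int(days / dt)):
        flow = k12 * blood - k21 * bone
        blood -= dt * flow
        bone += dt * flow
    return bone

k12, k21 = 0.1, 0.01
base = bone_content(k12, k21)

# Normalized sensitivity S = (p / y) * dy/dp by central differences
def sens(p_index):
    p = [k12, k21]
    h = 0.01 * p[p_index]
    up, dn = p.copy(), p.copy()
    up[p_index] += h
    dn[p_index] -= h
    return p[p_index] / base * (bone_content(*up) - bone_content(*dn)) / (2 * h)

print(f"S(k12)={sens(0):+.3f}  S(k21)={sens(1):+.3f}")
```

    Repeating this at several output times reproduces the kind of time-dependent ranking the paper reports (different transfer rates dominate at early versus late times).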

  11. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  12. Sensitivity Analysis of Methane Hydrate Reservoirs: Effects of Reservoir Parameters on Gas Productivity and Economics

    Science.gov (United States)

    Anderson, B. J.; Gaddipati, M.; Nyayapathi, L.

    2008-12-01

    This paper presents a parametric study on production rates of natural gas from gas hydrates by the method of depressurization, using CMG STARS. Seven factors/parameters were considered as perturbations from a base-case hydrate reservoir description based on Problem 7 of the International Methane Hydrate Reservoir Simulator Code Comparison Study led by the Department of Energy and the USGS. This reservoir is modeled after the inferred properties of the hydrate deposit at the Prudhoe Bay L-106 site. The sensitivity variables included hydrate saturation, pressure (depth), temperature, bottom-hole pressure of the production well, free water saturation, intrinsic rock permeability, and porosity. A two-level (L=2) Plackett-Burman experimental design was used to study the relative effects of these factors. The measured variable was the discounted cumulative gas production; the discount rate chosen was 15%, giving the gas contribution to the net present value of a reservoir. Eight different designs were developed for conducting the sensitivity analysis, and the effects of the parameters on the real and discounted production rates are discussed. The breakeven price in various cases and the dependence of the breakeven price on the production parameters are given in the paper. As expected, initial reservoir temperature has the strongest positive effect on the productivity of a hydrate deposit and the bottom-hole pressure in the production well has the strongest negative dependence. Also resulting in positive correlations are the intrinsic permeability and the initial free water saturation of the formation. Negative effects were found for initial hydrate saturation (at saturations greater than 50% of the pore space) and the reservoir porosity. These negative effects are related to the available sensible heat of the reservoir, with decreasing productivity due to decreasing available sensible heat. Finally, we conclude that for the base case reservoir, the break-even price (BEP
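    An 8-run, two-level Plackett-Burman design for 7 factors, like the one described above, is easy to construct and analyze; the response below is a made-up linear stand-in for discounted gas production, chosen so that temperature (column 0) is strongly positive and bottom-hole pressure (column 1) negative, echoing the abstract's findings:

```python
import numpy as np

# 8-run Plackett-Burman design for 7 two-level factors: cyclic shifts of a
# generator row, plus a final all-minus row (standard construction).
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.array([np.roll(gen, i) for i in range(7)] + [[-1] * 7])

# Hypothetical response standing in for discounted cumulative production.
rng = np.random.default_rng(2)
y = 10.0 + 3.0 * design[:, 0] - 2.0 * design[:, 1] + rng.normal(0, 0.1, 8)

# Main effect of each factor: mean(y | +1) - mean(y | -1)
effects = np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                    for j in range(7)])
print(np.round(effects, 2))   # columns 0 and 1 dominate; the rest are noise
```

    Because the design is orthogonal, each main effect is estimated independently from only 8 simulator runs, which is the point of using Plackett-Burman screening before expensive reservoir simulations.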

  13. Sensitivity Analysis of Unsaturated Flow and Contaminant Transport with Correlated Parameters

    Science.gov (United States)

    Relative contributions from uncertainties in input parameters to the predictive uncertainties in unsaturated flow and contaminant transport are investigated in this study. The objectives are to: (1) examine the effects of input parameter correlations on the sensitivity of unsaturated flow and conta...

  14. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  15. The SSI TOOLBOX Source Term Model SOSIM - Screening for important radionuclides and parameter sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Avila Moreno, R.; Barrdahl, R.; Haegg, C.

    1995-05-01

    The main objective of the present study was to carry out a screening and sensitivity analysis of the SSI TOOLBOX source term model SOSIM. This model is part of the SSI TOOLBOX for radiological impact assessment of the Swedish disposal concept for high-level waste, KBS-3. The outputs of interest for this purpose were: the total released fraction, the time of total release, the time and value of the maximum release rate, and the dose rates after direct releases to the biosphere. The source term equations were derived, and simple equations and methods were proposed for their calculation. A literature survey was performed in order to determine a characteristic variation range and a nominal value for each model parameter. In order to reduce the model uncertainties, the authors recommend a change in the initial boundary condition for the solution of the diffusion equation for highly soluble nuclides. 13 refs.

  16. Semianalytical Solution and Parameters Sensitivity Analysis of Shallow Shield Tunneling-Induced Ground Settlement

    Directory of Open Access Journals (Sweden)

    Jifeng Liu

    2017-01-01

    Full Text Available The influence of boundary soil properties on tunneling-induced ground settlement is generally not considered in current analytic solutions, and the hypothesis of equal initial vertical and horizontal stress limits the application of these solutions. Based on the homogeneous half-plane hypothesis, by defining the boundary condition according to the ground-loss pattern of a shallow tunnel, and using the Mohr-Coulomb plastic yield criterion together with the classic Lame and Kirsch elastic equations, separating the nonuniform stress field into uniform and single-direction stress fields, a semiempirical solution for the ground settlement induced by a single shallow circular tunnel is presented and its sensitivity to the ground parameters is analyzed. Methods of settlement control are offered through an influence-factor analysis of the semiempirical solution. A case study of a Beijing Metro tunnel shows that the semiempirical solution agrees well with in situ measurements.

  17. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  18. 'PSA-SPN' - A Parameter Sensitivity Analysis Method Using Stochastic Petri Nets: Application to a Production Line System

    International Nuclear Information System (INIS)

    Labadi, Karim; Saggadi, Samira; Amodeo, Lionel

    2009-01-01

    The dynamic behavior of a discrete event dynamic system can be significantly affected by uncertain changes in its decision parameters, so parameter sensitivity analysis is a useful way of studying the effects of these changes on system performance. In the past, sensitivity analysis approaches were frequently based on simulation models. In recent years, formal methods based on stochastic processes, including Markov processes, have been proposed in the literature. In this paper, we are interested in the parameter sensitivity analysis of discrete event dynamic systems using stochastic Petri net models as a tool for modelling and performance evaluation. A sensitivity analysis approach based on stochastic Petri nets, called the PSA-SPN method, is proposed with an application to a production line system.

  19. Sensitivity analysis of minor actinides transmutation to physical and technological parameters

    International Nuclear Information System (INIS)

    Kooyman, T.; Buiron, L.

    2015-01-01

    Minor actinides transmutation is one of the 3 main axes defined by the 2006 French law for the management of nuclear waste, along with long-term storage and the use of a deep geological repository. Transmutation options for critical systems can be divided into two different approaches: (a) homogeneous transmutation, in which minor actinides are mixed with the fuel. This has the drawback of 'polluting' the entire fuel cycle with minor actinides and also has an important impact on core reactivity coefficients, such as the Doppler effect or the sodium void worth for fast reactors, when the minor actinides fraction increases above 3 to 5% depending on the core; (b) heterogeneous transmutation, in which minor actinides are inserted into transmutation targets located in the center or at the periphery of the core. This has the advantage of decoupling the management of the minor actinides from the conventional fuel and of not impacting the core reactivity coefficients. In both cases, the design and analyses of potential transmutation systems have been carried out in the frame of Gen IV fast reactors using a 'perturbation' approach in which nominal power reactor parameters are modified to accommodate the loading of minor actinides. However, when designing such a transmutation strategy, parameters from all steps of the fuel cycle must be taken into account, such as spent fuel heat load, gamma or neutron sources, or fabrication feasibility. Considering a multi-recycling strategy of minor actinides, an analysis of the relevant estimators necessary to fully analyze a transmutation strategy has been performed in this work, and a sensitivity analysis of these estimators to a broad choice of reactor and fuel cycle parameters has been carried out. No threshold or percolation effects were observed. Saturation of the transmutation rate with regard to several parameters, namely the minor actinides volume fraction and the irradiation time, has been observed.
Estimators of interest that have been

  20. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  1. Sensitivity analysis of physical/operational parameters in neutron multiplicity counting

    International Nuclear Information System (INIS)

    Peerani, P.; Marin Ferrer, M.

    2007-01-01

    In this paper, we perform a sensitivity study of the influence of various physical and operational parameters on the results of neutron multiplicity counting. The purpose is to better understand the importance of each component and its contribution to the measurement uncertainty. We will then be able to determine the optimal conditions for the operational parameters and for the detector design, as well as to point out weaknesses in the knowledge of critical fundamental nuclear data.

  2. Parameters Identification and Sensitive Characteristics Analysis for Lithium-Ion Batteries of Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2017-12-01

    Full Text Available This paper mainly investigates the sensitive characteristics of lithium-ion batteries so as to provide a scientific basis for simplifying the design of state estimators that adapt to various environments. Three lithium-ion batteries are chosen as the experimental samples. The samples were tested at various temperatures (−20 °C, −10 °C, 0 °C, 10 °C, 25 °C) and various current rates (0.5C, 1C, 1.5C) using a battery test bench. A physical equivalent circuit model is developed to capture the dynamic characteristics of the batteries. The experimental results show that all battery parameters are time-varying and have different sensitivities to temperature, current rate and state of charge (SOC). The sensitivity of the battery to temperature, current rate and SOC increases the difficulty of battery modeling because of the change of parameters. Further simulation experiments show that the model output has a higher sensitivity to the change of ohmic resistance than to that of other parameters. Based on the experimental and simulation results obtained here, it is expected that the adaptive parameter state estimator design could be simplified in the near future.
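    A common form for the equivalent circuit model mentioned above is a first-order Thevenin network (ohmic resistance plus one RC polarization pair); a minimal simulation sketch with assumed parameter values, not the paper's identified ones:

```python
import numpy as np

# First-order Thevenin equivalent-circuit sketch: terminal voltage under a
# constant-current discharge. All parameter values are illustrative.
def simulate(i_load, r0=0.01, r1=0.015, c1=2000.0, ocv=3.7, t_end=100.0, dt=0.1):
    u1 = 0.0                           # polarization voltage across the RC pair
    v = []
    for _ in range(int(t_end / dt)):
        u1 += dt * (i_load / c1 - u1 / (r1 * c1))
        v.append(ocv - i_load * r0 - u1)   # terminal voltage
    return np.array(v)

v = simulate(i_load=2.0)               # 2 A discharge pulse
print(f"sag at start={3.7 - v[0]:.4f} V, sag at 100 s={3.7 - v[-1]:.4f} V")
```

    Perturbing r0 versus r1 or c1 in this model and comparing the change in v is exactly the kind of output-sensitivity comparison the abstract reports, where the ohmic resistance dominates.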

  3. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    Science.gov (United States)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, considering that the combined effects of global warming and local land use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to the road, the roof, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model

  4. Parametric uncertainty and global sensitivity analysis in a model of the carotid bifurcation: Identification and ranking of most sensitive model parameters.

    Science.gov (United States)

    Gul, R; Bernhard, S

    2015-11-01

    In computational cardiovascular models, parameters are one of the major sources of uncertainty, which makes the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods (Sobol, FAST, and a sparse-grid stochastic collocation technique based on the Smolyak algorithm) were applied to a lumped-parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities for pressure and flow at all locations of the carotid bifurcation. Results on network-location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e., early systole, peak systole and end systole.
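    A pick-freeze (Saltelli-type) estimator is one standard way to compute the first-order Sobol indices used above; the 3-parameter model below is a hypothetical additive stand-in for the lumped carotid model, chosen so the index values are easy to sanity-check:

```python
import numpy as np

# Hypothetical output as a function of three parameters (stand-ins for a
# resistance R, compliance C and inertia L). Additive for easy verification.
def model(R, C, L):
    return R + 0.5 * C**2 + 0.1 * L

rng = np.random.default_rng(3)
n = 100_000
A = rng.uniform(0.5, 1.5, (n, 3))      # two independent sample matrices
B = rng.uniform(0.5, 1.5, (n, 3))

yA, yB = model(*A.T), model(*B.T)
var = yA.var()
S1 = {}
for j, name in enumerate(["R", "C", "L"]):
    ABj = A.copy()
    ABj[:, j] = B[:, j]                # A with column j taken from B
    # Saltelli (2010) first-order estimator: E[f(B) (f(AB_j) - f(A))] / Var(Y)
    S1[name] = float(np.mean(yB * (model(*ABj.T) - yA)) / var)
print({k: round(v, 2) for k, v in S1.items()})
```

    For this additive model, R and C should each explain roughly half of the output variance and L almost none, which is the kind of ranking and factor fixing the abstract describes.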

  5. Sensitivity analysis of the effect of various key parameters on fission product concentration (mass number 120 to 126)

    International Nuclear Information System (INIS)

    Sola, A.

    1978-01-01

    An analytical sensitivity analysis has been made of the effect of various parameters on the evaluation of fission product concentration. Such parameters include cross sections, decay constants, branching ratios, fission yields, flux and time. The formulae are applied to isotopes of the tin, antimony and tellurium series. The agreement between the analytically obtained data and that derived from a computer-evaluated model is good, suggesting that the analytical representation includes all the important parameters for the evaluation of the fission product concentrations.

  6. Sensitivity Analysis and Identification of Parameters to the Van Genuchten Equation

    Directory of Open Access Journals (Sweden)

    Guangzhou Chen

    2016-01-01

    Full Text Available The Van Genuchten equation is the most commonly used soil water characteristic curve equation, and identifying (estimating) its parameters accurately plays an important role in the study of soil water movement. Taking desorption and absorption experimental data for a silt loam from a northwest region of China as an example, the Monte Carlo method was first applied to analyze the sensitivity of the parameters and the uncertainty of the model, so as to obtain the key parameters and the posterior parameter distributions that guide the subsequent parameter identification. Then, an optimization model for the parameters was set up, and a new type of intelligent algorithm, the difference search algorithm, was employed to identify them. In order to overcome the drawback that the basic difference search algorithm needs many iterations, and to further enhance optimization performance, a hybrid algorithm coupling the difference search algorithm with the simplex method was employed for parameter identification. Comparison with other optimization algorithms shows that the difference search algorithm has the following characteristics: good optimization performance, a simple principle, easy implementation, short program code, and few control parameters required to run the algorithm. In addition, the proposed hybrid algorithm outperforms the basic difference search algorithm in comprehensive performance.
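    For reference, the Van Genuchten retention function with the common Mualem restriction m = 1 − 1/n can be written directly; the silt-loam parameter values below are textbook-style illustrations, not the paper's identified values:

```python
import numpy as np

# Van Genuchten water retention curve, theta(h), with m = 1 - 1/n.
def theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (cm, positive)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(0, 4, 50)          # 1 to 10^4 cm suction
wc = theta(h, theta_r=0.067, theta_s=0.45, alpha=0.02, n=1.41)
print(f"theta(1 cm)={wc[0]:.3f}, theta(10^4 cm)={wc[-1]:.3f}")
```

    A Monte Carlo sensitivity study like the paper's amounts to sampling (theta_r, theta_s, alpha, n) from prior ranges, evaluating this curve against the measured retention data, and keeping the parameter sets with the smallest misfit.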

  7. Global sensitivity analysis of the joint kinematics during gait to the parameters of a lower limb multi-body model.

    Science.gov (United States)

    El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël

    2015-07-01

    Sensitivity analysis is a typical part of biomechanical model evaluation. For lower limb multi-body models, sensitivity analyses have mainly been performed on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis achieved on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axes, 2.5 mm on the origins and insertions of ligaments) using statistical distributions and propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This allows identification of the parameters most influencing the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and the joint angles and displacements. To quantify this influence, a Fourier-based algorithm of global sensitivity analysis coupled with Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints, that is to say the distances between measured and model-determined skin marker positions, and of the kinematic constraints strongly influence the joint kinematics obtained from the lower limb multi-body model, for example, the positions of the skin markers embedded in the shank and pelvis, the parameters of the patellofemoral hinge axis, and the parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of lower limb multi-body models may be considered essential.
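    The Latin hypercube sampling step can be sketched independently of the multi-body model itself; the parameter ranges below merely echo the uncertainty magnitudes quoted in the abstract (marker position, hinge-axis orientation, ligament insertion):

```python
import numpy as np

# Minimal Latin hypercube sampler: each parameter's [0,1) range is split into
# n_samples strata, with exactly one draw per stratum, in random stratum order.
def latin_hypercube(n_samples, n_params, rng):
    u = np.empty((n_samples, n_params))
    for j in range(n_params):
        perm = rng.permutation(n_samples)            # random stratum order
        u[:, j] = (perm + rng.random(n_samples)) / n_samples
    return u

rng = np.random.default_rng(4)
u = latin_hypercube(100, 3, rng)
# Scale to physical perturbation ranges: marker position +/- 2.5 cm,
# hinge-axis tilt +/- 5 deg, ligament insertion +/- 2.5 mm (units nominal).
samples = np.array([-2.5, -5.0, -2.5]) + u * np.array([5.0, 10.0, 5.0])
print(samples.shape)
```

    Compared with plain random sampling, the stratification guarantees that each parameter's full range is covered even with few model evaluations, which matters when every sample requires a full multi-body optimisation of a gait trial.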

  8. Sensitivity Analysis of Depletion Parameters for Heat Load Evaluation of PWR Spent Fuel Storage Pool

    International Nuclear Information System (INIS)

    Kim, In Young; Lee, Un Chul

    2011-01-01

    As the necessity of safety re-evaluation for spent fuel storage facilities has been emphasized after the Fukushima accident, improving the accuracy of heat load evaluation has become more important for obtaining reliable thermal-hydraulic evaluation results. As groundwork, parametric and sensitivity analyses of various storage conditions for the Kori Unit 4 spent fuel storage pool and of spent fuel depletion parameters, such as the axial burnup effect, operation history, and specific power, are conducted using the ORIGEN2 code. According to the heat load evaluation and parametric sensitivity analyses, the decay heat of the last discharged fuel comprises up to 80.42% of the total heat load of the storage facility, and there is a negative correlation between the effect of the depletion parameters and the cooling period. Specific power is found to be the most influential parameter, and operation history the second most influential. The decay heat of just-discharged fuel varies from 0.34 to 1.66 times the average value, and the decay heat of fuel cooled for 1 year varies from 0.55 to 1.37 times the average value, in accordance with the change of specific power. In other words, the depletion parameters can cause large variations in the decay heat calculation of short-term cooled fuel. Therefore, the application of real operation data instead of user-selected values is needed to improve evaluation accuracy. It is expected that these results could be used to improve the accuracy of heat load assessments and to evaluate the uncertainty of calculated heat loads.

  9. Sensitivity Analysis of the USLE Soil Erodibility Factor to Its Determining Parameters

    Science.gov (United States)

    Mitova, Milena; Rousseva, Svetla

    2014-05-01

    Soil erosion is recognized as one of the most serious soil threats worldwide. Soil erosion prediction is the first step in soil conservation planning. The Universal Soil Loss Equation (USLE) is one of the most widely used models for soil erosion prediction. One of the five USLE predictors is the soil erodibility factor (K-factor), which evaluates the impact of soil characteristics on soil erosion rates. The soil erodibility nomograph defines the K-factor depending on soil characteristics such as particle size distribution (fractions finer than 0.002 mm and from 0.1 to 0.002 mm), organic matter content, soil structure and soil profile water permeability. Identifying the soil characteristics that most influence the K-factor would give an opportunity to control soil loss through erosion by controlling the parameters that reduce the K-factor value. The aim of the report is to present the results of an analysis of the relative weight of these soil characteristics in the K-factor values. The relative impact of the soil characteristics on the K-factor was studied through a series of statistical analyses of data from the geographic database for soil erosion risk assessments in Bulgaria. The degree of correlation between the K-factor values and its determining parameters was studied by correlation analysis. The sensitivity of the K-factor was determined by studying the variation of each parameter within the range between its minimum and maximum possible values, with the other factors held at their average values. A normalizing transformation of the data sets was applied because of the different dimensions and orders of variation of the values of the various parameters. The results show that the content of particles finer than 0.002 mm has the most significant relative impact on soil erodibility, followed by the content of particles with size from 0.1 mm to 0.002 mm, the class of water permeability of the soil profile, the content of organic matter and the aggregation class.
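
The one-at-a-time scheme described here (each parameter swept between its minimum and maximum while the others are held at their averages) can be sketched against the classical Wischmeier-Smith nomograph approximation of the K-factor. Both the formula parameterisation and the min/max ranges below are illustrative assumptions, not values from the Bulgarian database.

```python
def k_factor(silt_vfs, clay, om, s, p):
    """Wischmeier-Smith nomograph approximation (US customary units):
    silt_vfs = % silt + very fine sand (0.1-0.002 mm), clay = % finer than
    0.002 mm, om = % organic matter, s = structure class, p = permeability class."""
    m = silt_vfs * (100.0 - clay)
    return (2.1e-4 * m ** 1.14 * (12.0 - om) + 3.25 * (s - 2) + 2.5 * (p - 3)) / 100.0

# Illustrative min/max ranges for each determining parameter
ranges = {
    "silt_vfs": (20.0, 70.0),
    "clay": (5.0, 40.0),
    "om": (0.5, 4.0),
    "s": (1.0, 4.0),
    "p": (1.0, 6.0),
}
means = {k: (lo + hi) / 2.0 for k, (lo, hi) in ranges.items()}

def oat_swing(param):
    """K-factor swing when one parameter spans its range, others held at mean."""
    lo, hi = ranges[param]
    return abs(k_factor(**dict(means, **{param: hi})) - k_factor(**dict(means, **{param: lo})))

# Rank parameters by the K-factor swing they induce
swings = sorted(((oat_swing(name), name) for name in ranges), reverse=True)
```

With these assumed ranges the particle-size terms dominate the swing, mirroring the kind of ranking the record reports.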

  10. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  11. Sensitivity analysis of parameters important to nuclear criticality safety of Castor X/28F spent nuclear fuel cask

    Energy Technology Data Exchange (ETDEWEB)

    Leotlela, Mosebetsi J. [Witwatersrand Univ., Johannesburg (South Africa). School of Physics; Koeberg Operating Unit, Johannesburg (South Africa). Regulations and Licensing; Malgas, Isaac [Koeberg Nuclear Power Station, Duinefontein (South Africa). Nuclear Engineering Analysis; Taviv, Eugene [ASARA consultants (PTY) LTD, Johannesburg (South Africa)

    2015-11-15

    In nuclear criticality safety analysis it is essential to ascertain how the various components of a nuclear system will perform under the conditions to which they may be subjected, particularly if the components of the system are likely to be affected by environmental factors such as temperature, radiation or material composition. It is therefore prudent that a sensitivity analysis be performed to determine and quantify the response of the output to variations in any of the input parameters. In a fissile system, the output parameter of importance is the k{sub eff}. Therefore, in attempting to prevent reactivity-induced accidents, it is important for the criticality safety analyst to have a quantified degree of response of the neutron multiplication factor to perturbations in a given input parameter. This article presents the results of the perturbation of the parameters that are important to nuclear criticality safety analysis, together with the respective correlation equations for deriving the sensitivity coefficients.

  12. A Sensitivity Study for an Evaluation of Input Parameters Effect on a Preliminary Probabilistic Tsunami Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, Hyun-Me; Kim, Min Kyu; Choi, In-Kil [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Sheen, Dong-Hoon [Chonnam National University, Gwangju (Korea, Republic of)

    2014-10-15

    Tsunami hazard analysis has been based on seismic hazard analysis, which has been performed using both deterministic and probabilistic methods. To consider the uncertainties in hazard analysis, the probabilistic method has been regarded as the more attractive approach. The various parameters and their weights are treated using the logic tree approach in the probabilistic method. Because many parameters enter the hazard analysis, their uncertainties should be quantified through sensitivity analysis. To apply probabilistic tsunami hazard analysis, a preliminary study for the Ulchin NPP site had been performed, using the information on fault sources published by the Atomic Energy Society of Japan (AESJ). The tsunami propagation was simulated using TSUNAMI 1.0, developed by the Japan Nuclear Energy Safety Organization (JNES), and the wave parameters were estimated from the results of the tsunami simulation. In this study, a sensitivity analysis for the fault sources selected in the previous studies has been performed. To analyze the effect of the parameters, a sensitivity analysis for the E3 fault source published by AESJ was performed, showing the effects of the recurrence interval, the potential maximum magnitude, and the beta value. The level of annual exceedance probability is affected by the recurrence interval, while the wave heights are influenced by the potential maximum magnitude and the beta value. In the future, a sensitivity analysis for all fault sources in the western part of Japan published by AESJ will be performed.
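
The logic-tree treatment of parameters and weights mentioned in this record can be sketched as a weighted sum over branch combinations. The branch alternatives, weights, and the stand-in hazard model below are purely hypothetical; a real analysis would replace `annual_rate` with the tsunami simulation results.

```python
from itertools import product

# Hypothetical branch alternatives and weights for two uncertain inputs
branches = {
    "recurrence_yr": [(500.0, 0.3), (1000.0, 0.5), (2000.0, 0.2)],
    "max_magnitude": [(7.5, 0.4), (8.0, 0.6)],
}

def annual_rate(recurrence_yr, max_magnitude):
    """Stand-in hazard model: annual exceedance rate for one branch."""
    return (1.0 / recurrence_yr) * (max_magnitude / 8.0)

# Mean hazard = weight-sum over all branch combinations of the logic tree
total_w = 0.0
mean_rate = 0.0
for (rec, w1), (mag, w2) in product(*branches.values()):
    w = w1 * w2
    total_w += w
    mean_rate += w * annual_rate(rec, mag)
```

A sensitivity analysis then re-runs the combination with one branch set perturbed at a time and compares the resulting hazard curves.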

  13. Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)

    2016-10-15

    In order to assess the uncertainty quantification capability of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of Anticipated Transient without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution work are performed using the PAPIRUS program. The sensitivity analysis is carried out on the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the FoM are picked out; the parameters selected are closely related to the development process of the ULOF event.

  14. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    Science.gov (United States)

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, which considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicates that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
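
The local sensitivity of a model cost function to each parameter, as described in this record, is commonly computed with normalized finite differences. The sketch below uses a toy quadratic cost as a stand-in for the model-vs-observation misfit (EDCM itself is not reproduced); the parameter names merely echo PPDF1 and PRDX from the abstract.

```python
def local_sensitivity(cost, params, rel_step=0.01):
    """Normalized local sensitivity S_i = (dJ/dp_i) * p_i / J, estimated by
    central finite differences at the nominal parameter set."""
    j0 = cost(params)
    sens = {}
    for name, p in params.items():
        h = rel_step * abs(p)
        up = dict(params, **{name: p + h})
        dn = dict(params, **{name: p - h})
        djdp = (cost(up) - cost(dn)) / (2.0 * h)
        sens[name] = djdp * p / j0
    return sens

# Toy quadratic cost standing in for the sum-of-squares misfit
def toy_cost(p):
    return (p["ppdf1"] - 1.0) ** 2 + 4.0 * (p["prdx"] - 2.0) ** 2 + 1.0

nominal = {"ppdf1": 1.5, "prdx": 2.5}
S = local_sensitivity(toy_cost, nominal)
```

The normalization by p_i / J makes the indices dimensionless, so parameters with different units can be ranked against each other.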

  15. Modelling and simulation of a transketolase mediated reaction: Sensitivity analysis of kinetic parameters

    DEFF Research Database (Denmark)

    Sayar, N.A.; Chen, B.H.; Lye, G.J.

    2009-01-01

    In this paper we have used a proposed mathematical model, describing the carbon-carbon bond formation reaction between beta-hydroxypyruvate and glycolaldehyde to synthesise L-erythrulose, catalysed by the enzyme transketolase, for the analysis of the sensitivity of the process to its kinetic

  16. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    Science.gov (United States)

    Ely, D. Matthew

    2006-01-01

    routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  17. Sensitivity analysis of hydrogeological parameters affecting groundwater storage change caused by sea level rise

    Science.gov (United States)

    Shin, J.; Kim, K.-H.; Lee, K.-K.

    2012-04-01

    Sea level rise, which is one of the representative phenomena of climate change caused by global warming, can affect the groundwater system. The rising trend of the sea level caused by global warming is reported to be about 3 mm/year for the most recent 10-year average (IPCC, 2007). The rate of sea level rise around the Korean peninsula is reported to be 2.30±2.22 mm/yr during the 1960-1999 period (Cho, 2002) and 2.16±1.77 mm/yr during the 1968-2007 period (Kim et al., 2009). Both of these rates are faster than the 1.8±0.5 mm/yr global average for the similar 1961-2003 period (IPCC, 2007). In this study, we analyzed changes in the groundwater environment caused by sea level rise using an analytical methodology, and sought the parameters that most affect the change in groundwater amount in order to estimate the change in the fresh water amount in coastal groundwater. A hypothetical island model of cylindrical shape is considered. Depending on the natural and hydrogeological conditions, the groundwater storage change can go in either direction as the sea level rises. Analysis of the computation results shows that topographic slope and hydraulic conductivity are the most sensitive factors, while the contributions of the groundwater recharge rate and the thickness of the aquifer below sea level are relatively less effective. In islands with steep seashore slopes larger than about 1-2 degrees, the storage of fresh water in the coastal area increases as the sea level rises; when the sea level drops, the storage decreases. This is because the groundwater level also rises with the rising sea level on steep seashores. For relatively flat seashores, where the slope is smaller than around 1-2 degrees, the storage of coastal fresh water decreases when the sea level rises because the area flooded by the rising sea water increases. The volume of aquifer fresh water in this circumstance is greatly reduced in proportion to the flooded area with the sea

  18. CCP Sensitivity Analysis by Variation of Thermal-Hydraulic Parameters of Wolsong-3, 4

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Chang [KHNP, Daejeon (Korea, Republic of)

    2016-10-15

    PHWRs show a tendency for the ROPT (Regional Overpower Protection Trip) setpoint to decrease as the CCP (Critical Channel Power) is reduced by aging effects. For this reason, Wolsong units 3 and 4 have been operated at less than 100% power based on the results of the ROPT setpoint evaluation. Typically, the CCP for the ROPT evaluation is derived at the 100% PHTS (Primary Heat Transport System) boundary conditions: inlet header temperature, header-to-header differential pressure and outlet header pressure. Therefore, the boundary conditions at 100% power were estimated in order to calculate the thermal-hydraulic model at the 100% power condition. In practice, thermal-hydraulic boundary condition data for Wolsong-3 and 4 cannot be taken at the 100% power condition in the aged reactor state. Therefore, a single-phase thermal-hydraulic model was created from 80% power data, and its validity was confirmed at 93.8% (W3) and 94.2% (W4, in the two-phase region); the thermal-hydraulic boundary conditions at 100% power were then calculated using this model. On this basis, the sensitivities of the CCP calculation to variations of the thermal-hydraulic parameters were evaluated for Wolsong units 3 and 4. To confirm the uncertainties due to variations of the PHTS model, sensitivity calculations were performed by varying the pressure tube roughness, orifice degradation factor and SG fouling factor, among others. In conclusion, the sensitivity calculation results were very similar and the linearity was constant.

  19. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of complex whole-body human models. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects, with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
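
Because the dynamics are linear in the inertial parameters (tau = Y * phi, with Y the regressor matrix), a simple sensitivity index can score each parameter by the relative norm of its weighted regressor column over all time samples. The regressor values and parameters below are hypothetical stand-ins for illustration, not the paper's 150-parameter model.

```python
def column_sensitivity(Y_rows, phi):
    """For a model linear in its parameters, tau = Y @ phi, score each
    parameter i by ||Y[:, i] * phi_i|| / ||tau||, a relative-contribution index."""
    n_par = len(phi)
    tau = [sum(row[i] * phi[i] for i in range(n_par)) for row in Y_rows]
    tau_norm = sum(t * t for t in tau) ** 0.5
    indices = []
    for i in range(n_par):
        col = [row[i] * phi[i] for row in Y_rows]
        indices.append((sum(c * c for c in col) ** 0.5) / tau_norm)
    return indices

# Hypothetical regressor stacked over three time samples, two parameters
Y = [[1.0, 0.01],
     [2.0, 0.02],
     [1.5, 0.05]]
phi = [10.0, 5.0]   # e.g. a segment mass and a small inertia term
scores = column_sensitivity(Y, phi)
```

Parameters whose score stays near zero across a motion (here the second one) are the candidates the paper flags as "not influential".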

  20. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    International Nuclear Information System (INIS)

    Saltelli, A.

    1987-01-01

    Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability on the basis of two test cases. The conclusion is that no single estimator can be considered the best from all points of view, and the use of more than one estimator in sensitivity analysis is recommended.
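
Among the non-parametric estimators typically compared in such studies is the Spearman rank correlation coefficient: it replaces sampled values by their ranks, which makes it robust to the monotone but non-linear input-output relations common in these transport models. A self-contained sketch on illustrative data (not the waste-disposal models):

```python
def rank(xs):
    """Average-rank transform (ties share the mean rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0    # 1-based average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Non-parametric sensitivity estimator: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

# Monotone but non-linear response: Spearman captures it fully, Pearson does not
x = [0.1, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0]
y = [v ** 3 for v in x]
```

On this sample, `spearman(x, y)` is 1.0 while `pearson(x, y)` falls noticeably below 1, which is exactly why rank-based estimators are preferred for non-linear model responses.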

  1. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    Science.gov (United States)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying techniques similar to those used before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  2. Sensitivity analysis with respect to observations in variational data assimilation for parameter estimation

    Directory of Open Access Journals (Sweden)

    V. Shutyaev

    2018-06-01

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find unknown parameters of the model. The observation data, and hence the optimal solution, may contain uncertainties. A response function is considered as a functional of the optimal solution after assimilation. Based on the second-order adjoint techniques, the sensitivity of the response function to the observation data is studied. The gradient of the response function is related to the solution of a nonstandard problem involving the coupled system of direct and adjoint equations. The nonstandard problem is studied, based on the Hessian of the original cost function. An algorithm to compute the gradient of the response function with respect to observations is presented. A numerical example is given for the variational data assimilation problem related to sea surface temperature for the Baltic Sea thermodynamics model.

  3. Heat and Mass Transfer of Vacuum Cooling for Porous Foods-Parameter Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Zhijun Zhang

    2014-01-01

    Based on the theory of heat and mass transfer, a coupled model of the vacuum cooling process for porous foods is constructed. The sensitivity of the process to food density, thermal conductivity, specific heat, latent heat of evaporation, pore diameter, mass transfer coefficient, gas viscosity, and porosity was examined. The simulation results show that food density affects the course of the vacuum cooling process but not the final temperature. A change in thermal conductivity slightly affects the surface temperature of the food and does not affect the core temperature. Both the core and surface temperatures are affected by changes in the specific heat and in the latent heat of evaporation. The core temperature is affected by the pore diameter, while the surface temperature is not affected appreciably. Neither temperature is affected by a change in gas viscosity. The sensitivity to the mass transfer coefficient is obvious: both the core and surface temperatures are affected by its change. In all the simulations, the final core and surface temperatures are unaffected. The vacuum cooling process of a porous medium is controlled by the outside process.

  4. Sensitivity Analysis of Core Neutronic Parameters in Electron Accelerator-driven Subcritical Advanced Liquid Metal Reactor

    Directory of Open Access Journals (Sweden)

    Marziye Ebrahimkhani

    2016-02-01

    Calculation of the core neutronic parameters is one of the key components in all nuclear reactors. In this research, the energy spectrum and spatial distribution of the neutron flux in a uranium target have been calculated. In addition, the sensitivity of the core neutronic parameters in accelerator-driven subcritical advanced liquid metal reactors to the electron beam energy (Ee) and the source multiplication coefficient (ks) has been investigated. A Monte Carlo code (MCNPX 2.6) has been used to calculate neutronic parameters such as the effective multiplication coefficient (keff), net neutron multiplication (M), neutron yield (Yn/e), energy constant gain (G0), energy gain (G), importance of the neutron source (φ∗), the axial and radial distributions of neutron flux, and the power peaking factor (Pmax/Pave) in the axial and radial directions of the reactor core for four fuel loading patterns. According to the results, the safety margin and accelerator current (Ie) decrease in the case with the highest ks, but G and φ∗ increase by 88.9% and 21.6%, respectively. In addition, for the LP1 loading pattern, with Ee increasing from 100 MeV up to 1 GeV, Yn/e and G improve by 91.09% and 10.21%, and Ie and Pacc decrease by 91.05% and 10.57%, respectively. The results indicate that placement of the Np–Pu assemblies on the periphery allows for a consistent keff because the Np–Pu assemblies experience less burn-up.

  5. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.

  6. Sensitivity analysis of coupled processes and parameters on the performance of enhanced geothermal systems.

    Science.gov (United States)

    Pandey, S N; Vishal, Vikram

    2017-12-06

    3-D modeling of coupled thermo-hydro-mechanical (THM) processes in enhanced geothermal systems was carried out using a control volume finite element code. For the first time, a comparative analysis of the effects of coupled processes, operational parameters and reservoir parameters on heat extraction was conducted. We found that a significant temperature drop and fluid overpressure occurred inside the reservoir/fracture, affecting the transport behavior of the fracture. The spatio-temporal variations of the fracture aperture greatly impacted the thermal drawdown and consequently the net energy output. The results showed that the maximum aperture evolution occurred near the injection zone rather than near the production zone. Opening of the fracture reduced the injection pressure required to circulate a fixed mass of water. The thermal breakthrough and heat extraction strongly depend on the injection mass flow rate, well distance, reservoir permeability and geothermal gradient. High permeability caused higher water loss, leading to reduced heat extraction. From the results of TH versus THM process simulations, we conclude that appropriate coupling is vital and can impact the estimates of net heat extraction. This study can help in identifying the critical operational parameters and in optimizing the process for enhanced energy extraction from a geothermal system.

  7. Uncertainty and sensitivity analysis of parameters affecting water hammer pressure wave behaviour

    International Nuclear Information System (INIS)

    Kaliatka, A.; Uspuras, E.; Vaisnoras, M.

    2006-01-01

    Pressure surges occurring in pipeline systems may be caused by fast control interventions, start-up and shut-down processes, and operation failures. They lead to water hammer upstream of the closing valve and cavitational hammer downstream of the valve, which may cause considerable damage to the pipeline and the support structures. The appearance of water hammer in thermal-hydraulic systems has been widely studied in many organizations, employing different state-of-the-art thermal-hydraulic codes. For the analysis, the water hammer test performed at the Fraunhofer Institute for Environmental, Safety and Energy Technology (UMSICHT) at Oberhausen was considered. This paper presents the comparison of UMSICHT test facility calculations, employing the best estimate system code RELAP5/Mod3.3, with the water hammer values measured after fast closure of a valve. The analysis revealed that the calculated first pressure peak, which has the highest value, matches the measured value very well. The performed analysis (as with any other analysis) always contains uncertainty in the results of each individual calculation, owing to the initial conditions of the installations, errors of the measuring systems, errors caused by the nodalization of objects in modelling, code correlations, etc. Accordingly, the results of an uncertainty and sensitivity analysis of the initial conditions and selected code models are presented in the paper. (orig.)

  8. Geochemical sensitivity analysis: Identification of important geochemical parameters for performance assessment studies

    International Nuclear Information System (INIS)

    Siegel, M.; Guzowski, R.; Rechard, R.; Erickson, K.

    1986-01-01

    The EPA Standard for geologic disposal of high level waste requires demonstration that the cumulative discharge of individual radioisotopes over a 10,000 year period at points 5 kilometers from the engineered barrier system will not exceed the limits prescribed in 40 CFR Part 191. The roles of the waste package, engineered facility, hydrogeology and geochemical processes in limiting radionuclide releases all must be considered in calculations designed to assess compliance of candidate repositories with the EPA Standard. In this talk, they will discuss the geochemical requirements of calculations used in these compliance assessments. In addition, they will describe the complementary roles of (1) simple models designed to bound the radionuclide discharge over the widest reasonable range of geochemical conditions and scenarios and (2) detailed geochemical models which can provide insights into the actual behavior of the radionuclides in the ground water. Finally, they will discuss development of sensitivity/uncertainty techniques designed to identify important site-specific geochemical parameters and processes using data from a basalt formation

  9. Sensitivity analysis on the effect of key parameters on the performance of parabolic trough solar collectors

    Science.gov (United States)

    Muhlen, Luis S. W.; Najafi, Behzad; Rinaldi, Fabio; Marchesi, Renzo

    2014-04-01

    Solar troughs are amongst the most commonly used technologies for collecting solar thermal energy and any attempt to increase the performance of these systems is welcomed. In the present study a parabolic solar trough is simulated using a one-dimensional finite element model in which the energy balances for the fluid, the absorber and the envelope in each element are performed. The developed model is then validated using the available experimental data. A sensitivity analysis is performed in the next step in order to study the effect of changing the type of the working fluid and the corresponding Reynolds number on the overall performance of the system. The potential improvement due to the addition of a shield on the upper half of the annulus and enhancing the convection coefficient of the heat transfer fluid is also studied.

  10. Sensitivity analysis on the effect of key parameters on the performance of parabolic trough solar collectors

    International Nuclear Information System (INIS)

    Muhlen, Luis S W; Najafi, Behzad; Rinaldi, Fabio; Marchesi, Renzo

    2014-01-01

    Solar troughs are amongst the most commonly used technologies for collecting solar thermal energy, and any attempt to increase the performance of these systems is welcome. In the present study a parabolic solar trough is simulated using a one-dimensional finite element model in which the energy balances for the fluid, the absorber and the envelope are performed in each element. The developed model is then validated using the available experimental data. A sensitivity analysis is performed in the next step in order to study the effect of changing the type of the working fluid and the corresponding Reynolds number on the overall performance of the system. The potential improvement due to the addition of a shield on the upper half of the annulus and to enhancing the convection coefficient of the heat transfer fluid is also studied.

  11. Uncertainty Quantification and Global Sensitivity Analysis of Subsurface Flow Parameters to Gravimetric Variations During Pumping Tests in Unconfined Aquifers

    Science.gov (United States)

    Maina, Fadji Zaouna; Guadagnini, Alberto

    2018-01-01

    We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite-radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes is quantified through (a) the Sobol' indices, derived from a classical decomposition of variance, and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage; and (iii) the influential role of the hydraulic conductivity of the unsaturated and saturated zones in the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic
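    The Sobol' indices mentioned in (a) can be estimated with a plain Monte Carlo pick-freeze scheme. The sketch below (pure NumPy, Saltelli-style estimator) uses a toy additive function with known indices in place of the flow/gravity model; it is an illustration of the variance decomposition, not the paper's computation:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices for a
    model with d independent U(0,1) inputs (Saltelli-type estimator)."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # freeze input i from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# toy additive stand-in: for f = a.x the exact indices are a_i^2 / sum(a_j^2)
a = np.array([1.0, 2.0, 0.5])
model = lambda X: X @ a
S = sobol_first_order(model, d=3, rng=1)
```

    For an additive model the first-order indices sum to one, which gives a convenient sanity check on the estimator.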

  12. Sensitivity analysis of the parameters of an HIV/AIDS model with condom campaign and antiretroviral therapy

    Science.gov (United States)

    Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy

    2017-12-01

    In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which a condom campaign and antiretroviral therapy are both important for disease management. We calculate the effective reproduction number using the next-generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. A sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with the asymptomatic infective, the progression rate from the asymptomatic infective to the pre-AIDS infective, the transmission rate for contact with the pre-AIDS infective, the ARV therapy rate, the proportion of the susceptibles receiving the condom campaign and the proportion of the pre-AIDS infectives receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
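    Such parameter sensitivities are commonly reported as the normalized forward sensitivity index Y_p = (dR/dp)(p/R). The sketch below estimates it by central differences on an illustrative SIR-type reproduction number, not the paper's multi-parameter model:

```python
def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index Y_p = (dR/dp)*(p/R),
    estimated with a central finite difference in parameter `name`."""
    base = dict(params)
    x = base[name]
    hi = dict(base); hi[name] = x * (1 + h)
    lo = dict(base); lo[name] = x * (1 - h)
    dRdp = (f(**hi) - f(**lo)) / (2 * x * h)
    return dRdp * x / f(**base)

# illustrative reproduction number (NOT the paper's model): a basic
# SIR-type R0 with transmission beta, recovery gamma, mortality mu
R0 = lambda beta, gamma, mu: beta / (gamma + mu)
pars = dict(beta=0.3, gamma=0.1, mu=0.02)
Y_beta = sensitivity_index(R0, pars, "beta")    # analytically +1
Y_gamma = sensitivity_index(R0, pars, "gamma")  # analytically -gamma/(gamma+mu)
```

    An index of +1 means a 10% increase in the parameter raises R by 10%; a negative index means the parameter suppresses transmission.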

  13. Sensitivity Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic Parameters on the CAREM-25 Reactor

    International Nuclear Information System (INIS)

    Serra, Oscar

    2000-01-01

    Studies were performed on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. By using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects were not very important for the total power, but were much larger for the pressure. The effects were barely significant for perturbations of the void fraction calculation and of the fuel temperature, whereas the reactivity and the power peaking factor changed markedly in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose

  14. Flow analysis with WaSiM-ETH – model parameter sensitivity at different scales

    Directory of Open Access Journals (Sweden)

    J. Cullmann

    2006-01-01

    WaSiM-ETH (Gurtz et al., 2001), a widely used water balance simulation model, is tested for its suitability to serve for flow analysis in the context of rainfall-runoff modelling and flood forecasting. In this paper, special focus is on the resolution of the process domain in space as well as in time. We try to couple model runs with different calculation time steps in order to reduce the effort arising from calculating the whole flow hydrograph at the hourly time step. We aim at modelling on the daily time step for water balance purposes, switching to the hourly time step whenever high-resolution information is necessary (flood forecasting). WaSiM-ETH is used at different grid resolutions in order to assess whether the model can be transferred across spatial resolutions. We further use two different approaches for the overland flow time calculation within the sub-basins of the test watershed to gain insights into the process dynamics portrayed by the model. Our findings indicate that the model is very sensitive to time and space resolution and cannot be transferred across scales without recalibration.

  15. Application of Sensitivity Analysis to Aerodynamic Parameters of a Bank-to-Turn Missile.

    Science.gov (United States)

    1983-12-01


  16. Global sensitivity analysis for identifying important parameters of nitrogen nitrification and denitrification under model uncertainty and scenario uncertainty

    Science.gov (United States)

    Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong

    2018-06-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to the soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change relies highly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); the empirical constant in the moisture response function (m) is the least important one. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reaction. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy of selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen

  17. Investigation, sensitivity analysis, and multi-objective optimization of effective parameters on temperature and force in robotic drilling cortical bone.

    Science.gov (United States)

    Tahmasbi, Vahid; Ghoreishi, Majid; Zolfaghari, Mojtaba

    2017-11-01

    The bone drilling process is very prominent in orthopedic surgeries and in the repair of bone fractures. It is also very common in dentistry and bone sampling operations. Due to the complexity of bone and the sensitivity of the process, bone drilling is one of the most important and sensitive processes in biomedical engineering. Orthopedic surgeries can be improved using robotic systems and mechatronic tools. The most crucial problem during drilling is an unwanted increase in process temperature (above 47 °C), which causes thermal osteonecrosis (cell death) and local burning of the bone tissue. Moreover, imposing higher forces on the bone may lead to breaking or cracking and consequently cause serious damage. In this study, a mathematical second-order linear regression model as a function of tool drilling speed, feed rate, tool diameter, and their effective interactions is introduced to predict temperature and force during the bone drilling process. This model can determine the maximum speed of surgery that remains within an acceptable temperature range. Moreover, for the first time, using designed experiments, the bone drilling process was modeled, and the drilling speed, feed rate, and tool diameter were optimized. Then, using response surface methodology and applying a multi-objective optimization, drilling force was minimized to sustain an acceptable temperature range without damaging the bone or the surrounding tissue. In addition, for the first time, Sobol statistical sensitivity analysis is used to ascertain the effect of the process input parameters on process temperature and force. The results show that tool rotational speed, feed rate, and tool diameter have the highest influence on process temperature and force. The behavior of each output parameter with variation in each input parameter is further investigated. Finally, a multi-objective optimization has been performed considering all the
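    A second-order regression model with interactions, of the kind described above, can be fitted by ordinary least squares. The sketch below uses synthetic data in coded units with hypothetical factors (spindle speed, feed rate, tool diameter); the coefficients are invented for illustration, not taken from the study:

```python
import numpy as np

def quad_design(X):
    """Full second-order design matrix: intercept, linear terms, and all
    squared/interaction terms (10 columns for 3 factors)."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
# hypothetical factors in coded units: spindle speed, feed rate, tool diameter
X = rng.uniform(-1, 1, size=(200, 3))
true = np.array([40.0, 3.0, 5.0, 2.0, 1.5, 0.0, 0.8, 0.5, 0.0, 1.0])
y = quad_design(X) @ true + rng.normal(0.0, 0.1, 200)   # noisy "temperature"

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
```

    The fitted `beta` recovers the generating coefficients up to noise; in a real study the same fit would come from a designed experiment rather than random sampling.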

  18. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    Science.gov (United States)

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of the geographical location on the Net Present Value calculated for a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When, in an economic evaluation, robustness is as relevant as the overall cost (NPV20), the hybrid technology moves up next to biotrickling filtration as the most preferred technology. Copyright © 2012 Elsevier Inc. All rights reserved.
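    An NPV20-style comparison reduces to a standard discounted-cost calculation. The sketch below compares two alternatives using hypothetical capital/operating costs and a hypothetical discount rate; none of the figures come from the study:

```python
def npv(capex, annual_opex, years=20, rate=0.05):
    """Net present value of lifetime cost (more negative = more expensive):
    upfront investment plus a constant annual operating cost discounted
    at `rate` over `years`."""
    return -capex - sum(annual_opex / (1 + rate) ** t
                        for t in range(1, years + 1))

# hypothetical costs in kEUR: biological = higher capex, low opex;
# chemical = lower capex but much higher opex (chemicals, consumables)
alternatives = {
    "biofiltration": npv(150.0, 12.0),
    "chemical scrubbing": npv(90.0, 55.0),
}
best = max(alternatives, key=alternatives.get)
```

    With these assumed figures the opex-dominated chemical option loses over a 20-year horizon, mirroring the abstract's conclusion that operating cost drives the ranking.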

  19. Sensitivity Analysis of the Influence of Structural Parameters on Dynamic Behaviour of Highly Redundant Cable-Stayed Bridges

    Directory of Open Access Journals (Sweden)

    B. Asgari

    2013-01-01

    Model tuning through sensitivity analysis is a prominent procedure for assessing the structural behavior and dynamic characteristics of cable-stayed bridges. Most previous sensitivity-based model tuning methods are automatic iterative processes; however, recent studies show that the most reasonable results are achieved by applying manual methods to update the analytical models of cable-stayed bridges. This paper presents a model updating algorithm for highly redundant cable-stayed bridges that can be used as an iterative manual procedure. The updating parameters are selected through a sensitivity analysis, which helps to better understand the structural behavior of the bridge. The finite element model of the Tatara Bridge is considered for the numerical studies. The results of the simulations indicate the efficiency and applicability of the presented manual tuning method for updating the finite element models of cable-stayed bridges. The new aspects presented in this paper regarding effective material and structural parameters and the model tuning procedure will be useful for the analysis and model updating of cable-stayed bridges.

  20. Sensitivity of tidal sand wave characteristics to environmental parameters: A combined data analysis and modelling approach

    NARCIS (Netherlands)

    van Santen, R.B.; de Swart, H.E.; van Dijk, T.A.G.P.

    2011-01-01

    An integrated field data-modelling approach is employed to investigate relationships between the wavelength of tidal sand waves and four environmental parameters: tidal current amplitude, water depth, tidal ellipticity and median grain size. From echo sounder data at 23 locations on the Dutch

  1. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  2. Stability assessment and operating parameter optimization on experimental results in very small plasma focus, using sensitivity analysis

    Science.gov (United States)

    Jafari, Hossein; Habibi, Morteza

    2018-04-01

    Given the importance of stability in small-scale plasma focus devices for producing repeatable and strong pinching, a sensitivity analysis approach has been used to optimize the design parameters of a very low energy device (84 nF, 48 nH, 8-9.5 kV, ∼2.7-3.7 J). To optimize the device's functional specification, four different coaxial electrode configurations have been studied, scanning an argon gas pressure range from 0.6 to 1.5 mbar and a charging voltage range from 8.3 to 9.3 kV. Strong and efficient pinching was observed for the tapered anode configuration over an expanded operating pressure range of 0.6 to 1.5 mbar. The analysis showed that the pinch voltage was most sensitive at an argon gas pressure of 0.88 ± 0.8 mbar and a charging voltage of 8.3-8.5 kV, which were therefore taken as the optimum operating parameters. From the viewpoint of stability, the least variation in stable operation of the device was observed for a charging voltage range of 8.3 to 8.7 kV at operating pressures from 0.6 to 1.1 mbar.

  3. Flexural modeling of the elastic lithosphere at an ocean trench: A parameter sensitivity analysis using analytical solutions

    Science.gov (United States)

    Contreras-Reyes, Eduardo; Garay, Jeremías

    2018-01-01

    The outer rise is a topographic bulge seaward of the trench at a subduction zone that is caused by bending and flexure of the oceanic lithosphere as subduction commences. The classic model describes the flexure of the oceanic lithosphere w(x) as an elastic plate at the trench axis acted upon by a hydrostatic restoring force. The governing parameters are the elastic thickness Te, the shear force V0, and the bending moment M0. V0 and M0 are unknown variables that are typically replaced by other quantities, such as the height of the fore-bulge, wb, and the half-width of the fore-bulge, (xb - xo). However, this method is difficult to implement in the presence of excessive topographic noise around the bulge of the outer rise. Here, we present an alternative method to the classic model, in which the lithospheric flexure w(x) is a function of the flexure at the trench axis w0, the initial dip angle of subduction β0, and the elastic thickness Te. In this investigation, we apply a sensitivity analysis to both methods in order to determine the impact of the differing parameters on the solution, w(x). The parametric sensitivity analysis suggests that stable solutions for the alternative approach require relatively low β0 values (rise bulge. The alternative method is a more suitable approach, assuming that accurate geometric information at the trench axis (i.e., w0 and β0) is available.
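    For a thin elastic plate on a hydrostatic foundation, the damped-sinusoid flexure profile can indeed be parameterized directly by w0, β0, and Te as the abstract suggests. The sketch below is a generic textbook-style profile; the material constants and sign conventions are illustrative assumptions, not the paper's:

```python
import numpy as np

# assumed constants: Young's modulus, Poisson ratio, gravity, density contrast
E, nu, g, drho = 70e9, 0.25, 9.81, 2300.0   # Pa, -, m/s^2, kg/m^3

def flexure(x, Te, w0, beta0):
    """Elastic-plate flexure seaward of the trench, parameterized by the
    deflection w0 and dip angle beta0 at the trench axis (x = 0)."""
    D = E * Te**3 / (12 * (1 - nu**2))       # flexural rigidity
    alpha = (4 * D / (drho * g)) ** 0.25     # flexural parameter [m]
    c1 = w0                                  # from w(0) = w0
    c2 = w0 - alpha * np.tan(beta0)          # from w'(0) = -tan(beta0)
    t = x / alpha
    return np.exp(-t) * (c1 * np.cos(t) + c2 * np.sin(t))

x = np.linspace(0.0, 400e3, 4001)
w = flexure(x, Te=30e3, w0=-3000.0, beta0=np.radians(3.0))  # down = negative
i_bulge = np.argmax(w)   # fore-bulge: maximum upward deflection
```

    With these assumed values the profile dips ~3 km at the trench and produces a small positive fore-bulge a few flexural wavelengths seaward, which is the feature the sensitivity analysis targets.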

  4. Sensitivity analysis of hydraulic and thermal parameters inducing anomalous heat flow in the Lower Yarmouk Gorge

    Science.gov (United States)

    Goretzki, Nora; Inbar, Nimrod; Kühn, Michael; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Siebert, Christian; Magri, Fabien

    2016-04-01

    The Lower Yarmouk Gorge, at the border between Israel and Jordan, is characterized by an anomalous temperature gradient of 46 °C/km. Numerical simulations of thermally-driven flow show that ascending thermal waters are the result of mixed convection, i.e. the interaction between the regional flow from the surrounding heights and buoyant flow within permeable faults [1]. Those models were calibrated against available temperature logs by running several forward problems (FP) with a classic "trial and error" method. In the present study, inverse problems (IP) are applied to find alternative parameter distributions that also lead to the observed thermal anomalies. The investigated physical parameters are hydraulic conductivity and thermal conductivity. To solve the IP, the PEST® code [2] is applied via the graphical interface FEPEST® in FEFLOW® [3]. The results show that both hydraulic and thermal conductivity are consistent with the values determined by the trial-and-error calibrations which preceded this study. However, the IP indicates that the hydraulic conductivity of the Senonian Paleocene aquitard can be 8.54×10⁻³ m/d, which is three times lower than the value originally estimated in [1]. Moreover, the IP suggests that the hydraulic conductivity in the faults can increase locally up to 0.17 m/d. These highly permeable areas can be interpreted as local damage zones at the fault/unit intersections. They can act as lateral pathways in the deep aquifers that allow deep outflow of thermal water. This presentation provides an example of the application of FP and IP to infer a wide range of parameter values that reproduce observed environmental issues. [1] Magri F, Inbar N, Siebert C, Rosenthal E, Guttman J, Möller P (2015) Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology, 520, 342-355 [2] Doherty J (2010) PEST: Model-Independent Parameter Estimation. user

  5. Combination for differential and integral data: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Marable, J.H.; de Saussure, G.; Weisbin, C.R.

    1982-01-01

    This chapter attempts to show how the various types of data presented and discussed in previous chapters can be combined and applied to the calculation of performance parameters of a reactor design model. Discusses derivation of least-squares adjustment; input data to the adjustment; the results of adjustment; and application to an LMFBR. Demonstrates that the least-squares formulae represent a logical, well-founded method for combining the results of integral and differential experiments. Includes calculational bias factors and their uncertainties. Concludes that the adjustment technique is a valuable tool, and that significant progress has been made with respect to its development and its applications. Recommends further work on the evaluation of covariance files, especially for calculational biases, and the inclusion of specific shielding factors as variables to be adjusted. The appendix features a calculation whose goal is to find the form of the projection operator which projects perpendicular to the calculational manifold

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  7. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    Science.gov (United States)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the total net rainfall amount computed by the SCS-CN method is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model, so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions such that the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
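    The CN4GA idea of using the SCS-CN net rainfall total to pin down the Green-Ampt conductivity can be sketched as follows. The wetting-front suction term, the storm values, and the ponded-infiltration simplification are illustrative assumptions, not the paper's calibration procedure:

```python
import math

def scs_cn_excess(P, CN, lam=0.2):
    """SCS-CN total net rainfall (runoff) Q [mm] for storm depth P [mm]."""
    S = 25400.0 / CN - 254.0            # potential maximum retention [mm]
    Ia = lam * S                        # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

def ga_cumulative_infiltration(Ks, t, psi_dtheta=50.0):
    """Green-Ampt cumulative infiltration F(t) [mm] under ponded conditions:
    fixed-point solve of F = Ks*t + M*ln(1 + F/M), with M = psi*dtheta."""
    F = Ks * t
    for _ in range(100):                # contraction mapping, converges fast
        F = Ks * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
    return F

def calibrate_ks(P, CN, t_storm, lo=1e-3, hi=500.0):
    """Bisect on Ks [mm/h] so GA infiltration over the storm equals the
    SCS-CN infiltrated depth P - Q (the CN4GA concept, much simplified)."""
    target = P - scs_cn_excess(P, CN)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ga_cumulative_infiltration(mid, t_storm) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Ks = calibrate_ks(P=80.0, CN=75, t_storm=6.0)   # mm, -, hours (assumed storm)
```

    The calibrated Ks then lets the GA model redistribute the SCS-CN total in time, which is exactly the role the abstract assigns to the conductivity parameter.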

  8. Modeling a production scale milk drying process: parameter estimation, uncertainty and sensitivity analysis

    DEFF Research Database (Denmark)

    Ferrari, A.; Gutierrez, S.; Sin, Gürkan

    2016-01-01

    A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using...

  9. Generation of input parameters for OSPM calculations. Sensitivity analysis of a method based on a questionnaire

    Energy Technology Data Exchange (ETDEWEB)

    Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)

    1997-05-01

    The method for generating the input data for calculations with OSPM is presented in this report. The described method, which is based on information provided by a questionnaire, will be used for model calculations of long-term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed on a few locations for which detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long-term averages, whereas the uncertainties are high when short-term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short- and long-term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimating the street width from the traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NOx and 30% for NO2). (3) The street orientation and geometry are important for the prediction of short-term concentrations, but this importance diminishes for longer-term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short-term averages, but are less important for long-term averages. The correlation between modelled and measured concentrations is good when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the agreement between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.

  10. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

    Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter...... and from the last aerobic bioreactor upstream to the SST (Garrett/hydraulic method). For model structure uncertainty, two one-dimensional secondary settling tank (1-D SST) models are assessed, including a first-order model (the widely used Takács-model), in which the feasibility of using measured...... uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based...

  11. Groundwater travel time uncertainty analysis: Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1984-12-01

    The deep basalt formations beneath the Hanford Site are being investigated for the Department of Energy (DOE) to assess their suitability as a host medium for a high level nuclear waste repository. Predicted performance of the proposed repository is an important part of the investigation. One of the performance measures being used to gauge the suitability of the host medium is pre-waste-emplacement groundwater travel times to the accessible environment. Many deterministic analyses of groundwater travel times have been completed by Rockwell and other independent organizations. Recently, Rockwell has completed a preliminary stochastic analysis of groundwater travel times. This document presents analyses that show the sensitivity of the results from the previous stochastic travel time study to: (1) scale of representation of model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross-correlation between transmissivity and effective thickness. 40 refs., 29 figs., 6 tabs

  12. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    Science.gov (United States)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters having the most influence facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes substantially through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the model is most sensitive.
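
    The one-at-a-time perturbation scheme described above can be sketched in a few lines; the model function, parameter names and the ±10% perturbation size below are hypothetical stand-ins, not CLIMEX's actual growth and stress parameters.

```python
import numpy as np

# Hypothetical model standing in for CLIMEX's Ecoclimatic Index:
# a simple nonlinear response to three fitting parameters.
def model(params):
    a, b, c = params
    return a ** 2 + 10 * b + 0.1 * c

best_fit = np.array([1.0, 2.0, 3.0])
baseline = model(best_fit)

# Perturb one parameter at a time by +/-10% of its best-fit value
# and record the largest relative change in the output.
sensitivities = {}
for i, name in enumerate(["a", "b", "c"]):
    effects = []
    for factor in (0.9, 1.1):
        perturbed = best_fit.copy()
        perturbed[i] *= factor
        effects.append(abs(model(perturbed) - baseline) / abs(baseline))
    sensitivities[name] = max(effects)

# Rank parameters from most to least sensitive.
ranking = sorted(sensitivities, key=sensitivities.get, reverse=True)
print(ranking)
```

    Parameters whose perturbation barely moves the output fall to the end of the ranking, mirroring the "sensitive" versus non-sensitive categorization used in the study.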

  13. MOVES regional level sensitivity analysis

    Science.gov (United States)

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  14. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

    This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis
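
    The reported effect of positive cross-correlation can be illustrated with a small Monte Carlo sketch (illustrative variances, not Hanford data): advective travel time scales roughly as effective thickness over transmissivity, so positively correlated deviations in the two parameters partially cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

def travel_time_std(rho):
    # Draw correlated standard normals for log-transmissivity and
    # log-effective-thickness (illustrative magnitudes only).
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    logT = z[:, 0]            # log-transmissivity
    logb = 0.5 * z[:, 1]      # log-effective-thickness, smaller variance
    # Darcy travel time scales as t ~ b / T (porosity, path length and
    # gradient held fixed), so log t = log b - log T.
    log_t = logb - logT
    return log_t.std()

# Positive cross-correlation between T and b reduces travel-time spread:
# Var(log t) = 0.25 + 1 - 2 * rho * 0.5 = 1.25 - rho.
print(travel_time_std(0.0), travel_time_std(0.8))
```
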

  15. Parameter Identification with the Random Perturbation Particle Swarm Optimization Method and Sensitivity Analysis of an Advanced Pressurized Water Reactor Nuclear Power Plant Model for Power Systems

    Directory of Open Access Journals (Sweden)

    Li Wang

    2017-02-01

    The ability to obtain appropriate parameters for an advanced pressurized water reactor (PWR) unit model is of great significance for power system analysis. The model involves nonlinear relationships, long transition times and intercoupled parameters that are difficult to obtain from practical tests, which together complicate parameter identification. In this paper, a model and a parameter identification method for the PWR primary loop system were investigated. A parameter identification process was proposed, using a particle swarm optimization (PSO) algorithm based on random perturbation (RP-PSO). The identification process included model variable initialization based on the differential equations of each sub-module and a program setting method, parameter obtainment through sub-module identification in the Matlab/Simulink software (MathWorks Inc., Natick, MA, USA), as well as adaptation analysis for the integrated model. Extensive parameter identification work was carried out, the results of which verified the effectiveness of the method. It was found that changes in some parameters, such as the fuel temperature and coolant temperature feedback coefficients, changed the model gain, for which the trajectory sensitivities were not zero; obtaining their appropriate values thus had significant effects on the simulation results. The trajectory sensitivities of some parameters in the core neutron dynamics module were interrelated, making those parameters difficult to identify. Model parameter sensitivity could differ with the model input conditions, reflecting how difficult a parameter is to identify under various input conditions.
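
    A minimal sketch of particle swarm optimization with a random-perturbation step, applied to a toy exponential-response identification problem; the test system, swarm settings and perturbation scheme are illustrative assumptions, not the paper's RP-PSO implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "plant" response with known parameters (a=2.0, b=0.5);
# the identification task is to recover them from the data.
t = np.linspace(0.0, 10.0, 50)
y_meas = 2.0 * np.exp(-0.5 * t)

def cost(p):
    a, b = p
    return float(np.sum((a * np.exp(-b * t) - y_meas) ** 2))

# Plain PSO plus a random-perturbation step on the global best,
# a simplified stand-in for the paper's RP-PSO variant.
n_particles, n_iter = 30, 200
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])
x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()
gbest_cost = pbest_cost.min()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    c = np.array([cost(p) for p in x])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    if pbest_cost.min() < gbest_cost:
        gbest_cost = pbest_cost.min()
        gbest = pbest[pbest_cost.argmin()].copy()
    # Random perturbation: jitter the global best to escape stagnation.
    trial = np.clip(gbest + rng.normal(0.0, 0.05, size=2), lo, hi)
    if cost(trial) < gbest_cost:
        gbest, gbest_cost = trial, cost(trial)

print(gbest)  # should approach (2.0, 0.5)
```
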

  16. Field-sensitivity To Rheological Parameters

    Science.gov (United States)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid's intrinsic time scale λ, limiting viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable, and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to that of the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
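
    The adjoint idea, where one extra linear solve exposes the sensitivity of a target observable to every parameter, can be shown on a tiny linear stand-in for the flow problem; the matrices and observable below are hypothetical, not the Carreau/sphere system.

```python
import numpy as np

# Tiny stand-in for a discretized flow problem: A(theta) u = f,
# observable Q = c^T u.  theta enters the operator linearly here.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])  # dA/dtheta
f = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

def solve_Q(theta):
    u = np.linalg.solve(A0 + theta * A1, f)
    return u, c @ u

theta = 0.5
u, Q = solve_Q(theta)

# Adjoint equation: A^T lambda = c; then dQ/dtheta = -lambda^T (dA/dtheta) u.
lam = np.linalg.solve((A0 + theta * A1).T, c)
dQ_adj = -lam @ (A1 @ u)

# Finite-difference check of the adjoint sensitivity.
eps = 1e-6
dQ_fd = (solve_Q(theta + eps)[1] - solve_Q(theta - eps)[1]) / (2 * eps)
print(dQ_adj, dQ_fd)
```

    The adjoint solve costs one linear system regardless of how many parameters A depends on, which is why the sensitivity computation is cheap relative to the flow itself.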

  17. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    Science.gov (United States)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.

  18. Sensitivity analysis of efficiency thermal energy storage on selected rock mass and grout parameters using design of experiment method

    International Nuclear Information System (INIS)

    Wołoszyn, Jerzy; Gołaś, Andrzej

    2014-01-01

    Highlights: • The paper proposes a new methodology for the sensitivity study of underground thermal storage. • Using the MDF model and DoE technique significantly shortens calculation time. • Calculation of one time step took approximately 57 s. • The sensitivity study covers five thermo-physical parameters. • The conductivity of the rock mass and grout material has a significant impact on efficiency. - Abstract: The aim of this study was to investigate the influence of selected parameters on the efficiency of underground thermal energy storage. In this paper, besides thermal conductivity, the effects of parameters such as the specific heat and density of the rock mass, and the thermal conductivity and specific heat of the grout material, were investigated. Implementation of this objective requires an efficient computational method. The aim of the research was achieved by using a new numerical model, Multi Degree of Freedom (MDF), developed by the authors, together with Design of Experiment (DoE) techniques with a response surface. The presented methodology can significantly reduce the time needed for research and for determining the effect of various parameters on the efficiency of underground thermal energy storage. Preliminary results confirmed that the thermal conductivity of the rock mass has the greatest impact on the efficiency of underground thermal energy storage, and that the other parameters also play a quite significant role.
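
    A minimal sketch of the DoE-plus-response-surface idea: run a small factorial design, fit a quadratic surface by least squares, and compare main effects. The two-factor efficiency function below is hypothetical; the study itself uses five thermo-physical parameters and the MDF thermal model.

```python
import numpy as np

# Hypothetical storage-efficiency response to two coded factors:
# x1 = rock thermal conductivity, x2 = grout specific heat (coded -1..1).
def efficiency(x1, x2):
    return 0.6 + 0.08 * x1 + 0.01 * x2 - 0.02 * x1 ** 2

# Three-level full factorial design (9 runs) in coded units.
levels = [-1.0, 0.0, 1.0]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()
y = efficiency(x1, x2)

# Quadratic response surface fitted by least squares.
D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

# Linear main effects: x1 (rock conductivity) dominates x2 here.
print(coef[1], coef[2])
```

    Because the design spans all quadratic terms, the fitted coefficients recover the underlying effects exactly, and their relative magnitudes give the sensitivity ranking.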

  19. Sensitivity Analysis of Vagus Nerve Stimulation Parameters on Acute Cardiac Autonomic Responses: Chronotropic, Inotropic and Dromotropic Effects.

    Directory of Open Access Journals (Sweden)

    David Ojeda

    Although the therapeutic effects of Vagus Nerve Stimulation (VNS) have been recognized in pre-clinical and pilot clinical studies, the effect of different stimulation configurations on the cardiovascular response is still an open question, especially in the case of VNS delivered synchronously with cardiac activity. In this paper, we propose a formal mathematical methodology to analyze the acute cardiac response to different VNS configurations, jointly considering the chronotropic, dromotropic and inotropic cardiac effects. A Latin hypercube sampling method was chosen to design a uniform experimental plan, composed of 75 different VNS configurations with different values for the main parameters (current amplitude, number of delivered pulses, pulse width, interpulse period, and the delay between the detected cardiac event and VNS onset). These VNS configurations were applied to 6 healthy, anesthetized sheep, while the associated cardiovascular response was acquired. Unobserved VNS configurations were estimated using a Gaussian process regression (GPR) model. In order to quantitatively analyze the effect of each parameter and their combinations on the cardiac response, the Sobol sensitivity method was applied to the obtained GPR model and inter-individual sensitivity markers were estimated using a bootstrap approach. Results highlight the dominant effect of pulse current, pulse width and number of pulses, which explain 49.4%, 19.7% and 6.0%, respectively, of the mean global cardiovascular variability provoked by VNS. More interestingly, results also quantify the effect of the interactions between VNS parameters. In particular, the interactions between current and pulse width provoke higher cardiac effects than changes in the number of pulses alone (between 6 and 25% of the variability). Although the sensitivity of individual VNS parameters seems similar for chronotropic, dromotropic and inotropic responses, the interacting effects of VNS parameters
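
    A Latin hypercube plan like the 75-configuration design described above can be generated in a few lines: each dimension is divided into equal-probability strata and exactly one sample is drawn per stratum. The parameter ranges below are illustrative assumptions, not the study's values.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """One sample per equal-probability stratum in each dimension."""
    d = len(bounds)
    # Stratified uniforms: row i of column j lies in [i/n, (i+1)/n) ...
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    # ... then shuffle each column independently to decouple dimensions.
    for j in range(d):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# Illustrative VNS parameter ranges: amplitude (mA), pulse width (us),
# number of pulses, interpulse period (ms), delay (ms).
bounds = [(0.1, 2.0), (100, 500), (1, 10), (2, 50), (0, 300)]
plan = latin_hypercube(75, bounds, rng)
print(plan.shape)  # (75, 5)
```
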

  20. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
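
    The bounding factor and the joint threshold described above can be computed directly. The formulas below follow the published Ding-VanderWeele result, BF = RR_EU * RR_UD / (RR_EU + RR_UD - 1), and the associated symmetric threshold RR_obs + sqrt(RR_obs * (RR_obs - 1)) (later popularized as the "E-value").

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Maximum factor by which an unmeasured confounder with
    exposure-confounder relative risk rr_eu and confounder-outcome
    relative risk rr_ud can alter the observed risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def joint_threshold(rr_obs):
    """Smallest value both relative risks must jointly reach for
    confounding to fully explain an observed risk ratio rr_obs."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

# A confounder with RR_EU = RR_UD = 2 can shift an estimate by at most
# 2*2/(2+2-1) = 4/3, so it cannot explain away an observed RR of 2.
print(bounding_factor(2.0, 2.0))  # 1.333...
print(joint_threshold(2.0))       # 3.414...
```

    By construction, plugging the threshold back in reproduces the observed estimate: bounding_factor(3.414, 3.414) equals 2 exactly.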

  1. Global sensitivity analysis of a model related to memory formation in synapses: Model reduction based on epistemic parameter uncertainties and related issues.

    Science.gov (United States)

    Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya

    2017-04-21

    We investigate the epistemic uncertainties of parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses, using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca2+ patterns of different frequencies as inputs. We explore the effects of the parameters on the key outputs of the model to discover the most sensitive ones using GSA and the partial rank correlation coefficient (PRCC), and to understand, based on the biology of the problem, why they are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release, so as to have action potentials of different frequencies as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by the PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs, and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA, and hence for any possible model reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
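
    A minimal numpy sketch of the PRCC computation named above: rank-transform inputs and output, regress out the other parameters in rank space, and correlate the residuals. The three-parameter test model is hypothetical, not the CaMKII-NMDAR system.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    n, k = X.shape
    # Rank-transform inputs and output (ranks 0..n-1).
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals after removing the rank-space linear effect of all
        # other parameters from both x_j and y.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(7)
X = rng.random((500, 3))
# Output depends strongly on parameter 0, weakly on 1, not at all on 2.
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500)

coeffs = prcc(X, y)
print(coeffs)  # large for param 0, moderate for 1, near zero for 2
```
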

  2. The implication of sensitivity analysis on the safety and delayed-neutron parameters for fast breeder reactors

    International Nuclear Information System (INIS)

    Onega, R.J.; Florian, R.J.

    1983-01-01

    The delayed-neutron energy spectra for LMFBRs are not as well known as those for LWRs. These spectra are necessary for kinetics calculations, which play an important role in safety and accident analyses. A sensitivity analysis was performed to study the response of the reactor power and power density to uncertainties in the delayed-neutron spectra during a rod-ejection accident. The accidents studied were central control-rod ejections with ejection times of 2, 10 and 30 s. A two-energy-group and two-precursor-group model was formulated for the International Nuclear Fuel Cycle Evaluation (INFCE) reference design MOX-fueled LMFBR. The sensitivity analysis is based on the use of adjoints, so that it is not necessary to repeatedly solve the governing (kinetics) equations to obtain the sensitivity derivatives; this is of particular importance when large systems of equations are used. The power and power-density responses were found to be most sensitive to uncertainties in the spectrum of the second delayed-neutron precursor group, resulting from the fission of 238U, producing neutrons in the first energy group. It was found, for example, that for a rod-ejection time of 30 s, an uncertainty of 7.2% in the fast components of the spectra resulted in a 24% uncertainty in the predicted power and power density. These responses were recalculated by repeatedly solving the kinetics equations; the maximum discrepancy between the recalculated responses and those from the sensitivity analysis was only 1.6%. The results of the sensitivity analysis indicate the need for improved delayed-neutron spectral data in order to reduce the uncertainties in accident analyses. (author)

  3. Histogram analysis derived from apparent diffusion coefficient (ADC) is more sensitive to reflect serological parameters in myositis than conventional ADC analysis.

    Science.gov (United States)

    Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey

    2018-05-01

    Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s/mm². Histogram analysis was performed as a whole-muscle measurement using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75 and ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, the blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1 autoantibodies. Kurtosis correlated inversely with CRP (ρ = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (ρ = -0.43, p = 0.11 and ρ = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy differed between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for the detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters differ significantly between Jo1-positive and Jo1-negative patients.
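
    The histogram parameters listed above can be computed from a whole-muscle ADC map in a few lines. The ROI values below are synthetic, and the kurtosis and entropy definitions (non-excess moment, Shannon entropy of bin probabilities) are common conventions rather than the study's documented choices.

```python
import numpy as np

def adc_histogram_features(adc):
    """Whole-muscle ADC histogram parameters from a 1-D array of voxels."""
    adc = np.asarray(adc, dtype=float)
    m, s = adc.mean(), adc.std()
    z = (adc - m) / s
    # Mode estimated from a fixed-bin histogram.
    counts, edges = np.histogram(adc, bins=32)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": m, "min": adc.min(), "max": adc.max(),
        "median": np.median(adc), "mode": mode,
        "p10": np.percentile(adc, 10), "p25": np.percentile(adc, 25),
        "p75": np.percentile(adc, 75), "p90": np.percentile(adc, 90),
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4),          # non-excess definition
        "entropy": -np.sum(p * np.log2(p)),   # Shannon entropy of bins
    }

rng = np.random.default_rng(3)
roi = rng.normal(1.4, 0.2, size=5000)  # synthetic muscle ADC values
feats = adc_histogram_features(roi)
print(feats["p10"] < feats["median"] < feats["p90"])
```
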

  4. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

    Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research

  5. Sensitivity analysis of the kinetic behaviour of a Gas Cooled Fast Reactor to variations of the delayed neutron parameters

    International Nuclear Information System (INIS)

    Van Rooijen, W. F. G.; Lathouwers, D.

    2007-01-01

    In advanced Generation IV (fast) reactors an integral fuel cycle is envisaged, in which all heavy metal is recycled in the reactor. This leads to a nuclear fuel with a considerable content of minor actinides, and for many of these isotopes the nuclear data are not very well known. In this paper the sensitivity of the kinetic behaviour of the reactor to the dynamic parameters λ_k, β_k and the delayed spectrum χ_d,k is studied using first-order perturbation theory. In the current study, feedback due to Doppler and/or thermohydraulic effects is not treated. The theoretical framework is applied to a Generation IV Gas Cooled Fast Reactor. The results indicate that the first-order approach is satisfactory for small variations of the data. Sensitivities to delayed neutron data are similar for increasing and decreasing transients. Sensitivities generally increase with reactivity for increasing transients. For decreasing transients, there are less clearly defined trends, although the sensitivity to the delayed neutron spectrum decreases with larger sub-criticality, as expected. For this research, an adjoint-capable version of the time-dependent diffusion code DALTON is under development. (authors)

  6. Key Parameters for Urban Heat Island Assessment in A Mediterranean Context: A Sensitivity Analysis Using the Urban Weather Generator Model

    Science.gov (United States)

    Salvati, Agnese; Palme, Massimo; Inostroza, Luis

    2017-10-01

    Although the Urban Heat Island (UHI) is a fundamental effect modifying the urban climate and has been widely studied, the relative weight of the parameters involved in its generation is still not clear. This paper investigates the hierarchy of importance of eight parameters responsible for UHI intensity in the Mediterranean context. Sensitivity analyses have been carried out using the Urban Weather Generator model, considering the range of variability of: 1) city radius, 2) urban morphology, 3) tree coverage, 4) anthropogenic heat from vehicles, 5) buildings' cooling set point, 6) heat released to the canyon from HVAC systems, 7) wall construction properties and 8) albedo of vertical and horizontal surfaces. Results show a clear hierarchy of significance among the considered parameters; urban morphology is the most important variable, causing a relative change of up to 120% in the annual average UHI intensity in the Mediterranean context. The impact of anthropogenic sources of heat such as cooling systems and vehicles is also significant. These results suggest that urban morphology parameters can be used as descriptors of the climatic performance of different urban areas, easing the work of urban planners and designers in understanding a complex physical phenomenon such as the UHI.

  7. Sensitivity analysis of low-flow parameters using the hourly hydrological model for two mountainous basins in Japan

    Science.gov (United States)

    Fujimura, Kazumasa; Iseri, Yoshihiko; Kanae, Shinjiro; Murakami, Masahiro

    2014-05-01

    Accurate estimation of low flow can contribute to better water resources management and also lead to more reliable evaluation of climate change impacts on water resources. In an early study, Horton (1937) suggested that the nonlinearity of low flow related to basin storage follows the exponential function Q = K S^N, where Q is the discharge, S is the storage, K is a constant and N is the exponent. A recent study by Ding (2011) gave the general storage-discharge equation Q = K^N S^N. Since the constant K is defined as the fractional recession constant and symbolized as A_u by Ando et al. (1983), in this study we rewrite this equation as Q_g = A_u^N S_g^N, where Q_g is the groundwater runoff and S_g is the groundwater storage. Although this equation was applied by Ding to a short-term runoff event of less than 14 hours using the unit hydrograph method, it had not yet been applied to long-term runoff records, including low flow, of more than 10 years. This study performed a sensitivity analysis of the two parameters A_u and N using an hourly hydrological model for two mountainous basins in Japan. The hourly hydrological model used in this study was presented by Fujimura et al. (2012) and comprises the Diskin-Nazimov infiltration model, groundwater recharge and groundwater runoff calculations, and a direct runoff component. The study basins are the Sameura Dam basin (SAME basin) (472 km2), located in western Japan, which has high rainfall variability, and the Shirakawa Dam basin (SIRA basin) (205 km2), located in a region of heavy snowfall in eastern Japan; the two basins thus differ in climate and geology. The period of available hourly data for the SAME basin is 20 years, from 1 January 1991 to 31 December 2010, and for the SIRA basin is 10 years, from 1 October 2003 to 30 September 2013. In the sensitivity analysis, we prepared 19900 sets of the two parameters A_u and N; the A_u value ranges from 0.0001 to 0.0100 in steps of 0

  8. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-12

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
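
    A minimal sketch of the elementary effect (EE) screening method named above, on a hypothetical three-parameter toy model (one linear, one nonlinear, one inactive input) rather than the five HiLAT parameters: each randomized trajectory perturbs every parameter once, and the mean absolute effect ranks importance while the spread flags nonlinearity or interaction.

```python
import numpy as np

def elementary_effects(model, k, r, delta, rng):
    """Morris-style OAT screening: r randomized trajectories in [0,1]^k,
    each perturbing every parameter once by +delta."""
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.random(k) * (1.0 - delta)      # leave room to add delta
        y0 = model(x)
        for j in rng.permutation(k):           # random one-at-a-time order
            x_new = x.copy()
            x_new[j] += delta
            y1 = model(x_new)
            effects[j].append((y1 - y0) / delta)
            x, y0 = x_new, y1
    ee = np.array(effects)
    mu_star = np.abs(ee).mean(axis=1)   # overall importance
    sigma = ee.std(axis=1)              # nonlinearity / interaction
    return mu_star, sigma

# Toy model: x0 strong linear, x1 nonlinear, x2 inactive.
def model(x):
    return 10.0 * x[0] + 5.0 * x[1] ** 2 + 0.0 * x[2]

rng = np.random.default_rng(5)
mu_star, sigma = elementary_effects(model, k=3, r=50, delta=0.25, rng=rng)
print(mu_star)  # x0 and x1 large, x2 zero
```

    With r trajectories the method needs only r*(k+1) model runs, which is why it scales better than full one-at-a-time scans over a multi-dimensional parameter space.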

  9. Sensitivity Analysis of Input Parameters for the Dose Assessment from Gaseous Effluents due to the Normal Operation of Jordan Research and Training Reactor

    International Nuclear Information System (INIS)

    Kim, Sukhoon; Lee, Seunghee; Kim, Juyoul; Kim, Juyub; Han, Moonhee

    2015-01-01

    In this study, therefore, a sensitivity analysis of the input variables for the dose assessment was performed to review the effect of each parameter on the result, after determining the type and range of parameters that could affect the exposure dose of the public. (Since the JRTR will be operated on the concept of 'no liquid discharge,' the input parameters used for calculation of dose due to liquid effluents are not considered in the sensitivity analysis.) In this paper, the sensitivity analysis of input parameters for the dose assessment in the vicinity of the site boundary due to gaseous effluents was performed for a total of thirty-five (35) cases, and detailed results for the input variables that have a significant effect are shown in Figures 1 through 7, respectively. In preparing a R-ER for the operating license of the JRTR, these results will be updated with additional information and could be applied to predicting the variation trend of the exposure dose when updating the input parameters for the dose assessment to reflect the characteristics of the JRTR site.

  10. [Simulation of carbon cycle in Qianyanzhou artificial masson pine forest ecosystem and sensitivity analysis of model parameters].

    Science.gov (United States)

    Wang, Yuan; Zhang, Na; Yu, Gui-rui

    2010-07-01

    By using the modified carbon-water cycle model EPPML (ecosystem productivity process model for landscape), the carbon absorption and respiration in the Qianyanzhou artificial masson pine forest ecosystem in 2003 and 2004 were simulated, and the sensitivity of the model parameters was analyzed. The results showed that EPPML could effectively simulate the carbon cycle process of this ecosystem. The simulated annual values and seasonal variations of gross primary productivity (GPP), net ecosystem productivity (NEP), and ecosystem respiration (Re) not only fitted well with the measured data, but also reflected the major impacts of extreme weather on carbon flows. The artificial masson pine forest ecosystem in Qianyanzhou was a strong carbon sink in both 2003 and 2004. Due to the coupling of high temperature and severe drought in the growing season of 2003, carbon absorption in 2003 was lower than that in 2004; the annual NEP in 2003 and 2004 was 481.8 and 516.6 g C x m(-2) x a(-1), respectively. The key climatic factors with important impacts on the seasonal variations of the carbon cycle were solar radiation during the early growing season, drought during the peak growing season, and precipitation during the post-peak growing season. Autotrophic respiration (Ra) and net primary productivity (NPP) had similar seasonal variations. Soil heterotrophic respiration (Rh) was mainly affected by soil temperature at the yearly scale, and by soil water content at the monthly scale. During the wet growing season, the higher the soil water content, the lower the Rh; during the dry growing season, the higher the precipitation during the two preceding months, the higher the Rh. The maximum RuBP carboxylation rate at 25 degrees C (Vm25), specific leaf area (SLA), maximum leaf nitrogen content (LNm), average leaf nitrogen content (LN), and conversion coefficient of biomass to carbon (C/B) had the greatest influence on annual NEP. Different carbon cycle process could have different responses to sensitive

  11. Parameter Sensitivity Analysis on Deformation of Composite Soil-Nailed Wall Using Artificial Neural Networks and Orthogonal Experiment

    Directory of Open Access Journals (Sweden)

    Jianbin Hao

    2014-01-01

    Full Text Available Based on the back-propagation algorithm of artificial neural networks (ANNs), this paper establishes an intelligent model to predict the maximum lateral displacement of a composite soil-nailed wall. Parameters such as soil cohesive strength, soil friction angle, prestress of anchor cable, soil-nail spacing, soil-nail diameter, soil-nail length, and other factors are considered in the model. Combined with in situ test data from composite soil-nailed wall reinforcement engineering, the network is trained and the errors are analyzed, demonstrating that the method is applicable and feasible for predicting the lateral displacement of excavations retained by composite soil-nailed walls. Extended calculations are conducted using the well-trained intelligent forecast model. Through application of orthogonal table test theory, 25 sets of tests are designed to analyze the sensitivity of factors affecting the maximum lateral displacement of the composite soil-nailed wall. The results show that the sensitivity of these factors, in descending order, is: prestress of anchor cable, soil friction angle, soil cohesive strength, soil-nail spacing, soil-nail length, and soil-nail diameter. The results can provide an important reference for similar reinforcement engineering.
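
    The orthogonal-table sensitivity ranking described above is usually computed by range analysis: for each factor, the spread of the mean response across its levels measures its influence. A minimal sketch in Python, using a hypothetical L9(3^4) design and made-up displacement values rather than the paper's 25-run design or data:

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 4 factors, each at 3 coded levels (0, 1, 2).
design = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical displacements (mm): strongly driven by factor 0, weakly by factor 3.
disp = 10.0 + 4.0 * design[:, 0] + 1.0 * design[:, 1] + 0.2 * design[:, 3]

def range_analysis(design, y):
    """Range R_j = spread of the mean response across each factor's levels."""
    ranges = []
    for j in range(design.shape[1]):
        level_means = [y[design[:, j] == lev].mean() for lev in np.unique(design[:, j])]
        ranges.append(max(level_means) - min(level_means))
    return np.array(ranges)

R = range_analysis(design, disp)
sensitivity_order = np.argsort(R)[::-1]   # most influential factor first
```

    Because the array is orthogonal, each level mean averages over a balanced set of the other factors, so the ranges isolate the main effects.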

  12. A sensitivity analysis of a personalized pulse wave propagation model for arteriovenous fistula surgery. Part A: Identification of most influential model parameters.

    Science.gov (United States)

    Huberts, W; de Jonge, C; van der Linden, W P M; Inda, M A; Tordoir, J H M; van de Vosse, F N; Bosboom, E M H

    2013-06-01

    Previously, a pulse wave propagation model was developed that has potential in supporting decision-making in arteriovenous fistula (AVF) surgery for hemodialysis. To adapt the wave propagation model to personalized conditions, patient-specific input parameters should be available. In clinics, the number of measurable input parameters is limited, which results in sparse datasets. In addition, patient data are subject to uncertainty. These uncertain and incomplete input datasets will result in model output uncertainties. By means of a sensitivity analysis, the propagation of input uncertainties into output uncertainty can be studied, which can give directions for input measurement improvement. In this study, a computational framework has been developed to perform such a sensitivity analysis with a variance-based method and Monte Carlo simulations. The framework was used to determine the influential parameters of our pulse wave propagation model applied to AVF surgery, with respect to parameter prioritization and parameter fixing. With this we were able to determine the model parameters that have the largest influence on the predicted mean brachial flow and systolic radial artery pressure after AVF surgery. Of all 73 parameters, 51 could be fixed within their measurement uncertainty interval without significantly influencing the output, while 16 parameters strongly influence the output uncertainty. Measurement accuracy improvement should thus focus on these 16 influential parameters. The most rewarding are measurement improvements of the following parameters: the mean aortic flow, the aortic windkessel resistance, the parameters associated with the smallest arterial or venous diameters of the AVF in- and outflow tract, and the radial artery windkessel compliance. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
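
    The variance-based method with Monte Carlo simulations used for parameter prioritization and fixing can be sketched as a Saltelli-style estimator of first-order and total-effect Sobol indices. The toy model below stands in for the pulse wave propagation model; its form and parameter ranges are illustrative assumptions, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical surrogate: output dominated by x0, weakly dependent on x1, x2.
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2] ** 2

n, d = 20000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # matrix A with column i taken from B
    yABi = model(ABi)
    S1.append(np.mean(yB * (yABi - yA)) / var_y)        # first-order index (Saltelli estimator)
    ST.append(0.5 * np.mean((yA - yABi) ** 2) / var_y)  # total-effect index (Jansen estimator)

ranking = np.argsort(ST)[::-1]      # most influential parameter first
```

    Parameters whose total-effect index is negligible are candidates for fixing at nominal values; the rest are prioritized for more accurate measurement, mirroring the 51/16 split reported above.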

  13. Parameter Estimation, Sensitivity Analysis and Optimal Control of a Periodic Epidemic Model with Application to HRSV in Florida

    Directory of Open Access Journals (Sweden)

    Silvério Rosa

    2018-02-01

    Full Text Available A statewide Human Respiratory Syncytial Virus (HRSV) surveillance system was implemented in Florida in 1999 to support clinical decision-making for prophylaxis of premature infants. The research presented in this paper addresses the problem of fitting real data collected by the Florida HRSV surveillance system by using a periodic SEIRS mathematical model. A sensitivity and cost-effectiveness analysis of the model is performed, and an optimal control problem is formulated and solved with treatment as the control variable.
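
    A periodic SEIRS model of the kind fitted here is an SEIR system with loss of immunity (R back to S) and a seasonally forced transmission rate. A minimal forward-Euler sketch; all rates and the forcing amplitude are illustrative assumptions, not the fitted Florida HRSV values:

```python
import math

beta0, beta1 = 0.6, 0.3     # mean transmission rate and seasonal amplitude (assumed)
sigma = 1.0 / 4.0           # incubation rate = 1 / latent period (days)
gamma = 1.0 / 7.0           # recovery rate
delta = 1.0 / 365.0         # loss-of-immunity rate (R -> S)

def beta(t):
    # One-year periodic forcing of the transmission rate.
    return beta0 * (1.0 + beta1 * math.cos(2.0 * math.pi * t / 365.0))

S, E, I, R = 0.99, 0.0, 0.01, 0.0   # fractions of the population
dt = 0.1
for step in range(int(2 * 365 / dt)):   # integrate two years
    t = step * dt
    new_inf = beta(t) * S * I
    dS = delta * R - new_inf
    dE = new_inf - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I - delta * R
    S += dt * dS
    E += dt * dE
    I += dt * dI
    R += dt * dR
```

    The four derivatives sum to zero, so the total population fraction is conserved; this is a useful sanity check on any SEIRS implementation.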

  14. Analysis of ex-vessel melt jet breakup and coolability. Part 1: Sensitivity on model parameters and accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Moriyama, Kiyofumi; Park, Hyun Sun, E-mail: hejsunny@postech.ac.kr; Hwang, Byoungcheol; Jung, Woo Hyun

    2016-06-15

    Highlights: • Application of JASMINE code to melt jet breakup and coolability in APR1400 condition. • Coolability indexes for quasi steady state breakup and cooling process. • Typical case in complete breakup/solidification, film boiling quench not reached. • Significant impact of water depth and melt jet size; weak impact of model parameters. - Abstract: The breakup of a melt jet falling in a water pool and the coolability of the melt particles produced by such breakup are important phenomena for the mitigation of severe accident consequences in light water reactors, because the molten, relocated core material is the primary heat source governing the accident progression. We applied a modified version of the fuel–coolant interaction simulation code JASMINE, developed at the Japan Atomic Energy Agency (JAEA), to a plant-scale simulation of melt jet breakup and cooling assuming an ex-vessel condition in the APR1400, a Korean advanced pressurized water reactor. We also examined the sensitivity to seven model parameters and five initial/boundary condition variables. The results showed that the melt cooling performance of a 6 m deep water pool in the reactor cavity is sufficient to remove the initial melt enthalpy for solidification, for a melt jet of 0.2 m initial diameter. The impacts of the model parameters were relatively weak, while those of some initial/boundary condition variables, namely the water depth and the melt jet diameter, were very strong. The present model indicated that a significant fraction of the melt jet is not broken up and forms a continuous melt pool on the containment floor in cases with a large melt jet diameter, 0.5 m, or a shallow water pool, ≤3 m.

  15. A sensitivity analysis of a personalized pulse wave propagation model for arteriovenous fistula surgery. Part B: Identification of possible generic model parameters.

    Science.gov (United States)

    Huberts, W; de Jonge, C; van der Linden, W P M; Inda, M A; Passera, K; Tordoir, J H M; van de Vosse, F N; Bosboom, E M H

    2013-06-01

    Decision-making in vascular access surgery for hemodialysis can be supported by a pulse wave propagation model that is able to simulate pressure and flow changes induced by the creation of a vascular access. To personalize such a model, patient-specific input parameters should be chosen. However, the number of input parameters that can be measured in clinical routine is limited. Besides, patient data are compromised with uncertainty. Incomplete and uncertain input data will result in uncertainties in model predictions. In part A, we analyzed how the measurement uncertainty in the input propagates to the model output by means of a sensitivity analysis. Of all 73 input parameters, 16 parameters were identified to be worthwhile to measure more accurately and 51 could be fixed within their measurement uncertainty range, but these latter parameters still needed to be measured. Here, we present a methodology for assessing the model input parameters that can be taken constant and therefore do not need to be measured. In addition, a method to determine the value of this parameter is presented. For the pulse wave propagation model applied to vascular access surgery, six patient-specific datasets were analyzed and it was found that 47 out of 73 parameters can be fixed on a generic value. These model parameters are not important for personalization of the wave propagation model. Furthermore, we were able to determine a generic value for 37 of the 47 fixable model parameters. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Sensitivity analysis of the leaching rate parameter in assessing the environmental risk of phosphogypsum application in sanitary landfills

    Energy Technology Data Exchange (ETDEWEB)

    Marchesi, Marcos Vinicius A.; Hama, Naruhiko; Jacomino, Vanusa M.F.; Ladeira, Ana Claudia Q.; Cota, Stela D.S., E-mail: mvmarchesi@hotmail.com, E-mail: sdsc@cdtn.br, E-mail: vmfj@cdtn.br, E-mail: ana.ladeira@cdtn.br, E-mail: naruhikohama@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2013-07-01

    The attack of phosphate rock with sulfuric acid produces phosphoric acid, the basic raw material in the manufacture of fertilizers, together with a by-product called phosphogypsum. Phosphogypsum is composed mostly of calcium sulfate dihydrate, but may contain high levels of impurities from the phosphate rock matrix, such as a series of natural radionuclides, heavy metals (e.g. Cd, Zn) and metalloids (e.g. As and Se). Although it is used for agricultural purposes and, more recently, in construction, in Brazil the generation rate, estimated at six million tons per year, is much higher than the amount consumed by existing alternatives; phosphogypsum is therefore mostly deposited in piles at the production site, creating a risk of contamination of the soil and water resources of the region and a risk to human health. Taking into account the need to find alternative destinations for phosphogypsum and to reduce the impact generated by its contaminants, this study aims to analyze the sensitivity of the leaching rate parameter in the environmental risk evaluation of the application of phosphogypsum in landfills through mathematical modeling, in which the concentration of heavy metals and radionuclides in the soil layer under the clay liner of the landfill is evaluated.

  17. Sensitivity analysis of the leaching rate parameter in assessing the environmental risk of phosphogypsum application in sanitary landfills

    International Nuclear Information System (INIS)

    Marchesi, Marcos Vinicius A.; Hama, Naruhiko; Jacomino, Vanusa M.F.; Ladeira, Ana Claudia Q.; Cota, Stela D.S.

    2013-01-01

    The attack of phosphate rock with sulfuric acid produces phosphoric acid, the basic raw material in the manufacture of fertilizers, together with a by-product called phosphogypsum. Phosphogypsum is composed mostly of calcium sulfate dihydrate, but may contain high levels of impurities from the phosphate rock matrix, such as a series of natural radionuclides, heavy metals (e.g. Cd, Zn) and metalloids (e.g. As and Se). Although it is used for agricultural purposes and, more recently, in construction, in Brazil the generation rate, estimated at six million tons per year, is much higher than the amount consumed by existing alternatives; phosphogypsum is therefore mostly deposited in piles at the production site, creating a risk of contamination of the soil and water resources of the region and a risk to human health. Taking into account the need to find alternative destinations for phosphogypsum and to reduce the impact generated by its contaminants, this study aims to analyze the sensitivity of the leaching rate parameter in the environmental risk evaluation of the application of phosphogypsum in landfills through mathematical modeling, in which the concentration of heavy metals and radionuclides in the soil layer under the clay liner of the landfill is evaluated.

  18. A sensitivity analysis and assessment of the reactivity, economics and resource implications of reactor systems and cycles with respect to uncertainty in nuclear data and other reactor parameters

    International Nuclear Information System (INIS)

    Quan, B.L.

    1980-01-01

    A general sensitivity analysis system for analyzing the effects of uncertainty in nuclear data and reactor parameters on fuel cycle economics, resources and physics has been developed. The sensitivity analysis has been performed on various reactor systems and cycles, such as thorium cycles, plutonium cycles, CANDU reactor fuel cycles, and alternate once-through LWR cycles such as the 18-month cycle. Sensitivity coefficients were generated for a variety of materials pertinent to the LWR fuel cycle using a series of fast-running codes developed for this purpose and running on a local PDP-15 computer. Their relative order of importance was assessed and the reasons for the differences were examined. This work is a result of an EPRI project on determining the data needs of the LWR industry and should be valuable in identifying areas in which data improvements are worthwhile
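
    Sensitivity coefficients of the kind generated here are commonly defined as relative (logarithmic) derivatives, S_i = (x_i / y) · ∂y/∂x_i, estimated by finite differences. A generic sketch with a hypothetical levelized fuel-cost function (the cost model is made up for illustration, not one of the EPRI codes):

```python
def rel_sensitivity(f, x, i, h=1e-4):
    """Relative sensitivity coefficient S_i = (x_i / f(x)) * df/dx_i,
    estimated by central differences with relative step h."""
    xp, xm = list(x), list(x)
    xp[i] += h * x[i]
    xm[i] -= h * x[i]
    dfdx = (f(xp) - f(xm)) / (2.0 * h * x[i])
    return x[i] * dfdx / f(x)

# Hypothetical levelized fuel cost ~ price / burnup (illustrative only).
def fuel_cost(p):
    uranium_price, burnup = p
    return 2.0 * uranium_price / burnup

S = [rel_sensitivity(fuel_cost, [100.0, 45.0], i) for i in range(2)]
```

    For power-law models the relative coefficients recover the exponents (+1 for price, -1 for burnup here), which makes them convenient for ranking the relative importance of inputs across different units.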

  19. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  20. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  1. The mobilisation model and parameter sensitivity

    International Nuclear Information System (INIS)

    Blok, B.M.

    1993-12-01

    In the PRObabilistic Safety Assessment (PROSA) of radioactive waste in a salt repository, one of the nuclide release scenarios is the subrosion scenario. A new subrosion model, SUBRECN, has been developed. In this model the combined effect of depth-dependent subrosion, glass dissolution, and salt rise is taken into account. The subrosion model SUBRECN and its implementation in the German computer program EMOS4 are presented. A new computer program, PANTER, derived from EMOS4, models releases of radionuclides via subrosion from a disposal site in a salt pillar into the biosphere. For the uncertainty and sensitivity analyses of the new subrosion model, Latin Hypercube Sampling has been used to determine values for the uncertain parameters. The influence of parameter uncertainty on the dose calculations has been investigated with the following sensitivity techniques: Spearman Rank Correlation Coefficients, Partial Rank Correlation Coefficients, Standardised Rank Regression Coefficients, and the Smirnov Test. (orig./HP)
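
    The combination of Latin Hypercube Sampling with Spearman rank correlation coefficients used here can be sketched as follows. The three-parameter dose response is a made-up stand-in for PANTER, not the actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

def lhs(n_samples, n_params, rng):
    """Latin Hypercube Sample on the unit hypercube: one point per stratum,
    with strata independently shuffled per parameter."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])
    return u

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

X = lhs(500, 3, rng)
# Hypothetical dose response: strongly increasing in parameter 0,
# independent of parameter 1, decreasing in parameter 2.
dose = np.exp(2.0 * X[:, 0]) - 1.5 * X[:, 2] + 0.05 * rng.standard_normal(500)

src = [spearman(X[:, j], dose) for j in range(3)]
```

    Rank-based coefficients are preferred here because they capture monotone but nonlinear input-output relationships, such as the exponential dependence on the first parameter.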

  2. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    Energy Technology Data Exchange (ETDEWEB)

    Goldstein, Peter [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  3. Importance and sensitivity of parameters affecting the Zion Seismic Risk

    International Nuclear Information System (INIS)

    George, L.L.; O'Connell, W.J.

    1985-06-01

    This report presents the results of a study on the importance and sensitivity of structures, systems, equipment, components and design parameters used in the Zion Seismic Risk Calculations. This study is part of the Seismic Safety Margins Research Program (SSMRP) supported by the NRC Office of Nuclear Regulatory Research. The objective of this study is to provide the NRC with results on the importance and sensitivity of parameters used to evaluate seismic risk. These results can assist the NRC in making decisions dealing with the allocation of research resources on seismic issues. This study uses marginal analysis in addition to importance and sensitivity analysis to identify subject areas (input parameter areas) for improvements that reduce risk, estimate how much the improvement efforts reduce risk, and rank the subject areas for improvements. Importance analysis identifies the systems, components, and parameters that are important to risk. Sensitivity analysis estimates the change in risk per unit improvement. Marginal analysis indicates the reduction in risk or uncertainty for improvement effort made in each subject area. The results described in this study were generated using the SEISIM (Systematic Evaluation of Important Safety Improvement Measures) and CHAIN computer codes. Part 1 of the SEISIM computer code generated the failure probabilities and risk values. Part 2 of SEISIM, along with the CHAIN computer code, generated the importance and sensitivity measures

  4. Importance and sensitivity of parameters affecting the Zion Seismic Risk

    Energy Technology Data Exchange (ETDEWEB)

    George, L.L.; O'Connell, W.J.

    1985-06-01

    This report presents the results of a study on the importance and sensitivity of structures, systems, equipment, components and design parameters used in the Zion Seismic Risk Calculations. This study is part of the Seismic Safety Margins Research Program (SSMRP) supported by the NRC Office of Nuclear Regulatory Research. The objective of this study is to provide the NRC with results on the importance and sensitivity of parameters used to evaluate seismic risk. These results can assist the NRC in making decisions dealing with the allocation of research resources on seismic issues. This study uses marginal analysis in addition to importance and sensitivity analysis to identify subject areas (input parameter areas) for improvements that reduce risk, estimate how much the improvement efforts reduce risk, and rank the subject areas for improvements. Importance analysis identifies the systems, components, and parameters that are important to risk. Sensitivity analysis estimates the change in risk per unit improvement. Marginal analysis indicates the reduction in risk or uncertainty for improvement effort made in each subject area. The results described in this study were generated using the SEISIM (Systematic Evaluation of Important Safety Improvement Measures) and CHAIN computer codes. Part 1 of the SEISIM computer code generated the failure probabilities and risk values. Part 2 of SEISIM, along with the CHAIN computer code, generated the importance and sensitivity measures.

  5. Correlation of 210Po implanted in glass with radon gas exposure: sensitivity analysis of critical parameters using a Monte-Carlo approach.

    Science.gov (United States)

    Walsh, C; McLaughlin, J P

    2001-05-14

    In recent years, 210Po implanted in glass artefacts has been used as an indicator of the mean radon gas concentration in dwellings in the past. Glass artefacts have been selected in many dwellings and the alpha-recoil implanted 210Po concentration has been measured using various techniques. Some of these retrospective techniques use a model to estimate the retrospective radon gas concentration on the basis of this surface 210Po activity. The accumulation of 210Po on glass surfaces is determined by the deposition regime over the exposure period. The 210Po activity is determined not only by the radon progeny deposition velocities, but also by other room parameters such as ventilation rate, aerosol conditions and the surface to volume ratio of the room. Up to now, in using room models a nominal or 'base-case' scenario has been assumed, i.e. a single value is chosen for each input parameter. In this paper a Monte-Carlo analysis is presented in which a probability distribution for each parameter is chosen, based on measurements quoted in the literature. A 210Po surface activity is calculated using a single value drawn from each of the parameter distributions using a pseudo-random number generator. This process is repeated n times (up to 20,000), producing n independent scenarios with corresponding 210Po values. This permits a sensitivity analysis of the effect of changes in inputs on the model output.
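
    The Monte-Carlo procedure described — drawing each input from a probability distribution and recomputing the output for each of n independent scenarios — can be sketched as below. The distributions and the steady-state response are illustrative assumptions, not the paper's room model or measured data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical input distributions (illustrative only):
ventilation = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)    # air changes per hour
deposition_v = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n)  # progeny deposition velocity
surf_to_vol = rng.uniform(2.5, 3.5, size=n)                         # room surface-to-volume ratio
radon = rng.normal(200.0, 30.0, size=n)                             # radon concentration, Bq/m3

# Toy steady-state surface-activity proxy (NOT the paper's room model);
# each of the n draws is one independent scenario.
po210 = radon * deposition_v / (ventilation + deposition_v * surf_to_vol)

p5, p50, p95 = np.percentile(po210, [5, 50, 95])
```

    The resulting distribution of outputs, rather than a single base-case value, is what allows the influence of each input's spread on the model output to be quantified.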

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  7. Probabilistic and Nonprobabilistic Sensitivity Analyses of Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Sheng-En Fang

    2014-01-01

    Full Text Available Parameter sensitivity analyses have been widely applied to industrial problems for evaluating parameter significance, effects on responses, uncertainty influence, and so forth. In the interest of simple implementation and computational efficiency, this study develops two sensitivity analysis methods corresponding to situations with or without sufficient probability information. The probabilistic method is established with the aid of a stochastic response surface, and a mathematical derivation proves that the coefficients of the first-order terms embody the parameter main effects on the response. Simultaneously, a nonprobabilistic method based on interval analysis is put forward for the circumstance in which the parameter probability distributions are unknown. The two methods have been verified against a numerical beam example, with their accuracy compared to that of a traditional variance-based method. The analysis results demonstrate the reliability and accuracy of the developed methods, and their suitability for different situations is also discussed.
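
    The probabilistic method's key observation — that the first-order coefficients of a fitted response surface embody the parameter main effects — can be sketched with an ordinary least-squares fit. The "beam response" below is a made-up linear model with noise, not the paper's numerical beam example:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = rng.uniform(-1.0, 1.0, (n, 3))      # parameters scaled to [-1, 1]
# Hypothetical beam response: deflection dominated by parameter 0.
y = 5.0 * X[:, 0] + 1.0 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * rng.standard_normal(n)

# First-order response surface y ≈ b0 + sum_i bi * xi, fitted by least squares;
# the magnitudes |bi| serve as main-effect sensitivities.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
main_effects = np.abs(coef[1:])
order = np.argsort(main_effects)[::-1]  # most significant parameter first
```

    Scaling all parameters to a common interval before fitting is what makes the coefficient magnitudes directly comparable across parameters.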

  8. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  9. Evaluation of the Trajectory Sensitivity Analysis of the DFIG Control Parameters in Response to Changes in Wind Speed and the Line Impedance Connection to the Grid DFIG

    Directory of Open Access Journals (Sweden)

    Mehdi Fooladgar

    2015-01-01

    Full Text Available Economic and environmental constraints often restrict the construction of large power stations and transmission lines, so connecting small and medium-sized generation units to existing systems is an ongoing strategy. These units, usually placed near load centers, are known as distributed generators (DG). Permitted DG types vary, including squirrel-cage induction generators connected to wind turbines, doubly fed induction generators (DFIG) mounted on wind turbines, fuel cells connected to the system through power electronic converters, and synchronous generators connected to combustion turbines [10]. In this paper, trajectory sensitivity analysis in distributed generation systems is assessed. It is shown that the method can detect the effect of the control parameters of a wind turbine connected to a doubly fed induction generator (DFIG), as well as of changes in wind speed and in the impedance of the transmission line, on the stability of the system. The control parameters are ranked by their importance in influencing the behavior of the DFIG.

  10. [Temporal and spatial heterogeneity analysis of the optimal values of sensitive parameters in an ecological process model: The BIOME-BGC model as an example].

    Science.gov (United States)

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values of these parameters are taken has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal and spatial heterogeneity judgment index to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented spatiotemporal heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation.
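
    The simulated annealing step used to fit sensitive parameters against flux data can be sketched in its simplest one-parameter form. The linear "flux model" and the observations below are made up for illustration; the Metropolis acceptance rule and geometric cooling schedule are the generic algorithm, not the paper's specific configuration:

```python
import math
import random

random.seed(3)

# Hypothetical 1-parameter model flux(t) = a * t, with synthetic "observed" flux.
t_obs = [1.0, 2.0, 3.0, 4.0]
f_obs = [2.4, 5.1, 7.4, 10.1]

def cost(a):
    # Sum-of-squares misfit between model and observations (the objective function).
    return sum((a * t - f) ** 2 for t, f in zip(t_obs, f_obs))

a, temp = 0.0, 5.0
best_a, best_c = a, cost(a)
for step in range(5000):
    cand = a + random.gauss(0.0, 0.5)       # random perturbation of the parameter
    dc = cost(cand) - cost(a)
    if dc < 0 or random.random() < math.exp(-dc / temp):  # Metropolis acceptance
        a = cand
        if cost(a) < best_c:
            best_a, best_c = a, cost(a)
    temp *= 0.999                           # geometric cooling schedule
```

    Accepting occasional uphill moves while the temperature is high lets the search escape local minima, which matters when many parameters are optimized jointly.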

  11. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Laboratories, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  12. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  13. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  14. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  15. The sensitivity and significance analysis of parameters in the model of pH regulation on lactic acid production by Lactobacillus bulgaricus.

    Science.gov (United States)

    Liu, Ke; Zeng, Xiangmiao; Qiao, Lei; Li, Xisheng; Yang, Yubo; Dai, Cuihong; Hou, Aiju; Xu, Dechang

    2014-01-01

    The excessive production of lactic acid by L. bulgaricus during yogurt storage is a phenomenon we have always tried to prevent. The methods used in industry either control post-acidification inefficiently or kill the probiotics in yogurt. Genetic methods that change the activity of one enzyme related to lactic acid metabolism leave the bacteria short of energy for growth, although they are efficient ways of controlling lactic acid production. A model of pH-induced promoter regulation of lactic acid production by L. bulgaricus was built. The modelled lactic acid metabolism without pH-induced promoter regulation fitted well with wild-type L. bulgaricus (R2LAC = 0.943, R2LA = 0.942). Both the local sensitivity analysis and the Sobol sensitivity analysis indicated that the parameters Tmax, GR, KLR, S, V0, V1 and dLR were sensitive. To guide future biology experiments, three adjustable parameters, KLR, V0 and V1, were chosen for further simulations. V0 had little effect on lactic acid production provided the pH-induced promoter was well induced when pH decreased to its threshold. KLR and V1 both exhibited a great influence on the production of lactic acid. The proposed method of introducing a pH-induced promoter to regulate a repressor gene could restrain the synthesis of lactic acid if an appropriate promoter strength and/or an appropriate ribosome binding sequence (RBS) strength in the lacR gene is designed.
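The Sobol analysis mentioned above can be sketched with a toy two-parameter stand-in for the kinetic model. The names KLR and V1 are borrowed from the abstract for illustration only, and the linear response is an assumption chosen so the exact first-order indices are known (0.9 and 0.1).

```python
import random

random.seed(0)

# Toy stand-in for the lactic-acid model: two uncertain parameters with
# deliberately unequal influence (coefficients are assumptions).
def model(klr, v1):
    return 3.0 * klr + 1.0 * v1

N = 20000
A = [(random.random(), random.random()) for _ in range(N)]  # sample matrix A
B = [(random.random(), random.random()) for _ in range(N)]  # sample matrix B

yA = [model(*s) for s in A]
yB = [model(*s) for s in B]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    # A_B^(i): matrix A with column i taken from B (Saltelli-style estimator
    # for the first-order variance contribution V_i).
    yABi = [model(*[b[j] if j == i else a[j] for j in range(2)])
            for a, b in zip(A, B)]
    vi = sum(yb * (yabi - ya) for ya, yb, yabi in zip(yA, yB, yABi)) / N
    return vi / var

s_klr = first_order(0)
s_v1 = first_order(1)
print(round(s_klr, 2), round(s_v1, 2))
```

For this additive model the first-order indices sum to one; in the paper's ODE setting the same sampling scheme is applied to the full model, where interactions can make the indices sum to less than one.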

  16. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  17. Uncertainty and sensitivity analysis in the neutronic parameters generation for BWR and PWR coupled thermal-hydraulic–neutronic simulations

    International Nuclear Information System (INIS)

    Ánchel, F.; Barrachina, T.; Miró, R.; Verdú, G.; Juanas, J.; Macián-Juan, R.

    2012-01-01

    Highlights: ► Best-estimate codes are affected by uncertainty in their methods and models. ► Influence of the uncertainty in the macroscopic cross-sections on the analysis of BWR and PWR RIA accidents. ► The fast diffusion coefficient, the scattering cross section and both fission cross sections are the most influential factors. ► The absorption cross sections have very little influence. ► With a normal pdf the results are more “conservative”, in terms of the power peak reached, than with uncertainty quantified by a uniform pdf. - Abstract: A best-estimate analysis consists of a coupled thermal-hydraulic and neutronic description of the nuclear system's behavior; uncertainties from both aspects should be included and jointly propagated. This paper presents a study of the influence of the uncertainty in the macroscopic neutronic information that describes a three-dimensional core model on the most relevant results of the simulation of a Reactivity Induced Accident (RIA). The analyses of a BWR RIA and a PWR RIA have been carried out with three-dimensional thermal-hydraulic and neutronic models for the coupled systems TRACE-PARCS and RELAP-PARCS. The cross-section information has been generated by the SIMTAB methodology, based on the joint use of CASMO and SIMULATE. The statistically based methodology performs Monte Carlo sampling of the uncertainty in the macroscopic cross sections. The size of the sample is determined by the characteristics of the tolerance intervals, by applying the Noether–Wilks formulas. A number of simulations equal to the sample size have been carried out, in which the cross sections used by PARCS are directly modified with uncertainty, and non-parametric statistical methods are applied to the resulting sample of output-variable values to determine their tolerance intervals.
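The Wilks-type sample-size calculation referred to above has a compact closed form for first-order one-sided tolerance intervals; the sketch below assumes the standard Wilks formula (smallest N such that the maximum of N runs bounds the chosen coverage with the chosen confidence).

```python
import math

# Minimal first-order Wilks sample-size calculation for a one-sided
# tolerance interval, as used to size non-parametric uncertainty samplings.
def wilks_sample_size(coverage=0.95, confidence=0.95):
    # Smallest N with 1 - coverage**N >= confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_sample_size())  # classic one-sided 95/95 case gives N = 59
```

Raising the confidence level to 99% with the same coverage pushes the required number of code runs to 90, which is why tolerance-interval characteristics directly set the cost of the sampling.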

  18. Probabilistic calculations and sensitivity analysis of parameters for a reference biosphere model assessing the potential exposure of a population to radionuclides from a deep geological repository

    Energy Technology Data Exchange (ETDEWEB)

    Staudt, Christian; Kaiser, Jan Christian [Helmholtz Zentrum Muenchen, Institute of Radiation Protection, Munich (Germany); Proehl, Gerhard [International Atomic Energy Agency, Division of Radiation, Transport and Waste Safety, Wagramerstrasse 5, 1400 Vienna (Austria)

    2014-07-01

    Radioecological models are used to assess the exposure of hypothetical populations to radionuclides. Potential radionuclide sources are deep geological repositories for high-level radioactive waste. Assessment time frames are long, since releases from those repositories are only expected in the far future, and radionuclide migration to the geosphere-biosphere interface will take additional time. Due to the long time frames, climate conditions at the repository site will change, leading to changing exposure pathways and model parameters. To identify climate-dependent changes in exposure in the far field of a deep geological repository, a range of reference biosphere models representing climate analogues for potential future climate states at a German site were developed. In this approach, model scenarios are developed for different contemporary climate states. It is assumed that the exposure pathways and parameters of the contemporary biosphere in the far field of the repository will change to be similar to those at the analogue sites. Since current climate models cannot predict climate developments over the assessment time frame of 1 million years, analogues for a range of realistically possible future climate conditions were selected. These climate states range from steppe to permafrost climate. As the model endpoint, biosphere dose conversion factors (BDCF) are calculated. The radionuclide-specific BDCF describe the exposure of a population to radionuclides entering the biosphere in near-surface groundwater. The BDCF are subject to uncertainties in the exposure pathways and model parameters. In the presented work, probabilistic and sensitivity analyses were used to assess the influence of model parameter uncertainties on the BDCF and the relevance of individual parameters for the model result. This was done for the long half-life radionuclides Cs-135, I-129 and U-238.
In addition to this, BDCF distributions for nine climate reference regions and several scenarios were

  19. Cogeneration: Key feasibility analysis parameters

    International Nuclear Information System (INIS)

    Coslovi, S.; Zulian, A.

    1992-01-01

    This paper first reviews the essential requirements, in terms of scope, objectives and methods, of technical/economic feasibility analyses applied to cogeneration systems proposed for industrial plants in Italy. Attention is given to the influence on overall feasibility of the following factors: electric power and fuel costs, equipment coefficients of performance, operating schedules, maintenance costs, and Italian Government taxes and financial and legal incentives. Through an examination of several feasibility studies done on cogeneration proposals in different industrial sectors, a sensitivity analysis is performed on the effects of varying the weights of different cost-benefit analysis parameters. With the use of statistical analyses, standard deviations are then determined for the key analysis parameters, and guidelines are suggested for simplifying the analysis

  20. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  1. Biosphere assessment for high-level radioactive waste disposal: modelling experiences and discussion on key parameters by sensitivity analysis in JNC

    International Nuclear Information System (INIS)

    Kato, Tomoko; Makino, Hitoshi; Uchida, Masahiro; Suzuki, Yuji

    2004-01-01

    In the safety assessment of a deep geological disposal system for high-level radioactive waste (HLW), biosphere assessment is often necessary to estimate future radiological impacts on human beings (e.g. radiation dose). In order to estimate the dose, the surface environment (biosphere) into which future releases of radionuclides might occur and the associated future human behaviour need to be considered. However, for a deep repository, such releases might not occur for many thousands of years after disposal. Over such timescales, it is impossible to predict with any certainty how the biosphere and human behaviour will evolve. To avoid endless speculation aimed at reducing such uncertainty, the 'Reference Biospheres' concept has been developed for use in the safety assessment of HLW disposal. As the aim of JNC's safety assessment of a hypothetical HLW disposal system was to demonstrate the technical feasibility and reliability of the Japanese disposal concept for a range of geological and surface environments, some biosphere models were developed using the 'Reference Biospheres' concept and the BIOMASS Methodology. These models have been used to derive factors to convert the radionuclide flux from a geosphere to a biosphere into a dose (flux-to-dose conversion factors). Moreover, sensitivity analysis of the parameters in the biosphere models was performed to evaluate and understand their relative importance. It was concluded that transport parameters in the surface environments, the annual amount of food consumption, distribution coefficients on soils and sediments, transfer coefficients of radionuclides to animal products and concentration ratios for marine organisms have a larger influence on the flux-to-dose conversion factors than any other parameters. (author)

  2. Sensitivity analysis in remote sensing

    CERN Document Server

    Ustinov, Eugene A

    2015-01-01

    This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves for quantitative models of physical objects the same purpose, as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...

  3. WHAT IF (Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Full Text Available Sensitivity analysis represents such a well known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of the possibility of generating the future and analyzing it before it unfolds, so that, when it happens, it brings less uncertainty.

  4. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T s and chamber pressure P c . Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables in function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T i and the sublimation rate ṁ sub . T s was identified as most influential parameter on both T i and ṁ sub , followed by P c and the dried product mass transfer resistance α Rp for T i and ṁ sub , respectively. The GSA findings were experimentally validated for ṁ sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for the evaluation of the impact of different process variables on the model outcome, leading to essential process knowledge, without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated with the COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, the effective gas diffusivity and the evaporation rate constant each have a significant effect on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity under a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the surface and in the centre of the medium shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
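The ±20% one-at-a-time procedure described above can be sketched in a few lines: each parameter is perturbed to 80% and 120% of its baseline while the others stay fixed, and the relative output spread is recorded. The model and parameter names below are toy assumptions, not the COMSOL drying model.

```python
# One-at-a-time sensitivity check: perturb each parameter by ±20% while
# holding the others at baseline. Hypothetical toy response, chosen so the
# quadratic term makes evap_const the more sensitive parameter.
def toy_drying_rate(params):
    return params["h_transfer"] * 0.1 + params["evap_const"] ** 2

baseline = {"h_transfer": 10.0, "evap_const": 2.0}
base_out = toy_drying_rate(baseline)

sensitivity = {}
for name in baseline:
    outs = []
    for factor in (0.8, 1.2):  # -20% and +20%
        p = dict(baseline)
        p[name] = baseline[name] * factor
        outs.append(toy_drying_rate(p))
    # Relative output spread caused by the ±20% change.
    sensitivity[name] = (max(outs) - min(outs)) / base_out

for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(name, round(s, 3))
```

Ranking the spreads reproduces the kind of ordering reported in the abstract; the 10-fold perturbation mentioned there is simply a larger factor in the same loop.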

  6. Sensitivity analysis for reactivity parameter change of the creole experiment caused by the differences between ENDF-BVII and JENDL neutron cross section evaluations

    International Nuclear Information System (INIS)

    Boulaich, Y.; Bardouni, C.; Elyounoussi, C.; Elbakkari, H.; Boukhal, H.; Erradi, L.; Nacir, B.

    2011-01-01

    Full text: In this work, we present our analysis of the CREOLE experiment on the reactivity parameter using the three-dimensional continuous-energy code MCNP5 and the latest nuclear data evaluations. This experiment, performed in the EOLE critical facility located at CEA Cadarache, was dedicated to studies of both UO2 and UO2-PuO2 PWR-type lattices covering the whole temperature range from 20 °C to 300 °C. We have developed an accurate model of the EOLE reactor for use with the MCNP5 Monte Carlo code. This model guarantees a high level of fidelity in the description of the different configurations at various temperatures, taking into account their consequences for the neutron cross-section data and all thermal expansion effects. In this case, the remaining discrepancy between calculation and experiment can be attributed mainly to uncertainties in the nuclear data. Our own cross-section library was constructed using the NJOY99.259 code with point-wise nuclear data based on the ENDF-BVII, JEFF3.1, JENDL3.3 and JENDL4 evaluation files. The MCNP model was validated through axial and radial fission rate measurements at room and hot temperatures. Calculation-experiment discrepancies in the reactivity parameter were analyzed, and the results showed that the JENDL evaluations give the most consistent values. In order to identify the source of the relatively large difference between experiment and calculation due to the ENDF-BVII nuclear data evaluation, the discrepancy in reactivity between the ENDF-BVII and JENDL evaluations was decomposed using a sensitivity and uncertainty analysis technique

  7. Parameter-sensitivity analysis of near-field radionuclide transport in buffer material and rock for an underground nuclear fuel waste vault

    International Nuclear Information System (INIS)

    Cheung, S.C.H.; Chan, T.

    1983-08-01

    An analytical model has been developed for radionuclide transport in the vicinity of a nuclear fuel waste container emplaced in a borehole. The model considers diffusion in the buffer surrounding the waste container, and both diffusion and groundwater convection in the rock around the borehole. A parameter-sensitivity analysis has been done to study the effects on radionuclide flux of (a) the Darcian velocity of groundwater in the rock, (b) the effective porosity of the buffer, (c) the porosity of the rock, (d) the radial buffer thickness, and (e) the radius and length of the container. It is found that the radionuclide flux, F_R, and the total integrated flux, F_T, are greater for horizontal flow than for vertical flow; F_R decreases with increasing radial buffer thickness for all Darcian velocities, whereas F_T decreases at high velocities but increases at low velocities. The rate of change of F_R and of F_T decreases with decreasing flow velocity and increasing buffer thickness; F_R is greater for higher effective porosity of the buffer or rock; and F_R increases and F_T decreases with decreasing container radius or length

  8. Sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.

    2014-01-01

    Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two

  9. A parameter tree approach to estimating system sensitivities to parameter sets

    International Nuclear Information System (INIS)

    Jarzemba, M.S.; Sagar, B.

    2000-01-01

    Total System Performance Assessment Code called TPA, realizations are obtained and analyzed. In the examples presented, groups of five important parameters, one for each level of the tree, are used to identify branches of the tree and construct the bins. In the first example, the five important parameters are selected by more traditional sensitivity analysis techniques. This example shows that relatively few branches of the tree dominate system performance. In another example, the same realizations are used but the most important five-parameter set is determined in a stepwise manner (using the parameter tree technique) and it is found that these five parameters do not match the five of the first example. This important result shows that sensitivities based on individual parameters (i.e. one parameter at a time) may differ from sensitivities estimated based on joint sets of parameters (i.e. two or more parameters at a time). The technique is extended using subsystem outputs to define the branches of the tree. The subsystem outputs used in this example are the total cumulative radionuclide release (TCR) from the engineered barriers, unsaturated zone, and saturated zone over 10,000 yr. The technique is found to be successful in estimating the relative influence of each of these three subsystems on the overall system behavior
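The binning idea behind the parameter tree can be sketched with a toy Monte Carlo sample: each realization is classified "+" or "-" relative to the median of each parameter, in a fixed order, and the resulting tuple names a branch of the tree. The parameter names, the linear response, and the two-level tree below are illustrative assumptions, not the TPA code.

```python
import random

random.seed(3)

# Toy stand-in for performance-assessment realizations: two sampled
# parameters and one output ("release"), with infil the dominant one.
N = 4000
runs = [{"infil": random.random(), "solub": random.random()} for _ in range(N)]
for r in runs:
    r["release"] = 5.0 * r["infil"] + 0.5 * r["solub"]

# Split each parameter at its sample median; a branch is the tuple of
# "+"/"-" labels in a fixed parameter order.
medians = {p: sorted(r[p] for r in runs)[N // 2] for p in ("infil", "solub")}

def branch(r):
    return tuple("+" if r[p] > medians[p] else "-" for p in ("infil", "solub"))

bins = {}
for r in runs:
    bins.setdefault(branch(r), []).append(r["release"])

# Mean release per branch shows which branches dominate system behaviour.
for b in sorted(bins):
    print(b, round(sum(bins[b]) / len(bins[b]), 2))
```

Because the branch labels are joint conditions on parameter sets, the branch means can rank parameter combinations differently from one-parameter-at-a-time sensitivities, which is the point the example above makes.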

  10. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
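The verification-by-perturbation step mentioned above can be illustrated with a small hedged check (the response function is a hypothetical scalar model, not the EQ3 chemistry): an analytically computed sensitivity is compared against a central finite difference, which is exactly what a direct perturbation run approximates.

```python
import math

def response(k):
    # Toy smooth response standing in for a geochemical model output.
    return math.log(1.0 + k) + 0.5 * k ** 2

def analytic_sensitivity(k):
    # Exact derivative of the toy response, playing the role of the
    # adjoint/automated-differentiation result.
    return 1.0 / (1.0 + k) + k

def fd_sensitivity(k, h=1e-6):
    # Central finite difference: the "perturbation run" estimate.
    return (response(k + h) - response(k - h)) / (2.0 * h)

k = 0.8
print(abs(analytic_sensitivity(k) - fd_sensitivity(k)))
```

The adjoint approach pays roughly one extra model solve for sensitivities to all inputs at once, whereas the perturbation route needs one run per input, which is the 27x-versus-31,000x contrast the abstract reports.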

  11. Interference and Sensitivity Analysis.

    Science.gov (United States)

    VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth

    2014-11-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.

  12. Sensitivity analysis of system parameters on the performance of the Organic Rankine Cycle system for binary-cycle geothermal power plants

    International Nuclear Information System (INIS)

    Liu, Xiaomin; Wang, Xing; Zhang, Chuhua

    2014-01-01

    The main purpose of this paper is to quantitatively analyze the sensitivity of the performance of the Organic Rankine Cycle (ORC) system to its system parameters. A thermodynamic model of the ORC system for binary-cycle geothermal power plants has been developed and verified. Six system parameters, namely the working fluid, superheat temperature, pinch temperature difference in the evaporator and condenser, evaporating temperature, and the isentropic efficiencies of the cycle pump and radial inflow turbine, are selected as factors for orthogonal design. The sensitivity order of these factors with respect to the performance indices, namely the net power output of the ORC system, the thermal efficiency, the size parameter (SP) of the radial inflow turbine, the power decrease factor of the pump and the total heat transfer capacity, is determined from the ranges obtained in the orthogonal design. At different geothermal temperatures, the ranges of the six factors corresponding to the performance indices are analyzed respectively. The results show that the geothermal temperature influences the ranges of the factors for the net power output, the SP factor of the radial inflow turbine, and the total heat transfer capacity, but it has no effect on the ranges of the factors for the thermal efficiency and the power decrease factor of the pump. The evaporating temperature is always the primary or secondary factor that influences the thermodynamic and economic performance of the ORC system. This study provides useful references for determining the proper design variables in the performance optimization of the ORC system at different geothermal temperatures. - Highlights: • Evaporating temperature has a significant effect on the performance of the ORC system. • The order of the system parameters' sensitivity to the performance of the ORC is revealed. • The effect of system parameters on the performance indices varies with geothermal temperature. • Geothermal temperature has no effect on the range of the six factors for the size of the turbine
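
    The range analysis behind an orthogonal design can be sketched in a few lines; the L4(2^3) array and the response values below are invented for illustration and are unrelated to the paper's ORC model.

```python
# Orthogonal-design range analysis on a hypothetical L4(2^3) array:
# rank factor sensitivity by the range of level-averaged responses.
array = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
response = [10.0, 12.0, 15.0, 19.0]  # made-up run results

def factor_range(col):
    """Range = |mean(response at level 1) - mean(response at level 0)|."""
    lv0 = [r for row, r in zip(array, response) if row[col] == 0]
    lv1 = [r for row, r in zip(array, response) if row[col] == 1]
    return abs(sum(lv1) / len(lv1) - sum(lv0) / len(lv0))

ranges = {name: factor_range(i) for i, name in enumerate("ABC")}
ranking = sorted(ranges, key=ranges.get, reverse=True)
print(ranges, ranking)  # the factor with the largest range is most sensitive
```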

  13. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

    A sensitivity analysis with respect to the parameters and the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modelling options, which generated a comprehensive matrix of the influences of each change analysed. The major influences observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several related theoretical and experimental areas. (author)

  14. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as with most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides efficiently estimating the parameter values, the optimization approach also provides a qualitative ranking of their importance in contributing to the model output.

  15. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends the development now of a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
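
    A minimal sketch of the Kruskal-Wallis screening recommended above: test whether a model output differs across bins of an input parameter. The data are invented; with three groups (two degrees of freedom) the chi-square tail probability is simply exp(-H/2), so no statistics library is needed.

```python
import math

# Hypothetical model outputs, binned by the level of one input parameter.
groups = {
    "low":    [1.1, 0.9, 1.3, 1.0, 1.2],
    "medium": [1.4, 1.6, 1.5, 1.7, 1.35],
    "high":   [2.1, 2.4, 2.0, 2.2, 2.5],
}

# Rank all observations jointly (the toy data are tie-free; real data
# would need average ranks for ties).
all_vals = sorted(v for g in groups.values() for v in g)
rank = {v: i + 1 for i, v in enumerate(all_vals)}

n = len(all_vals)
H = 12.0 / (n * (n + 1)) * sum(
    sum(rank[v] for v in g) ** 2 / len(g) for g in groups.values()
) - 3 * (n + 1)
p = math.exp(-H / 2)  # chi-square survival function for 2 degrees of freedom
print(f"H = {H:.2f}, p = {p:.4f}")  # small p: output is sensitive to this input
```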

  16. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    Science.gov (United States)

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  17. Probabilistic sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John

    2009-09-07

    Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
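
    The variance-based approach cited above (Saltelli et al.) can be illustrated with a plain Monte Carlo pick-freeze estimate of first-order Sobol indices; the additive toy model below is a stand-in, not the MAPK cascade model.

```python
import random

random.seed(0)

def model(x1, x2):
    # Toy additive response in which x1 should dominate the variance.
    return 4.0 * x1 + 1.0 * x2

N = 100_000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

y = [model(*p) for p in A]
mean = sum(y) / N
var = sum((v - mean) ** 2 for v in y) / N

def first_order(i):
    """Pick-freeze estimate of the first-order Sobol index S_i."""
    if i == 0:
        y_mix = [model(A[k][0], B[k][1]) for k in range(N)]  # freeze x1
    else:
        y_mix = [model(B[k][0], A[k][1]) for k in range(N)]  # freeze x2
    cov = sum(y[k] * y_mix[k] for k in range(N)) / N - mean ** 2
    return cov / var

print(first_order(0), first_order(1))  # analytically 16/17 and 1/17
```

    The indices sum to one here because the toy model has no interaction terms; the gap between first-order and total-order indices is what exposes the high-order interaction effects mentioned in the abstract.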

  18. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological...... point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only...... are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  19. Sensitivity of transient synchrotron radiation to tokamak plasma parameters

    International Nuclear Information System (INIS)

    Fisch, N.J.; Kritz, A.H.

    1988-12-01

    Synchrotron radiation from a hot plasma can provide information on certain plasma parameters. The dependence on plasma parameters is particularly sensitive in the transient radiation response to a brief, deliberate perturbation of the hot plasma electrons. We investigate how such a radiation response can be used to diagnose a variety of plasma parameters in a tokamak. 18 refs., 13 figs

  20. EV range sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)

    2010-07-01

    This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance; aerodynamic drag; motor efficiency; and vehicle mass. Drive cycles that were presented included: New York City Cycle (NYCC); urban dynamometer drive cycle; and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. Aerodynamic drag had a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
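
    The contributors listed above can be sketched with the standard road-load equation at constant speed; every number below is invented for illustration and is not from the presentation.

```python
RHO, G = 1.2, 9.81  # air density (kg/m^3), gravitational acceleration (m/s^2)

def range_km(mass, crr, cda, v_kmh=50.0, battery_wh=24_000.0, eta=0.85):
    """Constant-speed range from the road-load force (hypothetical EV)."""
    v = v_kmh / 3.6
    force = mass * G * crr + 0.5 * RHO * cda * v ** 2  # rolling + aero drag, N
    return eta * battery_wh * 3600.0 / force / 1000.0  # km

base_args = dict(mass=1500.0, crr=0.010, cda=0.65)
base = range_km(**base_args)
for name in base_args:
    pert = range_km(**{**base_args, name: base_args[name] * 1.10})
    print(f"+10% {name}: range changes {100 * (pert / base - 1):+.1f}%")
```

    At low constant speed the rolling term (mass and rolling resistance) dominates, consistent with the NYCC findings; the aero term grows with v^2, which is why drag matters most on the high-speed US06 cycle.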

  1. Sensitivity Analysis of Viscoelastic Structures

    Directory of Open Access Journals (Sweden)

    A.M.G. de Lima

    2006-01-01

    Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
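
    The comparison of analytic first-order sensitivities against finite-difference approximations can be reproduced on a single-degree-of-freedom receptance FRF, a toy stand-in for the paper's sandwich-plate model; for this simple case dH/dk = -H^2.

```python
def frf(omega, m, c, k):
    """Receptance H(omega) = 1 / (k - m*omega^2 + i*c*omega)."""
    return 1.0 / complex(k - m * omega ** 2, c * omega)

def dfrf_dk(omega, m, c, k):
    """Analytic first-order sensitivity with respect to stiffness: -H^2."""
    return -frf(omega, m, c, k) ** 2

m, c, k, w = 1.0, 0.5, 100.0, 8.0  # toy SDOF parameters and drive frequency
h = 1e-4 * k
fd = (frf(w, m, c, k + h) - frf(w, m, c, k - h)) / (2.0 * h)
print(abs(fd - dfrf_dk(w, m, c, k)))  # analytic and finite-difference agree
```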

  2. Analysis of the behavior of tubular-type equipment for nuclear waste treatment: sensitivities of the parameters affecting mass transfer yield

    International Nuclear Information System (INIS)

    Yoo, Jae Hyung; Lee, Byung Jik; Shim, Joon Bo; Kim, Eung Ho

    2007-01-01

    This study investigated the effects of various parameters on the chemical reaction or mass transfer yield in tubular-type nuclear waste treatment equipment. Since such equipment, for example a tubular reactor, a multistage solvent extractor, or an adsorption column, involves chemical reaction or mass transfer along the flow direction, mathematical modelling for each piece of equipment was carried out first. Their chemical reaction or mass transfer behaviors were then predicted through computer simulations. The major parameters inherent to each piece of equipment were chosen and their sensitivities affecting the reaction or mass transfer yield were analyzed. For the tubular reactor, the effects of the axial diffusion coefficient and the reaction rate constant on the reaction yield were investigated. For the multistage solvent extractor, the back-mixing of the continuous phase and the distribution coefficient between fluid and solvent were considered as the major parameters affecting the extraction yield as well as the concentration profiles along the axial direction of the extractor. For the adsorption column, the equilibrium constant between the fluid and the adsorbent surface, and the overall mass transfer coefficient between the two phases, were taken as the major factors affecting the adsorption rate

  3. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    Science.gov (United States)

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, namely a mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing. PMID:29385048

  4. Neutrino Oscillation Parameter Sensitivity in Future Long-Baseline Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bass, Matthew [Colorado State Univ., Fort Collins, CO (United States)

    2014-01-01

    The study of neutrino interactions and propagation has produced evidence for physics beyond the standard model and promises to continue to shed light on rare phenomena. Since the discovery of neutrino oscillations in the late 1990s there have been rapid advances in establishing the three flavor paradigm of neutrino oscillations. The 2012 discovery of a large value for the last unmeasured mixing angle has opened the way for future experiments to search for charge-parity symmetry violation in the lepton sector. This thesis presents an analysis of the future sensitivity to neutrino oscillations in the three flavor paradigm for the T2K, NOvA, LBNE, and T2HK experiments. The theory of the three flavor paradigm is explained and the methods to use these theoretical predictions to design long baseline neutrino experiments are described. The sensitivity to the oscillation parameters for each experiment is presented with a particular focus on the search for CP violation and the measurement of the neutrino mass hierarchy. The variations of these sensitivities with statistical considerations and experimental design optimizations taken into account are explored. The effects of systematic uncertainties in the neutrino flux, interaction, and detection predictions are also considered by incorporating more advanced simulation inputs from the LBNE experiment.

  5. Small particle bed reactors: Sensitivity to Brayton cycle parameters

    Science.gov (United States)

    Coiner, John R.; Short, Barry J.

    Relatively simple particle bed reactor (PBR) algorithms were developed for optimizing low power closed Brayton cycle (CBC) systems. These algorithms allow the system designer to understand the relationship among key system parameters as well as the sensitivity of the PBR size and mass (a major system component) to variations in these parameters. Thus, system optimization can be achieved.

  6. Data fusion qualitative sensitivity analysis

    International Nuclear Information System (INIS)

    Clayton, E.A.; Lewis, R.E.

    1995-09-01

    Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables

  7. Nuclear data adjustment methodology utilizing resonance parameter sensitivities and uncertainties

    International Nuclear Information System (INIS)

    Broadhead, B.L.

    1983-01-01

    This work presents the development and demonstration of a Nuclear Data Adjustment Method that allows inclusion of both energy and spatial self-shielding into the adjustment procedure. The resulting adjustments are for the basic parameters (i.e. resonance parameters) in the resonance regions and for the group cross sections elsewhere. The majority of this development effort concerns the production of resonance parameter sensitivity information, which provides the linkage between the responses of interest and the basic parameters. The resonance parameter sensitivity methodology developed herein usually provides accurate results when compared to direct recalculations using existing and well-known cross section processing codes. However, it has been shown in several cases that self-shielded cross sections can be very non-linear functions of the basic parameters. For this reason caution must be used in any study which assumes that a linear relationship exists between a given self-shielded group cross section and its corresponding basic data parameters. The study has also pointed out the need for more approximate techniques which will allow the required sensitivity information to be obtained in a more cost-effective manner

  8. Predicting the long-term (137)Cs distribution in Fukushima after the Fukushima Dai-ichi nuclear power plant accident: a parameter sensitivity analysis.

    Science.gov (United States)

    Yamaguchi, Masaaki; Kitamura, Akihiro; Oda, Yoshihiro; Onishi, Yasuo

    2014-09-01

    Radioactive materials deposited on the land surface of Fukushima Prefecture by the Fukushima Dai-ichi Nuclear Power Plant explosion are a crucial issue for a number of reasons, including external and internal radiation exposure and impacts on agricultural environments and aquatic biota. Predicting the future distribution of radioactive materials and their fates is therefore indispensable for evaluating and comparing the effectiveness of remediation options with regard to human health and the environment. Cesium-137, the main radionuclide of concern, is well known to adsorb to clay-rich soils; its primary transport mechanisms are therefore soil erosion on the land surface and transport of sediment-sorbed contaminants in the water system. In this study, we applied the Soil and Cesium Transport model, which we have developed, to predict the long-term cesium distribution in the Fukushima area, based on the Universal Soil Loss Equation and simple sediment discharge formulas. The model consists of calculation schemes for soil erosion, transportation and deposition, as well as cesium transport and its future distribution. Since not all the actual parameter data are available, a number of sensitivity analyses were conducted to find the range of the output results due to the uncertainties of the parameters. The preliminary calculation indicated that a large amount of the total soil loss remained on the slopes, and the residual sediment was transported to rivers, deposited in rivers and lakes, or transported farther downstream to the river mouths. Most of the sediment deposited in rivers and lakes consists of sand. On the other hand, most of the silt and clay portions transported to rivers were transported downstream to the river mouths. The rate of sediment deposition in the Abukuma River basin was three times as high as those of the other 13 river basins. This may be due to the larger catchment area and more moderate channel slope of the Abukuma River basin
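
    The Universal Soil Loss Equation underlying the model is a simple product of factors; the sketch below uses invented factor values, not Fukushima data.

```python
def usle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) = R * K * LS * C * P."""
    return R * K * LS * C * P

# R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
# C: cover-management factor, P: support-practice factor (all invented).
loss = usle(R=3000.0, K=0.03, LS=1.5, C=0.1, P=1.0)
print(f"estimated soil loss: {loss:.1f} t/ha/yr")
```

    Because the equation is multiplicative, a relative change in any one factor produces the same relative change in soil loss, which is why range-style sensitivity analyses of the individual factors are natural here.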

  9. Sensitivity of Footbridge Vibrations to Stochastic Walking Parameters

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2010-01-01

    of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels...... to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis...... for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some...

  10. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
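
    A crude baseline for such sensitivity estimators is a finite difference of Monte Carlo means computed with the Gillespie algorithm and common random numbers; the birth-death network below is illustrative and is not the estimator described in the talk.

```python
import random

def ssa_mean(birth, death, t_end=10.0, x0=0, runs=2000, seed=1):
    """Mean molecular count at t_end for a birth-death chain (Gillespie SSA)."""
    rng = random.Random(seed)  # shared seed => common random numbers
    total = 0
    for _ in range(runs):
        t, x = 0.0, x0
        while True:
            rate = birth + death * x
            t += rng.expovariate(rate)
            if t > t_end:
                break
            x += 1 if rng.random() < birth / rate else -1
        total += x
    return total / runs

k, g, h = 10.0, 1.0, 0.5
sens = (ssa_mean(k + h, g) - ssa_mean(k - h, g)) / (2 * h)
print(sens)  # stationary mean is birth/death, so d(mean)/d(birth) is near 1
```

    The shared seed keeps the two perturbed simulations correlated, reducing the variance of the difference; the methods in the talk aim to do much better than this brute-force approach.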

  11. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  12. Universally sloppy parameter sensitivities in systems biology models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
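
    The hallmark of a sloppy spectrum, eigenvalues spread over many decades, already appears in a two-parameter toy problem: the Fisher information of a sum of two similar exponentials is nearly singular. The rates and time grid below are invented, not taken from the paper's model collection.

```python
import math

times = [0.5 * i for i in range(1, 21)]
k1, k2 = 1.0, 1.2  # nearly redundant decay rates (invented)

# Jacobian of the model residuals wrt (k1, k2): d/dk e^{-k t} = -t e^{-k t}
J = [(-t * math.exp(-k1 * t), -t * math.exp(-k2 * t)) for t in times]

# Fisher information M = J^T J is 2x2, so its eigenvalues have a closed form.
a = sum(j1 * j1 for j1, _ in J)
b = sum(j1 * j2 for j1, j2 in J)
d = sum(j2 * j2 for _, j2 in J)
disc = math.sqrt((a - d) ** 2 + 4 * b * b)
lam_max, lam_min = (a + d + disc) / 2, (a + d - disc) / 2
print(lam_max / lam_min)  # eigenvalue ratio spans orders of magnitude
```

    The stiff direction (roughly k1 + k2) is well constrained by data while the sloppy direction (k1 - k2) is not, which is the geometry behind the paper's argument for focusing on predictions rather than individual parameters.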

  13. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition, and its parameters usually must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
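
    The delta method referred to above propagates the sampling variance of an estimated probability into a derived statistic via Var[g(p)] ≈ g'(p)^2 Var[p]; the counts below are hypothetical and are not from the paper.

```python
import math

n, successes = 10_000, 250          # hypothetical training counts
p_hat = successes / n               # estimated transition probability
var_p = p_hat * (1 - p_hat) / n     # binomial sampling variance of p_hat

g_prime = 1.0 / p_hat               # derivative of g(p) = log(p)
sigma_g = abs(g_prime) * math.sqrt(var_p)
print(f"log(p) = {math.log(p_hat):.3f} +/- {sigma_g:.3f}")
```

    The 1/p factor in the derivative shows why statistics built on rare transitions, as in high-order Markov models where each context is seen only a few times, inherit large standard deviations.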

  14. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  15. Sensitivity of risk parameters to human errors for a PWR

    International Nuclear Information System (INIS)

    Samanta, P.; Hall, R.E.; Kerr, W.

    1980-01-01

    The sensitivities of the risk parameters (emergency safety system unavailabilities, accident sequence probabilities, release category probabilities and core melt probability) to changes in the human error rates were investigated within the general methodological framework of the Reactor Safety Study for a Pressurized Water Reactor (PWR). The impact of individual human errors was assessed both in terms of their structural importance to core melt and their reliability importance to core melt probability. The Human Error Sensitivity Assessment of a PWR (HESAP) computer code was written for the purpose of this study.

  16. Sensitivity calculation of the coolant temperature regarding the thermohydraulic parameters

    International Nuclear Information System (INIS)

    Andrade Lima, F.R. de; Silva, F.C. da; Thome Filho, Z.D.; Alvim, A.C.M.; Oliveira Barroso, A.C. de.

    1985-01-01

    The application of the Generalized Perturbation Theory (GPT) to sensitivity calculations for thermal-hydraulic problems is studied, with the aim of verifying the viability of extending the method. For this, the transient axial distribution of the coolant temperature in a PWR channel is considered. Perturbation expressions are developed using the GPT formalism, and a computer code (TEMPERA) is written to calculate the channel temperature distribution and the associated importance function, as well as the effect of variations in the thermal-hydraulic parameters on the coolant temperature (sensitivity calculation). The results are compared with those from the direct calculation. (E.G.) [pt

  17. Comparison of adsorption equilibrium and kinetic models for a case study of pharmaceutical active ingredient adsorption from fermentation broths: parameter determination, simulation, sensitivity analysis and optimization

    Directory of Open Access Journals (Sweden)

    B. Likozar

    2012-09-01

    Full Text Available Mathematical models for a batch process were developed to predict concentration distributions for active ingredient (vancomycin) adsorption on a representative hydrophobic-molecule adsorbent, using differently diluted crude fermentation broth with cells as the feedstock. The kinetic parameters were estimated by maximizing the coefficient of determination with a heuristic algorithm. The parameters were estimated for each fermentation broth concentration using four concentration distributions at initial vancomycin concentrations of 4.96, 1.17, 2.78, and 5.54 g l⁻¹. Subsequently, the models and their parameters were validated for fermentation broth concentrations of 0, 20, 50, and 100% (v/v) by calculating the coefficient of determination for each concentration distribution at the corresponding initial concentration. The applicability of the validated models for process optimization was investigated by using the models as process simulators to optimize the two process efficiencies.

  18. Calculation of coolant temperature sensitivity related to thermohydraulic parameters

    International Nuclear Information System (INIS)

    Silva, F.C. da; Andrade Lima, F.R. de

    1985-01-01

    The viability of applying the Generalized Perturbation Theory (GPT) to sensitivity calculations for thermal-hydraulic problems is verified. The TEMPERA code was developed in FORTRAN-IV for transient calculations of the axial temperature distribution in a PWR reactor channel and the associated importance function, as well as of the effects of variations in thermal-hydraulic parameters on the coolant temperature. The results are compared with those obtained by direct calculation. (M.C.K.) [pt

  19. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
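The local approach described above can be sketched with central finite differences; normalized sensitivities (dY/dp scaled by p/Y) make the parameters comparable. The toy model and parameter values below are invented for the example.

```python
def model(params):
    # toy "biological response": saturating readout of a steady state
    k_syn, k_deg, K = params["k_syn"], params["k_deg"], params["K"]
    x = k_syn / k_deg            # steady state of dx/dt = k_syn - k_deg * x
    return x / (K + x)           # Michaelis-Menten-like readout

def local_sensitivities(model, params, h=1e-6):
    """Normalized local sensitivity S_p = (dY/dp) * (p / Y), via central differences."""
    y0 = model(params)
    sens = {}
    for name, p in params.items():
        up, dn = dict(params), dict(params)
        up[name] = p * (1 + h)
        dn[name] = p * (1 - h)
        dy_dp = (model(up) - model(dn)) / (2 * p * h)
        sens[name] = dy_dp * p / y0
    return sens

S = local_sensitivities(model, {"k_syn": 2.0, "k_deg": 0.5, "K": 1.0})
```

For this particular model all three normalized sensitivities have magnitude 0.2, illustrating that a local analysis is only valid near the chosen operating point.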

  20. Sensitivity Analysis of a Simplified Fire Dynamic Model

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt; Nielsen, Anker

    2015-01-01

    This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...

  1. Probabilistic sensitivity analysis in health economics.

    Science.gov (United States)

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.

  2. Using sensitivity derivatives for design and parameter estimation in an atmospheric plasma discharge simulation

    International Nuclear Information System (INIS)

    Lange, Kyle J.; Anderson, W. Kyle

    2010-01-01

    The problem of applying sensitivity analysis to a one-dimensional atmospheric radio frequency plasma discharge simulation is considered. A fluid simulation is used to model an atmospheric pressure radio frequency helium discharge with a small nitrogen impurity. Sensitivity derivatives are computed for the peak electron density with respect to physical inputs to the simulation. These derivatives are verified using several different methods to compute sensitivity derivatives. It is then demonstrated how sensitivity derivatives can be used within a design cycle to change these physical inputs so as to increase the peak electron density. It is also shown how sensitivity analysis can be used in conjunction with experimental data to obtain better estimates for rate and transport parameters. Finally, it is described how sensitivity analysis could be used to compute an upper bound on the uncertainty for results from a simulation.
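A toy sketch of the parameter-estimation idea above: sensitivity derivatives play the role of the Jacobian in a Gauss-Newton fit of a rate parameter to data. The exponential model, rate value and time grid are invented stand-ins, not the discharge chemistry of the paper; there, the residuals would come from experiment and the derivatives from the simulation.

```python
import math

# Synthetic "measurements" of y(t) = exp(-k_true * t)
k_true = 0.7
ts = [0.5, 1.0, 1.5, 2.0, 3.0]
data = [math.exp(-k_true * t) for t in ts]

k = 0.2                                 # poor initial guess
for _ in range(20):                     # Gauss-Newton iterations
    r = [math.exp(-k * t) - d for t, d in zip(ts, data)]   # residuals
    J = [-t * math.exp(-k * t) for t in ts]                # sensitivity dy/dk
    # 1-parameter Gauss-Newton step: k -= (J^T r) / (J^T J)
    k -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
```

With exact data the iteration recovers `k_true`; with noisy data it returns the least-squares estimate, and `1 / (J^T J)` scaled by the residual variance gives the kind of uncertainty bound mentioned in the abstract.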

  3. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    Directory of Open Access Journals (Sweden)

    L. A. Lee

    2011-12-01

    Full Text Available Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process
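The variance-based idea can be sketched without an emulator by running a brute-force Sobol'/Saltelli estimator on a cheap toy function (the model, bounds and sample size below are invented; in the paper the Gaussian process emulator replaces the expensive model, not the estimator):

```python
import random

random.seed(1)

def model(x1, x2):
    return x1 + 0.1 * x2          # toy model: x1 dominates the output variance

N = 20000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

fA = [model(*a) for a in A]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

def first_order(i):
    """Sobol' estimator of S_i: correlate f(A) with f at points that keep
    column i from A and take the other columns from B."""
    fBAi = [model(*(a[j] if j == i else b[j] for j in (0, 1)))
            for a, b in zip(A, B)]
    cov = sum(ya * yb for ya, yb in zip(fA, fBAi)) / N - mean * (sum(fBAi) / N)
    return cov / var

S1, S2 = first_order(0), first_order(1)
```

For this model the analytic first-order indices are 1/1.01 ≈ 0.99 and 0.01/1.01 ≈ 0.01; the gap between the sum of first-order indices and 1 is the interaction contribution the abstract discusses (zero here, up to 40 % in the remote marine case).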

  4. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  5. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results of the calculation show that, in the case of EK-10 fuel (low burn-up), the first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.
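The burn-up dependence of the required perturbation order mirrors ordinary Taylor truncation error: a small perturbation is captured at first order, a large one needs higher orders. A generic numerical illustration (exp is just a stand-in response function and the perturbation sizes are invented, not the depletion equations of the paper):

```python
import math

def taylor_estimate(delta, order):
    """Truncated Taylor expansion of exp(delta) about 0: sum_{k<=order} delta^k / k!."""
    return sum(delta ** k / math.factorial(k) for k in range(order + 1))

small, large = 0.02, 0.8        # stand-ins for low and high burn-up perturbations

err_small_1st = abs(taylor_estimate(small, 1) - math.exp(small)) / math.exp(small)
err_large_1st = abs(taylor_estimate(large, 1) - math.exp(large)) / math.exp(large)
err_large_5th = abs(taylor_estimate(large, 5) - math.exp(large)) / math.exp(large)
```

First order is well within 1% for the small perturbation, off by roughly 19% for the large one, and fifth order brings the large case back under 3%, the same qualitative pattern the abstract reports.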

  6. Sensitivity Analysis of Fire Dynamics Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.

    2007-01-01

    (Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...

  7. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. 
In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  9. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
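A minimal sketch of the probabilistic idea on an invented two-state MDP: solve for the optimal policy at the base case, then resample an uncertain parameter and record how often the base-case policy stays optimal. That fraction plays the role of a single point on the policy acceptability curve described above.

```python
import random

def optimal_action(p_success, gamma=0.9):
    """Tiny 2-state MDP: in state 0, 'risky' pays 1 w.p. p_success and stays,
    otherwise drops to the absorbing zero-reward state 1; 'safe' pays 0.4 and
    stays. Returns the optimal first action by value iteration."""
    v = [0.0, 0.0]
    for _ in range(500):
        q_risky = p_success * (1 + gamma * v[0]) + (1 - p_success) * gamma * v[1]
        q_safe = 0.4 + gamma * v[0]
        v = [max(q_risky, q_safe), 0.0]
    return "risky" if q_risky > q_safe else "safe"

# probabilistic sensitivity: sample the uncertain parameter and measure how
# often the base-case optimal policy survives
random.seed(0)
base = optimal_action(0.6)
agreement = sum(optimal_action(random.uniform(0.3, 0.9)) == base
                for _ in range(200)) / 200
```

Because the MDP here has a single decision state, the full policy can be re-solved per sample; the paper's contribution is making this tractable when the number of decision sequences explodes.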

  10. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  11. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
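The pinching idea can be sketched with bare interval arithmetic (the two-factor risk model and all numbers are invented): propagate input intervals to an output interval, then replace one input with a point value and compare output widths.

```python
def interval_product(a, b):
    """Interval multiplication [a] x [b] for real intervals given as (lo, hi)."""
    cands = [x * y for x in a for y in b]
    return (min(cands), max(cands))

def risk_interval(freq, prob):
    # risk = event frequency x conditional probability, both uncertain
    return interval_product(freq, prob)

def width(iv):
    return iv[1] - iv[0]

freq, prob = (0.1, 0.5), (0.2, 0.9)
base = risk_interval(freq, prob)
pinched = risk_interval(freq, (0.4, 0.4))      # pinch prob to a point value

reduction = 1 - width(pinched) / width(base)   # share of output uncertainty removed
```

The size of `reduction` indicates how much of the output uncertainty is attributable to the pinched input, which is the sensitivity reading PBA provides without assuming precise distributions.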

  12. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    Science.gov (United States)

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
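For reference, one of the standard ICH-style calculations behind such comparisons can be sketched as follows; the blank readings and calibration slope are invented, and the abstract's point is precisely that different calculation methods give diverging values.

```python
import statistics

def lod_loq(blank_signals, slope, k_lod=3.3, k_loq=10.0):
    """ICH-style estimates: LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma
    is the standard deviation of blank responses and S the calibration slope."""
    sigma = statistics.stdev(blank_signals)
    return k_lod * sigma / slope, k_loq * sigma / slope

# illustrative blank readings (a.u.) and calibration slope (a.u. per ug/mL)
blanks = [0.011, 0.014, 0.009, 0.012, 0.010, 0.013]
slope = 0.85
lod, loq = lod_loq(blanks, slope)
```

Swapping in the residual standard deviation of the calibration line for `sigma`, or a signal-to-noise criterion, yields the alternative LOD/LOQ values whose spread the study quantifies.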

  13. Sensitivity and parameter-estimation precision for alternate LISA configurations

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Crowder, Jeff; Tinto, Massimo

    2008-01-01

    We describe a simple framework to assess the LISA scientific performance (more specifically, its sensitivity and expected parameter-estimation precision for prescribed gravitational-wave signals) under the assumption of failure of one or two inter-spacecraft laser measurements (links) and of one to four intra-spacecraft laser measurements. We apply the framework to the simple case of measuring the LISA sensitivity to monochromatic circular binaries, and the LISA parameter-estimation precision for the gravitational-wave polarization angle of these systems. Compared to the six-link baseline configuration, the five-link case is characterized by a small loss in signal-to-noise ratio (SNR) in the high-frequency section of the LISA band; the four-link case shows a reduction by a factor of √2 at low frequencies, and by up to ∼2 at high frequencies. The uncertainty in the estimate of polarization, as computed in the Fisher-matrix formalism, also worsens when moving from six to five, and then to four links: this can be explained by the reduced SNR available in those configurations (except for observations shorter than three months, where five and six links do better than four even with the same SNR). In addition, we prove (for generic signals) that the SNR and Fisher matrix are invariant with respect to the choice of a basis of TDI observables; rather, they depend only on which inter-spacecraft and intra-spacecraft measurements are available

  14. Resonance parameter analysis with SAMMY

    International Nuclear Information System (INIS)

    Larson, N.M.; Perey, F.G.

    1988-01-01

    The multilevel R-matrix computer code SAMMY has evolved over the past decade to become an important analysis tool for neutron data. SAMMY uses the Reich-Moore approximation to the multilevel R-matrix and includes an optional logarithmic parameterization of the external R-function. Doppler broadening is simulated either by numerical integration using the Gaussian approximation to the free gas model or by a more rigorous solution of the partial differential equation equivalent to the exact free gas model. Resolution broadening of cross sections and derivatives also has new options that more accurately represent the experimental situation. SAMMY treats constant normalization and some types of backgrounds directly and treats other normalizations and/or backgrounds with the introduction of user-generated partial derivatives. The code uses Bayes' method as an efficient alternative to least squares for fitting experimental data. SAMMY allows virtually any parameter to be varied and outputs values, uncertainties, and covariance matrix for all varied parameters. Versions of SAMMY exist for VAX, FPS, and IBM computers

  15. Kinetic parameters from thermogravimetric analysis

    Science.gov (United States)

    Kiefer, Richard L.

    1993-01-01

    High performance polymeric materials are finding increased use in aerospace applications. Proposed high speed aircraft will require materials to withstand high temperatures in an oxidative atmosphere for long periods of time. It is essential that accurate estimates be made of the performance of these materials at the given conditions of temperature and time. Temperatures of 350 F (177 C) and times of 60,000 to 100,000 hours are anticipated. In order to survey a large number of high performance polymeric materials on a reasonable time scale, some form of accelerated testing must be performed. A knowledge of the rate of a process can be used to predict the lifetime of that process. Thermogravimetric analysis (TGA) has frequently been used to determine kinetic information for degradation reactions in polymeric materials. Flynn and Wall studied a number of methods for using TGA experiments to determine kinetic information in polymer reactions. Kinetic parameters, such as the apparent activation energy and the frequency factor, can be determined in such experiments. Recently, researchers at the McDonnell Douglas Research Laboratory suggested that a graph of the logarithm of the frequency factor against the apparent activation energy can be used to predict long-term thermo-oxidative stability for polymeric materials. Such a graph has been called a kinetic map. In this study, thermogravimetric analyses were performed in air to study the thermo-oxidative degradation of several high performance polymers and to plot their kinetic parameters on a kinetic map.
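One common way to extract the activation energy and frequency factor from TGA runs at several heating rates is the Kissinger method, which fits ln(β/Tp²) against 1/Tp (slope = -Ea/R, intercept = ln(AR/Ea)). The sketch below uses synthetic peak temperatures constructed from invented Ea and A values so the fit recovers them exactly; it is an illustration of the method, not the Flynn-Wall procedure of the paper.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger_fit(betas, peak_temps):
    """Kissinger analysis: least-squares fit of ln(beta/Tp^2) vs 1/Tp."""
    xs = [1.0 / T for T in peak_temps]
    ys = [math.log(b / T ** 2) for b, T in zip(betas, peak_temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    Ea = -slope * R                      # apparent activation energy, J/mol
    A = (Ea / R) * math.exp(intercept)   # frequency factor, s^-1
    return Ea, A

# synthetic peak temperatures consistent with Ea = 150 kJ/mol, A = 1e12 s^-1
Ea_true, A_true = 150e3, 1e12
Tps = [550.0, 560.0, 570.0, 580.0]
betas = [T ** 2 * (A_true * R / Ea_true) * math.exp(-Ea_true / (R * T)) for T in Tps]
Ea, A = kissinger_fit(betas, Tps)
```

The recovered (ln A, Ea) pair is exactly the kind of point that would be placed on the kinetic map mentioned above.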

  16. Probabilistic calculations and sensitivity analysis of parameters for a reference biosphere model assessing final disposal of radioactive waste; Probabilistische Rechnungen und Sensitivitaetsanalyse von Parametern fuer ein Referenzbiosphaerenmodell zur Endlagerung von radioaktiven Abfaellen

    Energy Technology Data Exchange (ETDEWEB)

    Staudt, C.; Kaiser, J.C. Christian [Helmholtz Zentrum Muenchen, Deutsches Forschungszentrum fuer Gesundheit und Umwelt, Muenchen (Germany). Inst. fuer Strahlenschutz

    2014-01-20

    Radioecological models are used for the assessment of potential exposures of a population to radionuclides from final repositories for high-level radioactive waste. Due to the long disposal time frame, changes in model-relevant exposure pathways need to be accounted for. Climate change, especially, will result in changes of the modelled system. Reference biosphere models are used to assess climate-related changes in the far field of a final repository. In this approach, model scenarios are developed for potential future climate states and defined by parameters derived from currently existing, similar climate regions. It is assumed that the habits and agricultural practices of a population will adapt to the new climate over long periods of time, until they mirror the habits of a contemporary population living in a similar climate. As an end point of the models, Biosphere Dose Conversion Factors (BDCF) are calculated. These radionuclide-specific BDCF describe the exposure of a hypothetical population resulting from a standardized radionuclide contamination in near-surface ground water. Model results are subject to uncertainties due to the inherent uncertainties of assumed future developments, habits and empirically measured parameters. Therefore, in addition to deterministic calculations, sensitivity analyses and probabilistic calculations were performed for several model scenarios, to control the quality of the model, given the high number of parameters used to define the different climate states, soil types and consumption habits.

  17. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    Science.gov (United States)

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a contrastive analysis of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we quantitatively estimated the sensitivity of the screened parameters, calculating the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST method could quantitatively measure the impact of a single parameter on the simulation result as well as the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
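A hand-rolled sketch of Morris-style screening follows; the three-parameter toy function stands in for the BIOME-BGC NPP output, and a full Morris design uses trajectories rather than the simple one-at-a-time restarts shown here.

```python
import random

random.seed(42)

def model(x):
    # toy stand-in for an NPP simulator: strong effect in x[0], weak in x[2]
    return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.05 * x[2]

def morris_mu_star(model, dim, r=50, delta=0.25):
    """Morris screening: mean absolute elementary effect per parameter, from r
    random one-at-a-time perturbations of size delta in the unit cube."""
    mu = [0.0] * dim
    for _ in range(r):
        x = [random.uniform(0, 1 - delta) for _ in range(dim)]
        y0 = model(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            mu[i] += abs(model(xp) - y0) / delta
    return [m / r for m in mu]

mu_star = morris_mu_star(model, 3)
```

Parameters with small `mu_star` can be fixed at nominal values, after which a variance-based method such as EFAST quantifies the first-order and interaction contributions of the survivors.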

  18. Parameter-free Locality Sensitive Hashing for Spherical Range Reporting

    DEFF Research Database (Denmark)

    Ahle, Thomas Dybdahl; Pagh, Rasmus; Aumüller, Martin

    2017-01-01

    We present a data structure for *spherical range reporting* on a point set S, i.e., reporting all points in S that lie within radius r of a given query point q. Our solution builds upon the Locality-Sensitive Hashing (LSH) framework of Indyk and Motwani, which represents the asymptotically best...... solutions to near neighbor problems in high dimensions. While traditional LSH data structures have several parameters whose optimal values depend on the distance distribution from q to the points of S, our data structure is parameter-free, except for the space usage, which is configurable by the user...... query time bounded by O(t(n/t)ρ), where t is the number of points to report and ρ∈(0,1) depends on the data distribution and the strength of the LSH family used. We further present a parameter-free way of using multi-probing, for LSH families that support it, and show that for many such families...

  19. 3D investigation of dynamic behavior and sensitivity analysis of the parameters of spherical biological particles in the first phase of AFM-based manipulations with the consideration of humidity effect.

    Science.gov (United States)

    Korayem, M H; Mahmoodi, Z; Mohammadi, M

    2018-01-07

    The use of the same tool for both imaging and manipulation in an AFM has necessitated the modeling and simulation of AFM-based manipulation processes. In earlier studies, the dynamic behavior of biological particles in the course of manipulation was modeled and simulated two-dimensionally. With the advancements made in modeling techniques, a 3D model of the manipulation of biological particles is more accurate than its 2D counterpart. In this paper, the effect of humidity has been taken into consideration in the three-dimensional modeling of the manipulation. By employing this model, the equations for the motion modes of particles (sliding, rolling, and spinning) at the onset of movement have been derived and the critical force magnitude has been obtained. In order to reduce the potential damage to the manipulated biological particle, the maximum radius of the tip has been determined. The effective parameters in this process have been extracted by performing a sensitivity analysis using the Sobol method. In comparison with the results obtained for a dry environment, the results obtained by simulating the manipulation of a yeast particle in a wet environment show that the critical force for the onset of particle movement diminishes when the moisture effect (high humidity levels) is considered. The parameters influencing the magnitude of the critical force include the particle radius, particle material, surface energy of the chosen substrate, amount of preload and the contact angle. Also, the results of the sensitivity analysis indicate a very high influence of the particle radius on the critical manipulation force and a very low impact of the cantilever width. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Sensitivity Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic Parameters in the CAREM-25 Reactor; Analisis de Sensibilidad de los Parametros Neutronicos ante Incertezas en los Parametros Termohidraulicos en el Reactor CAREM-25

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Oscar [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    Studies were performed on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. Using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects on the total power were not very important, but were much larger for the pressure. They were hardly significant for perturbations of the void fraction calculation and the fuel temperature. The reactivity and the power peaking factor changed markedly in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose.

  1. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational
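Method (2) above, direct perturbation of input parameters, amounts to re-running the model with each input nudged up and down and forming a normalized sensitivity coefficient. A minimal sketch with a hypothetical two-parameter dose model (purely illustrative, not RESRAD's actual pathway equations):

```python
def dose(ingestion_rate, soil_concentration):
    # Hypothetical toy model: dose proportional to intake and to the
    # square root of soil concentration (illustrative only).
    return 2.0 * ingestion_rate * soil_concentration ** 0.5

def normalized_sensitivity(f, params, name, rel_step=0.01):
    """Central-difference estimate of (p / y) * (dy / dp)."""
    p0 = dict(params)
    y0 = f(**p0)
    h = rel_step * p0[name]
    up, dn = dict(p0), dict(p0)
    up[name] += h
    dn[name] -= h
    dydp = (f(**up) - f(**dn)) / (2 * h)
    return p0[name] / y0 * dydp

params = {"ingestion_rate": 100.0, "soil_concentration": 4.0}
s_intake = normalized_sensitivity(dose, params, "ingestion_rate")    # ~1.0
s_conc = normalized_sensitivity(dose, params, "soil_concentration")  # ~0.5
```

The normalized coefficient reads directly as "percent change in dose per percent change in parameter", which makes inputs with different units comparable.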

  2. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  3. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  4. Reliability-based sensitivity of mechanical components with arbitrary distribution parameters

    International Nuclear Information System (INIS)

    Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling

    2010-01-01

    This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, the reliability-based design theory, and the sensitivity analysis approach were employed directly to calculate the reliability-based sensitivity of mechanical components on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of the mechanical components can be accurately and quickly obtained using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretical basis for the reliability-based design of mechanical components

  5. Sensitivity calculations of integral parameters by a generalyzed perturbation theory

    International Nuclear Information System (INIS)

    Santo, A.C.F. de.

    1981-12-01

    In this work, we first review some concepts concerning neutron transport in nuclear systems. We derive the balance and importance equations, then discuss the neutron importance in subcritical, critical and supercritical systems. The adjoint flux is established as the neutron importance for the fission process. The conventional perturbation theory is then presented. We develop a systematic perturbative formulation, first order in the variation of the distribution functions, to calculate the reactivity due to a system perturbation. We present in detail the flux difference and generalized functions methods. The above formulation is then extended to altered systems. We consider integral parameters of the type ratio of bilinear functionals (of which the reactivity is a particular case). We define sensitivity coefficients, for any integral parameter, corresponding to specific system alterations. Possible applications of the method are also discussed. In the last part of this work, we apply the perturbative formulation to the Doppler reactivity sensitivity calculation, utilizing the generalized functions method. We describe in detail the computer program written for this and some other possible applications. (Author) [pt

  6. Sensitivity analysis of critical experiment with direct perturbation compared to TSUNAMI-3D sensitivity analysis

    International Nuclear Information System (INIS)

    Barber, A. D.; Busch, R.

    2009-01-01

    The goal of this work is to obtain sensitivities from a direct uncertainty analysis calculation and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as accurately as its geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)

  7. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows one to reproduce a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not within the reliability scope. This thesis' aim is to test existing sensitivity analysis methods, and to propose more efficient original methods. A bibliographical step on sensitivity analysis on one hand and on the estimation of small failure probabilities on the other hand is first proposed. This step raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density conditional on the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are then applied to the CWNR case, which motivates this thesis. (author)

  8. Parameter Uncertainty for Repository Thermal Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Greenberg, Harris [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dupont, Mark [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-10-01

    This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).

  9. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    Science.gov (United States)

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
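The appeal of the adjoint approach described above — one extra solve regardless of the number of parameters — can already be seen on a linear algebraic system. The 2×2 example below is an illustrative assumption, not the paper's FDTD formulation: for a response J = cᵀx with A(p)x = b, a single adjoint solve Aᵀλ = c yields dJ/dp = −λᵀ(∂A/∂p)x for every parameter p.

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 linear system.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def adjoint_sensitivity(p):
    A = [[2.0 + p, 1.0], [1.0, 3.0]]   # system matrix depends on parameter p
    b = [1.0, 0.0]
    c = [1.0, 1.0]                      # response J = c^T x
    x = solve2(A, b)                    # one forward solve
    lam = solve2(transpose(A), c)       # one adjoint solve, shared by all p
    dA_dp = [[1.0, 0.0], [0.0, 0.0]]    # dA/dp for this parameter
    # dJ/dp = -lambda^T (dA/dp) x
    dJ = -sum(lam[i] * dA_dp[i][j] * x[j]
              for i in range(2) for j in range(2))
    J = c[0] * x[0] + c[1] * x[1]
    return J, dJ

J0, dJ = adjoint_sensitivity(0.0)
```

The central-finite-difference check the paper uses for validation costs two extra forward solves per parameter, which is precisely what the adjoint trick avoids.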

  10. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
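The FAST mechanism the abstract describes — drive each input at its own characteristic frequency and read its variance contribution off the Fourier spectrum of the output — can be sketched in a stripped-down classical form. The search curve, frequency choice, and toy model below are illustrative assumptions (no resampling, no correlated-input extension):

```python
import math

def fast_first_order(model, freqs, n=1024, harmonics=4):
    s_vals = [2 * math.pi * k / n for k in range(n)]
    # Search curve: a triangle wave maps s to a uniform [0, 1] input.
    X = [[0.5 + math.asin(math.sin(w * s)) / math.pi for w in freqs]
         for s in s_vals]
    y = [model(x) for x in X]
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    indices = []
    for w in freqs:
        d_i = 0.0
        for h in range(1, harmonics + 1):
            a = sum(y[k] * math.cos(h * w * s_vals[k]) for k in range(n)) * 2 / n
            b = sum(y[k] * math.sin(h * w * s_vals[k]) for k in range(n)) * 2 / n
            d_i += (a * a + b * b) / 2  # variance carried at frequency h*w
        indices.append(d_i / var)
    return indices

# Toy model: input 0 dominates; frequencies 11 and 7 avoid low-order aliasing.
s1, s2 = fast_first_order(lambda x: x[0] + 0.2 * x[1], freqs=[11, 7])
```

The aliasing limitation discussed in the abstract shows up here directly: if one frequency's low harmonics coincided with the other's (e.g. 11 and 33), the two variance contributions would be confounded.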

  11. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

    1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
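The risk quantities such a code reports can be illustrated on a tiny fault tree. The sketch below is an illustrative assumption, not TEMAC's matrix algorithm: it evaluates a top event given as minimal cut sets by enumerating basic-event states, and computes an event's Birnbaum importance — the sensitivity of the top-event probability to that event's probability — as P(top | event occurs) − P(top | event does not occur).

```python
from itertools import product

# Top event expressed as minimal cut sets over basic events A, B, C.
CUT_SETS = [{"A", "B"}, {"A", "C"}]
PROBS = {"A": 0.01, "B": 0.1, "C": 0.2}

def top_event(state):
    # True if every event in at least one cut set has occurred.
    return any(all(state[e] for e in cs) for cs in CUT_SETS)

def top_probability(probs):
    events = sorted(probs)
    total = 0.0
    for bits in product([False, True], repeat=len(events)):
        state = dict(zip(events, bits))
        weight = 1.0
        for e in events:
            weight *= probs[e] if state[e] else 1 - probs[e]
        if top_event(state):
            total += weight
    return total

def birnbaum(probs, event):
    # dP(top)/dp_event = P(top | event=1) - P(top | event=0).
    return (top_probability(dict(probs, **{event: 1.0}))
            - top_probability(dict(probs, **{event: 0.0})))

p_top = top_probability(PROBS)   # exact: 0.01 * (0.1 + 0.2 - 0.1 * 0.2)
ib_a = birnbaum(PROBS, "A")
```

Exhaustive enumeration is exponential in the number of events, which is why production codes like TEMAC work on the Boolean expression itself rather than on the full state space.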

  12. Sensitive zone parameters and curvature radius evaluation for polymer optical fiber curvature sensors

    Science.gov (United States)

    Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria

    2018-03-01

    Polymer optical fibers (POFs) are suitable for applications such as curvature sensors, strain, temperature, liquid level, among others. However, for enhanced sensitivity, many polymer optical fiber curvature sensors based on intensity variation require a lateral section. Lateral section length, depth, and surface roughness have great influence on the sensor sensitivity, hysteresis, and linearity. Moreover, the sensor curvature radius increases the stress on the fiber, which leads to variation of the sensor behavior. This paper presents the analysis relating the curvature radius and lateral section length, depth, and surface roughness with the sensor sensitivity, hysteresis, and linearity for a POF curvature sensor. Results show a strong correlation between the decision parameters' behavior and the performance for sensor applications based on intensity variation. Furthermore, there is a trade-off among the sensitive zone length, depth, surface roughness, and curvature radius with the sensor's desired performance parameters, which are minimum hysteresis, maximum sensitivity, and maximum linearity. The optimization of these parameters is applied to obtain a sensor with a sensitivity of 20.9 mV/°, linearity of 0.9992, and hysteresis below 1%, which represents a better performance of the sensor compared with the sensor without the optimization.

  13. Sensitivity Analyses for Cross-Coupled Parameters in Automotive Powertrain Optimization

    Directory of Open Access Journals (Sweden)

    Pongpun Othaganont

    2014-06-01

    Full Text Available When vehicle manufacturers are developing new hybrid and electric vehicles, modeling and simulation are frequently used to predict the performance of the new vehicles from an early stage in the product lifecycle. Typically, models are used to predict the range, performance and energy consumption of their future planned production vehicle; they also allow the designer to optimize a vehicle’s configuration. Another use for the models is in performing sensitivity analysis, which helps us understand which parameters have the most influence on model predictions and real-world behaviors. There are various techniques for sensitivity analysis, some are numerical, but the greatest insights are obtained analytically with sensitivity defined in terms of partial derivatives. Existing methods in the literature give us a useful, quantified measure of parameter sensitivity, a first-order effect, but they do not consider second-order effects. Second-order effects could give us additional insights: for example, a first order analysis might tell us that a limiting factor is the efficiency of the vehicle’s prime-mover; our new second order analysis will tell us how quickly the efficiency of the powertrain will become of greater significance. In this paper, we develop a method based on formal optimization mathematics for rapid second-order sensitivity analyses and illustrate these through a case study on a C-segment electric vehicle.
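The second-order effects discussed above are mixed partial derivatives of a model output with respect to pairs of parameters. As a generic numerical sketch (the vehicle-range model is a made-up stand-in, not the paper's analytic formulation), a cross sensitivity can be estimated with a four-point central difference and checked against its analytic value:

```python
def vehicle_range(eta, capacity):
    # Hypothetical toy model: range grows with powertrain efficiency (eta)
    # and battery capacity, with an interaction term between the two.
    return 4.0 * eta * capacity + 0.5 * capacity ** 2

def cross_sensitivity(f, x, y, hx=1e-3, hy=1e-3):
    """Four-point central-difference estimate of d2f / (dx dy)."""
    return (f(x + hx, y + hy) - f(x + hx, y - hy)
            - f(x - hx, y + hy) + f(x - hx, y - hy)) / (4 * hx * hy)

# For this model d2(range)/(d eta d capacity) = 4.0 everywhere.
d2 = cross_sensitivity(vehicle_range, 0.9, 60.0)
```

A nonzero cross term is exactly the coupling the paper studies: it tells you how the first-order sensitivity to one parameter changes as another parameter moves.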

  14. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5, 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  15. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  16. Sensitivity analysis of numerical solutions for environmental fluid problems

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Motoyama, Yasunori

    2003-01-01

    In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
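Estimating the exact solution and the numerical error from the solution's behavior as the discretization parameter varies is closely related to Richardson extrapolation. A minimal sketch using trapezoidal integration as an illustrative stand-in for the fluid solver (the quadrature problem and resolutions are assumptions for demonstration):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Trapezoidal error is O(h^2): halving h cuts the error by ~4, so the
# h -> 0 limit can be estimated from just two resolutions.
coarse = trapezoid(math.sin, 0.0, math.pi, 32)
fine = trapezoid(math.sin, 0.0, math.pi, 64)
extrapolated = fine + (fine - coarse) / 3   # Richardson estimate of the limit
error_estimate = (fine - coarse) / 3        # estimated error of the fine run
```

Here the exact integral is 2, so the quality of both the extrapolated value and the error estimate can be checked directly.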

  17. Extended forward sensitivity analysis of one-dimensional isothermal flow

    International Nuclear Information System (INIS)

    Johnson, M.; Zhao, H.

    2013-01-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
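The forward sensitivity method described above augments the governing equations with sensitivity equations integrated alongside the solution. For a scalar decay equation dy/dt = −ky (an illustrative stand-in for the 1-D flow equations, not the paper's benchmark), the sensitivity s = ∂y/∂k obeys ds/dt = −ks − y:

```python
def forward_sensitivity(k, y0, t_end, dt=1e-3):
    y, s = y0, 0.0          # s = dy/dk, zero at t = 0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        # Explicit Euler on the state and its forward sensitivity equation.
        y_new = y + dt * (-k * y)
        s_new = s + dt * (-k * s - y)
        y, s = y_new, s_new
    return y, s

y, s = forward_sensitivity(k=0.5, y0=1.0, t_end=1.0)
# Analytic solution: y = exp(-k t) ~ 0.6065, s = -t exp(-k t) ~ -0.6065
```

Treating the time step dt itself as one more parameter, as the paper does, extends the same machinery to quantify time-discretization error.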

  18. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  19. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
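The Elementary Effect Test mentioned above screens inputs using one-at-a-time perturbations from random base points. A radial, stripped-down sketch (the toy model, step size and trajectory count are illustrative assumptions) computes the mean absolute elementary effect mu*, the statistic used for ranking and screening:

```python
import random

def morris_mu_star(model, dim, r=50, delta=0.1, seed=2):
    rng = random.Random(seed)
    totals = [0.0] * dim
    for _ in range(r):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        f0 = model(base)
        for i in range(dim):
            x = list(base)
            x[i] += delta               # one-at-a-time step in input i
            ee = (model(x) - f0) / delta
            totals[i] += abs(ee)        # mu*: mean of |elementary effects|
    return [t / r for t in totals]

# Toy model: input 0 is strongly influential, input 2 is inert.
mu = morris_mu_star(lambda x: 10 * x[0] + x[1] ** 2 + 0 * x[2], dim=3)
```

The screening question the paper studies is visible even here: the inert input gets mu* of exactly zero only because the toy model is noise-free, whereas in practice a threshold must separate "small" from "zero".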

  20. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE

  1. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters mostly affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on the mutual information which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on the Latin Hypercube Sampling (LHS of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and the mutual information analysis methods are illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables are contributing significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
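The PRCC computation described above can be sketched compactly: sample the parameter space with Latin hypercube sampling, rank-transform inputs and output, regress out the other input, and correlate the residuals. The toy model, sample size and noise level below are illustrative assumptions, not the SWMM setup:

```python
import random

def lhs(n, dim, rng):
    # Latin hypercube: one sample per stratum, strata shuffled per dimension.
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[d][i] for d in range(dim)] for i in range(n)]

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def residuals(y, x):
    # Residuals of a simple least-squares fit of y on x.
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((a - mv) ** 2 for a in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

rng = random.Random(3)
X = lhs(500, 2, rng)
y = [x[0] + 0.2 * x[1] + 0.1 * rng.gauss(0, 1) for x in X]
ry, r1, r2 = ranks(y), ranks([x[0] for x in X]), ranks([x[1] for x in X])
# PRCC of input i: correlate rank residuals after removing the other input.
prcc1 = corr(residuals(ry, r2), residuals(r1, r2))
prcc2 = corr(residuals(ry, r1), residuals(r2, r1))
```

Because the same LHS dataset can be reused, a mutual-information measure could be computed on X and y as well, exactly as the paper does to catch non-monotonic associations that PRCC misses.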

  2. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of a basic variable is transformed into a set of RS of conditional failure probabilities with respect to that distribution parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes
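The quantity estimated above — the sensitivity of a failure probability to a distribution parameter — can also be obtained from plain Monte Carlo via the score-function (likelihood-ratio) identity, which this sketch uses in place of subset simulation (an illustrative simplification that ignores Subsim's rare-event efficiency): for x ~ N(mu, sigma²), dP_f/dmu = E[1_fail · (x − mu)/sigma²].

```python
import random

def failure_prob_and_sensitivity(mu=0.0, sigma=1.0, n=200000, seed=4):
    rng = random.Random(seed)
    p_sum = 0.0
    s_sum = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        fail = 1.0 if 3.0 - x < 0.0 else 0.0   # limit state g(x) = 3 - x
        p_sum += fail
        # Score function: d(log density)/d(mu) = (x - mu) / sigma^2
        s_sum += fail * (x - mu) / sigma ** 2
    return p_sum / n, s_sum / n

pf, dpf_dmu = failure_prob_and_sensitivity()
# Analytic values: pf = 1 - Phi(3) ~ 1.35e-3, dpf/dmu = phi(3) ~ 4.43e-3
```

For much rarer failure events the variance of this crude estimator explodes, which is precisely the regime where the subset-simulation-based RS algorithms of the abstract pay off.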

  3. Studying the physics potential of long-baseline experiments in terms of new sensitivity parameters

    International Nuclear Information System (INIS)

    Singh, Mandip

    2016-01-01

    We investigate physics opportunities to constrain the leptonic CP-violation phase δ_CP through numerical analysis of working neutrino oscillation probability parameters, in the context of long-baseline experiments. Numerical analysis of two parameters, the “transition probability δ_CP phase sensitivity parameter (A^M)” and the “CP-violation probability δ_CP phase sensitivity parameter (A^CP),” as functions of beam energy and/or baseline has been carried out. It is an elegant technique to broadly analyze different experiments to constrain the δ_CP phase and also to investigate the mass hierarchy in the leptonic sector. Positive and negative values of the parameter A^CP, corresponding to either hierarchy in specific beam energy ranges, could be a very promising way to explore the mass hierarchy and the δ_CP phase. The keys to more robust bounds on the δ_CP phase are improvements of the involved detection techniques to explore lower energies and relatively long baseline regions with better experimental accuracy.

  4. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik

    2009-01-01

    satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....

  5. Sensitivity analysis of physiochemical interaction model: which pair ...

    African Journals Online (AJOL)

    ... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...

  6. Parameters influencing deposit estimation when using water sensitive papers

    Directory of Open Access Journals (Sweden)

    Emanuele Cerruto

    2013-10-01

    The aim of the study was to assess the possibility of using water sensitive papers (WSP) to estimate the amount of deposit on the target when varying the spray characteristics. To identify the main quantities influencing the deposit, some simplifying hypotheses were applied to simulate WSP behaviour: log-normal distribution of the diameters of the drops and circular stains randomly placed on the images. A very large number (4704) of images of WSPs was produced by means of simulation. The images were obtained by simulating drops of different arithmetic mean diameter (40-300 μm), different coefficient of variation (0.1-1.5), and different percentage of covered surface (2-100%, not considering overlaps). These images were considered to be effective WSP images and then analysed using image processing software in order to measure the percentage of covered surface, the number of particles, and the area of each particle; the deposit was then calculated. These data were correlated with those used to produce the images, varying the spray characteristics. As far as the drop populations are concerned, a classification based on the volume median diameter only should be avoided, especially in the case of high variability. This, in fact, results in classifying sprays with very low arithmetic mean diameter as extremely or ultra coarse. The WSP image analysis shows that the relation between simulated and computed percentage of covered surface is independent of the type of spray, whereas impact density and unitary deposit can be estimated from the computed percentage of covered surface only if the spray characteristics (arithmetic mean and coefficient of variation of the drop diameters) are known. These data can be estimated by analysing the particles on the WSP images. The results of a validation test show good agreement between simulated and computed deposits, testified by a high (0.93) coefficient of determination.

  7. Global sensitivity analysis in wind energy assessment

    Science.gov (United States)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or the output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
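
    First-order and total-effect indices of the kind ranked in this record can be estimated with a pick-freeze (Saltelli-type) scheme: evaluate the model on two sample matrices A and B and on hybrids AB_i that take column i from B. The sketch below uses plain pseudo-random sampling and a hypothetical additive toy model, not the study's wind-energy model; for y = x0 + 0.5*x1 with uniform inputs the analytic indices are S1 ≈ [0.8, 0.2, 0].

```python
import numpy as np

def sobol_indices(f, d, n, rng):
    """First-order (S1) and total-effect (ST) Sobol indices via the
    pick-freeze estimators of Saltelli (S1) and Jansen (ST)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" all inputs but i
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index
    return S1, ST

f = lambda X: X[:, 0] + 0.5 * X[:, 1]   # additive toy model; x2 is inert
S1, ST = sobol_indices(f, d=3, n=100_000, rng=np.random.default_rng(1))
```

    Swapping the pseudo-random `rng.random` draws for Sobol' sequences or LHS (e.g. `scipy.stats.qmc`) reproduces the sampling-strategy comparison the abstract describes.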

  8. Sensitivity analysis of a greedy heuristic for knapsack problems

    NARCIS (Netherlands)

    Ghosh, D; Chakravarti, N; Sierksma, G

    2006-01-01

    In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
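
    For reference, the greedy heuristic being analyzed orders items by value-to-weight ratio; sensitivity questions then ask how far a problem parameter (a weight, a value, the capacity) can move before the heuristic's choice changes. A minimal sketch of the heuristic with an illustrative instance (the numbers are not from the paper):

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic for 0-1 knapsack: pack items in decreasing
    value/weight ratio, skipping items that no longer fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, total_w, total_v = [], 0, 0
    for i in order:
        if total_w + weights[i] <= capacity:
            chosen.append(i)
            total_w += weights[i]
            total_v += values[i]
    return chosen, total_v

# classic instance: greedy picks items 0 and 1 (value 160),
# while the optimum is items 1 and 2 (value 220)
sol = greedy_knapsack([60, 100, 120], [10, 20, 30], 50)
# a parametric analysis would repeat this call while sweeping, e.g., capacity
```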

  9. Calculation of integral parameters sensitivity in fast reactors

    International Nuclear Information System (INIS)

    Renke, C.A.C.

    1981-01-01

    The variational formulation, incorporated into the VARI-1D computer code, is used for the sensitivity calculations. At a first stage the direct method was also used, with the objective of establishing a parallel between the two methods. (E.G.) [pt

  10. Sensitivity Analysis Based on Markovian Integration by Parts Formula

    Directory of Open Access Journals (Sweden)

    Yongsheng Hang

    2017-10-01

    Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variations brought about by changes of parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and show closed-form expressions for two commonly-used time-continuous Markovian models. By comparison, we conclude that our approach outperforms the existing technique of computing sensitivity on Markovian models.

  11. MOVES2010a regional level sensitivity analysis

    Science.gov (United States)

    2012-12-10

    This document discusses the sensitivity of emission rates to various input parameters using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...

  12. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-01

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a

  13. Evaluation of parameter sensitivities for flux-switching permanent magnet machines based on simplified equivalent magnetic circuit

    Directory of Open Access Journals (Sweden)

    Gan Zhang

    2017-05-01

    Most of the published papers regarding the design of flux-switching permanent magnet machines focus on the analysis and optimization of electromagnetic or mechanical behaviors; however, the evaluation of parameter sensitivities has not been covered, which, by contrast, is the main contribution of this paper. Based on finite element analysis (FEA) and a simplified equivalent magnetic circuit, the method proposed in this paper enables the influences of parameters on the electromagnetic performances, i.e. the parameter sensitivities, to be given by equations. The FEA results are also validated by experimental measurements.

  14. Sensitivity analysis of reactive ecological dynamics.

    Science.gov (United States)

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
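
    For the linear(ized) continuous-time case, reactivity is the largest eigenvalue of the symmetric part of the community matrix, and its parameter sensitivities follow from standard eigenvalue perturbation theory: for a simple leading eigenvalue with unit eigenvector v, d(reactivity)/dA_ij = v_i v_j. A sketch with a hypothetical 2x2 predator-prey style matrix (illustrative numbers, not the paper's models), checked against a finite difference:

```python
import numpy as np

def reactivity(A):
    """Reactivity of x' = A x: largest eigenvalue of the symmetric part."""
    H = (A + A.T) / 2
    return np.linalg.eigvalsh(H)[-1]   # eigvalsh sorts ascending

def reactivity_sensitivity(A):
    """d(reactivity)/dA_ij = v_i * v_j, where v is the unit eigenvector
    of the symmetric part for its largest eigenvalue (assumed simple)."""
    H = (A + A.T) / 2
    _, V = np.linalg.eigh(H)
    v = V[:, -1]
    return np.outer(v, v)

# hypothetical community matrix: stable (negative diagonal) but reactive
A = np.array([[-1.0, -5.0],
              [ 0.1, -1.0]])
S = reactivity_sensitivity(A)

# verify one entry against a finite difference
h = 1e-6
Ah = A.copy()
Ah[0, 1] += h
fd = (reactivity(Ah) - reactivity(A)) / h
```

    The same outer-product formula is what makes the matrix-calculus sensitivities in the paper cheap to evaluate once the eigendecomposition is known.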

  15. Sensitivity of seismic design parameters to input variables

    International Nuclear Information System (INIS)

    Wium, D.J.W.

    1987-01-01

    The probabilistic method introduced by Cornell (1968) has been used to a large extent for this purpose. Due to its probabilistic approach, this technique provides a sound basis for studying the influence of the dominant parameters in such a model. Although the Southern African region is not well known for its seismicity, a number of events in the recent past have focused attention on some seismically active areas where special attention may be needed in defining the correct design parameters. The relatively sparse historical seismic data have been used to develop a mathematical model which represents this region. This paper briefly discusses this model, and uses it as a basis for evaluating the influence of the uncertainty in each of the principal parameters, these being the seismicity of the region, the attenuation of seismic waves after an event, and the models that can be used to arrive at engineering design values. (orig./HP)

  16. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

    In this analysis, we study the sensitivity due to variations of the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.

  17. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.

  18. Determination of new electroweak parameters at the ILC. Sensitivity to new physics

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, M.; Schmidt, E.; Schroeder, H. [Rostock Univ. (Germany). Inst. fuer Physik; Kilian, W. [Siegen Univ. (Gesamthochschule) (Germany). Fach Physik]|[Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Krstonosic, P.; Reuter, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Moenig, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2006-04-15

    We present a study of the sensitivity of an International Linear Collider (ILC) to electroweak parameters in the absence of a light Higgs boson. In particular, we consider those parameters that have been inaccessible at previous colliders, quartic gauge couplings. Within a generic effective-field theory context we analyze all processes that contain quasi-elastic weak-boson scattering, using complete six-fermion matrix elements in unweighted event samples, fast simulation of the ILC detector, and a multidimensional parameter fit of the set of anomalous couplings. The analysis does not rely on simplifying assumptions such as custodial symmetry or approximations such as the equivalence theorem. We supplement this by a similar new study of triple weak-boson production, which is sensitive to the same set of anomalous couplings. Including the known results on triple gauge couplings and oblique corrections, we thus quantitatively determine the indirect sensitivity of the ILC to new physics in the electroweak symmetry-breaking sector, conveniently parameterized by real or fictitious resonances in each accessible spin/isospin channel. (Orig.)

  19. Determination of new electroweak parameters at the ILC. Sensitivity to new physics

    International Nuclear Information System (INIS)

    Beyer, M.; Schmidt, E.; Schroeder, H.; Krstonosic, P.; Reuter, J.; Moenig, K.

    2006-04-01

    We present a study of the sensitivity of an International Linear Collider (ILC) to electroweak parameters in the absence of a light Higgs boson. In particular, we consider those parameters that have been inaccessible at previous colliders, quartic gauge couplings. Within a generic effective-field theory context we analyze all processes that contain quasi-elastic weak-boson scattering, using complete six-fermion matrix elements in unweighted event samples, fast simulation of the ILC detector, and a multidimensional parameter fit of the set of anomalous couplings. The analysis does not rely on simplifying assumptions such as custodial symmetry or approximations such as the equivalence theorem. We supplement this by a similar new study of triple weak-boson production, which is sensitive to the same set of anomalous couplings. Including the known results on triple gauge couplings and oblique corrections, we thus quantitatively determine the indirect sensitivity of the ILC to new physics in the electroweak symmetry-breaking sector, conveniently parameterized by real or fictitious resonances in each accessible spin/isospin channel. (Orig.)

  20. Sensitivity analysis of floating offshore wind farms

    International Nuclear Information System (INIS)

    Castro-Santos, Laura; Diaz-Casas, Vicente

    2015-01-01

    Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps investors to take decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm. It shows how much the output variables can vary when the input variables change. For this purpose two different scenarios are taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs’ scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors to take these variables into account in the development of floating offshore wind farms in the future

  1. Sensitivity study of steam explosion characteristics to uncertain input parameters using TEXAS-V code

    International Nuclear Information System (INIS)

    Grishchenko, Dmitry; Basso, Simone; Kudinov, Pavel; Bechta, Sevostian

    2014-01-01

    Release of core melt from a failed reactor vessel into a pool of water is adopted in several existing designs of light water reactors (LWRs) as an element of the severe accident mitigation strategy. Corium melt is expected to fragment, solidify and form a debris bed coolable by natural circulation. However, a steam explosion can occur upon melt release, threatening containment integrity and potentially leading to a large early release of radioactive products to the environment. There are many factors and parameters that could be considered for prediction of the fuel-coolant interaction (FCI) energetics, but it is not clear which of them are the most influential and should be addressed in risk analysis. The goal of this work is to assess the importance of different uncertain input parameters used in the FCI code TEXAS-V for prediction of the steam explosion energetics. Both aleatory uncertainty in the characteristics of melt release scenarios and water pool conditions, and epistemic uncertainty in modeling are considered. Ranges of the uncertain parameters are selected based on the available information about prototypic severe accident conditions in a reference design of a Nordic BWR. Sensitivity analysis with the Morris method is implemented using the coupled TEXAS-V and DAKOTA codes. In total 12 input parameters were studied and 2 melt release scenarios were considered. Each scenario is based on 60,000 TEXAS-V runs. The sensitivity study identified the most influential input parameters, and those which have no statistically significant effect on the explosion energetics. Details of the approach to robust usage of TEXAS-V input, statistical enveloping of TEXAS-V output and interpretation of the results are discussed in the paper. We also provide the probability density function (PDF) of the steam explosion impulse estimated using TEXAS-V for the reference Nordic BWR. It can be used for assessment of the uncertainty ranges of steam explosion loads for given ranges of input parameters. (author)
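
    Morris screening of the kind used here ranks parameters by the mean absolute elementary effect (mu*), while the spread of the effects (sigma) flags nonlinearity or interactions. The following is a simplified one-directional sketch on a hypothetical 4-parameter toy model; the actual study drove TEXAS-V through DAKOTA rather than a Python function.

```python
import numpy as np

def morris_screening(f, d, r, levels=4, rng=None):
    """One-at-a-time Morris screening: r random trajectories on a
    `levels`-level grid in the unit hypercube; each trajectory perturbs
    every parameter once by delta. Returns mu* (mean |elementary effect|,
    overall influence) and sigma (std of effects, nonlinearity flag)."""
    rng = rng or np.random.default_rng()
    delta = levels / (2 * (levels - 1))             # standard p/(2(p-1))
    starts = np.arange(levels // 2) / (levels - 1)  # keeps x + delta in [0, 1]
    EE = np.empty((r, d))
    for t in range(r):
        x = rng.choice(starts, size=d)
        fx = f(x)
        for j in rng.permutation(d):                # random perturbation order
            x2 = x.copy()
            x2[j] += delta
            fx2 = f(x2)
            EE[t, j] = (fx2 - fx) / delta
            x, fx = x2, fx2
    return np.abs(EE).mean(axis=0), EE.std(axis=0)

# toy model: strong linear x0, interacting pair x1*x2, inert x3
f = lambda x: 10 * x[0] + 5 * x[1] * x[2]
mu_star, sigma = morris_screening(f, d=4, r=50, rng=np.random.default_rng(2))
```

    With cheap runs like this, mu* separates the influential parameters (x0, then x1 and x2) from the inert one (x3), which is exactly the screening role the method plays before more expensive variance-based analysis.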

  2. A new importance measure for sensitivity analysis

    International Nuclear Information System (INIS)

    Liu, Qiao; Homma, Toshimitsu

    2010-01-01

    Uncertainty is an integral part of risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution was proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
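
    A crude numerical sketch of this kind of moment-independent measure: bin each input, compare the conditional output CDF within each bin against the unconditional CDF, and average the Kolmogorov-type distances. This is an illustration of the general idea only; the model, binning scheme and distance are invented here, not taken from the paper.

```python
import numpy as np

def cdf_deviation_importance(X, y, bins=20, grid=50):
    """For each input i: expected (over bins of x_i) max deviation of the
    conditional output CDF from the unconditional output CDF."""
    n, k = X.shape
    ygrid = np.quantile(y, np.linspace(0.01, 0.99, grid))
    F = np.array([np.mean(y <= t) for t in ygrid])   # unconditional CDF
    imp = np.zeros(k)
    for i in range(k):
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1,
                      0, bins - 1)
        for b in range(bins):
            yb = y[idx == b]
            if yb.size == 0:
                continue
            Fb = np.array([np.mean(yb <= t) for t in ygrid])
            imp[i] += (yb.size / n) * np.max(np.abs(Fb - F))
    return imp

rng = np.random.default_rng(3)
X = rng.random((20_000, 3))
y = 4 * X[:, 0] + 2 * X[:, 1]        # x2 is inert
imp = cdf_deviation_importance(X, y)
```

    Fixing an influential input shifts the whole output distribution, so its score is large; for an inert input the score only reflects sampling noise.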

  3. Sensitivity of SBLOCA analysis to model nodalization

    International Nuclear Information System (INIS)

    Lee, C.; Ito, T.; Abramson, P.B.

    1983-01-01

    The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery

  4. The role of sensitivity analysis in assessing uncertainty

    International Nuclear Information System (INIS)

    Crick, M.J.; Hill, M.D.

    1987-01-01

    Outside the specialist world of those carrying out performance assessments considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice

  5. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source term of the WAVEWATCH III model and subjected them to sensitivity analysis, in order to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine how many of these parameters should be considered for further discussion, and to rank the significance of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  6. Sensitivity of lumbar spine loading to anatomical parameters

    DEFF Research Database (Denmark)

    Putzer, Michael; Ehrlich, Ingo; Rasmussen, John

    2016-01-01

    Musculoskeletal simulations of lumbar spine loading rely on a geometrical representation of the anatomy. However, this data has an inherent inaccuracy. This study evaluates the influence of defined geometrical parameters on lumbar spine loading utilizing five parametrized musculoskeletal lumbar spine ...... lumbar spine model for a subject-specific approach with respect to bone geometry. Furthermore, degeneration processes could lead to computational problems and it is advised that stiffness properties of discs and ligaments should be individualized....

  7. Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model

    International Nuclear Information System (INIS)

    Otis, M.D.

    1983-01-01

    Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
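
    The propagate-then-correlate procedure described here can be illustrated with a toy one-compartment pasture-to-milk transfer model; this is a hypothetical stand-in for PATHWAY, and the distributions and rate constants below are invented for the sketch.

```python
import numpy as np

def milk_concentration(t, dep, lam_p, f_m):
    """Toy pasture -> milk transfer: deposition dep decays off pasture at
    rate lam_p (1/d); milk concentration scales with transfer factor f_m."""
    return dep * f_m * np.exp(-lam_p * t)

rng = np.random.default_rng(4)
n = 5000
dep = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # deposition (invented)
lam = rng.uniform(0.04, 0.08, size=n)              # pasture loss rate (invented)
f_m = rng.lognormal(mean=-3.0, sigma=0.3, size=n)  # transfer factor (invented)

# Monte Carlo uncertainty propagation: distribution of milk level at day 10
y = milk_concentration(10.0, dep, lam, f_m)

def partial_corr(X, y):
    """Partial correlation of each parameter with the output: correlate
    residuals after removing the linear effect of the other parameters."""
    n_, k = X.shape
    out = np.empty(k)
    for j in range(k):
        A = np.column_stack([np.ones(n_), np.delete(X, j, axis=1)])
        rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

pcc = partial_corr(np.column_stack([dep, lam, f_m]), y)
```

    As in the abstract, the same random draws serve both purposes: quantiles of `y` give the uncertainty estimate, and the partial correlations rank parameter sensitivity.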

  8. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems

  9. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  10. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  11. Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade

    NARCIS (Netherlands)

    Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.

    2010-01-01

    This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of

  12. The EVEREST project: sensitivity analysis of geological disposal systems

    International Nuclear Information System (INIS)

    Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus

    1997-01-01

    The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
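The reference technique named in this abstract, a Monte Carlo simulation followed by a linear regression, can be sketched as follows. The three-parameter model and its coefficients are hypothetical stand-ins for a performance-assessment code, not part of EVEREST; standardized regression coefficients (SRCs) then rank parameter importance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-parameter "model" standing in for a performance-assessment code.
def model(x):
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n = 1000
X = rng.uniform(0.0, 1.0, size=(n, 3))   # Monte Carlo sample of the parameters
y = model(X)

# Linear regression y ~ X; standardized regression coefficients (SRCs)
# rank the influence of each parameter on the output.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
src = beta[1:] * X.std(axis=0) / y.std()
ranking = np.argsort(-np.abs(src))
print(src, ranking)
```

With this toy model, the first parameter dominates (largest |SRC|) and the third is nearly inert, which is the kind of ordering such a regression-based analysis delivers.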

  13. Ambient pressure sensitivity of microbubbles investigated through a parameter study

    DEFF Research Database (Denmark)

    Andersen, Klaus Scheldrup; Jensen, Jørgen Arendt

    2009-01-01

    Measurements on microbubbles clearly indicate a relation between the ambient pressure and the acoustic behavior of the bubble. The purpose of this study was to optimize the sensitivity of ambient pressure measurements, using the subharmonic component, through microbubble response simulations. … The behavior of two microbubbles corresponding to two different contrast agents was investigated as a function of driving pulse and ambient overpressure, pov. Simulations of Levovist using a rectangular driving pulse show an almost linear reduction in the subharmonic component as pov is increased. For a 20 … found, although the reduction is not completely linear as a function of the ambient pressure. …

  14. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    Science.gov (United States)

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  15. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
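A minimal sketch of the Sobol' pick-freeze sampling idea underlying such variance decompositions, using a deterministic additive toy model with known indices rather than the stochastic reaction networks of the paper (the Saltelli-style estimator below is a standard choice, not necessarily the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Additive toy model standing in for a simulator response; with independent
# uniform inputs its first-order Sobol indices are known analytically.
def f(x):
    return x[:, 0] + 2.0 * x[:, 1]

d, n = 2, 20000
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli pick-freeze estimator of the first-order indices S_i:
# replace column i of A by column i of B and correlate the change with f(B).
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (f(ABi) - fA)) / var)
print(S)  # analytically S = [1/5, 4/5] for this model
```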

  16. Integral data analysis for resonance parameters determination

    International Nuclear Information System (INIS)

    Larson, N.M.; Leal, L.C.; Derrien, H.

    1997-09-01

    Neutron time-of-flight experiments have long been used to determine resonance parameters. Those resonance parameters have then been used in calculations of integral quantities such as Maxwellian averages or resonance integrals, and results of those calculations in turn have been used as a criterion for acceptability of the resonance analysis. However, the calculations were inadequate because covariances on the parameter values were not included in the calculations. In this report an effort to correct that deficiency is documented: the R-matrix analysis code SAMMY has been modified (1) to include integral quantities of importance directly within the resonance parameter analysis and (2) to determine the best fit to both differential (microscopic) and integral (macroscopic) data simultaneously. This modification was implemented because it is expected to have an impact on the intermediate-energy range that is important for criticality safety applications

  17. Sensitivity analysis in dynamic optimization

    NARCIS (Netherlands)

    Evers, A.H.

    1980-01-01

    To find the optimal control of chemical processes, Pontryagin's minimum principle can be used. In practice, however, one is not only interested in the optimal solution, which satisfies the restrictions on the control, the initial and terminal conditions, and the process parameters. It is also

  18. Sensitivity of viscosity Arrhenius parameters to polarity of liquids

    Science.gov (United States)

    Kacem, R. B. H.; Alzamel, N. O.; Ouerfelli, N.

    2017-09-01

    Several empirical and semi-empirical equations have been proposed in the literature to estimate the temperature dependence of liquid viscosity. In this context, this paper aims to study the effect of the polarity of liquids on the modeling of the viscosity-temperature dependence, considering particularly Arrhenius-type equations. To this end, the solvents are classified into three groups: nonpolar, borderline polar and polar solvents. Based on adequate statistical tests, we found strong evidence that the polarity of solvents significantly affects the distribution of the Arrhenius-type equation parameters and consequently the modeling of the viscosity-temperature dependence. Thus, specific estimated parameter values for each group of liquids are proposed in this paper. In addition, comparison of the accuracy of approximation with and without classification of liquids, using the Wilcoxon signed-rank test, shows a significant discrepancy for the borderline polar solvents. We therefore suggest new specific coefficient values of the simplified Arrhenius-type equation for better estimation accuracy. This result is important given that the accuracy of the estimated viscosity-temperature dependence may considerably affect the design and optimization of several industrial processes.
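An Arrhenius-type viscosity-temperature law is commonly written as eta(T) = A * exp(Ea / (R * T)), so ln(eta) is linear in 1/T and the two parameters can be fitted by linear least squares. The sketch below recovers them from synthetic data; the numbers are illustrative, not the paper's fitted coefficients for any solvent group:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic viscosity data from an assumed Arrhenius law
# eta(T) = A * exp(Ea / (R * T)); A in mPa·s, Ea in J/mol (illustrative values).
A_true, Ea_true = 0.05, 12000.0
T = np.linspace(283.15, 343.15, 7)
eta = A_true * np.exp(Ea_true / (R * T))

# Linearize: ln(eta) = ln(A) + (Ea / R) * (1 / T), then fit by least squares.
slope, intercept = np.polyfit(1.0 / T, np.log(eta), 1)
Ea_fit, A_fit = slope * R, np.exp(intercept)
print(Ea_fit, A_fit)  # recovers Ea_true and A_true from noise-free data
```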

  19. Risk characterization: description of associated uncertainties, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

  20. Sensitivity of control times as a function of core parameters, and oscillation control in thermal nuclear systems

    International Nuclear Information System (INIS)

    Amorim, E.S. do; D'Oliveira, A.B.; Galvao, O.B.; Oyama, K.

    1981-03-01

    Sensitivity of control times to variations of thermal reactor core parameters is defined by suitable changes in the power coefficient, core size and fuel enrichment. A control strategy is developed based on control theory concepts and on considerations of the physics of the problem. A digital diffusion theory simulation is described which tends to verify the control concepts considered, in the face of damped oscillations introduced in a thermal nuclear power system. The effectiveness of the control actions, in terms of eliminating oscillations, provided guidelines for the working group engaged in the analysis of the control rods and their optimal performance. (Author) [pt

  1. Object-sensitive Type Analysis of PHP

    NARCIS (Netherlands)

    Van der Hoek, Henk Erik; Hage, J

    2015-01-01

    In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the

  2. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    Science.gov (United States)

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model, which is implemented in BioWin software, and select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
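The normalised sensitivity coefficient S(i,j) used above is commonly defined as (dy/d theta_j) * (theta_j / y), i.e. the relative output change per relative parameter change. A minimal finite-difference sketch, using a hypothetical Monod-type rate in place of the BioWin activated sludge model:

```python
import numpy as np

# Toy Monod-type growth-rate expression standing in for one BioWin process
# rate; mu_max and K_S play the role of kinetic parameters (hypothetical).
def rate(mu_max, K_S, S=5.0):
    return mu_max * S / (K_S + S)

def normalised_sensitivity(f, params, name, h=1e-6):
    """S_ij = (dy/d theta_j) * theta_j / y, via central differences."""
    theta = params[name]
    up = dict(params, **{name: theta * (1 + h)})
    lo = dict(params, **{name: theta * (1 - h)})
    dy_dtheta = (f(**up) - f(**lo)) / (2 * h * theta)
    return dy_dtheta * theta / f(**params)

p = {"mu_max": 4.0, "K_S": 1.0}
print(normalised_sensitivity(rate, p, "mu_max"))  # 1.0: rate is linear in mu_max
print(normalised_sensitivity(rate, p, "K_S"))     # -K_S/(K_S + S) = -1/6 here
```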

  3. Sensitivity analysis of LOFT L2-5 test calculations

    International Nuclear Information System (INIS)

    Prosek, Andrej

    2014-01-01

    The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates the large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) in calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of a parameter variation to the results. They show when the input parameters are influential and how big this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
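The FFTBM figure of merit described here, an average amplitude computed on mirrored signals, can be sketched as follows; the exponential "reference calculation" and "sensitivity run" signals are invented stand-ins for LOFT L2-5 outputs, not BEMUSE data:

```python
import numpy as np

def average_amplitude(exp, calc):
    """FFTBM figure of merit: AA = sum|FFT(calc - exp)| / sum|FFT(exp)|."""
    err = np.fft.rfft(calc - exp)
    ref = np.fft.rfft(exp)
    return np.sum(np.abs(err)) / np.sum(np.abs(ref))

def mirror(sig):
    """Signal mirroring (the FFTBM-SM idea): append the reversed signal so the
    periodic extension has no jump between the last and first sample."""
    return np.concatenate([sig, sig[::-1]])

t = np.linspace(0.0, 1.0, 256)
reference = np.exp(-t)              # stands in for the reference calculation
sensitivity_run = np.exp(-1.2 * t)  # stands in for one sensitivity run
aa = average_amplitude(mirror(reference), mirror(sensitivity_run))
print(aa)  # larger AA = larger influence of the varied input parameter
```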

  4. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    Science.gov (United States)

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  5. Sensitivity analysis of dynamic characteristic of the fixture based on design variables

    International Nuclear Information System (INIS)

    Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa

    2002-01-01

    The sensitivity of structural natural frequencies to structural design parameters is investigated. A typical fixture for vibration testing is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequencies to design parameters is analyzed by the Matrix Perturbation Method. The results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures
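The matrix-perturbation result behind such analyses is that, for a simple eigenvalue with identity mass matrix and orthonormal modes, d(lambda_i)/dp = phi_i^T (dK/dp) phi_i. The two-DOF spring-mass system below is an illustrative stand-in for the fixture model, checked against a finite-difference perturbation:

```python
import numpy as np

# Two-DOF spring-mass system with unit masses (M = I), stiffnesses k1, k2.
# K = [[k1 + k2, -k2], [-k2, k2]]; natural frequencies follow from K phi = lam phi.
k1, k2 = 100.0, 50.0
K = np.array([[k1 + k2, -k2], [-k2, k2]])
lam, Phi = np.linalg.eigh(K)          # eigenvector columns are orthonormal (M = I)

# Matrix-perturbation sensitivity: d(lam_i)/dk1 = phi_i^T (dK/dk1) phi_i
dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
sens = np.array([Phi[:, i] @ dK_dk1 @ Phi[:, i] for i in range(2)])

# Cross-check against a finite-difference perturbation of k1
h = 1e-5
lam_p = np.linalg.eigvalsh(K + h * dK_dk1)
print(sens, (lam_p - lam) / h)
```

Since trace(dK/dk1) = 1, the two eigenvalue sensitivities sum to one, a handy sanity check on the perturbation formula.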

  6. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    Science.gov (United States)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore, sensitivity analysis and parameter optimization are very critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra was investigated for a series of hardware configurations of incidence angles and azimuth angles. The optimum hardware measurement configuration and spectrum parameters can then be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  7. Supersymmetry Parameter Analysis : SPA Convention and Project

    CERN Document Server

    Aguilar-Saavedra, J A; Allanach, Benjamin C; Arnowitt, R; Baer, H A; Bagger, J A; Balázs, C; Barger, V; Barnett, M; Bartl, Alfred; Battaglia, M; Bechtle, P; Belyaev, A; Berger, E L; Blair, G; Boos, E; Bélanger, G; Carena, M S; Choi, S Y; Deppisch, F; Desch, Klaus; Djouadi, A; Dutta, B; Dutta, S; Díaz, M A; Eberl, H; Ellis, Jonathan Richard; Erler, Jens; Fraas, H; Freitas, A; Fritzsche, T; Godbole, Rohini M; Gounaris, George J; Guasch, J; Gunion, J F; Haba, N; Haber, Howard E; Hagiwara, K; Han, L; Han, T; He, H J; Heinemeyer, S; Hesselbach, S; Hidaka, K; Hinchliffe, Ian; Hirsch, M; Hohenwarter-Sodek, K; Hollik, W; Hou, W S; Hurth, Tobias; Jack, I; Jiang, Y; Jones, D R T; Kalinowski, Jan; Kamon, T; Kane, G; Kang, S K; Kernreiter, T; Kilian, W; Kim, C S; King, S F; Kittel, O; Klasen, M; Kneur, J L; Kovarik, K; Kraml, Sabine; Krämer, M; Lafaye, R; Langacker, P; Logan, H E; Ma, W G; Majerotto, Walter; Martyn, H U; Matchev, K; Miller, D J; Mondragon, M; Moortgat-Pick, G; Moretti, S; Mori, T; Moultaka, G; Muanza, S; Mukhopadhyaya, B; Mühlleitner, M M; Nauenberg, U; Nojiri, M M; Nomura, D; Nowak, H; Okada, N; Olive, Keith A; Oller, W; Peskin, M; Plehn, T; Polesello, G; Porod, Werner; Quevedo, Fernando; Rainwater, D L; Reuter, J; Richardson, P; Rolbiecki, K; de Roeck, A; Weber, Ch.

    2006-01-01

    High-precision analyses of supersymmetry parameters aim at reconstructing the fundamental supersymmetric theory and its breaking mechanism. A well-defined theoretical framework is needed when higher-order corrections are included. We propose such a scheme, Supersymmetry Parameter Analysis SPA, based on a consistent set of conventions and input parameters. A repository for computer programs is provided which connect parameters in different schemes and relate the Lagrangian parameters to physical observables at LHC and high energy e+e- linear collider experiments, i.e., masses, mixings, decay widths and production cross sections for supersymmetric particles. In addition, programs for calculating high-precision low energy observables, the density of cold dark matter (CDM) in the universe as well as the cross sections for CDM search experiments are included. The SPA scheme still requires extended efforts on both the theoretical and experimental side before data can be evaluated in the future at the level of the d...

  8. A hybrid approach for global sensitivity analysis

    International Nuclear Information System (INIS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-01-01

    Distribution-based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in the distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational issue associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis-of-variance decomposition, extended bases and the homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • The proposed approach integrates PCFE within distribution-based sensitivity analysis. • The proposed approach is highly efficient.

  9. Impact parameter analysis and soft QCD dynamics

    International Nuclear Information System (INIS)

    Carvalho, P.A.S.; Martini, A.F.; Menon, M.J.

    2002-01-01

    In a recent paper, based on the hypothesis of the light-cone dipole representation for gluon Bremsstrahlung, Kopeliovich et al. developed a dynamical model for the elastic hadronic amplitude. The model has been applied to pp and p (bar) p scattering, and the effects of unitarity and peripheral interactions have been investigated in the impact parameter representation. In this communication, making use of a model-independent extraction of the scattering amplitude in the impact parameter space (developed earlier), we present a comparative study between the predictions of the dynamical model and the impact parameter analysis. (author)

  10. Sensitivity of Hurst parameter estimation to periodic signals in time series and filtering approaches

    Science.gov (United States)

    Marković, D.; Koch, M.

    2005-09-01

    The influence of periodic signals in time series on the Hurst parameter estimate is investigated with temporal, spectral and time-scale methods. The Hurst parameter estimates of simulated periodic time series with a white-noise background show a high sensitivity to the signal-to-noise ratio and, for some methods, also to the data length used. The analysis is then carried over to the investigation of extreme monthly river flows of the Elbe River (Dresden) and of the Rhine River (Kaub). The effects of removing the periodic components with different filtering approaches are discussed, and it is shown that such procedures are a prerequisite for an unbiased estimation of H. In summary, our results imply that the first step in a long-range correlation study of a time series should be the separation of the deterministic components from the stochastic ones. Otherwise wrong conclusions concerning possible memory effects may be drawn.
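One simple way to see the effect the abstract describes is the aggregated-variance estimator of H, which uses Var(block means) proportional to m^(2H - 2). The sketch below applies it to white noise (H = 0.5 by construction) with and without an added "monthly" sinusoid; this is a generic estimator, not necessarily one of the temporal, spectral or time-scale methods used by the authors:

```python
import numpy as np

rng = np.random.default_rng(2)

def hurst_aggvar(x, scales=(4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of H: Var(block means) ~ m**(2H - 2)."""
    v = [np.var(x[: len(x) // m * m].reshape(-1, m).mean(axis=1)) for m in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(v), 1)
    return 1.0 + slope / 2.0

n = 2 ** 14
noise = rng.normal(size=n)                              # H = 0.5 by construction
seasonal = 2.0 * np.sin(2 * np.pi * np.arange(n) / 12)  # deterministic "monthly" cycle

biased = hurst_aggvar(noise + seasonal)  # periodic component distorts the estimate
clean = hurst_aggvar(noise)              # deseasonalized series recovers H ~ 0.5
print(biased, clean)
```

Subtracting the known deterministic cycle before estimating H is exactly the kind of preprocessing the abstract recommends; with real river flows the cycle must of course be estimated, e.g. from monthly means.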

  11. Computerized analysis of brain perfusion parameter images

    International Nuclear Information System (INIS)

    Turowski, B.; Haenggi, D.; Wittsack, H.J.; Beck, A.; Aurich, V.

    2007-01-01

    Purpose: The development of a computerized method which allows a direct quantitative comparison of perfusion parameters. The display should allow a clear, direct comparison of brain perfusion parameters in different vascular territories and over the course of time. The analysis is intended to be the basis for further evaluation of cerebral vasospasm after subarachnoid hemorrhage (SAH). The method should permit early diagnosis of cerebral vasospasm. Materials and Methods: The Angiotux 2D-ECCET software was developed in close cooperation between computer scientists and clinicians. Starting from parameter images of brain perfusion, the cortex was marked, segmented and assigned to definite vascular territories. The underlying values were averaged for each segment and displayed in a graph. If a follow-up was available, the mean values of the perfusion parameters were displayed in relation to time. The method was developed with CT perfusion values in mind but is applicable to other methods of perfusion imaging. Results: Computerized analysis of brain perfusion parameter images allows an immediate comparison of these parameters and follow-up of mean values in a clear and concise manner. Values are related to definite vascular territories. The tabular output facilitates further statistical evaluations. The computerized analysis is precisely reproducible, i.e., repetitions result in exactly the same output. (orig.)

  12. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments and computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be applied recursively to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step are given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
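Step (2), the parameter screening study, is often done with a Morris-style elementary-effects design; that choice is an assumption here, not a claim about PSUADE's implementation. A minimal sketch with one deliberately inert parameter that a screener should discard:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 4-parameter response; parameter x[3] is inert by construction,
# mimicking a parameter that screening should eliminate.
def f(x):
    return 3.0 * x[0] + x[1] * x[2]

def morris_mu_star(f, d, r=50, delta=0.25):
    """Mean absolute elementary effect mu* per parameter (Morris screening)."""
    mu = np.zeros(d)
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=d)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta
            mu[i] += abs(f(xp) - f(x)) / delta
    return mu / r

mu_star = morris_mu_star(f, d=4)
print(mu_star)  # mu* for the inert parameter 3 is exactly zero
```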

  13. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  14. Hyperspectral signature analysis of skin parameters

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Garza, Luis; Kang, Sewon; Burlina, Philippe

    2013-02-01

    The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment, and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel Equations. We use this model to map the skin parameters to their corresponding hyperspectral signature. We then use machine learning based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012, and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. In addition, we also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology can have potential use in non-invasive skin anomaly detection and for developing minimally invasive pre-screening tools.

  15. Sensitivity of reactor integral parameters to the Γγ parameter of resolved resonances of fertile isotopes and to the α values, in thermal and epithermal spectra

    International Nuclear Information System (INIS)

    Barroso, D.E.G.

    1982-01-01

    A sensitivity analysis of reactor integral parameters to a 10% variation in the resolved resonance parameters Γγ of the fertile isotopes, and to a 10% variation in the α values (σγ/σf) of the fissile isotopes of PWR fuel elements, is performed. The analysis is made with thermal and epithermal spectra, the latter generated in a fuel cell with low V_M/V_F. The HAMMER system, the interface programs HELP and LITHE, and the HAMMER computer codes were used as a basis for this study. (E.G.) [pt

  16. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  17. Sensitivity Analysis Applied in Design of Low Energy Office Building

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik

    2008-01-01

    satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...

  18. Application of Sensitivity Analysis in Design of Sustainable Buildings

    DEFF Research Database (Denmark)

    Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind

    2007-01-01

    satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...

  19. Methods for global sensitivity analysis in life cycle assessment

    NARCIS (Netherlands)

    Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.

    2017-01-01

    Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to

  20. Design Optimization of Structural Parameters for Highly Sensitive Photonic Crystal Label-Free Biosensors

    Directory of Open Access Journals (Sweden)

    Yun-ah Han

    2013-03-01

    The effects of structural design parameters on the performance of nano-replicated photonic crystal (PC) label-free biosensors were examined by the analysis of simulated reflection spectra of PC structures. The grating pitch, duty, scaled grating height and scaled TiO2 layer thickness were selected as the design factors to optimize the PC structure. The peak wavelength value (PWV), full width at half maximum of the peak, figure of merit (FOM) for the bulk and surface sensitivities, and surface/bulk sensitivity ratio were also selected as the responses to optimize the PC label-free biosensor performance. A parametric study showed that the grating pitch was the dominant factor for PWV, and that it had low interaction effects with other scaled design factors. Therefore, we can isolate the effect of grating pitch using scaled design factors. For the design of a PC label-free biosensor, one should consider that: (1) the PWV can be measured by the reflection peak measurement instruments, (2) the grating pitch and duty can be manufactured using conventional lithography systems, and (3) the optimum design is less sensitive to the grating height and TiO2 layer thickness variations in the fabrication process. In this paper, we suggested a design guide for a highly sensitive PC biosensor in which one selects the grating pitch and duty based on the limitations of the lithography and measurement system, and conducts a multi-objective optimization of the grating height and TiO2 layer thickness for maximizing performance and minimizing the influence of parameter variation. Through multi-objective optimization of a PC structure with a fixed grating pitch of 550 nm and a duty of 50%, we obtained a surface FOM of 66.18 RIU⁻¹ and an S/B ratio of 34.8%, with a grating height of 117 nm and a TiO2 height of 210 nm.

  1. Design optimization of structural parameters for highly sensitive photonic crystal label-free biosensors.

    Science.gov (United States)

    Ju, Jonghyun; Han, Yun-ah; Kim, Seok-min

    2013-03-07

    The effects of structural design parameters on the performance of nano-replicated photonic crystal (PC) label-free biosensors were examined by the analysis of simulated reflection spectra of PC structures. The grating pitch, duty, scaled grating height and scaled TiO2 layer thickness were selected as the design factors to optimize the PC structure. The peak wavelength value (PWV), full width at half maximum of the peak, figure of merit (FOM) for the bulk and surface sensitivities, and surface/bulk sensitivity ratio were also selected as the responses to optimize the PC label-free biosensor performance. A parametric study showed that the grating pitch was the dominant factor for PWV, and that it had low interaction effects with other scaled design factors. Therefore, we can isolate the effect of grating pitch using scaled design factors. For the design of a PC label-free biosensor, one should consider that: (1) the PWV can be measured by the reflection peak measurement instruments, (2) the grating pitch and duty can be manufactured using conventional lithography systems, and (3) the optimum design is less sensitive to the grating height and TiO2 layer thickness variations in the fabrication process. In this paper, we suggested a design guide for a highly sensitive PC biosensor in which one selects the grating pitch and duty based on the limitations of the lithography and measurement system, and conducts a multi-objective optimization of the grating height and TiO2 layer thickness for maximizing performance and minimizing the influence of parameter variation. Through multi-objective optimization of a PC structure with a fixed grating pitch of 550 nm and a duty of 50%, we obtained a surface FOM of 66.18 RIU⁻¹ and an S/B ratio of 34.8%, with a grating height of 117 nm and a TiO2 height of 210 nm.

  2. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis
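The emulator idea behind BACCO can be sketched with a minimal zero-mean Gaussian process interpolator: fit the GP to a handful of runs of an expensive model (a smooth stand-in function here), then evaluate the cheap emulator wherever the sensitivity calculation needs it. The kernel choice, lengthscale, and toy function are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Stand-in for an expensive availability computation: a smooth 1-D function
# evaluated at a small number of design points.
f = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0.0, 1.0, 12)
y_train = f(x_train)

# Zero-mean GP emulator: solve (K + jitter*I) alpha = y once, predict cheaply.
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def emulate(x_new):
    return rbf(x_new, x_train) @ alpha

# The emulator reproduces the model closely between design points, so Monte
# Carlo sensitivity measures can be computed on it instead of the full code.
x_test = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(emulate(x_test) - f(x_test)))
print(max_err)
```

The payoff is exactly the one the abstract claims: thousands of emulator evaluations cost almost nothing once the small linear solve is done.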

  3. Ethical sensitivity in professional practice: concept analysis.

    Science.gov (United States)

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  4. Sensitivity/uncertainty analysis of a borehole scenario comparing Latin Hypercube Sampling and deterministic sensitivity approaches

    International Nuclear Information System (INIS)

    Harper, W.V.; Gupta, S.K.

    1983-10-01

    A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas, the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited number of parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
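A minimal sketch of the Latin Hypercube Sampling idea compared above: each input dimension is split into n equal-probability strata, and each stratum is sampled exactly once, with the strata randomly paired across dimensions (stdlib only; the design size is arbitrary):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """One n-point LHS design on [0, 1)^dims: in every dimension, each of the
    n equal-width strata contains exactly one point."""
    rng = random.Random(seed)
    strata = []
    for _ in range(dims):
        order = list(range(n))
        rng.shuffle(order)  # random pairing of strata across dimensions
        strata.append(order)
    return [[(strata[d][i] + rng.random()) / n for d in range(dims)]
            for i in range(n)]

pts = latin_hypercube(10, 3)
for d in range(3):
    occupied = sorted(int(p[d] * 10) for p in pts)
    print(occupied)  # every stratum 0..9 is hit exactly once
```

This stratification is why LHS covers a moderate-dimensional parameter space more evenly than simple random sampling at the same sample size.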

  5. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  6. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
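The windowing-plus-Fourier-coherence machinery the method depends on can be sketched with a Welch-style magnitude-squared coherence estimate. The signals below are synthetic stand-ins for ABP and ICP sharing a common oscillatory component; the sampling rate, segment length, and frequencies are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100.0, 4096
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 5.0 * t)        # shared 5 Hz component
abp = common + 0.5 * rng.standard_normal(n)
icp = 0.8 * common + 0.5 * rng.standard_normal(n)

def coherence(x, y, nseg=256):
    """Welch-style magnitude-squared coherence, averaged over Hann-windowed
    segments with 50% overlap."""
    win = np.hanning(nseg)
    pxx = pyy = pxy = 0
    for s in range(0, len(x) - nseg + 1, nseg // 2):
        X = np.fft.rfft(win * x[s:s + nseg])
        Y = np.fft.rfft(win * y[s:s + nseg])
        pxx += np.abs(X) ** 2
        pyy += np.abs(Y) ** 2
        pxy += X * np.conj(Y)
    return np.abs(pxy) ** 2 / (pxx * pyy), np.fft.rfftfreq(nseg, 1 / fs)

coh, freqs = coherence(abp, icp)
peak = coh[np.argmin(np.abs(freqs - 5.0))]
print(peak)  # high coherence at the shared 5 Hz component
```

The segment length and overlap play the role of the window parameters that the selected correlation analysis tunes against outcome data.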

  7. LBLOCA sensitivity analysis using meta models

    International Nuclear Information System (INIS)

    Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.

    2014-01-01

    This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU (best estimate plus uncertainty) approach. The sensitivity analysis is based on the computation of Sobol' indices making use of a meta-model. The paper also presents an application to a large-break loss-of-coolant accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)

  8. Analysis of sagittal spinopelvic parameters in achondroplasia.

    Science.gov (United States)

    Hong, Jae-Young; Suh, Seung-Woo; Modi, Hitesh N; Park, Jong-Woong; Park, Jung-Ho

    2011-08-15

    Prospective radiological analysis of patients with achondroplasia. To analyze sagittal spinal alignment and pelvic orientation in achondroplasia patients. Knowledge of sagittal spinopelvic parameters is important for the treatment of achondroplasia, because they differ from those of the normal population and can induce pain. The study and control groups were composed of 32 achondroplasia patients and 24 healthy volunteers, respectively. All underwent lateral radiography of the whole spine including hip joints. The radiographic parameters examined were sacral slope (SS), pelvic tilt, pelvic incidence (PI), S1 overhang, thoracic kyphosis, T10-L2 kyphosis, lumbar lordosis (LL1, LL2), and sagittal balance. Statistical analysis was performed to identify significant differences between the two groups. In addition, correlations between parameters and symptoms were sought. Sagittal spinopelvic parameters, namely, pelvic tilt, pelvic incidence, S1 overhang, thoracic kyphosis, T10-L2 kyphosis, lumbar lordosis 1 and sagittal balance, were found to be significantly different between achondroplasia patients and normal healthy controls. The present study shows that sagittal spinal and pelvic parameters can assist the treatment of spinal disorders in achondroplasia patients.

  9. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First ... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process with respect to the L/G ratio to the absorber, the CO2 lean solvent loadings, and the stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  10. Sensitivity analysis in optimization and reliability problems

    International Nuclear Information System (INIS)

    Castillo, Enrique; Minguez, Roberto; Castillo, Carmen

    2008-01-01

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods

  11. Sensitivity analysis in optimization and reliability problems

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es

    2008-12-15

    The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.

  12. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, V.; Krogt, M.M. van der; Koopman, H.F.J.M.; Verdonschot, N.J.

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of

  13. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait

    NARCIS (Netherlands)

    Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph

    2016-01-01

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle–tendon (MT) model parameters for each of

  14. On Approaches to Analyze the Sensitivity of Simulated Hydrologic Fluxes to Model Parameters in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Jie Bao

    2015-12-01

    Effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash–Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
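One of the four SA approaches listed, standardized regression coefficients from a linear regression model, is simple enough to sketch directly. The three-parameter toy response below is an invented stand-in for the simulated runoff or latent heat flux; it is not the Community Land Model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical near-linear "land model": the response depends on 3 parameters
# with very different influence strengths, plus a little noise.
X = rng.uniform(size=(500, 3))
y = 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + 0.01 * rng.standard_normal(500)

# Standardized regression coefficients: regress standardized y on standardized X;
# |SRC_j| then ranks how strongly parameter j drives the response.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.abs(src))  # parameter 0 dominates
```

For a near-linear model, the squared SRCs approximately partition the output variance among the inputs, which is why the different SA approaches in the study agree on the major parameters.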

  15. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of the hazard is 1.39 × 10⁻³, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years.
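Why a Beta(8, 2) prior produces the highest hazard and the three-expert prior the lowest can already be seen from the prior means alone. This sketch only compares E[p] = r/(r+s) under each prior; it does not reproduce the paper's full hazard model, which further propagates p through the volcanism assessment:

```python
# Conjugate Beta(r, s) priors on the disruption parameter p, as listed in the
# abstract, plus the discrete "three-expert" prior.
beta_priors = [(2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), (1, 1)]
prior_means = {(r, s): r / (r + s) for r, s in beta_priors}

# Three-expert prior: p equally likely to be 1e-3, 1e-2, or 1e-1.
three_expert_mean = (1e-3 + 1e-2 + 1e-1) / 3

most_pessimistic = max(prior_means, key=prior_means.get)
print(most_pessimistic, prior_means[most_pessimistic])  # Beta(8, 2) -> E[p] = 0.8
print(three_expert_mean)  # about 0.037, far smaller, consistent with the lower hazard
```

The sensitivity of the hazard estimate to the prior (roughly one order of magnitude here) is exactly what the abstract's sensitivity analysis quantifies.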

  16. Parameter Sensitivity of High-Order Equivalent Circuit Models of a Turbine Generator

    Directory of Open Access Journals (Sweden)

    T. Niewierowicz–Swiecicka

    2010-01-01

    This work shows the results of a parametric sensitivity analysis applied to a state-space representation of high-order two-axis equivalent circuits (ECs) of a turbine generator (150 MVA, 120 MW, 13.8 kV and 50 Hz). The main purpose of this study is to evaluate the impact of each parameter on the transient response of the analyzed two-axis models: d-axis ECs with one to five damper branches and q-axis ECs with one to four damper branches. The parametric sensitivity concept is formulated in a general context, and the sensitivity function is established from the generator response to a short-circuit condition. The results quantify the importance of each parameter in the model behavior. The algorithms were designed within the MATLAB® environment. The study leads to conclusions on electromagnetic aspects of solid-rotor synchronous generators that have not been previously studied. The methodology presented here can be applied to any other physical system.

  17. Importance measures in global sensitivity analysis of nonlinear models

    International Nuclear Information System (INIS)

    Homma, Toshimitsu; Saltelli, Andrea

    1996-01-01

    The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol' treatment of the measure of importance is the most general, his formalism is employed throughout this paper where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost.
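The first-order and total-effect indices discussed here can be estimated with a standard pick-freeze scheme (a Saltelli-type first-order estimator and a Jansen-type total-effect estimator) on a toy additive model whose variance shares are known analytically; the model and sample size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000

def model(x):
    """Toy additive model: x1 dominates, x2 is weak, x3 is inert."""
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.0 * x[:, 2]

# Pick-freeze sampling: two independent input matrices A and B.
A = rng.uniform(size=(N, 3))
B = rng.uniform(size=(N, 3))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1, ST = [], []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    yABi = model(ABi)
    S1.append(np.mean(yB * (yABi - yA)) / var)        # first-order index
    ST.append(0.5 * np.mean((yA - yABi) ** 2) / var)  # total-effect index
print(np.round(S1, 3), np.round(ST, 3))
```

For this additive model the analytic shares are 16/17, 1/17 and 0, and first-order and total-effect indices coincide; any gap between S1 and ST would signal the interaction ("synergetic") terms the total-effect index is designed to capture.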

  18. Wear-Out Sensitivity Analysis Project Abstract

    Science.gov (United States)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo Simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
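The Monte Carlo procedure described (fit Weibull failure models, then vary the wear-out characteristic, i.e. the shape parameter, and watch the probability of sufficiency shift) can be sketched with the standard library alone. The unit counts, spare count, and time scales below are invented; note that in this example the mission extends past the characteristic life, so stronger wear-out concentrates failures inside the mission window:

```python
import random

rng = random.Random(42)

def prob_spares_sufficient(n_units, spares, mission_t, scale, shape, trials=5000):
    """Monte Carlo estimate of P(failures within mission_t <= spares).

    Unit lifetimes are Weibull(scale, shape); shape > 1 models wear-out
    (increasing failure rate), shape = 1 is the constant-rate case.
    """
    ok = 0
    for _ in range(trials):
        failures = sum(1 for _ in range(n_units)
                       if rng.weibullvariate(scale, shape) < mission_t)
        ok += failures <= spares
    return ok / trials

# Sensitivity to the wear-out characteristic: same characteristic life (scale),
# increasing shape.
p_const_rate = prob_spares_sufficient(20, 15, mission_t=1.2, scale=1.0, shape=1.0)
p_wear_out = prob_spares_sufficient(20, 15, mission_t=1.2, scale=1.0, shape=4.0)
print(p_const_rate, p_wear_out)  # sufficiency probability drops as wear-out sets in
```

Sweeping the shape parameter over a range and recording the sufficiency probability reproduces the kind of worst-case spares study the abstract describes.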

  19. Supercritical extraction of oleaginous: parametric sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Santos M.M.

    2000-01-01

    The economy has become global and competitive, so the vegetable oil extraction industries must advance towards minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oilseed extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. In this work, a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, making it possible to propose strategies for high-performance operation.

  20. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  1. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
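    The smoothing-based procedure these two records describe can be illustrated with a minimal nonparametric estimator: smooth the output against each input and measure the share of output variance the smooth explains, a nonparametric analogue of first-order R². A simple Nadaraya-Watson kernel smoother stands in for LOESS here, and the test function is invented.

```python
import numpy as np

def kernel_smooth(x, y, bandwidth):
    """Nadaraya-Watson estimate of E[y | x], evaluated at the sample points."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def smoothing_sensitivity(X, y, bandwidth=0.2):
    """Fraction of output variance explained by a 1-D smooth of each input."""
    return np.array([np.var(kernel_smooth(X[:, j], y, bandwidth)) / np.var(y)
                     for j in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(np.pi * X[:, 0]) + 0.1 * X[:, 1]   # nonlinear in x0, weak in x1
s = smoothing_sensitivity(X, y)
```

Because the smoother tracks the nonlinear dependence on the first input, it assigns it a large variance share where a linear-regression-based measure could understate it, which is the point the abstract makes.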

  2. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
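    The FIM-based selection the abstract describes can be sketched in a minimal form: given a Jacobian of model outputs with respect to parameters, the diagonal of J^T J / sigma^2 measures how strongly each parameter is expressed in the measurements, and weakly expressed parameters are candidates for fixing. The Jacobian and noise level below are hypothetical, not from the head-tracking model.

```python
import numpy as np

def fisher_information(jacobian, noise_var=1.0):
    """FIM for additive Gaussian measurement noise: J^T J / sigma^2,
    where J[i, k] = d(output_i)/d(theta_k)."""
    return jacobian.T @ jacobian / noise_var

def rank_parameters(jacobian, noise_var=1.0):
    """Order parameters by their FIM diagonal (most informative first);
    the least informative ones are candidates for fixing."""
    fim = fisher_information(jacobian, noise_var)
    return np.argsort(np.diag(fim))[::-1]

# Hypothetical 3-parameter model: the outputs barely respond to theta_2
J = np.array([[1.0, 0.3, 0.01],
              [0.8, 0.5, 0.02],
              [1.2, 0.2, 0.01]])
order = rank_parameters(J)
```

Estimating only the top-ranked parameters and fixing the rest is what shrinks the confidence intervals: the inverse FIM (the Cramer-Rao bound) blows up when nearly uninformative parameters are kept in the estimation.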

  3. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors

  4. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
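    The variance-based step in both records, Sobol' indices estimated by Monte Carlo, can be sketched with a generic pick-freeze estimator. The model, dimensions, and sample size below are illustrative stand-ins, not the interferometer periodic-error model from the paper.

```python
import numpy as np

def sobol_first_order(model, sampler, n=20000, dims=2, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices.
    sampler(rng, n, dims) draws independent inputs; model maps rows to scalars."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, dims), sampler(rng, n, dims)
    yA = model(A)
    var, f0 = np.var(yA), yA.mean()
    indices = []
    for j in range(dims):
        AB = B.copy()
        AB[:, j] = A[:, j]           # freeze input j, resample the rest
        indices.append((np.mean(yA * model(AB)) - f0 ** 2) / var)
    return np.array(indices)

uniform = lambda rng, n, d: rng.uniform(0, 1, size=(n, d))
model = lambda x: x[:, 0] + 0.2 * x[:, 1]      # toy additive model
S = sobol_first_order(model, uniform, dims=2)
```

For this additive toy model the indices sum to one and the first input dominates; in the paper the same machinery ranks misalignment and polarization parameters by their contribution to the periodic-error variance.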

  5. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  6. Sensitivity analysis of the reactor safety study. Final report

    International Nuclear Information System (INIS)

    Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.

    1979-01-01

    The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. To give further insight into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates

  7. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    Science.gov (United States)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.

  8. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    Science.gov (United States)

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  9. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along left-right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)

  10. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    Science.gov (United States)

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  11. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using the Brayton cycle for the design of a GCFBR. • Performed a detailed sensitivity analysis of a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general SA strategy to identify the influential parameters that affect the operation of the reactor is described. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can serve as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. The design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used to demonstrate this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of the data

  12. Dynamic Resonance Sensitivity Analysis in Wind Farms

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei

    2017-01-01

    Participation factors (PFs) are calculated by critical eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce the harmonic resonance problems. Time...

  13. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  14. Quantification of remodeling parameter sensitivity - assessed by a computer simulation model

    DEFF Research Database (Denmark)

    Thomsen, J.S.; Mosekilde, Li.; Mosekilde, Erik

    1996-01-01

    We have used a computer simulation model to evaluate the effect of several bone remodeling parameters on vertebral cancellous bone. The menopause was chosen as the base case scenario, and the sensitivity of the model to the following parameters was investigated: activation frequency, formation balance, ... However, the formation balance was responsible for the greater part of the total mass loss.

  15. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables, yet discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or highly influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  16. Assessment of Wind Parameter Sensitivity on Extreme and Fatigue Wind Turbine Loads

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sethuraman, Latha [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-12

    Wind turbines are designed using a set of simulations to ascertain the structural loads that the turbine could encounter. While mean hub-height wind speed is considered to vary, other wind parameters such as turbulence spectra, shear, veer, spatial coherence, and component correlation are fixed or conditional values that, in reality, could have different characteristics at different sites and have a significant effect on the resulting loads. This paper therefore seeks to assess the sensitivity of the resulting ultimate and fatigue loads on the turbine to different wind parameters during normal operational conditions. Eighteen different wind parameters are screened using an Elementary Effects approach with radial points. As expected, the results show a high sensitivity of the loads to the turbulence standard deviation in the primary wind direction, but the sensitivity to wind shear is often much greater. To a lesser extent, other wind parameters that drive loads include the coherence in the primary wind direction and veer.

  17. Assessment of Wind Parameter Sensitivity on Ultimate and Fatigue Wind Turbine Loads: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Amy N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sethuraman, Latha [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-13

    Wind turbines are designed using a set of simulations to ascertain the structural loads that the turbine could encounter. While mean hub-height wind speed is considered to vary, other wind parameters such as turbulence spectra, shear, veer, spatial coherence, and component correlation are fixed or conditional values that, in reality, could have different characteristics at different sites and have a significant effect on the resulting loads. This paper therefore seeks to assess the sensitivity of the resulting ultimate and fatigue loads on the turbine to different wind parameters during normal operational conditions. Eighteen different wind parameters are screened using an Elementary Effects approach with radial points. As expected, the results show a high sensitivity of the loads to the turbulence standard deviation in the primary wind direction, but the sensitivity to wind shear is often much greater. To a lesser extent, other wind parameters that drive loads include the coherence in the primary wind direction and veer.
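    The Elementary Effects screen used in these two reports can be illustrated with a simplified one-at-a-time variant: from each of r random base points, step each input once and record the finite-difference effect, then rank inputs by the mean absolute effect (mu*). The toy load model and unit ranges below are invented; the actual study uses radial points over eighteen physical wind parameters.

```python
import numpy as np

def elementary_effects(model, bounds, r=50, delta=0.1, seed=0):
    """Simplified one-at-a-time Elementary Effects screen: r random base
    points, one elementary effect per parameter each. Returns mu*
    (mean absolute effect), the usual screening measure."""
    lo, hi = bounds
    k = len(lo)
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(lo, hi - delta * (hi - lo))   # leave room for the step
        y0 = model(x)
        for j in range(k):
            step = np.zeros(k)
            step[j] = delta * (hi[j] - lo[j])
            effects[i, j] = (model(x + step) - y0) / delta
    return np.abs(effects).mean(axis=0)

# Toy load model: dominated by the first input ("turbulence"), weak third input
toy = lambda x: 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]
lo, hi = np.zeros(3), np.ones(3)
mu_star = elementary_effects(toy, (lo, hi), r=50)
```

Screening methods like this are cheap relative to variance-based indices, which is why they suit a study that must rank eighteen parameters across many load channels.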

  18. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, were presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for improving model behaviour was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residual analysis techniques. (author)

  19. The Sensitivity of the Input Impedance Parameters of Track Circuits to Changes in the Parameters of the Track

    Directory of Open Access Journals (Sweden)

    Lubomir Ivanek

    2017-01-01

    Full Text Available This paper deals with the sensitivity of the input impedance of an open track circuit when the parameters of the track change. Weather conditions and the state of pollution are the most common reasons for parameter changes. The results were obtained from measured values of the parameters R (resistance), G (conductance), L (inductance), and C (capacitance) of a rail superstructure as functions of frequency. Measurements were performed on a railway siding in Orlova. The results are used to design a predictor of occupancy of a track section; for this purpose we were particularly interested in the frequencies of 75 and 275 Hz. Many parameter values for track substructures have already been reported in the literature, and we had initially planned to use them when designing the predictor. Deviations between these sources, however, are large and often span three orders of magnitude (see Tab. 8). This article therefore presents data updated using modern measurement devices and computer technology and, above all, shows a transmission (cascade) matrix used to determine the parameters.

  20. Phenomenological analysis of the Δ resonance parameters

    International Nuclear Information System (INIS)

    Vasan, S.S.

    1976-01-01

    The positions of the poles in the complex energy plane corresponding to the resonances Δ⁺⁺ and Δ⁰, and the associated residues, are determined by fitting the π⁺p and π⁻p hadronic phase shift data from the CARTER 73 analysis. As an illustration of the use of the Δ pole parameters, their application to the problem of parametrizing the residue function associated with the Δ Regge trajectory is considered. The input for the parametrization is given partly by the pole position and the residue of the Δ(1950), the first recurrence of the Δ(1236). These pole parameters are deduced from fits to the F37 partial wave data from the AYED 74 phase shift analysis. Together with the Δ(1236) pole parameters, these provide information on the behavior of the Regge residue in the resonance region u < 0 (in the context of s-channel backward scattering being dominated by u-channel Regge exchanges). Attempts to incorporate this information in parametrizations of the residue by means of real and complex functions lead to the conclusion that both the residue and the trajectory are better represented in the resonance region by complex parametrizations

  1. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with a first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
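    A parametric reliability sensitivity of the kind defined in this record, the derivative of a failure probability with respect to a distribution parameter, can be estimated by plain Monte Carlo via the score-function (likelihood-ratio) identity. The limit state below is a toy scalar example, not the mechanism motion-error model or the envelope method from the paper.

```python
import numpy as np

def pf_and_sensitivity(g, mu, sigma, n=200000, seed=2):
    """Monte Carlo failure probability Pf = P[g(X) < 0] for X ~ N(mu, sigma),
    with dPf/dmu from the score-function identity:
    dPf/dmu = E[ 1{g(X) < 0} * (X - mu) / sigma^2 ]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=n)
    fail = g(x) < 0.0
    pf = fail.mean()
    dpf_dmu = (fail * (x - mu) / sigma ** 2).mean()
    return pf, dpf_dmu

# Toy limit state: failure when X drops below 0, X ~ N(1.5, 1),
# so Pf ~ 0.067 and raising the mean reduces Pf (negative sensitivity)
pf, dpf = pf_and_sensitivity(lambda x: x, mu=1.5, sigma=1.0)
```

The same samples serve both estimates, which is the practical appeal: no re-simulation per parameter, only a reweighting by the score of the input density.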

  2. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  3. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis used a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
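The percentage-variation approach described above can be illustrated with a minimal one-at-a-time (OAT) sketch; the multiplicative dose model and parameter values are invented for illustration and are not FOOD III's pathway equations:

```python
# Hypothetical multiplicative dose model (illustrative only; not FOOD III's
# actual pathway equations): dose = dose_factor * intake_rate * concentration
base = {"dose_factor": 5.0e-8, "intake_rate": 2.0, "concentration": 40.0}

def dose(p):
    return p["dose_factor"] * p["intake_rate"] * p["concentration"]

d0 = dose(base)
for pct in (10, 30, 50):                         # percentages of parameter variation
    for name in base:
        hi = dict(base, **{name: base[name] * (1 + pct / 100)})
        lo = dict(base, **{name: base[name] * (1 - pct / 100)})
        rel_range = (dose(hi) - dose(lo)) / d0   # normalized dose range
        print(f"{pct:2d}% {name:13s} relative dose range = {rel_range:.2f}")
```

In this purely multiplicative toy model every parameter is equally influential (the relative dose range is exactly twice the variation percentage); FOOD III's rankings differ precisely because its parameters enter the pathway equations nonlinearly.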

  4. Uncertainty and sensitivity analysis in nuclear accident consequence assessment

    International Nuclear Information System (INIS)

    Karlberg, Olof.

    1989-01-01

This report contains the results of a four-year project carried out under research contracts with the Nordic Cooperation in Nuclear Safety and the National Institute for Radiation Protection. An uncertainty/sensitivity analysis methodology consisting of Latin hypercube sampling and regression analysis was applied to an accident consequence model. A number of input parameters were selected, and the uncertainties related to these parameters were estimated by a Nordic group of experts. Individual doses, collective dose, health effects and their related uncertainties were then calculated for three release scenarios and for a representative sample of meteorological situations. Two of the scenarios simulated the acute phase after an accident, and one the long-term consequences. The most significant parameters were identified. The outer limits of the calculated uncertainty distributions are large and grow to several orders of magnitude for the low-probability consequences. The uncertainty in the expectation values is typically a factor of 2-5 (1 sigma). The variation in the model responses due to the variation of the weather parameters is roughly equal to the variation induced by parameter uncertainty. The most important parameters turned out to be different for each pathway of exposure, as could be expected. However, the overall most important parameters are the wet deposition coefficient and the shielding factors. A general discussion of the usefulness of uncertainty analysis in consequence analysis is also given. (au)
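The Latin hypercube sampling plus regression methodology can be sketched as follows; the parameter names, ranges and the linear response are invented placeholders, not the report's consequence model:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """n samples in [0,1]^d: one point per stratum, columns permuted independently."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, d))) / n

n, d = 1000, 3
u = latin_hypercube(n, d)

# Map to illustrative parameter ranges (placeholders, not the report's inputs)
x = np.column_stack([
    1e-4 + u[:, 0] * (1e-3 - 1e-4),   # 'wet deposition coefficient'
    0.1 + u[:, 1] * (1.0 - 0.1),      # 'shielding factor'
    0.5 + u[:, 2] * (2.0 - 0.5),      # 'dose conversion factor'
])

# Toy linear response standing in for the consequence model, plus noise
y = 2.0e3 * x[:, 0] + 1.5 * x[:, 1] + 0.2 * x[:, 2] + rng.normal(0.0, 0.05, n)

# Standardized regression coefficients (SRCs) rank parameter importance
xs = (x - x.mean(axis=0)) / x.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
for name, s in zip(["wet deposition", "shielding", "dose factor"], src):
    print(f"SRC[{name}] = {s:+.3f}")
```

For a near-linear model the squared SRCs sum to roughly the fraction of output variance explained by the regression, so they can be read as approximate variance shares.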

  5. TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER

    Directory of Open Access Journals (Sweden)

    Richard E. Wendell

    2010-12-01

    Full Text Available Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.

  6. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    Directory of Open Access Journals (Sweden)

    Harry R. Millwater

    2006-01-01

Full Text Available A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk, considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.

  7. Application of sensitivity analysis for optimized piping support design

    International Nuclear Information System (INIS)

    Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.

    1993-01-01

The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)

  8. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360

    Energy Technology Data Exchange (ETDEWEB)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.

  9. Laryngeal High-Speed Videoendoscopy: Sensitivity of Objective Parameters towards Recording Frame Rate

    Directory of Open Access Journals (Sweden)

    Anne Schützenberger

    2016-01-01

Full Text Available The current use of laryngeal high-speed videoendoscopy in clinical settings involves subjective visual assessment of vocal fold vibratory characteristics. However, objective quantification of vocal fold vibrations for evidence-based diagnosis and therapy is desired, and objective parameters assessing laryngeal dynamics have therefore been suggested. This study investigated the sensitivity of these objective parameters and their dependence on recording frame rate. A total of 300 endoscopic high-speed videos with recording frame rates between 1000 and 15 000 fps were analyzed for a vocally healthy female subject during sustained phonation. Twenty parameters, representing laryngeal dynamics, were computed. Four different parameter characteristics were found: parameters showing no change with increasing frame rate; parameters changing up to a certain frame rate, but then remaining constant; parameters remaining constant within a particular range of recording frame rates; and parameters changing with nearly every frame rate. The results suggest that (1) parameter values are influenced by recording frame rates and different parameters have varying sensitivities to recording frame rate; (2) normative values should be determined based on recording frame rates; and (3) the typically used recording frame rate of 4000 fps seems to be too low to accurately distinguish certain characteristics of the human phonation process in detail.

  10. Information sensitivity functions to assess parameter information gain and identifiability of dynamical systems.

    Science.gov (United States)

    Pant, Sanjay

    2018-05-01

A new class of functions, called the 'information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system, is presented. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters, these information gains are also presented as marginal posterior variances, and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy-to-interpret ISFs can be used to (a) identify time intervals or regions in dynamical system behaviour where information about the parameters is concentrated; (b) assess the effect of measurement noise on the information gain for the parameters; (c) assess whether sufficient information in an experimental protocol (input, measurements and their frequency) is available to identify the parameters; (d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and (e) assess identifiability problems for particular sets of parameters. © 2018 The Authors.

  11. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1987-01-01

A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures

  12. Transcranial magnetic stimulation (TMS): compared sensitivity of different motor response parameters in ALS.

    Science.gov (United States)

    Pouget, J; Trefouret, S; Attarian, S

    2000-06-01

Owing to the low sensitivity of clinical signs in assessing upper motor neuron (UMN) involvement in ALS, there is a need for investigative tools capable of detecting abnormal function of the pyramidal tract. Transcranial magnetic stimulation (TMS) may contribute to the diagnosis by reflecting a UMN dysfunction that is not clinically detectable. Several parameters of the motor responses to TMS can be evaluated, with different levels of significance in healthy subjects compared with ALS patients. The central motor conduction time, however, is not sensitive in detecting subclinical UMN defects in individual ALS patients. The amplitude of the motor evoked potential (MEP), expressed as a percentage of the maximum wave, also has a low sensitivity. In some cases, the corticomotor threshold is decreased early in the disease course as a result of corticomotor neuron hyperexcitability induced by glutamate. Later, the threshold increases, indicating a loss of UMN. In our experience, a decreased silent period duration appears to be the most sensitive parameter when using motor TMS in ALS. TMS is also a sensitive technique for investigating the corticobulbar tract, which is difficult to study by other methods. TMS is a widely available, painless and safe technique with a good sensitivity that can visualize both corticospinal and corticobulbar tract abnormalities. The sensitivity can be improved further by taking into account several MEP parameters, including latency and decreased cortical silent period duration.

  13. Supplementary Material for: A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja; Navarro, Marí a; Merks, Roeland; Blom, Joke

    2015-01-01

    ) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided

  14. Sensitivity Analysis of Centralized Dynamic Cell Selection

    DEFF Research Database (Denmark)

    Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.

    2016-01-01

    and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...

  15. Applications of advances in nonlinear sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Werbos, P J

    1982-01-01

    The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.

  16. *Corresponding Author Sensitivity Analysis of a Physiochemical ...

    African Journals Online (AJOL)

    Michael Horsfall

    The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.

  17. Global sensitivity analysis of multiscale properties of porous materials

    Science.gov (United States)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  18. A comparative study of the sensitivity of diffusion-related parameters obtained from diffusion tensor imaging, diffusional kurtosis imaging, q-space analysis and bi-exponential modelling in the early disease course (24 h) of hyperacute (6 h) ischemic stroke patients.

    Science.gov (United States)

    Duchêne, Gaëtan; Peeters, Frank; Peeters, André; Duprez, Thierry

    2017-08-01

To compare the sensitivity and early temporal changes of diffusion parameters obtained from diffusion tensor imaging (DTI), diffusional kurtosis imaging (DKI), q-space analysis (QSA) and bi-exponential modelling in hyperacute stroke patients. A single investigational acquisition allowing the four diffusion analyses was performed on seven hyperacute stroke patients with a 3T system. The percentage changes between ipsi- and contralateral regions were compared at admission and 24 h later. Two out of the seven patients were imaged every 6 h during this period. Kurtoses from both DKI and QSA were the most sensitive of the tested diffusion parameters in the few hours following ischemia. An early increase-maximum-decrease pattern of evolution was highlighted during the 24-h period for all parameters proportional to diffusion coefficients. A similar pattern was observed for both kurtoses in only one of two patients. Our comparison was performed using identical diffusion encoding timings and on patients in the same stage of their condition. Although preliminary, our findings confirm those of previous studies that showed enhanced sensitivity of kurtosis. A fine time mapping of diffusion metrics in hyperacute stroke patients was presented, which advocates for further investigations on larger animal or human cohorts.

  19. Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks

    OpenAIRE

    Harry R. Millwater; R. Wesley Osborn

    2006-01-01

    A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...

  20. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    Science.gov (United States)

    2015-04-01

determined. From the results of the study, UN is safe to store under normal operating conditions. 15. SUBJECT TERMS urea, nitrate, sensitivity, thermal ...HNO3). Due to its simple composition, ease of manufacture, and higher detonation parameters than ammonium nitrate, it has become one of the...an H50 value of 10.054 ± 0.620 inches. 5. Conclusions From the results of the thermal analysis study, it can be concluded that urea nitrate is

  1. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  2. Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen

    2017-01-01

Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring...

  3. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    Science.gov (United States)

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A specialized ODE integrator for the efficient computation of parameter sensitivities

    Directory of Open Access Journals (Sweden)

    Gonnet Pedro

    2012-05-01

Full Text Available Abstract Background Dynamic mathematical models in the form of systems of ordinary differential equations (ODEs play an important role in systems biology. For any sufficiently complex model, the speed and accuracy of solving the ODEs by numerical integration is critical. This applies especially to systems identification problems where the parameter sensitivities must be integrated alongside the system variables. Although several very good general purpose ODE solvers exist, few of them compute the parameter sensitivities automatically. Results We present a novel integration algorithm that is based on second derivatives and contains other unique features such as improved error estimates. These features allow the integrator to take larger time steps than other methods. In practical applications, i.e. systems biology models of different sizes and behaviors, the method competes well with established integrators in solving the system equations, and it outperforms them significantly when local parameter sensitivities are evaluated. For ease of use, the solver is embedded in a framework that automatically generates the integrator input from an SBML description of the system of interest. Conclusions For future applications, comparatively ‘cheap’ parameter sensitivities will enable advances in solving large, otherwise computationally expensive parameter estimation and optimization problems. More generally, we argue that substantially better computational performance can be achieved by exploiting characteristics specific to the problem domain; elements of our methods such as the error estimation could find broader use in other, more general numerical algorithms.
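The idea of integrating parameter sensitivities alongside the system variables (the forward sensitivity equations) can be sketched for a logistic ODE with a plain RK4 integrator; this is a generic illustration, not the paper's second-derivative method:

```python
import numpy as np

# Logistic growth dy/dt = r*y*(1 - y/K); the sensitivity s = dy/dr obeys the
# forward sensitivity equation ds/dt = (df/dy)*s + df/dr, integrated alongside y.
r, K, y0 = 0.8, 10.0, 0.5

def rhs(t, z):
    y, s = z
    f = r * y * (1.0 - y / K)
    dfdy = r * (1.0 - 2.0 * y / K)
    dfdr = y * (1.0 - y / K)
    return np.array([f, dfdy * s + dfdr])

def rk4(fun, z0, t0, t1, nsteps):
    h = (t1 - t0) / nsteps
    t, z = t0, np.array(z0, dtype=float)
    for _ in range(nsteps):
        k1 = fun(t, z)
        k2 = fun(t + h / 2, z + h / 2 * k1)
        k3 = fun(t + h / 2, z + h / 2 * k2)
        k4 = fun(t + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

y_T, s_T = rk4(rhs, [y0, 0.0], 0.0, 5.0, 1000)

# Cross-check against central finite differences in r (the costly alternative)
def y_at_T(rv):
    f = lambda t, z: np.array([rv * z[0] * (1.0 - z[0] / K), 0.0])
    return rk4(f, [y0, 0.0], 0.0, 5.0, 1000)[0]

eps = 1e-6
fd = (y_at_T(r + eps) - y_at_T(r - eps)) / (2 * eps)
print(s_T, fd)   # the two estimates agree to several digits
```

The augmented system costs roughly one extra state per (state, parameter) pair, whereas the finite-difference check requires a full re-integration per parameter, which is why built-in sensitivity integration pays off for identification problems.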

  5. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

Full Text Available Data envelopment analysis (DEA is a method for measuring the efficiency of peer decision making units (DMUs which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper examines a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
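For the special case of one input and one output, CCR efficiency reduces to each DMU's output/input ratio normalized by the best ratio, which makes the sensitivity question (how far can the data move before an efficiency classification changes) easy to sketch. The data are invented, and real two-stage DEA requires solving linear programs:

```python
import numpy as np

# Toy single-input, single-output CCR efficiencies: each DMU's output/input
# ratio normalized by the best ratio (illustrative; two-stage DEA needs LPs)
x = np.array([2.0, 4.0, 8.0])     # inputs of three DMUs
y = np.array([2.0, 3.0, 4.0])     # outputs
ratio = y / x
eff = ratio / ratio.max()
print(eff)                         # [1.   0.75 0.5 ]

# Sensitivity of the classification: the efficient DMU 0 stays efficient
# while its ratio does not drop below the second-best ratio
second_best = np.sort(ratio)[-2]
x0_max = y[0] / second_best        # largest input before DMU 0 loses efficiency
print(x0_max)                      # 2.666..., a stability radius of ~0.67 on x[0]
```

The stability radius computed here is the single-DMU analogue of the paper's conditions for preserving an efficiency classification under data perturbations.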

  7. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also presented for the uncertainty and sensitivity analysis of a deterministic HIV model.
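A minimal sketch of first-order (delta-method) uncertainty propagation with and without input correlations; the function and all numbers are invented, not the paper's HIV model:

```python
import numpy as np

# First-order (delta-method) uncertainty propagation for y = g(x), with and
# without input correlations. The model g is a toy stand-in.
def g(x):
    return x[0] * x[1] + x[0] ** 2

mu = np.array([1.0, 2.0])            # means of the two inputs
sd = np.array([0.1, 0.2])            # standard deviations
rho = 0.8                            # input correlation coefficient
cov = np.array([[sd[0] ** 2,           rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1],  sd[1] ** 2        ]])

# Gradient of g at the mean: dg/dx0 = x1 + 2*x0, dg/dx1 = x0
grad = np.array([mu[1] + 2 * mu[0], mu[0]])

var_corr = grad @ cov @ grad                  # accounts for the correlation
var_indep = grad @ np.diag(sd ** 2) @ grad    # independence assumption
print(var_indep, var_corr)                    # ~0.2 vs ~0.328

# Monte Carlo cross-check of the correlated case
rng = np.random.default_rng(2)
xs = rng.multivariate_normal(mu, cov, 500_000)
mc_var = np.var(g(xs.T))
print(mc_var)
```

Here the correlation term contributes 0.128 of the total variance 0.328, i.e. assuming independence would understate the output variance by roughly 40%, which is exactly the kind of effect the analytic method is meant to expose.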

  8. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  9. Thermo-economic analysis of combined power plants with changing economic parameters

    International Nuclear Information System (INIS)

    Bidini, G.; Desideri, U.; Facchini, B.

    1991-01-01

    A method of thermo-economic analysis for the choice of optimal thermodynamic parameters of steam bottoming cycles in combined cycle power plants is presented. By keeping the thermodynamic aspects separated from the economic aspects, this method allows designers to easily perform a sensitivity analysis of the change in the economic parameters

  10. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners for conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
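    The kind of prior sensitivity check recommended here can be illustrated, far more simply than in a full BSEM, with a conjugate normal mean: refit under several priors of increasing vagueness and watch the posterior shift. All data and priors below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small sample from a normal with unknown mean (sigma known = 1), standing
# in for a structural-equation parameter; n is deliberately small, since
# that is where the simulation study found prior choice to matter most.
data = rng.normal(0.5, 1.0, size=10)
n, xbar = len(data), data.mean()

# Conjugate prior mu ~ N(m0, s0^2): the posterior mean is a precision-
# weighted average of the prior mean and the sample mean.
def posterior_mean(m0, s0):
    prec0, prec_lik = 1.0 / s0**2, n / 1.0**2
    return (prec0 * m0 + prec_lik * xbar) / (prec0 + prec_lik)

# Three "default" priors of increasing vagueness: with small n the
# posterior shifts noticeably with the prior -- the point of the check.
for m0, s0 in [(0.0, 0.1), (0.0, 1.0), (0.0, 100.0)]:
    print(f"prior N({m0}, {s0}^2): posterior mean = {posterior_mean(m0, s0):.3f}")
```

A tight prior at 0 pulls the estimate almost entirely toward 0, while the very vague prior essentially reproduces the sample mean; reporting estimates across such a grid is the one-parameter analogue of the step-by-step guide described above.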

  11. ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities

    International Nuclear Information System (INIS)

    Muir, D.W.

    1989-01-01

    File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities
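    The propagation described above is the familiar "sandwich rule" D = S P S^T: parameter covariances P pushed through a sensitivity matrix S yield the data covariance D. A minimal numerical sketch, with made-up parameter covariances and sensitivities rather than actual ENDF-6 data:

```python
import numpy as np

# Parameter covariance matrix P for two hypothetical model parameters
# (e.g. nuclear-model code inputs), in relative units.
P = np.array([[4.0e-4, 1.0e-4],
              [1.0e-4, 9.0e-4]])

# Sensitivity matrix S: rate of change of each of three cross-section
# values with respect to each parameter (S[i, j] = d sigma_i / d p_j).
S = np.array([[1.0, 0.5],
              [0.8, 0.7],
              [0.2, 1.1]])

# "Sandwich rule": propagated data covariance D = S P S^T.
D = S @ P @ S.T

print(D.shape)               # (3, 3)
print(np.allclose(D, D.T))   # True: D is symmetric by construction
```

Note the compactness argument from the abstract: a full 3x3 data covariance (6 independent entries here, far more for realistic grids) is represented by only the 2x2 parameter covariance plus the 3x2 sensitivity table.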

  12. Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited

    Science.gov (United States)

    Ghergut, Julia; Behrens, Horst; Sauter, Martin

    2017-04-01

    Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty in design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, with shorter metering durations required and higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion nor with material fluxes through (fluid-rock or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives that enables dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 2001), enabling a dispersion-independent characterization of multiple

  13. Understanding dynamics using sensitivity analysis: caveat and solution

    Science.gov (United States)

    2011-01-01

    Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
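    The distinction between persistent and impulse perturbations can be sketched numerically on a toy system (dx/dt = -k*x is our stand-in here, not one of the paper's models): the classical PSA coefficient perturbs k over the whole horizon, while an iPSA-style coefficient perturbs it only inside a short window starting at t0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy system: dx/dt = -k*x, x(0) = 1.
x0, k, dk, T = 1.0, 1.0, 1e-3, 5.0

def x_final(k_of_t):
    sol = solve_ivp(lambda t, x: -k_of_t(t) * x, (0.0, T), [x0],
                    t_eval=[T], rtol=1e-10, atol=1e-12, max_step=0.05)
    return sol.y[0, -1]

base = x_final(lambda t: k)

# Classical PSA coefficient: persistent perturbation k -> k + dk for all t.
psa = (x_final(lambda t: k + dk) - base) / dk

# iPSA-style coefficient: k perturbed only in a short window [t0, t0 + w],
# normalized by the window width as well.
def ipsa(t0, w=0.1):
    return (x_final(lambda t: k + dk if t0 <= t <= t0 + w else k)
            - base) / (dk * w)

# Analytic values for this linear system:
#   PSA  = dx(T)/dk = -T*exp(-k*T)  ~ -0.0337  (5x the impulse value,
#          because it integrates the perturbation over the whole horizon)
#   iPSA ~ -exp(-k*T), independent of t0 here  ~ -0.0067
# In switching systems the iPSA value *does* depend on t0 -- exactly the
# timing information the paper argues PSA throws away.
print(psa, ipsa(1.0), ipsa(3.0))
```
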

  14. Temperature sensitivity of void nucleation and growth parameters for single crystal copper: a molecular dynamics study

    International Nuclear Information System (INIS)

    Rawat, S; Chavan, V M; Warrier, M; Chaturvedi, S

    2011-01-01

    The effect of temperature on void nucleation and growth is studied using the molecular dynamics (MD) code LAMMPS (Large-Scale Atomic/Molecular Massively Parallel Simulator). Single crystal copper is triaxially expanded at a 5 × 10^9 s^-1 strain rate keeping the temperature constant. It is shown that the nucleation and growth of voids at these atomistic scales follows a macroscopic nucleation and growth (NAG) model. As the temperature increases, there is a steady decrease in the nucleation and growth thresholds. As the melting point of copper is approached, a double-dip in the pressure–time profile is observed. Analysis of this double-dip shows that the first minimum corresponds to the disappearance of long-range order due to the creation of stacking faults, at which point the system no longer has an FCC structure. There is no nucleation of voids at this juncture. The second minimum corresponds to the nucleation and incipient growth of voids. We present the sensitivity of the NAG parameters to temperature and the analysis of the double-dip in the pressure–time profile for single crystal copper at 1250 K

  15. Uncertainty and sensitivity analysis of environmental transport models

    International Nuclear Information System (INIS)

    Margulies, T.S.; Lancaster, L.E.

    1985-01-01

    An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site) step-wise regression analyses were performed on transformations of the spatial concentration patterns simulated. 27 references, 2 figures, 3 tables
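    The sampling-and-regression workflow described above can be sketched in a few lines: draw a Latin hypercube sample over the parameter ranges, run the model, and rank inputs by standardized regression coefficients. The response function below is a made-up stand-in for the CRAC-2 models, and the parameter ranges are illustrative only.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in for a transport code: ground concentration as a
# simple function of deposition velocity vd, washout coefficient w and
# mixing height h (the real CRAC-2 response is far more complex).
def concentration(vd, w, h):
    return vd * np.exp(-w) / h

# Latin hypercube sample over specified parameter ranges.
sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=500)
lo = np.array([0.001, 0.1, 100.0])   # lower bounds for vd, w, h (made up)
hi = np.array([0.01, 1.0, 2000.0])   # upper bounds
x = qmc.scale(u, lo, hi)

y = concentration(x[:, 0], x[:, 1], x[:, 2])

# Regression on standardized inputs/output: the coefficients (SRCs) rank
# the contribution of each parameter to the output variability.
xs = (x - x.mean(0)) / x.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print(dict(zip(["vd", "w", "h"], np.round(src, 3))))
```

The signs alone are already informative: deposition velocity increases the response, while washout and mixing height decrease it; repeating the regression at each downwind distance reproduces the "contributors versus distance" analysis of the abstract.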

  16. Selection of body sway parameters according to their sensitivity and repeatability

    Directory of Open Access Journals (Sweden)

    Nejc Sarabon

    2010-03-01

    For the precise evaluation of body balance, static tests performed on a force plate are the most commonly used. In these tests, body sway characteristics are analyzed based on the inverted-pendulum model by following the movement of the center of pressure (COP) over time. The human body engages different strategies to compensate for balance perturbations. For this reason, there is a need to identify parameters that are sensitive to specific balance changes and that enable the identification of balance sub-components. The aim of our study was to investigate the intra-visit repeatability and sensitivity of 40 different body sway parameters. Twenty-nine subjects participated in the study. They performed three balancing tasks of different levels of difficulty, with three repetitions each. The hip-width parallel stance and the single-leg stance, both with open eyes, were used to compare different balance intensities due to biomechanical changes. Additionally, deprivation of vision was used in the third balance task to study sensitivity to sensory-system changes. As shown by the intraclass correlation coefficient (ICC), cumulative parameters such as COP path, maximal amplitude and frequency showed excellent repeatability (ICC > 0.85). Other parameters describing the sub-dynamics of single repetitions proved to have unsatisfactory repeatability. The parameters most sensitive to increased intensity of the balancing tasks were the common COP path, the COP path in the medio-lateral and antero-posterior directions, and the maximal amplitudes in the same directions. The frequency of oscillations proved to be sensitive only to deprivation of vision. As shown in our study, cumulative parameters describing the path of the center of pressure proved to be the most repeatable and the most sensitive in detecting different intensities of balancing tasks, enabling their future use in balance studies and in clinical practice.

  17. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.

    1986-09-01

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively demonstrate the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over-modelling," in site characterization planning to avoid "over-collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed

  18. Speed Sensorless mixed sensitivity linear parameter variant H_inf control of the induction motor

    NARCIS (Netherlands)

    Toth, R.; Fodor, D.

    2004-01-01

    The paper shows the design of a robust control structure for the speed sensorless vector control of the IM, based on the mixed sensitivity (MS) linear parameter variant (LPV) H∞ control theory. The controller makes possible the direct control of the flux and speed of the motor with torque adaptation

  19. Sensitivity of Disease Parameters to Flexible Budesonide/Formoterol Treatment in an Allergic Rat Model

    OpenAIRE

    Brange , Charlotte; Smailagic , Amir; Jansson , Anne-Helene; Middleton , Brian; Miller-Larsson , Anna; Taylor , John D.; Silberstein , David S.; Lal , Harbans

    2009-01-01


  20. Sensitivity of the amplitude of the single muscle fibre action potential to microscopic volume conduction parameters

    NARCIS (Netherlands)

    Alberts, B.A.; Rutten, Wim; Wallinga, W.; Boom, H.B.K.

    1988-01-01

    A microscopic model of volume conduction was applied to examine the sensitivity of the single muscle fibre action potential to variations in parameters of the source and of the volume conductor, such as conduction velocity, intracellular conductivity and intracellular volume fraction. The model

  1. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability–consequence curves

  2. An easily implemented static condensation method for structural sensitivity analysis

    Science.gov (United States)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
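    The static condensation (Guyan reduction) underlying this approach eliminates the internal degrees of freedom through a Schur complement of the stiffness matrix, so sensitivities need only be evaluated on the much smaller boundary set. A small numerical sketch with a made-up 4-DOF stiffness matrix:

```python
import numpy as np

# Illustrative 4-DOF stiffness matrix, partitioned into boundary DOFs
# (b, kept) and internal DOFs (i, condensed out); values are made up.
K = np.array([[10.0, -2.0, -1.0,  0.0],
              [-2.0,  8.0,  0.0, -1.0],
              [-1.0,  0.0,  6.0, -2.0],
              [ 0.0, -1.0, -2.0,  7.0]])
b = [0, 1]   # boundary DOFs
i = [2, 3]   # internal DOFs

Kbb = K[np.ix_(b, b)]
Kbi = K[np.ix_(b, i)]
Kib = K[np.ix_(i, b)]
Kii = K[np.ix_(i, i)]

# Static condensation: K_red = Kbb - Kbi Kii^{-1} Kib.
K_red = Kbb - Kbi @ np.linalg.solve(Kii, Kib)

# Check: for a load applied only at the boundary DOFs, the reduced system
# reproduces the boundary displacements of the full system exactly.
f_b = np.array([1.0, 0.5])
u_full = np.linalg.solve(K, np.concatenate([f_b, [0.0, 0.0]]))
u_red = np.linalg.solve(K_red, f_b)
print(np.allclose(u_full[:2], u_red))  # True
```

The "black-box" flavor of the abstract corresponds to computing K_red by probing the full solver with unit boundary loads instead of accessing the partitions of K directly; the condensed result is identical for static loads.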

  3. Thermal-Hydraulic Sensitivity Study of Intermediate Loop Parameters for Nuclear Hydrogen Production System

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Hwa; Lee, Heung Nae; Park, Jea Ho [KONES Corp., Seoul (Korea, Republic of); Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Sang Il; Yoo, Yeon Jae [Hyundai Engineering Co., Seoul (Korea, Republic of)

    2016-10-15

    The heat generated by the VHTR is transferred to the intermediate loop (IL) through the Intermediate Heat Exchanger (IHX). It is further passed on to the Sulfur-Iodine (SI) hydrogen production system (HPS) through the Process Heat Exchanger (PHX). The IL provides the safety distance between the VHTR and the HPS. Since the IL performance affects the overall nuclear HPS efficiency, it is required to optimize its design and operation parameters. In this study, the thermal-hydraulic sensitivity of IL parameters with various coolant options has been examined using the MARS-GCR code, which had already been applied to the steam generator case. A sensitivity study of the IL and PHX parameters has been carried out based on their thermal-hydraulic performance. Several design and operation parameters, such as the pipe diameter, safety distance and surface area, are considered for different coolant options: He, CO{sub 2} and He-CO{sub 2} (2:8). It was found that the circulator work is the major factor affecting the overall nuclear hydrogen production system efficiency. Circulator work increases with the safety distance, and decreases with the operation pressure and loop pipe diameter. The sensitivity results obtained from this study will contribute to the optimization of the IL design and operation parameters and to the optimal coolant selection.

  4. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improvements in plant efficiency with rationally high-performance cores and in reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to make effective use of burnup data from actual cores on the basis of the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence has become inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient simply to unify the computational components, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. It is therefore necessary to restructure the current burnup sensitivity analysis code into functional component blocks that can be divided or combined as the occasion requires.

  5. Study of the methodology for sensitivity calculations of fast reactors integral parameters

    International Nuclear Information System (INIS)

    Renke, C.A.C.

    1981-06-01

    A study of the methodology for sensitivity calculations of integral parameters of fast reactors, for the adjustment of multigroup cross sections, is presented. A description of several existing methods and theories is given, with special emphasis on variational perturbation theory, which is an integral part of the sensitivity code VARI-1D used in this work. Two calculational systems are defined, and a set of procedures and criteria is structured, gathering the necessary conditions for the determination of the sensitivity coefficients. These coefficients are then computed by both the direct method and variational perturbation theory. A reasonable number of sensitivity coefficients are computed and analyzed for three fast critical assemblies, covering a range of special interest of the spectrum. These coefficients are determined for several integral parameters, for the capture and fission cross sections of U-238 and Pu-239, covering all energies up to 14.5 MeV. The nuclear data used were obtained from the CARNAVAL II calculational system of the Instituto de Engenharia Nuclear. An optimization of the sensitivity computations in a chained sequence of procedures is made, yielding the sensitivities in the energy macrogroups as the final stage. (Author) [pt

  6. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: • A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. • The thermal conductivity of UO{sub 2} is modeled as a random variable. • The first source of uncertainty is the linear heat rate. • The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during irradiation in the reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all the parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as the most sensitive: the thermal conductivity of UO{sub 2}, the radial distribution of power in the fuel pellet, the local linear heat rate in the fuel rod, the geometry of the pellet, and the thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte Carlo technique has been performed for different experimental irradiations of UO{sub 2} fuel. At every time step of the simulated experiments, we obtain a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We can then estimate confidence intervals for the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using first-order Sobol' indices.
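    The propagation-plus-Sobol workflow can be sketched without METEOR or URANIE on a hypothetical surrogate for the fuel temperature (the toy model, distributions and coefficients below are all assumptions, not the code's physics): sample the uncertain inputs, read off confidence intervals from the output distribution, and estimate first-order Sobol' indices with a brute-force correlation estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical surrogate for the fuel temperature: additive effect of
# linear heat rate q and thermal conductivity lam (toy model only).
def temp(q, lam):
    return 500.0 + 20.0 * q + 300.0 / lam

def sample(m):
    q = rng.normal(20.0, 2.0, m)       # linear heat rate (illustrative)
    lam = rng.normal(3.0, 0.15, m)     # conductivity (illustrative)
    return q, lam

q, lam = sample(n)
y = temp(q, lam)
print(np.percentile(y, [2.5, 97.5]))   # 95% interval on temperature

# First-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y), estimated by
# re-sampling every input except the one of interest: cov(Y, Y') with Y'
# sharing only X_i with Y equals Var(E[Y|X_i]).
def sobol_first(fix):
    q2, lam2 = sample(n)
    y_cond = temp(q, lam2) if fix == "q" else temp(q2, lam)
    return np.cov(y, y_cond)[0, 1] / y.var()

print(sobol_first("q"), sobol_first("lam"))
```

With these illustrative spreads the heat rate carries ~98% of the output variance and the conductivity ~2%, mirroring the ranking of the two highlighted uncertainty sources.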

  7. Simplified procedures for fast reactor fuel cycle and sensitivity analysis

    International Nuclear Information System (INIS)

    Badruzzaman, A.

    1979-01-01

    The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a fast reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the Integral Transport approach, where considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure for this region, the core-blanket interface and the blanket region, led to the extension and development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of key fuel cycle parameters such as cost, and of their sensitivity to variations in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variations on these parameters very well. The accuracy of these procedures and their use in analyzing a wide variety of sensitivities demonstrate the potential utility of survey calculations in fast reactor analysis and fuel management

  8. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty

  9. Sensitivity analysis of predictive models with an automated adjoint generator

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
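    The economy of the adjoint method is easiest to see on a toy linear model (everything below is a made-up illustration, not the ADGEN formulation): for a response r = g^T x with A(p) x = b, one adjoint solve A^T λ = g yields the sensitivity of r to every parameter at once, whereas the forward approach needs one extra solve per parameter.

```python
import numpy as np

# Toy linear model A(p) x = b with response r = g^T x.
def A(p):
    # Each parameter p[k] scales one hypothetical material region (diagonal).
    return np.diag([2.0 + p[0], 3.0 + p[1], 4.0 + p[2]]) - 0.5 * np.ones((3, 3))

b = np.array([1.0, 2.0, 1.0])
g = np.array([0.0, 0.0, 1.0])   # response picks out x[2]
p0 = np.array([0.1, 0.2, 0.3])

x = np.linalg.solve(A(p0), b)
lam = np.linalg.solve(A(p0).T, g)      # the single adjoint solve

# Adjoint identity (b independent of p): dr/dp_k = -lam^T (dA/dp_k) x.
# Here dA/dp_k is the single-entry matrix E_kk, so dr/dp_k = -lam[k]*x[k].
grad_adjoint = -lam * x

# Verify against forward finite differences (one extra solve per parameter).
eps = 1e-7
grad_fd = np.empty(3)
for j in range(3):
    pj = p0.copy()
    pj[j] += eps
    grad_fd[j] = (g @ np.linalg.solve(A(pj), b) - g @ x) / eps

print(np.max(np.abs(grad_adjoint - grad_fd)))  # tiny: the two agree
```

For a model with thousands of parameters the forward loop grows linearly while the adjoint cost stays at one extra solve, which is the scaling argument made above.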

  10. Sensitivity analysis of time-dependent laminar flows

    International Nuclear Information System (INIS)

    Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.

    2004-01-01

    This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)

  11. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  12. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improvements in plant efficiency with rationally high-performance cores and in reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to make effective use of burnup data from actual cores on the basis of the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence has become inefficient because of the heavy burden placed on the user by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient simply to unify the computational components, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. It is therefore necessary to restructure the current burnup sensitivity analysis code into functional component blocks that can be divided or combined as the occasion requires.

  13. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    Science.gov (United States)

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  14. Vectorial capacity and vector control: reconsidering sensitivity to parameters for malaria elimination.

    Science.gov (United States)

    Brady, Oliver J; Godfray, H Charles J; Tatem, Andrew J; Gething, Peter W; Cohen, Justin M; McKenzie, F Ellis; Perkins, T Alex; Reiner, Robert C; Tusting, Lucy S; Sinka, Marianne E; Moyes, Catherine L; Eckhoff, Philip A; Scott, Thomas W; Lindsay, Steven W; Hay, Simon I; Smith, David L

    2016-02-01

    Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.
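    Macdonald's classic sensitivity argument can be reproduced numerically. The sketch below uses the textbook Ross-Macdonald vectorial capacity V = m·a²·pⁿ/(−ln p) rather than the updated equations of the paper, with purely illustrative parameter values; it estimates the elasticity (relative sensitivity) of V with respect to each parameter by finite differences:

```python
import math

def vectorial_capacity(m, a, p, n):
    """Classic Ross-Macdonald vectorial capacity V = m * a**2 * p**n / (-ln p).

    m: mosquitoes per human, a: human biting rate (1/day),
    p: daily survival probability, n: extrinsic incubation period (days).
    """
    return m * a ** 2 * p ** n / (-math.log(p))

def elasticity(f, params, name, h=1e-6):
    """Relative sensitivity d(ln f)/d(ln x) by a central finite difference."""
    up, dn = dict(params), dict(params)
    up[name] = params[name] * (1 + h)
    dn[name] = params[name] * (1 - h)
    return (math.log(f(**up)) - math.log(f(**dn))) / (2 * h)

# Illustrative values only.
base = dict(m=10.0, a=0.3, p=0.9, n=10)
els = {k: elasticity(vectorial_capacity, base, k) for k in base}
```

With these values the elasticity with respect to daily survival p (about n + 1/(−ln p) ≈ 19.5) dwarfs those for mosquito density m (1) and biting rate a (2), which is the original rationale for prioritizing interventions that kill adult mosquitoes.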

  15. Parameter Sensitivity Study for Typical Expander-Based Transcritical CO2 Refrigeration Cycles

    Directory of Open Access Journals (Sweden)

    Bo Zhang

    2018-05-01

    A sensitivity study was conducted for three typical expander-based transcritical CO2 cycles using a developed simulation model, and the sensitivities of the maximum coefficient of performance (COP) to the key operating parameters, including the gas cooler inlet pressure, the temperatures at the evaporator inlet and gas cooler outlet, the inter-stage pressure and the isentropic efficiency of the expander, were obtained. The results showed that the sensitivity to the gas cooler inlet pressure differs greatly before and after the optimal gas cooler inlet pressure. The sensitivity to the intercooler outlet temperature in the two-stage cycles increases sharply to near zero and then remains almost constant at intercooler outlet temperatures above 45 °C; however, the sensitivity stabilizes near zero at evaporator inlet temperatures as low as −26.1 °C. In the two-stage compression cycle with an intercooler and an expander assisting in driving the first-stage compressor (TEADFC cycle), an abrupt change in the sensitivity of the maximum COP to the inter-stage pressure was observed, which disappears once the intercooler outlet temperature exceeds 50 °C. The sensitivity of the maximum COP to the expander isentropic efficiency increases almost linearly with the expander isentropic efficiency.

  16. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for verification of the uranium enrichment of low-enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, pipe diameters smaller than 40 mm, and pressures below 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case assumptions with regard to the measurement conditions, and on realistic assumptions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty that are experimentally inaccessible. (orig.)

  17. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and to affect the outcome only through its effect on the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the association between the putative IV and the unmeasured confounders, and the direct effect of the IV on the outcome, are bounded in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  18. Sensitive parameters' optimization of the permanent magnet supporting mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yongguang; Gao, Xiaohui; Wang, Yixuan; Yang, Xiaowei [Beihang University, Beijing (China)

    2014-07-15

    The fast development of ultra-high-speed vertical rotors has promoted study and exploration of their supporting mechanisms. How to increase rotational speed and overcome vibration as the rotor passes through its low-order critical frequencies has become a focus of research. This paper introduces a permanent magnet (PM) supporting mechanism and describes an optimization method for its sensitive parameters, which enables the vertical rotor system to reach 80000 r/min smoothly. We first identify the sensitive parameters by analyzing the rotor's behavior while it is run up to high speed; we then study these parameters and summarize their regularities by combining experiment with the finite element method (FEM); finally, we arrive at an optimization method for these parameters. This not only gives a stable speed-raising effect and greatly shortens debugging time, but also promotes wider application of the PM supporting mechanism in ultra-high-speed vertical rotors.

  19. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations among the parameters of the moving average method in order to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If external events have a unique vibration frequency, the control parameters of the moving average method should be optimized to detect these events efficiently. A phase sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, comprising a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
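    The moving-average operation with control parameters M, N and n can be sketched as follows (a minimal illustration, not the authors' implementation; the array shapes, the simulated data and the final differencing step are assumptions):

```python
import numpy as np

def moving_average_traces(raw_traces, N, n):
    """Average N consecutive raw traces, sliding the window by n traces.

    raw_traces: (M, L) array of M raw OTDR traces, each L samples long.
    Returns one averaged trace per window position.
    """
    M = raw_traces.shape[0]
    starts = range(0, M - N + 1, n)
    return np.array([raw_traces[s:s + N].mean(axis=0) for s in starts])

# Simulated data: M = 100 raw traces of 2000 samples each.
rng = np.random.default_rng(0)
traces = rng.normal(size=(100, 2000))
avg = moving_average_traces(traces, N=10, n=2)
# Differencing consecutive averaged traces suppresses the static
# backscatter profile and leaves the vibration-induced changes.
diff = np.diff(avg, axis=0)
```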

  20. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    Science.gov (United States)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-03-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of the ductile iron and the thermal cycle) on key features of the heat treatment (such as the minimum required times for austenitization and austempering and the microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of the microconstituents (the microscale). The paper examines the sensitivity of the process by means of a sensitivity index and scatter plots. The sensitivity indices are determined using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select appropriate process parameter values to obtain parts with a required microstructural characteristic.
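    A variance-based first-order sensitivity index of the kind used here, S_i = Var[E(y|x_i)]/Var(y), can be estimated directly from scatter-plot data by binning. The sketch below uses a hypothetical test function, not the paper's thermo-metallurgical model:

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Estimate the first-order sensitivity index S = Var[E(y|x)] / Var(y)
    by binning the (x, y) scatter into equal-count bins of x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    grand = np.average(means, weights=counts)
    return np.average((means - grand) ** 2, weights=counts) / y.var()

# Hypothetical model: y depends strongly on x1 and weakly on x2.
rng = np.random.default_rng(42)
x1, x2 = rng.uniform(size=(2, 5000))
y = x1 ** 2 + 0.1 * x2
s1 = first_order_index(x1, y)
s2 = first_order_index(x2, y)
```

The index for the influential input comes out close to one, while the index for the weak input stays near zero, mirroring the scatter-plot reading described in the abstract.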

  2. Sensitivity analysis for modules for various biosphere types

    International Nuclear Information System (INIS)

    Karlsson, Sara; Bergstroem, U.; Rosen, K.

    2000-09-01

    This study presents the results of a sensitivity analysis for the modules developed earlier for calculation of ecosystem-specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values obtained in deterministic calculations. An overview of the distribution of radionuclides between different environmental compartments in the models is also presented. The radionuclides included in the study were 36Cl, 59Ni, 93Mo, 129I, 135Cs, 237Np and 239Pu, selected to represent various behaviour in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed good agreement for most nuclides and modules. Exceptions occurred where very skewed distributions were used for parameters important for the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules except the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide-specific parameters were of minor importance for the retention of radionuclides over the simulated time period of 10 000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source of future exposure. Differences between the nuclides due to element-specific Kd values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure of humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis.
The only exception was the well module where

  3. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.
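    The sample-reuse idea can be illustrated with a toy fracture problem. Assuming a simple exponential POD curve POD(a) = 1 − exp(−a/λ) and a deterministic crack-growth factor (both hypothetical stand-ins for the paper's parametric POD curves and fracture-mechanics model), the derivative of the POF with respect to λ is obtained from the same Monte Carlo samples used for the POF itself:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 2.0        # scale of the hypothetical POD curve POD(a) = 1 - exp(-a / lam)
a_crit = 3.0     # critical crack size
growth = 1.8     # deterministic growth factor between inspection and service

# Monte Carlo samples of the initial crack size (illustrative distribution).
a0 = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)

fails = (growth * a0 > a_crit)       # component fails if the flaw goes undetected
miss = np.exp(-a0 / lam)             # P(miss at inspection) = 1 - POD(a0)

pof = np.mean(fails * miss)
# Sensitivity dPOF/dlam from the SAME samples: differentiate 1 - POD(a; lam)
# analytically inside the expectation, so no re-sampling is needed.
dpof_dlam = np.mean(fails * miss * a0 / lam ** 2)
```

Because the derivative is evaluated per sample, the sensitivity costs essentially nothing beyond the POF estimate, which is the post-processing property the abstract describes.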

  5. Study and analysis of drift chamber parameters

    International Nuclear Information System (INIS)

    Martinez Laso, L.

    1988-01-01

    The present work deals mainly with drift chambers. In the first chapter a summary of drift chamber properties is presented; the information has been collected from the extensive bibliography available in this field. A very simple calculation procedure for drift chamber parameters has been developed and is presented in detail in the second chapter. Some prototypes have been built following two geometries (multidrift chamber and Z-chambers). Several installations have been used for testing and calibrating these prototypes; a complete description of these installations is given in the third chapter. Cosmic rays, beta particles from a 106Ru radioactive source and a test beam in the West Area (WA) of the SPS at CERN have been used for experimental purposes. The analysis and the results are described for the different setups. The experimental measurements have been used to produce a complete cell parametrization (position as a function of drift time) and to obtain spatial resolution values (in the range of 200-250 µm). Experimental results are in good agreement with numerical calculations. (Author)

  6. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Science.gov (United States)

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  7. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    Science.gov (United States)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been widely used in clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast reagent/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling to better represent the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging

  8. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary
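    Local sensitivity analysis of the kind applied here is typically based on normalized derivatives S_i = (p_i/y)·∂y/∂p_i evaluated at the fitted parameter values. A minimal sketch follows, using a toy thermodynamic-style occupancy function; the function, its parameters q and w, and the chosen values are all hypothetical, not taken from the models in the paper:

```python
def local_sensitivities(model, params, h=1e-6):
    """Normalized local sensitivity coefficients S_i = (p_i / y) * dy/dp_i,
    estimated by central finite differences around the fitted point."""
    y0 = model(params)
    sens = {}
    for k, v in params.items():
        up = {**params, k: v * (1 + h)}
        dn = {**params, k: v * (1 - h)}
        sens[k] = (model(up) - model(dn)) / (2 * h * v) * (v / y0)
    return sens

# Toy thermodynamic-style occupancy of a binding site with weight q and
# cooperativity w (both the expression and the values are hypothetical).
def occupancy(p):
    q, w = p["q"], p["w"]
    return (q + w * q ** 2) / (1.0 + q + w * q ** 2)

s = local_sensitivities(occupancy, {"q": 0.1, "w": 5.0})
```

At this operating point the output is several times more sensitive to the binding weight q than to the cooperativity w, the kind of differential sensitivity between parameter classes the abstract describes.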

  9. Dependence of mis-alignment sensitivity of ring laser gyro cavity on cavity parameters

    Energy Technology Data Exchange (ETDEWEB)

    Sun Feng; Zhang Xi; Zhang Hongbo; Yang Changcheng, E-mail: sunok1234@sohu.com [Huazhong Institute of Electro-Optics - Wuhan National Lab for Optoelectronics, Wuhan, Hubei (China)

    2011-02-01

    The ring laser gyroscope (RLG), as a rotation sensor, has been widely used for navigation and guidance on vehicles and missiles. The environment of strong random-vibration and large acceleration may deteriorate the performance of the RLG due to the vibration-induced tilting of the mirrors. In this paper the RLG performance is theoretically analyzed and the parameters such as the beam diameter at the aperture, cavity mirror alignment sensitivities and power loss due to the mirror tilting are calculated. It is concluded that by carefully choosing the parameters, the significant loss in laser power can be avoided.

  10. Dynamic Simulation, Sensitivity and Uncertainty Analysis of a Demonstration Scale Lignocellulosic Enzymatic Hydrolysis Process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Sin, Gürkan

    2014-01-01

    This study presents the uncertainty and sensitivity analysis of a lignocellulosic enzymatic hydrolysis model considering both model and feed parameters as sources of uncertainty. The dynamic model is parametrized for accommodating various types of biomass, and different enzymatic complexes...

  11. Global sensitivity analysis using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, Bruno

    2008-01-01

    Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
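    The post-processing step is straightforward once the PCE coefficients are known: for an orthonormal basis, the model variance is the sum of the squared non-constant coefficients, and each Sobol' index is a partial sum over the matching multi-indices. A minimal sketch (the two-variable coefficient set below is invented for illustration):

```python
def sobol_from_pce(coeffs):
    """Sobol' indices from the coefficients of an orthonormal PCE.

    coeffs maps multi-indices (tuples of polynomial degrees, one per input
    variable) to coefficients c_alpha.  For an orthonormal basis, the model
    variance is the sum of squared non-constant coefficients.
    Returns (first_order, total) index lists, one entry per variable.
    """
    dim = len(next(iter(coeffs)))
    var = sum(c * c for alpha, c in coeffs.items() if any(alpha))
    first = [0.0] * dim
    total = [0.0] * dim
    for alpha, c in coeffs.items():
        if not any(alpha):
            continue  # the mean term carries no variance
        active = [i for i, d in enumerate(alpha) if d > 0]
        if len(active) == 1:
            first[active[0]] += c * c / var
        for i in active:
            total[i] += c * c / var
    return first, total

# Invented 2-variable expansion: y = 1 + 3*P1(x1) + 0.5*P2(x2) + 0.2*P1(x1)*P1(x2)
coeffs = {(0, 0): 1.0, (1, 0): 3.0, (0, 2): 0.5, (1, 1): 0.2}
S, ST = sobol_from_pce(coeffs)
```

Because the indices are exact functions of the coefficients, their cost reduces to that of estimating the expansion, which is the point made in the abstract.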

  13. Relative sensitivity analysis of the predictive properties of sloppy models.

    Science.gov (United States)

    Myasnikova, Ekaterina; Spirov, Alexander

    2018-01-25

    Commonly found among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. Sloppiness can be expressed mathematically through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system's behavior under altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.

  14. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and by a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of the solar irradiance forecasts of the ensemble model was 28.10% lower than that of the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
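    The NRMSE metric used for the comparison can be computed as below. Normalization conventions vary (capacity, mean, range); the report's exact choice is not stated in the abstract, so the default used here is an assumption:

```python
import numpy as np

def nrmse(forecast, actual, norm=None):
    """Normalized root-mean-squared error.  By default, normalized by the
    maximum of the actual series (one common convention; an assumption here,
    since the abstract does not state the report's normalization)."""
    norm = np.max(actual) if norm is None else norm
    return np.sqrt(np.mean((forecast - actual) ** 2)) / norm

e = nrmse(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 5.0]))
```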

  15. A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors

    Directory of Open Access Journals (Sweden)

    Xi Yu

    2014-01-01

    Full Text Available Life cycle assessment (LCA) has been widely used over the last two decades in the design phase to reduce a product's environmental impacts across the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. To improve the quality of ecodesign, there is a growing need for an approach that can relate changes in the design parameters to the product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors, those with significant influence on the product's environmental impacts, can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.

  16. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations of its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
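The variance-based indices discussed here can be estimated with a standard pick-and-freeze Monte Carlo scheme. The two-input model below is a toy with analytically known first-order indices (S1 = 0.2, S2 = 0.8), not one of the thesis's nuclear engineering applications:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # Toy additive model with independent U(0,1) inputs;
    # analytically S1 = 0.2 and S2 = 0.8
    return x1 + 2.0 * x2

n = 100_000
A = rng.uniform(size=(n, 2))
B = rng.uniform(size=(n, 2))
yA = model(A[:, 0], A[:, 1])

def first_order(i):
    # Pick-and-freeze: keep column i from sample A, take the rest from B;
    # the covariance of the two outputs isolates the variance due to input i
    AB = B.copy()
    AB[:, i] = A[:, i]
    yAB = model(AB[:, 0], AB[:, 1])
    return np.cov(yA, yAB)[0, 1] / yA.var(ddof=1)

s1, s2 = first_order(0), first_order(1)
```

As the thesis points out, such indices lose their usual interpretation when the inputs are correlated; the sketch above assumes independent inputs.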

  17. Study of the sensitivity of integral parameters related to 232 Thorium cross sections

    International Nuclear Information System (INIS)

    Guimaraes, L.N.F.; Menezes, A.

    1986-01-01

    The THOR critical assembly is used to test 232 Th basic nuclear data from ENDL-78, ENDF/B-IV, INDL-83, JENDL-1 and JENDL-2. The FORSS and UNISENS systems are used to calculate integral parameters and sensitivity profiles. The results show that the 232 Th data from JENDL-2 are superior to the others, with ENDL-78 showing the worst performance. The discrepancies can be attributed to the different evaluations of the 232 Th scattering cross section. (Author) [pt

  18. SENSITIVITY OF BODY SWAY PARAMETERS DURING QUIET STANDING TO MANIPULATION OF SUPPORT SURFACE SIZE

    Directory of Open Access Journals (Sweden)

    Sarabon Nejc

    2010-09-01

    Full Text Available The centre of pressure (COP) movement during stance maintenance on a stable surface is commonly used to describe and evaluate static balance. The aim of our study was to test the sensitivity of individual COP parameters to different stance positions, which were used to produce size-specific changes in the support surface. Twenty-nine subjects participated in the study. They carried out three 60-second repetitions of each of the five balance tasks (parallel stance, semi-tandem stance, tandem stance, contra-tandem stance, single leg stance). Using the force plate, the monitored parameters included the total COP distance, the distance covered in the antero-posterior and medio-lateral directions, the maximum oscillation amplitude in the antero-posterior and medio-lateral directions, the total frequency of oscillation, and the frequency of oscillation in the antero-posterior and medio-lateral directions. The parameters describing the total COP distance were the most sensitive to changes in the balance task, whereas the frequency of oscillation proved somewhat less sensitive. Reductions in the support surface size in each direction resulted in proportional changes of sway in the antero-posterior and medio-lateral directions. The frequency of oscillation did not increase evenly with the difficulty of the balance task, but reached a certain value above which it did not increase. Our study revealed the monitored COP parameters to be sensitive to support surface size manipulations. The results of the study provide an important source for clinical and research use of body sway measurements.

  19. Application of Monte Carlo filtering method in regional sensitivity analysis of AASHTOWare Pavement ME design

    Directory of Open Access Journals (Sweden)

    Zhong Wu

    2017-04-01

    Full Text Available Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method retains many advantages of global sensitivity analysis, while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software was provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches that in a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method were further discussed.
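The filtering step of MCF is straightforward to sketch: classify Monte Carlo runs as behavioral or non-behavioral against a design criterion, then compare the input distributions of the two groups. The rutting surrogate, input ranges, and threshold below are hypothetical stand-ins, not the AASHTOWare Pavement ME models:

```python
import numpy as np

rng = np.random.default_rng(42)

def rut_depth(thickness, modulus, traffic):
    # Hypothetical rutting surrogate, illustrative only: rutting grows
    # with traffic and shrinks with layer thickness and stiffness
    return 25.0 * traffic / (thickness * np.sqrt(modulus))

n = 20_000
inputs = {
    "thickness": rng.uniform(4.0, 12.0, n),     # in
    "modulus":   rng.uniform(200.0, 800.0, n),  # ksi
    "traffic":   rng.uniform(0.5, 2.0, n),      # normalized load repetitions
}
rut = rut_depth(**inputs)
behavioral = rut < 0.18  # assumed design criterion (in)

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: maximum separation
    # between the empirical CDFs of the two input samples
    grid = np.sort(np.concatenate([a, b]))
    ca = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(ca - cb)))

ks = {name: ks_statistic(x[behavioral], x[~behavioral])
      for name, x in inputs.items()}
```

A large Kolmogorov-Smirnov distance flags an input whose value strongly determines whether the criterion is met, which is the regional sensitivity notion used above.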

  20. Simple Sensitivity Analysis for Orion GNC

    Science.gov (United States)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
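One of the sensitivity measures described, success probability conditioned on an input variable, can be sketched as follows; the dispersed variables, the miss-distance model, and the 2-km requirement are all invented for illustration, not Orion data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Monte Carlo campaign: touchdown miss distance driven by a
# wind dispersion and a mass dispersion (names and model invented)
n = 50_000
wind = rng.normal(0.0, 1.0, n)
mass = rng.normal(0.0, 1.0, n)
miss = np.abs(2.0 * wind + 0.3 * mass + rng.normal(0.0, 0.5, n))

success = miss < 2.0  # requirement: land within 2 km of target

def binned_success(x, nbins=10):
    # Success probability conditioned on which decile of x a run fell in;
    # a flat profile means x barely influences requirement satisfaction
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, nbins - 1)
    return np.array([success[idx == b].mean() for b in range(nbins)])

spread_wind = np.ptp(binned_success(wind))
spread_mass = np.ptp(binned_success(mass))
```

A large spread in the conditional success profile marks a critical factor, while a flat profile marks a variable that does not drive requirement satisfaction.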

  1. Adjoint sensitivity of global cloud droplet number to aerosol and dynamical parameters

    Directory of Open Access Journals (Sweden)

    V. A. Karydis

    2012-10-01

    Full Text Available We present the development of the adjoint of a comprehensive cloud droplet formation parameterization for use in aerosol-cloud-climate interaction studies. The adjoint efficiently and accurately calculates the sensitivity of cloud droplet number concentration (CDNC) to all parameterization inputs (e.g., updraft velocity, water uptake coefficient, aerosol number and hygroscopicity) with a single execution. The adjoint is then integrated within three-dimensional (3-D) aerosol modeling frameworks to quantify the sensitivity of CDNC formation globally to each parameter. Sensitivities are computed for year-long executions of the NASA Global Modeling Initiative (GMI) Chemical Transport Model (CTM), using wind fields computed with the Goddard Institute for Space Studies (GISS) Global Circulation Model (GCM) II', and the GEOS-Chem CTM, driven by meteorological input from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office (GMAO). We find that over polluted (pristine) areas, CDNC is more sensitive to updraft velocity and uptake coefficient (aerosol number and hygroscopicity). Over the oceans of the Northern Hemisphere, addition of anthropogenic or biomass burning aerosol is predicted to increase CDNC, in contrast to coarse-mode sea salt, which tends to decrease CDNC. Over the Southern Oceans, CDNC is most sensitive to sea salt, which is the main aerosol component of the region. Globally, CDNC is predicted to be less sensitive to changes in the hygroscopicity of the aerosols than in their concentration, with the exception of dust, where CDNC is very sensitive to particle hydrophilicity over arid areas. Regionally, the sensitivities differ considerably between the two frameworks and quantitatively reveal why the models differ considerably in their indirect forcing estimates.

  2. Key parameters analysis of hybrid HEMP simulator

    International Nuclear Information System (INIS)

    Mao Congguang; Zhou Hui

    2009-01-01

    According to the new standards on the high-altitude electromagnetic pulse (HEMP) developed by the International Electrotechnical Commission (IEC), the target parameter requirements of the key structure of the hybrid HEMP simulator are decomposed. Firstly, the influences of different excitation sources and biconical structures on the key parameters of the radiated electric field waveform are investigated and analyzed. Then, based on the influence curves, the target parameter requirements of the pulse generator are proposed. Finally, appropriate parameters of the biconical structure and the excitation sources are chosen, and the computed electric field in free space is presented. The results are of great value for the design of the hybrid HEMP simulator. (authors)

  3. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and the diagnosis of reciprocating compressors are currently among the hot research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarm and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarm and automatic diagnosis in practical engineering is an urgent task. The typical mechanical faults of reciprocating compressors are presented in the paper, and existing data from an online monitoring system are used to extract fault feature parameters of 15 types in total; the inner sensitive connection between faults and the feature parameters is clarified using the distance evaluation technique, and sensitive characteristic parameters of the different faults are obtained. On this basis, a method based on fault feature parameters and support vector machines (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning capability is demonstrated by experiments and practical fault cases, and automatic SVM classification of fault alarm data has achieved better diagnostic accuracy.

  4. Transient analysis of intercalation electrodes for parameter estimation

    Science.gov (United States)

    Devan, Sheba

    An essential part of integrating batteries as power sources in any application, be it a large-scale automotive application or a small-scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with a microprocessor-based BMS (called a "smart battery") helps prolong the life of the battery by operating in the optimal regime, and provides accurate information about the battery to the end user. The main purposes of a BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking changes in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS must be prompt, which requires a time-efficient methodology for extracting the parameters. The traditional transient techniques applied so far may not be suitable, for reasons such as the inability to apply them while the battery is in operation and long experimental times. The primary aim of this research work is to design a fast, accurate, and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short-time response to a sinusoidal input perturbation, in the time domain, is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short-time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short-time response and the input perturbation are transformed into the frequency domain using the Fast Fourier Transform.

  5. Sensitivity Analysis of BLISK Airfoil Wear †

    Directory of Open Access Journals (Sweden)

    Andreas Kellersmann

    2018-05-01

    Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one; it changes the aerodynamic behavior of the blades and therefore the engine's performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can influence compressor performance more strongly than in a conventional HPC. To ensure effective maintenance of BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. A design of experiments (DoE) is performed to identify the geometric properties which lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber as coupled parameters are identified as the most important parameters for all operating points.

  6. Impact parameter sensitive study of inner-shell atomic processes in the experimental storage ring

    Science.gov (United States)

    Gumberidze, A.; Kozhuharov, C.; Zhang, R. T.; Trotsenko, S.; Kozhedub, Y. S.; DuBois, R. D.; Beyer, H. F.; Blumenhagen, K.-H.; Brandau, C.; Bräuning-Demian, A.; Chen, W.; Forstner, O.; Gao, B.; Gassner, T.; Grisenti, R. E.; Hagmann, S.; Hillenbrand, P.-M.; Indelicato, P.; Kumar, A.; Lestinsky, M.; Litvinov, Yu. A.; Petridis, N.; Schury, D.; Spillmann, U.; Trageser, C.; Trassinelli, M.; Tu, X.; Stöhlker, Th.

    2017-10-01

    In this work, we present a pilot experiment in the experimental storage ring (ESR) at GSI devoted to impact parameter sensitive studies of inner shell atomic processes for low-energy (heavy-) ion-atom collisions. The experiment was performed with bare and He-like xenon ions (Xe54+, Xe52+) colliding with neutral xenon gas atoms, resulting in a symmetric collision system. This choice of the projectile charge states was made in order to compare the effect of a filled K-shell with the empty one. The projectile and target X-rays have been measured at different observation angles for all impact parameters as well as for the impact parameter range of ∼35-70 fm.

  7. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention in sensitivity analysis. This is especially true when the models form the basis of management decisions, where it is important to assess how sensitive those decisions are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method, a method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension uses a multi-objective Pareto optimal search to locate the minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extended MORE method is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account show the importance of ensuring that any sensitivity analysis accounts for these changes.

  8. A global sensitivity analysis of crop virtual water content

    Science.gov (United States)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, no assessments of data sensitivity to model parameters performed at the global scale are known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the actual crop yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5x5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
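The sensitivity index defined here, the ratio of the relative change of VWC to the relative change of an input parameter, can be sketched directly. The evapotranspiration and yield relations below are invented single-cell stand-ins, not the study's daily soil water balance:

```python
def sensitivity_index(f, p_ref, rel=0.01):
    # S = (relative change of output) / (relative change of input)
    f0 = f(p_ref)
    f1 = f(p_ref * (1.0 + rel))
    return ((f1 - f0) / f0) / rel

# Invented relations: seasonal ET grows linearly with the growing-period
# length L (days), while yield saturates with L
def vwc_of_length(L):
    et = 4.5 * L                  # mm of seasonal evapotranspiration
    yld = 8.0 * L / (60.0 + L)    # ton/ha, saturating in L
    return 10.0 * et / yld        # m^3 of water per ton of crop

s_length = sensitivity_index(vwc_of_length, p_ref=120.0)
```

The positive index reflects the reported behavior: lengthening the growing period always increases VWC in this toy cell.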

  9. What can we learn from global sensitivity analysis of biochemical systems?

    Science.gov (United States)

    Kent, Edward; Neumann, Stefan; Kummer, Ursula; Mendes, Pedro

    2013-01-01

    Most biological models of intermediate size, and probably all large models, need to cope with the fact that many of their parameter values are unknown. In addition, it may not be possible to identify these values unambiguously on the basis of experimental data. This raises the question of how reliable predictions made using such models are. Sensitivity analysis is commonly used to measure the impact of each model parameter on the model's variables. However, the results of such analyses can depend on the exact set of parameter values used, due to nonlinearity. To mitigate this problem, global sensitivity analysis techniques calculate parameter sensitivities over a wider parameter space. We applied global sensitivity analysis to a selection of five signalling and metabolic models, several of which incorporate experimentally well-determined parameters. Assuming these models represent physiological reality, we explored how the results could change under increasing amounts of parameter uncertainty. Our results show that parameter sensitivities calculated with the physiological parameter values are not necessarily the most frequently observed under random sampling, even in a small interval around the physiological values. Often multimodal distributions were observed. Unsurprisingly, the range of possible sensitivity coefficient values increased with the level of parameter uncertainty, though the amount of parameter uncertainty at which the pattern of control was able to change differed among the models analysed. We suggest that this level of uncertainty can be used as a global measure of model robustness. Finally, a comparison of different global sensitivity analysis techniques shows that, if high-throughput computing resources are available, then random sampling may actually be the most suitable technique.

  10. Sensitivity analysis of recovery efficiency in high-temperature aquifer thermal energy storage with single well

    DEFF Research Database (Denmark)

    Jeon, Jun-Seo; Lee, Seung-Rae; Pasquinelli, Lisa

    2015-01-01

    ... it is getting more attention as these issues are gradually alleviated. In this study, a sensitivity analysis of recovery efficiency in two cases of an HT-ATES system with a single well is conducted to select key parameters. A fractional factorial design is used to choose input parameters with uniformity ... with the Smoothly Clipped Absolute Deviation penalty, is utilized. Finally, the sensitivity analysis is performed based on the variation decomposition. According to the results of the sensitivity analysis, the most important input variables are selected and confirmed, with the interaction effects considered for each case ...

  11. Sensitivity analysis in the WWTP modelling community – new opportunities and applications

    DEFF Research Database (Denmark)

    Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.

    2010-01-01

    A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time with a small factor. An alternative viewpoint on sensitivity ... design (BSM1 plant layout) using Standardized Regression Coefficients (SRC) and (ii) applying sensitivity analysis to help fine-tune a fuzzy controller for a BNPR plant using Morris Screening. The results obtained from each case study are then critically discussed in view of practical applications ...
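A standardized-regression-coefficient analysis of the kind applied to the BSM1 layout can be sketched on Monte Carlo output; the three-input effluent model and its coefficients below are invented for illustration, not the BSM1 plant model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical Monte Carlo sample of three model inputs with
# different means and spreads
n = 5_000
X = rng.normal(size=(n, 3)) * np.array([0.5, 2.0, 1.0]) + np.array([5.0, 20.0, 1.0])

# Invented near-linear effluent-quality model with additive noise
y = 3.0 * X[:, 0] + 0.4 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Standardize inputs and output; the least-squares slopes on the scaled
# data are then the standardized regression coefficients (SRCs)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Coefficient of determination of the scaled fit
r2 = 1.0 - np.sum((ys - Xs @ src) ** 2) / np.sum(ys ** 2)
```

SRCs are only a meaningful sensitivity measure when the scaled linear fit is good (R^2 close to 1), which is why the coefficient of determination is reported alongside them.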

  12. Statistical analysis of earthquake ground motion parameters

    International Nuclear Information System (INIS)

    1979-12-01

    Several earthquake ground response parameters that define the strength, duration, and frequency content of the motions are investigated using regression analysis techniques; these techniques incorporate statistical significance testing to establish the terms in the regression equations. The parameters investigated are the peak acceleration, velocity, and displacement; Arias intensity; spectrum intensity; bracketed duration; Trifunac-Brady duration; and response spectral amplitudes. The study provides insight into how these parameters are affected by magnitude, epicentral distance, local site conditions, direction of motion (i.e., whether horizontal or vertical), and earthquake event type. The results are presented in a form that facilitates their use in the development of seismic input criteria for nuclear plants and other major structures. They are also compared with results from prior investigations that have been used in the past in criteria development for such facilities.

  13. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    OpenAIRE

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional com...

  14. Analysis of dynamic parameters of mine fans

    Science.gov (United States)

    Russky, E. Yu

    2018-03-01

    The design of the rotor of an axial fan and its main units, namely the double-leaf-blade impeller and the main shaft, are discussed. The parameters of a mine air flow disturbed by sudden outbursts are determined, and the influence of the disturbances on the frequencies of the axial fan units is assessed. The assessment covers the disturbance effect on the blades and on the torsional vibrations of the main shafts. The dependences of the stresses in the elements of the rotor on the disturbed air flow parameters are derived.

  15. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirements for coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, in contrast to that of a conventional time series model, indicated the absence of clusters, and the residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. In the modified model, however, the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6%, and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. Consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population is found to widen over the years from 1990 to 2001. This is because the rate of energy consumption increases over the years, and because the confidence level decreases as the projection extends further into the future. (author)

  16. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  17. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  18. Analysis of Hydrological Sensitivity for Flood Risk Assessment

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Sharma

    2018-02-01

    Full Text Available The Brahmaputra River, with its annual regional flooding, has played an important role in the Indian government's Pilot Basin Study (PBS) undertaken to maximize Integrated Water Resource Management (IWRM). The selected Kulsi River, a part of the Brahmaputra sub-basin, experienced severe floods in 2007 and 2008. In this study, the Rainfall-Runoff-Inundation (RRI) hydrological model was used to simulate these recent historical floods in order to understand and improve the integrated flood risk management plan. The ultimate objective was to evaluate the sensitivity of the hydrologic simulation to different Digital Elevation Model (DEM) sources, coupled with DEM smoothing techniques, with a particular focus on the comparison of river discharge and flood inundation extent. The sensitivity analysis showed that, among the input parameters, the RRI model is highly sensitive to Manning's roughness coefficient values for flood plains, followed by the source of the DEM, and then soil depth. After the parameters were optimized, the smoothing filter was found to influence the simulated inundation extent more than the simulated discharge at the outlet. Finally, the calibrated and validated RRI model simulations agreed well with the observed discharge and the Moderate Resolution Imaging Spectroradiometer (MODIS)-detected flood extents.

  19. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238 U in a ZPR-6/7 like environment indicate that SFSE can have a significant contribution to the sensitivity of group constants to resonance parameters. Methods to account for SFSE both for the propagation of uncertainties and for the adjustment of nuclear data are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides a satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method
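The gain from carrying the second-order term can be illustrated on a toy deep-penetration problem, where the response is exponential in the cross section. This is a hypothetical stand-in for the transport calculation of the abstract, not its actual model:

```python
import numpy as np

# Toy deep-penetration response: R(sigma) = exp(-sigma * x) is strongly
# non-linear in the cross section sigma (hypothetical stand-in for the
# sodium-minimum transport problem of the abstract).
x = 10.0                      # slab thickness
sigma0, dsigma = 0.5, 0.05    # nominal cross section and a 10% perturbation

R = lambda s: np.exp(-s * x)
R0 = R(sigma0)
dR = -x * R0                  # analytic first derivative at sigma0
d2R = x * x * R0              # analytic second derivative at sigma0

first_order = R0 + dR * dsigma
second_order = first_order + 0.5 * d2R * dsigma ** 2
exact = R(sigma0 + dsigma)    # "direct substitution"

err1 = abs(first_order - exact) / exact
err2 = abs(second_order - exact) / exact
print(f"1st-order error = {err1:.1%}, 2nd-order error = {err2:.1%}")
```

In this toy case the second-order estimate reduces the first-order error several-fold at the cost of one extra derivative, which is the kind of accuracy/efficiency trade-off against direct substitution that the abstract discusses.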

  20. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel, with each subunit operating independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and construct the production possibility set (PPS) generated by these DMUs, in which the frontier points are considered efficient DMUs. We then introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. Finally, we present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
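The envelopment-form efficiency score underlying such analyses can be computed with a small linear program. A minimal sketch with hypothetical one-input, one-output subunit data (the super-efficiency and perturbation models of the paper are more elaborate):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical subunit (DMU) data: one input, one output per DMU.
X = np.array([[2.0], [4.0], [8.0], [6.0]])   # inputs,  shape (n_dmus, n_inputs)
Y = np.array([[2.0], [4.0], [6.0], [3.0]])   # outputs, shape (n_dmus, n_outputs)
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR envelopment LP:
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                     sum_j lam_j y_j >= y_o,  lam >= 0."""
    c = np.zeros(1 + n)
    c[0] = 1.0                                # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                        # input constraints
        row = np.zeros(1 + n)
        row[0] = -X[o, i]                     # -theta * x_io ...
        row[1:] = X[:, i]                     # ... + sum_j lam_j x_ij <= 0
        A_ub.append(row)
        b_ub.append(0.0)
    for r in range(s):                        # output constraints
        row = np.zeros(1 + n)
        row[1:] = -Y[:, r]                    # -sum_j lam_j y_rj <= -y_ro
        A_ub.append(row)
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
    return res.x[0]

effs = [round(ccr_efficiency(o), 4) for o in range(n)]
print(effs)
```

DMUs scoring 1.0 lie on the frontier; super-efficiency variants simply exclude the evaluated DMU from the reference set on the left-hand side.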

  1. Monogenic functions with parameters in Clifford analysis

    International Nuclear Information System (INIS)

    Le Hung Son.

    1990-02-01

    In this paper we study some properties of monogenic functions taking values in a Clifford algebra and depending on several parameters. It is proved that the Hartogs extension theorems are valid for these functions and for the multi-monogenic functions, which contain solutions of many important systems of partial differential equations in Theoretical Physics. (author). 4 refs

  2. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  3. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received the largest share of attention because they provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost one to two orders of magnitude lower than the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • An RBF-based sensitivity analysis method is proposed. • The Sobol' decomposition of the Gaussian RBF metamodel is obtained. • Sobol' indices of the Gaussian RBF metamodel are derived from the decomposition. • The efficiency of the proposed method is validated by numerical examples.
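For reference, the sampling-based Monte Carlo evaluation of first-order Sobol' indices that such metamodel methods aim to accelerate can be sketched as follows, using the standard Ishigami test function (an assumed example, not necessarily one of the paper's cases):

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    # Classic 3-parameter sensitivity-analysis test function
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 \
        + b * X[:, 2] ** 4 * np.sin(X[:, 0])

d, N = 3, 2 ** 14
A = rng.uniform(-np.pi, np.pi, (N, d))       # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli-type estimator of the first-order index S_i:
# S_i = E[ f(B) * (f(AB_i) - f(A)) ] / Var(f), where AB_i is A with
# column i taken from B.
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(fB * (ishigami(ABi) - fA)) / var

print("first-order Sobol' indices:", S.round(3))
```

The analytic values for this function are roughly S = (0.314, 0.442, 0), and the (d + 2) · N model evaluations needed here are exactly the cost that an RBF metamodel replaces with cheap surrogate evaluations.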

  4. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
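The core DELSA idea, first-order local variance contributions evaluated at many points across the parameter space, can be sketched for a hypothetical two-parameter model (a stand-in, not one of the hydrologic models of the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-linear 2-parameter model (stand-in for a reservoir model)
f = lambda p: p[..., 0] * p[..., 1] ** 2

lo = np.array([0.5, 0.5])
hi = np.array([2.0, 2.0])
var = (hi - lo) ** 2 / 12.0            # variance of the uniform prior ranges

# Evaluate local first-order sensitivity indices at many sample points
pts = rng.uniform(lo, hi, (500, 2))
h = 1e-6
grads = np.empty_like(pts)
for i in range(2):
    dp = np.zeros(2)
    dp[i] = h
    grads[:, i] = (f(pts + dp) - f(pts - dp)) / (2 * h)   # central differences

# DELSA-style index: parameter i's share of local first-order output variance
contrib = grads ** 2 * var
delsa = contrib / contrib.sum(axis=1, keepdims=True)
print("median first-order indices:", np.median(delsa, axis=0))
```

Each model run yields a full set of local indices, so the distribution of sensitivities over the parameter space is obtained at a fraction of the cost of a global Sobol' analysis.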

  5. Uncertainty and sensitivity analysis in a Probabilistic Safety Analysis level-1

    International Nuclear Information System (INIS)

    Nunez Mc Leod, Jorge E.; Rivera, Selva S.

    1996-01-01

    A methodology for sensitivity and uncertainty analysis, applicable to a Probabilistic Safety Assessment Level 1, is presented. The work covers: correct association of distributions to parameters, importance and qualification of expert opinions, generation of samples according to sample size, and study of the relationships among system variables and system response. A series of statistical-mathematical techniques is recommended for the development of the analysis methodology, as well as different graphical visualizations for control of the study. (author)

  6. Sensitivity of acoustic nonlinearity parameter to the microstructural changes in cement-based materials

    Science.gov (United States)

    Kim, Gun; Kim, Jin-Yeon; Kurtis, Kimberly E.; Jacobs, Laurence J.

    2015-03-01

    This research experimentally investigates the sensitivity of the acoustic nonlinearity parameter to microcracks in cement-based materials. Based on the second harmonic generation (SHG) technique, an experimental setup using non-contact, air-coupled detection is used to receive consistent Rayleigh surface waves. To induce variations in the extent of microscale cracking in two types of specimens (concrete and mortar), a shrinkage reducing admixture (SRA) is used in one set, while a companion specimen is prepared without SRA. A 50 kHz wedge transducer and a 100 kHz air-coupled transducer are used for the generation and detection of nonlinear Rayleigh waves. It is shown that the air-coupled detection method provides more repeatable fundamental and second harmonic amplitudes of the propagating Rayleigh waves. The obtained amplitudes are then used to calculate the relative nonlinearity parameter βre, the ratio of the second harmonic amplitude to the square of the fundamental amplitude. The experimental results clearly demonstrate that the nonlinearity parameter (βre) is far more sensitive to microstructural changes in cement-based materials than the Rayleigh phase velocity and attenuation, and that SRA has great potential to prevent shrinkage cracking in cement-based materials.
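The relative nonlinearity parameter defined in the abstract, βre = A2/A1², can be extracted from a received time signal via its amplitude spectrum. A minimal sketch on a synthetic signal with assumed amplitudes (fabricated numbers, not the experiment's data):

```python
import numpy as np

# Synthetic received signal: fundamental at f0 plus a weak second harmonic,
# as produced by material nonlinearity (illustrative amplitudes).
fs, f0 = 5e6, 50e3              # sampling rate and fundamental frequency [Hz]
N = 10000                       # 2 ms record = exactly 100 cycles of f0
t = np.arange(N) / fs
A1_true, A2_true = 1.0, 0.02
sig = A1_true * np.sin(2 * np.pi * f0 * t) \
    + A2_true * np.sin(2 * np.pi * 2 * f0 * t)

# Single-sided amplitude spectrum (record spans an integer number of cycles,
# so there is no spectral leakage and no window is needed here)
spec = np.abs(np.fft.rfft(sig)) * 2 / N
freqs = np.fft.rfftfreq(N, 1 / fs)
A1 = spec[np.argmin(np.abs(freqs - f0))]
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]

beta_re = A2 / A1 ** 2          # relative nonlinearity parameter
print(f"A1={A1:.3f}  A2={A2:.4f}  beta_re={beta_re:.4f}")
```

In practice A1 and A2 are measured at several propagation distances and βre is taken from the slope, but the spectral extraction step is the same.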

  7. Sensitivity analysis of a Pelton hydropower station based on a novel approach of turbine torque

    International Nuclear Information System (INIS)

    Xu, Beibei; Yan, Donglin; Chen, Diyi; Gao, Xiang; Wu, Changzhi

    2017-01-01

    Highlights: • A novel approach to the turbine torque is proposed. • A unified model captures the dynamic characteristics of Pelton hydropower stations. • Sensitivity analyses for hydraulic, mechanic and electric parameters are performed. • Numerical simulations show the sensitivity ranges of the above three kinds of parameters. - Abstract: In hydraulic turbine generator units with long-running operation, the values of hydraulic, mechanic or electric parameters may change gradually, which raises a new question: whether the operating stability of these units will change over the next thirty or forty years. This paper is an attempt to seek a relatively unified model for sensitivity analysis from three aspects: hydraulic parameters (turbine flow and turbine head), mechanic parameters (axis coordinates and axial misalignment) and electric parameters (generator speed and excitation current). First, a novel approach to the Pelton turbine torque is proposed, which connects the hydraulic turbine governing system with the shafting system of the hydro-turbine generator unit. The correctness of this approach is verified by comparison with three other models of hydropower stations. Second, the unified model is analyzed to obtain the sensitivity of the electric parameter (excitation current), the mechanic parameters (axial misalignment, upper guide bearing rigidity, lower guide bearing rigidity, and turbine guide bearing rigidity) and the hydraulic parameters with respect to the operating stability of the unit. In addition, some critical values and ranges are proposed. Finally, these results can provide a basis for the design and stable operation of Pelton hydropower stations.

  8. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  9. Investigation of modern methods of probalistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation

  11. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    Science.gov (United States)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.

  12. Analysis of Modeling Parameters on Threaded Screws.

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics, and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to modeling a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.

  13. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    Science.gov (United States)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
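The adjoint recipe in the abstract (solve the primary problem once, solve one adjoint problem per performance measure, then assemble all parameter derivatives) can be sketched on a 1-D finite-difference analogue of the steady confined-flow equation; the 2-D Galerkin finite element treatment of the paper follows the same pattern. All numbers here are illustrative:

```python
import numpy as np

# 1-D steady confined flow:  -K h'' = w  on (0, L),  h(0) = h(L) = 0,
# discretized with central differences.  Performance measure J = head at
# the midpoint.  (Toy stand-in for the 2-D Galerkin FE model of the paper.)
n, L = 49, 100.0
dx = L / (n + 1)
K, w = 5.0, 1e-3                 # hydraulic conductivity, recharge rate

def solve_head(K):
    A = K / dx ** 2 * (np.diag(2 * np.ones(n))
                       - np.diag(np.ones(n - 1), 1)
                       - np.diag(np.ones(n - 1), -1))
    b = np.full(n, w)
    return A, b, np.linalg.solve(A, b)

A, b, h = solve_head(K)
g = np.zeros(n)
g[n // 2] = 1.0                  # J = g @ h = head at the midpoint node

# Adjoint problem: A^T lam = g, then dJ/dK = -lam @ (dA/dK @ h)
# (the source term b does not depend on K here)
lam = np.linalg.solve(A.T, g)
dA_dK = A / K                    # A is linear in K
dJ_dK_adj = -lam @ (dA_dK @ h)

# Check against direct substitution (central finite difference in K)
eps = 1e-6
dJ_dK_fd = (solve_head(K + eps)[2][n // 2]
            - solve_head(K - eps)[2][n // 2]) / (2 * eps)
print(dJ_dK_adj, dJ_dK_fd)
```

One adjoint solve gives the derivative of J with respect to every parameter entering A or b, which is why the approach scales so well when the parameter count is large.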

  14. Sensitivity of risk parameters to human errors in reactor safety study for a PWR

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hall, R.E.; Swoboda, A.L.

    1981-01-01

    Sensitivities of the risk parameters (emergency safety system unavailabilities, accident sequence probabilities, release category probabilities and core melt probability) to changes in the human error rates were investigated within the general methodological framework of the Reactor Safety Study (RSS) for a Pressurized Water Reactor (PWR). The impact of individual human errors was assessed both in terms of their structural importance to core melt and their reliability importance on core melt probability. The Human Error Sensitivity Assessment of a PWR (HESAP) computer code was written for the purpose of this study. The code employed a point estimate approach and ignored the smoothing technique applied in the RSS. It computed point estimates for the system unavailabilities from the median values of the component failure rates and proceeded in terms of point values to obtain point estimates for the accident sequence probabilities, core melt probability, and release category probabilities. The sensitivity measure used was the ratio of the top event probability before and after the perturbation of the constituent events. Core melt probability per reactor year showed a significant increase when the human error rates were increased, but did not show a similar decrease when they were decreased, owing to the dominance of the hardware failures. When the Minimum Human Error Rate (M.H.E.R.) used is increased to 10^-3, the base case results start to show sensitivity to human errors. This effort now allows the evaluation of new error rate data along with proposed changes in the man-machine interface.
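The sensitivity measure described, the ratio of the top event probability after and before perturbing a constituent event, can be sketched on a minimal fault tree with hypothetical failure rates (fabricated numbers, not the RSS event data):

```python
# Minimal point-estimate fault-tree sketch (hypothetical event data):
# TOP = (pump fails OR operator error 1) AND (valve fails OR operator error 2)
def top_probability(p):
    or1 = 1 - (1 - p["pump"]) * (1 - p["op1"])    # OR gate 1
    or2 = 1 - (1 - p["valve"]) * (1 - p["op2"])   # OR gate 2
    return or1 * or2                               # AND gate (independence)

base = {"pump": 1e-3, "valve": 5e-4, "op1": 3e-3, "op2": 3e-3}
p_top = top_probability(base)

def sensitivity(event, factor=10.0):
    """Ratio of top event probability after/before perturbing one event."""
    pert = dict(base)
    pert[event] *= factor
    return top_probability(pert) / p_top

for e in base:
    print(f"{e}: x10 ratio = {sensitivity(e):.2f}")
```

Because the human error rates here dominate their OR gates, scaling them up moves the top event probability much more than scaling the hardware rates, mirroring the asymmetry reported in the abstract.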

  15. Identification of adipokine clusters related to parameters of fat mass, insulin sensitivity and inflammation.

    Directory of Open Access Journals (Sweden)

    Gesine Flehmig

    Full Text Available In obesity, elevated fat mass and ectopic fat accumulation are associated with changes in adipokine secretion, which may link obesity to inflammation and the development of insulin resistance. However, relationships among individual adipokines and between adipokines and parameters of obesity, glucose metabolism or inflammation are largely unknown. Serum concentrations of 20 adipokines were measured in 141 Caucasian obese men (n = 67) and women (n = 74) with a wide range of body weight, glycemia and insulin sensitivity. Unbiased, distance-based hierarchical cluster analyses were performed to recognize patterns among adipokines and their relationship with parameters of obesity, glucose metabolism, insulin sensitivity and inflammation. We identified two major adipokine clusters related to either (1) body fat mass and inflammation (leptin, ANGPTL3, DLL1, chemerin, Nampt, resistin) or (2) insulin sensitivity/hyperglycemia and lipid metabolism (vaspin, clusterin, glypican 4, progranulin, ANGPTL6, GPX3, RBP4, DLK1, SFRP5, BMP7, adiponectin, CTRP3 and 5, omentin). In addition, we found distinct adipokine clusters in subgroups of patients with or without type 2 diabetes (T2D). Logistic regression analyses revealed ANGPTL6, DLK1, Nampt and progranulin as the strongest adipokine correlates of T2D in obese individuals. The panel of 20 adipokines predicted T2D with lower sensitivity (78% versus 91%) and specificity (76% versus 94%) than a combination of HbA1c, HOMA-IR and fasting plasma glucose. Therefore, adipokine patterns may currently not be clinically useful for the diagnosis of metabolic diseases. Whether adipokine patterns are relevant for the predictive assessment of intervention outcomes needs to be further investigated.
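The general technique used above, distance-based hierarchical clustering of marker correlations, can be sketched on synthetic data in which two latent factors each drive a group of markers (fabricated data, not the study's adipokine measurements):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)

# Synthetic stand-in for a marker panel: two latent factors (e.g. "fat mass/
# inflammation" and "insulin sensitivity") each drive a group of 4 markers.
n_subj = 141
f1, f2 = rng.normal(size=(2, n_subj))
data = np.column_stack(
    [f1 + 0.5 * rng.normal(size=n_subj) for _ in range(4)]
    + [f2 + 0.5 * rng.normal(size=n_subj) for _ in range(4)])

# Correlation-based distance between markers, then average-linkage clustering
corr = np.corrcoef(data.T)
dist = squareform(1 - np.abs(corr), checks=False)   # condensed distance
Z = linkage(dist, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Cutting the dendrogram at two clusters recovers the two latent groups; on real data the cut level and linkage choice both need justification.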

  16. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model that evaluates the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  17. Procedures for uncertainty and sensitivity analysis in repository performance assessment

    International Nuclear Information System (INIS)

    Poern, K.; Aakerlund, O.

    1985-10-01

    The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of functions, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors such as the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
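The second basic approach, replacing the model with an approximating analytical response surface and sampling that instead, can be sketched as follows for a hypothetical two-parameter model (all functions and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "expensive" model with two uncertain inputs on [0, 1]
model = lambda x: np.exp(-x[:, 0]) * (1 + 0.5 * x[:, 1])

# Step 1: fit a quadratic polynomial response surface to 30 design runs
X = rng.uniform(0, 1, (30, 2))
y = model(X)
basis = lambda x: np.column_stack(
    [np.ones(len(x)), x[:, 0], x[:, 1],
     x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
surrogate = lambda x: basis(x) @ coef

# Step 2: propagate input uncertainty through the cheap surrogate
Xs = rng.uniform(0, 1, (100_000, 2))
approx_mean = surrogate(Xs).mean()
true_mean = model(Xs).mean()      # direct simulation, for comparison only
print(f"surrogate mean = {approx_mean:.4f}, direct mean = {true_mean:.4f}")
```

The 100,000 surrogate evaluations cost almost nothing; only the 30 design runs hit the expensive model, which is the pay-off of the response surface route when each model run is slow.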

  18. The impact of Ag nanoparticles on the parameters of DSS- cells sensitized by Z907

    International Nuclear Information System (INIS)

    Ibrayev, N Kh; Aimukhanov, A K; Zeinidenov, A K

    2016-01-01

    The influence of Ag nanoparticles on the absorption and on the current-voltage characteristics (CVC) of DSS-cells sensitized with Z907 was investigated. It was found that as the concentration of Ag nanoparticles in the film grew to 0.3 wt%, the absorbance of Z907 in the short-wavelength band increased to a value of 1.6. It was also found that when the concentration of Ag nanoparticles in the cell reached 0.3 wt%, the efficiency of the cell increased to 2.2%. (paper)

  19. Parameter studies to determine sensitivity of slug impact loads to properties of core surrounding structures

    International Nuclear Information System (INIS)

    Gvildys, J.

    1985-01-01

    A sensitivity study of the HCDA slug impact response of fast reactor primary containment to properties of core surrounding structures was performed. Parameters such as the strength of the radial shield material, mass, void, and compressibility properties of the gas plenum material, mass of core material, and mass and compressibility properties of the coolant were used as variables to determine the magnitude of the slug impact loads. The response of the reactor primary containment and the partition of energy were also given. A study was also performed using water as coolant to study the difference in slug impact loads

  20. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...
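The Morris screening method used in the abstract ranks parameters from elementary effects, i.e. one-at-a-time steps taken along random trajectories through the parameter space. A minimal sketch on a hypothetical three-parameter surrogate (not the dechlorination model itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 3-parameter model on the unit cube (illustrative surrogate)
f = lambda p: p[0] ** 2 + 2 * p[1] + 0.1 * p[2]

k, r, delta = 3, 20, 0.1            # parameters, trajectories, step size
effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)   # random trajectory start point
    y0 = f(x)
    for i in rng.permutation(k):       # one-at-a-time moves in random order
        x[i] += delta
        y1 = f(x)
        effects[t, i] = (y1 - y0) / delta   # elementary effect of param i
        y0 = y1

mu_star = np.abs(effects).mean(axis=0)  # mean |elementary effect|: influence
sigma = effects.std(axis=0)             # spread: non-linearity/interactions
print("mu* =", mu_star.round(3), " sigma =", sigma.round(3))
```

Here the linear parameter gets sigma near zero while the squared one does not, which is how the mu*-sigma plot separates merely influential parameters from non-linearly interacting ones before a full variance-based (Sobol') analysis is attempted.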

  1. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

    To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to choose the frontier intentionally, while traditional DEA and Super-DEA determine the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functionality of traditional DEA and Super-DEA, so that it can deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.

  2. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  3. B1-sensitivity analysis of quantitative magnetization transfer imaging.

    Science.gov (United States)

    Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce

    2018-01-01

    To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR; B1-independent) and variable flip angle (VFA; B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA and from -40% to >100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double-angle and nominal flip-angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Sensitivity analysis on parameters and processes affecting vapor intrusion risk

    KAUST Repository

    Picone, Sara; Valstar, Johan; van Gaans, Pauline; Grotenhuis, Tim; Rijnaarts, Huub

    2012-01-01

    the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl

  5. Sensitivity analysis of FRAPCON-1 computer code to some parameters

    International Nuclear Information System (INIS)

    Chia, C.T.; Silva, C.F. da.

    1987-05-01

    A sensitivity study of the FRAPCON-1 code was performed for the following input data: the number of axial nodes, the number of time steps, and the axial power shape. Their influence on the code response was analyzed with respect to the fuel centerline temperature, stored energy, internal gas pressure, clad hoop strain, and gap width. The number of axial nodes has little influence, but care must be taken in the choice of the axial power profile and the time-step length. (Author) [pt

  6. Heuristic Sensitivity Analysis for Baker's Yeast Model Parameters

    OpenAIRE

    Leão, Celina P.; Soares, Filomena O.

    2004-01-01

    Baker's yeast, essentially composed of living cells of Saccharomyces cerevisiae and used as a microorganism in the bread-making and brewing industries, plays an important industrial role. Simulation is therefore a necessary tool for clearly understanding the baker's yeast fermentation process. The use of mathematical models based on mass-balance equations requires knowledge of the reaction kinetics, thermodynamics, and transport and physical properties. Models may be more or less...

  7. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models is becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
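
    The screen-then-quantify idea described above (Morris elementary effects to discard unimportant parameters before a more expensive Sobol stage) can be sketched as follows. The five-parameter test function and all constants are hypothetical, not the GastroPlus model:

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_ee(model, dims, trajectories=20, delta=0.5):
    """Morris screening: mean absolute elementary effect (mu*) per parameter.

    Parameters with large mu* are kept for the (more expensive) Sobol stage;
    the rest are screened out. Inputs are assumed uniform on [0, 1].
    """
    effects = [[] for _ in range(dims)]
    for _ in range(trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=dims)  # leave room for +delta
        y0 = model(x)
        for d in rng.permutation(dims):               # one-at-a-time steps
            x_new = x.copy()
            x_new[d] += delta
            y1 = model(x_new)
            effects[d].append((y1 - y0) / delta)
            x, y0 = x_new, y1                         # continue the trajectory
    return np.array([np.mean(np.abs(e)) for e in effects])

# Hypothetical 5-parameter test function: only x0 and x1 matter.
model = lambda x: 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2]
mu_star = morris_ee(model, dims=5)
ranking = np.argsort(mu_star)[::-1]
print(ranking[:2])  # → [0 1]
```

    With only a handful of model runs per trajectory, the two influential parameters screen to the top while the inert ones get mu* near zero, which is exactly why Morris is used as the cheap first stage.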

  8. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis
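
    The core FAST mechanism described above can be sketched as follows, assuming uniform inputs on [0, 1] and interference-free integer frequencies; the additive test model is invented for illustration, and the paper's extensions (nonuniform distributions, random phase shifts) are omitted:

```python
import numpy as np

def fast_indices(model, freqs, n=1025):
    """First-order FAST sensitivity indices.

    Each parameter is driven along a periodic search curve with its own
    integer frequency; the share of output spectral power at harmonics of
    that frequency estimates the parameter's first-order index.
    """
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Triangle-wave search curves mapping s to uniform samples on [0, 1]
    X = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
    y = model(X)
    A = np.fft.rfft(y) / n                 # Fourier amplitudes of the output
    power = np.abs(A[1:]) ** 2             # drop the DC term
    total = power.sum()
    indices = []
    for w in freqs:
        harmonics = np.arange(w, len(power) + 1, w)[:4]  # first 4 harmonics
        indices.append(power[harmonics - 1].sum() / total)
    return np.array(indices)

# Hypothetical additive model with variance contributions of roughly 4:1:0
model = lambda X: 2.0 * X[0] + 1.0 * X[1] + 0.01 * X[2]
S = fast_indices(model, freqs=np.array([11, 35, 113]))
print(np.round(S, 2))
```

    The frequencies must be chosen so their low harmonics do not collide, otherwise power from one parameter is misattributed to another; this is the orthogonality requirement that the abstract notes is destroyed by random phase shifts.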

  9. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
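
    The SampEn statistic used above to quantify schedule regularity can be sketched as follows. This is a textbook-style implementation with common defaults (m = 2, tolerance r = 0.2 times the series standard deviation), not the DASim code:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log(A/B), where B counts template matches of length m
    and A matches of length m+1, under a Chebyshev tolerance of r * std(x).
    Lower SampEn means a more regular (predictable) series."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1   # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # periodic "schedule"
irregular = rng.uniform(-1, 1, 500)                 # random "schedule"
se_regular = sample_entropy(regular)
se_random = sample_entropy(irregular)
print(se_regular < se_random)  # → True
```

    Tuning an activity's regularity, as in the paper, amounts to adjusting schedule-generation parameters until SampEn hits a target value, which is what turns this statistic into an optimization objective.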

  10. Stability and Sensitive Analysis of a Model with Delay Quorum Sensing

    Directory of Open Access Journals (Sweden)

    Zhonghua Zhang

    2015-01-01

    Full Text Available This paper formulates a delay model characterizing the competition between bacteria and the immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the parameter to which the threshold R0 is most sensitive, and it should be targeted in control strategies.
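
    The kind of parameter ranking reported above is often computed as a normalized (relative) sensitivity index of the threshold, (dR0/dp)(p/R0). The functional form of R0 below is a hypothetical stand-in, not the paper's model:

```python
import numpy as np

# Hypothetical threshold for a bacteria-immune competition model:
# R0 = beta * r / (d * (d + c)), where r is the bacterial growth rate.
def R0(params):
    beta, r, d, c = params["beta"], params["r"], params["d"], params["c"]
    return beta * r / (d * (d + c))

def elasticity(params, name, h=1e-6):
    """Normalized sensitivity index (elasticity): (dR0/dp) * (p / R0),
    estimated with a forward finite difference of relative step h."""
    p = dict(params)
    base = R0(p)
    p[name] *= 1.0 + h
    return (R0(p) - base) / (base * h)

params = {"beta": 0.5, "r": 2.0, "d": 0.1, "c": 0.3}
for name in params:
    print(name, round(elasticity(params, name), 2))
```

    Here R0 is linear in the growth rate r, so its elasticity is exactly 1: a 1% increase in r raises R0 by 1%, which is the sense in which a parameter is "most sensitive" and hence a natural control target.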

  11. Quantitative analysis of spatial variability of geotechnical parameters

    Science.gov (United States)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic inputs of geotechnical engineering design, yet they have strong regional characteristics, and their spatial variability is now widely recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the correlation coefficients between parameters are evaluated. A residential district surveyed by the Tianjin Survey Institute was selected as the study area, comprising 68 boreholes and 9 mechanical strata. The parameters considered are water content, natural unit weight, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compression modulus, internal friction angle, cohesion, and the SP index. Following the principles of statistical correlation, the correlation coefficients between the geotechnical parameters are calculated, and the relationships among the parameters are derived from them.
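
    The correlation step described above can be sketched as follows; the borehole data here are synthetic and the parameter names and values are illustrative only:

```python
import numpy as np

# Synthetic stand-in for borehole measurements: rows = samples, columns =
# water content (%), void ratio, compression modulus (MPa).
rng = np.random.default_rng(2)
n = 68  # one sample per borehole
water_content = rng.normal(25.0, 3.0, n)
void_ratio = 0.03 * water_content + rng.normal(0.0, 0.05, n)  # correlated
modulus = rng.normal(5.0, 1.0, n)                             # independent

data = np.column_stack([water_content, void_ratio, modulus])
corr = np.corrcoef(data, rowvar=False)   # Pearson correlation matrix
print(np.round(corr, 2))
```

    A strong off-diagonal entry (here between water content and void ratio) is what signals a physically meaningful relationship worth carrying into a reliability analysis; near-zero entries suggest the parameters can be treated as independent.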

  12. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  13. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  14. Sensitivity