High order effects in cross section sensitivity analysis
International Nuclear Information System (INIS)
Greenspan, E.; Karni, Y.; Gilai, D.
1978-01-01
Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method.
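The trade-off among first-order theory, second-order theory, and direct substitution can be illustrated with a toy uncollided-transmission model, where the response depends exponentially (hence non-linearly) on the cross section. The cross section, depth, and perturbation sizes below are illustrative stand-ins, not values from the paper:

```python
import math

def response(sigma, depth=10.0):
    """Toy deep-penetration response: uncollided transmission exp(-sigma*depth)."""
    return math.exp(-sigma * depth)

sigma0 = 0.5   # nominal cross section (arbitrary units)
depth = 10.0
R0 = response(sigma0, depth)

# Analytic first- and second-order derivatives of R with respect to sigma
dR = -depth * R0
d2R = depth**2 * R0

for rel_change in (0.01, 0.10, 0.30):
    dsig = rel_change * sigma0
    exact = response(sigma0 + dsig, depth)      # "direct substitution"
    first = R0 + dR * dsig                      # first-order estimate
    second = first + 0.5 * d2R * dsig**2        # second-order (SOST-like) estimate
    print(f"{rel_change:4.0%}  exact={exact:.4e}  "
          f"1st err={abs(first - exact) / exact:6.1%}  "
          f"2nd err={abs(second - exact) / exact:6.1%}")
```

Even for this crude model, the second-order term recovers most of the error the linear estimate makes for large perturbations, at the cost of one extra derivative rather than a full re-calculation.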
Energy Technology Data Exchange (ETDEWEB)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
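The core of such an uncertainty analysis is the first-order "sandwich rule": the relative variance of an integral response is the sensitivity profile folded twice with the relative covariance matrix of the cross sections. A minimal sketch with an invented 4-group sensitivity profile and covariance matrix (the numbers are illustrative only, not SENSIT data):

```python
import numpy as np

# Hypothetical 4-group relative sensitivity profile of an integral response R
# (percent change in R per percent change in each group cross section).
S = np.array([0.02, -0.15, -0.40, -0.10])

# Hypothetical relative covariance matrix of the group cross sections,
# built from relative standard deviations and a correlation matrix.
std = np.array([0.05, 0.08, 0.10, 0.20])
corr = np.array([[1.0, 0.5, 0.2, 0.0],
                 [0.5, 1.0, 0.5, 0.2],
                 [0.2, 0.5, 1.0, 0.5],
                 [0.0, 0.2, 0.5, 1.0]])
C = np.outer(std, std) * corr

# Sandwich rule: relative variance of R = S^T C S
var_R = S @ C @ S
print(f"relative std. dev. of response: {np.sqrt(var_R):.3%}")
```

Design sensitivity and SED uncertainty treatments add further terms, but each reduces to the same quadratic form in a sensitivity vector and a covariance matrix.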
Sensitivity analysis of the U238 cross sections in fast nuclear systems - Program SENSEAV-R
International Nuclear Information System (INIS)
Amorim, E.S. do; D'Oliveira, A.B.; Oliveira, E.C. de; Moura Neto, C. de.
1980-01-01
For many reactor performance parameters, the tabulated calculation/experiment ratios indicate that potential problems exist in the cross section data or in the calculation models used to investigate the critical experiments. A first step toward drawing a more definitive conclusion is to perform a selective importance analysis using sensitivity profiles and covariance data files for the cross section data used in the calculation. Many works in the current literature show that some of these uncertainties come from uncertainties in 238U(n,γ), 238U(n,f) and 239Pu(n,f). Perturbation methods were developed to analyze the effects of finite changes in a large number of cross sections and to summarize the investigation in a group-dependent sensitivity coefficient. Sensitivities at the critical condition were computed for the few-group macroscopic cross sections of 238U with respect to the 26-group microscopic absorption cross sections. The results of this investigation point out that improvements should be made in specific energy ranges of 238U(n,γ). (Author)
Energy Technology Data Exchange (ETDEWEB)
Brown, Nicholas [Pennsylvania State University, University Park]; Burns, Joseph R. [ORNL]
2017-12-01
The aftermath of the Tōhoku earthquake and the Fukushima accident has led to a global push to improve the safety of existing light water reactors. A key component of this initiative is the development of nuclear fuel and cladding materials with potentially enhanced accident tolerance, also known as accident-tolerant fuels (ATF). These materials are intended to improve core fuel and cladding integrity under beyond-design-basis accident conditions while maintaining or enhancing reactor performance and safety characteristics during normal operation. To complement research that has already been carried out to characterize ATF neutronics, the present study provides an initial investigation of the sensitivity and uncertainty of ATF system responses to nuclear cross section data. ATF concepts incorporate novel materials, including SiC and FeCrAl cladding and high-density uranium silicide composite fuels, in turn introducing new cross section sensitivities and uncertainties which may behave differently from those of traditional fuel and cladding materials. In this paper, we conducted sensitivity and uncertainty analysis using the TSUNAMI-2D sequence of SCALE with infinite-lattice models of ATF assemblies. Of all the ATF materials considered, radiative capture in 56Fe in FeCrAl cladding is found to be the most significant contributor to eigenvalue uncertainty; this is by far the largest ATF-specific uncertainty found in these cases, exceeding even those of uranium. We found that while significant new sensitivities indeed arise, the general sensitivity behavior of ATF assemblies does not markedly differ from traditional UO2/zirconium-based fuel/cladding systems, especially with regard to uncertainties associated with uranium. We assessed the similarity of the IPEN/MB-01 reactor benchmark model to application models with FeCrAl cladding. We used TSUNAMI-IP to calculate
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
International Nuclear Information System (INIS)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
2001-01-01
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating the sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant, although this is not always satisfied in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of errors and uncertainties in the nuclear data, and of changes in the one-group cross-section input caused by fuel design changes acting through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data errors or uncertainties, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
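The relative sensitivity coefficient at the heart of such studies, S = (x/R)(dR/dx), can be sketched numerically for a toy one-group response (a stand-in example, not the nuclide-importance functional of the paper). For an infinite-medium multiplication factor k = νΣf/Σa the coefficients are exactly +1 and −1, which makes a convenient check:

```python
# Toy one-group infinite-medium multiplication factor: k = nu_sigma_f / sigma_a
def k_inf(nu_sigma_f, sigma_a):
    return nu_sigma_f / sigma_a

def rel_sensitivity(f, args, name, h=1e-6):
    """Central-difference relative sensitivity S = (x/R) dR/dx,
    computed with a relative step h around the nominal parameter value."""
    x = args[name]
    up = dict(args, **{name: x * (1 + h)})
    dn = dict(args, **{name: x * (1 - h)})
    dR_dx = (f(**up) - f(**dn)) / (2 * h * x)
    return dR_dx * x / f(**args)

args = {"nu_sigma_f": 0.35, "sigma_a": 0.30}  # illustrative macroscopic data
print(rel_sensitivity(k_inf, args, "nu_sigma_f"))   # close to +1
print(rel_sensitivity(k_inf, args, "sigma_a"))      # close to -1
```

Perturbation theory delivers the same coefficients without perturbed re-evaluations, which is what makes it practical for many parameters at once.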
International Nuclear Information System (INIS)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man
2013-01-01
Uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. One known problem of the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, these methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed to increase the sampling accuracy without the negative-sampling error, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross sections were sampled from both the normal and lognormal distributions, and criticality calculations with the sampled nuclear data were performed; the results were compared with those from the normal distribution conventionally used in previous studies. The uncertainties caused by the covariance of the (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution efficiently solves the negative-sampling problem reported in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
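The idea can be sketched directly: parameterize a lognormal to reproduce the same mean and standard deviation as the normal it replaces, and compare the fraction of non-physical negative samples. The cross section value and uncertainty below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

sigma_mean = 2.0   # hypothetical group cross section (barns)
rel_std = 0.5      # large relative uncertainty, where normal sampling misbehaves
n = 100_000

normal_samples = rng.normal(sigma_mean, rel_std * sigma_mean, n)

# Lognormal parameterized to reproduce the same mean and variance:
# if X ~ LogN(mu, s), then E[X] = exp(mu + s^2/2), Var[X] = (exp(s^2)-1) E[X]^2.
var = (rel_std * sigma_mean) ** 2
s2 = np.log(1.0 + var / sigma_mean**2)
mu = np.log(sigma_mean) - 0.5 * s2
lognormal_samples = rng.lognormal(mu, np.sqrt(s2), n)

print("fraction of negative normal samples:   ", (normal_samples < 0).mean())
print("fraction of negative lognormal samples:", (lognormal_samples < 0).mean())
print("lognormal sample mean/std:", lognormal_samples.mean(), lognormal_samples.std())
```

The lognormal draws are positive by construction while matching the first two moments, which is exactly the property the study exploits.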
International Nuclear Information System (INIS)
Reyes F, M. C.; Del Valle G, E.; Gomez T, A. M.; Sanchez E, V.
2015-09-01
A methodology was implemented to carry out a sensitivity and uncertainty analysis of the cross sections used in a coupled Trace/Parcs model of a control rod drop transient in a BWR-5. A model of the reactor core, describing the assemblies located in the core, was used for the neutronic code Parcs. The thermo-hydraulic model in Trace was a simple one in which a single component of type Chan represented all the core assemblies, placed within a single vessel with established boundary conditions. The thermo-hydraulic part was coupled with the neutronic part, first for the steady state; the control rod drop transient was then run for the sensitivity and uncertainty analysis. To analyze the cross sections used in the coupled Trace/Parcs model during the transient, probability density functions were generated for 22 parameters selected from the full set of neutronic parameters used by Parcs, yielding 100 different cases of the coupled model, each with a different cross section database. All these cases were executed with the coupled model, producing 100 different output files for the control rod drop transient, with emphasis on the nominal power, for which an uncertainty analysis was performed and the uncertainty band generated. This analysis makes it possible to observe the range of results for the selected responses as the chosen uncertainty parameters vary. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the results, so that attention can be focused on them in order to better understand their effects. Beyond the results obtained, since the model does not use real operation data, the importance of this work lies in demonstrating the application of the methodology for carrying out sensitivity and uncertainty analyses. (Author)
International Nuclear Information System (INIS)
Ku, L.P.; Price, W.G. Jr.
1977-08-01
The neutronic calculation for the Livermore mirror fusion/fission hybrid reactor blanket was performed using the PPPL cross section library. Significant differences were found in the tritium breeding and plutonium production in comparison to the results of the LLL calculation. The cross section sensitivity study for tritium breeding indicates that the response is sensitive to the cross sections of 238U in the neighborhood of 14 MeV and 1 MeV. The response is also sensitive to the cross sections of iron in the vicinity of 14 MeV near the first wall. Neutron transport in the resonance region is not important in this reactor model.
International Nuclear Information System (INIS)
Gerstl, S.A.W.; Dudziak, D.J.; Muir, D.W.
1975-09-01
A computational method to determine cross-section requirements quantitatively is described and applied to the Tokamak Fusion Test Reactor (TFTR). In order to provide a rational basis for the priorities assigned to new cross-section measurements or evaluations, this method includes quantitative estimates of the uncertainty of currently available data, the sensitivity of important nuclear design parameters to selected cross sections, and the accuracy desired in predicting nuclear design parameters. Perturbation theory is used to combine estimated cross-section uncertainties with calculated sensitivities to determine the variance of any nuclear design parameter of interest
Nuclear characteristics of Pu fueled LWR and cross section sensitivities
Energy Technology Data Exchange (ETDEWEB)
Takeda, Toshikazu [Osaka Univ., Suita (Japan). Faculty of Engineering
1998-03-01
The present status of Pu utilization in thermal reactors in Japan, the nuclear characteristics of Pu-fueled thermal reactors, and the cross section sensitivities relevant to their analysis are described. As topics, we discuss the spatial self-shielding effect on the Doppler reactivity effect and the cross section sensitivities obtained with the JENDL-3.1 and 3.2 libraries. (author)
Directory of Open Access Journals (Sweden)
Rocchi Federico
2017-01-01
Full Text Available Gadolinium odd-isotope cross sections are crucial in assessing the neutronic performance and safety features of a light water reactor (LWR) core. Accurate evaluations of the neutron capture behavior of gadolinium burnable poisons are necessary for a precise estimation of the economic gain due to the extension of fuel life, the residual reactivity penalty at the end of life, and the reactivity peak for partially spent fuel in the criticality safety analysis of spent fuel pools. Nevertheless, the present gadolinium odd-isotope neutron cross sections are somewhat dated, are poorly investigated in the high-sensitivity thermal energy region, and are available with an uncertainty that is too high in comparison to present-day industrial standards and needs. This article shows how the most recent gadolinium cross section evaluations appear inadequate to provide accurate criticality calculations for a system with gadolinium fuel pins. In this article, a sensitivity and uncertainty (S/U) analysis has been performed to investigate the effect of gadolinium odd-isotope nuclear cross section data on the multiplication factor of some LWR fuel assemblies. The results have shown the importance of the gadolinium odd isotopes in the criticality evaluation, and they confirm the need for a re-evaluation of the neutron capture cross sections by means of new experimental measurements to be carried out at the n_TOF facility at CERN.
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Directory of Open Access Journals (Sweden)
M. E. Mosleh
2012-03-01
Full Text Available This paper presents an approach to calculating the equivalent stray capacitance (SC) of the n turns of a helical flux compression generator (HFCG) coil wound with multi-layer conductor wire filaments (MLCWF) of rectangular cross-section. The approach is based on the vespiary regular hexagonal (VRH) model, in which the wire filaments of the generator coil are separated into many very small, similar elementary cells. As the explosion expands in the liner and moves toward the end of the liner, the number of coil turns is reduced, so the equivalent SC of the HFCG increases. The results show that, as the explosion progresses and the number of turns in the generator coil decreases, the total capacitance of the generator increases until the explosion reaches the second turn; when only one turn remains in the circuit, the total capacitance of the generator decreases.
Integrated Sensitivity Analysis Workflow
Energy Technology Data Exchange (ETDEWEB)
Friedman-Hill, Ernest J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Edward L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gibson, Marcus J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Clay, Robert L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
Directory of Open Access Journals (Sweden)
Ravinder Nagpal
2016-12-01
Full Text Available For decades, babies were thought to be born germ-free, but recent evidence suggests that they are already exposed to various bacteria in utero. However, data on the population levels of such pioneer gut bacteria, particularly in the context of birth mode, are sparse. We herein aimed to quantify such bacteria from the meconium of 151 healthy term Japanese infants born vaginally or by C-section. Neonatal first meconium was obtained within 24-48 hours of delivery; RNA was extracted and subjected to reverse-transcription quantitative PCR using specific primers for the Clostridium coccoides group, Clostridium leptum subgroup, Bacteroides fragilis group, Atopobium cluster, Prevotella, Bifidobacterium, Lactobacillus, Enterococcus, Enterobacteriaceae, Staphylococcus, Streptococcus, Clostridium perfringens, and C. difficile. We detected several bacterial groups in both vaginally and cesarean-born infants. The B. fragilis group, Enterobacteriaceae, Enterococcus, Streptococcus and Staphylococcus were detected in more than 50% of infants, with counts ranging from 10^5 to 10^8 cells/g sample. About 30-35% of samples harbored Bifidobacterium and Lactobacillus (10^4-10^5 cells/g), whereas the C. coccoides group, C. leptum subgroup and C. perfringens were detected in 10-20% of infants (10^3-10^5 cells/g). Compared to vaginally born babies, cesarean-born babies were significantly less often colonized with the Lactobacillus genus (6% vs. 37%; P=0.01) and the L. gasseri subgroup (6% vs. 31%; P=0.04). Overall, seven Lactobacillus subgroups/species, i.e., the L. gasseri, L. ruminis, L. casei, L. reuteri, L. sakei and L. plantarum subgroups and L. brevis, were detected in the samples from the vaginally born group, whereas only two members, i.e., the L. gasseri subgroup and L. brevis, were detected in the cesarean group. These data corroborate that several bacterial clades may already be present before birth in the term infant's gut. Further, remarkably lower detection rate
DEFF Research Database (Denmark)
Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad
2018-01-01
point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only...
Sensitivity Analysis Without Assumptions.
Ding, Peng; VanderWeele, Tyler J
2016-05-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
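The bounding factor described here has a closed form, BF = RR_EU·RR_UD/(RR_EU + RR_UD − 1), where RR_EU bounds the exposure-confounder association and RR_UD the confounder-outcome association; dividing an observed risk ratio by BF gives the worst-case confounding-adjusted estimate. A minimal sketch (the observed risk ratio and sensitivity parameters below are invented for illustration):

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Bounding factor: rr_eu bounds the risk ratio relating exposure to the
    unmeasured confounder(s); rr_ud bounds the risk ratio relating the
    confounder(s) to the outcome."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr_obs):
    """Minimum strength of both sensitivity parameters needed for unmeasured
    confounding to fully explain away an observed risk ratio rr_obs > 1
    (obtained by setting the bounding factor equal to rr_obs with
    rr_eu = rr_ud)."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

rr_obs = 2.0                       # hypothetical observed risk ratio
bf = bounding_factor(3.0, 3.0)     # suppose both parameters are at most 3
print("bounding factor:", bf)
print("worst-case adjusted RR:", rr_obs / bf)   # > 1 means not explained away
print("strength needed to explain away RR=2:", e_value(rr_obs))
```

With both parameters equal to 3, BF = 9/5 = 1.8, so an observed RR of 2.0 could at worst be attenuated to about 1.11 but not nullified.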
International Nuclear Information System (INIS)
Boulaich, Y.; Bardouni, C.; Elyounoussi, C.; Elbakkari, H.; Boukhal, H.; Erradi, L.; Nacir, B.
2011-01-01
Full text: In this work, we present our analysis of the CREOLE experiment on the reactivity temperature coefficient by using the three-dimensional continuous-energy code MCNP5 and the latest updated nuclear data evaluations. This experiment, performed in the EOLE critical facility located at CEA-Cadarache, was dedicated to studies of both UO2 and UO2-PuO2 PWR-type lattices covering the whole temperature range from 20 °C to 300 °C. We have developed an accurate model of the EOLE reactor to be used with the MCNP5 Monte Carlo code. This model guarantees a high level of fidelity in the description of the different configurations at various temperatures, taking into account their consequences on neutron cross section data and all thermal expansion effects. In this case, the remaining error between calculation and experiment can be attributed mainly to uncertainties in the nuclear data. Our own cross section library was constructed by using the NJOY99.259 code with point-wise nuclear data based on the ENDF/B-VII, JEFF3.1, JENDL3.3 and JENDL4 evaluation files. The MCNP model was validated through axial and radial fission rate measurements at room and hot temperatures. Calculation-experiment discrepancies of the reactivity parameter were analyzed, and the results have shown that the JENDL evaluations give the most consistent values. In order to specify the source of the relatively large difference between experiment and calculation due to the ENDF/B-VII nuclear data evaluation, the discrepancy in reactivity between the ENDF/B-VII and JENDL evaluations was decomposed using a sensitivity and uncertainty analysis technique
Interference and Sensitivity Analysis.
VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth
2014-11-01
Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.
International Nuclear Information System (INIS)
Boulaich, Y.; El Bardouni, T.; Erradi, L.; Chakir, E.; Boukhal, H.; Nacir, B.; El Younoussi, C.; El Bakkari, B.; Merroun, O.; Zoubair, M.
2011-01-01
Highlights: → In the present work, we have analyzed the CREOLE experiment on the reactivity temperature coefficient (RTC) by using the three-dimensional continuous-energy code MCNP5 and the latest updated nuclear data evaluations. → Calculation-experiment discrepancies of the RTC were analyzed, and the results have shown that the JENDL3.3 and JEFF3.1 evaluations give the most consistent values. → In order to specify the source of the relatively large discrepancy in the case of the ENDF/B-VII nuclear data evaluation, the k-eff discrepancy between ENDF/B-VII and JENDL3.3 was decomposed by using a sensitivity and uncertainty analysis technique. - Abstract: In the present work, we analyze the CREOLE experiment on the reactivity temperature coefficient (RTC) by using the three-dimensional continuous-energy code MCNP5 and the latest updated nuclear data evaluations. This experiment, performed in the EOLE critical facility located at CEA/Cadarache, was mainly dedicated to RTC studies for both UO2 and UO2-PuO2 PWR-type lattices covering the whole temperature range from 20 °C to 300 °C. We have developed an accurate 3D model of the EOLE reactor by using the MCNP5 Monte Carlo code, which guarantees a high level of fidelity in the description of the different configurations at various temperatures, taking into account their consequences on neutron cross section data and all thermal expansion effects. In this case, the remaining error between calculation and experiment can be attributed mainly to uncertainties in the nuclear data. Our own cross section library was constructed by using the NJOY99.259 code with point-wise nuclear data based on the ENDF/B-VII, JEFF3.1 and JENDL3.3 evaluation files. The MCNP model was validated through axial and radial fission rate measurements at room and hot temperatures. Calculation-experiment discrepancies of the RTC were analyzed, and the results have shown that the JENDL3.3 and JEFF3.1 evaluations give the most consistent values; the discrepancy is
Chemical kinetic functional sensitivity analysis: Elementary sensitivities
International Nuclear Information System (INIS)
Demiralp, M.; Rabitz, H.
1981-01-01
Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the i-th species concentration and α_j is the j-th system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research
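For the simplest kinetic system the elementary sensitivity has a closed form that a finite-difference check reproduces. A toy example (first-order decay; not one of the paper's space-time problems, where the parameters are functions of position and time):

```python
import math

# First-order decay u' = -k u with u(0) = u0 has u(t) = u0 * exp(-k t),
# so the elementary sensitivity of the concentration to the rate constant
# is du/dk = -t * u(t). We verify this against a central finite difference.

def u(t, k, u0=1.0):
    return u0 * math.exp(-k * t)

k, t = 0.3, 5.0
analytic = -t * u(t, k)

h = 1e-6
finite_diff = (u(t, k + h) - u(t, k - h)) / (2.0 * h)

print("analytic du/dk:", analytic)
print("finite-diff du/dk:", finite_diff)
```

The Green's function machinery generalizes exactly this derivative to distributed parameters, yielding δu_i(r,t)/δα_j(r',t') without one perturbed solve per parameter.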
MOVES regional level sensitivity analysis
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...
Damage energy and displacement cross sections: survey and sensitivity. [Neutrons
Energy Technology Data Exchange (ETDEWEB)
Doran, D.G.; Parkin, D.M.; Robinson, M.T.
1976-10-01
Calculations of damage energy and displacement cross sections using the recommendations of a 1972 IAEA Specialists' Meeting are reviewed. The sensitivity of the results to assumptions about electronic energy losses in cascade development and to different choices respecting the nuclear cross sections is indicated. For many metals, relative uncertainties and sensitivities in these areas are sufficiently small that adoption of standard displacement cross sections for neutron irradiations can be recommended.
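Converting a damage energy T_dam into a displacement count is done with a secondary displacement model; a minimal sketch of the NRT-style expression that grew out of these early-1970s recommendations follows (the 40 eV threshold is a commonly used standard value for iron, not a number taken from this survey):

```python
def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
    """NRT-style estimate of displaced atoms from the damage energy (eV).
    e_d_ev is the effective displacement threshold energy; it is strongly
    material dependent (40 eV is a common standard choice for iron)."""
    if damage_energy_ev < e_d_ev:
        return 0.0                                  # no stable displacement
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                                  # single Frenkel pair
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)  # cascade regime

for t_dam in (10.0, 60.0, 1.0e4):
    print(f"T_dam = {t_dam:8.1f} eV -> {nrt_displacements(t_dam):.1f} displacements")
```

Folding this function over a damage-energy cross section and a neutron spectrum yields the displacement cross section whose standardization the survey recommends.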
Edwardson, Charlotte L; Henson, Joe; Bodicoat, Danielle H; Bakrania, Kishan; Khunti, Kamlesh; Davies, Melanie J; Yates, Thomas
2017-01-13
To quantify associations between sitting time and glucose, insulin and insulin sensitivity by considering reallocation of time into standing or stepping. Cross-sectional. Leicestershire, UK, 2013. Adults aged 30-75 years at high risk of impaired glucose regulation (IGR) or type 2 diabetes. 435 adults (age 66.8±7.4 years; 61.7% male; 89.2% white European) were included. Participants wore an activPAL3 monitor 24 hours/day for 7 days to capture time spent sitting, standing and stepping. Fasting and 2-hour postchallenge glucose and insulin were assessed; insulin sensitivity was calculated by the Homeostasis Model Assessment of Insulin Sensitivity (HOMA-IS) and the Matsuda Insulin Sensitivity Index (Matsuda-ISI). Isotemporal substitution regression modelling was used to quantify associations of substituting 30 min of waking sitting time, accumulated in prolonged (≥30 min) or short (<30 min) bouts, with standing or stepping. Reallocation of prolonged sitting to short sitting time and to standing was associated with 4% lower fasting insulin and 4% higher HOMA-IS; reallocation of prolonged sitting to standing was also associated with a 5% higher Matsuda-ISI. Reallocation to stepping was associated with 5% lower 2-hour glucose, 7% lower fasting insulin, 13% lower 2-hour insulin and a 9% and 16% higher HOMA-IS and Matsuda-ISI, respectively. Reallocation of short sitting time to stepping was associated with 5% and 10% lower 2-hour glucose and 2-hour insulin and a 12% higher Matsuda-ISI. Results were not modified by IGR status or sex. Reallocating a small amount of short or prolonged sitting time to standing or stepping may improve 2-hour glucose, fasting and 2-hour insulin and insulin sensitivity. Findings should be confirmed through prospective and intervention research. ISRCTN31392913, Post-results.
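The isotemporal substitution model used here can be sketched with synthetic data: regress the outcome on every activity component except the one being displaced, while also including total wear time; each remaining coefficient is then the modeled effect of reallocating a unit of the omitted activity into that one. All ranges and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated daily minutes; sitting + standing + stepping = total waking wear time
standing = rng.uniform(60, 300, n)
stepping = rng.uniform(30, 180, n)
sitting = rng.uniform(300, 700, n)
total = sitting + standing + stepping

# Simulated outcome in which stepping is more beneficial than standing
outcome = 100.0 - 0.02 * standing - 0.05 * stepping + rng.normal(0.0, 1.0, n)

# Isotemporal substitution: drop the displaced activity (sitting) and keep
# total time fixed; coefficients then read as "one minute of sitting swapped
# for one minute of this activity".
X = np.column_stack([np.ones(n), standing, stepping, total])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("effect of standing replacing sitting:", beta[1])
print("effect of stepping replacing sitting:", beta[2])
```

Because total time is held in the model, the fitted coefficients recover the per-minute substitution effects rather than the raw activity associations.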
Energy Technology Data Exchange (ETDEWEB)
Reyes F, M. C.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. IPN s/n, Col. Lindavista, 07738 Ciudad de Mexico (Mexico); Gomez T, A. M. [ININ, Departamento de Sistemas Nucleares, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Sanchez E, V., E-mail: rf.melisa@gmail.com [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany)
2015-09-15
A methodology was implemented to carry out a sensitivity and uncertainty analysis of the cross sections used in a coupled Trace/Parcs model for a control rod drop transient in a BWR-5. A model of the reactor core was built for the neutronic code Parcs, describing the assemblies located in the core. The thermal-hydraulic model in Trace was kept simple: a single Chan component represented all core assemblies, placed inside a single vessel with prescribed boundary conditions. The thermal-hydraulic part was coupled with the neutronic part, first for the steady state, and then the control rod drop transient was run for the sensitivity and uncertainty analysis. To analyze the cross sections used in the coupled Trace/Parcs model during the transient, probability density functions were generated for 22 parameters selected from the full set of neutronic parameters used by Parcs, yielding 100 different cases of the coupled Trace/Parcs model, each with a different cross-section database. All these cases were executed with the coupled model, producing 100 output files for the control rod drop transient, with emphasis on the nominal power, for which an uncertainty analysis was performed and the uncertainty band generated. With this analysis it is possible to observe the range of results for the selected responses as the chosen uncertainty parameters vary. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the results, so that attention can be focused on them to better understand their effects. Beyond the results obtained, since the model does not use real operating data, the value of this work lies in demonstrating the application of the methodology for sensitivity and uncertainty analyses. (Author)
Sensitivity analysis of groundwater flow
International Nuclear Information System (INIS)
Bao Yungbing
1990-12-01
A sensitivity analysis of general linear and nonlinear simulation equation sets is developed in this study in order to facilitate the application of sensitivity analysis to groundwater flow problems. Two methods are considered for the sensitivity calculation: the 'direct method' and the 'adjoint method'. Sensitivity theory was used to establish a sensitivity analysis model for general three-dimensional transient groundwater flow. Three different methods for calculation of the sensitivity coefficient are presented. The sensitivity equations and the groundwater flow equations were numerically solved by the Galerkin finite element method in the model. Sensitivity coefficients were computed both numerically with the developed direct method and with a known analytic solution. Very good agreement between the two solutions was obtained. The developed sensitivity model was applied to three-dimensional (axi-symmetric) groundwater flow in a tunnel system, assumed to be located at a depth of 500 meters below the ground surface in a four-layered rock formation. In this case, the sensitivity distribution of the piezometric head was calculated with the direct method and the sensitivity of multiple performance functions to perturbations of the permeability was analysed by using the adjoint method. The calculated results showed that the peaks of the sensitivity coefficients appear mostly in the area around the tunnel. The piezometric head at the studied points (nodes) was quite sensitive to perturbations of the permeability in the layer where the points were located, but practically insensitive to perturbations of the permeability in the bottom layer. The flux into the tunnel and the velocity performance were mostly sensitive to perturbation of the permeability in the layer next to the top layer, but practically insensitive to perturbation of the permeability in the bottom layer. (author)
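The 'direct method' mentioned in this abstract can be illustrated on a small discrete system: differentiating a linear system A(p)u = b with respect to a parameter p gives A du/dp = -(dA/dp)u, so the sensitivity solve reuses the original system matrix. The following is a minimal sketch on a hypothetical 2x2 conductance-like system, not the model from the study:

```python
# Direct-method sensitivity for a linear system A(p) u = b:
# differentiating gives A du/dp = -(dA/dp) u, so the sensitivity
# solve reuses the already-assembled system matrix.
# Hypothetical 2x2 "flow" system; not from the cited study.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

def assemble(p):
    # Conductance-like matrix depending on a permeability-like parameter p.
    return [[2.0 + p, -p], [-p, 1.0 + p]]

def d_assemble_dp():
    # Exact derivative of the matrix entries with respect to p.
    return [[1.0, -1.0], [-1.0, 1.0]]

p, b = 0.5, [1.0, 0.0]
u = solve2(assemble(p), b)

# Right-hand side of the sensitivity equation: -(dA/dp) u
dA = d_assemble_dp()
rhs = [-(dA[0][0] * u[0] + dA[0][1] * u[1]),
       -(dA[1][0] * u[0] + dA[1][1] * u[1])]
du_dp = solve2(assemble(p), rhs)

# Cross-check against a central finite difference.
h = 1e-6
up = solve2(assemble(p + h), b)
um = solve2(assemble(p - h), b)
fd = [(up[i] - um[i]) / (2 * h) for i in range(2)]
print(du_dp, fd)
```

The direct and finite-difference sensitivities agree closely; the adjoint method would instead solve one extra system per performance function rather than per parameter.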
Maternal sensitivity: a concept analysis.
Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae
2008-11-01
The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.
Global optimization and sensitivity analysis
International Nuclear Information System (INIS)
Cacuci, D.G.
1990-01-01
A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints
International Nuclear Information System (INIS)
Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.
1990-01-01
A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
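The efficiency argument in this abstract rests on the adjoint (reverse-mode) technique that ADGEN automates: one backward sweep yields sensitivities of a response to all inputs, instead of one perturbed forward run per input. A hand-rolled toy example of that idea (unrelated to the EQ3 model):

```python
# Tiny hand-rolled reverse-mode (adjoint) differentiation, the idea
# ADGEN automates: one backward sweep yields dy/dx_i for ALL inputs,
# instead of one perturbed forward run per input.
# Toy function, not the EQ3 geochemical model.

def forward(x):
    a = x[0] * x[1]      # intermediate 1
    b = a + x[2]         # intermediate 2
    y = b * b            # response
    return y, (a, b)

def adjoint(x):
    y, (a, b) = forward(x)
    # Backward sweep: seed dy/dy = 1, propagate adjoints.
    ybar = 1.0
    bbar = 2.0 * b * ybar        # from y = b*b
    abar = bbar                  # from b = a + x2
    grads = [abar * x[1],        # from a = x0*x1
             abar * x[0],
             bbar]               # db/dx2 = 1
    return y, grads

x = [1.5, -2.0, 0.5]
y, g = adjoint(x)

# Verify each component against forward finite differences
# (the "direct perturbation runs" the abstract compares against).
h = 1e-6
fd = []
for i in range(3):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    fd.append((forward(xp)[0] - forward(xm)[0]) / (2 * h))
print(g, fd)
```

With n inputs, the perturbation approach costs n extra forward runs while the adjoint sweep costs roughly one, which is the source of the 27-vs-31,000 run-time factor quoted above.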
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
Phantom pain : A sensitivity analysis
Borsje, Susanne; Bosmans, JC; Van der Schans, CP; Geertzen, JHB; Dijkstra, PU
2004-01-01
Purpose : To analyse how decisions to dichotomise the frequency and impediment of phantom pain into absent and present influence the outcome of studies by performing a sensitivity analysis on an existing database. Method : Five hundred and thirty-six subjects were recruited from the database of an
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
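The "pinching" device described above can be illustrated in an ordinary Monte Carlo setting: replace (pinch) one uncertain input with a fixed value and measure how much output spread disappears. A toy sketch with assumed distributions, not from the PBA paper:

```python
# Monte Carlo illustration of "pinching" an input: fix one input at a
# point value and measure how much output uncertainty it removes.
# Toy model and uniform distributions chosen for illustration only.
import random, statistics

random.seed(0)
N = 20_000

def model(a, b):
    return a * a + 0.1 * b

# Baseline: both inputs uncertain.
base = [model(random.uniform(0.5, 1.5), random.uniform(0.0, 1.0))
        for _ in range(N)]

# Pinch 'a' to its midpoint; only 'b' still varies.
pinched_a = [model(1.0, random.uniform(0.0, 1.0)) for _ in range(N)]

v0 = statistics.pvariance(base)
v1 = statistics.pvariance(pinched_a)
reduction = 1.0 - v1 / v0
print(f"variance reduction from pinching a: {reduction:.2%}")
```

A large reduction flags 'a' as the input worth learning more about; in full PBA the same comparison is done on probability boxes rather than single distributions.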
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. Emphasis is placed on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...
Sensitivity of LWR fuel cycle costs to uncertainties in detailed thermal cross sections
International Nuclear Information System (INIS)
Ryskamp, J.M.; Becker, M.; Harris, D.R.
1979-01-01
Cross sections averaged over the thermal energy (< 1 or 2 eV) group have been shown to have an important economic role for light-water reactors. Cost implications of thermal cross section uncertainties at the few-group level were reported earlier. When it has been determined that costs are sensitive to a specific thermal-group cross section, it becomes desirable to determine how specific energy-dependent cross sections influence fuel cycle costs. Multigroup cross-section sensitivity coefficients vary with fuel exposure. By changing the shape of a cross section displayed on a view-tube through an interactive graphics system, one can compute the change in few-group cross section using the exposure dependent sensitivity coefficients. With the changed exposure dependent few-group cross section, a new fuel cycle cost is computed by a sequence of batch depletion, core analysis, and fuel batch cost code modules. Fuel cycle costs are generally most sensitive to cross section uncertainties near the peak of the hardened Maxwellian flux
Sensitivity Analysis of Viscoelastic Structures
Directory of Open Access Journals (Sweden)
A.M.G. de Lima
2006-01-01
In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
UMTS Common Channel Sensitivity Analysis
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António; Santos, Frederico
2006-01-01
and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels … is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset…
TEMAC, Top Event Sensitivity Analysis
International Nuclear Information System (INIS)
Iman, R.L.; Shortencarier, M.J.
1988-01-01
1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
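The top-event evaluation described above can be sketched in miniature: given minimal cut sets over basic events (the representation SETS produces), the top event is evaluated repeatedly in a Monte Carlo fashion and compared with the rare-event cut-set bound. A hypothetical 4-event fault tree, not a real system model:

```python
# Monte Carlo evaluation of a top event given as minimal cut sets
# over independent basic events. Hypothetical 4-event fault tree.
import random

cut_sets = [{0, 1}, {2}, {1, 3}]          # top = (e0&e1) | e2 | (e1&e3)
p = [0.1, 0.2, 0.01, 0.05]                # basic-event probabilities

def top_occurs(state):
    # Top event occurs if every event of any one cut set occurs.
    return any(all(state[e] for e in cs) for cs in cut_sets)

random.seed(1)
N = 100_000
hits = 0
for _ in range(N):
    state = [random.random() < pi for pi in p]
    hits += top_occurs(state)
mc = hits / N

# Rare-event ("minimal cut upper bound") point estimate for comparison:
# P(top) <= sum over cut sets of the product of event probabilities.
def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

mcub = sum(prod(p[e] for e in cs) for cs in cut_sets)
print(mc, mcub)
```

Sensitivity of the top event to a base event probability then follows by repeating the evaluation with that probability perturbed, which is the kind of analysis TEMAC organizes at scale.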
Mesado Melia, Carles
2017-01-01
This PhD study, developed at Universitat Politècnica de València (UPV), aims to cover the first phase of the benchmark released by the expert group on Uncertainty Analysis in Modeling (UAM-LWR). The thesis author's main contribution to the benchmark is the development of a MATLAB program requested by the benchmark organizers, used to generate neutronic libraries to distribute among the benchmark participants. The UAM benchmark aims to determine the uncertainty introdu...
Systemization of burnup sensitivity analysis code. 2
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2005-02-01
Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, both for plant efficiency through rationally high-performance cores and for reliability and safety margins. A distinct improvement in nuclear core design accuracy has been achieved through the development of an adjusted nuclear data library using the cross-section adjustment method, which reflects the results of the JUPITER criticality experiments and others. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor JOYO. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. A burnup sensitivity analysis code, SAGEP-BURN, has already been developed and its effectiveness confirmed. However, the analysis sequence is inefficient: the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Unifying each computational component is not sufficient, because the computational sequence may change for each item being analyzed or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands. For
Roughness Sensitivity Comparisons of Wind Turbine Blade Sections
Energy Technology Data Exchange (ETDEWEB)
Wilcox, Benjamin J. [Texas A & M Univ., College Station, TX (United States). Dept. of Aerospace Engineering; White, Edward B. [Texas A & M Univ., College Station, TX (United States). Dept. of Aerospace Engineering; Maniaci, David Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technologies Dept.
2017-10-01
One explanation for wind turbine power degradation is insect roughness. Historical studies on insect-induced power degradation have used simulation methods which are either un- representative of actual insect roughness or too costly or time-consuming to be applied to wide-scale testing. Furthermore, the role of airfoil geometry in determining the relations between insect impingement locations and roughness sensitivity has not been studied. To link the effects of airfoil geometry, insect impingement locations, and roughness sensitivity, a simulation code was written to determine representative insect collection patterns for different airfoil shapes. Insect collection pattern data was then used to simulate roughness on an NREL S814 airfoil that was tested in a wind tunnel at Reynolds numbers between 1.6×10⁶ and 4.0×10⁶. Results are compared to previous tests of a NACA 63₃-418 airfoil. Increasing roughness height and density results in decreased maximum lift, lift curve slope, and lift-to-drag ratio. Increasing roughness height, density, or Reynolds number results in earlier bypass transition, with critical roughness Reynolds numbers lying within the historical range. Increased roughness sensitivity on the 25% thick NREL S814 is observed compared to the 18% thick NACA 63₃-418. Blade-element-momentum analysis was used to calculate annual energy production losses of 4.9% and 6.8% for a NACA 63₃-418 turbine and an NREL S814 turbine, respectively, operating with 200 μm roughness. These compare well to historical field measurements.
Data fusion qualitative sensitivity analysis
International Nuclear Information System (INIS)
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables
High-sensitivity detection using isotachophoresis with variable cross-section geometry.
Bahga, Supreet S; Kaigala, Govind V; Bercovici, Moran; Santiago, Juan G
2011-02-01
We present a theoretical and experimental study on increasing the sensitivity of ITP assays by varying channel cross-section. We present a simple, unsteady, diffusion-free model for plateau mode ITP in channels with axially varying cross-section. Our model takes into account detailed chemical equilibrium calculations and handles arbitrary variations in channel cross-section. We have validated our model with numerical simulations of a more comprehensive model of ITP. We show that using strongly convergent channels can lead to a large increase in sensitivity and simultaneous reduction in assay time, compared to uniform cross-section channels. We have validated our theoretical predictions with detailed experiments by varying channel geometry and analyte concentrations. We show the effectiveness of using strongly convergent channels by demonstrating indirect fluorescence detection with a sensitivity of 100 nM. We also present simple analytical relations for dependence of zone length and assay time on geometric parameters of strongly convergent channels. Our theoretical analysis and experimental validations provide useful guidelines on optimizing chip geometry for maximum sensitivity under constraints of required assay time, chip area and power supply.
Probabilistic sensitivity analysis of biochemical reaction systems.
Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John
2009-09-07
Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
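The variance-based approach this abstract builds on (Saltelli et al.) ranks inputs by the fraction of output variance each explains: the first-order index S_i = Var(E[Y|X_i]) / Var(Y). A crude binning estimate on a toy linear response, not a kinase-cascade model:

```python
# First-order variance-based sensitivity indices estimated by binning:
# S_i = Var(E[Y | X_i]) / Var(Y). Toy additive response for illustration.
import random, statistics

random.seed(2)
N = 50_000
xs1 = [random.uniform(0, 1) for _ in range(N)]
xs2 = [random.uniform(0, 1) for _ in range(N)]
ys = [4.0 * x1 + 0.5 * x2 for x1, x2 in zip(xs1, xs2)]
vy = statistics.pvariance(ys)

def first_order(xs, ys, bins=50):
    # Approximate E[Y | X in bin] by the mean of each bin, then take
    # the variance of those conditional means.
    buckets = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        buckets[min(int(x * bins), bins - 1)].append(y)
    means = [statistics.fmean(b) for b in buckets if b]
    return statistics.pvariance(means) / vy

s1 = first_order(xs1, ys)
s2 = first_order(xs2, ys)
print(round(s1, 2), round(s2, 2))
```

For this additive model the analytic indices are 16/16.25 ≈ 0.98 and 0.25/16.25 ≈ 0.02; the probabilistic machinery in the paper extends this idea to thermodynamically consistent parameter fluctuations and high-order interaction terms.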
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
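The complex-variable approach used above for the DYMORE structural sensitivities is the complex-step derivative: perturb the input along the imaginary axis and read the derivative off the imaginary part, f'(x) ≈ Im f(x + ih)/h, which avoids subtractive cancellation entirely. A toy smooth function (not a structural response) shows the idea:

```python
# Complex-step differentiation: f'(x) ~ Im(f(x + i*h)) / h.
# Because no subtraction of nearly equal values occurs, h can be tiny
# (1e-30) and the result is accurate to machine precision.
import cmath

def f(x):
    # Classic smooth test function; stands in for a structural response.
    return cmath.exp(x) / cmath.sqrt(cmath.sin(x) ** 3 + cmath.cos(x) ** 3)

def complex_step(f, x, h=1e-30):
    return f(complex(x, h)).imag / h

def central_diff(f, x, h=1e-6):
    # Real-valued finite difference for comparison.
    return (f(x + h).real - f(x - h).real) / (2 * h)

x = 0.5
cs = complex_step(f, x)
fd = central_diff(f, x)
print(cs, fd)
```

The two estimates agree to finite-difference accuracy, but only the complex-step value is immune to step-size cancellation, which is why it serves as a verification reference for the adjoint sensitivities in the abstract.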
Approaches to Sensitivity Analysis in MOLP
Sebastian Sitarz
2014-01-01
The paper presents two approaches to the sensitivity analysis in multi-objective linear programming (MOLP). The first one is the tolerance approach and the other one is the standard sensitivity analysis. We consider the perturbation of the objective function coefficients. In the tolerance method we simultaneously change all of the objective function coefficients. In the standard sensitivity analysis we change one objective function coefficient without changing the others. In the numerical exa...
A review of sensitivity analysis techniques
Energy Technology Data Exchange (ETDEWEB)
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
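Two of the reviewed techniques can be contrasted on one toy model: a normalized local sensitivity coefficient obtained via partial differentiation (here by central difference) and a simple one-at-a-time ±10% screening. The dose model below is purely illustrative:

```python
# Local (differential) sensitivity vs one-at-a-time (OAT) screening
# on a hypothetical multiplicative dose model.

def dose(params):
    return params["conc"] * params["intake"] / params["decay"]

nominal = {"conc": 2.0, "intake": 0.5, "decay": 4.0}
y0 = dose(nominal)

local = {}
oat = {}
for name, v in nominal.items():
    # Normalized local sensitivity (dY/Y)/(dX/X) via central difference.
    h = 1e-6 * v
    hi = dict(nominal); hi[name] = v + h
    lo = dict(nominal); lo[name] = v - h
    local[name] = (dose(hi) - dose(lo)) / (2 * h) * v / y0
    # OAT screening: response swing for a one-at-a-time +/-10% change.
    hi10 = dict(nominal); hi10[name] = 1.1 * v
    lo10 = dict(nominal); lo10[name] = 0.9 * v
    oat[name] = dose(hi10) - dose(lo10)

print(local)  # multiplicative model: conc, intake -> +1; decay -> -1
print(oat)
```

For a purely multiplicative model every normalized local sensitivity is ±1, so ranking requires the OAT (or regression-based) measures, which fold in the plausible range of each parameter.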
Risk Characterization uncertainties associated description, sensitivity analysis
International Nuclear Information System (INIS)
Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.
2013-01-01
The PowerPoint presentation covers risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks in subpopulations.
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the
Mbatchou Ngahane, Bertrand Hugo; Nde, Francis; Ngomo, Eliane; Afane Ze, Emmanuel
2015-01-01
Sensitization to flour or fungal alpha-amylase is a prerequisite for the development of respiratory allergy in bakers. The knowledge of occupational allergen sensitization among bakery workers will facilitate the implementation of preventive measures for respiratory allergies in bakeries. The objective of this study was to determine the prevalence and factors associated with sensitization to wheat flour and α-amylase in bakers in Douala. A cross-sectional study was conducted in 42 of the 151 bakeries that are present in the city of Douala. Demographics, clinical data, as well as results of skin prick tests to wheat flour, α-amylase and common aeroallergens were collected from all participants. A logistic regression model of the SPSS.20 software was used to identify factors associated with sensitization to wheat flour and α-amylase. Of the 229 participants included in the study, 222 (96.9%) were male. The mean age was 36.3 ± 8.9 years. The prevalence of sensitization to flour and α-amylase were 16.6% and 8.3% respectively. After multivariate analysis, factors associated with sensitization to flour were work seniority and sensitization to storage mites while an age of 30 years and above was the only factor associated with sensitization to α-amylase. Bakers in Douala are at risk of sensitization to occupational allergens. The environmental hygiene in bakeries, health surveillance and the use of personal protective equipment could reduce the risk of respiratory allergies among bakers.
A hybrid approach for global sensitivity analysis
International Nuclear Information System (INIS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-01-01
Distribution based sensitivity analysis (DSA) computes sensitivity of the input random variables with respect to the change in distribution of output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with this method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis of variance decomposition, extended bases and homotopy algorithm. By integrating PCFE into DSA, it is possible to considerably alleviate the computational burden. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained, to some extent, indicate that the proposed approach can be utilized for sensitivity analysis of large scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • Proposed approach integrates PCFE within distribution based sensitivity analysis. • Proposed approach is highly efficient.
Sensitivity analysis of a PWR pressurizer
International Nuclear Information System (INIS)
Bruel, Renata Nunes
1997-01-01
A sensitivity analysis with respect to the parameters and the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modellings, which generated a comprehensive matrix of influences for each change analysed. The major influences observed were the flashing phenomenon and the steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity-preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offer an inclusive view of ethical sensitivity that addresses some of the limitations of prior conceptualizations.
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to performing the sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices making use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident, LBLOCA, in the cold leg of a pressurized water reactor, PWR, addressing the results of the BEMUSE programme and using the thermal-hydraulic code TRACE. (authors)
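The Sobol' indices underlying the analysis above are typically estimated with a pick-freeze Monte Carlo scheme, applied either to the code itself or to a cheap meta-model of it. The following is a minimal sketch of the Saltelli-style first-order estimator; the function name and the test model are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """First-order Sobol' indices by the Saltelli pick-freeze estimator.
    `model` maps an (n, d) array of U(0,1) inputs to an (n,) output."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # resample input i only ("pick-freeze")
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S
```

For the additive model y = x1 + 2*x2 with independent U(0,1) inputs, the exact indices are S1 = 0.2 and S2 = 0.8, which the estimator recovers to Monte Carlo accuracy.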
Sensitivity analysis for solar plates
Aster, R. W.
1986-02-01
Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Sensitivity analysis in optimization and reliability problems
International Nuclear Information System (INIS)
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
2008-01-01
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus-of-variations problems, and a slope stability problem is used to illustrate the methods.
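For the linear programming case mentioned above, the closed-form sensitivities of the optimal objective with respect to the right-hand sides are the dual variables (shadow prices). A numpy-only sketch illustrates this numerically; the vertex-enumeration solver is suitable only for tiny problems, and the example LP (a classic two-product production model) is our own illustration, not taken from the paper:

```python
import itertools
import numpy as np

def solve_lp(c, A, b):
    """Maximize c@x s.t. A@x <= b, x >= 0, by enumerating basic feasible
    vertices. Exponential in problem size: illustration only."""
    m, d = A.shape
    G = np.vstack([A, -np.eye(d)])       # fold x >= 0 into G x <= h
    h = np.concatenate([b, np.zeros(d)])
    best, best_x = -np.inf, None
    for idx in itertools.combinations(range(m + d), d):
        Gi, hi = G[list(idx)], h[list(idx)]
        if abs(np.linalg.det(Gi)) < 1e-12:
            continue
        x = np.linalg.solve(Gi, hi)
        if np.all(G @ x <= h + 1e-9) and c @ x > best:
            best, best_x = c @ x, x
    return best, best_x

def shadow_prices(c, A, b, eps=1e-4):
    """dz*/db_j by forward differences; equals the optimal dual variables
    wherever the optimal basis does not change."""
    z0, _ = solve_lp(c, A, b)
    y = np.empty(len(b))
    for j in range(len(b)):
        bp = b.copy()
        bp[j] += eps
        y[j] = (solve_lp(c, A, bp)[0] - z0) / eps
    return y
```

For maximize 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, the optimum is z* = 36 at (2, 6), and the shadow prices (0, 1.5, 1) match the duals obtained from the active constraints.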
Techniques for sensitivity analysis of SYVAC results
International Nuclear Information System (INIS)
Prust, J.O.
1985-05-01
Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
Multiple predictor smoothing methods for sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
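Of the four smoothing techniques listed, locally weighted regression is the easiest to sketch from scratch. The following is a minimal numpy-only LOESS (degree-1 local fits with the tricube kernel); the function signature is our own, it assumes distinct x values, and it is a teaching sketch rather than a production smoother:

```python
import numpy as np

def loess(x, y, frac=0.5):
    """LOESS smoothing: at each x[i], fit a weighted straight line to the
    nearest k = frac*n points, weighted by the tricube kernel."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))       # neighbourhood size
    out = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]              # k nearest neighbours
        h = d[idx].max()
        w = (1 - (d[idx] / h) ** 3) ** 3     # tricube weights
        X = np.column_stack([np.ones(k), x[idx]])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y[idx], rcond=None)
        out[i] = beta[0] + beta[1] * x[i]    # evaluate local fit at x[i]
    return out
```

In a sampling-based sensitivity analysis, such a smoother is applied to scatterplots of model output versus each sampled input; structure in the smoothed curve flags an influential input even when the relationship is nonlinear.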
Sensitivity analysis and optimization issues in NASTRAN
Tischler, V. A.; Venkayya, V. B.
1991-01-01
The purpose is to develop procedures to extract sensitivity analysis information from COSMIC/NASTRAN and to couple it with a mathematical optimization package. At present, the analysis will be limited to stress, displacement, and frequency constraints with structures modeled with membrane elements, rods, and bar elements. Two types of sensitivity analysis are discussed: an adjoint variable approach which is most effective when the number of active constraints is significantly less than the number of physical variables, and an approach based on a first order approximation of a Taylor series. The latter approach is more effective when the number of independent design variables is significantly less than the number of active constraints.
Sensitivity Analysis of Fire Dynamics Simulation
DEFF Research Database (Denmark)
Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.
2007-01-01
equations require solution of the issues of combustion and gas radiation to mention a few. This paper performs a sensitivity analysis of a fire dynamics simulation on a benchmark case where measurement results are available for comparison. The analysis is performed using the method of Elementary Effects...
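The method of Elementary Effects named above (Morris screening) is cheap enough to sketch in full. This is a hedged, minimal version for inputs scaled to the unit hypercube; the function name, defaults, and test model are ours, not from the paper:

```python
import numpy as np

def morris_ee(model, d, r=50, levels=4, seed=0):
    """Morris elementary-effects screening on [0,1]^d.
    Returns (mu_star, sigma): mean absolute effect and effect spread."""
    rng = np.random.default_rng(seed)
    delta = levels / (2 * (levels - 1))            # standard step (2/3 for 4 levels)
    bases = np.arange(levels // 2) / (levels - 1)  # base values with x + delta <= 1
    ees = np.empty((r, d))
    for t in range(r):
        x = rng.choice(bases, size=d)
        fx = model(x)
        for i in rng.permutation(d):               # one-at-a-time trajectory
            x2 = x.copy()
            x2[i] += delta
            f2 = model(x2)
            ees[t, i] = (f2 - fx) / delta          # elementary effect of input i
            x, fx = x2, f2
    return np.abs(ees).mean(axis=0), ees.std(axis=0)
```

A large mu_star flags an influential input; a large sigma relative to mu_star flags nonlinearity or interactions. In a fire simulation context the inputs would be dimensionless rescalings of quantities such as heat release rate or soot yield; here the test model is purely synthetic.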
Dynamic Resonance Sensitivity Analysis in Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2017-01-01
(PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the most exciting bus of the resonances, which can be the best location to install passive or active filters to reduce the harmonic resonance problems. Time...
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a single first order ordinary differential equation. In this analysis, we will study the sensitivity analysis due to a variation of the initial condition ...
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
Sensitivity Analysis of Centralized Dynamic Cell Selection
DEFF Research Database (Denmark)
Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.
2016-01-01
and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...
Sensitivity analysis in a structural reliability context
International Nuclear Information System (INIS)
Lemaitre, Paul
2014-01-01
This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review, of sensitivity analysis on the one hand and of the estimation of small failure probabilities on the other, is first presented. This review raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one makes use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density given the subset reached. A more general and original methodology, reflecting the impact of input density modification on the failure probability, is then explored. The proposed methods are then applied to the CWNR case, which motivates this thesis. (author)
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA
Energy Technology Data Exchange (ETDEWEB)
Flach, Greg [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Wohlwend, Jen [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-10-02
This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
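The core idea of estimating confidence in an MDP's optimal policy can be sketched on a toy problem: sample the uncertain parameters, re-solve the MDP, and count how often the base-case policy stays optimal. This is a crude illustration of the probabilistic multivariate idea only (reward uncertainty, a two-state MDP of our own invention), not the authors' method:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Optimal policy of a finite MDP. P[a, s, s2] are transition
    probabilities, R[a, s] expected rewards."""
    V = np.zeros(R.shape[1])
    while True:
        Q = R + gamma * np.einsum('ast,t->as', P, V)   # action values
        Vn = Q.max(axis=0)
        if np.abs(Vn - V).max() < tol:
            return Q.argmax(axis=0)
        V = Vn

def policy_confidence(P, R, sd, m=200, gamma=0.95, seed=0):
    """Fraction of sampled reward matrices (base + N(0, sd) noise) whose
    optimal policy matches the base-case policy: a crude confidence in
    the recommended policy under joint parameter uncertainty."""
    rng = np.random.default_rng(seed)
    base = value_iteration(P, R, gamma)
    hits = sum(
        np.array_equal(
            value_iteration(P, R + sd * rng.standard_normal(R.shape), gamma),
            base)
        for _ in range(m))
    return hits / m
```

Sweeping sd (or a willingness-to-accept threshold) and plotting the resulting confidence is, in spirit, how a policy acceptability curve is built.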
Computational Method for Global Sensitivity Analysis of Reactor Neutronic Parameters
Directory of Open Access Journals (Sweden)
Bolade A. Adetula
2012-01-01
The variance-based global sensitivity analysis technique is robust, has a wide range of applicability, and provides accurate sensitivity information for most models. However, it requires input variables to be statistically independent. A modification to this technique that allows one to deal with input variables that are blockwise correlated and normally distributed is presented. The focus of this study is the application of the modified global sensitivity analysis technique to calculations of reactor parameters that are dependent on groupwise neutron cross-sections. The main effort in this work is in establishing a method for a practical numerical calculation of the global sensitivity indices. The implementation of the method involves the calculation of multidimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multidimensional integrals, namely, Monte Carlo and sparse grids methods, are used, and their efficiency is compared. The method is illustrated and tested on a two-group cross-section dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples. This aspect is addressed in a ministudy, and a preliminary explanation of the results obtained is given.
International Nuclear Information System (INIS)
Barber, A. D.; Busch, R.
2009-01-01
The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are only known as accurately as its geometric and material properties. The goal of this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
Sensitivity analysis of the Two Geometry Method
International Nuclear Information System (INIS)
Wichers, V.A.
1993-09-01
The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low-enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, small pipe diameters (less than 40 mm) and low pressures (less than 150 Pa). This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case conditions with regard to the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)
Sensitivity Analysis of Selected DIVOPS Input Factors
1977-12-01
[Report CAA-TD-77-9, Sensitivity Analysis of Selected DIVOPS Input Factors: Chapter 1, Introduction; 1-1, Background. The remaining scanned front matter and tabulated data are not recoverable from the OCR text.]
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
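The indices named above are computable directly with standard linear algebra: reactivity is the largest eigenvalue of the symmetric part of the community matrix, and the amplification envelope is the largest singular value of the matrix exponential. This sketch (our own function names; a finite-difference sensitivity in place of the paper's matrix-calculus derivatives, and an assumption that A is diagonalizable) illustrates both:

```python
import numpy as np

def reactivity(A):
    """Initial growth rate of perturbations: largest eigenvalue of the
    symmetric part (A + A.T)/2; positive means the system is reactive."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

def amplification_envelope(A, t):
    """rho(t) = largest singular value of expm(A*t): worst-case
    amplification of a unit perturbation at time t."""
    w, V = np.linalg.eig(A)                 # assumes A diagonalizable
    E = (V * np.exp(w * t)) @ np.linalg.inv(V)
    return np.linalg.svd(E, compute_uv=False).max()

def reactivity_sensitivity(A, eps=1e-6):
    """Finite-difference sensitivity of reactivity to each entry of A
    (the paper derives such sensitivities analytically)."""
    r0, S = reactivity(A), np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            Ap = A.copy()
            Ap[i, j] += eps
            S[i, j] = (reactivity(Ap) - r0) / eps
    return S
```

For A = [[-1, 5], [0, -2]], both eigenvalues are negative (the equilibrium is stable), yet reactivity is positive and the envelope exceeds 1 at moderate times before decaying: transient amplification despite asymptotic stability.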
Aronovich, Sharon; Kim, Roderick Y
2014-05-01
The management of odontogenic cysts and tumors typically requires a biopsy, which may present significant challenges and prompt an additional visit to the operating room before definitive treatment. The aim of this study was to determine the validity of frozen-section diagnosis in the management of benign oral and maxillofacial lesions, allowing intraoperative diagnosis followed by definitive treatment under the same general anesthetic. A retrospective chart review of patients treated at the University of Michigan Health System was performed. Patients of all ages who had a diagnosis of a benign maxillofacial lesion by frozen-section and permanent histopathology reports were included for analysis. Patients were identified using the Current Procedural Terminology code for enucleation and curettage and International Classification of Diseases, Ninth Revision codes for benign cysts or tumors of skull, face, or lower jaw. Of 450 patients reviewed, 214 had intraoperative frozen-section examination available for comparison with permanent histopathology. There were 121 men (56.5%) and 93 women (43.5%), with a mean age of 41 years. Compared with final permanent histopathology, the overall sensitivity of frozen sections was 92.1%. Frozen-section histopathology had a sensitivity greater than 90% and a specificity greater than 95% for the diagnosis of dentigerous cyst and keratocystic odontogenic tumor. In this study of 214 patients with benign maxillofacial lesions, frozen-section histopathology was found to be a valid diagnostic modality with high sensitivity, specificity, and positive and negative predictive values. These results and analysis support the use of frozen-section histopathology for the treatment of benign maxillofacial lesions and underscore its value in the management of these lesions. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Contributions to sensitivity analysis and generalized discriminant analysis
International Nuclear Information System (INIS)
Jacques, J.
2005-12-01
Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations in its inputs. Variance-based methods quantify the share of the variance of the model response that is due to each input variable and to each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices are not meaningful (from an interpretation point of view) in the presence of input correlation, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
Sensitivity of reactor multiplication factor to positions of cross-section ...
Indian Academy of Sciences (India)
V GOPALAKRISHNAN
2017-08-16
Neutron–nuclear interaction cross-section is sensitive to neutron kinetic energy and most nuclei exhibit resonance behaviour at specific energies within the resonance energy range, spanning from a fraction of an electron volt to several .... nite size avoids loss of neutrons through leakage; hence the neutron ...
Simple Sensitivity Analysis for Orion GNC
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
Sensitivity analysis of floating offshore wind farms
International Nuclear Information System (INIS)
Castro-Santos, Laura; Diaz-Casas, Vicente
2015-01-01
Highlights: • Develop a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • It helps investors to take decisions in the future. - Abstract: The future of offshore wind energy will be in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm. It will show how much the output variables can vary when the input variables change. For this purpose two different scenarios will be taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes in terms of economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs' scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors to take these variables into account in the development of floating offshore wind farms in the future.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias into LCA outcomes, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking an explicit interpretation of the degree of uncertainty and of the sensitivities are of limited value as robust evidence for decision making or comparative assertions. Copyright © 2012 Elsevier B.V. All rights reserved.
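The kind of probabilistic comparison described above can be sketched with a simple Monte Carlo propagation: treat each characterization factor as lognormal (the usual LCA convention) and estimate the probability that one product's impact score is below the other's. All quantities, factors, and geometric standard deviations below are invented for illustration, not data from the study:

```python
import numpy as np

def mc_impact(quantities, factors, gsd, n=20_000, rng=None):
    """Monte Carlo LCA impact score: sum_i q_i * f_i, with each
    characterization factor f_i lognormal (median factors[i],
    geometric standard deviation gsd[i])."""
    rng = np.random.default_rng(0) if rng is None else rng
    f = factors * np.exp(np.log(gsd) * rng.standard_normal((n, len(factors))))
    return f @ quantities
```

With hypothetical inventories for a biopolymer and a petrochemical product sharing the same factor uncertainties, `np.mean(a < b)` gives the confidence that the biopolymer has the lower impact; values well short of 1 are exactly the kind of result that argues against point-estimate comparisons.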
Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I
National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...
Is notch sensitivity a stress analysis problem?
Directory of Open Access Journals (Sweden)
Jaime Tupiassú Pinho de Castro
2013-07-01
Full Text Available Semi–empirical notch sensitivity factors q have been widely used to properly account for notch effects in fatigue design for a long time. However, the intrinsically empirical nature of this old concept can be avoided by modeling it using sound mechanical concepts that properly consider the influence of notch tip stress gradients on the growth behavior of mechanically short cracks. Moreover, this model requires only well-established mechanical properties, as it has no need for data-fitting or similar ill-defined empirical parameters. In this way, the q value can now be calculated considering the characteristics of the notch geometry and of the loading, as well as the basic mechanical properties of the material, such as its fatigue limit and crack propagation threshold, if the problem is fatigue, or its equivalent resistances to crack initiation and to crack propagation under corrosion conditions, if the problem is environmentally assisted or stress corrosion cracking. Predictions based on this purely mechanical model have been validated by proper tests both in the fatigue and in the SCC cases, indicating that notch sensitivity can indeed be treated as a stress analysis problem.
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
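The local sensitivity analysis described above usually reports normalized sensitivity coefficients S_i = (p_i / y) * dy/dp_i, estimated by finite differences around the nominal parameters. A minimal sketch (our own function name and example kinetics, not from the review):

```python
import numpy as np

def local_sensitivities(model, p0, rel_step=1e-6):
    """Normalized local sensitivity coefficients S_i = (p_i / y) * dy/dp_i,
    by central finite differences around the nominal parameter vector p0."""
    p0 = np.asarray(p0, float)
    y0 = model(p0)
    S = np.empty(p0.size)
    for i in range(p0.size):
        h = rel_step * max(abs(p0[i]), 1e-12)   # step scaled to the parameter
        up, dn = p0.copy(), p0.copy()
        up[i] += h
        dn[i] -= h
        S[i] = (model(up) - model(dn)) / (2 * h) * p0[i] / y0
    return S
```

For the Michaelis-Menten rate v = Vmax*s/(Km+s) at (Vmax, Km, s) = (2, 1, 3), the exact coefficients are (1, -Km/(Km+s), Km/(Km+s)) = (1, -0.25, 0.25), so the percent change in rate per percent change in each parameter is read off directly.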
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In RELAP5 analyses of small break loss of coolant accidents, the resultant transient behavior is found to be quite sensitive to the nodalization selected for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, the predicted distribution of inventory around the primary is significantly affected by it. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, leaving less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery.
Subset simulation for structural reliability sensitivity analysis
International Nuclear Information System (INIS)
Song Shufang; Lu Zhenzhou; Qiao Hongwei
2009-01-01
Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of a basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance and its coefficient of variation are derived in detail. The illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and for structural systems with single and multiple failure modes.
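The core quantity here, the derivative of a failure probability with respect to a distribution parameter, can be illustrated with crude Monte Carlo and the score-function identity rather than subset simulation. The limit state (failure when X > b) and the normal basic variable are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, b = 0.0, 1.0, 2.0      # basic variable X ~ N(mu, sigma); failure when X > b
n = 1_000_000
x = rng.normal(mu, sigma, n)
fail = x > b

# Failure probability and its sensitivity to the distribution parameter mu,
# via the score function d ln f/d mu = (x - mu)/sigma**2 of the normal density:
# dPf/dmu = E[ 1{failure} * (X - mu)/sigma**2 ].
pf = fail.mean()
dpf_dmu = (fail * (x - mu) / sigma**2).mean()
```

For this case the exact values are Pf = 1 - Φ(2) ≈ 0.0228 and dPf/dμ = φ(2)/σ ≈ 0.0540, so the estimator can be checked directly; Subsim replaces the crude sampling with conditional samples to reach much smaller failure probabilities.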
Sensitivity analysis of distributed volcanic source inversion
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this exploratory work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focus on the topology of the ground deformation network and on measurement noise. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
ACSAP. Resonance Region Cross Section Analysis
Energy Technology Data Exchange (ETDEWEB)
Smith, J.R.; Young, R.C. [EG and G Idaho Inc., Idaho Falls, ID (United States)
1972-09-01
ACSAP may be used to compute neutron cross section data from neutron resonance input. Total, fission, capture, or scattering cross section data may be computed. Experimental data may be compared by means of a wide selection of representations. ACSAP can also determine cross section resonance parameters from input experimental data.
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Energy Technology Data Exchange (ETDEWEB)
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
Systemization of burnup sensitivity analysis code (2) (Contract research)
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2008-08-01
Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, both for plant economic efficiency through rationally high-performance cores and for reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores, such as the experimental fast reactor JOYO. Burnup characteristic analysis is needed to make effective use of burnup data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revisions, since it is becoming difficult to implement new functions in the existing large system. Simply unifying each computational component is not sufficient, because the computational sequence may change with the item being analyzed or with the purpose, such as the interpretation of physical meaning. It is therefore necessary to systemize the current burnup sensitivity analysis code into functional component blocks that can be divided or combined as the occasion demands.
Longitudinal Genetic Analysis of Anxiety Sensitivity
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
Sensitivity analysis and energy conservation measures implications
International Nuclear Information System (INIS)
Lam, Joseph C.; Wan, Kevin K.W.; Yang Liu
2008-01-01
Electricity use characteristics of 10 air-conditioned office buildings in subtropical Hong Kong were investigated. Monthly electricity consumption data were gathered and analysed. The annual electricity use per unit gross floor area ranged from 233 to 368 kWh/m², with a mean of 292 kWh/m². The ranges of percentage consumption for the four major electricity end-users, namely heating, ventilation and air-conditioning (HVAC), lighting, electrical equipment, and lifts and escalators, were 40.1-50.7%, 22.1-29%, 16.6-32.9% and 2.2-5.3%, respectively. Ten key design variables were identified in the parametric and sensitivity analysis using a building energy simulation technique. Analysis of the resulting influence coefficients suggested that the indoor design condition (from 22 to 25.5 °C), electric lighting (a modest 2 W/m² reduction in the current lighting code) and chiller COP (from air- to water-cooled) could offer great electricity savings potential, of the order of 14%, 5.2% and 11%, respectively.
Evaluation of Cross-Section Sensitivities in Computing Burnup Credit Fission Product Concentrations
International Nuclear Information System (INIS)
Gauld, I.C.
2005-01-01
U.S. Nuclear Regulatory Commission Interim Staff Guidance 8 (ISG-8) for burnup credit covers actinides only, a position based primarily on the lack of definitive critical experiments and adequate radiochemical assay data that can be used to quantify the uncertainty associated with fission product credit. The accuracy of fission product neutron cross sections is paramount to the accuracy of criticality analyses that credit fission products in two respects: (1) the microscopic cross sections determine the reactivity worth of the fission products in spent fuel, and (2) the cross sections determine the reaction rates during irradiation and thus influence the accuracy of the predicted final concentrations of the fission products in the spent fuel. This report evaluates and quantifies the importance of the fission product cross sections in predicting concentrations of fission products proposed for use in burnup credit. The study includes an assessment of the major fission products in burnup credit and their production precursors. Finally, the cross-section importances, or sensitivities, are combined with the importance of each major fission product to the system eigenvalue (k_eff) to determine the net importance of cross sections to k_eff. The importance analysis established the following fission products, listed in descending order of priority, as those most likely to benefit burnup credit when their cross-section uncertainties are reduced: ¹⁵¹Sm, ¹⁰³Rh, ¹⁵⁵Eu, ¹⁵⁰Sm, ¹⁵²Sm, ¹⁵³Eu, ¹⁵⁴Eu, and ¹⁴³Nd.
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited through the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained from the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of the hazard is 1.39 x, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism in the next 10,000 years.
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
Wear-Out Sensitivity Analysis Project Abstract
Harris, Adam
2015-01-01
During the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario for how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. My duties were to take historical data on the operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. I then ran Monte Carlo simulations to see how an entire population of these components would perform. Finally, I varied the wear-out characteristic from its intrinsic value to extremely high wear-out values and determined how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
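The workflow above can be sketched in miniature: draw Weibull lifetimes for a unit population, count failures within a mission, and sweep the shape (wear-out) parameter to see how the probability of sufficiency shifts. All counts, lives, and spare levels here are hypothetical, not ISS data:

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_of_sufficiency(shape, scale, n_units=20, spares=16,
                        mission_t=2.5, trials=20_000):
    """Monte Carlo estimate of the probability that the spares cover all
    failures within the mission time, with unit lives drawn from a Weibull
    distribution (shape = wear-out characteristic, scale = characteristic life)."""
    lives = scale * rng.weibull(shape, size=(trials, n_units))
    failures = (lives < mission_t).sum(axis=1)
    return (failures <= spares).mean()

# Sweep the wear-out parameter: shape = 1 is a constant failure rate; larger
# shapes concentrate failures near the characteristic life.
for k in (1.0, 2.0, 4.0):
    p = prob_of_sufficiency(k, scale=2.0)
```

With the characteristic life shorter than the mission, raising the shape parameter pushes more failures inside the mission window, so sufficiency drops, which is the worst-case shift the project was sizing spares against.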
Supercritical extraction of oleaginous: parametric sensitivity analysis
Directory of Open Access Journals (Sweden)
Santos M.M.
2000-01-01
Full Text Available The economy has become global and competitive, so the vegetable oil extraction industries must move toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in the oleaginous extraction process has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that strategies for high-performance operation can be proposed.
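A two-level factorial sensitivity analysis of the kind used here can be sketched with a made-up response function standing in for the extraction model; the factor names and coefficients are invented for illustration:

```python
import numpy as np
from itertools import product

def yield_response(temp, pres):
    """Hypothetical extraction-yield response; in the study this role is
    played by the diffusive extraction model evaluated at each setting."""
    return 50.0 + 4.0 * temp + 7.5 * pres + 1.2 * temp * pres

levels = [-1, +1]                      # coded low/high levels of each factor
runs = list(product(levels, levels))   # full 2^2 factorial design
y = np.array([yield_response(t, p) for t, p in runs])
X = np.array(runs, dtype=float)

# Main effect of a factor = mean response at its high level minus mean at its low level.
effects = {name: y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
           for j, name in enumerate(["temp", "pres"])}
```

The main effects recover twice the linear coefficients (8.0 and 15.0 here), ranking the factors by influence exactly as the paper's factorial design ranks the model parameters.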
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-02
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.
Sensitivity analysis in multi-parameter probabilistic systems
International Nuclear Information System (INIS)
Walker, J.R.
1987-01-01
Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...
Perturbation analysis for Monte Carlo continuous cross section models
International Nuclear Information System (INIS)
Kennedy, Chris B.; Abdel-Khalik, Hany S.
2011-01-01
Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool for completing Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic rather than stochastic models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in a model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out PA by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate into the subsequent UQ and DA analyses. The introduced errors are problem specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA employing a continuous cross section stochastic model in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)
MOVES2010a regional level sensitivity analysis
2012-12-10
This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agencys (EPAs) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
International Nuclear Information System (INIS)
Added, N.
1987-01-01
The ¹⁸O + ¹⁰B fusion reaction has been investigated over a range of laboratory bombarding energies starting at 29.0 MeV and over a laboratory angular range. For this purpose, a high-resolution position-sensitive ionization chamber has been developed and constructed. Comparison of the experimental results with model predictions and with the experimental systematics found in the literature allows rejection of a compound nucleus limitation to the fusion cross section up to energies as high as five times the Coulomb barrier. Statistical model fits to the elementary distributions of the residues reveal a quite diffuse partial fusion cross section in angular momentum space. Systematic analysis of the fusion barrier height (V_B) and radius (R_B) for neighbouring nuclei points out the importance of nuclear matter diffuseness in the competition between fusion and quasi-direct processes. Calculations within this framework were performed. (author)
GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis
Kennedy, Christopher Brandon
model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.
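The kappa-metric above builds on Wilks' order statistics, whose classic first-order sample-size rule is easy to state: the largest of n random samples bounds a given quantile with a given confidence once n satisfies 1 - coverage^n ≥ confidence. A minimal sketch (the function name is mine, not from the dissertation):

```python
import math

def wilks_n(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage` quantile
    of a continuous distribution with probability `confidence`
    (first-order, one-sided Wilks formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

n95_95 = wilks_n()   # the well-known 95/95 sample size
```

This is the mechanism that lets a probability-confidence statement be wrapped around the ROM reduction error from a finite number of random model evaluations.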
Sensitivity Analysis of a Riparian Vegetation Growth Model
Directory of Open Access Journals (Sweden)
Michael Nones
2016-11-01
Full Text Available The paper presents a sensitivity analysis of the two main parameters used in a mathematical model that evaluates the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetational evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
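The two parameters studied, initial plant population and growth rate, can be compared on a simple logistic growth curve, a common stand-in for vegetation dynamics; the actual model in the paper is richer and coupled to hydrology, so this is only an illustrative sketch:

```python
import numpy as np

def logistic_growth(v0, r, k_cap=1.0, t=np.linspace(0.0, 10.0, 101)):
    """Closed-form logistic growth: carrying capacity k_cap, growth rate r,
    initial vegetation density v0 (all dimensionless here)."""
    return k_cap / (1.0 + (k_cap / v0 - 1.0) * np.exp(-r * t))

# Perturb the two parameters independently around a base case.
base = logistic_growth(0.05, 0.8)
double_rate = logistic_growth(0.05, 1.6)
double_init = logistic_growth(0.10, 0.8)
```

At intermediate times, doubling the growth rate moves the trajectory far more than doubling the initial density, while all runs converge to the carrying capacity, consistent with the paper's finding that the growth rate dominates regardless of the initial population.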
The accuracy of frozen section analysis in ultrasound- guided core needle biopsy of breast lesions
International Nuclear Information System (INIS)
Brunner, Andreas H; Sagmeister, Thomas; Kremer, Jolanta; Riss, Paul; Brustmann, Hermann
2009-01-01
Limited data are available to evaluate the accuracy of frozen section analysis of ultrasound-guided core needle biopsies of the breast. In a retrospective analysis, data from 120 consecutive handheld ultrasound-guided 14-gauge automated core needle biopsies (CNB) in 109 consecutive patients with breast lesions between 2006 and 2007 were evaluated. In our outpatient clinic, 120 CNBs were performed. In 59/120 (49.2%) cases we compared the histological diagnosis on frozen sections with that on paraffin sections of the CNB and finally with the result of open biopsy. Of the cases, 42/59 (71.2%) proved to be malignant and 17/59 (28.8%) benign on definitive histology. 2/59 (3.3%) biopsies had a false negative frozen section result. No false positive results of the intraoperative frozen section analysis were obtained, resulting in a sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 95%, 100%, 100% and 90%, respectively. Histological and morphobiological parameters showed no relevance to correct frozen section analysis. In cases of malignancy, the time between diagnosis and definitive treatment could not be reduced by frozen section analysis. Frozen section analysis of suspect breast lesions sampled by CNB displays good sensitivity/specificity characteristics. Immediate investigation of CNBs is an accurate diagnostic tool and an important step in reducing psychological strain by minimizing the period of uncertainty in patients with breast tumors.
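The reported figures follow from the standard 2x2 confusion-table definitions. The counts below are reconstructed from the abstract (42 malignant with 2 false negatives, 17 benign with no false positives), an assumption on my part; note the NPV comes to 17/19 ≈ 89.5%, consistent with the reported 90%:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=40, fp=0, fn=2, tn=17)
```

With no false positives, specificity and PPV are exactly 100%, while sensitivity is 40/42 ≈ 95.2%, matching the abstract.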
NPV Sensitivity Analysis: A Dynamic Excel Approach
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
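The one-way NPV sensitivity the paper builds in Excel can be mirrored in a few lines of code; the cashflow stream and discount rates below are hypothetical, chosen so the sweep exposes the sign change in NPV:

```python
def npv(rate, cashflows):
    """Net present value of a cashflow series; cashflows[0] occurs at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: 1000 outlay today, five annual inflows of 300.
flows = [-1000.0] + [300.0] * 5

# One-way sensitivity: sweep the discount rate and watch NPV change sign.
table = {r: round(npv(r, flows), 2) for r in (0.05, 0.10, 0.15, 0.20)}
```

The table shows NPV falling from about +299 at 5% to about -103 at 20%, crossing zero just above 15%; a spreadsheet data table over two interrelated variables is the two-dimensional version of the same sweep.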
Extended forward sensitivity analysis of one-dimensional isothermal flow
International Nuclear Information System (INIS)
Johnson, M.; Zhao, H.
2013-01-01
Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
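The forward sensitivity idea, augmenting the state equations with equations for the parameter derivatives, can be shown on a scalar ODE standing in for the 1-D flow equations; the decay model and parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, p):
    """Forward sensitivity system for dy/dt = -p*y:
    differentiating the ODE in p gives ds/dt = -p*s - y for s = dy/dp."""
    y, s = z
    return [-p * y, -p * s - y]

p, y0 = 0.7, 1.0
sol = solve_ivp(augmented_rhs, (0.0, 2.0), [y0, 0.0], args=(p,),
                rtol=1e-9, atol=1e-12)
y_end, s_end = sol.y[:, -1]
# Analytic check: y = y0*exp(-p*t), so dy/dp = -t*y0*exp(-p*t).
```

The sensitivity is integrated alongside the state at little extra cost, which is the feature the paper exploits; extending the parameter set to include the time step size then yields the discretization-error sensitivity described above.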
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors with the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBR that accounts simultaneously for biological and physical processes.
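The variance decomposition underlying Extended-FAST can be illustrated with a Monte Carlo pick-freeze estimator of first-order Sobol indices on a toy non-additive function; the function and its coefficients are invented stand-ins for the ASM2d outputs and factors:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy non-additive response: first-order indices are analytically
    S1 = 0.4 and S2 = 0.2, with 0.4 of the variance due to interaction."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 1]

n, d = 200_000, 2
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))
fA = model(A)
var = fA.var()

# Pick-freeze estimator of first-order Sobol indices:
# AB_i takes column i from A and the remaining columns from B, so that
# S_i = E[f(A) * (f(AB_i) - f(B))] / Var(f).
S = []
for i in range(d):
    AB = B.copy()
    AB[:, i] = A[:, i]
    S.append(np.mean(fA * (model(AB) - model(B))) / var)
```

Because the first-order indices sum to only 0.6, the remaining 40% of the output variance is attributable to the interaction term, exactly the non-additive behaviour the paper reports for the MBR model.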
Chronic malnutrition: a cross-section analysis
Directory of Open Access Journals (Sweden)
Emely Beatriz García González
2014-01-01
Full Text Available Introduction: The objective of the study was to determine the main causes of chronic malnutrition worldwide. Materials and Methods: A cross-sectional study was employed to analyze the main determinants of chronic malnutrition in a sample of 86 countries. The variables used are based on the UNICEF conceptual framework of malnutrition, which classifies the determinants of malnutrition into three groups of causes: basic, immediate, and underlying. Findings: Droughts, floods, and extreme temperatures, and GDP per capita are the main basic determinants of malnutrition in the sample of countries. In addition, one underlying determinant had a major impact on the prevalence of malnutrition: improved sanitation facilities. Conclusions: The findings of this study demonstrate that the variables within the basic and underlying cause classifications are the ones with the greatest impact on chronic malnutrition.
The role of sensitivity analysis in probabilistic safety assessment
International Nuclear Information System (INIS)
Hirschberg, S.; Knochenhauer, M.
1987-01-01
The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)
Special section on modern multivariate analysis
Kafadar, Karen
2012-01-01
A critically challenging problem facing statisticians is the identification of a suitable framework which consolidates data of various types, from different sources, and across different time frames or scales (many of which can be missing), and from which appropriate analysis and subsequent inference can proceed.
Automated sensitivity analysis using the GRESS language
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies
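The "computer calculus" that GRESS adds to FORTRAN models can be sketched in miniature with forward-mode automatic differentiation via dual numbers: every arithmetic operation propagates a derivative alongside its value, so the model's output arrives with its sensitivity attached. The `Dual` class and toy model below are an illustration of the concept, not the GRESS precompiler itself.

```python
# Minimal forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # stand-in for a model response, e.g. y = 3*k^2 + 2*k
    return 3 * k * k + 2 * k

# Seed dk/dk = 1; dy/dk emerges automatically from the arithmetic.
y = model(Dual(2.0, 1.0))
# y.val = 16.0 and y.der = 6*k + 2 = 14.0 at k = 2
```

A precompiler like GRESS performs the analogous source transformation on the model code itself rather than relying on operator overloading.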
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
Sensitivity analysis is a tool used in the assessment of a model's performance. This study examined the application of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study area is Effurun, Uvwie Local Government Area of Delta State of Nigeria. In order to ...
Applications of the BEam Cross section Analysis Software (BECAS)
DEFF Research Database (Denmark)
Blasques, José Pedro Albergaria Amaral; Bitsche, Robert; Fedorov, Vladimir
2013-01-01
A newly developed framework is presented for structural design and analysis of long slender beam-like structures, e.g., wind turbine blades. The framework is based on the BEam Cross section Analysis Software – BECAS – a finite element based cross section analysis tool. BECAS is used for the generation of beam finite element models which correctly account for effects stemming from material anisotropy and inhomogeneity in cross sections of arbitrary geometry. This type of modelling approach allows for an accurate yet computationally inexpensive representation of a general class of three…
SECTION 6.2 SURFACE TOPOGRAPHY ANALYSIS
DEFF Research Database (Denmark)
Seah, M. P.; De Chiffre, Leonardo
2005-01-01
Surface physical analysis, i.e. topography characterisation, encompasses measurement, visualisation, and quantification. This is critical for both component form and for surface finish at macro-, micro- and nano-scales. The principal methods of surface topography measurement are stylus profilometry, optical scanning techniques, and scanning probe microscopy (SPM). These methods, based on acquisition of topography data from point by point scans, give quantitative information of heights with respect to position. Based on a different approach, the so-called integral methods produce parameters…
Simplified procedures for fast reactor fuel cycle and sensitivity analysis
International Nuclear Information System (INIS)
Badruzzaman, A.
1979-01-01
The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a Fast Reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the Integral Transport approach, where considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure for this region, the core-blanket interface, and the blanket region, led to the development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of key fuel cycle parameters, such as cost, and of their sensitivity to variations in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variations on these parameters very well. The accuracy of these procedures and their use in analyzing a wide variety of sensitivities demonstrate the potential utility of survey calculations in Fast Reactor analysis and fuel management
Directory of Open Access Journals (Sweden)
R. L. Cabrini
1998-01-01
Full Text Available The exact knowledge of the section thickness is a requisite for making the necessary corrections on DNA measurements in tissue sections. Several methods have been proposed to evaluate section thickness, each of them with advantages and disadvantages depending on the type of specimen and equipment available. We herein report another method based on preparation of a standard material whose optical density varies as a function of its thickness and which is sectioned and measured alongside the tissue specimen. The standards consist of celloidin cylinders stained with the PAS reaction and embedded in paraffin. For prior characterization of the cylinders, sections of different thickness were obtained and mounted. The optical density of each section was measured by direct microphotometry or image analysis. The actual thickness of each section was evaluated following re-embedding of piled groups of sections in a paraffin block and transversal sectioning. The thickness was then measured with a micrometric eye-piece. Optical density and actual thickness of each section were plotted on a nomogram curve. Once a given tissue is sectioned alongside the reference cylinder, the actual thickness is determined by its optical density on the nomogram curve.
Okayama, Masanobu; Takeshima, Taro; Ae, Ryusuke; Harada, Masanori; Kajii, Eiji
2013-10-09
The current research into single nucleotide polymorphisms has extended the role of genetic testing to the identification of increased risk for common medical conditions. Advances in genetic research may soon necessitate preparation for the role of genetic testing in primary care medicine. This study attempts to determine what proportion of patients would be willing to undergo genetic testing for salt-sensitive hypertension in a primary care setting, and what factors are related to this willingness. A cross-sectional study using a self-report questionnaire was conducted among outpatients in primary care clinics and hospitals in Japan. The main characteristics measured were education level, family medical history, personal medical history, concern about hypertension, salt preference, reducing salt intake, and willingness to undergo genetic testing for salt-sensitive hypertension. Of 1,932 potential participants, 1,457 (75%) responded to the survey. Of the respondents, 726 (50%) indicated a willingness to undergo genetic testing. Factors related to this willingness were being over 50 years old (adjusted odds ratio [ad-OR] = 1.42, 95% Confidence interval = 1.09 - 1.85), having a high level of education (ad-OR: 1.83, 1.38 - 2.42), having a family history of hypertension (ad-OR: 1.36, 1.09 - 1.71), and worrying about hypertension (ad-OR: 2.06, 1.59 - 2.68). Half of the primary care outpatients surveyed in this study wanted to know their genetic risk for salt-sensitive hypertension. Those who were worried about hypertension or had a family history of hypertension were more likely to be interested in getting tested. These findings suggest that primary care physicians should provide patients with advice on genetic testing, as well as address their anxieties and concerns related to developing hypertension.
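The adjusted odds ratios reported above come from a multivariable model, but the underlying calculation can be sketched with an unadjusted odds ratio and a Wald 95% confidence interval from a 2x2 table. The counts below are hypothetical, invented for illustration; they are not the study's data.

```python
# Odds ratio with Wald 95% CI from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: willingness to undergo testing among respondents
# with vs. without a family history of hypertension.
or_, lo, hi = odds_ratio_ci(300, 200, 426, 531)
```

An interval excluding 1.0, as here, is what licenses statements like "those with a family history were more likely to be interested in testing."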
Factors related to allergic sensitization to aeroallergens in a cross-sectional study in adults
DEFF Research Database (Denmark)
Linneberg, A; Nielsen, N H; Madsen, F
2001-01-01
…-olds in Copenhagen was carried out in 1990. The participation rate was 77.5% (1112/1435). Different lifestyle/environmental factors (explanatory variables) were defined based on questionnaire data. Dependent (outcome) variables were skin prick test (SPT) positivity or specific IgE positivity to common aeroallergens. Explanatory variables associated with outcome in univariate analysis … young age, low …-response relationship. CONCLUSION: Being male, young age, a positive family history of hayfever, low number of siblings, and never smoking were independently associated with allergic sensitization. In addition, the results indicated a possible relationship of alcohol consumption, body mass index and previous keeping …
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
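The variance-reduction idea behind the paper's second step, coupling the nominal and perturbed stochastic simulations so that noise cancels in the finite difference, can be sketched by sharing the random stream between the two runs. The toy stochastic counter below is an assumption for illustration, not the paper's reaction-network models or its particular coupling construction.

```python
# Finite-difference sensitivity of a stochastic model, with and without
# common random numbers ("coupling") between nominal and perturbed runs.
import random

def simulate(rate, rng, n_steps=1000):
    """Toy stochastic counter: each step adds 1 with probability `rate`."""
    x = 0
    for _ in range(n_steps):
        if rng.random() < rate:
            x += 1
    return x

def fd_sensitivity(rate, h=0.01, n_rep=200, coupled=True):
    est = []
    for rep in range(n_rep):
        if coupled:
            r1, r2 = random.Random(rep), random.Random(rep)  # shared stream
        else:
            r1, r2 = random.Random(2 * rep), random.Random(2 * rep + 1)
        est.append((simulate(rate + h, r2) - simulate(rate, r1)) / h)
    mean = sum(est) / n_rep
    var = sum((e - mean) ** 2 for e in est) / (n_rep - 1)
    return mean, var

# Both estimators target d E[X]/d rate = n_steps = 1000; the coupled
# one does so with a far smaller replicate variance.
coupled_mean, coupled_var = fd_sensitivity(0.3, coupled=True)
independent_mean, independent_var = fd_sensitivity(0.3, coupled=False)
```

The variance gap is what makes such estimators affordable for the parameters that survive the paper's Fisher-Information screening step.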
A demonstration sensitivity analysis for RADTRAN III
International Nuclear Information System (INIS)
Reardon, P.C.; Neuhauser, K.S.
1987-01-01
RADTRAN III is a computer code for the assessment of transportation risk. It has been used to conduct risk analyses of radioactive material shipments for the DOE Office of Defense Programs, the DOE Office of Civilian Radioactive Waste Management (OCRWM), and others. These analyses require large amounts of data, and the values of the input parameters influence the magnitudes of the total risk estimates to varying extents. The degree of change in the output (risk) to changes in certain input parameter values is examined here for a small problem from the OCRWM analyses. This paper demonstrates the sensitivity of risk estimates generated by RADTRAN III for a sample problem. Parameters contributing to incident-free and accident risk were analyzed
Analysis of Sensitivity Experiments - A Primer
National Research Council Canada - National Science Library
Nance, Douglas V
2008-01-01
A specialized version of this scheme is derived for stable digital computation. Confidence interval estimation is discussed along with an analysis of variance. A set of example problems is solved; our results are compared with archival solutions.
Sensitivity Analysis Based on Markovian Integration by Parts Formula
Directory of Open Access Journals (Sweden)
Yongsheng Hang
2017-10-01
Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variation in outputs brought about by changes in parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and show closed-form expressions for two commonly-used time-continuous Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivity on Markovian models.
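For intuition about what "sensitivity of a time-continuous Markovian model" means (this is a toy illustration, not the paper's integration-by-parts formula): a two-state chain with rates a (0→1) and b (1→0) has stationary probability pi1 = a/(a+b), so d pi1/d a = b/(a+b)^2 in closed form, which a finite difference confirms.

```python
# Closed-form vs. finite-difference sensitivity of the stationary
# distribution of a two-state continuous-time Markov chain.
def pi1(a, b):
    """Stationary probability of state 1 for rates a (0->1), b (1->0)."""
    return a / (a + b)

def dpi1_da_closed(a, b):
    return b / (a + b) ** 2

def dpi1_da_fd(a, b, h=1e-6):
    # central difference for comparison
    return (pi1(a + h, b) - pi1(a - h, b)) / (2 * h)

a, b = 2.0, 3.0
closed, fd = dpi1_da_closed(a, b), dpi1_da_fd(a, b)
```

Closed-form expressions like this are exactly what the integration-by-parts machinery delivers for models where no such elementary formula is available.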
Advanced Fuel Cycle Economic Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Global and Local Sensitivity Analysis Methods for a Physical System
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Sensitivity analysis of the RESRAD, a dose assessment code
International Nuclear Information System (INIS)
Yu, C.; Cheng, J.J.; Zielen, A.J.
1991-01-01
The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. the RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) built-in graphic package that shows parameter sensitivities while the RESRAD code is operational
A sensitivity analysis approach to optical parameters of scintillation detectors
International Nuclear Information System (INIS)
Ghal-Eh, N.; Koohi-Fayegh, R.
2008-01-01
In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of light collection process in scintillators
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey on the use of statistical designs for what if-or sensitivity analysis in simulation.This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
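The regression-metamodel approach the tutorial surveys can be sketched directly: fit a first-order polynomial to simulation input/output data and read the standardized coefficients as sensitivity measures. The "simulation" below is a simple deterministic stand-in with additive noise, chosen for illustration only.

```python
# Regression metamodel of a simulation's input/output transformation;
# standardized coefficients serve as sensitivity measures.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 3))                  # experimental design: 3 inputs
# Stand-in "simulation": strong effect of x1, weak x2, inert x3, plus noise.
y = 5 * X[:, 0] + 1 * X[:, 1] + 0 * X[:, 2] + rng.normal(0, 0.1, 500)

# Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Standardized coefficients: beta_i * std(x_i) / std(y)
std_beta = beta[1:] * X.std(axis=0) / y.std()
```

A designed experiment (e.g. a factorial design) rather than random sampling is what the tutorial advocates for choosing the rows of X efficiently.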
Sensitivity analysis of a greedy heuristic for knapsack problems
Ghosh, D; Chakravarti, N; Sierksma, G
2006-01-01
In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems-the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
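The greedy heuristic for the 0-1 knapsack problem sorts items by profit/weight density and takes them greedily; a tolerance-limit question then asks how much a single profit coefficient can change before the heuristic's solution changes. The probe below answers that by direct re-solution, a brute-force stand-in for the paper's analytical tolerance limits; the instance data are made up.

```python
# Greedy density heuristic for 0-1 knapsack, plus a brute-force
# tolerance probe on one profit coefficient.
def greedy_knapsack(profits, weights, capacity):
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    chosen, room = [], capacity
    for i in order:
        if weights[i] <= room:
            chosen.append(i)
            room -= weights[i]
    return sorted(chosen)

profits = [60, 100, 120]
weights = [10, 20, 30]
base = greedy_knapsack(profits, weights, 50)   # densities 6, 5, 4

def tolerance_up(i, step=1, limit=500):
    """Smallest increase in profits[i] that alters the greedy solution."""
    for delta in range(step, limit, step):
        p = profits[:]
        p[i] += delta
        if greedy_knapsack(p, weights, 50) != base:
            return delta
    return None
```

Here `base` is items 0 and 1; raising item 2's profit by 31 pushes its density past item 1's and flips the greedy choice, so 31 is the upward tolerance of that coefficient.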
Sensitivity analysis of numerical solutions for environmental fluid problems
International Nuclear Information System (INIS)
Tanaka, Nobuatsu; Motoyama, Yasunori
2003-01-01
In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
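The general idea of estimating discretization error from solutions at different resolutions can be illustrated with Richardson extrapolation, a standard technique in the same spirit as the sensitivity-based estimate above, though not the paper's exact method. We integrate f(x) = x^2 on [0, 1] with the trapezoid rule at two step sizes.

```python
# Richardson extrapolation: estimate the discretization error of a
# second-order method from two resolutions.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

f = lambda x: x * x
coarse = trapezoid(f, 0.0, 1.0, 10)    # step h
fine = trapezoid(f, 0.0, 1.0, 20)      # step h/2
# Trapezoid error is O(h^2), so halving h divides it by ~4. The
# correction to the fine solution is therefore (fine - coarse)/(4 - 1),
# and applying it yields a higher-order estimate of the strict solution.
correction = (fine - coarse) / 3.0
extrapolated = fine + correction
# Exact integral is 1/3; for f = x^2 the extrapolation is exact because
# the trapezoid error has no higher-order terms.
```

The magnitude of `correction` plays the role of the quantitative error estimate: it tells the analyst how far the fine-grid solution sits from the strict solution.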
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard (INL); Perez, Danielle (INL)
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which si… a multivariate design space. As a case study, we consider building performance simulations of a 15,000 m² educational centre with respect to energy demand, thermal comfort, and daylight.
Robust Sensitivity Analysis of the Optimal Value of Linear Programming
Xu, Guanglin; Burer, Samuel
2015-01-01
We propose a framework for sensitivity analysis of linear programs (LPs) in minimization form, allowing for simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex uncertainty set. This framework unifies and extends multiple approaches for LP sensitivity analysis in the literature and has close ties to worst-case linear optimization and two-stage adaptive optimization. We define the minimum (best-case) and maximum...
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
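The "metabolome-wide significance threshold (false discovery rate < 0.05)" used above is conventionally the Benjamini-Hochberg step-up procedure, sketched below. The p-values are invented for illustration; they are not the study's 301 association tests.

```python
# Benjamini-Hochberg step-up procedure for FDR control.
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank  # largest rank satisfying the step-up criterion
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.20, 0.74]
rejected = benjamini_hochberg(pvals, q=0.05)
```

Note the step-up character: a p-value is rejected if any later rank satisfies the criterion, which is what distinguishes FDR control from a simple per-test cutoff.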
Hasegawa, Raiden; Small, Dylan
2017-12-01
In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects.
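The worst-case calibration the paper starts from can be sketched for matched pairs with binary outcomes (Rosenbaum-style sensitivity analysis): if the unmeasured bias is at most Gamma, the chance that the treated unit in a discordant pair is the one with the event is at most Gamma/(1+Gamma), so a binomial tail bounds the one-sided p-value. The counts below are hypothetical, not the cellphone-study data.

```python
# Worst-case sensitivity bound for McNemar-type paired binary data.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def worst_case_pvalue(n_discordant, n_treated_events, gamma):
    # Under bias at most gamma, the per-pair probability that the treated
    # unit has the event is at most gamma / (1 + gamma).
    p_upper = gamma / (1.0 + gamma)
    return binom_tail(n_discordant, n_treated_events, p_upper)

# Hypothetical: 100 discordant pairs, 70 with the event on the treated side.
p_no_bias = worst_case_pvalue(100, 70, gamma=1.0)  # = randomization test
p_gamma2 = worst_case_pvalue(100, 70, gamma=2.0)   # allow two-fold bias
```

The bound weakens as Gamma grows; the paper's contribution is that calibrating the same machinery to the average bias across pairs, rather than this per-pair worst case, yields a tighter and often more interpretable analysis.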
Advanced nuclear measurements LDRD - Sensitivity analysis
International Nuclear Information System (INIS)
Dreicer, J.S.
1999-01-01
This component of the Advanced Nuclear Measurements LDRD-PD has focused on the analysis and methodologies to quantify and characterize existing inventories of weapons and commercial fissile materials, as well as to anticipate future forms and quantities of fissile materials. Historically, domestic safeguards had been applied either to pure uniform homogeneous material or to well characterized materials. The future is different: measurement challenges will be associated with the materials recovered from dismantled nuclear weapons in the US and Russia subject to disposition, the residues and wastes left over from the weapons production process, and the existing and growing inventory of materials in commercial/civilian programs. Nuclear measurement issues for the fissile materials coming from these sources are associated with homogeneity, purity, and matrix effects. Specifically, these difficult-to-measure fissile materials are heterogeneous, impure, and embedded in highly shielding non-uniform matrices. Currently, each of these effects creates problems for radiation-based assay, and it is impossible to measure material that has a combination of all these effects. Nuclear materials control and measurement is a dynamic problem requiring a predictive capability. This component has been tasked with helping select which future problems are the most important to target. During the last year, accomplishments include: characterization of weapons waste fissile materials, identification of measurement problem areas, definition of instrument requirements, and characterization of commercial fissile materials. A discussion of accomplishments in each of these areas is presented
Advanced polarization sensitive analysis in optical coherence tomography
Wieloszyńska, Aleksandra; Strąkowski, Marcin R.
2017-08-01
Optical coherence tomography (OCT) is an optical imaging method that is widely applied across a variety of fields. The technology provides cross-sectional or surface imaging with high resolution in a non-contact and non-destructive way. OCT is very useful in medical applications such as ophthalmology, dermatology, and dentistry, as well as in fields beyond biomedicine, such as stress mapping in polymers and defect detection in protective coatings. Standard OCT imaging is based on intensity images, which can visualize the inner structure of scattering objects. However, there are a number of extensions that improve the OCT measurement abilities. The main ones are polarization sensitive OCT (PS-OCT), Doppler OCT (D-OCT), and spectroscopic OCT (S-OCT). Our research activities have been focused on PS-OCT systems. Polarization sensitive analysis delivers useful information about the optical anisotropic properties of the evaluated sample. This kind of measurement is very important for inner stress monitoring or, e.g., tissue recognition. Based on our research results and knowledge, standard PS-OCT provides only data about the birefringence of the measured sample. However, based on the OCT measurements, more information, including depolarization and diattenuation, might be obtained. In our work, a method based on the Jones formalism is presented. It is used to determine the birefringence, dichroism, and optic axis orientation of the tested sample. In this contribution, the setup of the optical system, as well as test results verifying the measurement abilities of the system, are presented. A brief discussion of the effectiveness and usefulness of this approach is carried out.
Amosu, Adewale; Sun, Yuefeng
WheelerLab is an interactive program that facilitates the interpretation of stratigraphic data (seismic sections, outcrop data and well sections) within a sequence stratigraphic framework and the subsequent transformation of the data into the chronostratigraphic domain. The transformation enables the identification of significant geological features, particularly erosional and non-depositional features that are not obvious in the original seismic domain. Although there are some software products that contain interactive environments for carrying out chronostratigraphic analysis, none of them are open-source codes. In addition to being open source, WheelerLab adds two important functionalities not present in currently available software: (1) WheelerLab generates a dynamic chronostratigraphic section and (2) WheelerLab enables chronostratigraphic analysis of older seismic data sets that exist only as images and not in the standard seismic file formats; it can also be used for the chronostratigraphic analysis of outcrop images and interpreted well sections. The dynamic chronostratigraphic section sequentially depicts the evolution of the chronostratigraphic chronosomes concurrently with the evolution of identified genetic stratal packages. This facilitates a better communication of the sequence-stratigraphic process. WheelerLab is designed to give the user both interactive and interpretational control over the transformation; this is most useful when determining the correct stratigraphic order for laterally separated genetic stratal packages. The program can also be used to generate synthetic sequence stratigraphic sections for chronostratigraphic analysis.
Directory of Open Access Journals (Sweden)
Adewale Amosu
2017-01-01
Full Text Available WheelerLab is an interactive program that facilitates the interpretation of stratigraphic data (seismic sections, outcrop data and well sections) within a sequence stratigraphic framework and the subsequent transformation of the data into the chronostratigraphic domain. The transformation enables the identification of significant geological features, particularly erosional and non-depositional features that are not obvious in the original seismic domain. Although there are some software products that contain interactive environments for carrying out chronostratigraphic analysis, none of them are open-source codes. In addition to being open source, WheelerLab adds two important functionalities not present in currently available software: (1) WheelerLab generates a dynamic chronostratigraphic section and (2) WheelerLab enables chronostratigraphic analysis of older seismic data sets that exist only as images and not in the standard seismic file formats; it can also be used for the chronostratigraphic analysis of outcrop images and interpreted well sections. The dynamic chronostratigraphic section sequentially depicts the evolution of the chronostratigraphic chronosomes concurrently with the evolution of identified genetic stratal packages. This facilitates a better communication of the sequence-stratigraphic process. WheelerLab is designed to give the user both interactive and interpretational control over the transformation; this is most useful when determining the correct stratigraphic order for laterally separated genetic stratal packages. The program can also be used to generate synthetic sequence stratigraphic sections for chronostratigraphic analysis.
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
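The first technique in the list above, locally weighted regression (LOESS), can be sketched briefly. This is a minimal numpy-only illustration of the idea (local polynomial fits with tricube weights), not the authors' implementation; the function name and parameters are assumptions for the example:

```python
import numpy as np

def loess(x, y, frac=0.3, degree=1):
    """Locally weighted polynomial regression (LOESS) with tricube weights."""
    n = len(x)
    k = max(degree + 1, int(np.ceil(frac * n)))   # neighbours used per local fit
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                   # k nearest neighbours of x[i]
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel weights
        # polyfit's w multiplies residuals, so pass sqrt(w) for weights w
        coef = np.polyfit(x[idx], y[idx], degree, w=np.sqrt(w))
        fitted[i] = np.polyval(coef, x[i])
    return fitted

# Smooth a noisy nonlinear response; the fit should track sin(x) closely.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)
yhat = loess(x, y)
```

In a sampling-based sensitivity analysis, such a smoother would be applied stepwise, one predictor at a time, to capture nonlinear input-output relationships that linear or rank regression would miss.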
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to change in the design variables including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis on the steady state...... performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and striper pressure are presented in this paper. Based on the sensitivity analysis process optimization problems have been defined and solved and, a preliminary control structure selection has been made....
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro, María
2016-12-26
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
Circulating Lipids and Acute Pain Sensitization: An Exploratory Analysis.
Starkweather, Angela; Julian, Thomas; Ramesh, Divya; Heineman, Amy; Sturgill, Jamie; Dorsey, Susan G; Lyon, Debra E; Wijesinghe, Dayanjan Shanaka
In individuals with low back pain, higher lipid levels have been documented and were associated with increased risk for chronic low back pain. The purpose of this research was to identify plasma lipids that discriminate participants with acute low back pain with or without pain sensitization as measured by quantitative sensory testing. This exploratory study was conducted as part of a larger parent randomized controlled trial. A cluster analysis of 30 participants with acute low back pain revealed two clusters: one with signs of peripheral and central sensitivity to mechanical and thermal stimuli and the other with an absence of peripheral and central sensitivity. Lipid levels were extracted from plasma and measured using mass spectroscopy. Triacylglycerol 50:2 was significantly higher in participants with peripheral and central sensitization compared to the nonsensitized cluster. The nonsensitized cluster had significantly higher levels of phosphoglyceride 34:2, plasmenyl phosphocholine 38:1, and phosphatidic acid 28:1 compared to participants with peripheral and central sensitization. Linear discriminant function analysis was conducted using the four statistically significant lipids to test their predictive power to classify those in the sensitization and no-sensitization clusters; the four lipids accurately predicted cluster classification 58% of the time (R = .58, -2 log likelihood = 14.59). The results of this exploratory study suggest a unique lipidomic signature in plasma of patients with acute low back pain based on the presence or absence of pain sensitization. Future work to replicate these preliminary findings is underway.
A general first-order global sensitivity analysis method
International Nuclear Information System (INIS)
Xu Chonggang; Gertner, George Zdzislaw
2008-01-01
Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
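The FAST mechanism described above — each parameter driven at its own characteristic frequency through a search function, with its variance contribution extracted at that frequency — can be sketched as follows. This is a minimal classical-FAST illustration in numpy, not the generalized method of the paper; the frequencies and the additive test model are chosen only for the example:

```python
import numpy as np

def fast_first_order(model, freqs, n=1001, harmonics=4):
    """Classical FAST estimate of first-order sensitivity indices.

    Parameter i is driven along the search curve
    x_i(s) = 0.5 + arcsin(sin(freqs[i] * s)) / pi,  s in (-pi, pi),
    so its variance contribution appears at harmonics of freqs[i].
    """
    s = np.pi * (2.0 * np.arange(n) + 1.0 - n) / n            # uniform grid in (-pi, pi)
    x = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi   # shape (d, n), each in [0, 1]
    y = model(x)
    total_var = np.var(y)
    indices = []
    for w in freqs:
        d_i = 0.0
        for p in range(1, harmonics + 1):
            a = np.mean(y * np.cos(p * w * s))                # Fourier coefficients at p*w
            b = np.mean(y * np.sin(p * w * s))
            d_i += 2.0 * (a * a + b * b)                      # power at harmonic p*w
        indices.append(d_i / total_var)
    return np.array(indices)

# Additive test model y = 2*x1 + x2 with uniform inputs: S1 = 4/5, S2 = 1/5.
# Frequencies 11 and 21 are chosen so their first four harmonics do not alias.
S = fast_first_order(lambda x: 2.0 * x[0] + x[1], freqs=[11, 21])
```

The integer-frequency aliasing and parameter-independence limitations that this sketch inherits are exactly the two limitations the paper's general FAST removes.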
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, reflecting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.
Optimization of PIXE-sensitivity for detection of Ti in thin human skin sections
Pallon, Jan; Garmer, Mats; Auzelyte, Vaida; Elfman, Mikael; Kristiansson, Per; Malmqvist, Klas; Nilsson, Christer; Shariff, Asad; Wegdén, Marie
2005-04-01
Modern sunscreens contain particles like TiO2, having sizes of 25-70 nm and acting as a reflecting substance. For cosmetic reasons the particle size is minimized. Questions have been raised as to what degree these nanoparticles penetrate the skin barrier and how they affect humans. The EU-funded project "Quality of skin as a barrier to ultra-fine particles" (NANODERM) was started with the purpose of evaluating the possible risks of TiO2 penetration into vital skin layers. The purpose of the work presented here was to find the optimal conditions for micro-PIXE analysis of Ti in thin skin sections. In the skin region where Ti is expected to be found, the naturally occurring major elements phosphorus, chlorine, sulphur and potassium have steep gradients and thus influence the X-ray background in a non-predictable manner. Based on experimental studies of Ti-exposed human skin sections using proton energies ranging from 1.8 to 2.55 MeV, the corresponding PIXE detection limits for Ti were calculated. The energy found to be the most favourable, 1.9 MeV, was then selected for future studies.
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
Energy Technology Data Exchange (ETDEWEB)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)
2016-10-15
Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. The criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but above an interval of 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single trend could be established that holds in common for all cases. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
International Nuclear Information System (INIS)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae
2016-01-01
Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different shapes of nuclear fuel. The criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. In the sensitivity analysis for simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but above an interval of 8 mm the trend reverses and the criticality decreases as the interval grows. As a result, no single trend could be established that holds in common for all cases. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on large-CPU-time computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
Analytical analysis of sensitivity of optical waveguide sensor
African Journals Online (AJOL)
In this article, we carried out an analytical analysis of the sensitivity and mode field of an optical waveguide structure by use of the effective index method. These structures, as predicted, have extended .....
Stochastic sensitivity analysis using HDMR and score function
Indian Academy of Sciences (India)
Section 4 presents a brief overview of HDMR and its applicability to reliability analysis. Section 5 presents approximation of the original ...... above mentioned one or two failure criteria satisfies. For evaluating the failure probability ..... be applied to solve any multi-physics problems. Some of the work, in the field of stochastic.
Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory
International Nuclear Information System (INIS)
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2005-01-01
This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The 'partial' part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
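A PRCC computation along the lines described — rank-transform all variables, remove the linear (rank) effect of the other inputs, then correlate the residuals — might be sketched as below. This is a generic numpy illustration, not the IMM team's code; the test model is invented for the example:

```python
import numpy as np

def rank(a):
    """Simple 0..n-1 ranks along axis 0 (ties are unlikely for continuous samples)."""
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X with y."""
    Xr, yr = rank(X), rank(y)
    n, d = Xr.shape
    out = np.empty(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        # Residuals after removing the linear rank effect of the other inputs
        rx = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[i] = np.corrcoef(rx, ry)[0, 1]
    return out

# Monotone nonlinear model: y rises with x1, falls with x2, and ignores x3,
# so PRCC should be strongly positive, strongly negative, and near zero.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
y = np.exp(X[:, 0]) - 3.0 * X[:, 1] + rng.normal(scale=0.05, size=500)
coeffs = prcc(X, y)
```

Because only ranks enter the calculation, the coefficients stay meaningful for nonlinear but monotone input-output relationships, which is the property the abstract relies on.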
2012-01-01
OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSMs but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
Application of sensitivity analysis for optimized piping support design
International Nuclear Information System (INIS)
Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.
1993-01-01
The objective of this study was to see if recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
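First-order Sobol' indices of the kind used above are commonly estimated by "pick-freeze" Monte Carlo sampling. The sketch below is a generic estimator applied to the standard Ishigami test function, not the VarroaPop analysis; names and sample sizes are illustrative:

```python
import numpy as np

def sobol_first_order(model, d, n=32768, seed=0):
    """Pick-freeze Monte Carlo estimator of first-order Sobol' indices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # take x_i from B, all other inputs from A
        # Estimator of V_i = Var(E[y | x_i]), normalized by the total variance
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

def ishigami(u):
    """Ishigami test function, inputs mapped from [0, 1]^3 to [-pi, pi]^3."""
    x = np.pi * (2.0 * u - 1.0)
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

S = sobol_first_order(ishigami, d=3)
# Analytic values: S1 ~ 0.314, S2 ~ 0.442, S3 = 0 (x3 acts only via interaction),
# so the "missing" variance shows up in second- and higher-order terms.
```

The gap between the sum of first-order indices and 1 is what the second-order analysis in the abstract quantifies.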
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
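One widely used concrete form of missing-data sensitivity analysis is a delta-adjustment "tipping point" scan, in which imputed values for missing outcomes are shifted until the study conclusion changes. The abstract does not name this method; the sketch below is only an assumed illustrative example on simulated data, with a normal-approximation z-test rather than any regulatory-grade analysis:

```python
import numpy as np

def tipping_point(treat, control, n_missing, deltas):
    """Delta-adjustment sensitivity scan for missing treatment-arm outcomes.

    Missing treatment outcomes are imputed as the observed treatment mean
    shifted by delta; the treatment-effect z-statistic is reported per delta.
    The tipping point is the delta at which significance is lost.
    """
    zs = []
    for d in deltas:
        imputed = np.full(n_missing, np.mean(treat) + d)
        t_all = np.concatenate([treat, imputed])
        diff = np.mean(t_all) - np.mean(control)
        se = np.sqrt(np.var(t_all, ddof=1) / len(t_all)
                     + np.var(control, ddof=1) / len(control))
        zs.append(diff / se)
    return np.array(zs)

# Simulated trial: true effect 0.8, 20 of 100 treatment outcomes missing.
rng = np.random.default_rng(7)
control = rng.normal(0.0, 1.0, 100)
treat = rng.normal(0.8, 1.0, 80)
z = tipping_point(treat, control, n_missing=20, deltas=[0.0, -0.5, -1.0, -2.0])
```

The scan makes the decision-relevance discussed above explicit: a regulator can ask how pessimistic an assumption about the missing outcomes must be before the conclusion tips.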
Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities
Directory of Open Access Journals (Sweden)
Thi Thanh Huyen Nguyen
2015-11-01
Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher-education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, using the replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
Probabilistic and sensitivity analysis of Botlek Bridge structures
Directory of Open Access Journals (Sweden)
Králik Juraj
2017-01-01
Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, weighing 4000 tons each, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.
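Latin Hypercube Sampling (LHS), the simulation method mentioned above, stratifies each input dimension into equal-probability bins so that every bin is sampled exactly once. A minimal numpy sketch of the construction (illustrative only, not the authors' implementation):

```python
import numpy as np

def latin_hypercube(n, d, seed=42):
    """Latin Hypercube Sample of n points in [0, 1)^d: one point per stratum."""
    rng = np.random.default_rng(seed)
    # One jittered point inside each of the n equal-probability strata...
    u = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    # ...then shuffle each column independently to decouple the dimensions.
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

pts = latin_hypercube(100, 2)
```

Compared with plain Monte Carlo, this guarantees uniform marginal coverage with far fewer model evaluations, which is why it is favoured for expensive structural simulations.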
Stable locality sensitive discriminant analysis for image recognition.
Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang
2014-06-01
Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in instability of the intra-class geometrical structure representation and insufficient performance of the algorithm. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it into the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...... performance of the process with respect to the L/G ratio of the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made....
Efficient sensitivity analysis method for chaotic dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
2016-05-15
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Seismic analysis of steam generator and parameter sensitivity studies
International Nuclear Information System (INIS)
Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun
2013-01-01
Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG is created herein with the moisture separator assembly and the tube bundle assembly. The seismic analysis is performed together with the RCS piping and the Reactor Pressure Vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, parameter sensitivities of the seismic analysis results are studied, such as the effect of the other SG, supports, anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, which should be focused on in future new-type NPP SG research and design. (authors)
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
International Nuclear Information System (INIS)
Favorite, Jeffrey A.; Perkó, Zoltán; Kiedrowski, Brian C.; Perfetti, Christopher M.
2017-01-01
The evaluation of uncertainties is essential for criticality safety. Our paper deals with material density and composition uncertainties and provides guidance on how traditional first-order sensitivity methods can be used to predict their effects. Unlike problems that deal with traditional cross-section uncertainty analysis, material density and composition-related problems are often characterized by constraints that do not allow arbitrary and independent variations of the input parameters. Their proper handling requires constrained sensitivities that take into account the interdependence of the inputs. This paper discusses how traditional unconstrained isotopic density sensitivities can be calculated using the adjoint sensitivity capabilities of the popular Monte Carlo codes MCNP6 and SCALE 6.2, and we also present the equations to be used when forward and adjoint flux distributions are available. Subsequently, we show how the constrained sensitivities can be computed using the unconstrained (adjoint-based) sensitivities as well as by applying central differences directly. We present three distinct procedures for enforcing the constraint on the input variables, each leading to different constrained sensitivities. As a guide, the sensitivity and uncertainty formulas for several frequently encountered specific cases involving densities and compositions are given. One analytic k∞ example highlights the relationship between constrained sensitivity formulas and central differences, and a more realistic numerical problem reveals similarities among the computer codes used and differences among the three methods of enforcing the constraint.
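The central-difference route to constrained sensitivities described in this abstract can be illustrated on a toy composition-constrained response. The renormalization below is just one possible way of enforcing the sum-to-one constraint, in the spirit of (but not identical to) the paper's three procedures; the function names and the two-component mixture are hypothetical:

```python
import numpy as np

def unconstrained_sens(f, w, i, h=1e-6):
    """Central-difference sensitivity df/dw_i with the other entries held fixed."""
    wp, wm = w.copy(), w.copy()
    wp[i] += h
    wm[i] -= h
    return (f(wp) - f(wm)) / (2.0 * h)

def constrained_sens(f, w, i, h=1e-6):
    """Central difference with renormalization, so the composition always
    sums to one: the other fractions shrink as w_i grows."""
    wp, wm = w.copy(), w.copy()
    wp[i] += h
    wm[i] -= h
    return (f(wp / wp.sum()) - f(wm / wm.sum())) / (2.0 * h)

# Toy linear response of a hypothetical two-component mixture
f = lambda w: 2.0 * w[0] + 1.0 * w[1]
w = np.array([0.4, 0.6])
```

For this toy response the unconstrained sensitivity with respect to w[0] is 2.0, while the constrained one is only 0.6, because raising w[0] under the constraint simultaneously lowers w[1].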
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1991-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques in existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques in already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Energy Technology Data Exchange (ETDEWEB)
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
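Step (3), the quantitative sensitivity analysis, is commonly done with variance-based (Sobol) indices. A minimal sketch of the first-order indices via a pick-freeze estimator (a standard Saltelli-type estimator, not necessarily the one implemented in PSUADE; the additive test model is hypothetical):

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """First-order Sobol indices via the pick-freeze estimator:
    S_i = E[f(A) * (f(AB_i) - f(B))] / Var(f(A)), where AB_i equals the
    sample matrix B except that column i is copied from A."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]
        S[i] = np.mean(fA * (f(ABi) - fB)) / fA.var()
    return S

# Additive toy model Y = X0 + 2*X1 on U(0,1)^2; analytic indices are (0.2, 0.8)
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

Parameters with small indices can then be frozen, which is the screening-then-quantify workflow the report describes.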
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
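The core of this natural-language approach is parsing "value +/- tolerance" fields in otherwise unmodified input files and replacing each with a random draw. A minimal sketch (the regular expression and the uniform sampling distribution are assumptions; the paper does not specify either):

```python
import re
import random

# Matches fields such as "5.25 +/- 0.01" anywhere in an input file
TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def blur(text, rng=None):
    """Replace every 'value +/- tol' field with one draw from
    [value - tol, value + tol], leaving the rest of the file untouched."""
    rng = rng or random.Random(0)
    def draw(match):
        value, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(value - tol, value + tol))
    return TOL.sub(draw, text)

line = "wall_temperature = 5.25 +/- 0.01"
perturbed = blur(line)
```

Because the substitution is purely textual, the same routine works for LAURA, HARA, or FIAT input decks without knowing their formats, which is the point of the approach.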
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
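Sensitivity of a basic reproduction number is typically reported as the normalized forward sensitivity index, (∂R0/∂p)(p/R0). A minimal sketch with a hypothetical SIR-type R0, not the five-compartment Lassa model of the paper:

```python
def normalized_sensitivity(R0, params, name, h=1e-6):
    """Normalized forward sensitivity index (dR0/dp) * (p / R0), with the
    derivative estimated by a central difference."""
    up, down = dict(params), dict(params)
    up[name] += h
    down[name] -= h
    dR0 = (R0(**up) - R0(**down)) / (2.0 * h)
    return dR0 * params[name] / R0(**params)

# Hypothetical SIR-type expression: transmission over (recovery + death)
R0 = lambda beta, gamma, mu: beta / (gamma + mu)
pars = {"beta": 0.4, "gamma": 0.1, "mu": 0.02}
beta_index = normalized_sensitivity(R0, pars, "beta")   # exactly 1 here
```

An index of 1 for beta means a 10% rise in transmission raises R0 by 10%; the negative index for gamma identifies recovery (treatment) as a control lever, mirroring the paper's conclusions.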
Sensitization trajectories in childhood revealed by using a cluster analysis
DEFF Research Database (Denmark)
Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik
2017-01-01
BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more...... biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen...... Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent...
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed
Time-dependent reliability sensitivity analysis of motion mechanisms
International Nuclear Information System (INIS)
Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng
2016-01-01
Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure and quantifying the effects of each random source or their distribution parameters on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with a first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
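A parametric reliability sensitivity index of the kind defined in this abstract, the partial derivative of a failure probability with respect to a distribution parameter, can be sketched for a one-dimensional Gaussian motion error (a toy stand-in, not the paper's envelope-function method):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def failure_prob(mu, sigma, threshold):
    """P(X > threshold) for X ~ N(mu, sigma): a toy motion error
    exceeding its allowed limit at a frozen time instant."""
    return 1.0 - Phi((threshold - mu) / sigma)

def prs_wrt_mu(mu, sigma, threshold, h=1e-6):
    """Parametric reliability sensitivity dPf/dmu by central difference;
    analytically this equals phi((threshold - mu) / sigma) / sigma."""
    return (failure_prob(mu + h, sigma, threshold)
            - failure_prob(mu - h, sigma, threshold)) / (2.0 * h)
```

The time-dependent indices of the paper extend this idea over a whole motion interval via the envelope of the error process.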
Sensitivity analysis of a finite element model of orthogonal cutting
Brocail, J.; Watremez, M.; Dubar, L.
2011-01-01
This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with the Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. This numerical model of orthogonal cutting is validated by comparing these process variables to experimental and numerical results obtained by Filice et al. [1]. The model can be considered reliable enough for qualitative analysis of the entry parameters related to the cutting process and friction models. A sensitivity analysis is conducted on the main entry parameters (coefficients of the Johnson-Cook law, and contact parameters) with the finite element model, using two levels for each factor. This sensitivity analysis of the entry parameters has allowed the identification of the significant parameters and of their margins of influence.
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
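To first order, the model-response variance with correlated inputs follows the delta method, Var(y) ≈ gᵀCg, where g is the gradient at the input means and C the input covariance matrix; the off-diagonal terms of C vanish only for independent inputs. A minimal sketch (the gradient and covariances below are hypothetical, not from the paper):

```python
import numpy as np

def first_order_variance(grad, cov):
    """Delta-method variance Var(y) ~= g^T C g; the off-diagonal entries
    of C carry exactly the contribution of the input correlations."""
    g = np.asarray(grad, dtype=float)
    return float(g @ np.asarray(cov, dtype=float) @ g)

grad_f = [2.0, -1.0]                        # hypothetical model gradient
C_indep = [[0.04, 0.00], [0.00, 0.09]]      # uncorrelated inputs
C_corr = [[0.04, 0.03], [0.03, 0.09]]       # positively correlated inputs
v_indep = first_order_variance(grad_f, C_indep)
v_corr = first_order_variance(grad_f, C_corr)
```

Here the positive input correlation reduces the output variance (from 0.25 to 0.13) because the two gradient components have opposite signs, which is precisely the kind of effect that is missed when independence is assumed.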
Event history analysis and the cross-section
DEFF Research Database (Denmark)
Keiding, Niels
2006-01-01
Examples are given of problems in event history analysis, where several time origins (generating calendar time, age, disease duration, time on study, etc.) are considered simultaneously. The focus is on complex sampling patterns generated around a cross-section. A basic tool is the Lexis diagram....
Sensitivity and specificity of coherence and phase synchronization analysis
International Nuclear Information System (INIS)
Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens
2006-01-01
In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems.
Sensitivity analysis of railpad parameters on vertical railway track dynamics
Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.
2016-01-01
This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad
Sensitivity analysis on parameters and processes affecting vapor intrusion risk
Picone, S.; Valstar, J.R.; Gaans, van P.; Grotenhuis, J.T.C.; Rijnaarts, H.H.M.
2012-01-01
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the
General algorithm and sensitivity analysis for variational inequalities
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
1992-01-01
Full Text Available The fixed point technique is used to prove the existence of a solution for a class of variational inequalities related to odd order boundary value problems, and to suggest a general algorithm. We also study the sensitivity analysis for these variational inequalities and complementarity problems using the projection technique. Several special cases are discussed, which can be obtained from our results.
Stochastic sensitivity analysis using HDMR and score function
Indian Academy of Sciences (India)
... in reliability analysis and often crucial towards understanding the physical behaviour underlying failure and modifying the design to mitigate and manage risk. This article presents a new computational approach for calculating stochastic sensitivities of mechanical systems with respect to distribution parameters of random ...
Sensitivity analysis on ultimate strength of aluminium stiffened panels
DEFF Research Database (Denmark)
Rigo, P.; Sarghiuta, R.; Estefen, S.
2003-01-01
This paper presents the results of an extensive sensitivity analysis carried out by the Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a deterministic system of first order ordinary differential equations. In this paper, we considered a sensitivity analysis of studying the qualitative ...
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind
2007-01-01
satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...
Sensitivity Analysis Applied in Design of Low Energy Office Building
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik
2008-01-01
satisfies the design requirements and objectives. In the design of sustainable Buildings it is beneficial to identify the most important design parameters in order to develop more efficiently alternative design solutions or reach optimized design solutions. A sensitivity analysis makes it possible...
Sensitivity analysis for contagion effects in social networks
VanderWeele, Tyler J.
2014-01-01
Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...
Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...
African Journals Online (AJOL)
This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...
Sequence length variation, indel costs, and congruence in sensitivity analysis
DEFF Research Database (Denmark)
Aagesen, Lone; Petersen, Gitte; Seberg, Ole
2005-01-01
The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which...
Beyond the GUM: variance-based sensitivity analysis in metrology
International Nuclear Information System (INIS)
Lira, I
2016-01-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
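For a linear measurement model with independent inputs, the point made in this abstract can be seen directly: the fractional terms of the GUM law of propagation of uncertainty coincide with the first-order Sobol indices. A minimal sketch (the coefficients and standard uncertainties are hypothetical):

```python
import numpy as np

def lpu_fractions(c, u):
    """Fractional variance contributions (c_i * u_i)^2 / sum(...) from the
    GUM law of propagation of uncertainty for y = sum(c_i * x_i) with
    independent inputs; for such linear models these equal the
    first-order Sobol indices, so variance-based sensitivity analysis
    adds no new information."""
    terms = (np.asarray(c, dtype=float) * np.asarray(u, dtype=float)) ** 2
    return terms / terms.sum()

# Hypothetical sensitivity coefficients c_i and standard uncertainties u(x_i)
frac = lpu_fractions([1.0, 2.0], [0.3, 0.1])
```

Only when the model is nonlinear around the best estimates do the Sobol indices depart from these fractions, which is when the article finds sensitivity analysis worthwhile.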
Sensitivity analysis of the Ohio phosphorus risk index
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Analytical analysis of sensitivity of optical waveguide sensor | Verma ...
African Journals Online (AJOL)
In this article, we carry out an analytical analysis of the sensitivity and mode field of an optical waveguide structure by use of the effective index method. These structures, as predicted, have an extended mode which could interact with the surrounding analyte in a much better way than the commonly used EWS.
Lower extremity angle measurement with accelerometers - error and sensitivity analysis
Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.
1991-01-01
The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found
Weighting-Based Sensitivity Analysis in Causal Mediation Studies
Hong, Guanglei; Qin, Xu; Yang, Fan
2018-01-01
Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…
Design tradeoff studies and sensitivity analysis. Appendix B
Energy Technology Data Exchange (ETDEWEB)
1979-05-25
The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)
Sensitivity analysis and its application for dynamic improvement
Indian Academy of Sciences (India)
Keywords. Sensitivity analysis; dynamic improvement; structural modification; laser beam printer; motorbike; disc drive; mechatronics; automobile engine. Abstract. In order to determine appropriate points where natural frequency or mode shape under consideration can be effectively modified by structural modification, the ...
Energy Technology Data Exchange (ETDEWEB)
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA
2017-05-01
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that the boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
Material and morphology parameter sensitivity analysis in particulate composite materials
Zhang, Xiaoyu; Oskay, Caglar
2017-12-01
This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
Sensitivity analysis and power for instrumental variable studies.
Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S
2018-03-31
In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and to affect the outcome only through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are each bounded in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
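The Anderson-Rubin idea underlying the approach above can be illustrated on simulated data: under the hypothesis that the treatment effect equals beta0, the residual Y - beta0*D should be uncorrelated with a valid instrument. The data-generating values and the simple moment check below are assumptions for illustration, not the paper's estimator, sensitivity bounds, or power formula.

```python
import random
import statistics

# Simulated data with one instrument Z, unmeasured confounder U,
# treatment D, outcome Y; the true effect is beta = 2.0 (an assumption).
rng = random.Random(0)
n = 2000
Z = [rng.gauss(0, 1) for _ in range(n)]
U = [rng.gauss(0, 1) for _ in range(n)]
D = [0.8 * z + u + rng.gauss(0, 1) for z, u in zip(Z, U)]
Y = [2.0 * d + u + rng.gauss(0, 1) for d, u in zip(D, U)]

def ar_statistic(beta0):
    """Anderson-Rubin idea: under H0 (effect = beta0), Y - beta0*D is
    uncorrelated with Z; return |slope| of that residual on Z."""
    R = [y - beta0 * d for y, d in zip(Y, D)]
    mz, mr = statistics.mean(Z), statistics.mean(R)
    cov = sum((z - mz) * (r - mr) for z, r in zip(Z, R)) / (n - 1)
    return abs(cov / statistics.variance(Z))

# The statistic is near zero at the true effect and grows away from it,
# which is what an AR confidence set inverts.
```

Inverting this check over a grid of beta0 values (and inflating the null to allow a bounded violation of IV validity) is the shape of the sensitivity analysis the abstract describes.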
SENSITIVITY ANALYSIS OF BUILDING STRUCTURES WITHIN THE SCOPE OF ENERGY, ENVIRONMENT AND INVESTMENT
Directory of Open Access Journals (Sweden)
František Kulhánek
2015-10-01
The primary objective of this paper is to prove the feasibility of sensitivity analysis with the dominant weight method for structural parts of the building envelope, covering energy, ecological and financial assessments, and to select among different designs for the same structural part via multi-criteria assessment, illustrated with theoretical example designs. Multi-criteria assessment (MCA) of different structural designs, in other words alternatives, aims to find the best available alternative. The sensitivity analysis technique applied in this paper is based on the dominant weighting method. To choose the best thermal insulation design when more than one criterion applies simultaneously, the criteria of total thickness (T), heat transfer coefficient (U) through the cross section, global warming potential (GWP), acidification potential (AP), non-renewable primary energy content (PEI) and cost per m2 (C) are investigated for all designs via sensitivity analysis. Three different designs for the external wall (over soil), all compliant with globally suggested energy features for passive house design, are investigated through the six criteria mentioned. By creating a given set of scenarios depending upon the importance of each criterion, the sensitivity analysis is carried out. In conclusion, uncertainty in the model output is attributed to different sources in the model input, and in this manner the best available design is determined. The rankings before and after the sensitivity analysis are visualized, which makes it easy to choose the optimum design with respect to the examined components.
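A minimal sketch of weight sensitivity in a weighted-sum multi-criteria assessment: score each design under a baseline weighting, then make each criterion dominant in turn and watch whether the winner flips. The three designs and their normalized benefit scores below are invented for illustration, not the paper's wall designs or data.

```python
# Hypothetical benefit scores in [0, 1] for three wall designs over the six
# criteria (T, U, GWP, AP, PEI, C); higher is better. Values are made up.
designs = {
    "A": [0.8, 0.6, 0.7, 0.5, 0.6, 0.9],
    "B": [0.6, 0.9, 0.5, 0.8, 0.7, 0.4],
    "C": [0.7, 0.7, 0.8, 0.6, 0.5, 0.6],
}

def rank(weights):
    """Weighted-sum MCA: return the winning design under the given weights."""
    scores = {d: sum(w * s for w, s in zip(weights, vals))
              for d, vals in designs.items()}
    return max(scores, key=scores.get)

best = rank([1 / 6.0] * 6)  # baseline: all six criteria weighted equally

# Sensitivity scenarios: make each criterion dominant (weight 0.5, rest 0.1)
# and record which design wins under that scenario.
flips = {}
for i in range(6):
    w = [0.1] * 6
    w[i] = 0.5
    flips[i] = rank(w)
```

With these invented scores design A wins under equal weights, but a scenario that emphasizes the heat transfer coefficient hands the win to design B, which is exactly the kind of ranking instability the dominant-weight scenarios are meant to expose.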
Sensitivity analysis of LOFT L2-5 test calculations
International Nuclear Information System (INIS)
Prosek, Andrej
2014-01-01
The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for the sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates the large break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used to assess the influence of the input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in the FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data points of one period of the signal) in calculating the average amplitude. It is very important to eliminate this unphysical contribution to the average amplitude, which is used as a figure of merit for the influence of input parameters on output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of the parameter variation to the results. They show when the input parameters are influential and how large this influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
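The FFTBM figure of merit with signal mirroring can be sketched as follows: symmetrize both signals to remove the edge jump, then compare the spectral magnitude of the difference signal against that of the reference. The naive DFT and the test signals are illustrative simplifications, not the BEMUSE implementation or its normalization details.

```python
import cmath
import math

def dft_mag(x):
    """Magnitudes of a naive discrete Fourier transform of x."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def mirror(x):
    """Symmetrize the signal (append its reverse) to remove the edge effect,
    i.e. the jump between the first and last points of one signal period."""
    return x + x[::-1]

def average_amplitude(reference, sensitivity_run):
    """FFTBM-SM figure of merit: spectral magnitude of the difference signal
    normalized by that of the reference. Here the reference calculation plays
    the role of the 'experimental signal'."""
    err = mirror([s - r for r, s in zip(reference, sensitivity_run)])
    ref = mirror(reference)
    return sum(dft_mag(err)) / sum(dft_mag(ref))

# Illustrative signals: a larger parameter perturbation (larger deviation
# from the reference) yields a larger average amplitude.
ref = [math.sin(0.3 * t) + 2.0 for t in range(32)]
aa_small = average_amplitude(ref, [v + 0.05 for v in ref])
aa_large = average_amplitude(ref, [v + 0.20 for v in ref])
```

The ordering aa_large > aa_small is the property the abstract uses: the larger the average amplitude, the more influential the varied input parameter.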
International Nuclear Information System (INIS)
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-parameter capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
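Latin Hypercube Sampling, the statistical approach mentioned above, stratifies each parameter's range into equal-probability bins and draws exactly one point per bin, with the bins randomly paired across dimensions. The eight-parameter setting echoes the study's model, but the code is a generic sketch, not the code used in the report.

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return n_samples points in [0, 1)^n_params with one stratified
    sample per equal-probability bin in every dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        perm = list(range(n_samples))
        rng.shuffle(perm)                      # random bin order per dimension
        columns.append([(p + rng.random()) / n_samples for p in perm])
    # transpose: one row per sample point
    return [list(row) for row in zip(*columns)]

samples = latin_hypercube(n_samples=10, n_params=8)  # eight parameters, as in the study
```

Each of the eight columns covers all ten bins exactly once, which is what gives LHS better space coverage than plain random sampling at the same cost.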
On the infrared sensitivity of the longitudinal cross section in e+e- annihilation
International Nuclear Information System (INIS)
Beneke, M.
1996-09-01
The authors have calculated the contributions proportional to β_0^n α_s^(n+1) to the longitudinal fragmentation function in e+e- annihilation to all orders of perturbation theory. They use this result to estimate higher-order perturbative corrections and nonperturbative corrections to the longitudinal cross section σ_L and discuss the prospects of determining α_s from σ_L. The structure of infrared renormalons in the perturbative expansion suggests that the longitudinal cross section for hadron production with fixed momentum fraction x receives nonperturbative contributions of order 1/(x^2 Q^2), whereas the total cross section has a larger, 1/Q correction. This correction arises from very large longitudinal distances and is related to the behavior of the Borel integral for the cross section with fixed x at large values of the Borel parameter.
Sensitivity analysis of critical experiments with evaluated nuclear data libraries
International Nuclear Information System (INIS)
Fujiwara, D.; Kosaka, S.
2008-01-01
Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k_eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k_eff discrepancies between libraries were decomposed to identify the sources of the differences in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the applicability of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
A Rasch analysis of nurses' ethical sensitivity to the norms of the code of conduct.
González-de Paz, Luis; Kostov, Belchin; Sisó-Almirall, Antoni; Zabalegui-Yárnoz, Adela
2012-10-01
To develop an instrument to measure nurses' ethical sensitivity and, secondarily, to use this instrument to compare nurses' ethical sensitivity between groups. Professional codes of conduct are widely accepted guidelines. However, their efficacy in daily nursing practice and influence on ethical sensitivity is controversial. A descriptive cross-sectional study was conducted. One hundred and forty-three registered nurses from Barcelona (Spain) participated in the study, of whom 45.83% were working in primary health care and 53.84% in hospital wards. A specifically designed confidential, self-administered questionnaire assessing ethical sensitivity was developed. Knowledge of the nursing code and data on ethical sensitivity were summarised, with the quality of the questionnaire assessed using Rasch analysis. The item on knowledge of the code showed that one-third of nurses knew the contents of the code and two-thirds had limited knowledge. To fit the Rasch model, it was necessary to rescore the rating scale from five to three categories. Residual principal components analysis confirmed the unidimensionality of the scale. Three items of the questionnaire presented fit problems with the Rasch model. Although nurses generally have high ethical sensitivity to their code of conduct, differences were found according to years of professional practice, place of work and knowledge of the code (p < 0.05). Overall, ethical sensitivity to the code was high. However, many factors might influence the degree of ethical sensitivity. Further research to measure ethical sensitivity using invariant measures such as Rasch units would be valuable. Other factors, such as assertiveness or courage, should be considered to improve ethical sensitivity to the code of conduct. Rigorous measurement studies and analysis in applied ethics are needed to assess ethical performance in practice. © 2012 Blackwell Publishing Ltd.
Development of radar cross section analysis system of naval ships
Kim, Kookhyun; Kim, Jin-Hyeong; Choi, Tae-Muk; Cho, Dae-Seung
2012-03-01
A software system for complex object scattering analysis, named SYSCOS, has been developed for systematic radar cross section (RCS) analysis and reduction design. The system is based on the high frequency analysis methods of physical optics, geometrical optics, and the physical theory of diffraction, which are suitable for RCS analysis of electromagnetically large and complex targets such as naval ships. In addition, a direct scattering center analysis function has been included, which gives a relatively simple and intuitive way to discriminate problem areas in the design stage compared with conventional image-based approaches. In this paper, the theoretical background and the organization of the SYSCOS system are presented. To verify its accuracy and to demonstrate its applicability, numerical analyses for a square plate, a sphere and a cylinder, a weapon system and a virtual naval ship have been carried out, and the results have been compared with analytic solutions and those obtained by other existing software.
Development of radar cross section analysis system of naval ships
Directory of Open Access Journals (Sweden)
Kookhyun Kim
2012-03-01
A software system for complex object scattering analysis, named SYSCOS, has been developed for systematic radar cross section (RCS) analysis and reduction design. The system is based on the high frequency analysis methods of physical optics, geometrical optics, and the physical theory of diffraction, which are suitable for RCS analysis of electromagnetically large and complex targets such as naval ships. In addition, a direct scattering center analysis function has been included, which gives a relatively simple and intuitive way to discriminate problem areas in the design stage compared with conventional image-based approaches. In this paper, the theoretical background and the organization of the SYSCOS system are presented. To verify its accuracy and to demonstrate its applicability, numerical analyses for a square plate, a sphere and a cylinder, a weapon system and a virtual naval ship have been carried out, and the results have been compared with analytic solutions and those obtained by other existing software.
Understanding dynamics using sensitivity analysis: caveat and solution
2011-01-01
Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
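The distinction the paper draws between persistent (PSA) and impulse (iPSA) perturbations can be illustrated on a toy linear system: apply the same brief parameter impulse at different times and compare its effect on the final state. The model, parameter values, and impulse sizes below are assumptions for illustration, not the paper's switch-activation examples.

```python
def simulate(a, b, T=10.0, dt=0.01, impulse_at=None, delta=0.0):
    """Euler integration of dx/dt = a - b*x, x(0) = 0. Optionally adds an
    impulse perturbation to parameter a: size delta, lasting one time step,
    applied at time impulse_at (the iPSA-style perturbation)."""
    x = 0.0
    n = int(T / dt)
    for i in range(n):
        t = i * dt
        a_eff = a
        if impulse_at is not None and abs(t - impulse_at) < dt / 2:
            a_eff += delta / dt  # impulse: integrates to delta over one step
        x += (a_eff - b * x) * dt
    return x

base = simulate(1.0, 0.5)

# iPSA-style question: WHEN does a perturbation of a matter for x(T)?
# An early impulse has mostly decayed away by t = T; a late one has not.
s_early = simulate(1.0, 0.5, impulse_at=1.0, delta=0.01) - base
s_late = simulate(1.0, 0.5, impulse_at=9.0, delta=0.01) - base
```

Here the late impulse changes x(T) far more than the early one, the kind of timing information that a persistent-perturbation PSA coefficient integrates away.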
Sensitivity analysis for improving nanomechanical photonic transducers biosensors
International Nuclear Information System (INIS)
Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C
2015-01-01
The achievement of highly sensitive and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependence on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness in the range between 150 nm and 400 nm would both increase the microcantilever bending during the biorecognition process and raise the optical sensitivity to 4.8 × 10^-2 nm^-1, an order of magnitude higher than that of other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal. (paper)
Sensitivity of reactor multiplication factor to positions of cross-section ...
Indian Academy of Sciences (India)
V GOPALAKRISHNAN
2017-08-16
However, this is not likely to affect the predicted integral parameters badly, because the average cross-section over a region of energies will not be affected significantly. It should further be remembered that the RRR of the fuel is below the region where the flux peaks in a fast reactor (FR), and well above ...
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Jensen, Rasmus Lund; Maagaard, Steffen; Østergård, Torben
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in models inputs. This reveals which...... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention to the most important design parameters when exploring...... a multivariate design space. As case study, we consider building performance simulations of a 15.000 m² educational centre with respect to energy demand, thermal comfort, and daylight....
Sensitivity analysis techniques for models of human behavior.
Energy Technology Data Exchange (ETDEWEB)
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models
Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko
2015-01-01
Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
Sensitivity analysis of project appraisal variables. Volume I. Key variables
Energy Technology Data Exchange (ETDEWEB)
1979-07-01
The Division of Fossil Fuel Utilization within the US Department of Energy (DOE) uses a project appraisal methodology for the annual assessment of its research and development projects. Exercise of the methodology provides input to the budget preparation and planning process. Consequently, it is essential that all appraisal inputs and outputs are as accurate and credible as possible. The purpose of this task is to examine the accuracy and credibility of the 1979 appraisal results by conducting a sensitivity analysis of several appraisal inputs. This analysis is designed to: examine the sensitivity of the results to adjustments in the values of selected parameters; explain the differences between computed ranks and professional judgment ranks; and revise the final results of the 1979 project appraisal and provide the first inputs to the refinement of the appraisal methodology for future applications.
Global sensitivity analysis of multiscale properties of porous materials
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
The Methods of Sensitivity Analysis and Their Usage for Analysis of Multicriteria Decision
Directory of Open Access Journals (Sweden)
Rūta Simanavičienė
2011-08-01
In this paper we describe the fields of application of sensitivity analysis methods. We review the use of these methods in multiple criteria decision making when the initial data are numbers, and we formulate the problem of which sensitivity analysis method is most effective for use in the decision-making process. (Article in Lithuanian)
Noise analysis of a low noise charge sensitive preamplifier
International Nuclear Information System (INIS)
Chen Bo; Liu Songqiu; Xue Zhihua; Zhao Jie
2008-01-01
On the basis of the traditional noise model, this paper presents a quantitative noise analysis of a self-made charge sensitive preamplifier and compares the result with PSpice simulation and practical measurements. It also derives practical formulas for the output noise spectrum, the equivalent noise charge (ENC), and its slope, facilitating the design and improvement of preamplifiers. (authors)
Influence analysis to assess sensitivity of the dropout process
Molenberghs, Geert; Verbeke, Geert; Thijs, Herbert; Lesaffre, Emmanuel; Kenward, Michael
2001-01-01
Diggle and Kenward (Appl. Statist. 43 (1994) 49) proposed a selection model for continuous longitudinal data subject to possible non-random dropout. It has provoked a large debate about the role of such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions upon which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. One of their examples is a set of da...
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik
2009-01-01
Building performance can be expressed by different indicators such as primary energy use, environmental load and/or the indoor environmental quality and a building performance simulation can provide the decision maker with a quantitative measure of the extent to which an integrated design solutio...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....
Applications of the TSUNAMI sensitivity and uncertainty analysis methodology
International Nuclear Information System (INIS)
Rearden, Bradley T.; Hopper, Calvin M.; Elam, Karla R.; Goluoglu, Sedat; Parks, Cecil V.
2003-01-01
The TSUNAMI sensitivity and uncertainty analysis tools under development for the SCALE code system have recently been applied in four criticality safety studies. TSUNAMI is used to identify applicable benchmark experiments for criticality code validation, assist in the design of new critical experiments for a particular need, reevaluate previously computed computational biases, and assess the validation coverage and propose a penalty for noncoverage for a specific application. (author)
Probabilistic Safety Analysis Level 2 for units 5 and 6 of the Kozloduy NPP - sensitivity analysis
International Nuclear Information System (INIS)
Mancheva, K.; Velev, V.
2006-01-01
This paper covers the results of the sensitivity analysis performed under the Probabilistic Safety Analysis (PSA) level 2 for units 5 and 6 of the Kozloduy NPP. The analysis reflects the status of the units before completion of the modernization program; therefore, none of the measures accomplished under that program is accounted for in the investigation. The goal of the sensitivity analysis is to quantify the impact of selected characteristics of the severe accident on the Large Early Release Frequency (LERF). (authors)
Sensitivity Analysis of Launch Vehicle Debris Risk Model
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
Sensitivity analysis of urban flood flows to hydraulic controls
Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah
2017-04-01
Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas, and improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of flood flow spatiotemporal dynamics, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at 1/200 scale, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge) and spatially uniform or distributed friction coefficient and/or porosity, respectively, tested in various ranges centered on their nominal values, which were calibrated against accurate experimental data and their related uncertainties. For various experimental configurations, a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer to the downstream water-height boundary condition, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study, along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. Concerning simulations with distributed friction coefficients, crossroad friction is shown to have much higher influence on the upstream water depth profile than street
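The variance-decomposition (ANOVA) estimate of Sobol' indices used above can be sketched with a toy model; the binning estimator, the additive test function, and all parameter values are assumptions for illustration, not the study's hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_order_sobol(f, n=200_000, bins=50):
    """Estimate first-order Sobol' indices S_i = Var(E[Y|X_i]) / Var(Y)
    for a 2-input model on U(0,1)^2, via an ANOVA-style binning of each
    input and averaging the output within bins."""
    x = rng.uniform(0.0, 1.0, size=(n, 2))
    y = f(x[:, 0], x[:, 1])
    var_y = y.var()
    mean_y = y.mean()
    edges = np.linspace(0.0, 1.0, bins + 1)
    indices = []
    for i in range(2):
        which = np.clip(np.digitize(x[:, i], edges) - 1, 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        counts = np.bincount(which, minlength=bins)
        # Variance of the conditional expectation, weighted by bin occupancy
        var_cond = np.sum(counts * (cond_means - mean_y) ** 2) / n
        indices.append(var_cond / var_y)
    return indices

# Additive test model Y = X1 + 2*X2: analytically S1 = 0.2, S2 = 0.8
s1, s2 = first_order_sobol(lambda a, b: a + 2.0 * b)
```

For a purely additive model the two first-order indices sum to one; interactions would show up as a shortfall of the sum below one.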
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random (MNAR) data in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
B1-sensitivity analysis of quantitative magnetization transfer imaging.
Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce
2018-01-01
To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA and -40% to >100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis
Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.
2014-12-01
Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing level of complexity. We tested four sensitivity analysis methods, Regional Sensitivity Analysis, Method of Morris, a density-based (PAWN) and a variance-based (Sobol) method. The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistics.
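The bootstrapping step described above can be sketched in a few lines; the correlation-based index and the toy model are illustrative assumptions standing in for the paper's hydrological models and index estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(x, y, n_boot=500, alpha=0.05):
    """Percentile bootstrap confidence interval for a simple
    correlation-based sensitivity measure |corr(X, Y)|."""
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        stats.append(abs(np.corrcoef(x[idx], y[idx])[0, 1]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# The CI width shrinks as the base sample grows, flagging (non-)convergence:
widths = []
for n in (100, 1_000, 10_000):
    x = rng.uniform(size=n)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)   # toy "model output"
    lo, hi = bootstrap_ci(x, y)
    widths.append(hi - lo)
```

A wide interval at a given sample size signals that the sensitivity index has not yet converged and more model runs are needed.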
van Wilgen, Cornelis P; Vuijk, Pieter J; Kregel, Jeroen; Voogt, Lennard; Meeus, Mira; Descheemaeker, Filip; Keizer, Doeke; Nijs, Jo
2018-02-01
Central sensitization (CS) implies increased sensitivity of the nervous system, resulting in increased pain sensitivity as well as widespread pain. Recently, the Central Sensitization Inventory (CSI) was developed to assess symptoms of CS and central sensitivity syndromes. The aim of this study was to examine the convergent validity of the CSI by comparing its outcome to psychosocial factors and clinical features of CS. In a cross-sectional explorative study, patients with chronic pain completed multiple questionnaires, including the CSI, the Pain Catastrophizing Scale, and the Symptom Checklist 90, covering psychological distress, duration of pain, intensity of pain, widespread pain, and lateralization of pain. Based on bivariate correlations, relevant predictors of CS were selected and used to fit an exploratory structural equation model (SEM) of CS. In total, 114 patients with chronic pain were included, 56.1% being women. The average pain duration was 88 months. The mean total score on the CSI was 36.09 (15.26). The CSI was strongly related to known contributing and related factors of CS. SEM analysis showed that both psychological distress and widespread pain contributed significantly to the variance in symptoms of CS in patients with chronic pain. In this study, the convergent validity of the CSI was supported by a strong relationship between contributing factors and clinical features of CS. These findings of convergent validity, together with former studies of the CSI, support the use of the questionnaire in clinical practice. © 2017 World Institute of Pain.
Global sensitivity analysis of thermomechanical models in modelling of welding
International Nuclear Information System (INIS)
Petelet, M.
2008-01-01
The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of input data on the result given by the computer code. In that case, how can the credibility of the prediction be assessed? This thesis is a step toward implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Analysis of a transport fuselage section drop test
Fasanella, E. L.; Hayduk, R. J.; Robinson, M. P.; Widmayer, E.
1984-01-01
Transport fuselage section drop tests provided useful information about the crash behavior of metal aircraft in preparation for a full-scale Boeing 720 controlled impact demonstration (CID). The fuselage sections have also provided an operational test environment for the data acquisition system designed for the CID test, and data for analysis and correlation with the DYCAST nonlinear finite-element program. The correlation of the DYCAST section model predictions was quite good for the total fuselage crushing deflection (22 to 24 inches predicted versus 24 to 26 inches measured), floor deformation, and accelerations for the floor and fuselage. The DYCAST seat and occupant model was adequate to approximate dynamic loading to the floor, but a more sophisticated model would be required for good correlation with dummy accelerations. Although a full-section model using only finite elements for the subfloor was desirable, constraints of time and computer resources limited the finite-element subfloor model to a two-frame model. Results from the two-frame model indicate that DYCAST can provide excellent correlation with experimental crash behavior of fuselage structure with a minimum of empirical force-deflection data representing structure in the analytical model.
Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion
International Nuclear Information System (INIS)
Garcia-Cabrejo, Oscar; Valocchi, Albert
2014-01-01
Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied to each output variable separately, leading to a large number of sensitivity indices with a high degree of redundancy that makes the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: the output decomposition approach [9] and the covariance decomposition approach [14], but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE
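For a scalar output, the link between PCE coefficients and Sobol' indices that underlies the efficiency claimed above can be sketched as follows (the multivariate extension aggregates such terms); the Legendre basis, degree, sample size, and test model are assumptions for illustration:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def pce_sobol(f, degree=3, n=4000):
    """Fit a 2-D tensor-product Legendre PCE by least squares on U(-1,1)^2
    inputs, then read Sobol' indices off the coefficients: the output
    variance splits over the orthogonal basis terms."""
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = f(x[:, 0], x[:, 1])

    def leg(v, k):                      # k-th Legendre polynomial P_k(v)
        c = np.zeros(k + 1)
        c[k] = 1.0
        return legendre.legval(v, c)

    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1)]
    A = np.column_stack([leg(x[:, 0], i) * leg(x[:, 1], j) for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    norm = lambda k: 1.0 / (2 * k + 1)  # E[P_k(U)^2] for U ~ U(-1,1)
    var_terms = {t: coef[m] ** 2 * norm(t[0]) * norm(t[1])
                 for m, t in enumerate(terms) if t != (0, 0)}
    total = sum(var_terms.values())
    s1 = sum(v for (i, j), v in var_terms.items() if j == 0) / total
    s2 = sum(v for (i, j), v in var_terms.items() if i == 0) / total
    return s1, s2

# Additive model Y = X1 + 2*X2 on U(-1,1)^2: S1 = 0.2, S2 = 0.8 exactly
s1, s2 = pce_sobol(lambda a, b: a + 2.0 * b)
```

Because the basis is orthogonal, no extra model runs are needed once the expansion is fitted; the indices are post-processed from the coefficients.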
Sensitivity analysis: Interaction of DOE SNF and packaging materials
International Nuclear Information System (INIS)
Anderson, P.A.; Kirkham, R.J.; Shaber, E.L.
1999-01-01
A sensitivity analysis was conducted to evaluate the technical issues pertaining to possible destructive interactions between spent nuclear fuels (SNFs) and the stainless steel canisters. When issues are identified through such an analysis, they provide the technical basis for answering 'what if' questions and, if needed, for conducting additional analyses, testing, or other efforts to resolve them, in order to base the licensing on solid technical grounds. The analysis reported herein systematically assessed the chemical and physical properties and the potential interactions of the materials that comprise typical US Department of Energy (DOE) SNFs and the stainless steel canisters in which they will be stored, transported, and placed in a geologic repository for final disposition. The primary focus in each step of the analysis was to identify any possible phenomena that could potentially compromise the structural integrity of the canisters and to assess their thermodynamic feasibility.
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S_IOS with respect to a parameter which reintroduces the internal energy operator H0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (H0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of H0 on S_IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated for the He+H2 system, and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Linear regression and sensitivity analysis in nuclear reactor design
International Nuclear Information System (INIS)
Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.
2015-01-01
Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using the Brayton cycle for the design of a GCFBR. • Performed detailed sensitivity analysis on a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters to which a LR model can be fit for predictive analysis. For those parameters, a regression surface is created from trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate variance analysis, and analysis of the collinearity of the data
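The LR-based sensitivity strategy can be illustrated with standardized regression coefficients (SRCs) on a toy two-input response; the response function, inputs, and all numbers are assumptions for illustration, not the paper's GCFBR model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical linear "reactor response" with two uncertain inputs
n = 5_000
x = rng.normal(size=(n, 2)) * [2.0, 0.5] + [10.0, 3.0]
y = 3.0 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(scale=0.5, size=n)

# Fit by ordinary least squares, then standardize: SRC_i = b_i * sd(x_i) / sd(y)
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
src = beta[1:] * x.std(axis=0) / y.std()

# For a well-fit linear model, sum(SRC^2) ~ R^2; closeness to 1 supports
# the linearity assumption that the LR strategy relies on.
r2_proxy = float((src ** 2).sum())
```

The SRCs rank the inputs by their contribution to output variance, which is the kind of parameter screening the strategy above uses before fitting a predictive regression surface.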
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
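A naive SampEn implementation makes the regularity statistic being tuned above concrete; the parameters m and r and the two test signals are illustrative assumptions, not DASim's schedules:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive SampEn(m, r) = -log(A/B): B counts template pairs of length m
    within Chebyshev distance r (self-matches excluded), A does the same
    for length m + 1. Lower values indicate a more regular series."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)
            c += int(np.sum(d < r)) - 1    # drop the self-match
        return c

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(3)
periodic = np.sin(np.linspace(0.0, 20.0 * np.pi, 400))   # highly regular
noise = rng.normal(size=400)                             # irregular
se_regular, se_irregular = sample_entropy(periodic), sample_entropy(noise)
```

Tuning an activity's regularity then amounts to adjusting schedule parameters until the resulting SampEn matches a target value.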
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
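A minimal sketch of a mixed kernel at work, with kernel ridge regression standing in for SVR (same kernel trick, simpler closed-form fit); the mixing weight w, polynomial degree, and gamma are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def mixed_kernel(A, B, w=0.5, degree=2, gamma=2.0):
    """Convex combination of a polynomial kernel (global trend) and a
    Gaussian RBF kernel (local detail)."""
    poly = (A @ B.T + 1.0) ** degree
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return w * poly + (1.0 - w) * np.exp(-gamma * sq)

# Noisy quadratic training data
x = rng.uniform(-1.0, 1.0, size=(300, 1))
y = x[:, 0] ** 2 + 0.05 * rng.normal(size=300)

# Kernel ridge fit: alpha = (K + lambda * I)^(-1) y
K = mixed_kernel(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(x)), y)

def predict(x_new):
    return mixed_kernel(np.atleast_2d(x_new), x) @ alpha

pred = float(predict([[0.5]])[0])   # true value: 0.5^2 = 0.25
```

The convex combination keeps the kernel matrix positive semi-definite, which is why the two components can be mixed freely before fitting.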
Energy Technology Data Exchange (ETDEWEB)
Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)
2012-07-01
The sensitivities of the k-eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
Multi-criteria decision making: an example of sensitivity analysis
Directory of Open Access Journals (Sweden)
Dragan S. Pamučar
2017-05-01
This study provides a model for evaluating the result consistency of multi-criteria decision making (MDM) methods and selecting the optimal one. The model is based on the analysis of the results of MDM methods, that is, the analysis of changes in their rankings that occur as a result of alterations in input parameters. In the recommended model, we examine the sensitivity of MDM methods to changes in criteria weights, and the consistency of their results under changes in the measurement scale and in the way the criteria are formulated. In the final phase of the model, we select the most suitable method for the observed problem and the optimal alternative. The model is tested on an example in which the optimal MDM method had to be selected in order to determine the location of a logistical center. During the selection process, the TOPSIS, COPRAS, VIKOR and ELECTRE methods were considered. The VIKOR method demonstrated the greatest ranking stability and was selected as the best-suited method for ranking the locations of the logistical center. Results of the demonstrated analysis indicate the sensitivity of standard MDM methods to the criteria considered in this work. Therefore, it is necessary to take the stability of the considered methods into account when selecting the optimal method.
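The weight-sensitivity check applied to these methods can be sketched with TOPSIS; the decision matrix, weights, and all-benefit-criteria assumption below are hypothetical, not the study's logistical-center data:

```python
import numpy as np

def topsis(matrix, weights):
    """Score alternatives by relative closeness to the ideal solution
    (all criteria treated as benefit criteria in this sketch)."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical matrix: 3 candidate locations x 3 benefit criteria
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0]])
rank = np.argsort(-topsis(X, np.array([0.20, 0.40, 0.40])))
# Perturb the criteria weights and check whether the winner is stable
rank_pert = np.argsort(-topsis(X, np.array([0.25, 0.375, 0.375])))
```

Repeating the perturbation over a grid of weight vectors, and across methods, is exactly the kind of stability comparison the model above uses to pick the most robust MDM method.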
Sensitivity analysis practices: Strategies for model-based inference
Energy Technology Data Exchange (ETDEWEB)
Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)
2006-10-15
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
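The OAT failure mode criticized above is easy to reproduce with a purely interactive model; the model and nominal point are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x1, x2: x1 * x2            # output driven only by an interaction

# OAT around the nominal point (0, 0): each factor is varied alone,
# so both appear to have zero effect on the output.
delta = np.linspace(-1.0, 1.0, 21)
oat_range_1 = np.ptp(f(delta, np.zeros_like(delta)))
oat_range_2 = np.ptp(f(np.zeros_like(delta), delta))

# A variance-based view over the full input space tells another story:
x = rng.uniform(-1.0, 1.0, size=(100_000, 2))
var_y = f(x[:, 0], x[:, 1]).var()     # analytically Var(Y) = 1/9 > 0
```

OAT declares both factors inert while the output variance is clearly nonzero; variance-based (e.g. total-effect) indices attribute all of it to the interaction.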
Multiobjective engineering design optimization problems: a sensitivity analysis approach
Directory of Open Access Journals (Sweden)
Oscar Brito Augusto
2012-12-01
This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause uncertainties, associates a robustness index with each design alternative, and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, and accepts the resulting interval of variation in the objective functions. The second method does not require any probability distribution law for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
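One way to realize the interval-based idea sketched above is to score each design alternative by the worst-case spread of its objective when a design variable varies over the accepted interval. The objective function, candidate points, and interval width below are all hypothetical, not taken from the paper.

```python
# Score each design alternative by the worst-case deviation of its
# objective over an accepted interval of variation (a hypothetical
# stand-in for the paper's robustness index).

def objective(x):
    return (x - 1.0) ** 2 + 0.5 * x  # performance measure to minimize

def robustness_index(x, half_width, n=50):
    """Worst-case objective deviation when x varies in [x-hw, x+hw]."""
    f0 = objective(x)
    samples = [x - half_width + 2 * half_width * k / (n - 1) for k in range(n)]
    return max(abs(objective(s) - f0) for s in samples)

candidates = [0.2, 0.75, 1.6]
scored = [(x, objective(x), robustness_index(x, 0.1)) for x in candidates]
for x, f, r in scored:
    print(f"x={x:.2f}  objective={f:.3f}  robustness index={r:.3f}")
# Designs near the flat region of the objective get a smaller (better)
# robustness index; the index can then enter the optimization as an
# extra objective alongside the performance function itself.
```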
Nuclear data sensitivity/uncertainty analysis for XT-ADS
International Nuclear Information System (INIS)
Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert
2011-01-01
Highlights: → Sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → These uncertainties do not satisfy the target accuracy of 0.3% Δk for the criticality. → To achieve this accuracy, the uncertainties should be reduced through experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. The sensitivity analysis showed that the sensitivity coefficients differ significantly between geometry models and calculation codes. The uncertainty analysis confirmed that the uncertainties deduced from the covariance data vary significantly with the choice of covariance library: for the XT-ADS criticality they were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. These uncertainties do not satisfy the target accuracy of 0.3% Δk for the criticality; to achieve it, they should be reduced through experiments under adequate conditions.
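Uncertainty figures like those quoted above are conventionally obtained by folding sensitivity profiles with covariance matrices, the "sandwich rule" of first-order perturbation theory. A minimal sketch with made-up numbers (these are illustrative values, not XT-ADS data):

```python
# Sandwich rule of first-order uncertainty propagation:
# relative variance of the response R is S^T C S, where S is the relative
# sensitivity profile of R to each group cross section and C is the
# relative covariance matrix of those cross sections.
# All numbers below are illustrative, not XT-ADS data.

def sandwich(S, C):
    """Relative variance of the response: S^T C S."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

S = [0.30, -0.15, 0.05]          # relative sensitivities (dR/R)/(dsigma/sigma)
C = [[0.0016, 0.0004, 0.0],      # 4% standard deviation on group 1, etc.,
     [0.0004, 0.0025, 0.0],      # with some group-to-group correlation
     [0.0,    0.0,    0.0100]]

rel_var = sandwich(S, C)
rel_std = rel_var ** 0.5
print(f"{100 * rel_std:.2f}% relative standard deviation")
```

Off-diagonal covariance terms can either inflate or cancel contributions, which is why the deduced uncertainty depends so strongly on which covariance library is used.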
Hines, Stella E; Barker, Elizabeth A; Robinson, Maura; Knight, Vijaya; Gaitens, Joanna; Sills, Michael; Duvall, Kirby; Rose, Cecile S
2015-12-01
An epoxy resin worker developed hypersensitivity pneumonitis requiring lung transplantation and had an abnormal blood lymphocyte proliferation test (LPT) to an epoxy hardener. We assessed the prevalence of symptoms, abnormal spirometry, and abnormal epoxy resin LPT results in epoxy resin workers compared to unexposed workers. Participants completed questionnaires and underwent spirometry. We collected blood for epoxy resin LPT and calculated stimulation indices for five epoxy resin products. We compared 38 exposed to 32 unexposed workers. Higher-exposed workers were more likely to report cough (OR 10.86, [1.23-infinity], p = 0.030) or wheeze (OR 4.44, [1.00-22.25], p = 0.049) than unexposed workers, even controlling for smoking. Higher-exposed workers were more likely to have abnormal FEV1 than unexposed workers (OR 10.51, [0.86-589.9], p = 0.071), although not statistically significant when adjusted for smoking. There were no differences in the proportion of abnormal epoxy resin system LPTs between exposed and unexposed workers. In summary, workers exposed to epoxy resin system chemicals were more likely to report respiratory symptoms and have abnormal FEV1 than unexposed workers. Use of epoxy resin LPT was not helpful as a biomarker of exposure and sensitization. © 2015 Wiley Periodicals, Inc.
Sensitization trajectories in childhood revealed by using a cluster analysis.
Schoos, Ann-Marie M; Chawes, Bo L; Melén, Erik; Bergström, Anna; Kull, Inger; Wickman, Magnus; Bønnelykke, Klaus; Bisgaard, Hans; Rasmussen, Morten A
2017-12-01
Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent patterns explicitly characterizing temporal development of sensitization while clustering allergens and children. Subsequently, these patterns were investigated in relation to asthma, rhinitis, and eczema. Verification was sought in an independent unselected birth cohort (BAMSE) constituting 3051 children with specific IgE against the same allergens at 4 and 8 years of age. The nonnegative sparse parallel factor analysis indicated a complex latent structure involving 7 age- and allergen-specific patterns in the COPSAC2000 birth cohort data: (1) dog/cat/horse, (2) timothy grass/birch, (3) molds, (4) house dust mites, (5) peanut/wheat flour/mugwort, (6) peanut/soybean, and (7) egg/milk/wheat flour. Asthma was solely associated with pattern 1 (odds ratio [OR], 3.3; 95% CI, 1.5-7.2), rhinitis with patterns 1 to 4 and 6 (OR, 2.2-4.3), and eczema with patterns 1 to 3 and 5 to 7 (OR, 1.6-2.5). All 7 patterns were verified in the independent BAMSE cohort (R² > 0.89). This study suggests the presence of specific sensitization patterns in early childhood differentially associated with development of
Biosphere dose conversion Factor Importance and Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-10-15
This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
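The stability check described in point (1) can be sketched in a few lines: compute the expected-value-maximizing action under the nominal probabilities, then again under a pessimistic perturbation, and see whether the recommendation flips. The tree, payoffs, and perturbation size below are hypothetical, not taken from the paper.

```python
# A minimal sketch (not the authors' tool): a two-action decision with an
# uncertain branch probability. We check whether the EV-maximizing strategy
# is stable under a pessimistic perturbation of size eps.

def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

def best_action(p_risky):
    """Choose between a risky and a safe action for a given probability."""
    ev_risky = expected_value(p_risky, 100.0, -50.0)
    ev_safe = 20.0
    return ("risky", ev_risky) if ev_risky > ev_safe else ("safe", ev_safe)

p_nominal = 0.55
eps = 0.10  # distributional uncertainty around the elicited probability

choice_nominal, _ = best_action(p_nominal)
choice_pessimistic, _ = best_action(p_nominal - eps)

print(choice_nominal)      # risky: EV = 0.55*100 - 0.45*50 = 32.5 > 20
print(choice_pessimistic)  # safe:  EV = 0.45*100 - 0.55*50 = 17.5 < 20
# The optimal strategy flips inside the uncertainty interval, so the
# nominal recommendation is not robust at this eps.
```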
Sensitivity Analysis of OECD Benchmark Tests in BISON
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
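The two correlation statistics named above differ in what they detect: Pearson measures linear association, while Spearman is Pearson applied to ranks and so captures monotone but nonlinear input-output relationships. A minimal pure-Python sketch on toy data (not BISON responses):

```python
# Pearson measures linear association; Spearman is Pearson on ranks, so it
# captures monotone but nonlinear relationships, which is why sampling-based
# sensitivity studies typically report both.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

# A strongly nonlinear but monotone response, e.g. y = x**5:
xs = [0.1 * i for i in range(1, 21)]
ys = [x ** 5 for x in xs]
sp = spearman(xs, ys)
pe = pearson(xs, ys)
print(round(sp, 3))  # 1.0: perfect monotone association
print(pe < 0.95)     # True: linear correlation understates the dependence
```

(The rank helper above assumes distinct values; tied observations would need averaged ranks.)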
Local sensitivity analysis of a distributed parameters water quality model
International Nuclear Information System (INIS)
Pastres, R.; Franco, D.; Pecenik, G.; Solidoro, C.; Dejak, C.
1997-01-01
A local sensitivity analysis of a 1D water-quality reaction-diffusion model is presented. The model describes the seasonal evolution of one of the deepest channels of the lagoon of Venice, which is affected by nutrient loads from the industrial area and heat emission from a power plant. Its state variables are: water temperature, concentrations of reduced and oxidized nitrogen, Reactive Phosphorus (RP), phytoplankton and zooplankton densities, Dissolved Oxygen (DO) and Biological Oxygen Demand (BOD). Attention has been focused on the identifiability and ranking of the parameters related to primary production in different mixing conditions
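Parameter ranking in a local sensitivity analysis is typically done with normalized sensitivity coefficients, S_i = (p_i / y) dy/dp_i, estimated by finite differences. A sketch of that procedure; the toy response function and parameter names below are assumptions for illustration, not the lagoon model.

```python
# Normalized local sensitivity coefficients S_i = (p_i / y) * dy/dp_i,
# estimated by central finite differences, then ranked by magnitude.
# The "model" here is a toy stand-in, not the Venice lagoon model.

def model(params):
    growth, mortality, light = params["growth"], params["mortality"], params["light"]
    return growth * light / (1.0 + mortality)  # toy primary-production response

def normalized_sensitivities(model, params, h=1e-6):
    y0 = model(params)
    out = {}
    for name, p in params.items():
        hi = dict(params, **{name: p * (1 + h)})
        lo = dict(params, **{name: p * (1 - h)})
        dy_dp = (model(hi) - model(lo)) / (2 * p * h)
        out[name] = p * dy_dp / y0
    return out

params = {"growth": 1.2, "mortality": 0.3, "light": 0.8}
S = normalized_sensitivities(model, params)
ranking = sorted(S, key=lambda k: abs(S[k]), reverse=True)
print({k: round(v, 3) for k, v in S.items()})
print(ranking)  # 'mortality' ranks last: |S| = 0.3/1.3, about 0.23
```

Because the coefficients are normalized, they are dimensionless and directly comparable across parameters with different units, which is what makes the ranking meaningful.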
Sensitivity Analysis of Structures by Virtual Distortion Method
DEFF Research Database (Denmark)
Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard
1991-01-01
are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of sensitivity derivatives. This method has originally been applied to structural remodelling and collapse analysis, see ... first-order reliability methods (FORM), see Madsen et al. [3]. Also the rapid growth of computing power has been very important. The most effective optimization algorithms require that the derivatives of the objective function and the constraints are determined with high accuracy. Usually, quasi-analytical derivatives ...
Sensitivity analysis and design optimization through automatic differentiation
International Nuclear Information System (INIS)
Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean
2005-01-01
Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms
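The transformation these tools perform can be illustrated at small scale with operator overloading: forward-mode automatic differentiation propagates a value and its derivative together through every arithmetic operation. This is a minimal dual-number sketch of the core idea, not the source-transformation infrastructure the article describes.

```python
# Forward-mode automatic differentiation in miniature: dual numbers carry
# the value and the derivative through every arithmetic operation, exactly
# (to machine precision), with no finite-difference truncation error.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x  # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(2.0, 1.0)  # seed derivative 1.0: differentiate with respect to x
y = f(x)
print(y.value, y.deriv)  # 12.0 14.0
```

Production AD tools apply the same chain-rule bookkeeping to entire simulation codes, either by operator overloading as here or by source transformation as described in the abstract.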
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Energy Technology Data Exchange (ETDEWEB)
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
Energy Technology Data Exchange (ETDEWEB)
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
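The propagation step described above can be shown in miniature: estimate the derivative of the response with respect to each input by finite differences, then combine with the input standard deviations as var(V) = sum_i (dV/dp_i * sigma_i)^2, assuming independent inputs. The closed-form "burn velocity" below is a made-up stand-in, not the finite-element model.

```python
# First-order uncertainty propagation with numerically estimated
# derivatives. The burn-velocity "model" is a hypothetical closed form,
# not the finite-element foam decomposition model.

def burn_velocity(p):
    activation, conductivity, density = p
    return 0.5 * conductivity ** 0.5 / (density * activation)

nominal = [1.2, 0.04, 0.8]
sigmas = [0.06, 0.004, 0.02]   # one standard deviation per input

def propagate(model, nominal, sigmas, h=1e-6):
    var = 0.0
    for i, (p, s) in enumerate(zip(nominal, sigmas)):
        hi = list(nominal); hi[i] = p * (1 + h)
        lo = list(nominal); lo[i] = p * (1 - h)
        dVdp = (model(hi) - model(lo)) / (2 * p * h)  # central difference
        var += (dVdp * s) ** 2
    return var

std = propagate(burn_velocity, nominal, sigmas) ** 0.5
print(std)  # roughly 0.0078 for these illustrative numbers
```

The abstract's warning applies directly here: when the response itself is a numerical derivative, this finite-difference layer adds a second differentiation, and the step size and mesh resolution must be chosen carefully to keep numerical noise below the signal.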
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
Sensitivity analysis for the effects of multiple unmeasured confounders.
Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate
2016-09-01
Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
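The central mechanism, many individually weak confounders jointly biasing an association, can be reproduced in a small simulation. The coefficients, sample size, and number of confounders below are illustrative assumptions, not estimates from the British Women's Heart and Health Study.

```python
import random

# Minimal simulation of the mechanism described above: many weak
# confounders z_k affect both exposure x and outcome y; the true causal
# effect of x on y is zero, yet the unadjusted association is clearly biased.

random.seed(42)
n, k = 5000, 25          # observations, unmeasured confounders
gauss = random.gauss

data = []
for _ in range(n):
    z = [gauss(0, 1) for _ in range(k)]
    x = 0.1 * sum(z) + gauss(0, 1)   # each confounder weakly drives exposure
    y = 0.1 * sum(z) + gauss(0, 1)   # ... and weakly drives the outcome
    data.append((x, y))

# Unadjusted OLS slope of y on x: cov(x, y) / var(x).
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cov = sum((x - mx) * (y - my) for x, y in data) / n
var = sum((x - mx) ** 2 for x, _ in data) / n
slope = cov / var
print(round(slope, 2))  # clearly nonzero despite a true effect of 0
```

Each confounder here explains only a small fraction of the variance of x and y, yet their joint effect produces a substantial spurious slope, exactly the point the authors make about sensitivity analyses that consider only one unmeasured confounder at a time.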
Application of sensitivity analysis in design of sustainable buildings
Energy Technology Data Exchange (ETDEWEB)
Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan; Rasmussen, Henrik; Seinre, Erkki; Thomas, Sara [Department of Civil Engineering, Aalborg University, Sohngaardsholmsvej 57, 9000 Aalborg (Denmark)
2009-09-15
Building performance can be expressed by different indicators such as primary energy use, environmental load and/or the indoor environmental quality and a building performance simulation can provide the decision maker with a quantitative measure of the extent to which an integrated design solution satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify the most important parameters in relation to building performance and to focus design and optimization of sustainable buildings on these fewer, but most important parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where it is still possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark. (author)
Sensitivity Analysis of a process based erosion model using FAST
Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin
2015-04-01
deposition are related to overland flow velocity using the equation of Engelund and Hansen and the sinking velocity of grain sizes, respectively. The sensitivity analysis was performed based on virtual hillslopes similar to those in the Weiherbach catchment. We applied the FAST-method (Fourier Amplitude Sensitivity Test), which provides a global sensitivity analysis with comparably few model runs. We varied model parameters in predefined and, for the Weiherbach catchment, physically meaningful parameter ranges. Those parameters included rainfall intensity, surface roughness, hillslope geometry, land use, erosion resistance, and soil hydraulic parameters. The results of this study allow guiding further modelling efforts in the Weiherbach catchment with respect to data collection and model modification.
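The FAST method used above can be sketched in stripped-down form: every factor oscillates along a space-filling search curve at a distinct integer frequency, and the output variance attributable to factor i is read off the Fourier amplitudes at the harmonics of its frequency. The model, frequencies, and harmonic count below are illustrative assumptions, not the erosion model or its parameters.

```python
import math

# Stripped-down sketch of the FAST idea: all factors oscillate at distinct
# integer frequencies along a search curve, and the first-order sensitivity
# of factor i is the share of output variance found at the harmonics of w_i.

def fast_indices(model, freqs, n_samples=512, n_harmonics=4):
    s = [2 * math.pi * k / n_samples - math.pi for k in range(n_samples)]
    # Search curve mapping s to each factor's [0, 1] range.
    xs = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in freqs]
          for sk in s]
    y = [model(x) for x in xs]
    mean = sum(y) / n_samples
    total_var = sum((v - mean) ** 2 for v in y) / n_samples

    def power(j):  # variance carried by frequency j
        a = sum(v * math.cos(j * sk) for v, sk in zip(y, s)) * 2 / n_samples
        b = sum(v * math.sin(j * sk) for v, sk in zip(y, s)) * 2 / n_samples
        return (a * a + b * b) / 2

    return [sum(power(p * w) for p in range(1, n_harmonics + 1)) / total_var
            for w in freqs]

# Toy model: the first factor dominates, the second matters less.
S = fast_indices(lambda x: 3 * x[0] + x[1], freqs=[11, 21])
print([round(v, 2) for v in S])  # first factor captures roughly 90%
```

The appeal noted in the abstract, a global analysis with comparatively few model runs, comes from the fact that one sweep along the curve probes all factors simultaneously.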
Turbine blade temperature calculation and life estimation - a sensitivity analysis
Directory of Open Access Journals (Sweden)
Majid Rezazadeh Reyhani
2013-06-01
The overall operating cost of modern gas turbines is greatly influenced by the durability of hot-section components operating at high temperatures. In turbine operating conditions, defects may occur which can decrease hot-section life. In the present paper, methods used for calculating blade temperature and life are demonstrated and validated. Using these methods, a set of sensitivity analyses on the parameters affecting the temperature and life of a high-pressure, high-temperature turbine first-stage blade is carried out. The investigated uncertainties are: (1) blade coating thickness, (2) coolant inlet pressure and temperature (as a result of the secondary air system), and (3) gas turbine load variation. Results show that increasing the thermal barrier coating thickness by 3 times leads to a rise in blade life by 9 times. In addition, considering inlet cooling temperature and pressure, a deviation in temperature has a greater effect on blade life. One of the interesting points that can be realized from the results is that 300 hours of operation at 70% load can be equal to one hour of operation at base load.
Cross section weighting spectrum for fast reactor analysis
International Nuclear Information System (INIS)
Nascimento, Jamil A. do; Ono, Shizuca; Guimaraes, Lamartine N.F.
2009-01-01
Preparation of a nuclear data library is the first task a reactor analyst must perform for a neutronic analysis of a reactor type. Today, in the fast reactor area, the scheme used to generate this library includes the processing of an evaluated nuclear data file to obtain cross sections in thousands of groups. The nuclear data are then processed by a cell code to obtain the neutron flux, which is used to condense the large number of energy groups to a number practical for reactor analysis. The first step of this scheme requires a weighting spectrum to generate the nuclear data. Here, it is proposed to use the flux estimated by a Monte Carlo code, with the cell or exact geometry and the actual composition of the problem, to obtain the main portion of the weighting spectrum instead of a code's built-in function. As an example, the differences between the spectra of selected pins obtained with an MCNP5 calculation of a hexagonal fast reactor fuel assembly are presented, together with a comparison between these spectra and the one obtained with the representative unit-cell model of this fuel assembly. The comparisons support that the proposed problem-dependent procedure may be more accurate and a good choice for generating weighting spectra in an ultra-fine energy structure for fast reactor analysis. The proposed method will be used in space reactor neutronic analysis. (author)
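The condensation step described above amounts to flux-weighted averaging over fine groups: sigma_G = sum(sigma_g * phi_g) / sum(phi_g) for each broad group G. A minimal sketch with illustrative numbers (not actual cross-section or spectrum data):

```python
# Flux-weighted group collapse: fine-group cross sections are condensed to
# broad groups by weighting with a (problem-dependent) flux spectrum,
#   sigma_G = sum(sigma_g * phi_g) / sum(phi_g).
# All numbers below are illustrative, not actual nuclear data.

def collapse(sigma, phi, groups):
    """Collapse fine groups into broad groups given index boundaries."""
    broad = []
    for lo, hi in zip(groups[:-1], groups[1:]):
        num = sum(sigma[g] * phi[g] for g in range(lo, hi))
        den = sum(phi[g] for g in range(lo, hi))
        broad.append(num / den)
    return broad

sigma_fine = [1.0, 4.0, 9.0, 2.0]   # fine-group cross sections (barn)
phi_flat = [1.0, 1.0, 1.0, 1.0]     # flat weighting spectrum
phi_cell = [1.0, 3.0, 0.5, 2.0]     # spectrum from a Monte Carlo cell run

print(collapse(sigma_fine, phi_flat, [0, 2, 4]))  # [2.5, 5.5]
print(collapse(sigma_fine, phi_cell, [0, 2, 4]))  # the weighting changes it
```

The two outputs differ, which is the abstract's point: the broad-group constants depend on the weighting spectrum, so a spectrum taken from the actual problem geometry and composition can be more faithful than a generic built-in function.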
Directory of Open Access Journals (Sweden)
Brähler Elmar
2010-10-01
Background: Disgust sensitivity is defined as a predisposition to experiencing disgust, which can be measured on the basis of the Disgust Scale and its German version, the Questionnaire for the Assessment of Disgust Sensitivity (QADS). In various studies, different factor structures were reported for either instrument. The differences are most likely due to the selected factor analysis estimation methods and the small, non-representative samples. Consequently, the aims of this study were to explore and confirm a theory-driven and statistically coherent QADS factor structure in a large representative sample and to present its standard values. Methods: The QADS was answered by N = 2473 healthy subjects. The respective households and participants were selected using the random-route sampling method. Afterwards, the collected sample was compared to information from the Federal Statistical Office to ensure that it was representative of the German residential population. With these data, an exploratory Promax-rotated Principal Axis Factor Analysis as well as comparative confirmatory factor analyses with robust Maximum Likelihood estimations were computed. Possible socio-demographic influences were quantified as effect sizes. Results: The data-driven and theoretically sound solution with the three highly interrelated factors Animal Reminder Disgust, Core Disgust, and Contamination Disgust led to a moderate model fit. All QADS scales had very good reliabilities (Cronbach's alpha from .90 to .95). No age differences were found among the participants; however, the female participants showed remarkably higher disgust ratings. Conclusions: Based on the representative sample, the QADS factor structure was revised. Gender-specific standard percentages permit a population-based assessment of individual disgust sensitivity. The differences between the original QADS, the new solution, and the Disgust Scale - Revised will be discussed.
Sensitivity Analysis to Control the Far-Wake Unsteadiness Behind Turbines
Directory of Open Access Journals (Sweden)
Esteban Ferrer
2017-10-01
We explore the stability of wakes arising from 2D flow actuators based on linear momentum actuator disc theory. We use stability and sensitivity analysis (using adjoints) to show that the wake stability is controlled by the Reynolds number and the thrust force (or flow resistance) applied through the turbine. First, we report that decreasing the thrust force has a stabilising effect comparable to a decrease in Reynolds number (based on the turbine diameter). Second, a discrete sensitivity analysis identifies two regions for suitable placement of flow control forcing, one close to the turbines and one far downstream. Third, we show that adding a localised control force, in the regions identified by the sensitivity analysis, stabilises the wake. In particular, locating the control forcing close to the turbines results in enhanced stabilisation, such that the wake remains steady for significantly higher Reynolds numbers or turbine thrusts. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the turbine is more efficient for stabilising the wake than controlling the wake far downstream. The analysis is performed for the first flow bifurcation (at low Reynolds numbers), which serves as a foundation of the stabilisation technique, but the control strategy is tested at higher Reynolds numbers in the final section of the paper, showing enhanced stability for a turbulent flow case.
Parametric sensitivity analysis for temperature control in outdoor photobioreactors.
Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando
2013-09-01
In this study a critical analysis of the input parameters of a model describing the broth temperature in flat-plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using the design-of-experiments approach, variation of selected parameters was introduced and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those of the reactor wall and the shading factor, both related to direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between the plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Uncertainty and sensitivity analysis of environmental transport models
International Nuclear Information System (INIS)
Margulies, T.S.; Lancaster, L.E.
1985-01-01
An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site), step-wise regression analyses were performed on transformations of the simulated spatial concentration patterns. 27 references, 2 figures, 3 tables
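The sampling-plus-regression strategy this record describes can be sketched in a few lines: draw a Latin hypercube sample (one stratum per variable per run), push it through the model, and rank contributors by regression coefficient magnitude. The four-input model below is a made-up stand-in for CRAC-2, with inputs 0 and 2 deliberately dominant.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """n stratified samples in d dimensions: one point per row/column cell."""
    cut = (np.arange(n) + rng.random((d, n))) / n   # one point per stratum
    for row in cut:
        rng.shuffle(row)                            # decouple the dimensions
    return cut.T                                    # shape (n, d), in [0, 1)

# Hypothetical dispersion model: output driven mainly by inputs 0 and 2.
X = latin_hypercube(200, 4, rng)
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + 2.0 * X[:, 2] + 0.05 * rng.random(200)

# Least-squares fit; coefficient magnitudes rank the contributors,
# a simplified stand-in for the step-wise regression used in the study.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(200), X]), y, rcond=None)
ranking = np.argsort(-np.abs(coef[1:]))
print(ranking)  # inputs 0 and 2 come out first
```

A full step-wise procedure would add or drop regressors by significance; ranking standardized coefficients conveys the same idea for this sketch.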
Sensitivity analysis of stochastically forced quasiperiodic self-oscillations
Directory of Open Access Journals (Sweden)
Irina Bashkirtseva
2016-08-01
Full Text Available We study the problem of stochastically forced quasi-periodic self-oscillations of nonlinear dynamic systems, which are modelled by an invariant torus in the phase space. For weak noise, an asymptotic approximation of the stationary distribution of random trajectories is studied using the quasipotential. For a constructive analysis of the probabilistic distribution near a torus, we use a quadratic approximation of the quasipotential. A parametric description of this approximation is based on the stochastic sensitivity function (SSF) technique. Using this technique, we create a new mathematical method for the probabilistic analysis of stochastic flows near the torus. The construction of the SSF is reduced to a boundary value problem for a linear matrix differential equation. For the case of a two-torus in three-dimensional space, a constructive solution of this problem is given. Our theoretical results are illustrated with an example.
Nordic reference study on uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.
1989-01-01
This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experience from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues, such as common cause failures and human interactions, and to demonstrate the impact of the associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial, but can be explained. The sensitivity analyses which have been carried out concern, e.g., the use of different CCF quantification models, alternative handling of CCF data, time windows for operator actions, time dependences in phased mission operation, the impact of state-of-knowledge dependences, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper.
Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis
International Nuclear Information System (INIS)
Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.
1991-01-01
The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs
Cross-covariance based global dynamic sensitivity analysis
Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng
2018-02-01
For identifying the cross-covariance source of dynamic output at each time instant for structural systems involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at each time instant, and an integration of the cross-covariance change over a specified time interval is employed to measure the whole contribution of an input to the cross-covariance of the output. The GDS main effect indices and the GDS total effect indices can then be easily defined after the integration, and they are effective in identifying, respectively, the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first order partial effect reflects the individual effects of the inputs on the output variance, and the second order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. A Monte Carlo simulation (MCS) procedure and a Kriging surrogate method are developed to solve the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
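The idea of attributing output variability to each input at each time instant can be sketched with a covariance-based pick-freeze estimator: freeze one input between two sample sets, resample the rest, and normalise the resulting covariance by the output variance at that instant. The two-input dynamic response below is a toy model, not the structural systems of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 5000, np.linspace(0.0, 2.0 * np.pi, 50)

# Toy dynamic response: X1 drives the output early, X2 a quarter period later.
def response(x1, x2, t):
    return np.outer(x1, np.cos(t)) + np.outer(x2, np.sin(t))

A1, A2 = rng.normal(size=n), rng.normal(size=n)   # sample set A
B1, B2 = rng.normal(size=n), rng.normal(size=n)   # independent sample set B

Y_A = response(A1, A2, t)

def main_effect(i):
    """Pick-freeze: keep input i from set A, resample the other from set B."""
    x1 = A1 if i == 0 else B1
    x2 = A2 if i == 1 else B2
    Y_i = response(x1, x2, t)
    cov = ((Y_A - Y_A.mean(0)) * (Y_i - Y_i.mean(0))).mean(0)
    return cov / Y_A.var(0)          # normalised contribution at each instant

S1, S2 = main_effect(0), main_effect(1)
# S1(t) tracks cos^2(t) and S2(t) tracks sin^2(t): the dominant input
# changes with the time instant, which is the point of a dynamic index.
```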
A Workflow for Global Sensitivity Analysis of PBPK Models
Directory of Open Access Journals (Sweden)
Kevin eMcNally
2011-06-01
Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of PBPK models provides an ideal framework into which disparate in vitro and in vivo data can be integrated, and utilised to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
Poern, K.; Aakerlund, O.
1985-10-01
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning the geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer assisted analysis of function, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, such as the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. Applying the other basic method, the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
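The two basic approaches the report contrasts can be sketched side by side: direct Monte Carlo simulation of the model, versus fitting a quadratic response surface on a small design and sampling the cheap surrogate instead. The two-input "performance model" below is an invented stand-in for an expensive repository code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical performance model (stand-in for an expensive repository code).
def model(x):
    return np.exp(0.5 * x[:, 0]) + x[:, 1] ** 2

# Approach 1: direct Monte Carlo simulation of the model itself.
X = rng.normal(size=(20000, 2))
y_direct = model(X)

# Approach 2: fit a quadratic response surface on a small design (200 runs),
# then sample the surrogate in place of the model.
Xd = rng.normal(size=(200, 2))
A = np.column_stack([np.ones(200), Xd, Xd ** 2, Xd[:, 0] * Xd[:, 1]])
beta, *_ = np.linalg.lstsq(A, model(Xd), rcond=None)

As = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, 0] * X[:, 1]])
y_surface = As @ beta

print(y_direct.mean(), y_surface.mean())
```

For this smooth model the two output distributions agree closely at a fraction of the model evaluations; strongly non-quadratic models are where the surrogate approach degrades.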
Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2017-12-01
This study proposes a mathematical model of the Anthroponotic visceral leishmaniasis epidemic with a saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. The disease-free state is obtained with the help of the control strategies. The threshold condition for global asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
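The next-generation method used here computes R0 as the spectral radius of F V^-1, where F collects new-infection rates and V the transitions between infected compartments. The sketch below does this for a generic two-compartment (exposed/infectious) chain with illustrative parameter values, not the leishmaniasis model of the paper, and adds a finite-difference elasticity of the kind used in the sensitivity analysis of R0.

```python
import numpy as np

# Illustrative parameters: transmission, progression, recovery, mortality.
beta, sigma, gamma, mu = 0.4, 0.2, 0.1, 0.02

F = np.array([[0.0, beta],                # new infections enter the E class
              [0.0, 0.0]])
V = np.array([[sigma + mu, 0.0],          # outflow from E
              [-sigma, gamma + mu]])      # E -> I progression, I removal

# R0 is the spectral radius of the next-generation matrix F V^{-1}.
K = F @ np.linalg.inv(V)
R0 = float(np.abs(np.linalg.eigvals(K)).max())

# Normalised forward sensitivity (elasticity) of R0 to beta by finite
# differences; a value of 1 means R0 scales linearly with beta.
eps = 1e-6
F_pert = np.array([[0.0, beta + eps], [0.0, 0.0]])
R0_pert = float(np.abs(np.linalg.eigvals(F_pert @ np.linalg.inv(V))).max())
elasticity_beta = (R0_pert - R0) / eps * beta / R0
print(R0, elasticity_beta)
```

For this chain R0 reduces analytically to beta*sigma / ((sigma+mu)(gamma+mu)), a useful cross-check on the matrix computation.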
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
Sensitivity Analysis in Observational Research: Introducing the E-Value.
VanderWeele, Tyler J; Ding, Peng
2017-08-15
Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the "E-value," which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate. A small E-value implies little unmeasured confounding would be needed to explain away an effect estimate. The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported or some other sensitivity analysis be used. They suggest calculating the E-value for both the observed association estimate (after adjustments for measured confounders) and the limit of the confidence interval closest to the null. If this were to become standard practice, the ability of the scientific community to assess evidence from observational studies would improve considerably, and ultimately, science would be strengthened.
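The E-value defined in this abstract has a closed form on the risk ratio scale, E = RR + sqrt(RR x (RR - 1)) for RR >= 1 (with protective estimates inverted first). The sketch below applies it to an illustrative estimate and confidence limit; the numbers are made up, not taken from any study.

```python
from math import sqrt

def e_value(rr):
    """E-value for a risk ratio >= 1: rr + sqrt(rr * (rr - 1))."""
    return rr + sqrt(rr * (rr - 1.0))

def e_value_any(rr):
    """Handle protective estimates (RR < 1) by taking the inverse first."""
    return e_value(1.0 / rr if rr < 1.0 else rr)

# Illustrative observed RR = 3.9 with 95% CI (1.8, 8.0): report E-values for
# the point estimate and for the CI limit closest to the null, as the
# authors recommend.
point = e_value_any(3.9)
ci_limit = e_value_any(1.8)
print(round(point, 2), round(ci_limit, 2))
```

Read the result as: an unmeasured confounder would need risk-ratio associations of at least `point` with both treatment and outcome to fully explain away the estimate, and at least `ci_limit` to shift the confidence interval to include the null.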
Orbit uncertainty propagation and sensitivity analysis with separated representations
Balducci, Marc; Jones, Brandon; Doostan, Alireza
2017-09-01
Most approximations for stochastic differential equations with high-dimensional, non-Gaussian inputs suffer from a rapid (e.g., exponential) increase of computational cost, an issue known as the curse of dimensionality. In astrodynamics, this results in reduced accuracy when propagating an orbit-state probability density function. This paper considers the application of separated representations for orbit uncertainty propagation, where future states are expanded into a sum of products of univariate functions of initial states and other uncertain parameters. An accurate generation of separated representation requires a number of state samples that is linear in the dimension of input uncertainties. The computation cost of a separated representation scales linearly with respect to the sample count, thereby improving tractability when compared to methods that suffer from the curse of dimensionality. In addition to detailed discussions on their construction and use in sensitivity analysis, this paper presents results for three test cases of an Earth orbiting satellite. The first two cases demonstrate that approximation via separated representations produces a tractable solution for propagating the Cartesian orbit-state uncertainty with up to 20 uncertain inputs. The third case, which instead uses Equinoctial elements, reexamines a scenario presented in the literature and employs the proposed method for sensitivity analysis to more thoroughly characterize the relative effects of uncertain inputs on the propagated state.
Sensitivity analysis of numerical model of prestressed concrete containment
Energy Technology Data Exchange (ETDEWEB)
Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz
2015-12-15
Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed for the main stages of the life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier.
Sensitivity analysis for modules for various biosphere types
International Nuclear Information System (INIS)
Karlsson, Sara; Bergstroem, U.; Rosen, K.
2000-09-01
This study presents the results of a sensitivity analysis for the modules developed earlier for calculation of ecosystem specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values gained in deterministic calculations. An overview of the distribution of radionuclides between different environmental parts in the models is also presented. The radionuclides included in the study were 36 Cl, 59 Ni, 93 Mo, 129 I, 135 Cs, 237 Np and 239 Pu, selected to represent various behaviours in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed good agreement for most nuclides and modules. Exceptions occurred if very skew distributions were used for parameters of importance for the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules, except for the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide specific parameters were of minor importance for the retention of radionuclides over the simulated time period of 10 000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source for future exposure. Differences between the nuclides due to element-specific Kd-values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure to humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that the physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis. The only exception was the well module where
LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE
Bittker, D. A.
1994-01-01
LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
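The coupled-ODE-plus-sensitivity task this record describes can be sketched on a toy two-step mechanism A -> B -> C: integrate the kinetics with a fixed-step RK4 scheme, then estimate the sensitivity of a product concentration to a rate coefficient by brute-force perturbation (LSENS-style codes compute such sensitivities far more efficiently; the mechanism and rate coefficients here are invented).

```python
import numpy as np

def rk4(f, y0, t, k):
    """Fixed-step RK4 integrator for the kinetics ODE dy/dt = f(y, k)."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        s1 = f(y[i], k)
        s2 = f(y[i] + 0.5 * h * s1, k)
        s3 = f(y[i] + 0.5 * h * s2, k)
        s4 = f(y[i] + h * s3, k)
        y[i + 1] = y[i] + h / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)
    return y

# Toy mechanism A -> B -> C with hypothetical rate coefficients k[0], k[1].
def mech(y, k):
    a, b, c = y
    return np.array([-k[0] * a, k[0] * a - k[1] * b, k[1] * b])

t = np.linspace(0.0, 5.0, 101)
k = np.array([1.0, 0.5])
y = rk4(mech, np.array([1.0, 0.0, 0.0]), t, k)

# Brute-force sensitivity of the final [C] to k1 via a perturbed rerun.
eps = 1e-6
y_pert = rk4(mech, np.array([1.0, 0.0, 0.0]), t, k + np.array([eps, 0.0]))
dC_dk1 = (y_pert[-1, 2] - y[-1, 2]) / eps
print(y[-1], dC_dk1)
```

Mass conservation (the three concentrations summing to the initial amount) is a quick correctness check on any kinetics integrator of this form.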
GPU-based Integration with Application in Sensitivity Analysis
Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar
2010-05-01
The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modelling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is the Monte Carlo based methods, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time-step, and the number of time-steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but for parallel implementations. Good scrambling is
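The Sobol/Saltelli procedure mentioned above estimates first-order indices with a pick-freeze construction: two independent sample matrices A and B, plus hybrid matrices that take one column from A and the rest from B. The toy three-input model below stands in for the aggregated pollution measure (it is not the DEM), and uses plain pseudorandom sampling rather than the scrambled quasirandom sequences the paper develops.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20000, 3

# Toy model standing in for the aggregated pollution measure: input 0 is
# dominant, input 1 moderate, input 2 nearly inert on its own.
def g(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

A = rng.uniform(-1, 1, size=(n, d))
B = rng.uniform(-1, 1, size=(n, d))
yA, yB = g(A), g(B)

def sobol_first_order(i):
    """Saltelli pick-freeze estimate of S_i = Var(E[Y|X_i]) / Var(Y)."""
    ABi = B.copy()
    ABi[:, i] = A[:, i]          # column i taken from A, the rest from B
    return np.mean(yA * (g(ABi) - yB)) / np.var(yA)

S = [sobol_first_order(i) for i in range(d)]
print(S)
```

Swapping the `rng.uniform` draws for a scrambled Sobol sequence is exactly where the paper's GPU work plugs in: the estimator is unchanged, only the sample generation improves.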
Sensitivity analysis of the terrestrial food chain model FOOD III
International Nuclear Information System (INIS)
Zach, Reto.
1980-10-01
As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
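The "percentage of parameter variation" analysis this record describes is a one-at-a-time scheme: bump each parameter by a fixed percentage and record the relative change in the computed dose. The dose model and its coefficients below are purely illustrative, not the FOOD III equations.

```python
# One-at-a-time sensitivity in the style of the FOOD III study. The dose
# model and its parameter values are hypothetical, for illustration only.
def dose(params):
    return (params["intake"] * params["conc"] * params["dose_factor"]
            / (1.0 + params["decay"]))

base = {"intake": 2.0, "conc": 0.5, "dose_factor": 3.0e-7, "decay": 0.1}

def oat_sensitivity(pct):
    """Relative dose change when each parameter is bumped by pct, one at a time."""
    base_dose = dose(base)
    out = {}
    for name in base:
        bumped = dict(base, **{name: base[name] * (1.0 + pct)})
        out[name] = (dose(bumped) - base_dose) / base_dose
    return out

print(oat_sensitivity(0.10))
```

Parameters that enter the dose multiplicatively (like the dose factor) produce a relative change equal to the imposed percentage, which is why the dose factor dominates in such rankings; parameters inside nonlinear terms respond less than proportionally.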
Robust and sensitive analysis of mouse knockout phenotypes.
Directory of Open Access Journals (Sweden)
Natasha A Karp
Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how the application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should not be used. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-ray Absorptiometry data for the analysis of mouse knockouts, and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
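The batch-confounding problem described above can be demonstrated with simulated data: when controls and knockouts are unevenly spread across batches with day-to-day shifts, a pooled comparison is biased, while contrasting genotypes within each batch recovers the true effect. The within-batch contrast below is a crude stand-in for a mixed model with batch as a random effect, and every number in the simulation is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated phenotyping data: three batches with day-to-day shifts, and
# controls measured mostly in different batches from knockouts.
batch_shift = {0: 0.0, 1: 1.5, 2: -1.0}
true_effect = 0.8                      # genotype effect we hope to recover

batch, geno, y = [], [], []
for b, n_ctrl, n_ko in [(0, 30, 5), (1, 5, 30), (2, 20, 20)]:
    for g, n in [(0, n_ctrl), (1, n_ko)]:
        batch += [b] * n
        geno += [g] * n
        y += list(10.0 + batch_shift[b] + true_effect * g
                  + rng.normal(0.0, 0.5, n))
batch, geno, y = map(np.array, (batch, geno, y))

# Naive pooled comparison (Student's t-test style): confounded by batch.
naive = y[geno == 1].mean() - y[geno == 0].mean()

# Batch-aware comparison: contrast genotypes within each batch, then
# average the per-batch differences.
diffs = [y[(batch == b) & (geno == 1)].mean()
         - y[(batch == b) & (geno == 0)].mean() for b in batch_shift]
adjusted = float(np.mean(diffs))
print(naive, adjusted)  # the naive estimate is badly biased upward
```

A real mixed model additionally weights batches by their information content and supports covariates such as body weight, which this sketch omits.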
International Nuclear Information System (INIS)
Iman, R.L.; Helton, J.C.
1985-01-01
Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA is a relatively immature field. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed
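The twin goals named in this review, measuring the imprecision in a PRA outcome and identifying its major contributors, can be sketched on a toy fault-tree fragment: sample uncertain basic-event probabilities, propagate them through the Boolean structure, summarise the spread, and rank contributors by rank correlation. The tree, the lognormal spreads and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy PRA fragment: top event = (pump fails OR valve fails) AND power fails.
def top_event(p_pump, p_valve, p_power):
    return (p_pump + p_valve - p_pump * p_valve) * p_power

# Uncertain basic-event probabilities (illustrative lognormal spreads).
n = 50000
p_pump = rng.lognormal(np.log(1e-3), 0.5, n)
p_valve = rng.lognormal(np.log(5e-4), 0.7, n)
p_power = rng.lognormal(np.log(1e-2), 0.3, n)

top = top_event(p_pump, p_valve, p_power)

# Uncertainty analysis: the spread of the outcome of interest.
lo, med, hi = np.percentile(top, [5, 50, 95])

# Sensitivity analysis: rank contributors by rank (Spearman) correlation.
def rank_corr(x):
    rx = np.argsort(np.argsort(x))          # ranks of the input sample
    rt = np.argsort(np.argsort(top))        # ranks of the top-event sample
    return np.corrcoef(rx, rt)[0, 1]

print((lo, med, hi),
      [rank_corr(p) for p in (p_pump, p_valve, p_power)])
```

The percentile interval answers the uncertainty question; the rank correlations answer the sensitivity question, since each one measures how strongly an input drives the imprecision in the top-event frequency.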
The Analysis of University Students' Interpersonal Sensitiveness and Rejection Sensitiveness
Atılgan ERÖZKAN
2004-01-01
The aim of this study is to examine the relationships between university students' interpersonal sensitivity (defined as undue and excessive awareness of, and sensitivity to, the behavior and feelings of others) and rejection sensitivity. Gender, age, SES and grade differences were also examined in this context. For this purpose, 340 students (170 females; 170 males) were randomly recruited from various departments of KTU Fatih Faculty of Education. Main instruments are Information Gathe...
Sensitivity analysis of energy demands on performance of CCHP system
International Nuclear Information System (INIS)
Li, C.Z.; Shi, Y.M.; Huang, X.H.
2008-01-01
Sensitivity analysis of energy demands is carried out in this paper to study their influence on the performance of a CCHP system. Energy demand is a very important and complex factor in the optimization model of a CCHP system. Averages, uncertainty and historical peaks are adopted to describe energy demands. A mixed-integer nonlinear programming (MINLP) model that can reflect these three aspects of energy demands is established. Numerical studies are carried out based on the energy demands of a hotel and a hospital. The influence of the average, uncertainty and peaks of the energy demands on the optimal facility scheme and the economic advantages of the CCHP system is investigated. The optimization results show that the optimal GT capacity and the economy of the CCHP system depend mainly on the average energy demands. The sum of the capacities of the GB and HE equals the historical heating demand peak, and the sum of the capacities of the AR and ER equals the historical cooling demand peak. The maximum of the PG is sensitive to the historical peaks of the energy demands and is not influenced by their uncertainty, while the corresponding influence on the DH is the reverse.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Energy Technology Data Exchange (ETDEWEB)
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded the forecasts with the smallest NRMSE for all parameters. The NRMSE of the solar irradiance forecasts of the ensemble model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
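The NRMSE used above to score the forecasts can be sketched as follows. The abstract does not state the normalization convention, so normalizing by the observed range is an assumption here (plant capacity or the observed mean are other common choices), and the data are invented.

```python
import numpy as np

def nrmse(forecast, observed):
    """Root mean squared error normalized by the observed range.

    Normalizing by the range is an assumption; the study may instead
    normalize by plant capacity or by the mean observation.
    """
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / (observed.max() - observed.min())

# Toy example: hourly plant output (kW) vs. a hypothetical day-ahead forecast.
obs = np.array([0.0, 10.0, 25.0, 40.0, 30.0, 5.0])
fc = np.array([1.0, 12.0, 22.0, 43.0, 28.0, 4.0])
print(f"NRMSE = {nrmse(fc, obs):.4f}")
```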
Relative sensitivity analysis of the predictive properties of sloppy models.
Myasnikova, Ekaterina; Spirov, Alexander
2018-01-25
Among the parameters characterizing complex biological systems there are commonly some that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. Sloppiness can be expressed mathematically through saturating response functions (Hill, sigmoid), thereby embodying the biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used to predict the system's behavior at an altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to ambiguity in the parameter estimates. We introduce a method for evaluating predictive power under parameter-estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows one to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
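The link between saturating regulation and sloppiness can be illustrated with a toy logarithmic-sensitivity calculation on a Hill function; the parameter values are hypothetical and this is not the authors' gene circuit model.

```python
import numpy as np

def hill(x, vmax, k, n):
    """Saturating Hill regulation function."""
    return vmax * x**n / (k**n + x**n)

def rel_sensitivity(f, params, name, x, eps=1e-6):
    """Relative (logarithmic) sensitivity d ln f / d ln p at input x,
    via a central finite difference in the log of parameter p."""
    p0 = params[name]
    up = f(x, **dict(params, **{name: p0 * (1 + eps)}))
    dn = f(x, **dict(params, **{name: p0 * (1 - eps)}))
    return (up - dn) / (f(x, **params) * 2 * eps)

pars = {"vmax": 1.0, "k": 1.0, "n": 4}
# Deep in saturation (x >> k) the output barely responds to k: "sloppy".
print(rel_sensitivity(hill, pars, "k", x=10.0))
# Near the threshold (x ~ k) the same parameter is strongly constrained.
print(rel_sensitivity(hill, pars, "k", x=0.5))
```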
A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors
Directory of Open Access Journals (Sweden)
Xi Yu
2014-01-01
Full Text Available Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts throughout the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. To improve the quality of ecodesign, there is a growing need for an approach that can reflect the relationship between design parameters and a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors that have a significant influence on a product's environmental impacts can be identified by analyzing the relationship between the environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed with our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.
Sensitivity analysis of energy demands on performance of CCHP system
Energy Technology Data Exchange (ETDEWEB)
Li, C.Z.; Shi, Y.M.; Huang, X.H. [School of Mechanical and Power Engineering, Shanghai Jiaotong University, Dongchuan Road 800, Minhang District, Shanghai 200240 (China)
2008-12-15
Sensitivity analysis of energy demands is carried out in this paper to study their influence on the performance of a CCHP system. Energy demand is a very important and complex factor in the optimization model of a CCHP system. Averages, uncertainty and historical peaks are adopted to describe energy demands. A mixed-integer nonlinear programming (MINLP) model that can reflect these three aspects of energy demands is established. Numerical studies are carried out based on the energy demands of a hotel and a hospital. The influence of the average, uncertainty and peaks of the energy demands on the optimal facility scheme and the economic advantages of the CCHP system is investigated. The optimization results show that the optimal GT capacity and the economy of the CCHP system depend mainly on the average energy demands. The sum of the capacities of the GB and HE equals the historical heating demand peak, and the sum of the capacities of the AR and ER equals the historical cooling demand peak. The maximum of the PG is sensitive to the historical peaks of the energy demands and is not influenced by their uncertainty, while the corresponding influence on the DH is the reverse. (author)
Joharatnam, Nalinie; McWilliams, Daniel F; Wilson, Deborah; Wheeler, Maggie; Pande, Ira; Walsh, David A
2015-01-20
Pain remains the most important problem for people with rheumatoid arthritis (RA). Active inflammatory disease contributes to pain, but pain due to non-inflammatory mechanisms can confound the assessment of disease activity. We hypothesize that augmented pain processing, fibromyalgic features, poorer mental health, and patient-reported 28-joint disease activity score (DAS28) components are associated in RA. In total, 50 people with stable, long-standing RA recruited from a rheumatology outpatient clinic were assessed for pain-pressure thresholds (PPTs) at three separate sites (knee, tibia, and sternum), DAS28, fibromyalgia, and mental health status. Multivariable analysis was performed to assess the association between PPT and DAS28 components, DAS28-P (the proportion of DAS28 derived from the patient-reported components of visual analogue score and tender joint count), or fibromyalgia status. More-sensitive PPTs at sites over or distant from joints were each associated with greater reported pain, higher patient-reported DAS28 components, and poorer mental health. A high proportion of participants (48%) satisfied classification criteria for fibromyalgia, and fibromyalgia classification or characteristics were each associated with more sensitive PPTs, higher patient-reported DAS28 components, and poorer mental health. Widespread sensitivity to pressure-induced pain, a high prevalence of fibromyalgic features, higher patient-reported DAS28 components, and poorer mental health are all linked in established RA. The increased sensitivity at nonjoint sites (sternum and anterior tibia), as well as over joints, indicates that central mechanisms may contribute to pain sensitivity in RA. The contribution of patient-reported components to high DAS28 should inform decisions on disease-modifying or pain-management approaches in the treatment of RA when inflammation may be well controlled.
Cross section homogenization analysis for a simplified Candu reactor
International Nuclear Information System (INIS)
Pounders, Justin; Rahnema, Farzad; Mosher, Scott; Serghiuta, Dumitru; Turinsky, Paul; Sarsour, Hisham
2008-01-01
The effect of using zero-current (infinite-medium) boundary conditions to generate bundle-homogenized cross sections for a stylized half-core Candu reactor problem is examined. Homogenized cross sections from infinite-medium lattice calculations are compared with cross sections homogenized using the exact flux from the reference core environment. The impact of these cross section differences is quantified by generating nodal diffusion theory solutions with both sets of cross sections. It is shown that the infinite-medium spatial approximation is not negligible, and that ignoring the impact of the heterogeneous core environment on cross section homogenization leads to increased errors, particularly near control elements and the core periphery. (authors)
Sensitivity analysis of an information fusion tool: OWA operator
Zarghaami, Mahdi; Ardakanian, Reza; Szidarovszky, Ferenc
2007-04-01
The successful design and application of the Ordered Weighted Averaging (OWA) method as a decision-making tool depend on the efficient computation of its order weights. The most popular methods for determining the order weights are the Fuzzy Linguistic Quantifiers approach and the Minimal Variability method, which give different behavior patterns for OWA. These methods are compared using sensitivity analysis of the OWA outputs with respect to the decision maker's degree of optimism. The theoretical results are illustrated in a water resources management problem. The Fuzzy Linguistic Quantifiers approach gives more information about the behavior of the OWA outputs than the Minimal Variability method. However, with the Minimal Variability method the OWA output is linear with respect to the degree of optimism and therefore offers better computational efficiency.
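A minimal sketch of the OWA operator, with order weights derived from a fuzzy linguistic quantifier Q(r) = r**alpha (one common family; the abstract does not specify which quantifier the authors used). Varying alpha, the decision maker's degree of optimism, is exactly the kind of sensitivity probed in the paper.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort the inputs in descending
    order, then take the weighted sum with the order weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "order weights must sum to 1"
    return float(w @ v)

def quantifier_weights(n, alpha):
    """Order weights from the regular quantifier Q(r) = r**alpha.
    alpha < 1 stresses the larger inputs (optimistic), alpha > 1
    the smaller ones (pessimistic)."""
    r = np.arange(n + 1) / n
    return np.diff(r ** alpha)

scores = [3.0, 1.0, 2.0]  # hypothetical criteria scores
for alpha in (0.5, 1.0, 2.0):
    print(alpha, owa(scores, quantifier_weights(3, alpha)))
```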
Sensitivity Study on Analysis of Reactor Containment Response to LOCA
International Nuclear Information System (INIS)
Chung, Ku Young; Sung, Key Yong
2010-01-01
As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high-pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to provide regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.
Sensitivity Study on Analysis of Reactor Containment Response to LOCA
Energy Technology Data Exchange (ETDEWEB)
Chung, Ku Young; Sung, Key Yong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2010-10-15
As a reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high-pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to provide regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.
Displacement Monitoring and Sensitivity Analysis in the Observational Method
Górska, Karolina; Muszyński, Zbigniew; Rybak, Jarosław
2013-09-01
This work discusses the fundamentals of designing deep excavation support by means of the observational method. The effective tools for optimum design with the observational method are inclinometric and geodetic monitoring, which provide data for systematically updated calibration of the numerical computational model. The analysis included methods for selecting data for the design (by choosing the basic random variables), as well as methods for ongoing verification of the results of numerical calculations (e.g., FEM) by measuring structure displacement using geodetic and inclinometric techniques. The presented example shows the sensitivity analysis of the calculation model for a cantilever wall in non-cohesive soil; that analysis makes it possible to select the data to be later subject to calibration. The paper presents the results of measurements of sheet pile wall displacement, carried out by the inclinometric method and, simultaneously, two geodetic methods, successively with the deepening of the excavation. The work also includes critical comments regarding the usefulness of the obtained data, as well as practical aspects of taking measurements during ongoing construction works.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pohl, Andrew Phillip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jordan, Dirk [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology and location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
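The residual-sampling propagation described above can be sketched as follows. The residual distributions here are hypothetical stand-ins (Gaussian fractional errors) for the empirical model residuals used in the report, and the two-step chain is a simplification of the four-model sequence.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical empirical residuals (fractional errors) for two of the
# chained model steps: POA irradiance and effective irradiance.
poa_residuals = rng.normal(0.0, 0.02, size=500)
eff_residuals = rng.normal(0.0, 0.01, size=500)

def propagate(base_energy, n=10000):
    """Resample each model's residual distribution and propagate the
    errors multiplicatively through the model chain."""
    f1 = rng.choice(poa_residuals, size=n)
    f2 = rng.choice(eff_residuals, size=n)
    return base_energy * (1.0 + f1) * (1.0 + f2)

samples = propagate(100.0)  # e.g. daily DC energy in kWh
print(f"mean = {samples.mean():.2f}, std = {samples.std():.2f}")
```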
Altmetric analysis of 2015 dental literature: a cross sectional survey.
Kolahi, J; Iranmanesh, P; Khazaei, S
2017-05-12
Introduction To report and analyse Altmetric data of all dental articles and journals in 2015. Methods To identify all 2015 dental articles, PubMed was searched via the Altmetric platform using the following query: ("2015/1/1"[PDAT]: "2015/12/31"[PDAT]) AND jsubsetd[text] NOT 2016[PDAT] on November 12, 2016. Altmetric data of all 2015 dental articles and journals were extracted and analysed in Microsoft Office Excel 2016 using descriptive statistics, graphs and trend-line analysis. To find the most important and influential Altmetric factors, a multilayer perceptron artificial neural network was employed using SPSS 22. Results A total of 14,884 dental articles published in 2015 were found in the PubMed database, of which 5,153 (34.62%) had an Altmetric score. The mean Altmetric score was 2.94 ± 9.2 (95% CI: 2.70 to 3.22). Mendeley readers (73.19%), Twitter (21.48%), Facebook walls (3.67%), news outlets (0.69%) and bloggers (0.57%) were the most popular Altmetric data resources. At the journal level, 147 dental journals with valid Altmetric data were included in the study. The British Dental Journal had the first rank, followed by the Journal of Dental Research, Journal of Clinical Periodontology and Journal of the American Dental Association. Sensitivity analysis showed that news outlets, tweeters and scientific bloggers were the most important and influential Altmetric data resources. Discussion In comparison with all science subjects and with the medical and health sciences, 2015 Altmetric scores in dentistry were very low. Use of new and emerging scholarly tools such as social media, scientific blogs and post-publication peer review was not common in dental science. This neglect may be due to a lack of knowledge and attitude. An Altmetric score is dynamic and may fluctuate over time.
Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code
Energy Technology Data Exchange (ETDEWEB)
Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied the sensitivity with respect to these parameters and recommend proper values, which are necessary for carrying out the lattice analysis of DUPIC fuel.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised by errors introduced at each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained by modifying the parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need to modify some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Relative performance of academic departments using DEA with sensitivity analysis.
Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P
2009-05-01
The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, UK and Australia, but to the best of our knowledge it is applied here for the first time in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
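An input-oriented CCR (multiplier-form) DEA efficiency score can be computed with a small linear program. A sketch using scipy, with invented data rather than the IIT Roorkee department inputs and outputs:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form):
    maximize u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0,
    with nonnegative output weights u and input weights v.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])      # linprog minimizes
    A_ub = np.hstack([Y, -X])                     # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy data: three departments, one input (staff), one output (papers).
X = np.array([[10.0], [20.0], [30.0]])
Y = np.array([[5.0], [15.0], [15.0]])
for o in range(len(X)):
    print(f"department {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```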
Analysis of a Hybrid Wing Body Center Section Test Article
Wu, Hsi-Yung T.; Shaw, Peter; Przekop, Adam
2013-01-01
The hybrid wing body center section test article is an all-composite structure made of crown, floor, keel, bulkhead, and rib panels utilizing the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) design concept. The primary goal of this test article is to prove that PRSEUS components are capable of carrying combined loads that are representative of a hybrid wing body pressure cabin design regime. This paper summarizes the analytical approach, analysis results, and failure predictions of the test article. A global finite element model of composite panels, metallic fittings, mechanical fasteners, and the Combined Loads Test System (COLTS) test fixture was used to conduct linear structural strength and stability analyses to validate the specimen under the most critical combination of bending and pressure loading conditions found in the hybrid wing body pressure cabin. Local detail analyses were also performed at locations with high stress concentrations, at Tee-cap noodle interfaces with surrounding laminates, and at fastener locations with high bearing/bypass loads. Failure predictions for different composite and metallic failure modes were made, and nonlinear analyses were also performed to study the structural response of the test article under combined bending and pressure loading. This large-scale specimen test will be conducted at the COLTS facility at the NASA Langley Research Center.
Sensitivity analysis of alkaline plume modelling: influence of mineralogy
International Nuclear Information System (INIS)
Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.
2010-01-01
Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillite and bentonite) materials. The large number of modelling input parameters, each associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analyses on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we propose to investigate the effects of the assumed initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both the concrete composition and the clay mineralogical assemblages, since in most published studies authors considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce their calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of the concrete phases, (2) the type of clayey material and (3) the choice of secondary phases that are allowed to precipitate during the calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (MX80 smectite or Callovo-Oxfordian argillites). Both
Adewale Amosu; Yuefeng Sun
2017-01-01
WheelerLab is an interactive program that facilitates the interpretation of stratigraphic data (seismic sections, outcrop data and well sections) within a sequence stratigraphic framework and the subsequent transformation of the data into the chronostratigraphic domain. The transformation enables the identification of significant geological features, particularly erosional and non-depositional features that are not obvious in the original seismic domain. Although there are some software produ...
Sensitivity analysis of Monju using ERANOS with JENDL-4.0
Energy Technology Data Exchange (ETDEWEB)
Tamagno, P. [Institut National des Sciences et Techniques Nucleaires, INSTN - Point Courrier no 35, Centre CEA de Saclay, F-91191 Gif-sur-Yvette Cedex (France); Van Rooijen, W. F. G.; Takeda, T. [Research Inst. of Nuclear Engineering, Univ. of Fukui, Kanawa-cho 1-2-4, T914-0055 Fukui-ken, Tsuruga-shi (Japan); Konomura, M. [Japan Atomic Energy Agency, FBR Plant Engineering Center, Shiraki 1, 919-1279 Fukui-ken, Tsuruga-shi (Japan)
2012-07-01
This paper deals with sensitivity analysis of the Monju reactor using JENDL-4.0 nuclear data. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in the quantification of uncertainties due to basic nuclear data. For the sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS, so a cross-section library had to be made from the original nuclear data set, available as ENDF-formatted files. This is achieved by using the NJOY, CALENDF, MERGE and GECCO codes to create a library for the ECCO cell code (part of ERANOS). To verify the accuracy of the new ECCO library, two benchmark experiments were analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010 and obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of the observed differences. (authors)
Sensitivity analysis of Monju using ERANOS with JENDL-4.0
International Nuclear Information System (INIS)
Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.; Konomura, M.
2012-01-01
This paper deals with sensitivity analysis of the Monju reactor using JENDL-4.0 nuclear data. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in the quantification of uncertainties due to basic nuclear data. For the sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS, so a cross-section library had to be made from the original nuclear data set, available as ENDF-formatted files. This is achieved by using the NJOY, CALENDF, MERGE and GECCO codes to create a library for the ECCO cell code (part of ERANOS). To verify the accuracy of the new ECCO library, two benchmark experiments were analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010 and obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of the observed differences. (authors)
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by the Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources, including multiprocessor computers and networks of workstations, are used simultaneously. A Graphical User Interface (GUI) was also developed within the parallel computing framework so that users can readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has six modules of system analysis methodologies: deterministic and probabilistic approaches to data assimilation, uncertainty propagation, a chi-square linearity test, sensitivity analysis, and FFTBM.
ECOS - analysis of sensitivity to database and input parameters
International Nuclear Information System (INIS)
Sumerling, T.J.; Jones, C.H.
1986-06-01
The sensitivity of doses calculated by the generic biosphere code ECOS to parameter changes has been investigated by the authors for the Department of the Environment as part of its radioactive waste management research programme. The sensitivity of results to radionuclide-dependent parameters was tested by specifying reasonable parameter ranges and performing code runs for best-estimate, upper-bound and lower-bound parameter values. The work indicates that doses are most sensitive to scenario parameters: geosphere input fractions, area of contaminated land, land use and diet, flux of contaminated waters and water use. Recommendations are made based on the results of the sensitivity analysis. (author)
Multi-scale sensitivity analysis of pile installation using DEM
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas, Eurípedes do Amaral, Jr.; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Because of the simplified approach used by the discrete element method (DEM) to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool to investigate these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the method of installation and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted from changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of performing a force-equilibrium step prior to simulating any load test.
Sensitivity analysis on parameters and processes affecting vapor intrusion risk
Picone, Sara
2012-03-30
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion. © 2012 SETAC.
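The contrast between diffusion through water-filled and gas-filled pores that drives the attenuation factor can be illustrated with the widely used Millington-Quirk effective-diffusion model (not necessarily the closure used in the study's own model); the compound properties and soil parameters below are illustrative values of typical magnitude only:

```python
# Illustrative sketch (not the authors' model): Millington-Quirk effective
# diffusion coefficients, showing why contaminant transport through
# water-filled pores is orders of magnitude slower than through gas-filled
# pores. All parameter values are hypothetical but of typical magnitude.

def millington_quirk(d_free, theta, porosity):
    """Effective diffusion coefficient in a partially filled pore space."""
    return d_free * theta ** (10.0 / 3.0) / porosity ** 2

D_AIR = 8.8e-6     # free-air diffusion coefficient of a VOC, m2/s (typical order)
D_WATER = 9.1e-10  # aqueous diffusion coefficient of the same VOC, m2/s

porosity = 0.35
d_gas = millington_quirk(D_AIR, theta=0.30, porosity=porosity)   # gas-filled pores
d_aq = millington_quirk(D_WATER, theta=0.30, porosity=porosity)  # water-filled pores

print(f"gas-phase D_eff = {d_gas:.2e} m2/s")
print(f"aqueous D_eff   = {d_aq:.2e} m2/s")
print(f"ratio ~ {d_gas / d_aq:.0f}")  # gas-phase diffusion is ~1e4 times faster
```

This four-orders-of-magnitude gap is why the distance from the source to the nearest gas diffusion front dominates the attenuation factor.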
Nonparametric Bounds and Sensitivity Analysis of Treatment Effects
Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.
2015-01-01
This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
Sensitivity Analysis for the CLIC Damping Ring Inductive Adder
Holma, Janne
2012-01-01
The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, an ultra-low emittance beam with high bunch charge, necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must have a duration of 160 ns at 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02%. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the values of both individual and grouped circuit components.
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia
2015-04-22
Functional magnetic resonance imaging (fMRI) allows mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. Characterizing the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have added prior knowledge in some form, either by choosing prior distributions for the parameters, freezing some of them, or seeking the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for added knowledge and particular paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
International Nuclear Information System (INIS)
Vaurio, J.K.
1985-01-01
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify the most important input variables of a code that has many (tens or hundreds of) input variables with uncertainties, and to do so without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of output variables of interest from a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), users can select cases by other methods if they prefer, and still use the rest of SCREEN for identifying important input variables.
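The response-surface idea behind codes like PROSA-2 can be sketched as follows; the "expensive code", the quadratic surrogate, and the input distribution are hypothetical stand-ins for illustration, not the actual PROSA-2 implementation:

```python
# Minimal sketch of response-surface uncertainty propagation (hypothetical
# model and numbers): replace an expensive safety-analysis code by a cheap
# surrogate fitted to a few runs, then propagate input uncertainty through
# the surrogate by Monte Carlo sampling.
import random

def expensive_code(x):
    """Stand-in for a costly safety-analysis code run."""
    return 2.0 + 0.5 * x + 0.1 * x * x

# Fit an exact quadratic surrogate through three code runs (Lagrange form).
xs = [-1.0, 0.0, 1.0]
ys = [expensive_code(x) for x in xs]

def surrogate(x):
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Propagate an uncertain input (normal, mean 0, sd 0.3) through the surrogate.
random.seed(1)
samples = [surrogate(random.gauss(0.0, 0.3)) for _ in range(10000)]
mean = sum(samples) / len(samples)
print(f"surrogate output mean ~ {mean:.3f}")
```

Because the surrogate is cheap, the Monte Carlo step costs essentially nothing, which is the point of fitting it from the same handful of runs already used for screening.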
Understanding earth system models: how Global Sensitivity Analysis can help
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can strongly limit their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
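A minimal illustration of variance-based GSA (first-order Sobol indices estimated with a pick-freeze scheme) on a toy two-factor model; the model and sample size are invented for illustration and are not from the presentation:

```python
# Compact sketch of variance-based Global Sensitivity Analysis: estimate
# first-order Sobol indices for a toy model by the pick-freeze method.
import random

def model(x1, x2):
    return 4.0 * x1 + x2  # x1 should dominate the output variance

random.seed(42)
N = 20000
A = [(random.random(), random.random()) for _ in range(N)]  # base sample
B = [(random.random(), random.random()) for _ in range(N)]  # resample

yA = [model(*a) for a in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    # Pick-freeze estimator: keep factor i from sample A, resample the rest.
    total = 0.0
    for a, b in zip(A, B):
        mixed = tuple(a[j] if j == i else b[j] for j in range(2))
        total += model(*a) * model(*mixed)
    return (total / N - mean ** 2) / var

s1, s2 = first_order(0), first_order(1)
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}")  # analytically 16/17 ~ 0.94 and 1/17 ~ 0.06
```

The indices apportion the output variance among inputs, which is exactly the "identify the main sources of output uncertainty" step the abstract describes.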
International Nuclear Information System (INIS)
Gabriel, T.A.; Bishop, B.L.
1978-01-01
The sensitivity of primary knock-on atom (PKA) spectra and displacement per atom (DPA) cross sections to different secondary neutron energy and angular distributions and "in-group" weighting schemes is investigated. It is shown that the sensitivity of the PKA spectra and DPA cross sections for the (n,n' unresolved) and (n,2n) reactions in Fe to different angular distributions and the same secondary neutron spectrum is reasonably large (approximately 15%), whereas the sensitivity of these quantities to grossly different secondary neutron spectra and the same angular distribution is unexpectedly small. It is also shown that for Al the sensitivity of damage energy cross sections to different "in-group" weighting schemes is, for the most part, small
Verroken, Charlotte; Zmierczak, Hans-Georg; Goemaere, Stefan; Kaufman, Jean-Marc; Lapauw, Bruno
2017-01-01
Maternal age at childbirth is increasing worldwide, but studies investigating the consequences of this trend for offspring metabolic health are scarce. We investigated the associations of maternal age at childbirth with metabolic outcomes in adult male siblings. We used data from 586 men aged 25-45 years participating in a cross-sectional, population-based sibling-pair study, including maternal age at childbirth and offspring birthweight, adult weight, height, dual-energy X-ray absorptiometry (DXA)-derived body composition, blood pressure, and total cholesterol, glucose and insulin levels from fasting serum samples. Insulin sensitivity was evaluated using the homeostasis model assessment of insulin resistance (HOMA-IR). Maternal age at childbirth was 27·1 ± 4·7 years and was inversely associated with glucose levels (β = -0·10, P = 0·022) and HOMA-IR (β = -0·06, P = 0·065) in age- and body composition-adjusted analyses. Moreover, sons of younger mothers had higher HOMA-IR values than sons of mothers aged 30-34 years (1·39, 1·35 and 1·42 vs 1·19, P = 0·028). Additional adjustment for birthweight did not substantially alter these results. Maternal age was inversely associated with cholesterol levels in unadjusted (β = -0·09, P = 0·032), but not in age- and body composition-adjusted analyses. No associations of maternal age were observed with blood pressure, leptin or adiponectin levels, or with any of the body composition measurements. Increasing maternal age at childbirth is associated with lower fasting glucose levels and higher insulin sensitivity in adult male offspring. However, this association might not hold in offspring of women aged ≥35 years at childbirth. © 2016 John Wiley & Sons Ltd.
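For reference, the HOMA-IR index used in the analyses above follows the standard homeostasis-model formula (fasting glucose in mmol/L times fasting insulin in μU/mL, divided by 22.5); the input values in the example are hypothetical:

```python
# Standard HOMA-IR formula (Matthews et al. formulation).
# Example input values are hypothetical, not from the study.
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_mu_l):
    """Homeostasis model assessment of insulin resistance."""
    return fasting_glucose_mmol_l * fasting_insulin_mu_l / 22.5

print(round(homa_ir(5.0, 6.0), 2))  # -> 1.33
```

Lower HOMA-IR corresponds to higher insulin sensitivity, which is why the reported inverse association with maternal age implies higher sensitivity in the offspring.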
Matsui, H.; Mahowald, N.
2017-08-01
Global aerosol simulations are conducted using the Community Atmosphere Model version 5 with the Aerosol Two-dimensional bin module for foRmation and Aging Simulation version 2 (CAM5-chem/ATRAS2), which was developed in part 1. The model uses a two-dimensional (2-D) sectional representation with 12 size bins from 1 nm to 10 μm and 8 black carbon (BC) mixing state bins, and it can calculate detailed aerosol processes and their interactions with radiation and clouds. The simulations have similar or better agreement with aerosol observations (e.g., aerosol optical depth, absorption aerosol optical depth (AAOD), aerosol number concentrations, mass concentrations of each species) compared with simulations using the Modal Aerosol Model with three modes. Sensitivity simulations show that global mean AAOD is reduced by 15% by resolving BC mixing state as a result of two competing effects (optical and lifetime effects). AAOD is reduced by 10-50% at low and midlatitudes in the 2-D sectional simulation because BC absorption enhancement by coating species is reduced by resolving pure BC, thinly coated BC, and BC-free particles in the model (optical effect). In contrast, AAOD is enhanced by 5-30% at high latitudes because BC concentrations are enhanced by 40-200% over these regions by resolving less CCN-active particles (lifetime effect). The simulations also suggest that a model which resolves more than 3 BC categories (including BC-free particles) is desirable to calculate the optical and lifetime effects accurately. The complexity of aerosol representation is shown to be especially important for simulations of BC and CCN concentrations and AAOD.
Considering Respiratory Tract Infections and Antimicrobial Sensitivity: An Exploratory Analysis
Directory of Open Access Journals (Sweden)
Amin, R.
2009-01-01
This study was conducted to observe the sensitivity and resistance status of antibiotics for respiratory tract infection (RTI). Throat swab culture and sensitivity reports of 383 patients revealed sensitivity with amoxycillin (7.9%), penicillin (33.7%), ampicillin (36.6%), co-trimoxazole (46.5%), azithromycin (53.5%), erythromycin (57.4%), cephalexin (69.3%), gentamycin (78.2%), ciprofloxacin (80.2%), cephradine (81.2%), ceftazidime (93.1%) and ceftriaxone (93.1%). Sensitivity to cefuroxime was reported in 93.1% of cases. Resistance was found with amoxycillin (90.1%), ampicillin (64.1%), penicillin (61.4%), co-trimoxazole (43.6%), erythromycin (39.6%) and azithromycin (34.7%). Cefuroxime demonstrates a higher level of sensitivity than the other antibiotics, supporting its consideration for patients with upper RTI.
International Nuclear Information System (INIS)
Chung, Myung Jin; Lee, Kyung Soo; Kim, Tae Sung; Kim, Sung Mok; Koh, Won-Jung; Kwon, O Jung; Kang, Eun Young; Kim, Seonwoo
2006-01-01
The aim of this work was to compare thin-section CT (TSCT) findings of drug-sensitive (DS) tuberculosis (TB), multidrug-resistant (MDR) TB, and nontuberculous mycobacterial (NTM) pulmonary disease in non-AIDS adults. During 2003, 216 patients (113 DS TB, 35 MDR TB, and 68 NTM) with smear-positive sputum for acid-fast bacilli (AFB), who were subsequently confirmed to have mycobacterial pulmonary disease, underwent thoracic TSCT. The frequency of lung lesion patterns on TSCT and patients' demographic data were compared. The commonest TSCT findings were tree-in-bud opacities and nodules. On a per-person basis, significant differences were found in the frequency of multiple cavities and bronchiectasis (P<0.001, chi-square test and multiple logistic regression analysis). Multiple cavities were more frequent in MDR TB than in the other two groups, and extensive bronchiectasis in NTM disease (multiple logistic regression analysis). Patients with MDR TB were younger than those with DS TB or NTM disease (P<0.001, multiple logistic regression analysis). Previous tuberculosis treatment history was significantly more frequent in patients with MDR TB or NTM disease (P<0.001, chi-square test and multiple logistic regression analysis). In patients with positive sputum AFB, multiple cavities, young age, and previous tuberculosis treatment history imply MDR TB, whereas extensive bronchiectasis, old age, and previous tuberculosis treatment history imply NTM disease. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Chung, Myung Jin; Lee, Kyung Soo; Kim, Tae Sung; Kim, Sung Mok [Sungkyunkwan University School of Medicine, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Seoul (Korea); Koh, Won-Jung; Kwon, O Jung [Sungkyunkwan University School of Medicine, Division of Pulmonary and Critical Care Medicine, Department of Medicine, Samsung Medical Center, Seoul (Korea); Kang, Eun Young [Korea University Guro Hospital, Department of Diagnostic Radiology, Korea University College of Medicine, Seoul (Korea); Kim, Seonwoo [Sungkyunkwan University School of Medicine, Biostatistics Unit of the Samsung Biomedical Research Institute, Samsung Medical Center, Seoul (Korea)
2006-09-15
Sensitivity analysis: Theory and practical application in safety cases
International Nuclear Information System (INIS)
Kuhlmann, Sebastian; Plischke, Elmar; Roehlig, Klaus-Juergen; Becker, Dirk-Alexander
2014-01-01
The projects described here aim at deriving an adaptive and stepwise approach to sensitivity analysis (SA). Since the appropriateness of a single SA method strongly depends on the nature of the model under study, a top-down approach (from simple to sophisticated methods) is suggested. If simple methods explain the model behaviour sufficiently well then there is no need for applying more sophisticated ones and the SA procedure can be considered complete. The procedure is developed and tested using a model for a LLW/ILW repository in salt. Additionally, a new model for the disposal of HLW in rock salt will be available soon for SA studies within the MOSEL/NUMSA projects. This model will address special characteristics of waste disposal in undisturbed rock salt, especially the case of total confinement, resulting in a zero release which is indeed the objective of radioactive waste disposal. A high proportion of zero-output realisations causes many SA methods to fail, so special treatment is needed and has to be developed. Furthermore, the HLW disposal model will be used as a first test case for applying the procedure described above, which was and is being derived using the LLW/ILW model. How to treat dependencies in the input, model conservatism and time-dependent outputs will be addressed in the future project programme: - If correlations or, more generally, dependencies between input parameters exist, the question arises about the deeper meaning of sensitivity results in such cases: A strict separation between inputs, internal states and outputs is no longer possible. Such correlations (or dependencies) might have different reasons. In some cases correlated input parameters might have a common physically (well-)known fundamental cause but there are reasons why this fundamental cause cannot or should not be integrated into the model, i.e. the cause might generate a very complex model which cannot be calculated in appropriate time. In other cases the correlation may
Uncertainty and sensitivity analysis in a Probabilistic Safety Analysis level-1
International Nuclear Information System (INIS)
Nunez Mc Leod, Jorge E.; Rivera, Selva S.
1996-01-01
A methodology for sensitivity and uncertainty analysis, applicable to a Probabilistic Safety Assessment Level 1, is presented. The work comprises: correct association of distributions to parameters, importance and qualification of expert opinions, generation of samples according to sample sizes, and study of the relationships between system variables and system response. A series of statistical-mathematical techniques are recommended for the development of the analysis methodology, as well as different graphical visualizations for the control of the study. (author)
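The workflow described — associating distributions with parameters, generating samples, and studying input-output relationships — can be sketched as follows; the two-parameter model and its lognormal distributions are hypothetical, and Spearman rank correlation stands in as one simple relationship measure:

```python
# Hedged sketch of Monte Carlo uncertainty/sensitivity analysis for a toy
# PSA-like model: two basic-event frequencies with assumed lognormal
# distributions feed a (hypothetical) top-event frequency; Spearman rank
# correlation measures each input's relationship with the output.
import random

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(x, y):
    """Spearman rank correlation (no tie handling; inputs are continuous)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

random.seed(0)
N = 2000
lam1 = [random.lognormvariate(-6, 0.8) for _ in range(N)]  # dominant basic event
lam2 = [random.lognormvariate(-9, 0.3) for _ in range(N)]  # minor basic event
top = [a + b for a, b in zip(lam1, lam2)]                  # toy top-event frequency

print(f"rho(lam1, top) = {spearman(lam1, top):.2f}")
print(f"rho(lam2, top) = {spearman(lam2, top):.2f}")
```

The dominant input shows a rank correlation near 1 with the output, the kind of relationship the recommended graphical visualizations would make visible.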
Directory of Open Access Journals (Sweden)
Daniel Tomas Naughton
2017-10-01
The widespread brittle failure of welded beam-to-column connections caused by the 1994 Northridge and 1995 Kobe earthquakes highlighted the need for retrofitting measures effective in reducing the strength demand imposed on connections under cyclic loading. Researchers presented the reduced beam section (RBS) as a viable option to create a weak zone away from the connection, aiding the prevention of brittle failure at the connection weld. More recently, an alternative connection known as a reduced web section (RWS) has been developed as a potential replacement, and initial studies show ideal performance in terms of rotational capacity and ductility. This study performs a series of non-linear static pushover analyses using a modal load case on three steel moment-resisting frames of 4-, 8-, and 16-storeys. The frames are studied with three different connection arrangements; fully fixed moment connections, RBSs and RWSs, in order to compare the differences in capacity curves, inter-storey drifts, and plastic hinge formation. The seismic-resistant connections have been modeled as non-linear hinges in ETABS, and their behavior has been defined by moment-rotation curves presented in previous recent research studies. The frames are displacement controlled to the maximum displacement anticipated in an earthquake with ground motions having a 2% probability of being exceeded in 50 years. The study concludes that RWSs perform satisfactorily when compared with frames with fully fixed moment connections in terms of providing consistent inter-storey drifts without drastic changes in drift between adjacent storeys in low- to mid-rise frames, without significantly compromising the overall strength capacity of the frames. The use of RWSs in taller frames causes an increase in inter-storey drifts in the lower storeys, as well as causing a large reduction in strength capacity (33%). Frames with RWSs behave comparably to frames with RBSs and are deemed a suitable
Directory of Open Access Journals (Sweden)
Marcin Luczak
2014-01-01
This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented toward the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the research is to validate a finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements, and the discrepancies are assessed by natural frequency difference and the modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were applied to find the values of the model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
Experimental sensitivity analysis of oxygen transfer in the capillary fringe.
Haberer, Christina M; Cirpka, Olaf A; Rolle, Massimo; Grathwohl, Peter
2014-01-01
Oxygen transfer in the capillary fringe (CF) is of primary importance for a wide variety of biogeochemical processes occurring in shallow groundwater systems. In case of a fluctuating groundwater table two distinct mechanisms of oxygen transfer within the capillary zone can be identified: vertical predominantly diffusive mass flux of oxygen, and mass transfer between entrapped gas and groundwater. In this study, we perform a systematic experimental sensitivity analysis in order to assess the influence of different parameters on oxygen transfer from entrapped air within the CF to underlying anoxic groundwater. We carry out quasi two-dimensional flow-through experiments focusing on the transient phase following imbibition to investigate the influence of the horizontal flow velocity, the average grain diameter of the porous medium, as well as the magnitude and the speed of the water table rise. We present a numerical flow and transport model that quantitatively represents the main mechanisms governing oxygen transfer. Assuming local equilibrium between the aqueous and the gaseous phase, the partitioning process from entrapped air can be satisfactorily simulated. The different experiments are monitored by measuring vertical oxygen concentration profiles at high spatial resolution with a noninvasive optode technique as well as by determining oxygen fluxes at the outlet of the flow-through chamber. The results show that all parameters investigated have a significant effect and determine different amounts of oxygen transferred to the oxygen-depleted groundwater. Particularly relevant are the magnitude of the water table rise and the grain size of the porous medium. © 2013, National Ground Water Association.
Sorption of redox-sensitive elements: critical analysis
International Nuclear Information System (INIS)
Strickert, R.G.
1980-12-01
The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states
Analysis of Sea Ice Cover Sensitivity in Global Climate Model
Directory of Open Access Journals (Sweden)
V. P. Parhomenko
2014-01-01
The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice, the long-term variability of the modelled ice cover, and its sensitivity to some model parameters, as well as to characterize atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05-0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, with the clear sea ice and snow albedos decreased by 0.05 relative to the basic variant, shows an ice thickness reduction in the range of 0.2-0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there is a reduction of ice thickness of up to 1 m; however, there is also an area of some increase of the ice layer, mostly in the range up to 0.2 m (Beaufort Sea). A 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, and this value depends only slightly on the season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/m² in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced by 0.2-0.35 m, with small seasonal changes of this value. The numerical experiment results show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters
Cross-sectional vestibular nerve analysis in vestibular neuritis.
Fundakowski, Christopher E; Anderson, Joshua; Angeli, Simon
2012-07-01
We examined the association between the size and cross-sectional area of the superior vestibular nerve as measured on constructive interference in steady-state (CISS) parasagittal magnetic resonance imaging (MRI) and the vestibular nerve function as measured by electronystagmography. The retrospective observational cohort study took place at an academic tertiary referral center. Twenty-six patients who met established clinical and electronystagmographic criteria for vestibular neuritis and who underwent parasagittal CISS MRI were identified. Two blinded investigators measured vestibular nerve height and width bilaterally at the level of the fundus of the internal auditory canal and calculated the cross-sectional nerve areas. The inter-rater reliability and agreement were analyzed. Symptom duration, age, and gender were also examined. A statistically significant decrease was observed in both vestibular nerve cross-sectional area and height as compared to the contralateral vestibular nerve. A non-statistically significant trend was observed for a relative decreased cross-sectional nerve area with increased age, as well as a decrease in nerve area with an increase in symptom duration. Decreases in both vestibular nerve cross-sectional area and height are observed in patients with unilateral vestibular neuritis as measured on parasagittal CISS MRI.
Sensitivity analysis of hybrid power systems using Power Pinch Analysis considering Feed-in Tariff
International Nuclear Information System (INIS)
Mohammad Rozali, Nor Erniza; Wan Alwi, Sharifah Rafidah; Manan, Zainuddin Abdul; Klemeš, Jiří Jaromír
2016-01-01
Feed-in Tariff (FiT) has been one of the most effective policies in accelerating the development of renewable energy (RE) projects. The amount of RE electricity in the FiT purchase agreement is an important decision that has to be made by the RE project developers. They have to consider various crucial factors associated with RE system operation as well as its stochastic nature. The presented work aims to assess the sensitivity and profitability of a hybrid power system (HPS) in cases of RE system failure or shutdown. The amount of RE electricity for the FiT purchase agreement in various scenarios was determined using a novel tool called On-Grid Problem Table based on the Power Pinch Analysis (PoPA). A sensitivity table has also been introduced to assist planners to evaluate the effects of the RE system's failure on the profitability of the HPS. This table offers insights on the variance of the RE electricity. The sensitivity analysis of various possible scenarios shows that the RE projects can still provide financial benefits via the FiT, despite the losses incurred from the penalty levied. - Highlights: • A Power Pinch Analysis (PoPA) tool to assess the economics of an HPS with FiT. • The new On-Grid Problem Table for targeting the available RE electricity for FiT sale. • A sensitivity table showing the effect of RE electricity changes on the HPS profitability.
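The interval-wise cascading idea behind Pinch-style electricity targeting can be sketched in a few lines. This is a hypothetical minimal illustration, not the authors' On-Grid Problem Table: the function name and the example kWh values are invented, and the sketch only captures the basic cascade that yields the minimum outsourced electricity and the surplus available for FiT sale.

```python
# Minimal power-cascade sketch (hypothetical, not the On-Grid Problem Table):
# for each time interval, net power = RE generation - demand. Running the
# cumulative cascade gives the minimum outside electricity needed (magnitude
# of the most negative cumulative value) and the surplus available for FiT
# sale once that deficit has been covered.

def power_cascade(generation, demand):
    """Return (min_outsourced, fit_surplus) from interval-wise power data."""
    cumulative, lowest = 0.0, 0.0
    for gen, dem in zip(generation, demand):
        cumulative += gen - dem
        lowest = min(lowest, cumulative)
    min_outsourced = -lowest            # electricity that must be imported
    fit_surplus = cumulative - lowest   # net surplus once the deficit is met
    return min_outsourced, fit_surplus

# Example: four intervals of RE generation vs. demand (arbitrary kWh values)
imported, surplus = power_cascade([5, 8, 2, 6], [6, 4, 5, 3])
```

With these invented numbers the system must import 1 kWh and has 4 kWh of surplus available for sale; a failure scenario would simply rerun the cascade with reduced generation.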
Spatiotemporal sensitivity analysis of vertical transport of pesticides in soil
Environmental fate and transport processes are influenced by many factors. Simulation models that mimic these processes often have complex implementations, which can lead to over-parameterization. Sensitivity analyses are subsequently used to identify critical parameters whose un...
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events, all through an intuitive graphical user interface. SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes.
Cross-sectional dependence in panel data analysis
Sarafidis, V.; Wansbeek, T.J.
2012-01-01
This article provides an overview of the existing literature on panel data models with error cross-sectional dependence (CSD). We distinguish between weak and strong CSD and link these concepts to the spatial and factor structure approaches. We consider estimation under strong and weak exogeneity of
Partial wave analysis for folded differential cross sections
Machacek, J. R.; McEachran, R. P.
2018-03-01
The value of modified effective range theory (MERT) and the connection between differential cross sections and phase shifts in low-energy electron scattering has long been recognized. Recent experimental techniques involving magnetically confined beams have introduced the concept of folded differential cross sections (FDCS), where the forward (θ ≤ π/2) and backward scattered (θ ≥ π/2) projectiles are unresolved; that is, the value measured at the angle θ is the sum of the signal for particles scattered into the angles θ and π - θ. We have developed an alternative approach to MERT in order to analyse low-energy folded differential cross sections for positrons and electrons. This results in a simplified expression for the FDCS when it is expressed in terms of partial waves and thereby enables one to extract the first few phase shifts from a fit to an experimental FDCS at low energies. Thus, this method predicts forward and backward angle scattering (0 to π) using only experimental FDCS data and can be used to determine the total elastic cross section solely from low-energy experimental results, which are limited in angular range.
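The folding described above is easy to reproduce from a standard partial-wave expansion. The sketch below assumes elastic scattering with a handful of real phase shifts (the phase-shift values and wavenumber are arbitrary placeholders, not data from the paper); it evaluates the usual partial-wave differential cross section and folds it about π/2.

```python
import cmath, math

def legendre(l, x):
    """Legendre polynomial P_l(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def dcs(theta, k, phases):
    """Elastic differential cross section from partial-wave phase shifts."""
    f = sum((2 * l + 1) * cmath.exp(1j * d) * math.sin(d)
            * legendre(l, math.cos(theta))
            for l, d in enumerate(phases)) / k
    return abs(f) ** 2

def fdcs(theta, k, phases):
    """Folded DCS: forward and backward scattering are unresolved."""
    return dcs(theta, k, phases) + dcs(math.pi - theta, k, phases)
```

By construction the folded curve is symmetric about π/2, so fitting it over 0 to π/2 is enough to constrain the first few phase shifts, which is the essence of the method described in the abstract.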
Thermal-Hydrological Sensitivity Analysis of Underground Coal Gasification
Energy Technology Data Exchange (ETDEWEB)
Buscheck, T A; Hao, Y; Morris, J P; Burton, E A
2009-10-05
Specifically, we conducted a parameter sensitivity analysis of the influence of the thermal and hydrological properties of the host coal, caprock, and bedrock on cavity temperature and steam production.
Rehrl, Jakob; Gruber, Arlin; Khinast, Johannes G; Horn, Martin
2017-01-30
This paper presents a sensitivity analysis of a pharmaceutical direct compaction process. Sensitivity analysis is an important tool for gaining valuable process insights and designing a process control concept. Examining its results in a systematic manner makes it possible to assign actuating signals to controlled variables. This paper presents mathematical models for individual unit operations, on which the sensitivity analysis is based. Two sensitivity analysis methods are outlined: (i) based on the so-called Sobol indices and (ii) based on the steady-state gains and the frequency response of the proposed plant model.
Active Fault Diagnosis for Hybrid Systems Based on Sensitivity Analysis and EKF
DEFF Research Database (Denmark)
Gholami, Mehdi; Schiøler, Henrik; Bak, Thomas
2011-01-01
An active fault diagnosis approach for different kinds of faults is proposed. The input of the approach is designed off-line based on sensitivity analysis such that the maximum sensitivity for each individual system parameter is obtained. Using maximum sensitivity results in a better precision...
Adjoint sensitivity analysis of the thermomechanical behavior of repositories
International Nuclear Information System (INIS)
Wilson, J.L.; Thompson, B.M.
1984-01-01
The adjoint sensitivity method is applied to thermomechanical models for the first time. The method provides an efficient and inexpensive answer to the question: how sensitive are thermomechanical predictions to the assumed parameters? The answer is exact, in the sense that it yields exact derivatives of response measures with respect to parameters, and approximate, in the sense that projections of the response for other parameter assumptions are only first-order correct. The method is applied to linear finite element models of thermomechanical behavior. Extensions to more complicated models are straightforward but often laborious. An illustration of the method with a two-dimensional repository corridor model reveals that the chosen stress response measure was most sensitive to Poisson's ratio for the rock matrix.
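The exact-derivative property of the adjoint method can be illustrated on a toy linear model. Everything below is invented for illustration (the 2x2 system, the parameter dependence, and the response functional are not from the paper); the pattern is the general one: one forward solve, one adjoint solve, then dJ/dp = -λᵀ(∂K/∂p)u, whatever the number of parameters.

```python
# Hypothetical 2x2 illustration of the adjoint sensitivity idea:
# for a linear model K(p) u = f and a response J = c . u, one adjoint
# solve K^T lam = c gives the exact derivative dJ/dp = -lam . (dK/dp) u.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def K(p):          # stiffness-like matrix depending on a single parameter p
    return [[2.0 + p, -1.0], [-1.0, 2.0 + p]]

dK_dp = [[1.0, 0.0], [0.0, 1.0]]   # exact derivative of K w.r.t. p

f = [1.0, 0.0]     # load vector
c = [0.0, 1.0]     # response picks the second solution component
p = 0.5

u = solve2(K(p), f)                                                  # forward
lam = solve2([[K(p)[j][i] for j in range(2)] for i in range(2)], c)  # adjoint
dJ_dp = -sum(lam[i] * sum(dK_dp[i][j] * u[j] for j in range(2))
             for i in range(2))
```

The adjoint derivative agrees with a central finite difference of the response, but needs only one extra linear solve regardless of how many parameters K depends on.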
Remarks on variational sensitivity analysis of elastoplastic deformations
Barthold, Franz-Joseph; Liedmann, Jan
2017-10-01
Design optimisation of structures and materials is becoming more important in most engineering disciplines, especially in forming. The treatment of inelastic, path-dependent materials is a recent topic in this connection. Unlike with purely elastic materials, it is necessary to store and analyse the deformation history in order to appropriately describe path-dependent material behaviour. For structural optimisation with design variables such as the outer shape of a structure, the boundary conditions and the material properties, it is necessary to compute sensitivities of all quantities of influence in order to use gradient-based optimisation algorithms. For path-dependent materials, this includes the sensitivities of internal variables that represent the deformation history. We present an algorithm to compute the aforementioned sensitivities, based on variational principles, in the context of finite deformation elastoplasticity. The novel approach establishes the possibility of design exploration using singular value decomposition.
International Nuclear Information System (INIS)
Sola, A.
1978-01-01
An analytical sensitivity analysis has been made of the effect of various parameters on the evaluation of fission product concentration. Such parameters include cross sections, decay constants, branching ratios, fission yields, flux, and time. The formulae are applied to isotopes of the tin, antimony, and tellurium series. The agreement between analytically obtained data and that derived from a computer-evaluated model is good, suggesting that the analytical representation includes all the important parameters needed for the evaluation of fission product concentrations.
BOLD/VENTURE-4, Reactor Analysis System with Sensitivity and Burnup
International Nuclear Information System (INIS)
1998-01-01
1 - Description of program or function: The system of codes can be used to solve nuclear reactor core static neutronics and reactor history exposure problems. BOLD/VENTURE-4: First order perturbation and time-dependent sensitivity theories can be applied. Control rod positioning may be modeled explicitly and refueling treated with repositioning and recycle. Special capability is coded to model the continuously fueled core and to solve the importance and dominant harmonics problems. The modules of the code system are: VENTNEUT: VENTURE neutronics module; DRIVER and CONTRL: Control module; BURNER: Exposure calculation for reactor core analysis; FILEDTOR: File editor; INPROSER: Input processor; EXPOSURE: BURNER code module; REACRATE: Reaction rate calculation; CNTRODPO: Control rod positioning; FUELMANG: Fuel management positioning and accounting; PERTUBAT: Perturbation reactivity importance analyses; sensitivity analysis; DEPTHMOD: Static and time-dependent perturbation sensitivity analysis. The special processors are: DVENTR: Handles the input to the VENTURE module; DCMACR: Converts CITATION macroscopic cross sections to microscopic cross sections; DCRSPR: Produces input for the CROSPROS module; DUTLIN: Adds or replaces problem input data without exiting the program; DENMAN: Repositions fuel; DMISLY: Miscellaneous tasks. Standard interface files between modules are binary sequential files that follow a standardized format. VENTURE-PC: The microcomputer version is a subset of the mainframe version. The modules and special processors which are not part of VENTURE-PC are: REACRATE, CNTRODPO, PERTUBAT, FUELMANG, DEPTHMOD, DMISLY. 2 - Method of solution: BOLD/VENTURE-4: The neutronics problems are solved by applying the multigroup diffusion theory representation of neutron transport applying an over-relaxation inner iteration, outer iteration scheme. Special modeling is used or source correction is done during iteration to solve importance and harmonics problems. No
Molecular analysis of Aspergillus section Flavi isolated from Brazil nuts.
Gonçalves, Juliana Soares; Ferracin, Lara Munique; Carneiro Vieira, Maria Lucia; Iamanaka, Beatriz Thie; Taniwaki, Marta Hiromi; Pelegrinelli Fungaro, Maria Helena
2012-04-01
Brazil nuts are an important export market for their main producing countries, including Brazil, Bolivia, and Peru. Approximately 30,000 tons of Brazil nuts are harvested each year. However, substantial nut contamination by Aspergillus section Flavi occurs with subsequent production of aflatoxins. In our study, Aspergillus section Flavi were isolated from Brazil nuts (Bertholletia excelsa), and identified by morphological and molecular means. We obtained 241 isolates from nut samples, 41% positive for aflatoxin production. Eighty-one isolates were selected for molecular investigation. Pairwise genetic distances among isolates and phylogenetic relationships were assessed. The following Aspergillus species were identified: A. flavus, A. caelatus, A. nomius, A. tamarii, A. bombycis, and A. arachidicola. Additionally, molecular profiles indicated a high level of nucleotide variation within β-tubulin and calmodulin gene sequences associated with high genetic divergence from RAPD data. Among the 81 isolates analyzed by molecular means, three were phylogenetically distinct from all other isolates representing the six species of section Flavi. A putative novel species was identified based on molecular profiles.
Hattori, Satoshi; Zhou, Xiao-Hua
2018-02-10
Publication bias is one of the most important issues in meta-analysis. For standard meta-analyses examining intervention effects, the funnel plot and the trim-and-fill method are simple and widely used techniques for assessing and adjusting for the influence of publication bias, respectively. However, their use may be subjective and can produce misleading insights. To make a more objective inference about publication bias, various sensitivity analysis methods have been proposed, including the Copas selection model. For meta-analysis of diagnostic studies evaluating a continuous biomarker, the summary receiver operating characteristic (sROC) curve is a very useful method in the presence of heterogeneous cutoff values. To the best of our knowledge, no methods are available for evaluating the influence of publication bias on estimation of the sROC curve. In this paper, we introduce a Copas-type selection model for meta-analysis of diagnostic studies and propose a sensitivity analysis method for publication bias. Our method enables us to assess the influence of publication bias on the estimation of the sROC curve and then judge whether the result of the meta-analysis is sufficiently confident or should be interpreted with much caution. We illustrate our proposed method with real data.
Intelligence and Interpersonal Sensitivity: A Meta-Analysis
Murphy, Nora A.; Hall, Judith A.
2011-01-01
A meta-analytic review investigated the association between general intelligence and interpersonal sensitivity. The review involved 38 independent samples with 2988 total participants. There was a highly significant small-to-medium effect for intelligence measures to be correlated with decoding accuracy (r = 0.19, p < 0.001). Significant…
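A common way to pool correlation coefficients across independent samples, shown here as an illustrative sketch (not necessarily the estimator used in this review), is a fixed-effect average of Fisher z-transformed correlations, weighted by n - 3:

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect meta-analytic correlation via the Fisher z transform.

    rs: per-study correlation coefficients; ns: per-study sample sizes.
    Each z = atanh(r) has approximate variance 1/(n - 3), hence the weights.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(zbar)  # back-transform the pooled z to a correlation

# Hypothetical studies: three correlations with different sample sizes
r = pooled_correlation([0.15, 0.22, 0.19], [120, 80, 200])
```

The pooled value always lies between the smallest and largest study correlations, with larger samples pulling it harder.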
Smart optimisation and sensitivity analysis in water distribution systems
CSIR Research Space (South Africa)
Page, Philip R
2015-12-01
The work considers optimisation of a water distribution system by keeping the average pressure unchanged as water demands change, through changing the speed of the pumps. Another application area considered, using the same mathematical notions, is the study of the sensitivity...
Stochastic sensitivity analysis using HDMR and score function
Indian Academy of Sciences (India)
The method involves high dimensional model representation and score functions associated with probability distribution of a random input. The proposed approach facilitates first-and second-order approximation of stochastic sensitivity measures and statistical simulation. The formulation is general such that any simulation ...
Fecal bacteria source characterization and sensitivity analysis of SWAT 2005
The Soil and Water Assessment Tool (SWAT) version 2005 includes a microbial sub-model to simulate fecal bacteria transport at the watershed scale. The objectives of this study were to demonstrate methods to characterize fecal coliform bacteria (FCB) source loads and to assess the model sensitivity t...
Sensitivity based reduced approaches for structural reliability analysis
Indian Academy of Sciences (India)
The difficulty in computing the failure probability increases rapidly with the number of variables. In this paper, a ... Based on the sensitivity of the failure surface, three new reduction methods, namely ... Department of Aerospace Engineering, School of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP, UK ...
Comparative Analysis of Intercultural Sensitivity among Teachers Working with Refugees
Strekalova-Hughes, Ekaterina
2017-01-01
The unprecedented global refugee crisis and the accompanying political discourse places added pressures on teachers working with children who are refugees in resettling countries. Given the increased chances of having a refugee child in one's classroom, it is critical to explore how interculturally sensitive teachers are and if working with…
Time course analysis of baroreflex sensitivity during postural stress
Westerhof, Berend E.; Gisolf, Janneke; Karemaker, John M.; Wesseling, Karel H.; Secher, Niels H.; van Lieshout, Johannes J.
2006-01-01
Postural stress requires immediate autonomic nervous action to maintain blood pressure. We determined time-domain cardiac baroreflex sensitivity (BRS) and time delay (tau) between systolic blood pressure and interbeat interval variations during stepwise changes in the angle of vertical body axis
Financial bubbles analysis with a cross-sectional estimator
Frederic Abergel; Nicolas Huth; Ioane Muni Toke
2009-01-01
We highlight a very simple statistical tool for the analysis of financial bubbles, which has already been studied in [1]. We provide extensive empirical tests of this statistical tool and investigate analytically its link with stocks correlation structure.
Sensitivity analysis of the GNSS derived Victoria plate motion
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region, and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the considered minimal threshold value to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (longer data-span for some stations) by extending the used data-span: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations became available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al. (2013)), which can use different noise models and estimate offsets and seasonal signals simultaneously. The estimation of the seasonal signal is another important aspect, since the time-series are rather short or have large data gaps at some stations, which implies that the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallée (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they can, detected and undetected, influence the estimated plate motion. The times of offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the
A global analysis of inclusive diffractive cross sections at HERA
International Nuclear Information System (INIS)
Royon, C.; Schoeffel, L.; Sapeta, S.; Peschanski, R.; Sauvan, E.
2006-10-01
We describe the most recent data on the diffractive structure functions from the H1 and ZEUS Collaborations at HERA using four models. First, a Pomeron Structure Function (PSF) model, in which the Pomeron is considered as an object with parton distribution functions. Then, the Bartels-Ellis-Kowalski-Wuesthoff (BEKW) approach is discussed, assuming the simplest perturbative description of the Pomeron using a two-gluon ladder. A third approach, the Bialas-Peschanski (BP) model, based on the dipole formalism, is then described. Finally, we discuss the Golec-Biernat-Wuesthoff (GBW) saturation model, which takes into account saturation effects. The best description of all available measurements can be achieved with either the PSF-based model or the BEKW approach. In particular, the BEKW prediction allows one to include the highest β measurements, which are dominated by higher-twist effects, and provides an efficient and compact parametrisation of the diffractive cross section. The two other models also give a good description of cross section measurements at small x with a small number of parameters. The comparison of all predictions allows us to identify interesting differences in the behaviour of the effective pomeron intercept and in the shape of the longitudinal component of the diffractive structure functions. In this last part, we present some features that can be discriminated by new experimental measurements, completing the HERA program. (authors)
Directory of Open Access Journals (Sweden)
Y. Tang
2007-01-01
This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
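Sobol's method, one of the four techniques compared above, can be sketched with a pick-freeze Monte Carlo estimator of the first-order indices. The code below is a generic illustration on an additive toy model with independent uniform inputs, not the hydrologic model of the study; the sample size, seed, and test model are arbitrary assumptions.

```python
import random

def first_order_sobol(model, dim, n=50000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model with independent U(0,1) inputs (illustrative sketch)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # AB_i: rows of A with column i replaced by the value from B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Saltelli-style estimator of V_i = Var(E[Y | X_i])
        vi = sum(yb * (yabi - ya)
                 for ya, yabi, yb in zip(yA, yABi, yB)) / n
        indices.append(vi / var)
    return indices

# Additive test model Y = 2*X1 + X2: exact first-order indices are 0.8, 0.2
S = first_order_sobol(lambda x: 2.0 * x[0] + 1.0 * x[1], dim=2)
```

For this additive model the indices sum to one; interactions in a real model show up as a shortfall between the sum of first-order indices and one.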
Analytical sensitivity analysis of geometric errors in a three axis machine tool
International Nuclear Information System (INIS)
Park, Sung Ryung; Yang, Seung Han
2012-01-01
In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors
System reliability assessment via sensitivity analysis in the Markov chain scheme
International Nuclear Information System (INIS)
Gandini, A.
1988-01-01
Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they allow one to identify important components, assisting the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrade. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given [fr]
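The Birnbaum importance mentioned above has a simple computational form: it is the difference in system reliability with component i forced working versus forced failed (equivalently, dR/dp_i for independent components). A minimal sketch for a 2-out-of-3 structure follows; the component probabilities are invented for illustration.

```python
from itertools import product

def two_out_of_three(x):
    """Structure function: system works when at least 2 of 3 components work."""
    return 1 if sum(x) >= 2 else 0

def reliability(struct, probs, fixed=None):
    """System reliability by enumeration; `fixed` pins component states."""
    fixed = fixed or {}
    free = [i for i in range(len(probs)) if i not in fixed]
    r = 0.0
    for combo in product((0, 1), repeat=len(free)):
        states = [0] * len(probs)
        w = 1.0
        for i, s in zip(free, combo):
            states[i] = s
            w *= probs[i] if s else 1.0 - probs[i]
        for i, s in fixed.items():
            states[i] = s
        r += w * struct(tuple(states))
    return r

def birnbaum(struct, probs, i):
    """Birnbaum importance: R(x_i = 1) - R(x_i = 0) = dR/dp_i."""
    return (reliability(struct, probs, {i: 1})
            - reliability(struct, probs, {i: 0}))

probs = [0.9, 0.8, 0.7]   # hypothetical component reliabilities
```

Enumeration is exponential in the number of components, which is exactly why perturbation-based formulations such as the GPT approach above matter for large systems.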
Sensitive Detection of Deliquescent Bacterial Capsules through Nanomechanical Analysis.
Nguyen, Song Ha; Webb, Hayden K
2015-10-20
Encapsulated bacteria usually exhibit strong resistance to a wide range of sterilization methods, and are often virulent. Early detection of encapsulation can be crucial in microbial pathology. This work demonstrates a fast and sensitive method for the detection of encapsulated bacterial cells. Nanoindentation force measurements were used to confirm the presence of deliquescent bacterial capsules surrounding bacterial cells. Force/distance approach curves contained characteristic linear-nonlinear-linear domains, indicating cocompression of the capsular layer and cell, indentation of the capsule, and compression of the cell alone. This is a sensitive method for the detection and verification of the encapsulation status of bacterial cells. Given that this method was successful in detecting the nanomechanical properties of two different layers of cell material, i.e. distinguishing between the capsule and the remainder of the cell, further development may potentially lead to the ability to analyze even thinner cellular layers, e.g. lipid bilayers.
Analytical capabilities of RIMS: absolute sensitivity and isotopic analysis
International Nuclear Information System (INIS)
Nogar, N.S.; Downey, S.W.; Miller, C.M.
1984-01-01
Resonance ionization mass spectrometry (RIMS) with thermal filament sources is becoming an established analytical technique. The results of recent isotope ratio measurements carried out on small (60-200 ng) lutetium samples are presented. The sensitivity and selectivity of continuous wave (CW) laser RIMS allow the accurate determination of very large ratios (approx. 10^6) in real samples containing numerous isobaric interferences. In addition, high resolution optical spectra of lutetium isotopes have been generated using RIMS as a prelude to isotopically selective resonance ionization. Also, the results of two-color spectroscopic studies for isotope ratio measurements in technetium are presented. A large number of multiply-resonant sequences have been explored; however, the presence of Tc molecular species appears to limit the potential sensitivity of the measurement. (author)
Long vs. short-term energy storage: sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)
2007-07-01
This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology--asymmetric lead-carbon capacitors--has also been added. Energy storage technologies are examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. Sensitivity analyses include cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is hours of storage available for discharge.
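The capital-carrying-charge logic behind a $/kW-yr comparison can be sketched as follows. The cost categories and all numbers below are illustrative placeholders, not values from the report; the capital recovery factor (CRF) converts an up-front cost into an equivalent annual charge, so a longer system life lowers the annual cost, which is the sensitivity the report examines.

```python
def annual_cost_per_kw(capital_power, capital_energy, hours, fixed_om,
                       rate, life_years):
    """Annualized life-cycle cost in $/kW-yr for a storage system.

    Illustrative sketch: capital_power ($/kW) and capital_energy ($/kWh)
    scale with power and storage duration; fixed_om is $/kW-yr; rate is
    the annual discount rate; life_years sets the capital recovery period.
    """
    crf = rate * (1 + rate) ** life_years / ((1 + rate) ** life_years - 1)
    capital = capital_power + capital_energy * hours   # total $/kW installed
    return capital * crf + fixed_om

# Hypothetical 4-hour system: $400/kW power cost, $250/kWh energy cost
cost = annual_cost_per_kw(capital_power=400.0, capital_energy=250.0,
                          hours=4.0, fixed_om=10.0, rate=0.08, life_years=15)
```

Rerunning with a different life or rate shows how replacement costs and capital carrying charges drive the sensitivity results.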
Analysis of the individual radio sensitivity of breast cancer patients
International Nuclear Information System (INIS)
Auer, Judith
2013-01-01
Individual radiosensitivity has a crucial impact on radiotherapy-related side effects. A prediction of individual radiosensitivity could help avoid these side effects. Our aim was to study a breast cancer collective for its variation in individual radiosensitivity. Peripheral blood samples were obtained from 129 individuals. 67 breast cancer patients and 62 healthy, age-matched individuals were examined, and their individual radiosensitivity was estimated by a 3-color fluorescence in situ hybridization approach. Blood samples were obtained (i) before starting adjuvant radiotherapy and in vitro irradiated with 2 Gy, and (ii) after 5 single doses of 1.8 Gy and after 72 h had elapsed. DNA of lymphocytes was probed with whole chromosome painting for chromosomes 1, 2 and 4. The rate of breaks per metaphase was analyzed and used as a predictor of individual radiosensitivity. Breast cancer patients were distinctly more radiosensitive compared to healthy controls. Additionally, the distribution of the cancer patients' radiosensitivity was broader. A subgroup of 9 rather radiosensitive and 9 rather radioresistant patients was identified. A subgroup of patients aged between 40 and 50 was distinctly more radiosensitive than younger or older patients. The in vivo irradiation approach was not applicable for detecting individual radiosensitivity. In the breast cancer collective, a distinctly resistant and a distinctly sensitive subgroup are identified, which could be subject to treatment adjustment. Especially in the age range of 40 to 50, patients have an increased radiosensitivity. An in vivo irradiation approach in a breast cancer collective is not suitable to estimate individual radiosensitivity due to the low deposited dose.
Analysis of Consumers' Preferences and Price Sensitivity to Native Chickens.
Lee, Min-A; Jung, Yoojin; Jo, Cheorun; Park, Ji-Young; Nam, Ki-Chang
2017-01-01
This study analyzed consumers' preferences and price sensitivity to native chickens. A survey was conducted from Jan 6 to 17, 2014, and data were collected from consumers (n=500) living in Korea. Statistical analyses evaluated the consumption patterns of native chickens, preference marketing for native chicken breeds to be newly developed, and price sensitivity measurement (PSM). Of the subjects who preferred broilers, 24.3% do not purchase native chickens because of the dryness and tough texture, while those who preferred native chickens liked their chewy texture (38.2%). Of the total subjects, 38.2% preferred fried native chicken as a processed food, 38.4% preferred direct sales for native chicken distribution, 51.0% preferred native chickens to be slaughtered in specialty stores, and 32.4% wanted easy access to native chickens. Additionally, the price stress range (PSR) was 50 won, and the point of marginal cheapness (PMC) and point of marginal expensiveness (PME) were 6,980 won and 12,300 won, respectively. Evaluation of the segmented market revealed that consumers who prefer broilers to native chicken breeds were more sensitive to the chicken price. To accelerate the consumption of newly developed native chicken meat, it is necessary to develop the texture each consumer segment wants, to increase the accessibility of native chickens, and to offer diverse menus and recipes as well as reasonable pricing for native chickens.
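The PMC and PME figures quoted above come from Van Westendorp-style cumulative price curves: PMC is where the falling "too cheap" curve meets the rising "expensive" curve, and PME is where the falling "cheap" curve meets the rising "too expensive" curve. The sketch below reconstructs the idea on invented survey data; the resulting crossing prices are illustrative only and unrelated to the study's 6,980 and 12,300 won values.

```python
def cumulative_at(prices, p, descending=False):
    """Share of respondents whose stated threshold is <= p (or >= p)."""
    if descending:
        return sum(1 for x in prices if x >= p) / len(prices)
    return sum(1 for x in prices if x <= p) / len(prices)

def crossing(grid, f, g):
    """First grid price where rising curve f reaches falling curve g."""
    for p in grid:
        if f(p) >= g(p):
            return p
    return None

# Hypothetical stated price thresholds (won) from the four PSM questions
too_cheap     = [3000, 4000, 5000, 6000, 8000]
expensive     = [6000, 7000, 9000, 10000, 12000]
cheap         = [5000, 6000, 7000, 9000, 12000]
too_expensive = [8000, 10000, 12000, 14000, 15000]

grid = range(3000, 16000, 100)
# PMC: "too cheap" (falling) meets "expensive" (rising)
pmc = crossing(grid, lambda p: cumulative_at(expensive, p),
                     lambda p: cumulative_at(too_cheap, p, descending=True))
# PME: "cheap" (falling) meets "too expensive" (rising)
pme = crossing(grid, lambda p: cumulative_at(too_expensive, p),
                     lambda p: cumulative_at(cheap, p, descending=True))
```

The interval from PMC to PME is the acceptable price range; PMC below PME is the expected ordering for coherent survey data.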
Adjoint based sensitivity analysis of a reacting jet in crossflow
Sashittal, Palash; Sayadi, Taraneh; Schmid, Peter
2016-11-01
With current advances in computational resources, high-fidelity simulations of reactive flows are increasingly being used as predictive tools in various industrial applications. In order to capture the combustion process accurately, detailed or reduced chemical mechanisms are employed, which in turn rely on various model parameters. It is therefore of great interest to quantify the sensitivities of the predictions with respect to the introduced models. Due to the high dimensionality of the parameter space, methods such as finite differences, which rely on multiple forward simulations, prove to be very costly, and adjoint-based techniques are a suitable alternative. The complex nature of the governing equations, however, renders an efficient strategy for finding the adjoint equations a challenging task. In this study, we employ the modular approach of Fosas de Pando et al. (2012) to build a discrete adjoint framework applied to a reacting jet in crossflow. The developed framework is then used to extract the sensitivity of the integrated heat release with respect to the existing combustion parameters. Analyzing the sensitivities in the three-dimensional domain provides insight into the specific regions of the flow that are most susceptible to the choice of model.
International Nuclear Information System (INIS)
Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.
2008-01-01
In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, the construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components, and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and for the system's availability at the final mission time, respectively. The QUEFT/MARKOMAG-S/MCADJSEN code system has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
Intelligent switching between different noise propagation algorithms: analysis and sensitivity
2012-08-10
When modeling aircraft noise on a large scale (such as an analysis of annual aircraft operations at an airport), it is important that the noise propagation model used for the analysis be both efficient and accurate. In this analysis, three differ...
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
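For a toy model, the sampled-input workflow enumerated above (steps 1 through 3 and 5) can be sketched as follows; the model, input names, and distributions are illustrative assumptions, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Steps (1)-(2): characterize the uncertain inputs and draw a sample
# (simple random sampling here; Latin hypercube sampling is also common)
n = 1000
a = rng.uniform(0.5, 1.5, n)   # hypothetical input 1
b = rng.normal(2.0, 0.3, n)    # hypothetical input 2

# Step (3): propagate the sampled inputs through the (toy) analysis model
y = a**2 + 0.1 * b

# Step (5): sensitivity via rank transformation plus correlation analysis,
# two of the procedures listed in the survey
rho = {name: rank_corr(x, y) for name, x in [("a", a), ("b", b)]}
print(rho)   # input a dominates the response
```

The rank transformation makes the correlation measure robust to monotonic nonlinearity in the input-output relationship, which is one reason the survey treats it as a standard preprocessing step.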
Survey of sampling-based methods for uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.
2006-01-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs (ii) generation of samples from uncertain analysis inputs (iii) propagation of sampled inputs through an analysis (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition
Introduction to the Special Section on Forest Inventory and Analysis
John D. Shaw
2017-01-01
Eighteen years ago, in this journal, Gillespie (1999) described the transition of the US Department of Agriculture (USDA) Forest Service Forest Inventory and Analysis (FIA) program from its historical practice of periodic, state-level inventories to a spatially and temporally balanced annualized inventory. The article offered a rationale for the change and also noted...
Functional analysis of the cross-section form and X-ray density of human ulnae
International Nuclear Information System (INIS)
Hilgen, B.
1981-01-01
On 20 ulnae, the form of the cross sections and the distribution of X-ray density were investigated at five different cross-section heights. The analysis of the cross-section forms was carried out using plane contraction figures; the X-ray density was established by means of the equidensity line method. (orig.) [de]
Qualitative website analysis of information on birth after caesarean section.
Peddie, Valerie L; Whitelaw, Natalie; Cumming, Grant P; Bhattacharya, Siladitya; Black, Mairead
2015-08-19
The United Kingdom (UK) caesarean section (CS) rate is largely determined by reluctance to augment trial of labour and vaginal birth. Choice between repeat CS and attempting vaginal birth after CS (VBAC) in the next pregnancy is challenging, with neither offering clear safety advantages. Women may access online information during the decision-making process. Such information is known to vary in its support for either mode of birth when assessed quantitatively. Therefore, we sought to explore qualitatively the content and presentation of web-based health care information on birth after caesarean section (CS) in order to identify the dominant messages being conveyed. The search engine Google™ was used to conduct an internet search using terms relating to birth after CS. The ten most frequently returned websites meeting relevant purposive sampling criteria were analysed. Sampling criteria were based upon funding source, authorship and intended audience. Images and written textual content, together with the presence of links to additional media or external web content, were analysed using descriptive and thematic analyses respectively. Ten websites were analysed: five funded by Government bodies or professional membership; one via charitable donations; and four funded commercially. All sites compared the advantages and disadvantages of both repeat CS and VBAC. Commercially funded websites favoured a question and answer format alongside images, 'pop-ups', social media forum links and hyperlinks to third-party sites. The relationship between the parent sites and those being linked to may not be readily apparent to users, risking perception of endorsement of either VBAC or repeat CS, whether intended or otherwise. Websites affiliated with Government or health services presented referenced clinical information in a factual manner with podcasts of real-life experiences. Many imply greater support for VBAC than repeat CS, although this was predominantly conveyed through subtle
Improved Extreme Learning Machine based on the Sensitivity Analysis
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
The extreme learning machine (ELM) and its improved variants are weak in some respects, such as computational complexity and learning error. After deep analysis, and drawing on the importance of hidden nodes in SVM, a novel method of sensitivity analysis is proposed which matches people's cognitive habits. Based on this, an improved ELM is proposed: it can remove hidden nodes before the learning error target is met and can efficiently manage the number of hidden nodes, so as to improve its performance. Comparative tests show it is better in learning time, accuracy and so on.
Parameter identification and sensitivity analysis for a robotic manipulator arm
Brewer, D. W.; Gibson, J. S.
1988-01-01
The development of a nonlinear dynamic model for large oscillations of a robotic manipulator arm about a single joint is described. Optimization routines are formulated and implemented for the identification of electrical and physical parameters from dynamic data taken from an industrial robot arm. Special attention is given to difficulties caused by the large sensitivity of the model with respect to unknown parameters. Performance of the parameter identification algorithm is improved by choosing a control input that allows actuator emf to be included in an electro-mechanical model of the manipulator system.
Directory of Open Access Journals (Sweden)
Marius Henriksen
2013-01-01
Objectives. To investigate associations between muscle strength and pain sensitivity among healthy volunteers, and associations between different pain sensitivity measures. Methods. Twenty-eight healthy volunteers (21 females) participated. Pressure pain thresholds (PPTs) were obtained from (1) computer-controlled pressure algometry on the vastus lateralis and deltoid muscles and on the infrapatellar fat pad and (2) computerized cuff pressure algometry applied to the lower leg. Deep-tissue pain sensitivity (intensity and duration) was assessed by hypertonic saline injections into the vastus lateralis, deltoid, and infrapatellar fat pad. Quadriceps and hamstring muscle strength was assessed isometrically at 60-degree knee flexion using a dynamometer. Associations between pain sensitivity and muscle strength were investigated using multiple regressions including age, gender, and body mass index as covariates. Results. Knee extension strength was associated with computer-controlled PPT on the vastus lateralis muscle. Computer-controlled PPTs were significantly correlated between sites (r > 0.72) and with cuff PPT (r > 0.4). Saline-induced pain intensity and duration were correlated between sites (r > 0.39) and with all PPTs (r < -0.41). Conclusions. Pressure pain thresholds at the vastus lateralis are positively associated with knee extensor muscle strength. Different pain sensitivity assessment methods are generally correlated. The cuff PPT and evoked infrapatellar pain seem to reflect general pain sensitivity. This trial is registered with ClinicalTrials.gov: NCT01351558.
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises to substitute the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project
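The surrogate-plus-variance-decomposition idea can be sketched minimally as follows. A least-squares quadratic response surface stands in for the paper's neural network, and the toy "system model" and its inputs are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "system model" standing in for an expensive Monte Carlo
# reliability/availability evaluation (hypothetical)
def model(x1, x2):
    return x1 + 0.5 * x2**2

# Train a fast surrogate on a modest number of "expensive" evaluations.
# A least-squares quadratic is used here in place of a neural network.
X = rng.uniform(0, 1, size=(200, 2))
y = model(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0]**2, X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2]) @ coef

# Classical variance decomposition, evaluated on the cheap surrogate:
# S_i = Var_{x_i}( E[y | x_i] ) / Var(y), estimated by brute force.
n = 2000
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
var_y = surrogate(x1, x2).var()
grid = np.linspace(0, 1, 50)
cond_mean_x1 = [surrogate(np.full(n, g), x2).mean() for g in grid]
S1 = np.var(cond_mean_x1) / var_y
print(f"first-order index of x1 ~ {S1:.2f}")
```

The surrogate replaces each costly system evaluation inside the double loop of the variance decomposition, which is exactly where the computational burden the abstract describes would otherwise arise.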
Theme section: Multi-dimensional modelling, analysis and visualization
DEFF Research Database (Denmark)
Guilbert, Éric; Coltekin, Arzu; Antón Castro, Francesc/François
2016-01-01
describing complex multidimensional phenomena. An example of the relevance of multidimensional modelling is seen with the development of urban modelling, where several dimensions have been added to the traditional 2D map representation (Sester et al., 2011). These include, obviously, the third spatial dimension... in order to provide a meaningful representation and assist in data visualisation and mining, modelling and analysis, such as data structures allowing representation at different scales or in different contexts of thematic information. Such issues are of importance with regard to the mission of the ISPRS...
A Flow-Sensitive Analysis of Privacy Properties
DEFF Research Database (Denmark)
Nielson, Hanne Riis; Nielson, Flemming
2007-01-01
that information I send to some service is never leaked to another service, unless I give my permission? We shall develop a static program analysis for the pi-calculus and show how it can be used to give privacy guarantees like the ones requested above. The analysis records the explicit information flow...
SENSITIVITY AND UNCERTAINTY ANALYSIS OF COMMERCIAL REACTOR CRITICALS FOR BURNUP CREDIT
International Nuclear Information System (INIS)
Radulescu, Georgeta; Mueller, Don; Wagner, John C.
2009-01-01
The purpose of this study is to provide insights into the neutronic similarities that may exist between a generic cask containing typical spent nuclear fuel assemblies and commercial reactor critical (CRC) state-points. Forty CRC state-points from five pressurized-water reactors were selected for the study, and the types of CRC state-points that may be applicable to validation of burnup credit criticality safety calculations for spent fuel transport/storage/disposal systems are identified. The study employed cross-section sensitivity and uncertainty analysis methods developed at Oak Ridge National Laboratory and the TSUNAMI set of tools in the SCALE code system as a means to investigate system similarity on an integral and nuclide-reaction-specific level. The results indicate that, except for the fresh fuel core configuration, all analyzed CRC state-points are either highly similar, similar, or marginally similar to a generic cask containing spent nuclear fuel assemblies with burnups ranging from 10 to 60 GWd/MTU. Based on the integral system parameter C_k, approximately 30 of the 40 CRC state-points are applicable to validation of burnup credit in the generic cask containing typical spent fuel assemblies with burnups ranging from 10 to 60 GWd/MTU. The state-points providing the highest similarity (C_k > 0.95) were attained at or near the end of a reactor cycle. The C_k values are dominated by neutron reactions with major actinides and hydrogen, as the sensitivities of these reactions are much higher than those of the minor actinides and fission products. On a nuclide-reaction-specific level, the CRC state-points provide significant similarity for most of the actinides and fission products relevant to burnup credit. A comparison of energy-dependent sensitivity profiles shows a slight shift of the CRC k_eff sensitivity profiles toward higher energies in the thermal region as compared to the k_eff sensitivity profile of the generic cask. Parameters representing
Analysis on Indications and Causes of Cesarean Section on Pemba Island of Zanzibar in Africa
Liping Zhou; Zubeir TS; Hamida SA
2013-01-01
Objective: To explore and analyze the indications and causes of cesarean section on Pemba island of Zanzibar in Africa to improve the quality of obstetrics. Methods: 564 patients who underwent cesarean section in Abdulla Mzee Hospital of Pemba from January 2008 to December 2011 were selected, and statistical analysis was conducted retrospectively. Results: The rate of cesarean section in Abdulla Mzee Hospital of Pemba was 10.01%. The primary causes of cesarean section included cep...
A method of the sensitivity analysis of build-up and decay of actinides
International Nuclear Information System (INIS)
Mitani, Hiroshi; Koyama, Kinji; Kuroi, Hideo
1977-07-01
To perform sensitivity analysis of the build-up and decay of actinides, the mathematical methods related to this problem have been investigated in detail. The application of the time-dependent perturbation technique and of the Bateman method to sensitivity analysis is mainly studied. For this purpose, a basic equation and its adjoint equation for the build-up and decay of actinides are systematically solved by introducing Laplace and modified Laplace transforms and their convolution theorems. The mathematical method of sensitivity analysis is then formulated by the above technique; its physical significance is also discussed. Finally, the application of the eigenvalue method is investigated. Sensitivity coefficients can be directly calculated by this method. (auth.)
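The flavor of such a sensitivity calculation can be illustrated on the simplest case, a two-member decay chain, where the Bateman solution is available in closed form and the analytic derivative of the daughter concentration with respect to a decay constant can be checked against a finite difference (the decay constants below are illustrative, not actinide data):

```python
import numpy as np

# Bateman solution for a two-member chain N1 -> N2 -> (removed):
# N2(t) = N0 * l1/(l2 - l1) * (exp(-l1 t) - exp(-l2 t))
def n2(l1, l2, t, n0=1.0):
    return n0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

# Analytic sensitivity dN2/dl1, obtained by differentiating the
# expression above with respect to l1
def dn2_dl1(l1, l2, t, n0=1.0):
    return (n0 * l2 / (l2 - l1) ** 2 * (np.exp(-l1 * t) - np.exp(-l2 * t))
            - n0 * l1 * t * np.exp(-l1 * t) / (l2 - l1))

# Illustrative (assumed) decay constants and time
l1, l2, t = 0.2, 0.05, 10.0
h = 1e-6
fd = (n2(l1 + h, l2, t) - n2(l1 - h, l2, t)) / (2 * h)   # central difference
print(f"analytic: {dn2_dl1(l1, l2, t):.6f}, finite difference: {fd:.6f}")
```

For realistic chains with many nuclides, the adjoint/perturbation machinery the abstract describes replaces this explicit differentiation, but the sensitivity coefficient being computed is the same kind of quantity.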
A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.
Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P
2018-04-01
Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework that automates the GastroPlus graphical user interface with AutoIt and executes the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
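The first, screening stage of such a 2-stage analysis can be sketched with a hand-rolled Morris-style elementary-effects loop. The toy model below is a stand-in (the paper drives the actual GastroPlus GUI via AutoIt and MATLAB), and a simplified radial one-at-a-time design is used rather than full Morris trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model standing in for one GastroPlus run (hypothetical)
def model(x):
    return x[0] ** 2 + 2 * x[1] + 0.01 * x[2]

k, r, delta = 3, 20, 0.25          # factors, repetitions, step size in [0, 1]
mu_star = np.zeros(k)
for _ in range(r):
    x = rng.uniform(0, 1 - delta, k)     # random base point
    fx = model(x)
    for i in rng.permutation(k):         # one-at-a-time perturbations
        x2 = x.copy()
        x2[i] += delta
        ee = (model(x2) - fx) / delta    # elementary effect of factor i
        mu_star[i] += abs(ee) / r        # accumulate mean |EE| (mu*)
print("Morris mu* per factor:", np.round(mu_star, 3))
```

Factors with small mu* (here the third one) would be frozen at nominal values, and only the surviving factors passed on to the more expensive Sobol stage.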
Huysmans, Eva; Ickmans, Kelly; Van Dyck, Dries; Nijs, Jo; Gidron, Yori; Roussel, Nathalie; Polli, Andrea; Moens, Maarten; Goudman, Lisa; De Kooning, Margot
2018-02-01
The objective of this cross-sectional study was to analyze the relationship between symptoms of central sensitization (CS) and important cognitive behavioral and psychosocial factors in a sample of patients with chronic nonspecific low back pain. Participants with chronic nonspecific low back pain for at least 3 months were included in the study. They completed several questionnaires and a functional test. Pearson's correlation was used to analyze associations between symptoms of CS and pain behavior, functioning, pain, pain catastrophizing, kinesiophobia, and illness perceptions. Additionally, a between-group analysis was performed to compare patients with and without clinically relevant symptoms of CS. Data from 38 participants were analyzed. Significant associations were found between symptoms of CS and all other outcomes, especially current pain (r = 0.510, P = .001), mean pain during the past 7 days (r = 0.505, P = .001), and pain catastrophizing (r = 0.518, P = .001). Patients with clinically relevant symptoms of CS scored significantly worse on all outcomes compared with persons without relevant symptoms of CS, except on functioning (P = .128). Symptoms of CS were significantly associated with psychosocial and cognitive behavioral factors. Patients exhibiting a clinically relevant degree of symptoms of CS scored significantly worse on most outcomes, compared with the subgroup of the sample with fewer symptoms of CS. Copyright © 2017. Published by Elsevier Inc.
Multivariate Analysis Of Risk Factors For Caesarean Section In The ...
African Journals Online (AJOL)
Method: Retrospective analysis of the mode of delivery within a 5 year period as contained in patients' medical records using frequency distribution and cross tabulations of risk factors. Logistic regression analysis was used to determine the predictors of Caesarean section. Result: Caesarean section rate was 22%.
Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone
International Nuclear Information System (INIS)
Hayes, F.; Jones, M.L.M.; Mills, G.; Ashmore, M.
2007-01-01
This study identified 83 species from existing publications suitable for inclusion in a database of sensitivity of species to ozone (OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships in comparison to plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density and maximum altitude were found. The relative sensitivity of species and relationships with plant characteristics identified in this study could be used to predict sensitivity to ozone of untested species and communities. - Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone showed some relationships with physiological and ecological characteristics
Energy Technology Data Exchange (ETDEWEB)
Reyes F, M. del C.
2015-07-01
A methodology to perform uncertainty and sensitivity analysis of the cross sections used in a Trace/PARCS coupled model for a control rod drop transient of a BWR-5 reactor was implemented with the neutronics code PARCS. A model of the nuclear reactor detailing all assemblies located in the core was developed. The thermohydraulic model designed in Trace, however, was a simple model: one channel representing all the assembly types in the core was placed inside a simple vessel model, and boundary conditions were established. The thermohydraulic model was coupled with the neutronics model, first for the steady state, and then a Control Rod Drop (CRD) transient was simulated in order to carry out the uncertainty and sensitivity analysis. To analyze the cross sections used in the Trace/PARCS coupled model during the transient, Probability Density Functions (PDFs) were generated for the 22 cross-section parameters selected from the neutronics parameters that PARCS requires, thus obtaining 100 different cases for the Trace/PARCS coupled model, each with a database of different cross sections. All these cases were executed with the coupled model, yielding 100 different outputs for the CRD transient, with special emphasis on 4 responses per output: 1) the reactivity, 2) the percentage of rated power, 3) the average fuel temperature, and 4) the average coolant density. For each response during the transient, an uncertainty analysis was performed in which the corresponding uncertainty bands were generated. With this analysis it is possible to observe the result ranges of the responses obtained by varying the selected uncertainty parameters. This is very useful and important for maintaining safety in nuclear power plants, and for verifying that the uncertainty band lies within safety margins. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the
International Nuclear Information System (INIS)
Reyes F, M. del C.
2015-01-01
A methodology to perform uncertainty and sensitivity analysis of the cross sections used in a Trace/PARCS coupled model for a control rod drop transient of a BWR-5 reactor was implemented with the neutronics code PARCS. A model of the nuclear reactor detailing all assemblies located in the core was developed. The thermohydraulic model designed in Trace, however, was a simple model: one channel representing all the assembly types in the core was placed inside a simple vessel model, and boundary conditions were established. The thermohydraulic model was coupled with the neutronics model, first for the steady state, and then a Control Rod Drop (CRD) transient was simulated in order to carry out the uncertainty and sensitivity analysis. To analyze the cross sections used in the Trace/PARCS coupled model during the transient, Probability Density Functions (PDFs) were generated for the 22 cross-section parameters selected from the neutronics parameters that PARCS requires, thus obtaining 100 different cases for the Trace/PARCS coupled model, each with a database of different cross sections. All these cases were executed with the coupled model, yielding 100 different outputs for the CRD transient, with special emphasis on 4 responses per output: 1) the reactivity, 2) the percentage of rated power, 3) the average fuel temperature, and 4) the average coolant density. For each response during the transient, an uncertainty analysis was performed in which the corresponding uncertainty bands were generated. With this analysis it is possible to observe the result ranges of the responses obtained by varying the selected uncertainty parameters. This is very useful and important for maintaining safety in nuclear power plants, and for verifying that the uncertainty band lies within safety margins. The sensitivity analysis complements the uncertainty analysis by identifying the parameter or parameters with the most influence on the
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), non-dispersive infrared (NDIR), and FTIR methods, which are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and for studying the effects of diffusive transport coupling on reaction behavior in flames. Modeling uses well-defined and validated mechanisms for the CO/H2/oxidant systems.
Sensitivity analysis techniques applied to a system of hyperbolic conservation laws
International Nuclear Information System (INIS)
Weirs, V. Gregory; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.
2012-01-01
Sensitivity analysis comprises techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. - Highlights: ► Sensitivity analysis techniques for a model shock physics problem are compared. ► The model problem and the sensitivity analysis problem have exact solutions. ► Subtle details of the method for computing sensitivity indices can affect the results.
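The estimated-versus-exact comparison described above can be reproduced on the Ishigami function, a standard sensitivity-analysis benchmark whose exact first-order Sobol indices are known (S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0). The pick-freeze estimator below is one common sampling-based choice; the Riemann-problem model itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ishigami benchmark function on [-pi, pi]^3
a, b = 7.0, 0.1
def f(x):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

N, k = 50000, 3
A = rng.uniform(-np.pi, np.pi, (N, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, k))
yA, yB = f(A), f(B)
V = yA.var()

# Pick-freeze estimate of first-order indices: C_i equals B except for
# column i, which is copied from A, so yA and f(C_i) share only input i.
S = []
for i in range(k):
    C = B.copy()
    C[:, i] = A[:, i]
    S.append((np.mean(yA * f(C)) - yA.mean() * yB.mean()) / V)
    print(f"S{i + 1} ~ {S[-1]:.3f}")
```

Because the exact indices are known for this benchmark, the sampling error of the estimator can be assessed directly as a function of N, mirroring the convergence study in the abstract.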
Mechanical analysis of the DSB cross-section
International Nuclear Information System (INIS)
Chen, Yanping.
1993-05-01
This paper presents the preliminary mechanical finite element analysis for the SSCL designed DSB dipole magnet. This SSCL version 50 mm aperture dipole magnet is for the SSCL High Energy Booster, with nineteen turns and three wedges in the inner coil and twenty-six turns and one wedge in the outer coil; the round collar is nineteen mm thick, and the yoke and the shell are adopted from the design for quadrupole QSElO1. The main purposes of this mechanical study are to ensure that there are no excessive stresses in the cold mass under different loadings, to avoid the coils unloading from the collar at an excitation of 6500 A, to ensure a collar-to-yoke, line-to-line fit after welding the shell, and to ensure that the yoke midplane gaps are closed at an operating current of 6500 A. Therefore, the analyses performed include magnet assembly (collaring) to 69 MPa azimuthal stress at the inner coil pole and 55 MPa azimuthal stress at the outer coil pole; shell welding to 207 MPa azimuthal stress in the shell; magnet cooldown to 4.25 K; and Lorentz excitation at a current of 6500 A.
Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.
Ingalls, Brian; Mincheva, Maya; Roussel, Marc R
2017-07-01
A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
International Nuclear Information System (INIS)
Spiessl, Sabine; Becker, Dirk-Alexander
2017-06-01
Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation
Sensitivity Analysis of the Bone Fracture Risk Model
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model need enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
Sensitivity analysis of the evaporation module of the E-DiGOR model
AYDIN, Mehmet; KEÇECİOĞLU, Suzan Filiz
2010-01-01
Sensitivity analysis of the soil-water-evaporation module of the E-DiGOR (Evaporation and Drainage investigations at Ground of Ordinary Rainfed-areas) model is presented. The model outputs were generated using measured climatic data and soil properties. The first-order sensitivity formulas were derived to compute relative sensitivity coefficients. A change in the net solar radiation significantly affected potential evaporation from bare soils estimated by the Penman-Monteith equation. The se...
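First-order relative sensitivity coefficients of the kind derived in this abstract, S_i = (∂E/∂x_i)(x_i/E), can be approximated by central differences. The sketch below uses a hypothetical, much-simplified evaporation response; it is not the actual Penman-Monteith equation or the E-DiGOR code.

```python
import numpy as np

def relative_sensitivity(f, x0, i, h=1e-4):
    """Central-difference estimate of the first-order relative
    sensitivity coefficient S_i = (dE/dx_i) * (x_i / E)."""
    x_hi, x_lo = x0.copy(), x0.copy()
    dx = h * x0[i]
    x_hi[i] += dx
    x_lo[i] -= dx
    dEdx = (f(x_hi) - f(x_lo)) / (2.0 * dx)
    return dEdx * x0[i] / f(x0)

def evap(x):
    # Hypothetical, much-simplified evaporation response (NOT the
    # Penman-Monteith equation): linear in net radiation Rn, with a
    # mild temperature dependence.
    Rn, T = x
    return 0.408 * Rn * (1.0 + 0.01 * T)

x0 = np.array([12.0, 20.0])   # assumed Rn [MJ m-2 d-1] and T [deg C]
print(round(relative_sensitivity(evap, x0, 0), 3))   # 1.0: E is proportional to Rn
print(round(relative_sensitivity(evap, x0, 1), 3))   # 0.167
```

A relative coefficient of 1.0 means a 1% change in the input produces a 1% change in the output, which matches the abstract's finding that net solar radiation strongly drives the estimated potential evaporation.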
Sensitivity of PPI analysis to differences in noise reduction strategies.
Barton, M; Marecek, R; Rektor, I; Filip, P; Janousova, E; Mikl, M
2015-09-30
In some fields of fMRI data analysis, using correct methods for dealing with noise is crucial for achieving meaningful results. This paper provides a quantitative assessment of the effects of different preprocessing and noise filtering strategies on psychophysiological interactions (PPI) methods for analyzing fMRI data, where noise management has not yet been established. Both real and simulated fMRI data were used to assess these effects. Four regions of interest (ROIs) were chosen for the PPI analysis on the basis of their engagement during two tasks. PPI analysis was performed for 32 different preprocessing and analysis settings, which included data filtering with RETROICOR or no such filtering; different filtering of the ROI "seed" signal with a nuisance data-driven time series; and the involvement of these data-driven time series in the subsequent PPI GLM analysis. The extent of the statistically significant results was quantified at the group level using simple descriptive statistics. Simulated data were generated to assess the statistical improvement of different filtering strategies. We observed that different approaches for dealing with noise in PPI analysis yield differing results in real data. In simulated data, we found that RETROICOR, seed signal filtering and the addition of data-driven covariates to the PPI design matrix significantly improve results. We recommend the use of RETROICOR and data-driven filtering of the whole data, or alternatively, seed signal filtering with data-driven signals and the addition of data-driven covariates to the PPI design matrix.
Shotgun lipidomic analysis of chemically sulfated sterols compromises analytical sensitivity
DEFF Research Database (Denmark)
Casanovas, Albert; Hannibal-Bach, Hans Kristian; Jensen, Ole Nørregaard
2014-01-01
Shotgun lipidomics affords comprehensive and quantitative analysis of lipid species in cells and tissues at high throughput [1-5]. The methodology is based on direct infusion of lipid extracts by electrospray ionization (ESI) combined with tandem mass spectrometry (MS/MS) and/or high resolution F...... low ionization efficiency in ESI [7]. For this reason, chemical derivatization procedures including acetylation [8] or sulfation [9] are commonly implemented to facilitate ionization, detection and quantification of sterols for global lipidome analysis [1-3, 10].
Contribution to the sample mean plot for graphical and numerical sensitivity analysis
International Nuclear Information System (INIS)
Bolado-Lavin, R.; Castaings, W.; Tarantola, S.
2009-01-01
The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state dependent parameter meta-modelling.
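Sinclair's contribution-to-the-sample-mean (CSM) plot is straightforward to compute from a Monte Carlo sample. The following minimal sketch uses a made-up two-input model in place of the radionuclide transport benchmark: sort the sample by one input and accumulate that input's share of the output mean.

```python
import numpy as np

def csm_curve(x, y):
    """Contribution to the sample mean of y as a function of the
    empirical quantile of input x (Sinclair's CSM plot).  For a
    non-influential input the curve follows the diagonal."""
    order = np.argsort(x)
    frac = np.arange(1, len(x) + 1) / len(x)   # quantile of x
    csm = np.cumsum(y[order]) / y.sum()        # cumulated share of the mean
    return frac, csm

rng = np.random.default_rng(1)
n = 10_000
x1, x2 = rng.random(n), rng.random(n)
y = 5.0 * x1 + 0.1 * x2      # made-up model in which x1 dominates

f1, c1 = csm_curve(x1, y)
f2, c2 = csm_curve(x2, y)
# Maximum departure from the diagonal as a crude influence score:
print(np.max(np.abs(c1 - f1)) > 0.2)    # True: x1 is influential
print(np.max(np.abs(c2 - f2)) < 0.05)   # True: x2 is not
```

The statistical test proposed in the paper formalizes exactly this departure-from-the-diagonal idea; the sketch above only shows the qualitative version.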
Sensitive KIT D816V mutation analysis of blood as a diagnostic test in mastocytosis
DEFF Research Database (Denmark)
Kielsgaard Kristensen, Thomas; Vestergaard, Hanne; Bindslev-Jensen, Carsten
2014-01-01
The recent progress in sensitive KIT D816V mutation analysis suggests that mutation analysis of peripheral blood (PB) represents a promising diagnostic test in mastocytosis. However, there is a need for systematic assessment of the analytical sensitivity and specificity of the approach in order...... to establish its value in clinical use. We therefore evaluated sensitive KIT D816V mutation analysis of PB as a diagnostic test in an entire case-series of adults with mastocytosis. We demonstrate for the first time that by using a sufficiently sensitive KIT D816V mutation analysis, it is possible to detect...... the mutation in PB in nearly all adult mastocytosis patients. The mutation was detected in PB in 78 of 83 systemic mastocytosis (94%) and 3 of 4 cutaneous mastocytosis patients (75%). The test was 100% specific as determined by analysis of clinically relevant control patients who all tested negative. Mutation...
DEFF Research Database (Denmark)
Lesnikova, Iana; Lidang, Marianne; Hamilton-Dutoit, Steven
2010-01-01
ABSTRACT: Human papillomavirus (HPV) infection, and in particular infection with HPVs 16 and 18, is a central carcinogenic factor in the uterine cervix. We established and optimized a PCR assay for the detection and discrimination of HPV types 16 and 18 in archival formaldehyde fixed and paraffin...... embedded (FFPE) sections of cervical cancer. Tissue blocks from 35 cases of in situ or invasive cervical squamous cell carcinoma and surrogate FFPE sections containing the cell lines HeLa and SiHa were tested for HPV 16 and HPV 18 by conventional PCR using type specific primers, and for the housekeeping gene...... beta-actin. Using HPV 16 E7 primers, PCR products of the expected length were detected in 18 of 35 FFPE sections (51%). HPV 18 E7 specific sequences were detected in 3 of 35 FFPE sections (9%). In our experience, the PCR technique is a robust, simple and sensitive way of type specific detection...
Seismic hazard analysis. Application of methodology, results, and sensitivity studies
International Nuclear Information System (INIS)
Bernreuter, D.L.
1981-10-01
As part of the Site Specific Spectra Project, this report seeks to identify the sources of and minimize uncertainty in estimates of seismic hazards in the Eastern United States. Findings are being used by the Nuclear Regulatory Commission to develop a synthesis among various methods that can be used in evaluating seismic hazard at the various plants in the Eastern United States. In this volume, one of a five-volume series, we discuss the application of the probabilistic approach using expert opinion. The seismic hazard is developed at nine sites in the Central and Northeastern United States, and both individual experts' and synthesis results are obtained. We also discuss and evaluate the ground motion models used to develop the seismic hazard at the various sites, analyzing extensive sensitivity studies to determine the important parameters and the significance of uncertainty in them. Comparisons are made between probabilistic and real spectra for a number of Eastern earthquakes. The uncertainty in the real spectra is examined as a function of the key earthquake source parameters. In our opinion, the single most important conclusion of this study is that the use of expert opinion to supplement the sparse data available on Eastern United States earthquakes is a viable approach for determining estimated seismic hazard in this region of the country. (author)
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields-of-view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method using the new channel ranking system on CO2 retrieval.
Sensitivity Analysis of Reactor Regulating System for SMART
International Nuclear Information System (INIS)
Jeon, Yu Lim; Kang, Han Ok; Lee, Seong Wook; Park, Cheon Tae
2009-01-01
The integral reactor technology is one of the Small and Medium sized Reactor (SMR) technologies, which have recently come into the spotlight due to their suitability for various fields. SMART (System integrated Modular Advanced ReacTor), a small sized integral type PWR with a rated thermal power of 330 MWt, is one of the advanced SMRs. SMART, developed by the Korea Atomic Energy Research Institute (KAERI), has the capacity to provide 40,000 m3 per day of potable water and 90 MW of electricity (Chang et al., 2000). Figure 1 shows the SMART, which adopts a sensible mixture of new innovative design features and proven technologies aimed at achieving highly enhanced safety and improved economics. Design features contributing to a safety enhancement are basically inherent safety improving features and passive safety features. Fundamental thermal-hydraulic experiments were carried out during the design concept development to assure the fundamental behavior of the major concepts of the SMART systems. TASS/SMR is a suitable code for accident and performance analyses of SMART. In this paper, we propose a new power control logic for stable operating outputs of the Reactor Regulating System (RRS) of SMART. We analyzed the sensitivity of operating parameters for various operating conditions.
Regional sensitivity analysis using revised mean and variance ratio functions
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen
2014-01-01
The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
Sensitivity analysis of uranium solubility under strongly oxidizing conditions
International Nuclear Information System (INIS)
Liu, L.; Neretnieks, I.
1999-01-01
To evaluate the effect of geochemical conditions in the repository on the solubility of uranium under strongly oxidizing conditions, a mathematical model has been developed to determine the solubility, by utilizing a set of nonlinear algebraic equations to describe the chemical equilibria in the groundwater environment. The model takes into account the predominant precipitation-dissolution reactions, hydrolysis reactions and complexation reactions that may occur under strongly oxidizing conditions. The model also includes the solubility-limiting solids induced by the presence of carbonate, phosphate, silicate, calcium, and sodium in the groundwater. The thermodynamic equilibrium constants used in the solubility calculations are essentially taken from the NEA Thermochemical Data Base of Uranium, with some modification and some uranium minerals added, such as soddyite, rutherfordite, uranophane, uranyl orthophosphate, and becquerelite. By applying this model, the sensitivities of uranium solubility to variations in the concentrations of various groundwater component species are systematically investigated. The results show that the total analytical concentrations of carbonate, phosphate, silicate, and calcium in deep groundwater play the most important role in determining the solubility of uranium under strongly oxidizing conditions
Results of an integrated structure-control law design sensitivity analysis
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2013-01-01
applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems. The method is applied to a half car with a two...
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems. The method is applied to a half car with a two...
Sensitivity based reduced approaches for structural reliability analysis
Indian Academy of Sciences (India)
the system parameters and the natural frequencies. For these reasons a scientific and systematic approach is required to predict the probability of failure of a structure at the design stage. Probabilistic structural reliability analysis is one such approach. This can be implemented in conjunction with the stochastic finite element ...
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
An Ultra-Sensitive Method for the Analysis of Perfluorinated ...
In epidemiological research, it has become increasingly important to assess subjects' exposure to different classes of chemicals in multiple environmental media. It is common practice to aliquot limited volumes of samples into smaller quantities for specific trace level chemical analysis. A novel method was developed for the determination of 14 perfluorinated alkyl acids (PFAAs) in small volumes (10 mL) of drinking water using off-line solid phase extraction (SPE) pre-treatment followed by on-line pre-concentration on a WAX column before analysis by column-switching high performance liquid chromatography tandem mass spectrometry (HPLC-MS/MS). In general, large volumes (100 - 1000 mL) have been used for the analysis of PFAAs in drinking water. The current method requires approximately 10 mL of drinking water concentrated using an SPE cartridge and eluted with methanol. A large volume injection of the extract was introduced onto a column-switching HPLC-MS/MS using a mix-mode SPE column for the trace level analysis of PFAAs in water. The recoveries for most of the analytes in the fortified laboratory blanks ranged from 73±14% to 128±5%. The lowest concentration minimum reporting levels (LCMRL) for the 14 PFAAs ranged from 0.59 to 3.4 ng/L. The optimized method was applied to a pilot-scale analysis of a subset of drinking water samples from an epidemiological study. These samples were collected directly from the taps in the households of Ohio and Nor
Yang, Xiao-Jing; Sun, Shan-Shan
2017-09-01
Though the same types of complications were found in both elective cesarean section (ElCS) and emergency cesarean section (EmCS), the aim of this study was to compare the rates of maternal and fetal morbidity and mortality between ElCS and EmCS. Full-text articles concerning the maternal and fetal complications and outcomes of ElCS and EmCS were searched in multiple databases. Review Manager 5.0 was used for the meta-analysis, sensitivity analysis, and bias analysis. Funnel plots and Egger's tests were also applied with STATA 10.0 software to assess possible publication bias. In total, nine articles were included in this study. Among these articles, seven, three, and four studies addressed maternal complications, fetal complications, and fetal outcomes, respectively. The combined analyses showed that the rates of both maternal and fetal complications in EmCS were higher than those in ElCS. The rates of infection, fever, UTI (urinary tract infection), wound dehiscence, DIC (disseminated intravascular coagulation), and reoperation in postpartum women with EmCS were much higher than in those with ElCS. A higher infant mortality rate in EmCS was also observed. Emergency cesarean sections showed significantly more maternal and fetal complications and mortality than elective cesarean sections in this study. Plans should be worked out by obstetric practitioners to avoid post-operative complications.
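The Egger's test mentioned in this abstract regresses the standardised effect (effect / SE) on precision (1 / SE); a non-zero intercept indicates funnel-plot asymmetry and possible publication bias. A minimal sketch with fabricated illustrative numbers (not data from the included studies):

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardised effect (effect / SE) on precision (1 / SE).  A
    non-zero intercept indicates small-study effects."""
    res = stats.linregress(1.0 / ses, effects / ses)
    return res.intercept, res.slope

# Made-up standard errors for nine hypothetical studies:
ses = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50])
# Construct effects with a built-in small-study bias of 0.5*SE on top
# of a common true effect of 0.3 (illustration only, not real data):
effects = 0.3 + 0.5 * ses

intercept, slope = eggers_test(effects, ses)
print(round(intercept, 2), round(slope, 2))   # 0.5 0.3
```

Because the fabricated effects are exactly 0.3 + 0.5*SE, the regression recovers the bias term as the intercept (0.5) and the common effect as the slope (0.3); with real study data, one would also inspect the intercept's confidence interval.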
Which preventive interventions effectively enhance depressed mothers' sensitivity? A meta-analysis
Kersten, L.E.; Hosman, C.M.H.; Riksen-Walraven, J.M.A.; Doesum, K.T.M. van; Hoefnagels, C.C.J.
2011-01-01
Improving depressed mothers' sensitivity is assumed to be a key element in preventing adverse outcomes for children of such mothers. This meta-analysis examines the short-term effectiveness of preventive interventions in terms of enhancing depressed mothers' sensitivity toward their child and
Development of a methodology for analysis of the impact of modifying neutron cross sections
International Nuclear Information System (INIS)
Wenner, M. T.; Haghighat, A.; Adams, J. M.; Carlson, A. D.; Grimes, S. M.; Massey, T. N.
2004-01-01
Monte Carlo analysis of a Time-of-Flight (TOF) experiment can be utilized to examine the accuracy of nuclear cross section data. Accurate determination of these data is paramount in the characterization of reactor lifetime. We have developed a methodology to examine the impact of modifying the current cross section libraries available in ENDF-6 format (1) where deficiencies may exist, and have shown that it may be effective for examining the accuracy of nuclear cross section data. The new methodology has been applied to the iron scattering cross sections, and the use of the revised cross sections suggests that reactor pressure vessel fluence may be underestimated. (authors)
Reliability analysis of a sensitive and independent stabilometry parameter set.
Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Frequency-type and extreme-value parameters usually yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
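The reliability measures used in this study (ICC, SEM, MDC) can be computed directly from a subjects-by-trials matrix. The sketch below assumes the ICC(2,1) form of Shrout and Fleiss and uses synthetic data; the abstract does not specify which ICC variant the authors applied.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement (Shrout & Fleiss), from an (n subjects x k trials) array."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between trials
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(4)
true_scores = rng.normal(50.0, 10.0, size=30)                  # 30 subjects
trials = true_scores[:, None] + rng.normal(0.0, 2.0, (30, 2))  # 2 repeats

icc = icc_2_1(trials)
sem = trials.std(ddof=1) * np.sqrt(1.0 - icc)   # standard error of measurement
mdc95 = 1.96 * np.sqrt(2.0) * sem               # minimal detectable change (95%)
print(icc > 0.9, sem < mdc95)   # True True: low-noise repeats agree closely
```

MDC is always a fixed multiple (about 2.77) of SEM, which is why the study can report both from the same ICC computation.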
Development of Ultra-sensitive Laser Spectroscopic Analysis Technology
Energy Technology Data Exchange (ETDEWEB)
Cha, H. K.; Kim, D. H.; Song, K. S. (and others)
2007-04-15
Laser spectroscopic analysis technology has three distinct merits for detecting the various nuclides found in nuclear fields. High selectivity, which originates from the small bandwidth of tunable lasers, makes it possible to distinguish various kinds of isotopes and isomers. The high intensity of a focused laser beam makes it possible to analyze ultratrace amounts. Remote delivery of the laser beam improves the safety of workers exposed to dangerous environments. The technology can also be applied to remote sensing of environmental pollution.
Exploratory market structure analysis. Topology-sensitive methodology.
Mazanec, Josef
1999-01-01
Given the recent abundance of brand choice data from scanner panels, market researchers have neglected the measurement and analysis of perceptions. Heterogeneity of perceptions is still a largely unexplored issue in market structure and segmentation studies. Over the last decade, various parametric approaches toward modelling segmented perception-preference structures, such as combined MDS and Latent Class procedures, have been introduced. These methods, however, are not tailored for qualitative ...
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally sensitive, less sensitive to the sediment output, and insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all the results except the runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with a deviation of less than 10% during 2005-2010. These results have direct reference value for the AnnAGNPS model's parameter selection and calibration adjustment. The runoff simulation results for the study area also proved that the sensitivity analysis was practicable for parameter adjustment, showed adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's promotion in China.
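The perturbation method used in this study amounts to rerunning the model with one parameter nudged up and down while the others are held fixed, and normalizing the response change. A minimal sketch with a toy stand-in for the watershed response (the `toy_model` function and its parameter values are invented for illustration, not part of AnnAGNPS):

```python
def relative_sensitivity(model, params, name, delta=0.10):
    """Central-difference relative sensitivity index
    S = (dY/Y0) / (dx/x0) for one parameter, all others held fixed."""
    base = dict(params)
    x0 = base[name]
    y0 = model(**base)
    y_hi = model(**dict(base, **{name: x0 * (1 + delta)}))
    y_lo = model(**dict(base, **{name: x0 * (1 - delta)}))
    return ((y_hi - y_lo) / y0) / (2 * delta)

# Hypothetical stand-in for a watershed response: runoff ~ cn**2 / ls
def toy_model(cn, ls):
    return cn ** 2 / ls

params = {"cn": 70.0, "ls": 1.5}
for p in params:
    print(p, relative_sensitivity(toy_model, params, p))
```

Because the index is dimensionless, it lets parameters with very different units (curve number, slope length, soil erodibility) be ranked on one scale, which is how the categories above can be compared.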
Cross-sectional analysis of fouled SWRO membranes by STEM-EDS
Aubry, Cyril
2014-01-01
The intact cross-section of two fouled reverse osmosis membranes was characterized using a scanning transmission electron microscope (STEM) equipped with an electron energy dispersive spectroscope (EDS). A focused ion beam (FIB) was used to prepare a thin lamella of each membrane. These lamellas were then attached to a TEM grid for further STEM/EDS analysis. The foulant in sample A was mainly inorganic in nature and predominantly composed of alumino-silicate particles. These particles were surrounded by carbon at high concentrations, indicating the presence of organic materials. Iron was diffusely present in the cake layer and could have enhanced the fouling process. The cake layer of membrane B consisted mainly of organic matter (C, O, and N representing 95% of the total elemental composition) and was organized in thin parallel layers. Small concentrations of Si, F, Na, Mg, and Cl were detected inside the active layer and support layer of the membrane. Due to the high sensitivity of the cake layer of membrane A to the electron beam, STEM/EDS line analyses had to be performed over large areas. On the other hand, the cake layer of sample B was resistant to the electron beam, and the resolution of the STEM/EDS analysis was gradually improved until reaching 25 nm. © 2013 Elsevier B.V.
Design and analysis of a canal section for minimum water loss
Directory of Open Access Journals (Sweden)
Yousry Mahmoud Ghazaw
2011-12-01
Full Text Available Seepage and evaporation are the most serious forms of water loss in an irrigation canal network. Seepage loss depends on the channel geometry, while evaporation loss is proportional to the area of the free surface. In this paper, a methodology to determine the optimal canal dimensions for a particular discharge is developed. The nonlinear water loss function for the canal, which comprises seepage and evaporation loss, was derived. Two constraints (the minimum permissible velocity as a limit for sedimentation and the maximum permissible velocity as a limit for erosion of the canal) have been taken into consideration in the canal design procedure. Using Lagrange’s method of undetermined multipliers, the optimal canal dimensions were obtained for minimum water loss. A computer program was developed to carry out the design calculation for the optimal canal dimensions. The results are plotted in the form of a set of design charts. The proposed charts facilitate easy design of the optimal canal dimensions guaranteeing minimum water loss. Water loss from the canal section can be estimated from these charts without going through the conventional and cumbersome trial-and-error method. A sensitivity analysis has been included to demonstrate the impact of important parameters.
Kuramoto, S. Janet; Stuart, Elizabeth A.
2013-01-01
Although randomization is the gold standard for estimating causal relationships, many questions in prevention science are left to be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A
2013-12-01
Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.
International Nuclear Information System (INIS)
Cacuci, D.G.
1984-07-01
This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
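The adjoint idea described in this report can be illustrated on a much simpler steady linear model than the two-phase flow equations. In the hypothetical numpy sketch below, a single adjoint solve yields the response gradient that would otherwise require one forward solve per parameter; the matrices and vectors are arbitrary illustrative values, not drawn from the report.

```python
import numpy as np

# Steady linear model A(p) phi = s with response R = c^T phi.
# Forward sensitivity solves A dphi/dp = ds/dp - (dA/dp) phi for each p;
# the adjoint method solves A^T psi = c once and reuses psi for every p:
#   dR/dp = psi^T (ds/dp - (dA/dp) phi)
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
B  = np.array([[1.0, 0.0], [0.0, 2.0]])   # dA/dp (illustrative)
s  = np.array([1.0, 2.0])                  # source, independent of p
c  = np.array([1.0, 1.0])                  # response weights

def response(p):
    return c @ np.linalg.solve(A0 + p * B, s)

p = 0.5
phi = np.linalg.solve(A0 + p * B, s)       # forward solve
psi = np.linalg.solve((A0 + p * B).T, c)   # adjoint solve
dR_adjoint = -psi @ (B @ phi)              # ds/dp = 0 here

# Cross-check against a central finite difference
h = 1e-6
dR_fd = (response(p + h) - response(p - h)) / (2 * h)
print(dR_adjoint, dR_fd)
```

The payoff claimed in the report carries over: with many parameters but few responses, the adjoint route replaces one linear solve per parameter with one adjoint solve per response.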
DEFF Research Database (Denmark)
Bennike, Niels H; Zachariae, Claus; Johansen, Jeanne D
2017-01-01
BACKGROUND: For cosmetics, it is mandatory to label 26 fragrance substances, including all constituents of fragrance mix I (FM I) and fragrance mix II (FM II). Earlier reports have not included oxidized R-limonene [hydroperoxides of R-limonene (Lim-OOH)] and oxidized linalool [hydroperoxides of linalool (Lin-OOH)], and breakdown testing of FM I and FM II has mainly been performed in selected, mix-positive patients. OBJECTIVES: To report the prevalence of sensitization to the 26 fragrances, and to assess concomitant reactivity to FM I and/or FM II. METHODS: A cross-sectional study on consecutive dermatitis patients patch tested with the 26 fragrances and the European baseline series from 2010 to 2015 at a single university clinic was performed. RESULTS: Of 6004 patients, 940 (15.7%, 95%CI: 14.7-16.6%) were fragrance-sensitized. Regarding the single fragrances, most patients were sensitized to Lin...
International Nuclear Information System (INIS)
Pi Ting; Zhang Yunqing; Chen Liping
2012-01-01
Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates used to describe the motion of a multibody system has a great influence on the efficiency and accuracy of both the dynamic and the sensitivity analysis. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently utilized to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been developed recently and focuses on the modeling of beams and plates in large deformation problems, is extended to design sensitivity analysis of flexible multibody systems. The motion equations of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are applied for the sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to deduce the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, which has the advantage that the complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. Moreover, the dynamic and sensitivity equations formed with the proposed method can be efficiently integrated using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation method.
Janssen, R.; Rietveld, P.
1989-01-01
Inclusion of evaluation methods in decision support systems opens the way for extensive sensitivity analysis. In this article, new methods for sensitivity analysis are developed and applied to the siting of nuclear power plants in the Netherlands.
May Day: A computer code to perform uncertainty and sensitivity analysis. Manuals
International Nuclear Information System (INIS)
Bolado, R.; Alonso, A.; Moya, J.M.
1996-07-01
The computer program May Day was developed to carry out uncertainty and sensitivity analysis in the evaluation of radioactive waste storage. May Day was developed by the Polytechnical University of Madrid. (Author)
The application of sensitivity analysis to models of large scale physiological systems
Leonard, J. I.
1974-01-01
A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is also presented.
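The reduction of a nonlinear model to a linear sensitivity approximation, as mentioned in this abstract, can be sketched with a logistic population model. Everything below is an invented illustration, not the surveyed method itself: numerical partial derivatives give a first-order prediction of how the response shifts under small parameter changes.

```python
import math

def logistic(n0, r, K, t):
    """Closed-form logistic population model."""
    return K / (1.0 + (K / n0 - 1.0) * math.exp(-r * t))

def gradient(f, params, h=1e-6):
    """Central-difference partial derivatives w.r.t. each parameter."""
    g = {}
    for name in params:
        up = dict(params, **{name: params[name] + h})
        dn = dict(params, **{name: params[name] - h})
        g[name] = (f(**up) - f(**dn)) / (2 * h)
    return g

base = {"n0": 10.0, "r": 0.3, "K": 500.0, "t": 20.0}
y0 = logistic(**base)
g = gradient(logistic, base)

# First-order (linear) prediction of the response to a 1% change in r and K
dp = {"r": 0.003, "K": 5.0}
linear = y0 + sum(g[k] * dv for k, dv in dp.items())
actual = logistic(**{k: base[k] + dp.get(k, 0.0) for k in base})
print(linear, actual)
```

For small perturbations the linear surrogate tracks the nonlinear model closely, which is what makes such reduced models useful for rapid first-order calculations.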
DEFF Research Database (Denmark)
Prunescu, Remus Mihail; Sin, Gürkan
2014-01-01
This study presents the uncertainty and sensitivity analysis of a lignocellulosic enzymatic hydrolysis model considering both model and feed parameters as sources of uncertainty. The dynamic model is parametrized for accommodating various types of biomass, and different enzymatic complexes...
High Sensitivity, High Frequency Sensors for Hypervelocity Testing and Analysis, Phase II
National Aeronautics and Space Administration — This NASA Phase II SBIR program would develop high sensitivity, high frequency nanomembrane based surface sensors for hypervelocity testing and analysis on wind...
Vanderweele, Tyler J; Arah, Onyebuchi A
2011-01-01
Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for additive, risk-ratio and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with constant effect on the outcome across treatment levels.
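A minimal numeric sketch of the simplest special case of such a bias formula, for a binary unmeasured confounder with a constant additive effect on the outcome. All numbers are hypothetical, and the general formulas in the paper cover categorical and continuous variables and the risk-ratio and odds-ratio scales as well.

```python
def additive_bias(gamma, p_u_treated, p_u_control):
    """Bias of the crude additive effect estimate due to a binary
    unmeasured confounder U: gamma is the effect of U on the outcome
    (assumed constant across treatment levels), and p_u_* are the
    prevalences of U in the treated and control groups."""
    return gamma * (p_u_treated - p_u_control)

# Hypothetical numbers: crude risk difference 0.10; U raises outcome
# risk by 0.20 and is more common among the treated (0.6 vs 0.3).
crude = 0.10
bias = additive_bias(0.20, 0.6, 0.3)
adjusted = crude - bias
print(f"bias={bias:.3f}, bias-adjusted effect={adjusted:.3f}")
```

Sweeping `gamma` and the two prevalences over plausible ranges shows how strong the unmeasured confounding would have to be to explain away the observed effect.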
Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation
International Nuclear Information System (INIS)
Blonigan, Patrick J.; Wang, Qiqi
2014-01-01
Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters
High Sensitivity, High Frequency Sensors for Hypervelocity Testing and Analysis, Phase I
National Aeronautics and Space Administration — This NASA Phase I SBIR program would develop high sensitivity, high frequency nanomembrane (NM) based surface sensors for hypervelocity testing and analysis on wind...
Sensitivity analysis on ultimate strength of aluminium stiffened panels
DEFF Research Database (Denmark)
Rigo, P.; Sarghiuta, R.; Estefen, S.
2003-01-01
on ultimate strength. The goal has typically been to give guidance to the designer on how to predict the ultimate strength and to indicate what level of accuracy would be expected. This time, the target of this benchmark is to present reliable finite element methods to study the behaviour of axially compressed...... members analysed the same structure with a defined set of parameters and using different codes. It was expected that all the codes/models would predict the same results. In Phase B, to broaden the scope of the analysis, the different members (using their own model) performed FE analyses for a range of variation...
Sensitivity and uncertainty analysis for fission product decay heat calculations
International Nuclear Information System (INIS)
Rebah, J.; Lee, Y.K.; Nimal, J.C.; Nimal, B.; Luneville, L.; Duchemin, B.
1994-01-01
The calculated uncertainty in decay heat due to the uncertainty in the basic nuclear data given in the CEA86 library is presented. Uncertainties in the summation calculation arise from several sources: fission product yields, half-lives, and average decay energies. The correlations between the basic data are taken into account. The uncertainty analyses were performed for thermal-neutron-induced fission of U235 and Pu239 for the cases of burst fission and finite irradiation time. The decay heat calculated in this study is compared with experimental results and with a new calculation using the JEF2 library. (from authors) 6 figs., 19 refs
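First-order uncertainty propagation of the kind described in this abstract is commonly written as the "sandwich rule" var(H) = SᵀVS, where S holds the sensitivities of the response to each nuclear-data parameter and V their covariance matrix. A hedged sketch with invented sensitivity and covariance values:

```python
import numpy as np

def propagated_variance(sensitivities, covariance):
    """First-order 'sandwich' propagation: var(H) = S^T V S, where S holds
    the sensitivities of the response H to each nuclear-data parameter and
    V is the parameter covariance matrix."""
    s = np.asarray(sensitivities, dtype=float)
    v = np.asarray(covariance, dtype=float)
    return s @ v @ s

# Hypothetical example: two correlated decay-data parameters
S = np.array([1.0, 2.0])                       # dH/dp_i (illustrative)
V = np.array([[0.010, 0.004],
              [0.004, 0.040]])                 # covariances (illustrative)
var = propagated_variance(S, V)
print(f"variance = {var:.3f}, std dev = {np.sqrt(var):.3f}")
```

The off-diagonal terms are exactly where the correlations between yields, half-lives, and decay energies enter; dropping them (a diagonal V) changes the propagated variance.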
Sensitivity analysis of optimized nuclear energy density functional
International Nuclear Information System (INIS)
Mondal, C.; Agrawal, B.K.; De, J.N.; Samaddar, S.K.
2016-01-01
Since the exact nature of the nuclear force is unknown, the parameters of nuclear models have been optimized by fitting different kinds of data, e.g. the properties of finite nuclei as well as of neutron stars. As information about the exact correspondence between the parameters of a model and the fitted data is missing, this has led to a plethora of nuclear models. This information can be extracted by studying the correlations between different parameters and fitted data in all sorts of combinations within the framework of covariance analysis. This not only minimizes the number of parameters of the model, but also helps to restrict the unnecessary addition of redundant data
GCR Environmental Models I: Sensitivity Analysis for GCR Environments
Slaba, Tony C.; Blattnig, Steve R.
2014-01-01
Accurate galactic cosmic ray (GCR) models are required to assess crew exposure during long-duration missions to the Moon or Mars. Many of these models have been developed and compared to available measurements, with uncertainty estimates usually stated to be less than 15%. However, when the models are evaluated over a common epoch and propagated through to effective dose, relative differences exceeding 50% are observed. This indicates that the metrics used to communicate GCR model uncertainty can be better tied to exposure quantities of interest for shielding applications. This is the first of three papers focused on addressing this need. In this work, the focus is on quantifying the extent to which each GCR ion and energy group, prior to entering any shielding material or body tissue, contributes to effective dose behind shielding. Results can be used to more accurately calibrate model-free parameters and provide a mechanism for refocusing validation efforts on measurements taken over important energy regions. Results can also be used as references to guide future nuclear cross-section measurements and radiobiology experiments. It is found that GCR with Z>2 and boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. This finding is important given that most of the GCR models are developed and validated against Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer (ACE/CRIS) measurements taken below 500 MeV/n. It is therefore possible for two models to very accurately reproduce the ACE/CRIS data while inducing very different effective dose values behind shielding.
Directory of Open Access Journals (Sweden)
Peter Kiehl
1999-01-01
Full Text Available Eosinophilic granulocytes are major effector cells in inflammation. Extracellular deposition of toxic eosinophilic granule proteins (EGPs, but not the presence of intact eosinophils, is crucial for their functional effect in situ. As even recent morphometric approaches to quantify the involvement of eosinophils in inflammation have been only based on cell counting, we developed a new method for the cell‐independent quantification of EGPs by image analysis of immunostaining. Highly sensitive, automated immunohistochemistry was done on paraffin sections of inflammatory skin diseases with 4 different primary antibodies against EGPs. Image analysis of immunostaining was performed by colour translation, linear combination and automated thresholding. Using strictly standardized protocols, the assay was proven to be specific and accurate concerning segmentation in 8916 fields of 520 sections, well reproducible in repeated measurements and reliable over 16 weeks observation time. The method may be valuable for the cell‐independent segmentation of immunostaining in other applications as well.
Sensitivity analysis of FDS 6 results for nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Alvear, Daniel; Puente, Eduardo; Abreu, Orlando [Cantabria Univ., Santander (Spain). Group GIDAI - Fire Safety-Research and Technology; Peco, Julian [Consejo de Seguridad Nuclear, Madrid (Spain)
2015-12-15
The Spanish standard ''Instruction IS-30, Rev. 1'' (February 21, 2013) allows the new approaches of risk-informed performance-based design (PBD) for demonstrating the safe shutdown capability in case of fire in nuclear power plants. In this sense, fire computer models have become an interesting tool to study real fire scenarios. Such models use a set of input parameters that define the features of the physical domain, materials, radiation, turbulence, etc. This paper analyses the impact of the grid size and of different sub-models of the fire simulation code FDS, version 6, with the objective of evaluating and defining their relative weight in the final simulation results. For the grid size analysis, two scenarios of different scale were selected: the bench-scale test PENLIGHT and a large-scale test similar to Appendix B of NUREG-1934 (17 m x 10 m x 4.6 m, with an ignition source of 2 MW and 16 cable trays). For the sub-model analysis, the PRS-INT4 real-scale configuration of the INTEGRAL experimental campaign of the international OECD PRISME Project has been used. The results offer relevant data for users and show the critical parameters that must be selected properly to guarantee the quality of the simulations.
Sensitivity analysis of characteristics of spent mixed oxide fuel
International Nuclear Information System (INIS)
Hagura, Naoto; Yoshida, Tadashi
2008-01-01
The prediction error was evaluated for decay heat and nuclide generation in spent mixed oxide (MOX) fuels on the basis of the error files in JENDL-3.3. This computational analysis was performed using the SWAT code system, the ORIGEN2 code, and the ERRORJ code. The results of the nuclide generation error evaluation were compared with already published discrepancies between calculated and experimental values (C/E ratios) obtained from the analysis of post-irradiation examination (PIE) data. Though some C/E values, especially those of americium and curium isotopes, ranged from one half to two, the present error evaluation based on the error files gave nuclide generation errors of 10% or less. We conclude that the discrepancy between calculation and the PIE data is almost a factor of 5 larger than that evaluated from the covariance data in JENDL-3.3. Therefore, the practical error value of the total decay heat should be 20% or more on a 1 σ basis. (authors)
Sensitivity analysis in Gaussian Bayesian networks using a symbolic-numerical technique
International Nuclear Information System (INIS)
Castillo, Enrique; Kjaerulff, Uffe
2003-01-01
The paper discusses the problem of sensitivity analysis in Gaussian Bayesian networks. The algebraic structure of the conditional means and variances, as rational functions involving linear and quadratic functions of the parameters, are used to simplify the sensitivity analysis. In particular the probabilities of conditional variables exceeding given values and related probabilities are analyzed. Two examples of application are used to illustrate all the concepts and methods
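The conditional means and variances discussed in this paper follow from the standard Gaussian conditioning formulas. Below is a generic numpy sketch of that conditioning step, not tied to the paper's symbolic-numerical technique; the joint mean and covariance are invented.

```python
import numpy as np

def condition_gaussian(mu, sigma, idx_e, values):
    """Mean and covariance of the remaining variables of a multivariate
    Gaussian after observing the variables in idx_e at `values`:
    mu_c = mu_r + S_re S_ee^{-1} (v - mu_e),  S_c = S_rr - S_re S_ee^{-1} S_er."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    idx_e = list(idx_e)
    idx_r = [i for i in range(len(mu)) if i not in idx_e]
    s_rr = sigma[np.ix_(idx_r, idx_r)]
    s_re = sigma[np.ix_(idx_r, idx_e)]
    s_ee = sigma[np.ix_(idx_e, idx_e)]
    k = s_re @ np.linalg.inv(s_ee)             # regression (gain) matrix
    mu_c = mu[idx_r] + k @ (np.asarray(values, float) - mu[idx_e])
    sigma_c = s_rr - k @ s_re.T
    return mu_c, sigma_c

# Two jointly Gaussian variables; observe x2 = 1
mu = [0.0, 0.0]
sigma = [[2.0, 1.0],
         [1.0, 1.0]]
m, s = condition_gaussian(mu, sigma, idx_e=[1], values=[1.0])
print(m, s)   # conditional mean 1.0, conditional variance 1.0
```

Because the conditional mean is linear and the conditional variance quadratic in the entries of mu and sigma, perturbing a parameter and re-evaluating these formulas is exactly the kind of sensitivity analysis the paper simplifies algebraically.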
Directory of Open Access Journals (Sweden)
Emre Sert
2017-06-01
In summary, within the scope of this work, unlike previous studies, experiments involving physical tests (i.e. tilt table, fishhook and cornering) and numerical calculations are included. In addition, verification of the virtual model, a parametric sensitivity analysis, and a comparison of the virtual test with the physical test are performed. Because of the rigorous verification, sensitivity analysis and validation process, the results can be considered more reliable than those of previous studies.
Im, Hyungbin; Bae, Dae Sung; Chung, Jintai
2012-04-01
This paper presents a design sensitivity analysis of dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion which consider mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differential method. From the sensitivity equation along with the equations of motion, the time responses for the sensitivity analysis were obtained by using the Newmark time integration method. The sensitivities of the motor performances such as the electromagnetic torque, rotating speed, and vibration level were analyzed for the six design parameters of rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of the coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness. It was also found that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor resulted in better performances. The newly designed motor showed an improved torque, rotating speed, and vibration level.
Users manual for the FORSS sensitivity and uncertainty analysis code system
International Nuclear Information System (INIS)
Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.
1981-01-01
FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology
Sensitivity analysis of sandwich panels with rectangular openings
Chuda-Kowalska, Monika; Malendowski, Michal
2018-01-01
Sandwich panels, composed of thin metal sheets and a thick, anisotropic foam core, are considered in the paper. These lightweight structures are frequently weakened by cut-outs and various openings, which is the subject of the present analysis. Due to the complex behavior of such structures, there are no universally accepted design rules that take into account factors related to the kind of modifications that weaken the panel. In this paper, the influence of two factors on the mechanical response of a sandwich panel is considered, namely the location of the opening and the type of load. Additionally, the influence of a stiffener, in the form of a window frame, is considered. Finally, selected results obtained from FE analyses are compared with the experimental results carried out by the authors and some conclusions are drawn.
Automated Image Analysis in Undetermined Sections of Human Permanent Third Molars
DEFF Research Database (Denmark)
Bjørndal, Lars; Darvann, Tron Andre; Bro-Nielsen, Morten
1997-01-01
A computerized histomorphometric analysis was made of Karnovsky-fixed, hydroxethylmethacrylate embedded and toluidine blue/pyronin-stained sections to determine: (1) the two-dimensional size of the coronal odontoblasts given by their cytoplasm:nucleus ratio; (2) the ratio between the number of co...... sectioning profiles should be analysed. The use of advanced image processing on undemineralized tooth sections provides a rational foundation for further work on the reactions of the odontoblasts to external injuries including dental caries....
Prevalence and Causes of Cesarean Section in Iran: Systematic Review and Meta-Analysis
AZAMI-AGHDASH, Saber; GHOJAZADEH, Morteza; DEHDILANI, Nima; MOHAMMADI, Marzieh; ASL AMIN ABAD, Ramin
2014-01-01
Abstract Unfortunately, the prevalence of cesarean section has increased in recent years. Since awareness of the prevalence and its causes is essential for planning effective interventions, this study was designed and conducted as a systematic review of the prevalence and causes of cesarean section in Iran. In this meta-analysis, the required information was collected using several keywords: cesarean section rate, cesarean section prevalence, delivery, childhood, childbirth, r...
Höllering, Simon; Wienhöfer, Jan; Ihringer, Jürgen; Samaniego, Luis; Zehe, Erwin
2018-01-01
Diagnostics of hydrological models are pivotal for a better understanding of catchment functioning, and the analysis of dominating model parameters plays a key role for region-specific calibration or parameter transfer. A major challenge in the analysis of parameter sensitivity is the assessment of both temporal and spatial differences of parameter influences on simulated streamflow response. We present a methodological approach for global sensitivity analysis of hydrological models. The multilevel approach is geared towards complementary forms of streamflow response targets, and combines sensitivity analysis directed to hydrological fingerprints, i.e. temporally independent and temporally aggregated characteristics of streamflow (INDPAS), with the conventional analysis of the temporal dynamics of parameter sensitivity (TEDPAS). The approach was tested in 14 mesoscale headwater catchments of the Ruhr River in western Germany using simulations with the spatially distributed hydrological model mHM. The multilevel analysis with diverse response characteristics allowed us to pinpoint parameter sensitivity patterns much more clearly than with TEDPAS alone. It was not only possible to identify two dominating parameters, for soil moisture dynamics and evapotranspiration, but we could also disentangle the role of these and other parameters with reference to different streamflow characteristics. The combination of TEDPAS and INDPAS further allowed us to detect regional differences in parameter sensitivity and in simulated hydrological functioning, despite the rather small differences in the hydroclimatic and topographic setting of the Ruhr headwaters.
Sensitivity analysis methods and a biosphere test case implemented in EIKOS
International Nuclear Information System (INIS)
Ekstroem, P.A.; Broed, R.
2006-05-01
Computer-based models can be used to approximate real life processes. These models are usually based on mathematical equations, which are dependent on several variables. The predictive capability of models is therefore limited by the uncertainty in the values of these variables. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation. Sensitivity analysis is therefore an essential tool in simulation modelling and for performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation are often used, for example correlation or linear regression coefficients. These methods work well for linear models, but for non-linear models their sensitivity estimations are not accurate. Models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods, which can cope with linear, non-linear, as well as non-monotone problems, have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman Rank Correlation Coefficient (RCC), Partial (Rank) Correlation Coefficients (PCC), Standardized (Rank) Regression Coefficients (SRC), Sobol' method, Jansen's alternative, Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method and the Smirnov and the Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked
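The simplest measures in the list above, the Pearson CC and Spearman RCC, can be sketched as follows. This is an illustrative stand-in, not EIKOS code (EIKOS is written in Matlab); the toy model, sample size and seed are assumptions. For a monotone but non-linear input, the rank-based measure recovers a stronger sensitivity than the linear one, which is the motivation given in the abstract.

```python
# Correlation-based sensitivity measures from a Monte Carlo sample of y = f(x).
import random
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def ranks(v):
    # rank transform (no tie handling needed for continuous samples)
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    # Spearman RCC = Pearson CC on rank-transformed data
    return pearson(ranks(x), ranks(y))

random.seed(1)
x1 = [random.uniform(0, 1) for _ in range(500)]
x2 = [random.uniform(0, 1) for _ in range(500)]
y = [a ** 3 + 0.1 * b for a, b in zip(x1, x2)]  # non-linear in x1, weak in x2

cc1, rcc1 = pearson(x1, y), spearman(x1, y)   # x1: rank measure > linear measure
cc2, rcc2 = pearson(x2, y), spearman(x2, y)   # x2: both small
```

For the non-linear but monotone input x1, rcc1 exceeds cc1, illustrating why rank-based measures are preferred for non-linear (monotone) models.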
Study plan for the sensitivity analysis of the Terrain-Responsive Atmospheric Code (TRAC)
International Nuclear Information System (INIS)
Restrepo, L.F.; Deitesfeld, C.A.
1987-01-01
Rocky Flats Plant, Golden, Colorado is presently developing a computer code to model the dispersion of potential or actual releases of radioactive or toxic materials to the environment, along with the public consequences from these releases. The model, the Terrain-Responsive Atmospheric Code (TRAC), considers several complex features which could affect the overall dispersion and consequences. To help validate TRAC, a sensitivity analysis is being planned to determine how sensitive the model's solutions are to input variables. This report contains a brief description of the code, along with a list of tasks and resources needed to complete the sensitivity analysis
International Nuclear Information System (INIS)
Lee, Tae Hee; Yoo, Jung Hun; Choi, Hyeong Cheol
2002-01-01
A finite element package is often used as a daily design tool by engineering designers in order to analyze and improve a design. The finite element analysis can provide the responses of a system for given design variables. Although finite element analysis can provide the structural behavior for given design variables quite well, it cannot provide enough information to improve the design, such as design sensitivity coefficients. Design sensitivity analysis is an essential step to predict the change in responses due to a change in design variables and to optimize a system with the aid of gradient-based optimization techniques. To develop a numerical method of design sensitivity analysis, analytical derivatives based on analytical differentiation of the continuous or discrete finite element equations are effective, but they are difficult to obtain because internal information of the commercial finite element package, such as shape functions, is not accessible. Therefore, design sensitivity analysis outside of the finite element package is necessary for practical application in an industrial setting. In this paper, the semi-analytic method for design sensitivity analysis is used for the development of a design sensitivity module outside of the commercial finite element package ANSYS. The direct differentiation method is employed to compute the design derivatives of the response, and the pseudo-load for design sensitivity analysis is effectively evaluated by using the design variation of the related internal nodal forces. In particular, an effective method for stress and nonlinear design sensitivity analyses that is independent of the commercial finite element package is also suggested. Numerical examples are presented to show the accuracy and efficiency of the developed method and to provide insights for implementation of the suggested method in other commercial finite element packages
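A minimal sketch of the semi-analytic idea described above, assuming a toy two-spring structure rather than the paper's ANSYS module (material data, geometry and load are invented for the example): the pseudo-load is formed by finite-differencing the stiffness matrix, and one extra solve with the nominal stiffness gives the displacement sensitivity, which can be checked against an overall finite difference of the displacement.

```python
# Semi-analytic design sensitivity: K du/db = -(dK/db) u, with dK/db
# approximated by finite differences of the assembled stiffness matrix.

def stiffness(A1):
    # 2-DOF series bar: ground -- k1(A1) -- node 1 -- k2 -- node 2
    E, L, A2 = 200e9, 1.0, 1e-4
    k1, k2 = E * A1 / L, E * A2 / L
    return [[k1 + k2, -k2], [-k2, k2]]

def solve2(K, f):
    # direct 2x2 linear solve
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * f[0] - K[0][1] * f[1]) / det,
            (K[0][0] * f[1] - K[1][0] * f[0]) / det]

A1, dA = 2e-4, 1e-9          # design variable (area of spring 1) and FD step
F = [0.0, 1000.0]            # load on node 2
K = stiffness(A1)
u = solve2(K, F)             # nominal displacements

# pseudo-load -(dK/db) u, with dK/db from a forward difference of K
Kp = stiffness(A1 + dA)
dKdb = [[(Kp[i][j] - K[i][j]) / dA for j in range(2)] for i in range(2)]
pseudo = [-sum(dKdb[i][j] * u[j] for j in range(2)) for i in range(2)]
dudb = solve2(K, pseudo)     # semi-analytic sensitivity du/dA1

# verification: overall finite difference of the displacement itself
u_p = solve2(stiffness(A1 + 1e-8), F)
fd = [(a - b) / 1e-8 for a, b in zip(u_p, u)]
```

For this linear toy case the analytic value is du/dA1 = -F L / (E A1^2) = -0.125 m per m^2 at both nodes, and the semi-analytic and overall-finite-difference results agree with it.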
Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.
2017-12-01
The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea state models for Bragg backscatter. We also investigate the impacts of including clutter effects due to field-aligned backscatter from small scale ionization structures at varied levels of severity as defined by the climatological WideBand Model (WBMOD). In the absence of additional user provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).
Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.
2001-01-01
This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis
Iordanov, Tzvetelin D; Schenter, Gregory K; Garrett, Bruce C
2006-01-19
A sensitivity analysis of bulk water thermodynamics is presented in an effort to understand the relation between qualitative features of molecular potentials and the properties that they predict. The analysis is incorporated in molecular dynamics simulations and investigates the sensitivity of the Helmholtz free energy, internal energy, entropy, heat capacity, pressure, thermal pressure coefficient, and static dielectric constant to components of the potential rather than the parameters of a given functional form. The sensitivities of the properties are calculated with respect to the van der Waals repulsive and attractive parts, plus the short- and long-range Coulomb parts, of three four-site empirical water potentials: TIP4P, Dang-Chang and TTM2R. The polarization sensitivity is calculated for the polarizable Dang-Chang and TTM2R potentials. This new type of analysis allows direct comparisons of the sensitivities for different potentials that use different functional forms. The analysis indicates that all investigated properties are most sensitive to the van der Waals repulsive, the short-range Coulomb and the polarization components of the potentials. When polarization is included in the potentials, the magnitude of the sensitivity of the Helmholtz free energy, internal energy, and entropy with respect to this part of the potential is comparable in magnitude to the other electrostatic components. In addition, similarities in the trends of observed sensitivities for nonpolarizable and polarizable potentials lead to the conclusion that the complexity of the model is not of critical importance for the calculation of these thermodynamic properties for bulk water. The van der Waals attractive and the long-range Coulomb sensitivities are relatively small for the entropy, heat capacity, thermal pressure coefficient and the static dielectric constant, while small changes in any of the potential contributions will significantly affect the pressure. The analysis suggests a procedure
Directory of Open Access Journals (Sweden)
Stefan Doering
Full Text Available The continuous exposure to inorganic mercury vapour in artisanal small-scale gold mining (ASGM) areas leads to chronic health problems. It is therefore essential to have a quick but reliable risk-assessment tool to diagnose chronic inorganic mercury intoxication. This study re-evaluates the state-of-the-art toolkit to diagnose chronic inorganic mercury intoxication by analysing data from multiple pooled cross-sectional studies. The primary research question aims to reduce the currently used set of indicators without essentially affecting the capability to diagnose chronic inorganic mercury intoxication. In addition, a sensitivity analysis is performed on established biomonitoring exposure limits for mercury in blood, hair, urine and urine adjusted by creatinine, where the biomonitoring exposure limits are compared to thresholds most associated with chronic inorganic mercury intoxication in artisanal small-scale gold mining. Health data from miners and community members in Indonesia, Tanzania and Zimbabwe were obtained as part of the Global Mercury Project and pooled into one dataset together with the biomarkers mercury in urine, blood and hair. The individual prognostic impact of the indicators on the diagnosis of mercury intoxication is quantified using logistic regression models. The selection is performed by a stepwise forward/backward selection. Different models are compared based on the Bayesian information criterion (BIC), and Cohen's kappa is used to evaluate the level of agreement between the diagnosis of mercury intoxication based on the currently used set of indicators and the result based on our reduced set of indicators. The sensitivity analysis of biomarker exposure limits of mercury is based on a sequence of chi-square tests. The variable selection in logistic regression reduced the number of medical indicators from thirteen to ten in addition to the biomarkers. The estimated level of agreement using ten of thirteen medical indicators
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model
International Nuclear Information System (INIS)
Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.
2016-01-01
Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
DEFF Research Database (Denmark)
Ferrari, A.; Gutierrez, S.; Sin, Gürkan
2016-01-01
A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using...... sensitivity analysis of inputs/parameters, and uncertainty analysis to estimate confidence intervals on parameters and model predictions (error propagation). Variance based sensitivity analysis (Sobol's method) was used to quantify the influence of inputs on the final powder moisture as the model output...... at chamber inlet air (variation > 100%). The sensitivity analysis results suggest exploring improvements in the current control (Proportional Integral Derivative) for moisture content at concentrate chamber feed in order to reduce the output variance. It is also confirmed that humidity control at chamber...
Results of an integrated structure/control law design sensitivity analysis
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
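The predict-then-validate idea in this abstract can be illustrated on a scalar LQR problem. This is a hedged stand-in, not the paper's aeroservoelastic LQG model: the plant, weights and perturbation size are invented, and the gain sensitivity is obtained by finite differences rather than the analytical sensitivity equations the paper derives. The structure of the comparison is the same: predict the re-optimized gain from a sensitivity, then recompute it exactly.

```python
# Scalar LQR: plant dx/dt = a x + b u, cost integral of (q x^2 + r u^2).
import math

def lqr_gain(a, b=1.0, q=1.0, r=1.0):
    # scalar algebraic Riccati equation: (b^2/r) p^2 - 2 a p - q = 0
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r  # optimal state-feedback gain K

a0, da = -1.0, 0.2               # nominal plant pole and its perturbation
k0 = lqr_gain(a0)
dkda = (lqr_gain(a0 + 1e-7) - k0) / 1e-7   # sensitivity of the optimal gain

k_pred = k0 + dkda * da          # first-order prediction of re-optimized gain
k_true = lqr_gain(a0 + da)       # recomputed optimal gain for validation
# k_pred tracks k_true far better than the unchanged nominal gain k0 does,
# which is the point made about integrated design sensitivity in the abstract.
```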
Energy Technology Data Exchange (ETDEWEB)
Blasques, J.P.
2012-02-15
The BEam Cross section Analysis Software - BECAS - is a group of Matlab functions used for the analysis of the stiffness and mass properties of beam cross sections. The report presents BECAS' code and user's guide. (LN)
Failure and sensitivity analysis of a reconfigurable vibrating screen using finite element analysis
Directory of Open Access Journals (Sweden)
Boitumelo Ramatsetse
2017-10-01
Full Text Available In mineral processing industries, vibrating screens operate under high structural loading and continuous vibrations. This may result in high strain rates, which may often lead to structural failure or damage to the screen. In order to lessen the possibility of failure occurring, theories and techniques for analyzing machine structures are investigated and applied to perform a sensitivity study of a newly developed vibrating screen. Structural strength and stability of a vibrating screen are essential to ensure that failure does not occur during production. In this paper a finite element analysis (FEA) of a reconfigurable vibrating screen (RVS) is carried out to determine whether the structure will perform as desired under extreme working conditions at the different configurations of 305 mm × 610 mm, 305 mm × 1220 mm and 610 mm × 1220 mm. This process is aimed at eliminating unplanned shutdowns and minimizing the maintenance cost of the equipment. Each component of the screen structure is analyzed separately, and stress and displacement parameters are determined based on dynamic analysis. In addition, a modal analysis was carried out for the first three modes, at frequencies of 18.756 Hz, 32.676 Hz and 39.619 Hz, respectively. The results from the analysis showed weak points on the side plates of the screen structure. Further improvements were incorporated to effectively optimize the RVS structure after an industrial investigation of similar machines.
Restructuring of burnup sensitivity analysis code system by using an object-oriented design approach
International Nuclear Information System (INIS)
Kenji, Yokoyama; Makoto, Ishikawa; Masahiro, Tatsumi; Hideaki, Hyoudou
2005-01-01
A new burnup sensitivity analysis code system was developed with help from object-oriented techniques and written in the Python language. It was confirmed that these techniques are powerful for supporting complex numerical calculation procedures such as reactor burnup sensitivity analysis. The new burnup sensitivity analysis code system PSAGEP was restructured from a complicated old code system and reborn as a user-friendly code system which can calculate the sensitivity coefficients of nuclear characteristics considering the multicycle burnup effect based on the generalized perturbation theory (GPT). A new encapsulation framework for conventional codes written in Fortran was developed. This framework supported restructuring the software architecture of the old code system by hiding implementation details, and allowed users of the new code system to easily calculate the burnup sensitivity coefficients. The framework can be applied to other development projects since it is carefully designed to be independent of PSAGEP. Numerical results for the burnup sensitivity coefficients of a typical fast breeder reactor were given with components based on GPT, and the multicycle burnup effects on the sensitivity coefficients were discussed. (authors)
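The encapsulation pattern described above can be sketched as follows. PSAGEP's actual classes are not public, so every name here is invented for illustration, and the legacy Fortran executable is replaced by a trivial in-process stand-in "solver"; a real wrapper would invoke the binary with subprocess and parse its output file the same way.

```python
# Encapsulating a legacy fixed-format code behind a clean Python interface:
# callers only see run(case) -> dict; deck writing and output parsing are hidden.

class LegacyCodeWrapper:
    """Framework base: subclasses map clean calls onto legacy file formats."""
    def run(self, case):
        deck = self._write_input(case)   # legacy fixed-format input deck
        raw = self._execute(deck)        # would invoke the Fortran binary
        return self._parse_output(raw)   # back to Python data structures

class BurnupSensitivityCode(LegacyCodeWrapper):
    def _write_input(self, case):
        # stand-in for a fixed-format (8-char keyword, E12.5 value) input deck
        return "\n".join(f"{k:8s}{v:12.5e}" for k, v in sorted(case.items()))

    def _execute(self, deck):
        # A real framework would call subprocess.run([fortran_exe, deck_path]).
        # Here a trivial stand-in "solver" echoes twice the first value.
        sigma = float(deck.splitlines()[0][8:])
        return f"SENS {2.0 * sigma:.6e}"

    def _parse_output(self, raw):
        return {"sensitivity": float(raw.split()[1])}

code = BurnupSensitivityCode()
result = code.run({"capture": 0.25, "fission": 1.3})
```

The design point mirrors the abstract: the deck format and parsing live in one subclass, so swapping the old code system for a new one changes no driver scripts.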
DEFF Research Database (Denmark)
Rosendahl, A; Kasch, Helge
Background and aims: A major proportion of spinal cord injured subjects (SCIS) suffers from chronic pain, the majority with neuropathic pain described as shooting, burning and stabbing. Neurological examination reveals signs of central sensitization (CS), e.g. allodynia and hyperalgesia. CS plays...... an important role in maintained neuropathic pain conditions and may lead to or be induced by analgesics. Medication-overuse headaches (MOH) alter CNS pain processing systems, and the situation is reversed after discontinuation of headache medication. Aim: To determine the occurrence of CS and conditions...... pressure algometry, Von Frey filaments and pinprick test. Patients complete the McGill Pain Questionnaire and the International SCI pain data-set. All participants undergo examination of the Pressure Pain Detection Threshold, Pressure Pain Tolerance Threshold, Mechanical Detection Threshold, and Wind...
Directory of Open Access Journals (Sweden)
Milad Farzadi
2013-02-01
Full Text Available Background and Aims: Different mechanisms have been developed for connecting the abutment to the implant. One of the most popular mechanisms is the Tapered Integrated Screw (TIS), which is a Tapered Interference Fit (TIF) with a screw integrated at the bottom. The aim of this study was to investigate the mechanism of TIS and the factors affecting its design and implementation using an analytic method. Materials and Methods: Relevant equations were developed to predict tightening and loosening torques, contact pressure and preloads with and without bone tissue. The efficiency is defined as the ratio of the loosening torque to the tightening torque. The effects of changes in the elastic modulus and thickness of the bone on the operation of this mechanism were investigated. Results: In this study, 14 independent parameters, such as taper angle, friction coefficient, and abutment and implant geometry, that affect the performance of the TIS mechanism were presented. The role of some factors in the performance of the ITI implant was shown using sensitivity analysis. Conclusion: It was shown that the friction coefficient, contact length, and implant radius play major roles in the tightening and loosening torques and the efficiency of the mechanism. Furthermore, the results revealed that changes in the elastic modulus and thickness of the bone influenced the efficiency of the mechanism by less than 15%.
Analytical methods for analysis of neutron cross sections of amino acids and proteins
Energy Technology Data Exchange (ETDEWEB)
Voi, Dante L.; Ferreira, Francisco de O.; Nunes, Rogerio Chaffin; Carvalheira, Luciana, E-mail: dante@ien.gov.br, E-mail: fferreira@ien.gov.br, E-mail: Chaffin@ien.gov.br, E-mail: luciana@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Rocha, Hélio F. da, E-mail: helionutro@gmail.com.br [Universidade Federal do Rio de Janeiro (IPPMG/UFRJ), Rio de Janeiro, RJ (Brazil). Instituto de Pediatria
2017-07-01
Two unpublished analytical processes were developed at IEN-CNEN-RJ for the analysis of neutron cross sections of chemical compounds and complex molecules: the method of data parceling and grouping (P and G) and the method of data equivalence and similarity (E and S) of cross sections. The former allows the division of a complex compound or molecule so that the parts can be manipulated to construct a value of the neutron cross section for the compound or the entire molecule. The second method allows one to obtain, by comparison, values of neutron cross sections of specific parts of the compound or molecule, such as the amino acid radicals or their parts. The processes were tested on the determination of neutron cross sections of the 20 human amino acids, and a small database was built for future use in the construction of neutron cross sections of proteins and other components of human cells, as well as in other industrial applications. (author)
Energy use pattern and sensitivity analysis of rice production: A case ...
African Journals Online (AJOL)
Energy use pattern and sensitivity analysis of rice production: A case study of Guilane province of Iran. ... Because of the direct links between energy and crop yields, and food supplies, rice energy analysis is essential. The objective of this study was to evaluate ... to be 1.29. Key Words: Energy ratio, fuel, renewable energy ...
DEFF Research Database (Denmark)
Price, Jason Anthony; Nordblad, Mathias; Woodley, John
2014-01-01
This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...
Sensitivity and detection limit analysis of silicon nanowire bio(chemical) sensors
Chen, S.; van den Berg, Albert; Carlen, Edwin
2015-01-01
This paper presents an analysis of the sensitivity and detection limit of silicon nanowire biosensors using an analytical model in combination with I-V and current noise measurements. The analysis shows that the limit of detection (LOD) and signal to noise ratio (SNR) can be optimized by determining
Penafiel, Johanne; Hesketh, Amelia V; Granot, Ori; Scott McIndoe, J
2016-10-04
Electron ionization (EI) is a reliable mass spectrometric method for the analysis of the vast majority of thermally stable and volatile compounds. In direct EI-MS, the sample is placed into the probe and introduced to the source. For air- and moisture-sensitive organometallic complexes, the sample introduction step is critical. A small quantity must be briefly exposed to the atmosphere, during which time decomposition can occur. Here we present a simple tool that allows convenient analysis of air- and moisture-sensitive organometallic species by direct probe methods: a small purge-able glove chamber affixed to the front end of the mass spectrometer. Using the upgraded mass spectrometer, we successfully characterized a series of air- and moisture-sensitive organometallic complexes, ranging from mildly to very air-sensitive.
International Nuclear Information System (INIS)
Nimura, Yoshinori; Kumagai, Ken; Kouzu, Yoshinao; Higo, Morihiro; Kato, Yoshikuni; Seki, Naohiko; Yamada, Shigeru
2005-01-01
In order to identify a set of genes related to the radiation sensitivity of squamous cell carcinoma (SCC) and establish a predictive method, we compared expression profiles of radio-sensitive/radio-resistant SCC cell lines, using the in-house cDNA microarray consisting of 2,201 human genes derived from full-length enriched SCC cDNA libraries and the Human oligo chip 30 K (Hitachi Software Engineering). Surviving fractions (SF) after heavy-ion irradiation were calculated by colony formation assay. Three pairs (TE2-TE13, YES5-YES6, and HSC3-HSC2), sensitive (SF1 0.6), were selected for the microarray analysis. The results of the cDNA microarray analysis showed that 20 genes in resistant cell lines and 5 genes in sensitive cell lines were up-regulated more than 1.5-fold compared with the sensitive and resistant cell lines, respectively. The expression profiles of fourteen of the 25 genes were confirmed by real-time polymerase chain reaction (PCR). Twenty-seven genes identified by the Human oligo chip 30 K are candidate markers to distinguish radio-sensitive from radio-resistant cell lines. These results suggest that the isolated 27 genes are candidates that might be used as specific molecular markers to predict radiation sensitivity. (author)
Kamasani, Swapna; Akula, Sravani; Sivan, Sree Kanth; Manga, Vijjulatha; Duyster, Justus; Vudem, Dashavantha Reddy; Kancha, Rama Krishna
2017-05-01
The ABL kinase inhibitor imatinib has been used as front-line therapy for Philadelphia-positive chronic myeloid leukemia. However, a significant proportion of imatinib-treated patients relapse due to occurrence of mutations in the ABL kinase domain. Although inhibitor sensitivity for a set of mutations was reported, the role of less frequent ABL kinase mutations in drug sensitivity/resistance is not known. Moreover, recent reports indicate distinct resistance profiles for second-generation ABL inhibitors. We thus employed a computational approach to predict drug sensitivity of 234 point mutations that were reported in chronic myeloid leukemia patients. Initial validation analysis of our approach using a panel of previously studied frequent mutations indicated that the computational data generated in this study correlated well with the published experimental/clinical data. In addition, we present drug sensitivity profiles for remaining point mutations by computational docking analysis using imatinib as well as next generation ABL inhibitors nilotinib, dasatinib, bosutinib, axitinib, and ponatinib. Our results indicate distinct drug sensitivity profiles for ABL mutants toward kinase inhibitors. In addition, drug sensitivity profiles of a set of compound mutations in ABL kinase were also presented in this study. Thus, our large scale computational study provides comprehensive sensitivity/resistance profiles of ABL mutations toward specific kinase inhibitors.
A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.
2013-12-01
Effective and efficient parameter sensitivity analysis methods are crucial for understanding the behaviour of complex environmental models and for the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
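A minimal sketch of the DELSA idea – local, derivative-based first-order indices evaluated at many sample points so that sensitivity can be mapped across the parameter space. The two-parameter outflow model and the unit prior variances below are illustrative assumptions, not the paper's hydrologic models:

```python
import numpy as np

def delsa(f, samples, prior_var, eps=1e-6):
    """DELSA-style first-order indices: at each sample point, take local
    finite-difference derivatives, scale them by prior parameter variances,
    and normalise so the indices sum to one per point."""
    indices = []
    for x in samples:
        y0 = f(x)
        grad = np.array([(f(x + eps * np.eye(len(x))[j]) - y0) / eps
                         for j in range(len(x))])
        contrib = grad ** 2 * prior_var      # first-order variance contribution
        indices.append(contrib / contrib.sum())
    return np.array(indices)

# Hypothetical two-parameter reservoir outflow: q = k * s**1.5
q = lambda p: p[0] * p[1] ** 1.5
S = delsa(q, np.array([[1.0, 1.0], [1.0, 4.0]]), prior_var=np.array([1.0, 1.0]))
# sensitivity shifts across the parameter space:
# S[0] ~ [0.31, 0.69] at (1, 1) but S[1] ~ [0.88, 0.12] at (1, 4)
```

The point of evaluating the same local index at many base points, rather than once, is visible even here: which parameter dominates depends on where in the parameter space one looks.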
Structure and sensitivity analysis of individual-based predator–prey models
International Nuclear Information System (INIS)
Imron, Muhammad Ali; Gergs, Andre; Berger, Uta
2012-01-01
The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening technique with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structures and sensitivity-analysis results of two models – the Sumatran tiger model, Panthera Population Persistence (PPP), and the Notonecta foraging model (NFM) – were compared. Both models are based on a general predation cycle and are designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and in the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential: the growth rate of prey and the hunting radius of tigers in the PPP model, and the attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and of screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in the tiger and backswimmer models. ► The two individual-based models (IBM) differ in their space formulations. ► In both models, foraging distance is among the sensitive parameters. ► The Morris method is applicable for the sensitivity analysis even of complex IBMs.
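The elementary-effects screening used above can be sketched in a few lines: perturb one parameter at a time from random base points and average the absolute effects. The toy kill-rate model and parameter bounds below are illustrative, not the PPP or NFM models:

```python
import numpy as np

def morris_mu_star(f, bounds, r=20, delta=0.1, seed=0):
    """Crude Morris screening: r one-at-a-time perturbations from random
    base points; mu* = mean absolute elementary effect per parameter."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    ee = np.zeros((r, k))
    for t in range(r):
        # base point chosen so x + delta stays inside the bounds
        x0 = np.array([rng.uniform(lo, hi - delta) for lo, hi in bounds])
        y0 = f(x0)
        for i in range(k):
            x = x0.copy()
            x[i] += delta
            ee[t, i] = (f(x) - y0) / delta   # elementary effect of parameter i
    return np.abs(ee).mean(axis=0)

# Toy kill-rate model in which the third parameter is inert
kill_rate = lambda p: p[0] * p[1] + 0.0 * p[2]
mu_star = morris_mu_star(kill_rate, [(0.0, 1.0)] * 3)
# mu_star ranks parameters 0 and 1 far above the inert parameter 2
```

A full Morris design uses radial trajectories rather than independent one-at-a-time steps, and also reports the spread sigma of the elementary effects to flag interactions; the mu* ranking shown here is the screening output the abstract refers to.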
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
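The variance-based indices that the two-step approach targets can be illustrated with a plain pick-freeze Monte Carlo estimator. This is a generic Saltelli-style sketch with a toy additive model, not the authors' Morris + gPCE pipeline:

```python
import numpy as np

def sobol_first_order(f, k, n=200000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var[E(Y|X_i)] / Var[Y] for k independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, k))
    B = rng.random((n, k))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(k)
    for i in range(k):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                 # freeze column i from the A sample
        S[i] = np.mean(yA * (f(ABi) - yB)) / var
    return S

# Additive test model with analytic indices S_i = a_i**2 / sum(a**2)
a = np.array([4.0, 2.0, 1.0])
S = sobol_first_order(lambda X: X @ a, 3)
# S ~ [0.76, 0.19, 0.05]
```

The cost visible here, (k + 2) x n model runs, is exactly what motivates the paper's two-step shortcut: screen out unimportant parameters first, then spend the variance-based budget only on the survivors via a metamodel.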
Stability and Sensitive Analysis of a Model with Delay Quorum Sensing
Directory of Open Access Journals (Sweden)
Zhonghua Zhang
2015-01-01
This paper formulates a delay model characterizing the competition between bacteria and the immune system. The center manifold reduction method and the normal form theory due to Faria and Magalhaes are used to compute the normal form of the model, and the stability of two nonhyperbolic equilibria is discussed. Sensitivity analysis suggests that the growth rate of bacteria is the parameter to which the threshold parameter R0 is most sensitive, and it should be targeted in control strategies.
Siamphukdee, Kanjana; Collins, Frank; Zou, Roger
2013-06-01
Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
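Of the three analysis methods listed, the sensitivity index (iii) is the simplest to sketch: vary one input over its range with the others held at nominal and report the normalised output swing. The one-line corrosion-rate model below is a hypothetical stand-in, not one of the nine published models:

```python
import numpy as np

def sensitivity_index(f, nominal, bounds, i):
    """One-at-a-time sensitivity index SI = (y_max - y_min) / y_max:
    sweep parameter i over its range with the others held at nominal."""
    lo, hi = bounds
    ys = []
    for v in np.linspace(lo, hi, 101):
        x = list(nominal)
        x[i] = v
        ys.append(f(x))
    return (max(ys) - min(ys)) / max(ys)

# Hypothetical stand-in for a corrosion-rate model: rate rises with
# chloride content (x[0]) and falls with concrete resistivity (x[1])
rate = lambda x: x[0] / x[1]
si_chloride = sensitivity_index(rate, [1.0, 10.0], (0.5, 2.0), 0)
# rate spans 0.05..0.2 over the chloride range, so SI = 0.75
```

An SI near 1 marks an input that swings the output across almost its whole range, which is how the paper's ranking of corrosion duration, resistivity, and chloride content should be read.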
Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models
International Nuclear Information System (INIS)
Lamboni, Matieyendou; Monod, Herve; Makowski, David
2011-01-01
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
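The construction can be sketched end-to-end for a toy dynamic model: PCA of the time-series outputs, a per-component first-order index for each factor, and an eigenvalue-weighted average. This is a schematic of the idea, assuming a balanced factorial design, not the paper's estimation algorithm:

```python
import numpy as np

def generalized_sensitivity(Y, factors):
    """Eigenvalue-weighted ('generalised') first-order indices: PCA of the
    time-series outputs, then the between-level variance share of each
    factor on every component score (balanced design assumed), averaged
    across components with the component variances as weights.
    Y: (n_runs, n_times) outputs; factors: list of (n_runs,) level arrays."""
    Yc = Y - Y.mean(axis=0)
    U, s, _ = np.linalg.svd(Yc, full_matrices=False)
    scores = U * s                      # one column of PC scores per component
    lam = s ** 2                        # variance carried by each component
    gsi = []
    for lev in factors:
        Si = []
        for k in range(scores.shape[1]):
            y = scores[:, k]
            if y.var() == 0:
                Si.append(0.0)
                continue
            means = np.array([y[lev == g].mean() for g in np.unique(lev)])
            Si.append(means.var() / y.var())    # between-level variance share
        gsi.append(float(np.average(Si, weights=lam)))
    return gsi

# Tiny 2x2 factorial of a linear time-series model: the slope factor a
# drives almost all of the dynamics, the offset factor b almost none
t = np.arange(10.0)
a = np.array([1.0, 1.0, 3.0, 3.0])
b = np.array([0.0, 0.1, 0.0, 0.1])
Y = a[:, None] * t[None, :] + b[:, None]
gsi_a, gsi_b = generalized_sensitivity(Y, [a, b])
# gsi_a is near 1, gsi_b near 0
```

The weighting by component variance is what lets a single number summarise a factor's influence on the whole trajectory rather than on one time step at a time.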
An adaptive Mantel-Haenszel test for sensitivity analysis in observational studies.
Rosenbaum, Paul R; Small, Dylan S
2017-06-01
In a sensitivity analysis in an observational study with a binary outcome, is it better to use all of the data or to focus on subgroups that are expected to experience the largest treatment effects? The answer depends on features of the data that may be difficult to anticipate, a trade-off between unknown effect sizes and known sample sizes. We propose a sensitivity analysis for an adaptive test similar to the Mantel-Haenszel test. The adaptive test performs two highly correlated analyses, one focused analysis using a subgroup and one combined analysis using all of the data, correcting for multiple testing using the joint distribution of the two test statistics. Because the two component tests are highly correlated, this correction for multiple testing is small compared with, for instance, the Bonferroni inequality. The test has the maximum design sensitivity of the two component tests. A simulation evaluates the power of a sensitivity analysis using the adaptive test. Two examples are presented. An R package, sensitivity2x2xk, implements the procedure. © 2016, The International Biometric Society.
Camarda, C. J.; Adelman, H. M.
1984-01-01
The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
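The analytical-versus-finite-difference comparison can be illustrated on the simplest structural case, an axial bar, where the derivative of the displacement with respect to the cross-sectional area is known in closed form. This is a textbook stand-in, not the EAL implementation:

```python
# Stand-in for the member-property derivatives discussed above: an axial
# bar with tip displacement delta = P*L/(E*A), so the analytical
# sensitivity to the cross-sectional area A is d(delta)/dA = -P*L/(E*A**2).
P, L, E, A = 1000.0, 2.0, 200e9, 1e-4      # N, m, Pa, m^2 (illustrative values)
delta = lambda area: P * L / (E * area)

analytic = -P * L / (E * A ** 2)           # exact: -1.0 for these numbers
h = 1e-6 * A                               # relative finite-difference step
finite_diff = (delta(A + h) - delta(A - h)) / (2 * h)
# the two agree to better than 1e-6 relative error
```

The trade-off the abstract benchmarks is visible even here: the central difference needs two extra analyses per parameter and a well-chosen step size, while the analytical derivative needs neither but must be derived and coded per response type.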
Analysis of Rosen piezoelectric transformers with a varying cross-section.
Xue, H; Yang, J; Hu, Y
2008-07-01
We study the effects of a varying cross-section on the performance of Rosen piezoelectric transformers operating with length extensional modes of rods. A theoretical analysis is performed using an extended version of a one-dimensional model developed in a previous paper. Numerical results based on the theoretical analysis are presented.
Analysis of Cesarean section delivery at Nova Bila Hospital according to the Robson classification.
Josipović, Ljiljana Bilobrk; Stojkanović, Jadranka Dizdarević; Brković, Irma
2015-03-01
An increase in the Cesarean section birth rate is evident worldwide, especially in developed and developing countries. Since this trend is rapidly gaining epidemic status with unpredictable consequences for reproductive and overall women's health, there is a need for systematic collection and analysis of Cesarean section occurrence data. At this moment, there is no standardized, internationally accepted classification that would be easy to understand and simple to apply. In 2001, the Robson classification of Cesarean sections into ten groups, which might satisfy good classification criteria, was published. In this paper, we have retrospectively collected and sorted the data on Cesarean section births from the "Dr. Fra Mato Nikolić" Croatian Hospital in Nova Bila according to the Robson classification, for the period from January 1st, 1998 to December 31st, 2007. During this period, 6603 women gave birth. Of these, 1010 had a Cesarean section (15.30%). The largest group of women giving birth belongs to group 3 (multiparous, single pregnancy, head down, gestational age of 37 weeks or more, spontaneous labor), which accounts for 49.74% of all the analyzed births. The largest group among those with Cesarean sections is group 5 (previous Cesarean section), with 26.93% of all the Cesarean sections. Our results are similar to the results of studies done elsewhere in the world. The Robson classification identifies the risk groups with a high Cesarean section percentage and is appropriate for long-term tracking and international comparison of the recognized increasing Cesarean section trend.
Analysis of the sensitivity concept of self-powered neutron detector (SPND)
International Nuclear Information System (INIS)
Moreira, O.; Lescano, H.
2012-01-01
Self-powered neutron detectors (SPND) are widely used to monitor the neutron flux, both in nuclear reactors and in irradiation facilities and medical treatments. However, the physical meaning of the parameter used to relate the detector signal (an electrical current) to the neutron flux, i.e., the sensitivity of the detector, has not been sufficiently analyzed. Since the sensitivity, ε = i/φ, is calculated for particular reactor conditions, i.e., for thermal neutrons at room temperature, it does not account for deviations arising under other temperature conditions (above ambient), as found for example in nuclear power plants. In this work we calculated the microscopic cross section weighted with the neutron flux, defined in the usual way. This weighted microscopic cross section reveals the lack of proportionality between the absorption rate and the neutron flux, exposing the difficulty the SPND current signal has in properly representing the neutron flux. (author)
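The non-proportionality argument can be illustrated numerically: for a 1/v absorber in a Maxwellian flux, the flux-weighted microscopic cross section depends on the spectrum temperature, so a single sensitivity ε = i/φ calibrated at room temperature cannot hold at power-plant temperatures. A minimal sketch, assuming an ideal 1/v absorber rather than an actual SPND emitter material:

```python
import numpy as np

def trapz(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def weighted_xs(kT, sigma0=1.0, E0=0.0253):
    """Flux-weighted cross section sigma_bar = int(sigma*phi) / int(phi)
    for a 1/v absorber, sigma(E) = sigma0*sqrt(E0/E), in a Maxwellian
    flux phi(E) ~ E*exp(-E/kT); energies in eV."""
    E = np.linspace(1e-6, 50 * kT, 200001)
    phi = E * np.exp(-E / kT)
    sigma = sigma0 * np.sqrt(E0 / E)
    return trapz(sigma * phi, E) / trapz(phi, E)

# At room temperature (kT = 0.0253 eV) the exact ratio is sqrt(pi)/2 ~ 0.886;
# at a hotter spectrum sigma_bar falls as 1/sqrt(kT), so the absorption
# rate (and hence the SPND current) is not simply proportional to the flux.
room = weighted_xs(0.0253)
hot = weighted_xs(0.0500)
```

The same current-to-flux ratio therefore cannot apply at both temperatures, which is exactly the problem the abstract raises with the fixed-sensitivity definition.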
Bojórquez-Tapia, Luis A; Sánchez-Colon, Salvadur; Florez, Arturo
2005-09-01
Multicriteria decision analysis (MCDA) increasingly is being applied in environmental impact assessment (EIA). In this article, two MCDA techniques, stochastic analytic hierarchy process and compromise programming, are combined to ascertain the environmental impacts of and to rank two alternative sites for Mexico City's new airport. Extensive sensitivity analyses were performed to determine the probability of changes in rank ordering given uncertainty in the hierarchy structure, decision criteria weights, and decision criteria performances. Results demonstrate that sensitivity analysis is fundamental for attaining consensus among members of interdisciplinary teams and for settling debates in controversial projects. It was concluded that sensitivity analysis is critical for achieving a transparent and technically defensible MCDA implementation in controversial EIA.
International Nuclear Information System (INIS)
Zhang, Hongbin; Zhao, Haihua; Zou, Ling; Burns, Douglas; Ladd, Jacob
2017-01-01
BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis. (author)
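The three correlation-based sensitivity measures named above can be sketched with NumPy alone. The inputs and figure of merit below are toy stand-ins, not the BISON fuel-rod model:

```python
import numpy as np

def rank(v):
    """0-based ranks of the entries of v (no tie handling)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    return pearson(rank(x), rank(y))        # Pearson on rank-transformed data

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    A = np.column_stack([np.ones(len(z)), z])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return pearson(rx, ry)

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=2000), rng.normal(size=2000)
fom = 3 * x1 + x2                           # toy figure of merit
```

For this linear noise-free example, Pearson and Spearman both rank x1 as dominant, while the partial correlation of x1 with the figure of merit given x2 is exactly 1, since nothing else remains once x2 is regressed out; comparing the three is useful precisely because they diverge for non-linear or correlated inputs.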
Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool
Directory of Open Access Journals (Sweden)
Wenjie Tian
2014-01-01
Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.
Comparison of global sensitivity analysis techniques and importance measures in PSA
International Nuclear Information System (INIS)
Borgonovo, E.; Apostolakis, G.E.; Tarantola, S.; Saltelli, A.
2003-01-01
This paper discusses application and results of global sensitivity analysis techniques to probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell-Vesely importance at the parameter level. Results are discussed for the large LLOCA sequence of the advanced test reactor PSA
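As a concrete reminder of what the Fussell-Vesely measure computes at the basic-event level, here is a rare-event-approximation sketch for a hypothetical two-cut-set fault tree (the event names and probabilities are illustrative):

```python
def cut_set_prob(cut_set, p):
    """Probability of a minimal cut set assuming independent basic events."""
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

def fussell_vesely(cut_sets, p):
    """Rare-event approximation: FV(e) = (sum of minimal-cut-set
    probabilities involving basic event e) / (top-event probability)."""
    top = sum(cut_set_prob(cs, p) for cs in cut_sets)
    return {e: sum(cut_set_prob(cs, p) for cs in cut_sets if e in cs) / top
            for e in p}

# Hypothetical fault tree: TOP = A or (B and C)
p = {"A": 1e-3, "B": 1e-2, "C": 1e-2}
fv = fussell_vesely([{"A"}, {"B", "C"}], p)
# fv["A"] ~ 0.91, while fv["B"] == fv["C"] ~ 0.09
```

This is computed per basic event, which is the paper's point: when the same parameter (e.g., one failure rate) feeds several basic events, such event-level measures must be propagated to the parameter level before they can be compared with global sensitivity results.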
Energy Technology Data Exchange (ETDEWEB)
Zhang, Hongbin; Ladd, Jacob; Zhao, Haihua; Zou, Ling; Burns, Douglas
2015-11-01
BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis.
International Nuclear Information System (INIS)
Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.
2009-01-01
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
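The sensitivity index being estimated, and a bootstrap confidence interval around it, can be sketched with a simple binned estimator standing in for the meta-model. This is illustrative only; the paper's surrogate-based estimator is more sophisticated:

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Binned estimator of S = Var[E(y|x)] / Var[y]: quantile-bin x,
    then take the (count-weighted) variance of the bin means of y."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([np.sum(idx == b) for b in range(bins)])
    return float(np.average((means - y.mean()) ** 2, weights=counts) / y.var())

def bootstrap_ci(x, y, n_boot=300, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the index, quantifying the
    estimation error that point estimates alone ignore."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        i = rng.integers(0, len(x), len(x))   # same resample for x and y
        stats.append(first_order_index(x[i], y[i]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(0)
x = rng.random(4000)
y = x + 0.1 * rng.normal(size=4000)           # true index is about 0.89
lo, hi = bootstrap_ci(x, y)
```

Reporting (lo, hi) rather than a bare index is the methodological step the abstract argues for: it makes visible how much of an apparent sensitivity ranking could be estimation noise.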
Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.
Núñez, M; Vlachos, D G
2015-01-28
Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined to the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex physically-based environmental models are increasingly being used as the primary tool for watershed planning and management, owing to advances in technologies for heavy computation and data acquisition. Improved decisions require a proper approach to modelling. This implies that sensitivity analysis (SA) should be considered an integral part of modelling, since it plays a crucial role in understanding the behavior of these complex models and improving their performance. Local sensitivity analysis approaches are helpful in this context but insufficient to thoroughly characterize model sensitivity, mainly because of the non-linear behaviour of complex environmental systems and the interactions within them. Therefore, a global sensitivity analysis (GSA) should be adopted to provide a comprehensive understanding of model behavior in these cases. One of the main challenges associated with GSA methods is their substantial computational demand for generating robust sensitivity metrics over the entire factor space. Accordingly, a novel GSA technique, Variogram Analysis of Response Surfaces (VARS), has recently been developed. VARS uses the variogram concept to efficiently provide a variety of global sensitivity indices across a range of scales within the parameter space, a feature that is unique to VARS. In this work, for an enhanced understanding of model behavior, we adopted a multi-criteria, multi-model approach to conduct a thorough GSA. We applied VARS to a couple of environmental models with various levels of complexity, and used various metrics to measure the sensitivity of the model response (streamflow) to model parameters. These metrics measure model performance in simulating high flows, low flows, and flow volume. The results indicate that VARS can efficiently provide a thorough and unique GSA for models, and that the choice of metric has a great influence on the assessment of model sensitivity.
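The variogram concept underlying VARS can be sketched for a toy response: a directional variogram along one parameter measures how strongly the response changes over perturbations of size h. This illustrates the concept only, not the published VARS algorithm, which integrates such variograms across scales into its sensitivity indices:

```python
import numpy as np

def directional_variogram(f, i, k, h_values, n=2000, seed=0):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))**2] over random base
    points x in the unit hypercube; a steeply rising variogram flags a
    parameter the response is highly sensitive to at that scale."""
    rng = np.random.default_rng(seed)
    gammas = []
    for h in h_values:
        X = rng.random((n, k)) * (1 - h)   # keep both x and x + h*e_i in [0, 1]
        Xh = X.copy()
        Xh[:, i] += h
        d = f(Xh) - f(X)
        gammas.append(0.5 * float(np.mean(d ** 2)))
    return np.array(gammas)

# Toy streamflow-like response in which parameter 0 dominates
f = lambda X: 5 * X[:, 0] + X[:, 1]
g0 = directional_variogram(f, 0, 2, [0.1, 0.2])
g1 = directional_variogram(f, 1, 2, [0.1, 0.2])
# for this linear model: g0 = [0.125, 0.5], g1 = [0.005, 0.02]
```

Because gamma is a function of the perturbation scale h, a single set of model runs yields sensitivity information at multiple scales, which is the multiscale feature the abstract highlights as unique to VARS.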