Sensitivity analysis (medlineplus.gov/ency/article/003741.htm)
Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...
Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey
2018-05-01
Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s mm⁻². Histogram analysis was performed as a whole-muscle measurement using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75 and ADCp90, as well as the histogram parameters kurtosis, skewness and entropy. In all patients, the blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1 autoantibodies. Kurtosis correlated inversely with CRP (r = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (r = -0.43, p = 0.11 and r = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode and entropy differed between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for detecting muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
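The histogram parameters listed in this abstract can be computed from the ROI voxel values with standard tools. The sketch below is illustrative only: the authors used a custom Matlab application, so the bin count, entropy base, and function names here are assumptions.

```python
import numpy as np
from scipy import stats

def adc_histogram_params(adc_values, bins=64):
    """Whole-ROI histogram parameters of an ADC map (illustrative sketch;
    the bin count and the log base of the entropy are assumptions)."""
    v = np.asarray(adc_values, dtype=float).ravel()
    hist, edges = np.histogram(v, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    p10, p25, p75, p90 = np.percentile(v, [10, 25, 75, 90])
    return {
        "ADCmean": v.mean(), "ADCmax": v.max(), "ADCmin": v.min(),
        "ADCmedian": np.median(v), "ADCmode": centers[np.argmax(hist)],
        "ADCp10": p10, "ADCp25": p25, "ADCp75": p75, "ADCp90": p90,
        "kurtosis": stats.kurtosis(v), "skewness": stats.skew(v),
        "entropy": float(-np.sum(p * np.log2(p))),
    }
```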
Rohmer, V; Ligeard-Ducoroy, A; Perdrisot, R; Beldent, V; Jallet, P; Bigorgne, J C
1990-05-12
Highly sensitive TSH assays make it easier to diagnose thyroid diseases. During one year, we performed 5,300 sensitive TSH assays (normal range: 0.15-4 mU/l) in various patients. The purpose of this work was to assess the significance of the low TSH plasma concentrations found in 580 patients. In 99.7 percent of the cases, low TSH levels were the consequence of a thyroid disorder or of treatment with thyroid hormones; nonthyroidal illnesses were detected in only 0.3 percent. However, not all TSH values below 0.15 mU/l were associated with overt or occult thyrotoxicosis. When TSH was undetectable (less than 0.04 mU/l), and excluding thyroid hormone-treated patients, thyrotoxicosis was present in 97 percent of the cases. On the other hand, when TSH values were between 0.04 and 0.15 mU/l, 41 percent of the patients failed to show any sign or symptom of hyperthyroidism, although they had functioning thyroid nodules, multinodular goitre or iodine overload, or were receiving thyroid hormones.
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Directory of Open Access Journals (Sweden)
Iulian N. BUJOREANU
2011-01-01
Sensitivity analysis represents such a well-known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial reason to implement sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of whether the future can be generated and analyzed before it unfolds so that, when it happens, it brings less uncertainty.
Sensitivity Analysis Without Assumptions.
Ding, Peng; VanderWeele, Tyler J
2016-05-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
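In the notation of this abstract, with RR_EU the exposure-confounder relative risk and RR_UD the confounder-outcome relative risk, the bounding factor and the implied "explain-away" threshold (later dubbed the E-value) can be sketched as follows; this is a reading of the published formulas, not code from the paper:

```python
import math

def bounding_factor(rr_eu, rr_ud):
    """Ding-VanderWeele bounding factor: dividing the observed risk ratio by
    this factor gives a lower bound on the confounding-adjusted risk ratio."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr_obs):
    """Minimum strength that BOTH confounding associations must reach (on the
    risk-ratio scale) to fully explain away an observed rr_obs > 1."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))
```

For an observed risk ratio of 2, `e_value(2.0)` is about 3.41, meaning both confounding relative risks would have to exceed 3.41 before the bounding factor alone could account for the observed association.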
Guide on reflectivity data analysis
International Nuclear Information System (INIS)
Lee, Jeong Soo; Ku, Ja Seung; Seong, Baek Seok; Lee, Chang Hee; Hong, Kwang Pyo; Choi, Byung Hoon
2004-09-01
This report describes the reduction and fitting of neutron reflectivity data with REFLRED and REFLFIT from NIST. Because the details of data reduction, such as background (BKG) subtraction, footprint correction and data normalization, are described, it will be useful to users with no experience in this field. Also, the reflectivity and background of a d-PS thin film were measured with the HANARO neutron reflectometer. From these data, the structure of the d-PS thin film was analyzed with REFLRED and REFLFIT. Because the structure of the thin film, such as thickness, roughness and SLD, was obtained in this work, the feasibility of data analysis with REFLRED and REFLFIT was confirmed
Interference and Sensitivity Analysis.
VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth
2014-11-01
Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.
Polarization sensitivity testing of off-plane reflection gratings
Marlowe, Hannah; McEntaffer, Randal L.; DeRoo, Casey T.; Miles, Drew M.; Tutt, James H.; Laubis, Christian; Soltwisch, Victor
2015-09-01
Off-Plane reflection gratings were previously predicted to have different efficiencies when the incident light is polarized in the transverse-magnetic (TM) versus transverse-electric (TE) orientations with respect to the grating grooves. However, more recent theoretical calculations which rigorously account for finitely conducting, rather than perfectly conducting, grating materials no longer predict significant polarization sensitivity. We present the first empirical results for radially ruled, laminar groove profile gratings in the off-plane mount which demonstrate no difference in TM versus TE efficiency across our entire 300-1500 eV bandpass. These measurements together with the recent theoretical results confirm that grazing incidence off-plane reflection gratings using real, not perfectly conducting, materials are not polarization sensitive.
DEFF Research Database (Denmark)
Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad
2018-01-01
... of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it to the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...
Chemical kinetic functional sensitivity analysis: Elementary sensitivities
International Nuclear Information System (INIS)
Demiralp, M.; Rabitz, H.
1981-01-01
Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research
MOVES regional level sensitivity analysis
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operation of the MOVES Model in regional emissions analysis and to highlight the following: the relative sensitivity of selected MOVES Model input paramet...
Energy Technology Data Exchange (ETDEWEB)
Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)
2010-07-01
This presentation included a sensitivity analysis of electric vehicle components on overall efficiency. The presentation provided an overview of drive cycles and discussed the major contributors to range in terms of rolling resistance, aerodynamic drag, motor efficiency and vehicle mass. Drive cycles that were presented included the New York City Cycle (NYCC), the urban dynamometer drive cycle, and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. In terms of aerodynamic drag, there was a large effect on US06 range. A large effect was also found on NYCC range in terms of motor efficiency and vehicle mass. figs.
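A one-at-a-time sensitivity of range to these contributors can be sketched with a simple constant-speed road-load model. All parameter values below are hypothetical placeholders, not figures from the presentation:

```python
def ev_range_km(mass_kg=1500.0, crr=0.009, cd_a=0.65, batt_kwh=30.0,
                eta=0.85, v_ms=15.0):
    """Range at constant speed: battery energy divided by road-load power.

    crr  = rolling-resistance coefficient, cd_a = drag area Cd*A [m^2],
    eta  = battery-to-wheel efficiency. All defaults are hypothetical.
    """
    g, rho = 9.81, 1.225
    p_roll = crr * mass_kg * g * v_ms         # rolling-resistance power [W]
    p_aero = 0.5 * rho * cd_a * v_ms ** 3     # aerodynamic drag power [W]
    p_total = (p_roll + p_aero) / eta         # power drawn from the battery
    hours = batt_kwh * 1000.0 / p_total
    return v_ms * 3.6 * hours                 # km

# One-at-a-time sensitivity: perturb each contributor by +10%
base = ev_range_km()
for name, kwargs in [("mass", {"mass_kg": 1650.0}),
                     ("rolling resistance", {"crr": 0.0099}),
                     ("CdA", {"cd_a": 0.715})]:
    delta = 100 * (ev_range_km(**kwargs) - base) / base
    print(f"{name}: {delta:+.1f}% range")
```

Consistent with the findings above, the drag term scales with speed cubed, so its share of the range sensitivity grows on high-speed cycles such as US06, while mass and rolling resistance dominate in city driving.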
DEFF Research Database (Denmark)
Manevski, Kiril; Jabloun, Mohamed; Gupta, Manika
2016-01-01
Remote sensing of land covers utilizes an increasing number of methods for spectral reflectance processing and its accompanying statistics to discriminate between the covers' spectral signatures at various scales. To this end, the present chapter deals with the field-scale sensitivity of the vegetation spectral discrimination to the most common types of reflectance (unaltered and continuum-removed) and statistical tests (parametric and nonparametric analysis of variance). It is divided into two distinct parts. The first part summarizes the current knowledge in relation to vegetation ... a more powerful input to a nonparametric analysis for discrimination at the field scale, when compared with unaltered reflectance and parametric analysis. However, the discrimination outputs interact and are very sensitive to the number of observations, an important implication for the design ...
Data fusion qualitative sensitivity analysis
International Nuclear Information System (INIS)
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables
Maternal sensitivity: a concept analysis.
Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae
2008-11-01
The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.
Global optimization and sensitivity analysis
International Nuclear Information System (INIS)
Cacuci, D.G.
1990-01-01
A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints
PROP sensitivity reflects sensory discrimination between custard desserts
Wijk, R.A. de; Dijksterhuis, G.; Vereijken, P.; Prinz, J.F.; Weenen, H.
2007-01-01
Sensitivity to 6-n-propylthiouracil (PROP) for a group of 180 naïve consumers was related to their perception of 16 commercially available vanilla custard desserts. Rated intensities of taste and texture attributes varied moderately and inconsistently with PROP sensitivity. In contrast,
International Nuclear Information System (INIS)
Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.
1990-01-01
A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
High order depletion sensitivity analysis
International Nuclear Information System (INIS)
Naguib, K.; Adib, M.; Morcos, H.N.
2002-01-01
A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results of the calculation show that, in the case of the EK-10 fuel (low burn-up), the first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations
Reflection and Reflective Practice Discourses in Coaching: A Critical Analysis
Cushion, Christopher J.
2018-01-01
Reflection and reflective practice is seen as an established part of coaching and coach education practice. It has become a "taken-for-granted" part of coaching that is accepted enthusiastically and unquestioningly, and is assumed to be "good" for coaching and coaches. Drawing on sociological concepts, a primarily Foucauldian…
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
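The "pinching" idea can be illustrated on a toy model: replace one uncertain input by a precise value and compare the widths of the resulting output ranges. This is a crude sampled bound on a made-up two-input model, not a rigorous p-box propagation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(a, b):
    """Toy response function standing in for a risk model."""
    return a * b + b

def output_range(a_lo, a_hi, b_lo, b_hi, n=20000):
    """Crude bracketing of the model output by sampling the input box."""
    a = rng.uniform(a_lo, a_hi, n)
    b = rng.uniform(b_lo, b_hi, n)
    y = model(a, b)
    return y.min(), y.max()

lo, hi = output_range(1, 3, 2, 5)        # both inputs uncertain
lo_p, hi_p = output_range(2, 2, 2, 5)    # 'pinch' input a to the point 2
# The fractional reduction in output width measures how much of the
# overall uncertainty is attributable to input a.
reduction = 1 - (hi_p - lo_p) / (hi - lo)
```

A large `reduction` flags the pinched input as a dominant contributor, which is the sensitivity-analysis use of pinching described above.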
Sensitivity analysis in a structural reliability context
International Nuclear Information System (INIS)
Lemaitre, Paul
2014-01-01
This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows a complex physical phenomenon to be reproduced. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical survey of sensitivity analysis on one hand and of the estimation of small failure probabilities on the other hand is first proposed. This survey raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density given the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are then applied on the CWNR case, which motivates this thesis. (author)
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of the general principles of sensitivity analysis, as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves, for quantitative models of physical objects, the same purpose as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...
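The Jacobians mentioned here are simply matrices of partial derivatives of observables with respect to parameters. A minimal finite-difference sketch on a toy forward model (not the adjoint formulation the book advocates) looks like this:

```python
import numpy as np

def forward_model(x):
    """Toy forward model mapping 2 parameters to 3 observables (hypothetical)."""
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]), x[0] * x[1]])

def jacobian_fd(f, x, h=1e-6):
    """Forward-difference Jacobian: J[i, j] = d f_i / d x_j."""
    x = np.asarray(x, dtype=float)
    y0 = f(x)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h                      # perturb one parameter at a time
        J[:, j] = (f(xp) - y0) / h
    return J
```

Note the cost: one extra model run per input parameter. The appeal of the adjoint approach described in the abstract is that its cost scales with the number of observables rather than the (often much larger) number of input parameters.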
Total reflection X-ray fluorescence analysis
International Nuclear Information System (INIS)
Michaelis, W.; Prange, A.
1987-01-01
In the past few years, total reflection X-ray fluorescence analysis (TXRF) has found an increasing number of assignments and applications. Experience of trace element analysis using TXRF and examples of applications are already widespread. Therefore, users of TXRF had the opportunity for an intensive exchange of their experience at the 1st workshop on total reflection X-ray fluorescence analysis, which took place on May 27th and 28th, 1986 at the GKSS Research Centre at Geesthacht. In a series of lectures and discussions dealing with the analytical principle itself, sample preparation techniques and applications, as well as computer programs for spectrum evaluation, the present state of development and the range of applications were outlined. 3 studies out of a total of 14 were included separately in the INIS and ENERGY databases. With 61 figs., 12 tabs [de]
Sensitivity Analysis of Viscoelastic Structures
Directory of Open Access Journals (Sweden)
A.M.G. de Lima
2006-01-01
In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
UMTS Common Channel Sensitivity Analysis
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António; Santos, Frederico
2006-01-01
... and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message-to-preamble offset, the initial SIR target and the preamble power step, while on FACH it is the transmission power offset.
TEMAC, Top Event Sensitivity Analysis
International Nuclear Information System (INIS)
Iman, R.L.; Shortencarier, M.J.
1988-01-01
1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
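The Monte Carlo evaluation of a top event from its cut sets can be sketched as follows. The cut sets and basic-event probabilities are hypothetical, and this direct simulation is only a stand-in for TEMAC's more efficient matrix approach:

```python
import random

# Hypothetical minimal cut sets over basic events 'a'..'d' (not from TEMAC docs).
# The top event occurs when every basic event in at least one cut set occurs.
CUT_SETS = [{"a", "b"}, {"c"}, {"b", "d"}]
P = {"a": 0.1, "b": 0.2, "c": 0.01, "d": 0.05}

def top_event_mc(cut_sets, p, n=100_000, seed=1):
    """Monte Carlo estimate of the top-event probability from minimal cut sets."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Sample the state (occurred / not occurred) of each basic event
        state = {e: rng.random() < pe for e, pe in p.items()}
        if any(all(state[e] for e in cs) for cs in cut_sets):
            hits += 1
    return hits / n
```

Repeating such evaluations with perturbed basic-event probabilities is the brute-force analogue of the sensitivity analysis the abstract describes.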
A Seismic Analysis for Reflective Metal Insulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyuhyung; Kim, Taesoon [KHNP CRI, Daejeon (Korea, Republic of)
2016-10-15
U.S. NRC (Nuclear Regulatory Commission) GSI-191 (Generic Safety Issue 191) concerns the head loss of emergency core cooling pumps caused by calcium silicate insulation debris accumulating on a sump screen during a loss-of-coolant accident (LOCA). To address this concern, many nuclear plants in the U.S. have been replacing calcium silicate insulation in containment buildings with reflective metal insulation (RMI). In Korea, RMI has been used only for recently constructed reactor vessels, and that RMI was imported. Therefore, we have been developing a domestic design of RMI to supply to nuclear power plants under operation and construction in relation to GSI-191. This paper covers the evaluation of the structural integrity of the RMI assembly under the SSE (safe shutdown earthquake) load. An analysis model was built for the seismic test system of a reflective metal insulation assembly, and pre-stress, modal and spectrum analyses for the model were performed using a commercial structural analysis code, ANSYS. According to the results of the analyses, the buckles fastening the RMIs maintained structural integrity under the required response spectrum containing the safe shutdown earthquake loads applied to main components in the containment building. Consequently, since the RMI does not come apart under the SSE load, it is judged not to affect safety-related components.
Systemization of burnup sensitivity analysis code. 2
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2005-02-01
Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoints of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished through the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to make effective use of burnup data from actual cores on the basis of the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. Unifying each computational component is not sufficient, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands.
Synchrotron radiation total reflection for rainwater analysis
International Nuclear Information System (INIS)
Simabuco, Silvana M.; Matsumoto, Edson
1999-01-01
Total reflection X-ray fluorescence analysis excited by synchrotron radiation (SR-TXRF) has been used for trace element analysis of rainwater. The samples were collected at four different sites in Campinas City, SP. Standard solutions with gallium as internal standard were prepared to calibrate the system. Rainwater samples of 10 μl were placed onto Perspex reflector disks, dried under vacuum and analyzed with a measuring time of 100 s. The detection limits obtained for the K-shell lines varied from 29 ng ml⁻¹ for sulfur to 1.3 ng ml⁻¹ for zinc and copper, while for the L-shell lines the values were 4.5 ng ml⁻¹ for mercury and 7.0 ng ml⁻¹ for lead. (author)
Systemization of burnup sensitivity analysis code
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2004-02-01
Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoints of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished through the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to make effective use of burnup data from actual cores on the basis of the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence is inefficient because the complexity of burnup sensitivity theory and the limitations of the system place a heavy burden on users. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. Unifying each computational component is not sufficient, because the computational sequence may change with the item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or assembled as the occasion demands.
Probabilistic sensitivity analysis of biochemical reaction systems.
Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John
2009-09-07
Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
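The variance-based approach this abstract builds on (Saltelli et al.) can be illustrated with a generic Monte Carlo sketch. The toy response function, sample size and estimator choices below are illustrative assumptions, not the paper's biochemical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for a reaction-system response (NOT the paper's model):
    # strong main effect of x1, weak main effect of x2, and an x1-x3 interaction
    # whose first-order contribution averages out.
    return x[:, 0] + 0.3 * x[:, 1] ** 2 + np.sin(x[:, 0]) * x[:, 2]

d, n = 3, 100_000
A = rng.uniform(-1, 1, (n, d))   # two independent sample matrices
B = rng.uniform(-1, 1, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]           # resample only parameter i
    fAB = model(AB)
    S1.append(np.mean(fB * (fAB - fA)) / var)        # first-order index (Saltelli estimator)
    ST.append(0.5 * np.mean((fA - fAB) ** 2) / var)  # total-effect index (Jansen estimator)
    print(f"x{i + 1}: S1 = {S1[-1]:.2f}, ST = {ST[-1]:.2f}")
```

A total-effect index well above the first-order index (as for x3 here) flags the high-order interaction effects the abstract says variance-based methods can systematically investigate.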
Strzalka, Joseph; Satija, Sushil; Dimasi, Elaine; Kuzmenko, Ivan; Gog, Thomas; Blasie, J. Kent
2004-03-01
Labeling groups with ^2H to distinguish them in the scattering length density (SLD) profile constitutes the chief advantage of neutron reflectivity (NR) in studying Langmuir monolayers (LM) of lipids and proteins. Solid-phase peptide synthesis (SPPS) permits the labeling of a single residue in a peptide. Recent work demonstrates the sensitivity of NR to single ^2H-labeled residues in LM of vectorially oriented α-helical bundle peptides. NR requires comparison of isomorphic samples of all-^1H and ^2H-labeled peptides. Alternatively, resonant x-ray reflectivity (RXR) uses only one sample. RXR exploits energy-dependent changes in the scattering factor of heavy atoms to distinguish them within the SLD profile. Peptides may be labeled by SPPS (e.g. Br-Phe), or may have inherent labels (e.g. Fe in heme proteins). As test cases, we studied LM of Br-labeled lipids and peptides with RXR. Both approaches require a model-independent means of obtaining SLD profiles from the reflectivity data. We have applied box-refinement to obtain the gradient SLD profile. This is fit uniquely with a sum of Gaussians and integrated analytically [Blasie et al., PRB 67 224201 (2003)] to provide the SLD profile. Label positions can then be determined to sub-Ångstrom accuracy. This work was supported by the NIH (GM55876).
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
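The complex-variable approach used above for the DYMORE structural sensitivities is, in its general form, the complex-step derivative method: for a real-analytic f, evaluating at a complex point gives f'(x) ≈ Im f(x + ih)/h with no subtractive cancellation, so h can be made extremely small. A minimal sketch with a made-up scalar response (not FUN3D/DYMORE code):

```python
import numpy as np

def response(x):
    # Made-up smooth scalar response (NOT a FUN3D/DYMORE quantity).
    return np.exp(x) * np.sin(x) / (1.0 + x ** 2)

def complex_step(f, x, h=1e-30):
    # Complex-step derivative: f'(x) = Im f(x + i*h) / h + O(h^2).
    # No subtraction of nearly equal numbers, so h can be tiny without
    # losing precision -- unlike finite differences.
    return np.imag(f(x + 1j * h)) / h

x0 = 0.7
d_cs = complex_step(response, x0)
# Central finite difference for comparison; its accuracy is limited by cancellation.
d_fd = (response(x0 + 1e-6) - response(x0 - 1e-6)) / 2e-6
print(d_cs, d_fd)
```

This is why the abstract can use complex-variable sensitivities as a trusted reference for verifying the adjoint-based results: the complex-step value is accurate essentially to machine precision.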
Reflections on Teaching Financial Statement Analysis
Entwistle, Gary
2015-01-01
In her 2011 article "Towards a 'scholarship of teaching and learning': The individual and the communal journey," Ursula Lucas calls for more critical reflection on individual teaching experiences and encourages sharing such experiences with the wider academy. In this spirit Gary Entwistle reflects upon his experiences teaching financial…
Zhang, Ke; Tang, Yiwen; Meng, Jinsong; Wang, Ge; Zhou, Han; Fan, Tongxiang; Zhang, Di
2014-11-03
Polarization-sensitive color originates from polarization-dependent reflection or transmission, exhibiting abundant light information, including intensity, spectral distribution, and polarization. A wide range of butterflies are physiologically sensitive to polarized light, but the origins of polarized signal have not been fully understood. Here we systematically investigate the colorful scales of six species of butterfly to reveal the physical origins of polarization-sensitive color. Microscopic optical images under crossed polarizers exhibit their polarization-sensitive characteristic, and micro-structural characterizations clarify their structural commonality. In the case of the structural scales that have deep ridges, the polarization-sensitive color related with scale azimuth is remarkable. Periodic ridges lead to the anisotropic effective refractive indices in the parallel and perpendicular grating orientations, which achieves form-birefringence, resulting in the phase difference of two different component polarized lights. Simulated results show that ridge structures with reflecting elements reflect and rotate the incident p-polarized light into s-polarized light. The dimensional parameters and shapes of grating greatly affect the polarization conversion process, and the triangular deep grating extends the outstanding polarization conversion effect from the sub-wavelength period to the period comparable to visible light wavelength. The parameters of ridge structures in butterfly scales have been optimized to fulfill the polarization-dependent reflection for secret communication. The structural and physical origin of polarization conversion provides a more comprehensive perspective on the creation of polarization-sensitive color in butterfly wing scales. These findings show great potential in anti-counterfeiting technology and advanced optical material design.
Risk Characterization uncertainties associated description, sensitivity analysis
International Nuclear Information System (INIS)
Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.
2013-01-01
The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risk in subpopulations.
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the
Kim, Jong Man; Choi, Byung So; Choi, Yoon Sun; Kim, Jong Min; Bjelkhagen, Hans I.; Phillips, Nicholas J.
2002-03-01
Silver halide sensitized gelatin (SHSG) holograms are similar to holograms recorded in dichromated gelatin (DCG), the main recording material for holographic optical elements (HOEs). The drawback of DCG is its low energetic sensitivity and limited spectral response. Silver halide materials can be processed in such a way that the final hologram will have properties like a DCG hologram. Recently this technique has become more interesting since the introduction of new ultra-fine-grain silver halide (AgHal) emulsions. In particular, high spatial-frequency fringes associated with HOEs of the reflection type are difficult to construct when SHSG processing methods are employed. Therefore an optimized processing technique for reflection HOEs recorded in the new AgHal materials is introduced. Diffraction efficiencies over 90% can be obtained repeatably for reflection diffraction gratings. Understanding the importance of a selective hardening process has made it possible to obtain results similar to conventional DCG processing. The main advantage of the SHSG process is that high-sensitivity recording can be performed with laser wavelengths anywhere within the visible spectrum. This simplifies the manufacturing of high-quality, large-format HOEs, also including high-quality display holograms of the reflection type in both monochrome and full color.
Liang, M; Lee, M C; O'Neill, J; Dickenson, A H; Iannetti, G D
2016-08-01
Central sensitization (CS), the increased sensitivity of the central nervous system to somatosensory inputs, accounts for secondary hyperalgesia, a typical sign of several painful clinical conditions. Brain potentials elicited by mechanical punctate stimulation using flat-tip probes can provide neural correlates of CS, but their signal-to-noise ratio is limited by poor synchronization of the afferent nociceptive input. Additionally, mechanical punctate stimulation does not activate nociceptors exclusively. In contrast, low-intensity intraepidermal electrical stimulation (IES) allows selective activation of type II Aδ-mechano-heat nociceptors (II-AMHs) and elicits reproducible brain potentials. However, it is unclear whether hyperalgesia from IES occurs and coexists with secondary mechanical punctate hyperalgesia, and whether the magnitude of the electroencephalographic (EEG) responses evoked by IES within the hyperalgesic area is increased. To address these questions, we explored the modulation of the psychophysical and EEG responses to IES by intraepidermal injection of capsaicin in healthy human subjects. We obtained three main results. First, the intensity of the sensation elicited by IES was significantly increased in participants who developed robust mechanical punctate hyperalgesia after capsaicin injection (i.e., responders), indicating that hyperalgesia from IES coexists with punctate mechanical hyperalgesia. Second, the N2 peak magnitude of the EEG responses elicited by IES was significantly increased after the intraepidermal injection of capsaicin in responders only. Third, a receiver-operator characteristics analysis showed that the N2 peak amplitude is clearly predictive of the presence of CS. These findings suggest that the EEG responses elicited by IES reflect secondary hyperalgesia and therefore represent an objective correlate of CS. Copyright © 2016 the American Physiological Society.
A hybrid approach for global sensitivity analysis
International Nuclear Information System (INIS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-01-01
Distribution-based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in the distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, the computational cost associated with the method prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis-of-variance decomposition, extended bases and the homotopy algorithm. By integrating PCFE into DSA, the computational burden can be considerably alleviated. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results obtained indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • The proposed approach integrates PCFE within distribution-based sensitivity analysis. • The proposed approach is highly efficient.
Analysis of contaminants on electronic components by reflectance FTIR spectroscopy
International Nuclear Information System (INIS)
Griffith, G.W.
1982-09-01
The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers will provide the required sensitivity but most require that the sample be removed from the substrate before analysis. Since it can be difficult and time consuming, it is usually an undesirable approach. Micro ATR work can also be exasperating, due to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condensor which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. It is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix using this device. Typical spectra of contaminants found on small circuit boards are included
Analysis of Specular Reflections Off Geostationary Satellites
Jolley, A.
2016-09-01
Many photometric studies of artificial satellites have attempted to define procedures that minimise the size of datasets required to infer information about satellites. However, it is unclear whether deliberately limiting the size of datasets significantly reduces the potential for information to be derived from them. In 2013 an experiment was conducted using a 14 inch Celestron CG-14 telescope to gain multiple night-long, high temporal resolution datasets of six geostationary satellites [1]. This experiment produced evidence of complex variations in the spectral energy distribution (SED) of reflections off satellite surface materials, particularly during specular reflections. Importantly, specific features relating to the SED variations could only be detected with high temporal resolution data. An update is provided regarding the nature of SED and colour variations during specular reflections, including how some of the variables involved contribute to these variations. Results show that care must be taken when comparing observed spectra to a spectral library for the purpose of material identification; a spectral library that uses wavelength as the only variable will be unable to capture changes that occur to a material's reflected spectra with changing illumination and observation geometry. Conversely, colour variations with changing illumination and observation geometry might provide an alternative means of determining material types.
Sensitivity analysis of a PWR pressurizer
International Nuclear Information System (INIS)
Bruel, Renata Nunes
1997-01-01
A sensitivity analysis with respect to the parameters and the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modellings, which generated a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)
Intra-Cavity Total Reflection For High Sensitivity Measurement Of Optical Properties
Pipino, Andrew Charles Rule
1999-11-16
An optical cavity resonator device is provided for conducting sensitive measurement of optical absorption by matter in any state, with diffraction-limited spatial resolution, through utilization of total internal reflection within a high-Q (high-quality, low-loss) optical cavity. Intracavity total reflection generates an evanescent wave that decays exponentially in space at a point external to the cavity, thereby providing a localized region where absorbing materials can be sensitively probed through alteration of the Q-factor of the otherwise isolated cavity. When a laser pulse is injected into the cavity and passes through the evanescent state, an amplitude loss resulting from absorption is incurred that reduces the lifetime of the pulse in the cavity. By monitoring the decay of the injected pulse, the absorption coefficient of matter within the evanescent wave region is accurately obtained from the decay time measurement.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
CFD Analysis of Water Solitary Wave Reflection
Directory of Open Access Journals (Sweden)
K. Smida
2011-12-01
A new numerical wave generation method is used to investigate the head-on collision of two solitary waves. The reflection of a solitary wave at a vertical wall is also presented. The originality of this model, based on the Navier-Stokes equations, is the specification of an internal inlet velocity, defined as a source line within the computational domain, for the generation of these nonlinear waves. This model was successfully implemented in the PHOENICS (Parabolic Hyperbolic Or Elliptic Numerical Integration Code Series) code. The collision of two counter-propagating solitary waves is similar to the interaction of a soliton with a vertical wall. This wave generation method saves considerable time in the collision process, since the counter-propagating wave is generated directly, without reflection at a vertical wall. For the collision of two solitary waves, numerical results show that the run-up phenomenon can be well explained; the computed maximum wave run-up is almost equal to the experimental measurement. The simulated wave profiles during the collision are in good agreement with experimental results. For the reflection at a vertical wall, the spatial profiles of the wave at fixed instants show that this problem is equivalent to the collision process.
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
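For linear static analysis of the kind described above, sensitivity derivatives of the displacements with respect to a stiffness parameter follow from differentiating K(p)u = f, giving K (du/dp) = df/dp − (dK/dp)u. A toy two-DOF spring sketch (the system and its parameters are invented for illustration, not taken from the paper):

```python
import numpy as np

# Two-DOF spring chain (invented example): K(k2) u = f.
def stiffness(k1, k2):
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

k1, k2 = 100.0, 50.0
f = np.array([0.0, 10.0])
K = stiffness(k1, k2)
u = np.linalg.solve(K, f)        # static displacements

# Direct differentiation: K du/dk2 = -(dK/dk2) u  (f does not depend on k2),
# reusing the already-factored stiffness matrix of the original analysis.
dK_dk2 = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])
du_dk2 = np.linalg.solve(K, -dK_dk2 @ u)

# Finite-difference check of the analytic sensitivity.
h = 1e-6
fd = (np.linalg.solve(stiffness(k1, k2 + h), f) - u) / h
print(du_dk2, fd)
```

Reusing the factored K for each parameter's right-hand side is what makes direct differentiation attractive for reanalysis of large framed structures: each extra sensitivity costs only a back-substitution, not a new factorization.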
Sensitivity analysis in life cycle assessment
Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.
2014-01-01
Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by applying the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to performing sensitivity analysis of the results of thermal-hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices using a meta-model. It also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
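The meta-model idea in this abstract — train a cheap surrogate on a limited number of expensive code runs, then estimate Sobol' indices by large-sample Monte Carlo on the surrogate — can be sketched as follows. The "expensive" model, the quadratic surrogate and the sample sizes are stand-ins, not BEMUSE/TRACE data:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    # Stand-in for a costly thermal-hydraulic run (hypothetical response).
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

def features(x):
    # Quadratic response-surface basis for the meta-model.
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

# 1. A small number of "expensive" runs trains the meta-model by least squares.
X = rng.uniform(-1, 1, (60, 2))
coef, *_ = np.linalg.lstsq(features(X), expensive_model(X), rcond=None)
surrogate = lambda x: features(x) @ coef

# 2. Sobol' first-order indices by Monte Carlo on the cheap surrogate.
n = 200_000
A = rng.uniform(-1, 1, (n, 2))
B = rng.uniform(-1, 1, (n, 2))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))
S1 = []
for i in range(2):
    AB = A.copy()
    AB[:, i] = B[:, i]           # resample only parameter i
    S1.append(np.mean(fB * (surrogate(AB) - fA)) / var)
print(S1)
```

The point of the meta-model is in step 2: the 200,000 surrogate evaluations cost almost nothing, whereas running the thermal-hydraulic code itself that many times would be infeasible.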
Sensitivity analysis in optimization and reliability problems
International Nuclear Information System (INIS)
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
2008-01-01
The paper starts giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function, the primal and the dual variables with respect to data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems and a slope stability problem is used to illustrate the methods.
Sensitivity analysis in optimization and reliability problems
Energy Technology Data Exchange (ETDEWEB)
Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: castie@unican.es; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: roberto.minguez@uclm.es; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: mariacarmen.castillo@uclm.es
2008-12-15
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
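In linear programming, the closed-form sensitivity of the optimal value to a right-hand-side term is the corresponding dual variable (shadow price). The sketch below checks this numerically on a hypothetical textbook product-mix LP (not from the paper), solving the two-variable problem by brute-force vertex enumeration and estimating the duals by finite differences.

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 for two variables,
    by enumerating the vertices of the feasible polygon."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]  # x >= 0
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraints: no intersection vertex
        x0 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x1 = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x0 + r[1] * x1 <= v + 1e-9 for r, v in zip(rows, rhs)):
            val = c[0] * x0 + c[1] * x1
            if best is None or val > best[0]:
                best = (val, (x0, x1))
    return best

def shadow_price(c, A, b, i, eps=1e-6):
    """Sensitivity of the optimal value to b_i: a finite-difference
    estimate of the dual variable y_i."""
    z0, _ = solve_lp_2d(c, A, b)
    b_pert = list(b)
    b_pert[i] += eps
    z1, _ = solve_lp_2d(c, A, b_pert)
    return (z1 - z0) / eps

# hypothetical product-mix LP: maximize 3x + 5y
c = [3.0, 5.0]
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = [4.0, 12.0, 18.0]
z_opt, x_opt = solve_lp_2d(c, A, b)
duals = [shadow_price(c, A, b, i) for i in range(3)]
```

The first constraint is slack at the optimum, so its shadow price is zero; the binding constraints have positive prices, matching the closed-form dual solution.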
Techniques for sensitivity analysis of SYVAC results
International Nuclear Information System (INIS)
Prust, J.O.
1985-05-01
Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their applicability to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
Multiple predictor smoothing methods for sensitivity analysis
International Nuclear Information System (INIS)
Helton, Jon Craig; Storlie, Curtis B.
2006-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Multiple predictor smoothing methods for sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
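The advantage of nonparametric smoothing over linear or rank regression shows up clearly on a toy case: for y = x² on a symmetric interval, the linear correlation with x is essentially zero even though the dependence is total. The sketch below uses a crude binned conditional-mean smoother as a stand-in for LOESS; the test function, bin count, and sample size are illustrative assumptions, not the authors' procedures.

```python
import random

def binned_sensitivity(xs, ys, bins=25):
    """Crude smoothing-based sensitivity measure: Var_x(E[y|x]) / Var(y),
    estimated by binning x and averaging y within each bin (a
    piecewise-constant smoother rather than true LOESS)."""
    n = len(ys)
    ybar = sum(ys) / n
    var_y = sum((y - ybar) ** 2 for y in ys) / n
    lo, hi = min(xs), max(xs)
    sums = [0.0] * bins
    counts = [0] * bins
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        sums[b] += y
        counts[b] += 1
    between = sum(counts[b] * (sums[b] / counts[b] - ybar) ** 2
                  for b in range(bins) if counts[b])
    return between / n / var_y

rng = random.Random(0)
n = 20000
x1 = [rng.uniform(-1.0, 1.0) for _ in range(n)]
x2 = [rng.uniform(-1.0, 1.0) for _ in range(n)]
y = [a * a for a in x1]  # purely nonlinear in x1, independent of x2

s1 = binned_sensitivity(x1, y)  # should be near 1: x1 explains y
s2 = binned_sensitivity(x2, y)  # should be near 0: x2 is irrelevant
```

A linear-regression R² on the same data would be near zero for both inputs, which is exactly the failure mode the smoothing procedures address.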
Dynamic Resonance Sensitivity Analysis in Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2017-01-01
(PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...
Critical reflection activation analysis - a new near-surface probe
International Nuclear Information System (INIS)
Gunn, J.M.F.; Trohidou, K.N.
1988-09-01
We propose a new surface analytic technique, Critical Reflection Activation Analysis (CRAA). This technique allows accurate depth profiling of impurities ≤ 100 Å beneath a surface. The depth profile of the impurity is simply related to the induced activity as a function of the angle of reflection. We argue that the technique is practical and estimate its accuracy. (author)
International Nuclear Information System (INIS)
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and the information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and the limitations of sensitivity theory. It examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; the selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. It gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters in the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functionals (such as reactivity worth ratios), and for reactor reactivity. It offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad-group sensitivities and uncertainties. It provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Finally, it indicates limitations of sensitivity theory originating from the fact that this theory is based on first-order perturbation theory.
Stacks, Ann M; Muzik, Maria; Wong, Kristyn; Beeghly, Marjorie; Huth-Bocks, Alissa; Irwin, Jessica L; Rosenblum, Katherine L
2014-01-01
This study examined relationships among maternal reflective functioning, parenting, infant attachment, and demographic risk in a relatively large (N = 83) socioeconomically diverse sample of women with and without a history of childhood maltreatment and their infants. Most prior research on parental reflective functioning has utilized small homogenous samples. Reflective functioning was assessed with the Parent Development Interview, parenting was coded from videotaped mother-child interactions, and infant attachment was evaluated in Ainsworth's Strange Situation by independent teams of reliable coders masked to maternal history. Reflective functioning was associated with parenting sensitivity and secure attachment, and inversely associated with demographic risk and parenting negativity; however, it was not associated with maternal maltreatment history or PTSD. Parenting sensitivity mediated the relationship between reflective functioning and infant attachment, controlling for demographic risk. Findings are discussed in the context of prior research on reflective functioning and the importance of targeting reflective functioning in interventions.
Analysis of Smith-Purcell BWO with end reflections
International Nuclear Information System (INIS)
Kumar, V.; Kim, K.-J.
2006-01-01
We present a one-dimensional time-dependent analysis and simulation of the Smith-Purcell (SP) backward wave oscillator (BWO), taking end reflections and attenuation into account. In the linear regime, we obtain an analytic solution and calculate the start current, and we study the dependence of the start current on end reflections, including the attenuation due to finite conductivity. To this end, we set up Maxwell-Lorentz equations for the one-dimensional time-dependent analysis of the SP-BWO with end reflections and finite-conductivity attenuation, obtain a solution in the linear regime, and extend the analysis to the nonlinear regime by solving the Maxwell-Lorentz equations numerically. Our analysis can be used for detailed optimization of the outcoupled power and start current in the SP-BWO.
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
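A core output of probabilistic sensitivity analysis in this setting is the cost-effectiveness acceptability curve (CEAC): the probability, over the parameter uncertainty, that an intervention is cost-effective at each willingness-to-pay threshold. The sketch below uses entirely made-up posterior draws; the distributions, their parameters, and the thresholds are illustrative assumptions, not from the article.

```python
import random

def ceac(delta_costs, delta_effects, wtp_grid):
    """Cost-effectiveness acceptability curve: for each willingness-to-pay k,
    the share of PSA draws with positive incremental net benefit k*dE - dC."""
    n = len(delta_costs)
    return [sum(1 for dc, de in zip(delta_costs, delta_effects)
                if k * de - dc > 0.0) / n
            for k in wtp_grid]

rng = random.Random(42)
n = 5000
# hypothetical PSA draws for a new treatment vs. standard care:
# incremental cost ~ N(1000, 200) currency units,
# incremental effect ~ N(0.1, 0.05) QALYs (made-up numbers)
dC = [rng.gauss(1000.0, 200.0) for _ in range(n)]
dE = [rng.gauss(0.1, 0.05) for _ in range(n)]
curve = ceac(dC, dE, [0.0, 10000.0, 20000.0, 50000.0])
```

At a threshold of zero the treatment is almost never cost-effective (it costs more); the acceptance probability then rises with the threshold, crossing one half near the implied ICER of 10,000 per QALY.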
TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER
Directory of Open Access Journals (Sweden)
Richard E. Wendell
2010-12-01
Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.
Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media
Wang, Tengfei
2017-08-17
Elastic reflection waveform inversion (ERWI) utilizes reflections to update the low and intermediate wavenumbers in the deeper part of the model. However, ERWI suffers from the cycle-skipping problem because its objective function is the waveform residual. Since traveltime information relates to the background model more linearly, we use the traveltime residuals as the objective function to update the background velocity model using wave-equation reflection traveltime inversion (WERTI). The reflection kernel analysis shows that mode decomposition can suppress the artifacts in gradient calculation. We design a two-step inversion strategy, in which PP reflections are first used to invert the P-wave velocity (Vp), followed by S-wave velocity (Vs) inversion with PS reflections. P/S separation of multi-component seismograms and spatial wave-mode decomposition can effectively reduce the nonlinearity of the inversion by selecting suitable P- or S-wave subsets for hierarchical inversion. A numerical example on the Sigsbee2A model validates the effectiveness of the algorithms and strategies for elastic WERTI (E-WERTI).
Some reflections on uncertainty analysis and management
International Nuclear Information System (INIS)
Aven, Terje
2010-01-01
A guide to quantitative uncertainty analysis and management in industry has recently been issued. The guide provides an overall framework for uncertainty modelling and characterisations, using probabilities but also other uncertainty representations (including the Dempster-Shafer theory). A number of practical applications showing how to use the framework are presented. The guide is considered as an important contribution to the field, but there is a potential for improvements. These relate mainly to the scientific basis and clarification of critical issues, for example, concerning the meaning of a probability and the concept of model uncertainty. A reformulation of the framework is suggested using probabilities as the only representation of uncertainty. Several simple examples are included to motivate and explain the basic ideas of the modified framework.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring the Thomsen parameter δ in the laboratory even when the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, the vertical interval velocity error and the time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to time-picking errors caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
Global approach of emergency response, reflection analysis
International Nuclear Information System (INIS)
Velasco Garcia, E.; Garcia Ahumada, F.; Albaladejo Vidal, S.
1998-01-01
The emergency response management approach must be dealt with adequately within company strategy, since a badly managed emergency situation can adversely affect a company not only in terms of assets, but also through the negative impact on its credibility, profitability and image. There are three main supports for managing the response in an emergency situation: (a) diagnosis; (b) prognosis; (c) communications. To achieve these capabilities, coordination of different actions is necessary at the following levels: (i) facility operation (local level); (ii) facility property (national level); (iii) local authority (local level); (iv) national authority (national level). Taking all of the above into account, the following functions must be covered: (a) management, incorporating the communication, diagnosis and prognosis areas; (b) decision, incorporating communication and information means; (c) services, to facilitate the decision as well as its execution; (d) analysis, to clarify the situations and make them easier to decide upon; (e) documentation, to supply information to the analysts and decision makers. (Author)
Anisotropic analysis for seismic sensitivity of groundwater monitoring wells
Pan, Y.; Hsu, K.
2011-12-01
Taiwan is located at the boundary of the Eurasian Plate and the Philippine Sea Plate. Plate movement causes crustal uplift and lateral deformation, leading to frequent earthquakes in the vicinity of Taiwan. Changes of groundwater level triggered by earthquakes have been observed and studied in Taiwan for many years. The change of groundwater may appear as oscillations or step changes: the former are caused by seismic waves; the latter are caused by the volumetric strain and reflect the strain status. Since setting up a groundwater monitoring well is easier and cheaper than setting up a strain gauge, groundwater measurements may be used as an indication of stress. This research proposes the concept of seismic sensitivity of a groundwater monitoring well and applies it to the DonHer station in Taiwan. A geostatistical method is used to analyze the anisotropy of seismic sensitivity, and GIS is used to map the sensitive area of the existing groundwater monitoring well.
Sensitivity Analysis of Centralized Dynamic Cell Selection
DEFF Research Database (Denmark)
Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.
2016-01-01
and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...
Applications of advances in nonlinear sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Werbos, P J
1982-01-01
The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
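The "differentiation at minimum cost" idea summarized above survives today as reverse-mode automatic differentiation: all partial derivatives of a scalar output are obtained in roughly one extra backward pass, regardless of the number of inputs. The toy implementation below is an illustrative sketch of that principle, not Werbos's original formulation.

```python
class Var:
    """Minimal reverse-mode autodiff node. backward() delivers gradients of a
    scalar output with respect to every input in one backward sweep, whose
    cost is comparable to a single forward evaluation."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # post-order DFS gives a topological order; traverse it in reverse
        # so each node's gradient is complete before it is propagated
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in v.parents:
                p.grad += g * v.grad  # chain rule accumulation

x = Var(2.0)
y = Var(3.0)
f = x * y + x * x  # f = xy + x^2; df/dx = y + 2x, df/dy = x
f.backward()
```

After the single backward pass, x.grad holds y + 2x = 7 and y.grad holds x = 2, without any per-input finite differencing.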
Sensitivity Analysis of a Physiochemical ...
African Journals Online (AJOL)
Michael Horsfall
The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the .... Chemical Engineering Journal 128(2-3), 85-93. Amod S ... coupled 3-PG and soil organic matter.
Sensitivity Analysis in Two-Stage DEA
Directory of Open Access Journals (Sweden)
Athena Forghani
2015-07-01
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision-making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce the final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper examines a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
Sensitivity Analysis in Two-Stage DEA
Directory of Open Access Journals (Sweden)
Athena Forghani
2015-12-01
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision-making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce the final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper examines a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
Directory of Open Access Journals (Sweden)
Joanne Embree
2001-01-01
Ideally, editorials are written one to two months before publication in the Journal. It was my turn to write this one. I had planned to write the first draft the evening after my clinic on Tuesday, September 11. It didn't get done that night or during the next week. Somehow, the topic that I had originally chosen just didn't seem that important anymore as I, along with my friends and colleagues, reflected on the changes that the events of that day were likely to have on our lives.
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Werdell, P. Jeremy; Ooesler, Collin S.
2012-01-01
The daily, synoptic images provided by satellite ocean color instruments provide viable data streams for observing changes in the biogeochemistry of marine ecosystems. Ocean reflectance inversion models (ORMs) provide a common mechanism for inverting the "color" of the water observed by a satellite into marine inherent optical properties (IOPs) through a combination of empiricism and radiative transfer theory. IOPs, namely the spectral absorption and scattering characteristics of ocean water and its dissolved and particulate constituents, describe the contents of the upper ocean, information critical for furthering scientific understanding of biogeochemical oceanic processes. Many recent studies inferred marine particle sizes and discriminated between phytoplankton functional groups using remotely sensed IOPs. While all demonstrated the viability of their approaches, few described the vertical distributions of the water column constituents under consideration and, thus, failed to report the biophysical conditions under which their model performed (e.g., the depth and thickness of the phytoplankton bloom(s)). We developed an ORM to remotely identify Noctiluca miliaris and other phytoplankton functional types using satellite ocean color data records collected in the northern Arabian Sea. Here, we present results from analyses designed to evaluate the applicability and sensitivity of the ORM to varied biophysical conditions. Specifically, we: (1) synthesized a series of vertical profiles of spectral inherent optical properties that represent a wide variety of bio-optical conditions for the northern Arabian Sea under a N. miliaris bloom; (2) generated spectral remote-sensing reflectances from these profiles using Hydrolight; and (3) applied the ORM to the synthesized reflectances to estimate the relative concentrations of diatoms and N. miliaris for each example. By comparing the estimates from the inversion model to those from synthesized vertical profiles, we were able to
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
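The confidence idea can be sketched in miniature: sample the uncertain parameters, and report the fraction of draws under which the base-case optimal policy remains optimal. The two-policy model below is entirely hypothetical (made-up values and distribution, standing in for the study's MDP value functions).

```python
import random

def policy_values(p):
    """Hypothetical two-policy decision model: expected values as a function
    of an uncertain response probability p (a stand-in for MDP values)."""
    v_treat = 10.0 * p - 2.0  # benefit scales with response, minus fixed burden
    v_wait = 3.0              # constant expected value of watchful waiting
    return v_treat, v_wait

def policy_confidence(sample_p, n=20000, seed=7):
    """Probabilistic sensitivity analysis of the recommendation: the fraction
    of parameter draws under which the base-case optimal policy ('treat',
    optimal whenever p > 0.5) remains optimal."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        v_treat, v_wait = policy_values(sample_p(rng))
        if v_treat > v_wait:
            wins += 1
    return wins / n

# assumed uncertainty on p: Beta(6, 4), mean 0.6
conf = policy_confidence(lambda rng: rng.betavariate(6.0, 4.0))
```

Repeating this over a grid of acceptance thresholds yields the policy acceptability curve described in the abstract.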
Global sensitivity analysis by polynomial dimensional decomposition
Energy Technology Data Exchange (ETDEWEB)
Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2011-07-15
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
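The key identity, variance shares obtained from squared coefficients of an orthonormal polynomial expansion, can be sketched for the univariate (first-order) slice of such a decomposition. The test function, basis truncation, and plain Monte Carlo coefficient estimation below are illustrative simplifications of the paper's dimension-reduction integration, not its method.

```python
import math
import random

# orthonormal Legendre polynomials on [-1, 1] w.r.t. the uniform density
PSI = (lambda x: math.sqrt(3.0) * x,
       lambda x: math.sqrt(5.0) * 0.5 * (3.0 * x * x - 1.0))

def univariate_pdd_indices(f, dim, n=100000, seed=3):
    """First-order sensitivity indices from a univariate polynomial
    expansion: project the output onto orthonormal polynomials of each
    input (coefficients by Monte Carlo integration); the squared
    coefficients sum to that input's share of the output variance."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / n
    indices = []
    for i in range(dim):
        v = sum((sum(y * psi(x[i]) for x, y in zip(xs, ys)) / n) ** 2
                for psi in PSI)
        indices.append(v / var)
    return indices

# f = 2*x1 + x2^2: exact first-order indices are 60/64 and 4/64
S = univariate_pdd_indices(lambda x: 2.0 * x[0] + x[1] ** 2, 2)
```

Because the test function is itself a degree-2 polynomial, the truncated basis captures its variance exactly, and the indices match the analytical ANOVA shares up to Monte Carlo noise.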
Demonstration sensitivity analysis for RADTRAN III
International Nuclear Information System (INIS)
Neuhauser, K.S.; Reardon, P.C.
1986-10-01
A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves
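Elasticity, one of the quantities assessed above, is the percent change in output per percent change in an input. The finite-difference sketch below uses a made-up multiplicative toy model (not the RADTRAN equations); for such models the elasticities are exactly the exponents of each factor.

```python
def elasticity(f, x0, i, h=1e-5):
    """Elasticity of the model output with respect to input i at base case
    x0: (dY/Y) / (dx/x), by a central finite difference."""
    up = list(x0)
    dn = list(x0)
    up[i] *= 1.0 + h
    dn[i] *= 1.0 - h
    return (f(up) - f(dn)) / (2.0 * h * f(x0))

def dose(x):
    # hypothetical toy transport-dose model: dose proportional to source
    # strength x[0] and inversely proportional to vehicle speed x[1]
    return 5.0 * x[0] / x[1]

e_source = elasticity(dose, [2.0, 3.0], 0)  # expect about +1
e_speed = elasticity(dose, [2.0, 3.0], 1)   # expect about -1
```

An elasticity near +1 or -1 flags the variables that dominate the dose, which is exactly the ranking information a demonstration sensitivity analysis is after.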
See, reflect, learn more: qualitative analysis of breaking bad news reflective narratives.
Karnieli-Miller, Orit; Palombo, Michal; Meitar, Dafna
2018-05-01
Breaking bad news (BBN) is a challenge that requires multiple professional competencies. BBN teaching often includes didactic and group role-playing sessions. Both are useful and important, but they exclude another critical component of students' learning: day-to-day role-model observation in the clinics. Given the importance of observation and the potential benefit of reflective writing in teaching, we incorporated reflective writing into our BBN course. The aim of this study was to enhance our understanding of the learning potential of reflective writing about BBN encounters and to identify components that inhibit this learning. This was a systematic qualitative immersion/crystallization analysis of 166 randomly selected BBN narratives written by 83 senior medical students. We analysed the narratives in an iterative consensus-building process to identify the issues discussed, the lessons learned and the enhanced understanding of BBN. Having previously been unaware of, not invited to, or having avoided BBN encounters, students responded to the mandatory assignment by seeking out BBN encounters or asking their mentors to include them in such encounters. Observation and reflective writing enhanced students' awareness that 'bad news' is relative and subjective, while shedding light on patients', families', physicians' and their own experiences and needs, revealing the importance of the different components of the BBN protocol. We identified diversity among the narratives and in the extent of students' learning. Narrative writing provided students with an opportunity for a deliberative learning process. This led to a deeper understanding of BBN encounters, of how to apply the newly taught protocol, or of the need for it. This process connected the formal and informal or hidden curricula. To maximise learning through reflective writing, students should be encouraged to write in detail about a recent observed encounter, analyse it according to the protocol, address different participants
International Nuclear Information System (INIS)
Barber, A. D.; Busch, R.
2009-01-01
The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine the overall uncertainty of the experiment. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are known only as well as its geometric and material properties. The goal of establishing this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
Sensitivity analysis of the Two Geometry Method
International Nuclear Information System (INIS)
Wichers, V.A.
1993-09-01
The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low-enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, small pipe diameters (less than 40 mm) and low pressures (less than 150 Pa). This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case conditions with regard to the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)
A Frame-Reflective Discourse Analysis of Serious Games
Mayer, Igor; Warmelink, Harald; Zhou, Qiqi
2016-01-01
The authors explore how framing theory and the method of frame-reflective discourse analysis provide foundations for the emerging discipline of serious games (SGs) research. Starting with Wittgenstein's language game and Berger and Luckmann's social constructivist view on science, the authors demonstrate why a definitional or taxonomic approach to…
Cross-covariance based global dynamic sensitivity analysis
Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng
2018-02-01
For identifying the cross-covariance source of the dynamic output at each time instant for structural systems involving both random input variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. First, a cross-covariance decomposition is developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of an input to the cross-covariance of the output. The GDS main effect indices and the GDS total effect indices can then be easily defined after the integration; they are effective in identifying, respectively, the important inputs and the non-influential inputs for the cross-covariance of the output at each time instant. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effect of each input on the output variance, and the second-order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. A Monte Carlo simulation (MCS) procedure and the Kriging surrogate method are developed to estimate the proposed GDS indices. Several examples illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
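The reactivity index discussed above has a compact linear-algebra form: for a system linearized as dx/dt = Ax, reactivity is the largest eigenvalue of the symmetric part of A, and the system is reactive when that value is positive even though A itself is stable. A minimal sketch (the Jacobian below is an illustrative stand-in, not the paper's predator-prey model):

```python
import numpy as np

# Reactivity of a linearized system dx/dt = A x is the maximum
# instantaneous growth rate of perturbation magnitude:
#   reactivity = lambda_max( (A + A^T) / 2 )

def reactivity(A):
    """Largest eigenvalue of the symmetric (Hermitian) part of A."""
    H = (A + A.T) / 2.0
    return np.linalg.eigvalsh(H)[-1]

# Hypothetical Jacobian at equilibrium (illustrative numbers).
A = np.array([[-0.1, -1.0],
              [ 0.5, -0.2]])

stable = np.all(np.linalg.eigvals(A).real < 0)   # asymptotically stable
r = reactivity(A)                                # positive => reactive
```

Here the system is asymptotically stable yet reactive: perturbations can grow transiently before decaying.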
Global sensitivity analysis using polynomial chaos expansions
International Nuclear Information System (INIS)
Sudret, Bruno
2008-01-01
Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices.
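For an orthonormal PCE basis, the post-processing the abstract describes reduces to grouping squared coefficients by multi-index: the total variance is the sum of all squared non-constant coefficients, and each Sobol' index sums the subset involving the relevant input. A hedged sketch with made-up coefficients (the multi-indices and values below are purely illustrative):

```python
import numpy as np

# For y = sum_a c_a * Psi_a(x) with an orthonormal basis:
#   D    = sum_{a != 0} c_a^2                       (total variance)
#   S_i  = sum over a nonzero only in slot i / D    (first order)
#   ST_i = sum over a with a_i > 0 / D              (total effect)

def sobol_from_pce(multi_indices, coeffs):
    multi_indices = np.asarray(multi_indices)
    coeffs = np.asarray(coeffs, dtype=float)
    nonzero = multi_indices.any(axis=1)          # drop constant term
    D = np.sum(coeffs[nonzero] ** 2)
    n_inputs = multi_indices.shape[1]
    first_order, total = [], []
    for i in range(n_inputs):
        others = [j for j in range(n_inputs) if j != i]
        only_i = nonzero & (multi_indices[:, others] == 0).all(axis=1)
        involves_i = multi_indices[:, i] > 0
        first_order.append(np.sum(coeffs[only_i] ** 2) / D)
        total.append(np.sum(coeffs[involves_i] ** 2) / D)
    return first_order, total

# Hypothetical coefficients for a 2-input, degree-2 expansion.
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1)]
c      = [ 1.0,    0.8,    0.4,    0.2,    0.1 ]
S, ST = sobol_from_pce(alphas, c)
```

The constant term never enters the variance, and ST_i ≥ S_i by construction since the total-effect sum includes every interaction term touching input i.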
Contributions to sensitivity analysis and generalized discriminant analysis
International Nuclear Information System (INIS)
Jacques, J.
2005-12-01
Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations in its inputs. Variance-based methods quantify the part of the variance of the model response that is due to each input variable and to each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach that expresses the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods from the Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
Simple Sensitivity Analysis for Orion GNC
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimated success probability, and a technique for determining whether pairs of factors interact dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
Sensitivity analysis of floating offshore wind farms
International Nuclear Information System (INIS)
Castro-Santos, Laura; Diaz-Casas, Vicente
2015-01-01
Highlights: • Develops a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • Helps investors make decisions in the future. - Abstract: The future of offshore wind energy lies in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm, showing how much the output variables vary when the input variables change. For this purpose two different scenarios are considered: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important indexes of the economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore in the costs scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors take these variables into account when developing floating offshore wind farms in the future.
A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors
Directory of Open Access Journals (Sweden)
Xi Yu
2014-01-01
Life cycle assessment (LCA) has been widely used during the last two decades in the design phase to reduce a product's environmental impacts across the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need for an approach that can relate changes in design parameters to changes in a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors which have significant influence on the product's environmental impacts can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed with our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.
Internal reflection spectroscopic analysis of sulphide mineral surfaces
International Nuclear Information System (INIS)
Kaoma, J.
1989-01-01
To establish the reason for the flotation of sulphide minerals in the absence of any conventional collector, internal reflection spectroscopic (IRS) analysis of their surfaces was conducted. Sulphur, sulphates, thiosulphates, and hydrocarbonates were detected on the surfaces of as-ground sulphide minerals. On sodium sulphide-treated surfaces, both sulphur and polysulphide were also found to be present. From these findings, the flotation of sulphide minerals without collectors is discussed. (author). 26 refs
Nursing Student Perceptions of Reflective Journaling: A Conjoint Value Analysis
Hendrix, Thomas J.; O'Malley, Maureen; Sullivan, Catherine; Carmon, Bernice
2012-01-01
This study used a statistical technique, conjoint value analysis, to determine student perceptions related to the importance of predetermined reflective journaling attributes. An expert Delphi panel determined these attributes and integrated them into a survey which presented students with multiple journaling experiences from which they had to choose. After obtaining IRB approval, a convenience sample of 66 baccalaureate nursing students completed the survey. The relative importance of the at...
Sensitivity analysis of a modified energy model
International Nuclear Information System (INIS)
Suganthi, L.; Jagadeesan, T.R.
1997-01-01
Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirements for coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern of the modified model, as compared to a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model ranges from -20% to +20% for coal and oil, while for electricity it ranges from -80% to +20%. In the modified model, however, the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower consumption limits at 95% confidence are determined. Consumption at varying percentage changes in price and population is analysed. The gap between the modified model's predictions at varying percentage changes in price and population over the years 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and because the confidence level decreases as the projection extends farther into the future. (author)
Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I
National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to the study of biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis are the two types of approach commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. Global sensitivity analysis approaches, on the other hand, are applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
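As a concrete instance of the local approach described above, normalized finite-difference sensitivities (percent change in output per percent change in each parameter) can be sketched as follows. The model function and parameter values are illustrative assumptions, not a published pathway model:

```python
import numpy as np

# Toy steady-state "pathway" output as a function of rate parameters.
def model(k):
    k1, k2, k3 = k
    return k1 * k2 / (k3 + k2)

k0 = np.array([2.0, 1.0, 0.5])   # nominal parameter values (hypothetical)

def local_sensitivities(f, k0, h=1e-6):
    """Normalized local sensitivities (dy/y) / (dk_i/k_i) at k0,
    via forward finite differences with relative step h."""
    y0 = f(k0)
    S = []
    for i in range(len(k0)):
        k = k0.copy()
        k[i] += h * k0[i]                       # relative perturbation
        dy_dk = (f(k) - y0) / (h * k0[i])       # raw derivative
        S.append(dy_dk * k0[i] / y0)            # normalize
    return np.array(S)

S_local = local_sensitivities(model, k0)
```

For this model the normalized sensitivities have closed forms (1, k3/(k3+k2), -k3/(k3+k2)), which the finite-difference estimates should reproduce closely.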
Yin, Changhai; Iqbal, Jibran; Hu, Huilian; Liu, Bingxiang; Zhang, Lei; Zhu, Bilin; Du, Yiping
2012-09-30
A simple, sensitive and selective solid-phase reflectometry method is proposed for the determination of trace mercury in aqueous samples. The complexation reagent dithizone was first injected into the properly buffered solution with vigorous stirring, which started the simultaneous formation of a dithizone nanoparticle suspension and its complexation reaction with mercury(II) ions to form Hg-dithizone nanoparticles. After a definite time, the mixture was filtered through a membrane and then quantified directly on the surface of the membrane using the integrating sphere accessory of a UV-visible spectrophotometer. The quantitative analysis was carried out at a wavelength of 485 nm, since this wavelength yielded the largest difference in the diffuse reflectance spectra before and after reaction with mercury(II). A good linear correlation in the range of 0.2-4.0 μg/L, with a squared correlation coefficient (R(2)) of 0.9944, and a detection limit of 0.12 μg/L were obtained. The accuracy of the method was evaluated by comparing spiked mercury(II) concentrations determined using this method with those determined by an atomic fluorescence mercury vapour meter, and the results were in good agreement. The proposed method was applied to the determination of mercury in tap water and river water samples, with recoveries in an acceptable range (95.7-105.3%). Copyright © 2012 Elsevier B.V. All rights reserved.
A new importance measure for sensitivity analysis
International Nuclear Information System (INIS)
Liu, Qiao; Homma, Toshimitsu
2010-01-01
Uncertainty is an integral part of risk assessment for complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution is proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear non-monotonic mathematical model and a risk model. In addition, a comparison of the new importance measure with several other importance measures was carried out and the differences between these measures are explained. (author)
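A measure of this kind, the expected deviation of the output CDF when one input is fixed, can be approximated with a double-loop Monte Carlo sketch like the following. The test model, distributions and sample sizes are illustrative assumptions, not the paper's cases:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: a strong linear input and a weak quadratic one.
    return x[..., 0] + 0.2 * x[..., 1] ** 2

def cdf_shift_importance(i, n_outer=200, n_inner=500):
    """Expected area between the unconditional output CDF and the
    CDF conditioned on fixing input i (double-loop Monte Carlo)."""
    base = np.sort(model(rng.standard_normal((20000, 2))))
    grid = np.linspace(base[0], base[-1], 200)
    F = np.searchsorted(base, grid) / base.size     # unconditional CDF
    devs = []
    for _ in range(n_outer):
        xi = rng.standard_normal()
        x = rng.standard_normal((n_inner, 2))
        x[:, i] = xi                                # condition on X_i = xi
        Fc = np.searchsorted(np.sort(model(x)), grid) / n_inner
        # crude integral of |F - Fc| over the grid range
        devs.append(np.abs(F - Fc).mean() * (grid[-1] - grid[0]))
    return float(np.mean(devs))

imp = [cdf_shift_importance(i) for i in range(2)]
```

Since input 0 enters linearly with the larger variance contribution, fixing it shifts the output CDF much more than fixing input 1, so the measure ranks it higher.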
DEA Sensitivity Analysis for Parallel Production Systems
Directory of Open Access Journals (Sweden)
J. Gerami
2011-06-01
In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel, with each subunit working independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) generated by these DMUs, in which the frontier points are considered efficient DMUs. We then introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
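Subunit efficiencies of this kind are commonly computed with the CCR multiplier model, one linear program per DMU. A minimal sketch using `scipy.optimize.linprog` (the single-input, single-output data are hypothetical, and this is the plain CCR model rather than the paper's super-efficiency variant):

```python
import numpy as np
from scipy.optimize import linprog

# CCR (input-oriented, multiplier form) efficiency of DMU o:
#   max u.y_o   s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0

def ccr_efficiency(X, Y, o):
    n, m = X.shape                    # n DMUs, m inputs
    s = Y.shape[1]                    # s outputs; variables z = [u, v]
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Hypothetical single-input, single-output data for four subunits.
X = np.array([[2.0], [4.0], [3.0], [5.0]])   # inputs
Y = np.array([[1.0], [2.0], [1.2], [2.0]])   # outputs
eff = [ccr_efficiency(X, Y, o) for o in range(4)]
```

With one input and one output the efficiency reduces to the output/input ratio scaled by the best ratio, which makes the LP results easy to verify by hand.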
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In RELAP5 analysis of small break loss of coolant accidents, it is found that the resultant transient behavior is quite sensitive to the nodalization selected for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, the predicted distribution of inventory around the primary is significantly affected by it. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery
Subset simulation for structural reliability sensitivity analysis
International Nuclear Information System (INIS)
Song Shufang; Lu Zhenzhou; Qiao Hongwei
2009-01-01
Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to a distribution parameter of a basic variable is transformed into a set of RS of conditional failure probabilities with respect to that parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance and its coefficient of variation are derived in detail. The illustrative results show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and for structural systems with single and multiple failure modes
Systemization of burnup sensitivity analysis code (2) (Contract research)
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2008-08-01
Towards the practical use of fast reactors, it is very important to improve the prediction accuracy for neutronic properties of LMFBR cores, from the viewpoint of improving plant economic efficiency through rationally high performance cores and of improving reliability and safety margins. A distinct improvement in nuclear core design accuracy has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of the JUPITER critical experiments and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to make effective use of burnup characteristics data from actual cores within the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence has become inefficient because of the heavy burden placed on users by the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, because the computational sequence may change depending on the item being analyzed or on the purpose, such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized into functional component blocks that can be divided or combined as the occasion demands
Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media
Wang, Tengfei; Cheng, Jiubing
2017-01-01
Elastic reflection waveform inversion (ERWI) utilizes reflections to update the low and intermediate wavenumbers in the deeper part of the model. However, ERWI suffers from the cycle-skipping problem due to the objective function of waveform residual
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
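The stability analysis in point (1) can be illustrated on the smallest possible tree: one risky act over a chance node versus one certain act. The sketch below (illustrative payoffs and probabilities, not the paper's example) computes how far the estimated probability can drift before the expected-value-maximizing strategy flips:

```python
import numpy as np

# Two-action decision problem under uncertain probabilities:
# act A resolves through a chance node, act B is a certain payoff.
payoff_A = np.array([100.0, -50.0])   # chance-node outcomes (hypothetical)
payoff_B = 20.0                       # certain alternative (hypothetical)
p_hat = np.array([0.5, 0.5])          # point estimate of probabilities

def best_action(p):
    """Expected-value-maximizing act under probability vector p."""
    return "A" if p @ payoff_A > payoff_B else "B"

# Break-even probability of the good outcome: p* solves
#   p* * payoff_A[0] + (1 - p*) * payoff_A[1] = payoff_B
p_star = (payoff_B - payoff_A[1]) / (payoff_A[0] - payoff_A[1])

# Stability margin: pessimistic drift in p_hat[0] that flips the strategy.
stability_margin = p_hat[0] - p_star
```

A small positive margin signals a fragile recommendation: modest distributional uncertainty about p is enough to change the optimal strategy, which is exactly what the robust perturbation analysis probes.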
Coping with breast cancer: a qualitative analysis of reflective journals.
Gonzalez, Lois O; Lengacher, Cecile A
2007-05-01
Non-disclosure of emotions has been shown to inhibit individuals' adjustment to illness and formulation of adequate coping mechanisms. The purpose of this qualitative study was to examine responses to the diagnosis and treatment of breast cancer and patterns of coping through an analysis of written reflective journals. Eight women submitted their journals to the researchers for analysis. Issues identified were (1) the assumption of an adaptive position; (2) the need for tangible evidence of love and support with three divergent responses, and (3) the need for something more. Specific patterns were identified within each issue.
Calibration, validation, and sensitivity analysis: What's what
International Nuclear Information System (INIS)
Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.
2006-01-01
One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
Global sensitivity analysis in wind energy assessment
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by the ranking of the total effect sensitivity indices. The results of the present
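The three sampling strategies compared in the study are all available in `scipy.stats.qmc`. A hedged sketch of such a comparison on a toy integrand (the function, sample size and seed are illustrative assumptions, not the study's wind model):

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand over the unit square; its true mean is 0.25,
# so each strategy's estimation error is easy to measure.
def f(u):
    return u[:, 0] * u[:, 1]

n, d, seed = 1024, 2, 42
samples = {
    "Sobol": qmc.Sobol(d, scramble=True, seed=seed).random(n),   # SBSS
    "LHS":   qmc.LatinHypercube(d, seed=seed).random(n),         # LHS
    "PRS":   np.random.default_rng(seed).random((n, d)),         # PRS
}
errors = {name: abs(f(u).mean() - 0.25) for name, u in samples.items()}
```

For smooth integrands, low-discrepancy (Sobol') and stratified (LHS) designs typically converge faster than pseudo-random sampling at the same budget, which is why the sampling strategy is treated as part of the SA procedure.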
Analysis of speech: a reflection on health research
Directory of Open Access Journals (Sweden)
Laura Christina Macedo
2008-01-01
In this study, we take speech and writing as discursive constructions, indicating the reasons for making them objects of analysis and introducing different instruments to achieve this. We highlight the importance of discourse analysis for the development of health research, since this method enables the interpretation of reality from a text or texts, revealing the subjects of production and their interpretations, as well as the context of their production. The historical construction of contradictions, continuities and ruptures that make discourse a social practice is unveiled. Discourse analysis is considered a means of eliciting the implied meaning in speech and writing and, thus, another approach to the health-disease process. Therefore, this reflection aims to incorporate discourse analysis into the health area, emphasizing this method as a significant contribution to the social sciences.
Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis
Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi
To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functions of traditional DEA and Super-DEA, so it can handle sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.
Analysis of higher order harmonics with holographic reflection gratings
Mas-Abellan, P.; Madrigal, R.; Fimia, A.
2017-05-01
Silver halide emulsions have been considered one of the most energetically sensitive materials for holographic applications. Nonlinear recording effects on holographic reflection gratings recorded in silver halide emulsions have been studied by different authors with excellent experimental results. In this communication, we specifically focus our investigation on the effects of refractive index modulation, trying to reach high levels of overmodulation that produce high-order harmonics. We studied the influence of overmodulation and its effects on the transmission spectra over a wide exposure range, using 9 μm thick films of ultrafine-grain emulsion BB640, exposed to single collimated beams from a red He-Ne laser (wavelength 632.8 nm) in a Denisyuk configuration, giving a spatial frequency of 4990 l/mm recorded in the emulsion. The experimental results show that high overmodulation levels of refractive index produce second-order harmonics with high diffraction efficiency (higher than 75%) and a narrow grating bandwidth (12.5 nm). The results also show that very high overmodulation deforms the diffraction spectrum of the second-order harmonic, transforming it from sinusoidal to approximately square shaped. By increasing the level of overmodulation of the refractive index, we obtained higher-order harmonics, with a third-order harmonic reaching a diffraction efficiency of up to 23% and a narrower grating bandwidth (5 nm). This study is the first step toward a new, simple technique for obtaining narrow spectral filters based on high-index-modulation reflection gratings.
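As a rough reference point for the reported efficiencies, Kogelnik's coupled-wave theory gives, for a lossless unslanted reflection grating at Bragg incidence, a first-order diffraction efficiency η = tanh²(π n₁ d / (λ cosθ)). This single-harmonic formula cannot reproduce the overmodulation harmonics discussed above, but it does show how efficiency saturates with index modulation n₁; the modulation values below are invented for illustration, while the thickness and wavelength are taken from the abstract.

```python
import math

def reflection_efficiency(n1, d_um, wavelength_um, cos_theta=1.0):
    """First-order diffraction efficiency of a lossless, unslanted reflection
    grating at Bragg incidence (Kogelnik coupled-wave theory)."""
    nu = math.pi * n1 * d_um / (wavelength_um * cos_theta)
    return math.tanh(nu) ** 2

# 9 um thick emulsion, He-Ne wavelength 632.8 nm; three assumed modulations.
etas = [reflection_efficiency(n1, 9.0, 0.6328) for n1 in (0.01, 0.03, 0.06)]
```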
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited in the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained with the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p equally likely at 10⁻³, 10⁻², and 10⁻¹); the estimate of the hazard is 1.39 × 10⁻³, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism in the next 10,000 years.
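The expectation over π(p) described above can be illustrated with a small numerical sketch: average any function of p over a Beta prior by quadrature. The choice h(p) = p below (the prior mean) is only an illustration, not the paper's hazard model; the (r, s) pairs are the "mathematically convenient" priors listed in the abstract.

```python
import math

def beta_pdf(p, r, s):
    """Density of the Beta(r, s) distribution at p."""
    b = math.gamma(r) * math.gamma(s) / math.gamma(r + s)
    return p ** (r - 1) * (1 - p) ** (s - 1) / b

def prior_expectation(h, r, s, n=10_000):
    """E_pi[h(p)] under a Beta(r, s) prior, by midpoint quadrature on (0, 1)."""
    dp = 1.0 / n
    return sum(h((k + 0.5) * dp) * beta_pdf((k + 0.5) * dp, r, s) * dp
               for k in range(n))

# Prior mean of p itself, for each mathematically convenient prior above.
means = {(r, s): prior_expectation(lambda p: p, r, s)
         for (r, s) in [(2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), (1, 1)]}
```

Replacing the identity h with a disruption-probability model and repeating over each prior is the shape of the sensitivity analysis the paper performs.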
Energy Technology Data Exchange (ETDEWEB)
Huang, Q.Z. [Key Laboratory of Renewable Energy, Guangdong Key Laboratory of New and Renewable Energy Research and Development, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou 510000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Shi, J.F., E-mail: shijf@ms.giec.ac.cn [Key Laboratory of Renewable Energy, Guangdong Key Laboratory of New and Renewable Energy Research and Development, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou 510000 (China); Wang, L.L.; Li, Y.J.; Zhong, L.W. [Key Laboratory of Renewable Energy, Guangdong Key Laboratory of New and Renewable Energy Research and Development, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou 510000 (China); Xu, G., E-mail: xugang@ms.giec.ac.cn [Key Laboratory of Renewable Energy, Guangdong Key Laboratory of New and Renewable Energy Research and Development, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou 510000 (China)
2016-07-01
In this paper, anti-reflective (AR) films are prepared from sodium water glass with a simple dip-coating method. The effects of the SiO₂/Na₂O molar ratio, the concentration of water glass, and the withdrawal speed on the anti-reflection performance of the AR films are systematically studied. The optimized AR film is further applied in dye-sensitized solar cells (DSCs). The optical properties and surface morphology of the AR films are analyzed by ultraviolet-visible spectrophotometry, scanning electron microscopy, and atomic force microscopy. The transmittance of glass coated with the sodium water glass-based AR film is increased by 3.2% when the SiO₂/Na₂O molar ratio, concentration, and withdrawal speed are 3.8, 5 wt%, and 80 mm/min, respectively. Under these conditions, the thickness of the AR film is 127 nm and the film has an obvious porous structure. In addition, the power conversion efficiency of the DSC coated with the AR film is increased from 7.92% to 8.24% compared with the DSC without the AR film. - Highlights: • Anti-reflective films are prepared from sodium water glass. • Transmittance of the anti-reflective film is increased by 3.2%. • Efficiency of the dye-sensitized cell is improved by the anti-reflective film.
Wear-Out Sensitivity Analysis Project Abstract
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. My final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
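The workflow above, fitting Weibull lifetime models and then Monte Carlo simulating spares sufficiency while varying the wear-out (shape) parameter, can be sketched in miniature. The scale, mission time, and spares count below are invented for illustration, not ISS data; in the Weibull parameterization used here, a shape near 1 means random failures and a high shape means wear-out.

```python
import random

def prob_spares_sufficient(scale, shape, mission_time, spares, n_sim=5000, seed=7):
    """Monte Carlo estimate of the probability that `spares` replacement units
    cover every failure of one installed unit over `mission_time`, assuming
    Weibull(scale, shape) lifetimes and instant replacement (a renewal process)."""
    rng = random.Random(seed)
    sufficient = 0
    for _ in range(n_sim):
        t, failures = 0.0, 0
        while True:
            t += rng.weibullvariate(scale, shape)  # draw the next lifetime
            if t > mission_time:
                break
            failures += 1
        if failures <= spares:
            sufficient += 1
    return sufficient / n_sim

# Same characteristic life, one spare: shape 1.0 is memoryless,
# shape 4.0 exhibits wear-out (failures cluster near the characteristic life).
p_random = prob_spares_sufficient(10.0, 1.0, 20.0, spares=1)
p_wear_out = prob_spares_sufficient(10.0, 4.0, 20.0, spares=1)
```

With these numbers the wear-out case is worse: two failures before the 20-unit mission end become likely once lifetimes concentrate near 10, which is the kind of shift in probability of sufficiency the project measured.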
Supercritical extraction of oleaginous: parametric sensitivity analysis
Directory of Open Access Journals (Sweden)
Santos M.M.
2000-01-01
Full Text Available The economy has become global and competitive; thus the vegetable oil extraction industries must move toward minimizing production costs and, at the same time, generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oleaginous extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that it is possible to propose strategies for high-performance operation.
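The two-level factorial sensitivity analysis described above can be sketched generically: run the model at every combination of coded ±1 parameter levels and compute the main effect of each factor. The toy response below stands in for the diffusive extraction model and is invented for illustration.

```python
from itertools import product

def main_effects(model, n_factors):
    """Main effect of each factor from a full two-level factorial design:
    mean response at the +1 level minus mean response at the -1 level."""
    runs = [(x, model(x)) for x in product((-1.0, 1.0), repeat=n_factors)]
    effects = []
    for i in range(n_factors):
        hi = [y for x, y in runs if x[i] > 0]
        lo = [y for x, y in runs if x[i] < 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Toy response surface: a strong factor, a weak factor, and an interaction
# (the interaction averages out of the main effects by design).
effects = main_effects(lambda x: 3.0 * x[0] + 0.5 * x[1] + 0.1 * x[0] * x[1], 2)
```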
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
SENSIT: a cross-section and design sensitivity and uncertainty analysis code
International Nuclear Information System (INIS)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties, SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
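At first order, the covariance-based propagation described above reduces to the "sandwich rule": the response variance is SᵀCS for a sensitivity-coefficient vector S and a covariance matrix C. A minimal sketch with invented numbers (not SENSIT's multigroup data structures):

```python
def response_variance(sens, cov):
    """Variance of an integral response from sensitivity coefficients S and a
    covariance matrix C: var = S^T C S (first-order perturbation theory)."""
    n = len(sens)
    return sum(sens[i] * cov[i][j] * sens[j] for i in range(n) for j in range(n))

# Two correlated cross-section parameters (illustrative relative values).
S = [0.8, -0.3]
C = [[0.04, 0.01],
     [0.01, 0.09]]
var = response_variance(S, C)
std = var ** 0.5  # estimated standard deviation of the response
```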
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-02
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.
Sensitivity analysis in multi-parameter probabilistic systems
International Nuclear Information System (INIS)
Walker, J.R.
1987-01-01
Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...
Sensitivity Analysis of Fire Dynamics Simulation
DEFF Research Database (Denmark)
Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.
2007-01-01
(Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
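The Morris screening method named above can be sketched as follows. The one-at-a-time trajectory construction and the μ* statistic (mean absolute elementary effect) are standard; the toy model, grid settings, and trajectory count are invented for illustration and are not the fire-simulation parameters of the study.

```python
import random

def morris_mu_star(model, n_params, n_traj=50, levels=4, seed=3):
    """Morris screening: mu* (mean absolute elementary effect) per parameter,
    from random one-at-a-time trajectories on a grid in [0, 1]^k."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        # Base point chosen so that every coordinate can be stepped by +delta.
        x = [rng.randrange(levels // 2) / (levels - 1) for _ in range(n_params)]
        order = list(range(n_params))
        rng.shuffle(order)
        y = model(x)
        for i in order:                    # step one parameter at a time
            x[i] += delta
            y_new = model(x)
            effects[i].append(abs(y_new - y) / delta)
            y = y_new
    return [sum(e) / len(e) for e in effects]

# Toy model: x0 strongly linear, x1 weakly nonlinear, x2 inactive.
mu_star = morris_element = morris_mu_star(lambda x: 10.0 * x[0] + x[1] ** 2, 3)
```

Ranking parameters by μ* is the cheap screening step; inactive parameters (here x2) drop out with only a handful of model runs per trajectory.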
International Nuclear Information System (INIS)
Roshan Entezar, S.
2015-01-01
The phase difference between two p-polarized and s-polarized plane waves which are reflected under total internal reflection from the base of a prism with a thin metal coating is studied. Typically such a quantity can be used to measure the refractive index of a test material using the total internal reflection method. It is shown that due to the excitation of surface plasmon polaritons at the interface between the tested dielectric material and the thin metal layer, the p-polarized light experiences a large phase shift which enlarges the phase difference between the p-polarized and the s-polarized waves. As a result, the sensitivity of refractive index measurement increases and the error in determining the refractive index decreases. - Highlights: • Phase difference of totally internally reflected p and s polarized beams is studied. • Excitation of the surface wave increases the phase shift of the p-polarized light. • The sensitivity of refractive index measurement increases by using a coated prism. • The error in determining the refractive index decreases using the coated prism
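For a bare interface, ignoring the thin metal film and the plasmon enhancement that the abstract studies, the total-internal-reflection phase shifts of the s- and p-polarized waves follow from the Fresnel formulas. A small sketch of the resulting phase difference, with illustrative indices (a BK7-like prism against water; both values are assumptions, not from the paper):

```python
import math

def tir_phase_difference(theta_deg, n_rel):
    """Phase difference delta_p - delta_s for total internal reflection at a
    bare interface with relative index n_rel = n2/n1 < 1 (Fresnel formulas)."""
    th = math.radians(theta_deg)
    if math.sin(th) <= n_rel:
        raise ValueError("below the critical angle: no total internal reflection")
    root = math.sqrt(math.sin(th) ** 2 - n_rel ** 2)
    delta_s = -2.0 * math.atan(root / math.cos(th))
    delta_p = -2.0 * math.atan(root / (n_rel ** 2 * math.cos(th)))
    return delta_p - delta_s

# n1 = 1.515 (prism), n2 = 1.33 (water): critical angle is about 61.4 degrees.
dphi = tir_phase_difference(70.0, 1.33 / 1.515)
```

Measuring how this phase difference shifts with n2 is the refractometry principle the abstract builds on; the metal coating sharpens the p-wave phase response far beyond this bare-interface baseline.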
Liu, Rong; Zhou, Jiawei; Zhao, Haoxin; Dai, Yun; Zhang, Yudong; Tang, Yong; Zhou, Yifeng
2014-01-01
This study aimed to explore the neural development status of the visual system of children (around 8 years old) using contrast sensitivity. We achieved this by eliminating the influence of higher order aberrations (HOAs) with adaptive optics correction. We measured HOAs, modulation transfer functions (MTFs) and contrast sensitivity functions (CSFs) of six children and five adults with both corrected and uncorrected HOAs. We found that when HOAs were corrected, children and adults both showed improvements in MTF and CSF. However, the CSF of children was still lower than the adult level, indicating the difference in contrast sensitivity between groups cannot be explained by differences in optical factors. Further study showed that the difference between the groups also could not be explained by differences in non-visual factors. With these results we concluded that the neural systems underlying vision in children of around 8 years old are still immature in contrast sensitivity. PMID:24732728
Superconducting Accelerating Cavity Pressure Sensitivity Analysis
International Nuclear Information System (INIS)
Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.
2014-01-01
The sensitivity of the cavity was evaluated and is fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs or by modifying the rigidity of the fog we may reduce the HWR sensitivity. During cool-down and warm-up we have to analyze the stresses on the HWR to avoid plastic deformation, since the yield strength of niobium is an order of magnitude lower at room temperature.
A global sensitivity analysis of crop virtual water content
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, no assessments of data sensitivity to model parameters performed at the global scale are known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5×5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted on one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
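The sensitivity index defined above, the relative change of the output divided by the relative change of one input, can be sketched generically. The toy VWC model below (evapotranspiration divided by yield, with made-up reference values) only illustrates the direct/inverse sign convention; it is not the paper's soil-water-balance model.

```python
def sensitivity_index(model, params, name, rel_change=0.01):
    """One-at-a-time sensitivity index: relative change of the output divided
    by the relative change of one input, around a reference parameter set."""
    y_ref = model(params)
    perturbed = dict(params)
    perturbed[name] *= (1.0 + rel_change)
    return (model(perturbed) / y_ref - 1.0) / rel_change

# Toy model: VWC = seasonal evapotranspiration (mm) / yield (t/ha).
vwc = lambda p: p["et"] / p["yield"]
ref = {"et": 450.0, "yield": 3.0}
s_et = sensitivity_index(vwc, ref, "et")        # direct: index near +1
s_yield = sensitivity_index(vwc, ref, "yield")  # inverse: index near -1
```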
Sensitivity analysis of energy demands on performance of CCHP system
International Nuclear Information System (INIS)
Li, C.Z.; Shi, Y.M.; Huang, X.H.
2008-01-01
A sensitivity analysis of energy demands is carried out in this paper to study their influence on the performance of a CCHP system. Energy demand is a very important and complex factor in the optimization model of a CCHP system. Averages, uncertainty, and historical peaks are adopted to describe energy demands. A mixed-integer nonlinear programming (MINLP) model which reflects these three aspects of energy demands is established. Numerical studies are carried out based on the energy demands of a hotel and a hospital. The influence of the average, uncertainty, and peaks of energy demands on the optimal facility scheme and the economic advantages of the CCHP system is investigated. The optimization results show that the optimal GT capacity and the economy of the CCHP system depend mainly on the average energy demands. The sum of the capacities of GB and HE is equal to the historical heating demand peak, and the sum of the capacities of AR and ER is equal to the historical cooling demand peak. The maximum of PG is sensitive to the historical peaks of energy demands and is not influenced by their uncertainty, while the corresponding influence on DH is the reverse.
Derivative based sensitivity analysis of gamma index
Directory of Open Access Journals (Sweden)
Biplab Sarkar
2015-01-01
Full Text Available Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and the distance-to-agreement (DTA) in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as a "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if that is <1 the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and satisfies the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again satisfies the pass criteria at every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poorer when compared with the smooth profile. Considering the smooth GTP as an acceptable profile once it passed the gamma criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the
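A minimal 1-D gamma computation in the spirit of the definition above: for each reference point, take the minimum over evaluated points of √((Δx/δDTA)² + (ΔD/δDD)²), and pass if γ ≤ 1. This is an exhaustive-search sketch, not an optimized implementation; the sigmoid profile stands in for a measured penumbra and is invented for illustration.

```python
import math

def gamma_index_1d(ref, evalpts, dd_pct=1.0, dta_mm=1.0):
    """1-D gamma: for each reference point (x_mm, dose_pct), the minimum over
    evaluated points of sqrt((dx/dta)^2 + (dD/dd)^2); gamma <= 1 passes."""
    gammas = []
    for xr, dr in ref:
        g = min(((xe - xr) / dta_mm) ** 2 + ((de - dr) / dd_pct) ** 2
                for xe, de in evalpts) ** 0.5
        gammas.append(g)
    return gammas

# Sigmoid penumbra over 0-10 mm, sampled every 0.2 mm, versus the same
# profile shifted by 0.5 mm (pure distance error, within the 1 mm DTA).
reference = [(i * 0.2, 100.0 / (1.0 + math.exp(i * 0.2 - 5.0))) for i in range(51)]
evaluated = [(x + 0.5, d) for x, d in reference]
gammas = gamma_index_1d(reference, evaluated)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

Every point passes with γ ≤ 0.5 here, which illustrates the paper's complaint: the score says nothing about whether the evaluated curve is smooth or sawtoothed between the matched points.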
MOVES2010a regional level sensitivity analysis
2012-12-10
This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agencys (EPAs) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
Self-reflection on personal values to support value-sensitive design
Pommeranz, A.; Detweiler, C.A.; Wiggers, P.; Jonker, C.M.
2011-01-01
The impact of ubiquitous technology and social media on our lives is rapidly increasing. We explicitly need to consider personal values affected or violated by these systems. Value-sensitive design can guide a designer in building systems that account for human values. However, the framework lacks
Critical incident analysis through narrative reflective practice: A case study
Directory of Open Access Journals (Sweden)
Thomas S. C. Farrell
2013-01-01
Full Text Available Teachers can reflect on their practices by articulating and exploring incidents they consider critical to themselves or others. By talking about these critical incidents, teachers can make better sense of seemingly random experiences that occur in their teaching, because they hold the real inside knowledge, especially the personal intuitive knowledge, expertise, and experience built up over their accumulated years as language educators teaching in schools and classrooms. This paper is about one such critical incident analysis that an ESL teacher in Canada revealed to her critical friend, and how both used McCabe's (2002) narrative framework for analyzing an important critical incident that occurred in the teacher's class.
Energy sources and nuclear energy. Comparative analysis and ethical reflections
International Nuclear Information System (INIS)
Hoenraet, C.
1999-01-01
Under the authority of the episcopacy of Brugge in Belgium, an independent working group, Ethics and Nuclear Energy, was set up. The purpose of the working group was to collect all the necessary information on existing energy sources and to carry out a comparative analysis of their impact on mankind and the environment. Attention was also paid to economic and social aspects. The results of the study are subjected to an ethical reflection. The book is aimed at politicians, teachers, journalists and every interested layman who wants to gain insight into the consequences of the use of nuclear energy and other energy sources. Based on the information in this book, one should be able to objectively define one's position in future debates on this subject.
Honesty in Critically Reflective Essays: An Analysis of Student Practice
Maloney, Stephen; Tai, Joanna Hong-Meng; Lo, Kristin; Molloy, Elizabeth; Ilic, Dragan
2013-01-01
In health professional education, reflective practice is seen as a potential means for self-improvement from everyday clinical encounters. This study aims to examine the level of student honesty in critical reflection, and barriers and facilitators for students engaging in honest reflection. Third year physiotherapy students, completing summative…
NPV Sensitivity Analysis: A Dynamic Excel Approach
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
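The dynamic sensitivity the paper visualizes in Excel can be reproduced programmatically by differentiating NPV with respect to the discount rate. The cash flows and rate below are invented for illustration, not from the paper.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t
    (t = 0 is today, so cashflows[0] is undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def npv_rate_sensitivity(rate, cashflows, bump=1e-6):
    """Numerical dNPV/drate via a central difference."""
    return (npv(rate + bump, cashflows) - npv(rate - bump, cashflows)) / (2 * bump)

# Invest 1000 today, receive 400 at the end of each of the next three years.
flows = [-1000.0, 400.0, 400.0, 400.0]
base = npv(0.08, flows)
slope = npv_rate_sensitivity(0.08, flows)  # negative: NPV falls as the rate rises
```

Repeating the slope calculation for each input (rate, each cash flow, horizon) is exactly the multi-variable sensitivity picture a static spreadsheet formula hides.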
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
Approved for public release; distribution is unlimited. Only fragments of the report body survive extraction: a server/client code example (a Python geometry service using ZeroMQ), an example application (Boeing), and a reference to Cross, D. M., "Local continuum sensitivity method for shape design derivatives using spatial gradient reconstruction" (dissertation; Structures, Structural Dynamics, and Materials Conference, 2011).
Diffuse Reflectance Spectroscopy for Total Carbon Analysis of Hawaiian Soils
McDowell, M. L.; Bruland, G. L.; Deenik, J. L.; Grunwald, S.; Uchida, R.
2010-12-01
Accurate assessment of total carbon (Ct) content is important for fertility and nutrient management of soils, as well as for carbon sequestration studies. The non-destructive analysis of soils by diffuse reflectance spectroscopy (DRS) is a potential supplement or alternative to the traditional time-consuming and costly combustion method of Ct analysis, especially in spatial or temporal studies where sample numbers are large. We investigate the use of the visible to near-infrared (VNIR) and mid-infrared (MIR) spectra of soils coupled with chemometric analysis to determine their Ct content. Our specific focus is on Hawaiian soils of agricultural importance. Though this technique has been introduced to the soil community, it has yet to be fully tested and used in practical applications for all soil types, and this is especially true for Hawaii. In short, DRS characterizes and differentiates materials based on the variation of the light reflected by a material at certain wavelengths. This spectrum is dependent on the material’s composition, structure, and physical state. Multivariate chemometric analysis unravels the information in a set of spectra that can help predict a property such as Ct. This study benefits from the remarkably diverse soils of Hawaii. Our sample set includes 216 soil samples from 145 pedons from the main Hawaiian Islands archived at the National Soil Survey Center in Lincoln, NE, along with more than 50 newly-collected samples from Kauai, Oahu, Molokai, and Maui. In total, over 90 series from 10 of the 12 soil orders are represented. The Ct values of these samples range from < 1% - 55%. We anticipate that the diverse nature of our sample set will ensure a model with applicability to a wide variety of soils, both in Hawaii and globally. We have measured the VNIR and MIR spectra of these samples and obtained their Ct values by dry combustion. Our initial analyses are conducted using only samples obtained from the Lincoln archive. In this
A fast position sensitive photodetector based on a CsI reflective photocathode
International Nuclear Information System (INIS)
Arnold, R.; Christophel, E.; Guyonnet, J.L.
1991-01-01
A fast detector for UV photon detection, based on a CsI-sensitized pad cathode, was built. The speed of the detector is compared with that of a more classical chamber filled with photosensitive gases such as TEA or TMAE. Estimates of the quantum yield of the photocathode at 160 and 200 nm are given. The performance obtained makes it a good candidate photodetector for operation at high-luminosity accelerators. (author) 7 refs., 19 figs
Design and Analysis of Underwater Acoustic Networks with Reflected Links
Emokpae, Lloyd
-of-sight (LOS) and NLOS links by utilizing directional antennas, which will boost the signal-to-noise ratio (SNR) at the receiver while promoting NLOS usage. In our model, we employ a directional underwater acoustic antenna composed of an array of hydrophones that can be summed up at various phases and amplitudes resulting in a beam-former. We have also adopted a practical multimodal directional transducer concept which generates both directional and omni-directional beam patterns by combining the fundamental vibration modes of a cylindrical acoustic radiator. This allows the transducer to be electrically controlled and steered by simply adjusting the electrical voltage weights. A prototype acoustic modem is then developed to utilize the multimodal directional transducer for both LOS and NLOS communication. The acoustic modem has also been used as a platform for empirically validating our SBR communication model in a tank and with empirical data. Networking protocols have been developed to exploit the SBR communication model. These protocols include node discovery and localization, directional medium access control (D-MAC) and geographical routing. In node discovery and localization, each node will utilize SBR-based range measurements to its neighbors to determine their relative position. The D-MAC protocol utilizes directional antennas to increase the network throughput due to the spatial efficiency of the antenna model. In the proposed reflection-enabled directional MAC protocol (RED-MAC), each source node will be able to determine if an obstacle is blocking the LOS link to the destination and switch to the best NLOS link by utilizing surface/bottom reflections. Finally, we have developed a geographical routing algorithm which aims to establish the best stable route from a source node to a destination node. The optimized route is selected to achieve maximum network throughput. 
Extensive analysis of the network throughput when utilizing directional antennas is also presented
Extended forward sensitivity analysis of one-dimensional isothermal flow
International Nuclear Information System (INIS)
Johnson, M.; Zhao, H.
2013-01-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity to the time step relative to the other physical parameters, the simulation can be run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is used to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
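The forward sensitivity idea described above can be sketched for a scalar ODE. The following is a minimal illustration, not the authors' code: for dy/dt = f(y, p), the sensitivity s = dy/dp obeys ds/dt = (df/dy)s + df/dp and is integrated alongside the state. The decay model f = -p*y and the Euler scheme are assumptions chosen only so the result can be checked analytically:

```python
def f(y, p):      # hypothetical model: exponential decay dy/dt = -p*y
    return -p * y

def dfdy(y, p):
    return -p

def dfdp(y, p):
    return -y

def forward_sensitivity(p, y0, t_end, n_steps):
    """Euler-integrate the state y and its sensitivity s = dy/dp together."""
    dt = t_end / n_steps
    y, s = y0, 0.0          # dy0/dp = 0: the initial condition ignores p
    for _ in range(n_steps):
        y_new = y + dt * f(y, p)
        s = s + dt * (dfdy(y, p) * s + dfdp(y, p))   # forward sensitivity ODE
        y = y_new
    return y, s

y1, s1 = forward_sensitivity(2.0, 1.0, 1.0, 20000)
# analytic solution: y(1) = exp(-2), s(1) = dy/dp = -1 * exp(-2)
```

For this model the analytic sensitivity is -t*exp(-p*t), so the scheme can be checked directly; the time step sensitivity the abstract emphasizes could be obtained the same way by differentiating the discrete update with respect to dt.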
The role of sensitivity analysis in probabilistic safety assessment
International Nuclear Information System (INIS)
Hirschberg, S.; Knochenhauer, M.
1987-01-01
The paper describes several items suitable for close examination by means of application of sensitivity analysis, when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)
Automated sensitivity analysis using the GRESS language
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies
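GRESS itself is a FORTRAN precompiler, but its underlying idea (augmenting a model's arithmetic so derivatives are carried along with values) can be sketched in a few lines with operator overloading. The `Dual` class below is a generic forward-mode illustration, not GRESS's actual mechanism, and the toy model is invented:

```python
class Dual:
    """A value paired with its derivative w.r.t. one chosen input
    (forward-mode differentiation via operator overloading)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)   # product rule for the derivative part
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # a toy "computer model"; GRESS would instead process FORTRAN source
    return 3 * k * k + 2 * k

out = model(Dual(2.0, 1.0))   # seed derivative dk/dk = 1
# out.val is the normal result 16.0; out.der is dy/dk = 6k + 2 = 14.0
```

Every intermediate quantity automatically carries its derivative, which is exactly the "derivative-taking capabilities added to the normal calculated results" the abstract describes.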
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
Fernando, Denise R; Marshall, Alan T; Lynch, Jonathan P
2016-01-01
Sugar maple and red maple are closely related, co-occurring tree species significant to the North American forest biome. Plant abiotic stress effects, including nutritional imbalance and manganese (Mn) toxicity, are well documented within this system and are implicated in enhanced susceptibility to biotic stresses such as insect attack. Both tree species are known to overaccumulate foliar Mn when growing on unbuffered acidified soils; however, sugar maple is Mn-sensitive, while red maple is not. Currently there is no knowledge about the cellular sequestration of Mn and other nutrients in these two species. Here, electron-probe X-ray microanalysis was employed to examine cellular and sub-cellular deposition of excessively accumulated foliar Mn and other mineral nutrients in vivo. For both species, excess foliar Mn was deposited in symplastic cellular compartments. There were striking between-species differences in Mn, magnesium (Mg), sulphur (S) and calcium (Ca) distribution patterns. Unusually, Mn was highly co-localised with Mg in mesophyll cells of red maple only. The known sensitivity of sugar maple to excess Mn is likely linked to Mg deficiency in the leaf mesophyll. There was strong evidence that Mn toxicity in sugar maple is primarily a symplastic process. For each species, leaf-surface damage due to biotic stress, including insect herbivory, was compared between sites with acidified and non-acidified soils. Although it was greatest overall in red maple, there was no difference in biotic stress damage to red maple leaves between acidified and non-acidified soils. Sugar maple trees on buffered non-acidified soil were less damaged by biotic stress than those on unbuffered acidified soil, where they are also affected by Mn toxicity abiotic stress. This study concluded that foliar nutrient distribution in symplastic compartments is a determinant of Mn sensitivity, and that Mn stress hinders plant resistance to biotic stress.
The Theory of law in Post-Modernity: Reflection from Sustainability to Sensitivity
Directory of Open Access Journals (Sweden)
Suzete Habitzreuter Hartke
2016-06-01
Full Text Available Law is one expression of human culture. In this context, this paper presents proposed solutions to a problem present in postmodernity: the law produced today does not seem to solve the problems related to sustainable development that are submitted to the Brazilian legal system. The methodology used was the inductive method. The solution resides in the use of sensitive reasoning and the politics of law, since they enable the correction of the current law and the construction of the law that might exist, in a humanitarian sense.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
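A minimal sketch of the bootstrap-based convergence check described above, under stated assumptions: a hypothetical three-parameter linear model and a simple correlation-based sensitivity measure stand in for the EET/RSA/VBSA indices. The input/output sample is resampled with replacement and the parameter ranking is tested for stability:

```python
import random

def model(x1, x2, x3):
    # hypothetical model: x1 dominates, x2 matters, x3 is near-inert
    return 4.0 * x1 + 2.0 * x2 + 0.01 * x3

random.seed(0)
n = 2000
X = [[random.random() for _ in range(3)] for _ in range(n)]
Y = [model(*row) for row in X]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def ranking(idx):
    # rank parameters by |correlation with the output| (a simple screening index)
    s = [abs(corr([X[i][j] for i in idx], [Y[i] for i in idx])) for j in range(3)]
    return sorted(range(3), key=lambda j: -s[j])

base = ranking(list(range(n)))
# bootstrap: resample rows with replacement, check how often the ranking repeats
stable = sum(ranking([random.randrange(n) for _ in range(n)]) == base
             for _ in range(50)) / 50
```

If `stable` stays near 1.0 as the sample grows, ranking has converged even though the index values themselves may still fluctuate, which is the paper's point that ranking and screening can converge before the index values do.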
Kuramoto, S. Janet; Stuart, Elizabeth A.
2013-01-01
Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are left to be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A
2013-12-01
Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.
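The paper's seven techniques are not reproduced here, but one widely used sensitivity measure for unobserved confounding, the E-value of VanderWeele and Ding (which may or may not be among the seven), illustrates the genre: it is the minimum strength of association, on the risk-ratio scale, that an unobserved confounder would need with both exposure and outcome to fully explain away an observed risk ratio RR, given by RR + sqrt(RR*(RR-1)). The example risk ratio below is purely hypothetical:

```python
import math

def e_value(rr):
    """Minimum confounder association strength (risk-ratio scale)
    needed to fully explain away an observed risk ratio `rr`."""
    if rr < 1.0:
        rr = 1.0 / rr   # treat protective associations symmetrically
    return rr + math.sqrt(rr * (rr - 1.0))

# e.g. a purely hypothetical observed risk ratio of 2.0:
ev = e_value(2.0)   # 2 + sqrt(2), about 3.41
```

A large E-value supports the kind of robustness conclusion drawn in the abstract: only a confounder associated with both exposure and outcome by more than the E-value could nullify the association.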
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1986-01-01
An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
Nursing student perceptions of reflective journaling: a conjoint value analysis.
Hendrix, Thomas J; O'Malley, Maureen; Sullivan, Catherine; Carmon, Bernice
2012-01-01
This study used a statistical technique, conjoint value analysis, to determine student perceptions of the importance of predetermined reflective journaling attributes. An expert Delphi panel determined these attributes and integrated them into a survey that presented students with multiple journaling experiences from which they had to choose. After IRB approval was obtained, a convenience sample of 66 baccalaureate nursing students completed the survey. The relative importance of the attributes varied from a low of 16.75% (format) to a high of 23.58% (time). The model explained 77% of the variability of student journaling preferences (r² = 0.77). Students preferred shorter time, complete confidentiality, one-time complete feedback, semistructured format, and behavior recognition. Students with more experience had a much greater preference for a free-form format than students with less journaling experience. Additionally, the results of English-as-a-second-language students were significantly different from those of the rest of the sample. In order to better serve them, educators must consider the relative importance of these attributes when developing journaling experiences for their students.
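Conjoint analysis derives attribute importance from part-worth utilities: each attribute's importance is the range of its level utilities divided by the sum of ranges across all attributes. A sketch with invented part-worths (the numbers are illustrative, chosen only so that time comes out most important, as in the study; they are not the study's data):

```python
# hypothetical part-worth utilities per attribute level (invented numbers)
part_worths = {
    "time":     {"short": 1.2, "long": -1.2},
    "format":   {"free-form": 0.4, "semistructured": 0.6, "structured": -1.0},
    "feedback": {"one-time complete": 0.8, "none": -0.8},
}

def relative_importance(pw):
    """Attribute importance = utility range / sum of ranges over attributes."""
    ranges = {a: max(v.values()) - min(v.values()) for a, v in pw.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

imp = relative_importance(part_worths)   # time: 2.4 / 5.6, about 0.43
```

The importances sum to 1 by construction, which is why the study can report them as percentages (16.75% to 23.58%).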
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
Allured, Ryan; Okajima, Takashi; Soufli, Regina; Fernández-Perea, Mónica; Daly, Ryan O.; Marlowe, Hannah; Griffiths, Scott T.; Pivovaroff, Michael J.; Kaaret, Philip
2012-10-01
The Bragg Reflection Polarimeter (BRP) on the NASA Gravity and Extreme Magnetism Small Explorer Mission is designed to measure the linear polarization of astrophysical sources in a narrow band centered at about 500 eV. X-rays are focused by Wolter I mirrors through a 4.5 m focal length to a time projection chamber (TPC) polarimeter, sensitive between 2 and 10 keV. In this optical path lies the BRP multilayer reflector at a nominal 45 degree incidence angle. The reflector reflects soft X-rays to the BRP detector and transmits hard X-rays to the TPC. As the spacecraft rotates about the optical axis, the reflected count rate will vary depending on the polarization of the incident beam. However, false polarization signals may be produced due to misalignments and spacecraft pointing wobble. Monte-Carlo simulations have been carried out, showing that the false modulation is below the statistical uncertainties for the expected focal plane offsets of < 2 mm.
Directory of Open Access Journals (Sweden)
Christian W. Huck
2016-05-01
Full Text Available A review with more than 100 references on the principles of and recent developments in solid-phase extraction (SPE) prior to, and in situ with, near-infrared (NIR) and attenuated total reflection (ATR) infrared spectroscopic analysis is presented. New materials, chromatographic modalities, experimental setups and configurations are described. Their advantages for fast sample preparation of distinct classes of compounds containing different functional groups, in order to enhance selectivity and sensitivity, are discussed and compared. This is the first review highlighting both the fundamentals of SPE and of NIR and ATR spectroscopy with a view to real-sample applicability and routine analysis. Most real-sample analysis examples are found in environmental research, followed by food analysis and bioanalysis. In this contribution, a comprehensive overview of the most potent SPE-NIR and SPE-ATR approaches is provided.
Reflectance analysis of porosity gradient in nanostructured silicon layers
Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru
2017-12-01
In this work, we study the optical properties of nanostructured layers formed on silicon surfaces. Nanostructured layers on Si are formed in order to strongly suppress light reflectance; low spectral reflectance is important for improving the conversion efficiency of solar cells and for other optoelectronic applications. In our approach, an effective method of forming nanostructured layers with ultralow reflectance over a broad interval of wavelengths is based on metal-assisted etching of Si. A Si surface immersed in an HF and H2O2 solution is etched in contact with a Pt mesh roller, and the structure of the mesh is transferred onto the etched surface. During this etching procedure the layer density evolves gradually, and the spectral reflectance decreases exponentially with depth in the porous layer. We analyzed the layer porosity by incorporating a porosity gradient into the construction of a theoretical model of the layer's spectral reflectance. The analyzed layer is split into 20 sublayers in our approach. The complex dielectric function in each sublayer is computed using Bruggeman effective-medium theory, and the theoretical spectral reflectance of the modelled multilayer system is computed using the Abeles matrix formalism. The porosity gradient is extracted from the theoretical reflectance model optimized against the experimental values. The resulting values of the structure's porosity development provide important information for optimizing the technological treatment operations.
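The modelling chain described (a Bruggeman effective medium per sublayer, then a characteristic-matrix reflectance of the 20-sublayer stack) can be sketched at normal incidence as follows. This is a generic illustration, not the authors' implementation; the silicon dielectric constant, wavelength, thickness and porosity profile are invented values:

```python
import cmath, math

def bruggeman_eps(eps_si, f_void):
    """Bruggeman effective dielectric function of a void/silicon mixture."""
    b = (3 * f_void - 1) * 1.0 + (2 - 3 * f_void) * eps_si   # eps_void = 1
    return (b + cmath.sqrt(b * b + 8 * eps_si)) / 4           # physical root

def reflectance(porosities, d_total, lam, eps_si):
    """Normal-incidence reflectance of a graded stack via the
    characteristic (Abeles) matrix method."""
    d = d_total / len(porosities)
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for f in porosities:                     # surface sublayer first
        nj = cmath.sqrt(bruggeman_eps(eps_si, f))
        delta = 2 * math.pi * nj * d / lam
        c, s = cmath.cos(delta), 1j * cmath.sin(delta)
        a11, a12, a21, a22 = c, s / nj, s * nj, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    n0, ns = 1.0, cmath.sqrt(eps_si)         # air above, Si substrate below
    num = n0 * (m11 + m12 * ns) - (m21 + m22 * ns)
    den = n0 * (m11 + m12 * ns) + (m21 + m22 * ns)
    return abs(num / den) ** 2

# porosity graded from 0.9 at the surface down to 0.14 at the substrate
profile = [0.9 - 0.04 * i for i in range(20)]
R = reflectance(profile, 300e-9, 550e-9, 15.0 + 0.2j)   # illustrative eps_Si
```

Fitting the porosity profile would then amount to adjusting `profile` until `R(lam)` matches measured spectra, which is the optimization the abstract describes.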
[Analysis of influencing factors of snow hyperspectral polarized reflections].
Sun, Zhong-Qiu; Zhao, Yun-Sheng; Yan, Guo-Qian; Ning, Yan-Ling; Zhong, Gui-Xin
2010-02-01
Due to the need for snow monitoring and the impact of global change on snow, and building on traditional snow research, we analyzed the factors influencing snow reflectance from the perspective of multi-angle polarized reflectance: the incidence zenith angles, the detection zenith angles, the detection azimuth angles, the polarization angles, snow density, the degree of pollution, and the background of the undersurface. It was found that these factors affect the spectral reflectance values of snow, and the effect of some factors on polarized hyperspectral reflectance observation is more evident than in vertical observation. Among these influencing factors, the pollution of snow leads to an obvious change in the snow reflectance spectrum curve, while the other factors have little effect on the shape of the curve and mainly affect the reflectance ratio of the snow. Snow reflectance polarization information has not only important theoretical significance but also wide application prospects, and provides new ideas and methods for quantitative research on snow using remote sensing technology.
Parker, Lewan; Trewin, Adam; Levinger, Itamar; Shaw, Christopher S; Stepto, Nigel K
2018-04-01
Redox homeostasis and redox-sensitive protein signaling play a role in exercise-induced adaptation. The effects of sprint-interval exercise (SIE), high-intensity interval exercise (HIIE) and continuous moderate-intensity exercise (CMIE) on post-exercise plasma redox status are unclear. Furthermore, whether post-exercise plasma redox status reflects skeletal muscle redox-sensitive protein signaling is unknown. In a randomized crossover design, eight healthy adults performed a cycling session of HIIE (5 × 4 min at 75% Wmax), SIE (4 × 30 s Wingate sprints), and CMIE work-matched to HIIE (30 min at 50% of Wmax). Plasma hydrogen peroxide (H2O2), thiobarbituric acid reactive substances (TBARS), superoxide dismutase (SOD) activity, and catalase activity were measured immediately post, 1 h, 2 h and 3 h post-exercise. Plasma redox status biomarkers were correlated with phosphorylation of skeletal muscle p38-MAPK, JNK, NF-κB, and IκBα protein content immediately and 3 h post-exercise. Plasma catalase activity was greater with SIE (56.6 ± 3.8 U ml⁻¹) compared to CMIE (42.7 ± 3.2 U ml⁻¹). Post-exercise plasma TBARS and SOD activity changed significantly regardless of exercise protocol. A significant positive correlation was detected between plasma catalase activity and skeletal muscle p38-MAPK phosphorylation 3 h post-exercise (r = 0.40, p = 0.04). No other correlations were detected (all p > 0.05). Low-volume SIE elicited greater post-exercise plasma catalase activity compared to HIIE and CMIE, and greater H2O2 compared to CMIE. Plasma redox status did not, however, adequately reflect skeletal muscle redox-sensitive protein signaling. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Sensitivity Analysis Based on Markovian Integration by Parts Formula
Directory of Open Access Journals (Sweden)
Yongsheng Hang
2017-10-01
Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the output variations brought about by changes in parameters. Since the integration-by-parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and present closed-form expressions for two commonly used continuous-time Markovian models. By comparison, we conclude that our approach outperforms the existing technique for computing sensitivity on Markovian models.
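The paper's closed-form Markovian expressions are not reproduced here; the sketch below shows the general idea of trajectory-based sensitivity for a continuous-time Markov model using the score (likelihood-ratio) estimator, for which a Poisson process gives an exactly checkable answer: the score of a path with N events over [0, T] is N/lam - T, and d E[N]/d lam = T. All numerical values are illustrative:

```python
import random

def poisson_counts(lam, T, n_paths, rng):
    """Simulate N = number of events of a rate-`lam` Poisson process on [0, T]."""
    counts = []
    for _ in range(n_paths):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(lam)   # exponential inter-arrival times
            if t > T:
                break
            k += 1
        counts.append(k)
    return counts

rng = random.Random(1)
lam, T = 2.0, 3.0
counts = poisson_counts(lam, T, 20000, rng)

# score (likelihood-ratio) estimator: d/dlam E[N] = E[N * score],
# where the score of a path with N events is N/lam - T
est = sum(k * (k / lam - T) for k in counts) / len(counts)
# exact value: d/dlam (lam * T) = T = 3.0
```

The integration-by-parts machinery the paper develops plays an analogous role: it moves the derivative off the (non-smooth) trajectory and onto a computable weight, here the score.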
Advanced Fuel Cycle Economic Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Spin analysis and new effects in reflectivity measurements
International Nuclear Information System (INIS)
Fermon, C.
1996-01-01
We present two new effects in polarized neutron reflectivity. We show that we have a non symmetric spin-flip signal in reflectivity measurements on magnetic films when the external field is not negligible. This phenomenon is due to different Larmor precessions for the two spin states and has to be taken into account in some experiments. The second effect is still not understood but we present results indicating that the specular reflection on a non magnetic surface can induce a neutron beam depolarization or rotation. (authors)
The role of sensitivity analysis in assessing uncertainty
International Nuclear Information System (INIS)
Crick, M.J.; Hill, M.D.
1987-01-01
Outside the specialist world of those carrying out performance assessments considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice
Analysis of Sensitivity Experiments - An Expanded Primer
2017-03-08
conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...
Global and Local Sensitivity Analysis Methods for a Physical System
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
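The local/global distinction the review draws can be made concrete: a local derivative at a nominal point can report a parameter as inert while a global variance-based index shows it dominates. A sketch with an invented toy model, using a centred finite difference for the local view and the pick-freeze estimator for the first-order Sobol index (all values illustrative):

```python
import random

def model(x1, x2):
    return 2.0 * x1 + 5.0 * x2 ** 3

# Local view: centred finite difference at the nominal point (0, 0)
h = 1e-6
local_x2 = (model(0.0, h) - model(0.0, -h)) / (2 * h)   # ~0: x2 looks inert

# Global view: first-order Sobol index of x2 by the pick-freeze estimator,
# with x1, x2 independent and uniform on [-1, 1]
rng = random.Random(0)
n = 50000
ys, yps = [], []
for _ in range(n):
    x2 = rng.uniform(-1.0, 1.0)
    ys.append(model(rng.uniform(-1.0, 1.0), x2))
    yps.append(model(rng.uniform(-1.0, 1.0), x2))   # same x2, fresh x1
mean_y = sum(ys) / n
cov = sum(a * b for a, b in zip(ys, yps)) / n - mean_y * (sum(yps) / n)
var = sum(a * a for a in ys) / n - mean_y ** 2
S1_x2 = cov / var   # analytically (25/7) / (4/3 + 25/7), about 0.73
```

Here the local derivative of x2 vanishes at the origin while its global index is about 0.73, illustrating why the two families of methods can disagree on a "black-box" system.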
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Dispersion sensitivity analysis & consistency improvement of APFSDS
Directory of Open Access Journals (Sweden)
Sangeeta Sharma Panda
2017-08-01
In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw, and a correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modification of existing ammunition. A number of designs were evaluated numerically before freezing five designs for further soundings. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results were validated by free-flight trials of the finalised design.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
Sensitivity analysis of the RESRAD, a dose assessment code
International Nuclear Information System (INIS)
Yu, C.; Cheng, J.J.; Zielen, A.J.
1991-01-01
The RESRAD code is a pathway analysis code designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational.
A sensitivity analysis approach to optical parameters of scintillation detectors
International Nuclear Information System (INIS)
Ghal-Eh, N.; Koohi-Fayegh, R.
2008-01-01
In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of light collection process in scintillators
Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
user
of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...
Sobol’ sensitivity analysis for stressor impacts on honeybee colonies
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
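The variance-based Monte Carlo approach described in this record can be sketched generically. Below is a minimal pick-freeze estimator for first-order Sobol' indices; the VarroaPop model is not reproduced here, so a toy additive response with known indices stands in for it:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """First-order Sobol' indices via the Saltelli pick-freeze scheme.
    `model` maps an (n, d) array of independent uniform [0,1) inputs to n outputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # resample only input i, freeze the rest
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# additive toy "stressor" model with known analytic shares: S1 = 0.2, S2 = 0.8
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

The same matrices A and B are reused for every index, so the cost is (d + 2) model evaluations per sample rather than one full re-simulation per parameter.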
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as
Sensitivity analysis of a greedy heuristic for knapsack problems
Ghosh, D; Chakravarti, N; Sierksma, G
2006-01-01
In this paper, we carry out a parametric analysis as well as a tolerance-limit-based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
Sensitivity analysis of numerical solutions for environmental fluid problems
International Nuclear Information System (INIS)
Tanaka, Nobuatsu; Motoyama, Yasunori
2003-01-01
In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions by using sensitivity analysis. Once a reference case with typical parameters has been calculated with the method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the exact solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
Risk and sensitivity analysis in relation to external events
International Nuclear Information System (INIS)
Alzbutas, R.; Urbonas, R.; Augutis, J.
2001-01-01
This paper presents a risk and sensitivity analysis of external event impacts on safe operation in general, and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and on an assessment of the external hazards. Real statistical data are used, as well as initial external event simulations. Preliminary screening criteria are applied. The analysis of external event impacts on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rain and wind, forest fire, and flying turbine parts are analysed. Models are developed and probabilities are calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty raised by an external event and its model. Even when the external events analysis shows rather limited danger, the sensitivity analysis can determine the causes with the highest influence. Such possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, event occurrence and propagation can be substantially uncertain. (author)
High sensitivity analysis of atmospheric gas elements
International Nuclear Information System (INIS)
Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo
2006-01-01
We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10^17 atoms/cm^3 for H, 3 × 10^16 atoms/cm^3 for C and 2 × 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.
High sensitivity analysis of atmospheric gas elements
Energy Technology Data Exchange (ETDEWEB)
Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)
2006-07-30
We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10^17 atoms/cm^3 for H, 3 × 10^16 atoms/cm^3 for C and 2 × 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.
Sensitivity Analysis of BLISK Airfoil Wear †
Directory of Open Access Journals (Sweden)
Andreas Kellersmann
2018-05-01
The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one: it leads to changed aerodynamic behavior and therefore to a change in performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than in a conventional HPC. To ensure effective maintenance for BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. A design of experiments (DoE) is performed to identify the geometric properties that lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber as coupled parameters are identified as the most important parameters for all operating points.
Hasegawa, Raiden; Small, Dylan
2017-12-01
In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
Application of Stochastic Sensitivity Analysis to Integrated Force Method
Directory of Open Access Journals (Sweden)
X. F. Wei
2012-01-01
As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A set of stochastic sensitivity analysis formulations for the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program, since the models of stochastic finite element and stochastic design sensitivity are almost identical.
The EVEREST project: sensitivity analysis of geological disposal systems
International Nuclear Information System (INIS)
Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus
1997-01-01
The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
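The reference technique named in this record, Monte Carlo simulation followed by linear regression, is commonly summarized through standardized regression coefficients (SRCs). A minimal numpy sketch, with a hypothetical two-input dose model standing in for a real performance assessment code:

```python
import numpy as np

def src(X, y):
    """Standardized regression coefficients: fit a linear surrogate y ~ X,
    then scale each slope by std(x_i)/std(y). SRC_i**2 approximates the
    share of output variance explained by input i when the model is near-linear."""
    Z = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    return beta[1:] * X.std(axis=0) / y.std()

# hypothetical dose model: Monte Carlo samples of two inputs, the first dominant
rng = np.random.default_rng(1)
X = rng.random((5000, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(5000)
coeffs = src(X, y)
```

When the linear fit explains most of the variance (high R²), ranking inputs by |SRC| matches ranking them by variance contribution, which is why this simple surrogate remains the reference approach for parameter-value sensitivity.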
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas, and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First, ... equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state ... performance of the process with respect to the L/G ratio to the absorber, the CO2 lean solvent loadings, and the stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information
Directory of Open Access Journals (Sweden)
Chuanqi Li
2014-11-01
The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second is based on mutual information, which provides a general measure of the strength of non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, so the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to uncertainty in its input parameters.
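A minimal sketch of the PRCC measure used in this record (the SWMM model itself is not reproduced; a monotone toy model stands in). Ranks replace raw values, and the linear influence of the other inputs is regressed out of both x_i and y before correlating:

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficients: correlate the residuals of
    rank(x_i) and rank(y) after removing linear effects of the other inputs."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    R, ry = rank(X), rank(y[:, None]).ravel()
    n, d = R.shape
    out = np.empty(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        bx = np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        by = np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = np.corrcoef(R[:, i] - others @ bx, ry - others @ by)[0, 1]
    return out

# monotone but nonlinear toy model: input 0 dominates, input 1 is secondary
rng = np.random.default_rng(0)
X = rng.random((2000, 2))          # stand-in for an LHS design
y = np.exp(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.standard_normal(2000)
p = prcc(X, y)
```

Because the rank transform linearizes any monotone relationship, PRCC stays near ±1 for strong monotone inputs even when the raw input-output relation is nonlinear, which is exactly the property the record exploits.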
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro, María
2016-12-26
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
International Nuclear Information System (INIS)
Wegrzynek, D.; Holynska, B.
1997-01-01
A method for the determination of the concentrations of elements in particulate-like samples measured in total reflection geometry is proposed. In the proposed method, the fundamental parameters are utilized for calculating the sensitivities of elements, and an internal standard is used to account for the unknown mass per unit area of a sample and the geometrical constant of the spectrometer. The modification of the primary excitation spectrum on its way to a sample has been taken into consideration. The concentrations of the elements to be determined are calculated simultaneously with the spectra deconvolution procedure. In the process of quantitative analysis, the intensities of all X-ray peaks corresponding to K- and L-series lines present in the analyzed spectrum are taken into account. (Author)
Transient analysis of reflected Lévy processes
Kella, O.; Mandjes, M.R.H.
2013-01-01
In this paper we establish a formula for the joint Laplace-Stieltjes transform of a reflected Lévy process and its regulator at an independent exponentially distributed time, starting at an independent exponentially distributed state. The Lévy process is general, that is, it is not assumed that it
Transient analysis of reflected Lévy processes
Kella, O.; Mandjes, M.
2013-01-01
In this paper, we establish a formula for the joint Laplace-Stieltjes transform of a reflected Lévy process and its regulator at an independent exponentially distributed time, starting at an independent exponentially distributed state. The Lévy process is general, that is, it is not assumed that it
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how they reproduce accurately measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics according to gender, we used the multiple allergen simultaneous test (MAST), which is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.
A general first-order global sensitivity analysis method
International Nuclear Information System (INIS)
Xu Chonggang; Gertner, George Zdzislaw
2008-01-01
Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
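The FAST mechanism this record describes (each parameter assigned a characteristic frequency through a search function, with its variance contribution read off that frequency in the output) can be sketched for the classical independent-parameter case. The frequency set and harmonic count below are illustrative choices, not taken from the paper:

```python
import numpy as np

def fast_first_order(model, omegas, M=4):
    """Classical FAST: drive each input along a periodic search curve with its
    own integer frequency, then attribute to input i the spectral power at the
    first M harmonics of its characteristic frequency."""
    omegas = np.asarray(omegas)
    N = 2 * M * omegas.max() + 1                      # odd sample count, resolves M harmonics
    s = np.pi * (2.0 * np.arange(N) + 1.0 - N) / N    # symmetric grid on (-pi, pi)
    X = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi  # uniform search curves
    F = model(X)
    j = np.arange(1, (N - 1) // 2 + 1)                # resolvable frequencies
    A = (F @ np.cos(np.outer(s, j))) / N
    B = (F @ np.sin(np.outer(s, j))) / N
    spec = A ** 2 + B ** 2                            # power at frequency j
    return np.array([sum(spec[p * w - 1] for p in range(1, M + 1))
                     for w in omegas]) / spec.sum()

# additive toy model with known indices S1 = 0.2, S2 = 0.8; frequencies (5, 9)
# are chosen so their first four harmonics do not collide (the aliasing issue
# the record discusses)
S = fast_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], omegas=(5, 9))
```

With integer frequencies, harmonics of different parameters can coincide; the improvements synthesized in this record (random balance designs, correlated-parameter extensions) address exactly the limitations this naive frequency choice only sidesteps.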
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
International Nuclear Information System (INIS)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae
2016-01-01
Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. Therefore, this paper reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. In the sensitivity analysis for intervals between plates, the criticality increases with the interval, but if the interval is greater than 8 mm the trend reverses and the criticality decreases with a larger interval. As a result, no trend common to all cases could be established. A sensitivity analysis of criticality is therefore always required whenever the subject to be analyzed changes.
International Nuclear Information System (INIS)
Sivia, D.S.; Hamilton, W.A.; Smith, G.S.
1991-01-01
The analysis of neutron reflectivity data to obtain nuclear scattering length density profiles is akin to the notorious phaseless Fourier problem, well known in many fields such as crystallography. Current methods of analysis culminate in the refinement of a few parameters of a functional model and are often preceded by a long and laborious process of trial and error. We start by discussing the use of maximum entropy for obtaining 'free-form' solutions of the density profile, as an alternative to the trial-and-error phase when a functional model is not available. Next we consider a Bayesian spectral analysis approach, which is appropriate for optimising the parameters of a simple (but adequate) type of model when the number of parameters is not known. Finally, we suggest a novel experimental procedure, the analogue of astronomical speckle holography, designed to alleviate the ambiguity problems inherent in traditional reflectivity measurements. (orig.)
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
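The Sobol' indices underlying these variance-based techniques can be estimated with a pick-freeze Monte Carlo scheme. The sketch below illustrates the indices themselves for plain scalar U(0,1) inputs, using a cheap analytic function as a stand-in for an expensive code or metamodel; it is not the paper's joint GLM/GAM approach:

```python
import numpy as np

def f(x):
    """Hypothetical stand-in for an expensive code output, x of shape (n, d)."""
    return x[:, 0] + x[:, 1] ** 2

def sobol_first_order(f, d, n, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for d independent U(0,1) inputs."""
    a = rng.random((n, d))
    b = rng.random((n, d))
    fa, fb = f(a), f(b)
    var = fa.var()
    s = np.empty(d)
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]  # freeze input i at the A-sample values
        s[i] = (np.mean(fa * f(ab)) - fa.mean() * fb.mean()) / var
    return s

print(sobol_first_order(f, 2, 200_000, np.random.default_rng(0)))
```

For this test function the exact first-order indices are 15/31 and 16/31, so the Monte Carlo estimates should land close to (0.48, 0.52).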
Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos
Directory of Open Access Journals (Sweden)
Jianbin Guo
2014-07-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the output variability of static models in general. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned, because the static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol' indices are employed as the time-dependent global sensitivity measure, since they provide accurate information on the selected uncertain inputs. In order to compute Sobol' indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol' indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method; the approach is then applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
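The analytic postprocessing step the abstract describes, reading Sobol' indices directly off PCE coefficients, can be shown for a static toy model. The sketch below fits a tensor Legendre expansion by ordinary least squares (not the paper's MLS scheme) to a hypothetical two-input model with U(-1, 1) inputs:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def model(x1, x2):
    # Hypothetical snapshot of a degeneracy model at one time step.
    return x1 + x2 ** 2

# Tensor Legendre basis up to total degree 2; entries are (degree in x1, degree in x2).
multi_index = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]

def leg(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legendre.legval(x, c)

n = 2000
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = model(x[:, 0], x[:, 1])
A = np.column_stack([leg(i, x[:, 0]) * leg(j, x[:, 1]) for i, j in multi_index])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Variance contribution of each basis term is c^2 / ((2i+1)(2j+1))
# for Legendre polynomials under U(-1, 1).
contrib = np.array([c ** 2 / ((2 * i + 1) * (2 * j + 1))
                    for c, (i, j) in zip(coef, multi_index)])
total_var = contrib[1:].sum()
s1 = contrib[[1, 3]].sum() / total_var  # basis terms depending on x1 only
s2 = contrib[[2, 4]].sum() / total_var  # basis terms depending on x2 only
print(s1, s2)
```

Because the toy model lies exactly in the degree-2 basis, the fitted coefficients are exact and the indices come out at 15/19 and 4/19; repeating this per time step is what makes the time-dependent indices nearly free once the PCE coefficients are available.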
Campbell, Petya K. Entcheva; Middleton, Elizabeth M.; Thome, Kurt J.; Kokaly, Raymond F.; Huemmrich, Karl Fred; Lagomasino, David; Novick, Kimberly A.; Brunsell, Nathaniel A.
2013-01-01
This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends and their stability consistently is within 2.5-5 percent throughout most of the spectral range spanning the 12-plus year data record. Using three vegetated sites instrumented with eddy covariance towers, the Hyperion reflectance time series were evaluated for their ability to determine important variables of ecosystem function. A number of narrowband and derivative vegetation indices (VI) closely described the seasonal profiles in vegetation function and ecosystem carbon exchange (e.g., net and gross ecosystem productivity) in three very different ecosystems, including a hardwood forest and tallgrass prairie in North America, and a Miombo woodland in Africa. Our results demonstrate the potential for scaling the carbon flux tower measurements to local and regional landscape levels. The VIs with stronger relationships to the CO2 parameters were derived using continuous reflectance spectra and included wavelengths associated with chlorophyll content and/or chlorophyll fluorescence. Since these indices cannot be calculated from broadband multispectral instrument data, the opportunity to exploit these spectrometer-based VIs in the future will depend on the launch of satellites such as EnMAP and HyspIRI. This study highlights the practical utility of space-borne spectrometers for characterization of the spectral stability and uniformity of the calibration sites in support of sensor cross-comparisons, and demonstrates the potential of narrowband VIs to track and spatially extend ecosystem functional status as well as carbon processes measured at flux towers.
A tool model for predicting atmospheric kinetics with sensitivity analysis
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. The new direct method of calculating the first-order sensitivity coefficients, using sparse-matrix technology for chemical kinetics, is included in the tool model; it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity-coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
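The coupled model/sensitivity integration can be sketched in Python with SciPy instead of FORTRAN. The single first-order decay reaction below is a hypothetical stand-in for an atmospheric mechanism; the auxiliary sensitivity equation is obtained by differentiating the model equation with respect to the rate constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model equation c' = -k c, with the auxiliary sensitivity s = dc/dk
# obeying s' = -c - k s (differentiate the model equation w.r.t. k).
def rhs(t, y, k):
    c, s = y
    return [-k * c, -c - k * s]

k, c0 = 0.5, 1.0
sol = solve_ivp(rhs, (0.0, 4.0), [c0, 0.0], args=(k,), rtol=1e-8, atol=1e-10)
c_end, s_end = sol.y[:, -1]
# Analytic check: c = c0 exp(-k t), so dc/dk = -c0 t exp(-k t).
print(c_end, s_end)
```

For a real mechanism the same pattern applies per species and per rate constant, and the Jacobian sparsity that the package exploits keeps the augmented system cheap to integrate.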
Sensitivity analysis of the nuclear data for MYRRHA reactor modelling
International Nuclear Information System (INIS)
Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan
2014-01-01
A global sensitivity analysis of the effective neutron multiplication factor k-eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k-eff sensitivity made it possible to establish a priority list of nuclides for which the uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, Fe-56 and Pu-238. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k-eff, the reaction cross-sections and multiplicities in one evaluation were substituted by the corresponding data from other evaluations. (authors)
Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory
International Nuclear Information System (INIS)
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2005-01-01
This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning
The identification of model effective dimensions using global sensitivity analysis
International Nuclear Information System (INIS)
Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang
2011-01-01
It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik
2009-01-01
satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify...... possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark....
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value; it is "partial" because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
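A minimal PRCC computation, assuming only the rank-transform-and-partial-out construction described above; the inputs and the monotone nonlinear response are synthetic stand-ins, not IMM data:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)

def prcc(X, y):
    """Partial rank correlation of each column of X with y:
    rank-transform, regress out the other inputs, correlate the residuals."""
    R = np.column_stack([rankdata(col) for col in X.T]).astype(float)
    ry = rankdata(y).astype(float)
    n, d = R.shape
    out = np.empty(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Synthetic monotone nonlinear response dominated by the first input;
# the third input has no effect and should get a PRCC near zero.
X = rng.random((500, 3))
y = np.exp(3.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(500)
print(prcc(X, y))
```

Because only ranks enter the calculation, the strongly nonlinear (but monotone) dependence on the first input is still recovered as a coefficient near one.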
Sensitivity analysis of network DEA illustrated in branch banking
N. Avkiran
2010-01-01
Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...
Sensitivity analysis of periodic errors in heterodyne interferometry
International Nuclear Information System (INIS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-01-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
2012-01-01
Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to the baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of the study.
Sensitivity analysis of the reactor safety study. Final report
International Nuclear Information System (INIS)
Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.
1979-01-01
The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light water nuclear reactors. In order to give further insights into this study, a sensitivity analysis has been performed to determine the significant contributors to risk for both the PWR and BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers, and total property damage. The latter three are adequate for describing all public risks identified in the RSS. The results indicate reductions of public risk by less than a factor of two for factor reductions in system or generic failure probabilities as high as one hundred. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates
Total Reflection X-ray Fluorescence attachment module modified for analysis in vacuum
International Nuclear Information System (INIS)
Wobrauschek, P.; Streli, C.; Kregsamer, P.; Meirer, F.; Jokubonis, C.; Markowicz, A.; Wegrzynek, D.; Chinea-Cano, E.
2008-01-01
Based on the design of the low-cost Total Reflection X-Ray Fluorescence attachment module available since 1986 from Atominstitut (the WOBRAUSCHEK module), which can be attached to existing X-ray equipment, a new version was developed that allows the analysis of samples in vacuum. This design was possible in particular because the Peltier-cooled, lightweight silicon drift detector follows all adjustment procedures for total reflection, such as angle rotation and linear motion. The detector is mounted to the small vacuum chamber through a vacuum feedthrough with O-ring sealing. The standard 30 mm round quartz, Si-wafer, or Plexiglas reflectors are used to carry the samples. The reflectors are placed on the reference plane with the dried sample facing downward, at a distance of about 0.5 mm from the upward-looking detector window. The reflectors rest on 3 steel balls that define precisely the reference plane for the adjustment procedure. As the rotation axis of the module is in the plane of the reflector surface, angle-dependent experiments can be made to distinguish between film-type and particulate-type contamination of samples. Operating with a Mo anode at 50 kV and 40 mA with a closely attached multilayer monochromator and using a 10 mm² KETEK silicon drift detector with an 8 μm Be window, a sensitivity of 70 cps/ng for Rb was measured and detection limits of 2 pg were obtained
Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks
Directory of Open Access Journals (Sweden)
Harry R. Millwater
2006-01-01
A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are used and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds of the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk analysis considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
Application of sensitivity analysis for optimized piping support design
International Nuclear Information System (INIS)
Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.
1993-01-01
The objective of this study was to see whether recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems which use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM and a sensitivity analysis was carried out. Then optimization of the design of the piping system supports was investigated, selecting the support location and yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
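The propagation step can be sketched as follows; the two-step transfer chain and its lognormal parameter distributions are illustrative assumptions, not the PATHWAY model or its data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative two-step transfer chain (not PATHWAY itself):
# milk concentration = deposition * soil-to-plant TF * feed-to-milk TF.
n = 100_000
deposition = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)
tf_plant = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)
tf_milk = rng.lognormal(mean=np.log(0.01), sigma=0.4, size=n)
milk = deposition * tf_plant * tf_milk

# Uncertainty: percentile bounds of the prediction.
lo, med, hi = np.percentile(milk, [2.5, 50.0, 97.5])

# Sensitivity: correlation of each (log) parameter with the (log) output;
# for a pure product model this recovers each sigma's share of the variance.
sens = {name: np.corrcoef(np.log(x), np.log(milk))[0, 1]
        for name, x in [("deposition", deposition),
                        ("tf_plant", tf_plant), ("tf_milk", tf_milk)]}
print(med, (lo, hi), sens)
```

Here the transfer factor with the largest log-scale spread dominates the output correlation, which is the kind of ranking the PATHWAY sensitivity procedure produces with partial correlation coefficients.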
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
Analysis of the reflection of a micro drop fiber sensor
Sun, Weimin; Liu, Qiang; Zhao, Lei; Li, Yingjuan; Yuan, Libo
2005-01-01
Micro drop fiber sensors are effective tools for measuring the characteristics of liquids. Sensors of this type are widely used in biotechnology and in the beverage and food markets. For a fiber micro drop sensor, the output light signal is normally wavy with two peaks. Careful analysis of this wavy process can identify the liquid components. Understanding how this wavy signal forms is important for designing a suitable sensing head and choosing a suitable signal-processing method. The dripping process of a liquid is related to the characteristics of the liquid and the shape of the sensing head. A quasi-Gaussian model of the light field from the input-fiber end is used to analyze the distribution of the light field in the liquid drop. In addition, given the characteristics of the liquid to be measured, the dripping process of the optical signal from the output-fiber end can be predicted. The reflection surface of the micro drop varies as a series of spheres with different radii and centers. The intensity of the reflected light changes with the shape of the surface, and this varying intensity is related to the surface tension, refractive index, transmission, and other properties. To support the above analysis, an experimental system was established in which an LED serves as the light source and a PIN photodiode converts the light signal to an electrical signal, which is collected by a data acquisition card. An on-line testing system was built to check the theory discussed above.
Variance estimation for sensitivity analysis of poverty and inequality measures
Directory of Open Access Journals (Sweden)
Christian Dudel
2017-04-01
Full Text Available Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates for the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to confidence intervals which are wide.
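As a minimal illustration of why the choice of equivalence scale matters for poverty estimates, the following sketch computes a 60%-of-median headcount ratio on synthetic household data under several scale exponents. The data and the single-parameter scale family `size**theta` are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical household data: disposable income and household size
income = rng.lognormal(mean=10.0, sigma=0.6, size=5_000)
size = rng.integers(1, 6, size=5_000)

def headcount(theta):
    """Poverty headcount using the equivalence scale size**theta
    (theta=0: no adjustment; theta=1: per-capita income)."""
    eq = income / size.astype(float) ** theta
    line = 0.6 * np.median(eq)          # 60%-of-median poverty line
    return float(np.mean(eq < line))

rates = {theta: headcount(theta) for theta in (0.0, 0.5, 0.75, 1.0)}
print(rates)  # the estimate moves with the (arbitrary) choice of scale
```

Running the headcount over a range of plausible scales, as above, is the simplest form of the sensitivity analysis the paper formalizes; the paper's contribution is attaching variance estimates to such results.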
Sensitivity analysis of water consumption in an office building
Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan
2018-02-01
This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term study, reduced pressure in its water connection was simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and various time step durations. Correlations between maximal coefficients of water demand variation during working time and provided pressure are suggested. The influence of the provided pressure in the water connection on mean coefficients of water demand variation is pointed out, both for the working hours of all days together and separately for days with identical working hours.
Probabilistic and sensitivity analysis of Botlek Bridge structures
Directory of Open Access Journals (Sweden)
Králik Juraj
2017-01-01
Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4000 tons, connected through ropes with counterweights. The paper focuses on the probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. The model and resistance uncertainties were taken into account in the LHS simulation method.
Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities
Directory of Open Access Journals (Sweden)
Thi Thanh Huyen Nguyen
2015-11-01
Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, obtained by replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
Seismic analysis of steam generator and parameter sensitivity studies
International Nuclear Information System (INIS)
Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun
2013-01-01
Background: The steam generator (SG) serves as the primary means of removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, including the moisture separator assembly and tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS piping and Reactor Pressure Vessel (RPV). Results: The seismic stress results for the SG are obtained. In addition, parameter sensitivities of the seismic analysis results are studied, such as the effect of another SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, and should be a focus in the research and design of future new-type NPP steam generators. (authors)
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1990-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1991-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbations theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly man-power intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
Analysis of snow bidirectional reflectance from ARCTAS Spring-2008 Campaign
Directory of Open Access Journals (Sweden)
A. Lyapustin
2010-05-01
Full Text Available The spring 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS experiment was one of the major intensive field campaigns of the International Polar Year aimed at detailed characterization of atmospheric physical and chemical processes in the Arctic region. A part of this campaign was a unique snow bidirectional reflectance experiment on the NASA P-3B aircraft conducted on 7 and 15 April by the Cloud Absorption Radiometer (CAR jointly with the airborne Ames Airborne Tracking Sunphotometer (AATS and ground-based Aerosol Robotic Network (AERONET sunphotometers. The CAR data were atmospherically corrected to derive snow bidirectional reflectance at high 1° angular resolution in view zenith and azimuthal angles, along with surface albedo. The derived albedo was generally in good agreement with ground albedo measurements collected on 15 April. The CAR snow bidirectional reflectance factor (BRF was used to study the accuracy of the analytical Ross-Thick Li-Sparse (RTLS, Modified Rahman-Pinty-Verstraete (MRPV and Asymptotic Analytical Radiative Transfer (AART BRF models. Except for the glint region (azimuthal angles φ<40°, the best fit MRPV and RTLS models fit snow BRF to within ±0.05. The plane-parallel radiative transfer (PPRT solution was also analyzed with models of spheres, spheroids, randomly oriented fractal crystals, and with a synthetic phase function. The latter merged the model of spheroids for the forward scattering angles with the fractal model in the backscattering direction. The PPRT solution with the synthetic phase function provided the best fit to the measured BRF in the full range of angles. Regardless of the snow grain shape, the PPRT model significantly over-/underestimated snow BRF in the glint/backscattering regions, respectively, which agrees with other studies. To improve agreement with experiment, we introduced a model of macroscopic snow surface roughness by averaging the PPRT solution over the ...
Analysis of Snow Bidirectional Reflectance from ARCTAS Spring-2008 Campaign
Lyapustin, A.; Gatebe, C. K.; Redemann, J.; Kahn, R.; Brandt, R.; Russell, P.; King, M. D.; Pedersen, C. A.; Gerland, S.; Poudyal, R.;
2010-01-01
The spring 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) experiment was one of the major intensive field campaigns of the International Polar Year aimed at detailed characterization of atmospheric physical and chemical processes in the Arctic region. A part of this campaign was a unique snow bidirectional reflectance experiment on the NASA P-3B aircraft conducted on 7 and 15 April by the Cloud Absorption Radiometer (CAR) jointly with the airborne Ames Airborne Tracking Sunphotometer (AATS) and ground-based Aerosol Robotic Network (AERONET) sunphotometers. The CAR data were atmospherically corrected to derive snow bidirectional reflectance at high 1 degree angular resolution in view zenith and azimuthal angles, along with surface albedo. The derived albedo was generally in good agreement with ground albedo measurements collected on 15 April. The CAR snow bidirectional reflectance factor (BRF) was used to study the accuracy of the analytical Ross-Thick Li-Sparse (RTLS), Modified Rahman-Pinty-Verstraete (MRPV) and Asymptotic Analytical Radiative Transfer (AART) BRF models. Except for the glint region (azimuthal angles phi less than 40 degrees), the best fit MRPV and RTLS models fit snow BRF to within 0.05. The plane-parallel radiative transfer (PPRT) solution was also analyzed with models of spheres, spheroids, randomly oriented fractal crystals, and with a synthetic phase function. The latter merged the model of spheroids for the forward scattering angles with the fractal model in the backscattering direction. The PPRT solution with the synthetic phase function provided the best fit to the measured BRF in the full range of angles. Regardless of the snow grain shape, the PPRT model significantly over-/underestimated snow BRF in the glint/backscattering regions, respectively, which agrees with other studies. To improve agreement with experiment, we introduced a model of macroscopic snow surface roughness by averaging the PPRT solution ...
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Energy Technology Data Exchange (ETDEWEB)
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
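Step (2) of the methodology above, the parameter screening study, is often carried out with elementary-effects methods before the quantitative stage. The sketch below is a simplified Morris-style screening on a toy three-input model; the model, sample sizes, and step size are illustrative, not taken from the report or from PSUADE.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Toy model: the third input is nearly inert by construction
    return x[0] ** 2 + 2.0 * x[1] + 0.001 * x[2]

def morris_mu_star(model, d, r=50, delta=0.1):
    """Mean absolute elementary effect per input (simplified Morris screening)."""
    mu = np.zeros(d)
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, d)   # random base point in [0, 1)^d
        f0 = model(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta                     # perturb one input at a time
            mu[i] += abs(model(xp) - f0) / delta
    return mu / r

mu_star = morris_mu_star(model, d=3)
print(np.round(mu_star, 4))
```

Inputs with a negligible mean elementary effect (here the third one) are the candidates to freeze, so that the expensive quantitative sensitivity analysis runs on a reduced parameter set.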
Dadgar, Sina; Rodríguez Troncoso, Joel; Rajaram, Narasimhan
2018-02-01
Currently, anatomical assessment of tumor volume performed several weeks after completion of treatment is the clinical standard for determining whether a cancer patient has responded to a treatment. However, functional changes within the tumor could potentially provide information regarding treatment resistance or response much earlier than anatomical changes. We have used diffuse reflectance spectroscopy (DRS) to assess the short- and long-term re-oxygenation kinetics of human head and neck squamous cell carcinoma xenografts in response to radiation therapy. First, we injected the UM-SCC-22B cell line into the flanks of 50 mice to grow xenografts. Once the tumor volume reached 200 mm3 (designated as Day 1), the mice were distributed into radiation and control groups. Members of the radiation group received a clinical radiation dose of 2 Gy/day on Days 1, 4, 7, and 10, for a cumulative dose of 8 Gy. DRS spectra of these tumors were collected for 14 days during and after therapy, and the collected spectra of each tumor were converted to optical properties using a lookup-table-based inverse model. We found statistically significant differences in tumor growth rate between the two groups, an indication of the sensitivity of this cell line to radiation. We also found significantly different hemoglobin content and scattering magnitude and size in the two groups; scattering has previously been associated with necrosis. We furthermore found significantly different time-dependent changes in vascular oxygenation and tumor hemoglobin concentration in the post-radiation days.
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed
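GRESS and EXAP work by source transformation of FORTRAN code, but the derivative-propagation idea they automate can be illustrated with forward-mode automatic differentiation via dual numbers. This minimal Python sketch is an analogy to show the principle, not GRESS's actual mechanism:

```python
import math

class Dual:
    """Minimal forward-mode AD value: .val carries the result, .dot its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule propagates the derivative alongside the value
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for an elemental function
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# Sensitivity d f/d k of f(k) = k*sin(k) + 3k at k = 2, in one forward pass
k = Dual(2.0, 1.0)        # seed the input derivative with 1
f = k * sin(k) + 3 * k
print(f.val, f.dot)       # f.dot equals the analytic sin(k) + k*cos(k) + 3
```

A source-transformation tool effectively inserts the same product-rule and chain-rule statements into the original code, which is why the resulting sensitivities are exact first derivatives rather than finite-difference approximations.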
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult ... in combination with the interactive parallel coordinate plot (PCP). The latter is an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers to focus their attention on the most important design parameters when exploring ...
Sensitization trajectories in childhood revealed by using a cluster analysis
DEFF Research Database (Denmark)
Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik
2017-01-01
BACKGROUND: Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens, and that such patterns might be more biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent ...
Time-dependent reliability sensitivity analysis of motion mechanisms
International Nuclear Information System (INIS)
Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng
2016-01-01
Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, and the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
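A parametric reliability sensitivity of the kind defined above, the derivative of reliability with respect to a distribution parameter, can be approximated crudely by a central finite difference with common random numbers (the paper's envelope-function method is more efficient; this is only the brute-force baseline). The limit state and parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def reliability(mu, sigma, z):
    """P(g > 0) for a hypothetical limit state g = 10 - X, X ~ N(mu, sigma)."""
    return np.mean(10.0 - (mu + sigma * z) > 0.0)

# Common random numbers: reuse the same standard-normal draws for both
# perturbed evaluations so the finite difference is not swamped by MC noise.
z = rng.standard_normal(500_000)
h = 0.1
dR_dmu = (reliability(7.0 + h, 1.0, z) - reliability(7.0 - h, 1.0, z)) / (2.0 * h)
print(dR_dmu)  # analytic value is -phi((10 - 7)/1)/1, about -0.0044
```

Negative sensitivity here means increasing the mean demand lowers reliability, which is the kind of ranking information PRS indices provide for reliability-based design.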
Probabilistic sensitivity analysis of system availability using Gaussian processes
International Nuclear Information System (INIS)
Daneshkhah, Alireza; Bedford, Tim
2013-01-01
The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and is usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to changes in the main parameters. In the simplest case, in which the failure/repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without a closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO), which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent, one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis.
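The emulator idea can be sketched in a few lines: fit a Gaussian-process interpolant to a handful of "code runs", then estimate main-effect sensitivity indices from cheap emulator evaluations instead of further expensive runs. The kernel, toy model, and sample sizes below are illustrative and are not those of the BACCO implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def expensive_model(x):
    # Stand-in for the availability computation; input 1 dominates by design
    return np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1]

# "Code runs": a small design of experiments
X = rng.uniform(0.0, 1.0, (40, 2))
y = expensive_model(X)

def kern(a, b, ell=0.25):
    # squared-exponential (RBF) covariance
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

alpha = np.linalg.solve(kern(X, X) + 1e-8 * np.eye(len(X)), y)

def emulate(Xs):
    # GP posterior mean, used in place of the expensive model
    return kern(Xs, X) @ alpha

# Main-effect indices S_i = Var(E[Y|X_i]) / Var(Y), from emulator draws only
N = 20_000
total_var = np.var(emulate(rng.uniform(0.0, 1.0, (N, 2))))
S = []
for i in range(2):
    cond_means = []
    for g in np.linspace(0.0, 1.0, 60):
        Xg = rng.uniform(0.0, 1.0, (400, 2))
        Xg[:, i] = g                     # freeze input i, average over the rest
        cond_means.append(emulate(Xg).mean())
    S.append(np.var(cond_means) / total_var)
print(np.round(S, 2))
```

After the 40 training runs, every evaluation above touches only the emulator, which is the source of the computational savings the abstract describes.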
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
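To first order, the effect of input correlations on output uncertainty is visible directly in the propagation formula Var(y) ≈ gᵀΣg: the off-diagonal covariance terms add to (or cancel from) the independent-input variance. A hypothetical two-input linear example:

```python
import numpy as np

# y = 2*x1 + 3*x2; first-order propagation Var(y) ~ g^T Sigma g
g = np.array([2.0, 3.0])                  # gradient at the nominal point
s1, s2, rho = 0.5, 0.4, 0.8               # input std devs and correlation

Sigma_ind = np.diag([s1 ** 2, s2 ** 2])   # independence assumption
Sigma_cor = np.array([[s1 ** 2, rho * s1 * s2],
                      [rho * s1 * s2, s2 ** 2]])

var_ind = float(g @ Sigma_ind @ g)        # 2^2*0.25 + 3^2*0.16 = 2.44
var_cor = float(g @ Sigma_cor @ g)        # adds 2*2*3*rho*s1*s2 = 1.92
print(var_ind, var_cor)
```

With positive correlation and same-sign gradients the cross term inflates the output variance substantially, which is exactly the kind of effect the paper argues should be checked before assuming independence.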
Sensitivity Analysis Applied in Design of Low Energy Office Building
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik
2008-01-01
... satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind
2007-01-01
... satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible...
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
... of two model parameters at a time on the solution trajectory of a physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select, cost-effectively, the important pair of parameters of this physical process. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Sensitivity analysis for contagion effects in social networks
VanderWeele, Tyler J.
2014-01-01
Analyses of social network data have suggested that obesity, smoking, happiness and loneliness all travel through social networks. Individuals exert “contagion effects” on one another through social ties and association. These analyses have come under critique because of the possibility that homophily from unmeasured factors may explain these statistical associations and because similar findings can be obtained when the same methodology is applied to height, acne and headaches, for which the conclusion of contagion effects seems somewhat less plausible. We use sensitivity analysis techniques to assess the extent to which supposed contagion effects for obesity, smoking, happiness and loneliness might be explained away by homophily or confounding, and the extent to which the critique using analysis of data on height, acne and headaches is relevant. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so. Supposed effects for height, acne and headaches are all easily explained away by latent homophily and confounding. The methodology that has been employed in past studies for contagion effects in social networks, when used in conjunction with sensitivity analysis, may prove useful in establishing social influence for various behaviors and states. The sensitivity analysis approach can be used to address the critique of latent homophily as a possible explanation of associations interpreted as contagion effects. PMID:25580037
Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...
African Journals Online (AJOL)
This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...
Beyond the GUM: variance-based sensitivity analysis in metrology
International Nuclear Information System (INIS)
Lira, I
2016-01-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
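The "law of propagation of uncertainties" discussed above can be checked against a Monte Carlo evaluation for a simple product model Y = X1·X2; for this mildly nonlinear model the two nearly agree, matching the article's point that sensitivity analysis adds most value when nonlinearity is strong. The numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Measurand Y = X1 * X2 with independent input estimates and uncertainties
mu1, u1 = 10.0, 0.5
mu2, u2 = 2.0, 0.3

# GUM law of propagation: u(y)^2 = (c1*u1)^2 + (c2*u2)^2,
# with sensitivity coefficients c_i = dY/dX_i at the estimates
c1, c2 = mu2, mu1
u_lpu = np.hypot(c1 * u1, c2 * u2)

# Monte Carlo check with normal inputs
x1 = rng.normal(mu1, u1, 1_000_000)
x2 = rng.normal(mu2, u2, 1_000_000)
u_mc = np.std(x1 * x2)
print(u_lpu, u_mc)
```

The small residual gap between the two numbers is the second-order term u1·u2 that the first-order law ignores; variance-based indices become genuinely informative only when such nonlinear contributions are large.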
Sensitivity analysis of the Ohio phosphorus risk index
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Weighting-Based Sensitivity Analysis in Causal Mediation Studies
Hong, Guanglei; Qin, Xu; Yang, Fan
2018-01-01
Through a sensitivity analysis, the analyst attempts to determine whether a conclusion of causal inference could be easily reversed by a plausible violation of an identification assumption. Analytic conclusions that are harder to alter by such a violation are expected to add a higher value to scientific knowledge about causality. This article…
Sensitivity analysis of railpad parameters on vertical railway track dynamics
Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.
2016-01-01
This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad
Methods for global sensitivity analysis in life cycle assessment
Groen, Evelyne A.; Bokkers, Eddy; Heijungs, Reinout; Boer, de Imke J.M.
2017-01-01
Purpose: Input parameters required to quantify environmental impact in life cycle assessment (LCA) can be uncertain due to e.g. temporal variability or unknowns about the true value of emission factors. Uncertainty of environmental impact can be analysed by means of a global sensitivity analysis to
Sensitivity analysis on ultimate strength of aluminium stiffened panels
DEFF Research Database (Denmark)
Rigo, P.; Sarghiuta, R.; Estefen, S.
2003-01-01
This paper presents the results of an extensive sensitivity analysis carried out by the Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees on ul...
Sensitivity and specificity of coherence and phase synchronization analysis
International Nuclear Information System (INIS)
Winterhalder, Matthias; Schelter, Bjoern; Kurths, Juergen; Schulze-Bonhage, Andreas; Timmer, Jens
2006-01-01
In this Letter, we show that coherence and phase synchronization analysis are sensitive but not specific in detecting the correct class of underlying dynamics. We propose procedures to increase specificity and demonstrate the power of the approach by application to paradigmatic dynamic model systems
Sensitivity Analysis of Structures by Virtual Distortion Method
DEFF Research Database (Denmark)
Gierlinski, J.T.; Holnicki-Szulc, J.; Sørensen, John Dalsgaard
1991-01-01
are used in structural optimization, see Haftka [4]. The recently developed Virtual Distortion Method (VDM) is a numerical technique which offers an efficient approach to the calculation of sensitivity derivatives. This method was originally applied to structural remodelling and collapse analysis, see...
Design tradeoff studies and sensitivity analysis. Appendix B
Energy Technology Data Exchange (ETDEWEB)
1979-05-25
The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)
Sensitivity analysis and power for instrumental variable studies.
Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S
2018-03-31
In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs
International Nuclear Information System (INIS)
Marshall, Margaret A.; Bess, John D.
2009-01-01
A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented by Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.
Sensitivity analysis of LOFT L2-5 test calculations
International Nuclear Information System (INIS)
Prosek, Andrej
2014-01-01
The uncertainty quantification of best-estimate code predictions is typically accompanied by a sensitivity analysis, in which the influence of the individual contributors to uncertainty is determined. The objective of this study is to demonstrate the improved fast Fourier transform based method by signal mirroring (FFTBM-SM) for sensitivity analysis. The sensitivity study was performed for the LOFT L2-5 test, which simulates a large-break loss of coolant accident. There were 14 participants in the BEMUSE (Best Estimate Methods-Uncertainty and Sensitivity Evaluation) programme, each performing a reference calculation and 15 sensitivity runs of the LOFT L2-5 test. The important input parameters varied were break area, gap conductivity, fuel conductivity, decay power, etc. The FFTBM-SM was used to assess the influence of input parameters on the calculated results. The only difference between FFTBM-SM and the original FFTBM is that in FFTBM-SM the signals are symmetrized to eliminate the edge effect (the so-called edge is the difference between the first and last data point of one period of the signal) when calculating the average amplitude. It is very important to eliminate unphysical contributions to the average amplitude, which is used as a figure of merit for the influence of an input parameter on the output parameters. The idea is to use the reference calculation as the 'experimental signal', the sensitivity run as the 'calculated signal', and the average amplitude as a figure of merit for sensitivity instead of for code accuracy. The larger the average amplitude, the larger the influence of the varied input parameter. The results show that with FFTBM-SM the analyst can get a good picture of the contribution of each parameter variation to the results. They show when the input parameters are influential and how large that influence is. FFTBM-SM could also be used to quantify the influence of several parameter variations on the results. However, the influential parameters could not be
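As a rough illustration of the figure of merit described above, the sketch below computes an average amplitude from mirrored signals with a plain discrete Fourier transform. It is a simplified stand-in under stated assumptions, not the actual FFTBM-SM implementation (the full method involves further details such as frequency weighting and band selection); the signals are arbitrary example data:

```python
import cmath

def dft_magnitudes(x):
    # Plain O(n^2) discrete Fourier transform; a real FFT would be used in practice.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) for f in range(n)]

def average_amplitude_sm(reference, sensitivity_run):
    # Signal mirroring: append each signal's reverse so the first and last
    # points coincide, removing the 'edge' contribution before transforming.
    mirror = lambda s: s + s[::-1]
    diff = [c - r for c, r in zip(sensitivity_run, reference)]
    return (sum(dft_magnitudes(mirror(diff)))
            / sum(dft_magnitudes(mirror(reference))))
```

A sensitivity run identical to the reference gives an average amplitude of 0; the further a run departs from the reference, the larger the value, which is the ranking signal the abstract describes.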
Energy Technology Data Exchange (ETDEWEB)
Streli, C. [Atominstitut, Vienna University of Technology, Stadionallee 2, A-1020 Vienna (Austria)]. E-mail: streli@ati.ac.at; Pepponi, G. [ITC-irst, Povo (Italy); Wobrauschek, P. [Atominstitut, Vienna University of Technology, Stadionallee 2, A-1020 Vienna (Austria); Jokubonis, C. [Atominstitut, Vienna University of Technology, Stadionallee 2, A-1020 Vienna (Austria); Falkenberg, G. [Hamburger Synchrotronstrahlungslabor at Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, D-22603 Hamburg (Germany); Zaray, G. [Institute of Inorganic and Applied Chemistry, Eötvös University, Budapest (Hungary); Broekaert, J. [Institute of Inorganic and Applied Chemistry, University Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Fittschen, U. [Institute of Inorganic and Applied Chemistry, University Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany); Peschel, B. [Institute of Inorganic and Applied Chemistry, University Hamburg, Martin-Luther-King-Platz 6, 20146 Hamburg (Germany)
2006-11-15
At the Hamburger Synchrotronstrahlungslabor (HASYLAB), Beamline L, a vacuum chamber for synchrotron radiation-induced total reflection X-ray fluorescence analysis is now available which can easily be installed using the adjustment components for microanalysis present at this beamline. The detector is now in its final version, a Vortex silicon drift detector with a 50-mm² active area from Radiant Detector Technologies. With the Ni/C multilayer monochromator set to 17 keV, extrapolated detection limits of 8 fg were obtained using the 50-mm² silicon drift detector with 1000 s live time on a sample containing 100 pg of Ni. Various applications are presented, especially of samples which are available in very small amounts. As synchrotron radiation-induced total reflection X-ray fluorescence analysis is much more sensitive than tube-excited total reflection X-ray fluorescence analysis, the sampling time of aerosol samples can be diminished, resulting in a more precise time resolution of atmospheric events. Aerosols, directly sampled on Si reflectors in an impactor, were investigated. A further application was the determination of contamination elements in a slurry of high-purity Al₂O₃. No digestion is required; the sample is pipetted and dried before analysis. A comparison with laboratory total reflection X-ray fluorescence analysis showed the higher sensitivity of synchrotron radiation-induced total reflection X-ray fluorescence analysis; more contamination elements could be detected. Using the Si-111 crystal monochromator also available at Beamline L, XANES measurements to determine the chemical state were performed. This is only possible with lower sensitivity, as the flux transmitted by the crystal monochromator is about a factor of 100 lower than that transmitted by the multilayer monochromator. Preliminary results of X-ray absorption near-edge structure measurements for As in xylem sap from cucumber plants fed with As(III) and As(V) are reported.
International Nuclear Information System (INIS)
Streli, C.; Pepponi, G.; Wobrauschek, P.; Jokubonis, C.; Falkenberg, G.; Zaray, G.; Broekaert, J.; Fittschen, U.; Peschel, B.
2006-01-01
At the Hamburger Synchrotronstrahlungslabor (HASYLAB), Beamline L, a vacuum chamber for synchrotron radiation-induced total reflection X-ray fluorescence analysis is now available which can easily be installed using the adjustment components for microanalysis present at this beamline. The detector is now in its final version, a Vortex silicon drift detector with a 50-mm² active area from Radiant Detector Technologies. With the Ni/C multilayer monochromator set to 17 keV, extrapolated detection limits of 8 fg were obtained using the 50-mm² silicon drift detector with 1000 s live time on a sample containing 100 pg of Ni. Various applications are presented, especially of samples which are available in very small amounts. As synchrotron radiation-induced total reflection X-ray fluorescence analysis is much more sensitive than tube-excited total reflection X-ray fluorescence analysis, the sampling time of aerosol samples can be diminished, resulting in a more precise time resolution of atmospheric events. Aerosols, directly sampled on Si reflectors in an impactor, were investigated. A further application was the determination of contamination elements in a slurry of high-purity Al₂O₃. No digestion is required; the sample is pipetted and dried before analysis. A comparison with laboratory total reflection X-ray fluorescence analysis showed the higher sensitivity of synchrotron radiation-induced total reflection X-ray fluorescence analysis; more contamination elements could be detected. Using the Si-111 crystal monochromator also available at Beamline L, XANES measurements to determine the chemical state were performed. This is only possible with lower sensitivity, as the flux transmitted by the crystal monochromator is about a factor of 100 lower than that transmitted by the multilayer monochromator. Preliminary results of X-ray absorption near-edge structure measurements for As in xylem sap from cucumber plants fed with As(III) and As(V) are reported. Detection
International Nuclear Information System (INIS)
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of the deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited-number-of-parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
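Of the two approaches mentioned, the statistical sampling one is the easiest to sketch. Below is a minimal Latin Hypercube Sampling routine in Python (standard library only); the stratification property is the point, not any particular flow model it would feed:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    # One point per equal-probability stratum for every parameter,
    # with the strata independently permuted across parameters.
    rng = random.Random(seed)
    samples = [[0.0] * n_params for _ in range(n_samples)]
    for j in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # Uniform draw inside stratum strata[i] of [0, 1).
            samples[i][j] = (strata[i] + rng.random()) / n_samples
    return samples
```

Each column covers every one of the n strata exactly once, which is what makes LHS more space-filling than simple random sampling at the same cost, and hence "easy to apply ... for codes with a moderate number of parameters" as the abstract puts it.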
Sensitivity and uncertainty analysis of NET/ITER shielding blankets
International Nuclear Information System (INIS)
Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.
1990-09-01
Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs
Sensitivity analysis of critical experiments with evaluated nuclear data libraries
International Nuclear Information System (INIS)
Fujiwara, D.; Kosaka, S.
2008-01-01
Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k-eff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k-eff discrepancies between libraries were decomposed to specify the source of difference in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments for the boiling water reactor under hot operating conditions. (authors)
Importance measures in global sensitivity analysis of nonlinear models
International Nuclear Information System (INIS)
Homma, Toshimitsu; Saltelli, Andrea
1996-01-01
The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost.
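The first-order and total-effect indices discussed above can be estimated by Monte Carlo pick-freeze sampling. The sketch below uses common estimators from the later literature (a Saltelli-style estimator for the first-order index and a Jansen-style estimator for the total effect) on a toy additive model; it illustrates the idea and is not the authors' own implementation:

```python
import random

def sobol_indices(f, n_params, n=20000, seed=1):
    # Pick-freeze Monte Carlo estimators: first-order index S_i and
    # total-effect index ST_i, for f on independent Uniform(0,1) inputs.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum(v * v for v in fA) / n - mean * mean
    S, ST = [], []
    for i in range(n_params):
        # Evaluate f on A with column i taken from B.
        fAB = [f(A[k][:i] + [B[k][i]] + A[k][i + 1:]) for k in range(n)]
        S.append(sum(fB[k] * (fAB[k] - fA[k]) for k in range(n)) / n / var)
        ST.append(sum((fA[k] - fAB[k]) ** 2 for k in range(n)) / (2 * n) / var)
    return S, ST
```

For an additive test function y = 4·x1 + x2 the analytic values are S1 = ST1 = 16/17 and S2 = ST2 = 1/17; on a model with interactions, the gap between ST_i and S_i measures exactly the synergetic terms the abstract refers to.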
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
The Teacher Is a Facilitator: Reflecting on ESL Teacher Beliefs through Metaphor Analysis
Farrell, Thomas S. C.
2016-01-01
Metaphors offer a lens through which language teachers express their understanding of their work. Metaphor analysis can be a powerful reflective tool for expressing meanings that underpin ways of thinking about teaching and learning English as a second/foreign language. Through reflecting on their personal teaching metaphors, teachers become more…
International Nuclear Information System (INIS)
Penfold, J.
1992-07-01
Data reduction and analysis programs for neutron reflectivity data from monolayer adsorption at interfaces are described. The application of model fitting to the reflectivity data, and the determination of partial structure factors within the kinematic approximation are discussed. Recent data for the adsorption of surfactants at the air-solution interface are used to illustrate the programs described. (author)
A rational analysis of alternating search and reflection strategies in problem solving
Taatgen, N; Shafto, MG; Langley, P
1997-01-01
In this paper two approaches to problem solving, search and reflection, are discussed, and combined in two models, both based on rational analysis (Anderson, 1990). The first model is a dynamic growth model, which shows that alternating search and reflection is a rational strategy. The second model
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
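A full BSEM prior sensitivity analysis is beyond a short snippet, but the core exercise, refitting the same data under several priors and comparing the resulting estimates, can be illustrated with a conjugate normal toy model (all numbers here are invented for illustration, not taken from the article):

```python
def posterior_mean(data, prior_mean, prior_sd, noise_sd=1.0):
    # Conjugate normal-normal update: the posterior mean is a
    # precision-weighted average of the prior mean and the data.
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = len(data) / noise_sd ** 2
    return ((prior_prec * prior_mean + sum(data) / noise_sd ** 2)
            / (prior_prec + data_prec))

data = [1.8, 2.1, 2.4, 1.9]  # illustrative observations, sample mean 2.05

# Prior sensitivity check: rerun the analysis under several prior scales.
estimates = {sd: posterior_mean(data, 0.0, sd) for sd in (0.1, 1.0, 10.0)}
```

A vague prior (sd = 10) essentially reproduces the sample mean, while a tight prior centered at 0 pulls the estimate strongly toward 0. When such a sweep changes the substantive conclusion, the default prior is doing real work, which is what the article's step-by-step guide checks for in the BSEM setting.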
Subsurface offset behaviour in velocity analysis with extended reflectivity images
Mulder, W.A.
2012-01-01
Migration velocity analysis with the wave equation can be accomplished by focusing of extended migration images, obtained by introducing a subsurface offset or shift. A reflector in the wrong velocity model will show up as a curve in the extended image. In the correct model, it should collapse to a
Subsurface offset behaviour in velocity analysis with extended reflectivity images
Mulder, W.A.
2013-01-01
Migration velocity analysis with the constant-density acoustic wave equation can be accomplished by the focusing of extended migration images, obtained by introducing a subsurface shift in the imaging condition. A reflector in a wrong velocity model will show up as a curve in the extended image. In
Subhash, Hrebesh M.; Wang, Ruikang K.; Chen, Fangyi; Nuttall, Alfred L.
2013-03-01
Most optical coherence tomography (OCT) systems for high-resolution imaging of biological specimens are based on refractive-type microscope objectives, which are optimized for a specific wavelength of the optical source. In this study, we present the feasibility of using a commercially available reflective-type objective for high-sensitivity, high-resolution structural and functional imaging of the cochlear microstructures of an excised guinea pig through the intact temporal bone. Unlike conventional refractive-type microscope objectives, reflective objectives are free from chromatic aberrations due to their all-reflecting nature and can support a broad spectral band with very high light collection efficiency.
Directory of Open Access Journals (Sweden)
Guiju ZHANG
2015-11-01
Developments in micro- and nanofabrication technologies have led to a variety of grating waveguide structures (GWS) being proposed and implemented in optics and laser application systems. A new design of multilayered nanostructured double-grating is described for a reflection notch filter. A thin metal film and a dielectric film are used and designed with one-dimensional composite gratings. The results calculated by rigorous coupled-wave analysis (RCWA) show that the thin metal film between substrate and grating can produce significant attenuated reflections and efficiency over a broad reflected spectral range. The behavior of such a reflection filter is evaluated for refractive index sensing, which can be applied inside the integrated waveguide structure while succeeding cycles in measurement. The filter peaks are designed and obtained in the visible range with a full width at half maximum (FWHM) of several nanometers to less than one nanometer. The multilayered structure shows a refractive index sensitivity of 220 nm/RIU as the surroundings change. The reflection spectra are studied under different periods, depths and duty cycles. The passive structure and its characteristics can find practical applications in various fields, such as optical sensing, color filtering, Raman spectroscopy and laser technology. DOI: http://dx.doi.org/10.5755/j01.ms.21.4.9625
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
Sensitivity analysis of predictive models with an automated adjoint generator
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.
1987-01-01
The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig
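The key property claimed above, sensitivities of one response with respect to every parameter from a single adjoint solve, can be illustrated on a tiny invented model A(p)u = b with response J = c·u. Everything below is a hypothetical example for exposition, not ADGEN itself:

```python
def solve2(A, b):
    # Direct 2x2 linear solve (Cramer's rule).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def adjoint_sensitivities(p):
    # Invented model: A(p) u = b with A = [[p0, 1], [1, p1]], response J = u[0].
    A = [[p[0], 1.0], [1.0, p[1]]]
    b = [1.0, 2.0]
    u = solve2(A, b)
    # One adjoint solve A^T lam = c (with c = dJ/du) serves ALL parameters.
    c = [1.0, 0.0]
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, c)
    # dJ/dp_i = -lam^T (dA/dp_i) u; here b does not depend on p, and
    # dA/dp_i has a single 1 on the i-th diagonal entry.
    dJ = [-lam[0] * u[0], -lam[1] * u[1]]
    return u, dJ
```

Adding more parameters to A would add one inner product per dJ/dp_i but no extra solves, which is why the sensitivity coefficients for every parameter come from a single adjoint run; the large number of partial derivatives dA/dp_i is exactly what ADGEN is described as generating automatically.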
Sensitivity analysis of time-dependent laminar flows
International Nuclear Information System (INIS)
Hristova, H.; Etienne, S.; Pelletier, D.; Borggaard, J.
2004-01-01
This paper presents a general sensitivity equation method (SEM) for time dependent incompressible laminar flows. The SEM accounts for complex parameter dependence and is suitable for a wide range of problems. The formulation is verified on a problem with a closed form solution obtained by the method of manufactured solution. Systematic grid convergence studies confirm the theoretical rates of convergence in both space and time. The methodology is then applied to pulsatile flow around a square cylinder. Computations show that the flow starts with symmetrical vortex shedding followed by a transition to the traditional Von Karman street (alternate vortex shedding). Simulations show that the transition phase manifests itself earlier in the sensitivity fields than in the flow field itself. Sensitivities are then demonstrated for fast evaluation of nearby flows and uncertainty analysis. (author)
Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety
International Nuclear Information System (INIS)
Broadhead, B.L.; Childs, R.L.; Rearden, B.T.
1999-01-01
Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community
International Nuclear Information System (INIS)
Evora, Maria C.; Goncalez, Odair L.
2002-01-01
A comparative study involving transmission, reflection and photoacoustic FTIR techniques is presented for the analysis of polyamide-6. The potential and limitations of these methods are investigated by analyzing structural variations that take place at the surface and in the bulk of recycled and irradiated polyamide-6; irradiation was performed with a 1.5 MeV electron beam at a 500 kGy dose in the presence of O2. FTIR techniques appear to be sensitive in detecting the small structural changes that occur in recycled and irradiated polyamide-6. The analysis of samples indicated the formation of OH, HOC=O- and -C=O groups. Also, small structural changes were detected which are characteristic of NH and CN-C=O groups. Transmission techniques better reveal the structural changes in the bulk, and microscopy-FTIR appears to be more sensitive in detecting what occurs at the sample surface. (author)
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
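The two-stage partitioning above can be illustrated with a deliberately tiny Monte Carlo sketch. This is our own construction, not the authors' formulation: the model y = theta * x, the nominal value, the distributions, and the sample size are all invented. Step 1 computes the variance with the parameter theta frozen at its nominal value (regressive variable x only); step 2 lets theta vary, and the difference is attributed to parameter uncertainty.

```python
import random

random.seed(0)
N = 100_000

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

# Step 1: variability from the regressive variable x alone,
# with the model parameter theta frozen at its nominal value.
theta_nom = 2.0
v_regressive = variance([theta_nom * random.uniform(0.0, 1.0)
                         for _ in range(N)])        # analytic value: 1/3

# Step 2: fold in parameter uncertainty, theta ~ U(1.5, 2.5).
v_total = variance([random.uniform(1.5, 2.5) * random.uniform(0.0, 1.0)
                    for _ in range(N)])             # analytic value: 13/36

v_parameter = v_total - v_regressive   # extra variance attributed to theta
```

Because this toy model is linear in theta, both variances also have closed forms, which is the "analytical solutions" case the abstract mentions.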
Understanding dynamics using sensitivity analysis: caveat and solution
2011-01-01
Background Parametric sensitivity analysis (PSA) has become one of the most commonly used tools in computational systems biology, in which the sensitivity coefficients are used to study the parametric dependence of biological models. As many of these models describe dynamical behaviour of biological systems, the PSA has subsequently been used to elucidate important cellular processes that regulate this dynamics. However, in this paper, we show that the PSA coefficients are not suitable in inferring the mechanisms by which dynamical behaviour arises and in fact it can even lead to incorrect conclusions. Results A careful interpretation of parametric perturbations used in the PSA is presented here to explain the issue of using this analysis in inferring dynamics. In short, the PSA coefficients quantify the integrated change in the system behaviour due to persistent parametric perturbations, and thus the dynamical information of when a parameter perturbation matters is lost. To get around this issue, we present a new sensitivity analysis based on impulse perturbations on system parameters, which is named impulse parametric sensitivity analysis (iPSA). The inability of PSA and the efficacy of iPSA in revealing mechanistic information of a dynamical system are illustrated using two examples involving switch activation. Conclusions The interpretation of the PSA coefficients of dynamical systems should take into account the persistent nature of parametric perturbations involved in the derivation of this analysis. The application of PSA to identify the controlling mechanism of dynamical behaviour can be misleading. By using impulse perturbations, introduced at different times, the iPSA provides the necessary information to understand how dynamics is achieved, i.e. which parameters are essential and when they become important. PMID:21406095
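The persistent-versus-impulse distinction can be mimicked with finite differences on a toy ODE. This is our own construction, not the authors' iPSA formalism: the system dx/dt = -k*x + 1, the horizon, the window width, and the perturbation size are all invented. Perturbing k over the whole horizon recovers a PSA-style coefficient; confining the perturbation to a short window around time tau reveals when the parameter matters, which here is late in the transient, once x has grown away from zero.

```python
def simulate(k_of_t, T=3.0, dt=0.001):
    # forward Euler for dx/dt = -k(t)*x + 1, x(0) = 0
    x, t = 0.0, 0.0
    for _ in range(round(T / dt)):
        x += dt * (-k_of_t(t) * x + 1.0)
        t += dt
    return x

k0, dk = 2.0, 1e-4
base = simulate(lambda t: k0)

# PSA-style: persistent perturbation of k over the whole horizon
psa = (simulate(lambda t: k0 + dk) - base) / dk

# iPSA-style: perturbation confined to a short window around tau
def impulse_sens(tau, width=0.01):
    bumped = simulate(
        lambda t: k0 + dk / width if tau <= t < tau + width else k0)
    return (bumped - base) / dk

early, late = impulse_sens(0.1), impulse_sens(2.9)
# |late| >> |early|: the decay rate only matters once x is far from 0
```

The PSA number is a single integrated figure; the impulse curve, evaluated at different tau, recovers the timing information that the abstract argues is lost.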
Angle-domain Migration Velocity Analysis using Wave-equation Reflection Traveltime Inversion
Zhang, Sanzong; Schuster, Gerard T.; Luo, Yi
2012-01-01
way as wave-equation transmission traveltime inversion. The residual moveout analysis in the angle-domain common image gathers provides a robust estimate of the depth residual, which is converted to the reflection traveltime residual for the velocity
Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization
Directory of Open Access Journals (Sweden)
Jianjun Tang
2014-01-01
Assembly precision optimization of complex products offers substantial benefit in improving product quality. Because of the coupling of multiple deviation sources, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, a sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to assembly constraint relations, assembly sequences and locating, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
Sensitivity analysis for improving nanomechanical photonic transducers biosensors
International Nuclear Information System (INIS)
Fariña, D; Álvarez, M; Márquez, S; Lechuga, L M; Dominguez, C
2015-01-01
The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry) based on two butt-coupled misaligned waveguides displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would increase both microcantilever bending during the biorecognition process and increase optical sensitivity to 4.8 × 10 −2 nm −1 , an order of magnitude higher than other similar opto-mechanical devices. Moreover, the analysis shows that a single mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal. (paper)
Houston, Cynthia R.
2016-01-01
Reflective practice is an important skill that teachers must develop to be able to assess the effectiveness of their teaching and modify their instructional behavior. In many education programs reflective narratives, which are often part of teaching portfolios, are intended to assess students' abilities in these areas. Research on reflectivity in…
Energy Technology Data Exchange (ETDEWEB)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
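Uncertainty propagation of the kind the abstract describes reduces, in its simplest form, to the "sandwich rule": given a vector of sensitivity coefficients S (dR/dsigma_i) and a covariance matrix C for the cross sections, the response variance is S^T C S. The sketch below uses invented numbers, not a real cross-section covariance library, and does not claim to reproduce SENSIT's internal treatment of SED uncertainties.

```python
S = [0.8, -0.3, 0.5]                  # toy relative sensitivity coefficients
C = [[0.010, 0.002, 0.000],           # toy relative covariance matrix
     [0.002, 0.020, 0.001],
     [0.000, 0.001, 0.005]]

# sandwich rule: Var(R) = S^T C S
var_R = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
std_R = var_R ** 0.5                  # estimated standard deviation of R
```

Off-diagonal covariance terms (correlated cross sections) enter the sum directly, which is why a full covariance matrix rather than a list of variances is needed.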
The Rural Institutions in Colombia: Reflections for Analysis and Strengthening
Directory of Open Access Journals (Sweden)
Sandro Ropero Beltran
2016-09-01
The rural question is one of the great challenges for institutions in Colombia. The discussion regarding institutional efficiency and effectiveness for the rural sector should be brought forward based on the circumstantial aspects that mediate the social, political, cultural, environmental, economic and productive dimensions of Colombian agriculture, including trade agreements and, eventually, the post-conflict. The new rurality as an approach to rural development poses a different view of the subject: it conceives the rural as a multisectorial and multidimensional space, which is the starting point from which arise the elements of analysis that allow a broad and participatory institutional debate facing the structural transformation of the rural reality.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
Energy Technology Data Exchange (ETDEWEB)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models
Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko
2015-01-01
Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove valuable tools to study the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600
Global sensitivity analysis of multiscale properties of porous materials
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
Sensitivity analysis overlaps of friction elements in cartridge seals
Directory of Open Access Journals (Sweden)
Žmindák Milan
2018-01-01
Cartridge seals are self-contained units consisting of a shaft sleeve, seals, and a gland plate. The applications of mechanical seals are numerous; the most common example is in bearing production for the automobile industry. This paper deals with the sensitivity analysis of the overlaps of friction elements in a cartridge seal and their influence on the sealing friction torque and compressive force. Furthermore, it describes materials for the manufacture of sealings, approaches usually used for the solution of hyperelastic materials by FEM, and gives a short introduction to wheel bearings. The practical part contains one approach for measuring friction torque, whose results were used to specify the methodology and precision of the FEM calculation realized in ANSYS WORKBENCH. This part also contains the sensitivity analysis of the overlaps of the friction elements.
An overview of the design and analysis of simulation experiments for sensitivity analysis
Kleijnen, J.P.C.
2005-01-01
Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs
International Nuclear Information System (INIS)
Liu, Ying; Imashuku, Susumu; Kawai, Jun
2014-01-01
A portable total reflection X-ray fluorescence (TXRF) spectrometer was used to analyze leaching solutions of hijiki seaweeds. S, Cl, K, Ca, Ti, Fe, Ni, As and Br were detected in the solutions. Arsenic quantification results were compared to those from ICP-AES. The TXRF quantification results for arsenic were not significantly different from those of ICP-AES when a two-way analysis of variance (ANOVA) was applied as the significance test. This kind of small, highly sensitive TXRF spectrometer can be used in food quality and environmental pollution investigations. (author)
Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks
Harry R. Millwater; R. Wesley Osborn
2006-01-01
A methodology is developed and applied that determines the sensitivities of the probability-of-fracture of a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or su...
Influence analysis to assess sensitivity of the dropout process
Molenberghs, Geert; Verbeke, Geert; Thijs, Herbert; Lesaffre, Emmanuel; Kenward, Michael
2001-01-01
Diggle and Kenward (Appl. Statist. 43 (1994) 49) proposed a selection model for continuous longitudinal data subject to possible non-random dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions upon which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. One of their examples is a set of da...
Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)
2015-04-01
determined. From the results of the study, UN is safe to store under normal operating conditions. ... HNO3). Due to its simple composition, ease of manufacture, and higher detonation parameters than ammonium nitrate, it has become one of the ... an H50 value of 10.054 ± 0.620 inches. From the results of the thermal analysis study, it can be concluded that urea nitrate is
Applications of the TSUNAMI sensitivity and uncertainty analysis methodology
International Nuclear Information System (INIS)
Rearden, Bradley T.; Hopper, Calvin M.; Elam, Karla R.; Goluoglu, Sedat; Parks, Cecil V.
2003-01-01
The TSUNAMI sensitivity and uncertainty analysis tools under development for the SCALE code system have recently been applied in four criticality safety studies. TSUNAMI is used to identify applicable benchmark experiments for criticality code validation, assist in the design of new critical experiments for a particular need, reevaluate previously computed computational biases, and assess the validation coverage and propose a penalty for noncoverage for a specific application. (author)
Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development
Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars
2018-01-01
In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis…
Sensitivity Analysis of Launch Vehicle Debris Risk Model
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
Global sensitivity analysis using a Gaussian Radial Basis Function metamodel
International Nuclear Information System (INIS)
Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua
2016-01-01
Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
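For context on what the metamodel accelerates, here is a brute-force pick-freeze Monte Carlo estimate of first-order Sobol' indices for a toy model. The model y = x1 + 2*x2, the sample size, and the seed are our own inventions; the paper's contribution is precisely to replace this kind of sampling with an analytical expression on a Gaussian RBF metamodel, which this sketch does not implement.

```python
import random

random.seed(1)
N = 50_000

def model(x1, x2):
    return x1 + 2.0 * x2    # analytic first-order indices: 0.2 and 0.8

def sobol_first(i):
    # pick-freeze: S_i = Cov(y_a, y_b) / Var(y_a), where the b-sample
    # reuses ("freezes") coordinate i and resamples the others
    ya, yb = [], []
    for _ in range(N):
        xa = [random.random(), random.random()]
        xb = [random.random(), random.random()]
        xb[i] = xa[i]
        ya.append(model(*xa))
        yb.append(model(*xb))
    ma, mb = sum(ya) / N, sum(yb) / N
    cov = sum((a - ma) * (b - mb) for a, b in zip(ya, yb)) / N
    var = sum((a - ma) ** 2 for a in ya) / N
    return cov / var

s1, s2 = sobol_first(0), sobol_first(1)
```

Each index costs 2N model evaluations here; when the model is an expensive simulation, that cost is what motivates the metamodel route.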
A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.
McCandless, Lawrence C; Gustafson, Paul
2017-08-15
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
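The MCSA side of the comparison can be sketched as follows. The bias factor is the standard external-adjustment formula for an unmeasured binary confounder; the observed risk ratio, the prior ranges, and the seed are invented for illustration and are not from the paper. Note what the abstract criticizes: the bias parameters are drawn straight from the priors with no Bayesian updating from the data.

```python
import random

random.seed(2)
rr_obs = 2.0                           # invented observed risk ratio
adjusted = []
for _ in range(20_000):
    rr_ud = random.uniform(1.0, 3.0)   # confounder-outcome risk ratio
    p1 = random.uniform(0.3, 0.6)      # confounder prevalence, exposed
    p0 = random.uniform(0.1, 0.3)      # confounder prevalence, unexposed
    # external-adjustment bias factor for an unmeasured binary confounder
    bias = (rr_ud * p1 + 1 - p1) / (rr_ud * p0 + 1 - p0)
    adjusted.append(rr_obs / bias)

adjusted.sort()
lo, hi = adjusted[500], adjusted[19500]   # ~95% simulation interval
```

A BSA analysis would instead condition these draws on the observed data through Bayes theorem, which is why the two intervals can diverge even under identical priors.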
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
Energy Technology Data Exchange (ETDEWEB)
Takaura, Norikatsu
1997-10-01
As dimensions in state-of-the-art CMOS devices shrink to less than 0.1 µm, even low levels of impurities on wafer surfaces can cause device degradation. Conventionally, metal contamination on wafer surfaces is measured using Total Reflection X-Ray Fluorescence Spectroscopy (TXRF). However, commercially available TXRF systems do not have the necessary sensitivity for measuring the lower levels of contamination required to develop new CMOS technologies. In an attempt to improve the sensitivity of TXRF, this research investigates Synchrotron Radiation TXRF (SR TXRF). The advantages of SR TXRF over conventional TXRF are higher incident photon flux, energy tunability, and linear polarization. We made use of these advantages to develop an optimized SR TXRF system at the Stanford Synchrotron Radiation Laboratory (SSRL). The results of measurements show that the Minimum Detection Limits (MDLs) of SR TXRF for 3-d transition metals are typically at a level of 3×10^8 atoms/cm^2, which is better than conventional TXRF by about a factor of 20. However, to use our SR TXRF system for practical applications, it was necessary to modify a commercially available Si(Li) detector which generates parasitic fluorescence signals. With the modified detector, we could achieve true MDLs of 3×10^8 atoms/cm^2 for 3-d transition metals. In addition, the analysis of Al on Si wafers is described. Al analysis is difficult because strong Si signals overlap the Al signals. In this work, the Si signals are greatly reduced by tuning the incident beam energy below the Si K edge. The results of our measurements show that the sensitivity for Al is limited by x-ray Raman scattering. Furthermore, we show the results of theoretical modeling of SR TXRF backgrounds consisting of the bremsstrahlung generated by photoelectrons, Compton scattering, and Raman scattering. To model these backgrounds, we extended conventional theoretical models by taking into account several aspects particular
B1 -sensitivity analysis of quantitative magnetization transfer imaging.
Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce
2018-01-01
To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA, and -40% to > 100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med 79:276-285, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Probability and sensitivity analysis of machine foundation and soil interaction
Directory of Open Access Journals (Sweden)
Králik J., jr.
2009-06-01
This paper deals with the sensitivity and probabilistic analysis of the reliability of a machine foundation depending on the variability of the soil stiffness, structure geometry and compressor operation. The requirements on the design of foundations under rotating machines have increased due to the development of calculation methods and computer tools. During the structural design process, an engineer has to consider problems of the soil-foundation and foundation-machine interaction from the point of view of safety, reliability and durability of the structure. The advantages and disadvantages of deterministic and probabilistic analyses of machine foundation resistance are discussed. The sensitivity of the machine foundation to uncertainties in the soil properties due to the long-time rotating movement of the machine is not negligible for design engineers. On the example of a compressor and turbine foundation (SIEMENS AG), the effectiveness of the probabilistic design methodology is presented. The Latin Hypercube Sampling (LHS) simulation method was used for the analysis of the compressor foundation reliability in the ANSYS program. The 200 simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more complete information about the soil-foundation-machine interaction than the deterministic analysis.
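The LHS scheme the abstract relies on can be sketched in a few lines. This is a minimal generic implementation, not the ANSYS one: each of d variables is split into n equal-probability strata, each stratum is used exactly once per variable, and the per-variable columns are shuffled independently before being paired into sample points.

```python
import random

random.seed(3)

def lhs(n, d):
    # one column per variable; each column visits every stratum once
    cols = []
    for _ in range(d):
        strata = list(range(n))
        random.shuffle(strata)
        cols.append([(s + random.random()) / n for s in strata])
    # pair the columns row-wise into n sample points on [0, 1)^d
    return [[cols[j][i] for j in range(d)] for i in range(n)]

samples = lhs(10, 2)   # 10 stratified points in 2 dimensions
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each marginal distribution with far fewer runs, which is why 200 simulations sufficed in the study above.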
Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion
International Nuclear Information System (INIS)
Garcia-Cabrejo, Oscar; Valocchi, Albert
2014-01-01
Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied on each output variable leading to a large number of sensitivity indices that shows a high degree of redundancy making the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: output decomposition approach [9] and covariance decomposition approach [14] but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for an efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE
Parametric Sensitivity Analysis of the WAVEWATCH III Model
Directory of Open Access Journals (Sweden)
Beng-Chun Lee
2009-01-01
Full Text Available The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the eight most important parameters from the source terms of the WAVEWATCH III model and subjected them to sensitivity analysis to evaluate how sensitive the model is to each, to determine how many of these parameters should be considered further, and to establish the significance priority of each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan; the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations improved the accuracy of the WAVEWATCH III model relative to default runs, based on field observations at the two buoys.
ADGEN: a system for automated sensitivity analysis of predictive models
International Nuclear Information System (INIS)
Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.
1987-01-01
A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively demonstrate the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures
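ADGEN works by automatically transforming the source of an existing FORTRAN code; the underlying idea of propagating derivatives alongside values through a computation can be sketched in miniature with forward-mode dual numbers. This is an illustration of the principle only, not of ADGEN's actual mechanism:

```python
import math

class Dual:
    """Minimal forward-mode value: carries f and df/dx together, mirroring how
    a code augmented for sensitivity propagates a derivative line by line."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

# Sensitivity of y = x*sin(x) + 3x at x = 2, obtained in a single forward pass
x = Dual(2.0, 1.0)          # seed dx/dx = 1
y = x * x.sin() + 3 * x     # y.der now holds dy/dx exactly (to machine precision)
```

Unlike finite differencing, the derivative comes out exact and without extra model runs, which is the cost advantage that motivates automated systems like ADGEN.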
ADGEN: a system for automated sensitivity analysis of predictive models
International Nuclear Information System (INIS)
Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.
1986-09-01
A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid 'over modelling', in site characterization planning to avoid 'over collection of data', and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
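The One-Factor-At-A-Time screening applied here can be sketched generically: perturb each input from its baseline in turn and compare the normalised output change. The toy gross-margin function and all parameter values below are invented, not taken from the Ait Ben Yacoub model:

```python
import numpy as np

def oat_screening(model, baseline, deltas):
    """One-Factor-At-A-Time screening: perturb each input from its baseline
    in turn and record the normalised sensitivity |dy/y0| / |dx/x0|."""
    x0 = np.asarray(baseline, float)
    y0 = model(x0)
    s = []
    for i, d in enumerate(deltas):
        x = x0.copy()
        x[i] += d
        s.append(abs((model(x) - y0) / y0) / abs(d / x0[i]))
    return np.array(s)

# Invented gross-margin proxy: x = [yield response coeff, water supply, precipitation]
def margin(x):
    return 100.0 * x[0] * np.sqrt(x[1]) + 0.05 * x[2]

# 10% perturbation on each input
s = oat_screening(margin, baseline=[1.2, 0.8, 300.0], deltas=[0.12, 0.08, 30.0])
```

Indices of this kind are what allow the parameters to be ranked "in order of influence" as in the abstract; note that OAT only probes one input at a time around the baseline, so it cannot detect interactions.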
Sensitivity analysis: Interaction of DOE SNF and packaging materials
International Nuclear Information System (INIS)
Anderson, P.A.; Kirkham, R.J.; Shaber, E.L.
1999-01-01
A sensitivity analysis was conducted to evaluate the technical issues pertaining to possible destructive interactions between spent nuclear fuels (SNFs) and the stainless steel canisters. When issues are identified through such an analysis, they provide the technical basis for answering 'what if' questions and, if needed, for conducting additional analyses, testing, or other efforts to resolve them in order to base the licensing on solid technical grounds. The analysis reported herein systematically assessed the chemical and physical properties and the potential interactions of the materials that comprise typical US Department of Energy (DOE) SNFs and the stainless steel canisters in which they will be stored, transported, and placed in a geologic repository for final disposition. The primary focus in each step of the analysis was to identify any possible phenomena that could compromise the structural integrity of the canisters and to assess their thermodynamic feasibility.
Linear regression and sensitivity analysis in nuclear reactor design
International Nuclear Information System (INIS)
Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.
2015-01-01
Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis of a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable to sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which a LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to the reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can serve as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used to demonstrate this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform the regression analysis, multivariate analysis of variance, and analysis of the collinearity of the data
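Once the linearity assumption is validated, standardized regression coefficients (SRCs) are a common way to turn an LR fit into sensitivity measures: each coefficient is scaled by the input's standard deviation relative to the output's, and the fit quality (R²) says how much of the variance the linear model explains. A minimal sketch on synthetic data (not the reactor model; the coefficients are invented):

```python
import numpy as np

def src_indices(X, y):
    """Standardised regression coefficients: fit y ~ X by least squares and
    scale each coefficient by std(x_i)/std(y). Valid as a sensitivity measure
    only when the linear fit is good, so check R^2 first."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    A = np.column_stack([np.ones(len(y)), X])       # intercept + inputs
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = A @ beta
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    src = beta[1:] * X.std(axis=0) / y.std()
    return src, r2

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)
src, r2 = src_indices(X, y)
```

For a nearly linear model with independent inputs, the squared SRCs approximately partition the output variance, which is exactly the kind of parameter ranking the paper's strategy produces.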
Directory of Open Access Journals (Sweden)
Eric Ariel L. Salas
2013-12-01
Full Text Available We present the Moment Distance (MD method to advance spectral analysis in vegetation studies. It was developed to take advantage of the information latent in the shape of the reflectance curve that is not available from other spectral indices. Being mathematically simple but powerful, the approach does not require any curve transformation, such as smoothing or derivatives. Here, we show the formulation of the MD index (MDI and demonstrate its potential for vegetation studies. We simulated leaf and canopy reflectance samples derived from the combination of the PROSPECT and SAIL models to understand the sensitivity of the new method to leaf and canopy parameters. We observed reasonable agreements between vegetation parameters and the MDI when using the 600 to 750 nm wavelength range, and we saw stronger agreements in the narrow red-edge region 720 to 730 nm. Results suggest that the MDI is more sensitive to the Chl content, especially at higher amounts (Chl > 40 mg/cm2 compared to other indices such as NDVI, EVI, and WDRVI. Finally, we found an indirect relationship of MDI against the changes of the magnitude of the reflectance around the red trough with differing values of LAI.
International Nuclear Information System (INIS)
Lizana, A; Foldyna, M; Garcia-Caurel, E; Stchakovsky, M; Georges, B; Nicolas, D
2013-01-01
High sensitivity of spectroscopic ellipsometry and reflectometry for the characterization of thin films can strongly decrease when layers, typically metals, absorb a significant fraction of the light. In this paper, we propose a solution to overcome this drawback using total internal reflection ellipsometry (TIRE) and exciting a surface longitudinal wave: a plasmon-polariton. As in the attenuated total reflectance technique, TIRE exploits a minimum in the intensity of reflected transversal magnetic (TM) polarized light and enhances the sensitivity of standard methods to thicknesses of absorbing films. Samples under study were stacks of three films, ZnO : Al/Ag/ZnO : Al, deposited on glass substrates. The thickness of the silver layer varied from sample to sample. We performed measurements with a UV–visible phase-modulated ellipsometer, an IR Mueller ellipsometer and a UV–NIR reflectometer. We used the variance–covariance formalism to evaluate the sensitivity of the ellipsometric data to different parameters of the optical model. Results have shown that using TIRE doubled the sensitivity to the silver layer thickness when compared with the standard ellipsometry. Moreover, the thickness of the ZnO : Al layer below the silver layer can be reliably quantified, unlike for the fit of the standard ellipsometry data, which is limited by the absorption of the silver layer. (paper)
Lizana, A.; Foldyna, M.; Stchakovsky, M.; Georges, B.; Nicolas, D.; Garcia-Caurel, E.
2013-03-01
High sensitivity of spectroscopic ellipsometry and reflectometry for the characterization of thin films can strongly decrease when layers, typically metals, absorb a significant fraction of the light. In this paper, we propose a solution to overcome this drawback using total internal reflection ellipsometry (TIRE) and exciting a surface longitudinal wave: a plasmon-polariton. As in the attenuated total reflectance technique, TIRE exploits a minimum in the intensity of reflected transversal magnetic (TM) polarized light and enhances the sensitivity of standard methods to thicknesses of absorbing films. Samples under study were stacks of three films, ZnO : Al/Ag/ZnO : Al, deposited on glass substrates. The thickness of the silver layer varied from sample to sample. We performed measurements with a UV-visible phase-modulated ellipsometer, an IR Mueller ellipsometer and a UV-NIR reflectometer. We used the variance-covariance formalism to evaluate the sensitivity of the ellipsometric data to different parameters of the optical model. Results have shown that using TIRE doubled the sensitivity to the silver layer thickness when compared with the standard ellipsometry. Moreover, the thickness of the ZnO : Al layer below the silver layer can be reliably quantified, unlike for the fit of the standard ellipsometry data, which is limited by the absorption of the silver layer.
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel and the local characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
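A mixed kernel is simply a convex combination of a polynomial kernel and a Gaussian RBF kernel. The sketch below builds such a kernel and, as a lightweight stand-in for the paper's SVR meta-model, fits a kernel ridge surrogate to a toy function; the weight, degree, bandwidth and data are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mixed_kernel(X, Y, w=0.5, degree=2, gamma=1.0):
    """Convex combination of a (global) polynomial kernel and a (local)
    Gaussian RBF kernel, as in mixed-kernel surrogate modelling."""
    poly = (X @ Y.T + 1.0) ** degree
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    return w * poly + (1.0 - w) * rbf

def fit_predict(Xtr, ytr, Xte, lam=1e-6, **kw):
    """Kernel ridge regression with the mixed kernel (a simple stand-in here
    for the SVR meta-model used in the paper)."""
    K = mixed_kernel(Xtr, Xtr, **kw)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return mixed_kernel(Xte, Xtr, **kw) @ alpha

rng = np.random.default_rng(0)
Xtr = rng.uniform(-1, 1, size=(200, 2))
ytr = Xtr[:, 0] ** 2 + np.sin(3 * Xtr[:, 1])
Xte = rng.uniform(-1, 1, size=(50, 2))
pred = fit_predict(Xtr, ytr, Xte, w=0.5, degree=3, gamma=2.0)
```

The polynomial term captures the smooth global trend while the RBF term resolves local structure; tuning the weight w trades the two off, which is the design idea behind the MKF.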
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
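The sample entropy statistic used above to quantify schedule regularity can be computed directly from its definition. Below is a minimal implementation under one common convention (the same number of templates for lengths m and m+1, so a perfectly regular series gives SampEn = 0); the test series are invented:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts pairs of
    length-m templates within tolerance r*std (Chebyshev distance) and A
    counts the same pairs extended to length m+1. Lower = more regular."""
    x = np.asarray(x, float)
    tol = r * x.std()
    n_templates = len(x) - m          # same template count for m and m+1
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(n_templates)])
        c = 0
        for i in range(n_templates - 1):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)  # exclude self-matches
            c += int((d <= tol).sum())
        return c
    return -np.log(count(m + 1) / count(m))

t = np.arange(400)
regular = np.sin(2 * np.pi * t / 25)                       # highly regular series
irregular = np.random.default_rng(0).standard_normal(400)  # white noise
se_reg, se_irr = sample_entropy(regular), sample_entropy(irregular)
```

A periodic series yields a much lower SampEn than noise, which is the property that lets an activity's regularity be "tuned by adjusting SampEn" as described in the abstract.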
International Nuclear Information System (INIS)
Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won
2015-01-01
In this paper, we focus on risk insights of Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important to Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full-power PSA models, we have tried to standardize the methodology of Common Cause Failure (CCF) and Human Reliability Analysis (HRA), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Power Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the method of the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree of LOCCW considering all scenarios of RCP seal failures. We also performed HRA based on T/H analysis using the leakage rates for each scenario. We found that HRA was the sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario
Energy Technology Data Exchange (ETDEWEB)
Na, Jang-Hwan; Jeon, Ho-Jun; Hwang, Seok-Won [KHNP Central Research Institute, Daejeon (Korea, Republic of)
2015-10-15
In this paper, we focus on risk insights of Westinghouse-type reactors. We identified that Reactor Coolant Pump (RCP) seal integrity is the most important contributor to Core Damage Frequency (CDF). As we reflected the latest technical report, WCAP-15603 (Rev. 1-A), 'WOG2000 RCP Seal Leakage Model for Westinghouse PWRs', instead of the old version, RCP seal integrity became even more important to Westinghouse-type reactors. After the Fukushima accidents, Korea Hydro and Nuclear Power (KHNP) decided to develop Low Power and Shutdown (LPSD) Probabilistic Safety Assessment (PSA) models and to upgrade the full-power PSA models of all operating Nuclear Power Plants (NPPs). In upgrading the full-power PSA models, we have tried to standardize the methodology of Common Cause Failure (CCF) and Human Reliability Analysis (HRA), which are the most influential factors in the risk measures of NPPs. We have also reviewed and reflected the latest operating experience, reliability data sources and technical methods to improve the quality of the PSA models. KHNP operates various types of reactors: Optimized Power Reactor (OPR) 1000, CANDU, Framatome and Westinghouse. One of the most challenging missions is therefore to keep the balance of risk contributors across all types of reactors. This paper presents the method of the new RCP seal leakage model and the sensitivity analysis results from applying the detailed method to PSA models of Westinghouse-type reference reactors. To perform the sensitivity analysis on LOCCW of the reference Westinghouse-type reactors, we reviewed the WOG2000 RCP seal leakage model and developed a detailed event tree of LOCCW considering all scenarios of RCP seal failures. We also performed HRA based on T/H analysis using the leakage rates for each scenario. We found that HRA was the sensitive contributor to CDF, and the RCP seal failure scenario with a 182 gpm leakage rate was estimated to be the most important scenario.
Sensitivity analysis practices: Strategies for model-based inference
International Nuclear Information System (INIS)
Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca
2006-01-01
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA
Sensitivity analysis practices: Strategies for model-based inference
Energy Technology Data Exchange (ETDEWEB)
Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)
2006-10-15
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review anything other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, making the factor importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
Directory of Open Access Journals (Sweden)
Bart eGeurten
2012-03-01
Full Text Available Hoverflies and blowflies have distinctly different flight styles. Yet, both species have been shown to structure their flight behaviour in a way that facilitates extraction of 3D information from the image flow on the retina (optic flow). Neuronal candidates for analysing the optic flow are the tangential cells in the third optical ganglion, the lobula complex. These neurons are directionally selective and integrate the optic flow over large parts of the visual field. Homologous tangential cells in hoverflies and blowflies have a similar morphology. Because blowflies and hoverflies have a similar neuronal layout but distinctly different flight behaviours, they are an ideal substrate for pinpointing potential neuronal adaptations to the different flight styles. In this article we describe the relationship between locomotion behaviour and motion vision on three different levels: (1) we compare the different flight styles based on the categorisation of flight behaviour into prototypical movements; (2) we measure the species-specific dynamics of the optic flow under naturalistic flight conditions, finding the translational optic flow of the two species to be very different; (3) we describe possible adaptations of a homologous motion-sensitive neuron. We stimulate this cell in blowflies (Calliphora) and hoverflies (Eristalis) with naturalistic optic flow generated by both species during free flight. The characterised hoverfly tangential cell responds faster to transient changes in the optic flow than its blowfly homologue. It is discussed whether and how the different dynamical response properties aid optic flow analysis.
Directory of Open Access Journals (Sweden)
Ioan Alexandru Ivan
2012-12-01
Full Text Available When related to a single, good-contrast object or a laser spot, position sensing (or sensitive) detectors (PSDs) have a series of advantages over classical camera sensors, including good positioning accuracy with a fast response time and very simple signal conditioning circuits. To test the performance of this kind of sensor for microrobotics, we made a comparative analysis between a precise but slow video camera and a custom-made fast PSD system applied to the tracking of a diffuse-reflectivity object transported by a pneumatic microconveyor called Smart-Surface. Until now, the fast system dynamics prevented full control of the smart surface by visual servoing, unless a very expensive high-frame-rate camera was used. We built and tested a custom, low-cost PSD-based embedded circuit, optically connected with a camera to a single objective by means of a beam splitter. A stroboscopic light source enhanced the resolution. The obtained results showed good linearity and a fast (over 500 frames per second) response time, which will enable future closed-loop control using the PSD.
Regional and parametric sensitivity analysis of Sobol' indices
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2015-01-01
Nowadays, utilizing Monte Carlo estimators for variance-based sensitivity analysis has gained considerable popularity in many research fields. These estimators are usually based on n+2 sample matrices well designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs is reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes in the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 sample matrices originally used for computing the main and total effect indices, so no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are used to illustrate the developed sensitivity concepts and to demonstrate the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • The Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of Sobol' indices
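The n+2-matrix scheme this paper builds on can be sketched with standard Monte Carlo estimators, here applied to the Ishigami function mentioned among the examples. The particular estimator choices below (Saltelli 2010 for main effects, Jansen for total effects) are common ones, not necessarily those used in the paper:

```python
import numpy as np

def sobol_indices(model, n_vars, n_samples, sampler, rng=0):
    """Main and total Sobol indices from the classic n+2-matrix scheme:
    two base matrices A, B plus n matrices AB_i (A with column i from B)."""
    rng = np.random.default_rng(rng)
    A = sampler(rng, n_samples, n_vars)
    B = sampler(rng, n_samples, n_vars)
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    first, total = np.zeros(n_vars), np.zeros(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        yABi = model(ABi)
        first[i] = np.mean(yB * (yABi - yA)) / var        # Saltelli-type estimator
        total[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # Jansen estimator
    return first, total

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

uniform = lambda rng, n, d: rng.uniform(-np.pi, np.pi, size=(n, d))
S, T = sobol_indices(ishigami, 3, 20000, uniform)
```

For the Ishigami function the analytic values are S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0 but T3 ≈ 0.244: x3 matters only through its interaction with x1, which is exactly the gap between main and total effects that the estimators expose.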
Analysis of Grazing GNSS Reflections Observed at the Zeppelin Mountain Station, Spitsbergen
Peraza, L.; Semmling, M.; Falck, C.; Pavlova, O.; Gerland, S.; Wickert, J.
2017-11-01
A reflectometry station was set up in 2013 near Ny-Ålesund, Svalbard, at 78.9082°N, 11.9031°E. The main goal of the setup is to resolve the spatial and temporal variations in snow and ice cover, based on reflection power observations at grazing elevations. In this study, we develop a method to map the recorded signal power to the main reflection contributions while also discussing the spatial characteristics of the observations. A spectral analysis resolving differential Doppler between direct and reflected signals is presented to identify reflection contributions for a complete year (2014). Strong water reflections are identified with power ratios higher than 70 dB/Hz and constant Doppler shifts of 0.5-0.6 Hz for all elevations. Contributions with ratios higher than 40 dB/Hz can be related to specular land or glacier reflections, for which Doppler shift usually increases with the elevation angle and the distance between reflection point and receiver. Reflections nearby, around 3-5 km, show differential Doppler of 0.4-0.5 Hz, while for reflections farther than 16 km away, Doppler shift is usually larger than 0.8 Hz. Azimuth variations cause cross-track drift of up to 4° during the observation year. Topography-induced shadowing of very low-lying satellites limits the extent of the monitoring area. However, the amount of satellites tracked daily, up to 30, allows the reflectometry station to constantly record reflections over areas with thick snow cover and glaciers. This offers the possibility to compare the derived reflected power with local meteorological data to resolve snow and ice variations in the area.
A sensitivity analysis of regional and small watershed hydrologic models
Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.
1975-01-01
Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
Stochastic sensitivity analysis and Langevin simulation for neural network learning
International Nuclear Information System (INIS)
Koda, Masato
1997-01-01
A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
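The core idea above, extracting learning information from the stochastic correlation between injected noise and the performance functional without solving adjoint (back-propagation) equations, can be illustrated on a scalar toy problem. This is a sketch of the principle only, not of the paper's network formulation:

```python
import numpy as np

def noise_correlation_gradient(loss, w, sigma=0.1, n=100_000, rng=0):
    """Estimate dL/dw by correlating injected Gaussian noise with the change
    in the performance functional:
        E[(L(w + sigma*xi) - L(w)) * xi] / sigma  ->  L'(w)  as sigma -> 0.
    Only forward evaluations are needed; no adjoint pass."""
    rng = np.random.default_rng(rng)
    xi = rng.standard_normal(n)
    dL = loss(w + sigma * xi) - loss(w)
    return np.mean(dL * xi) / sigma

# Toy functional L(w) = w^2 at w = 1.5; the true gradient is 3.0
g = noise_correlation_gradient(lambda w: w ** 2, w=1.5)
```

The estimator converges to the true gradient as the number of noise samples grows, at the cost of Monte Carlo variance; this trade of adjoint computations for ubiquitous noise is what gives Langevin-type learning rules their appeal.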
An easily implemented static condensation method for structural sensitivity analysis
Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.
1990-01-01
A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
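The forward-difference and central-difference schemes compared in this abstract can be sketched on a toy structural response; here a joint is modeled as a single linear spring with deflection u = F/k (an assumption for illustration, not the cube or car model from the study). Central differencing is O(h²) accurate versus O(h) for forward differencing.

```python
def deflection(k, f=100.0):
    # static deflection of a joint modeled as a linear spring: u = F / k
    return f / k

def forward_diff(fun, k, h):
    # O(h) one-sided estimate of d(fun)/dk
    return (fun(k + h) - fun(k)) / h

def central_diff(fun, k, h):
    # O(h^2) two-sided estimate of d(fun)/dk
    return (fun(k + h) - fun(k - h)) / (2.0 * h)

k = 50.0
exact = -100.0 / k**2          # analytic du/dk = -F / k^2
fd = forward_diff(deflection, k, 1e-3)
cd = central_diff(deflection, k, 1e-3)
```

For the same step size, the central-difference estimate is markedly closer to the analytic sensitivity, at the cost of one extra response evaluation.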
Nuclear data sensitivity/uncertainty analysis for XT-ADS
International Nuclear Information System (INIS)
Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert
2011-01-01
Highlights: → The sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → These uncertainties do not meet the target accuracy of 0.3%Δk for the criticality. → To achieve this accuracy, the uncertainties should be reduced through experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. The sensitivity analysis found that the sensitivity coefficients differed significantly between geometry models and calculation codes. The uncertainty analysis confirmed that the uncertainties deduced from the covariance data varied significantly with the choice of covariance library: for the XT-ADS criticality they were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. These uncertainties do not meet the target accuracy of 0.3%Δk for the criticality; to achieve it, they should be reduced through experiments under adequate conditions.
Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior
Energy Technology Data Exchange (ETDEWEB)
Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)
2012-12-15
Highlights: → A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. → The thermal conductivity of UO2 is modeled as a random variable. → The first source of uncertainty is the linear heat rate. → The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during the irradiation in reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO2, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed in different experimental irradiations of UO2 fuel. At every time step of the simulated experiments, we get a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We then can estimate confidence intervals of the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using the Sobol' indices at first order.
Biosphere dose conversion Factor Importance and Sensitivity Analysis
International Nuclear Information System (INIS)
M. Wasiolek
2004-01-01
This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty
Automatic analysis of macerals and reflectance; Analisis Automatico de Macerales y Reflectancia
Energy Technology Data Exchange (ETDEWEB)
Catalina, J.C.; Alarcon, D.; Gonzalez Prado, J.
1998-12-01
A new system has been developed to automatically perform maceral and reflectance analyses of single-seam bituminous coals, improving the interlaboratory accuracy of these types of analyses. The system follows the same steps as the manual method, requiring a human operator for preparation of coal samples and system startup; then, sample scanning, microscope focusing and field centre analysis are fully automatic. The main and most innovative idea of this approach is to coordinate an expert system with an image processing system, using both reflectance and morphological information. In this way, the system tries to reproduce the analysis procedure followed by a human expert in petrography. (Author)
Khanmohammadi, Mohammadreza; Bagheri Garmarudi, Amir; Samani, Simin; Ghasemi, Keyvan; Ashuri, Ahmad
2011-06-01
Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) microspectroscopy was applied for detection of colon cancer according to the spectral features of colon tissues. Supervised classification models can be trained to identify the tissue type based on the spectroscopic fingerprint. A total of 78 colon tissues were used in spectroscopy studies. Major spectral differences were observed in the 1,740-900 cm(-1) spectral region. Several chemometric methods such as analysis of variance (ANOVA), cluster analysis (CA) and linear discriminant analysis (LDA) were applied for classification of IR spectra. Utilizing the chemometric techniques, clear and reproducible differences were observed between the spectra of normal and cancer cases, suggesting that infrared microspectroscopy in conjunction with spectral data processing would be useful for diagnostic classification. Using the LDA technique, the spectra were classified into cancer and normal tissue classes with an accuracy of 95.8%. The sensitivity and specificity were 100% and 93.1%, respectively.
Coad, Jane; Evans, Ruth
2008-01-01
This article reflects on key methodological issues emerging from children and young people's involvement in data analysis processes. We outline a pragmatic framework illustrating different approaches to engaging children, using two case studies of children's experiences of participating in data analysis. The article highlights methods of…
Sensitivity Analysis of OECD Benchmark Tests in BISON
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
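The Pearson and Spearman correlation coefficients used in the study's sensitivity ranking can be computed directly from sampled inputs and responses. A minimal self-contained sketch on toy data (not BISON output); Spearman is simply Pearson applied to ranks, so it captures monotone but nonlinear relationships:

```python
def pearson(x, y):
    # sample Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(v):
    # 1-based ranks (no tie handling, for illustration only)
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# toy input/response samples: monotone but nonlinear relationship
power = [10, 20, 30, 40, 50]
temp = [400, 480, 590, 730, 900]
```

For these data the Spearman coefficient is exactly 1 (perfectly monotone) while the Pearson coefficient is slightly below 1, illustrating why both are reported.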
Quero, G.; Severino, R.; Vaiano, P.; Consales, M.; Ruvo, M.; Sandomenico, A.; Borriello, A.; Giordano, M.; Zuppolini, S.; Diodato, L.; Cutolo, A.; Cusano, A.
2015-09-01
We report the development of a reflection-type long period fiber grating (LPG) biosensor able to perform the real time detection of thyroid cancer markers in the needle washout of fine-needle aspiration biopsy. A standard LPG is first transformed in a practical probe working in reflection mode, then it is coated by an atactic-polystyrene overlay in order to increase its surrounding refractive index sensitivity and to provide, at the same time, the desired interfacial properties for a stable bioreceptor immobilization. The results provide a clear demonstration of the effectiveness and sensitivity of the developed biosensing platform, allowing the in vitro detection of human Thyroglobulin at sub-nanomolar concentrations.
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Energy Technology Data Exchange (ETDEWEB)
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a Sensitivity Analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.
Sensitivity analysis and design optimization through automatic differentiation
International Nuclear Information System (INIS)
Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean
2005-01-01
Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms
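Forward-mode automatic differentiation, the simplest instance of the program transformation described above, can be sketched with dual numbers: each value carries its derivative alongside it, and every arithmetic operation propagates both by the chain rule. This is an illustrative sketch, not the compiler-based tool infrastructure the abstract describes.

```python
import math

class Dual:
    """Forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(d):
    # chain rule for an elementary function
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)

# differentiate f(x) = x*sin(x) + 3x at x = 2 by just evaluating it
x = Dual(2.0, 1.0)          # seed dx/dx = 1
f = x * sin(x) + 3 * x
```

After evaluation, `f.val` holds f(2) and `f.dot` holds f'(2) = sin(2) + 2 cos(2) + 3, exact to machine precision, with no symbolic manipulation or finite-difference step-size tuning.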
Analysis of Hydrological Sensitivity for Flood Risk Assessment
Directory of Open Access Journals (Sweden)
Sanjay Kumar Sharma
2018-02-01
Full Text Available In order to support the Indian government's Integrated Water Resource Management (IWRM), the Brahmaputra River has played an important role in the undertaking of the Pilot Basin Study (PBS) due to its annual regional flooding. The selected Kulsi River—a part of the Brahmaputra sub-basin—experienced severe floods in 2007 and 2008. In this study, the Rainfall-Runoff-Inundation (RRI) hydrological model was used to simulate the recent historical flood in order to understand and improve the integrated flood risk management plan. The ultimate objective was to evaluate the sensitivity of the hydrologic simulation to different Digital Elevation Model (DEM) sources, coupled with DEM smoothing techniques, with a particular focus on the comparison of river discharge and flood inundation extent. As a result, the sensitivity analysis showed that, among the input parameters, the RRI model is highly sensitive to Manning's roughness coefficient values for flood plains, followed by the source of the DEM, and then soil depth. After optimizing its parameters, the smoothing filter proved more influential on the simulated inundation extent than on the simulated discharge at the outlet. Finally, the calibrated and validated RRI model simulations agreed well with the observed discharge and the Moderate Resolution Imaging Spectroradiometer (MODIS)-detected flood extents.
Sensitivity analysis for the effects of multiple unmeasured confounders.
Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate
2016-09-01
Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
High order effects in cross section sensitivity analysis
International Nuclear Information System (INIS)
Greenspan, E.; Karni, Y.; Gilai, D.
1978-01-01
Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can have a significant contribution to the sensitivity of group constants to resonance parameters. Methods to account for SFSE both for the propagation of uncertainties and for the adjustment of nuclear data are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method
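The non-linearity that motivates a second-order sensitivity theory can be illustrated on a toy exponential-attenuation response (illustrative numbers, not the 297 keV sodium problem). For a large cross-section change, the second-order Taylor estimate of the response change tracks direct substitution noticeably better than the first-order estimate:

```python
import math

t = 3.0          # path length in units where attenuation is exp(-sigma * t)
sigma = 1.0      # reference total cross section
T0 = math.exp(-sigma * t)

dsig = 0.3 * sigma                               # a large 30% change
exact = math.exp(-(sigma + dsig) * t) - T0       # direct substitution
first = -t * T0 * dsig                           # first-order sensitivity
second = first + 0.5 * t**2 * T0 * dsig**2       # add the second-order term
```

Here the first-order estimate overshoots the true change substantially, while adding the second-order term recovers most of the error, mirroring the Part II comparison between first-order theory, SOST, and direct substitution.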
Energy Technology Data Exchange (ETDEWEB)
Kelsey, Adrian [Health and Safety Laboratory, Harpur Hill, Buxton (United Kingdom)
2015-12-15
Laboratory (SNL) in 2009. At the largest LNG release rate the flames did not cover the entire area of the LNG spill; this behaviour had not been observed in previous large-scale experiments. The height of the flames was also greater than expected from previous large-scale tests. One possible explanation for the observed behaviour is that in this very large-scale release the speed at which air and fuel vapour was drawn into the fire exceeded the flame speed. Therefore the flames could not propagate upwind to ignite the whole surface of the LNG pool. Fuel vapour from the unignited region, drawn into the fire, may also account for the greater flame height. A global sensitivity analysis allows the influence of uncertain parameters on the quantities of interest to be examined. This publication and the work it describes were funded by the Health and Safety Executive (HSE). Its contents, including any opinions and/or conclusions expressed, are those of the authors alone and do not necessarily reflect HSE policy.
Total-reflection X-ray fluorescence analysis of Austrian wine
Energy Technology Data Exchange (ETDEWEB)
Gruber, X. [Atominstitut der Osterreichischen Universitaeten, 1020 Vienna (Austria); Kregsamer, P. [Atominstitut der Osterreichischen Universitaeten, 1020 Vienna (Austria); Wobrauschek, P. [Atominstitut der Osterreichischen Universitaeten, 1020 Vienna (Austria); Streli, C. [Atominstitut der Osterreichischen Universitaeten, 1020 Vienna (Austria)]. E-mail: streli@ati.ac.at
2006-11-15
The concentration of major, minor and trace elements in Austrian wine was determined by total-reflection X-ray fluorescence using gallium as internal standard. A multi-elemental analysis was possible by pipetting 6 µl of wine directly onto the reflector and drying. Total-reflection X-ray fluorescence analysis was performed with an Atomika EXTRA II A (Cameca) instrument, using X-rays from a Mo tube with a high-energy cut-off at 20 keV in total-reflection geometry. The results showed that it was possible to identify the vineyards and year of vintage among 11 different wines from the elemental fingerprint alone.
DEFF Research Database (Denmark)
Ilic, C; Chadwick, A; Helm-Petersen, Jacob
2000-01-01
Recent studies of advanced directional analysis techniques have mainly centred on incident wave fields. In the study of coastal structures, however, partially reflective wave fields are commonly present. In the near-structure field, phase-locked methods can be successfully applied; in the far field, non-phase-locked methods are more appropriate. In this paper, the accuracy of two non-phase-locked methods of directional analysis, the maximum likelihood method (MLM) and the Bayesian directional method (BDM), has been quantitatively evaluated using numerical simulations for the case of multidirectional waves with partial reflections. It is shown that the results are influenced by the ratio of the distance from the reflector (L) to the length of the time series (S) used in the spectral analysis. Both methods are found to be capable of determining the incident and reflective wave fields when L/S > 0......
Total-reflection X-ray fluorescence analysis of Austrian wine
International Nuclear Information System (INIS)
Gruber, X.; Kregsamer, P.; Wobrauschek, P.; Streli, C.
2006-01-01
The concentration of major, minor and trace elements in Austrian wine was determined by total-reflection X-ray fluorescence using gallium as internal standard. A multi-elemental analysis was possible by pipetting 6 μl of wine directly onto the reflector and drying. Total-reflection X-ray fluorescence analysis was performed with an Atomika EXTRA II A (Cameca) instrument, using X-rays from a Mo tube with a high-energy cut-off at 20 keV in total-reflection geometry. The results showed that it was possible to identify the vineyards and year of vintage among 11 different wines from the elemental fingerprint alone
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
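The first-order variance-based index S_i = Var(E[f | x_i]) / Var(f), the quantity Extended-FAST estimates, can be computed by brute force when the model is cheap. A sketch on a toy additive model (not the ASM2d model; the grid-quadrature approach below is an illustrative assumption):

```python
def var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def first_order_index(f, which, n=200):
    """Brute-force S_i = Var(E[f | x_i]) / Var(f) for f on [0,1]^2,
    using a uniform midpoint grid for both the outer and inner averages."""
    grid = [(i + 0.5) / n for i in range(n)]
    outputs = [f(x1, x2) for x1 in grid for x2 in grid]
    total = var(outputs)
    cond_means = []
    for xi in grid:
        if which == 0:
            cond_means.append(sum(f(xi, x2) for x2 in grid) / n)
        else:
            cond_means.append(sum(f(x1, xi) for x1 in grid) / n)
    return var(cond_means) / total

f = lambda x1, x2: 2.0 * x1 + x2     # additive test model
s1 = first_order_index(f, 0)
s2 = first_order_index(f, 1)
```

For this additive model the indices are S1 = 0.8 and S2 = 0.2 and they sum to 1; interactions in a non-additive model (the MBR case in the abstract) would show up as first-order indices summing to less than 1.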
DDASAC, Double-Precision Differential or Algebraic Sensitivity Analysis
International Nuclear Information System (INIS)
Caracotsios, M.; Stewart, W.E.; Petzold, L.
1997-01-01
1 - Description of program or function: DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request. 2 - Method of solution: Reconciliation of initial conditions is done with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithm of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase detected and with row scaling of the Jacobian added. The backward-difference formulas for the predictor and corrector are expressed in divided-difference form, and the fixed-leading-coefficient form of the corrector (Jackson and Sacks-Davis 1980, Brenan et al. 1989) is used. Weights for error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations as given by Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigns the work array lengths and the output unit. The machine number range and precision are determined at run time by a
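Direct concurrent parametric sensitivity analysis, as in DDASAC, augments the ODE system with equations for s = dy/dp obtained by differentiating the right-hand side with respect to the parameter, and integrates both together. A minimal sketch for dy/dt = -k*y using a simple fixed-step RK4 integrator (not DDASAC's stiff BDF machinery):

```python
import math

def rhs(state, k):
    y, s = state
    # augmented system: dy/dt = -k*y, and the sensitivity s = dy/dk obeys
    # ds/dt = d(-k*y)/dk = -y - k*s  (differentiate the ODE w.r.t. k)
    return (-k * y, -y - k * s)

def rk4(state, k, dt, steps):
    # classical fourth-order Runge-Kutta on the augmented system
    for _ in range(steps):
        k1 = rhs(state, k)
        k2 = rhs(tuple(x + 0.5 * dt * d for x, d in zip(state, k1)), k)
        k3 = rhs(tuple(x + 0.5 * dt * d for x, d in zip(state, k2)), k)
        k4 = rhs(tuple(x + dt * d for x, d in zip(state, k3)), k)
        state = tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

k = 0.7
y, s = rk4((1.0, 0.0), k, dt=0.01, steps=200)   # integrate to t = 2
```

The analytic solution is y = exp(-k t) and dy/dk = -t exp(-k t), so at t = 2 the computed pair can be checked directly; the sensitivity comes out of the same integration pass as the state, which is the "concurrent" part.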
Uncertainty and sensitivity analysis of environmental transport models
International Nuclear Information System (INIS)
Margulies, T.S.; Lancaster, L.E.
1985-01-01
An uncertainty and sensitivity analysis has been made of the CRAC-2 (Calculations of Reactor Accident Consequences) atmospheric transport and deposition models. Robustness and uncertainty aspects of air and ground deposited material and the relative contribution of input and model parameters were systematically studied. The underlying data structures were investigated using a multiway layout of factors over specified ranges generated via a Latin hypercube sampling scheme. The variables selected in our analysis include: weather bin, dry deposition velocity, rain washout coefficient/rain intensity, duration of release, heat content, sigma-z (vertical) plume dispersion parameter, sigma-y (crosswind) plume dispersion parameter, and mixing height. To determine the contributors to the output variability (versus distance from the site) step-wise regression analyses were performed on transformations of the spatial concentration patterns simulated. 27 references, 2 figures, 3 tables
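A Latin hypercube sampling scheme like the one used to generate the factor layout places exactly one sample in each of n equal-probability strata for every variable. A minimal sketch on the unit hypercube with uniform marginals (mapping to the CRAC-2 parameter ranges would be a separate step):

```python
import random

def latin_hypercube(n, dim, rng=None):
    """n samples in [0,1)^dim with exactly one sample per stratum
    per dimension (uniform marginals)."""
    rng = rng or random.Random(42)
    cols = []
    for _ in range(dim):
        # one random point inside each of the n equal strata, shuffled
        # so strata are paired randomly across dimensions
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n)]

samples = latin_hypercube(8, 2)
```

Unlike plain Monte Carlo, every marginal is guaranteed to be evenly covered even for a small number of runs, which is why the technique suits expensive consequence-model studies.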
Complex finite element sensitivity method for creep analysis
International Nuclear Information System (INIS)
Gomez-Farias, Armando; Montoya, Arturo; Millwater, Harry
2015-01-01
The complex finite element method (ZFEM) has been extended to perform sensitivity analysis for mechanical and structural systems undergoing creep deformation. ZFEM uses a complex finite element formulation to provide shape, material, and loading derivatives of the system response, providing an insight into the essential factors which control the behavior of the system as a function of time. A complex variable-based quadrilateral user element (UEL) subroutine implementing the power law creep constitutive formulation was incorporated within the Abaqus commercial finite element software. The results of the complex finite element computations were verified by comparing them to the reference solution for the steady-state creep problem of a thick-walled cylinder in the power law creep range. A practical application of the ZFEM implementation to creep deformation analysis is the calculation of the skeletal point of a notched bar test from a single ZFEM run. In contrast, the standard finite element procedure requires multiple runs. The value of the skeletal point is that it identifies the location where the stress state is accurate, regardless of the certainty of the creep material properties. - Highlights: • A novel finite element sensitivity method (ZFEM) for creep was introduced. • ZFEM has the capability to calculate accurate partial derivatives. • ZFEM can be used for identification of the skeletal point of creep structures. • ZFEM can be easily implemented in a commercial software, e.g. Abaqus. • ZFEM results were shown to be in excellent agreement with analytical solutions
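ZFEM's complex-variable idea is, in its scalar form, the complex-step derivative: perturb the input along the imaginary axis and read the derivative off the imaginary part of the output. Because no subtraction of nearly equal numbers occurs, the step can be made tiny without round-off error. A minimal scalar sketch (not a finite element implementation):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-20):
    """df/dx ~ Im(f(x + i*h)) / h; free of subtractive cancellation,
    so h can be far below what finite differences tolerate."""
    return f(complex(x, h)).imag / h

f = lambda z: cmath.exp(z) * cmath.sin(z)   # analytic test function
d = complex_step_derivative(f, 0.5)
exact = math.exp(0.5) * (math.sin(0.5) + math.cos(0.5))
```

The estimate matches the analytic derivative to machine precision, which is the property ZFEM exploits elementwise to get shape, material, and loading derivatives from a single run.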
Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian
2013-01-01
Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.
Control strategies and sensitivity analysis of anthroponotic visceral leishmaniasis model.
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2017-12-01
This study proposes a mathematical model of anthroponotic visceral leishmaniasis epidemic with saturated infection rate and recommends different control strategies to manage the spread of this disease in the community. To do this, first, a model formulation is presented to support these strategies, with quantifications of transmission and intervention parameters. To understand the nature of the initial transmission of the disease, the reproduction number R0 is obtained by using the next-generation method. On the basis of sensitivity analysis of the reproduction number R0, four different control strategies are proposed for managing disease transmission. For quantification of the prevalence period of the disease, a numerical simulation for each strategy is performed and a detailed summary is presented. A disease-free state is obtained with the help of control strategies. The threshold condition for globally asymptotic stability of the disease-free state is found, and it is ascertained that the state is globally stable. On the basis of sensitivity analysis of the reproduction number, it is shown that the disease can be eradicated by using the proposed strategies.
Sensitivity Analysis in Observational Research: Introducing the E-Value.
VanderWeele, Tyler J; Ding, Peng
2017-08-15
Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the "E-value," which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate. A small E-value implies little unmeasured confounding would be needed to explain away an effect estimate. The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported or some other sensitivity analysis be used. They suggest calculating the E-value for both the observed association estimate (after adjustments for measured confounders) and the limit of the confidence interval closest to the null. If this were to become standard practice, the ability of the scientific community to assess evidence from observational studies would improve considerably, and ultimately, science would be strengthened.
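The E-value definition above has a closed form: for a risk ratio RR > 1, E = RR + sqrt(RR * (RR - 1)), with protective estimates inverted first. A minimal sketch (the RR = 3.9 point estimate and 1.8 lower confidence limit are illustrative inputs, not from this abstract):

```python
import math

def e_value(rr):
    """E-value for a risk ratio (VanderWeele & Ding 2017):
    minimum confounder-exposure and confounder-outcome association
    strength needed to fully explain away the observed RR."""
    rr = max(rr, 1.0 / rr)          # for protective estimates, invert first
    if rr == 1.0:
        return 1.0
    return rr + math.sqrt(rr * (rr - 1.0))

# as the article suggests, compute E for both the point estimate and
# the confidence limit closest to the null
e_point = e_value(3.9)
e_ci = e_value(1.8)
```

Here e_point is about 7.26 and e_ci is exactly 3.0: unmeasured confounding would need risk-ratio associations of that size with both treatment and outcome to explain the estimate (or its confidence limit) away.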
Nordic reference study on uncertainty and sensitivity analysis
International Nuclear Information System (INIS)
Hirschberg, S.; Jacobsson, P.; Pulkkinen, U.; Porn, K.
1989-01-01
This paper provides a review of the first phase of the Nordic reference study on uncertainty and sensitivity analysis. The main objective of this study is to use experiences from previous Nordic Benchmark Exercises and reference studies concerning critical modeling issues, such as common cause failures and human interactions, and to demonstrate the impact of the associated uncertainties on the uncertainty of the investigated accident sequence. This has been done independently by three working groups which used different approaches to modeling and to uncertainty analysis. The estimated uncertainty interval for the analyzed accident sequence is large. The discrepancies between the groups are also substantial, but can be explained. The sensitivity analyses that have been carried out concern, e.g., the use of different CCF-quantification models, alternative handling of CCF data, time windows for operator actions, time dependencies in phased-mission operation, the impact of state-of-knowledge dependencies, and the ranking of dominating uncertainty contributors. Specific findings with respect to these issues are summarized in the paper.
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
Sensitivity Analysis of a Riparian Vegetation Growth Model
Directory of Open Access Journals (Sweden)
Michael Nones
2016-11-01
The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetational evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but it produces site-specific effects, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows a quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
Sensitivity analysis of numerical model of prestressed concrete containment
Energy Technology Data Exchange (ETDEWEB)
Bílý, Petr, E-mail: petr.bily@fsv.cvut.cz; Kohoutková, Alena, E-mail: akohout@fsv.cvut.cz
2015-12-15
Highlights: • FEM model of prestressed concrete containment with steel liner was created. • Sensitivity analysis of changes in geometry and loads was conducted. • Steel liner and temperature effects are the most important factors. • Creep and shrinkage parameters are essential for the long-term analysis. • Prestressing schedule is a key factor in the early stages. Abstract: Safety is always the main consideration in the design of the containment of a nuclear power plant. However, the efficiency of the design process should also be taken into consideration. Despite the advances in computational abilities in recent years, simplified analyses may be found useful for preliminary scoping or trade studies. In the paper, a study on the sensitivity of a finite element model of a prestressed concrete containment to changes in geometry, loads and other factors is presented. The importance of the steel liner, reinforcement, prestressing process, temperature changes, nonlinearity of materials, as well as the density of the finite element mesh, is assessed in the main stages of the life cycle of the containment. Although the modeling adjustments have not produced any significant changes in computation time, it was found that in some cases a simplified modeling process can lead to a significant reduction of work time without degradation of the results.
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
Poern, K.; Aakerlund, O.
1985-10-01
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models, such as those concerning the geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these, the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the other basic method, the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
Global sensitivity analysis for models with spatially dependent outputs
International Nuclear Information System (INIS)
Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.
2011-01-01
The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)
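The wavelet and Gaussian-process machinery of the paper above cannot be reproduced from the abstract alone, but the variance-based Sobol' indices it builds on can be sketched with the standard pick-freeze Monte Carlo estimator, applied here to the classic Ishigami test function (a common benchmark; this is an illustration of the underlying indices, not the authors' meta-model method):

```python
import math
import random

def ishigami(x, a=7.0, b=0.1):
    """Classic three-input benchmark for sensitivity analysis."""
    return math.sin(x[0]) + a * math.sin(x[1]) ** 2 + b * x[2] ** 4 * math.sin(x[0])

def first_order_sobol(f, dim, n=20000, seed=1):
    """First-order Sobol' indices via the pick-freeze (Saltelli-style) estimator.

    Inputs are assumed independent uniform on (-pi, pi). Two sample
    matrices A and B are drawn; AB_i is A with column i taken from B.
    """
    rng = random.Random(seed)
    u = lambda: rng.uniform(-math.pi, math.pi)
    A = [[u() for _ in range(dim)] for _ in range(n)]
    B = [[u() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mu = sum(fA) / n
    var = sum((y - mu) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        fABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        indices.append(sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / (n * var))
    return indices
```

For the Ishigami function the analytic first-order indices are roughly 0.31, 0.44 and 0 for the three inputs, which the estimate approaches as n grows; a spatial output, as in the paper, would yield one such index set per output location.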
Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion
Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger
2014-09-01
Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.
Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis
International Nuclear Information System (INIS)
Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.
1991-01-01
The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.
Source-driven noise analysis measurements with neptunium metal reflected by high enriched uranium
International Nuclear Information System (INIS)
Valentine, Timothy E.; Mattingly, John K.
2003-01-01
Subcritical noise analysis measurements have been performed with a neptunium (237Np) sphere reflected by highly enriched uranium. These measurements were performed at the Los Alamos Critical Experiment Facility in December 2002 to provide an estimate of the subcriticality of 237Np reflected by various amounts of highly enriched uranium. This paper provides a description of the measurements and presents some preliminary results of the analysis of the measurements. The measured and calculated spectral ratios differ by 15%, whereas the 'interpreted' and calculated keff values differ by approximately 1%. (author)
Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment
Lee, Meemong; Bowman, Kevin
2014-01-01
Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission that will be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as needed for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.
Sensitivity analysis for modules for various biosphere types
International Nuclear Information System (INIS)
Karlsson, Sara; Bergstroem, U.; Rosen, K.
2000-09-01
This study presents the results of a sensitivity analysis for the modules developed earlier for the calculation of ecosystem-specific dose conversion factors (EDFs). The report also includes a comparison between the probabilistically calculated mean values of the EDFs and values gained in deterministic calculations. An overview of the distribution of radionuclides between different environmental parts of the models is also presented. The radionuclides included in the study were 36Cl, 59Ni, 93Mo, 129I, 135Cs, 237Np and 239Pu, selected to represent various behaviour in the biosphere; some are of particular importance from the dose point of view. The deterministic and probabilistic EDFs showed a good agreement for most nuclides and modules. Exceptions occurred if very skewed distributions were used for parameters of importance for the results. Only a minor amount of the released radionuclides was present in the model compartments for all modules, except for the agricultural land module. The differences between the radionuclides were not pronounced, which indicates that nuclide-specific parameters were of minor importance for the retention of radionuclides over the simulated time period of 10,000 years in those modules. The results from the agricultural land module showed a different pattern. Large amounts of the radionuclides were present in the solid fraction of the saturated soil zone. The high retention within this compartment makes the zone a potential source for future exposure. Differences between the nuclides due to element-specific Kd-values could be seen. The amount of radionuclides present in the upper soil layer, which is the most critical zone for exposure to humans, was less than 1% for all studied radionuclides. The sensitivity analysis showed that the physical/chemical parameters were the most important in most modules, in contrast to the dominance of biological parameters in the uncertainty analysis. The only exception was the well module where
Liu, Ming; Zhao, Jing; Lu, XiaoZuo; Li, Gang; Wu, Taixia; Zhang, LiFu
2018-05-10
With spectral methods, noninvasive in vivo determination of blood hyperviscosity is potentially very useful in clinical diagnosis. In this study, 67 male subjects participated (41 healthy and 26 with hyperviscosity according to blood sample analysis). Reflectance spectra of the subjects' tongue tips were measured, and a classification method based on principal component analysis combined with an artificial neural network model was built to identify hyperviscosity. Hold-out and leave-one-out methods, which are widely accepted in model validation, were used to avoid significant bias and lessen overfitting. To measure the performance of the classification, sensitivity, specificity, accuracy and F-measure were calculated, respectively. The accuracies with 100 iterations of the hold-out method and 67 iterations of the leave-one-out method were 88.05% and 97.01%, respectively. Experimental results indicate that the classification model has practical value and prove the feasibility of using spectroscopy for noninvasive identification of hyperviscosity.
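The four performance measures the study reports (sensitivity, specificity, accuracy, F-measure) all derive from the binary confusion matrix; a minimal sketch (our function, assuming both classes occur in the labels and at least one positive prediction is made):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Confusion-matrix-based performance measures for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f_measure": f_measure}
```

For instance, two of three hyperviscosity cases detected with one false alarm among three healthy subjects gives sensitivity and specificity of 2/3 each.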
Sensitivity analysis of the terrestrial food chain model FOOD III
International Nuclear Information System (INIS)
Zach, Reto.
1980-10-01
As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis used a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)
Uncertainty and sensitivity analysis in nuclear accident consequence assessment
International Nuclear Information System (INIS)
Karlberg, Olof.
1989-01-01
This report contains the results of a four-year project under research contracts with the Nordic Cooperation in Nuclear Safety and the National Institute for Radiation Protection. An uncertainty/sensitivity analysis methodology consisting of Latin hypercube sampling and regression analysis was applied to an accident consequence model. A number of input parameters were selected, and the uncertainties related to these parameters were estimated by a Nordic group of experts. Individual doses, collective dose, health effects and their related uncertainties were then calculated for three release scenarios and for a representative sample of meteorological situations. Two of the scenarios simulated the acute phase after an accident, and one the long-term consequences. The most significant parameters were identified. The outer limits of the calculated uncertainty distributions are large and grow to several orders of magnitude for the low-probability consequences. The uncertainty in the expectation values is typically a factor of 2-5 (1 sigma). The variation in the model responses due to the variation of the weather parameters is roughly equal to the variation induced by parameter uncertainty. The most important parameters turned out to be different for each pathway of exposure, which could be expected. However, the overall most important parameters are the wet deposition coefficient and the shielding factors. A general discussion of the usefulness of uncertainty analysis in consequence analysis is also given. (au)
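The methodology named in the abstract, Latin hypercube sampling followed by regression analysis, can be sketched in a few lines: stratify each input into n equal-probability bins, take one draw per bin with the bin order shuffled independently per input, then rank inputs by standardized regression coefficients. The sketch below assumes independent uniform inputs and a simple linear test model (all names and coefficients are ours, not the report's):

```python
import math
import random

def latin_hypercube(n, dim, rng):
    """n samples on (0,1)^dim: per input, one jittered draw from each of n strata."""
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [[cols[d][i] for d in range(dim)] for i in range(n)]

def mean(v):
    return sum(v) / len(v)

def sd(v):
    m = mean(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))

def src(sample, y):
    """Standardized regression coefficients, one per input.

    Uses per-input least-squares slopes, which matches the multivariate
    fit only when inputs are (near-)uncorrelated, as LHS columns are.
    """
    sy, my = sd(y), mean(y)
    out = []
    for d in range(len(sample[0])):
        col = [row[d] for row in sample]
        mx, sx = mean(col), sd(col)
        beta = (sum((x - mx) * (yy - my) for x, yy in zip(col, y))
                / sum((x - mx) ** 2 for x in col))
        out.append(beta * sx / sy)
    return out
```

On a linear model y = 5·x1 + x2 + 0.1·x3 the SRCs come out near 0.98, 0.20 and 0.02, reproducing the coefficient ranking, which is exactly the "most significant parameters" screening the report describes.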
Robust and sensitive analysis of mouse knockout phenotypes.
Directory of Open Access Journals (Sweden)
Natasha A Karp
A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs. In addition, in a high-throughput environment, operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a two-way ANOVA, in these situations gives flawed results and should be avoided. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype, and will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
DISPELLING ILLUSIONS OF REFLECTION: A NEW ANALYSIS OF THE 2007 MAY 19 CORONAL 'WAVE' EVENT
International Nuclear Information System (INIS)
Attrill, Gemma D. R.
2010-01-01
A new analysis of the 2007 May 19 coronal wave-coronal mass ejection-dimmings event is offered employing base difference extreme-ultraviolet (EUV) images. Previous work analyzing the coronal wave associated with this event concluded strongly in favor of purely an MHD wave interpretation for the expanding bright front. This conclusion was based to a significant extent on the identification of multiple reflections of the coronal wave front. The analysis presented here shows that the previously identified 'reflections' are actually optical illusions and result from a misinterpretation of the running difference EUV data. The results of this new multiwavelength analysis indicate that two coronal wave fronts actually developed during the eruption. This new analysis has implications for our understanding of diffuse coronal waves and questions the validity of the analysis and conclusions reached in previous studies.
International Nuclear Information System (INIS)
Iman, R.L.; Helton, J.C.
1985-01-01
Probabilistic Risk Assessment (PRA) is playing an increasingly important role in the nuclear reactor regulatory process. The assessment of uncertainties associated with PRA results is widely recognized as an important part of the analysis process. One of the major criticisms of the Reactor Safety Study was that its representation of uncertainty was inadequate. The desire for the capability to treat uncertainties with the MELCOR risk code being developed at Sandia National Laboratories is indicative of the current interest in this topic. However, as yet, uncertainty analysis and sensitivity analysis in the context of PRA is a relatively immature field. In this paper, available methods for uncertainty analysis and sensitivity analysis in a PRA are reviewed. This review first treats methods for use with individual components of a PRA and then considers how these methods could be combined in the performance of a complete PRA. In the context of this paper, the goal of uncertainty analysis is to measure the imprecision in PRA outcomes of interest, and the goal of sensitivity analysis is to identify the major contributors to this imprecision. There are a number of areas that must be considered in uncertainty analysis and sensitivity analysis for a PRA: (1) information, (2) systems analysis, (3) thermal-hydraulic phenomena/fission product behavior, (4) health and economic consequences, and (5) display of results. Each of these areas and the synthesis of them into a complete PRA are discussed
Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A
2011-09-26
The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends both on the various illumination-observation geometries as well as on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariable analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each one of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America
Sensitivity Analysis Of Financing Demand In Syariah Banking
Directory of Open Access Journals (Sweden)
DR. HJ. ROSYETTI
2017-11-01
This study aims to analyze the sensitivity of the demand for financing in syariah banking, with a focus on the price, income and cross elasticities of financing demand. The data used in this study are quantitative secondary time-series data obtained from the publications of BPS, BI and OJK. The data analysis technique begins by estimating multiple linear regression equations using the EViews application, and then measures sensitivity using elasticities. The research variables consist of revenue share, gross domestic product and the conventional bank interest rate as independent variables, and demand for financing as the dependent variable. The results show that revenue share, gross domestic product and the interest rate of conventional banks simultaneously affect the demand for financing in Islamic banking at the 5% significance level (probability of the F-statistic below α = 0.05). Partially, revenue share and gross domestic product have a significant effect on demand for financing, while the interest rate of conventional banks does not have a significant partial effect on demand for financing in Islamic banking. The three independent variables explain 99.06% of the variation in the dependent variable; the rest (0.04) is influenced by other factors outside this study. The price elasticity of demand for financing in syariah banking during the observation period was 3.94 (ƐP > 1), so demand for financing in syariah banking can be said to be elastic. The income elasticity of demand for financing during the observation period was 3.08 (ƐI > 1), which categorizes financing as a luxury good. The cross elasticity of financing demand during the observation period was 0.52, i.e. positive (ƐC > 0), so the interest rate of a conventional bank can be categorized as a substitute for profit sharing.
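The elasticity categories applied in the abstract above (elastic when |Ɛ| > 1, luxury good when income elasticity exceeds 1, substitute when cross elasticity is positive) can be sketched with the midpoint (arc) elasticity formula; the numeric inputs below are illustrative, not the study's regression estimates:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity of quantity Q with respect to a driver P:
    percentage change in Q divided by percentage change in P, each taken
    relative to the midpoint of the two observations."""
    dq = (q1 - q0) / ((q1 + q0) / 2)
    dp = (p1 - p0) / ((p1 + p0) / 2)
    return dq / dp

def classify(e_price, e_income, e_cross):
    """Textbook classification rules used in the abstract."""
    kind = "elastic" if abs(e_price) > 1 else "inelastic"
    goods = "luxury" if e_income > 1 else "necessity"
    relation = "substitute" if e_cross > 0 else "complement"
    return kind, goods, relation
```

Plugging in the abstract's estimates, income elasticity 3.08 and cross elasticity 0.52 indeed classify financing as a luxury good and the conventional interest rate as a substitute.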
Leurent, Baptiste; Gomes, Manuel; Faria, Rita; Morris, Stephen; Grieve, Richard; Carpenter, James R
2018-04-20
Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply-imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
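One common way to implement the modify-after-imputation approach the tutorial describes is a delta adjustment: impute under MAR, then shift the imputed values by a plausible offset to encode an MNAR scenario. A minimal sketch in Python (the data, the offsets, and the single-variable normal imputation model are illustrative assumptions, not the paper's actual method or software code):

```python
import numpy as np

rng = np.random.default_rng(0)
qol = rng.normal(0.7, 0.1, size=200)                      # hypothetical QoL scores
observed = np.where(rng.random(200) < 0.3, np.nan, qol)   # ~30% missing

def impute_then_shift(y, delta, n_imputations=20, seed=1):
    """Multiply impute under MAR, then subtract `delta` from the imputed
    values to reflect an MNAR scenario (non-responders do worse)."""
    rng = np.random.default_rng(seed)
    mask = np.isnan(y)
    mu, sd = np.nanmean(y), np.nanstd(y)
    means = []
    for _ in range(n_imputations):
        filled = y.copy()
        filled[mask] = rng.normal(mu, sd, mask.sum()) - delta
        means.append(filled.mean())
    return float(np.mean(means))            # pooled point estimate

for delta in (0.0, 0.05, 0.1):              # increasingly pessimistic MNAR
    print(f"delta={delta:.2f} -> mean QoL {impute_then_shift(observed, delta):.3f}")
```

Reporting how conclusions change as `delta` grows is the sensitivity analysis: if the decision flips only at implausibly large offsets, the result is robust to MNAR.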
Simplified procedures for fast reactor fuel cycle and sensitivity analysis
International Nuclear Information System (INIS)
Badruzzaman, A.
1979-01-01
The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a fast reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the Integral Transport approach, in which considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure for this region, the core-blanket interface, and the blanket region, led to the development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of the key fuel cycle parameters, such as cost, and of their sensitivity to variations in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variations on these parameters very well. The accuracy of these procedures, and their use in analyzing a wide variety of sensitivities, demonstrates the potential utility of survey calculations in fast reactor analysis and fuel management.
Relative sensitivity analysis of the predictive properties of sloppy models.
Myasnikova, Ekaterina; Spirov, Alexander
2018-01-25
Among the model parameters characterizing complex biological systems, there are commonly some that do not significantly influence the quality of the fit to experimental data, so-called "sloppy" parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill, sigmoid), thereby embodying biological mechanisms responsible for the system's robustness to external perturbations. However, if a sloppy model is used for the prediction of the system behavior at altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce Relative Sensitivity Analysis, a method for evaluating predictive power under parameter estimation uncertainty. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and to select, among the existing solutions, those providing the correct system behavior at any reasonable input. In general, the method makes it possible to uncover the sources of incorrect predictions and suggests a way to overcome the estimation uncertainties.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Energy Technology Data Exchange (ETDEWEB)
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges for maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
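The NRMSE metric used above is straightforward to compute; a minimal sketch (normalizing by the 51-kW plant capacity is an assumption about the paper's convention, and the hourly numbers are a toy example, not the study's data):

```python
import numpy as np

def nrmse(forecast, actual, normalizer):
    """Normalized root mean squared error, in percent of `normalizer`."""
    forecast, actual = np.asarray(forecast, float), np.asarray(actual, float)
    return 100.0 * np.sqrt(np.mean((forecast - actual) ** 2)) / normalizer

# Toy hourly day-ahead forecast vs. observed output for a 51-kW plant
actual = np.array([0, 5, 18, 35, 42, 38, 20, 6, 0], float)
forecast = np.array([0, 8, 15, 30, 45, 40, 25, 4, 0], float)
print(f"NRMSE = {nrmse(forecast, actual, normalizer=51.0):.2f}%")
```

The same function, applied to each forecast parameter in turn while holding the others at their observed values, is the basic recipe for the sensitivity attribution described in the abstract.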
Advanced analysis techniques for X-ray reflectivities. Theory and application
International Nuclear Information System (INIS)
Zimmermann, Klaus Martin
2005-01-01
The first part of this thesis addresses the phase problem in X-ray reflectivity. The analytical properties of the reflection coefficient imply that the phase is completely determined by the Hilbert transform of the logarithm of the modulus and the zeros in the upper half of the complex plane (UHP). To account in addition for interfacial roughness, a new formula for the Hilbert phase is derived. In the following, the conditions under which the reflection coefficient has zeros in the UHP are discussed, and the existing sufficient condition is extended to rough multi-layer systems. Procedures for locating these zeros are developed. The second part of this thesis introduces a new iterative inversion method for X-ray reflectivity. It expands the profile in a set of eigenfunctions, which are discrete approximations of the eigenfunctions of the classical problem of reconstructing a compactly supported function from its partially known Fourier transform. In this work, piecewise constant functions, polygons, and second-order B-splines are used to expand the density profile. The eigenvalue problems for the calculation of the above-mentioned approximations are stated and solved. The formalism for the calculation of the reflection coefficient for these profiles is developed in dynamical and single-scattering theory. In the experimental part of this work, iterative inverse schemes are applied to the analysis of X-ray reflectivity. Different sample systems are investigated: for two titanium-carbon samples, tiny details at the Ti/C interface, such as the formation of a thin TiC layer, can be observed. The density profiles obtained from the reflectivities taken from nickel-carbon samples show the formation of SiC inside the Si substrate. Finally, the new inversion scheme is applied to a series of reflectivities from a 700 Å SiGe film on a substrate.
Advanced analysis techniques for X-ray reflectivities. Theory and application
Energy Technology Data Exchange (ETDEWEB)
Zimmermann, Klaus Martin
2005-07-01
The first part of this thesis addresses the phase problem in X-ray reflectivity. The analytical properties of the reflection coefficient imply that the phase is completely determined by the Hilbert transform of the logarithm of the modulus and the zeros in the upper half of the complex plane (UHP). To account in addition for interfacial roughness, a new formula for the Hilbert phase is derived. In the following, the conditions under which the reflection coefficient has zeros in the UHP are discussed, and the existing sufficient condition is extended to rough multi-layer systems. Procedures for locating these zeros are developed. The second part of this thesis introduces a new iterative inversion method for X-ray reflectivity. It expands the profile in a set of eigenfunctions, which are discrete approximations of the eigenfunctions of the classical problem of reconstructing a compactly supported function from its partially known Fourier transform. In this work, piecewise constant functions, polygons, and second-order B-splines are used to expand the density profile. The eigenvalue problems for the calculation of the above-mentioned approximations are stated and solved. The formalism for the calculation of the reflection coefficient for these profiles is developed in dynamical and single-scattering theory. In the experimental part of this work, iterative inverse schemes are applied to the analysis of X-ray reflectivity. Different sample systems are investigated: for two titanium-carbon samples, tiny details at the Ti/C interface, such as the formation of a thin TiC layer, can be observed. The density profiles obtained from the reflectivities taken from nickel-carbon samples show the formation of SiC inside the Si substrate. Finally, the new inversion scheme is applied to a series of reflectivities from a 700 Å SiGe film on a substrate.
Sensitivity Study on Analysis of Reactor Containment Response to LOCA
International Nuclear Information System (INIS)
Chung, Ku Young; Sung, Key Yong
2010-01-01
As the reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.
Sensitivity Study on Analysis of Reactor Containment Response to LOCA
Energy Technology Data Exchange (ETDEWEB)
Chung, Ku Young; Sung, Key Yong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2010-10-15
As the reactor containment vessel is the final barrier to the release of radioactive material during design basis accidents (DBAs), its structural integrity must be maintained by withstanding the high pressure conditions resulting from DBAs. To verify the structural integrity of the containment, response analyses are performed to obtain the pressure transient inside the containment after DBAs, including loss of coolant accidents (LOCAs). The purpose of this study is to give regulatory insights into the importance of input variables in the analysis of containment responses to a large break LOCA (LBLOCA). For the sensitivity study, an LBLOCA in the Kori 3 and 4 nuclear power plant (NPP) is analyzed with the CONTEMPT-LT computer code.
Emissivity compensated spectral pyrometry—algorithm and sensitivity analysis
International Nuclear Information System (INIS)
Hagqvist, Petter; Sikström, Fredrik; Christiansson, Anna-Karin; Lennartson, Bengt
2014-01-01
In order to solve the problem of non-contact temperature measurements on an object with varying emissivity, a new method is herein described and evaluated. The method uses spectral radiance measurements and converts them to temperature readings. It proves to be resilient to changes in spectral emissivity and tolerant of noisy spectral measurements. It is based on an assumption of smooth changes in emissivity and uses historical values of spectral emissivity and temperature for estimating the current spectral emissivity. The algorithm, its constituent steps, and the accompanying parameters are described and discussed. A thorough sensitivity analysis of the method is carried out through simulations. No rigorous instrument calibration is needed for the presented method, and it is therefore industrially tractable. (paper)
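The core inversion behind any such method is recovering temperature from spectral radiance once an emissivity estimate is available. A simplified sketch (this is not the paper's algorithm; the emissivity model, the 1% estimation error, and all numbers are invented for illustration):

```python
import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, c, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T), W / (m^2 sr m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def invert_planck(lam, L_bb):
    """Temperature from blackbody-equivalent radiance at one wavelength."""
    return (H * C / (lam * K)) / np.log1p(2 * H * C**2 / (lam**5 * L_bb))

# Emissivity-compensated reading, much simplified:
lam = np.linspace(1.0e-6, 1.6e-6, 50)        # near-IR band, m
eps_true = 0.6 + 0.1 * np.sin(4e6 * lam)     # unknown spectral emissivity
T_true = 1500.0
measured = eps_true * planck(lam, T_true)    # what the pyrometer sees

eps_est = eps_true * 1.01                    # "historical" estimate, 1% off
T_est = invert_planck(lam, measured / eps_est).mean()
print(f"T_est = {T_est:.1f} K")              # close to 1500 K despite the error
```

The weak sensitivity of the recovered temperature to a small emissivity error (here about 2 K for a 1% error) is why an emissivity history can be good enough to compensate.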
Sensitive Spectroscopic Analysis of Biomarkers in Exhaled Breath
Bicer, A.; Bounds, J.; Zhu, F.; Kolomenskii, A. A.; Kaya, N.; Aluauee, E.; Amani, M.; Schuessler, H. A.
2018-06-01
We have developed a novel optical setup based on a high-finesse cavity and absorption laser spectroscopy in the near-IR spectral region. In pilot experiments, spectrally resolved absorption measurements of biomarkers in exhaled breath, such as methane and acetone, were carried out using cavity ring-down spectroscopy (CRDS). With a 172-cm-long cavity, an effective optical path of 132 km was achieved. The CRDS technique is well suited for such measurements due to its high sensitivity and good spectral resolution. Detection limits of 8 ppbv for methane and 2.1 ppbv for acetone were achieved with a spectral sampling of 0.005 cm-1, which allowed us to analyze multicomponent gas mixtures and to observe the absorption peaks of 12CH4 and 13CH4. Further improvements of the technique have the potential to enable diagnostics of health conditions based on a multicomponent analysis of breath samples.
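In CRDS, the sample's absorption coefficient follows directly from the cavity decay times with and without the absorber. A minimal sketch (the 5% decay change is an invented sample value; only the 132 km effective path figure comes from the abstract, and relating it to the ring-down time via L_eff = c·tau is our simplifying assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def ringdown_to_alpha(tau_s, tau0_s):
    """Absorption coefficient alpha = (1/c)(1/tau - 1/tau0), in 1/m,
    from ring-down times with (tau_s) and without (tau0_s) the sample."""
    return (1.0 / C) * (1.0 / tau_s - 1.0 / tau0_s)

# A 132 km effective path corresponds to an empty-cavity ring-down time
# of roughly L_eff / c:
tau0 = 132_000.0 / C               # ~440 microseconds
tau = 0.95 * tau0                  # hypothetical 5% faster decay with sample
alpha = ringdown_to_alpha(tau, tau0)
print(f"tau0 = {tau0 * 1e6:.0f} us, alpha = {alpha:.3e} 1/m")
```

Because only decay times enter, the measurement is immune to laser intensity fluctuations, which is a large part of why CRDS reaches ppbv-level sensitivity.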
Sequence length variation, indel costs, and congruence in sensitivity analysis
DEFF Research Database (Denmark)
Aagesen, Lone; Petersen, Gitte; Seberg, Ole
2005-01-01
The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which...... the cost of indels was varied. Indels were treated either as a fifth character state, or strings of contiguous gaps were considered single events by using linear affine gap cost. Congruence consistently improved when indels were treated as single events, but no congruence measure appeared as the obviously...... preferable one. However, when combining enough data, all congruence measures clearly tended to select the same alignment cost set as the optimal one. Disagreement among congruence measures was mostly caused by a dominant fragment or a data partition that included all or most of the length variation...
Applied Drama and the Higher Education Learning Spaces: A Reflective Analysis
Moyo, Cletus
2015-01-01
This paper explores Applied Drama as a teaching approach in Higher Education learning spaces. The exploration takes a reflective analysis approach by first examining the impact that Applied Drama has had on my career as a Lecturer/Educator/Teacher working in Higher Education environments. My engagement with Applied Drama practice and theory is…
Global sensitivity analysis using low-rank tensor approximations
International Nuclear Information System (INIS)
Konakli, Katerina; Sudret, Bruno
2016-01-01
In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are compared with the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
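For reference, the Monte-Carlo estimates that meta-model-based Sobol' indices are validated against can be produced with a pick-freeze scheme. A minimal sketch using a Saltelli-style estimator on a toy additive function (this is the reference approach, not the paper's LRA post-processing):

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for f: [0,1]^d -> R."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # swap only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli-2010 estimator
    return S

# Additive test function x0 + 2*x1 on [0,1]^2: exact indices are 0.2 and 0.8
S = first_order_sobol(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2)
print(S.round(3))
```

The cost of this reference, (d+2)·n model runs, is exactly what makes coefficient-based shortcuts (PCE, LRA) attractive for expensive finite-element models.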
Introducing Player-Driven Video Analysis to Enhance Reflective Soccer Practice
DEFF Research Database (Denmark)
Hjort, Anders; Elbæk, Lars; Henriksen, Kristoffer
2017-01-01
In the present study, we investigated the introduction of a cloud-based video analysis platform called Player Universe (PU) in a Danish football club. Video analysis is not a new performance-enhancing element in sport, but PU is innovative in the way players and coaches produce footage and how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis process, in the sense that they are encouraged to tag game actions in video-documented football matches. Following this, players can get virtual feedback from their coach. The implementation and evaluation of PU took place in the FC Copenhagen (FCK) School of Excellence. Findings show that PU can improve youth football players' reflection skills through consistent video analyses and tagging, that coaches are important as role models and providers of feedback, and that the use... The philosophy...
Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development
DEFF Research Database (Denmark)
Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars
2018-01-01
In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis process, in the sense that they are encouraged to tag game actions in video-documented soccer matches. Following this, players can get virtual feedback from their coach. Findings show that PU can improve youth soccer players' reflection skills through consistent video analyses and tagging; coaches are important as role models and providers of feedback; and the use of the platform primarily stimulated deliberate practice activities. PU can be seen as a source of inspiration for soccer players and clubs as to how analytical platforms can motivate...
Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code
Energy Technology Data Exchange (ETDEWEB)
Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend the proper values necessary for carrying out the lattice analysis of DUPIC fuel.
Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code
Energy Technology Data Exchange (ETDEWEB)
Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1998-12-31
The code WIMS-AECL has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to the choice of a number of parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied this sensitivity with respect to these parameters and recommend the proper values necessary for carrying out the lattice analysis of DUPIC fuel.
An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis
Kleijnen, J.P.C.
2004-01-01
Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys classic and modern designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs assume a...
Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS
McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2015-01-01
The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.
Majkrzak, Charles F.; Carpenter, Elisabeth; Heinrich, Frank; Berk, Norman F.
2011-11-01
Specular neutron reflectometry has become an established probe of the nanometer scale structure of materials in thin film and multilayered form. It has contributed especially to our understanding of soft condensed matter of interest in polymer science, organic chemistry, and biology and of magnetic hard condensed matter systems. In this paper we examine a number of key factors which have emerged that can limit the sensitivity of neutron reflection as such a probe. Among these is loss of phase information, and we discuss how knowledge about material surrounding a film of interest can be applied to help resolve the problem. In this context we also consider what role the quantum phenomenon of interaction-free measurement might play in enhancing the statistical efficiency for obtaining reflectivity or transmission data.
Relative performance of academic departments using DEA with sensitivity analysis.
Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P
2009-05-01
The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, the UK, and Australia, but to the best of our knowledge this is the first time it has been applied in the Indian context. Applying DEA models, we calculate technical, pure technical, and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance, and teaching performance are assessed separately using sensitivity analysis.
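The CCR (constant-returns-to-scale) efficiencies underlying such a DEA study come from one small linear program per department. A minimal sketch using SciPy's `linprog` (the four-department data set below is invented for illustration, not the IIT Roorkee data):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit `o` via the multiplier LP:
    max u.y_o  s.t.  v.x_o = 1  and  u.Y_j <= v.X_j for every unit j."""
    n, m, k = X.shape[0], Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(k)])          # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                         # u.Y_j - v.X_j <= 0
    A_eq = np.concatenate([np.zeros(m), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + k))
    return -res.fun

# Toy data: 4 departments, inputs = [staff, budget], outputs = [papers]
X = np.array([[5., 10.], [8., 15.], [6., 8.], [10., 20.]])
Y = np.array([[20.], [24.], [30.], [28.]])
effs = [ccr_efficiency(X, Y, o) for o in range(4)]
print([round(e, 3) for e in effs])   # efficient units score 1.0
```

A score of 1.0 marks a frontier (efficient) department; scores below 1.0 measure how much an inefficient department could radially contract its inputs.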
Near-infrared reflectance analysis by Gauss-Jordan linear algebra
International Nuclear Information System (INIS)
Honigs, D.E.; Freelin, J.M.; Hieftje, G.M.; Hirschfeld, T.B.
1983-01-01
Near-infrared reflectance analysis is an analytical technique that uses the near-infrared diffuse reflectance of a sample at several discrete wavelengths to predict the concentration of one or more of the chemical species in that sample. However, because near-infrared bands from solid samples are both abundant and broad, the reflectance at a given wavelength usually contains contributions from several sample components, requiring extensive calculations on overlapped bands. In the present study, these calculations have been performed using an approach similar to that employed in multi-component spectrophotometry, but with Gauss-Jordan linear algebra serving as the computational vehicle. Using this approach, correlations for percent protein in wheat flour and percent benzene in hydrocarbons have been obtained and are evaluated. The advantages of a linear-algebra approach over the common one employing stepwise regression are explored
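The computational core, recovering component concentrations from overlapped bands with Gauss-Jordan elimination, can be sketched as follows (the two-component response matrix and concentrations are hypothetical, and we apply the elimination to the least-squares normal equations as one plausible reading of the approach):

```python
import numpy as np

# Linear mixing model: the reflectance-derived signal at each wavelength is
# a linear combination of component spectra.  K[i, j] is the (hypothetical)
# response of component j at wavelength i.
K = np.array([[0.90, 0.20],
              [0.30, 0.70],
              [0.10, 0.50]])
true_conc = np.array([12.0, 5.0])              # e.g. % protein, % moisture
r = K @ true_conc                              # measured signal (noise-free)

# Gauss-Jordan elimination on the normal equations (K^T K) c = K^T r
A = np.hstack([K.T @ K, (K.T @ r)[:, None]])   # augmented matrix
for i in range(A.shape[0]):
    A[i] /= A[i, i]                 # normalize the pivot row
    for j in range(A.shape[0]):
        if j != i:
            A[j] -= A[j, i] * A[i]  # zero out column i in the other rows
conc = A[:, -1]
print(conc)                         # ~ [12.  5.]
```

With noisy real spectra, the same elimination runs on calibration data to produce the regression coefficients that are then applied to unknown samples.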
Full Polarization Analysis of Resonant Superlattice and Forbidden x-ray Reflections in Magnetite
International Nuclear Information System (INIS)
Wilkins, S.B.; Bland, S.R.; Detlefs, B.; Beale, T.A.W.; Mazzoli, C.; Joly, Y.; Hatton, P.D.; Lorenzo, J.E.; Brabers, V.A.M.
2009-01-01
Despite being one of the oldest known magnetic materials, and the classic mixed-valence compound, thought to be charge ordered, the structure of magnetite below the Verwey transition is complex and the presence and role of charge order is still being debated. Here, we present resonant x-ray diffraction data at the iron K-edge on forbidden (0, 0, 2n+1)C and superlattice (0, 0, 2n+1/2)C reflections. Full linear polarization analysis of the incident and scattered light was conducted in order to explore the origins of the reflections. Through simulation of the resonant spectra we have confirmed that a degree of charge ordering takes place, while the anisotropic tensor of susceptibility scattering is responsible for the superlattice reflections below the Verwey transition. We also report the surprising result of the conversion of a significant proportion of the scattered light from linear to nonlinear polarization.
Analysis of archaeological ceramics by total-reflection X-ray fluorescence: Quantitative approaches
International Nuclear Information System (INIS)
Fernandez-Ruiz, R.; Garcia-Heras, M.
2008-01-01
This paper reports the quantitative methodologies developed for the compositional characterization of archaeological ceramics by total-reflection X-ray fluorescence at two levels: a first quantitative level, which comprises an acid leaching procedure, and a second, selective level, which seeks to increase the number of detectable elements by eliminating the iron present in the acid leaching procedure. Total-reflection X-ray fluorescence spectrometry has been compared, at a quantitative level, with Instrumental Neutron Activation Analysis in order to test its applicability to the study of this kind of material. The combination of a solid chemical homogenization procedure previously reported with the quantitative methodologies presented here allows total-reflection X-ray fluorescence to analyze 29 elements with acceptable analytical recoveries and accuracies.
A system for the obtention and analysis of diffuse reflection spectra from biological tissue
International Nuclear Information System (INIS)
La Cadena, A. de; La Rosa, J. de; Stolik, S.
2012-01-01
Diffuse reflection spectroscopy is a technique with which it is possible to study biological tissue. In the field of biomedical applications it is useful for diagnostic purposes, since biological tissue can be analyzed in a non-invasive way. It can also be used for therapeutic purposes, for example in photodynamic therapy or laser surgery, because the biological effects produced by these treatments can be determined with this technique. This paper presents the development of a system to obtain and analyze diffuse reflection spectra of biological tissues, using as the light source an LED that emits between 400 and 700 nm. The system has an interface for regulating the emittance of the LED. For the analysis of the diffuse reflectance spectra, an HR4000CG-UV-NIR spectrometer is used. (Author)
Analysis of archaeological ceramics by total-reflection X-ray fluorescence: Quantitative approaches
Energy Technology Data Exchange (ETDEWEB)
Fernandez-Ruiz, R. [Servicio Interdepartamental de Investigacion, Facultad de Ciencias, Universidad Autonoma de Madrid, Modulo C-9, Laboratorio de TXRF, Crta. Colmenar, Km 15, Cantoblanco, E-28049, Madrid (Spain)], E-mail: ramon.fernandez@uam.es; Garcia-Heras, M. [Grupo de Arqueometria de Vidrios y Materiales Ceramicos, Instituto de Historia, Centro de Ciencias Humanas y Sociales, CSIC, C/ Albasanz, 26-28, 28037 Madrid (Spain)
2008-09-15
This paper reports the quantitative methodologies developed for the compositional characterization of archaeological ceramics by total-reflection X-ray fluorescence at two levels: a first quantitative level, which comprises an acid leaching procedure, and a second, selective level, which seeks to increase the number of detectable elements by eliminating the iron present in the acid leaching procedure. Total-reflection X-ray fluorescence spectrometry has been compared, at a quantitative level, with Instrumental Neutron Activation Analysis in order to test its applicability to the study of this kind of material. The combination of a solid chemical homogenization procedure previously reported with the quantitative methodologies presented here allows total-reflection X-ray fluorescence to analyze 29 elements with acceptable analytical recoveries and accuracies.
A comparison of hair colour measurement by digital image analysis with reflective spectrophotometry.
Vaughn, Michelle R; van Oorschot, Roland A H; Baindur-Hudson, Swati
2009-01-10
While reflective spectrophotometry is an established method for measuring macroscopic hair colour, it can be cumbersome to use on a large number of individuals and not all reflective spectrophotometry instruments are easily portable. This study investigates the use of digital photographs to measure hair colour and compares its use to reflective spectrophotometry. An understanding of the accuracy of colour determination by these methods is of relevance when undertaking specific investigations, such as those on the genetics of hair colour. Measurements of hair colour may also be of assistance in cases where a photograph is the only evidence of hair colour available (e.g. surveillance). Using the CIE L(*)a(*)b(*) colour space, the hair colour of 134 individuals of European ancestry was measured by both reflective spectrophotometry and by digital image analysis (in V++). A moderate correlation was found along all three colour axes, with Pearson correlation coefficients of 0.625, 0.593 and 0.513 for L(*), a(*) and b(*) respectively (p-values=0.000), with means being significantly overestimated by digital image analysis for all three colour components (by an average of 33.42, 3.38 and 8.00 for L(*), a(*) and b(*) respectively). When using digital image data to group individuals into clusters previously determined by reflective spectrophotometric analysis using a discriminant analysis, individuals were classified into the correct clusters 85.8% of the time when there were two clusters. The percentage of cases correctly classified decreases as the number of clusters increases. It is concluded that, although more convenient, hair colour measurement from digital images has limited use in situations requiring accurate and consistent measurements.
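The Pearson coefficients reported above are simple to reproduce. A sketch on synthetic paired L* readings (the +33.4 offset and the noise level are loosely modeled on the reported L* overestimate; the data themselves are simulated, not the study's):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical paired L* readings: spectrophotometer vs. digital image,
# with the image method overestimating lightness by a constant offset
rng = np.random.default_rng(0)
spectro_L = rng.uniform(15, 60, 50)
image_L = spectro_L + 33.4 + rng.normal(0, 8, 50)   # offset + measurement noise
print(f"r = {pearson(spectro_L, image_L):.3f}")
```

Note that a constant offset leaves r unchanged: correlation measures agreement in ordering and spread, which is why the study can report moderate correlations alongside large systematic overestimates.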
Lutz, Gabriele; Pankoke, Nina; Goldblatt, Hadass; Hofmann, Marzellus; Zupanic, Michaela
2017-07-14
Professional competence is important in delivering high-quality patient care, and it can be enhanced by reflection and reflective discourse, e.g. in mentoring groups. However, students are often reluctant to engage in this discourse. A group mentoring program involving all preclinical students as well as faculty members and co-mentoring clinical students was initiated at Witten-Herdecke University. This study explores both the attitudes of those students towards such a program and the factors that might hinder or enhance how students engage in reflective discourse. A qualitative design was applied using semi-structured focus group interviews with preclinical students and semi-structured individual interviews with mentors and co-mentors. The interview data were analyzed using thematic content analysis. Students' attitudes towards reflective discourse on professional challenges were diverse. Some students valued the new program and named positive outcomes regarding several features of professional development; enriching experiences were described. Others expressed aversive attitudes. Three reasons for these were given: unclear goals and benefits, interpersonal problems within the groups hindering development, and intrapersonal issues such as insecurity and traditional views of medical education. Participants mentioned several program setup factors that could enhance how students engage in such groups: explaining the program thoroughly, setting expectations and integrating the reflective discourse in a meaningful way into the curriculum, obliging participation without coercion, developing a sense of security, trust and interest in each other within the groups, randomizing group composition, and facilitating group moderators as positive peer and faculty role models and as learning group members. A well-designed and empathetic setup of group mentoring programs can help raise openness towards engaging in meaningful reflective discourse. Reflection on and communication of
Analysis of Electric Field Propagation in Anisotropically Absorbing and Reflecting Waveplates
Carnio, B. N.; Elezzabi, A. Y.
2018-04-01
Analytical expressions are derived for half-wave plates (HWPs) and quarter-wave plates (QWPs) based on uniaxial crystals. This general analysis describes the behavior of anisotropically absorbing and reflecting waveplates across the electromagnetic spectrum, allowing corrections to the commonly used equations derived under the assumption of isotropic absorption and reflection. This analysis is crucial to the design and implementation of HWPs and QWPs in the terahertz regime, where the uniaxial crystals used for waveplates are highly birefringent and anisotropically absorbing. The derived HWP equations describe the rotation of linearly polarized light by an arbitrary angle, whereas the QWP analysis focuses on manipulating a linearly polarized electric field to obtain any ellipticity. The HWP and QWP losses are characterized by deriving equations for the total electric field magnitude transmitted through these phase-retarding elements.
PDASAC, Partial Differential Sensitivity Analysis of Stiff System
International Nuclear Information System (INIS)
Caracotsios, M.; Stewart, W.E.
2001-01-01
1 - Description of program or function: PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed, with members of order 0 or 1 in t and order 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired. 2 - Method of solution: The method of lines is used, with a user-selected x-grid and minimum-bandwidth finite-difference approximations of the x-derivatives. Starting conditions are reconciled with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithms of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989), with the initial acceleration phase deleted and with row scaling of the Jacobian added. The predictor and corrector are expressed in divided-difference form, with the fixed-leading-coefficient form of corrector (Jackson and Sacks-Davis 1989; Brenan et al. 1989). Weights for the error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations of Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic equation systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigns the work array lengths and the output
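The core approach, the method of lines with simultaneously integrated parametric sensitivities, can be sketched on a toy diffusion problem. This is our Python/SciPy illustration, not PDASAC's implementation; for u_t = D u_xx, the forward sensitivity s = du/dD obeys s_t = u_xx + D s_xx and is integrated alongside the state:

```python
# Method-of-lines sketch: discretize the x-derivatives on a grid, integrate
# in t, and carry the parametric sensitivity du/dD alongside the state.
# Toy problem only; PDASAC itself handles general index-0/1 DAE systems.
import numpy as np
from scipy.integrate import solve_ivp

nx = 51
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
D = 0.1  # diffusion coefficient (the parameter we differentiate against)

def lap(v):
    """Second difference with homogeneous Dirichlet boundaries."""
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    return out

def rhs(t, y):
    u, s = y[:nx], y[nx:]
    du = D * lap(u)              # state equation u_t = D u_xx
    ds = lap(u) + D * lap(s)     # forward sensitivity equation for s = du/dD
    return np.concatenate([du, ds])

u0 = np.sin(np.pi * x)           # initial condition, zero at both boundaries
y0 = np.concatenate([u0, np.zeros(nx)])
sol = solve_ivp(rhs, (0.0, 0.5), y0, method="BDF", rtol=1e-8, atol=1e-10)
u_end, s_end = sol.y[:nx, -1], sol.y[nx:, -1]
```

For this problem the exact solution is u = exp(-pi^2 D t) sin(pi x), so du/dD = -pi^2 t u, which the integrated sensitivity reproduces.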
Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators
Sykes, J. F.; Wilson, J. L.; Andrews, R. W.
1985-03-01
Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. Solving the primary problem and the adjoint sensitivity problem together yields all of the required derivatives and hence the related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for the equations of two-dimensional steady-state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. In contrast, local velocity-related performance measures are more sensitive to hydraulic conductivities.
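In a discrete linear analogue, the adjoint trick replaces one linear solve per parameter with a single extra solve. A sketch (the 1-D finite-difference system and head-based performance measure are our illustration, not the paper's Leadville model):

```python
# Discrete adjoint sensitivity for the linear steady-state system A(k) h = b
# (a 1-D finite-difference "aquifer" with uniform conductivity k). For a
# performance measure J = h[obs]:
#   dJ/dk = -lam^T (dA/dk) h,   where  A^T lam = dJ/dh.
import numpy as np

n = 20
k = 2.5                                   # hydraulic conductivity (uniform)
main = 2 * k * np.ones(n)
off = -k * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.ones(n) * 1e-3                     # recharge term (independent of k)

h = np.linalg.solve(A, b)                 # primary problem
obs = n // 2                              # observation node
dJdh = np.zeros(n); dJdh[obs] = 1.0
lam = np.linalg.solve(A.T, dJdh)          # adjoint problem

dA_dk = A / k                             # A is linear in k here
dJ_dk_adj = -lam @ (dA_dk @ h)            # adjoint sensitivity

# Cross-check against a central finite difference.
eps = 1e-6
def head(kv):
    Av = (np.diag(2 * kv * np.ones(n))
          + np.diag(-kv * np.ones(n - 1), 1)
          + np.diag(-kv * np.ones(n - 1), -1))
    return np.linalg.solve(Av, b)[obs]
dJ_dk_fd = (head(k + eps) - head(k - eps)) / (2 * eps)
```

The point of the adjoint formulation is that `lam` is computed once and then reused for every parameter, whereas the finite-difference check needs two solves per parameter.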
Design, Analysis, and On-Sun Evaluation of Reflective Strips Under Controlled Buckling
Jaworske, Donald A.; Sechkar, Edward A.; Colozza, Anthony J.
2014-01-01
Solar concentrators are envisioned for use in a variety of space-based applications, including applications involving in situ resource utilization. Identifying solar concentrators that minimize mass and cost is of great interest, especially since launch cost is driven in part by the mass of the payload. Concentrators must also be able to survive the wide temperature excursions on the lunar surface. Smart structures that compensate for changes in concentrator geometry brought about by temperature extremes are therefore of interest, and some applications may benefit from the ability to change the concentrator's focal pattern at will. This paper addresses a method of designing a single reflective strip to produce a desired focal pattern through the use of controlled buckling. Small variations in the cross section over the length of the reflective strip influence the distribution of light in the focal region. A finite element method of analysis is utilized here that calculates the curve produced for a given strip cross section and axial load. Varying the axial force and strip cross section over the length of the reflective strip provides a means of optimizing ray convergence in the focal region. Careful selection of a tapered cross section yields a reflective strip that approximates a parabola. An array of reflective strips under controlled buckling produces a lightweight concentrator, and adjustments in the compression of individual strips provide a means of compensating for temperature excursions or changing the focal pattern at will.
Ojeda, Jesús J; Romero-González, María E; Banwart, Steven A
2009-08-01
Reflectance micro-Fourier transform infrared (FT-IR) analysis has been applied to characterize biofilm formation by Aquabacterium commune, a common microorganism in drinking water distribution systems, on the increasingly popular pipe material stainless steel EN1.4307. The applicability of the reflectance micro-FT-IR technique for analyzing the bacterial functional groups is discussed, and the results are compared to spectra obtained using more conventional FT-IR techniques: transmission micro-FT-IR, attenuated total reflectance (ATR), and KBr pellets. The differences between the infrared spectra of wet and dried bacteria, as well as free versus attached bacteria, are also discussed. The spectra obtained using reflectance micro-FT-IR spectroscopy were comparable to those obtained using other FT-IR techniques. The absence of sample preparation, the potential to analyze intact samples, and the ability to characterize opaque and thick samples without the need to transfer the bacterial samples to an infrared-transparent medium or produce a pure culture were the main advantages of reflectance micro-FT-IR spectroscopy.
Energy Technology Data Exchange (ETDEWEB)
Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected tunable model parameters: four involving emissions of anthropogenic and biogenic volatile organic compounds (VOCs), anthropogenic semi-volatile and intermediate-volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
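The sampling-plus-variance-decomposition workflow can be sketched with a toy model: quasi-Monte Carlo (Sobol') samples of the parameter space, then a binned estimate of first-order sensitivity indices. The binned conditional-mean estimator is a simple stand-in for the paper's generalized-linear-model analysis, and the three-parameter toy function is ours:

```python
# Quasi-Monte Carlo sampling followed by a variance-based first-order
# sensitivity estimate, S_i = Var[E(y|x_i)] / Var(y). The "model" is an
# arbitrary toy function standing in for the chemical-transport runs.
import numpy as np
from scipy.stats import qmc

d = 3                                    # toy parameter space (the paper used 7)
sampler = qmc.Sobol(d=d, scramble=True, seed=1)
X = sampler.random_base2(m=12)           # 4096 quasi-random samples in [0,1)^d

def model(x):
    # Toy response: strongly driven by x0, weakly by x1, not at all by x2.
    return 4.0 * x[:, 0] + 0.5 * np.sin(2 * np.pi * x[:, 1])

y = model(X)

def first_order_index(xi, y, bins=32):
    """Estimate S_i via binned conditional means on x_i."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(xi, edges) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return float(cond_means.var() / y.var())

S = [first_order_index(X[:, i], y) for i in range(d)]
```

With 4096 runs the dominant parameter is identified cleanly, mirroring how the 250-member ensemble singles out the volatility transformation parameter.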
Rakhmatullina, Ekaterina; Bossen, Anke; Höschele, Christoph; Wang, Xiaojie; Beyeler, Barbara; Meier, Christoph; Lussi, Adrian
2011-01-01
We present the assembly and application of an optical reflectometer for the analysis of dental erosion. The erosive procedure involved acid-induced softening and initial substance loss phases, which are considered difficult for visual diagnosis in the clinic. The change in the specular reflection signal showed the highest sensitivity among the tested methods for detecting the early softening phase of erosion. The exponential decrease of the specular reflection intensity with erosive duration was compared to the increase of enamel roughness. Surface roughness was measured by optical analysis, and the observed tendency was correlated with scanning electron microscopy images of eroded enamel. A high correlation between specular reflection intensity and measurement of enamel softening (r2 ≥ −0.86) as well as calcium release (r2 ≥ −0.86) was found during erosion progression. Measurement of diffuse reflection revealed higher tooth-to-tooth deviation, in contrast to the analysis of specular reflection intensity, and lower correlation with the other applied methods (r2 = 0.42–0.48). The proposed optical method allows simple and fast surface analysis and could be used for further optimization and construction of the first noncontact and cost-effective diagnostic tool for early erosion assessment in vivo. PMID:22029364
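The reported exponential decay of specular reflection with erosion time is the kind of relation one would extract by a least-squares fit. A minimal sketch with synthetic readings (the paper's raw data are not available here):

```python
# Fit I(t) = I0 * exp(-k * t) to synthetic specular-reflection readings,
# mimicking the exponential decrease the abstract reports.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 30)               # erosion time, arbitrary units
I0_true, k_true = 100.0, 0.35            # illustrative "true" values
intensity = I0_true * np.exp(-k_true * t) * (1 + rng.normal(0, 0.02, t.size))

def decay(t, I0, k):
    return I0 * np.exp(-k * t)

popt, pcov = curve_fit(decay, t, intensity, p0=(90.0, 0.2))
I0_fit, k_fit = popt
```

The fitted rate constant `k_fit` is the single summary number that would then be correlated against softening or calcium-release measurements.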
An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis
Directory of Open Access Journals (Sweden)
S. Razmyan
2012-01-01
Discriminant analysis (DA) estimates a discriminant function by minimizing group misclassification in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlapping of groups. The uncertainty in the input variables and model parameters needs to be properly characterized in decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks’ variables on the overall variance in overlap in a DA, in order to determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is considered for computing the set of first-order sensitivity indices of the variables to estimate the contribution of each uncertain variable. The results show that the uncertainties in the loans granted and different deposit variables are more significant than uncertainties in other banks’ variables in decision making.
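The overlap problem the paper addresses can be illustrated with a minimal Fisher linear discriminant on two overlapping Gaussian groups (illustrative data only; the paper's DEA-DA formulation and banking variables are not reproduced):

```python
# Two overlapping Gaussian groups, a Fisher linear discriminant, and the
# misclassification rate that the overlap induces.
import numpy as np

rng = np.random.default_rng(3)
n = 500
g0 = rng.normal([0.0, 0.0], 1.0, (n, 2))     # group 0
g1 = rng.normal([2.0, 1.0], 1.0, (n, 2))     # group 1 (overlapping group 0)

mu0, mu1 = g0.mean(axis=0), g1.mean(axis=0)
Sw = np.cov(g0.T) + np.cov(g1.T)             # pooled within-group scatter
w = np.linalg.solve(Sw, mu1 - mu0)           # Fisher discriminant direction
thresh = w @ (mu0 + mu1) / 2.0               # midpoint decision threshold

mis0 = (g0 @ w > thresh)                     # group-0 points sent to group 1
mis1 = (g1 @ w <= thresh)                    # group-1 points sent to group 0
error_rate = (mis0.sum() + mis1.sum()) / (2 * n)
```

In the paper's setting, a Monte Carlo loop would perturb the uncertain inputs, recompute `error_rate` per draw, and attribute its variance to the individual variables via first-order sensitivity indices.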
Sensitivity analysis of alkaline plume modelling: influence of mineralogy
International Nuclear Information System (INIS)
Gaboreau, S.; Claret, F.; Marty, N.; Burnol, A.; Tournassat, C.; Gaucher, E.C.; Munier, I.; Michau, N.; Cochepin, B.
2010-01-01
Document available in extended abstract form only. In the context of a disposal facility for radioactive waste in a clayey geological formation, an important modelling effort has been carried out in order to predict the time evolution of interacting cement-based (concrete or cement) and clay (argillites and bentonite) materials. The high number of modelling input parameters, associated with non-negligible uncertainties, often makes the interpretation of modelling results difficult. As a consequence, it is necessary to carry out sensitivity analysis on the main modelling parameters. In a recent study, Marty et al. (2009) demonstrated that numerical mesh refinement and consideration of dissolution/precipitation kinetics have a marked effect on (i) the time necessary to numerically clog the initial porosity and (ii) the final mineral assemblage at the interface. On the contrary, these input parameters have little effect on the extension of the alkaline pH plume. In the present study, we investigate the effects of the assumed initial mineralogy on the principal simulation outputs: (1) the extension of the high-pH plume, (2) the time to clog the porosity and (3) the alteration front in the clay barrier (extension and nature of mineralogy changes). This was done through sensitivity analysis on both the concrete composition and the clay mineralogical assemblages, since most published studies considered either only one composition per material or a simplified mineralogy in order to facilitate or reduce calculation times. 1D Cartesian reactive transport models were run in order to point out the importance of (1) the crystallinity of the concrete phases, (2) the type of clayey material and (3) the choice of secondary phases that are allowed to precipitate during the calculations. Two concrete materials with either nanocrystalline or crystalline phases were simulated in contact with two clayey materials (smectite MX80 or Callovo-Oxfordian argillites). Both
A BRDF-BPDF database for the analysis of Earth target reflectances
Breon, Francois-Marie; Maignan, Fabienne
2017-01-01
Land surface reflectance is not isotropic. It varies with the observation geometry, which is defined by the sun and view zenith angles and the relative azimuth. In addition, the reflectance is linearly polarized. The reflectance anisotropy is quantified by the bidirectional reflectance distribution function (BRDF), while its polarization properties are defined by the bidirectional polarization distribution function (BPDF). The POLDER radiometer that flew onboard the PARASOL microsatellite remains the only space instrument to have measured numerous samples of the BRDF and BPDF of Earth targets. Here, we describe a database of representative BRDFs and BPDFs derived from the POLDER measurements. From the huge number of data acquired by the spaceborne instrument over a period of 7 years, we selected a set of targets with high-quality observations. The selection aimed for a large number of observations, free of significant cloud or aerosol contamination, acquired in diverse observation geometries with a focus on the backscatter direction that shows the specific hot-spot signature. The targets are sorted according to the 16-class International Geosphere-Biosphere Programme (IGBP) land cover classification system, and the target selection aims at spatial representativeness within each class. The database thus provides a set of high-quality BRDF and BPDF samples that can be used to assess the typical variability of natural surface reflectances or to evaluate models. It is available freely from the PANGAEA website (doi:10.1594/PANGAEA.864090). In addition to the database, we provide a visualization and analysis tool based on the Interactive Data Language (IDL). It allows an interactive analysis of the measurements and a comparison against various BRDF and BPDF analytical models. The present paper describes the input data, the selection principles, the database format, and the analysis tool.
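A typical use of such a database is evaluating analytical BRDF models against measurements. As an illustration, here is a sketch of the Rahman-Pinty-Verstraete (RPV) form, one common analytical BRDF model, showing the hot-spot enhancement in the backscatter geometry; the parameter values are illustrative, not fitted to POLDER, and sign conventions vary between formulations:

```python
# Evaluate an RPV-style BRDF: a Minnaert-like term, a Henyey-Greenstein
# phase function, and a hot-spot factor that peaks in backscatter.
import numpy as np

def rpv(theta_s, theta_v, phi, rho0=0.1, k=0.8, Theta=-0.1):
    """Sun zenith theta_s, view zenith theta_v, relative azimuth phi
    (phi = 0 is the backscatter half-plane in this convention)."""
    mus, muv = np.cos(theta_s), np.cos(theta_v)
    cos_g = mus * muv + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)
    M = (mus * muv * (mus + muv)) ** (k - 1)           # Minnaert-like term
    F = (1 - Theta**2) / (1 + Theta**2 + 2 * Theta * cos_g) ** 1.5
    G = np.sqrt(np.tan(theta_s)**2 + np.tan(theta_v)**2
                - 2 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    H = 1 + (1 - rho0) / (1 + G)                       # hot-spot factor
    return rho0 * M * F * H

t = np.deg2rad(30.0)
rho_back = rpv(t, t, 0.0)        # backscatter (hot-spot) geometry
rho_forward = rpv(t, t, np.pi)   # forward-scatter geometry
```

Comparing modelled values like these against the database's measured directional samples is exactly the model-evaluation use case the abstract mentions.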
Direct analysis of biological samples by total reflection X-ray fluorescence
International Nuclear Information System (INIS)
Lue M, Marco P.; Hernandez-Caraballo, Edwin A.
2004-01-01
The technique of total reflection X-ray fluorescence (TXRF) is well suited for the direct analysis of biological samples because of its low matrix interferences and simultaneous multi-element nature. Nevertheless, biological organic samples are frequently analysed after digestion procedures. The direct determination of analytes requires a shorter analysis time and less reagent consumption, and simplifies the whole analysis process. On the other hand, biological/clinical samples are often available in minimal amounts, and routine studies require the analysis of a large number of samples. To overcome the difficulties associated with the analysis of organic samples, particularly solid ones, different procedures of sample preparation and calibration for approaching direct analysis have been evaluated: (1) slurry sampling, (2) Compton peak standardization, (3) in situ microwave digestion, (4) in situ chemical modification and (5) direct analysis with internal standardization. Examples of analytical methods developed by our research group are discussed. Some of them have not been previously published, illustrating alternative strategies for coping with the various problems that may be encountered in direct analysis by total reflection X-ray fluorescence spectrometry.
Mas-Abellán, P.; Madrigal, R.; Fimia, A.
2015-05-01
Silver halide emulsions are considered among the most sensitive recording materials for holographic applications. Nonlinear recording effects in holographic reflection gratings recorded on silver halide emulsions have been studied by several authors, with excellent experimental results. In this communication we focus specifically on the effects of refractive index modulation, aiming at high levels of overmodulation. We studied the influence of grating thickness on the overmodulation and its effects on the transmission spectra over a wide exposure range, using ultrafine-grain BB640 emulsions of two thicknesses, thin films (6 μm) and thick films (9 μm), exposed to a single collimated beam from a red He-Ne laser (wavelength 632.8 nm) in the Denisyuk configuration, yielding a spatial frequency of 4990 l/mm recorded in the emulsion. The experimental results show that high overmodulation levels of the refractive index can offer benefits such as high diffraction efficiency (reaching 90%), increased grating bandwidth (close to 80 nm) and hence brighter holograms, or diffraction-spectrum deformation, transforming the spectrum from sinusoidal to an approximately square shape. Based on these results, we demonstrate that the spectra of holographic reflection gratings recorded with overmodulation of the refractive index are formed by the combination of several nonlinear components due to the very high overmodulation. This study is a first step towards a new, simple multiplexing technique based on the use of high-index-modulation reflection gratings.
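For orientation, the link between index modulation and diffraction efficiency can be sketched with Kogelnik's coupled-wave result for an unslanted reflection grating at normal incidence, eta = tanh^2(pi * dn * d / lambda). This ignores absorption, boundary reflections and the strong-overmodulation departures the abstract describes, so treat it as a first-order guide only:

```python
# Kogelnik efficiency of a lossless unslanted reflection grating at normal
# incidence, using the paper's 6 um emulsion and He-Ne wavelength.
import numpy as np

wavelength = 632.8e-9            # He-Ne recording wavelength, m
thickness = 6e-6                 # 6 um emulsion thickness from the paper

def efficiency(dn, d=thickness, lam=wavelength):
    """eta = tanh^2(pi * dn * d / lam)."""
    return np.tanh(np.pi * dn * d / lam) ** 2

dn = np.linspace(0.005, 0.08, 200)   # index modulation sweep (illustrative)
eta = efficiency(dn)
dn_for_90 = dn[np.argmax(eta >= 0.90)]   # smallest dn reaching 90% here
```

The saturating tanh^2 shape is why efficiency plateaus near unity as the modulation is pushed up, while real overmodulated gratings additionally broaden and distort the spectrum.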
Nadeau, Geneviève; Lippel, Katherine
2014-09-10
Emerging fields such as environmental health have been challenged, in recent years, to answer the growing methodological calls for a finer integration of sex and gender in health-related research and policy-making. Through a descriptive examination of 25 peer-reviewed social science papers published between 1996 and 2011, we explore, by examining methodological designs and theoretical standpoints, how the social sciences have integrated gender sensitivity in empirical work on Multiple Chemical Sensitivities (MCS). MCS is a "diagnosis" associated with sensitivities to chronic and low-dose chemical exposures, which remains contested in both the medical and institutional arenas, and is reported to disproportionately affect women. We highlighted important differences between papers that did integrate a gender lens and those that did not. These included characteristics of the authorship, purposes, theoretical frameworks and methodological designs of the studies. Reviewed papers that integrated gender tended to focus on the gender roles and identity of women suffering from MCS, emphasizing personal strategies of adaptation. More generally, terminological confusions in the use of sex and gender language and concepts, such as a conflation of women and gender, were observed. Although some men were included in most of the study samples reviewed, specific data relating to men were underreported in the results, and only one paper discussed issues specifically experienced by men suffering from MCS. Papers that overlooked gender dimensions generally addressed more systemic social issues, such as the dynamics of expertise and the medical codification of MCS, from more consistently outlined theoretical frameworks. The results highlight the place for a critical, systematic and reflexive problematization of gender and for the development of methodological and theoretical tools on how to integrate gender in research designs when looking at both micro and macro social dimensions of environmental
Directory of Open Access Journals (Sweden)
A M Witkowski
Reflectance confocal microscopy (RCM) is an imaging device that permits non-invasive visualization of cellular morphology and has been shown to improve the diagnostic accuracy of dermoscopically equivocal cutaneous lesions. The application of double-reader concordance evaluation of dermoscopy-RCM image sets in retrospective settings, and its potential application to telemedicine evaluation, has not been tested in a large study population. Our aims were to improve the diagnostic sensitivity of RCM image diagnosis using a double-reader concordance evaluation approach, and to reduce mismanagement of equivocal cutaneous lesions in retrospective consultation and telemedicine settings. 1000 combined dermoscopy-RCM image sets were evaluated blindly by 10 readers with advanced training and internship in dermoscopy and RCM evaluation. We compared the sensitivity and specificity of single-reader evaluation versus double-reader concordance evaluation, as well as the effect of diagnostic confidence on lesion management in a retrospective setting. Single-reader evaluation resulted in an overall sensitivity of 95.2% and specificity of 76.3%, with misdiagnosis of 8 melanomas, 4 basal cell carcinomas and 2 squamous cell carcinomas. Combined double-reader evaluation resulted in an overall sensitivity of 98.3% and specificity of 65.5%, with misdiagnosis of 1 in-situ melanoma and 2 basal cell carcinomas. Evaluation of dermoscopy-RCM image sets of cutaneous lesions by single-reader evaluation in retrospective settings is limited by sensitivity levels that may result in potential mismanagement of malignant lesions. Double-reader blind concordance evaluation may improve the sensitivity of diagnosis and management safety. The use of a second check can be implemented in telemedicine settings where expert consultation and second opinions may be required.
Rice, M. S.; Cloutis, E. A.; Bell, J. F., III; Bish, D. L.; Horgan, B. H.; Mertzman, S. A.; Craig, M. A.; Renault, R. W.; Gautason, B.; Mountain, B.
2013-01-01
Hydrated silica-rich materials have recently been discovered on the surface of Mars by the Mars Exploration Rover (MER) Spirit, the Mars Reconnaissance Orbiter (MRO) Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), and the Mars Express Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité (OMEGA) in several locations. Having been interpreted as hydrothermal deposits and aqueous alteration products, these materials have important implications for the history of water on the martian surface. Spectral detections of these materials at visible to near-infrared (Vis-NIR) wavelengths have been based on an H2O absorption feature in the 934-1009 nm region seen with Spirit's Pancam instrument, and on SiOH absorption features in the 2.21-2.26 micron range seen with CRISM. Our work aims to determine how the spectral reflectance properties of silica-rich materials at Vis-NIR wavelengths vary as a function of environmental conditions and formation. Here we present laboratory reflectance spectra of a diverse suite of silica-rich materials (chert, opal, quartz, natural sinters and synthetic silica) under a range of grain sizes and temperature, pressure, and humidity conditions. We find that the H2O content and the form of H2O/OH present in silica-rich materials can have significant effects on their Vis-NIR spectra. Our main findings are that the position of the approx. 1.4 micron OH feature and the symmetry of the approx. 1.9 micron feature can be used to discern between various forms of silica-rich materials, and that the ratio of the approx. 2.2 micron (SiOH) and approx. 1.9 micron (H2O) band depths can aid in distinguishing between silica phases (opal-A vs. opal-CT) and formation conditions (low vs. high temperature). In a case study of hydrated silica outcrops in Valles Marineris, we show that careful application of a modified version of these spectral parameters to orbital near-infrared spectra (e.g., from CRISM and OMEGA) can aid in characterizing the
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.
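The uncertainty-propagation module's core loop can be sketched serially (the toy response function and input distributions are ours; the KAERI toolkit additionally farms such runs out across multiprocessors and workstation networks, which we omit here):

```python
# Monte Carlo uncertainty propagation: sample uncertain inputs, run the
# "simulation" per sample, and summarize the output distribution with the
# statistics used for safety margins (mean, 95th percentile).
import numpy as np

rng = np.random.default_rng(11)
n_runs = 2000

# Hypothetical uncertain inputs: a heat-transfer coefficient and a power level.
htc = rng.normal(1000.0, 50.0, n_runs)       # W/m^2-K
power = rng.uniform(0.95, 1.05, n_runs)      # normalized reactor power

def simulate(htc, power):
    """Toy response standing in for the engineering code's output."""
    return 300.0 + 200.0 * power / (htc / 1000.0)

peak_temp = simulate(htc, power)
mean = float(peak_temp.mean())
p95 = float(np.percentile(peak_temp, 95))    # 95th-percentile safety figure
```

Because each sample's run is independent, this loop parallelizes trivially, which is why the toolkit dispatches the runs across multiple compute resources.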
Energy Technology Data Exchange (ETDEWEB)
Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)
2015-05-15
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.
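The sampling-based workflow the abstract describes can be sketched in a few lines: sample uncertain inputs, propagate them through a model, and rank inputs by their influence on the output. The model, parameter ranges, and correlation-based ranking below are illustrative assumptions, not part of the KAERI toolkit:

```python
import math
import random

def model(k, h):
    # hypothetical simulation response: peak temperature as a function of
    # a conductivity k and a heat-transfer coefficient h (invented values)
    return 600.0 + 50.0 / k + 2000.0 / h

def pearson(xs, ys):
    # Pearson correlation coefficient, used here as a simple sensitivity measure
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
N = 2000
ks = [random.uniform(2.0, 4.0) for _ in range(N)]      # uncertain input 1
hs = [random.uniform(400.0, 600.0) for _ in range(N)]  # uncertain input 2
out = [model(k, h) for k, h in zip(ks, hs)]

# uncertainty propagation: empirical 95th percentile of the output
p95 = sorted(out)[int(0.95 * N)]
# sensitivity ranking: |Pearson correlation| of each input with the output
sens = {"k": abs(pearson(ks, out)), "h": abs(pearson(hs, out))}
```

In a toolkit like the one described, each `model` evaluation would be a full simulation-code run, which is why distributing the samples over multiple processors or workstations pays off.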
ECOS - analysis of sensitivity to database and input parameters
International Nuclear Information System (INIS)
Sumerling, T.J.; Jones, C.H.
1986-06-01
The sensitivity of doses calculated by the generic biosphere code ECOS to parameter changes has been investigated by the authors for the Department of the Environment as part of its radioactive waste management research programme. The sensitivity of results to radionuclide-dependent parameters has been tested by specifying reasonable parameter ranges and performing code runs for best-estimate, upper-bound and lower-bound parameter values. The work indicates that doses are most sensitive to scenario parameters: geosphere input fractions, area of contaminated land, land use and diet, flux of contaminated waters and water use. Recommendations are made based on the results of the sensitivity analysis. (author)
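The best-estimate/upper-bound/lower-bound run strategy used in this study can be illustrated with a toy dose model; the function form and all parameter values below are hypothetical, not taken from ECOS:

```python
# hypothetical dose model: dose scales with the geosphere input fraction and
# inversely with the flux of diluting water (names and values invented)
def annual_dose(input_fraction, water_flux):
    return 1.0e-4 * input_fraction / water_flux

cases = {
    "lower": dict(input_fraction=0.01, water_flux=2.0e6),
    "best":  dict(input_fraction=0.10, water_flux=1.0e6),
    "upper": dict(input_fraction=0.50, water_flux=0.5e6),
}
doses = {name: annual_dose(**p) for name, p in cases.items()}
spread = doses["upper"] / doses["lower"]   # indicates how sensitive doses are
```

The spread between the bounding runs is the crude but robust sensitivity indicator this kind of study reports for each parameter.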
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia
2015-04-22
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
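For readers unfamiliar with the Balloon model, a minimal forward simulation can be sketched as below. The equations follow the commonly cited Buxton/Friston formulation, and all parameter values are typical literature choices assumed for illustration, not the values identified in this paper:

```python
# Minimal Euler simulation of the Balloon hemodynamic model
# (parameter values are typical literature choices, assumed for illustration)
eps, tau_s, tau_f = 0.5, 0.8, 0.4    # neuronal efficacy, signal decay, autoregulation
tau_0, alpha, E0 = 1.0, 0.32, 0.4    # transit time, vessel stiffness, resting O2 extraction
V0 = 0.02                            # resting venous blood volume fraction
k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2

def simulate(u_of_t, T=20.0, dt=1e-3):
    """Integrate the states (s, f, v, q) and return BOLD and flow time series."""
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    ys, fs = [], []
    t = 0.0
    while t < T:
        u = u_of_t(t)
        E = 1.0 - (1.0 - E0) ** (1.0 / f)            # oxygen extraction fraction E(f)
        ds = eps * u - s / tau_s - (f - 1.0) / tau_f  # flow-inducing signal
        df = s                                        # blood inflow
        dv = (f - v ** (1.0 / alpha)) / tau_0         # venous balloon volume
        dq = (f * E / E0 - q * v ** (1.0 / alpha - 1.0)) / tau_0  # deoxyhemoglobin
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        ys.append(V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v)))
        fs.append(f)
        t += dt
    return ys, fs

# blocked design: a single 2 s stimulus starting at t = 0
ys, fs = simulate(lambda t: 1.0 if t < 2.0 else 0.0)
```

A global sensitivity analysis like the one in the paper would repeat such simulations over sampled parameter values and apportion the output variance among the parameters.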
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
International Nuclear Information System (INIS)
Vaurio, J.K.
1985-01-01
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) of input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables.
Nonparametric Bounds and Sensitivity Analysis of Treatment Effects
Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.
2015-01-01
This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
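Approach (i), deriving bounds under minimal assumptions, can be illustrated with the classic worst-case (Manski-style) bounds for a binary outcome: unobserved potential outcomes are replaced by the extremes 0 and 1, which always yields bounds of width one for the average treatment effect. The data below are a toy example, not from the paper:

```python
# toy observational data: (treated, outcome) pairs, outcome in {0, 1}
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 1), (0, 0), (1, 1), (0, 0)]

n = len(data)
n_t = sum(t for t, _ in data)
p_t = n_t / n                                         # share treated
y1_obs = sum(y for t, y in data if t == 1) / n_t      # mean outcome, treated
y0_obs = sum(y for t, y in data if t == 0) / (n - n_t)  # mean outcome, untreated

# worst-case bounds on E[Y(1)] and E[Y(0)]: the unobserved potential
# outcomes are set to the extremes 0 and 1
ey1_lo, ey1_hi = p_t * y1_obs, p_t * y1_obs + (1 - p_t)
ey0_lo, ey0_hi = (1 - p_t) * y0_obs, (1 - p_t) * y0_obs + p_t

ate_lo, ate_hi = ey1_lo - ey0_hi, ey1_hi - ey0_lo     # bounds on the ATE
```

Approach (ii) would instead add an untestable assumption (e.g. about how the unobserved outcomes differ from the observed ones) and vary it to see how the point estimate moves within these bounds.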
Impact Responses and Parameters Sensitivity Analysis of Electric Wheelchairs
Directory of Open Access Journals (Sweden)
Song Wang
2018-06-01
Full Text Available The shock and vibration of electric wheelchairs undergoing road irregularities is inevitable. The road excitation causes an uneven magnetic gap in the motor, and the harmful vibration decreases the recovery rate of rehabilitation patients. To effectively suppress the shock and vibration, this paper introduces the DA (dynamic absorber) to the electric wheelchair. Firstly, a vibration model of the human-wheelchair system with the DA was created. Models of the road excitation for a wheelchair going up a step and going down a step were proposed, respectively. To reasonably evaluate the impact level of the human-wheelchair system undergoing the step–road transition, evaluation indexes were given. Moreover, the created vibration model and the road–step model were validated via tests. Then, to reveal the vibration suppression performance of the DA, the impact responses and the amplitude-frequency characteristics were numerically simulated and compared. Finally, a sensitivity analysis of the impact responses to the tire static radius r and the characteristic parameters was carried out. The results show that the DA can effectively suppress the shock and vibration of the human-wheelchair system. Moreover, for the electric wheelchair going up a step and going down a step, there are some differences in the vibration behaviors.
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia; Laleg-Kirati, Taous-Meriem
2015-01-01
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge, choosing certain paradigms, and completing the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
Sensitivity analysis on parameters and processes affecting vapor intrusion risk
Picone, Sara
2012-03-30
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion. © 2012 SETAC.
Sensitivity study of CFD turbulent models for natural convection analysis
International Nuclear Information System (INIS)
Yu sun, Park
2007-01-01
The buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can reduce costs and effort remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Using the commercial CFD code FLUENT, various turbulence models were applied to the turbulent flow. Results from each CFD model are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results once the grid is refined beyond a certain y+ value, where y+ is defined as y+ = ρ·u·y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ being respectively the fluid density and the fluid viscosity; -) the K-ε models show a different flow characteristic from the K-ω models or from the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation.
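The y+ parameter discussed above is a one-line calculation from the abstract's own definition; the sketch below uses air-like illustrative values, not the cavity conditions studied in the paper:

```python
# y+ for the first grid cell, using the definition in the abstract:
# y+ = rho * u_tau * y / mu (all values below are illustrative, air-like)
rho = 1.2      # fluid density, kg/m^3
mu = 1.8e-5    # dynamic viscosity, Pa*s
u_tau = 0.05   # wall friction velocity, m/s
y = 3.0e-4     # normal distance from the cell centre to the wall, m

y_plus = rho * u_tau * y / mu
```

As a common rule of thumb, near-wall-resolving turbulence models want the first cell at y+ on the order of 1, while wall-function approaches tolerate much larger values; hence the abstract's conclusion that y+ drives the choice of model.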
Multi-scale sensitivity analysis of pile installation using DEM
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas, Eurípedes do Amaral, Jr.; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Due to the simplified approach used by the discrete element method (DEM) to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool to investigate these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted to investigate the effects of different numerical approaches; the results indicate that the method of installation and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior of a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of reaching force equilibrium before simulating any load test.
Sensitivity Analysis for the CLIC Damping Ring Inductive Adder
Holma, Janne
2012-01-01
The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, ultra-low emittance beam with high bunch charge, necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must be 160 ns duration, 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02 %. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the value of both individual and groups of circuit compon...
Fourier X-ray line shape analysis of lattice defects from a single reflection
International Nuclear Information System (INIS)
Misra, N.K.; Bhanumurthy, K.
1981-01-01
A method of single-reflection Fourier analysis is described, taking into account the fact that the rms strain (averaged over a distance) is not independent of the averaging distance. Following the procedure of N.K. Misra and T.B. Ghosh (1976) and considering the initial slopes dA(L)/dL of the A(L) against L curves (A(L) being the L-th order Fourier coefficient), the effective size of the coherently diffracting domains and the rms strain in them are determined. The results of this analysis for pure Ti and Ag-3.55% Ga, Ag-15% In and Cu-12.46% Ge alloys compare fairly well with those obtained from different multiple-reflection techniques. (author)
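The initial-slope idea can be illustrated with a purely size-broadened profile, for which the Fourier coefficients fall off linearly, A(L) = 1 − L/D, so the initial slope of A(L) against L recovers the effective domain size D. The numbers below are synthetic, not the Ti or alloy data from the paper:

```python
# size-broadening only: A(L) = 1 - L/D for L < D (illustrative, D = 50 nm)
D_true = 50.0
Ls = [0.0, 2.0, 4.0, 6.0, 8.0]
A = [max(0.0, 1.0 - L / D_true) for L in Ls]

# initial slope of the A(L) vs L curve, estimated from the first two points
slope0 = (A[1] - A[0]) / (Ls[1] - Ls[0])
D_est = -1.0 / slope0   # effective coherently diffracting domain size
```

With real data, strain broadening also curves A(L), which is why the paper's point that the rms strain depends on the averaging distance matters for a single-reflection analysis.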
Directory of Open Access Journals (Sweden)
Michal Vondra
2009-01-01
Full Text Available The application of methods based on measurements of photosynthesis efficiency is now more and more popular, used not only for the evaluation of the efficiency of herbicides but also for the estimation of their phytotoxicity to the cultivated crop. These methods also make it possible to determine differences in the sensitivity of cultivars and/or hybrids to individual herbicides. The advantage of these methods consists above all in the speed and accuracy of measuring. In a field experiment, the sensitivity of several selected grain maize hybrids (EDENSTAR, NK AROBASE, NK LUGAN, LG 33.30 and NK THERMO) to the herbicide CALLISTO 480 SC + ATPLUS 463 was tested for a period of three years. The sensitivity to a registered dose of 0.25 l . ha−1 + 0.5 % was measured by means of the apparatus PS1 meter, which measures the reflected radiation. Measurements of the sensitivity of hybrids were performed on the 2nd, 3rd, 4th, 5th and 8th day after the application of the tested herbicide, i.e. in the growing stage of the 3rd–5th leaf. Plant material was harvested using a small-plot combine harvester SAMPO 2010. Samples were weighed and converted to the yield with 15 % of moisture in grain DM. The obtained three-year results did not demonstrate differences in sensitivity of the tested hybrids to the registered dose of the herbicide CALLISTO 480 SC + ATPLUS 463 (i.e. 0.25 l . ha−1 + 0.5 %). Recorded results indicated that for the majority of tested hybrids the most critical were the 4th and the 5th day after the application; on these days the average PS1 values were the highest of all. In years 2005 and 2007, none of the tested hybrids exceeded the limit value 15 (which indicates a certain decrease in the efficiency of photosynthesis). Although in 2006 three of the tested hybrids showed a certain decrease in photosynthetic activity (i.e. EDENSTAR and NK AROBASE on the 3rd day and NK LUGAN on the 2nd–4th day after the application), no visual symptoms
International Nuclear Information System (INIS)
Kang, Dong Gu
2017-01-01
Highlights: • The nodalization of APR-1400 was modified to reflect the characteristic of the upper region temperature. • The effect of nodalization and temperature of the reactor upper region on LBLOCA consequences was evaluated. • The modification of nodalization is an essential prerequisite in APR-1400 LBLOCA analysis. - Abstract: In best estimate (BE) calculation, the definition of system nodalization is an important step influencing the prediction accuracy for specific thermal-hydraulic phenomena. The upper region of the reactor is defined as the region of the upper guide structure (UGS) and upper dome. It has been assumed that the temperature of the upper region is close to the average temperature in most large break loss of coolant accident (LBLOCA) analysis cases. However, it was recently found, through a review of detailed design data, that the temperature of the upper region of the APR-1400 reactor might be a little lower than or similar to the hot leg temperature. In this study, the nodalization of APR-1400 was modified to reflect this characteristic of the upper region temperature, and the effect of nodalization and temperature of the reactor upper region on LBLOCA consequences was evaluated by sensitivity analysis including a best estimate plus uncertainty (BEPU) calculation. In the base-case calculation with the modified version, the peak cladding temperature (PCT) in the blowdown phase became higher and the blowdown quenching (or cooling) was significantly deteriorated compared to the original case; as a result, the cladding temperature in the reflood phase became higher and the final quenching was also delayed. In addition, thermal-hydraulic parameters were compared and analyzed to investigate the effect of the change of the upper region on cladding temperature. In the BEPU analysis, the 95th percentile PCT used in current regulatory practice was increased due to the modification of the upper region nodalization, and it occurred in the reflood phase, unlike in the original case.
Applications of total reflection X-ray fluorescence in multi-element analysis
International Nuclear Information System (INIS)
Michaelis, W.; Prange, A.; Knoth, J.
1985-01-01
Although Total Reflection X-Ray Fluorescence Analysis (TXRF) became available for practical applications and routine measurements only a few years ago, the number of programmes that make use of this method is increasing rapidly. The scope of work spans environmental research and monitoring, mineralogy, mineral exploration, oceanography, biology, medicine and biochemistry. The present paper gives a brief survey of these applications and summarizes some of them which are typical for quite different matrices. (orig.)
International Nuclear Information System (INIS)
Rieder, R.; Wobrauschek, P.; Ladisich, W.; Streli, C.; Aiginger, H.; Garbe, S.; Gaul, G.; Knoechel, A.; Lechtenberg, F.
1995-01-01
To achieve the lowest detection limits in total reflection X-ray fluorescence analysis (TXRF), synchrotron radiation has been monochromatized by a multilayer structure to obtain a relatively broad energy band, compared to Bragg single crystals, for efficient excitation. The energy has been set to 14 keV, 17.5 keV, 31 keV and about 55 keV. Detection limits of 20 fg and 150 fg have been achieved for Sr and Cd, respectively. ((orig.))
Rising to the challenges-Reflections on Future-oriented Technology Analysis
Georghiou, Luke; Cassingena Harper, Jennifer
2013-01-01
Drawing upon the presentations made at the fourth conference on Future-oriented Technology Analysis, this essay reflects on the implications of the current period of instability and discontinuity for the practice of FTA or foresight. In the past the demand environment for foresight on research and innovation policy favoured application to priority-setting and articulation of demand. New tendencies include a heightened search for breakthrough science and a focus on grand societal challenges. B...
The teacher is a facilitator: Reflecting on ESL teacher beliefs through metaphor analysis
Directory of Open Access Journals (Sweden)
Thomas S. C. Farrell
2016-01-01
Full Text Available Metaphors offer a lens through which language teachers express their understanding of their work. Metaphor analysis can be a powerful reflective tool for expressing meanings that underpin ways of thinking about teaching and learning English as a second/foreign language. Through reflecting on their personal teaching metaphors, teachers become more aware of the beliefs that underpin their work. This paper reports the reflections on the prior beliefs of three experienced ESL teachers in Canada through the use of metaphor analysis. The paper attempts to explore the prior beliefs of the three experienced ESL teachers in Canada through metaphor analysis by using the Oxford et al. (1998) framework as a theoretical lens through which to gain understanding of the use and meaning of these metaphors. Results indicated that all three teachers used a total of 94 metaphors throughout the period of the group discussions and interviews, and that the metaphors used most were those related to learner-centered growth, followed by social order, then social reform.
Considering Respiratory Tract Infections and Antimicrobial Sensitivity: An Exploratory Analysis
Directory of Open Access Journals (Sweden)
Amin, R.
2009-01-01
Full Text Available This study was conducted to observe the sensitivity and resistance status of antibiotics for respiratory tract infection (RTI). Throat swab culture and sensitivity reports of 383 patients revealed the following sensitivity profiles: amoxycillin (7.9%), penicillin (33.7%), ampicillin (36.6%), co-trimoxazole (46.5%), azithromycin (53.5%), erythromycin (57.4%), cephalexin (69.3%), gentamycin (78.2%), ciprofloxacin (80.2%), cephradine (81.2%), ceftazidime (93.1%), ceftriaxone (93.1%). Sensitivity to cefuroxime was reported in 93.1% of cases. Resistance was found with amoxycillin (90.1%), ampicillin (64.1%), penicillin (61.4%), co-trimoxazole (43.6%), erythromycin (39.6%), and azithromycin (34.7%). Cefuroxime demonstrates a higher level of sensitivity than other antibiotics, which supports its consideration for patients with upper RTI.
Sensitivity analysis: Theory and practical application in safety cases
International Nuclear Information System (INIS)
Kuhlmann, Sebastian; Plischke, Elmar; Roehlig, Klaus-Juergen; Becker, Dirk-Alexander
2014-01-01
The projects described here aim at deriving an adaptive and stepwise approach to sensitivity analysis (SA). Since the appropriateness of a single SA method strongly depends on the nature of the model under study, a top-down approach (from simple to sophisticated methods) is suggested. If simple methods explain the model behaviour sufficiently well then there is no need for applying more sophisticated ones and the SA procedure can be considered complete. The procedure is developed and tested using a model for a LLW/ILW repository in salt. Additionally, a new model for the disposal of HLW in rock salt will be available soon for SA studies within the MOSEL/NUMSA projects. This model will address special characteristics of waste disposal in undisturbed rock salt, especially the case of total confinement, resulting in a zero release which is indeed the objective of radioactive waste disposal. A high proportion of zero-output realisations causes many SA methods to fail, so special treatment is needed and has to be developed. Furthermore, the HLW disposal model will be used as a first test case for applying the procedure described above, which was and is being derived using the LLW/ILW model. How to treat dependencies in the input, model conservatism and time-dependent outputs will be addressed in the future project programme: - If correlations or, more generally, dependencies between input parameters exist, the question arises about the deeper meaning of sensitivity results in such cases: A strict separation between inputs, internal states and outputs is no longer possible. Such correlations (or dependencies) might have different reasons. In some cases correlated input parameters might have a common physically (well-)known fundamental cause but there are reasons why this fundamental cause cannot or should not be integrated into the model, i.e. the cause might generate a very complex model which cannot be calculated in appropriate time. In other cases the correlation may
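The zero-release problem mentioned above is often handled by splitting the analysis into two parts: the probability of a nonzero output, and the magnitude of the output given that it is nonzero. The sketch below uses an invented release model purely to illustrate the two-part treatment:

```python
import random
import statistics

random.seed(7)

def release(seal_thickness, brine_flow):
    # hypothetical repository model: total confinement (zero release) unless
    # the seal is thin and the brine flow is high (invented for illustration)
    if seal_thickness > 0.6 or brine_flow < 0.3:
        return 0.0
    return brine_flow * (0.6 - seal_thickness)

N = 10000
thick = [random.random() for _ in range(N)]
flow = [random.random() for _ in range(N)]
out = [release(t, f) for t, f in zip(thick, flow)]

# two-part treatment of the zero-inflated output
nonzero = [y for y in out if y > 0.0]
p_nonzero = len(nonzero) / N              # part 1: probability of any release
mag_given = statistics.fmean(nonzero)     # part 2: mean magnitude given release
```

Sensitivity measures can then be computed separately for each part, avoiding the failure mode that a dominant mass of zero outputs causes for many standard SA methods.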
van Leeuwen, René; Tiesinga, Lucas J; Jochemsen, Henk; Post, Doeke
2009-05-01
This study describes the learning effects of thematic peer-review discussion groups (Hendriksen, 2000. Begeleid intervisie model: collegiale advisering en probleemoplossing. Nelissen, Baarn) on developing nursing students' competence in providing spiritual care. It also discusses the factors that might influence the learning process. The method of peer review is a form of reflective learning based on the theory of experiential learning (Kolb, 1984. Experiential learning: experience as the source of learning and development. Englewood Cliffs, New Jersey: Prentice Hall). It was part of an educational programme on spiritual care in nursing for third-year undergraduate nursing students from two nursing schools in the Netherlands. Reflective journals (n=203) kept by students throughout the peer-review process were analysed qualitatively. The analysis shows that students reflect on spirituality in the context of personal experiences in nursing practice. In addition, they discuss the nursing process and organizational aspects of spiritual care. The results show that the first two phases in the experiential learning cycle appear prominently; these are 'inclusion of actual experience' and 'reflecting on this experience'. The phases of 'abstraction of experience' and 'experimenting with new behaviour' are less evident. We will discuss possible explanations for these findings according to factors related to education, the students and the tutors, and make recommendations for follow-up research.
Medina, José M; Díaz, José A; Vukusic, Pete
2015-04-20
Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.
Directory of Open Access Journals (Sweden)
Ol’ga Evgen’evna Kovrizhnykh
2016-12-01
Full Text Available Transaction costs emerge in different types of logistics activities and influence the material flow and the accompanying financial and information flows; due to this fact, information support and assessment are important tasks for the enterprise. The paper analyzes transaction costs in logistics for automotive manufacturers; according to the analysis, the level of these costs in any functional area of “logistics supply” ranges from 1.5 to 20%. These are only the official figures of transaction costs of enterprises, which do not take implicit costs into consideration. Despite the growing interest in transaction costs in logistics over the last fifteen years, this topic is covered rather poorly in Russian literature; the definition of “transaction costs” is unclear, and there is no technique for their information reflection and assessment. We have developed methods for the information reflection of transaction costs that can be used by automotive enterprises. Each enterprise will have an opportunity to choose the most suitable technique for the information reflection of transaction costs or to compare the level of transaction costs when using different techniques. Application of techniques for the information reflection of transaction costs allows enterprises to increase profits by optimizing and reducing costs and using their assets more effectively, to identify possible ways to improve the cost parameters of their performance, to improve their efficiency and productivity, and to cut out unnecessary or duplicate activities and optimize the number of staff involved in a particular activity.
Angle-domain Migration Velocity Analysis using Wave-equation Reflection Traveltime Inversion
Zhang, Sanzong
2012-11-04
The main difficulty with an iterative waveform inversion is that it tends to get stuck in a local minimum associated with the waveform misfit function. This is because the waveform misfit function is highly non-linear with respect to changes in the velocity model. To reduce this nonlinearity, we present a reflection traveltime tomography method based on the wave equation which enjoys a more quasi-linear relationship between the model and the data. A local crosscorrelation of the windowed downgoing direct wave and the upgoing reflection wave at the image point yields the lag time that maximizes the correlation. This lag time represents the reflection traveltime residual that is back-projected into the earth model to update the velocity in the same way as wave-equation transmission traveltime inversion. The residual moveout analysis in the angle-domain common image gathers provides a robust estimate of the depth residual, which is converted to the reflection traveltime residual for the velocity inversion. We present numerical examples to demonstrate its efficiency in inverting seismic data for complex velocity models.
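The lag-picking step described above — crosscorrelate the windowed direct wave with the reflection and take the lag that maximizes the correlation — can be sketched on synthetic traces. The pulse shape and the imposed delay are illustrative, not from the paper's examples:

```python
import math

def best_lag(a, b, max_lag):
    """Lag (in samples) that maximizes the crosscorrelation of traces a and b."""
    def corr(lag):
        s = 0.0
        for i, ai in enumerate(a):
            j = i + lag
            if 0 <= j < len(b):
                s += ai * b[j]
        return s
    return max(range(-max_lag, max_lag + 1), key=corr)

def ricker(t, f=0.08):
    # Ricker-like pulse used as a synthetic wavelet
    x = (math.pi * f * t) ** 2
    return (1.0 - 2.0 * x) * math.exp(-x)

# windowed direct wave, and a weaker reflection delayed by 12 samples
n = 200
direct = [ricker(i - 60) for i in range(n)]
reflected = [0.6 * ricker(i - 72) for i in range(n)]

lag = best_lag(direct, reflected, max_lag=30)   # traveltime residual in samples
```

The recovered lag is the traveltime residual that the method back-projects into the earth model to update the velocity.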
Uncertainty and sensitivity analysis in a Probabilistic Safety Analysis level-1
International Nuclear Information System (INIS)
Nunez Mc Leod, Jorge E.; Rivera, Selva S.
1996-01-01
A methodology for sensitivity and uncertainty analysis applicable to a Level-1 Probabilistic Safety Assessment is presented. The work covers: the correct association of distributions to parameters, the weighting and qualification of expert opinions, the generation of samples according to sample size, and the study of the relationships between system variables and system response. A series of statistical-mathematical techniques is recommended along the development of the analysis methodology, as well as different graphical visualizations for the control of the study. (author)
Economic impact analysis for global warming: Sensitivity analysis for cost and benefit estimates
International Nuclear Information System (INIS)
Ierland, E.C. van; Derksen, L.
1994-01-01
Proper policies for the prevention or mitigation of the effects of global warming require profound analysis of the costs and benefits of alternative policy strategies. Given the uncertainty about the scientific aspects of the process of global warming, in this paper a sensitivity analysis for the impact of various estimates of costs and benefits of greenhouse gas reduction strategies is carried out to analyze the potential social and economic impacts of climate change
Directory of Open Access Journals (Sweden)
Lucie Hajdušková
2010-12-01
The paper focuses on linking pedagogical theory to teaching practice with the aim of improving the quality of education through its analytic reflection by teachers or student teachers. The text deals with an original method of didactic reflection: concept analysis. Concept analysis is characterized as a methodical instrument for reflection on and evaluation of instruction. It is based on the investigation of didactic content transformation in educational processes and is oriented toward a creative approach and experiential learning in instruction. The explanation uses the results of research (2009–2010) on the state of didactic skills and pedagogical content knowledge of art education teachers during their didactic training.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
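The approach described here, a Monte Carlo simulation that bootstraps the underlying data to propagate sampling uncertainty through a decision model, can be sketched with a toy example. All of the numbers below (cure rate, cost distribution, willingness-to-pay, and the net-benefit response) are invented for illustration and are not the authors' H. pylori model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patient-level data behind two model inputs: cure indicator
# for eradication therapy and cost per treated patient (illustrative).
cured = rng.binomial(1, 0.85, size=200)          # 1 = cured
costs = rng.gamma(shape=4.0, scale=50.0, size=200)

def incremental_net_benefit(cure_sample, cost_sample, wtp=1000.0):
    """Net benefit of treatment vs. no treatment at a given
    willingness-to-pay per cure (toy decision model)."""
    return wtp * cure_sample.mean() - cost_sample.mean()

# Probabilistic sensitivity analysis: bootstrap the data, re-evaluate the
# decision model on each resample, and summarize the resulting distribution.
n_boot = 2000
inb = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, len(cured), len(cured))
    jdx = rng.integers(0, len(costs), len(costs))
    inb[b] = incremental_net_benefit(cured[idx], costs[jdx])

print("mean INB:", inb.mean())
print("P(INB > 0):", (inb > 0).mean())
```

The bootstrap avoids committing to a theoretical distribution for each input, which is exactly the advantage the abstract highlights.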
Sorption of redox-sensitive elements: critical analysis
International Nuclear Information System (INIS)
Strickert, R.G.
1980-12-01
The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the expected geologic repository environment, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.
Pan, Xin; Cao, Chen; Yang, Yingbao; Li, Xiaolong; Shan, Liangliang; Zhu, Xi
2018-04-01
The land surface temperature (LST) derived from thermal infrared satellite images is a meaningful variable in many remote sensing applications. At present, however, the spatial resolution of satellite thermal infrared sensors is too coarse to meet the needs of many of these applications. In this study, an LST image was downscaled with a random forest model relating LST to multiple predictors in an arid region with an oasis-desert ecotone. The proposed downscaling approach was evaluated using LST derived from the MODIS LST product of Zhangye City in the Heihe Basin. The primary result is that the distribution of downscaled LST matched that of the oasis and desert ecosystems. Sensitivity analysis showed that the most sensitive factors for LST downscaling were the modified normalized difference water index (MNDWI)/normalized multi-band drought index (NMDI), soil adjusted vegetation index (SAVI)/shortwave infrared reflectance (SWIR)/normalized difference vegetation index (NDVI), normalized difference building index (NDBI)/SAVI, and SWIR/NDBI/MNDWI/NDWI for regions of water, vegetation, building, and desert, with LST variations (at most) of 0.20/−0.22 K, 0.92/0.62/0.46 K, 0.28/−0.29 K, and 3.87/−1.53/−0.64/−0.25 K under ±0.02 predictor perturbations, respectively.
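The two-step scheme implied here (train a random forest between coarse LST and coarse-aggregated predictors, then apply it to the fine-resolution predictors, and probe sensitivity with small predictor perturbations) can be sketched with synthetic data. Everything below is an assumption for illustration: the predictors, the linear "true" LST relation, and the block-aggregation stand in for real MODIS imagery, and scikit-learn's `RandomForestRegressor` stands in for the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic fine-resolution predictors (stand-ins for NDVI, MNDWI, ...).
n_fine = 64 * 64
ndvi = rng.uniform(0, 0.8, n_fine)
mndwi = rng.uniform(-0.5, 0.5, n_fine)
lst_fine_true = 310.0 - 12.0 * ndvi - 6.0 * mndwi + rng.normal(0, 0.3, n_fine)

def aggregate(x, block=16):
    """Average consecutive blocks of fine pixels into one 'coarse' pixel."""
    return x.reshape(-1, block).mean(axis=1)

X_coarse = np.column_stack([aggregate(ndvi), aggregate(mndwi)])
y_coarse = aggregate(lst_fine_true)            # the observed coarse LST

# 1) Train at coarse scale, 2) predict at fine scale.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_coarse, y_coarse)
lst_fine_pred = rf.predict(np.column_stack([ndvi, mndwi]))

# Perturbation-style sensitivity, as in the abstract: nudge one predictor
# by +0.02 and record the resulting LST change.
delta = 0.02
shift = rf.predict(np.column_stack([ndvi + delta, mndwi])) - lst_fine_pred
print("mean LST change for +0.02 NDVI:", shift.mean())
```

Because the synthetic LST decreases with NDVI, the mean shift under a positive NDVI perturbation should come out negative, mirroring the sign analysis reported in the abstract.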
Analysis of Sea Ice Cover Sensitivity in Global Climate Model
Directory of Open Access Journals (Sweden)
V. P. Parhomenko
2014-01-01
The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the model ice cover, as well as its sensitivity to some model parameters, and to characterize the atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m to 0.1 m; however, at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness when the albedo of clear sea ice and of snow is decreased by 0.05 relative to the basic variant shows an ice thickness reduction in the range of 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean the ice thickness is reduced by up to 1 m; however, there is also an area of some increase of the ice layer, basically in the range up to 0.2 m (Beaufort Sea). A 0.05 decrease of the sea ice snow albedo leads to a reduction of average ice thickness of approximately 0.2 m, and this value depends only slightly on the season. In a further experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced by 0.2 m to 0.35 m, with small seasonal changes of this value. The numerical experiments show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters.
Infrared Reflectance Analysis of Epitaxial n-Type Doped GaN Layers Grown on Sapphire.
Tsykaniuk, Bogdan I; Nikolenko, Andrii S; Strelchuk, Viktor V; Naseka, Viktor M; Mazur, Yuriy I; Ware, Morgan E; DeCuir, Eric A; Sadovyi, Bogdan; Weyher, Jan L; Jakiela, Rafal; Salamo, Gregory J; Belyaev, Alexander E
2017-12-01
Infrared (IR) reflectance spectroscopy is applied to study a Si-doped multilayer n+/n0/n+-GaN structure grown on a GaN buffer with a GaN-template/sapphire substrate. Analysis of the investigated structure by photo-etching, SEM, and SIMS showed the existence of an additional layer, with drastically different Si and O doping levels, located between the epitaxial GaN buffer and the template. Simulation of the experimental reflectivity spectra was performed over a wide frequency range. It is shown that modeling the IR reflectance spectrum using the 2 × 2 transfer matrix method, with the additional layer included in the analysis, yields the best fit of the experimental spectrum, giving GaN layer thicknesses in good agreement with the SEM and SIMS data. The spectral dependence of the plasmon-LO-phonon coupled modes for each GaN layer is obtained from the spectral dependence of the dielectric function; the deviation from the nominal Si doping level is attributed to compensation effects by acceptor states.
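The 2 × 2 transfer matrix modeling mentioned here can be illustrated, at normal incidence and for non-dispersive media, with the standard characteristic-matrix (Abelès) formalism. The refractive indices and thicknesses below are arbitrary stand-ins, not the GaN/sapphire dielectric functions fitted in the paper.

```python
import numpy as np

def reflectance(n_layers, d_layers, n_inc, n_sub, lam):
    """Normal-incidence reflectance of a layer stack via 2x2 characteristic
    matrices. n_layers/d_layers: refractive indices and thicknesses of the
    films (thicknesses in the same units as the wavelength lam)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        phi = 2 * np.pi * n * d / lam          # phase thickness of the film
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    # (B, C) = M @ (1, n_sub); amplitude reflectance r = (n_inc*B - C)/(n_inc*B + C)
    B = M[0, 0] + M[0, 1] * n_sub
    C = M[1, 0] + M[1, 1] * n_sub
    r = (n_inc * B - C) / (n_inc * B + C)
    return abs(r) ** 2

# Sanity checks: with no film the Fresnel reflectance of the bare interface
# is recovered, and a quarter-wave film of n ~ sqrt(n_sub) suppresses it.
print(reflectance([], [], 1.0, 2.4, 10.0))                        # bare substrate
print(reflectance([1.55], [10.0 / (4 * 1.55)], 1.0, 2.4, 10.0))   # quarter-wave film
```

Fitting measured spectra, as in the paper, amounts to adjusting the layer parameters (and a frequency-dependent dielectric function) until this computed reflectance matches the data.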
Sensitivity analysis of hybrid power systems using Power Pinch Analysis considering Feed-in Tariff
International Nuclear Information System (INIS)
Mohammad Rozali, Nor Erniza; Wan Alwi, Sharifah Rafidah; Manan, Zainuddin Abdul; Klemeš, Jiří Jaromír
2016-01-01
Feed-in Tariff (FiT) has been one of the most effective policies in accelerating the development of renewable energy (RE) projects. The amount of RE electricity in the FiT purchase agreement is an important decision that has to be made by RE project developers, who must consider various crucial factors associated with RE system operation as well as its stochastic nature. The presented work aims to assess the sensitivity and profitability of a hybrid power system (HPS) in cases of RE system failure or shutdown. The amount of RE electricity for the FiT purchase agreement in various scenarios was determined using a novel tool called the On-Grid Problem Table, based on Power Pinch Analysis (PoPA). A sensitivity table has also been introduced to assist planners in evaluating the effects of RE system failures on the profitability of the HPS; it offers insight into the variation of the RE electricity. The sensitivity analysis of various possible scenarios shows that the RE projects can still provide financial benefits via the FiT, despite the losses incurred from the penalty levied. - Highlights: • A Power Pinch Analysis (PoPA) tool to assess the economics of an HPS with FiT. • The new On-Grid Problem Table for targeting the available RE electricity for FiT sale. • A sensitivity table showing the effect of RE electricity changes on the HPS profitability.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro, María; Le Maître, Olivier; Knio, Omar
2016-01-01
sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity
Spatiotemporal sensitivity analysis of vertical transport of pesticides in soil
Environmental fate and transport processes are influenced by many factors. Simulation models that mimic these processes often have complex implementations, which can lead to over-parameterization. Sensitivity analyses are subsequently used to identify critical parameters whose un...
Use of thermal neutron reflection method for chemical analysis of bulk samples
International Nuclear Information System (INIS)
Papp, A.; Csikai, J.
2014-01-01
Microscopic (σβ) and macroscopic (Σβ) reflection cross-sections of thermal neutrons averaged over bulk samples are given as a function of thickness (z). The σβ values are additive even for bulk samples in the z = 0.5–8 cm interval, and so the σβ,mol(z) function could be given for hydrogenous substances, including some illicit drugs, explosives and hiding materials of ~1000 cm³ dimensions. The calculated excess counts agree with the measured R(z) values. For the identification of concealed objects and chemical analysis of bulky samples, different neutron methods need to be used simultaneously. - Highlights: • Check the proposed analytical expression for the description of the flux. • Determination of the reflection cross-sections averaged over bulk samples. • Data rendered to estimate the excess counts for various materials
Naber, Jessica L; Hall, Joanne; Schadler, Craig Matthew
2014-09-01
This study sought to identify characteristics of clinically situated critical thinking in nursing students' reflections, originally part of a study guided by Richard Paul's model of critical thinking. Nurses are expected to apply critical thinking in all practice situations to improve health outcomes, including patient safety and satisfaction. In a previous study, Paul's model of critical thinking was used to develop questions for reflective writing assignments. Within that study, 30 nursing students completed six open-ended narratives of nurse-patient clinical encounters during an 8-week period. Improvements were seen in critical thinking scores after the intervention. This article reports the qualitative analysis of the content of six open-ended narratives. Six overarching themes were identified and combined into a tentative conceptual model. Faculty's understanding of the characteristics of critical thinking in the context of clinical education will help them to teach and evaluate students' progress and competencies for future practice.
International Nuclear Information System (INIS)
Esaka, Fumitaka; Esaka, Konomi T.; Magara, Masaaki; Sakurai, Satoshi; Usuda, Shigekazu; Watanabe, Kazuo
2006-01-01
The technique of single particle transfer was applied to quantitative analysis with total-reflection X-ray fluorescence (TXRF) spectrometry. The technique was evaluated by performing quantitative analysis of individual Cu particles with diameters between 3.9 and 13.2 μm. Direct quantitative analysis of a Cu particle transferred onto a Si carrier gave a discrepancy between measured and calculated Cu amounts due to absorption of the incident and fluorescent X-rays within the particle. After correction for these absorption effects, the Cu amounts in individual particles could be determined with a deviation within 10.5%. When the Cu particles were dissolved in HNO₃ solution prior to the TXRF analysis, the deviation improved to within 3.8%; in this case, no correction for absorption effects was needed for quantification.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-08-15
It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. The tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, and by handling systems with discontinuous events, all through an intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
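One of the global methods listed here, the partial rank correlation coefficient (PRCC), is simple enough to sketch directly: rank-transform inputs and output, regress the other inputs out of both the input of interest and the output, then correlate the residuals. This is a generic numpy implementation on a toy model, not SBML-SAT's own code.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    def ranks(a):
        return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr = ranks(X)
    yr = ranks(y.reshape(-1, 1)).ravel()
    k = X.shape[1]
    out = np.empty(k)
    for j in range(k):
        # Regress the *other* ranked inputs out of input j and of the output.
        others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        beta_x, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
        beta_y, *_ = np.linalg.lstsq(others, yr, rcond=None)
        rx = Xr[:, j] - others @ beta_x
        ry = yr - others @ beta_y
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

# Toy "model": strongly increasing in x0, independent of x1, decreasing in x2.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))
y = 10 * X[:, 0] ** 2 - 2 * X[:, 2] + rng.normal(0, 0.5, 500)
print(prcc(X, y))  # large positive for x0, near zero for x1, negative for x2
```

Because PRCC works on ranks, it captures the monotone but nonlinear x0 dependence that a plain linear correlation would understate.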
Phalen, R N; Que Hee, Shane S
2007-02-01
The aim of this study was to investigate the surface variability of 13 powder-free, unlined, and unsupported nitrile rubber gloves using attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectrophotometry at key wavelengths for analysis of captan contamination. The within-glove, within-lot, and between-lot variability was measured at 740, 1124, 1252, and 1735 cm⁻¹, the characteristic captan reflectance minima. Three glove brands were assessed after conditioning overnight at relative humidity (RH) values ranging from 2 ± 1 to 87 ± 4% and temperatures ranging from −8.6 ± 0.7 to 59.2 ± 0.9 °C. For all gloves, 1735 cm⁻¹ provided the lowest background absorbance and greatest potential sensitivity for captan analysis on the outer glove surface: absorbances ranged from 0.0074 ± 0.0005 (Microflex) to 0.0195 ± 0.0024 (SafeSkin); average within-glove coefficients of variation (CV) ranged from 2.7% (Best, range 0.9–5.3%) to 10% (SafeSkin, 1.2–17%); within-glove CVs greater than 10% occurred for one brand (SafeSkin); within-lot CVs ranged from 2.8% (Best N-Dex) to 28% (SafeSkin Blue); and between-lot variation was statistically significant (p ≤ 0.05) for all but two SafeSkin lots. The RH had variable effects dependent on wavelength, being minimal at 1735, 1252, and 1124 cm⁻¹ and highest at 3430 cm⁻¹ (O-H stretch region). There was no significant effect of temperature conditioning. Substantial within-glove, within-lot, and between-lot variability was observed. Thus, surface analysis using ATR-FT-IR must treat glove brands and lots as different. ATR-FT-IR proved to be a useful real-time analytical tool for measuring glove variability, detecting surface humidity effects, and choosing selective and sensitive wavelengths for analysis of nonvolatile surface contaminants.
Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs
R.A. Zuidwijk (Rob)
2005-01-01
Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an
Sensitivity analysis of dynamic characteristic of the fixture based on design variables
International Nuclear Information System (INIS)
Wang Dongsheng; Nong Shaoning; Zhang Sijian; Ren Wanfa
2002-01-01
The research deals with the sensitivity analysis of structural natural frequencies with respect to structural design parameters. A typical fixture for vibration testing is designed. Using the I-DEAS finite element programs, the sensitivity of its natural frequencies to the design parameters is analyzed by the matrix perturbation method. The results show that sensitivity analysis is a fast and effective dynamic re-analysis method for the dynamic design and parameter modification of complex structures such as fixtures
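The matrix perturbation result behind this kind of analysis is the first-order eigenvalue sensitivity: with mass-normalized mode shapes, dλᵢ/dp = φᵢᵀ(∂K/∂p − λᵢ ∂M/∂p)φᵢ. The 2-DOF spring-mass system below is an invented example (not the fixture from the abstract), verified against a finite difference.

```python
import numpy as np

# 2-DOF spring-mass chain: K depends on spring stiffnesses k1, k2.
def KM(k1, k2, m1=1.0, m2=2.0):
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag([m1, m2])
    return K, M

k1, k2 = 100.0, 50.0
K, M = KM(k1, k2)

# Generalized eigenproblem K phi = lambda M phi (lambda = omega^2).
lam, phi = np.linalg.eig(np.linalg.solve(M, K))
lam, phi = lam.real, phi.real
order = np.argsort(lam)
lam, phi = lam[order], phi[:, order]
for i in range(2):                       # mass-normalize: phi^T M phi = 1
    phi[:, i] /= np.sqrt(phi[:, i] @ M @ phi[:, i])

# Matrix-perturbation sensitivity of each eigenvalue to k1
# (dM/dk1 = 0 here, so only the stiffness term survives).
dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
sens = np.array([phi[:, i] @ dK_dk1 @ phi[:, i] for i in range(2)])

# Finite-difference check.
h = 1e-6
Kp, _ = KM(k1 + h, k2)
lam_p = np.sort(np.linalg.eig(np.linalg.solve(M, Kp))[0].real)
print(sens)
print((lam_p - lam) / h)                 # should match sens
```

The appeal of the perturbation formula is that one modal solution yields the sensitivity to every design parameter, with no re-analysis per parameter.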
Depletion GPT-free sensitivity analysis for reactor eigenvalue problems
International Nuclear Information System (INIS)
Kennedy, C.; Abdel-Khalik, H.
2013-01-01
This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based on perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)
Global sensitivity analysis using sparse grid interpolation and polynomial chaos
International Nuclear Information System (INIS)
Buzzard, Gregery T.
2012-01-01
Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
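The conversion from interpolant to a gPC (orthogonal polynomial) representation, and from there to variance-based sensitivity coefficients, can be sketched for a two-input model with Legendre polynomials. A full tensor Gauss grid stands in for the sparse grid here, and the model is an invented example whose variance decomposition is known in closed form.

```python
import numpy as np
from itertools import product
from numpy.polynomial import legendre as L

# Model with known variance decomposition for uniform inputs on [-1,1]^2:
# f = 3x + 2y^2, so Var = 3 (from x) + 16/45 (from y), with no interaction.
f = lambda x, y: 3 * x + 2 * y ** 2

# Evaluate on a small tensor Gauss-Legendre grid and least-squares fit a
# degree-2 tensor Legendre (gPC) expansion.
pts, _ = L.leggauss(5)
X = np.array(list(product(pts, pts)))
y = f(X[:, 0], X[:, 1])
deg = 2
basis = [(i, j) for i in range(deg + 1) for j in range(deg + 1)]
A = np.column_stack([L.legval(X[:, 0], np.eye(deg + 1)[i]) *
                     L.legval(X[:, 1], np.eye(deg + 1)[j]) for i, j in basis])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# For uniform inputs, Var of the (i,j) term is c_ij^2 / ((2i+1)(2j+1));
# the (0,0) term is the mean and carries no variance.
var_term = {b: c[k] ** 2 / ((2 * b[0] + 1) * (2 * b[1] + 1))
            for k, b in enumerate(basis) if b != (0, 0)}
total = sum(var_term.values())
S1 = sum(v for b, v in var_term.items() if b[1] == 0) / total  # main effect of x
S2 = sum(v for b, v in var_term.items() if b[0] == 0) / total  # main effect of y
print(S1, S2)
```

The Sobol' indices drop out of the gPC coefficients with no extra model evaluations, which is exactly the economy the abstract describes.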
Energy Technology Data Exchange (ETDEWEB)
Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)
2015-08-15
Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach outperforms a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach of Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.
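The FMM building block, fitting a mixture to a code-output sample by Expectation Maximization, can be sketched in one dimension. The bimodal "output" sample below is invented (two operating regimes of a hypothetical TH response), and the paper's saliency computation for the input variables is not reproduced.

```python
import numpy as np

def em_gmm(x, n_iter=200):
    """EM for a two-component 1D Gaussian mixture (a minimal FMM)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])        # spread-out initialization
    sig = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
              / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

# Bimodal "code output", e.g. a response under two failure regimes.
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(600, 15, 700), rng.normal(750, 25, 300)])
w, mu, sig = em_gmm(x)
print(w, mu, sig)
```

With only the sample in hand, the fitted mixture gives a cheap analytic approximation of the output pdf, which is the role FMMs play in the abstract's SA scheme.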
Adjoint sensitivity analysis of the thermomechanical behavior of repositories
International Nuclear Information System (INIS)
Wilson, J.L.; Thompson, B.M.
1984-01-01
The adjoint sensitivity method is applied to thermomechanical models for the first time. The method provides an efficient and inexpensive answer to the question: how sensitive are thermomechanical predictions to the assumed parameters? The answer is exact in the sense that it yields exact derivatives of response measures with respect to parameters, and approximate in the sense that projections of the response for other parameter assumptions are only first-order correct. The method is applied to linear finite element models of thermomechanical behavior. Extensions to more complicated models are straightforward but often laborious. An illustration of the method with a two-dimensional repository corridor model reveals that the chosen stress response measure was most sensitive to the Poisson's ratio of the rock matrix
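For a linear finite element model the adjoint trick is short: for a response R = cᵀu with K(p)u = f, one extra solve of the adjoint system Kᵀλ = c gives dR/dp = −λᵀ(∂K/∂p)u for every parameter p. The 3×3 system below is an invented example (not a thermomechanical model), checked against a finite difference.

```python
import numpy as np

# Linear model K(p) u = f with a scalar parameter p in the stiffness.
def K_of(p):
    return np.array([[2.0 + p, -1.0,     0.0],
                     [-1.0,     2.0 + p, -1.0],
                     [0.0,     -1.0,     2.0]])

dK_dp = np.diag([1.0, 1.0, 0.0])          # dK/dp
f = np.array([1.0, 0.0, 0.5])
c = np.array([0.0, 0.0, 1.0])             # response measure: R = u[2]

p = 1.0
K = K_of(p)
u = np.linalg.solve(K, f)                 # forward solve

# One adjoint solve, then the exact derivative of R with respect to p.
lam = np.linalg.solve(K.T, c)
dR_dp = -lam @ dK_dp @ u

# Finite-difference check of the exact derivative.
h = 1e-7
u_h = np.linalg.solve(K_of(p + h), f)
print(dR_dp, (c @ u_h - c @ u) / h)
```

The cost is one adjoint solve per response measure, independent of the number of parameters, which is what makes the method "efficient and inexpensive" in the abstract's sense.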
Tremblay, Marie-Claude; Brousselle, Astrid; Richard, Lucie; Beaudet, Nicole
2013-10-01
Program designers and evaluators should make a point of testing the validity of a program's intervention theory before investing either in implementation or in any type of evaluation. In this context, logic analysis can be a particularly useful option, since it can be used to test the plausibility of a program's intervention theory using scientific knowledge. Professional development in public health is one field among several that would truly benefit from logic analysis, as it appears to be generally lacking in theorization and evaluation. This article presents the application of this analysis method to an innovative public health professional development program, the Health Promotion Laboratory. More specifically, this paper aims to (1) define the logic analysis approach and differentiate it from similar evaluative methods; (2) illustrate the application of this method by a concrete example (logic analysis of a professional development program); and (3) reflect on the requirements of each phase of logic analysis, as well as on the advantages and disadvantages of such an evaluation method. Using logic analysis to evaluate the Health Promotion Laboratory showed that, generally speaking, the program's intervention theory appeared to have been well designed. By testing and critically discussing logic analysis, this article also contributes to further improving and clarifying the method. Copyright © 2013 Elsevier Ltd. All rights reserved.
An analysis of medical students’ reflective essays in problem-based learning
Directory of Open Access Journals (Sweden)
Jihyun Si
2018-03-01
Purpose: This study aimed to explore students' learning experience in problem-based learning (PBL), particularly in terms of what they learned and how they learned, in one Korean medical school, by analyzing their reflective essays with qualitative research methods. Methods: The study included 44 first-year medical students. They took three consecutive PBL courses and wrote reflective essays anonymously 3 times, on the last day of each course. The reflective essays were analyzed using an inductive content analysis method. Results: The coding process yielded 16 sub-categories, which were grouped into six categories according to the distinctive characteristics of the PBL learning experience: integrated knowledge base, clinical problem solving, collaboration, intrinsic motivation, self-directed learning, and professional attitude. Among these categories, integrated knowledge base (34.68%) and professional attitude (2.31%) were mentioned most and least frequently, respectively. Conclusion: The findings provide an overall understanding of the learning experience of Korean medical students during PBL, in terms of what they learned and how they learned, with rich descriptive commentary from their perspectives, as well as several thoughtful insights to help develop instructional strategies that enhance the effectiveness of PBL.
Reflection and tubewave analysis of the seismic data from the Stripa crosshole site
International Nuclear Information System (INIS)
Cosma, C.; Baehler, S.; Hammarstroem, M.; Pihl, J.
1986-12-01
The data from the crosshole research program (radar, seismics and hydraulics) in the Stripa Phase II Project resulted in the construction of a model. The results from the present study were compared to this model. It was found that the existing data set used for tomographic analysis could only be used to a limited extent, as reflection analysis requires a more dense detector coverage. Nevertheless two reflectors were detected. The positions of the reflectors were compared to the existing crosshole model and proved to correlate well. For the tubewave analysis almost all crosshole seismic data could be used. By comparing the results with previous hydraulic tests, it was found that tubewave sources and hydraulically conductive zones are in concordance. All previously defined zones but one could be detected. (orig./HP)
Energy Technology Data Exchange (ETDEWEB)
Nagumo, S; Muraoka, S; Takahashi, T [Oyo Corp., Tokyo (Japan)
1997-05-27
Fault analysis is required in addition to the ordinary process of structural analysis (CDP stacking) for the examination of discontinuity in the reflection horizon in question. The fault shape restoration principle is that the reflection point of a reflection wave observed at a certain receiving point lies on an ellipse with the shot point and receiving point at its focal points, and that the sum of the distances between the reflection point and the focal points corresponds to the reflection wave propagation time. The DMO velocity is worked out by calculation using the forward travel time and the reverse travel time from the common reflection surface. When the reflection surface is inclined by θ, the average interval velocity/cos θ is called the DMO velocity. When the reflection surface inclination and the average interval velocity are determined separately in this way, the position of the reflection point can be worked out, and this enables the calculation of the amount of migration (lateral movement). The reflection wave lineups carried by the original record are picked up one by one, and the average interval velocities are treated very prudently. After such a basic DMO conversion treatment, the actualities of the fault are described fairly correctly. 3 figs.
Siozos, Panagiotis; Philippidis, Aggelos; Anglos, Demetrios
2017-11-01
A novel, portable spectrometer, combining two analytical techniques, laser-induced breakdown spectroscopy (LIBS) and diffuse reflectance spectroscopy, was developed with the aim to provide an enhanced instrumental and methodological approach with regard to the analysis of pigments in objects of cultural heritage. Technical details about the hybrid spectrometer and its operation are presented and examples are given relevant to the analysis of paint materials. Both LIBS and diffuse reflectance spectra in the visible and part of the near infrared, corresponding to several neat mineral pigment samples, were recorded and the complementary information was used to effectively distinguish different types of pigments even if they had similar colour or elemental composition. The spectrometer was also employed in the analysis of different paints on the surface of an ancient pottery sherd demonstrating the capabilities of the proposed hybrid diagnostic approach. Despite its instrumental simplicity and compact size, the spectrometer is capable of supporting analytical campaigns relevant to archaeological, historical or art historical investigations, particularly when quick data acquisition is required in the context of surveys of large numbers of objects and samples.
Analysis and suppression of reflections in far-field antenna measurement ranges
Sierra Castañer, Manuel; Cano Facila, Francisco Jose; Foged, Lars Jacob; Saccardi, Francesco; Nader Kawassaki, Guilherme; Raimundi, Lucas dos Reis; Vilela Rezende, Stefano Albino
2013-01-01
This paper presents an analysis of the reflections in two kinds of spherical far-field ranges: one is the classical acquisition, where the AUT is rotated; the second corresponds to systems where the AUT is fixed and the probe antenna is rotated. In large far-field systems this is not possible, but it can be used for the measurement of small antennas, for instance, with the SATIMO StarGate system. In both cases, it is assumed that only one frequency is acquired and the results shoul...
Energy dispersive X-ray fluorescence analysis with multiple total reflection
International Nuclear Information System (INIS)
Freitag, K.
1985-01-01
The development of a total reflection XRF analyzer and the performance data of this instrument are described. The drastic reduction of scattered radiation is the outstanding property of the method. Detection limits of elements and matrix effects are discussed. In competition with other methods of analysis, it has proven advantageous over a wide range of applications. In addition to its multi-element capability down to the picogram level, its universal calibration function in particular has turned out to be a great help in analytical practice. (orig./RB)
Abstracts of the 8th Conference on total reflection x-ray fluorescence analysis and related methods
International Nuclear Information System (INIS)
Wobrauschek, P.
2000-01-01
The 8th conference on total reflection x-ray fluorescence analysis and related methods, held from 25 to 29 September 2000, contains 79 abstracts on x-ray fluorescence analysis (XRFA) as a powerful tool for industrial production, geological prospecting and environmental control. Total reflection x-ray fluorescence spectroscopy is also a tool used for chemical analysis in medicine, industry and research. (E.B.)
Hattori, Satoshi; Zhou, Xiao-Hua
2018-02-10
Publication bias is one of the most important issues in meta-analysis. For standard meta-analyses examining intervention effects, the funnel plot and the trim-and-fill method are simple and widely used techniques for assessing and adjusting for the influence of publication bias, respectively. However, their use may be subjective and can then produce misleading insights. To make a more objective inference about publication bias, various sensitivity analysis methods have been proposed, including the Copas selection model. For meta-analysis of diagnostic studies evaluating a continuous biomarker, the summary receiver operating characteristic (sROC) curve is a very useful method in the presence of heterogeneous cutoff values. To the best of our knowledge, no methods are available for evaluating the influence of publication bias on estimation of the sROC curve. In this paper, we introduce a Copas-type selection model for meta-analysis of diagnostic studies and propose a sensitivity analysis method for publication bias. Our method enables us to assess the influence of publication bias on the estimation of the sROC curve and then to judge whether the result of the meta-analysis is sufficiently reliable or should be interpreted with caution. We illustrate our proposed method with real data. Copyright © 2017 John Wiley & Sons, Ltd.
Bergin, Michael
2011-01-01
Qualitative data analysis is a complex process and demands clear thinking on the part of the analyst. However, a number of deficiencies may obstruct the analyst during the process, leading to inconsistencies. This paper is a reflection on the use of a qualitative data analysis program, NVivo 8, and its usefulness in identifying consistency and inconsistency during the coding process. The author was conducting a large-scale study of providers and users of mental health services in Ireland. He used NVivo 8 to store, code and analyse the data, and this paper reflects some of his observations during the study. The demands placed on the analyst in trying to balance the mechanics of working through a qualitative data analysis program, while simultaneously remaining conscious of the value of all sources, are highlighted. NVivo 8, as a qualitative data analysis program, is a challenging but valuable means of advancing the robustness of qualitative research. Pitfalls can be avoided during analysis by running queries as the analyst progresses from tree node to tree node, rather than leaving them to a stage at which data analysis is well advanced.
Reinholz, Daniel L.; Dounas-Frazer, Dimitri R.
2017-11-01
One way to foster a supportive culture in physics departments is for instructors to provide students with personal attention regarding their academic difficulties. To this end, we have developed the Guided Reflection Form (GRF), an online tool that facilitates student reflections and personalized instructor responses. In the present work, we report on the experiences and practices of two instructors who used the GRF in an introductory physics lab course. Our analysis draws on two sources of data: (i) post-semester interviews with both instructors and (ii) the instructors' written responses to 134 student reflections. Interviews focused on the instructors' perceptions about the goals and framing of the GRF activity, and characteristics of good or bad feedback. Their GRF responses were analyzed for the presence of up to six types of statement: encouraging statements, normalizing statements, empathizing statements, strategy suggestions, resource suggestions, and feedback to the student on the structure of students' reflections. We find that both instructors used all six response types, in alignment with their perceptions of what counts as good feedback. In addition, although each instructor had their own unique feedback style, both instructors' feedback practices were compatible with two principles for effective feedback: praise should focus on effort, express confidence in students' abilities, and be sincere; and process-level feedback should be specific and strategy-oriented. This exploratory qualitative investigation demonstrates that the GRF can serve as a mechanism for instructors to pay personal attention to their students. In addition, it opens the door to future work about the impact of the GRF on student-teacher interactions.
Time course analysis of baroreflex sensitivity during postural stress
Westerhof, Berend E.; Gisolf, Janneke; Karemaker, John M.; Wesseling, Karel H.; Secher, Niels H.; van Lieshout, Johannes J.
2006-01-01
Postural stress requires immediate autonomic nervous action to maintain blood pressure. We determined time-domain cardiac baroreflex sensitivity (BRS) and time delay (tau) between systolic blood pressure and interbeat interval variations during stepwise changes in the angle of vertical body axis
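Time-domain BRS of the kind described above is commonly estimated as the least-squares slope of interbeat interval on systolic blood pressure (with the delay tau handled by shifting one series against the other). A minimal sketch with synthetic beat-to-beat data; the 15 ms/mmHg gain and all values below are invented for illustration:

```python
# Minimal sketch: time-domain baroreflex sensitivity (BRS) as the
# least-squares slope of interbeat interval (IBI, ms) on systolic
# blood pressure (SBP, mmHg). All data below are synthetic.

def brs_slope(sbp, ibi):
    """Least-squares slope of IBI (ms) versus SBP (mmHg), in ms/mmHg."""
    n = len(sbp)
    mean_s = sum(sbp) / n
    mean_i = sum(ibi) / n
    cov = sum((s - mean_s) * (i - mean_i) for s, i in zip(sbp, ibi))
    var = sum((s - mean_s) ** 2 for s in sbp)
    return cov / var

# Synthetic beat-to-beat series: IBI tracks SBP with a gain of 15 ms/mmHg.
sbp = [118, 120, 122, 121, 119, 123, 125, 124, 122, 120]
ibi = [900 + 15 * (s - 120) for s in sbp]

print(round(brs_slope(sbp, ibi), 2))  # -> 15.0
```

One simple way to estimate the delay tau is to recompute the slope with `ibi` shifted by a range of lags against `sbp` and keep the lag that maximizes the correlation.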
Comparative Analysis of Intercultural Sensitivity among Teachers Working with Refugees
Strekalova-Hughes, Ekaterina
2017-01-01
The unprecedented global refugee crisis and the accompanying political discourse places added pressures on teachers working with children who are refugees in resettling countries. Given the increased chances of having a refugee child in one's classroom, it is critical to explore how interculturally sensitive teachers are and if working with…
Fecal bacteria source characterization and sensitivity analysis of SWAT 2005
The Soil and Water Assessment Tool (SWAT) version 2005 includes a microbial sub-model to simulate fecal bacteria transport at the watershed scale. The objectives of this study were to demonstrate methods to characterize fecal coliform bacteria (FCB) source loads and to assess the model sensitivity t...
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2013-01-01
-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which...
Smart optimisation and sensitivity analysis in water distribution systems
CSIR Research Space (South Africa)
Page, Philip R
2015-12-01
optimisation of a water distribution system by keeping the average pressure unchanged as water demands change, by changing the speed of the pumps. Another application area considered, using the same mathematical notions, is the study of the sensitivity...
Directory of Open Access Journals (Sweden)
Zhong Wu
2017-04-01
Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF, is presented. The MCF method maintains many advantages of the global sensitivity analysis, while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software was provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches with that in a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method were further discussed.
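Monte Carlo filtering of the kind described splits sampled model runs into "behavioral" runs that meet the design criterion and "non-behavioral" runs that do not, then ranks each input parameter by how strongly its marginal distributions differ between the two sets, commonly via a Kolmogorov-Smirnov statistic. A self-contained toy sketch, not the MEPDG/Pavement ME model; the response function and the 5.0 threshold are invented:

```python
import bisect
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    return max(
        abs(bisect.bisect_right(a, t) / len(a) - bisect.bisect_right(b, t) / len(b))
        for t in a + b
    )

random.seed(0)

# Toy response: "rutting" depends strongly on x1 and only weakly on x2.
# The model and the 5.0 design threshold are invented for illustration.
def rut_depth(x1, x2):
    return 10.0 * x1 + 0.5 * x2

runs = [(random.random(), random.random()) for _ in range(2000)]
behavioral = [r for r in runs if rut_depth(*r) <= 5.0]      # meets the criterion
nonbehavioral = [r for r in runs if rut_depth(*r) > 5.0]

# A parameter is influential near the design criterion when its marginal
# distributions differ strongly between the two sets.
ks_x1 = ks_statistic([r[0] for r in behavioral], [r[0] for r in nonbehavioral])
ks_x2 = ks_statistic([r[1] for r in behavioral], [r[1] for r in nonbehavioral])
print(ks_x1 > ks_x2)  # -> True: x1 dominates the pass/fail split
```

This regional focus is what distinguishes MCF from a global analysis: the ranking reflects influence on the pass/fail boundary, not on overall output variance.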
Noomen, M.F.; Skidmore, A.K.; Meer, van der F.D.; Prins, H.H.T.
2006-01-01
It is known that natural gas in the soil affects vegetation health, which may be detected through analysis of reflectance spectra. Since natural gas is invisible, changes in the vegetation could potentially indicate gas leakage. Although it is known that gas in soil affects plant reflectance, the
Byrnes, Nadia K; Hayes, John E
2016-08-01
Based on work from a quarter century ago, it is widely accepted that personality traits like sensation seeking are related to the enjoyment and intake of spicy foods; however, the data supporting this belief are actually quite limited. Recently, we reported strong to moderate correlations between remembered spicy food liking and two personality traits measured with validated questionnaires. Here, participants consumed capsaicin-containing strawberry jelly to generate acute estimates of spicy food liking. Additionally, we used a laboratory-based behavioral measure of risk taking (the mobile Balloon Analogue Risk Task; mBART) to complement a range of validated self-report measures of risk-related personality traits. Present data confirm that Sensation Seeking correlates with overall spicy meal liking and liking of the burn of a spicy meal, and extend prior findings by showing novel correlations with the liking of sampled stimuli. Other personality measures, including Sensitivity to Punishment (SP), Sensitivity to Reward (SR), and the Impulsivity and Risk Taking subscales of the DSM5 Personality Inventory (PID-5), did not show significant relationships with liking of spicy foods, either sampled or remembered. Our behavioral risk taking measure, the mBART, also failed to show a relationship with remembered or sampled liking. However, significant relationships were observed between reported intake of spicy foods and Sensitivity to Reward, and the Risk Taking subscale of the PID-5 (PID5-RT). Based on the observed patterns among various personality measures, and spicy food liking and intake, we propose that personality measures may exert their influence on intake of spicy food via different mechanisms. We also speculate that Sensation Seeking may reflect motivations for consuming spicy foods that are more intrinsic, while the motivations for eating spicy foods measured by SR and PID5-RT may be more extrinsic. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Ho-Yong Jeong
2015-08-01
Voltage is an important variable that reflects system conditions in DC distribution systems and affects many characteristics of a system. In a DC distribution system, there is a close relationship between the real power and the voltage magnitude, and this is one of the major differences from the characteristics of AC distribution systems. One such relationship is expressed as the voltage sensitivity, and an understanding of voltage sensitivity is very useful for describing DC distribution systems. In this paper, a formulation for a novel approximate expression for the voltage sensitivity in a radial DC distribution system is presented. The approximate expression is derived from the power flow equation with some additional assumptions. The results of the approximate expression are compared with an exact calculation, and the relations between the voltage sensitivity and electrical quantities are analyzed analytically using both the exact form and the approximate voltage sensitivity equation.
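For the simplest case, a two-bus radial DC feeder with a constant-power load, the exact sensitivity dV/dP and a common approximation can be compared directly. This is only a sketch of the general idea, not the paper's formulation, and all numbers are invented:

```python
import math

# Two-bus radial DC feeder: source Vs, line resistance R, constant-power
# load P at the far bus. Illustrative numbers only.
Vs, R, P = 380.0, 0.5, 10_000.0  # volts, ohms, watts

# Power balance at the load bus: V*(Vs - V)/R = P  ->  V^2 - Vs*V + R*P = 0.
# Take the high-voltage root (normal operating point).
V = (Vs + math.sqrt(Vs**2 - 4 * R * P)) / 2.0

# Implicit differentiation of V^2 - Vs*V + R*P = 0 gives the exact sensitivity:
dVdP_exact = -R / (2 * V - Vs)

# Common approximation for small voltage drops: dV/dP ~= -R/Vs
dVdP_approx = -R / Vs

# Finite-difference check of the exact formula
dP = 1.0
V2 = (Vs + math.sqrt(Vs**2 - 4 * R * (P + dP))) / 2.0
print(abs((V2 - V) / dP - dVdP_exact) < 1e-6)  # -> True
```

The sensitivity is negative (more load power lowers the bus voltage), and the exact and approximate forms converge as the voltage drop across the line shrinks.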
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Energy Technology Data Exchange (ETDEWEB)
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Directory of Open Access Journals (Sweden)
Y. Tang
2007-01-01
This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
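Sobol's method, which the study found gave the most robust rankings, decomposes output variance into contributions from each parameter. A self-contained first-order estimator on a toy linear function, not SAC-SMA; for f = 3*x1 + x2 with uniform inputs the analytical indices are 0.9 and 0.1:

```python
import random

# Saltelli-style estimator of first-order Sobol indices on a toy model.
# For f(x1, x2) = 3*x1 + x2 with x1, x2 ~ U(0,1), the analytical
# first-order indices are S1 = 0.9 and S2 = 0.1.
def model(x):
    return 3.0 * x[0] + 1.0 * x[1]

random.seed(42)
N, k = 20_000, 2
A = [[random.random() for _ in range(k)] for _ in range(N)]
B = [[random.random() for _ in range(k)] for _ in range(N)]

fA = [model(x) for x in A]
fB = [model(x) for x in B]
mean = sum(fA + fB) / (2 * N)
var = sum((y - mean) ** 2 for y in fA + fB) / (2 * N)

S = []
for i in range(k):
    # A with column i swapped in from B isolates parameter i's effect.
    ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
    fABi = [model(x) for x in ABi]
    # First-order estimator: E[fB * (f(AB_i) - f(A))] / Var(f)
    S.append(sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / N / var)

print(round(S[0], 1), round(S[1], 1))  # Monte Carlo estimates near 0.9 and 0.1
```

Libraries such as SALib implement this with low-discrepancy (Sobol sequence) sampling, which converges faster than the plain pseudo-random sampling used here.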
Sensitivity Analysis of Transonic Flow over J-78 Wings
Directory of Open Access Journals (Sweden)
Alexander Kuzmin
2015-01-01
3D transonic flow over swept and unswept wings with a J-78 airfoil at spanwise sections is studied numerically at negative and vanishing angles of attack. Solutions of the unsteady Reynolds-averaged Navier-Stokes equations are obtained with a finite-volume solver on unstructured meshes. The numerical simulation shows that adverse Mach numbers, at which the lift coefficient is highly sensitive to small perturbations, are larger than those obtained earlier for 2D flow. Due to the larger Mach numbers, there is an onset of self-exciting oscillations of shock waves on the wings. The swept wing exhibits a higher sensitivity to variations of the Mach number than the unswept one.
Sensitive Detection of Deliquescent Bacterial Capsules through Nanomechanical Analysis.
Nguyen, Song Ha; Webb, Hayden K
2015-10-20
Encapsulated bacteria usually exhibit strong resistance to a wide range of sterilization methods, and are often virulent. Early detection of encapsulation can be crucial in microbial pathology. This work demonstrates a fast and sensitive method for the detection of encapsulated bacterial cells. Nanoindentation force measurements were used to confirm the presence of deliquescent bacterial capsules surrounding bacterial cells. Force/distance approach curves contained characteristic linear-nonlinear-linear domains, indicating cocompression of the capsular layer and cell, indentation of the capsule, and compression of the cell alone. This is a sensitive method for the detection and verification of the encapsulation status of bacterial cells. Given that this method was successful in detecting the nanomechanical properties of two different layers of cell material, i.e. distinguishing between the capsule and the remainder of the cell, further development may potentially lead to the ability to analyze even thinner cellular layers, e.g. lipid bilayers.
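The characteristic linear-nonlinear-linear shape of the approach curves described above can be flagged programmatically from the curvature of the force/distance data. A toy sketch using the discrete second difference as a curvature proxy; the synthetic curve below is invented, not measured nanoindentation data:

```python
# Sketch: flag linear vs. nonlinear domains in a force/distance approach
# curve via the discrete second difference (a curvature proxy). The
# synthetic curve mimics the linear-nonlinear-linear shape described
# above; it is invented, not measured data.
def second_differences(force):
    return [force[i - 1] - 2 * force[i] + force[i + 1]
            for i in range(1, len(force) - 1)]

# Synthetic approach curve: linear (capsule + cell co-compression),
# nonlinear (capsule indentation), then linear again (cell alone).
force = []
for z in range(30):
    if z < 10:                       # co-compression: slope 2
        force.append(2.0 * z)
    elif z < 20:                     # capsule indentation: curved segment
        force.append(20.0 + 2.0 * (z - 10) + 0.15 * (z - 10) ** 2)
    else:                            # cell compression: slope 5
        force.append(50.15 + 5.0 * (z - 20))

d2 = second_differences(force)
linear = [abs(v) < 0.05 for v in d2]   # near-zero curvature => linear domain
print(linear[3], linear[14], linear[25])  # -> True False True
```

Real force curves are noisy, so in practice the second difference would be computed on a smoothed curve or replaced by piecewise-linear fitting with breakpoint detection.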
Long vs. short-term energy storage: sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)
2007-07-01
This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology--asymmetric lead-carbon capacitors--has also been added. Energy storage technologies are examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. Sensitivity analyses include cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is hours of storage available for discharge.
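Life-cycle comparisons of this kind typically annualize capital with a capital recovery factor and add fixed O&M plus electricity or fuel charges to obtain $/kW-yr. A generic sketch with invented numbers, not the report's cost data:

```python
def capital_recovery_factor(rate, years):
    """Annual fraction of a present capital cost, at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_cost_per_kw(capital_per_kw, fixed_om_per_kw_yr,
                       elec_price_per_kwh, discharge_hours_per_yr,
                       efficiency, rate=0.08, life_years=10):
    """Annual cost of ownership in $/kW-yr (generic illustrative model)."""
    carrying = capital_per_kw * capital_recovery_factor(rate, life_years)
    energy = elec_price_per_kwh * discharge_hours_per_yr / efficiency
    return carrying + fixed_om_per_kw_yr + energy

# Hypothetical storage system: $1500/kW capital, $20/kW-yr fixed O&M,
# charging energy at $0.05/kWh, 1000 discharge-hours/yr, 75% round-trip
# efficiency. These inputs are invented for illustration.
short_life = annual_cost_per_kw(1500, 20, 0.05, 1000, 0.75, life_years=10)
long_life = annual_cost_per_kw(1500, 20, 0.05, 1000, 0.75, life_years=20)
print(long_life < short_life)  # -> True: longer life lowers the carrying charge
```

Varying `rate`, `life_years`, or `elec_price_per_kwh` in this model reproduces the kind of sensitivity sweep the report performs over electricity and gas prices and system life.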
Analytical sensitivity analysis of geometric errors in a three axis machine tool
International Nuclear Information System (INIS)
Park, Sung Ryung; Yang, Seung Han
2012-01-01
In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors
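When the error synthesis model is linear in the individual geometric errors, E = sum_i c_i * e_i with independent errors e_i, the global variance-based sensitivity indices have a closed form: S_i = c_i^2 * sigma_i^2 / sum_j c_j^2 * sigma_j^2. A small sketch under that linearity assumption; the gains and error magnitudes are invented, not the machine-tool values from the paper:

```python
# Analytical first-order sensitivity indices for a linear error synthesis
# model E = sum(c_i * e_i) with independent geometric errors e_i of
# standard deviation sigma_i. Coefficients below are invented.
def linear_sobol_indices(coeffs, sigmas):
    contrib = [(c * s) ** 2 for c, s in zip(coeffs, sigmas)]
    total = sum(contrib)
    return [v / total for v in contrib]

coeffs = [1.0, 0.5, 2.0]        # geometric-to-volumetric error gains
sigmas = [5e-6, 5e-6, 5e-6]     # equal error magnitudes (m)

S = linear_sobol_indices(coeffs, sigmas)
print([round(s, 3) for s in S])  # -> [0.19, 0.048, 0.762]
```

With equal error magnitudes the ranking is driven entirely by the squared gains, which is why an analytical treatment suffices here and no sampling is needed.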
System reliability assessment via sensitivity analysis in the Markov chain scheme
International Nuclear Information System (INIS)
Gandini, A.
1988-01-01
Methods for reliability sensitivity analysis in the Markov chain scheme are presented, together with a new formulation which makes use of Generalized Perturbation Theory (GPT) methods. As is well known, sensitivity methods are fundamental in system risk analysis, since they make it possible to identify important components, thereby assisting the analyst in finding weaknesses in design and operation and in suggesting optimal modifications for system upgrades. The relationship between the GPT sensitivity expression and the Birnbaum importance is also given [fr]